Machine Learning: Strides and Limitations
A conversation with Andrew McAfee, Hilary Mason, and Claudia Perlich

As AI and machine learning (ML) become more mainstream in business applications and more widely accepted by the public — in everything from ‘smart’ vacuum cleaners and navigation systems to the ubiquitous Alexa and Siri — it’s important to view the strides as well as the shortcomings with a wide lens. MIT IDE Co-director and Principal Research Scientist Andrew McAfee did just that during a fireside chat at the recent IDE Annual Conference, held virtually on May 20. McAfee spoke with two “rock stars in the discipline,” Hilary Mason, Data Scientist in Residence at Accel, and Claudia Perlich, Chief Scientist at Dstillery, in a wide-ranging discussion about machine learning’s past, present, and future. What follows are some key excerpts and takeaways from the conversation.

Andrew McAfee: You’ve been around to watch ML go from a very niche discipline, something that was maybe used to recognize handwritten digits on checks, to this technology that appears to be taking over the economy. What was your ‘a-ha’ moment when you first realized the potential of ML?

Hilary Mason: I’ve had a series of those moments at different times in my career, but the set of problems and places where the technology can actually be impactful has radically expanded over the past 10 years. ML techniques are not new; people have been using them in financial services and for really high-value problems for a very long time. But when I was starting out, we began to answer questions that previously had been out of reach from a technical perspective. For example, it allowed coders to go beyond the very narrow edges of a problem into huge data domains, with lots of human labor involved in classifying the data.
We started having more generalized approaches to solving problems, and that is one of those step-function changes in capability that’s really interesting.

Claudia Perlich: I also saw many changes over the years. For the longest time, data mining, as it was called then, was this niche course. Suddenly, people realized that data mining could span many different application domains: social sciences, medical applications, and even law — and everyone wanted to take the course! I knew all these different applications existed, but my frustration before 2010 had been that we were limited to fun pilots where you try to see how far you can push the technology. I was never convinced that the programs were put to their best use — not because of insufficient data, or even incorrect data; there was no technology stack or API where you could just ping the cloud and get the answer. Back then, you had to almost manually re-implement everything from scratch. Today, ML is a much more relevant value proposition across domains that couldn’t previously afford the technical infrastructure and the skills needed. In addition, personalization has created new industries where auctions and billions of decisions are made daily, in real time, based on analytics. Now, there’s an economic model — an affordable infrastructure — that didn’t exist before.

AM: What areas still need work?

HM: The commoditization of the underlying platform was hugely enabling. It started happening around 2008 or 2009, but it’s still happening right now. Deployment and monitoring of machine learning systems are still something of an open question. It’s not solved. And so we will continue to see this unfold over the next five to 10 years.

CP: We don’t yet have sufficient control systems that understand when a model might be going out of whack, or whether we’re really using it for the right cause.
Most ML models are built for a specific use case, and the person who built the model understands its boundaries, but beyond that, generalization becomes questionable at best.

HM: When you’re dealing with data collected from the real world, it will change; the real world changes. Things happen for random reasons; human behavior changes over time. All of that impacts the accuracy of a model and a system. And that’s not even getting into potential adversarial attempts to impact it deliberately. So, while ML is easier now, there’s still a lot of work to do to make sure that the things we make work accurately and are repeatable. On the human side of the discipline, people inside organizations must work together efficiently, use the right features, and have some understanding of the provenance of those features and what they actually mean when creating systems. We do have a fairly long road ahead of us, but that’s not to diminish the incredible progress.

AM: How would you describe ML’s progress and capabilities? What can we compare it to?

HM: Machine learning is the mobile phone of making decisions. The smartphone is a piece of infrastructure that fundamentally changes the way we do even common tasks in our everyday lives or business, but it doesn’t change everything. And the fact that we all walk around holding something like this is also not the end game. It’s absolutely a big deal. It enables us to do things in ways we couldn’t do them before. Something like Google Maps on a mobile device is revolutionary; we not only look at it when we don’t know where we’re going — which, by the way, has made going new places a completely different experience than it used to be — but we now look at it when we know where we’re going, to figure out the best way to get there given current conditions.
That means that two levels of information are available to us instantly, and that changes the whole experience of navigating an unknown place, but also a known place.

CP: It’s a technology that gives you additional information to help you make better decisions. What you do with the information you gather should still be your choice. Just because you build a classifier to detect breast cancer doesn’t mean that the computer is telling the person what the next step should be. It means you are given some additional component that should be integrated into a decision to ultimately achieve a better outcome. For instance, in radiology, the benefit is not only finding out whether there’s cancer in an image, but supporting radiologists with reports so they actually have more time to talk to the patient — work the machine learning can’t do. So classifying images is not the value proposition: it is giving professionals additional information, or making their life and workflow easier.

AM: In other words, not making very experienced, highly trained, and busy people act like clerks for much of their working day.

CP: Exactly right.

HM: I like framing this as reducing cognitive drudgery, and the drudgery is not always separable from the core of the work.

AM: What are some caveats as we move forward with adoption?

CP: There has been a lot of hesitation in adopting these technologies, and a good amount of care and concern is important. Transparency isn’t the main concern; machines are not much less transparent than people. The bigger concern is the scale at which machines can execute, which means that the potential for large-scale, unintended side effects is so much bigger than with human design. I would always prefer that we think a lot harder about how we make decisions with the things the machines give us, rather than asking how the machines came up with those things in the first place.
If you think about biases in hiring, for example, you can try to forcefully make a model that’s gender-blind, but it’s going to be really, really painful, and I wonder if we should be doing it at all. If you are convinced that you want to hire an equal ratio of men and women, nothing prevents you from doing so with the data you already have. Sometimes, pushing moral questions onto ML is extremely unfair and unproductive. Let’s not focus on getting complete transparency into the machine learning system; let’s focus on greater transparency about the ground rules we’re using to make important decisions: Who should we hire? Who should we let out on parole? Things like that. At the same time, if I trust the machine, I need to be willing to go along with its recommendations, because let’s face it, if human cognition were as good as machine learning, we probably wouldn’t need it. The value proposition of machine learning is that it can do statistical things a lot better.

HM: There are clearly applications where using ML is really not a great idea, or where we need to give people a better understanding of the way those decisions are being made — things like sending someone to jail, or giving someone financial credit. But there are also many applications that don’t have that same kind of impact, and maybe we’re willing to be a little more flexible about how we make those decisions; like, do we think this temperature sensor is giving us a good reading or not?

AM: What are you optimistic about?

HM: I am most optimistic about machines being able to interface with human beings, such as much better use of natural language, or looking at our environment and making inferences about what’s going on. This goes a long way toward alleviating that cognitive drudgery and those tedious tasks — in areas like healthcare or education — and lets us do more meaningful work as human beings. I wish it could free us to be more human and less focused on technology, or less constrained by technology.
That’s my vision. I’m also terrified, because these same technologies can be used to manipulate people. We are not spending nearly enough time on the adversarial side of machine learning and the impact that has on decision-making. So I’m both very excited and also have a lot of concerns.

CP: My concern is the extent to which we are becoming less cognizant of what’s going on in the world and more tuned into our own tastes; we can become complacent and adopt a false sense of security about what we think we know.
https://medium.com/mit-initiative-on-the-digital-economy/machine-learning-strides-and-limitations-76d7959c4524
['Mit Ide']
2020-06-01 16:11:07.597000+00:00
['Machine Learning', 'MIT', 'Automation', 'Algorithms', 'AI']
Understanding Performance metrics for Machine Learning Algorithms
Performance metrics explained — how do they work, and when should you use which?

Performance metrics are used to evaluate the overall performance of machine learning algorithms and to understand how well our models perform on given data under different scenarios. Choosing the right metric is essential to understanding a model’s behavior and making the changes needed to improve it further. There are different types of performance metrics; in this article, we’ll look at some of the most widely used ones.

Confusion Matrix.

A confusion matrix is used to evaluate the performance of classification algorithms. For binary classification, a confusion matrix has two rows and two columns; in general, the number of rows and columns equals the number of classes. Columns are the predicted classes, and rows are the actual classes. Now let’s look at each block of the confusion matrix:

1) True Positives (TP): the actual value is 1 and the value predicted by our classifier is also 1.
2) True Negatives (TN): the actual value is 0 and the value predicted by our classifier is also 0.
3) False Positives (FP) (Type 1 error): the actual value is 0 but the value predicted by our classifier is 1.
4) False Negatives (FN) (Type 2 error): the actual value is 1 but the value predicted by our classifier is 0.

(Source of image: Effect Size FAQs by Paul Ellis.)

The end goal of our classification algorithm is to maximize the true positives and true negatives (correct predictions) and to minimize the false positives and false negatives (incorrect predictions). False negatives can be especially worrisome in medical applications; for example, consider an application where you have to detect breast cancer in patients.
Suppose a patient has cancer but our model predicted that she doesn’t. This is dangerous: the person is cancer-positive, but our model failed to detect it.

Accuracy.

Accuracy is the most commonly used performance metric for classification algorithms. It is defined as the number of correct predictions divided by the total number of predictions, and it can be calculated from the confusion matrix as: Accuracy = (TP + TN) / (TP + TN + FP + FN). Accuracy works well when the classes are balanced, i.e., there is an equal number of samples per class; if the classes are imbalanced, accuracy might not be the right metric.

Why is accuracy an unreliable metric for imbalanced data? Consider a binary classification problem with two classes, cats and dogs, where cats make up 90% of the population and dogs 10%. Cat is our majority class and dog is our minority class. If our model predicts every data point as a cat, it still achieves a very high accuracy of 90%. This is worrisome when the cost of misclassifying the minority class is very high, e.g., in applications such as credit-card fraud detection, where fraudulent transactions are far fewer than non-fraudulent ones.

Recall or sensitivity.

Recall is defined as the number of correct positive predictions divided by the total number of actual positive samples, i.e., true positives plus false negatives; it is also called the true positive rate. The recall value ranges from 0 to 1 and can be calculated from the confusion matrix as: Recall = TP / (TP + FN). The recall metric is often used when the classes are imbalanced. Recall answers the following question: out of all the actual positive class samples, how many did we correctly predict as positive, and how many should have been predicted as positive but were incorrectly predicted as negative?
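The confusion-matrix counts and the accuracy and recall formulas above can be sketched in pure Python. This is a minimal illustration, not a production implementation; the helper names and the sample label arrays are invented for this example:

```python
# Sketch of confusion-matrix counts, accuracy, and recall for binary labels.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)  # correct predictions / total

def recall(tp, fn):
    return tp / (tp + fn)  # true positive rate: TP / actual positives

# Invented example labels (1 = positive class, 0 = negative class).
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
print(accuracy(tp, tn, fp, fn))  # 0.625
print(recall(tp, fn))            # 0.5
```

With the worked-example counts used later in this article (TP = 20, TN = 800, FP = 200, FN = 80), `recall(20, 80)` gives 0.2 and `accuracy(20, 800, 200, 80)` gives roughly 0.745, matching the numbers in the text.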
Recall is all about minimizing false negatives (Type 2 errors), so when our objective is to minimize false negatives, we choose recall as a metric.

Why is recall a good metric for imbalanced data? Consider an imbalanced dataset with 1,100 total samples, of which 91% belong to the negative class: True Positives = 20, True Negatives = 800, False Positives = 200, False Negatives = 80. Putting these values into the recall formula gives recall = 0.2. This means that out of all the actual positive class samples, only 20% were correctly predicted as positive; the other 80% should have been predicted as positive but were incorrectly predicted as negative. Here, despite a high accuracy of 74.5%, the recall score is very low because the number of false negatives exceeds the number of true positives.

Specificity.

Specificity is defined as the number of correct negative predictions divided by the total number of actual negative samples, i.e., true negatives plus false positives; it is also called the true negative rate. The specificity value ranges from 0 to 1 and can be calculated from the confusion matrix as: Specificity = TN / (TN + FP). Specificity answers the following question: out of all the actual negative class samples, how many did we correctly predict as negative, and how many should have been predicted as negative but were incorrectly predicted as positive?

Using the same imbalanced example (TP = 20, TN = 800, FP = 200, FN = 80), the specificity formula gives specificity = 0.8. This means that out of all the actual negative class samples, 80% were correctly predicted as negative, while 20% should have been predicted as negative but were incorrectly predicted as positive.

Precision.
Precision is defined as the number of correct positive predictions divided by the total number of positive predictions, i.e., true positives plus false positives. The precision value ranges from 0 to 1 and can be calculated from the confusion matrix as: Precision = TP / (TP + FP). The precision metric is often used when classes are imbalanced. Precision answers the following question: out of all the positive predictions, how many were actually positive, and how many were actually negative but incorrectly predicted as positive?

Precision is all about minimizing false positives (Type 1 errors), so when our objective is to minimize false positives, we choose precision as a metric.

Why is precision a good metric for imbalanced data? Using the same imbalanced example (TP = 20, TN = 800, FP = 200, FN = 80), the precision formula gives precision = 0.09. This means that out of all the positive predictions, only 9% were actually positive; the remaining 91% were actually negative but were incorrectly predicted as positive. Here, despite a high accuracy of 74.5%, the precision score is very low because the number of false positives exceeds the number of true positives.

What do different values of precision and recall mean for a classifier?

High precision (few false positives) + high recall (few false negatives): the model predicts all the classes properly.
High precision (few false positives) + low recall (many false negatives): the model makes few positive predictions, but most of them are correct.
Low precision (many false positives) + high recall (few false negatives): the model makes many positive predictions, but most of them are incorrect.
Low precision (many false positives) + low recall (many false negatives): the model is a no-skill classifier and can’t predict any class properly.

F1-score.

The F1-score uses both the precision and recall values: it is the harmonic mean of the precision and recall scores, calculated as F1 = 2 · Precision · Recall / (Precision + Recall). The F1-score balances precision and recall and works best when the two scores are comparable. We always want our classifiers to have high precision and high recall, but there is a trade-off between them when tuning the classifier. If we have two different classifiers, one with a high precision score and the other with a high recall score, and we want to report the performance of each as a single number, we can use the F1-score. If precision = 0 and recall = 100, then the F1-score is 0: as a harmonic mean, the F1-score always gives more weight to the lower number, so if either precision or recall is low, the F1-score is pulled down.

There are different variants of the F1-score. a) Macro-averaged F1-score: we first calculate the F1-score for each class and then take a simple average of the individual class F1-scores. The macro F1-score can be used when you want to know how your classifier performs overall across all classes. b) Weighted-average F1-score: we first weight the F1-score of each class by the number of samples in that class, then add these values and divide by the total number of samples.

F-beta score.

Sometimes it is important to give more importance to either precision or recall; in such cases we can use the F-beta score. The F-beta score is a generalization of the F1-score with a beta parameter that gives more weight to either precision or recall depending on the application.
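The precision, F1, and F-beta formulas above can be sketched in pure Python. This is a minimal sketch, reusing the worked-example counts from the text (TP = 20, FP = 200, FN = 80); the function names are my own:

```python
# Sketch of precision, F1, and the F-beta generalization.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f_beta(p, r, beta=1.0):
    # beta = 1 gives the F1-score (harmonic mean of p and r);
    # beta < 1 weights precision more, beta > 1 weights recall more.
    if p == 0 and r == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * p * r / (b2 * p + r)

p = precision(20, 200)  # ~0.0909, as in the worked example
r = recall(20, 80)      # 0.2, as in the worked example
print(round(f_beta(p, r), 3))            # 0.125  (F1-score)
print(round(f_beta(p, r, beta=2.0), 3))  # 0.161  (F2 weights recall more)
```

Note how the F1-score (0.125) sits much closer to the low precision (0.09) than to the recall (0.2): the harmonic mean is pulled toward the smaller of the two values.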
The default beta value is 1, which gives the F1-score: F-beta = (1 + beta²) · Precision · Recall / (beta² · Precision + Recall). A smaller beta value gives more weight to precision and less to recall, whereas a larger beta value gives less weight to precision and more to recall.

For beta = 0: the F-beta score equals precision.
For beta = 1: we get the harmonic mean of precision and recall (the F1-score).
For beta → infinity: the F-beta score approaches recall.
For 0 < beta < infinity: a beta value close to 0 gives an F-beta score closer to precision, and a larger beta gives an F-beta score closer to recall.

ROC (Receiver Operating Characteristic) curve.

The ROC curve is used to visualize the performance of a binary classifier. It plots the true positive rate (y-axis) against the false positive rate (x-axis) at various classification thresholds. The AUC score is the area under the ROC curve; the higher the AUC score, the better the classifier’s performance. AUC values range from 0 to 1.

The true positive rate is calculated as TPR = TP / (TP + FN); a larger true positive rate means more positive points are classified correctly. The false positive rate is calculated as FPR = FP / (FP + TN); a smaller false positive rate means more negative points are classified correctly. On an ROC curve plot, a smaller value on the x-axis means fewer false positives and more true negatives, i.e., more negative points are classified correctly, while a larger value on the y-axis means more true positives and fewer false negatives, i.e., more positive points are predicted correctly. What do different values of the AUC score mean?
AUC < 0.5: the classifier is worse than a no-skill classifier.
AUC = 0.5: a no-skill classifier; it cannot separate the positive and negative classes and effectively predicts at random.
0.5 < AUC < 1: a skillful classifier; the higher the AUC score, the better the classifier separates the positive and negative classes, since a high AUC corresponds to a low FPR and a high TPR.
AUC = 1: a perfect classifier that separates all negative and positive class points accurately.

Precision-Recall Curve.

Precision quantifies the proportion of positive predictions that are correct, and recall quantifies the proportion of actual positive samples that are correctly predicted. The precision-recall curve plots precision against recall: on the x-axis we have recall values, and on the y-axis we have precision values. A larger area under the precision-recall curve means high precision and recall values, where high precision means a low false-positive rate and high recall means a low false-negative rate. On a precision-recall plot, a no-skill classifier is a horizontal line, a skillful classifier is a curve bowing toward (1, 1), and a perfect classifier is a point at the coordinate (1, 1).

Log Loss.

Log loss (logarithmic loss) is also called logistic loss or cross-entropy loss. It penalizes confident misclassifications; in general, the lower the log loss, the better the model. Log loss values range from 0 to infinity. To calculate log loss, the classifier must output a probability between 0 and 1 for each class; these probability values signify the confidence that a data point belongs to a certain class.
Log loss measures the uncertainty of our predictions based on how much they deviate from the actual values. The log loss for multi-class classification is calculated as:

log loss = −(1/N) Σ_i Σ_j y_ij · log(p_ij)

where M = number of classes, N = number of samples, y_ij is a binary indicator that is 1 if observation i belongs to class j and 0 otherwise, and p_ij is the predicted probability that observation i belongs to class j. As a reference point: if a no-skill classifier that predicts classes at random has a log loss of 0.6 on a problem, then any classifier with a log loss above 0.6 on that problem is a poor classifier.

MSE (Mean Squared Error).

Mean squared error is the average of the squared differences between the actual and predicted values; it is a metric used for regression analysis: MSE = (1/N) Σ (Y − Y^)², where Y = actual values and Y^ = predicted values. MSE gives us the mean squared distance from the actual points to the points predicted by our regression model, i.e., how far our predicted values are from the actual values. Computing the gradient is easy with MSE, and it penalizes larger errors heavily: the squaring ensures that errors are never negative and that larger errors are weighted more. A lower MSE means the model is more accurate and the predicted values are close to the actual values.

MAE (Mean Absolute Error).

MAE is another metric used for regression analysis. It is similar to mean squared error, but it is the average of the absolute differences between the actual and predicted values: MAE = (1/N) Σ |Y − Y^|. Unlike MSE, MAE gives the average absolute distance between the actual points and the points predicted by our regression model. Computing the gradient is harder with MAE, since the absolute value is not differentiable at zero, and larger errors are not penalized disproportionately. A lower MAE means the model is more accurate and the predicted values are close to the actual values.
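The log loss, MSE, and MAE formulas above can be sketched in pure Python (shown for the binary case of log loss). The sample labels, probabilities, and regression values below are invented for illustration:

```python
import math

# Sketch of binary log loss, MSE, and MAE from the formulas above.
def log_loss(y_true, y_prob, eps=1e-15):
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

def mse(y_true, y_pred):
    return sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(y - yh) for y, yh in zip(y_true, y_pred)) / len(y_true)

print(round(log_loss([1, 0, 1], [0.9, 0.1, 0.8]), 4))  # 0.1446 (confident, mostly correct)
print(mse([3.0, 5.0, 2.0], [2.5, 5.0, 3.0]))           # ~0.4167
print(mae([3.0, 5.0, 2.0], [2.5, 5.0, 3.0]))           # 0.5
```

Note how MSE (≈0.4167) weighs the single error of 1.0 more heavily than MAE (0.5) does, which is exactly the squaring effect described above.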
Thank you for reading this article. For any suggestions or queries, please leave a comment. If you like this article, do share it with others.
https://medium.com/analytics-vidhya/understanding-performance-metrics-for-machine-learning-algorithms-996dd7efde1e
['Kiran Parte']
2020-08-07 08:57:07.050000+00:00
['Deep Learning', 'Performance Metrics', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
Custom-Shaped Switch in iOS Apps
Once I got a rather simple-looking task from a regular client of mine: he needed a screen for an iOS app with several switches. The switch was long and thin. It was different from both the UISwitch, which has a size of 51x31 points, and the Material switch, which is more flexible in size, but whose thumb is bigger than the track. Switches from client’s design

I found many solutions on GitHub, but they were either modifications of the existing UISwitch or variations of the Material switch. Other options didn’t look like switches at all. I’ll give some references, in case you’re looking for something different:

AIFlatSwitch — the switch looks more like an animated checkbox. MJMaterialSwitch — an iOS implementation of the Android Material switch. LabelSwitch — the closest switch to what I was looking for: longer than the standard one, with text on top. Unfortunately, it didn’t work when I tested it. TTSwitch — a very old switch, circular or rectangular. PWSwitch — an animated flat switch with a rather interesting design. Paper Switch — a standard iOS switch, but with an interesting effect on its superview. TKSwitcherCollection — a whole collection of different switch components. SwiftySwitch — a switch component with custom geometry, but written in an old version of Swift. PVSwitch — another Material-looking switch.

None of them completely fulfilled my requirements. Just to be clear, I needed: a switch component looking similar to the standard UISwitch; a library that can be compiled in a Swift 5-based project; and a view that can be placed and set up right in a Storyboard.

The closest to what I needed was SwiftySwitch. There were several issues I needed to fix on the go. One of the biggest advantages of SwiftySwitch is that it requires only one file. It supports CocoaPods, but obviously, this way of integration wasn’t an option for me. So I just dragged the Swift file into my project.
The standard way of integrating an old control into an iOS project includes three steps:

1. Upgrading old Swift syntax to a modern one. Apple changes some function names, deprecates some functions, and introduces new ones. Usually, Xcode offers some automatic code replacements.
2. Fixing bugs. The same functions may work differently, and UIKit views may look different than they used to. After a syntax update, a custom control needs testing and bug fixing.
3. Adjusting for project needs. You find the closest control to your needs, but it rarely matches completely. You may need to add or remove some features, or change the behavior of the view.

Here’s the original file:

To update Swift to a modern version, you can use the integrated Xcode function: click on the Edit menu and select Convert, then To Current Swift Syntax. If your project is already using the latest Swift version, Xcode won’t offer to convert only one file. This problem can be solved by either creating a separate project and running the conversion there, or reviewing the file manually. If you try to build the project, Xcode will show you errors in the file (if it doesn’t, it means that your file is already good to go).

When your project builds, test it and check whether it has any bugs. In my case, I only found a couple of issues with speed and geometry, but fortunately, all these parameters could be configured in the Storyboard editor. If you’re going to use the switch in many places, you can change the default values. And that’s how it looks in the app: Switches in the app after modifications

Final source code:

I hope it was helpful for you. See you next time and happy coding!
https://medium.com/swlh/custom-shaped-switch-in-ios-apps-41de6b5873e6
['Alex Nekrasov']
2020-11-19 17:36:18.853000+00:00
['Mobile App Development', 'Swift', 'iOS App Development']
How We Can Prove Nature Codes Reality
Everything depends on light. Zero and one. Light is the constant in the reality we live. At present. We know, if the light goes out, we’re dead. This means, we can prove that nature codes reality. If you live in Florida, and you move from south to north, on the eastern coast, you will notice, different kinds of lizards. The lizards are all the same. Except as you move north, on the coast, they change. This is basic evolution. Co-evolution, to be exact. The lizards change their shapes, and colors, depending on their environment. Proximity to the sun. Water. And water’s relationship to the sun. Hydrogen, when burned, produces water. Explaining anger and depression, over change. Water moves as hydrogen moves, meaning, everything moves, according to its relationship to light. This throws everything off, as far as timing, where southern Florida is on one clock, northern Florida, and, also, east and west, is on a slightly different clock. We know about the subtle change in timing, as it comes to, and from light. We call this ‘time zones.’ Explaining the relativity of light (and time). Meaning no two anything’s share a time zone. This you get to with your abstract (version of) intelligence. Meaning, this is something you automatically know, without ‘knowing it.’ So, the idea light is the (only) constant is not completely correct. Light is a line. A line is the diameter, and circumference of a circle. Meaning, we have two constants. Zero and one. Energy and light. Time and space. Any X and-or Y. Any zero and-or one. Zero is dependent on one, and vice versa (making everything dependent on everything else). This explains why technology is now taking over everything, and will determine, eventually, if we even have an ‘earth.’ Light. Sun. Conservation of the circle is the core dynamic in nature.
https://medium.com/the-circular-theory/how-we-can-prove-nature-codes-reality-1eab4a65f618
['Ilexa Yardley']
2017-08-03 12:06:44.941000+00:00
['Circular Theory', 'Universal Relativity', 'Universal Circularity', 'Technology', 'Science']
Covid-19’s Other Sobering Statistics
Image: Pixabay / Omni Matryx Covid-19’s Other Sobering Statistics A tale of American woe in 20 numbers Beyond the more than 100,000 U.S. Covid-19 deaths, several numbers starkly reveal strife and stress that’s distributed unequally and not as routinely publicized. Here’s a snapshot of U.S. statistics at this sobering moment in time (each number links to the source of the data): America’s global ranking on Covid-19 deaths. Deaths accumulated in 90 days since the first one. This is more than the combined U.S. deaths in the Korean and Vietnam wars, both in total and on a per capita basis. U.S. share of global Covid-19 deaths. The U.S. has 4.25% of the world’s population. The year that began the last flu season in which more than 100,000 people died. Deaths up through May 3 that could have been prevented had U.S. officials implemented social distancing policies one week sooner, according to a Columbia University study. Mothers with children 12 and under who say that since the pandemic started, “the children in my household were not eating enough because we just couldn’t afford enough food.” Adults who say they’ve experienced high levels of stress during the pandemic. People who’ve died in nursing homes. It’s important to note that Covid-19 spares no age groups, though it is decidedly worse for people over age 65. Healthcare workers killed by the disease. Adults who say they feel anxious or nervous at least half the days of every week. Deaths per 100,000 black people. Compares with 74.3 for Hispanics, 45.2 for whites, and 34.5 among Asians. This is no surprise to demographers and health-care experts, who’ll tell you that life expectancy varies dramatically based on disparities deeply rooted in geography, wealth, and race. Deaths among black people in Chicago, who make up 30% of the city’s population. Deaths among Native Americans in Arizona, who make up just 5% of the state’s population. Unemployment among Hispanic and Latino workers as of May 8, vs. 
16.7% for black workers and 14.2% for whites. In the past 10 weeks, 40 million people have filed for unemployment, according to the latest jobs report released today. Unemployment among teenagers (more than double from the year prior). Hispanic adults in households where at least one adult has lost their job or had income reduced, vs. 43% for the entire population. People who have been diagnosed with Covid-19 who have died (however, the death rate varies wildly for younger people and older people, and is based only on diagnosed cases, not total cases). Number of men and boys age 75 and younger who’ve died, compared to women and girls. Americans who say they’re “pretty sure” they had Covid-19, though only 2% said that’s been confirmed by a test. Meanwhile, 20% know someone who has been hospitalized or died due to the disease, and that figure is 34% among black Americans. Number of states where the number of new Covid-19 cases was rising as of this writing on May 28, 2020. Scientists expect additional substantial outbreaks of infection anytime between this summer, this fall, and into 2022, all depending on the extent to which preventive measures are encouraged and followed.
https://medium.com/luminate/covid-19s-other-sobering-statistics-dfc828f5ccbc
['Robert Roy Britt']
2020-05-28 15:10:45.011000+00:00
['Covid 19', 'Coronavirus', 'Unemployment', 'Death', 'Stress']
Don’t Waste Your Life Trying to Prove Yourself
There it is again, that feeling that I’m not enough. The belief that if I don’t prove that I’m a competent leader, I’m going to get found out. I just received an email requesting that I update my board on sales performance. “That’s it,” I think to myself. “I’m going to get found out. At some point, everyone’s going to see that I’m a fraud. They don’t trust me. Everyone is thinking that I don’t know what I’m doing. I’m being called out and better get my shit in order”…and on it goes. Even though it’s Saturday morning and I have a full day planned with my family, I start obsessing over how I should respond. I’m thinking I should go type an email response, and that my kids can wait. Everyone must know that I’m doing my best and working hard. So, I’ll prove it to them by responding immediately.
https://medium.com/swlh/dont-waste-your-life-trying-to-prove-yourself-c67726a7a04a
['Zach Arend']
2019-12-28 17:05:39.290000+00:00
['Work', 'Leadership', 'Self', 'Mindfulness', 'Productivity']
Which is Better To Learn, Python or Node.Js?
Every project has its own specifications and demands. And when you’re building an application, it’s crucial to choose the right technology to code it. We’ll take a look at Python vs. Node.js to learn about their benefits, downsides, and use cases so you can make an educated decision about which one is best suited to your project. Why Your Tech Stack Choice Matters You can ask your peers for advice about what technology to choose, Google the answer, or ask developers which technology they prefer. Each source will give you a different opinion, but none of these options will tell you reliably which technology is the best fit for your project. Programming languages and frameworks were designed to fulfill specific project goals, and that’s the main criterion to base your choice on. Don’t go by popularity alone. For example, some technologies are a better fit for Big Data applications (like Python and R), while others are more often used for building large desktop applications (like Java, C, C++, and C#). The choice of technology should be deliberate and based on your needs and capabilities, such as:

The type of project: a business application, a game, payment software
Product type: a dynamic messenger or a data analytics platform
Application geography: local, countrywide, or worldwide
Budget: how much you can spend on technology and developer salaries to build and support your project in the long run

The list can go on, but it’s essential to take every feature of your future product into consideration when choosing the technology you’ll use to build it. By comparing Python vs. Node.js for backend development, we’ll show you how good technologies vary in their advantages and areas of application. Python: Pros, Cons & Python Use Cases Python is an oldie, but a goodie.
This programming language originated in the early ’90s and is still one of the most innovative, flexible, and versatile technologies thanks to its continually developing libraries, excellent documentation, and cutting-edge implementations. For example, Python is the go-to language for data science, machine learning, and AI projects. According to JetBrains research, it will remain that way for the next five years. Python also has one of the largest communities that contributes to improving the language to handle modern-day programming tasks, as shown in this diagram. Like any other technology, Python has its pros, cons, and specific spheres of application. I have used Python for many different projects like monitoring and payment platforms, real estate and security solutions, FinTech (ClearMinds), travel (Padi Travel, Diviac), and healthcare (Haystack Intelligence) platforms. Time and time again, it has proven to be a robust technology for handling all of the tasks our clients came with. Python Pros Python has many advantages that facilitate development in diverse projects, from startups to big enterprise platforms. Here are some of the most prominent ones: Python reduces time to market Python allows you to develop an MVP or a prototype in a limited time frame, so you can reduce time to market (TTM). That’s achieved thanks to Python’s rapid development methodology, which allows you to maintain several iterations at a time, and the DRY (don’t repeat yourself) principle, which means you can reuse parts of the code. These Python features offer a lot of flexibility to your project since you can go back and forth with the consumers, offer a solution, gather feedback, make improvements, and scale your prototype into a fully fledged web application. Reddit user: I work for the loan management division of a company that finances large purchases (furniture, refrigerators, etc.).
My coworkers manage our accounts and I support them and management with data analysis and workflow automation. Since there is such a focus on productivity, a short delivery time is often the most important thing, right after “How many FTEs will that save?” So I use Python for its flexibility and the speed at which it allows me to write usable code. I can cover more bases much more quickly than I could with something like .NET, Java, or any Windows scripting utilities, and none of my work is user-facing, so I don’t need extensive GUI capabilities. Python fits this niche perfectly. Python has a simple syntax One of the top reasons why developers like Python so much is that it has a simple syntax that allows them to express concepts in just a few lines of code and makes it easier to solve errors and debug the code. Python is all about code readability. It’s also simple enough for the clients to understand, which makes for more convenient collaboration. Python has a wide range of development tools and frameworks Sublime Text, a popular code editor, provides support for Python coding, as well as additional editing features and syntax extensions. Powerful web frameworks simplify the process and allow developers to focus on the logic of your applications. We use Django, which is a full-stack framework for developing all kinds of applications (simple or complex) and (thanks to its DRY philosophy) optimizing the time required to complete a project.
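As a small, generic illustration of the brevity described above (a sketch of my own, not code from the article), counting the most common words in a text takes only a few readable lines:

```python
from collections import Counter

def top_words(text, n=3):
    """Return the n most common words in a text, with their counts."""
    words = text.lower().split()
    return Counter(words).most_common(n)

print(top_words("to be or not to be"))  # [('to', 2), ('be', 2), ('or', 1)]
```

The standard library’s Counter does the tallying, so the whole task stays close to a plain-English description of it.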
It has a large community Comparing Python and Node.js, Python is a more mature open-source language and has one of the biggest user communities. It has an incredible number of contributors, from junior to experienced. That means at least two things: it’s easy to find developers, and you get an active, supportive community that’s eager to share solutions and improve the language. Reddit user: I create software libraries for Raspberry Pi add-ons (known generally as HATs, for hardware attached on top) and, for better or worse, the canonical language on the Pi is Python. It’s seen generally as a fairly friendly language for beginners, and since the whole community is involved with projects, examples, guides, and tooling, there’s no reason to go against the grain. But that’s not to say I don’t enjoy Python. It’s quite probably my least-hated programming language on reflection. I’ve just released Python libraries to deploy fonts for use with example code that drives LCDs, OLEDs, and eInk displays; working with namespace packages and entry points has been interesting and has allowed me to solve the font problem in a way that can be shared and built upon by the community. Python Cons Python is a great fit for most types of projects, but it does have a couple of limitations: Python is single-flow Like any interpreted language, Python has a slower speed of execution compared to compiled languages (like C or Swift). It might not be the best choice for applications that involve a lot of complex calculations, or any project where speed of performance is the most important requirement (for example, in high-frequency trading).
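The single-flow point can be demonstrated with a CPU-bound task: on CPython, the global interpreter lock (GIL) means two threads doing pure computation take roughly as long as running the same work serially. A minimal sketch (timings vary by machine; the thread count and workload are arbitrary choices for illustration):

```python
import threading
import time

def busy(n):
    """Pure-Python CPU-bound work: sum of squares below n."""
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 1_000_000

start = time.perf_counter()
busy(N)
busy(N)
serial = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=busy, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# On CPython, 'threaded' is not ~2x faster than 'serial'; the GIL
# serializes bytecode execution across the two threads.
print(f"serial={serial:.3f}s threaded={threaded:.3f}s")
```

For CPU-heavy work, Python programs typically reach parallelism through multiprocessing or native extensions (e.g., NumPy) rather than threads.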
Weak in mobile computing Python is great for developing server and desktop platforms, but it is considered weak at mobile computing. That’s why few smartphone applications are written in Python. When to Use Python Python is the language of choice for all sorts of projects, whether small or large, simple or complex. That includes business applications, desktop user interfaces, educational platforms, gaming, and scientific apps. As for the area of application, Python is mostly used for:

Data science, including data analysis (Apache Spark), machine learning (TensorFlow), and data visualization (Matplotlib): some Facebook systems use Python’s Pandas library of data analysis tools; face and voice recognition systems; neural networks and deep learning systems
Web development: web development frameworks (Django, Flask, CherryPy, Bottle)
Desktop GUI: 2D image-processing software like Scribus and GIMP, and 3D animation software like Cinema 4D, Maya, and Blender
Scientific applications: 3D modeling software like FreeCAD and finite element software like Abaqus
Gaming: 3D game engines (PySoy) and actual games, such as Civilization IV and Vega Strike
Business applications: Reddit was rewritten in Python in 2005, and Netflix’s engine is written in it
DevOps, system administration, and automation scripts: small apps for automating simple tasks
Parsers, scrapers, and crawlers: a parser for compiling data about forecasts from different websites and displaying the results
Software testing (including automated tests): unit-testing tools like Pytest, or web testing tools like PAMIE and Selenium

Python is an easy yet powerful and versatile programming language with advanced documentation and high-level development frameworks. It’s the go-to language for Big Data applications and also suits business solutions, educational platforms, and scientific and healthcare applications. Node.js: Pros, Cons & Node.js Use Cases Node.js is an environment that allows JavaScript to be used for both back-end and front-end development, as well as to solve compatibility issues. It can also be defined as a server-side scripting language. It was launched in 2009, not that long ago, and is steadily gaining in popularity. Node.js Pros When comparing Python vs. Node.js for web development, Node has a few benefits to boast about: Node.js enables fast performance When comparing Node.js vs. Python speed, you’ll find that the former is faster. Node.js is based on the Google V8 engine, which makes it good for developing chatbots and similar real-time applications. Reddit user: I run a small business and do all the tech work, which includes scripts, services, internal web apps, api scraping, DB admin, etc. I like that I can develop quickly with Node.
If we were going to scale out anything I’d probably go with a more mature and locked down technology, but the MEAN stack is perfect for us at the moment. I also enjoy JavaScript as a language. It enables full-stack development You need one team of developers who knows JavaScript, and they can do the whole application, front- and back-end. It’s one way to reduce costs, considering that it’s easy to find JavaScript developers and you don’t need that many. Great for developing real-time apps Its event-driven architecture allows you to develop chat applications and web games. Node.js Cons Node.js requires a clear architecture It’s an event-driven environment, so it can run several events at a time, but only if the relationships between them are well written. It can’t handle CPU-intensive tasks A heavy computational request will block the processing of all other tasks and slow down an application written with Node. Therefore, it’s not suitable for projects based on data science. Underdeveloped documentation Unlike Python, which has comprehensive and up-to-date documentation, Node.js documentation is lagging. Plus, there is no single set of core libraries and tools; there are too many alternatives, so it’s not always clear which you should choose. When to Use Node.js Node.js is the go-to technology for developing apps like ad services, gaming platforms, or forums.
It’s good at handling projects with a lot of simultaneous connections or applications with high-speed and intense I/O (input/output), as well as applications such as productivity platforms (e.g., content management systems), P2P marketplaces, and eCommerce platforms. Node is used in different types of web applications, such as: Social and productivity platforms: LinkedIn, Trello; Business applications: eBay, Walmart; Payment systems: PayPal; Entertainment platforms: Netflix. Looking at Python vs. Node.js performance and use cases, we can see that both cater to different needs. Node.js is used for solutions where Python isn’t usually applied; for example, for real-time applications that require more speed, or in cases where you want the same team to work on both front- and back-end development. Conclusions As you can see, Python and Node.js both have their advantages and disadvantages, and they are used for different kinds of projects. So when you are choosing between Node.js and Python, you need to look at all the pros and cons to decide which one is most suitable for your application. I have been working with Python for a long time, and over the years I have used it to build everything from high-quality mid-size web applications to complex enterprise-grade solutions. And every project has convinced me (and still does) that Python helps simplify development, reduces time and costs, and allows me to scale a project quickly and easily. If you are looking to learn Python or Node.js in depth, I highly recommend Mosh’s courses. The links to all of the courses are below:
https://medium.com/javascript-in-plain-english/which-better-to-learn-python-or-node-js-ad21f452e14f
['Mosh Hamedani']
2020-09-25 00:47:06.907000+00:00
['Python', 'JavaScript', 'Web Development', 'Nodejs', 'Programming']
Determining the Shape of a Hanging Cable Using Basic Calculus
A perfectly flexible chain in equilibrium suspended by its ends and subject to gravity has the shape of a curve called the catenary. The name was coined in 1690 by the Dutch physicist, mathematician, astronomer, and inventor, Christiaan Huygens in a letter to the prominent German polymath Gottfried Leibniz. The catenary is similar to a parabola which led the great Italian astronomer, physicist, and engineer, Galileo Galilei, the first to study it, to mistakenly identify its shape as a parabola. The correct shape was obtained independently by Leibniz, Huygens, and the Swiss mathematician Johann Bernoulli in 1691. All of them were responding to a challenge proposed by the Swiss mathematician Jacob Bernoulli (Johann's older brother) to obtain the equation of the “chain-curve.” Figure 1: From left to right, Jacob Bernoulli (source), Gottfried Leibniz (source), Christiaan Huygens (source) and Johann Bernoulli (source). The figures that Leibniz and Huygens sent to Jacob Bernoulli are shown below. They were published in the Acta Eruditorum, the “first scientific journal of the German-speaking lands of Europe.” Figure 1: The figures submitted by Leibniz (left) and Huygens (right) to Jacob Bernoulli for publication in the Acta Eruditorum (source). Johann Bernoulli was delighted that he had successfully solved the problem his older brother Jacob failed to solve. Twenty-seven years later, he wrote in a letter: The efforts of my brother were without success. For my part, I was more fortunate, for I found the skill (I say it without boasting; why should I conceal the truth?) to solve it in full…. It is true that it cost me study that robbed me of rest for an entire night. It was a great achievement for those days and for the slight age and experience I then had. The next morning, filled with joy, I ran to my brother, who was struggling miserably with this Gordian knot without getting anywhere, always thinking like Galileo that the catenary was a parabola. Stop! Stop! 
I say to him, don’t torture yourself any more trying to prove the identity of the catenary with the parabola, since it is entirely false. — Johann Bernoulli Finding the Equation of the Catenary To find the equation of the catenary the following assumptions are made: The chain (or cable) is suspended between two points and hangs under its own weight. The chain (or cable) is flexible and has a uniform linear weight density (equal to w₀). The treatment here follows closely the book by Simmons. To simplify the algebra, we will let the y-axis pass through the minimum of the curve. The length of the segment from the minimum to the point (x, y) is denoted by s. The three forces acting on the segment are the tensions T₀ and T, and its weight w₀s (see figure below). The first two forces are tangent to the chain. Figure 2: This figure contains the parameters and variables used in the calculation. For the segment to be in equilibrium horizontally and vertically, the following two conditions must be obeyed: Equation 1: Equilibrium conditions for the segment with length s. The differential equation we need to solve is: Equation 2: Differential equation we need to solve. We now have to rewrite this equation in terms of y and x only. We first differentiate it to obtain: Equation 3: The derivative of Eq. 2 The derivative ds/dx can be written in terms of dy/dx as follows: Equation 4: The derivative ds/dx written in terms of dy/dx. Figure 3: The infinitesimal triangle used in Eq. 4 Eq. 3 then becomes: Equation 5: Differential equation of the catenary. To quickly solve Eq. 5 we conveniently introduce the following variable: Equation 6: Definition of u used to solve Eq. 5 Using Eq. 6, Eq. 5 becomes: Equation 7: Eq. 5 expressed in terms of the variable u. This equation can be integrated by variable separation and a simple trigonometric substitution u = tan θ: Equation 8: Eq. 7 after integration.
Since the y-axis passes through the minimum of the curve, we have: Equation 9: The variable u is zero at the minimum of the curve. Substituting Eq. 9 into Eq. 8 we obtain: Equation 10: Using Eq. 9 to determine c in Eq. 8. Substituting c=0 into Eq. 8 and solving for u we obtain:
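The equation images referenced by the captions above do not survive in this text. A LaTeX reconstruction of Eqs. 1-10, following the standard derivation the captions describe (as in Simmons), is sketched below; the last two lines complete the truncated final step:

```latex
% Eq. 1: horizontal and vertical equilibrium of the segment
T\cos\theta = T_0, \qquad T\sin\theta = w_0 s
% Eq. 2: dividing the two conditions (tan(theta) = dy/dx)
\frac{dy}{dx} = \frac{w_0 s}{T_0}
% Eq. 3: differentiating Eq. 2 with respect to x
\frac{d^2 y}{dx^2} = \frac{w_0}{T_0}\,\frac{ds}{dx}
% Eq. 4: arc length from the infinitesimal triangle
\frac{ds}{dx} = \sqrt{1 + \left(\frac{dy}{dx}\right)^{2}}
% Eq. 5: differential equation of the catenary
\frac{d^2 y}{dx^2} = \frac{w_0}{T_0}\sqrt{1 + \left(\frac{dy}{dx}\right)^{2}}
% Eq. 6: substitution
u = \frac{dy}{dx}
% Eq. 7: Eq. 5 in terms of u
\frac{du}{dx} = \frac{w_0}{T_0}\sqrt{1 + u^{2}}
% Eq. 8: separating variables and integrating (via u = tan(theta))
\ln\!\left(u + \sqrt{1 + u^{2}}\right) = \sinh^{-1} u = \frac{w_0}{T_0}\,x + c
% Eq. 9: the y-axis passes through the minimum
u = 0 \quad \text{at} \quad x = 0
% Eq. 10: hence
c = 0
% Solving Eq. 8 for u, then integrating once more:
u = \frac{dy}{dx} = \sinh\!\left(\frac{w_0 x}{T_0}\right), \qquad
y = \frac{T_0}{w_0}\cosh\!\left(\frac{w_0 x}{T_0}\right) + C
```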
https://medium.com/cantors-paradise/determining-the-shape-of-a-hanging-cable-using-basic-calculus-453305d1cb65
['Marco Tavora Ph.D.']
2020-11-30 19:14:07.352000+00:00
['Calculus', 'Physics', 'Science', 'Math']
Get Ready For Concurrent Rendering In React
React has seen a lot of changes lately, and with evolving terminology and certain buzzwords, there can be a lot of confusion about what’s to come. In this write-up, I hope to clarify some of the understanding around Concurrent Rendering, and also give you some tips on how to get ready for the upcoming changes to React. What is Concurrent Rendering? “Concurrent Mode lets React apps be more responsive by rendering component trees without blocking the main thread.” — Dan Abramov Basically, Concurrent Rendering is an umbrella term for the rewrite of the React architecture that allows React to run in a more performant way without blocking the main UI thread. By enabling Concurrent Mode in our apps, React starts using the new architecture so we can gain access to user experience and performance benefits right away. If you’re looking to take Concurrent Rendering even further, the API is slowly being released, which allows us to have even more control over the new architecture. This will give our applications a smoother user experience and give us more control over the priorities that the browser is working on. The image below illustrates some of the proposed API: Async Mode vs. Concurrent Mode “You might have previously heard Concurrent Mode being referred to as “Async Mode”. We’ve changed the name to Concurrent Mode to highlight React’s ability to perform work on different priority levels. This sets it apart from other approaches to Async Rendering.” — React team Initially, when the React team started demoing and talking about these future additions, they used the term Async Mode. This caused a bit of confusion in the community and was misleading; I was also confused. Async Mode and Concurrent Mode are the same thing. After the React team realized the term Async Mode was causing a lot of confusion, they started gravitating towards using the name Concurrent Mode, since Async Mode didn’t quite fit. How do we try it? Concurrent Mode is available for testing on 16.7.0-alpha.0.
It’s a limited API that will evolve over time. This alpha is just for testing purposes, and it’s not recommended that you move your current codebase over just yet. We can enable Concurrent Mode by using .createRoot instead of .render in our application’s initialization. This will automatically opt us into Concurrent Mode. Alternatively, we can wrap our application with the <ConcurrentMode> wrapper. When should we have our apps ready for it? The React team recently released the following roadmap. This gives us an idea of when we should expect Concurrent Mode, but these timelines could change. Version Release Schedule: Update, Nov 2020: Although React didn’t hit its release schedule for Concurrent Mode, it released React 17, which will get us further ready for Concurrent Mode in the near future. How do we get ready? Use StrictMode StrictMode was introduced in React 16.3. It is enabled when you wrap your component with <StrictMode></StrictMode> and is a tool for highlighting potential problems in our applications. When enabled, React compiles a list of all class components using the unsafe lifecycles and logs a warning message with information about these components. This makes it easy to address the identified issues, which will make it easier to take advantage of Concurrent Rendering in the future. You can read more about the advantages of StrictMode here. Upgrade old lifecycle methods The following lifecycle methods will be deprecated in future versions of React and can be problematic with Concurrent Rendering: componentWillMount, componentWillReceiveProps, and componentWillUpdate. Most of the use cases for the old lifecycle methods can be rewritten with the two new lifecycle methods: getSnapshotBeforeUpdate and getDerivedStateFromProps. For more information on how to upgrade old lifecycle methods, click here. The following is a chart from the React blog that gives a great visual of how the new React lifecycle methods fit in.
Conclusion As a recap, enabling StrictMode and updating old lifecycle methods is a way to make it easier for us to take advantage of Concurrent Rendering in the near future. React plans to implement these changes into a release tentatively before the summer of this year. Concurrent Rendering will allow our applications to be more performant by unblocking the main UI thread of the browser, equating to smoother user experiences. In the future, we will also have access to the individual APIs of Concurrent Mode that will allow us to fine-tune our experience with things like prioritizing events, different roots for loading our app, ability to cache and warm data, and the ability to lazy load content that is off the screen with the hidden prop.
https://medium.com/well-red/get-ready-for-concurrent-rendering-in-react-120c2fdcd7a9
['Lee Dauphinee']
2020-11-24 14:46:17.361000+00:00
['Developer Tools', 'React', 'Web', 'Web Development']
Astronomers Have Detected a Planet’s Radio Emissions 51 Light-Years Away
by Ryan Whitwam Astronomers have detected thousands of exoplanets, but there’s only so much we can know about them from light-years away. A new study from Cornell University could help shed light on the conditions of exoplanets by analyzing radio emissions connected to their magnetic fields. The researchers claim this marks the first time an exoplanet has been detected in the radio bands. This project started with the study of Jupiter, which has a hugely powerful magnetic field. Several years ago, study lead author Jake Turner conducted an analysis of Jupiter’s magnetic field. In the new study, that data becomes the basis for hunting exoplanets. The team processed the Jupiter data to simulate the radio frequency signal from a distant gas giant. The results became a template for similar planets that might be 40 to 100 light-years away from the observer. Using the Low Frequency Array (LOFAR), the team scanned several nearby solar systems that are known to host exoplanets. If the signals from one of these stars matched the template, that would indicate they’d found an exoplanet’s emissions in the radio spectrum. It took more than 100 hours of observational time, but a star known as Tau Boötes, 51 light-years distant, exhibited exactly the kind of signal the researchers were hoping to find. Turner and his colleagues even used other radio telescopes to repeat the analysis, and the signal is still there. And that makes sense: Tau Boötes has one known exoplanet, a gas giant called Tau Boötes b that orbits very close to the star. The LOFAR radio telescope. According to the researchers, the signal is understandably very weak. There were several other stars with radio pings that could have been planets, but the one in Tau Boötes was much more significant.
The team is now calling on other researchers to confirm the findings — data on an exoplanet’s magnetic field could be invaluable, but it’s still possible the signal is coming from the star or some other local source rather than the planet. The researchers say that the magnetic field of a planet can offer hints of composition and habitability. For example, Earth’s magnetic field is a product of the planet’s iron core, and the field helps deflect dangerous radiation that can harm living things and strip away a planet’s atmosphere. Mars’ lack of a magnetic field is believed to be one of the reasons it’s so inhospitable. After confirming and refining Turner’s results, astronomers might be able to learn about distant worlds by scanning for radio frequency emissions.
https://medium.com/extremetech-access/astronomers-have-detected-a-planets-radio-emissions-51-light-years-away-74527f1a49ce
[]
2020-12-21 13:15:31.808000+00:00
['Space', 'Science', 'Astronomy', 'Exoplanets', 'Aliens']
What’s a Matrix Without a Glitch or Two?
Medium has a lot of great features. The story editor, while a bit limited in format options, is intuitive, easy to use and makes a great eye-catching story. The ability to build publications is pretty cool, and the interface between writers and readers is really pretty special. There is a lot to like about Medium all around, and in a lot of ways it is remarkable how resilient the platform is. For instance, I have never lost even a few words of a draft story. The historical draft preservation, even across multiple open tabs is pretty awesome too. But, every once in a while, there is a glitch. One of the built-in features of the platform that I have been using pretty regularly is the ability to pre-schedule articles for publishing. I love being able to set the schedule so that my article publishes just when I want. Yesterday that feature didn’t work right though. The daily update piece that I pre-scheduled to post at 5:07 am my time today mysteriously showed up in the regular queue for publication yesterday. It wasn’t supposed to appear in there at all. ILLUMINATION is a busy publication and it got pushed out in the normal flow of editors working through the articles. Not a big deal in the grand scheme of things. And, I am not really naïve or narcissistic enough to believe that anyone is sitting by their computer waiting for my daily feature to come out at precisely 5:07 am either. But it makes me feel better if my updates are spaced more than just a few hours apart. So, guess what? Yep! Mini feature. Whoop, whoop! Plus, one of the benefits of a re-do is that I can update things I missed and bring you a bonus daily tip! The Daily Tip! (Bonus Edition) Revitalize your older stories! Medium prohibits duplicate content, and one of the things you cannot do is delete an existing established story, then re-publish it. At least not without making substantive changes to the story, including the title. 
But what can you do if you have an older story that is pretty solid, but you wish a few more people could get their eyes on it? One good way to do that is exactly what I am doing here. You can write up a teaser for your older story with a new title and enough content within the new story to make it viable on its own — then link your older story in the new one. This works especially well if there is something going on in the news that relates to your prior story, making it especially pertinent today. Another approach is linking two of your older stories with a common new theme i.e. you have two older poems about autumn leaves and it’s the first day of fall. Speaking of Poems Something I missed putting in my regular feature is Dr Mehmet Yildiz‘s new invitation to poets which encourages using YouTube videos to support your writing. Read all about it here: ILLUMINATION Daily Bulletin One of the minor drawbacks of scheduling my July feature ahead of time is that the link to Dr. Yildiz’s daily story bulletin might be a day behind. This mini update allows me to bring you the hottest-off-the-press version; here: Link to Regular Feature And, just in case you are the one person sitting by the computer at 5:07 am, here is the link to my errantly pre-published full feature for today, July 27, 2020: Thanks for reading! P. S. Ironically, when I went to publish this a few hours ago to meet that 5:07 am deadline, Medium was struggling with images for some reason, and I could not upload a picture for this until now. Shhh. Don’t say anything though, I don’t want them to feel bad after I talked them up in the first part of the article! ;-)
https://medium.com/illumination/whats-a-matrix-without-a-glitch-or-two-b9fca114a212
['Timothy Key']
2020-07-27 14:53:27.028000+00:00
['Leadership', 'Inspiration', 'Writing', 'Reading', 'Poetry']
Making a Stand Alone Executable from a Python Script using PyInstaller
Making a Stand Alone Executable from a Python Script using PyInstaller This article is also available in English. Please click here to read the English version.
https://medium.com/dreamcatcher-its-blog/making-an-stand-alone-executable-from-a-python-script-using-pyinstaller-d1df9170e263
['ওয় স', 'Wasi']
2019-07-11 08:34:00.323000+00:00
['Executable', 'Py2exe', 'Make Exe', 'Python', 'Pyinstaller']
My Mother's Day Wish to All the Moms I Know
It’s Mother’s Day. I’ve been a mother for 22.5 years now. It was the biggest blessing to me ever and I am grateful every day for these three amazing humans who call me mom (although I never gave them permission to stop calling me mommy — but that’s another story). I have a lot of friends who are moms and a lot of relatives who are moms. They are not just moms — they are great moms. There’s totally a difference between moms who are great and moms who are not — the ones who are great know how to squash self and the ones who are not put themselves first at the expense of their children. But even the good moms have to face the day when being a mom is not their 24-hour job. My wish today is that those moms, the ones who sacrificed their time, their figures, their health and their careers to produce the kinds of children who will grow up to be good people, I want those moms to figure out what they want after being a mom. I want them to think about what they want without having to run it through the prism of what is best for everyone else. It’s a whole lot harder than you might think. I am that mom. I became a single mom against my will when my children were 11, 14 and 15. Today, my oldest will graduate soon from the University of Virginia (I am super proud) and my youngest is about to graduate from high school and head to William and Mary (I am super proud of her, too). My middle graduates next year from James Madison University (again, hella proud). But guess what? For the first time since I was 25-years-old and living in an upstairs apartment in Park Fairfax off Quaker Lane just outside Washington, D.C., I will be alone — like for months. I’m single. I’m not dating anyone. I am alone. Sometimes, it bums me out but mostly I try to think about it from the angle of — I don’t have to consider anyone else when making decisions. 
Of course, there are some limits — I need to stay here to keep in-state tuition for my kids who are still in college and I don’t want to be far from them. I have a dog and a cat and because of a recent moment of craziness, I got a puppy. I need to consider them somewhat but they are fairly easy. It’s time for me to think about what I want and what I like. I’ve gradually been easing into it and at first, I made the mistake of looking around me and seeing what other people like and thinking, that’s what I need to do. But that brought no fulfillment or satisfaction and put me right back to “What do I want?” For years, I planned my hours around sports practices, drama stuff, meals I had to cook or I felt guilty, family vacations, making sure my kids were doing whatever they were supposed to do to make it to a good college, etc. It was a lot of pressure and I always felt like I could be doing something more. That is gone. I did it. I got them into good schools. They are good people. Now, back to what do I want? I can’t even write straight out about what I want. Notice how I keep avoiding the topic? It’s really, really hard to sit down and think, “What do I like? What would I really want to do with my time if I could just ignore any judgment from anyone else?” Geez. I don’t have any big answers right now. Child number 3 doesn’t leave until August. Then it’s really going to be go-time. I know I want to write more — maybe a book. I want to learn more about cooking. I want to run more. I’d like to meet a nice man who understands that I don’t need him to complete me and appreciates my quirks. I’d like to learn to do hunt trials with my dog. I would like to figure out how to become an inspirational speaker to women, men or children who think they don’t matter. I want to get my yard under control. When I look around at my friends, almost all of whom are still married to the father of their children, I see they are a few steps behind me. 
They have to consider another human being when they make decisions. I’m envious of some of them because they truly do seem to operate as a team. Others, not so much. But most of all, these are good women. I want so much for them to listen to that little voice inside that tells them that it still has dreams. I’m learning to listen to that little voice and it has been a gift. There’s nothing more important in this life than being a mother to a child and being there for that child, day in and day out. Nothing. But when that’s done, I hope you can recover that part of yourself that knows how to find the joy in life by figuring out what it is that you want. Happy Mother’s Day 2018 to all the mothers I know who are simply lovely. May 2018 be the year you find out what else you have to experience in this wonderful life.
https://shannoneubankshowell.medium.com/my-mothers-day-wish-to-all-the-moms-i-know-640b4b6df33f
['Shannon Eubanks']
2018-05-13 01:34:42.830000+00:00
['Parenting', 'Empty Nest', 'Self-awareness', 'Mothers Day']
2019 Annual Data Visualization Survey Results
The results maintain a theme seen in last year’s survey that, at least among respondents, we are seeing a real youth movement in the profession. Half of the responses indicated 3 or fewer years of experience. Gender diversity still skews dramatically toward men, who outnumber women practitioners 2 to 1. There’s an interesting detail in the questions that posed a few standard phrases about data visualization and asked how strongly the respondent agreed or disagreed with them. One question asked whether respondents were on the lookout for new tools. The answer was one which we might expect, with a significant agreement that new tools and techniques are important to success. But if we examine the results of a related question, we see that respondents don’t think the tools themselves are keeping them from being successful. Rather, more respondents thought it was their skills, not their tools, that held them back. Annual Survey Visualization Challenge This overview will soon be joined by more robust and interesting deep dives into the data. That’s because the Data Visualization Society is running a competition, with celebrity judges and cash prizes, for the best data visualization made with the survey results. I’m sure the results of that challenge will provide great insights for our community using this rich dataset. You can see the full description of the challenge here. I didn’t want to dig too deeply into the data, beyond what was necessary to clean it and provide a simple overview above, but I noticed that there was a theme to the responses I saw that was summarized by another of these questions. Time. If we look closely, it goes beyond this question and shows up throughout the free-text responses. For instance, what are our biggest frustrations? In 2018, the theme among responses was most decidedly that the data visualization community felt like it needed to improve its design skills. 
It was clear, regardless of whether you considered yourself a scientist, engineer or designer, that respondents wanted to invest more in learning design than new tools or techniques. I wrote about this in the context not only of the overall community but also with a specific focus on responses from R users. This year, time is a clear theme in the data. Throughout the responses, whether asked specifically about how much time they have to make data visualization or just generically what they needed to make better data visualization, the community responded with “time”. So… time to do what?
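The summary figures above come from straightforward tallying of responses. As a trivial illustration of that kind of aggregation (hypothetical answers, not the actual survey data, which you can explore via the challenge):

```python
from collections import Counter

# Hypothetical Likert-style answers to a statement like "New tools and
# techniques are important to my success" -- invented for illustration.
responses = [
    "Strongly agree", "Agree", "Agree", "Neutral", "Strongly agree",
    "Agree", "Disagree", "Agree", "Strongly agree", "Neutral",
]

tally = Counter(responses)
total = len(responses)
agreeing = tally["Strongly agree"] + tally["Agree"]
print(f"{agreeing}/{total} respondents agree ({100 * agreeing / total:.0f}%)")
```

The deep dives from the visualization challenge will of course go well beyond counts like this, cross-tabulating experience, role, and tooling.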
https://medium.com/nightingale/2019-annual-data-visualization-survey-results-334d3523073f
['Elijah Meeks']
2019-08-07 18:27:31.520000+00:00
['Design', 'Data Science', 'Data', 'Data Visualization', 'Dvsintro']
Alliteration Begins with A
Example of alliteration in Edgar Allan Poe’s The Raven poem. Hey there! Alexander here. I love using alliteration in my poems, prose & other writings. For those that are unfamiliar with alliteration, it is defined as: “Alliteration is […] a stylistic literary device which is identified by the repeated sound of the first or second letter in a series of words, or the repetition of the same letter sounds in stressed syllables of a phrase.” (Source: Wikipedia) Here is a new epigram I posted to my Instagram grid, which garnered 40 comments & over 1.1k likes in its first 24 hours. As you can see in the image, the quote reads: “At the cemetery, amongst the dead, is where I finally feel at peace.” I intentionally employ alliteration in this short saying, but not in the classical way you would typically see. The conventional approach is to repeat words with the same first letter. For example, “The big brown bear…rolls on the ground.” The three B’s are the alliterated style in action. What I did with my epigram on Instagram is to spread it out throughout the sentence. The three alliterated words are: “At the cemetery…” “…amongst the dead,” “…at peace.” These three words were intentionally chosen, as I select all of my words with intention, & as every good poet & writer should do. The reader encounters three words that begin with the letter A. The power of alliteration, especially in poetry & short prose, is that it psychologically draws the reader into the piece & lulls their conscious mind into a brief hypnotic state. Humans, especially our brains, find comfort in the familiar, & what is more familiar than repeating words that start with the same letter throughout our speech? In the first part of the epigram, I could have written “At the cemetery” as “In the cemetery…” but I frowned on using it. The reason is the word “in” may have implied, subconsciously, that I was buried “in” the ground of the cemetery vs. walking above ground, observing the headstones as I strolled by. 
Speaking of observing headstones in a cemetery, that brings us to the second section of the saying. I could have written the phrase “amongst the dead” as “between the dead” but I decided against it. It would have been equally accurate but did not contribute to the alliteration effect I was striving for. As well, I could have chosen the word “among” vs. “amongst.” The former is the standard American English variation whilst the latter is the British English variant. Heck, it’s fun to play around with language like this, especially when you are not British. The third section contains the final element of this saying’s alliteration effect. Again, I could have chosen a different word, but I went with “at peace” vs. “in peace.” The latter could unintentionally imply that I was “in the ground” or I was “rest[ing] in peace” rather than in a state of peace I had not experienced before. Now, you would think my gamut of alliteration was over, but it is not. I add to the overall effect by writing a second alliteration into the third section. If you can remember the quote, you’ll recall I close the epigram with “…I finally feel at peace.” I carry out the second alliteration sequence with two consecutive words that start with the letter F. These two F words immediately precede the final round of the first alliteration sequence. Again, I constructed the sentence with these words intentionally, so as to psychologically penetrate the reader’s subconscious mind more deeply. I love to play around with language & alliteration is one reason that I do. Alliteration happens a lot in speeches, poetry & prose. You’ll be surprised, but many people naturally use it in their everyday language too. What’s your favorite example of alliteration? Leave a comment to let me know!
https://medium.com/the-inkslinger/alliteration-begins-with-a-d7736df6a8b6
['Alexander Bentley']
2017-11-22 18:00:15.476000+00:00
['Cemetery', 'Writing', 'Poetry', 'Creative Writing', 'Quotes']
Venture capital is going to murder Medium
It’s a crying shame, really. I love Medium. It’s the best writing environment on the web, and they sweat the details like nobody else. The community too is just peachy. This could have been a love story for the ages. But I don’t think we’ll grow old together, Medium and I. I suspect it’ll end quite tragically, actually. $132,000,000 is a lot of money after all, and that’s how much venture capital Medium has been dipped in, without having a prayer or a song about how to turn into the multi-billion-dollar business it must become to satisfy the required rate of return. The clock started running three years ago when $25,000,000 of Series A growth dynamite was rigged. That means they’re about halfway until the bomb explodes, and so far the company doesn’t seem to have much luck finding the code that’ll disarm it. All the vanity metrics are up, but it’s clear that there’s no idea how to turn that into an actual moolah model. Well, short of following the same brain-dead advertising extraction scheme that Medium was specifically founded to counter. So credit to Ev, at least that strategy seems out — for now. And credit to the whole Medium team for building that better typewriter. Seriously. That’s hard! There were a bunch of big typewriter makers out there already, and how much can you improve that mousetrap anyway? But improve they did. It’s just that in Silicon Valley, you can’t merely make a better typewriter and sell that at a profit. No, you have to DISRUPT. You have to REINVENT. Well, at least you need the appearance of that, while you squeeze eyeballs until they pop out enough advertising dollars to give the VCs that 10x return. So what is Medium going to do now, after axing a third of their staff? They’re going to essentially think about how they should fulfill the mission they were founded five years ago to pursue: So, we are shifting our resources and attention to defining a new model for writers and creators to be rewarded, based on the value they’re creating for people.
And toward building a transformational product for curious humans who want to get smarter about the world every day. It is too soon to say exactly what this will look like. Wut? Five years is not enough time to think about how we should make any money in a way congruent with our founding values? That just doesn’t compute. But the convoluted language and indirection does, and the ticker is spelling S. H. I. T! This is tragic, but also expected. Medium rigged that VC bomb and is failing to disarm. And just like most other VC bombs, it too will explode and take with it the prospect of a lovely, smaller, important typewriter business. The web, of course, will go on, if murder is indeed what she’ll write. Especially for anyone who moved to Medium but hedged their bets by keeping their own domains. A quick dump and nothing will break — except our hearts. Again. Cheers to none of this actually happening, and Ev finding a diamond in the rubble before it’s too late. But if you’re a publisher on Medium, I’d dust off the contingency plans none the less.
https://medium.com/signal-v-noise/venture-capital-is-going-to-murder-medium-656cbccf4829
[]
2017-01-06 16:20:32.748000+00:00
['Startup', 'Venture Capital']
MacGyver Season 1 Episode 8 Science Notes: Corkscrew
Remember, I’m just going over the MacGyver hacks with science stuff in them. DIY Blacklight This one is fairly legit. MacGyver is in an escape room and needs to find a blacklight to read some hidden words on the wall. He says it would be easier to build a blacklight than it would be to find it. Here is MacGyver’s build. Use a smart phone LED light and an old floppy disk. In theory, this could work. Here is the short answer: most white LED lights work by having an ultraviolet light with a fluorescence coating to produce white light (which is the way the old school tube-like fluorescent lights work). This means that the white LED also produces UV light (also called blacklight). You just need to block out the visible light, and that’s where the floppy disk comes in. If you take the actual disk out of the floppy, some of them block visible light. I actually wrote a WIRED post on this; here it is: https://www.wired.com/2016/12/make-uv-light-phones-led-flash/ Fluorescence of stuff on the wall The second part of this hack is to use the DIY UV light to read the stuff on the wall. Here’s how that works. Electrified stair rail A bad guy is getting away and running down a stairwell. MacGyver pulls some wires out of a wall light and touches one of the wires to the rail and the guy gets shocked and falls. Would this work? Maybe. In order for the guy to get shocked, there has to be a complete electrical circuit that passes through the dude. That means the current would come out of the wall, go to the rail, go to the guy, go OUT of the guy, and then back to the wall. In order to get through the guy, he would have to be grounded and the rail would have to NOT be grounded. I suspect that building code requires a rail to be grounded for safety, but you never know. In order to get the guy grounded, he would have to stand on conducting ground (like metal) and have terrible shoes. But still, it’s at least possible.
Hacking magnetic lock MacGyver is trapped in another room, with essentially nothing in it. He grabs some wire out of the ceiling panels and fishes out the wires for the security pad. Then he manually enters the keypad code by connecting wires. OK, this could work. However, if it’s a legit security pad it would probably be harder to hack. Wine bottle rocket MacGyver takes some wine bottles, dumps out some of the wine and recorks them. Then he pumps them up and lets the cork pop out. Now it’s a water bottle rocket. Here is the launch in slow motion. Of course like many MacGyver hacks, this is real. The only problem is that it would take a normal person a few minutes to set up and not a few seconds. Radio jammer MacGyver needs to take out some remote controlled guns. He grabs a CB radio from a truck and hooks it up to a large power supply. This broadcasts enough static to jam the radio signal to the guns. Let’s go over the details. Could he get a CB out of a truck? Yes. Easy (but it wouldn’t be as quick as he does it). Could he hook it up to a power supply? I think he used the power lines to some metal crusher. This probably wouldn’t work. The CB runs on DC current and the big power is probably AC. Also, it probably expects 12 volts. Would this jam the signal? Here’s where he might get lucky. If the guns run on the same channel as the CB, it would work. If the power supply messes up the radio so that it just somehow broadcasts on a bunch of frequencies, it would work. So, it’s possible.
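For the electrified stair rail, the whole question reduces to Ohm's law: the current through the bad guy is the wall voltage divided by his body resistance. A quick back-of-the-envelope estimate (all numbers are illustrative assumptions, not from the episode; real skin resistance varies enormously):

```python
# Rough shock-current estimate for the stair-rail hack.
# Assumed round numbers: US wall-circuit voltage and textbook ballpark
# figures for human body resistance, hand-to-feet path.
voltage = 120.0               # volts from the wall wiring
dry_resistance = 100_000.0    # ohms, dry skin
wet_resistance = 1_000.0      # ohms, wet or broken skin

# Ohm's law: I = V / R, converted to milliamps.
dry_current_ma = voltage / dry_resistance * 1000
wet_current_ma = voltage / wet_resistance * 1000

print(f"dry skin: {dry_current_ma:.1f} mA (barely a tingle)")
print(f"wet skin: {wet_current_ma:.0f} mA (well above the ~10 mA let-go threshold)")
```

So the hack's plausibility hinges entirely on the resistance numbers: a sweaty, badly-shod bad guy standing on metal gets a serious jolt; a dry one on rubber soles barely notices.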
https://rjallain.medium.com/macgyver-season-1-episode-8-science-notes-corkscrew-42d868450bf3
['Rhett Allain']
2020-12-24 20:02:35.541000+00:00
['Macgyver', 'Science', 'DIY']
Can’t Sleep Well? One of These Techniques May Help You
Can’t Sleep Well? One of These Techniques May Help You How to tell if you suffer from insomnia and improve your sleep Insomnia is a sleep disorder that affects as many as 35% of adults. It is marked by problems getting to sleep, staying asleep through the night, and sleeping as long as you would like into the morning. It can have serious effects, leading to excessive daytime sleepiness, a higher risk of auto accidents, and widespread health effects from sleep deprivation. There are many types of insomnia. Acute Insomnia is one of them, characterized by isolated and occasional incidents, often linked to stressful times, which disappear as soon as these have been overcome. Chronic Insomnia is a disorder that occurs with the frequency of at least three nights a week and for a period of three months or more. Why should we sleep? Sleeping well is fundamental to our health and quality of life. Sleep has several functions, including consolidating our memory and regenerating the body by slowing down physiological functions (blood pressure, heart rate…). How to know whether you suffer from Insomnia or not? Try answering these four questions: 1) Do you have one or more of the following symptoms? Difficulty falling asleep Early morning awakenings Night-time awakenings Tiredness and lack of energy during the day Daytime sleepiness Irritable mood Difficulty in maintaining attention and concentration Poor performance at school or work 2) Are you dissatisfied with how and how much you sleep? 3) Do you experience this at least three times a week? 4) Has this condition been present for at least 3 months? If the answer to these four questions is YES, then we can assume that you are suffering from Chronic Insomnia and it is advisable that you consult an expert. What can be the causes of Insomnia?
Presence of medical problems Stressful events (separation, economic problems) or life changes (relocation, job change) Unfavorable environmental conditions (the room where you sleep is noisy, too hot …) Excessive time spent in bed Uneven sleeping habits The tendency to take daytime naps Inappropriate use of sleep-promoting drugs Evening activities that are incompatible with sleep (for example, staying on a cell phone or a PC, drinking coffee …). From this list of causes, it is easy to see that most of the causes that lead us to have sleep problems are due to our bad habits. How to prevent or reduce sleep problems? It is possible to promote a series of healthy behaviors that promote sleep through the application of certain rules, the so-called Sleep Hygiene Rules. These are some of the rules that promote health and sleep quality: Maintain a regular sleep habit (go to bed and wake up at about the same time) Do regular physical activity by early afternoon: it promotes deep sleep and facilitates falling asleep Maintain a regular diet: in the evening eat foods that are not too heavy, but do not go to bed without dinner because the glycemic drop can favor awakenings The afternoon nap should not exceed 30 minutes (in this way we do not enter deep sleep but remain in the first more restful sleep) Avoid drinking alcohol especially in the evening: alcohol seems to promote sleep but tends to induce superficial sleep, increasing nighttime awakenings and early awakening in the morning Avoid smoking and drinking caffeine: these are stimulating substances that promote fragmented sleep If you cannot sleep, do not stay in bed, but get up and do activities that do not require energy, not very activating (like reading a book) and go back to bed only if you are sleepy Don’t watch TV or read in bed — it’s important to associate bed only with sleep and sexual activity Maintain adequate room temperature (not too high and not too low) and make sure the
room is dark and quiet.
https://medium.com/the-innovation/cant-sleep-well-one-of-these-techniques-may-help-you-dd4eedc71d53
['Harish Maddukuri']
2020-12-28 16:32:41.858000+00:00
['Insomnia', 'Mental Health', 'Sleep', 'Self', 'Lifestyle']
Offline Attribution: How to evaluate the impact of Offline actions on Online environments — Part 2 of 2
Last week in this post we talked a little about Offline Attribution, its parallel with Online, the use of Lift as a method of attribution and some ways of calculating it. Just to recap, let’s look again at the components of the Lift calculation. The components of Lift: Observed, Baseline and Difference. We stopped at the different ways of estimating the Baseline. We looked at: The use of minutes before insertion as the baseline The use of average sessions on other days at the same time as the baseline So now, without further delay… 3. Statistical Models for Estimating the Baseline Of course, we would be remiss if we didn’t mention the crème de la crème of data science tools, statistical modeling! In advance it is important to say that it is quite difficult to start working with Offline Attribution by diving straight into Bayesian time series, machine learning and predictive models, which require high technical and/or statistical knowledge. In addition, it is more difficult to treat the data — to feed the model to be used — than in the two previous methodologies. Conceptually, however, the story here is the same: Using the minutes preceding the insertion (as well as the corresponding minutes on other days, depending on the model) to estimate a baseline that is then subtracted from the observed data. In this example we use 30 minutes prior to insertion so that the model can predict a baseline for the 5 minutes after insertion. The main advantage of using a statistical model for estimating the baseline is that we overcome the problem of over or underestimation because of anomalies and seasonality. A well-constructed model does not depend on long-term conclusions and is able to provide a reliable value for each insertion individually, even though a similar test may never have been done before (e.g. a new broadcaster, a new program, a new product or a new ad). There are several package and library options for Python and R that help you work with time series. 
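Whatever method estimates the baseline, the lift arithmetic itself is simple: subtract the baseline from each observed minute after the insertion and sum. A minimal sketch with made-up session counts, using the simple mean-of-prior-minutes baseline (a statistical model would replace that mean with a per-minute forecast):

```python
# Minute-by-minute session counts around a TV insertion (invented numbers).
# Baseline here is the mean of the 30 minutes before the ad aired.
pre_insertion = [52, 48, 50, 51, 49, 50] * 5   # 30 minutes of "normal" traffic
post_insertion = [95, 120, 88, 70, 60]          # 5 minutes after the ad

baseline = sum(pre_insertion) / len(pre_insertion)
lift = sum(count - baseline for count in post_insertion)
print(f"baseline = {baseline:.0f} sessions/min, lift = {lift:.0f} sessions")
```

The point of the fancier models is only to make `baseline` trustworthy in the face of seasonality and anomalies; everything downstream of that subtraction stays the same.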
There is no perfect statistical model, so we recommend that you investigate the alternatives and evaluate the trade-off between the complexity of its operation and the reliability of the result. Measurement of Results Up to now we have talked a lot about how to calculate values that represent the impact of Offline on Online. But what can we do with those numbers? Whatever the applied methodology, it is possible to assign a value to each insertion on TV: a comparative metric with which it is possible to identify the programs that performed better than others. We can gain some insights from the following sample table: Model B on Planet TV resulted in the largest volume of Lift (600). For a focus on performance or a more aggressive campaign, this investment may be a priority. If the objective is economy, i.e. the highest possible result at the lowest cost, the scenario reverses. All inserts have a Cost per Lift under $2.50, except for Model B on Planet TV, at a cost of $8.33 per Lift. However, Tomorrow TV has a lower average cost per Lift of around $1.75. The Real Network comes next with an average cost of around $2.00 and then Planet TV with $5.40. If there were negotiations for the purchase of television space and it wasn’t possible to use only Model A or Model B with a given broadcaster, then the cheapest investment would be insertions on Tomorrow TV. Points to Note So far so good, but when working with Offline Attribution we need to keep a few things in mind. Here are some points in relation to what we discussed over the 2 posts. 1. Overlap of Insertions When we work with attribution in the digital world, no user interaction occurs at the same time as another (except when viewing multiple banners from the same campaign on the same page, but let’s leave that aside for now). Each click occurs at a distinct moment, so it is not common to have overlap in this case.
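The cost-per-lift comparison is just cost divided by lift, computed per insertion. A sketch with hypothetical figures chosen to echo the sample discussion (broadcaster costs invented for illustration):

```python
# Cost per Lift for hypothetical insertions. Costs are invented so that the
# ratios match the sample discussion: Model B on Planet TV delivers the most
# lift but at the worst cost ratio.
insertions = [
    {"broadcaster": "Planet TV",    "model": "B", "lift": 600, "cost": 5000},
    {"broadcaster": "Tomorrow TV",  "model": "A", "lift": 400, "cost": 700},
    {"broadcaster": "Real Network", "model": "A", "lift": 300, "cost": 600},
]

for ins in insertions:
    ins["cost_per_lift"] = ins["cost"] / ins["lift"]

cheapest = min(insertions, key=lambda ins: ins["cost_per_lift"])
print(f"cheapest: {cheapest['broadcaster']} at ${cheapest['cost_per_lift']:.2f} per Lift")
```

With these numbers, Planet TV's Model B comes out at $8.33 per Lift while Tomorrow TV wins on economy at $1.75, mirroring the trade-off between aggressive reach and efficient spend described above.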
When we use Lift in Offline Attribution, however, it is possible to display two ads in a very short time, such as one or two minutes, or even the same minute. This becomes even more problematic when we consider how many broadcasters there are on TV and radio. You can use a variety of rules to mitigate this problem: consider audience ratings, attributing strictly to a channel, using machine learning to understand the attribution rule… Although this post has not addressed the subject, the important thing to keep in mind is that overlap may occur and must be dealt with. 2. Offline Attribution is not just about Lift Lift is a comparative metric for insertions that provides information beyond audience ratings. Other information can be used, such as the Gross Rating Point itself, rate of new users, how the source/medium distribution stands after insertion, etc. Particularly in relation to traffic sources, your analysis may even have a focus on Organic or Direct traffic, as the consumer is expected to search for your brand after an Offline placement. In addition, the methodologies do not necessarily need to be applied only to sessions. You can use the same rationale for installing applications or a conversion in your digital environment. 3. Is TV taking people from my site? Any of the methodologies can return a negative value for Lift for an insertion. Conceptually this does not make the slightest sense: Would a TV advertisement make fewer people open their application, for example? In cases where there is a negative Lift value, we recommend that you treat the result as zero. 4. Granularity of Data The more sub-divisions you insert in your analyses (e.g. Lift by broadcaster, program, day, state, product, city etc.) the greater the granularity of the data. This means that for each combination of these dimensions the absolute value of Lift tends to be smaller (the volume of sessions itself, minute by minute, is smaller with a larger number of sub-divisions). 
This makes simpler methodologies (such as using the minutes before an insertion as the baseline) less dependable. Bear in mind that greater granularity calls for a more sophisticated methodology, in line with the need for more specific insights. Getting Your Hands Dirty! Keeping all that we’ve said in mind, the most important step of Offline Attribution is to begin to see that Online and Offline are not two distinct entities that do not mix, but complementary media in the consumer’s journey. So prepare to get your hands dirty, choose a methodology, do some testing and start comparing the criteria traditionally used to purchase offline media with the new insights that Offline Attribution brings.
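The baseline rationale and the negative-Lift rule discussed above can be sketched in a few lines of code. This is an illustrative sketch, not code from the original posts: the class name, method names, and input numbers are all hypothetical, chosen so that the result matches the Lift of 600 and the $8.33 Cost per Lift quoted for the sample table.

```java
// Sketch: Lift with a "minutes before the insertion" baseline,
// negative results clamped to zero, and the Cost-per-Lift economy metric.
public class LiftCalculator {

    /** Average sessions per minute in the window before the insertion. */
    public static double baseline(double[] sessionsBefore) {
        double sum = 0;
        for (double s : sessionsBefore) sum += s;
        return sum / sessionsBefore.length;
    }

    /**
     * Lift = sessions observed in the insertion minute minus the expected
     * baseline volume; a negative result is treated as zero, as recommended
     * in point 3 above.
     */
    public static double lift(double sessionsAfter, double[] sessionsBefore) {
        return Math.max(0.0, sessionsAfter - baseline(sessionsBefore));
    }

    /** Cost per Lift: the metric used for the economy-focused comparison. */
    public static double costPerLift(double cost, double lift) {
        return cost / lift;
    }

    public static void main(String[] args) {
        double[] before = {100, 110, 90, 100}; // sessions in the minutes before
        double after = 700;                    // sessions in the insertion minute

        double lift = lift(after, before);     // 700 - 100 = 600
        System.out.println("Lift: " + lift);
        System.out.println("Cost per Lift: " + costPerLift(5000.0, lift));
    }
}
```

With a hypothetical media cost of $5,000, this insertion comes out at roughly $8.33 per Lift, the same figure flagged for Model B on Planet TV above.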
https://medium.com/dp6-us-blog/offline-attribution-how-to-evaluate-the-impact-of-offline-actions-on-online-environments-part-2-62dc72d4f22f
[]
2019-06-13 15:03:53.053000+00:00
['Attribution', 'Data Science', 'Marketing']
Getting started with Machine Learning
Or Deep Learning. Or Data Science. Or Data Analytics. Or whatever buzzword built on machine learning. Let’s not begin our journey into machine learning by straight away analyzing housing price markets using various regressions, or by making a ‘recommender system’ using Restricted Boltzmann Machines (RBMs), or doing something crazy with Reinforcement Learning. Yeah, I know that’s the ‘cool’ application part that apparently everyone wants to start with, but you see, you can’t construct a really cool 1000-storey building without having REALLY good foundations. Almost all of these application-based courses lack these foundations, and I know many of my friends who take these courses, get stuck on a part, and realize they don’t actually know what the hell is going on. Convinced enough about getting the basics in check? Let’s get started. Machine Learning is math. MACHINE LEARNING COURSE OFFERED BY STANFORD ON COURSERA BY ANDREW NG I cannot stress enough how important this course is for the basics. This course lays the foundation for all your Machine Learning. It was recommended to me by every professor I knew. If you’re from CS or any other engineering background, take this course. Do all the assignments. They’re fun. They’re not easy. Weeks 4 and 5 will make you cry. (Building a neural network and writing backpropagation isn’t easy.) But it’ll all be worth it. If you’re from a non-tech background like business or the arts, you may skip the assignments, although I highly recommend that you do them. An excerpt from an actual conversation Friend: “But it’s in OCTAVE. Nobody uses it anymore. And he’s so slow. And it’s too theoretical. It’s too much maths.” Me, an intellectual: “Listen, mate. Yes, it’s in Octave. But it’s alright! It’s not that bad. Andrew explains in the first video why he’s using Octave. He’s taught this in many languages before and Octave was found to be the most suitable.
And anyway it’s only an 11-week course, which, if you’re doing it in summer, you’ll complete in 6–7. Yeah, sometimes he’s slow. Watch the videos at 1.5X. It’s too theoretical? All applications are based on solid theory. It’s too math-heavy? Yeah, it is. Although, if you know basic calculus and linear algebra, you’re good to go. If you’re from a non-engineering or not particularly math-heavy background, it may be a bit of a problem, but Andrew explains all of that in a very intuitive manner (including backpropagation), so you’ll be just fine. Except in weeks 4 and 5. You will bleed.” Learn Python/R If you already know one or the other, you’re good to go. I chose Python because it’s more versatile and, as a CS guy, it’ll be more useful for me. R is used more by math/statistics academia. It doesn’t really matter where you learn these from. If you’ve already coded before, like in C/C++/Java, whatever, it’ll take approximately 2–3 days to get the hang of Python and understand it, and within a week you’ll be writing good Pythonic code (if you practice daily, that is). If you haven’t coded before, like ever, it’ll take at least 2–3 weeks to get to the point where you can read, understand and write ML code. Take it slow, you’re not in a rush. (Use Anaconda. Please. Don’t mess around installing all those libraries and Jupyter Notebook all by yourself. It’s not worth it. And always use a virtual environment. Thank me later.) I learned Python from “Python Crash Course: A Hands-On, Project-Based Introduction to Programming” by Eric Matthes. It was complemented by Corey Schafer and Dan Bader on YouTube. Once you’ve learned Python, start messing around with NumPy. Then Pandas. Then make cool graphs with matplotlib. Make cooler graphs with seaborn. Take your time, fiddle with these libraries. When you’re done with it…. BUY AN APPLICATION-BASED COURSE ON UDEMY/COURSERA You now know the basics, know the math, and most importantly, Python. You’re in such a good position to start the course.
It can be Machine Learning A-Z, Data Science A-Z, any machine learning masterclass, or the Deep Learning specialization on Coursera, again taught by Andrew (this time in Python :D ). Oh, and btw, you’ll breeze through these courses. Normally these would take you many weeks and a lot of frustration; now, in barely 1–2 weeks, you’ll be able to implement all them cool multivariate linear regressions, logistic regressions, SVMs, etc. It’s gonna be super intuitive now that you’ve put in all the hard work. Now is the time to expand your horizons. Read articles on Medium until your recommendations look like this…. AI vs ML vs DL Basically… Here’s a really cool diagram telling you about the different areas in ML Want a book? I highly recommend Hands-On Machine Learning with Scikit-Learn and TensorFlow. It’s probably the best book out there for Machine and Deep Learning. I’d recommend waiting for the second edition as it uses TensorFlow 2.0 in the Deep Learning part. Also, start Kaggling. I highly recommend doing data science competitions. Read kernels, you’ll learn a lot. Or build something like a recommender system ;). Thanks for reading this article. All the best for your Machine Learning journey.
https://medium.com/mackweb/getting-started-with-machine-learning-1dd4f324de8a
['Mayank Arora']
2019-07-01 13:31:05.237000+00:00
['Deep Learning', 'Data Science', 'Artificial Intelligence', 'Machine Learning']
Latest picks: In case you missed them:
https://towardsdatascience.com/latest-picks-the-ethics-of-ai-bad67df03b1e
['Tds Editors']
2020-11-05 14:36:08.216000+00:00
['The Daily Pick', 'Data Science', 'Machine Learning', 'Artificial Intelligence']
Imperfection of Data: Interview with Sarah Groff Hennigh-Palermo
View of Neglect at Flux Gallery. Image by Jung In Jung. Sarah Groff Hennigh-Palermo uses the aesthetics of data visualization to depict the quotidian lives of objects, specifically books. Her piece for Artificial Retirement, Neglect, is an installed library which features an abstracted measure of how (not) often its books are read. Pleas of “Read Me, Read Me” flash across the screen while vivid color bars fade away, representing time. Inspired by the art of Stefanie Posavec and Giorgia Lupi, Groff Hennigh-Palermo’s work explores how data art can communicate the imprecise. Groff Hennigh-Palermo has played with computers, art and data at the School For Poetic Computation and the Office for Creative Research. She is currently a master’s candidate in New York University’s Integrated Digital Media program. Closeup of Neglect. Image by Jung In Jung. This isn’t your first project using data visualization and your library. Can you tell us about the works leading up to Neglect? It’s the third in a series of works asking questions about reversing the uses of data and sensors, making the former about emotions and the latter about helping objects express what they most want to instead of what we want. The first piece in the series, Timeline of Neglect, plays with the form of a bar chart, with each bar falling to pieces as the book is ignored. The second, Neo-Neglect, brought the data collection into the real world. In that case, lightboxes alerted me to how the books were feeling, and they could text or post to the web as well. This time, I wanted the books and the viewer to be even closer together. Snapshot of Timeline of Neglect. Image by SGHP. I don’t usually associate data art with expressing imperfection. Why did you choose this medium for Neglect? I’ve worked as a data and web designer in my day job for a few years now, and I have always been struck by the way engineers and scientists insist on the perfectibility and inherent truthfulness of data.
In that world, imperfect data is just incomplete data, and the dream of perfection is the dream of ever-present sensors, so we can have all the tiny bits of info and thereby mount the truth. And I think this is all bananas. Hennigh-Palermo’s Eyeo 2014 lightning talk. But I love the form of data vis and the promise of sensors, so I wanted to do something with them that went against the standard narrative of their uses. Instead of working to make a milk carton tell us the milk is going bad, sensors can be used to tell us our books are lonely, maybe. Instead of recording numbers and locations, maybe a data visualization can help us feel the sadness of the books’ vitality ebbing away. (I did a lightning talk at Eyeo, too, about the way hole-y, imperfect data can be a great subject for data visualization and generate a space we can project our imaginations into, like a novel.) Overall, the question of imperfection is at the heart of data art. In the data community, sometimes people will call works that they find insufficiently rigorous or under-dedicated to pure communication “data art” as, like, an insult. It isn’t real vis because it cares about aesthetics or communicating somewhat emotionally. I want to put things into the world that stand against that. Congratulations on your collection for Electric Objects! Can you share a bit on what it will be? Thanks! The theme of this Art Club was “Bringing the Outside In,” so I am doing five abstract data art pieces to bring outside data into frames and thereby into people’s living rooms. They use print techniques like half-toning, too, and pretty simple geometries. Since the EO1 frames sit in rooms all day, I am aiming for the pieces to be chill and pleasant and to match the kind of rooms I like looking at on, like, Apartment Therapy or SF Girl By Bay. Work and Image by SGHP. As a young artist, what do you plan to explore next? Well, I don’t quite feel done with the series yet. For instance, this iteration came out too literally.
I wonder what it would be like with a more impressionistic visualization or without the metadata beneath. More generally, I have the EO1 commission and, I hope, more shows in my future. It’s taken me a long time to feel okay calling myself an artist — now I’m just excited to make & show more. And of course I’m finishing up my thesis: I defend in December. That project is (unsurprisingly!) about memory and data art. I’m building a system for collecting and reviewing memories inspired by Proust’s In Search of Lost Time. It’s meant to be a prototype for interacting with information in a different way. Feel free to follow along at my thesis process site. Artificial Retirement is one of Flux Factory’s 2016 major exhibitions, curated by Jung In Jung and Joelle Fleurantin. Closing Event: September 11th, 6pm, with performances by Amelia Marzec and Robert Mayson, Openfield, Kat Sullivan, Sergio Mora-Diaz and Caitlin Sikora.
https://medium.com/artificial-retirement/imperfection-of-data-interview-with-sarah-groff-hennigh-palermo-6f41aaa0e393
[]
2016-09-11 01:20:05.519000+00:00
['Design', 'Time', 'Art', 'Exhibition', 'Data Visualization']
When the world doesn’t take you seriously
When the world doesn’t take you seriously You don’t need respect from everyone in order to succeed Photo by David Marcu on Unsplash “She barely looks old enough to drive, but she’s a professor.” That’s how one of our deans liked to introduce new faculty at the fall convocation every August. Especially when we were fresh out of grad school, or didn’t have enough facial hair. Plenty of us have had this kind of problem. Either you look too young, too female, too small, too heavy, or too…something. Whatever it is, not everyone recognizes all the respect or authority you deserve. They don’t take you seriously. Maybe they never will. Other people might go a step further, and undermine you constantly. Maybe they don’t do it on purpose. I’m sure the dean was just joking when he suggested I or other young people didn’t look like real faculty. But it’s a little tone deaf these days to introduce a professor as if she were your teenage daughter. Maybe, just a little bit? This happens to me more than I’d like. Despite everything, people interrupt me during conversations and talk over me during meetings. Not all the time, but enough to get on my nerves. They don’t mean to act that way. But when they look at me, their first instinct is to ask where my parents are — not my thoughts on budgets or job candidates. My favorite thing is when someone does pretend to listen for a while. Then they cut me off, and repeat everything I’ve just said back to me, with a little twist that enables them to take credit for the ideas. I’ve considered trying to grow a beard to remedy this situation. Maybe a tasteful goatee. Dying my hair grey. Or maybe toting a briefcase around instead of a messenger bag. Anything to enhance my gravitas. A monocle might be overkill. Then again, I’ve always wanted to look like Mr. Peanut. One day, I’m going to live my dream.
https://jessicalexicus.medium.com/when-the-world-doesnt-take-you-seriously-8120da8087fe
['Jessica Wildfire']
2018-03-28 01:15:35.181000+00:00
['Work', 'Professional Development', 'Life Lessons', 'Entrepreneurship', 'Self']
Becoming a Data scientist: which path to take?
Becoming a Data scientist: which path to take? What are the options out there for you to become a data scientist? Photo by Lili Popper on Unsplash Whether you have been in the data science field for a while or are just entering it, I believe you will benefit significantly from this article in many ways. We first outline the importance of the data science field, then we discuss different types of organizations, highlighting the importance of being data-driven. Then we talk about the demand for data scientists and their skills, emphasizing the dynamism of the field and the need for continuous learning. Then we dive deep into the different routes available for you to become a data scientist or take your skills to the next level. Photo by Campaign Creators on Unsplash You must have heard that Data Science is the sexiest job of the 21st century and might be wondering why it is so hyped. Why is everyone talking about it? During the pandemic, people started to talk about it even more and to use data science techniques to analyze data. Some got it right, and some got it wrong; that is anyway out of scope for our discussion here. Undoubtedly, data and AI is the fastest-growing industry, with multi-billion-dollar potential. Consequently, every organization is trying to make the most out of it. There are three types of organizations: 1) the ones which have the data and would like to get insights out of it; 2) the ones which have the skills, can gain insights and help businesses become data-driven, and provide data science skills, experts, and consultancy services; 3) the ones which provide specialized platforms to support organizations in achieving their data-related objectives.
You may see some organizations having a blend of these skills. In any case, these are the capabilities that businesses need in order to become data-driven. Considering the costs and the associated ups and downs, some companies may develop these capabilities in-house or outsource them. Note: For further details about becoming a data-driven business, you can follow my series on data-driven businesses as given below. The different types of organizations mentioned above, and their dire wish to become data-driven, dictate the high and urgent demand for data scientists and Machine Learning engineers. They are the ones who will, in the end, crunch the numbers, make sense of them, and uncover hidden insights in the data for the businesses. With the exciting and wide range of opportunities available to data scientists, getting yourself skilled and becoming acquainted with data science is a great way to show your competitive edge and prove your value to the business. Data Science is a very complex, dynamic, and continuously evolving field, which makes it challenging as well as exciting. The skills needed, the languages that you can leverage to build data science and machine learning pipelines, the libraries, frameworks, and tools are constantly changing and maturing. This asks for nothing more, nothing less than continuous learning. Let’s discuss the different options and paths available for you to become a data scientist and (if you are already a data scientist) what to do to achieve the next level of expertise. 1. Degree There are several universities providing degrees (Master level) or postgraduate-level courses in data science under different names, such as data analytics, machine learning, data science, and business analytics, to name a few. There are also various online (remote) alternatives available. One has to evaluate them according to one’s circumstances, suitability, and affordability.
It is up to you to choose an on-campus or an online program; in the case of online studies, you do not have to relocate, but costs might still be high, depending on the program that you choose. 2. Data Science Fellowship Programs There are many institutes, companies, and startups offering hands-on fellowships of several months, which give you the possibility to do hands-on assignments focused on solving a business problem for one of the partners (or sponsors) of the institute/fellowship. It is an excellent opportunity for both the business and the fellows to see if they fit each other in the longer run; in the short run, companies get their problem solved while you as a graduate get hands-on experience on a real project. 3. Learning through online platforms (MOOCs) Many online sites offer courses that you can take to become an expert in the art of data science. I call it art and science because it involves both: solving business problems is an art, and solving them using data science, as its name suggests, is science. And you need both; only one would not suffice in today’s job market. There are many online platforms offering courses in data science; some of the providers include Coursera, edX, DataCamp, Udacity, Udemy, and many more. Also, many universities offer online courses and specializations through these platforms. These programs are economically feasible for pretty much everyone, and you can take them at your own pace, which helps you grasp the concepts and still finish the whole course. It is up to you to decide on an individual course, a MicroMasters, or a nanodegree program. In addition, there are a few hands-on projects, which show you a step-by-step guideline on how to build a project for a business problem. 4. Doing hands-on projects This centres around taking it upon yourself, taking control into your own hands, and steering it yourself. Although this is true for all the other options, it is even more true and much needed in this case.
You have to start with one hands-on project and scale it up and sideways across application scenarios, algorithms, etc. 5. Reading books This is especially for those who love reading to learn a concept and then implement or master it. It is also an excellent way for many, but maybe not for all; it takes time, and you might be lost when it comes to doing hands-on work. I would recommend this only once you have the necessary hands-on experience and know the fundamentals. And in today’s agile and fast-moving world, I would emphasize starting with any source that you like and combining it with others as you go along. There is no one silver bullet. 6. Medium There is a plethora of articles on almost every subject on Medium. They are usually not very long, and this helps in many ways: it keeps you focused, gets you going by acquiring valuable and useful knowledge, and lets you implement it and turn it into something that sticks. To be able to use this platform, you can either create a free account or a paid/member account. The next step is to look for articles in top publications such as TowardsDataScience and TowardsAI. I usually publish my articles in these publications. A couple of exemplary articles are about a Feature Store and Implications for Data-Driven Businesses. Photo by Isaac Owens on Unsplash One great way to learn Machine Learning (and anything else, of course) is to write about what you have learned. You could also become a writer on Medium and submit your stories about Data and AI to this publication here. 7. Kaggle and other competitions Many people nail down technical concepts by joining competitions like Kaggle and others. It is also a great way to put yourself into a framework of discipline, follow deadlines, and be challenged; in addition to mastering the concepts, it can give you monetary benefits and the recognition you can build your reputation on.
Of course, it does not happen overnight; it takes time, as everything does, so be patient and persistent in whatever you do. 8. YouTube Videos You can also learn data science by following along on YouTube. There are many course-style or subject-specific videos, both short and long, that you can quickly search and skim through. Overall, you are the one who knows yourself best: what works for you and what does not. You know the things about yourself which you might be reluctant to share with others but which would help you decide which option (or combination of them) would suit you best. Let me raise some questions here, which might help you to think, plan, and execute your journey to becoming a data scientist. Should I consider an online or a face-to-face program? What are the advantages and disadvantages of each of them in relation to my particular scenario, life situation, monetary and other aspects? Can I combine them with my current job, or shall I take a break and join an onsite fellowship? What are the concepts that I know well, what do I need to master, and should I start from scratch? Are there any prerequisites? Do I need to learn a programming language first, for example? What is the time commitment required? What is the required cost contribution on my behalf? Think about these questions and others, and try to answer them for yourself to evaluate the best way for you to learn and master data science.
https://towardsdatascience.com/becoming-a-data-scientist-which-path-to-take-438af9a78c08
['Chan Naseeb']
2020-06-08 13:58:10.917000+00:00
['Deep Learning', 'Artificial Intelligence', 'Learning', 'Data Science', 'Machine Learning']
Put a Tie on and Stop Swearing so Much
While I was out running today I got thinking about an Inc. Magazine article I read years ago, written by a business consultant who would go into large corporations and tell the executives to take their ties off and start swearing more. This same consultant would tell small business owners to put a tie on and stop swearing so much. Anyone who has worked for both large and small companies can fully appreciate how well this “tie” anecdote captures the differences that exist between the big and the small. Borrow the “Good Things” Those of us in small business tend to view large corporations as lacking pace and passion. Mostly we are right. However, logic says the “bigs” can’t be all bad or they wouldn’t have gotten big. The trick for a small business owner is simple: borrow the “good things” from big business and run away from the “bad things” as fast as you can. For example, large corporations tend to be somewhat formal while small businesses tend to be somewhat informal. Most corporations have a written procedure for almost everything while most small businesses have written procedures for almost nothing. The lack of formality in small businesses is one of the things that lets them move fast. Unfortunately, it is also what may prevent them from achieving their full potential. Don’t Stop Swearing I would never suggest that you put a tie on, nor would I suggest that you stop swearing. However, I would suggest that “winging it ain’t working it”. A lack of formality hinders a business’s ability to improve. Failing to formalize (document) best practices for critical processes means an organization will likely never be best-in-class and certainly will never be world-class. Identifying, documenting and continuously improving critical processes is one of the “good things” all small businesses should borrow from corporations. Run Far Away One of the “bad things” small businesses should run away from is the annual corporate planning ritual. Boy do they love their annual plan.
What a waste of time. The crazy thing is they know it is a waste of time and they still do it. Don’t get me wrong. You need a plan. You need a strategic plan. You need a real plan for how to create a real unfair advantage over your competition. What you don’t need is a bunch of spreadsheets with a bunch of numbers. One Last Thing If you grew up in a large corporation and now own a small business, you already know how to sort out the good and bad. However, if you have never worked in a large company you should consider hiring a senior manager who has. Big companies have a lot to offer your business. Just remember to sort through the “good” and “bad” carefully. Finally, if you ever do decide to wear a tie, be sure to warn your team first. Otherwise you may freak them out. They’ll think you’re selling the place. Or going to court. Or …
https://medium.com/tips-from-the-trenches/put-a-tie-on-and-stop-swearing-so-much-2eba4a627bc3
['Mark Groulx']
2017-05-10 17:40:11.676000+00:00
['Small Business', 'Startup', 'Leadership Development', 'Business Strategy']
WebSocket Programming with Java
A WebSocket allows you to create a communication channel between a client and a server; in particular, a channel that uses the WebSocket protocol as its communication protocol. The WebSocket protocol is compatible with the HTTP protocol (yes, the one used to connect web browsers and web servers). However, it has two very special improvements: (1) lower overhead than HTTP and (2) bi-directional web communication. Figure 1. Traditional HTTP communication vs WebSocket Traditional web communication using the HTTP protocol works as follows: (1) first, a client (usually a web browser) has to connect to the server; (2) then, the client sends a request for a resource (such as a request for a webpage); (3) after that, the server responds, and (4) closes the communication channel. On the other hand, a WebSocket works as follows: (1) the connection is kept open; moreover, (2) both client and server can make requests and send responses. There are diverse Java libraries that allow developers to create WebSocket-based applications in Java: either Java applications that take the role of a client and connect to existing web servers, or Java applications that connect to a server also written in Java. We will explore the latter: a Java client connecting to a server that is also written in Java. Getting Started First, we need a library that helps us deal with the details. I am showing you examples using Eclipse Tyrus – an open source API for easy development of WebSocket applications. Therefore, we need to add the following dependencies to our project: (1) tyrus.client, (2) tyrus.server, and, since we are building a standalone application, we also need help with a container: (3) tyrus.container.grizzly. Click the links and get the dependency configurations there (Maven, Gradle, etc.) or get the bundles with all the JAR files you need. Up to you. Set up your project and then move forward with the coding part.
Server The API allows us to work using Java annotations or “traditional” API interfaces. The example below follows the annotations approach. Arguably, using annotations provides better information about classes’ and methods’ responsibilities. Let us start with the server. It is shown in Figure 1. Important elements: Import the package — tyrus.server. We need the class Server. Create a Server object. Arguments for the constructor: the hostName (localhost in this example), the port on which the server will run, the root path for the server app, and, the important one, the class that handles the server work. It can be a class annotated with ServerEndpoint (as in our example), a class implementing ServerApplicationConfig, or a class extending ServerEndpointConfig. Run the start() method on the server object, and that is all. Figure 1. ChatServer.java ServerEndpoint ServerEndpoint is a class-level annotation that declares that the class it decorates is a WebSocket endpoint that will be deployed and made available on a WebSocket server. Our server endpoint is shown in Figure 2. Important elements: The class-level annotation ServerEndpoint specifies the URL at which the endpoint will be published (in conjunction with the hostName, port, and path defined before). In this case, our example will be running at localhost:8025/project/app. The rest of the class is pretty straightforward to read: methods that will be called automatically when a connection opens, onOpen(), a message is received, onMessage(), or the connection closes, onClose(). In the example, we are working with text messages, but binary is also supported. This server prints and returns (to the client) the same message that it receives: input as a parameter and output as a return value. Simple, right? Figure 2. ChatServerEndpoint.java Client Now, it is time to create a client to connect with our server.
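Since the article’s own listings live in the figures (ChatServer.java and ChatServerEndpoint.java), here is a sketch of what the server side described above might look like, combined into one class for brevity. This is an illustrative reconstruction based on the description and the standard Tyrus / javax.websocket APIs, not the author’s exact code; running it requires the Tyrus dependencies listed earlier on the classpath.

```java
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

import org.glassfish.tyrus.server.Server;

// Published at ws://localhost:8025/project/app, as described above.
@ServerEndpoint("/app")
public class ChatServerEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Client connected: " + session.getId());
    }

    // Echo server: the received text arrives as a parameter and the
    // reply travels back to the client as the return value.
    @OnMessage
    public String onMessage(String message, Session session) {
        System.out.println("Received: " + message);
        return message;
    }

    @OnClose
    public void onClose(Session session) {
        System.out.println("Connection closed: " + session.getId());
    }

    // The role of ChatServer.java: create and start the standalone server.
    // Constructor arguments: hostName, port, root path, properties,
    // and the endpoint class that handles the server work.
    public static void main(String[] args) throws Exception {
        Server server = new Server("localhost", 8025, "/project", null,
                ChatServerEndpoint.class);
        server.start();
        System.out.println("Server running. Press Enter to stop.");
        System.in.read();
        server.stop();
    }
}
```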
Figure 3 shows the important elements: As you can imagine, our class needs to be annotated, this time as a ClientEndpoint. Lines 36 and 39 do the magic: we create a ClientManager and then ask it to make a connection between the supplied annotated endpoint (our client) and a server specified by a URI. Once the connection is established, the logic is similar to the one in the server: methods that will be called automatically when a connection opens, onOpen(), a message is received, onMessage(), or the connection closes, onClose(). Figure 3. ChatClientEndpoint.java What is the client doing? First, when the connection opens, the client sends a message to the server using session.getBasicRemote().sendText() (Line 11). The server receives the message and answers with exactly the same string. Then, the method onMessage() runs in the client. Just for fun, our client in the method onMessage() reads a string from the user’s keyboard (using System.in) and sends that string to our server. At this point, the lifecycle of our application consists of the method onMessage() being executed on the client and on the server sides. Source Code The complete source code for the described example is available on my GitHub. Explore the details there and happy coding. Challenge Now that we have a vanilla implementation of WebSockets, how hard could it be to add some decoration here and create a ChatApp with a nice GUI for the client? 🤔
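And here is a matching sketch of the client side (the article’s exact listing is in Figure 3, so line numbers may differ). Again, this is an illustrative reconstruction from the description above, using the standard ClientManager API; it assumes the server sketch is running at ws://localhost:8025/project/app.

```java
import java.net.URI;
import java.util.Scanner;

import javax.websocket.ClientEndpoint;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;

import org.glassfish.tyrus.client.ClientManager;

@ClientEndpoint
public class ChatClientEndpoint {

    private static final Scanner KEYBOARD = new Scanner(System.in);

    @OnOpen
    public void onOpen(Session session) throws Exception {
        // Kick off the conversation as soon as the connection opens.
        session.getBasicRemote().sendText("Hello from the client!");
    }

    @OnMessage
    public void onMessage(String message, Session session) throws Exception {
        System.out.println("Server echoed: " + message);
        // Just for fun: read the next line from the keyboard and send it,
        // keeping the echo loop going.
        session.getBasicRemote().sendText(KEYBOARD.nextLine());
    }

    public static void main(String[] args) throws Exception {
        // Connect the annotated endpoint (this class) to the server's URI.
        ClientManager client = ClientManager.createClient();
        client.connectToServer(ChatClientEndpoint.class,
                new URI("ws://localhost:8025/project/app"));
        Thread.currentThread().join(); // keep the client alive
    }
}
```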
https://medium.com/swlh/how-to-build-a-websocket-applications-using-java-486b3e394139
['Javier Gonzalez']
2020-12-26 12:25:11.005000+00:00
['Websocket', 'Java', 'Client Server Model', 'Java Apis', 'Eclipse Tyrus']
Previously Unrevealed Moments Your Favorite Literary Heroines Had Their Periods
Previously Unrevealed Moments Your Favorite Literary Heroines Had Their Periods Because moon river plays, whether or not it’s included in the text. Just a warning — if you’re not into reading about dismal lady days, this isn’t the post for you. There are a lot of articles on Medium that do not discuss the great sea of cabernet (shocking, I know) and maybe you should consider reading one of them instead. As for me, I have never written a post about misshapen she blessings on Medium, so I’m kinda excited about all this. Thanks for hosting it, Jane Austen’s Wastebasket. I expect it will be a masterpiece! Or, maybe a mistresspiece. 😉 Here we go! Many people don’t know this, but literary heroines experienced the monthly emergence of the devil’s decoupage the same way many human women do. They get the cramps, the growls, the moans, the arghs, the whole Stravinsky compositions. Just because old timey writers were squeamish about giving us the details doesn’t mean the details didn’t occur. So here are a few moments when iconic characters found themselves juggling the usual drama in addition to stopovers from old Auntie Flo-Rance. When Lady Catherine de Bourgh gatecrashed Lizzie Bennet on the topic of marriage to everyone’s fave, Mr. Darcy If you’ve read Pride and Prejudice, then you’re familiar with how everything unfolded. Lady C was leaning hard on the nosy vibe (a trope Jane Austen perfected), i.e. showing up and spraying her unwanted opinions on everyone in the vicinity. Her goal was to extract a promise from our heroine, Lizzie, not to smear herself all over Darcy’s toast and biscuits. But, Lady C was a bit late. Earlier in the book Jane Austen wrote a wet t-shirt scene for Darcy and Lizzie was kinda into it. So Lizzie shut down Lady C and sent her sputtering off to whatever rich person activity was next on her social calendar (probably knitting sheep for underprivileged farmers). This is all interesting, but Jane left one thing out.
She forgot to include the fact that Lizzie was also checking into the Red Riptide Inn at the time of this Very Important Conversation. And when one has a bouquet of pink carnations waiting in one’s room, one is not usually in the mood for wearing the restrictive shackles of a society where the young obey the old, the poor obey the rich and ladies play nice while boiling underneath. Meaning, Lizzie had HAD IT. So, really, we have the vermillion viper to thank for one of the greatest romances in the literary canon. When Jo March got all emotional and chopped off her hair “for money” This one came about while the March family was running around desperate for funds and drama was at an all time high. Feeding off the chaotic atmosphere, Jo decided that rather than sucking it up and playing nice to Aunt M, she’d rather pull a Felicity. Of course, Jo being Jo, she staged the whole thing for a slow reveal, pulling away her bonnet all, “It me. Deal, fam.” (Dialogue from Louisa’s first draft). Yes, this was 100% in character for Jo. Girl loved a grand, romantic gesture. However, you’ll also be unsurprised to find out she was birthing a garnet gemstone when she transformed her scalp into that of an electrocuted hedgehog. Completely understandable. How many of us have made regrettable hair-related decisions under the same circumstances? Amy, however, was not receiving her monthly berry subscription when she sneezed out the line, “Oh Jo! Your one beauty!” Amy was just a terrible human being. When Jane Eyre started feeling indecisive and “magically” heard the voice of her crazy ex. Right, Jane. We all believe you. You totally heard old ‘chester’s voice croaking out your name and that meant you simply had to go back to him and marry him and take care of him forever. FATE DEEMED IT SO. Or, could it be the scarlet shadow dropped by for an unexpected tarriance and gave your emotions a swirly? You were such a practical woman, Jane. Normally in such possession of resolve. 
Everyone knew you needed to delete his number from your phone, block him on the socials and get on with your fabulous, wealthy singlehood. None of us wanted you to hook up with St. Good Guy Dude (so boring!), but there were bound to be lots of hot guys running around the moors who never once locked their wives up in attics. I mean, we’re talking about a Gothic novel, are we not? There’s always another set of abs around the corner. Jane, oh Jane. If only you’d learned not to let your overflowing cup of kidney beans do the decision-making. C’est la vie, I suppose.
https://medium.com/jane-austens-wastebasket/previously-unrevealed-moments-your-favorite-literary-heroines-had-their-periods-2a5c2ddafbb4
['Sarah Lofgren']
2019-05-24 01:14:25.734000+00:00
['Satire', 'Humor', 'Women', 'Feminism', 'Books']
Rethinking “Stress Relief”. Axe-throwing is out; rage rooms are in.
Furthermore, a recent study by Richard Schonberger on “frustration-driven processes” outlines an elaborate system (meant to be used by businesses) to provide individuals with a “positive outlet for frustrations” (within the workplace)… Despite the article’s initially narrow scope, I think that its findings can, and should, be applied in a much broader sense: to the masses, the general population — i.e., well beyond just “within the workplace.” In terms of identifying and defining common stressors, Schonberger writes: The term ‘frustration’ is especially enabling and advantageous in that it captures personal motivation: We humans don’t like frustrations, and are positively inclined to see them eliminated. It should go without saying that we don’t always go about handling, or “eliminating,” the frustrating stressors we constantly face in our day-to-day lives in the best way… However, this extreme inclination to remove those negative barriers can have silver linings by: leading us down certain avenues that (ultimately) help to increase our self-awareness; guiding us to find our own creative outlets, thus teaching us how to effectively express ourselves; and introducing us to new and unexpected activities that allow us to ‘let go’ of whatever is bogging us down by means of physical exertion (like smashing things). As cited in Oppong’s article, physical activity has been correlated with lower cortisol levels, and thus with lower levels of stress and frustration.
https://medium.com/an-injustice/rethinking-stress-relief-90a446a804d5
['Riley Dewaters']
2020-04-04 03:08:25.293000+00:00
['Innovation', 'Entrepreneurship', 'Stress Management', 'Trends', 'Rage']
Design+Sell/Build: A market-driven innovation framework for B2B2C companies
How design connects sales and product teams to capitalize on market signal Most product management articles describe how to improve product experiences in a business-to-consumer (B2C) context where there is a clear line of sight and access to the end customer. But for many software companies, customer experiences are delivered in partnership with their clients and often as part of a larger service design experience. In those cases, access to the users is often limited for contractual or regulatory reasons, and building trust to gain access to this type of information takes time. As they work to build a shared discipline for customer-centric continuous innovation with their clients, these organizations need methods to better consume and deliver on indirect product insights in this challenging product delivery model. Leaning in to Outside-in Product Insights Imagining desirable capabilities, features, and services is fundamental to the growth of any product, but the ideation is often not exclusive to the product organization. Ideas might originate from executive leadership, a high-risk client, or a board member just as easily as they might originate from within the product organization. When they originate outside, the challenge the product organization faces is what to do with those new items in relation to existing product priorities. Even with a formal principle declaring that each idea must be validated before any significant investment is made, the source of the idea can often bring pressure that causes it to move forward without that validation. While the product organization works to defend the release and the doctrine that guides their work, these items make their way into the release just the same and, unfortunately, often make it to market without broad adoption. This outside-in release planning behavior is most often seen in competitive markets with sales-led organizations managing complex, high-expectation clients.
In these cases, the client has choice and can become a very vocal advocate for product capabilities they require in order to commit. In an effort to win or retain business, Sales offers platform flexibility. They isolate and document the value propositions for requested capabilities to build confidence with the client—and often agree to add them as part of the deal. In parallel, executive leadership is continuously aggregating and synthesizing client and market feedback into strategic insights that they believe can change the trajectory of the business—often advocating to move these items to the front of the line. The behavior described above is considered a bad thing within product and engineering teams. But in its purest form, it isn’t. It just doesn’t have an intake model that can effectively negotiate it in with other priorities, so it creates bottlenecks and resourcing issues that frustrate teams. Like many other functions in a company, executive leadership and sales leadership are product enthusiasts and business strategists. They care just as deeply about the company mission and vision and are equally invested in its success. They respect agile product methods in principle and want to find ways to account for the perspective they live and breathe in the field day in and day out. Both the salesperson and the executive leader do their homework. They work to understand the product to the best of their ability. Their goal is to increase the likelihood that what both they and the client(s) seek can be achieved in an effort to secure or retain business. As the growth engine for the company, their role is to continue to make connections between market expectations, clients’ needs, and the solutions they offer. But if the connection to the product discipline isn’t tight enough or the knowledge transfer isn’t strong enough between sales and product, they can color outside the lines more than they know.
Product organizations just need a better way to consume and synthesize the insights and needs as they surface. With a partnered approach to idea refinement in the field, the proposed solution can align to the roadmap and release plan in the flow of the negotiations — as opposed to after it has been sold. So how can B2B2C product companies embrace this high value input and help validate the hypotheses alongside these front-line strategic partners while protecting the validated release schedule? Enter Design+Sell/Build Design+Sell/Build is exactly what it sounds like. It brings product design and product marketing discipline to the front lines of the sales process to help visualize and articulate the possible. As a creative partner, designers draw pictures — including diagrams, interfaces, journey maps, and use cases—and tell stories of what is being suggested so the idea or opportunity can be better understood by all parties. Concurrently, these designers help pull out the value proposition, product benefits, and key metrics to define how the solution will be judged by the client and its end users. The resulting customer definition, “happy path” user experience, and solution benefits combine into storytelling collateral to help gain alignment within both the client organization and the product and engineering team. This approach does not replace the product and engineering effort required to deliver the solution. It can't. With limited time and limited exposure, the team leverages design sprints and other problem framing activities to build up a conceivable, relatively untested “happy path” solution to the problem. The primary goal is to provide some objective clarity on the target solution so that clients can feel confident that the solution is in the ballpark, engineering teams can roughly size the effort to build the solution, and product can have a constrained point of departure and set of metrics to drive the capability development.
https://medium.com/nyc-design/design-sell-build-a-market-driven-innovation-framework-for-b2b2c-companies-60bfe2e23c22
['Paul Burke']
2020-12-31 00:23:34.281000+00:00
['Product', 'Product Design', 'New York', 'Design Thinking', 'Product Management']
A Private Look at the Sexual Experience with Two Different Husbands
My second husband and I attended a concert. A well-known singer had released an album. I’d bought us tickets to surprise him. We’d just gotten married a few weeks earlier. The light show was fantastic, and the band played many of my favorites. I stood for the second half and swayed as I sang along. He grinned and watched me for a moment. Then he filmed the audience with his phone and included me in the video, even though I’d asked him not to. “You’re sure enjoying yourself,” he said. I nodded. “I love this group.” His comment surprised me. I thought it self-evident since the venue was several hours away and the tickets expensive. I smiled at him, but he didn’t notice as he panned the crowd. I considered taking his hand but hesitated, unsure if I should. A few hours later, as we drove to the hotel, I admired his lean look. Sharing this musical event with him had been special, and I wanted to prolong this sense of closeness. We walked into the room, and I went straight to the bathroom to change. I’d brought sexy lingerie for the occasion. My heart sank at the muffled sounds of the TV. I wore my best smile and stepped out. He didn’t bother to look up as I slid next to him. I cuddled close. He absentmindedly put his arm around me. I leaned in to kiss him, hoping he’d get the hint. He glanced at me and flashed a quick smile. “Love ya, Gorgeous,” he said. “Love you too.” I sighed and waited for the show to end. As the closing credits rolled across the screen, I gazed coyly at him and then brushed my lips against his. I pulled his head close and ran my hand lightly across his shoulder and down his arm. “Let me brush my teeth,” he said. Disappointed, I watched him walk away. A few minutes later, he scooted beside me. He parted my legs with his hand and stroked my clit with his finger. But I’d wanted more — to touch the softness of his skin and the long length of his penis but knew he didn’t like to be touched. He needed the control. Yet, I longed to be closer.
To be taken by him. I deepened the kiss as my arousal grew, but he seemed far away. Detached. I tried running my tongue over his lips in hopes I’d stir him. “I don’t need to cum,” he said, an excuse he often gave these days. My belly ached with grief as I stifled pangs of rising insecurity. Why doesn’t he want sex? What’s wrong? I tried to think of the last time he’d told me I was beautiful. I drew closer and embraced him as the pleasure grew. I wanted to share my passion — for this to be our moment. As I leaned against his body, I couldn’t feel any signs of arousal. His penis flopped limply against his leg. When was the last time he’d fucked me? I rubbed the tips of my nipples against him, trying to incite desire. A few minutes later, I gasped and came. He kissed my forehead. “Somebody had a good time.” The emptiness of the comment bothered me. He pulled me into a hug as if I was his teddy bear. Soon, I heard the soft sounds of sleep. Hurt, I rolled over and cried silently.
https://medium.com/breaking-taboos/a-private-look-at-the-sexual-experience-with-two-different-husbands-2a6584389867
['Marie Lynne']
2020-12-18 21:51:03.565000+00:00
['Relationships', 'Intimacy', 'Sex', 'Lessons Learned', 'Psychology']
Is Trump Having a Nervous Breakdown?
You cannot judge Trump based on normal behavior. He has never lived a normal life like you and me, so his behavior, i.e., his coping skills, is not the same as everyone else’s. And while those life skills might be judged as abnormal for the general population, they have worked for him and helped him achieve success for more than 70 years. What you have to understand about Trump is that his wealth has insulated him from reality. He has always lived in a bubble surrounded by people who tell him that he is the greatest, the smartest, the richest. He has never lived in the real world like other people. Any time that reality has intruded on his fantasy world, he has always been able to escape any consequences. And because he has never had to suffer any consequences, he believes that he is invincible. Even his numerous bankruptcies have had no personal consequences for him. The banks would write off the losses and stop lending to him. Trump never realized that banks routinely write off bad loans. They consider it the cost of doing business. In his mind, he “won”, i.e. didn’t have to pay the loans. His fortune allowed him to hire an army of lawyers and accountants whose only job was to help him cheat, the only way he knows to get ahead, and then to protect him from any consequences. Even the way he ran his company would be considered aberrant by normal standards. He routinely stiffed contractors and lenders, illegal behaviors that would get most CEOs fired. Trump is not the CEO of his company. He is the owner. You can’t fire the owner. Owning his own company allowed Trump to live in an alternate reality. He surrounded himself with a group of fawning underlings who stroked his ego and unquestioningly followed his orders no matter if they were illegal. He made the rules. Their job was to follow them. This is why he had so many problems as president. He didn’t understand that he couldn’t run the government the way that he ran his company.
I wasn’t shocked when he violated norms and tried to take illegal actions. That was how he ran his company for years. He thought that he could continue making up his own rules and breaking the laws as it suited him as president. It’s why he fired so many people. He didn’t understand that government officials, staff and cabinet members were not his employees, obligated only to him. So he kept hiring and firing until he found people who would act like employees, adopt his fantasies and implement his policies. Which is how he ended up where he is now, hunkered down in the White House, surrounded by a bunch of whackos, strategizing how he can overturn the results of the election. This may not be normal for the rest of us, but it is perfectly normal for him. He isn’t having a mental breakdown. He is continuing the behavior that has worked for him his entire life.
https://medium.com/politically-speaking/is-trump-having-a-nervous-breakdown-dbc7498ddf24
['Caren White']
2020-12-23 19:18:32.393000+00:00
['Politics', 'Government', 'Trump', 'Mental Health', 'Life Lessons']
Oh, It Stings…A Lot
A Tiny Moment of “Here’s What You Get When You Never Stop” Photo by David Bruyndonckx on Unsplash The Moment It can take a lot to question if things are working out or if there is a path through all of this uncertainty. I sit here looking beyond the tree line and don’t know why or when the fix is arriving. I know asking why or when rarely yields comfort. What and how are more suitable, but sometimes it feels trite. Perhaps it is merely the ride with its bumps and twists. Maybe there is nothing after all. The pursuit of things can be rewarding, but then there are the odds always weighing in on you. Maybe it’s time for some action? The Reflection You take on a lot moving through life. You get knocked down and disappointed all the time. You rip through scabs, feel old pains, and discover new ones. You lose most of your friends. You don’t win that first place ribbon. You remember being in a lost, dark room. You can smell the winter and its isolation. Potential gain rushes past you. The absence of motion puts you in the thick of painful thoughts and memories. I mentioned that this would sting. I promise there will be some uplifting thoughts appearing here shortly. Give me a second. When you don’t stop, you don’t have a lot of time to feel bad. It can be a wonderful thing. If you always seem on the go, you don’t have time to watch the screen of misfortune. The solution has a downside. The more you do, the more opportunities there are for disappointment. You can look up or down. Sigh or scream. Why would I take on any more? Enough is enough and enough is too much! Oh, wait, you never stop. Keep moving. You get what you get and you do what you do and it will never be enough because nothing will ever be enough. What do you get when you don’t stop? You get more and more of both sides of joy and pain. Try to find what is enough. You get experiences from the struggle. That’s what you get! The Takeaway A smiling, supportive friend is enough. A genuine family member is enough.
Donuts are definitely enough. Back sweat (eww) while working in the garden is enough. Feeling better after an injury heals is enough. Breathing every day is enough. Teardrops are enough. Josh Kiev is an actor and chef, and he wants you to know that you are enough.
https://medium.com/tiny-life-moments/oh-it-stings-a-lot-65dcf13c32f6
['Josh Kiev']
2020-12-01 16:36:47.966000+00:00
['Motivation', 'Life Lessons', 'Positive Thinking', 'Tiny Life Moments', 'This Happened To Me']
The history of autonomous vehicle datasets and 3 open-source Python apps for visualizing them
The history of autonomous vehicle datasets and 3 open-source Python apps for visualizing them plotly · Oct 15 · 10 min read Special thanks to Plotly investor, NVIDIA, for their help in reviewing these open-source Dash applications for autonomous vehicle R&D, and Lyft for initial data visualization development in Plotly. Author: Xing Han Lu, @xhlulu 📌 To learn more about how to use Dash for Autonomous Vehicle and AI Applications, register for our recorded webinar with Xing Han Lu, Plotly’s Lead AI/ML Engineer. Imagine eating your lunch, streaming your favorite TV shows, or chatting with your friends through video call inside the comfort of your car as it manages to drive you from home to work, to an appointment, or to a completely different city hours away. In fact, it would drive you through rain and snow storms, avoid potholes, and identify pedestrians crossing the streets in low-light conditions; all this while ensuring that everyone stays safe. To achieve this, automotive manufacturers will need to achieve what is called “level 5 driving automation” (L5), which means that the “automated driving features will not require you to take over driving” and “can drive everywhere in all conditions”, as defined by SAE International. All the levels of driving automation defined by SAE International [Source] Achieving this level of driving automation has been the dream of many hardware engineers, software developers, and researchers. Whichever automotive manufacturer achieves L5 first will immediately pull ahead of the competition. It could also result in safer roads for everyone by mitigating human error, which causes 94% of serious crashes in the United States. For these reasons, various models, datasets, and visualization libraries have been publicly released over the years to strive towards fully autonomous vehicles (AV).
In order to support the development of autonomous driving and help researchers better understand how self-driving cars view the world, we present four Dash apps that cover self-driving car trip playbacks and real-time data reading and visualizations, and video frame detection and annotations. From KITTI to Motional and Lyft: 7 years of open-access datasets for the AV community Published in 2012 as a research paper, the KITTI Vision benchmark was one of the first publicly available benchmarks for evaluating computer vision models based on automotive navigation. Through 6 hours of car trips around a rural town in Germany, it collected real-time sensor data: GPS coordinates, video feed, and point clouds (through a laser scanner). Additionally, it offered 3D bounding box annotations, optical flow, stereo, and various other tasks. By building models capable of accurately predicting those annotations, researchers could help autonomous vehicles systems accurately localize cars/pedestrians and predict the distance of objects/roads. The KITTI benchmark, one of the earliest AV datasets released for research [Source] In 2019, researchers at Motional released nuScenes, an open-access dataset of over 1000 scenes collected in Singapore and Boston. It collected a total of 1.5M colored images and 400k lidar point clouds in various meteorological conditions (rain and night time). To help navigate this dataset, the authors also released a Python devkit for easily retrieving and reading collected sensor data for a given scene. It also included capabilities for plotting static 2D rendering of the point clouds on each image. In the same year, Lyft released their Level 5 (L5) perception dataset, which encompasses over 350 scenes and over 1.3M bounding boxes. To explore this new dataset, they created a fork of the nuScenes devkit and added conversion tools, video rendering improvements, and interactive visualizations in Plotly. 
In order to encourage the community to leverage the dataset for building models capable of detecting objects in the 3D world, they released a competition on Kaggle with a prize pool of $25,000.
https://medium.com/plotly/the-history-of-autonomous-vehicle-datasets-and-3-open-source-python-apps-for-visualizing-them-afee9d13f58a
[]
2020-10-27 20:26:34.627000+00:00
['Plotly', 'Machine Learning', 'Autonomous Vehicles', 'Uber', 'Lyft']
Why Is It Scary to Be a Leader?
Why Is It Scary to Be a Leader? 8 Ways to Be Bold in a Digital Age Photo by Wynand van Poortvliet on Unsplash We litter social media with the quotes that have stood the test of time, triumphantly repeated as motivation to keep us going. The boldly spoken words of men such as JFK and Martin Luther King were memorialized and ultimately cost them their lives. But times have changed. With the convenience of elaborate computer stations, boasting dual monitors and lightning-speed internet connections, we can run an empire of opinions, keeping the dance going with a never-ending stream of intelligent questions and third-level inquiry, all from the safety of home. If the pen is mightier than the sword, well then the sophistication of the Android and latest iPhone must be mightier than the smartest of drones. Sticking your neck out and taking a position can place you in a social media noose, with a lynch mob of angry followers ready to voice their contradictions. In the opposite corner, the pervasive erosion caused by “Group Think” keeps the masses stuck in repetitive cycles of hopeful thinking and ineffectiveness. So how do any of us define what it means to be bold? In a world that will provide a good and rewarding life to those playing in the safe middle ground, why would anyone take the risk of bold action? Every person must answer that last question for themselves, but for the ones who feel truly called, the ones who feel compelled to lead, to serve, and to communicate the message buried deep inside, you MUST be bold. So, in preparation for your emergence, here are 8 considerations to walk through: 1. You’ll feel alone. In reality, you’ll be in brilliant company, attracting the attention of others busy in their own boldness, eager to befriend you and even collaborate on projects. You’ll be in a cool club of others who are looking for validation to continue their own bold actions and struggling with the same thing you are: isolation.
If you hold your course, committed to your alternative way of living, you’ll prevail. 2. Others’ fears are projected onto you. Automatically, those around you will have their patterns disrupted because your new actions are not in line with the role they have you locked into. This will come to you in the form of others offering you caution. ‘Be careful, you could get hurt’ is the common underlying message. ‘I don’t want you to be disappointed’ and other messages that really have nothing to do with you, communicated with prominent emotion. 3. You’ll come face-to-face with your fear. Whether it be brief whispers or outright screams, the voices of your ego will surface to offer you a false comfort in throttling back to the land of the mediocre. Concern about what others are thinking and how you’re perceived is a constant battle. Everything you’ve believed about reputation will feel like a badly tailored suit of bullshit that eventually you rip off with machete-style swings, unfolding the cape trapped underneath. Photo by Clark Tibbs on Unsplash 4. You’ll doubt your self-worth. Somewhere along the road of domestication, we believed stories about ourselves: that we are not of value. Constantly questioning whether you’re good enough will drive you crazy until eventually, after hundreds or thousands of affirmations, you’ll finally believe that you are amazing and begin acting that way. 5. You must apologize and ask for forgiveness. Occasionally, you’ll think being bold means falsely asserting yourself in an area that, frankly, is none of your business. You’ll live in other people’s lanes, you’ll say inappropriate things and feel like a complete idiot, and you must apologize and mend some fences. But, if you keep going, you’ll learn to set your own boundaries and become truly bold, even fierce. 6. You’ll be massively criticized. If you remain in YOUR truth, you’ll be criticized, called an asshole, and (some) people won’t like you.
Trust me, this rarely has anything to do with you. You will trigger the insecurities and fears of others and, since many haven’t developed safe self-expression, anger is their only answer, as a fight, flight or freeze response. Anger is a sign of trapped fear and rarely connected to any error on your part. The answer is grounded compassion. Be bold! Photo by Etienne Girardet on Unsplash 7. You’ll stand on the false pedestal of accolades. “The crucible for silver, and the furnace for gold, but people are tested by their praise.” ~Proverbs 27:21 As gold is refined by being heated, so you too will be tested as your fan base grows and people praise you, and especially follow you. Some will adore you so much, they will tell you anything. As creatures of survival, our brains are hard-wired to look for signs that our bodies are safe and our minds are right. To over-tune your ear to praise is like calling vodka water because it’s clear: you’ll eventually end up drunk and incoherent, i.e., the opposite of bold. 8. Your self-image will be dismantled and rebuilt. Your truth, your beliefs, your mindset… they are ALL that matters. Living boldly without compromise to your values requires having a grounded self-image: resolved, confident, and not propped up by ANY external circumstances, which are all too subject to the winds and tides and times of change. So many times I’m fueled by temporary bursts of motivation, yell a victory cry and raise the flag in bold proclamation to the world of my plans, only to fall flat on my face days later. Yes, I’ve done this all too many times. We charge into battle unequipped, with little or no support systems, and wonder why we are not victorious. Damn. I’ve mourned the days of defeat, felt the pain of bankruptcy, reached for the touch of lost love and watched dreams of youth vanish into the distance. So much so that when I think of being bold, the comfort of playing safe sounds so, so freaking good.
The girl I was scared to approach morphed into a meaningful relationship. Business ventures I didn’t feel qualified to run became challenges to learn. With the help of brief acts of boldness, my life became more resilient, and I learned things aren’t as hard as we often imagine them to be. When appropriately executed, boldness has caused the universe to rush to my aid in unexplained ways, providing resources and trials that got me through to the next level, like playing a high-stakes poker game. So take stock. Quiet your mind and feel that sense of boldness, the courage, the resolve that is right in the middle of the chest, beating with consistency and connected to a source higher and mightier than you ever experienced or dreamed. You too can be bold, even when it’s a scary proposition. In service to life, ~Robin
https://medium.com/the-innovation/why-is-it-scary-to-be-a-leader-cb427dec2f4a
['Robin Austin Reed']
2020-12-21 16:33:04.751000+00:00
['Leadership', 'Courage', 'Emotional Intelligence', 'Productivity', 'Faith']
Checklist For Testing BigData Systems
Big Data is a perennial buzzword in the software industry, primarily because of the large amounts of data generated that cannot be processed with traditional computing methods. Big Data is characterised by the 3 Vs, i.e. Volume (amount of data), Velocity (speed of data in and out) and Variety (range of data types and sources). That said, testing big data systems comes with altogether new complexity and requires a broad set of skills. There are no fixed strategies for testing such systems, as your scripts, checks and validations will depend on the business logic; however, there are certain methods and checklists to follow while testing big data systems. This article will primarily revolve around such checklists and strategies. Note — This article will NOT cover the HOWs and implementations. Before diving into the checklists, let’s also quickly discuss the components of any application handling big data - Data Ingestion is the layer where large amounts of data are injected into the Big Data system. This data can be structured, semi-structured or unstructured, and the storage can be Hadoop, MongoDB or any other store. It is also called the pre-processing stage, and testing here is critical: if we go wrong at this stage, the whole pipeline is affected by incorrect analysis. Data Processing, as the name suggests, is the layer where the ingested data is processed. This includes aggregating data based on business rules and eventually forming the key-value pairs that are processed through map-reduce jobs. Data Visualisation: once the data is processed as per business rules, it is extracted, transformed and loaded (ETL) either directly into a data warehouse or, in some systems, via an intermediate target source from which it is subsequently loaded into the warehouse, so that meaningful information can be extracted through business intelligence and analytics. Big Data pipeline
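The ingestion-stage checks described above (completeness and schema validation before data flows downstream) can be sketched in a few lines. This is a minimal illustration, not from the article: the field names (`id`, `timestamp`, `value`) and the record shape are hypothetical stand-ins for whatever your source system actually produces.

```python
# Hypothetical required schema for one ingested record.
REQUIRED_FIELDS = {"id", "timestamp", "value"}

def validate_batch(source_count, ingested_records):
    """Return a list of human-readable validation failures for one batch."""
    failures = []
    # Completeness check: every source record should have landed in the target.
    if len(ingested_records) != source_count:
        failures.append(
            f"record count mismatch: source={source_count}, "
            f"ingested={len(ingested_records)}"
        )
    # Schema check: each record must carry the required fields, non-null.
    for i, rec in enumerate(ingested_records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            failures.append(f"record {i} missing fields: {sorted(missing)}")
        elif any(rec[f] is None for f in REQUIRED_FIELDS):
            failures.append(f"record {i} has null required fields")
    return failures

good = [{"id": 1, "timestamp": "2020-01-01", "value": 3.5}]
bad = [{"id": 2, "timestamp": None, "value": 1.0}, {"id": 3}]

print(validate_batch(1, good))  # []
print(validate_batch(3, bad))   # count mismatch plus two schema failures
```

In a real pipeline the same idea would run as a pre-processing gate (e.g. in your ingestion job) so that bad batches are quarantined before the processing layer ever sees them.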
https://medium.com/datadriveninvestor/checklist-of-testing-bigdata-systems-8f41257591de
['Nayan Gaur']
2020-01-11 04:51:40.490000+00:00
['Test Strategy', 'Mapreduce', 'Testing', 'Big Data', 'Hadoop']
Los Moodboards venden historias y productos
in In Fitness And In Health
https://medium.com/viral-amplifier/los-moodboards-venden-historias-y-productos-a652e1c00a0a
['Mijail Rodriguez Hernandez']
2017-10-11 15:23:35.885000+00:00
['Storytelling', 'Mood Board', 'Lifestyle', 'Marca']
Pandas, Dask or PySpark? What Should You Choose for Your Dataset?
Do you need to handle datasets that are larger than 100GB? Assuming you are running code on a personal laptop, for example with 32GB of RAM, which DataFrame library should you go with: Pandas, Dask or PySpark? What are their scaling limits? The purpose of this article is to suggest a methodology that you can apply in daily work to pick the right tool for your datasets. Pandas or Dask or PySpark < 1GB If the size of a dataset is less than 1 GB, Pandas would be the best choice, with no concern about performance. 1GB to 100 GB If the data file is in the range of 1GB to 100 GB, there are 3 options: use the parameter “chunksize” to load the file into a Pandas dataframe; import the data into a Dask dataframe; or ingest the data into a PySpark dataframe. > 100GB What if the dataset is larger than 100 GB? Pandas is out immediately due to local memory constraints. How about Dask? It might be able to load the data into a Dask DataFrame, depending on the dataset, but the code may hang when you call certain APIs. PySpark can handle petabytes of data efficiently because of its distributed execution. Its SQL-like operations are intuitive to data scientists and can be run after creating a temporary view on top of a Spark DataFrame. Spark SQL also allows users to tune the performance of workloads by caching data in memory or configuring experimental options. Then, do we still need Pandas, since PySpark sounds super? The answer is “Yes, definitely!” There are at least two advantages of Pandas that PySpark cannot overcome: stronger APIs, and more libraries, e.g. matplotlib for data visualization. In practice, I would recommend converting a Spark DataFrame to a Pandas DataFrame using the toPandas() method, with Apache Arrow optimization enabled. Examples can be found at this link. It should be done ONLY on a small subset of the data: for example, the subset of the data you would like to apply complicated methods to, or the data you would like to visualize.
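The size-based heuristic above can be captured as a small helper. This is an illustrative sketch, not from the article: the thresholds are the rough rules of thumb the text gives, and the pandas/Spark idioms in the comments are the usual ones rather than anything the article prescribes.

```python
def choose_dataframe_tool(dataset_gb):
    """Pick a DataFrame tool by raw dataset size, per the article's rules of thumb."""
    if dataset_gb < 1:
        # Small data: plain pandas, no performance concerns.
        return "pandas"
    if dataset_gb <= 100:
        # Mid-range: pandas with chunksize, Dask, or PySpark all work, e.g.
        #   for chunk in pd.read_csv("big.csv", chunksize=1_000_000):
        #       process(chunk)
        return "pandas-chunked, dask, or pyspark"
    # Beyond local memory: distributed execution is the only safe option.
    # A *small* Spark result can still come back to pandas via
    # spark_df.limit(n).toPandas(), with Arrow conversion enabled
    # (spark.sql.execution.arrow.pyspark.enabled = true in Spark 3.x).
    return "pyspark"

print(choose_dataframe_tool(0.5))  # pandas
print(choose_dataframe_tool(50))   # pandas-chunked, dask, or pyspark
print(choose_dataframe_tool(500))  # pyspark
```

The helper just encodes the decision table; the real work is knowing your dataset size relative to your machine's RAM before you pick a tool.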
https://medium.com/datadriveninvestor/pandas-dask-or-pyspark-what-should-you-choose-for-your-dataset-c0f67e1b1d36
['Alina Zhang']
2020-08-25 20:59:58.304000+00:00
['Machine Learning', 'Pandas', 'Spark', 'Data Science', 'Big Data']
Using Machine Learning to Predict Patients’ Diabetes Status
“There is a major shortage of data scientists.” M. Stonebraker (Turing Award Winner), March 2020. In this article, we will go through some of the basics of machine learning and implement them using Python libraries. I will try to be as basic as possible, avoiding too much technical jargon, to carry along folks who are not yet solid in programming or machine learning. About the Dataset The dataset contains 2000 observations from patients, with 8 features and a target that states whether they have diabetes or not. The 8 recorded features about the patient include pregnancies, glucose, blood pressure, skin thickness, insulin, BMI, Diabetes Pedigree Function (likelihood of diabetes based on family history) and age. The target here is titled “outcome”, and is “1” if the patient has diabetes or “0” if they don’t. 𝐅𝐞𝐚𝐭𝐮𝐫𝐞𝐬 (𝐈𝐧𝐝𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐕𝐚𝐫𝐢𝐚𝐛𝐥𝐞𝐬 𝐨𝐫 𝐈𝐧𝐩𝐮𝐭): Pregnancies Glucose Blood Pressure Skin Thickness Insulin BMI Diabetes Pedigree Function Age 𝐓𝐚𝐫𝐠𝐞𝐭 (𝐃𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐕𝐚𝐫𝐢𝐚𝐛𝐥𝐞 𝐨𝐫 𝐎𝐮𝐭𝐩𝐮𝐭): Outcome Want to know some basics of machine learning? Machine Learning is the science of training computers to make decisions based on data without being explicitly programmed for every scenario. Machine Learning can be broadly classified into two branches, namely supervised and unsupervised learning. Supervised Learning: one of the most common branches of machine learning, and often what people are referring to when they use the term “Machine Learning”. It is the task of learning a function that maps an input to an output based on example input-output pairs. The most common supervised learning tasks are classification (models such as decision trees, which aim to classify observations based on their inputs) and regression, which predicts a scalar response variable from a set of input variables. Unsupervised Learning: a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labelled responses. 
The most common unsupervised learning method is cluster analysis, which is used for exploratory data analysis to find hidden patterns or groupings in data. Also, a crucial but overlooked step in data science projects is Data Visualization and Exploratory Data Analysis (EDA). EDA is a necessary step, conducted to get a better sense of what our data looks like and whether it conforms with all the requirements before we even train models and see results that will aid decision making. Our scope in this article In this article, we will use: i. Classification methods via a popular library called scikit-learn, which handles much of the technical maths behind the algorithms. ii. NumPy: a Python library that allows us to work with arrays and matrices and perform linear algebra (it stands for Numerical Python). iii. Pandas: a Python library that provides fast, flexible, and expressive data structures designed to work with relational and labelled data. It is a fundamental high-level building block for doing practical, real-world data analysis in Python. iv. Two popular Python libraries to create visualisations (Matplotlib and Seaborn), as well as looking at some of the common first steps of data analysis. Overall, we will look at how we can understand the diabetes dataset via Exploratory Data Analysis (EDA). We will also look at how to train a model (the k-nearest neighbour classification algorithm) with the diabetes dataset and use it to predict if new patients have diabetes. In conclusion, we will discuss various methods of determining how good our model is. Prerequisites For this article, I will be using Python & Jupyter notebooks. The easiest way to use Python is via Anaconda as it comes with all the required packages that you may need. Caution! Notebooks are essentially an interactive session of Python, so that when you create variables, they are stored in memory. This can be problematic when re-running cells out of order (i.e. 
running one cell before you run the one above it), and so your code may not behave as expected. This is particularly problematic for those who are less familiar with programming, but it can trip up even experienced users as it is unintuitive. The notebook and dataset used in this article can be found here on Github. Exploratory Data Analysis (EDA) First, we need to import some libraries that are commonly used for EDA and visualisations. Matplotlib and Seaborn are two straightforward plotting libraries in Python. 𝐍𝐨𝐭𝐞: "Datern.mplstyle" is a stylesheet that we use to make our visualizations look more presentable. import numpy as np import pandas as pd # We will use the Seaborn library import seaborn as sns # Matplotlib forms the basis for visualisation in Python import matplotlib.pyplot as plt # Set the default style for plots plt.style.use('Datern.mplstyle') colors = plt.rcParams["axes.prop_cycle"]() #Gradient colours from matplotlib.colors import LinearSegmentedColormap nodes = [0,0.5,1.0] color = ['#00ACF0','#ffffff'] cmap = LinearSegmentedColormap.from_list("", list(zip(nodes, color))) %config InlineBackend.figure_format='retina' Now we load the diabetes dataset ( diabetes-dataset.csv ) into the notebook and begin to explore the characteristics within: df = pd.read_csv('diabetes-dataset.csv') We can then use Pandas to see the types of features in the dataset using .dtypes , or in even more detail using .info() df.dtypes df.info() Now to see some of our data, we can look at the first 5 entries using .head() df.head() We can look at other parts of the data using .tail() and .sample() # Prints the last 10 entries df.tail(10) # Produces a random sample from the data df.sample(frac=.01) We can also get a better understanding of the quantitative variables in the dataset, which helps us understand our continuous data before we start to visualize, using .describe() : df.describe() Data Visualisation Data visualisation is the graphical representation of our data. 
We use charts, graphs and maps to make it easier to see trends and other features in our dataset. # Now we make continuous plots of quantitative data fig=plt.figure(figsize=(20,20)) for i,col in enumerate(df.drop(['Outcome'],axis=1)): ax=fig.add_subplot(4,2,i+1) sns.distplot(df[col]) Observation Blood pressure, Glucose and BMI are uniformly distributed. Skin Thickness, Diabetes Pedigree Function, Pregnancies and Age are positively (right) skewed. Blood Pressure, Skin Thickness, Insulin and BMI have zero values which can be regarded as outliers. _ = sns.countplot(x='Pregnancies', hue='Outcome', data=df).set(title='Pregnancies against Outcome', xlabel='Pregnancies', ylabel='Outcome') Interpretation We observe here that even though there are more people with a low number of pregnancies, it can be assumed that people with a higher number of pregnancies are more likely to have diabetes. This is because fewer than half of those with a low number of pregnancies (between 0–2) have diabetes, whereas half or more of the patients with more than 3 pregnancies have diabetes. _ = sns.countplot(x='Outcome', data=df).set(title='Patients Diabetes Status Count', xlabel='Outcome', ylabel='Count') Interpretation This bar plot shows that most of the observed patients do not have diabetes. _ = sns.countplot(x='Pregnancies', data=df).set(title='Patients Pregnancies Status Count', xlabel='Pregnancies', ylabel='Count') Interpretation Most of the observed patients have a low number of pregnancies. Classification (K-Nearest-Neighbour) Classification is a well-known area of supervised learning where the target variables take the form of categories, such as, given an email, classifying whether it is spam or not. In this example, we will be using the k-nearest-neighbour classifier, which is one of the most intuitive classification algorithms. 
The k-nearest-neighbors classifier is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure (e.g., distance functions). Building our model Now we are ready to build our initial model. Scikit-learn has a KNeighborsClassifier module built-in, which makes it very simple for us. Here, we will be seeing how we can predict if a patient is likely to have diabetes or not. We will start by creating two separate DataFrames for our targets and our features. # Create feature and target arrays y = df["Outcome"].values X = df.drop(["Outcome"], axis = 1) Since KNN uses the distances between observations (not necessarily Euclidean though) we will need to scale our data: #Scaling - crucial for knn from sklearn.preprocessing import StandardScaler ss = StandardScaler() X = ss.fit_transform(X) Now we need to split our data into training and test sets. Observe here that we are splitting into 80% train and 20% test. from sklearn.model_selection import train_test_split # Split into training and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42, stratify = y) Finally, it is time to build and train our model: from sklearn.neighbors import KNeighborsClassifier # Create a k-NN classifier knn1 = KNeighborsClassifier() # Fit the classifier to the training data knn1.fit(X_train,y_train) Checking the accuracy of our model Now we need to determine how good our model is. The most common metric is accuracy: the proportion of correct predictions when testing the model against the 20% test data that was held out from training. # Print the accuracy print(knn1.score(X_test, y_test)) Our model can predict whether someone is likely to have diabetes or not with 81% accuracy — and that is acceptable! A huge thanks to Datern for motivating me to write this piece. That’s it y’all! Let’s continue this conversation. 
Follow me on Twitter @_NuhuIbrahim.
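Accuracy is only one way to score a classifier, and the article hints at other methods. As a follow-on sketch (not from the article), the confusion-matrix counts behind accuracy, precision and recall can be computed with plain Python; scikit-learn's `confusion_matrix` does the same, but the hand-rolled version shows what the numbers mean. The label arrays here are made-up toy data, not the diabetes predictions.

```python
def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary 0/1 labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    return tp, fp, fn, tn

# Toy labels, purely illustrative.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp, fp, fn, tn = confusion_counts(y_true, y_pred)
accuracy = (tp + tn) / len(y_true)   # 0.75
precision = tp / (tp + fp)           # of predicted positives, how many were right
recall = tp / (tp + fn)              # of actual positives, how many were found
```

For a medical problem like diabetes screening, recall often matters more than raw accuracy, since a false negative (a missed diabetic patient) is usually costlier than a false positive.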
https://medium.com/dev-genius/using-machine-learning-to-predict-patients-diabetes-status-481643dc7389
['Nuhu Ibrahim']
2020-12-20 10:29:01.287000+00:00
['Machine Learning', 'Python', 'Supervised Learning', 'K Nearest Neighbours', 'Data Science']
Weekly Machine Learning Research Paper Reading List # 3
Weekly Machine Learning Research Paper Reading List — #3 Authors: Lei Duan, Guanting Tang, Jian Pei, James Bailey, Akiko Campbell, Changjie Tang Venue: Data Mining and Knowledge Discovery Journal Paper: URL Abstract: When we are investigating an object in a data set, which itself may or may not be an outlier, can we identify unusual (i.e., outlying) aspects of the object? In this paper, we identify the novel problem of mining outlying aspects on numeric data. Given a query object o in a multidimensional numeric data set O, in which subspace is o most outlying? Technically, we use the rank of the probability density of an object in a subspace to measure the outlyingness of the object in the subspace. A minimal subspace where the query object is ranked the best is an outlying aspect. Computing the outlying aspects of a query object is far from trivial. A naïve method has to calculate the probability densities of all objects and rank them in every subspace, which is very costly when the dimensionality is high. We systematically develop a heuristic method that is capable of searching data sets with tens of dimensions efficiently. Our empirical study using both real data and synthetic data demonstrates that our method is effective and efficient.
https://medium.com/towards-artificial-intelligence/weekly-machine-learning-research-paper-reading-list-3-61d9c86c2538
['Durgesh Samariya']
2020-08-18 16:56:40.095000+00:00
['Research', 'Machine Learning', 'University', 'Science', 'Academia']
A lot of great writers suck at titles and this is what they taught me about writing…
Seth Godin has a post title that scored zero in that headline analyzer everyone recommends. Seriously. Zero. Zip. No score. It all started with a stupid thing I read… I honestly think I suck at titles. Like, I pull them out of the damn air. One time I had an article I thought was pretty good, but no one read it. Like, NO ONE. A big fat ZERO. So I changed the title and in one day it had hundreds of likes. Which explains why I started reading about writing better titles. Because, duh! Suckage. Anyway, so I was reading about better titles and I read one weird old tip that said to write 50-100 titles for every article and you’d eventually get the hang of good titles. My first thought was — are you frigging kidding? Um — no! (because, if I had to write 50-100 titles for every article, you’d forget who I was before I wrote the next article, because hot damn that’s a lot of titles…) So I got this crazy idea to check out the titles of some of my favorite writers and see how they fared. It was supposed to be incentive, but it backfired. Which leads back to Seth Godin. His blog post titles scored really low. 0, 26, 41. One of his titles got a 72, but that’s rare for him. Mostly, his titles all sucked. At least in that analyzer. Neil Gaiman didn’t fare any better… I grabbed 5 of his blog post titles and tested them, too. He scored 27, 71, 64, 46 and 57. Neil frigging Gaiman. He’s won the Hugo, Nebula, and Bram Stoker awards. First author to win both the Newbery and Carnegie for the same book. And his blog titles suck. Like, capital S, Suck! According to the analyzer, anyway. Liz Gilbert and Mary Oliver don’t even have blogs and their titles still suck. They post on Facebook and — no surprise — their titles also suck. Mary Oliver uses titles like “Spring” and “To Begin With, the Sweet Grass” Liz Gilbert doesn’t even bother with titles. 
She starts out with things like “Dear Ones…” or “Question of the day…” Buzzfeed, on the other hand… I suspect the people posting at Buzzfeed and sites like that must use that headline analyzer religiously because they use titles like these… This Guy Got Run Over By A Deer And The Video Is Kind Of Amazing 25 Of The Most Powerful Photos Of The Week 12 Tips for Better Sleep, From People Who Sleep Better Than You Those titles scored wicked high. 74, 82 and 66. I can see why people would click those titles. There’s a certain “feel” there, you know? Have you ever heard of Yellow Journalism? Yellow journalism is an American term for journalism and writing that presents little or no legitimate or well-researched news, and instead uses eye-catching headlines, exaggerations, scandal-mongering or sensationalism. The term yellow journalism (and yellow press) popped up around 1900, in reference to the big New York newspapers as they battled for circulation. Because if you have a lot of newspapers competing against each other and you’re trying to get people to stop and pay a nickel for your paper instead of some other paper, it makes sense. And it’s just as competitive here, right? We all still want your nickel, which is about what I get paid when you throw a couple claps my way. Because when there’s so many stories and so many writers, it’s hard to be heard above the noise of the crowd. So that catchy title thing makes some kind of sense. Writers, I think, are artists painting with words… After spending way too much time analyzing titles, I sat back and thought about writers I truly admire. The ones I analyzed and more… Liz Gilbert, Mary Oliver, Neil Gaiman, Seth Godin and so many more. Anne Lamott, Jenny Lawson, Norman Doidge. Wally Lamb, Paula Todd, Naomi Klein, Dr. Clarissa Pinkola Estes, Julia Cameron, John Cleese, Katy Bowman. Jenny Lawson makes me laugh until I cry. Neil Gaiman fills me with wonder at what one person can do with words. 
Liz reminds me of the power of kindness. Writers, I think, are artists painting with words. They paint in the colors of joy or sorrow, discovery and delight and a thousand other splendid surprises. My favorite writers don’t need headlines that rank 84 on the headline analyzer. I would read anything they write. It’s their voice, their art and their heart that brings me back time and time again. Which made me realize the problem with eye-catchy titles They appeal to people who like eye-catchy titles. It’s the same as the concept of the freebie. If you give away freebies, you draw freebie-seekers. So when you write those sensational titles, you’ll always have to compete against the other sensational titles for the attention of the people looking for that buzz. My favorite writers didn’t build their audience overnight. I get that. They built their audience by writing consistently. By showing up. And I think that would be a LOT harder to do if you had to write 50 or 100 titles for every article. Which pretty much means my titles are still going to suck, but I’m kind of okay with that. Unless they get no reads. Then I’ll change them. What do you think? Do you work hard to write great titles? I suspect that there’s a balance between getting attention and building the kind of reputation you want to have as a writer. I don’t think “sensationalism” is what I want to be known for, really. So, what do you think? Any weird old tips you can share?
https://medium.com/linda-caroll/a-lot-of-great-writers-really-suck-at-titles-and-this-is-what-they-taught-me-about-writing-9b304e86af8d
['Linda Caroll']
2020-02-23 19:31:36.904000+00:00
['Ideas', 'Writing', 'Advice', 'Writing Tips', 'Inspiration']
Are digital tools helping us as much as we claim they are, though?
Just a quick anecdote to open: a few months ago, I had someone Facebook Messenger me to go on Slack. When I got on Slack, he sent me something from Google Docs to work on. That’s three platforms in the span of about 47 seconds. Theoretically a Google link can be shared in Facebook Messenger, so we could have potentially eliminated one step there. 47 seconds and three steps doesn’t sound like a big deal, no. But over the course of a day, a week, a month, etc… I’d legitimately have to wonder how much time is wasted in offices by people bouncing around between various platforms. I’d assume it’s going to add up. The №1 reason this happens, of course, is that both individuals and silos (departments) become comfortable with specific workflows and tools. They don’t want to break away from those tools. But sometimes they must do projects with other teams, and those teams prefer different tools. This is usually a 27 car pile-up. At my last full-time job, the tech stack guys/girls used one thing (probably Asana), and the marketing “gals” (referred to as such) used another thing (I think OneNote). Do these programs speak to each other? Sure. But bring human context into it (“Why are we using this, Jim?”) and it gets messy. If you don’t believe me on people clinging to the tools they know, look at some stats around adoption of tech. The №2 reason, I’d guess, is that people view themselves as insanely busy. Most of work is about “getting this thing done.” We claim it’s strategic but it’s not often really that. So if you’re super slammed and need to get something done, maybe you’ll message someone across three-four platforms just because it seems relatively natural within how busy you are. This is the reason group chat apps are not as productive as we think, though. It seems like this all is pointing in two directions. Direction 1: Digital got to scale too quickly There weren’t a lot of “set rules,” and it was impacting different ages at the same time. 
Easiest conceptualization: your grandmother on Facebook. Another one: how most people use LinkedIn (a complete mess featuring number puzzles and bikini shots and updates about your kid’s soccer game). Digital getting to scale so quickly and everyone kinda using it in their own way created what we call “digital noise.” Direction 2: Digital is now officially “more shit to manage” Think of how many new platforms aiming to “fix a pain point” come out monthly. In each industry, it’s gotta be hundreds if not more. The real reason is that spinning out a platform isn’t super expensive. If you have the skills and can bring people together on it, you can spin it out, get some clients, and maybe get bought. CHA-CHING. You just set up your life for 20 years because you conned a few people into thinking your software will make their teams more productive. Congrats. But in an era where work stress is supposedly higher than ever and managers are stretched thinner than male jeans in Portland, aren’t all these platforms becoming more shit to manage? Isn’t that eventually going to offset the productivity gains? “Well the notes are in Asana and the docs are in Google and we’re updating on Slack but we also have a management board with Trello and we use FB for Work during the day as well as a private LinkedIn group and some mindfulness tools and a few productivity boosters. All software, of course.” That sounds ridiculous, but I’ve worked at 4–5 places where it’s completely true. To get everything done, you need to check 11 things per day. And people put stuff that relates to other stuff in different places, so it’s kind of like putting together a jigsaw puzzle to just do your fucking job. What’s the solution? I guess the easiest answer here is “focus on simplicity,” but that’s very hard in business. Each silo has its own leaders, check-writers, and personalities. They will buy the stuff that seems to work for them. 
It becomes a cacophony of competing platforms and confusion, but no one cares — they want to look good and “smart as a buyer” to their bosses. Mary Barra, who runs GM, once talked about situations where “the simplest task at work becomes impossible.” I think we’re there in a lot of companies, and it’s largely because of the proliferation of digital. One other answer here would be “have clear priorities and indicate a desire for reduced communication channels to make sure info flows smoothly,” but I mean… LOL that most companies could ever achieve that. Work isn’t about info flowing smoothly. It’s about protecting your perch. If you read this, though, I’m curious to know: do you see work as a competing mix of platforms? Do you see digital as now just “more shit I need to manage?”
https://tedbauer2003.medium.com/are-digital-tools-helping-us-as-much-as-we-claim-they-are-though-50376ecff639
['Ted Bauer']
2018-10-07 12:30:05.090000+00:00
['New Digital Tools', 'Future Of Work', 'Productivity Apps', 'Productivity', 'Management']
Not Invited
I Hate it When I am Not Included It bothers me when I am not invited or included. I know I shouldn’t care so much. But, I do. I hate that feeling of being left out. It brings up memories of when I was younger and girls talked about going to places I was not invited to. I go back to the raw feeling of what it was like as they talked and giggled about the fun they had. I hate the feeling of being left out. Not asked. Not invited. And not included.
https://medium.com/the-partnered-pen/not-invited-74aee2f93309
['Laura Mcdonell']
2020-02-11 22:14:19.444000+00:00
['Self-awareness', 'Emotions', 'Friendship', 'Included', 'Perspective']
Subtracting God from Science and Life
The ‘subtraction story’ Charles Taylor’s A Secular Age begins with an interesting question: why did almost no one in medieval times doubt God’s existence, while for many in the 21st century it is almost impossible not to doubt it? Most of us have heard a version of this ‘subtraction story’. In medieval times, people in Western culture needed religion as a tool for explanation. They believed in an ‘enchanted’ world of magic, demons, and evil spirits where faith in God comforted them. But now that we have science, we can leave that world behind. We are now free to face reality. Science or culture? Taylor spends 900 pages tracing the development of this story from medieval times up to the present. He observes that, despite their popularity, people rarely state the scientific evidence for these subtraction stories. They may refer to the success of science or evolution, but with little further justification. There is no point where the scientific evidence tips the scales inevitably towards secularism. Taylor traces the origin of subtraction stories back to the 19th century. During this period, a trend developed for ‘conversion stories’ where influential figures testified to the divorce of science from religion as an inevitable stage in humankind’s development. For example, Max Weber: To the person who cannot bear the fate of the times like a man, one must say: may he rather return silently…The arms of the Church are open widely and compassionately for him. The driving force behind these ‘testimonies’ seems to be the moral imperative of casting off superstitions to face life ‘like a man’. This ethical motivation remains common in the “culture wars”, where those who reject its norms are dismissed as being “on the wrong side of history”. As with most sacred stories, it reflects the time and place of its origin. These secular narratives combine the rise of science with rugged Western individualism — the “frontier spirit”. 
These myths grew during the European Industrial Revolution as people left the comfort and authority structures of village, family and church to make a new life in cities. It is not surprising that these subtraction stories have struggled to gain traction outside of Western culture. Science or philosophy: is naturalism testable? It is now common to argue science requires methodological naturalism. It is easiest to begin by defining a related concept, metaphysical naturalism. Michael Ruse calls this “a complete denial of the supernatural” (Oxford Handbook of Atheism, p383). Ruse then defines methodological naturalism: “a conscious decision to act… as if metaphysical naturalism is true…” (Ruse, p383). Methodological naturalism requires us to assume there are no authorities — other than human reason and science. People will often present methodological naturalism as a neutral position: we will act as if metaphysical naturalism is true until there is enough data to reject it. But what would be enough evidence to reject methodological naturalism? Richard Lewontin, formerly Professor of Genetics at Harvard, provided a candid response: “It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.” Secularism and logic Another problem with naturalism is that it fails to account for the objectivity of logic — a key assumption of science. Greg Bahnsen’s debate with Gordon Stein is a classic. They both agreed the laws of logic are objectively true (that is, true for all people at all times). 
They also concurred that this is a foundational assumption of the scientific method. But as Bahnsen pointed out, in the naturalist worldview the laws of logic can only be human conventions: “well if you’re an atheist and a materialist you’d have to say they’re just something that happens inside the brain… If the laws of logic come down to being materialistic entities then they no longer have their law-like character.” Stein had no answer. In the decades since the 1985 debate, a viable naturalistic account is still lacking, as several atheists have acknowledged. For example, Thomas Nagel, an atheist and Professor of Philosophy and Law, Emeritus, at New York University, essentially repeats Bahnsen’s conclusion. Most naturalists believe in the objectivity of logic but “it does seem to be something that cannot be given a purely physical analysis.” (Mind and Cosmos, p83–84). Paul Feyerabend, an atheist and formerly Professor of Philosophy at the University of California, Berkeley, rejected the claim that scientific knowledge is objectively valid. He argued, in Farewell to Reason, that this was impossible because the scientific method is based on culturally bound conventions. An alternative story The central role of Christianity in the development of modern science is now commonly accepted among historians of science, such as Edward Grant. On a practical level, the funding and support of cathedral schools by the medieval church was key to the development of modern universities. So was the development of scholarly societies, like the Royal Society in the UK, formed by scientists motivated by “faith seeking understanding”. An ordered universe A central assumption of science is the ordered nature of our universe. For example, Samuel Clarke, a pioneer of modern science in the 18th Century, put it like this: What men commonly call ‘the course of nature’… is nothing else but the will of God producing certain effects in a continued, regular, constant, and uniform manner. 
Alvin Plantinga, formerly Professor of Philosophy at the University of Notre Dame, argues this assumption is rooted in Christianity: The idea is that the basic structure of the world is due to a creative intelligence: a person, who aimed and intended that the world manifest a certain character. The world was created in such a way that it displays order and regularity; it isn’t unpredictable, chancy or random (Where the conflict really lies, p272). Out of the armchair and into the world Another foundation of modern science is the need to test our conclusions using empirical data. This is often cited as a key contrast between science and the ‘blind faith’ of religion. But once more, we can trace this scepticism back to Christian theology. Peter Harrison, formerly professor of science and religion at the University of Oxford, argues the rise of experimental science in the 17th Century was inspired by the Protestant Reformation. The Reformation challenged the medieval church’s reliance on ancient authorities like Aristotle. The Reformation also led to the rediscovery of Augustine’s teaching on sin: “On the one hand, the fall provided an explanation for human misery and proneness to error; on the other, Adam’s prelapsarian perfections, including his encyclopaedic knowledge, were regarded as a symbol of unfulfilled human potential.” (Peter Harrison, Fall of man and the foundations of science, p11) The popular subtraction stories of our time ignore the Christian foundations of modern science. The 19th Century conversion stories of leaving behind the superstitions of the past still resonate with us as heirs to the modern world. They make us feel courageous and mature. The psychological benefits of these stories, rather than any obvious scientific discovery, seem to account for their enduring success.
https://medium.com/interfaith-now/subtracting-god-from-science-and-life-2fd13211ac2c
['Nick Meader']
2020-08-15 14:27:02.567000+00:00
['Religion', 'Christianity', 'History', 'Spirituality', 'Science']
Why Do Things Fall?
Lines, curved and straight Understanding straight lines holds the key to gravity. What is a straight line? What is the distinctive feature of a straight line that separates it from non-straight lines? The question might seem silly and obvious at first, but let me warn you, it is not. Imagine two straight, parallel lines that never meet. Now imagine a curved line. We certainly know that the straight line is completely different from the curved one, but not always. A spherical globe Look closely at something spherical, something curved, a globe for example. Take a minute and look at the picture. Which lines look straight? The latitudes or the longitudes? Even though the latitudinal lines may seem straight, they are not. Why? Because they turn, and straight lines, by their very definition, do not turn. So then, straight lines on a plane are not exactly the same on a sphere or curved surface. It is important to create this mental distinction between straight lines on surfaces of differing geometry. With the previous observation in mind, let’s try to imagine a straight line on a curved surface. For simplicity, let’s use a cone as our curved surface. Keep in mind that the line should never turn. Got a picture in your head? Most probably you did not. Instead, get a piece of paper with a straight line drawn on it and fold the paper into a cone. It should roughly look like the one in the picture below. Cone Geodesics, Source: Wolfram See it now? What does our straight line look like now? We had a straight line on the coordinate plane, but on the cone, there is no straight line. Well, the straight line obviously is on the cone since all we did was just fold the paper, but once folded into a cone, it no longer resembles a straight line. Let me introduce you to Geodesics. A straight line on a curved surface is a geodesic. This means that any straight line, when plotted on a curved surface, remains straight but does not look straight.
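The paper-folding demonstration can even be checked numerically. Folding flat paper into a cone preserves distances measured along the surface, so the straight line keeps exactly its original length once it becomes a geodesic on the cone. Here is a small sketch of that check (the cone's half-angle and the particular line are arbitrary choices for illustration, not taken from the article):

```python
import math

def to_cone(r, phi, alpha):
    """Map a point at polar coordinates (r, phi) on the flat, unrolled paper
    to 3D coordinates on a cone of half-angle alpha (apex at the origin).
    The azimuth is stretched by 1/sin(alpha), which makes the map an
    isometry: it preserves lengths measured along the surface."""
    s = math.sin(alpha)
    theta = phi / s
    return (r * s * math.cos(theta), r * s * math.sin(theta), r * math.cos(alpha))

alpha = math.pi / 6   # half-angle of the cone
N = 2000              # number of sample points along the line

# A straight line drawn on the flat paper: (x, y) = (1, t) for t in [-1, 1].
# Its length on the paper is exactly 2.
pts = []
for i in range(N + 1):
    t = -1.0 + 2.0 * i / N
    r = math.hypot(1.0, t)
    phi = math.atan2(t, 1.0)
    pts.append(to_cone(r, phi, alpha))

# Arc length of the wrapped curve: it matches the flat line's length of 2,
# even though the 3D curve is visibly bent.
length = sum(math.dist(pts[i], pts[i + 1]) for i in range(N))
```

The bent-looking 3D curve really does have the same length as the straight line on the unfolded paper, which is the sense in which it "remains straight" on the cone.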
https://medium.com/discourse/why-do-things-fall-2ae1800e104d
['Mukut Mukherjee']
2019-01-08 03:06:17.146000+00:00
['Astrophysics', 'General Relativity', 'Gravity', 'Physics', 'Science']
How Hromadske Radio put women first in its coverage of COVID-19
In a nutshell Reliable COVID-19 news and human interest stories kept Ukrainian citizens in the capital and the east of the country up to speed during the pandemic. What is Hromadske Radio? Established in 2013, Hromadske Radio is a public radio station based in Kyiv, Ukraine. It is one of Ukraine’s few non-commercial and non-governmental news talk radio stations and online media platforms and employs 70 full and part-time journalists and other staff. Broadcasting 24 hours a day, seven days a week, the station reaches its listeners via satellite and its own broadcasting network covering six regions across Ukraine. With 15 transmitters of its own, it has the potential to reach about three to five million people in up to 25 cities and towns. Transmitters are not easy to obtain as they must be licensed by the government and are restricted to certain frequencies and power levels. Many Ukrainian stations in the Donbas region are jammed by Russian channels, meaning that listeners hear Russian-based broadcasts over the same frequency. Hromadske’s audience is evenly split in terms of gender, especially in the Donbas region of Ukraine, where conflict with the Russian Federation has taken place since 2014. According to data from 2018, its radio audience here is 47% male and 53% female. Most of these listeners tune into its FM frequency, as do those from Kyiv, where the audience is 58% men and 42% women, according to data from 2017. New demographic data about its audience is being collected this October. Hromadske podcast on a story of a Kyivan who contracted the coronavirus. In addition to around 110,000 weekly listeners via FM radio, listeners can also access Hromadske online through its website and mobile app, which were redesigned last year. As well as live programming, the website acts as a repository for over 100 different Hromadske podcasts on everything from reasons to travel, to Ukrainian music, to how to verify information. 
Some are longstanding and have hundreds of episodes while others have a short run and have already come to an end. The podcasts are released via its website and via other podcasting platforms. Hromadske’s combined monthly web and podcast audience is around 200,000 to 300,000, depending on the month. 95% of Hromadske’s revenue comes from international donors, including the likes of National Endowment for Democracy, International Renaissance Foundation, the UK’s Foreign and Commonwealth Office, and Pact. These donors are keen to financially support independent outlets like Hromadske Radio as their presence is deemed hugely important for protecting the interests of Ukrainian civilians in a media environment at risk from influence by oligarchs. The other 5% of revenue is from listener donations and renting out its studio for recording to external clients. Hromadske has plans to gradually decrease the share of international donor money over time and to gain more listener donations to support its work. Despite the introduction of a number of reforms since 2014 to increase transparency around media ownership, Ukraine is still ranked 96 out of 180 countries in the World Press Freedom Index by Reporters Without Borders. Political tensions between Kyiv and Moscow over the annexing of Crimea by the Russian Federation in 2014 continues to have an effect on media coverage in the country. In 2017, a ban was imposed on Russian journalists and Russian-owned news outlets. A year later, any assets that Russian journalists had in the country were frozen as part of an extension of sanctions, according to the Committee to Protect Journalists. Since then, Ukrainian broadcast media can only air 30–40% of Russian content, despite the fact that much of the country is bilingual. How did Hromadske Radio handle the COVID-19 crisis? At the beginning of the pandemic in March, the editorial team included some COVID-19 related content in some of its existing broadcasts. 
However, it lacked funding for specially made programmes focusing on the pandemic. It was impossible to attract commercial funding for this, so the team applied for grants, which meant there was a delay launching the shows that it wanted. The Social Distance show, hosted by Tetyana Troshchynska. In April, at the height of Ukraine’s pandemic, the team started producing a twice-weekly show called Social Distance Talk Show. Funded by two donors, the hour-long show covered the social and medical aspects of COVID-19, featuring experts including doctors, psychologists, health workers and teachers. For example, one show looked at the experience of a man who successfully recovered from COVID-19 while another looked at kids in lockdown and the time they were spending online. In addition to airing on its FM station, a podcast version was released on its website, as well as SoundCloud, Spotify, Google Podcasts and Apple Podcasts. As the case numbers dropped in July, the team decided to scale back production to once a week. Hromadske Radio launched two additional COVID-19 shows in May to inform listeners about different aspects of the pandemic. A daily rubric — COVID-19 News in the Morning — went out every weekday during the morning show with the relevant information about the coronavirus and key public health messages released by the Ukrainian government. In the evening drive time slot, listeners could hear the daily COVID-19 radio news bulletin, including the latest government data on the number of positive cases in Ukraine and its death toll. The same bulletin was then published online as a news publication and a podcast on the website. With women making up more than half of Hromadske Radio’s FM listeners and website audience, the team decided to create a podcast about women’s experience during COVID-19 that also aired on its FM station. 
‘The Female Side of Quarantine’ concentrated on personal stories of everyday women dealing with situations arising from the pandemic. For instance, one segment examined domestic violence during the lockdown, while another looked at how a mother of a two-year-old girl coped during her quarantine experience. Since April, Hromadske Radio has run a campaign urging its audience to verify information and be aware of COVID-19 stories that might not be true. Audio clips broadcast several times a day point to reputable sources of information and explain why it is important to listen to advice from the medical authorities. Other topics include why you should observe the two-metre social distancing rule and why it helps to wear a mask. These clips are also posted to social media. The aim is to help Ukrainians understand how the information they consume can help stop the virus’ spread. Hromadske received plenty of feedback from its audience during this period, mainly via incoming calls received by sound engineers and guest administrators and funnelled through to hosts and studio guests. There is also a significant number of comments that come through social media, particularly Facebook (45,000 followers to date), Twitter (41,700 followers) and Instagram (50,400 followers). The interest in COVID-19 meant that, at the height of the pandemic, Hromadske’s website received 307,400 unique users in April 2020 and 491,700 page views. Hromadske Radio secured three new COVID-19-related grants during the first half of the year; one that the team applied for and two from organisations that contacted the station looking to support its work. These grants were especially important given the station was not able to generate its usual level of advertising revenue due to the COVID-19-related economic downturn. Unlike other independent outlets in other countries, Hromadske didn’t see an uptick in donations from readers. 
Face-to-face fundraising events, which count for 1% of Hromadske’s annual budget and take place up to four times a year alongside fundraising campaigns, have also had to be suspended due to the pandemic. These events usually support online crowdfunding campaigns — such as the campaign last year to buy an FM license from the government — and feature talks with the radio station’s most popular journalists as well as the editor-in-chief. Held in Kyiv, about 30 to 50 people typically attend such events. These will likely move online in the second half of 2020. How has COVID-19 changed the future of Hromadske Radio? The editorial team will concentrate more of its broadcasting on healthcare-related topics in the future. The Ukrainian public is increasingly interested in health because of the unsatisfactory level of healthcare in some regions and the fact that efforts are being made by the Ukrainian government to reform the sector. Subsequently, and pending further funding, Hromadske is considering producing specific health-related shows. It will also invite more medical specialists onto its general interest shows to answer questions and talk about timely health issues. Hromadske will continue to rely on more remote work for its operations due to the increasing number of confirmed cases in Ukraine over the last few weeks. Everyday activities will continue to be done online and meetings and conferences will use Zoom and Skype. Some reporters and presenters will continue to voice shows from home, though it depends on microphone and audio quality. Hromadske Radio will continue to use its studio for live shows and presenting the news. In terms of revenue, Hromadske Radio plans to increase other income streams in order to become more sustainable and less dependent on grants. 
In April 2020, following a successful crowdfunding campaign, Hromadske launched a new commercially viable frequency in Kyiv. The new frequency is available on car FM receivers and will be monetised via advertising and sponsorship sales. This will not be straightforward given the fact that the pandemic has hit the advertising industry in Ukraine. For ethical reasons, Hromadske’s editorial board also took the decision not to run advertising from the alcohol or gambling industries, which are the main pillars of the Ukrainian radio advertising market. Many other Ukrainian media organisations find loopholes to this advertising ban and air those ads anyway. What have they learned? “Hromadske Radio understood that the challenges are also opportunities. To our surprise, we learned how to do most of our pre-production meetings and some interviews remotely. ” Kyrylo Loukerenko, executive director and co-founder of Hromadske Radio
https://medium.com/we-are-the-european-journalism-centre/how-hromadske-radio-put-women-first-in-its-coverage-of-the-covid-19-40d180fe9608
['Tara Kelly']
2020-07-29 10:57:03.053000+00:00
['Resilience', 'Radio', 'Case Study', 'Covid 19', 'Journalism']
How to build your own Neural Network from scratch in Python
Motivation: As part of my personal journey to gain a better understanding of Deep Learning, I’ve decided to build a Neural Network from scratch without a deep learning library like TensorFlow. I believe that understanding the inner workings of a Neural Network is important to any aspiring Data Scientist. This article contains what I’ve learned, and hopefully it’ll be useful for you as well! What’s a Neural Network? Most introductory texts to Neural Networks bring up brain analogies when describing them. Without delving into brain analogies, I find it easier to simply describe Neural Networks as a mathematical function that maps a given input to a desired output. Neural Networks consist of the following components: an input layer, x; an arbitrary number of hidden layers; an output layer, ŷ; a set of weights and biases between each layer, W and b; and a choice of activation function for each hidden layer, σ. In this tutorial, we’ll use a Sigmoid activation function. The diagram below shows the architecture of a 2-layer Neural Network (note that the input layer is typically excluded when counting the number of layers in a Neural Network). Architecture of a 2-layer Neural Network Creating a Neural Network class in Python is easy. Training the Neural Network The output ŷ of a simple 2-layer Neural Network is ŷ = σ(W₂ σ(W₁x + b₁) + b₂). You might notice that in the equation above, the weights W and the biases b are the only variables that affect the output ŷ. Naturally, the right values for the weights and biases determine the strength of the predictions. The process of fine-tuning the weights and biases from the input data is known as training the Neural Network. Each iteration of the training process consists of the following steps: calculating the predicted output ŷ, known as feedforward; and updating the weights and biases, known as backpropagation. The sequential graph below illustrates the process. 
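A sketch of what such a class might look like (the hidden-layer width of 4 and the variable names here are illustrative assumptions, not necessarily the article's exact code):

```python
import numpy as np

class NeuralNetwork:
    def __init__(self, x, y):
        self.input = x                                 # training inputs
        self.weights1 = np.random.rand(x.shape[1], 4)  # input -> hidden layer (4 units assumed)
        self.weights2 = np.random.rand(4, 1)           # hidden layer -> output
        self.y = y                                     # training targets
        self.output = np.zeros(self.y.shape)           # current predictions, filled in later
```

Note that, following the simplification discussed below, no bias terms are stored; only the two weight matrices are trained.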
Feedforward As we’ve seen in the sequential graph above, feedforward is just a simple calculation, and for a basic 2-layer neural network, the output of the Neural Network is ŷ = σ(W₂ σ(W₁x)). Let’s add a feedforward function in our python code to do exactly that. Note that for simplicity, we have assumed the biases to be 0. However, we still need a way to evaluate the “goodness” of our predictions (i.e. how far off are our predictions?). The Loss Function allows us to do exactly that. Loss Function There are many available loss functions, and the nature of our problem should dictate our choice of loss function. In this tutorial, we’ll use a simple sum-of-squares error as our loss function: Loss(y, ŷ) = Σ(y − ŷ)². That is, the sum-of-squares error is simply the sum of the squared differences between each predicted value and the actual value. The difference is squared so that we measure the absolute value of the difference. Our goal in training is to find the best set of weights and biases that minimizes the loss function. Backpropagation Now that we’ve measured the error of our prediction (loss), we need to find a way to propagate the error back, and to update our weights and biases. In order to know the appropriate amount to adjust the weights and biases by, we need to know the derivative of the loss function with respect to the weights and biases. Recall from calculus that the derivative of a function is simply the slope of the function. Gradient descent algorithm If we have the derivative, we can simply update the weights and biases by increasing/reducing them with it (refer to the diagram above). This is known as gradient descent. However, we can’t directly calculate the derivative of the loss function with respect to the weights and biases because the equation of the loss function does not contain the weights and biases. Therefore, we need the chain rule to help us calculate it. Chain rule for calculating the derivative of the loss function with respect to the weights. 
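A minimal sketch of these two pieces, assuming the zero-bias, sigmoid-activated 2-layer network described above (the standalone function names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedforward(X, weights1, weights2):
    # layer1 = sigmoid(X . W1), output = sigmoid(layer1 . W2); biases assumed 0
    layer1 = sigmoid(X @ weights1)
    return sigmoid(layer1 @ weights2)

def sum_of_squares_loss(y, y_hat):
    # sum over all samples of (actual - predicted)^2
    return float(np.sum((y - y_hat) ** 2))
```

As a quick sanity check: with all-zero weights, every sigmoid receives 0 and so every prediction comes out as exactly 0.5.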
Note that for simplicity, we have only displayed the partial derivative assuming a 1-layer Neural Network. Phew! That was ugly, but it allows us to get what we needed — the derivative (slope) of the loss function with respect to the weights, so that we can adjust the weights accordingly. Now that we have that, let’s add the backpropagation function into our python code. For a deeper understanding of the application of calculus and the chain rule in backpropagation, I strongly recommend this tutorial by 3Blue1Brown. Putting it all together Now that we have our complete python code for doing feedforward and backpropagation, let’s apply our Neural Network on an example and see how well it does. Our Neural Network should learn the ideal set of weights to represent this function. Note that it isn’t exactly trivial for us to work out the weights just by inspection alone. Let’s train the Neural Network for 1500 iterations and see what happens. Looking at the loss per iteration graph below, we can clearly see the loss monotonically decreasing towards a minimum. This is consistent with the gradient descent algorithm that we’ve discussed earlier. Let’s look at the final prediction (output) from the Neural Network after 1500 iterations. Predictions after 1500 training iterations We did it! Our feedforward and backpropagation algorithm trained the Neural Network successfully and the predictions converged on the true values. Note that there’s a slight difference between the predictions and the actual values. This is desirable, as it prevents overfitting and allows the Neural Network to generalize better to unseen data. What’s Next? Fortunately for us, our journey isn’t over. There’s still much to learn about Neural Networks and Deep Learning. For example: What other activation functions can we use besides the Sigmoid function? 
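One way the finished pieces might fit together as a single runnable sketch (the hidden-layer width of 4, the random seed, and the tiny example dataset here are illustrative assumptions, not necessarily the article's originals):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(s):
    # derivative of the sigmoid, written in terms of its output s = sigmoid(x)
    return s * (1.0 - s)

class NeuralNetwork:
    def __init__(self, x, y):
        self.input = x
        self.y = y
        rng = np.random.default_rng(42)
        self.weights1 = rng.random((x.shape[1], 4))  # input -> hidden
        self.weights2 = rng.random((4, 1))           # hidden -> output
        self.output = np.zeros(y.shape)

    def feedforward(self):
        self.layer1 = sigmoid(self.input @ self.weights1)
        self.output = sigmoid(self.layer1 @ self.weights2)

    def backprop(self):
        # chain rule for dLoss/dW2 and dLoss/dW1, with Loss = sum((y - output)^2);
        # the minus sign from differentiating (y - output) is folded into `error`,
        # so adding these increments steps down the loss surface
        error = 2.0 * (self.y - self.output) * sigmoid_derivative(self.output)
        d_weights2 = self.layer1.T @ error
        d_weights1 = self.input.T @ ((error @ self.weights2.T) * sigmoid_derivative(self.layer1))
        self.weights1 += d_weights1
        self.weights2 += d_weights2

# A tiny example dataset: the target is the XOR of the first two input columns
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

nn = NeuralNetwork(X, y)
nn.feedforward()
initial_loss = float(np.sum((y - nn.output) ** 2))
for _ in range(1500):
    nn.feedforward()
    nn.backprop()
final_loss = float(np.sum((y - nn.output) ** 2))
```

Comparing `final_loss` against `initial_loss` after the 1500 iterations should show the sharp decrease described above.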
Using a learning rate when training the Neural Network; using convolutions for image classification tasks. I’ll be writing more on these topics soon, so do follow me on Medium and keep an eye out for them! Final Thoughts I’ve certainly learnt a lot writing my own Neural Network from scratch. Although Deep Learning libraries such as TensorFlow and Keras make it easy to build deep nets without fully understanding the inner workings of a Neural Network, I find that it’s beneficial for aspiring data scientists to gain a deeper understanding of Neural Networks. This exercise has been a great investment of my time, and I hope that it’ll be useful for you as well!
https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6
['James Loy']
2020-03-04 11:44:18.141000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Towards Data Science', 'Data Science']
3 Reasons Why You Should Be More Like Bamboo
3 Reasons Why You Should Be More Like Bamboo What looks weak is strong. Photo by Michael Payne on Unsplash Have you ever wondered how sharks keep their teeth so white without going to the dentist? Well, even if you haven’t, I bet that’s all you’re thinking about now. It turns out, actually, that sharks do indeed go to a dentist, just not the same kind as you or I. Most notably in the coral reefs of the Pacific and Indian Oceans, “cleaner fish” showcase themselves with their bright colors and set up underwater cleaning stations. Sharks let them know it’s safe to approach, and the cleaner fish then eat the mucus, bacteria, and fungus off the sharks’ pearly whites. The relationship is mutually beneficial. The cleaner fish get a meal, and the sharks get a mouth free from danger. This is known as symbiosis. Symbiosis is the art of living and working together to create an advantageous relationship. Parents help children grow up and children (hopefully) help parents when they grow old. Bees gather nectar from flowers and help pollinate more flowers in the process. I believe that writers should have symbiotic relationships as well. This is why, when chatting with Akshad Singi, I knew it was an opportunity to both teach and learn. By working together instead of competing against one another with our writing, I believe we can share a message that can benefit more people. Thus, we have decided to join forces to provide two articles that promote the same concept: why we should be more like bamboo. When discussing bamboo, I couldn’t help but consider whether or not the relationship between pandas and bamboo is symbiotic as well. Sure, the pandas eat the bamboo, but the bamboo gets to feed an endangered species. Bamboo also gets the opportunity to be further studied and assessed for its tremendously unique qualities, as I will discuss in the remainder of the piece. While reading, consider your symbiotic relationships and wonder how and why you can be more like bamboo.
https://medium.com/real-1-0/3-reasons-why-you-should-be-more-like-bamboo-2279505ebd1a
['Jordan Gross']
2020-11-25 17:21:47.376000+00:00
['Mindfulness', 'Mental Health', 'Self Improvement', 'Life Lessons', 'Inspiration']
“Medium Crazy” or Mega?
In my just-over-two-years on Medium, I’ve received my fair share of goofy and/or insulting comments on my stories. Certainly, many of you can relate. Some leave you shaking your head, while others are infuriating. And then there are those “bon mots” which actually incite pity for the commentators. With that said, none have given me more pause for thought than the comment I recently received on my story about my sexual encounter with an octopus. Now, hold up because I’m certain new readers here will be thinking, “What a sick twist.” Conversely, those of you who regularly read me — and I thank you from the bottom of my crusty heart — know a spoof when they see it. Of course, I didn’t really have sex with an octopus, guys! After all, what cephalopod would be turned on by me? Apparently, a reader took my story for the gospel, and lest you think I’m pulling your chain, here is the comment: “The statircal nature of thisis disturbing. I don’t joke about having sex with babies, kids, goats. Certainly anyone can appreciate the evolution of a mollusks- MILLIONS of years later allows modern day homospaiens to take a step. While “intellignece” can be relative; sophisticsyion is not. If she doesn’t outwit a shark, prior, she might livr 1- yr before a mate enters her to release 100s of 100s of eggs, millions? Then she will die after she nurtures, does not leave her den and protect her young. Before you stay something so unitelligent, NOT funny, read my first two sentences.” As you’ll note, I did not fix the garbled sentences and profusion of typos in this maniac’s throw-down. I had to read it several times before it sunk in that this wasn’t a joke. Whoa, Nellie, but this is some concerning stuff! How in the hell does one come up with a rational response to someone who believes I actually got busy with an octopus! More to the point, someone who thinks I “sexually abused” said sea creature. Before you toss this off as a “joke,” read it a couple of times. 
Now, you’ll see from my response, as I linked to the story, below, that I was pissed. But, as I think about this, perhaps I was remiss in that I didn’t attempt to engage this person in anything resembling an intelligent discourse about the perceived octopus abuse. But, what the ever-loving F, people? Does this comment not have “crazy” all over it? Yes, I was being “statircal,” to quote the commentator, and anyone who can’t see that is nuts with a capital “N.” You’ll note that the person in question has no stories, nor followers on this platform. It’s as if she (since you’re going to know, anyway), fell out of a cloud, sucked down a bottle of vodka and a handful of pills, and then wrote that mess up. Friends, I’m dying to know. What would be your response to such an inane and insane diatribe? Would you ignore it or suggest that the person seek psychiatric help, posthaste? For the record, I wouldn’t fuck a goat, either, or any animal. Bestiality is not my thing, although I can be a bit of a beast. Nor would I abuse a child, in ANY fashion. I’m very provincial. I like my sex with someone who has a penis. The grown-up, human kind. As you’ll see there was no response to my own. It’s as if this person disappeared back into the padded room from which she emerged. I’m making a joke but for the record, I do feel empathy for anyone who is struggling mentally and or emotionally. Someone with a tenuous grip on reality. Trust me, I have my own issues and know how debilitating they can be. We’re all struggling right now, so perhaps I should chalk this up to the pandemic and put this individual’s questionable mind at rest by saying: “Dear…I did not, in any way, shape or form, engage in sexual conduct with an octopus. You read this on Medium, a place where writers share stories of all kinds, some sad, some funny, some true, and some satirical. That is the word, by the way. And that is the type of story you recently read, from me. 
Now, please, go and take a lie-down, snuggle with a pet or loved one or do whatever it is you need to calm the hell down. And I promise you, no more stories about messing with mollusks. Thanks for reading. Sincerely, Sherry.”
https://medium.com/the-top-shelf/medium-crazy-or-mega-f5beb4f04176
['Sherry Mcguinn']
2020-12-12 18:01:01.303000+00:00
['Sherry Top Shelf', 'Reader Comments', 'Mental Health', 'Satire', 'Sexuality']
Writing comes from a very deep place.
Photo by Rosie Kerr on Unsplash There are times when I feel that my older ‘self’ was much more aware than my ‘self’ of today… I never used to examine why I feel compelled to write. I just did it. All through grade school, high school, the Army and in my life post Army. In 2012 I fell off the wagon and spent 5 years barely writing anything. There are half a dozen reasons why I stopped but that’s a story for another day. Just know that every day during those years I treated myself to some of the most negative and bullying self-talk a person can endure. I could safely say that every day I wasn’t writing I had a bit of hatred for myself. Eventually I came to my senses and began writing again. When I made the decision, I was determined to do things differently. I realized that if I didn’t try to understand the compulsion (and even control it to some extent) I would eventually stop again. This time permanently. Gurus tell us that knowing our WHY will help us succeed. I never knew why I had to write. It happened automatically and why didn’t matter. Before I get started let me say this much — I don’t exactly know what writing is. Yes, I know it means putting words down on a medium that other people can read. But in my mind I’ve never thought beyond using it to give out information, explore my personal world or try to tell a story. Writing means different things to different people. At one point it was a coping method for me. For you it might be something different. Knowing why has become important because as I’ve gotten older I realized that I don’t write the same way, or for the same reasons. My methods and process have altered with my new mental status. My verbiage, intonation, tone, voice and vocabulary changed radically due to the influence of multiple sources (other writers, writing groups, instructors, non-writing friends and that one time I had a mentor for 15 minutes) and partly because my vocabulary has matured. 
I was the kind of nerd who read the dictionary for fun. That’s why I say I don’t know what writing is anymore. With this many sources and influences I feel like I know less about the craft now than I did then. It used to be all instinct and young energy, now it’s more deliberate. Now that I think about it, that deliberate nature might be why I stopped writing in the first place. It was less fun and spontaneous. It became work instead of exploration which destroyed the essential uniqueness. I reasoned that taking a few classes, or finding a writing group — not so much to formalize my practice but to get me around people who still had that joy — might help me sort things out. To the degree that my experiment succeeded I found a few answers about what writing is and why I write. I’m sharing here with the hope that my discoveries might help you. Here’s what I found. ANSWER 1 — WRITING IS ABOUT CONNECTING TO THE DEEP PLACES It took me years to realize there are two types of writing — Surface and Deep. Surface is the banal stuff you do at work — emails, spreadsheets, reports, etc. Deep is what you’re doing when you make a personal journal entry, craft a heartfelt post on your blog, write a letter to someone you love or write fiction. As a child I wrote stories to help me understand my day to day life. My childhood wasn’t ‘bad’ in the traditional sense but there were a lot of stereotypical things that went on that made no sense to me. When I got confused about something I would write it down and use a story to help me figure out the event and my feelings. Everything I wrote from 9 to probably 17 was about coping with the world around me. A lot of it boiled down to a desperate need to feel accepted. At that age it made sense to focus on the external. I could see it and experience it first-hand. I wrote what I saw and built a story around it to help figure out why people did what they did. 
This was years before I was fully ‘aware’ of myself and the fact that I was dealing with depression, a term I never officially heard until I was 15. Connecting to strangers can be difficult and downright dangerous. We lived in the projects with some pretty ruthless characters. My introduction to one new boy was a punch to the face. I never was part of the crowd. Millions of pre-teens experience this. The only thing ‘special’ about me was writing. I thought telling a cool story would help me fit in. If you don’t know, people living in situations like that (hood, broke, no dad, dealers on every corner, moms with a half dozen small children, no job) only traffic in the real. Their daily experience is harsh. My stories frequently fell on deaf ears (on the good side) or nearly got my ass whipped (on the bad). For years I had the reputation of being a ‘liar’ because I made up stuff instead of always telling the ‘truth’. Still, I persisted because writing helped make the world a little less scary. I could do whatever I wanted in a story and the words ordered my chaos. I was driven to understand my world using the written word. ANSWER 2 — WRITING IS A NATURAL OUTGROWTH OF LOVE I speak fluent Monster and Science Fiction Movie. I couldn’t wait for Soul Train to be over on Saturday morning so the creature features could start. Chiller Theater’s title sequence used to scare the bejesus out of me. I went to the dollar theater every chance I got to see the latest rubber monster fracas and loved anything by the Shaw Brothers. Add in a healthy dose of Old Time Radio serials and it’s easy to see that I was doomed from the get-go. I was in love you see. Deeply in love with what I was seeing and hearing. I was transported each time a title card would display on the screen or a 50 foot spider walked silently up behind its unsuspecting victim. 
And don’t get me started on Godzilla… Seeing things on the screen or hearing them on the radio generated an automatic response in my mind to begin re-writing. I could sometimes see where things might have been clearer or even cooler. My imagination lit up like a nuclear Christmas tree and it was a blast. I tried to write what I was seeing in my head after watching a movie or hearing a radio story. The other part of that equation was that I genuinely loved people. I’ve always felt that we’re better as a unit than we are as bickering individuals and I’m also a bit naïve (or at least I used to be). That led me to want to share the stories I was writing. As I said before I felt like acceptance was just around the corner if I told the right story to the right crowd at the right time. ANSWER 3 — WRITING IS FOCUSED ENERGY AND THOUGHT Many people tend to drift through the day only semi-patient while waiting for it to be over. Many of us focus on getting ready for the weekend starting on Monday morning. I’m guilty of this. Unfortunately it’s super easy to sleepwalk through an entire life this way. Writing forces focus on the writer and the reader. I honestly think this is why people who love to read seem to go further in life than those who shun books. So long as the writing isn’t boring people will willingly go along for the ride. Do it well enough and they’ll repay you with money and hours of their lives. My experience has taught me that being awake and aware is imperative to good writing. How do you get focus? It’s simple really — You have to interact with real live people. You have to take what you learn about them (and through the interaction learn about yourself) back to the written page and do your best to make it live. Stories and articles are about and for people. I suggest that you learn people. Focus on what makes them tick. Learn how we manipulate each other and love each other. Bring that out with your words and you’ll have a major part of the formula. 
It’s like in business. People don’t buy from companies, they buy from other people. The stories, articles, and blog posts we produce MUST be about people and the way they interact with each other or their environment. My early stuff came from the struggle to cope with an event or incident. Someone had done something and I needed to understand and deal with it. As I got older I began to write in a way that was less instinctive because I was learning ‘the rules’. I worry that English classes and even creative writing courses sometimes damage creative minds by forcing them to rely on rote patterns. It definitely did a number on me. I no longer needed focus so long as I followed the rules. Please don’t get me wrong. I am a strong proponent of the 3-act structure. It’s imperative to understand story structure if you want others to be able to follow you. My problem lies with forcing a young and adapting mind into a set pattern before it’s formed its own voice. I truly believe that’s the reason you see thousands of articles and videos that have to explain to a writer how to find their own voice. Children do this kind of creative thinking naturally. Something in the process of education takes it away as we mature. Later on, when you remember that you have a creative soul, you end up having to be taught all over again how to re-capture your unique voice. When you sit down to write remember your best weapon is your mind. Bring everything you have to everything you do and you’ll succeed. Never half-ass it. Honestly, even if you don’t succeed with millions of book sales, being able to say that you were truly focused on producing your best work is a major win for you. ANSWER 4 — WRITING IS FREAKING HARD And in other news, water is wet… Hear me out. Getting ideas for stories is stupid simple. I’ve had 6 ideas in the last 40 seconds. Most of them are crap. Would they produce a decent book or article? Who knows? Maybe, maybe not. 
It’s easy to plan out a story or get some idea of how it should go, but the act of sitting down in a chair and not getting back up until you have words on the page is HARD. I said before that it takes a strong understanding of human nature to tell a great story. Someone who was really good at that was the recently passed Toni Morrison. She understood black women very well. Her stories didn’t read like simpleminded tales. There was a depth and pain there that not many of us can produce. That’s the kind of hard I’m talking about. Never mistake writing schlock for writing stuff that moves you. Something I’ve noticed since joining Medium is the people who get into their deepest feelings are the ones who seem to enjoy wide readership. If you can’t make yourself laugh, or feel the pain and wonder or, even better, bring yourself to tears with what you’re doing, how do you expect anyone else to feel those sensations? We’re all thinking, feeling, extremely social beings despite what the news would have you believe. We rely on each other even if we act like we don’t. The words we write reach out and you never know who you’re going to touch, entertain, or help. Never treat your words lightly. Writing is hard because it MUST be. Words carry weight and the impact can be devastating in the wrong hands. Those of us who do this owe it to anyone who might read to do the hard part up front. FINALLY Anyway, that’s some of what I’ve learned over the last two years. I finally understand that the more I learn, the less I seem to know. It’s my compulsion to write that drives me to put words down and continue to learn how to do it better — an endless Möbius loop. I hope that some of what I’ve said resonates. It took me a long time to write this article and it went through about 4 drafts. Even now I feel it could benefit from more polishing. That’s what I meant by hard. The work never seems finished. 
I could sit and do this for another week trying to make it perfect but at the end of the day it’s time to let it fly and see how it does on its own. Peace!
https://medium.com/the-book-mechanic/writing-comes-from-a-very-deep-place-d80085194be
['Ronn Hanley']
2019-09-17 22:28:16.978000+00:00
['Writing Life', 'Personal Story', 'Memories', 'Compulsion To Write', 'Writing']
The “Pharma Bro” Who Loved Me
Like the rest of the internet, I read the above piece earlier tonight. At first, I had to hold myself back from quoting basically the entire thing on Twitter. It’s so salacious and mind-blowing. We’ve all heard various stories of Stockholm Syndrome. Or people drawn towards prison inmates for various reasons. And in a way, this also reads like a fairly textbook case of negging (even just professionally, at first) by Martin Shkreli. Again, at first, this reads like a combination of all three, in a way. At the same time, this isn’t your typical no-name person getting ensnared in such a trap — or series of traps. This was a reporter at Bloomberg with a basket of serious bylines, who broke many of the early stories about Shkreli to the world. Did she just get in too deep and couldn’t find her way back? But the more I think about it, the more I think this isn’t as scandalous as it might seem on the surface. I mean, in many ways, yes, this is nuts. But I think it’s okay to feel less bad for Christie Smythe (though certainly not for her family) because there’s a pretty clear path where she is actually playing a calculated game to leverage such coverage (and obviously the subsequent hoopla) to take control of the situation. Not just over Shkreli, whom, if the reporting is accurate, she has not talked to in months and has not seen in over a year.¹ But over her life professionally as well. As the article notes, at first she could not sell the book on the matter. She’ll clearly be able to now! This whole thing — from the premise, to the Elle photoshoot complete with a dress from “The Vampire’s Wife” — is beyond bizarre. It crosses so many unsavory lines. But I’m not sure who — if anyone — to sympathize with. My guess would be that we’ll all find out soon enough in the subsequent drama that unfolds, likely extravagantly and loudly.
https://mgs.medium.com/the-pharma-bro-who-loved-me-5aee7cc80eb9
['M.G. Siegler']
2020-12-22 18:17:16.774000+00:00
['Romance', 'Martin Shkreli', 'Crime', 'Journalism', 'Christie Smythe']
On Writing Emotion
Almost every time I start working with a new client, they ask me for pointers on writing character emotion without falling into telling or cliché. As I wrote my answer, I realized it would make a great blog post because, let’s face it, writing emotion is hard. Here’s a technique I’ve come up with over the years that I hope you’ll find helpful. A good starting point for writing emotion. A good place to start is with the Emotion Thesaurus by Angela Ackerman and Becca Puglisi. The book catalogues the physical responses, mental responses, and sensations associated with each of a broad list of emotions. A lot of authors (including me when I first started learning this technique) stop there. That’s why you get a lot of anxiety described as sweaty palms & thumping hearts in books. But instead of just telling us the character’s palms are sweaty, try showing the character wiping her hands on her skirt or shying away from shaking hands with someone, hiding those sweaty palms behind her back. Show her wiggling an eyebrow because she’s in a cold sweat that’s tickling her as it drips down her face. It’s OK to do some physical cues — face getting hot, skin prickling, electricity running up the back of her legs — but don’t only do that. Once you have reviewed the entry/entries for the emotion you’re trying to convey, put yourself into the character’s body and conduct a character interview. Why are you doing what you’re doing in this scene? What does it mean to you? How does it make you feel? Then dive deeper, leveraging your own experience with these emotions: How do your legs feel when you’re scared/nervous/angry? How does your stomach feel? What gestures might you make (tugging your hair when nervous, biting your lip or the inside of your cheek, shoving your hands in your pockets, tugging at the bottom of your shirt)? 
Different characters may tend to feel emotions in different parts of their bodies, and this can be a great way to differentiate voices in a multiple-point-of-view story. Description also plays a role in getting emotion on the page. What your character notices about the world is influenced by how she’s feeling. For example, if I’m sad, and I look outside and see it’s raining, I might feel the rain is heavy and depressing and awful. But if I’m happy, I look out the window and see how the water glistens on the leaves or how the intense green reminds me of my honeymoon in Belize. So you write the emotion not by putting feelings on the page, but by showing how the character’s feelings (and their backstory) influence how they perceive everything in the world around them. The details you as the writer choose will help convey the character’s emotions without ever naming that emotion on the page. Same goes for dialogue. An easy crutch to fall back on is using dialogue to convey emotion such as, “Mom, you make me so angry when you talk to me like that!” I’m not saying you can never do that. In fact, it can be very effective, especially when it’s more voicey than my example, but make sure it’s not the only way you’re conveying emotion. Even with all of these techniques, your characters may still not come alive on the page without sharing their internal thoughts and feelings with the reader. And yes, this can sometimes include a little bit of telling. Consider this excerpt from debut author Gita Trelease’s stunning Enchantée: “Aurélie, what are you saying?” Lazare said, as if offended on Camille’s behalf. “She doesn’t care. Husbands aren’t any good unless they’re gone — which you would know, Lazare, if you were required to have one.” His face clouded. “My apologies.” “Not at all,” Camille said, using her best imitation of Aurélie to cover the confusion she felt. “But I wonder if it’s not time for me to go.” She counterfeited a pretty yawn. 
“It’s almost morning and I’ve a long ride back to Paris.” — pg 225, Enchantée Both “as if offended” and “confusion she felt” are tells, but they’re so judiciously used with gestures (the counterfeit yawn), facial expressions (his face clouded), and dialogue that the emotion of the scene is clear. If you can identify the emotions you want to convey, and then convey them with a mix of gestures, physical sensations, description, internals, and dialogue, you’ll be well on your way to writing emotion that will keep your reader turning pages. Next time you’re reading one of your favorite authors, pay attention to how they do this. Two of my current favorites, Leigh Bardugo and Maggie Stiefvater, are masters of showing emotion without naming it on the page. Their styles are very different (Bardugo is more lush and Stiefvater more sparse in style), but they both end up delivering gripping stories in part because of how they write emotion. What tips have you learned about conveying emotion in your writing? Which authors do you think do it particularly well? This topic could fill multiple books, so feel free to continue this discussion in the comments.
https://medium.com/no-blank-pages/on-writing-emotion-160f75f6b264
['Julie Artz']
2019-04-15 14:48:24.493000+00:00
['Editing', 'Writing', 'Craft', 'Fiction Writing', 'Writing Exercise']
This Is The First Brainchild Of Breaking Rainbow Productions
After a year of development, British writer/producer Stu Laurie and his husband Kevin have joined forces with American media company MADE, LLC to form Breaking Rainbow Productions. This brand is dedicated to telling stories from LGBT creators or about LGBT protagonists, and the first creation of this new label is this publication on Medium called “TheGAZE.” Find Us @lgbtGaze Part of the magic of this partnership has been the exchange of ideas between the founders of MADE (Kristen & Matt Berman) and Stu Laurie. As a gay writer, Stu brings a personal connection to the source material. Matt and Kristen believe that talent is valuable regardless of its creators’ identity. They have often been asked, “How’d a straight couple end up making gay movies?!” The truth is, they didn’t. Matt and Kristen sought out talented people who could help them fulfill their mission of producing innovative media. Stu and his husband have a dedication to telling stories from the perspective of LGBT people and unearthing talent from all over the world. To bring a new perspective to issues of LGBT rights and identity, this publication will explore LGBT issues throughout history. From famous LGBT figures throughout time who accomplished great feats to the current legislative battles around the world which affect LGBT people, we will take an investigative & creative approach to finding newsworthy stories. Stay Tuned… Original content from TheGAZE is right around the corner! Breaking Rainbow Productions has an exciting group of projects in development as well. To any new writers, directors, producers, or artists that are looking out for opportunities, we are working diligently to provide them. From the BRP team, we look forward to sharing amazing stories with you. ❤
https://medium.com/lgbtgaze/this-is-the-first-brainchild-of-breaking-rainbow-productions-1875670c6a41
['Breaking Rainbow Productions']
2018-06-10 21:28:44.227000+00:00
['Marketing', 'Entertainment', 'LGBTQ', 'Media', 'Philosophy']
The Tale of Harry Elephante
A long, long time ago, there was an elephant named Harry Elephante. Harry was known far and wide for his famous singing ability. Crowds of elephants would gather to listen to Harry croon his majestic tones and hit notes once deemed impossible by elephant musical historians. For Harry Elephante, life was good. The years rolled by, and Harry traveled the globe, performing for adoring crowds of not just elephants, but hippos, rhinoceroses, giraffes, and other woodland creatures who could appreciate a good tune. Harry always wore a smile on stage, but offstage was a different story. Not depressed, not dejected, simply aware that something was missing from his life. He didn’t know what this “something” was, and depending on his mood, he either pondered the thought or washed it away in a torrent of booze, loose elephants, and a copious drug habit. After one riveting performance in the Florida everglades, Harry sat backstage on his enormous, elephant-sized leather couch. The room was filled with an assortment of musicians, production folks, hangers-on, and groupies, but one pair of elephant eyes caught Harry’s attention. He told one of his security pandas to bring over the particular elephant from across the room. Her name was Brandi, and two years later, they were wed — on one condition: Harry had to clean up his act. Drugs and loose elephants were no longer on the menu. She could live with the booze. For Harry Elephante, life was good. A couple of years later, Brandi squatted underneath her favorite Banyan tree, trying to pee on a pregnancy reed. She stared at the reed for what seemed to be an eternity before the reed turned blue. She was pregnant! Harry was overjoyed and happily joined his wife, Brandi, watching every newborn elephant video and reading every newborn elephant book they could find. 
He joined all the newborn elephant parent groups online and participated in all the arguments over vaccinations, elephant rearing, daycare, and the best movie sequel of all time. The one thing Harry knew was he knew nothing. But no one truly knows anything, and that made his anxiety lessen a bit. Twenty-two long months passed until Brandi’s trunk awoke Harry. It was time! They raced to the Animal Cracker Hospital and let nature take its course. Brandi pushed and pushed and pushed some more until Harry saw the top of his baby elephant’s head coming out of Brandi’s trunk. The sight of his daughter on the precipice of life overwhelmed Harry, and tears of joy poured down his elephant face. *Note: This is exactly how elephants give birth. Please don’t ruin a good narrative flow by stopping here and googling, “How do elephants give birth?” and for God’s sake, don’t go down the elephant hole by watching elephant births on YouTube. You’ll never come back and finish the rest of this story* Anyway… The newborn elephant crawled its way out of Brandi’s trunk and into the tiny hands of their doctor. Brandi’s normal OB-GYN was an ostrich named Lucy, but she wasn’t available that day. Instead, a mongoose named George safely delivered their baby. George announced they had a baby girl and placed her on Brandi’s chest. Harry wept even more tears of joy and alternated between kissing Brandi and staring at his daughter, an elephant he had just met but loved more than anything else he had ever seen. And that’s when Harry heard the commotion. Suddenly the doors flung open, and a rich asshole named Yates, along with his college lacrosse buddies Chip, Yancy, Fieldwith, Spencer, and Thurston, stormed into the delivery room. Yates grabbed his assault rifle, dropped down into position, and fired, killing Harry instantly. His lacrosse bros cheered and took pictures of Yates standing next to his fresh kill. C’mon. Really? No — obviously, that didn’t happen. 
Do you think I would write this lovely tale and then end it on such a horrible note? Of course not! I’m not a monster. I’m not an elephant either — just a man. A man who believes hunting elephants is wrong, and anyone who does is probably a rich asshole with a jerkoff first name. Anyway… Back to the story. Brandi cradled their newborn daughter while the mongoose doctor finished cleaning up Brandi. One of the nurses on staff, a raccoon with a gambling problem, congratulated them and asked for the baby’s name. Harry and Brandi smiled and said at the same time, Chloe Shay. The raccoon nurse nodded and asked, “Shay? Like Shea Stadium?” Brandi shook her head no while Harry nodded his head yes. More time passed before Brandi looked up at her husband and asked if he wanted to hold his daughter. Harry once again nodded his head and held Chloe for the first time. He stared into her eyes, and that’s when everything clicked. He had found it. The something that was missing. What he had been searching for all his life. He was finally at peace and the happiest he had ever been, until the next day when he was even happier. Harry, Brandi, and Chloe were a family, and they lived happily ever after. The End.
https://medium.com/the-haven/the-tale-of-harry-elephante-44beff5adfcb
['Tom Starita']
2020-12-24 00:10:09.489000+00:00
['Flash Fiction', 'Writing', 'Fiction', 'Baby', 'Short Story']
Drug classification — on cAInvas. Training a deep learning model to…
Drug classification — on cAInvas Training a deep learning model to prescribe a drug based on the patient’s data. A prescription drug is one that requires a medical prescription to be dispensed by law. On the other hand, an over-the-counter drug is one that can be dispensed without a prescription. When it comes to the prescription of drugs, doctors look into various attributes of patient-related data before coming to a conclusion. This can have consequences varying from the efficacy of the medicine in the patient’s body to side effects caused, and incorrect prescriptions may in some cases lead to irreversible effects in patients (including death). To start with, can we train a deep learning model to prescribe medicines to patients based on their medical data? Read on to find out! Implementation of the idea on cAInvas — here! The dataset The dataset is a CSV file with the features regarding a patient that affect drug prescription, like age, sex, BP level, cholesterol, and sodium-potassium ratio, and the corresponding drug prescribed in each case. Preprocessing Balancing the dataset Looking into the classes and the spread of values among them in the dataset — There are two ways to balance the dataset — upsampling — resample the values to make each class count equal to that of the class label with the highest count (here, 91). downsampling — pick n samples from each class label, where n = the number of samples in the class with the least count (here, 16). Here, we will be upsampling. Dataset upsampling The replace parameter of .sample() is set to True to indicate that samples can be repeated in each class to achieve the given count. The df_balanced data frame has 455 samples, 91 of each class. Categorical variables The ‘sex’ column does not define a range and thus is one-hot encoded while changing from a categorical attribute to a numerical attribute. 
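The upsampling step described above can be sketched with pandas. This is a minimal sketch on a toy data frame; the column names and values are assumptions for illustration, not the notebook’s actual schema.

```python
import pandas as pd

# Toy stand-in for the drug dataset; the "Drug" label counts are
# deliberately imbalanced (4 vs 2). Column names are assumptions.
df = pd.DataFrame({
    "Age":  [23, 47, 35, 61, 50, 29],
    "Drug": ["A", "A", "A", "A", "B", "B"],
})

# Upsample every class (sampling with replacement, as the text notes)
# up to the size of the largest class.
max_count = df["Drug"].value_counts().max()
df_balanced = pd.concat(
    [grp.sample(max_count, replace=True, random_state=0)
     for _, grp in df.groupby("Drug")],
    ignore_index=True,
)
print(df_balanced["Drug"].value_counts().to_dict())
```

With the notebook’s real data (91 samples in the largest class, 5 classes) the same loop would yield the 455-row balanced frame the article describes.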
This means if there are n unique values in the column, an array of length n is created for each, where only the ith value is set to 1, with reference to an array that defines the indices of the column values. In many cases (mostly in the input columns), if there are n unique values, an array of length n-1 is created, as the extra column can be redundant for identifying the column value from the encoded array. This is achieved by setting the drop_first parameter to True in the get_dummies() function as shown in the code cell below. Since this column has only 2 unique values in the data frame, there will not be any difference between one-hot encoding and label encoding the column. The values in the columns Cholesterol and BP represent range-like values, as seen by the values below. These columns are label encoded instead of one-hot encoded, i.e., each value is replaced by a numeric value. Since this is a classification problem, the output, which is now an integer, should be one-hot encoded. One-hot encoding Snapshot of the dfx data frame Train-test split Using an 80–10–10 ratio to split the data frame into train-validation-test sets. These are then divided into X and y (input and output) for further processing. Train test split The training set has 364 samples while the validation set has 45 and the test set has 46 samples. Scaling the values A peek into the snapshot of the dfx data frame makes it evident that the columns have values in different ranges. The min-max scaler can be used to scale the values between the minimum and maximum values defined (min 0, max 1 by default). MinMax scaler The MinMaxScaler class of the sklearn.preprocessing module is used. 
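The encoding, splitting, and scaling steps above can be put together in a short end-to-end sketch. The toy data frame and its column names are assumptions that mirror the attributes named in the text, not the notebook’s actual data.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Toy frame; columns/values are assumptions mirroring the article.
df = pd.DataFrame({
    "Age": [23, 47, 35, 61, 50, 29, 40, 33, 58, 26],
    "Sex": ["F", "M", "M", "F", "F", "M", "F", "M", "M", "F"],
    "BP":  ["LOW", "NORMAL", "HIGH", "LOW", "HIGH",
            "NORMAL", "LOW", "HIGH", "NORMAL", "LOW"],
    "Drug": ["A", "B", "A", "B", "A", "B", "A", "B", "A", "B"],
})

# 'Sex' has no ordering -> one-hot encode, dropping the redundant column.
df = pd.get_dummies(df, columns=["Sex"], drop_first=True)

# 'BP' is range-like -> label encode with an explicit order.
df["BP"] = df["BP"].map({"LOW": 0, "NORMAL": 1, "HIGH": 2})

X = df.drop(columns=["Drug"])
y = pd.get_dummies(df["Drug"])  # one-hot targets for a softmax output

# 80-10-10 split: carve off 20% first, then halve it into val/test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

# Fit the scaler on the training set only, then transform all three.
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_val, X_test = scaler.transform(X_val), scaler.transform(X_test)
print(X_train.min(), X_train.max())
```

Note that only the training split is used to fit the scaler, so the validation and test values may fall slightly outside [0, 1]; that is expected and avoids leaking information from the evaluation sets.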
Since the training set is the only data we are allowed to see or work with while training the model (the other two are used to evaluate its performance), the MinMaxScaler object is fit on the train data and the fitted scaler is used to transform the data in all three datasets. The model The model is a simple one consisting only of Dense layers. Drug classification model The model is compiled using the cross-entropy loss function because the final layer of the model has the softmax activation function and the labels are one-hot encoded. The Adam optimizer is used and the accuracy of the model is tracked over epochs. The EarlyStopping callback function monitors the validation loss and stops the training if it doesn’t decrease for 8 epochs continuously. The restore_best_weights parameter ensures that the model with the least validation loss is restored to the model variable. The model is trained with a learning rate of 0.01 for up to 64 epochs, but training stops before that due to the callback. The model achieved 100% accuracy on the test set. In problems such as these, it is important to keep the accuracy extremely high (100%), as chances cannot be taken with a patient’s medication. The metrics The plot of accuracies The plot of losses Prediction Let’s perform predictions on random test data samples — Drug classification prediction Find the implementation of the print_sample() function in the notebook link above! Random test sample prediction deepC The deepC library, compiler, and inference framework are designed to enable and run deep learning neural networks by focusing on features of small form-factor devices like micro-controllers, eFPGAs, CPUs, and other embedded devices like Raspberry Pi, Odroid, Arduino, SparkFun Edge, RISC-V, mobile phones, and x86 and ARM laptops, among others. Compiling the model using deepC — Head over to the cAInvas platform (link to notebook given earlier) and check out the predictions by the .exe file! Credits: Ayisha D
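The early-stopping logic described above (monitor validation loss, stop after 8 epochs without improvement, keep the best epoch’s weights) can be mimicked in plain Python to make the behavior concrete. This is a sketch of the callback’s decision rule, not the Keras implementation itself, and the loss curve below is fabricated for illustration.

```python
# Sketch of EarlyStopping(patience=8, restore_best_weights=True):
# track the best validation loss seen so far; if it fails to improve
# for `patience` consecutive epochs, stop and keep the best epoch.
def early_stop(val_losses, patience=8):
    best_epoch, best_loss, wait = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:  # no improvement for `patience` epochs
                break
    # restore_best_weights would reload the weights from best_epoch here
    return best_epoch, best_loss

# Fabricated validation-loss curve: improves until epoch 2, then stalls.
losses = [1.0, 0.6, 0.4, 0.5, 0.45, 0.41, 0.42, 0.43, 0.44, 0.46, 0.5, 0.55]
print(early_stop(losses, patience=8))
```

In Keras the equivalent would be passing `EarlyStopping(monitor="val_loss", patience=8, restore_best_weights=True)` in the `callbacks` list of `model.fit()`, as the article describes.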
https://medium.com/ai-techsystems/drug-classification-on-cainvas-18e6471df32a
['Ai Technology']
2020-12-23 08:24:09.082000+00:00
['Tinyml', 'Machine Learning', 'Artificial Intelligence', 'IoT', 'Deep Learning']
We Should Care More About Campaign Finance
“I don’t care who [politician’s name] takes money from.” I saw this comment on a Facebook post recently, and it shocked me. I had to remind myself that not too many years ago, I might have said the same sort of thing. I knew that some politicians are corrupt, but I also believed that for a genuinely principled person, money would have no real effect on their choices. What changed my mind? Science. Some years ago, I went through a period of fascination with behavioral economics — which, for those of you who don’t know, is kind of like the Reese’s Peanut Butter cup of the social sciences: “You got your economics in my psychology!” “You got your psychology in my economics!” One of the books I read then was Dan Ariely’s The Honest Truth About Dishonesty, which has a chapter on ‘conflicts of interest’. He begins by describing a landmark 2010 study published in the Journal of Neuroscience, titled “Monetary Favors and Their Influence on Neural Responses and Revealed Preference.” The authors define ‘favor’ as a situation “in which one agent makes a gesture or provides a gift without any explicit expectation of reciprocity.” Like, say … a campaign donation. The study was constructed like this: First, participants were told that their participation fee (variously $30, $100, or $300) was provided by one of two art galleries. Next, the participants were shown a series of 60 paintings — each randomly paired with the logo of either the sponsor gallery or the non-sponsor one — while an fMRI machine ran brain scans. Finally, outside the scanner, they were shown the 60 paintings and logos again and asked to rate how much they liked or disliked each piece on a nine-point scale. The fMRI scans showed that viewing the paintings paired with the sponsor logo caused significantly different brain activity compared with the non-sponsor paintings — especially in the ventromedial prefrontal cortex, an area of the brain essential for social decision-making. 
In the second part of the study, participants said they liked the art from their sponsor gallery more, irrespective of the style of painting. What’s more, it turns out that the higher the participation fee, the more intense the unconscious favoritism for the sponsor gallery. The effect on both brain imaging and rated preference was smallest at $30, larger at $100, and very large at $300. Yet when participants were asked whether the sponsor’s logo had any effect on their art preferences, every single one answered “no”. This last part was the most important, in my view — that our preferences and choices can be — will be — demonstrably affected by ‘favors’ and ‘gifts’, even as we insist that we are being completely impartial. Our own sense of principle is no match for our unconscious mind. Ariely also talked at length in the chapter about the psychological techniques used by pharmaceutical reps to indirectly influence doctors, ranging from the dispersal of logo-encrusted pens to developing extended friendships outside of work. “Some reps would go deep-sea fishing or play basketball with the doctors as friends. Such shared experiences allowed the physicians to more happily write prescriptions that benefited their ‘buddies’. The physicians, of course, did not see that they were compromising their values when they were out fishing or shooting hoops with the drug reps; they were just taking a well-deserved break with a friend.” The exact same techniques are used by lobbyists on politicians, and if you imagine that politicians are somehow managing to be more honorable and principled than doctors, you … would probably be alone in that expectation. 
Governmental lobbyists, says Ariely, “spend a small fraction of their time informing politicians about facts as reported by their employers and the rest of their time trying to implant a feeling of obligation and reciprocity in politicians who they hope will repay them by voting with their interest in mind.” The Center for Responsive Politics explains it like this: If a lobbyist, a CEO, a union president, or a PAC director has supported your campaigns year after year, then comes knocking on your door seeking help with legislation, how can you not at least listen? It’s only natural. It’s only human to try to do favors for people who’ve done favors for you. This doesn’t mean that every politician is on the take, or that his or her vote is up for sale to the highest bidder. Nearly all lawmakers elected to Congress are committed to a core set of issues they care about deeply and would never compromise on. But what about the many other issues Congress deals with that fall outside those bedrock issues? Can a one-two punch of a lobbying effort plus campaign contributions move a politician’s vote from yea to nay, or prompt him or her to add an amendment to a bill, or seek an ‘earmark’ appropriation for a generous supporter, or write a letter on their behalf to the federal agency that’s causing them trouble? “Whatever issue brought you here today, I guarantee if there’s a decision to be made in Washington, it’s been touched, pushed, massaged, tilted over, just a little, so the folks with money do better than everyone else.” — Senator Elizabeth Warren Sunita Sah is a research psychologist who studies how professionals alter their behavior as a result of conflicts of interest. In one overview paper for The Journal of Law, Medicine, and Ethics, she concludes, “Professionalism offers little protection; even the most conscious and genuine commitment to ethical behavior cannot eliminate unintentional, subconscious bias.”
https://karawynn.medium.com/we-should-care-more-about-campaign-finance-aa4543da7f47
['Karawynn Long']
2020-03-02 21:37:46.515000+00:00
['Politics', 'Election 2020', 'Psychology']
Not Just the Drag Queen Candidate: An Interview With NYC Council Candidate Marti Cummings
What makes you the person you are today, and what led you to run for public office? I grew up on a farm that’s been in my family for 200 years, meaning I have that instilled sense of “Americana” in me. I didn’t grow up with a lot of money. My dad was a salesman and my mom was an elementary school teacher. They raised me with the value of working hard. My grandparents lived through the Depression. I remember being a kid and asking one of my grandmothers for money; recognizing the value of a dollar because of what she had lived through, she would say, “No, you have to do chores and earn it.” Putting effort and value into my work has shaped who I am. I always had the dream of living in New York City. I always knew what I wanted, and I knew it would take a lot of hard work to get it. My aunt lived in Tarrytown, so when I was growing up we would go visit. I graduated high school on June 3rd, and on June 23rd I was living in New York City at 17 years old. I got sidetracked with addiction and then got sober almost 9 years ago. That instilled spirituality in my value of hard work. I’ve had a drag career for almost ten years now, which in itself takes a lot of hard work. I’m fortunate to have a team now, but for the longest time I was my own manager, I was my own agent, I was my own… flyer maker, even. I entered politics as an outsider, which I knew was also going to take a lot of hard work. I worked very hard to help create the HKDems [Hell’s Kitchen Democrats] and eliminate the McManus Club, which was one of the oldest political institutions in the state. I know as a candidate that I have to show that I am in this for the right reasons. I’m not just the “drag queen candidate.” I’m the candidate who’s going to work for each and every person, because those are the values that have been instilled in me. I love New York, I love my community, and I love my neighborhood. 
I’m ready to expand the work that I have already been doing for the people of this city, and take it to the next level as an elected representative. Your run taps into a lot of the mental and emotional aspects of the early Queer liberation movement here in New York, and the nation as a whole. What does it mean to you to be one of the people to carry forth this historic Queer energy? I look at leaders like José Julio Sarria, who was a drag queen and ran for public office in San Francisco in 1961. This was years before Stonewall happened, at a time when that type of action from that type of person wasn’t really heard of. José went on to start the Imperial Court System, which raises tons of money for LGBTQ+ charities and organizations. Today I look at people like Andrea Jenkins, who’s a trans woman of color on the Minneapolis City Council. I look towards people here in the city like Jimmy Van Bramer, who’s been a council member and is now running for Borough President in Queens. I also look at Danny Dromm, who’s the head of the LGBTQ+ Caucus, and Melissa Sklarz, who’s an incredible trans woman who ran for the State Assembly seat in Queens and does so much work. These are Queer leaders who have been doing work in politics for a long time. For me, it’s a big opportunity to spread this type of Queer awareness, since I am the first drag queen to run in New York. When I win, I’ll be the first drag queen in office here in New York. Some can say that’s historic, but for me, the impact comes from when I was deciding to run. I reached out to a very popular Congresswoman who we all know and I asked for her advice. She said, “Listen, at the end of the day you want to win, but you have a demographic of young, Queer people watching you who will be inspired and feel a part of the [political] process in a way they have never felt before.” People shouldn’t run for office for recognition. 
They should run for office because they want their voice to be heard and because they want to be a part of the process. They want leaders from outside the political establishment, people whose day-to-day lives are actually affected. I’m not a traditional candidate, and I get messages every day from young people across the nation. I barely have time to cry over these messages, which are both positive and full of painful experiences, before a second later going out on the street to blast my message on a megaphone. Though it’s overwhelming, I am so grateful, and it is so rewarding. When people are looking up to you, there’s pressure to do well and you feel kind of isolated, but what keeps me going is the aim of helping these young people have a brighter and better future and fighting for the opportunities for them to grow and succeed. I’m fighting for the LGBTQ+ community, but also for anyone who has been marginalized and doesn’t feel like they are a part of the system. I want to bring these affected people in, and I want us to work together. What would you say is the most pressing issue occurring in the city right now? Issues vary from district to district. Though I’m running for District 7, it’s still important to see what issues the other districts are dealing with as well, because what I will be voting on will affect the city as a whole. A big issue in my district currently is pedestrian and bike lanes. Every day you hear and read about people dying on the street because they were hit by a car, or even hit by a bicycle. We drastically need to work on our street system. They just did great work on 14th Street by clearing out the cars and putting bus lanes in. This is easing up traffic and is better for the environment. This is great, but now we need to expand it to the rest of the city. Combating climate change is also a pressing matter, and work on that starts in our own backyards. 
Making sure our streets and parks are cleaner, cutting down on plastics, increasing green space, and addressing all forms of pollution. Rates of cancer and asthma are pretty high in my district, and that’s due to an increasing number of people who are unable to afford adequate healthcare coverage. As the progressive movement expands and moves forward, do you think the current New York City Council is moving forward with it, or lagging behind? I think the City Council has been doing a great job. I think Speaker Corey Johnson is an incredible leader, and I love him very much. He’s a good friend of mine. With that said, all good work can be expanded upon and done better. The Council just voted recently to shut down Rikers Island. That’s great news because Rikers is horrible. However, now they are allocating all of this money to build smaller, borough-wide jails. We should be taking that money and investing it in our homeless population. I’m a big believer in rehabilitation over incarceration. I’m an addict, I’m an alcoholic, and I could easily be one of those people living on the streets. We need to invest in safe injection sites in the city because there is no reason we shouldn’t have that here. If people are going to continue to build these multi-million-dollar mega penthouses that are going to be left vacant, why wouldn’t the city invest in housing for homeless people? Mayor de Blasio has really let us down on the issue of homelessness. I hope City Council really starts to work on those issues, and if not, then I’ll shake it up and get it done. There’s no reason to have almost 70,000 people on the streets each night in New York City. That’s what we’re dealing with. Do you think the support for your campaign is coming from the progressive movement as a whole or mainly people who have supported you in your drag career? People see you on “The X Change Rate” for example, as an activist while also in drag. Who is your base? 
I think for a lot of people the latter is the case. People tell me, “Oh my god, I’ve known about your drag for a long time, I come to your shows, I’m all in.” But there are people who have never seen me in drag who watched my campaign video, and then read about the work I’m doing and go, “Oh werk, I want to get behind this person, they share my values.” People feel represented for the first time. You have a Genderqueer, gender-neutral drag queen that’s coming into the political system for the first time. That is really shaking things up like never before in local politics, and that gets people excited. They say things like, “That’s badass and really amazing!” Like I mentioned before, I’m in this to win and I cannot wait to do this job, but I know, if I don’t win, that at least younger people have seen and feel represented, and now they are going to go out and do the hard work and make change themselves. Is there anything else you would like to mention or include? Free speech is absolutely a beautiful thing we have here in our country. However, we cannot write death threats to people and think that type of behavior is acceptable. After my TED Talk went out, I received such a massive negative response in the comments that they had to just disable the comments section for that talk. We need to change the societal narrative that enables this type of behavior. I’m lucky enough to have a good cry about it and then move on, but a lot of kids are cyber-bullied and it ends very differently because they are young and don’t have a grasp on the situation. Some of these kids take their own lives. That is an epidemic we need to address and work on. I don’t want to live in a world where people are being bullied to death. Additionally, at the end of the day, things like political party affiliation don’t matter, because we are all concerned with certain important matters. Is our rent going to be paid? Am I going to be paid what I’m worth for the time and work that I put in? 
Are we going to be able to provide for our family or our friends and put food on the table? Are we going to have healthcare coverage? We all want the same basic things. It’s about humanizing these issues and topics. When people say about me, “Oh that’s so cool that type of person is running”, I’m glad. I’m trying to put a human face on today’s issues. Initially, people get wrapped up in my drag queen or They/Them pronouns, but once they get to know me they know I’m here for the issues like anyone else. A lot of people hate door knocking [canvassing]. I love it. It puts a human face to who I am instead of just reading a leaflet that says something like “Drag queen running for office.” You get to meet the person on the other side. I hope I get people excited about my campaign so people then can decide to go out and join their community board, or volunteer for a campaign or charity they believe in, or even give a dollar to a campaign. Phone banking, writing postcards, anything. Politics is a huge team effort; the candidate is just a part of it. I hope that people feel drawn to this campaign and become a real part of it.
https://maxmicallef.medium.com/not-just-the-drag-queen-candidate-an-interview-with-nyc-council-candidate-marti-cummings-c2adcea92adb
['Max Micallef']
2020-12-06 04:55:40.900000+00:00
['Politics', 'New York City', 'New York', 'Interview', 'Government']
Now you can develop your own dApp, that will work on dApp Builder Platform
We have launched the Custom dApps development environment on the dApp Builder website. If you are a blockchain developer proficient in Solidity, HTML, and JavaScript, you can now create your own dApp, such as our Voting, Betting, Multisignature Wallet, or Custom Token dApps. We are also improving our marketplace to allow developers to publish their custom dApps there. Soon blockchain developers will be able to sell their own dApps in our marketplace for Ether or DAP tokens. In the meantime, you can try our development environment and start developing your custom dApps on the dApp Builder platform right now.
https://medium.com/ethereum-dapp-builder/now-you-can-develop-your-own-dapp-that-will-work-on-dapp-builder-platform-cd5748958454
['Dapp Builder Team']
2018-10-26 07:12:54.695000+00:00
['Token', 'Blockchain', 'Development', 'Ethereum', 'Dapps']
Suicide Is an Answer, But Not the Right One
I was scared. I could feel the effects of the pills. I couldn’t control my legs. I knew I was slipping. I no longer wanted to die, but it was too late. I wanted to be upstairs in bed with Flora and Zoey, not fading fast in a beanbag on the kitchen floor. My whole body went numb. Help me. I’m scared. I don’t want to go! The last thing I remember is wiping away the tears. Then everything went black. Every time I remember those last moments, I start to cry. As I write this on May 29, 2019, at 10:57 a.m., five years from the day, my shoulders are heaving with sobs I can’t stop. No one tells you that before you die, you realize you wish it weren’t over. If I had jumped from a tall building, I bet I would have spent my last seconds wishing I was in the arms of the people I love. The people who weren’t as lucky as I can’t look back with regret. They are gone. I think of how close I was to missing out on the joy and pain of the past five years. These have been the best years of my life, and I almost missed them. What got me to the point that I thought killing myself was the only way to stop my pain? The darkness of life When you suffer from a mental illness, the constant battle can become too much. I don’t think of myself as a strong person, and I could not take the bombardment of negative emotion. I tried to kill myself four times. The first three were nothing more than cries for help. I was never in danger of dying from my injuries. But I attempted. I messed myself up to the extent that I still carry the scars. I can trace the pain of those first three attempts with my finger. My arms are a roadmap through the worst parts of my life. But five years ago it was different. I had come halfway across the world to the Philippines to change my life. I thought I could leave everything behind and start fresh. But it was the same. I was still depressed all the time. I was asleep more than I was awake. When I was awake, I was anxious and close to panic. I couldn’t focus. 
The voices were so loud that they drowned out everything else. I couldn’t get the medication I needed. When I needed help most, nothing was available. If I had tried harder and asked more people, maybe someone would have helped me. But I had given up. Flora and I always fought. She couldn’t understand why I was always in bed. She couldn’t understand what was happening. She was angry. She lashed out. She didn’t know I was dying inside. Nobody could understand. Not Flora, not the doctors. If I had tried harder and asked more people, maybe someone would have helped me. But I had given up. I gave in to the voices and the suicidal self-talk. I no longer wanted anyone to help. I was tired. I wanted it all to end. At first I didn’t know how I was going to do it. I was scared. I didn’t know if I had the courage to actually take my own life. I spent a few days planning my exit. I searched Google for the least painful ways to kill yourself. I read blogs about suicide, brushing past the trigger warnings like they were nothing. If I was trying to convince myself suicide was the only way out, I was doing a very good job. I never did anything halfway. May 29, 2014 I woke that morning feeling good; so good I forgot about my plans for a time. I sat on the porch and smoked, drinking my coffee and closing my eyes against the bright sunlight. Flora came downstairs, raging about my smoking habit. Maybe I made the mistake of letting the smoke float into the house. But it was just the beginning. When she got started, she didn’t quit for hours. I let her yell until I couldn’t stand it, then went upstairs and put a pillow over my head. The plan to kill myself would go forward. Nobody loved me. Nobody wanted me. I couldn’t be a good father to Zoey. I knew my death would be hard for her, but in the long run it would be better than having a crazy father. Thinking about Zoey almost changed my mind, but I had already passed the point of no return. I went through the day in a dreamlike state. 
I would look at something and think it was the last time I’d ever see it. I sent notes to family overseas to check in and hopefully cushion the blow of my death. I was sad, but also relieved it would soon be over. Later that night, after more fighting with Flora, I kissed Zoey and put her to bed. I was crying, but I didn’t let Flora see because I didn’t want her to know I was feeling anything more than anger. I didn’t want her to help because I wanted to punish her. If Flora knew what I was thinking, she would have tried to help me. Even though she was angry, she loved me. She had learned to love me despite my illness and the hell I put her through. She was angry, but she also couldn’t imagine life without me. I went downstairs and wrote a 14-page suicide note. It took me until 1 a.m., but I finished. When I had written all I wanted to say, I knew it was time. May 30, 2014 I set up everything so I only had to press a button for my note to be published on all my social media accounts. I was ready. I went outside and had one last cigarette, savoring the smoke filling my lungs. I was scared, but I’d made up my mind, and there was no turning back. I grabbed every pill bottle I could find and emptied them on the table. These were pain pills and medication left over from unsuccessful drug trials. I don’t know why I kept them, but they were about to come in handy. Some of the pills were time-release, so I crushed them into powder. I got a big jug of water, took one last look around the house, and swallowed everything. I gagged repeatedly but kept it all down. I was numb. I sat in my beanbag and pressed the button that would publish my suicide note. I closed my computer and turned off my phone. About 15 minutes later, my stomach got very upset. I knew I’d have an accident if I didn’t do something, and the thought of them finding me covered with shit horrified me. I somehow got to my feet and started up the stairs to the bathroom. 
I was very dizzy and weak, but I made it to the toilet and did my business. The way down was much harder, but I sat down and took one step at a time. I made it back to my beanbag, willing myself not to throw up. I couldn’t move my legs. Fear crept up from the pit of my stomach and froze me in place. I started to cry. All I wanted was to be safe, tucked in next to Flora and Zoey. I wanted to take it back. The reasons I had for killing myself no longer made sense. I wanted to live! But it was too late. I was dying. Knowing I was dying made me cry even more. I wiped away the tears and knew no more. Afterthoughts I woke the next morning, and Flora managed to get the right kind of help. An ambulance took me to emergency. They put a tube in my nose and pumped charcoal into my stomach. I still have a scar on my nose from the tape holding the tube in place. I survived. In the hospital, I was allowed to realize what a wonderful life I had. So many people loved me, and they only needed to hear me asking for help. I wanted to punish everybody because I thought they didn’t care, but I was the one pushing them away. If you read this whole experience, I hope you understand everything can get better if you just ask for help. I know your mind is telling you no one loves you, but they do. Your mind is lying when it tells you the only thing you can do is end your life. Take it from someone who lived to tell: You will regret it if you try to kill yourself. Why would you take the last resort when all you have to do is ask someone, anyone, to help you? True courage is asking for help. It doesn’t take courage to kill yourself. It takes courage to live. Ask someone. Ask me if you have no one. Ask for help. We are waiting.
https://humanparts.medium.com/suicide-is-an-answer-but-not-the-right-one-f0062530aa1a
['Jason Weiland']
2019-07-11 18:07:54.306000+00:00
['Mental Health Awareness', 'Suicide', 'Death', 'Mental Health', 'Mind']
Catalyst Programme Week 3
The Catalyst Programme is a six-week training hosted by The Shortcut to prepare Finnish immigrants for starting or joining startups. This blog follows the October 2018 participants to show what the program is about, what they are learning, and who they are. In Week 3, former Catalyst participants mentored us on the A through Z of startups. Week 3 was a unique week for the Catalyst Programme. Rahul and Hanna handed over the reins to several graduates of the summer program cohort to lead us through founding a startup, from idea to legalese. As part of their Catalyst Programme, they distilled Y Combinator’s Startup School into a one-week curriculum, then tested the program on us! The experience was unlike other schooling I have received. There were no rules, no formulas, no if-then situations presented. Rather, the startup lessons were mostly colloquial; just founders sharing stories of their businesses at different stages. I shouldn’t be surprised, given what we have learned about startups. This isn’t math. There is no one path to success, or even a single definition of success. It all depends on you! As exciting and empowering as this realization is, it is also distressing. As Y Combinator founder Paul Graham points out, being a student, which has been most of my life, involves a lot of “gaming the system.” But it doesn’t work like that in startups. What makes startups successful (and potentially transformative) is the fact that they expressly operate to disrupt existing systems. An exciting approach, but one that requires a different way of thinking. Thanks to Pedro Cunha for his artistic vision! Mentors are one of the most effective tools to help figure out this approach. Though your path is ultimately your responsibility, a mentor is someone who can relate to your position and reflect on the feelings, thoughts, and decisions that led to their next steps. A mentor, much like the Y Combinator curriculum, doesn’t give you the answer — there is no answer. 
But they have the experience and insight that can contextualize your impending decision. To better understand mentorship, I reached out to the previous Catalyst participants who prepared the Startup School curriculum. As people who completed the same program, they stood where my cohort and I stand today, so they are the perfect resources to understand the relationship between the Programme and mentorship. I also asked a founder experienced in mentoring startups to complement their insight. Here is what each had to say: Shila: Originally from Nepal, Shila is an accountant by trade pivoting to working in startups. Shila’s advice is to engage mentors by telling a story. It is not enough to come ask questions; cultivating meaningful relationships requires vulnerability and honesty to share your journey and the challenges you are grappling with now. The nature of your relationship and engagement should serve as the guide for determining what is relevant or not. Personal topics like relationship issues are not necessarily off-limits, but should be avoided if you’re meeting someone for the first time or asked for the meeting under different pretenses. Shila’s advice for Catalyst participants is not to underestimate how those they meet may help them. There are many types of participants with varied backgrounds, but embracing this diversity, and taking advantage of as many events as possible, opens your perspective to new approaches. This wisdom is indicative of Shila’s broader understanding of a mentor’s role: though a mentor-mentee relationship can be strictly confined to a professional context, it does not have to be and, indeed, the most powerful relationships are those that become more elaborate over time. Tasos: Tasos is a passionate learner and conversationalist who has lived in Finland for 8 years. 
He understands mentors as more strictly confined to a professional relationship; that is, a mentor is someone with more experience in a given field who has “been there, done that” and can share wisdom on how to get where you want to go. Therefore, a person should have many mentors. Tasos, for example, found mentorship in the collective wisdom of The Shortcut, which connected him to different events, people, and resources to revamp (or catalyze) his career search. The guidance Tasos sought was given by the entire network, a sort of communal mentorship, rather than any one individual. Of course, the mentor-mentee relationship isn’t a one-way street. Though mentees must be humble and curious, they can’t leave their values at the door. Challenge, prod, and ask during conversations. This shows you think critically, but also gives the mentor an opportunity to re-think their perspective and embrace different viewpoints. A mentee does not have the sole responsibility to learn. Anne: Anne Wanjuku Fagerstrom is a Kenyan-born mother of three who loves to bring calm to chaos at home and at work. She agrees that mentors are responsible for the mentor-mentee relationship, but stresses that this lies in making a mentee feel comfortable and supported. As the experienced party, mentors should identify individuals who are in a similar position to the one they were once in and offer their help. The extent of this help varies, but at its core it is an earnest and unique interest in the mentee. Mentees, on the other hand, must be proactive and clear: initiate meetings, implement the advice you receive, and seize any opportunities your mentor presents. Though the Catalyst introduced her to many potential mentors, Anne did not cultivate any relationships directly. She wasn’t ready. The program opened her eyes to an entirely new world called startups and turned her understanding of work, and herself, on its head. 
She had to understand her own motivations, skills, and challenges before seeking guidance from others. Her advice for participants, therefore, is to embrace change wholeheartedly. Say yes, be uncomfortable, and embrace new directions! Valentina: From snowy St. Petersburg, Valentina is a quadrilingual customer service professional. She, like Tasos, understands mentors in a more strictly professional sense. Having aligned professional interests is paramount to a mentor opening the “correct” doors, which is one of the main ways a mentor can help their mentee. Valentina believes the key to success is transparency and purpose. Being precise in your asks, proactive in your preparation, and diligent in your communication are all important strategies for helping the mentor help you, and they give clear direction to the relationship. If the relationship isn’t mutually beneficial anymore, then there is no harm in not continuing. It is all about clear expectations and clear communication. Tomi: Tomi Kaukinen is a former investment manager turned serial entrepreneur who has built several global media companies. Now on a much-needed sabbatical, he is spending his time off mentoring startups and founders. Credit to Wasim Al-Nasser. Tomi shared that mentorship wasn’t something he set out to do after ending his latest venture; indeed, he had no idea what to do. A conversation with a mentor (thank you, Anne Badan of The Shortcut) made him realize that all his years grinding away at his ventures had taught him valuable lessons and insights about entrepreneurship, startups, and professional paths. He’d always loved teaching, so he agreed to mentor at The Shortcut Lab, working to demystify the romanticized world of startups. There is a lot of idealization of the startup founder, but few people speak about how difficult and isolating the experience can be. 
When Tomi quit his secure asset management job to build mobile apps, he felt alone and directionless, unable to find a network that could relate to the stress of starting a personal venture. A veteran of the industry now, Tomi is an open book about his experience, hoping his honesty and enthusiasm make people feel comfortable using him as a source of wisdom and guidance. Tomi says that he hasn’t just taught as a mentor, but learned as well. Never has he had to listen so much in his life, to pause and relate to others’ viewpoints. As a founder, he didn’t have the luxury to listen; he had to pitch, sell, and lead. Tomi already knows this experience will make him a better entrepreneur in the future by increasing his willingness to search for talent or opportunity that may require more patience to discover.
https://medium.com/the-shortcut/catalyst-programme-week-3-a61b06ec473a
['Thomas Rocca']
2018-11-21 10:37:53.002000+00:00
['Mentorship', 'Event', 'The Shortcut', 'Professional Development', 'Entrepreneurship']
How to Make iOS App Secure From Screenshot and Recording?
Preventing Screen Capturing and Recording in an iOS App Thanks to the mobile era, we have mobile apps for everything these days. Every business, from a barber’s shop to a huge retailer, has an app so that it can be closer to its customers. On the one hand we enjoy this convenience, but on the other hand there is a risk of exposing a lot of confidential information while using these apps. This becomes especially vital when dealing with payments and other sensitive information. As developers of these apps, it is our responsibility to put checks in place to make sure privacy and security are not compromised. One way is to detect screenshot and screen-recording actions and then take an action, or inform the user so they can take appropriate action. Use Cases Here are some use cases where screen capture and screen recording can expose sensitive information: 1. Login information can be recorded Any app that requires a login to access sensitive information needs to make sure that only the intended person can log in. If screen recording or screen capture is possible on the login screen, it can expose confidential information. 2. Recording of streaming content Take a content streaming app, for example Netflix, which I think everyone is aware of. We pay a monthly subscription to stream content. If screen recording were allowed, one could record with the device’s recording option and watch the content later without even having a membership. 3. Payment information Any retail or banking app deals with payments and transactions. From a security point of view, we need to be watchful of any information being captured from the app to protect the user’s account. If we aren’t careful, it will lead to a major leak from the application, and secure transaction details will be compromised. Here is how you implement it: 1.
How to Detect a Screenshot As developers, we can detect (though not prevent) a screenshot very easily by listening for the userDidTakeScreenshotNotification notification, available since iOS 7. Let’s see the code in action:

```swift
import UIKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Register for the system notification posted after a screenshot.
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(didTakeScreenshot(notification:)),
            name: UIApplication.userDidTakeScreenshotNotification,
            object: nil
        )
    }

    // Triggered every time the user takes a screenshot.
    @objc func didTakeScreenshot(notification: Notification) {
        print("Screenshot taken")
    }
}
```

In the code above we add ourselves as an observer of userDidTakeScreenshotNotification, and the didTakeScreenshot function is triggered any time the user takes a screenshot. At this point, as developers, we have the opportunity to handle it in whatever way suits our app, e.g. show a warning message and then kill the app after informing the user. Here is a gif that captures it. 2. How to Detect Screen Recording To check whether the screen is being captured or recorded, all we have to do is check the isCaptured property on UIScreen (available in iOS 11 and above). Let’s look at the code sample below:

```swift
func isRecording() -> Bool {
    // isCaptured is true while the screen is being recorded,
    // mirrored, or streamed over AirPlay (iOS 11+).
    for screen in UIScreen.screens where screen.isCaptured {
        print("Screen is being recorded")
        return true
    }
    return false
}
```

We can check for recording by calling the isRecording method whenever our needs dictate: at different states of the view life cycle, or on a timer. Here is a demo that shows screen recording detection.
https://medium.com/swlh/how-to-make-your-ios-app-secure-from-screen-shot-and-recording-82b6aea26b33
['Shashank Thakur']
2020-09-15 22:35:47.613000+00:00
['Mobile App Development', 'iOS', 'Swift', 'iOS App Development', 'Security']
6 Programming Habits That (Surprisingly) Not Many Developers Have
Distinguish yourself from the herd Photo by Burst on Unsplash When it comes to being a good programmer, there are certain habits that immediately pop up in your mind. There are some habits that most programmers would agree are great to have, but in reality, most of them don’t have these habits themselves. As we all know, we are defined by our habits. To become a better programmer, we should try to build great programming habits. Here are six great programming habits that you should try to build to stand out from the pack.
https://medium.com/better-programming/6-programming-habits-that-surprisingly-not-many-developers-have-c58acd9a67f3
[]
2020-05-19 13:56:59.506000+00:00
['Programming', 'Software Development', 'JavaScript', 'Technology', 'Startup']
Inspiring Words That Don’t Inspire
Sometimes there’s just nothing there, by Christyl Rivers Everything happens for a reason Nope. Not everything happens for a reason. That raindrop did not fall to the left of the rose petal for a reason. The raindrop just fell. That dog did not pee on your berry bush for a reason. Silky just peed, because she’s a dog. Some things happen for a reason. But of all the sayings that are supposed to inspire you to think, “Oh, there’s a much bigger picture here!” this one is just lame. “God works in mysterious ways,” is similar, but not quite. Part of the trouble is the word reason. Reason can mean every action prompts a reaction, which is Newton’s idea. Reason can also mean applying logic. Reason can mean metaphysical magic beyond our reckoning is at work in your life. If we don’t understand it, the reason must be beyond reason. Miraculously, you will see the neighbor’s dog pee in your yard and you’ll recognize that it must be a sign that you are to be less critical, or not look down at others, or plant trees somewhere else, or any of an infinite number of interpretations. Let’s face it, though, folks. The main reason that the dog peed is that dogs pee. We want to make everything in the Universe about us. “Everything happens for a reason” seems to assure us that we are important enough for forces greater than ourselves to be looking out for even the tiniest event. Also, we want to know, or be able to interpret, why things, especially bad things, happen. They happen because life happens. How you respond to a crisis is up to you. Don’t give up any scintilla of power you have to a greater reason. Unless that reason is one that you know you are helping to shape and co-create along with all the forces in the Universe that also shape your fate. What other people think of you is none of your business This one is meant to tell you that you should be brave. Take risks. The idea behind this phrase is sound, but people often can’t figure it out.
They abuse, and sometimes limit, its power. Of course, Jesus, Joan of Arc, Gandhi and Greta Thunberg cared/care about what people think about them. Most heroes are reviled before they are honored. They care, but they act anyway. Being courageous and doing the right thing is really, really hard. But they did it. This phrase is misused by people who think it means that you are free to do whatever you want and not be deterred by the judgment of others. Let’s say you park in a spot reserved for someone else. Your “cause” is correct; you know the person who is ‘entitled’ to this spot is just a jerk. She is not really entitled, in your eyes, to this spot any more than anyone else. But, really, you just affected everyone else who will be impacted by your misguided behavior. We are called not to judge others, and this phrase turns that thought on its head. Do care about what others think, because we all have to live as harmoniously as possible in a shared world. Do take risks, but please, think about all the ramifications. If your cause is truly greater than any resulting turmoil or harm, then go forward blessed with courage. A more straightforward thought would be: Don’t be deterred from doing what you know is right, but don’t be selfish about it, either. All life is suffering This is the first of the Four Noble Truths as taught by the Buddha. However, there is an issue, first of all, in that he never said it. The ancient Sanskrit word referring to “suffering” is Duhkha, also spelled Dukkha (and there may be more variations as well). “Life is Duhkha.” This word does not translate at all well into English, or any other language, so its meaning is very distorted. All life is Duhkha. All life has dissatisfaction because of our constant wanting of something other than the ever so predictable and unavoidable Duhkha. “All life is suffering” certainly doesn’t sound inspiring, does it? The teaching meant that suffering is part of life.
It is talking about dissatisfaction, and yes, it also refers to the many kinds of suffering people impose upon themselves and what they experience from external circumstances. It confuses people, rather than inspiring them, because it is not about the simple fact that all life also includes joy, love, elation, and beauty. Of course these counterweights to suffering are equally important in enlightenment, but this phrase is not concerned with them. When thinking that all life is Duhkha, we can imagine that thoughts of dissatisfaction will rise and fall away from our mindfulness (or lack thereof). If we are mindful of our attachment to a constant need for something other than what we have, we don’t always have to lose out because of our obsessive attachment. The intent of this phrase has no flaws at all. Yet some people say “All life is suffering” when their favorite shampoo is all gone. And some people say it when something tragic happens. I think people often use it in jest. And the phrase is just, what can I say: dissatisfying, for this and many other situations. It’s lost its original punch, and I cringe when I hear it, because it means so very much to so many people that it really doesn’t mean much at all anymore. Inspiring conclusion I could write a book on all the words we toss around every day without thinking about what we say. Maybe I’ll do that. Stay tuned to see if there is actually a guiding reason behind why some things DO happen for a reason, such as: I need to sell a book.
https://christylrivers.medium.com/inspiring-words-that-dont-inspire-fc18989d5fe7
['Christyl Rivers']
2019-12-30 19:58:41.581000+00:00
['Life Lessons', 'Writing', 'Humor', 'Inspiration', 'Philosophy']
Nimiq and Kindhumans
Blockchain tech for charity campaign transparency Introducing Kindhumans Kindhumans celebrates the good in humanity by creating a community around kindness, promoting conscious consumption, and always giving back. Co-founded by Suzi and Justin Wilkenfeld and a team of former GoPro, Live Nation/Feld Entertainment, and Reef alumni, Kindhumans is building a trusted platform and destination for the community of conscious consumers and causes promoting a healthy and sustainable lifestyle for the planet. The organization’s goal is to promote kindness through the celebration of people, organizations, and brands that are helping shape a better future through better products, enriching content, education, events, and grassroots community building. Kindhumans will leverage partnerships through collaborations with various causes, athletes, experts, and influencers to create the greatest impact possible by always giving back and looking forward to a better, more positive and more sustainable future. “Kindness. Pass it on.” Campaign The “Kindness. Pass it on.” campaign is an online giveback program that aims to spread kindness and raise an initial goal of $1M for good causes. In hopes of growing the campaign and the kindness movement, Kindhumans’ call-to-action, or “Kindness Challenge,” encourages participants to take three kind actions that result in a kinder world and have a measurable, positive impact. The campaign steps are: Pay it forward by recognizing the kind humans in your life by sending them a message of thanks or kindness along with a Kindhumans sticker pack that the company calls a “badge” of kindness; Give back to a cause with a direct and measurable impact through every purchase made.
Current partnerships with accredited charitable organizations include: Leave No Trace (educating kids about the environment), National Forest Foundation (promoting the health of National Forests), and 501CTHREE (providing clean drinking water to residents of Flint, MI); Pass it on by nominating three friends to take the challenge and grow the kindness movement! The process is fun, family-friendly and helps strengthen the positive bond in our global community by connecting people through simple, thoughtful acts of kindness. Cryptography and transparency Being kind, collecting donations and pursuing a greater purpose is all about trust, and trust is fostered by transparency and traceability. Nimiq can help. Transactions stored on the blockchain are effectively immutable, and each transaction can also carry additional information that is immutable as well. This fundamental feature of an open and decentralized blockchain will help Kindhumans achieve cryptography-based transparency. Information cannot be altered or deleted going forward, thus allowing the creation of an immutable reference. Cryptography, created to encrypt data, is now part of fostering transparency. How beautiful is that? Kindhumans will record the hash of published reports on the Nimiq blockchain and provide a link on their transparency page so that the public can verify the genuine, untampered nature of each report. Kindhumans and Nimiq want to give back In wanting to “give back”, Kindhumans’ and Nimiq’s spirits are well aligned. Since its conception, Nimiq has been built with the mindset of helping others through the creation of a better kind of money. As part of this effort, Nimiq supported a proposal on bringing cryptocurrencies for day-to-day transactions to communities disconnected from the global banking system, giving them the ability to transact with the world without needing to rely on a middleman. You can read more on the crypto adoption proposal or visit cryptoadoption.io.
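The verification flow described above can be sketched in a few lines of Python. This is a minimal illustration, not Nimiq’s actual tooling: the report text is made up and the way the digest would be fetched from the chain is left out; only the hash-and-compare logic is the point.

```python
import hashlib

def report_hash(report_bytes: bytes) -> str:
    # Hash the exact published report file; any edit changes the digest.
    return hashlib.sha256(report_bytes).hexdigest()

def verify_report(report_bytes: bytes, onchain_hash: str) -> bool:
    # The on-chain copy of the hash is immutable, so a match shows the
    # report has not been altered since the hash was recorded.
    return report_hash(report_bytes) == onchain_hash

# Hypothetical report; the digest would be embedded in a blockchain transaction.
report = b"Kindhumans transparency report, Q3: $12,345 donated"
recorded = report_hash(report)

print(verify_report(report, recorded))          # genuine report
print(verify_report(report + b"!", recorded))   # tampered report
```

The same check works for any file format, since only the raw bytes are hashed.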
To further manifest the spirit of “giving back”, Nimiq has immutably committed 2% of the total coin supply to supporting good cause initiatives with high social or ecological impact. Together, with purpose Kindhumans is aiming to build a marketplace for ethical products guided by the principle of delivering a more responsible, mindful, and holistic consumer platform. These plans offer scope for joint growth, as Nimiq is committed to providing the best non-custodial crypto payment experience and will prioritize projects with a greater purpose. This would equip Kindhumans’ mindful platform with the means of accepting the most modern form of payment. Nimiq is excited to support Kindhumans on various levels and looks forward to making a positive impact wherever we can.
https://medium.com/nimiq-network/nimiq-and-kindhumans-b24247e19f78
['Team Nimiq']
2019-08-14 02:51:10.921000+00:00
['Sustainability', 'Ecommerce', 'Blockchain', 'Nimiq', 'Cryptocurrency']
Business Change Marketing: Grab The Long Tail and Hold On!
What if I told you to pay more attention . . . pay lots of attention to the “long tail”? Would that mean anything to you? If yes, keep reading. If no, keep reading and pay very close attention, because this is the single most important issue for your business’s growth, and your business change, that you will encounter. Any time. Ever. It’s that important. What’s the long tail? As the picture in your mind suggests, it’s a connection to something that you can grab from far away. And why does it matter? Here’s the example: One of the things that this company does is computer support. Another thing we’re very good at is management consulting. And then there’s the place where those two items converge . . . and they do. So let’s say we decide that the world “needs” to know that, and we take steps to drive traffic to this web site. Let’s say those steps include concentrating on the phrases computer support and management consulting. Bad idea. “See you on the unemployment line” bad. Why? Because those are broad phrases. They mean too many different things to too many different people. They’ll attract attention, for sure, but we’d be competing with many, many other companies trying to do the same thing, meaning both that our message will get lost, and that the people we attract won’t necessarily be the people we wish to land on our virtual doorstep. If you believe that the only bad attention is no attention, that doesn’t matter. And if you believe that you’re good enough at weeding out the people who waste your time to make attracting the wrong attention a mere byproduct, then I applaud you. But what if you buy the phrase “computer support” from Google, pay six dollars every time someone clicks on your link, and manage to get three thousand people to your web site, with twenty-nine hundred and ninety-seven of them being people who simply won’t ever buy your services? And that’s the GOOD outcome?
The long tail is where you get lots of people to pay attention to you by grasping at the “tail” you hang out there. For example, if you search for “kindle censorship” on Google, the very first result points you to a blog post here from a few months ago. It’s part of our long tail, and if you clicked on either of those links you helped extend that tail for us. Is there a direct impact on our bottom line? No. And that’s what the long tail is. You’re out there. People notice you. And then . . . you make sure they stay around once you’ve gained their attention. Pay attention to your long tail. If you need help, we’re glad to provide it; Answer Guy SEO Consulting and Search Engine Optimization works. Just make sure you dangle that long tail out there if you want your business to keep growing.
https://medium.com/business-change-and-business-process/business-change-marketing-grab-the-long-tail-and-hold-on-4b2c850b0df4
['Jeff Yablon']
2016-12-19 00:22:40.034000+00:00
['Marketing', 'SEO', 'Business Process']
The Curse of the Codependent
Often when people talk about how loving their parents were, how loving their spouse is or how loving they are, they cite two things: caretaking and sacrifice. While these two traits are usually thought of as positive, they can be used in a negative context. A codependent is someone who constantly gives and sacrifices for others. They are people-pleasers who suffer from low self-worth, have weak boundaries and are often not true to who they really are in order to preserve relationships. Such people often get into relationships with narcissistic people who constantly take. Probably the best definition of codependency I’ve ever heard was from Noel Bell who wrote, “The greatest hallmark of codependence is that someone else decides how you feel about yourself.” I was floored reading that because it reflected exactly how I lived. Of course, not all codependents are the same. For example, some have better self-worth than others. However, they all suffer from a low level of self-worth that causes them to abandon their own needs for others’ needs. “Codependents confuse caretaking and sacrifice with loyalty and love.” — Ross Rosenberg The quote above highlights the issue for the codependent. They cannot see that caretaking and sacrifice, as they use it, is not love. At all. When they are giving and giving, it isn’t completely selfless. They are trying to get something out of it. Caretaking to a codependent is: “Oh, you poor thing. Let me make you into something ‘better’, and by ‘better’ I mean whatever it is I value.” One’s self-expression is only deemed as okay if it doesn’t bother the codependent. Sacrifice to a codependent is: “Look at what I do for you! And this is how you repay me?” The codependent will do things for people and try to use it as a token to redeem love. Some would call that conditional love. I call it business. I don’t really see how that’s love any more than it is emotional prostitution. This is largely how I was loved growing up. 
It is how I learned to love. Unfortunately, people have been at the receiving end of my own emotional prostitution. But as they say, awareness is the first step to recovery. So what are loyalty and love? Loyalty is, “I’m here.” Love is, “You are your own person. Go and be it.” With this in mind, there is no room for caretaking and sacrifice as the codependent uses them. Your romantic partner isn’t a child. You can offer support, but it is not your responsibility to bend over backwards for them, especially in order to receive love from them. Furthermore, if you think this person needs your help, that would imply that they aren’t the best fit for you, wouldn’t it? In that case, let them go and live their life. The deeper solution to curb codependent tendencies is to improve one’s self-esteem, to see that one is worthwhile and worthy of love as is. It doesn’t have to be purchased from someone if you give it to yourself. When you do that, you will stop looking for others to tell you how to feel about yourself and you won’t chase love anymore. You will not tolerate any and every kind of treatment, so your boundaries get reinforced. You will know that each individual is responsible for their own emotional wellbeing and not aim to make others happy. You will never feel disappointed in your decision to love others because you genuinely value them and require nothing in return.
https://alchemisjah.medium.com/the-curse-of-the-codependent-e9abdededa5b
['Jason Henry']
2019-03-18 03:50:50.536000+00:00
['Relationships', 'Love', 'Sex', 'Self', 'Psychology']
7 Easy Tips For Effective Content Writing
Visualmodo · Jan 16 · 4 min read Effective content writing is a complex mixture of accurate structure, high writing standards, unique information, and clarity about what is being presented. Whether the content is an entertaining blog post or an article published in a scientific journal, the same rules and content marketing trends apply. What makes any message powerful is the inclusion of a personal view and a hook that inspires the visitor right from the start. Even a little paragraph in a daily news report that stands out from the rest makes a difference, because if an author went the extra mile to analyze and dig deeper, it adds credibility. It is not the word count that earns trust but distinctive wording that has the psychological effect of a primary source. As virtual content grows and the average person has more access to argumentative posts with controversial opinions, it is crucial to provide due analysis and several key elements: a visual aspect, a personal writing style, and a good structure with a call to action between the paragraphs. Even complex posts should provide an interactive experience, where reading the content becomes more than exploring the text. 7 Easy Tips to Deliver Effective Content Writing In both marketing and non-commercial content, it is crucial to focus on personal experiences or a set of reasons that make the post, webpage or article unique. Even if the piece discusses a news report or a famous event, there should be more than a repetition of information that is already available. Great content has a style, with analytical work that catches the reader’s attention. Improve readability and add more images and interactive elements as you strive for effective writing delivery. Add sub-headings with bullet points to keep information condensed. Likewise, you may need to provide an emotional element to your writing.
Place several links or mental references that will make the target audience think of a particular experience. Include clear visuals. If there are statistics or a scientific background for subjects like healthcare and engineering, it is best to provide visual quotes that support an argument and reveal the main subject. The addition of YouTube videos can bring an element of customization, because it makes it possible to introduce more flexibility while still meeting the latest marketing trends. Always focus on the sentence that comes at the beginning, the introduction. Just as in a classic essay format, the most important information comes here. It is an allegorical book cover that helps people choose what to read or ignore. If you struggle with how to start writing or need a cover page for an assignment, the best approach is turning to a professional online who can help with custom content that is free from plagiarism, spelling mistakes, and other elements that should be avoided. Engagement and Keywords Tips For Effective Content Writing Proofread your content aloud. While it may take a while to eliminate the mistakes, getting your text checked twice helps you avoid repetitions, spelling errors, broken links, and sentences that sound weak. When you read aloud, it gives the impression of hearing the paragraphs from another speaker. If something sounds lengthy or not credible enough, you can spot the weak link and rewrite the content in a more conversational, vivid style. Use keywords. Successful content is always built upon powerful trigger phrases that help the search engines find your post or article when people enter matching phrases. Remember to avoid overfilling the sentences, because it can make the final text unreadable; use 2–3 keywords per paragraph. Engage your readers. Add surveys, statistics, questionnaires, and other elements that invite a user’s opinion. SEO specialists call this a call-to-action element.
This means that there is a high possibility of interaction between the author and a visitor. Self-Analysis and Good Topics Effective content writing does not mean that we need to follow every marketing trend. What works for a famous blog will not automatically work for you. The best part about being creative is self-analysis and the selection of good ideas. It works the same way as choosing among debate topics for middle school essays: only you know the entry that inspires you! Take your time to see what posts are the most popular and study what is in demand. However, try to add something that is not easy to find online. Do your best to advertise the content on social media and in regular posts with outside links. Even if you are just starting out, establish a dedicated social circle that will help you brainstorm ideas and discuss both victories and failures. Some things may not lead to success right from the start, because it takes time for effective content to make its mark. Good things come to those who wait, so content writing that is unique and strong will always pay off in the end. Content Writing Tips Author Bio If you want to find out how to become an excellent student or how to meet the challenges of college life, Connie is the one to ask. Exploring academic writing rules, formatting standards, and technical tricks, she knows the latest news. Friendly, caring, and easy to read, Connie is a perfect guide.
https://medium.com/visualmodo/7-easy-tips-for-effective-content-writing-8f57626fdea3
[]
2020-01-16 21:22:28.357000+00:00
['Content', 'Writing', 'Creative', 'Blogger', 'Creation']
Time Series Analysis, Visualization & Forecasting with LSTM
Statistics normality test, Dickey–Fuller test for stationarity, long short-term memory. The title says it all. Without further ado, let’s roll! The Data The data is the measurements of electric power consumption in one household with a one-minute sampling rate over a period of almost 4 years, and can be downloaded from here. Different electrical quantities and some sub-metering values are available. However, we are only interested in the Global_active_power variable.

```python
import math
import warnings

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
from statsmodels.tsa.stattools import adfuller, pacf
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error
import keras
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout
from keras.callbacks import EarlyStopping

pd.set_option('display.float_format', lambda x: '%.4f' % x)
sns.set_context("paper", font_scale=1.3)
sns.set_style('white')
warnings.filterwarnings('ignore')

# Jupyter magic to render plots inline:
%matplotlib inline

df = pd.read_csv('household_power_consumption.txt', delimiter=';')
print('Number of rows and columns:', df.shape)
df.head(5)
```

Table 1 The following data pre-processing and feature engineering steps need to be done: Merge Date & Time into one column and change it to datetime type. Convert Global_active_power to numeric and remove missing values (1.2%). Create year, quarter, month and day features. Create a weekday feature: “0” is weekend and “1” is weekday.
```python
df['date_time'] = pd.to_datetime(df['Date'] + ' ' + df['Time'])
df['Global_active_power'] = pd.to_numeric(df['Global_active_power'], errors='coerce')
df = df.dropna(subset=['Global_active_power'])
df['year'] = df['date_time'].apply(lambda x: x.year)
df['quarter'] = df['date_time'].apply(lambda x: x.quarter)
df['month'] = df['date_time'].apply(lambda x: x.month)
df['day'] = df['date_time'].apply(lambda x: x.day)
df = df.loc[:, ['date_time', 'Global_active_power', 'year', 'quarter', 'month', 'day']]
df.sort_values('date_time', inplace=True, ascending=True)
df = df.reset_index(drop=True)
df["weekday"] = df.apply(lambda row: row["date_time"].weekday(), axis=1)
df["weekday"] = (df["weekday"] < 5).astype(int)
print('Number of rows and columns after removing missing values:', df.shape)
print('The time series starts from: ', df.date_time.min())
print('The time series ends on: ', df.date_time.max())
```

After removing the missing values, the data contains 2,049,280 measurements gathered between December 2006 and November 2010 (47 months). The initial data contains several variables. We will focus here on a single one: the house’s Global_active_power history, that is, the household global minute-averaged active power in kilowatts. Statistical Normality Test There are several statistical tests that we can use to quantify whether our data looks as though it was drawn from a Gaussian distribution, and we will use D’Agostino’s K² test. In the SciPy implementation of the test, we interpret the p-value as follows: p <= alpha: reject H0, not normal. p > alpha: fail to reject H0, normal.

```python
stat, p = stats.normaltest(df.Global_active_power)
print('Statistics=%.3f, p=%.3f' % (stat, p))
alpha = 0.05
if p > alpha:
    print('Data looks Gaussian (fail to reject H0)')
else:
    print('Data does not look Gaussian (reject H0)')
```

We can also calculate kurtosis and skewness to determine whether the data distribution departs from the normal distribution.
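The same merge-coerce-drop preprocessing pattern can be tried on a tiny synthetic frame, runnable without the dataset. This is a sketch: the sample rows are made up, and the dayfirst=True flag is my assumption about the Date format.

```python
import pandas as pd

# Tiny stand-in for the household data; '?' marks a missing reading.
raw = pd.DataFrame({
    'Date': ['16/12/2006', '16/12/2006', '18/12/2006'],
    'Time': ['17:24:00', '17:25:00', '09:00:00'],
    'Global_active_power': ['4.216', '?', '3.520'],
})

# Merge Date & Time into one datetime column (day-first format assumed).
raw['date_time'] = pd.to_datetime(raw['Date'] + ' ' + raw['Time'], dayfirst=True)

# Coerce to numeric: unparseable entries like '?' become NaN, then get dropped.
raw['Global_active_power'] = pd.to_numeric(raw['Global_active_power'], errors='coerce')
raw = raw.dropna(subset=['Global_active_power'])

# Calendar features, plus the weekday flag: 1 = Mon-Fri, 0 = weekend.
raw['year'] = raw['date_time'].dt.year
raw['month'] = raw['date_time'].dt.month
raw['weekday'] = (raw['date_time'].dt.weekday < 5).astype(int)
print(raw[['date_time', 'Global_active_power', 'weekday']])
```

Using the vectorized `.dt` accessor here does the same job as the article’s row-wise `apply`, just faster.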
```python
sns.distplot(df.Global_active_power)
print('Kurtosis of distribution: {}'.format(stats.kurtosis(df.Global_active_power)))
print('Skewness of distribution: {}'.format(stats.skew(df.Global_active_power)))
```

Figure 1 Kurtosis describes the heaviness of the tails of a distribution. A normal distribution has a kurtosis close to 0. If the kurtosis is greater than zero, the distribution has heavier tails; if it is less than zero, the distribution has lighter tails. Our kurtosis is greater than zero. Skewness measures the asymmetry of the distribution. If the skewness is between -0.5 and 0.5, the data are fairly symmetrical. If the skewness is between -1 and -0.5 or between 0.5 and 1, the data are moderately skewed. If the skewness is less than -1 or greater than 1, the data are highly skewed. Our skewness is greater than 1. First Time Series Plot

```python
df1 = df.loc[:, ['date_time', 'Global_active_power']]
df1.set_index('date_time', inplace=True)
df1.plot(figsize=(12, 5))
plt.ylabel('Global active power')
plt.legend().set_visible(False)
plt.tight_layout()
plt.title('Global Active Power Time Series')
sns.despine(top=True)
plt.show()
```

Figure 2 Apparently, this plot is not a good idea: over two million minute-level points drawn at once are unreadable. Don’t do this. Box Plot of Yearly vs. Quarterly Global Active Power

```python
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 1)
plt.subplots_adjust(wspace=0.2)
sns.boxplot(x="year", y="Global_active_power", data=df)
plt.xlabel('year')
plt.title('Box plot of Yearly Global Active Power')
sns.despine(left=True)
plt.tight_layout()
plt.subplot(1, 2, 2)
sns.boxplot(x="quarter", y="Global_active_power", data=df)
plt.xlabel('quarter')
plt.title('Box plot of Quarterly Global Active Power')
sns.despine(left=True)
plt.tight_layout()
```

Figure 3 When we compare the box plots side by side for each year, we notice that the median global active power in 2006 is much higher than in the other years. This is a little misleading: if you remember, we only have December data for 2006.
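For a self-contained illustration of these diagnostics (on synthetic data rather than the power dataset), D’Agostino’s K² test, skewness, and kurtosis behave as described:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gaussian = rng.normal(loc=0.0, scale=1.0, size=5000)  # symmetric, light tails
skewed = rng.exponential(scale=1.0, size=5000)        # right-skewed

# D'Agostino's K^2 test: a low p-value means we reject normality.
_, p_gauss = stats.normaltest(gaussian)
_, p_skew = stats.normaltest(skewed)
print('p (gaussian) = %.3f, p (skewed) = %.3g' % (p_gauss, p_skew))

# Skewness near 0 and excess kurtosis near 0 indicate a normal shape.
print('skew:    %.3f vs %.3f' % (stats.skew(gaussian), stats.skew(skewed)))
print('kurtosis: %.3f vs %.3f' % (stats.kurtosis(gaussian), stats.kurtosis(skewed)))
```

The exponential sample comes out clearly non-normal (tiny p-value, skewness near 2), while the Gaussian sample stays close to zero on both shape measures.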
December, it appears, is simply the peak month for household electric power consumption. This is consistent with the quarterly medians: global active power is higher in the 1st and 4th quarters (winter) and lowest in the 3rd quarter (summer).

Global Active Power Distribution

plt.figure(figsize=(14, 6))
plt.subplot(1, 2, 1)
df['Global_active_power'].hist(bins=50)
plt.title('Global Active Power Distribution')
plt.subplot(1, 2, 2)
stats.probplot(df['Global_active_power'], plot=plt);
df1.describe().T

Figure 4

The normal probability plot also shows that the data is far from normally distributed.

Average Global Active Power Resampled Over Day, Week, Month, Quarter and Year

timeseries_plot.py

Figure 5

In general, our time series has neither an upward nor a downward trend. The highest average power consumption appears to be prior to 2007, but only because we have December data alone for 2006, and December is a high-consumption month. In other words, compared year by year, consumption has been steady.

Plot Mean Global Active Power Grouped by Year, Quarter, Month and Day

grouped_plot.py

Figure 6

The above plots confirm our previous findings. By year, consumption was steady. By quarter, the lowest average power consumption was in the 3rd quarter. By month, the lowest average consumption was in July and August. By day, the lowest average consumption was around the 8th of the month (the reason is unclear).

Global Active Power by Years

This time, we remove 2006.

pd.pivot_table(df.loc[df['year'] != 2006], values="Global_active_power",
               columns="year", index="month").plot(subplots=True, figsize=(12, 12),
                                                   layout=(3, 5), sharey=True);

Figure 7

The pattern is similar every year from 2007 to 2010.

Global Active Power Consumption on Weekdays vs.
Weekends

dic = {0: 'Weekend', 1: 'Weekday'}
df['Day'] = df.weekday.map(dic)

a = plt.figure(figsize=(9, 4))
plt1 = sns.boxplot('year', 'Global_active_power', hue='Day', width=0.6, fliersize=3, data=df)
a.legend(loc='upper center', bbox_to_anchor=(0.5, 1.00), shadow=True, ncol=2)
sns.despine(left=True, bottom=True)
plt.xlabel('')
plt.tight_layout()
plt.legend().set_visible(False);

Figure 8

The median global active power on weekdays appears to be lower than on weekends prior to 2010. In 2010, they were identical.

Factor Plot of Global Active Power by Weekday vs. Weekend

plt1 = sns.factorplot('year', 'Global_active_power', hue='Day', data=df,
                      size=4, aspect=1.5, legend=False)
plt.title('Factor Plot of Global active power by Weekend/Weekday')
plt.tight_layout()
sns.despine(left=True, bottom=True)
plt.legend(loc='upper right');

Figure 9

Both weekdays and weekends follow a similar pattern over the years.

In principle, we do not need to check for stationarity, nor correct for it, when using an LSTM. However, if the data is stationary, the network will find it easier to learn and should perform better.

Stationarity

In statistics, the Dickey–Fuller test tests the null hypothesis that a unit root is present in an autoregressive model. The alternative hypothesis depends on which version of the test is used, but it is usually stationarity or trend-stationarity. A stationary series has constant mean and variance over time; its rolling mean and rolling standard deviation do not change over time.

Dickey-Fuller test

Null Hypothesis (H0): the time series has a unit root, meaning it is non-stationary. It has some time-dependent structure.
Alternate Hypothesis (H1): the time series does not have a unit root, meaning it is stationary. It does not have time-dependent structure.
p-value > 0.05: fail to reject the null hypothesis (H0); the data has a unit root and is non-stationary.
p-value <= 0.05: reject the null hypothesis (H0); the data does not have a unit root and is stationary.

stationarity.py

Figure 10

From the above results, we reject the null hypothesis H0: the data does not have a unit root and is stationary.

LSTM

The task is to predict values for a time series given the history of two million minutes of a household's power consumption. We are going to use a multi-layered LSTM recurrent neural network to predict the last value of a sequence of values.

If you want to reduce the computation time, and also get a quick result to test the model, you may want to resample the data to hourly frequency. I will keep it at minute frequency.

The following data pre-processing and feature engineering steps need to be done before constructing the LSTM model:

Create the dataset; ensure all data is float.
Normalize the features.
Split into training and test sets.
Convert an array of values into a dataset matrix.
Reshape into X=t and Y=t+1.
Reshape the input to be 3D (num_samples, num_timesteps, num_features).

lstm_data_preprocessing.py

Model Architecture

Define the LSTM with 100 neurons in the first hidden layer and 1 neuron in the output layer for predicting Global_active_power. The input shape will be 1 time step with 30 features.
Apply 20% dropout.
Use the MSE loss function and the efficient Adam version of stochastic gradient descent.
Fit the model for 20 training epochs with a batch size of 70.
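The exact code lives in the gists above; as a rough, self-contained sketch of the windowing and reshaping steps on synthetic data (create_dataset and the variable names here are my own, not necessarily the author's):

```python
import numpy as np

def create_dataset(series, look_back=30):
    """Slide a window over the series: X holds `look_back` consecutive
    values, y holds the value that immediately follows (X=t, Y=t+1)."""
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back])
    return np.array(X), np.array(y)

# Synthetic stand-in for the minute-level Global_active_power series.
rng = np.random.default_rng(0)
values = rng.random(1_000).astype('float32')  # ensure all data is float

# Min-max normalize to [0, 1] (the tutorial uses a scaler for this step).
scaled = (values - values.min()) / (values.max() - values.min())

# Train/test split, then window and reshape to 3D for the LSTM:
# (num_samples, num_timesteps, num_features), i.e. 1 time step of 30 features.
train_size = int(len(scaled) * 0.8)
train, test = scaled[:train_size], scaled[train_size:]
X_train, Y_train = create_dataset(train, look_back=30)
X_test, Y_test = create_dataset(test, look_back=30)
X_train = X_train.reshape(X_train.shape[0], 1, 30)
X_test = X_test.reshape(X_test.shape[0], 1, 30)
print(X_train.shape, Y_train.shape)  # (770, 1, 30) (770,)
```

From here, the model described under Model Architecture would be, roughly, a Keras Sequential network of LSTM(100, input_shape=(1, 30)), Dropout(0.2), and Dense(1), compiled with loss='mean_squared_error' and optimizer='adam' and fit for 20 epochs with batch_size=70; the lstm_timeseries.py gist has the authoritative version.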
lstm_timeseries.py

Make Predictions

train_predict = model.predict(X_train)
test_predict = model.predict(X_test)

# invert predictions back to the original scale
train_predict = scaler.inverse_transform(train_predict)
Y_train = scaler.inverse_transform([Y_train])
test_predict = scaler.inverse_transform(test_predict)
Y_test = scaler.inverse_transform([Y_test])

print('Train Mean Absolute Error:', mean_absolute_error(Y_train[0], train_predict[:,0]))
print('Train Root Mean Squared Error:', np.sqrt(mean_squared_error(Y_train[0], train_predict[:,0])))
print('Test Mean Absolute Error:', mean_absolute_error(Y_test[0], test_predict[:,0]))
print('Test Root Mean Squared Error:', np.sqrt(mean_squared_error(Y_test[0], test_predict[:,0])))

Plot Model Loss

plt.figure(figsize=(8, 4))
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Test Loss')
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epochs')
plt.legend(loc='upper right')
plt.show();

Figure 11

Compare Actual vs. Prediction

For me, every time step is one minute. If you resampled the data to hourly frequency earlier, then every time step is one hour for you. I will compare the actual values and predictions for the last 200 minutes.

aa = [x for x in range(200)]
plt.figure(figsize=(8, 4))
plt.plot(aa, Y_test[0][:200], marker='.', label="actual")
plt.plot(aa, test_predict[:,0][:200], 'r', label="prediction")
# plt.tick_params(left=False, labelleft=True)  # remove ticks
plt.tight_layout()
sns.despine(top=True)
plt.subplots_adjust(left=0.07)
plt.ylabel('Global_active_power', size=15)
plt.xlabel('Time step', size=15)
plt.legend(fontsize=15)
plt.show();

Figure 12

LSTMs are amazing!

Jupyter notebook can be found on Github. Enjoy the rest of the week!

Reference:
https://towardsdatascience.com/time-series-analysis-visualization-forecasting-with-lstm-77a905180eba
['Susan Li']
2019-05-17 13:26:57.215000+00:00
['Machine Learning', 'Tutorial', 'Python', 'Time Series Forecasting', 'Deep Learning']
My Summer on Platform at Strava
My name is Vidya, and I just graduated from UC San Diego, where I studied Mathematics and Computer Science. This summer, I had the opportunity to intern at Strava on the Platform team, and learned a lot along the way. Airbrake for Scala Services My first project focused on integrating Strava’s Scala microservices with third-party software called Airbrake, which allows for easier error discovery through logging, visualizations of error specifics, and deployment information. Strava’s website and APIs are powered by Active, a Ruby on Rails application which contains the majority of Strava’s business logic. However, Strava has moved from a monolithic app (mostly contained on the Rails app) to a service-based infrastructure. Before this project, Strava engineers had been accustomed to using Airbrake with Active, but not with Scala services (which is where they increasingly have to work) — my project allows them to use Airbrake, and improve their error-discovery process, when working with services. This project involved implementing three main parts: A Finagle filter that catches and sends notifications for fatal errors and uncaught exceptions to Airbrake Airbrake deploy notifications for service deploys (this was added to our internal deployment tools) A user-customizable severity filter to filter errors and set severity based on exception class name A high-level overview of how the Airbrake Finagle Filter works. This project reinforced several important lessons: Code should be extensible — thanks to a suggestion by another engineer on my team, I made the Airbrake filter generic so that it can be used with other failure filters. If we decide to use the filter with other software in the future, this small change will make a big difference. Simple is better. During initial design discussions, we thought that building the severity filter would require a prefix tree; we found that we could instead use a simple key-value Map, which worked exactly as intended. 
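The severity filter's core idea can be illustrated in a few lines (sketched here in Python rather than the Scala of the real service, with invented exception names and severities; the actual mapping is user-configurable and internal to Strava):

```python
# Map exception class names to severities; anything unlisted falls back
# to a default. A plain dict lookup replaces the prefix tree we
# originally thought we would need.
SEVERITY_BY_EXCEPTION = {
    "TimeoutException": "warning",
    "DatabaseConnectionError": "critical",
    "ValidationError": "info",
}

def severity_for(exception_class_name: str, default: str = "error") -> str:
    return SEVERITY_BY_EXCEPTION.get(exception_class_name, default)

print(severity_for("TimeoutException"))      # warning
print(severity_for("SomeUnknownException"))  # error
```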
Documentation is important! I tried to ensure that the tools I worked on were documented in a detailed, thorough manner — this wasn’t the most glamorous part of the project, but I’m glad that clear documentation will make it easier for other engineers at Strava to use and benefit from my work! I was excited about the impact of my project — from seeing a service error appear on Airbrake for the first time, to receiving enthusiastic Slack messages from other engineers who started using it for their work. This was an awesome first project, but it was only the beginning! Jams! One of the many reasons why Strava is such a special place to work is Jams — a 4-day hackathon during which employees are encouraged to pitch, build, and demo creative side projects that could potentially help improve the product or the company in some way. My Jams idea involved using QR codes to share four features of the mobile app: athlete profiles (to facilitate following), specific activities, challenges, and clubs. This would make it easier for athletes to connect with one another and find content on Strava. Scan the QR code! Contact Strava Support if you’d like to see this feature in the app. I worked with an Android engineer and a web engineer on the Summit team over the course of three days to bring the idea to life. One of the coolest parts of this project was the fact that I was able to work with technologies I hadn’t used before — my teammates patiently gave me crash courses on Ruby, Kotlin, and Swift. A happy side effect of the project: my computer’s Dock looked much more colorful thanks to all the new IDEs I had to download! An IDE rainbow. My main takeaway from Jams was that trying new things is extremely important — from pitching my project idea in front of the whole company to working on aspects of the product that I had been unfamiliar with. This project also helped me learn more about what engineers on other teams at Strava are working on, and how they work. 
It was especially helpful to gain perspective on how engineers outside of Platform work, because the next project that I began working on after Jams is meant to help engineers across the company with their workflows. Candidate Deploys for Scala Services My next project was part of a larger collaboration between the Platform and Productivity Engineering teams which is focused on developer experience and productivity. The goal was to make it so engineers developing Scala services could test new features against real data without impacting athlete experience. This involved lots of planning — the team spent around two weeks drafting documents that detailed use-cases, goals, implementation plans, and potential pitfalls of the project. To determine a baseline for measuring its success, we drafted a survey which we sent to engineers developing new services. The next steps, which comprised most of the rest of my internship, involved working on deployment configurations for these new services. This was meant to allow requests to be routed to different deployments of the same service, with a focus on normalizing service naming and keeping things backwards compatible (so engineers can adopt the new configurations without breaking existing things). Working on this project reminded me that understanding your customers is important. Before this summer, my idea of “customers” had been limited to end-users, such as the athletes who use the Strava app and website — I hadn’t considered internal users, like other Strava engineers. However, in writing docs that would be easily understandable for the Airbrake project, and in using surveys to collect metrics to understand how people might benefit from service canaries, I realized the importance of having a customer in mind.
Fun Non-technical Things This summer, I learned how to make mochi with other #womenintech at Strava, went kayaking for the first time at a team off-site, befriended several dogs at the company picnic and on company-wide Dog Fridays, contributed two new coffee-related Slackmojis (:philz: and :philz-no-mint:), and discovered that many of my coworkers like Taylor Swift as much as I do. Most importantly, I went on so many one-on-ones that I think I may have set a company record, and met several incredibly smart, interesting people at Strava as a result. I made an effort to eat lunch, get coffee, and just chat with not only the engineers on my team, but also analysts, engineers, and product managers across the different teams at Strava. My advice to new interns is to definitely try to meet as many people across the company as possible — you will learn about the building blocks of the organization, the range of potential opportunities in the industry, and how to grow as an individual and professional from people who were once in your shoes. Special thanks to Matt, Connor, and Sara (my mentors), to Julie and Aidan (my Jams team), to everyone I had one-on-ones with (Adam, Annie G., Brian T., Cathy, Chris G., David, Dickson, Elyse, J, Jeff, Kevin, Lulu, Melissa, Merty, Mindy, Nathan, Raf, Varun, Will S., and Yudi) and to the countless other people who helped me throughout my internship!
https://medium.com/strava-engineering/my-summer-on-platform-at-strava-c4c17ae24377
['Vidya Raghvendra']
2019-11-15 16:01:02.334000+00:00
['Airbrake', 'Internships', 'Microservices', 'Strava', 'Scala']
The Virus vs The Virus:
The Virus vs The Virus

A Trump Twitter Timeline

Here’s a look at what the President of the United States was Tweeting as America’s Covid-19 body count continued to mount. The below examples are not exhaustive. But they are certainly exhausting.

March 28: 2,000 American Covid-19 Deaths
March 31: 3,000 American Covid-19 Deaths
April 1: 4,000 American Covid-19 Deaths
Editor’s note: All Were Not Saved.
April 2: 5,000 American Covid-19 Deaths
Editor’s note: Oil would soon move into negative value territory.
April 3: 6,000 American Covid-19 Deaths
Editor’s note: This was when the president urged people to get out and vote during a pandemic. Kelly lost.
April 4: 8,000 American Covid-19 Deaths
Editor’s note: We’d later learn that this drug killed more than it saved.
April 5: 9,000 American Covid-19 Deaths
April 6: 10,000 American Covid-19 Deaths
April 8: 14,000 American Covid-19 Deaths
April 10: 18,000 American Covid-19 Deaths
Editor’s note: It turns out, no.
April 11: 20,000 American Covid-19 Deaths
April 13: 22,000 American Covid-19 Deaths
April 15: 23,000 American Covid-19 Deaths
April 16: 30,000 American Covid-19 Deaths
April 17: 35,000 American Covid-19 Deaths
April 19: 40,000 American Covid-19 Deaths
Editor’s note: He was not right on ventilators. Nor testing.
April 21: 45,000 American Covid-19 Deaths
April 24: 50,000 American Covid-19 Deaths
Editor’s note: Liberate Georgia?
April 25: 52,000 American Covid-19 Deaths
Editor’s note: He repeatedly called it a hoax.
Editor’s note: For a guy who criticizes the media, Trump really buried the lede here. This was the Lysol moment.
Editor’s note: Yes, please stop.
Editor’s note: Donald Trump just accused someone else of lying.
April 26: 55,000 American Covid-19 Deaths
Editor’s note: Perhaps he’d consider working less hard?
Editor’s note: The UV Light paired with Injected Disinfectant = Wheels Off.
Editor’s note: Fox is not biased against Trump.
Editor’s note: Someone misspelled Nobel Prize.
In fairness, he was referring to Pulitzers anyway. Of course, none of the Tweets are as damning as the spoken comments on the matter. See this video from The Recount for a reminder. And keep in mind that this video material was all gathered before the president mused about putting disinfectant inside our bodies.
https://davepell.medium.com/the-virus-vs-the-virus-e9356703d8ff
['Dave Pell']
2020-04-27 04:20:23.096000+00:00
['Politics', 'Trump', 'Social Media', 'Media', 'Coronavirus']
Why children need the outdoors now more than ever
Provoking care and curiosity for the environment California Chaparral Photo by Ian McDermod When I would teach children about the natural environment, I would set the stage as all of us being “field scientists” tasked with learning about the different ecosystems around them. Being based in Orange County, California, the primary ecosystems we would study included the chaparral, riparian, and oak woodland ecosystems. In each of these ecosystems we would hike, identify plants and animals, and journal to record our observations. I would teach a specific curriculum, but our activities and ecosystem exploration were a loose enough teaching structure that it allowed the children’s own curiosity and interest to be the limit to the knowledge and experience they walked away with. Kids are naturally curious and engage with the parts of nature they find provoking. In the riparian ecosystem investigation, students would use small nets to capture and temporarily hold frogs and small freshwater insects for later identification. Just about every kid will happily partake in what is to them an epic adventure of exploring a newly discovered creek to safely find and capture the elusive Pacific tree frog. In all their fun and exploration they often forget they are even doing science. Who knew environmental studies could be so fun! Photo by Jordan Whitt on Unsplash Even when catching frogs, certain students might develop their own fascination or interests. One student I remember grew bored of looking for things in the creek, and was interested instead in the soil. I gave him a meter that, when probed into the ground, measured the temperature, pH (acidity), and moisture levels of the soil. The student then proceeded to probe in multiple different spots to compare the soil by the creek to the soil further upstream on the trail.
This young soil scientist didn’t need me to assign him this project; he got there through his own questioning and curiosity. Children will have their own interests in the natural world as long as you take the time to explore that with them. I have met many students who confided to me after our ecosystem explorations that they are interested in studying plants, birds, and mushrooms when they grow up. Spending time outdoors nurtured children's interests, allowed them a safe place to ask questions and explore, and invited them to care for our fragile ecosystems and the organisms that make them up.
https://medium.com/age-of-awareness/why-children-need-the-outdoors-now-more-than-ever-c8cdd960a206
['Ian Mcdermod']
2020-04-29 22:03:30.275000+00:00
['Nature', 'Outdoor Education', 'Environment', 'Child Development', 'Outdoors']
Easy-to-Read JSON With This Chrome/Firefox Extension
Easy-to-Read JSON With This Chrome/Firefox Extension Print JSON and JSONP nicely when you visit it directly in a browser tab Img source: Chrome web store JSON is a very popular file format. Sometimes we may have a JSON object inside a browser tab that we need to read, and this can be difficult. We may need to go and search for an online tool that turns it into an easy-to-read format so we can understand it. Now, here is a Chrome and Firefox extension that does the formatting and makes your JSONs instantly pretty inside your browser, without having to perform many unnecessary steps. It comes with support for JSON and JSONP and highlights the syntax so that you can distinguish attributes and values at a glance. It also comes with the option to collapse nodes, clickable URLs that you can open in new tabs, and the ability to view the raw, unformatted JSON. It works with any JSON page, regardless of the URL you opened. It also works with local files, after you enable it in chrome://extensions. You can inspect the JSON by typing json into the console. You can install the extension by going here for Chrome and here for Firefox and then test it, for example, by visiting this API response. This is what it looks like, before formatting: Now, take a look at the beautiful JSON response you get with JSON Formatter: Here is a pro tip: Hold down CTRL (or CMD on Mac) while collapsing a tree, if you want to collapse all its siblings too. It is an open-source project, so you can view its source code on GitHub. Thanks for reading!
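At its core, the transformation such an extension performs is ordinary JSON pretty-printing. As a rough illustration (in Python, not the extension's actual JavaScript; the sample object is made up):

```python
import json

# A compact, hard-to-read JSON string, like one served raw by an API.
raw = '{"name":"JSON Formatter","urls":["https://example.com"],"open_source":true}'

# Parse it and re-serialize with indentation: one key per line, nested
# values indented, which is essentially what the extension renders.
pretty = json.dumps(json.loads(raw), indent=2)
print(pretty)
```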
https://medium.com/better-programming/easy-to-read-json-with-this-chrome-firefox-extension-29f4938dce50
['Fatos Morina']
2019-08-12 22:50:49.483000+00:00
['Technology', 'Json', 'Productivity', 'Education', 'Programming']
The Dome Keeper
Tricks. Suspended, silent, still, experienced placed ancestral Gaea, giver of sky and shore, earthen immaculate goddess sits a thankless nymph a mythological whore. Ancient runes, abandoned schools, their leather binds like memory around her waist. Reshaped bestiary whips and chains against command’s foolish race: half starved sufferers know not the sacred wait, thinking their first mistake. Nature runs deeper in search for her coven, quiet riot down under she feeds the iron to the peat. Our eternal lay — a harlot’s lot— loveless concubine. Wanton legs slightly bent she watches her blood drool, synergetic seeds, squiggly miracles above the dam to spawn the sprouting pool. Mother, who art the earth involves no man to mate the living to the daily bread, she will not measure one who carries no balance to the weight. Sky-ladder slot with a warrior’s heart, hussy face cries questions laughs answers. Without her we would die alone, nothing in the darkness of space to save us. She is biology’s chore, natural as wheat, plentiful as stars; solid gold —a prostitute’s pay. Man shows no food miles, ledgers for the take. Gaea would let him starve but not for probity’s way.
https://medium.com/literally-literary/the-dome-keeper-836799beb472
['Michael Stang']
2019-05-05 16:44:28.560000+00:00
['Poetry', 'Sustainability', 'Ignorance', 'Tragedy', 'Sex']
Reminiscing
It was a breezy, rainy day. I stayed by the front room window watching the restless trees swaying under the unforgiving force of the wind. Living in an area that’s prone to flooding, I hated rain with a strength I never knew I had. Seeing our goats and other goats from neighbouring streets hiding under the collapsed uncompleted building across the road was not particularly pleasing. The chickens were not spared either; they were actually the first to scamper to safety at the approach of the rain. The popular rain song came hurriedly to mind even at such a grand age of twenty. I started slowly ‘Rain rain go away Come again another day Little children want to play’ Then I sang it fiercely with the venom of a falling iroko tree (a hardwood tree from West Africa). Five minutes into my trance-inducing singing, I could no longer discern if the water flowing down my beef-starved cheeks was from the rain or my precious tears. So unlike me, but I tend to enjoy this vulnerability residing at the core of my being. A sure sign of weakness, not mine but of the entire human race. Waking up from my folly, I realised I had stepped away from the window, walked through the doors and sat on a branch of a rain-battered tree, gently swaying with a milder breeze whose ferocious father had just vacated the throne for his son, a gentle ruler. It was nothing after all but another episode of my infantile day-dream. Nature must really be laughing me to scorn but I dare not complain. I have been to heights where mere mortals are afraid to dream about and to depths where even the beasts of the sea dare not venture. I am not the master of my own fate but an actor whose script was written before he was conceived in his mother’s womb. All I ever wanted was to be great in everything, not necessarily the best, as I have an innate obsession with shyness; you see, I hate any effusive adulation. How I survived I do not know but I understand my place. The little village of eminent personalities.
The home of the great elder statesmen. The likes of Prof Timidity and Dr Caution. Not to forget Chief Judge Rational and the much-loved wife of the President, First Lady Realism. Imagine all originating from a place you could define as Dystopian yet all becoming more refined than expected, my safe humble self. Do not feel sorry for my naked emotional dissonance and do not applaud my perceived fortune of mental balance. My existence has been misrepresented by my experience so I became who I never wanted to become. The son of a noble father and a mother who refused to learn how to drive a car. I am not ashamed of her; she was one of my best friends until my father took over. Being the second son and child, I escaped the pressure of the first son and only child. A beautiful place to be if you are infinitely lazy. I am not implying my laziness, only alluding to the artistic resonance of it. It takes a lot to be lazy; it requires the skill of an artist whose strength revolves around having attention to detail. Never mind being called lazy so many times by my noble father. I escaped the rigours of hard work, hiding behind my elder brother and younger siblings. I sailed through the ranks in flying colours. I hid in the cracks in the wall of eight children sired by my noble father. I became a genius but an angry one. Nay, more like an irritant. Perfectionism was my obsession. I like to toy with the label of autism, though undiagnosed. Many people called me weird, an appellation I was inclined to accept without any hint of guilt. Yet they loved me. Who wouldn’t love a clown? A clown? Yes. My existence had shrouded my nature in secrecy. Funniness was the regalia and clowning the rag tied to my waist. Life was my stage and survival was the script so I acted with the seriousness of a method actor. No child’s play, no mean feat either. I surpassed the standard set before me, my mother’s. I passed with flying colours, only if colours could truly fly.
I was my mother’s son when I was bad in school but now my father’s son after nature had become lenient on my mental fortune. At a young age I realised the gulf in strength between my mental capacity and that of my siblings. Theirs was as bright as the Vitamin D-inducing early morning sun and mine was weaker than the tailboard of an overcrowded rickety molue (not your typical bus). The beginning of my journey into the land of day-dreams. I was a cheat living a double life. My quiet simple self and my genius self. My quiet simple self was shy, boring, blatantly awkward and lacking in confidence. My genius self was a beast of a man. He was the best at everything. A footballer, doctor, actor, musician, writer, engineer and many more professions I can’t remember. He was everyone from Bill Gates to Wole Soyinka (Nigerian Nobel Laureate in Literature). Bill because he is rich and Soyinka because of his English. I maintained a level-headedness expected of me as being badly behaved with a porous brain is a recipe for disaster. A porous brain? Yes, not just a self-deprecating way to describe myself. The words of my noble father. He had few words to describe me: dunce, dullard and porous brain. Looking back to those beautiful times still brings a smile to my unchanging face except for the beards masking my childish heart.
https://medium.com/frictional-autobiography/reminiscing-7722fdfed653
['Prince Humphrey']
2016-10-18 12:21:27.303000+00:00
['Storytelling', 'Life Lessons', 'Storyofmylife', 'Life', 'Autobiography']
Inspire
I am a warrior of life, striving to be a better person every day. Finding my voice in writing.
https://medium.com/afwp/inspire-686e0406ee81
['Ivette Cruz']
2020-12-29 15:32:41.158000+00:00
['Poetry', 'Self Improvement', 'Motivation', 'Life Lessons', 'Positive Thinking']
A Framework for Doing Good
A Framework for Doing Good Do good unless, to do so, you would have to sacrifice your long-term happiness. In 1972, a 25-year-old philosophy student published an article in Philosophy and Public Affairs. His name was Peter Singer, and his Famine, Affluence, and Morality became a classic. In it, Singer asserted that it was our obligation to help those in need. He also provided a framework for evaluating the morality of one’s actions. While “rather draconian” seems like an appropriate description of the framework, it made me think — and come up with an alternative, something I (and maybe you) could use to make good decisions. Go on, read the article, I’ll wait. The rest of this post builds on (and often disagrees with) Singer’s ideas, and you will have much more fun reading it if you bring your own opinions instead of relying on mine. By the way, that article is a beautiful, insightful piece of writing. It has significant flaws (some of which I have noticed and will happily point out), but its conclusion is powerful and, at its core, right (with caveats, read on). (I am just starting to think about this topic, so excuse my cluelessness and naïveté. Ethics is a fascinating branch of philosophy that I know nothing about; instead, this post is about what feels right.) The famous pond thought experiment While Singer wrote the article against the backdrop of the Bangladesh Liberation War (I admit, I had to Google it), it’s the pond thought experiment that survived the test of our shared memory. It goes like this: If I am walking past a shallow pond and see a child drowning in it, I ought to wade in and pull the child out. This will mean getting my clothes muddy, but this is insignificant, while the death of the child would presumably be a very bad thing. Most people would agree that it is one’s duty to save the child.
That’s when Singer delivers his slam dunk by turning the obligation to save the drowning child into an obligation to save the nine million refugees in East Bengal. To do that, he asks two questions: Does proximity matter? In other words, does your moral duty to prevent a bad thing from happening change with distance to the person in peril? Sure, there’s a much better chance you will be able to save someone from drowning when they are nearby, but distance has little effect on your ability to help those who are starving. So, assuming you can help someone far away as easily as someone nearby, does your moral duty change? When you put it like that, the answer seems obvious: why would it? Of course, this idea disregards nationalism and the ever more prevalent us-versus-them rhetoric, and so is not trivial to implement in practice. Yet as a moral principle, it’s hard to argue against: From the moral point of view, the development of the world into a “global village” has made an important, though still unrecognized, difference to our moral situation. Expert observers and supervisors, sent out by famine relief organizations or permanently stationed in famine-prone areas, can direct our aid to a refugee in Bengal almost as effectively as we could get it to someone in our own block. There would seem, therefore, to be no possible justification for discriminating on geographical grounds. Does the number of potential saviors matter? That is, does your moral duty to prevent a bad thing from happening diminish as the number of people who have the power to help grows? Singer notes that there is clearly a psychological difference: One feels less guilty about doing nothing if one can point to others, similarly placed, who have also done nothing. To prove that the number of people who can help is irrelevant, Singer returns to the pond example: would your moral duty to save the child lessen if there were other people who saw the child drowning and were as close to the pond as you?
Hopefully the answer is a passionate no¹. Singer does clarify that if everyone chipped in equally to prevent a bad thing from happening, one’s moral duty would lessen together with the scale of the problem. In practice, however, there will always be more bad things to prevent, and most people will not chip in. Your moral duty is thus to always do as much as you can.

The tragedy of the commons

Now here’s an important question: why do so few people help others? Singer offers two interesting theories², but I think I have come up with something better (there’s a good chance that thousands before me have had the same “unique” insight): philanthropy is a flipped tragedy of the commons (if you need to brush up on the tragedy of the commons, take a few minutes to read the seminal article by Garrett Hardin). Here’s why.

Bad things happen in the world (e.g., 10% of the world’s population lives in extreme poverty). Preventing bad things from happening is our moral duty. However, doing so requires time and money. Each actor in the system can freely decide how much time and money to give; there is no control mechanism. If everyone chipped in, bad things could be prevented. However, while you will bear the cost of doing good, the benefit will be entirely someone else’s. In other words, not donating, not volunteering, not helping others at your own expense can be seen as a rational individual-level decision, even if it doesn’t stand up to ethical scrutiny.

Hardin asserts that there is no technical solution to the problem of the commons. To “fix” the world, we have to change the way we behave. However, eradicating poverty requires global cooperation of the sort that we seem incapable of. Just look at climate change: we have emitted so much carbon dioxide into the atmosphere that we might have already passed the point of no return. And global poverty doesn’t even threaten to kill us. In short, cooperation is hard. And there might be no technical solution.
So I am going to propose a moral one.

The strong version

Now let’s get back to Singer. So far I’ve avoided talking about his exact definition of our moral duty to prevent bad things from happening. He offers two, and both of them, while theoretically well reasoned, would likely have disastrous results if implemented in practice. The “strong version” is what Singer says is the correct definition³:

We ought to give until we reach the level of marginal utility — that is, the level at which, by giving more, I would cause as much suffering to myself or my dependents as I would relieve by my gift.

There are two important points to note here. First, inflicting suffering is acceptable as long as that suffering benefits the recipient. Second, such suffering should be inflicted as long as the benefit to the recipient is greater than the suffering, thus effectively reducing the giver to a state comparable to that of the recipient.

Now the reason the pond example is straightforward is that saving a life there is cheap. A much more appropriate illustration of Singer’s strong version would be the following example:

You are passing a minefield, the legacy of a recent civil war. In the middle of the minefield, you see a girl. You have no idea how she got there, but you know that if you don’t help her, she will most likely die. You are her only hope. If you do decide to help her, however, there’s a 30% chance that you will be killed, and a 30% chance that you will be maimed for life. Your children watch as you calculate the probabilities and decide that it is your moral duty to try and save the girl (see analysis at the end of this post⁴). You don’t make it back alive.

In short, Singer’s strong version requires inflicting significant suffering on oneself and others. If this were the accepted moral code, I would hate helping others. I would also end up being miserable, demotivated, and too poor to give. This is not an ethical theory — it’s a mutual self-destruction manual.
The moderate version

Alright, says Singer, here’s a more lenient proposal. It’s not right, but it’s better than nothing:

We should prevent bad occurrences unless, to do so, we had to sacrifice something morally significant.

The moderate version drops the marginal utility bit. In other words, you are no longer obliged to reduce yourself to the level of suffering experienced by the refugees in East Bengal if (and only if) you consider that to be a morally significant sacrifice. With this definition, we are getting closer to something we could work with. However, it still requires us to deny all pleasures and non-essential needs to ourselves and our families.

The simplicity of the two definitions is appealing, yet it is also why they will never work: they fail to recognize that their object is a human being. A human being who might believe in doing no harm, who has a moral duty to their family, and who assigns value to their own happiness and well-being⁵.

The human version

So here’s my moral duty theory. The Singer of 1972 might have called it the “weak version”. I prefer the “human version”. While it is inspired by my own values⁶, you don’t have to be a starry-eyed bookworm to make it work for you. Alright, are you ready? Here you go:

We ought to do good unless, to do so, we had to sacrifice the long-term happiness of ourselves or others.

Here’s what this one sentence is actually saying: it is our moral duty to do good. Simply not doing bad things is not enough. Doing good encompasses the prevention of bad things, the focus of Singer’s article. But it also includes other ways of benefiting those around us. It expands our options. (Note that when making big “do good” decisions, it is one’s moral duty to choose the most impactful cause⁷.) It also provides a framework for one’s everyday behavior. Saving a life is a worthy, but costly endeavor. Making someone happy, even if only for a few minutes, is (almost) free. You can smile. You can open the door.
You can say good morning, please, thank you. You can make them tea. It’s also something that everyone, regardless of their financial situation, can do.

Happiness is the universal currency

Human life does not have an obvious reason for existence (if it did, Wikipedia’s article on the meaning of life would not have 222 references). So we end up creating our own purpose, whether that’s being the best person one can be, following God’s will, or helping others. Whichever adventure you choose, happiness is the universal currency. It’s the ultimate success metric. Happiness is easy to measure, you need only ask. It transcends individual differences — we each have our own definition. And it acknowledges that there is more than one way to live a good life.

Lifting others up instead of drowning together

Finally, I do away with the marginal utility nonsense. And I incorporate my belief in doing no harm. I could not inflict pain on one person to benefit another. I also could not ask one person to inflict pain on themselves to benefit another⁸. So I don’t. Instead, the limiting factor to the good one can do should be the effect on one’s long-term happiness.

In my previous post I touched upon the idea of hedonic adaptation (one’s tendency to revert to a previous level of happiness despite any significant positive or negative events). Buying new sneakers might lead to a short-term increase in happiness, but the feeling will pass in a day or two, leaving you at the pre-purchase happiness level (“the happiness set point”). Not all events lead to the reversion to the set point, however. For example, research suggests that you can buy long-lasting happiness, up to a certain point. Once you reach that point, however, you should start donating.

So here’s what I’m saying: when the choice is between a short-term happiness increase (or a short-term happiness loss) and doing good, you should do good. You should not, however, sacrifice your (or anyone else’s) long-term happiness.
So, to recap, do good unless, to do so, you would have to sacrifice your long-term happiness. Try it the next time you are thinking of buying that new pair of sneakers. Yes, you might genuinely need new shoes. But there’s a chance you don’t. There’s a chance that buying them would change nothing for you. There’s a chance that not buying them will help save a life⁹.
https://ernes7a.medium.com/a-framework-for-doing-good-5f3b4339d02
['Ernesta Orlovaitė']
2019-10-28 11:55:41.740000+00:00
['Society', 'Effective Altruism', 'Philosophy', 'Ethics', 'Philanthropy']
“Let’s put a pin in it”: Intuitive Content Saving on YouTube’s Mobile App
With over a billion hours of video watched daily, YouTube is one of the largest and most influential content platforms of the modern day. Its algorithm, while flawed, is regarded as one of the best at determining what its users want to see, consistently delivering new and interesting videos. However, one thing Google doesn’t (yet) consider is the habits, mood, and free time of its users.

It’s quite a common occurrence for a long video to be recommended when users may only have a few minutes on hand. A podcast that shows up while in the mood for spectacle, a movie trailer in the feed when scrolling for background music. These are all cases where the recommended content is interesting and relevant to the user, just at the wrong place or time. They’d love to see it later, and in fact their finger is just hovering over the clip, tempted to throw their responsibilities to the wind and indulge in YouTube’s endless flow of entertainment. But when the finger reaches the screen, it flicks up, sending the unfortunate video careening off the screen into the virtual void, relegated alongside that which is “spam”, “not interested”, and “unappealing”, to sink back beneath the vast ocean of videos in Google’s servers, unlikely to surface again.

A Relic of a Different Era

Credit where credit is due, the problem of saving a video to watch later is something YouTube thought of long ago, adding the feature under the apt name: “Watch Later”. In fact, this feature has been around since late 2010, over a decade ago at the time of writing. It created an automatically generated private playlist in the user’s account called “Watch Later”, in which videos could be added or removed via a button in the video player. It got carried over to the official YouTube mobile app, which landed on Android at around the same time and on iOS in late 2012.
It put the action of saving to Watch Later into a little three-dot menu in the video card, and buried the playlist under the user’s account page, where it remains to this day.

Watch Later Button in YouTube 15.43.32

As you might imagine, a system that was never designed for mobile, adapted in an age when the rules of mobile design were still in the midst of being written, ends up falling a bit short. Don’t get me wrong, it’s perfectly functional, and I myself use it on the daily. However, with the passage of time and how deeply buried the interface is, it’s fallen out of common use, replaced by individualized solutions that bypass the provided option.

Breaking Down the Problem

Users have one of two choices when stumbling across a video they can’t watch right now. They can either let it slide by, hoping for it to stay in the feed until they can, leaving their content at the tender mercies of the Algorithm. Alternatively, they can record it in some way to come back to, either with the built-in solution or a homemade one.

In talking to users, I arrived at several key insights. It’s remarkable how little people use the inbuilt option. Several opted to use primarily the desktop version, allowing them to open many videos in new tabs and come back to them at a later date. Several others took it upon themselves to organize the videos in custom playlists, ignoring the default one provided. More still opted for the most ancient of solutions: remembering the title or channel and searching for it later. It was almost always a time constraint issue. Nearly every time, the feature YouTube engineers spent many hours building in the early 2010s was replaced in favor of more convenient options, ones that were fewer taps and clicks away.

Building it up from scratch

Given the improvements in app design paradigms over the last decade, what can we do now to implement a mobile-first, accessible solution to take the place of YouTube’s Watch Later?
Initial Draft of Ideas

Right from the start, one particular solution stood out: pinning. A paradigm popularized by forums and chat clients, whereby a particular piece of content is selected to always show up at the top of the feed. YouTube uses it themselves in their comments section and live chat, where creators use it to bring attention to a particular post.

Pinned Post in Live Chat (source: Ninomae Ina’nis Ch.)

In this case, it wouldn’t quite be the traditional use of pinned posts, as those are usually for others to see, rather than for oneself. However, it provides a good basis on which we can build an interaction for keeping videos you want in your feed.

Design Explorations

Alright, we have an idea. Now let’s make it look good. The initial criteria for these entry points are that they’re unobtrusive, easily accessible, and fit in with the existing design language.
https://kevinsun-dev.medium.com/lets-put-a-pin-in-it-intuitive-content-saving-on-youtube-s-mobile-app-4c1367502967
['Kevin Sun']
2020-12-05 05:36:49.941000+00:00
['Kevin Sun', 'Android', 'Case Study', 'Design', 'YouTube']
How to Create Stock Alert System using Python and Windows Task Scheduler
Photo by Malvestida Magazine on Unsplash

There are a plethora of stock alert systems available for free or for a nominal price. However, I found most of these alert systems too simplistic (e.g. daily price movement, magnitude, etc.). Maybe I was not lucky enough to find a better alert system, so I decided to write one of my own using Python and Windows Task Scheduler. The idea is to create a sophisticated alert system that triggers every day at market close (or open, depending on your application), finds stocks that fit your criteria, and sends an email including their symbols.

Code Structure

1. Trigger the code at specific times of the day using Windows Task Scheduler (this will be explained later) [Windows]
2. Import a watch list of stocks of interest [Python]
3. Download historical stock data for all symbols in the watch list [Python]
4. Find which stocks fit the alert criteria [Python]
5. Send an email alert containing the stock symbols discovered in the previous step [Python]

Python Libraries

There are two main libraries needed to accomplish this task:

pandas_datareader: This library enables downloading time series of stock data from different sources.
smtplib: This is an SMTP protocol client for sending emails. This library is needed to send the daily email alerts.

Other standard libraries include pandas, numpy, and datetime.

Alert Discovery Code

Let’s write the Python code that every day (or every week or month) downloads historical data and alerts us if any stock symbol fits our criteria. First, we need to import our watch list, stored locally in a csv file. Next, using the pandas_datareader library, time series of stock data can be downloaded and alerts determined. For the purpose of this study, I am going to set up an alert if the day’s closing price is below the 100-day moving average. The AlertFinder function contains the alert logic. This function can be easily changed to fit your own needs.
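The AlertFinder gist itself is not reproduced in this text, so here is a minimal sketch of the criterion just described. Everything in it is an assumption for illustration: the snake_case function name stands in for the article’s AlertFinder, the DataFrame layout (one column of closing prices per symbol) is mine, and synthetic prices replace data downloaded with pandas_datareader.

```python
import pandas as pd

def alert_finder(prices: pd.DataFrame, window: int = 100) -> list:
    """Return the symbols whose latest close is below their moving average.

    `prices` is assumed to hold one column of closing prices per symbol,
    indexed by date. This mirrors, but may not match, the article's gist.
    """
    alerts = []
    for symbol in prices.columns:
        series = prices[symbol].dropna()
        if len(series) < window:
            continue  # not enough history to compute the average
        moving_avg = series.rolling(window).mean().iloc[-1]
        if series.iloc[-1] < moving_avg:
            alerts.append(symbol)
    return alerts

# Tiny synthetic example: one falling stock and one rising one.
dates = pd.date_range("2020-01-01", periods=120)
prices = pd.DataFrame({
    "FALL": range(120, 0, -1),   # steadily declining -> ends below its average
    "RISE": range(1, 121),       # steadily rising -> ends above its average
}, index=dates)
print(alert_finder(prices))  # expect ['FALL']
```

Swapping in real quotes would only change how `prices` is built; the alert logic stays the same.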
The following block shows the AlertFinder function:

After finding stock symbols that fit our criteria, the next task is setting up an email system so we get notified when stock alerts are triggered. Sending emails via Python is very easy using the smtplib library. First, we need to declare email credentials in the code (you can also encrypt your username and password instead of explicitly declaring those values; for more information you can refer here). Then smtplib can log in to your account and send emails.

I primarily use Gmail as my main email account. In order to use a Gmail account, we have to make sure Gmail allows less secure apps to interact with it. In order to do this you need to: sign in to Gmail, click here to access Less Secure App Access in My Account, and next to “Allow less secure apps: OFF,” select the toggle switch to turn it ON.

Setting up Windows Task Scheduler

The final part of this tutorial is on how to set up a Windows scheduled task to run repeatedly every day. We need to write a bat file containing the path of the Python location and our code location. Simply create a text file on your Windows machine, then rename the text file to run.bat. Then, open the bat file using Notepad and write the following commands in it:

cd /D "C:\Users\<ComputerUserName>\Anaconda3\condabin"
call activate.bat
"C:\Users\<ComputerUserName>\Anaconda3\python.exe" "C:\Users\<ComputerUserName>\Desktop\Stock Portal\main.py"

The first two lines activate the Anaconda environment so we can use the pre-installed Python libraries. The last line simply runs Python on main.py, which contains the alert code. The last thing we need to do is add a new task in Windows Task Scheduler to run the bat file daily, using the following process. First, search “Windows Task Scheduler” from the Windows search dialog box on the lower left part of your Windows machine, and click on Windows Task Scheduler.
Then, follow each step from the list and screenshots below:

1. Create a Basic Task from the upper right side of the window.
2. Specify the name of this task. Press next.
3. Specify the frequency at which this task should run. In this scenario I have selected daily frequency. Press next.
4. Specify the starting date and hour of the task. In this case I have specified the task to run every day at 3:35 pm, beginning January 10, 2020. Press next.
5. Select Running a Program in the next window and press next.
6. In the following window, click on browse and select the bat file that we created in the previous section. Press next.
7. Finally, click on the “open the properties” check box and press finish.

In the settings of the task we want to ensure it runs even if the computer has shut down. In the Conditions window of the task, make sure “Wake the computer to run this task” is checked. In the Settings window, make sure “Stop the task if it runs longer than” is set to one hour or less. This will ensure that if there is a problem with the task, it gets restarted before the start of the next day.

1- Select a Basic Task
2- Create a basic task
3- Frequency of the task
4- Select start date and time
5- Select an action type
6- Select run.bat file and press next
7- Finish the basic task
8- Select the created task setting: make sure to wake up the computer to run this task (if the computer is off)
9- Stop task after 1 hour in case the code runs into a problem

That’s it, guys. You now know how to create alert systems using Python and Windows to notify you of what happens in the market daily. Obviously the code that I have shown you is simple and can be improved significantly (e.g. weekly reports on the performance of stocks, sophisticated stock selection criteria, etc.). However, the possibilities are endless and you can create your own very sophisticated alert system.

Codes: All codes are published on the Stock Alchemy GitHub [Link].
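As a rough sketch of the emailing step described earlier (this is not the article’s exact code): the message can be built with the standard-library EmailMessage, while the actual smtplib send is left disabled here because it needs real credentials. The addresses, password, and `SEND` flag are placeholders.

```python
import smtplib
from email.message import EmailMessage

def build_alert_email(symbols, sender, recipient):
    """Package the triggered symbols into an email message (no network needed)."""
    msg = EmailMessage()
    msg["Subject"] = "Stock alerts: " + ", ".join(symbols)
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("Closed below their 100-day moving average:\n"
                    + "\n".join(symbols))
    return msg

msg = build_alert_email(["AAPL", "MSFT"], "me@example.com", "me@example.com")
print(msg["Subject"])  # Stock alerts: AAPL, MSFT

# The actual send (requires real credentials; disabled in this sketch):
SEND = False
if SEND:
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login("me@example.com", "my-password")
        server.send_message(msg)
```

Separating message construction from sending also makes the alert body easy to test without touching the network.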
For more finance and Python related articles visit http://stockalchemy.com/.
https://medium.com/quantjam/how-to-create-stock-alert-system-using-python-and-windows-task-scheduler-5e909c24af27
['Amir Nejad']
2020-11-05 21:56:04.854000+00:00
['Python', 'Analytics', 'Data Science', 'Stock Market', 'Stock Alerts']
The Best Content Marketing Managers Use These 10 Essential Marketing Tactics
Photo by Austin Distel on Unsplash

Content marketing is such a broad and multifaceted discipline, it’s maddening to try and reduce it to any one block of advice. Even in specific industries, it’s hard to find a one-size-fits-all approach that works for everyone — different industries, business sizes, and brands require fine-tuning for their circumstances and demographics, and what objectively works well for one business may not work at all for another. Still, regardless of your individual position, there’s always room to make yourself a better content marketer. I’ll illustrate this with an analogy: it’s impossible to find a dish or a meal that everybody’s going to like, but some cooks are better than others, and it’s always possible to make yourself a better cook by widening your skillset and using better techniques. In the content marketing realm, these are the techniques you need to be using if you want to be a better marketer:

1. Market research.

Market research is essential if you want to know who your customers are. You need to know who the best demographics for your brand are, where they live, how they live, and what they need and want. Without this information backed up by objective data, you’ll be flying blind or relying on your biased, unreliable instincts to direct your campaign. Market research doesn’t have to be expensive or even that intensive; you can access census data for free.

2. Competitive research.

You need to know who you’re up against if you want to be successful. With competitive research, you’ll learn what areas your competitors are dominating, so you can find alternate routes to success, weaknesses in their strategies to avoid, and strengths to improve upon. You’ll figure out what’s currently working and not working for your industry and demographics, and you’ll get inspired by what your competitors are doing (or not doing) in their respective strategies.

3. Written content.
There’s a reason why content marketing budgets are continuing to increase. Written content helps you optimize for search engines, retain customers you already have, attract new ones, increase conversion rates, and fuel other strategies like email marketing and social marketing.

4. Visual content.

Written content is on a bit of a decline in some industries, however, as visual content rises in popularity with the average user. Images and videos are a bit more difficult to produce, but they tend to have much higher viral potential. The good news is that video content is getting easier and easier to produce, so there’s not much stopping you from incorporating it into your campaign.

5. Personal branding.

People don’t love or trust corporations; they love and trust people. If you want any of your marketing strategies to be effective, it has to have a personal touch. One of the best ways to do this is by leveraging the power of a “personal brand,” or an individual identity affiliated with your corporate brand. You can use this to build a reputation, gather individual connections, and syndicate content related to your main brand. It can also serve as a good resume if you end up changing paths in your marketing career.

6. Guest posting.

Guest posting is an extension of your personal branding strategy in many cases, and involves producing content for external publishers. It’s a fantastic way to get more visibility and a better reputation for your brand, and it can send both direct traffic and domain authority (for SEO) your way. Without it, your content strategy will exist in a vacuum. Here’s a step-by-step guide on how to do it.

7. Customer service.

Don’t forget that customer service is a major element of your content marketing strategy, or should be. Customer retention is just as important, if not more important, than customer acquisition, and increasing customer knowledge through content is one of the best ways to preserve that rate.
For example, include FAQs, guides, tutorials, and troubleshooting documents — and make sure that information is readily available to anyone who needs it.

8. Influencer marketing.

Industry influencers hold the key to so many different benefits — they can help you get your content to go viral, they can boost your reputation just by mentioning you, and they might even be willing to give you some advice about your campaigns. There’s not much of a science to engaging with influencers, but it is a tactic that requires development over time.

9. Social distribution and syndication.

Social media channels are free to claim and free to post on, so if you aren’t using them, you’re doing yourself a disservice. Every piece of content you publish should be shared via your social media channels to get it in front of more potential users, and most of your content should be worked into an ongoing syndication schedule to keep your social feeds from getting stale. The audience-building benefits are extraordinary.

10. Email marketing.

Believe it or not, email marketing is one of the most effective marketing tactics available. According to a survey of 357 marketers that I conducted, email marketing was rated the second-easiest tactic to execute, and the 5th-highest in terms of ROI (out of 10 tactics surveyed). B2C companies, B2B companies, and everything in between can make use of promotional and informative emails — it just depends on what you’re promoting and how you’re promoting it. People won’t stop using email anytime soon, so learn how to leverage it for your brand.

Take a look at the strategies of some top performers in content marketing, from inbound marketing specialists like Hubspot and Moz, as well as corporate brands like Chipotle or Coca-Cola. You can see them using these tactics to promote their brands, invariably. These are the tactics that turn content team members into team leaders, and turn small businesses into nationally established ones.
Learn to master them, through experience and constant education, and you’ll find success in almost any content marketing application you desire.
https://jaysondemers.medium.com/the-best-content-marketing-managers-use-these-10-essential-marketing-tactics-74dbab37c176
['Jayson Demers']
2020-04-18 21:29:16.753000+00:00
['Marketing', 'Content Marketing', 'SEO', 'Content Strategy', 'Online Marketing']
What is a startup, anyway?
Welcome back to the View from the Monolith! I’ve spent the last few years siloed in a monolithic organisation, but now I am adventuring in the world of startups. In this blog, I share the learning I glean along the way.

It took me a while to realise that there was one obvious word missing from my look at the language of startups (didn’t see that? Check it out here!). It is such a fundamental word to the discussion that it went right by me. But when I stopped to think about it, the definition wasn’t clear; when it gets right down to it, what actually is a startup?

Age

To my naïve monolithic brain, I hear “startup” and I think “recent”. How recent is recent, though? Is the organisation still a startup after two years? Five? Ten? I don’t know, I guess we need to look elsewhere to build our definition.

Oh come on, who can’t spell ‘Shortcut’?

Growth

Certainly, in the sense we use it, it is not simply any recently formed company: a newly-opened coffee shop might attract a flood of clients from startups but is not one itself. Partly this is a function of the business type: the startups we’re talking about are generally tech-oriented. But there’s something else there too. It’s about an ambition for growth. Most new independent coffee shops are not launched with a view to take on Starbucks, but every startup is looking for the next big idea, has a glimmer in their eye that they may become the next unicorn.

Wait, unicorn? Another word for our startup vocabulary: here, the unicorn is a startup that has made it truly big, becoming valued at more than $1 billion. Unlike their horned equine counterparts, startup unicorns do exist, but they are almost as rare. Every startup aims for growth, but they won’t all become Uber or Airbnb.

Successful, yes, but still uses Comic Sans on the chart

The core idea

Still, in the risky world of startups, if they are going to make it big, or be considered a success at all, there needs to be one strong central idea.
A problem to solve in a new way. The startup needs to look for a way to turn the solution to that problem into something which will provide consistent return on investment. A problem with a once-and-done solution is not a good basis for a startup; the core idea needs to be one that will reliably continue to generate revenue.

Embracing the possibility of failure

I mentioned this in my first View from the Monolith, but the startup philosophy is one which, while it’s not setting out to fail, is ready for failure. It treats it as part of the lifecycle. If one solution didn’t work, move on and look for the next one.

Our working definition

So, what is a startup? I think it probably means different things to different people. But here’s the definition we use at The Shortcut, as laid out by Steve Blank:

https://youtu.be/wKh8e3dplmM

“A startup is a temporary organisation designed to search for a scalable and repeatable business model.”

That matches the conclusions we were working towards quite nicely (and not accidentally!). The startup needs to be scalable, it needs to grow. The core idea must be a repeatable business model. The word “temporary” brings us back to our age question. It doesn’t quite answer it, but the essence is there: there are no ten-year-old startups. The aim of the startup is to find that business model, and then use it to become something else.

If you want to join the search for a scalable, repeatable business model, then perhaps entrepreneurship is for you. If you need help starting down that path, then the Catalyst Programme at The Shortcut might be just what you need. Come along to the information session to learn more!
https://medium.com/the-shortcut/what-is-a-startup-anyway-92d550fa95a8
['Rob Edwards']
2019-02-13 06:33:42.594000+00:00
['Stories', 'Startup']
Crafting Recommendation Engine in PySpark
Hello! In today’s online era, every customer is loaded with multiple choices. Suppose I was recently looking to buy a budget-friendly laptop without having any idea of what to buy. I might waste a lot of time browsing around on the internet and crawling through various sites hoping to find some useful info, or I might look for recommendations from other people.

The rapid growth of data collection has led to a new era of information. Data is being used to create more efficient systems, and this is where recommendation systems come into play. Recommendation systems are a type of information filtering system: they improve the quality of search results and provide items that are more relevant to the search item or related to the search history of the user. They are used to predict the rating or preference that a user would give to an item. Companies like Netflix and Spotify depend highly on the effectiveness of their recommendation engines for their business and success.

There might be some website or web app that recommends me the best laptops based on my previous purchases or the laptops owned by my friends. I could just log in and jackpot! I get well-recommended laptops matched to my taste. This is the power that is being harnessed by recommendation engines. From Amazon to Netflix, recommendation engines are one of the most widely used applications of machine learning. This blog is dedicated to an intuitive explanation of recommendation engines and their implementation in PySpark. So let’s roll!

Recommendation Engines

A recommendation engine, better known as a recommender system, is a tool that analyzes data in order to make suggestions for something — a website, an ad banner, a product — that the user might be interested in. Putting it more simply, it is basically a system that suggests services, products, or information to the user based on their trends or an analysis of their data.
This data can be the history of the user or the behavior of similar users. The recommendation system opens the door for users to be exposed to the whole digital world through their experiences, behaviors, preferences, and interests. These systems also provide an efficient way for companies like Amazon and Walmart to provide consumers with personalized information and solutions.

Advantages of Recommender Systems

These systems are helpful in significantly increasing click-through rates and revenues for organizations. Recommendation engines have positive effects on user experience, bringing higher customer satisfaction and retention. For example, with Netflix, instead of having to browse through thousands of box sets and movie titles, Netflix presents you with a much narrower selection of items that you are likely to enjoy. This functionality delivers a better user experience. With this, Netflix achieved lower cancellation rates, saving the company around a billion dollars a year. Amazon started using recommender systems almost 20 years ago; now they have been deployed in other industries such as finance and travel.

Workflow for Recommendation Systems

Basically, a recommender system works by providing suggestions to a particular user through a kind of filtering process that is fully based on the user’s perspective and browsing history. The input for the recommender system will be information about the user, like their browsing history, preferences, and likes. This information reflects the prior usage of the product as well as the assigned ratings. Recommender systems also rank products and services by how many users like them, which helps in sorting the products that users like the most. With the help of cosine similarity, one can also find how similar different products are to each other. If products are very similar to each other, they might appeal to the same users.
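The cosine-similarity idea mentioned above can be shown in a few lines. This toy example is not from the article, and the rating vectors are invented: each product is described by the ratings four users gave it, and similarity is the cosine of the angle between those vectors (1.0 means the same crowd likes both).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two rating vectors (1.0 = same audience)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Ratings of three products by the same four users (0 = not rated).
laptop_a = [5, 4, 0, 1]
laptop_b = [4, 5, 0, 2]   # rated much like laptop_a -> similar products
camera   = [0, 1, 5, 4]   # rated by a different crowd -> dissimilar

print(round(cosine_similarity(laptop_a, laptop_b), 3))  # close to 1
print(round(cosine_similarity(laptop_a, camera), 3))    # close to 0
```

In a real system the same computation runs over a large user-item matrix, so products with high mutual similarity can be recommended to each other’s audiences.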
Typical Recommendation Engine. Source: https://new.hindawi.com/journals/sp/2019/5941096/fig2/

There are mainly three kinds of recommender systems:

1) Demographic Filtering - These offer generalized recommendations to every user, based on movie popularity and/or genre. The system recommends the same movies to users with similar demographic features. Since each user is different, this approach is considered too simple. The basic idea is that movies that are more popular and critically acclaimed have a higher probability of being liked by the average audience. When we open an app on our phone, it asks to access our location; based on our location and the people around us, it recommends things we may like. This is the basic recommender present in many applications.

2) Content-Based Filtering - These suggest similar items based on a particular item. The system uses item metadata, such as genre, director, description, and actors for movies, to make recommendations. The general idea is that if a person likes a particular item, he or she will also like an item similar to it. This can be seen in applications like Netflix and Facebook Watch, which recommend the next movie or video based on the director, lead actor, and so on.

3) Collaborative Filtering - This system matches people with similar interests and provides recommendations based on this matching. Collaborative filters do not require item metadata like their content-based counterparts. This is the most sophisticated kind of personalized recommendation, meaning it takes into account what a user likes and dislikes. The main example of this is Google Ads.

Under the umbrella of collaborative filtering, we have the following kinds of methods:

Memory-Based - These identify clusters of users and use the interactions of one specific user to predict the interactions of other, similar users.
Alternatively, they identify the cluster of items rated by user A and use it to predict user A's interaction with item B. Memory-based methods fail when dealing with large sparse matrices.

Model-Based - These methods revolve around machine learning and data mining tools and techniques. A traditional machine learning approach is used to train models and get predictions out of them. One advantage of these methods is that they can recommend a larger number of items to a larger number of users than memory-based methods can, and they work well with large sparse matrices.

Hybrid Systems - These consolidate both types of information with the aim of avoiding the problems that arise when working with only one kind of recommender system.

Now, enough theory. Practical time!

Building a Recommendation Engine with PySpark

According to the official documentation for Apache Spark: "Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming."

At a high level, Spark is an open-source, general-purpose data processing engine with its own development APIs, which distributes workloads for streaming, machine learning, or SQL jobs that demand repeated access to data sets. Spark's core strength lies in processing both batch and stream jobs.
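The memory-based idea can be sketched in plain Python (a toy illustration with made-up ratings, not Spark code; the `predict` helper is my own): predict a user's unknown rating for an item as the similarity-weighted average of the ratings that other users gave that item.

```python
import math

def cosine(a, b):
    """Cosine similarity between two users' rating vectors (0 = unrated)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# rows = users, columns = items; 0 means "not rated yet" (made-up ratings)
ratings = [
    [5, 4, 0, 1],   # user A: we want to predict item 2 for this user
    [5, 5, 4, 1],   # user B: taste similar to A
    [1, 1, 5, 5],   # user C: nearly opposite taste
]

def predict(user, item, matrix):
    """Similarity-weighted average of other users' ratings for `item`."""
    num, den = 0.0, 0.0
    for other in matrix:
        if other is matrix[user] or other[item] == 0:
            continue
        sim = cosine(matrix[user], other)
        num += sim * other[item]
        den += abs(sim)
    return num / den if den else 0.0

print(round(predict(0, 2, ratings), 2))
```

Because user B is far more similar to user A than user C is, the prediction lands much closer to B's rating of 4 than to C's rating of 5.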
Apache Spark offers high-level APIs in Java, Scala, Python, and R. Although Spark is written in Scala, it offers equally rich APIs in the other languages. I could keep talking about Spark all day because of its awesomeness, but that would be out of scope for this blog; for more insights, please refer to this link. Let's jump straight to building a recommendation engine using MLlib, Spark's machine learning library, from its Python API.

Integrating PySpark with Anaconda

Setting up Apache Spark can be a bit tricky at the initial stage. This post really helps in quickly getting started with PySpark and its Jupyter notebook integration.

Dataset

In order to build our recommendation system, we use the MovieLens dataset. You can find the movielens_rating.csv file that we use in this project here. The data consists of 105,339 ratings applied to 10,329 movies.

Approach for the Recommender System

With collaborative filtering, we make predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating). The underlying assumption is that if user A has the same opinion as user B on one issue, A is more likely to share B's opinion on a different issue x than to share the opinion of a randomly chosen user. At first, people rate different items (like videos, images, games). Then the system predicts a user's rating for an item not yet rated. The new predictions are built upon the existing ratings of other users whose ratings are similar to those of the active user.

Importing Essential Libraries

In order to build the machine learning model with PySpark, we will use the following packages:

findspark - for Jupyter notebook integration with Spark

SparkSession - the entry point for any Spark application

pyspark.ml - the machine learning API for PySpark

pandas - for data import and EDA
Confused about SparkSession? Let's break it down:

pyspark.sql.SparkSession(sparkContext, jsparkSession=None)

This is the entry point for programming Spark with the Dataset and DataFrame API. A SparkSession can be used to create DataFrames, register DataFrames as tables, execute SQL over tables, cache tables, and read parquet files. To create a Spark session, follow the builder pattern below.

Retrieving the Data & Data Preprocessing

Spark 2.x comes with built-in support for loading a .csv file as a PySpark DataFrame. We will use the data.describe().show() function to display summary information about the Spark DataFrame.

Output Screenshot:

We can do a train/test split to evaluate how well our model performs, but keep in mind that it is very hard to know conclusively how well a recommender system is truly working for some topics, especially where subjectivity is involved; for example, not everyone who loves Star Wars is going to love Star Trek, even though a recommendation system may suggest otherwise. As we are using a smaller dataset, an 80:20 train/test split will be sufficient for our use case:

(training, test) = data.randomSplit([0.8, 0.2])

Building the recommendation model using ALS on the training data

Spark provides a machine learning library, MLlib, which aims to make practical machine learning scalable and easy. Below are some terms related to the Spark machine learning library:

Extraction: extracting features from "raw" data.

Transformation: scaling, converting, or modifying features.

Selection: selecting a subset of a larger set of features.

Locality Sensitive Hashing (LSH): a class of algorithms that combines aspects of feature transformation with other algorithms.

The Spark MLlib library provides a collaborative filtering implementation using Alternating Least Squares (ALS).
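To build intuition for what alternating least squares actually does, here is a toy rank-1 sketch in plain Python (my own illustration with a tiny made-up ratings dictionary; MLlib's ALS is higher-rank, regularized, and parallelized, so this is the idea, not the implementation). Each rating r[u][i] is approximated by the product of a user factor and an item factor, and we alternate: fix the item factors and solve for each user factor in closed form, then fix the user factors and solve for each item factor.

```python
# Rank-1 ALS sketch: approximate rating r[(u, i)] by u_fac[u] * i_fac[i].
# Holding one side fixed turns the other into a one-variable least-squares problem.
ratings = {  # (user, item) -> rating, a tiny made-up dataset
    (0, 0): 5, (0, 1): 3,
    (1, 0): 4, (1, 1): 2, (1, 2): 1,
    (2, 2): 4,
}
n_users, n_items = 3, 3
u = [1.0] * n_users  # user factors
v = [1.0] * n_items  # item factors

for _ in range(20):  # alternate: fix v, solve for u; then fix u, solve for v
    for a in range(n_users):
        num = sum(r * v[i] for (usr, i), r in ratings.items() if usr == a)
        den = sum(v[i] ** 2 for (usr, i), _ in ratings.items() if usr == a)
        if den:
            u[a] = num / den
    for b in range(n_items):
        num = sum(r * u[usr] for (usr, i), r in ratings.items() if i == b)
        den = sum(u[usr] ** 2 for (usr, i), _ in ratings.items() if i == b)
        if den:
            v[b] = num / den

pred = {(a, b): u[a] * v[b] for (a, b) in ratings}
sse = sum((ratings[k] - pred[k]) ** 2 for k in ratings)
print(round(sse, 3))  # squared reconstruction error on the observed cells
```

Note that the error is computed only over observed ratings; the learned factors then also give predictions for the empty cells, which is exactly what makes matrix factorization useful for recommendation.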
The implementation in MLlib has these parameters:

numBlocks is the number of blocks used to parallelize computation (set to -1 to auto-configure).

rank is the number of latent factors in the model.

iterations is the number of iterations to run.

lambda specifies the regularization parameter in ALS.

implicitPrefs specifies whether to use the explicit feedback ALS variant or one adapted for implicit feedback data.

alpha is a parameter applicable to the implicit feedback variant of ALS that governs the baseline confidence in preference observations.

Now let's build our ALS model and fit it to the training set created by the randomSplit() function. Note that I am not using a cold start strategy, which is normally used to ensure we don't get NaN evaluation metrics.

Bingo! Our model is fit. The next step is generating recommendations or predictions from the fitted alternating least squares model.

Generating Predictions & Model Evaluation

After successful execution of the Spark jobs, it's time to evaluate the fitted model using the built-in transform() function. This function is more or less similar to the predict() function in a traditional machine learning library (such as scikit-learn): transform() takes the test data, or unseen data, and generates predictions for it.

Output Screenshot

Evaluating a model is a core part of building an effective machine learning model. Here we will use RMSE (root mean squared error) as our evaluation metric. The RMSE describes our error in the units of the rating column.

So What's Next?

We still haven't generated movie recommendations for our users. Let's tackle that now!

Recommending Movies with ALS

So now that we have the model, how do we actually supply a recommendation to a user? The approach here is simple: we take a single user id (for example, user 11) as features and pass it to the trained ALS model, the same way we did with the test data!
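As an aside on the evaluation metric used above: RMSE is easy to compute by hand. A plain-Python sketch of what a regression evaluator does (the rating numbers here are made up for illustration):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between two equal-length lists of ratings."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

true_ratings = [4.0, 3.0, 5.0, 2.0]  # held-out test ratings
model_preds  = [3.5, 3.0, 4.0, 2.5]  # what the model predicted for them
print(rmse(true_ratings, model_preds))
```

Because the squaring happens before averaging, a few large misses hurt the score more than many small ones, and the result is in the same units as the rating column (so an RMSE of 0.9 on a 1-5 scale means predictions are off by roughly one star on average).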
Output Screenshot

Now we will use the model.transform() function to generate recommended movies along with their predicted ratings.

Output Screenshot

End Notes

From this use case, we can see that adding recommendations to your current system is efficient and attractive, as it drives more customer engagement; from a business point of view, it generates great revenue. Moreover, recommendation systems are ultimately shaped by business goals. PySpark definitely has a powerful machine learning API, and it is continuously evolving. I strongly recommend you apply this collaborative filtering approach to other recommendation engine datasets. Keep practicing, keep learning!

The full code along with the dataset used for this project is available on my GitHub.

References

If you like this post, please follow me. If you have noticed any mistakes in the way of thinking, formulas, animations, or code, please let me know. Cheers!
https://medium.com/analytics-vidhya/crafting-recommendation-engine-in-pyspark-a7ca242ad40a
['Shashwat Tiwari']
2020-01-12 13:59:34.092000+00:00
['Machine Learning', 'Pyspark', 'Spark', 'Recommender Systems', 'Big Data']
September 10–21 Karma Progress. Powerful Bids and Scoring Ratings
Hello, dear friends! We are happy to present the new features we have delivered:

Adding pictures

Now you can make your bid even more attractive for potential investors! After all, it is your idea, impression, and passion that determine an investment decision, not only the numbers. From now on, you can attach photos and change their order while creating a bid. So, what is your story? Look how colorful the bids are now!

In order to simplify your work with documents, we added multifile upload.

We created our own sophisticated multifactor CMS system ("Backoffice") where scoring specialists can analyze information and interact with platform participants in the most comfortable way. We have now made the reason for rejection visible. For example, if a borrower didn't present all the necessary data (or we have some additional questions), he or she will see exactly what needs to be corrected. After corrections, the order must be verified again, and the ability to edit any information inside it will be restricted during scoring.

Before an order is placed on the market, a scoring specialist will evaluate it and assign separate rankings for the borrower and for the collateral. We use cross-checked evaluation based on the most relevant databases:

•Federal Migration Service;
•Database of the Unified State Register of Legal Entities, Unified State Register of Private Entrepreneurs (Federal Tax Service);
•Real estate database of Rosreestr (The Federal Service for State Registration, Cadastre and Cartography);
•Federal Bailiff Service;
•Federal Tax Service;
•Federal Service for Financial Monitoring;
•Unified Federal Register of Data on Bankruptcy;
•Unified information system of Government procurement;
•Automobile Inspection Service (STSI);
•Credit reference bureaus (the 3 largest: NBKI, Equifax, OKB);
•Etc. (more than 20 databases).

In the next publications we will provide a full description of the rating criteria, so stay tuned! Cheers ^_^
https://medium.com/karmared/september-10-21-karma-progress-powerful-bids-and-scoring-ratings-b15b00f7933e
['Karma Project']
2018-09-27 05:48:44.613000+00:00
['Development', 'Credit Scoring', 'Blockchain', 'Features', 'Release Notes']
4 Ways to Love, Your Enemy
4. Find common ground

Heck, a few of my best friends to date are my old enemies. After forgiving them, sympathizing with them, and understanding them, we found out we actually had a lot in common. Finding common ground allows you to see past the hate and anger and look at that individual as a human, and possibly one you actually love after all the debris is cleared out. Make new friends with old enemies. Once all the grunt work I stated above is completed, then there is room for growth, for love.

Endgame

Sure, there are bad people out there; not every enemy is meant to be loved. However, there shouldn't be anything stopping you from getting peace of mind for yourself, and from possibly benefiting them as well. This world is consumed with hate; we need more love. And what better way is there to show appreciation than bestowing it upon an enemy? In order to honestly love ourselves and others, we have to understand and accept the nature of humanity. When people are scared, hurt, or sad, they attack. Rid yourself of the hate and anger and spread the love. I haven't been happier since I let go of all my hate. Some instances didn't pan out: my enemies brushed my approach off, and that's OK. As long as you find it within yourself to heal, accept, and forgive them, you can show the maximum amount of love by being an example.
https://meghansmadness.medium.com/4-ways-to-love-your-enemy-333356d7302c
['Meghan Gause']
2020-07-22 12:42:58.168000+00:00
['Self Improvement', 'Mental Health', 'Life', 'Love', 'Forgiveness']
Step by Step Facial Recognition in Python
Step by Step Facial Recognition in Python A simple how-to using Python, Pillow, and a few lines of code…

In this article, I will guide you through creating your own face recognition in images. For this purpose, I will use the Python face_recognition library and Pillow, the Python Imaging Library (PIL). I chose to use Visual Studio Code since I need its integrated terminal.

First, I set up a virtual environment: I install pipenv from my terminal, run pipenv shell to start the virtual environment, and install the face_recognition library.

For this tutorial, I created two folders named known and unknown. The first folder includes pictures of some of the more well-known people in politics, like Barack Obama, Donald Trump, Bernie Sanders, Joe Biden, and Elizabeth Warren. The latter includes different pictures of the people from the first folder, some of the 2020 presidential candidates, and some SNL characters (played by different actors) of Donald Trump, Barack Obama, and Bernie Sanders.

I will run a match on the known and unknown folders to see if there are any pictures of known people in the unknown folder. I can do this easily from the command line by running:

This will go through all the images and show us which images in the second folder match ones from the first. As you can see from the output, Bernie_SNL.jpg, which was performed by Larry David, is matched as Bernie Sanders. To avoid that, I will check the distance of each match, which essentially tells how close a match the images are, by running:

face_recognition --show-distance true ./img/known ./img/unknown

I can see the decimal value of the distance between matched images. I will add the tolerance flag and change the tolerance so the matching algorithm only accepts values under a certain number. Adjusting the tolerance helps get more accurate results. As seen in the above image, Bernie_SNL.jpg did not match the real Bernie Sanders.jpg.
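Under the hood, the library represents each face as a 128-number encoding and declares a match when the Euclidean distance between two encodings falls below the tolerance (0.6 by default). Here is a tiny pure-Python sketch of that decision rule; the three-number "encodings" are made up for illustration, since real ones have 128 dimensions:

```python
import math

TOLERANCE = 0.6  # face_recognition's default; lower = stricter matching

def euclidean_distance(a, b):
    """Straight-line distance between two face encodings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(known_encoding, candidate_encoding, tolerance=TOLERANCE):
    """A face matches when its encoding is closer than `tolerance`."""
    return euclidean_distance(known_encoding, candidate_encoding) < tolerance

bernie_real = [0.10, 0.42, 0.77]   # pretend encoding of Bernie Sanders.jpg
bernie_snl  = [0.35, 0.60, 0.50]   # pretend encoding of Bernie_SNL.jpg

d = euclidean_distance(bernie_real, bernie_snl)
print(round(d, 3), is_match(bernie_real, bernie_snl))
print(is_match(bernie_real, bernie_snl, tolerance=0.4))  # stricter: no match
```

This is exactly why lowering the tolerance on the command line removes the Larry David false positive: the SNL impersonation is close, but not close enough once the threshold is tightened.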
If I just want to get the names of the people in the images, I will use:

face_recognition --show-distance true ./img/known ./img/unknown | cut -d ',' -f2

to get the output below. Let's move one of the unknown people, Andrew Yang, to our known folder and run the command above again. As you see below, Andrew Yang will now also be identified as a known person, and the tool will show his matches from the unknown folder. If we want this process to go faster, we can add the --cpus flag to our command line. Now I will create the Python files to work with the facial recognition library.
https://medium.com/better-programming/step-by-step-face-recognition-in-images-ad0ad302058a
['Ulku Guneysu']
2020-12-15 02:26:45.803000+00:00
['Python', 'Python3', 'Facedetection', 'Face Recognition', 'Data Science']
6 Methods Copywriters Use to Boost Their Confidence as a Writer
Writing has become a passion of mine in the past couple of years, and I struggled a lot at first because I did not have a system in place to get writing done. I didn't know where to start, and I wasn't confident enough. That's why becoming confident in my writing has been one of my main priorities. I have probably mentioned this before, but English is my second language, and it is also the reason I decided to write in English: to practice my grammar and punctuation, and to get more comfortable with the language. The beginning was tough, but persistence has paid off.

I recently came across an interview with one of my favorite writers and marketers, the great copywriter Gary Halbert. In the interview, he talks about a lot of topics, one of them being writing. He goes on to explain some key elements and some limiting beliefs. Listening to his interview gave me a lot of relief about some of the doubts I had, and he also provides some great tips on how to become confident in your writing. Here are some of the ones that have helped me improve my writing and become more confident.

1. The Blah Blah Blah Method

Gary Halbert and Eugene Schwartz are firm believers that writer's block does not exist. Let me explain… Gary Halbert mentions that when he writes copy and hits a point where he runs out of words, he keeps writing "BLAH BLAH BLAH" on the paper. He does not stop, as he believes this is the way to maintain the flow of the writing; when the ideas come back, he writes them down and continues with the flow. The point is to not stop and to discipline yourself to keep the writing state in motion. Eugene Schwartz does something similar: he locks himself in a room for at least 35 minutes and writes nonstop until the time is up, no matter if what he is writing is incoherent. As I said, the idea is to get into the flow; you can always edit later.

2. On the first draft

Another thing he mentions is that you shouldn't care much about your first draft.
Remember, your first draft is not your final product. This applies not only to writing but to everything: a business idea, or other products you may create. The point of your first draft is just to put your ideas on paper. Forget about grammar and punctuation, or how clear your idea is.

"The first draft of anything is shit." ― Ernest Hemingway

The first draft will always look ugly (I wish you could see mine while writing), but after you finish writing, you can always go back and fix things, readjust ideas, and edit. So don't pressure yourself. Just make sure you put your ideas on paper and write.

3. Write what you know

I believe there are two keys to not having writer's block: research, and writing what you know. I have noticed that every time I try to write about a new topic, or about something I'm not familiar with, I always struggle; the reason is that I don't know much about it, and that makes it hard to form the idea. If I do the opposite, it's different. When I write about an experience or something I have a wealth of knowledge in, it is suddenly very easy for me to write. Bear in mind that most writers are also big readers, so they fill their minds with a lot of information and formulate their ideas from it. That's also why you should care about what you write. For example, the purpose of my content is to help others become more confident about the things they are trying to start, through my own experiences. That's why I prioritize learning and experimenting, so I can acquire more experience to teach others. Don't just write for the sake of writing. Care about your readers and give them your best.

4. Ask for help

You can always ask someone to review your writing before you publish something. And you should, in fact. You can ask a friend, a boyfriend or girlfriend, or family. Just let someone see the article.
Admittedly, this is something that is difficult for me to do, because sometimes my ego gets in my way and doesn't let me do the things I should do. But I have gotten better. Don't let anything stop you; ask for help and tips, and do what is necessary to achieve the level of writing that you want.

5. Practice

Now, what use would all these tips be if you don't practice your writing? This past month I set a goal to write at least one hour daily, whether I finish a piece or not. At a minimum, I had to write for an hour a day. The idea is to become better, and this can only be achieved by writing and practicing my craft. Since I started doing this, I have noticed a lot of improvement in my writing and my ideas, and in how easy it has become; best of all, it is not as painful as it was in the past.

6. Take time to check for spelling mistakes

Now, once you have done all of this, it is time to take a small break. Yes, disconnect yourself from your writing, for a couple of hours or a day if necessary. When you have finished your first draft, you should let at least a day pass, and then go back with a clear head to revise and fix your spelling and grammar mistakes. Give it some time and read it out loud. Then stop, take a break, go back, and revisit it again. This will help you clarify your ideas and find things that need improvement.
https://medium.com/bulletproof-writers/6-methods-copywriters-use-to-boost-their-confidence-as-a-writer-14bcf5446cc5
['Al Roman']
2020-09-03 19:59:47.098000+00:00
['Copywriting', 'Confidence', 'Writing', 'Writer', 'Writing Tips']
How to Build a Data Science Portfolio | Portfolio for Data Scientists
How do you get a data scientist job? Knowing enough statistics, machine learning, programming, etc. to be able to get a job is difficult. One thing I've noticed recently is that quite a few people have the qualifications needed for a job but no portfolio. While a resume matters, having a portfolio of public evidence of your data science skills can do wonders for your job prospects. Even if you have a referral, being able to show prospective employers what you can do, rather than simply telling them you can do something, is important. This article will include links to where different data science professionals (data science managers, data scientists, social media icons, or some combination thereof) and others talk about what to have in a portfolio and how to get noticed. So let's proceed with that!

The Value of a Portfolio

Besides the value of learning by building one, a portfolio is important because it can help you get a job. For this post, let's define a portfolio as public evidence of your data science skills. I got this definition from David Robinson, Chief Data Scientist at DataCamp, when Marissa Gemma interviewed him on the Mode Analytics blog. He was asked how he landed his first job in the industry and he said:

For me, the most effective approach was doing public work. I blogged and did a lot of open-source development, and these helped give public evidence of my data science skills. But the way I secured my first job in the industry was an especially notable example of public work paying off. During my PhD, I was an active answerer on the programming forum Stack Overflow, and an engineer at the company came across one of my answers (one explaining the intuition behind the beta distribution). He was so impressed with the answer that he contacted me [through Twitter], and a few interviews later I was hired.
The more public work you do, the higher the chance of a lucky break like that: someone seeing your work and pointing you to a job opening, or an interviewer having already heard of the work you've done. People sometimes forget that software engineers and data scientists also google their problems. If these same people get their problems solved by reading your public work, they will think better of you and may reach out to you.

A Portfolio as a Skill Prerequisite

Most companies want to see at least a bit of real-life experience, even for an entry-level job. You've maybe seen posts like the one below. The dilemma is: how can you get experience when you need experience to get your first job? If there is an answer, the answer is projects. Projects are maybe the best substitute for work experience or, as Angshuman has said, if you have no work experience in data science, then you simply have to do independent projects. In fact, when Jacqueline Nolis interviews candidates, she wants to hear a review of a recent problem/project you faced:

I want to hear about a project they recently worked on. I ask them how the project began, how they decided it was worth the time and effort, their approach, and their outcomes. I also ask them what they learned from the project. I learn a lot from the answers to this question: whether they can tell a story, whether the problem connects to the bigger picture, and whether they put hard work into something.

If you don't have any work experience related to data science, your best choice here is to talk about a data science project you've worked on.

Project Types to Include in a Portfolio

Data science is such a broad field that it is hard to know what kinds of projects hiring managers want to see. At Kaggle's CareerCon 2018 (video), William Chen, Data Science Manager at Quora, shared his thoughts on the matter:
I favor projects where people show they are interested in data in ways that go beyond homework assignments: some kind of final class project where they explore an interesting dataset and discover interesting results… Put effort into the write-up… I would love to see good write-ups where people find interesting and novel things… make some visualizations and share their work.

Many people understand the importance of building projects, but one thing many worry about is where to get an interesting dataset and what to do with it. Jason Goodman, a data scientist at Airbnb, has a post, Advice on Building Data Portfolio Projects, where he talks about several different project ideas and gives good suggestions on what kinds of datasets to use. He also echoes one of William's points about working with interesting data:

I think the best portfolio projects are less about doing fancy modeling and more about working with interesting data. Many people do something with financial data or Twitter data; that can work, but the data isn't inherently that interesting, so you are working uphill.

Another of his points in the article is that web scraping is a great way to obtain interesting data. If you're interested in learning how to create your own dataset through web scraping with Python, you can see my article here. If you come from academia, it is important to remember that your thesis can count as a project (a very big one). You can hear William Chen talk about it here.

Jeremie Harris, in The 4 fastest ways to not get hired as a data scientist, said:

It's hard to think of a faster way to get your resume tossed into the "definite no" pile than featuring work you've done on trivial proof-of-concept datasets among your highlighted personal projects. If in doubt, here are some projects that hurt you more than they help:

* Survival classification on the Titanic dataset.
* Handwritten digit recognition on the MNIST dataset.

* Flower species classification with the iris dataset.

The following picture shows partial samples of the Titanic (A), MNIST (B), and iris (C) datasets. There aren't many ways to differentiate yourself from other candidates using these datasets, so make sure to feature novel projects.

Iterate on Your Portfolio

Favio Vazquez has an outstanding post on how he got his job as a data scientist, in which he naturally advises building a portfolio:

Have a portfolio. If you are looking for a serious, well-paid job in data science, do some projects with real data. If you can, post them on GitHub. Apart from Kaggle competitions, find something you love or a problem you want to solve, and use your skills to do it.

One of the more important takeaways is that as you go through the job search, you have to keep iterating. As Favio recounts:

I applied to almost 125 jobs (seriously, maybe you have applied to a lot more); I got about 25-30 replies. Some were just: Thank you, but no. And I got almost 15 interviews. I learned from each one. Got better. I had to deal with a lot of rejection, something I was actually not prepared for. But I loved the process of interviewing (not all of them, to be honest). I studied a lot, programmed every day, and read a lot of articles and posts. They helped a lot.

As you learn and develop, you should also keep refreshing your portfolio. Many other advice posts share the same sentiment. As Jason Goodman put it, the work is not done when you publish it online; do not be afraid to keep adding to or updating your projects after they have been published! This advice is especially valid when looking for a job. There are also examples of successful people, like Airbnb data scientist Kelly Peng, who persevered and kept working to improve. In one of her blog posts, she goes over how many positions she applied and interviewed for.
She clearly applied to a lot of companies and kept persevering. In her post she also discusses how you should keep learning from your interview experiences:

Take notes on all the interview questions you are asked, especially those you failed to answer. You may fail again, but don't fail at the same place. You should always be learning and improving.

Fit Your Portfolio into a 1-Page Resume

One way people will find your portfolio is through your resume, so it is worth mentioning. A data science portfolio shows off your professional skills in detail; your resume is a chance to describe your qualifications succinctly and fit them to a particular position. Recruiters and hiring managers skim resumes very quickly, and you have only a limited time to make an impression. Improving your resume improves your odds of getting an interview, and you have to make every single line and every single section of your resume count.

1. Length: Keep it simple and at most one page. A simple one-column resume is recommended, because it is easy to skim quickly.

2. Objective: Don't include one. Objectives don't help you distinguish yourself from other people, and they take room away from the more valuable sections (skills, projects, experience, etc.). Cover letters are very optional unless you sincerely personalize them.

3. Coursework: List coursework relevant to the job description.

4. Skills: Don't give your skills numerical scores. If you want to rate your skills, use words like proficient or familiar, or leave ratings out entirely.

5. Skills: Do list the technical skills mentioned in the job description. The order in which you list your skills suggests what you are best at.

6. Projects: Don't list common projects or homework. They aren't distinctive enough to differentiate you from other candidates. List novel projects.

7. Projects: Show results and include links.
If you have competed in a Kaggle competition, include your percentile rank, as it lets the person reading your resume see where you placed. There is also room in the projects section for links to write-ups and articles, since they invite the hiring manager or recruiter to dive deeper (bias toward real-life, messy problems where you discovered something new). 8. Portfolio: Complete your online presence. A LinkedIn profile is the most common; it’s essentially an expanded resume. GitHub and Kaggle profiles can help show off your work. Fill out each profile and link to your other sites. Flesh out the details of your GitHub repositories, and include links to the platforms where you share your knowledge (Medium, Quora). Data science in particular is about sharing information and explaining to other people what the data means. You don’t have to do them all; just pick a few and do them well (more on this later). Social Media Importance : This is very similar to the section on the value of a portfolio, only broken into parts. You can support your resume with a GitHub site, a Kaggle profile, a Stack Overflow account, and so on. Filled-out online profiles can be a positive signal for hiring managers. Generally speaking, when I assess a candidate, I’m curious to see what they’ve posted online, even if it’s unfinished or unpolished; sharing something is almost certainly better than sharing nothing. As Will Stanton said, the reason data scientists want to see public work is that data scientists use these tools to discuss their own findings and to find answers to questions. If you are using these tools, you are signaling to data scientists that you are one of them, even if you have never held the title of data scientist before. A lot of data science is about collaboration and data presentation, so it is good to have these profiles online. 
Besides being useful in themselves and giving you practical exposure to these channels, they can also help you get noticed and lead people to your resume. People can and do find your resume online via many different outlets (LinkedIn, Facebook, Twitter, Kaggle, email, Stack Overflow, Tableau Public, Quora, YouTube…). Note also that the various forms of social media feed into one another. Since so much of data science is about sharing outcomes, each repository needs a README.md with an overview of your work. Be sure that the README.md file explains explicitly what your project is, what it does, and how to execute the code. Kaggle : To be clear, completing one Kaggle competition doesn’t make someone a data scientist, any more than taking one class, attending one seminar or workshop, studying one dataset, or reading one data science book does. Working on competitions improves your exposure and adds to your expertise; it is a complement to the other work, not the sole litmus test of one’s expertise in data science. I totally concur with Reshama’s view on this: taking a class on something in particular doesn’t make you a specialist, nor does it grant you a career. I’ve actually taken a course called Python for Data Analysis that goes into great depth on Pandas, Matplotlib, and Seaborn. It won’t automatically give you a job or make you an instant expert in Matplotlib or Seaborn, but it will improve your expertise, show you how the libraries work, and help you develop your portfolio. Anything you do will improve your employability. LinkedIn : Unlike a length-constrained resume, a LinkedIn profile lets you describe your projects and work experience in greater detail. Udacity has a handbook on building a successful LinkedIn profile. The search feature is an integral aspect of LinkedIn, and you must have appropriate keywords in your profile in order to turn up in results. Recruiters also use LinkedIn to scan for candidates. 
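To make the README advice above concrete, here is a minimal sketch of what such a file might look like. The project name, commands, and section layout are illustrative assumptions, not a prescribed standard:

```markdown
# Churn Prediction (example project name)

One-paragraph overview: what question the project answers and what the
headline result is (e.g., "predicts customer churn from usage logs").

## Data
Where the data comes from and how to obtain it.

## How to run
    pip install -r requirements.txt
    python train.py

## Results
Key findings, with a link to the full write-up or notebook.
```

Even a skeleton like this answers the three questions a visitor has: what is this, what does it do, and how do I run it.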
LinkedIn also lets you see which businesses have searched for you and who has viewed your profile. Tableau Public : Not every job in data science uses Tableau or another BI tool. If you are applying to jobs where these tools are used, though, it is worth knowing that there are places where you can publish dashboards publicly. For instance, if you say you’re learning Tableau or already know it, put a few dashboards on Tableau Public. While many businesses may be fine with you learning Tableau on the job, having public evidence of your Tableau skills can only help. Conclusion : For many years, a good resume was the main way for job applicants to convey their talents to prospective employers. These days there is more than one way to show off your talents and land a career, and a public evidentiary portfolio brings you benefits you would not otherwise have. Importantly, building a portfolio is an iterative process: as your experience grows, you should revise it over time. Never stop studying, and never stop growing. Even this blog post is being updated as feedback and awareness grow. For a career guide and roadmap for data science and artificial intelligence, and for national and international internships, please refer to:
https://medium.com/dev-genius/how-to-build-data-science-portfolio-portfolio-for-data-scientist-fde92961dbb2
['Shaik Sameeruddin']
2020-09-06 22:30:19.279000+00:00
['Machine Learning', 'Artificial Intelligence', 'Data Science', 'Deep Learning', 'Towards Data Science']
Machine Learning for Inventory Optimization
Machine Learning for Inventory Optimization by Larry Snyder There is currently a lot of buzz about using machine learning (ML) techniques for predicting the future state of a supply chain (demand forecasting being the most popular use case). ML algorithms predict future behavior based on past occurrences and their associated environment. In this blog post, we aim to start a new kind of buzz by talking about using ML for prescribing how supply chains should operate (in order to achieve an optimal state). In short, we’ll use ML for prescriptive, rather than predictive, analytics. We will use a case to highlight the application of ML for supply chain optimization. Imagine a supermarket that sells a certain brand of potato chips called Super Crispies. The demand for Super Crispies on a given day depends on a lot of factors, like the day of the week (demand is higher near the weekend), weather (demand is higher on hotter, sunnier days), and even macroeconomic factors like the stock market (Super Crispies are a high-end item, so demand is higher when the market is doing well). These are called features. The supermarket has kept terrific data on historical sales of Super Crispies, including not only the demand, but also the values of the features, on each day. Here’s a snippet of the database: Each case of unsold Super Crispies costs the supermarket $0.05 per day in holding costs, and each case of demand for Super Crispies that cannot be met because the supermarket has run out of inventory costs $0.70 in stockout costs. Let’s say today is a Thursday with a high temperature of 80–84 degrees and sun, yesterday’s stock market was down 0.5%-1%, there’s no upcoming holiday, and there is a current promotion. So how many cases of Super Crispies should the supermarket hold in inventory? 
If we knew the probability distribution for the demand on a day with these particular values of the features — for example, a normal distribution with a mean of 9.4 and a standard deviation of 2.1 — then we could simply apply the newsvendor problem to determine the optimal safety-stock level. Unfortunately, we don’t know this probability distribution. One way to get around this missing data is by fitting a distribution to the historical demands for days with the same features as today. For example, here is a histogram of demands on days with 80–84 degrees, sun, and so on: The demands appear to follow a lognormal distribution; the best fit has parameters 𝜇 = 2.41 and 𝜎 = 0.29. This distribution is drawn on the plot, as well. Using this information, the newsvendor problem then tells us that the optimal safety stock level is 5.7 cases, for which the supermarket can expect to incur an average holding and stockout cost of $0.41 per day. As you can see, the lognormal distribution is not a great fit, however. This is not surprising, given that we have only 14 total data points matching these values of the features. Nevertheless, this is an intuitive and reasonable way to solve this problem. We call it separated estimation and optimization (SEO) — we first estimate the demand distribution, and then use that in an optimization model (the newsvendor problem) to solve the problem. Even though the SEO approach is common and effective, we recently asked ourselves whether the current availability of richer data sources might suggest that machine learning (ML) could be a useful tool for optimizing inventory — of Super Crispies or any other product. My colleague Martin Takáč, my Ph.D. student Afshin Oroojlooy, and I recently wrote a paper about the use of a specific branch of machine learning called deep neural networks (DNN), also known as deep learning, to solve problems like the one discussed above. 
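The SEO calculation described above can be sketched in a few lines. This is a hedged illustration using the post's stated numbers ($0.05 holding, $0.70 stockout, fitted lognormal with 𝜇 = 2.41 and 𝜎 = 0.29); it reproduces the stated safety stock only approximately (≈5.6 vs. 5.7 cases, presumably due to rounding in the fitted parameters):

```python
import math
from statistics import NormalDist  # stdlib, Python 3.8+

# Costs from the article: $0.05/day holding, $0.70 stockout per case.
h, p = 0.05, 0.70

# Newsvendor critical ratio: stock up to the p/(p+h) quantile of demand.
critical_ratio = p / (p + h)  # ~0.933

# Fitted lognormal demand parameters from the article's histogram.
mu, sigma = 2.41, 0.29

# Optimal base-stock level = critical-ratio quantile of the lognormal:
# exp(mu + sigma * z), where z is the standard-normal quantile.
z = NormalDist().inv_cdf(critical_ratio)
q_star = math.exp(mu + sigma * z)

# Mean demand and implied safety stock (base stock minus mean demand).
mean_demand = math.exp(mu + sigma**2 / 2)
safety_stock = q_star - mean_demand

print(f"critical ratio   = {critical_ratio:.3f}")
print(f"optimal stock    = {q_star:.1f} cases")
print(f"safety stock     = {safety_stock:.1f} cases")
```

Note how the high stockout-to-holding cost ratio pushes the quantile to ~0.93, well above the mean: the supermarket deliberately over-stocks because running out is 14 times as expensive as holding a leftover case.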
DNN tries to build a model that relates inputs (like temperature, day of the week, etc.) and outputs (like demand). Suppose the demand was a linear function of the temperature, like D = 0.1T. In this case, there’s no role for DNN — something much simpler, like linear regression, is all we need. But the demand for Super Crispies does not have such a simple relationship to the features. We can’t write a function that tells us D when we know the features. DNN has proven very effective in teasing out complicated relationships like this one. We won’t go into further details about deep learning here. There’s lots of information about it on the web however; here’s a good place to start. Deep learning assesses the quality of a solution using a loss function, which measures how far the output is from its target. If the output is a demand prediction, then the loss function would measure how close the prediction was to the actual demands — sort of like a forecast error. While DNN is well suited for predictive analytics tasks such as demand forecasting, it is less commonly used for prescriptive analytics — optimization — which is what we aimed to make it do in our paper. To do this, we moved away from the typical loss functions that measure distance from a target (because in optimization we don’t usually know what the target is), and instead used a loss function that measures the cost of a solution. In fact, our loss function is very similar to the newsvendor objective function. In our paper, we test our DNN method for optimizing inventory levels by using a data set containing 13,170 demand records for 23 product categories from a supermarket retailer in 1997 and 1998 (available here). This data set isn’t too different from the fictional Super Crispies data set described above. The features in this data set are department, day of week, and month of year. We used our DNN method to optimize the inventory level for a range of holding cost values, keeping the shortage cost fixed. 
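The idea of swapping a forecast-style loss for a cost-based one can be sketched as follows. This is a minimal plain-Python illustration of the concept, not the authors' actual TensorFlow implementation; the cost values reuse the Super Crispies numbers:

```python
def newsvendor_loss(q, d, h=0.05, p=0.70):
    """Cost-based loss: holding cost on leftover stock plus stockout
    cost on unmet demand. Training a network to minimize this drives
    its output q toward the cost-optimal inventory level directly,
    rather than toward a demand forecast."""
    leftover = max(q - d, 0.0)   # unsold cases
    shortage = max(d - q, 0.0)   # unmet demand
    return h * leftover + p * shortage

def forecast_loss(q, d):
    """A predictive loss, by contrast, just measures distance to the
    target demand and treats over- and under-shooting symmetrically."""
    return (q - d) ** 2

# Because stockouts cost 14x more than holding, the cost-based loss is
# asymmetric: for demand 10, stocking 12 is far cheaper than stocking 8,
# while the forecast loss rates the two errors identically.
print(newsvendor_loss(12, 10))  # 0.10
print(newsvendor_loss(8, 10))   # 1.40
print(forecast_loss(12, 10), forecast_loss(8, 10))  # 4 4
```

The asymmetry is the whole point: a model trained on `forecast_loss` learns to predict demand, while one trained on `newsvendor_loss` learns to recommend stock levels.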
We also implemented several other ML algorithms that do not use deep learning. Here are the results: The red curve plots the cost of the DNN method. The yellow curve is for the SEO method discussed above. The other curves represent other ML methods. As you can see, our method beats the other methods. It saves anywhere from 18% to 30% over the SEO method, depending on the holding/stockout cost ratio. One downside of DNN is that it requires the network to be “trained” using lots of historical data. Even if that data is available (a big “if”), training takes lots of computational power — over 12 hours of computation time, in our case, using the state-of-the-art DNN code TensorFlow. On the other hand, once the model is trained, it can recommend new inventory levels in real time. If your data comes from a single probability distribution, and you know that distribution and its parameters, then SEO or another simple approach is probably your best choice. But if your data set is noisy, as many real-world data sets are, and you have a rich history to work from, then DNN may be a great choice for optimizing your inventory. We think DNN and other ML methods will find more and more uses in the prescriptive analytics space in the near future. (Our technical paper is available here.) _________________________________________________________________ If you liked this blog post, check out more of our work, follow us on social media (Twitter, LinkedIn, and Facebook), or join us for our free monthly Academy webinars.
https://medium.com/opex-analytics/machine-learning-for-inventory-optimization-38a9ac86a80a
['Opex Analytics']
2019-07-25 14:24:46.671000+00:00
['Machine Learning', 'Artificial Intelligence', 'Operations Research', 'Deep Learning', 'Analytics']
Comparing for Loop with ES6 forEach Method
Before starting, let’s check what a for loop is and what forEach is. For Loop for is a keyword for creating looping statements, just like while and do…while. The JS engine can directly understand what it is. forEach forEach is a function defined on Array.prototype , just like all other array methods. The engine can’t understand it directly; it needs to look up the function definition in order to execute it. For vs forEach Choosing one over the other can have a huge impact on performance. When you have to deal with a large array, choosing the right one matters. Let’s see an example below.

// create an array with 100K entries
arr = [];
for (var i = 0; i < 100000; i++) {
  arr.push(i);
}

// for loop
console.time("For Loop");
console.groupCollapsed();
for (i = 0; i < arr.length; i++) {
  console.log(arr[i]);
}
console.groupEnd();
console.timeEnd("For Loop");

// forEach
console.time("forEach");
console.groupCollapsed();
arr.forEach(x => console.log(x));
console.groupEnd();
console.timeEnd("forEach");

The above code will log 1–100K on the console along with the time it takes to execute each version. The result is given below (on a laptop with 6GB RAM): forEach took 6 more seconds to execute. Logging to the console usually takes some time, so 50+ seconds is reasonable. Let’s see another example.

// initialize an array with 100K entries
arr = [], a = [], b = [];
for (var i = 0; i < 100000; i++) {
  arr.push(i);
}

// for loop
console.time("For Loop");
console.groupCollapsed();
for (i = 0; i < arr.length; i++) {
  a.push(arr[i]);
}
console.groupEnd();
console.timeEnd("For Loop");

// forEach
console.time("forEach");
console.groupCollapsed();
arr.forEach(x => b.push(x));
console.groupEnd();
console.timeEnd("forEach");

The above simply copies an array of 100K entries into two other arrays. Instead of a = arr and b = arr , we push the entries one by one. The time taken is given below: forEach took more than twice the time taken by the for loop. 
When you are dealing with a small array, you can choose forEach, as it is much simpler to write and there isn’t much difference in performance. Why do people fall for forEach? forEach is easy to write. Callback functions are used: no index variable is required, and the callback can directly access the values one by one. It’s a little cleaner than a for loop. Things they forget when moving to forEach
https://medium.com/swlh/comparing-for-loop-with-es6-foreach-method-ed5a8fd9636b
['Sagar V']
2019-12-07 10:01:01.666000+00:00
['JavaScript', 'Programming', 'Web Development', 'Software Engineering', 'Software Development']
5 UX Tips to Master Mobile App Onboarding
A permanent relationship between user and product is one of the toughest things for product creators to achieve. Despite all their efforts, sometimes the journey with the app ends up even shorter than a one-time date. Data reveals a merciless truth: 1 out of 4 people who download an app never return to it after first use. Of course, stats don’t paint the whole picture, but they do reveal some insights about users’ first impressions of apps. The onboarding process is crucial in creating that initial experience. It starts right after a user launches your app for the first time, and it is the perfect place for you to show your product’s key features and its value. However, overwhelming users with information is a great way to annoy them; if you want to lose them in a few seconds, poorly designed onboarding is a good choice. But hey! A well-designed onboarding experience increases the likelihood that users return to the app. How do you find the golden mean? hello gif from giphy Do you want to make users fall in love with your app at first sight? Follow these tips ⏬ ⏬ 1. Less is more This means reducing screens and effort for users. Show the most crucial features, not all of them. Sometimes 5 screens or fewer is enough; more screens may overwhelm users. It’s no secret: we’ve all been guilty of mindlessly tapping dozens of “next” buttons at the speed of light. That’s why you should reduce user inputs to the absolutely necessary minimum. Not everything is important at the entry level. design by Daniel Tkachenko 2. Users’ value Focus on users’ needs. Tell the audience how your app will improve their lives, and show what brings them the biggest value. They’ve already downloaded it, so assure them that their search for the perfect product is over. Pocket app perfectly shows users’ benefits 3. Let’s play Encourage users to interact. A series of actions gets them engaged, and it’s also an easy way to start personalizing the experience. 
Gather the most crucial information about the user at this point, but in an interactive way. interactive onboarding from Habitify app 4. “Are we there yet?” Only a few of us prefer never-ending journeys over the feeling of progress. Users may drop the app when the end of the onboarding is unknown. Just show a progress bar to give them a hint of when the process will be over. design by Dreamworks and last but not least… 5. Hello! Welcome your users! It’s as easy as it sounds. A hello message makes them feel more welcome in the application.
https://medium.com/beautiful-code-smart-design-by-10clouds/5-ux-tips-to-master-the-app-onboarding-6ff8167ae7c1
['Agnieszka Cieplińska']
2019-07-05 11:45:49.939000+00:00
['Design', 'App Design', 'Mobile', 'Apps', 'UX']
Your 2 Percent
Your 2 Percent What 98 Percent Won’t Practice Image by Brenda Geisse from Pixabay From middle school through my mid-forties, I was a professional classical musician. During the journey, I learned several lessons that apply to any career and creative endeavor you may choose to undertake. I’ll share with you the following: Your 2 Percent My last flute teacher was Julius Baker, the former principal flutist of the New York Philharmonic. It would be an understatement to say that it was an honor to study with him. The best lesson he taught me was: “Practice the 2 percent you don’t know.” He explained that out of all the flutists who went to study with him, he could predict which parts of pieces they would stumble on, either by playing wrong notes or rhythms. He said, “You have to find the two percent you don’t know, and then practice it until you get it right because 98 percent of people won’t do it. They’ll keep stumbling over the same notes, and continue to play what they love playing because it’s easy.” Think about the two percent in your own life. What are you avoiding in practice, what are you stumbling on, and what are you continuing to do because it’s easy? The rooms are small, then become larger During many years of studying music, this is what I learned: the first rooms that you study in, and take your lessons in, are small. If your first music teacher isn’t coming to your parent’s house, you’re going to visit the teacher in a small studio. If you’re lucky, the studio’s room has a small upright piano for accompaniment, and hopefully, the piano plays in tune. Then you work with your teacher, and if the teacher is ambitious, you’ll be encouraged to start auditioning for various music groups and ensembles. Perhaps you’ll be encouraged to hold recitals. Every step at this point is a beginning step, but over time and circumstance, the steps become bigger. 
You’ll leave your first teacher and progress to auditioning and studying with teachers who are professional musicians. At this point, the rooms and the stages become bigger. You’ll walk into a professional musician’s studio, and it will have a grand piano that plays in tune and is maintained by a professional piano tuner. The artwork on the studio walls will also be different. You’ll see oil paintings instead of framed prints. The only thing that remains constant is the moment you take out your instrument and are then asked what you’ll be playing. You’ll have five to ten minutes to demonstrate what you’re able to do, and the professional will assess whether or not they want to work with you. However, moving from the small rooms to the bigger ones requires the hours of practice you’ve done behind the scenes when no one is watching, and no one is listening. Those are the moments that separate the 2 percent from the 98 percent; the times you are honing your craft instead of binge-watching Netflix programs. I didn’t show up to play for the principal flutist of the New York Philharmonic unprepared. I showed up through a professional introduction, and after multiple hours of practicing and performing. Be willing to get on an airplane By complete luck, I was born in New York City, a central hub for arts and music. However, one of the greatest flutists at the time was Jean Pierre Rampal, who lived in France. I did not have the fortune of meeting or studying with Mr. Rampal, although I attended several of his performances at Lincoln Center. I discovered that one of Mr. Rampal’s students, a prominent musician in his own right, was going to teach a class in Italy that summer. I decided to apply to the class and bought an airplane ticket to Italy. I had no idea if I would be accepted, as the class was by audition only. I thought, “Even if I don’t get accepted, I will have a vacation.” I was the only American who showed up for the class. I auditioned and was accepted. 
Three days later, I was studying with the Deutsche Grammophon recording artist, Patrick Gallois. If you’re seeking out the best professional opportunities, you need to take calculated risks. Be willing to get on an airplane, to meet the professionals, and to be in the places where you can promote your talent. To this day I’m still in contact with musicians I’ve met from years ago, who are lifelong friends. Putting it together: Start with where you are No one is born an expert, although, at an early age, you may be fortunate to discover a talent or two. My talents were reading, writing, and music. I was horrible at math and science. By the time I was in high school, it was evident that music was emerging as the strongest talent. Several of my teachers took interest in that talent and nurtured it. Whether you are young, old, or advanced, seek out the best teachers who are your cheerleaders. Listen to their constructive advice. They’ve gone before you; they’ve done the work. They will also be honest with you if you are off track or heading in the wrong direction. Before taking up the flute, I cycled through singing, playing the guitar, and playing the piano. Music was the overall draw, but it took a while to discover the flute was my instrument of choice. Take the time to experiment with your talent and to discover its exact mode of expression. Then follow and develop your talent with all your heart and resources. Remember your two percent and practice it well.
https://medium.com/better-advice/your-2-percent-476b7f5cdeba
['Yve Laran']
2020-11-17 07:37:42.453000+00:00
['Self-awareness', 'Self Improvement', 'Life Lessons', 'Advice', 'Life']
‘A safe space to draw’
Why and when to draw After a short meditation exercise, we talked through some research about how drawing benefits the mind. For example, drawing stimulates creativity, helps you focus, helps you solve problems, and can reduce stress. For more on this topic I recommend Cara Bean’s cute little illustrated booklet ‘Why Draw?’ We also discussed how drawing and visually communicating with other people can help us understand one another and collaborate better; for more on this, see the book Meeting Design. We also discussed scenarios in which drawing is helpful, the first being when you’re feeling stuck and computer work isn’t solving it. Does that sound familiar? Sometimes you just can’t wrap your mind around a topic unless you map it out and process it in physical space. Our brains only get the benefits of drawing when we are actually ‘drawing’ — not typing or creating visuals on the computer. Recently I’ve gotten stuck while planning a presentation, trying to figure out a trip itinerary, and planning this very workshop. In each case, once I realized I was at an impasse, I grabbed a big piece of paper (11x17 is a great size for this) and started mapping out my thoughts. It helped. No matter how big or small the problem is, drawing it always helps me distill my thinking, and I think it’s one of the most powerful tools we can use to get unstuck. Beyond unraveling your thoughts, another impactful way to use drawing is to distill what you are hearing from other people: “I think you’re saying…” It’s like visual active listening, and it is one of the best things you can do to improve collaboration with others. I’ve done this often in meetings, jumping up to the whiteboard to take visual notes and draw out diagrams. It is an immensely helpful way to bring the group into agreement about what we are — and are not — talking about. Of course it’s even better if you can draw with others. 
Collaborating visually, whether on paper, real or virtual whiteboards, or through other tools, is a more efficient and fun way to work with your team. After discussing these and a few other scenarios, we jumped right into drawing. Drawing to calm the mind We began with a few easy pattern exercises to help people calm their minds and get used to putting pen to paper. I had them draw tight spirals: starting in the center, spiral out and keep the lines as close together as you can without touching. There is something special about creating spirals! As I draw them, I can feel a sensation in the front of my brain, like it’s being massaged. It is meditative. You should try it right now and see what I mean. Here’s another pattern with circles: We did a few more similar pattern doodles, and after the workshop the attendees told me that these mind-calming drawing exercises were their favorite part of the day. There is a whole series of books out there to teach people how to create simple, brain-soothing patterns; at the recent Graphic Medicine Conference I met Sandy Steen Bartholomew, creator of some of the ZenTangle series of books and cards, and I highly recommend her books if you’re interested in this topic. I bought the Tangles of Santa Fe, which has lots of very cool southwestern patterns. On the topic of calming the mind, we also discussed how nature can help us calm down and reduce our stress, be more creative, and improve our problem-solving (actually a lot of the same benefits as doodling!). So during the day I diffused essential oils and brought print-outs of nature scenes as inspiration.
https://medium.com/pictal-health/a-safe-space-to-draw-4078f99cf0fa
['Katie Mccurdy']
2019-04-10 00:30:28.116000+00:00
['Healthcare', 'Doodle', 'Design', 'Drawing', 'Visual Thinking']
Ubuntu
Ego says I am me. Higher self says not so fast. It’s not I, it’s we. The higher self says. I exist because we are. This is ubuntu. Together we can. Accomplish anything. Alone we will die.
https://medium.com/imperfect-words/ubuntu-e77cfe93d1f7
['Jim Mcaulay']
2020-04-05 13:22:23.536000+00:00
['Haiku', 'Theology', 'Ubuntu Not Technology', 'Desmond Tutu', 'Psychology']
Amazon Halo: Accessory or Standalone Wearable?
The Amazon Halo seems like another attempt at Amazon trying to get outside the home. Before the Halo it was the Amazon Echo Loop. Unlike Apple and Google, Amazon consistently struggles to be with their customers at all times. A wearable is a great way to get into the market, but like the Echo Loop, it is not the best at what it tries to do. However, everything so far that Amazon is telling me about my heartbeat, tone, and body mass index seems relatively accurate. Additionally, the Halo is fun to play with once in a while. Photo: Amazon Tone By itself, the Amazon Halo is a very interesting concept. I enjoy playing with tone to see how I felt that day or during a specific moment. But in general, I usually find myself asking, ‘why?’ The Amazon Halo’s tone feature is more akin to a mood ring for your throat. There are times where I will look back and think, “ah that’s interesting. I didn’t think I sounded that way.” But half the time, the people I’m with don’t notice it either. For example, I work in customer service and I answer phones a lot. It’s not my favorite thing to do, but I try to be polite and no one ever said I’m rude. In fact, I usually get praises for my customer service. Amazon doesn’t praise me, though. They know when I’m happy, annoyed, or frustrated. I’m not going to say that’s scary, but it is weird to think about and it raises the biggest question for me about Amazon Halo. Why do I need to know this? Is it a bit disingenuous to not mean every ‘have a great day!’ or ‘thank you for calling!’ Yes, but that’s what it means to be human. The Halo’s biggest failure when it comes to tone is that it’s not human. It can’t detect the difference between my genuine disgust of another human being and my annoyance at needing to work, or even my exhaustion when I’m talking to someone. To Amazon, all of these are the same thing because of my ‘tone.’ They aren’t able to see my face and they don’t mean the same thing. 
By itself there is no issue with knowing you were disingenuous, nor is there anything wrong with how Tone works. It is very accurate most of the time, and aside from these small human quirks, there’s not much else to argue about. My biggest gripe with Tone is how frequently it checks my tone. For the days I have it on, it doesn’t check as often as I thought it would, based on the advertising. Instead, it records periodically. This is of course to save battery and avoid always having a microphone on you. There are two settings for recording, Less Tone and More Tone, but I couldn’t really tell the difference aside from my battery life. One could always bypass the need for these settings by recording specific conversations or by using live tone, though. The Tone feature is very accurate from what I’ve seen. After I drafted this, I decided to run it through Tone’s feature for recording a specific conversation. It was able to tell I was focused, which makes sense since I was reading from a script. Beyond that, Tone is amazing to show off to people who were skeptical of the feature. Both my brother and girlfriend hated the idea of Amazon listening to you, but after they saw the live feature, they wanted to try it themselves. It’s neat, but even then, everyone forgot about it pretty quickly. I personally saw the novelty of it wear off pretty fast, and I kept the band muted most of the time because it was a drain on the battery. I really only show it off when I use the live tone feature. Body Scan Body Scan is the second biggest feature the Halo advertises. I didn’t mind taking pictures of myself to send to Jeff Bezos. In fact, I was laughing the whole time, but it does require you to put faith in Amazon’s security and promises about what they do with these images. I’m 180lbs, 5’10” and I haven’t worked out in about 6 months, so I expected my Body Mass Index (BMI) to be higher than it was previously, and it was. 
The measurements I took in May put me at 18–19% BMI. Currently Amazon says that my BMI is 20.2%. So it’s not a huge difference, but looking at the BMI slider, I would say that I am closer to 22%. Indeed, the most interesting thing about these pictures is that I learned I should be doing more leg and glute exercises. I wasn’t expecting it to be completely accurate, but it’s nice to have at least an estimate, as calculating BMI is hard without the proper equipment. That said, be warned that putting too much weight on what the pictures say is not wise, so I would hesitate to hold it up as the most accurate reading. Even Amazon cautions this, stating in the app, “BMI is based on your height and weight, but doesn’t distinguish between fat and muscle.” However, it does recommend that everyone scan their body every two weeks, so it will be interesting to see how the data changes over time. Sleep Tracking Sleep Tracking, for me, is one of the more interesting things the Halo does. I don’t find myself looking at the Apple Health sleep section often, but I do look at the Amazon Halo one. It may be because there is less to look at, but in general a quick examination of the Halo sleep data gives a lot of information at a glance. I can easily see when I fell into a light sleep, deep sleep, REM, and finally how much time I spent awake. Additionally, a giant circle saying ‘good’ or ‘bad’ is just simple. It gives most of the information I need about my sleep. My biggest issue with the Halo’s sleep function is the body temperature. It shows me a graph with my baseline, but it doesn’t tell me what my baseline is. Instead, it will say my body temperature was +0.8 or -0.1 degrees compared to my baseline. Which is cool, but what’s my baseline? It’s a minor point, but I feel it would be nice to include the number, even if it was an estimate. The whole point of the baseline is for you to monitor trends in your sleeping environment. 
Again, it’s neat, but I do not believe that this is important overall. Amazon itself says “your nightly skin temperature can vary for many reasons.” This means that if you are trying to track something in particular, it would be good to see, but for day-to-day usage, it’s just a nice bonus. Activity Tracking and the Lab With that said, the Halo activity tracking is great. I enjoy seeing how much of my activity was moderate, how much was intense, and so forth. Amazon claims that the band can detect the intensity of the exercise, and I’ve yet to see that proven wrong. My intense activity is seemingly always recorded accurately. By default, it shows the weekly activity instead of the daily activity, and it aims for everyone to get 150 ‘activity’ points per day, which can come from a combination of activities. Further, the app penalizes you for being sedentary, which reduces your overall activity points. The Lab is also a very interesting concept, and it works well with the activity tracking; it is perhaps the best thing about the Amazon Halo app. For people struggling to get into exercising or health in general, the Lab is very useful. It gives you advice and videos from a variety of sources such as WebMD, Headspace, Orangetheory, etc. to learn and practice specific exercises that help with getting good habits in place. I don’t find myself using the workouts, as there are other apps such as JEFIT to give me exercise ideas, but for someone who is unsure of where to start, the variety of resources the Lab gives is tremendous. From working out, nutrition, sleep, and tone, there is something for everyone in the Lab. I found myself going through the Tone section of the Lab often. It’s fun to learn about, but I’m not entirely sure how long it will remain interesting. Much of the advice is simple and could be found elsewhere without the monthly subscription. 
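Circling back to the activity points for a moment: the scoring described above can be sketched as a toy function. Amazon has not published the exact formula, so the intensity multipliers and sedentary penalty here are invented assumptions, purely for illustration.

```python
# Toy version of Halo-style activity points. Weights are invented for
# illustration; Amazon's real scoring formula is not public.

def activity_points(light_min: int, moderate_min: int,
                    intense_min: int, sedentary_hr: float) -> int:
    """Higher-intensity minutes earn more; long sedentary stretches deduct."""
    points = light_min * 1 + moderate_min * 2 + intense_min * 4
    penalty = max(0.0, sedentary_hr - 8) * 2  # deduct beyond 8 sedentary hours
    return max(0, round(points - penalty))

# e.g. 30 light + 20 moderate + 10 intense minutes, 9 sedentary hours
print(activity_points(30, 20, 10, 9))  # 108
```

Even this crude model captures the two behaviors the review describes: intense minutes count for more, and sitting around erodes the total.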
All of that brings me to the biggest question I have about the Amazon Halo: ‘why?’ Conclusions Most everything Amazon put into the Amazon Halo can be found elsewhere. The two biggest features of the Halo are the BMI estimator and Tone, neither of which is interesting enough to justify charging a band twice a day. The BMI estimator allows you to upload multiple scans so you can track your progress, but honestly, I do not think people need it after the first body scan. There’s nothing stopping someone from using the BMI slider and a full body mirror in order to estimate their own BMI. It is the same with Tone: most people understand what they sound like, and they even have the added benefit of understanding why they sound a specific way. Don’t get me wrong, if you want to play around with the features and get estimations, it is fun, but I don’t think the Halo is worth the $99.99 price tag Amazon put on it. Since I got the band in early access, I don’t think I overpaid with my $64.99, but for everyday use I feel one would be better served by a Fitbit, Apple Watch, or another smartwatch. Overall, I feel the Amazon Halo is a good-looking accessory, but not much else at this stage, since other wearables do the same.
https://edwardmorante.medium.com/amazon-halo-accessory-or-standalone-wearable-3d22b1407cf2
['Edward Morante']
2020-12-24 12:02:53.255000+00:00
['Wearables', 'Review', 'Technology', 'Amazon']
Platforms — from Steemit to OpenBazaar pt.1
Soon I will post an article analyzing Steemit and OpenBazaar. These products are good examples of platforms. So let’s start with the platform basics first. I highly recommend reading the book “Platform Scale: How an emerging business model helps startups build large empires with minimum investment.” What follows is a short condensed version of this book, so all credits go to Sangeet Paul Choudary. Platform Basics “We are not building software. We are enabling interactions” “The ecosystem is the new warehouse” “The invisible hand is the new iron fist” “Today, the AppStore is the reason iPhones sell” “Users First, Revenues Later” “Interaction-first! Not technology-first!” Pipes (internal + control) Pipes focus on consumers only. Firms compete on the control and ownership of internal resources. A traditional manufacturing chain is a pipe: it pushes value from producer to consumer. Early digital business models also followed the pipe design. Pipes scale by aggregating internal resources toward efficient value creation and delivery to consumers. Pipes focus on optimizing process flow. Platforms (external + interaction) Platforms must build virtual interaction spaces. A platform enables producers and consumers to connect and interact, and allows (external) participants to co-create and exchange value with each other. The platform does not create the end value; it only enables value creation. Curate and govern. Create better trust. Without strong curation, more content can actually lead to a poorer user experience, producing reverse network effects. A platform enables a plug-and-play business model: other businesses can easily connect and build products on top of the platform. Platforms scale by their ability to orchestrate a globally connected ecosystem and to optimize value-exchange interactions. 
From Pipe to Platform 3 forces Increased connectedness Decentralised production Rise of artificial intelligence 3 shifts Shift in markets: from consumers to producers Shift in competitive advantage: from resources to ecosystem Shift in value creation: from processes to interactions Organizations must shift from dollar absorption to data absorption. Liquidity — ensuring enough overlap between supply and demand. Core value unit — the minimum standalone unit of value that is created on top of the platform. It’s a scaling variable. Platform Manifesto The ecosystem is the new warehouse The ecosystem is the new supply chain The network effect is the new driver for scale Data is the new dollar HR -> Community management Inventory control -> Liquidity management Quality control -> Curation, reputation Sales funnels -> User journeys Destination -> Distribution Loyalty program -> Behaviour design Business process optimization -> Data science Sales commission -> Social feedback Algorithms are the new decision makers Market research -> Real-time customisation Business development -> Plug-and-play architecture “The invisible hand is the new iron fist” 3 basic platform configurations Marketplace/Community-dominant: AirBnB, Uber, Facebook, Alibaba, Reddit Infrastructure-dominant: Android, Wordpress, Dropbox Data-dominant: Jawbone, Nest Principles of building a platform You know you have a platform when the users can shape their own experience Plug-and-play business design: open participation and strong filters Balancing value creation for both producer and consumer Strategic choice of “free” Pull, facilitate, match Layering on new interactions Enabling end-to-end interactions Creation of persistent value beyond the interaction — ratings and reputation, etc. Challenges Replicating the technology of AirBnB or YouTube is a considerably smaller challenge than replicating their respective communities of hosts and video creators. 
“Chicken and egg problem” — no producer means no consumer means no producer… “The ghost town problem” — ensuring that producers produce and create value “Double company problem” — the platform is two-sided, so building it is twice as difficult Interaction off the platform — a client may want to continue the interaction off the platform (see Upwork). How to solve the chicken-and-egg problem? One way of solving this problem is to ensure that the product has a ‘standalone mode’. Essentially, a user should be able to derive value out of the product even when other users aren’t on it. The ‘off-platform’ reputation of the producers should attract the consumers to the platform. Beg, borrow, steal is how platforms build traction. Find an (inorganic) bait to start the loop Ensure there is no friction in the feedback loop Minimize the time it takes for the startup to reach critical mass Incentivize the role that is more difficult to attract (“Ladies’ night” is when free drinks are offered to ladies) Staging the creation of 2-sided markets — OpenTable got the first side of its platform by providing restaurant management software (the bait, standalone mode, creation of value units) before any second-side consumers signed up. Fake contents (like dating sites do), fake interactions (like PayPal did), fake supply… Design your platform so that producers can bring large numbers of consumers to the platform Start as a producer on your own platform Staging (attracting only one side first) is not possible in some cases (i.e. payment mechanisms). In such a scenario the solution is to provide backward compatibility (like an e-mail service). Solving the chicken-and-egg problem on such platforms requires solving quality control issues, rather than gunning for a critical mass of users. Quality control Quality = moderation + algorithms + social signals Controlling quality in an open and participative environment. Controlling quality with minimum friction. Controlling quality through processes that scale non-linearly. 
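The "quality = moderation + algorithms + social signals" idea above can be made concrete with a toy scoring function. The weights, the hard moderation gate, and the neutral prior are purely illustrative assumptions, not something prescribed by the book:

```python
# Toy content-quality score combining the three signal sources named above:
# moderation, algorithms, and social signals. All weights are invented.

def quality_score(moderator_ok: bool, algo_score: float,
                  ratings: list[float]) -> float:
    """Blend the three signals into a 0..1 quality score."""
    if not moderator_ok:  # moderation acts as a hard gate
        return 0.0
    # social signal: average user rating, with a neutral prior when unrated
    social = sum(ratings) / len(ratings) if ratings else 0.5
    return 0.6 * algo_score + 0.4 * social

print(round(quality_score(True, 0.9, [1.0, 0.8, 0.9]), 2))  # 0.9
```

The design choice worth noticing is that moderation vetoes, while algorithmic and social signals blend; this mirrors how most open platforms combine hard rules with soft ranking.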
Platforms must be designed in a manner that optimally balances the quality and quantity of interactions, balancing traction and friction. In the early days of a platform, friction is often reduced. “Craigslist, the king of quantity, suffers on quality. It does not have a reliable method of determining a user’s reputation”. But AirBnB built a reputation system. Trust Trust is the critical factor on the platform. A platform requires the creation of alternate trust mechanisms: Confirmed identity Centralized moderation — in its early days every platform uses centralized moderation in some form Community feedback — comments, votes, ratings, reviews, replies Codified behaviour — implicit rules Culture Completeness Insurance Values Physical goods (eBay) Virtual goods (Medium, Youtube) Standardized services (Uber) Non-standardized services (Upwork, TaskRabbit) Data (Waze, Nest) Capturing value can be done in different ways: Charging one side to access another Charging 3rd parties for advertisement Charging for premium tools Charging consumers for access to high-quality curated producers Charging producers for the ability to signal high quality At least one side is usually subsidized to participate on the platform. Producers may even be incentivized to participate. Dating sites are a great example of two-sided markets which often rapidly build traction on one side but fail to get any uptake on the other. Typically, such markets are asymmetrical, with one side that is harder to attract (the ‘hard’ side) and another on which it is relatively easier to get traction (the ‘easy’ side). Platform Thinking: We’ve got to figure out who creates value and whom we charge for it. Consumers can offer in return: Attention Reputation Influence Goodwill Money Monetization options Transaction cut (fee) — requires a mechanism for owning the end-to-end interaction (i.e. don’t allow your users to buy off the platform). 
So the platform must create more value than it captures Advertisement API/data licensing Business Model Options: “Subscription-based” revenue model — dating web-sites, B2B platforms “Paid placement” revenue model — classifieds “Lead generation” revenue model — financial comparison engines Platform advantage Better marginal costs. Example: value creation for AirBnB is very cheap. In general, platforms allow producers to obtain market access without commensurate investment. Platforms may also provide free access to tools that may have been expensive to access before. Network effects. 1 + 1 = 3. Network effects guarantee repeatable interactions. Community culture Learning filters Virality. More users bring more users Platforms, by their very nature, enable abundance, i.e. the “long tail” Today the network effect is reached through “cumulative value” — value that scales as the producer/consumer uses the platform more often. Cumulative value takes 4 forms: Reputation Influence Collections — a larger collection creates increasingly higher feedback for a producer Learning filters — the platform becomes more useful with usage Core Interaction Platform architecture should be organized around the core interaction. Creation Curation — critical on open-access platforms Customisation Consumption A platform must enable all 4 actions. The keys to platform scale lie in simplifying these actions. To power the core interaction, the platform must leverage all 3 layers of the business model. Ask these questions to design the core interaction: What is the core interaction that the platform enables? What is the core value unit? Who is the producer? What motivates him to produce? Who is the consumer? What motivates him to consume? How does the producer create the core value unit? What channels are used to create core value? How does the consumer consume the unit? What channels are used to consume core value? How is the quality of the unit determined? What is the filter used to serve the unit to the consumer? 
What consumer actions help to create the filter? How does the consumer consume relevant units? What tools and services should the platform provide to enable interactions? What curation and customization tools and services should the platform provide? (algorithmic, social, editorial curation) What consumption tools and services should the platform provide? How do these tools and services help the platform pull, facilitate, and match? What currency does the consumer provide in return for the value? How does the platform capture some portion of this currency for itself? Conclusions Please stay tuned if you are curious how and why the Steemit business model works (or doesn’t work), how OpenBazaar is trying to solve the ‘chicken-and-egg’ problem, and why ‘money-for-posts’ is not really a good idea. Soon I will post an article about that. Anton.
https://medium.com/chain-cloud-company-blog/platforms-from-steemit-to-openbazaar-pt-1-485648402f51
['Anthony Akentiev']
2017-01-31 11:37:16.845000+00:00
['Platforms', 'Startup', 'Blockchain', 'Steemit']
The Top 10 Programming Blogs in 2020
The Top 10 Programming Blogs in 2020 Blogs from people that I admire or with large communities Photo by Joel Muniz on Unsplash Programming is an interesting field, since it gives us the superpower to control computer systems. It can be used in airplanes, traffic control, robots, self-driving cars, websites, mobile apps, and a ton of other use cases. The main thing is that software engineers have created several programming languages, each suited to solving different problems. Today, I’m gonna share with you some websites and blogs that write about different programming languages and the best practices for using them. This list is in no particular order — all of them are great reads! (This is not a sponsored post. All blogs listed are among my favorite reads.)
https://medium.com/better-programming/top-10-programming-blogs-in-2020-dda86feead1f
['Juan Cruz Martinez']
2020-08-27 12:05:38.215000+00:00
['JavaScript', 'Programming', 'Software Development', 'Python', 'Web Development']
Finding Stillness
Finding Stillness A poet’s dive into sensation Photo by Joshua Fuller on Unsplash A quiet reserve This small, silent well stirs Spinning And churning the sweetest waters Dowsing the twisted turning And burning of flesh on fire Creates this smoke medicine Takes disease Bakes discomfort Annihilates indifference Cloaked in disdain These embers of misery freed Raining their pleasures glow Pain’s best friend An earthen remain Ashes Dust Settle They wait now — For the healing to spill over From containers No longer contained For the words That must stumble forth By way of fingers that Can no longer move fast enough To capture them all on paper
https://medium.com/write-like-a-girl/finding-stillness-a2b3c7056c3c
['Annine Teresa Raffaella Massaro']
2020-11-27 19:40:59.597000+00:00
['Poetry', 'Reflections', 'Poem', 'Writing', 'Self']
When Parks Are Closed
When Parks Are Closed But still we go for beauty and air, for love of place Picnic table barriers. Martin Luther King Jr. Shoreline Trail, Oakland, CA. Photograph by Aikya Param, 8/24/2020 Binoculars and smart phone with me, I’m out exploring But the Shoreline Trail is closed. Yet people and dogs walk the path And properly helmeted riders fly by on fancy bicycles: Father and son, mother and daughters. The Nature Center is closed, a pointy sixties modern With fine facilities, is closed. I snap photos with my phone distracted and drawn By tiny flowers on the ground, And the phone app tells me the names in Latin and English Along with uses and how to grow Them as well as warnings of leaf-eating things living there, Spoiling their allure for gardeners. Barriers of orange open weave sheets cover picnic tables, Rough wooden picnic tables. “We’re closed. Around this table, please don’t gather this year,” Except for one in back of the field. Memories of roasted marshmallows, sauerkraut, And fluted white paper plates, And fragrant franks and burgers in buns with relished veggie options. I’m out expecting to enjoy beauty And birds but only one Canadian goose waddles And not one seagull soars overhead. Another day I try the other Shore and walk the Arrowhead Marsh Trail to see feathery Wild oats and fennel umbrellas alongside.
https://medium.com/poetry-palace/when-parks-are-closed-89946a11f8b
['Aikya Param']
2020-08-27 16:59:09.610000+00:00
['Poetry', 'Environment', 'Memories', 'Outdoors', 'Beauty']
Don’t Fear Failure
Don’t Fear Failure It Is More Risky to Be Complacent Photo by John T on Unsplash Possible to Be Too Satisfied The literal meaning of complacency’s Latin root is “very pleased.” It is possible to become so satisfied that no amount of change, growth, or improvement seems necessary.
https://medium.com/the-partnered-pen/dont-fear-failure-f3a6ddfb363c
['Laura Mcdonell']
2020-05-22 01:38:46.095000+00:00
['Religion', 'Growth', 'Productivity', 'Failure', 'Faith']
How 6 product designers tackle friction with product managers
Catt Small Title: Product Designer Company: Etsy Industry: E-commerce Tweet at her: @cattsmall What do you enjoy most about working with a PM on a product’s design? PMs have a strategic mindset with a heavy focus on where the product is headed. And that’s great because, as a product designer, I’m thinking about the user’s perspective and how that fits into the business line. So when we get together, it’s much better for the direction of the product. What is the main source of friction when working with a PM on a product’s design? The biggest source of friction I’ve found is giving designs the time they need to be refined, but also ensuring we’re meeting timelines. There’s this attachment designers get where we have this amazing thing we’ve considered, but often this ‘okay’ state gets released. It sometimes is better to just get something out and then improve upon it. So it can be a positive, but you have to actually iterate upon things because sometimes those ‘halfway’ designs release and they never get looked at again. What are some tactical things about product design you think all PMs can do to better communicate with designers? PMs should go into projects without assumptions. Based on larger numbers, one can say, “Oh, on average this is how people are.” But when you talk to individuals and gather qualitative data on people’s reasoning, you realize the many different user segments. It can be harmful to think of your users as a monolith. Once you abandon that way of thinking you can become a better, tactical strategist. Also, participate in a design sprint. After we did a design sprint for a project, everyone was just more excited about design — especially the PMs. You’ll feel a lot closer to your designers and find it easier to communicate. What’s one communication strategy you think PMs should adopt? Asking more questions of designers. 
The best PMs I’ve worked with have asked me directly instead of planning for me, or speaking for me, or saying “Hey Catt, we figured out this thing, now make it look nice.” It leads to this feeling of autonomy and ownership. In your opinion, a PM can make a product designer’s life easier by…? Trusting them. Trusting that where product designers are coming from is a place of genuine care for the users. And trusting that we have a special expertise that somewhat overlaps with product management but has its own perspective and is just as important. Check out Catt’s design work here.
https://medium.com/product-to-product/how-6-product-designers-tackle-friction-with-product-managers-7877937d6044
['Tarif Rahman']
2018-05-31 16:18:35.480000+00:00
['Product Design', 'Technology', 'Design', 'User Experience', 'Product Management']
From Fiat to Cryptocurrency: Conversion Solution with OLWallet
The crypto community is growing day by day, and ever more people are involved in this process. There is nothing surprising about that, because cryptocurrency has many attractive benefits compared to fiat money. Such funds are safe from counterfeiting, transactions between different accounts are made more quickly, and currency operations can be conducted without third parties and on a non-attributable basis. For this very reason, people prefer to keep their funds in crypto. However, not every person knows which wallet to choose for this purpose and which one will help to conduct different transaction operations, including conversion of fiat money into cryptocurrency and vice versa. The Essence of OLWallet OLWallet, the decentralized cryptocurrency wallet of the OLPORTAL application, will help you control your finances in the best way. This means that your funds will be secured by blockchain technologies that prevent any tampering attempt by distributing the information about your transactions and your money across different devices at the same time, which makes any database difficult to hack. OLWallet is also a great solution for those people who prefer to invest in cryptocurrency and convert their money from fiat to crypto or in reverse. Moreover, you can keep your dollars and euros in the same wallet account as well. OLWallet is one of the most useful features of our application OLPORTAL and will be very helpful both for people who have been into crypto for many years and for those who got interested in the sphere only recently. In layman’s terms, everybody will learn how to manage his or her funds with the help of this wallet. With OLWallet you don’t need to worry about your money being kept on different devices and apps at random, because now all your funds will be gathered in one place! However, getting back to the point, the most attractive function of the wallet is asset management, including conversion of fiat money into cryptocurrency. 
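Under the hood, a fiat-to-crypto conversion like this amounts to a simple rate-and-fee calculation. OLWallet's internals and pricing are not public, so the function name, exchange rate, and fee percentage below are hypothetical, for illustration only:

```python
# Hypothetical sketch of a fiat -> crypto conversion. The exchange rate
# and fee percentage are invented; OLWallet's actual terms are not public.

def convert_fiat_to_crypto(amount_fiat: float, rate: float,
                           fee_pct: float = 1.0) -> float:
    """Return crypto received for `amount_fiat`, where `rate` is the
    fiat price of one coin, after deducting a percentage fee."""
    after_fee = amount_fiat * (1 - fee_pct / 100)
    return after_fee / rate

# Convert $500 at an assumed rate of $8,000 per coin with a 1% fee
print(round(convert_fiat_to_crypto(500, 8000), 6))  # 0.061875
```

Whatever buttons the app exposes, this is the arithmetic the user is triggering; the wallet's job is to fetch the live rate and execute it safely.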
From Fiat to Cryptocurrency: User’s Manual The process of converting fiat money into crypto will be rather simple for everybody who uses OLWallet. Here is a step-by-step guide for your convenience. The first step is registering your OLWallet account. After filling in a form, you will be asked which of your accounts to bind to your OLWallet address. Then you select the necessary accounts and, voila, you may start using the app. When launching OLWallet, you will see your total amount of money, attached accounts, and the credit cards that you added during registration. Then you choose one of your accounts and find the icon with two arrows pointing at each other. After doing this, you won’t have much left to do but choose a target currency and press the “next” button. That is it. It is as easy as winking! Six simple steps and your currency is converted. In addition, the process of exchanging your money will be fast and safe, the most crucial qualities that any user looks for. Since we started to develop our own cryptocurrency, we have thought through every possible way to make our product as useful as possible. There are still many interesting and beneficial functions of OLWallet that we are going to disclose in the next articles. For now, we will continue to update you on the latest developments and news of OLPORTAL on our social media! Join our Medium, Reddit, Facebook, and other pages! reddit.com/user/olportal facebook.com/OLPortal23
https://medium.com/ol-portal-steps-forward-to-the-future-communicatio/from-fiat-to-cryptocurrency-conversion-solution-with-olwallet-a75887f76528
[]
2018-03-27 14:26:11.161000+00:00
['Cryptocurrency Investment', 'Mobile App Development', 'New Economy', 'Communication', 'Bitcoin']
The Quantum Bubble
The Model Building Continues It cannot be overstated how strong a grip quantum mechanics held on the physics community. Only a trickle of lone voices, spread over the next several decades, still spoke of the unsolved problem of radiation from a classical electron. So unpopular was this topic that authors, in their technical papers, often apologized in print for bringing it up. So long as the electron was a point, it had to radiate under any form of acceleration. But the electron couldn’t radiate in the atom. So, it couldn’t be a point. Physicists turned to the next best thing: spherical shells. Shells have an intuitive appeal to physicists. Fritz Zwicky once insulted a colleague thus: “he is a spherical bastard, a bastard any way you look at him.” First explored by Abraham, Lorentz, and Poincare, spherical shell models had a growing repertoire of their own problems. How did they stay together instead of blowing apart from self-repulsion? How could they remain stable? In 1933, less than a decade after Heisenberg and Schrodinger’s work, G.A. Schott published a paper which described a theoretical scenario in which it is possible for an extended charge distribution — again a spherical shell — to accelerate in a circular orbit without radiating energy. If the sphere is spinning and orbiting at the same time, and if the periods of these motions are set just right, the sphere will never radiate. He had a proof of concept that classical laws might be able to solve the problems faced by atomic theorists. Schott continued studying classical models, as did Dirac, who published in 1938. Despite Schott’s progress, Dirac continued to insist that quantum mechanics was the only option on the table, propping up a myth that has been sustained to this day. The real breakthrough occurred in 1963, when George Goedecke published a more general description of radiationless motions. He studied various spherical shells and solid spheres, spinning or fixed. 
His preliminary results excited him enough to speculate on a ‘theory of nature’ in which all stable particles (or aggregates) are merely nonradiating charge-current distributions whose mechanical properties are electromagnetic in origin. Although Goedecke’s research on radiation diffused into the general consciousness of the field, there was very little interest in moving it forward. In 1986, a professor at MIT, Herman Haus — apparently without knowledge of Goedecke’s work — published his own general condition for acceleration without radiation, one that was more physically intuitive. He showed that a current distribution would only radiate if it contained Fourier components that were lightlike, synchronous with light speed. But quantum mechanics had diverged so far from classical mechanics that Haus didn’t realize he was contributing to quantum theory. He imagined charge distributions as dense collections of points instead of a continuous surface membrane that could describe a fundamental particle. Haus gave a talk about his paper to his graduate class. One of his students voiced an interest, and Haus handed him a copy. The student was Randell Mills. Mills was something of a polymath who was earning an MD at Harvard, but was more interested in inventing new medical technology. Having completed his medical coursework in only three years, he was using his fourth year for electives at MIT. In the first years out of med school, Mills churned through the math for a new kind of MRI, invented and tested a complex chemical chain reaction for a new drug delivery compound, and invented and tested a new kind of cancer therapy based on the Mossbauer Effect. The latter earned him a publication in Nature. When these kinds of intellectuals happen, the best idea is for the rest of us to just get out of the way. 
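Haus's condition, paraphrased above, has a compact mathematical statement (this formulation is standard in the nonradiation literature, not a quotation from his paper): the spacetime Fourier transform of the current density must vanish on all modes synchronous with light speed,

```latex
\[
\tilde{\mathbf{J}}(\mathbf{k},\omega)
  = \int \mathbf{J}(\mathbf{r},t)\,
    e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\, d^3r\, dt
  = 0
  \quad \text{whenever } \omega = \pm\, c\,\lvert\mathbf{k}\rvert .
\]
```

A charge-current distribution with no Fourier components on this light cone has nothing that can couple to the far field, so it can accelerate without radiating.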
When Mills read Haus’s paper as a student, he imagined that it could provide the foundation for a new theory of nature, in which electrons were classical membranes of charge constrained by the requirement of no radiation. Over the next few years, Mills developed the foundation for a new theory in which the electron was a spherical membrane of moving charge, like a soap bubble, centered on the proton. While classical electron theorists had always assumed some kind of sphere (oscillating or orbiting, rigid or deformable) would be the answer, they had never considered centering an electron shell on the proton before. Although the math was more complicated, the basic physics was very similar to Bohr’s model of the atom. Mills’s model matched the well known energy levels of hydrogen. But it did more: it was the first to offer a physical explanation for why the ground state orbit was stable to radiation but the excited states were not, using the Goedecke-Haus condition. In the decade that followed, Mills took his model light years ahead of quantum mechanics in terms of predictive power. He calculated the state lifetimes and line intensities of the hydrogen excited states, thousands of numbers. He calculated the spectrum of helium, thousands of numbers. He calculated the electron energy levels of the first twenty-electron atoms in the periodic table, walking through the atoms one by one, hundreds of numbers that matched to within the error bars of NIST experimental data. In quantum mechanics, any interaction between two electrons is basically an unsolved problem, and a computational nightmare. But Mills’s electron shells reduced the problem to a much simpler force equation. And incorporating relativistic corrections for the fast-moving inner electron shells made the theory’s predictions even more accurate. Mills spent another several years rebuilding all of quantum chemistry with his own model, calculating bond energies, lengths, and geometry. 
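The "well known energy levels of hydrogen" mentioned above are the standard Rydberg series, which the Bohr model reproduces and which any successful model must match; a quick sketch:

```python
# Hydrogen energy levels E_n = -13.6057 / n^2 eV (the standard Rydberg
# series that Bohr's model reproduces for hydrogen).

RYDBERG_EV = 13.6057  # hydrogen ground-state binding energy in eV

def energy_level(n: int) -> float:
    """Energy of the n-th hydrogen level, in electron-volts."""
    return -RYDBERG_EV / n ** 2

for n in range(1, 4):
    print(n, round(energy_level(n), 3))
# 1 -13.606
# 2 -3.401
# 3 -1.512

# Lyman-alpha transition energy (n=2 -> n=1):
print(round(energy_level(2) - energy_level(1), 2))  # 10.2
```

Matching this series is only the entry fee; the harder tests the essay describes are state lifetimes, line intensities, and multi-electron spectra.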
Despite this demonstration of predictive value, Mills’s theory has been subject to the embargo mainstream physicists place on alternative theories. It is not a conspiracy, just psychology. It is the same with every other moment of revolutionary change; the old guard resists the change, while the young and curious embrace it. There is much to do. We have learned to think through the quantum sieve about so many experiments in the last eighty years that unraveling the way quantum mechanics models wave-particle duality, non-locality, tunneling, and quantum teleportation will not be easy. We are constantly told that our intuitions about nature must be wrong, instead of considering the other viable alternative — that our model for understanding the phenomena is bad. Really bad. While the ability to calculate known experimental data is one thing, what the scientific community really needs to engage a new paradigm of thought is at least one model experiment that demonstrates the abject failure of the old paradigm. For good measure, let’s discuss two.
https://medium.com/discourse/the-quantum-bubble-8e9c3d9d1d92
['Brett Holverstott']
2019-10-07 21:00:29.299000+00:00
['Science', 'Quantum Physics']