title (string, 1-200 chars, nullable) | text (string, 10-100k chars) | url (string, 32-885 chars) | authors (string, 2-392 chars) | timestamp (string, 19-32 chars, nullable) | tags (string, 6-263 chars)
---|---|---|---|---|---
You Need These 3 Coaches to Succeed in Life | You Need These 3 Coaches to Succeed in Life
As Bill Gates said at the beginning of one of his TED Talks, “everybody needs a coach.”
Photo by Sarah Cervantes on Unsplash
Maturing made me realize that I would now start paying for things when I expect quality. For this reason, I pay for memberships to different online services. I also ask for help from external people on topics where I have neither the experience nor the expertise to fend for myself. Not to mention that I also pay simply to have my hand held during a new journey.
And all this makes me feel like a woman that knows what she wants.
I wasn’t always like that. Part of it was that I couldn’t afford it; services, in general, come with a price, of course. But I was also vain enough to believe that I could do everything by myself. I was deluding myself that by not requiring help, I was in control. Unfortunately, it was the exact opposite.
When I had less time on my hands, I understood the value of being supported by experts and professionals. Up until then, I had no restrictions on taking my time. But was that time used effectively?
After I became a mother, my personal time shrank dramatically. I barely had time for basic hygiene, so any complex undertakings, such as getting back into shape or leisure time, were simply out of the question.
As I was putting hours into raising my son, I was also piling up frustration, tiredness, and eventually, anger. I was forgetting an important rule they tell us during the pre-flight safety briefings:
Passengers should also heed the instruction to put on their own oxygen mask before helping others put on their masks. Passengers who have put on their own oxygen masks will be fully capable of helping others in nearby seats, but passengers who do not may become impaired before they are able to help others or themselves.
Widely applicable to many situations beyond its initially intended context, this instruction translates into the simple advice of tending to yourself so that you are able to take care of others (including infants).
One day I finally realized I was not being selfish for asking for and taking personal time away from the baby. At the same time, I came to the conclusion that a coach can help one focus on goals and achieve them more effectively. You know the saying: “we’re too poor to afford to buy cheap things.” As counterintuitive as it may sound, cheap things end up costing us more (through frequent replacements, for instance).
Final Thoughts
As we age, we begin to value time over money. Thus, hiring a coach makes a lot of sense. Some examples of coaches:
Fitness & nutrition coach — yes, you can do it on your own too, but if you are new to it, a coach will help you get a head start on your health goals.
Therapist — yes, you can study psychology by yourself and use your friends for venting, but it might take you years to reach the level of insight a therapist can offer.
Financial consultant — yes, you could do fine with basic savings but hey, who doesn’t want to avoid relying on their pension alone when they retire? Well, a financial consultant can help you understand how to invest some of your money, while you can. And this kind of insight requires domain expertise.
The list can go on with different other areas ranging from business to love affairs. We live in a world of services. For any need, there is — or soon will be — a service to fulfill it.
We can’t and shouldn’t try to be the experts on everything. We should also not limit ourselves to self-sufficiency. At the end of the day, this is how we help the economy grow.
Love,
/ET | https://medium.com/in-fitness-and-in-health/do-i-really-need-a-coach-f147ca51bc43 | ['Eir Thunderbird'] | 2020-12-20 21:13:12.929000+00:00 | ['Fitness', 'Mental Health', 'Feminism', 'Coaching', 'Parenting'] |
Back Of The Envelope Quick Data Analysis | We will be using the Indian Liver Patient Dataset from OpenML to learn quick and efficient data visualisation techniques.
This dataset contains a mix of categorical and numerical independent features, plus a diagnosis label indicating a liver or non-liver condition.
from sklearn.datasets import fetch_openml
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
I feel it is easier and more efficient to work with a Pandas DataFrame than with the default Bunch object. The parameter “as_frame=True” ensures that the data is returned as a pandas DataFrame, with columns of appropriate data types.
X, y = fetch_openml(name="ilpd", return_X_y=True, as_frame=True)
print(X.info())
As the independent data is in Pandas, we can view the number of features, the number of records with null values, and data type of each feature with “info”.
Independent features, count and datatypes — based on the above code discussed (image by author)
It immediately provides a lot of information about the independent variables with just one line of code.
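If we want the missing values spelled out explicitly, the same DataFrame can be queried directly (a small illustrative addition, not part of the original article):
# Count missing values per column and list the data types explicitly
print(X.isnull().sum())
print(X.dtypes)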
Considering that we have several numerical features, it is prudent to understand the correlation among these variables. In the code below, we create a new dataframe, X_Numerical, without the categorical variable “V2”.
X_Numerical=X.drop(["V2"], axis=1)
A few machine learning algorithms don't perform well with highly linearly correlated independent variables. Also, the objective is always to build a machine learning model with the minimum required variables/dimensions.
We can get the correlation among all the numerical features using the “corr” function in Pandas.
With the seaborn package, we can plot the heatmap of the correlation to get a very quick visual snapshot of the correlation among the independent variables.
relation = X_Numerical.corr(method='pearson')
sns.heatmap(relation, annot=True, cmap="Spectral")
plt.show()
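Beyond eyeballing the heatmap, the feature pairs whose absolute correlation exceeds a chosen threshold can also be listed programmatically. This is an illustrative sketch rather than part of the original article, and the 0.8 cutoff is an arbitrary assumption:
# List feature pairs with absolute Pearson correlation above a chosen threshold
threshold = 0.8  # assumed cutoff, purely for illustration
high_corr = [
    (col_a, col_b, relation.loc[col_a, col_b])
    for i, col_a in enumerate(relation.columns)
    for col_b in relation.columns[i + 1:]
    if abs(relation.loc[col_a, col_b]) > threshold
]
print(high_corr)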
At a glance, we can conclude that the independent variables V4 and V3 are closely related, and a few of the features, like V1 and V10, are loosely negatively correlated. | https://towardsdatascience.com/back-of-the-envelope-quick-data-analysis-3e0c614595ec | ['Kaushik Choudhury'] | 2020-09-16 17:14:02.033000+00:00 | ['Machine Learning', 'Scikit Learn', 'Python', 'Data Science', 'Programming'] |
A Word On Fascism, Free-Speech, and Cancel-Culture. | A Word On Fascism, Free-Speech, and Cancel-Culture.
Let's make things clear; oil and water don't mix.
Getty Images/iStockphoto/wildpixel
In this globalized and connected world, everybody seems to have an opinion on every topic. Social networks like Twitter, for instance, are one of the leading means of people letting themselves go with their opinions and thoughts.
This is a privilege we have as a society nowadays, but as stated in Spider-Man, “With Great Power Comes Great Responsibility.” Sadly, people do not tend to understand the power and scope of their words. Many people take terms such as “Free Speech” to mean they can say whatever they want without facing any consequences.
In light of the foregoing, other phenomena like “Cancel-Culture” have popped up, which has caused terms like “Fascism” to come back into the picture, wrongly used and indiscriminately applied to situations and people that have nothing to do with those terms.
This article was inspired by a highly followed Twitter account (in my country) that called me a fascist just for tweeting that we should cancel a comedian for making misogynist and sexist comments. He claimed the comedian has “free speech” to say whatever he wants and that nobody should mess with his job… and that makes me a fascist (let’s laugh together). Reading the comments, I realized how many people agreed with that narrative, which — I’m not gonna lie — worried me and made me write this piece.
However, do Fascism, Cancel-Culture, and Free-Speech have some correlation between them? No. In this article, I will elaborate on what these terms mean and why they do not correlate. Let’s start with fascism. | https://medium.com/the-apeiron-blog/a-word-on-fascism-free-speech-and-cancel-culture-d72d8a45e6da | ['Eduardo A. Llano'] | 2020-12-09 16:37:19.073000+00:00 | ['Self-awareness', 'Opinion', 'Politics', 'Fascism', 'Twitter'] |
Are You Superstitious? | Are You Superstitious?
Superstitions are dumb, but superstitious people aren’t
Photo by Lachlan on Unsplash
I don’t believe in ghosts, witches, or fortune tellers of any kind. I don’t believe that sleeping with your pajamas on backward or a spoon under your pillow will make a snow day the following day more likely.
Contrary to what every Venezuelan believes, I know that placing my handbag on the floor will not cause me to lose money, and I’m certain that passing the salt from my hand directly to another person’s hand won’t bring me bad luck.
I have close relatives who attribute people’s personalities to their Zodiac sign, and I think it’s beyond silly. Three people of the same sign, Pisces say, may have vastly different personalities, yet, somehow, they all fit Pisces perfectly. How can they not see that they’re only considering what matches their foregone conclusions?
Given my rejection of superstitious thinking, why do I sometimes embrace it like my life depended on it?
When I go on a plane, I must pray the whole rosary from beginning to end, starting right when the plane begins accelerating to take off. I will hardly breathe or move until I’m done.
My teeth are a source of superstitious dread. I know something bad will happen if I don’t floss, and that something awful will befall my family if I fail to wear my mouth guard two days in a row.
My most involved superstitious phase had to do with my autistic son’s application to a residential post-secondary program he had his heart set on. After his interview, as we were driving home, I said to myself: If Diego gets in, I promise I won’t cut or color my hair for a year, and I’ll pray the rosary every day.
He did get in, and I followed two of my three promises. For a whole year, I didn’t cut my hair — which, by the way, grows phenomenally fast. From the day we found out he’d gotten in until his last day in the program two years later, I prayed the entire rosary every single day.
I postponed the promise about not coloring my hair for another time. | https://medium.com/contemplate/are-you-superstitious-1a7915c40da | ['Daniella Mini'] | 2020-04-23 17:23:10.641000+00:00 | ['Contradictions', 'Humor', 'Parenting', 'Psychology', 'Disability'] |
World Humanitarian Day: ‘I will not let the pandemic stop me from serving those in need’ | “I usually arrive at the distribution site at 09:00. We have distributions nearly every day, whether it’s in-kind food assistance or cash transfers. First, I check in with the assessment teams from our cooperating partners and the observers from the authorities who selected the families for assistance. The selection is based on our vulnerability criteria — for example, families who experience acute economic stress, households headed by women or elderly people, families with more than nine members or those that include members with a disability.
“Often, I go and talk to the families in their homes prior to the distribution, to make sure that the right people were registered. During the distribution itself, my job is to ensure that all the rules and principles of WFP are being respected, including the core principles of humanity, impartiality, neutrality and independence.
“Banners at the distribution sites clearly state what our beneficiaries should receive, but often they cannot read, so I explain this to them. I count the money they receive to make sure the amount is correct and sometimes I reject bills that are in a bad condition and ask the staff from the financial service provider to replace them with newer ones. I also inform our beneficiaries about our complaints and feedback hotline and how WFP can be reached in case of need.
“These are my main tasks during the day, but of course they are not all. I meet with local officials, community leaders and local councils to explain our selection criteria and consult with them on when to conduct registrations. I also need to explain to them that our resources are limited and we only can assist those most in need.
“Sometimes, people show up at the distribution sites who have not been registered for assistance, asking us if we can assist them in some way. It makes me sad to tell them that unfortunately we do not have extra food for them. Sometimes they get rude and swear at me. But I do not mind, because I know they are desperate.
“People trust WFP and they trust me quickly, especially the women. They share their stories with me and, often, I am the only outsider they can share their experiences with. They tell me about their poverty, the struggle for food and — sadly — often about domestic violence. | https://medium.com/world-food-programme-insight/world-humanitarian-day-i-will-not-let-the-pandemic-stop-me-from-serving-those-in-need-cda2f8e9c630 | ['World Food Programme'] | 2020-08-18 12:06:59.774000+00:00 | ['Humanitarian', 'Women', 'Afghanistan', 'Covid 19', 'Coronavirus'] |
The Side Hustles That Actually Worked For Me | The Side Hustles That Actually Worked For Me
A primer on money-making that doesn’t drive you mad
Photo by Joseph Frank on Unsplash
Everybody likes money — so much so that a lot of people spend hours upon hours just for a handful of cents answering surveys, doing menial work, and even solving captchas by hand.
This post is not that; it’s a list of side hustles that have actually worked for me. Even better: these are all things I either enjoyed or “didn’t hate.”
Offline side jobs
Over the years I worked on a farm, in construction, in a warehouse — and I had a heck of a time (de-)constructing event locations back when those were still a thing.
Right now I’m driving around throwing pizza at people and somehow they give me money for being out and having fun.
To me that was always the key point: All of these jobs can be a whole lot of fun if you wrap your mind around them and learn how to enjoy work as a hobby. In each of these places I had coworkers who were like me and others who absolutely hated the work.
While the others complained about the free food being leftovers from the kitchen instead of five-star meals we were having a great time joking and laughing. And we enjoyed the best tasting sandwiches you could get after some long hours of tough, physical work that clears your mind.
And that is before you factor in how these jobs all paid surprisingly well, how you don’t have to pay taxes on anything up to 450€ in side income (at least here in Germany) and how the work was basically a free gym membership where they give you money to work out.
If you can get into that mindset and enjoy what the jobs can offer you this is easily the best way to earn money because the hard work is all on someone else’s part.
A moderately successful YouTube channel
For a long time I had lots of fun together with my best friend with a hobby-based YouTube channel about magnetfishing. This channel never had more than 8k subscribers, and yet it paid for all my power tools, afforded me a pickup truck that we used to get rid of all the scrap metal — and most importantly we enjoyed this hobby greatly.
That is the power of YouTube and I’m still bitter that magnetfishing is now outlawed in most parts of Germany, I would still head out every other weekend to pull scrap metal out of rivers and canals.
This is how my weekends looked for over two years. Good times.
Instead I’m now doing programming videos where I share my experiences and lessons from the last eight years, the woes of job hunting, and the joy of coding for fun. See if you like what I do:
Writing and translation
These days I obviously write here on Medium and my own site www.codingtofreedom.com — but for a great deal of my school years I worked as a freelance translator between German and English through Fiverr and private follow-up gigs that spawned from there.
Translation is great because it lets you focus on typing more than writing — a great way to unwind after a long day of six hours in school. I paid for my scooter this way and I used that to get to my first job interview which landed me my coding apprenticeship and later day job.
I had this odd little tool that showed text paragraph by paragraph and allowed you to easily translate them one at a time for rapid progress. I can’t seem to find that again no matter how hard I look, but I clearly remember it being shareware that looked like the good old Windows XP days, and yet it worked like a charm.
Monetized curation blogs
For a good three years I ran a network of monetized Tumblr blogs that curated niche-specific content, eventually reaching a total of two million followers. Eventually, Tumblr in their wisdom decided to change their rules, lose 30% of their traffic within months, and got sold for a fraction of what they had been acquired for.
But the idea of content curation is still valid if you stick to Tumblr’s new rules, definitely valid in many other places on the internet. Just think about mailing lists that digest and curate content, job offers, access to spreadsheets full of data — curation is evergreen if you figure out how to make it work for yourself.
The great part about Tumblr was that it was fully automated with looping queues, completely hands-off, and it earned me more than my day job during good months. So definitely keep curation efforts in mind for the next time you dig through your life for monetizable skills.
Selling scrap art
Now this was built right on top of my Youtube channel and I also made sales online — but all of that was unrelated to the offline sales on flea markets.
There are exactly three things you need:
A 15$ angle grinder
A 30-150$ stick welding setup
A source of scrap metal
This is a very physical business, but at the same time also one that is great fun, easy to pick up, and that can be replicated in every city at least once and still find interested customers. Just try your hands at it with some scrap flowers made from old rebar and bicycle parts, those actually sell for ten or fifteen bucks.
Bottle openers, garden decoration, if you add a drill to your toolset you can also start making those birds from metal and stones that you can oftentimes see. In terms of material cost and business risks any of these “flea market businesses” will be extremely cheap to get into, but expect a substantial time commitment during the summer. Ideally you pair it with an Etsy store for the best of both worlds.
User testing
I know I just listed surveys as horrible in the intro — but user testing actually worked for me. You sit down, go through a website and record yourself struggling with certain elements. Fifteen minutes, no big drama, no big payment. Decent enough though, I always liked this work and wonder why I ever stopped.
Summary: earning money can be fun
I love earning money on the side, especially so of course when it’s on my own terms and time commitment like here on Medium, on YouTube, or on my own website. But when it comes to those physical, offline side jobs that I’ve held over the years those were great fun as well, perfect to clear my mind after a long work week and to have a physical balance to the desk job.
These days I make good use of my car license and deliver pizza with a company car which basically comes out to being paid to drive around and explore the city while somebody else pays for gas and maintenance. | https://medium.com/the-innovation/the-side-hustles-that-actually-worked-for-me-56b2d79e2350 | [] | 2020-12-19 15:32:15.172000+00:00 | ['Life', 'Self Improvement', 'Productivity', 'Life Lessons', 'Work'] |
“I would regret more for not trying than for failing”- Anna Juusela, Founder & CEO of We Encourage | “I would regret more for not trying than for failing”- Anna Juusela, Founder & CEO of We Encourage
Shifting from the retail industry to creating an impact start-up, Anna Juusela, Founder & CEO of We Encourage, had to deal with plenty of fear and uncertainty in the beginning. Yet her empathy for equality issues was too strong to ignore, so she decided to overcome her fear, move forward and take action. In this interview, Anna talks about her inspiration and journey as an entrepreneur.
Tell us a bit about yourself — your professional and personal background.
I have a degree in Fashion Design and I also studied International Business as a second degree. After working in a retail chain as a shop manager for a few years, I realized that I wanted to do something new. I established my first company — Yanca Oy Ltd and built this fashion agency, in cooperation with other stakeholders but it didn’t work out. After that, I took the company forward as a consulting and teaching services business about visual merchandising. My work is to train, consult and teach visual merchandisers and retailers on profitability, stock control trends, etc.
My profound interest in trends, technology and equality has always been self-evident. I always feel feminism is a part of me, yet I don’t label myself as a feminist as I feel it’s too restricted. I believe equality embraces even more meanings, not only the equality between two genders. In many ways, these passions led me towards discovering the possibilities of AI and Blockchain, and later establishing We Encourage.
Could you introduce ‘We Encourage’ to our readers?
We believe that every girl has the right to education, equality and empowerment. Every woman has the right to live free from violence and oppression. Our mission is to enable that.
We Encourage is a web-based fundraising platform for SME NGOs and individual fundraisers for causes aimed to empower and support women and girls. We support and help from ideating, creating fundraising causes, marketing planning to executing the fundraising and follow-up.
We ensure transparency and traceability of donations, secured by blockchain technology provided by our strategic partner, Charity Wall. In addition to our fundraising platform, we are building an open-source conversational AI tool for victims of intimate partner violence.
Being your own boss or venturing into entrepreneurship can be a wonderful step to find a fulfilling career, but it’s also not as simple as just setting out and deciding to change the world. So, what motivated you to take the leap and especially create an impact startup?
Growing up in a family of entrepreneurs, I was more determined to start my own venture. As I mentioned, my first company was founded while I was pursuing my second degree. I created the business plan during a school course and applied for a loan for business operations.
As I was intently following some ongoing trends, I got interested in new technologies like Blockchain and AI, together with their prospects. Back in 2016, I saw a documentary about an Afgan rapper, Sonita, who was almost sold to marriage to fund her brother’s wedding. Immediately, many thoughts came to me with frustration, “If a life of the girl is bought with the price of $2000, with all the freedom returned to them, we should also give them an educational opportunity and empowerment to follow their dreams.” This discomfort got stuck and started to live its own life inside of me. On the other hand, I was too afraid to take action, as my idea was so different from my current retailing entrepreneurial journey, not to mention that I didn’t know how to build Blockchain or AI solutions.
In 2018, I attended a Blockchain Forum’s event and shared my idea of using blockchain in incentivizing poor families who desire to educate their daughters but cannot afford and for that reason sell them into marriage. I was also encouraged to pitch my idea in a Blockchain Summit pitching competition just in a few weeks. At the stage, the worst happened, the first jury member laughed at me, “How stupid are you? Who would sell their daughter to marriage?” Fortunately, I got support and encouragement from another jury member, Teemu Jäntti and I decided to move forward.
What was the process of founding We Encourage? Did you go through some training like joining an accelerator?
It took me a few weeks after the Blockchain Summit pitching session to decide whether I should really continue with this idea. I was overwhelmed with layers of fear, “What would happen with my other company? How am I able to handle kids?” Nevertheless, my motivation was extremely strong, I knew I would regret more for not at least trying than for failing. Moreover, I wonder “How can I fail if I keep getting up?” Finally, I made my decision to at least give it a try.
As all my network was in the retail side, the first thing to do was to start networking on the startup and NGO sphere. As a result, it started to pay off, in Spring 2019, I was speaking at the UNESCO blockchain and practice conference.
Back then, I didn’t know I was on my way to building an impact startup until I was invited to Oxygen, a community of other impact-oriented entrepreneurs. Only then did I realize that what I was about to do was actually new and worthwhile. After that, I attended their impact accelerator. During the first 10 months, I built my network, gradually gathered people around me and took the mission forward. Later, I found my team members, iterated and reiterated the idea. Consequently, in August 2019, I was brave enough to officially establish We Encourage. One year later, we have finally launched our very first fundraising cause.
Being your own boss means you are on the hook for keeping up with trends in your field, let’s say, the technology. What is the role of technology in your business?
We are utilizing blockchain technology for transparency and traceability, and AI as a conversational tool for providing psycho-social support and guidance to women victims of intimate partner violence. I wish my business will be able to integrate these technologies more automatically in the future.
What is your advice for people who aspire to start their own business?
Just take action, start with little steps, start growing your network as soon as possible. Move towards your fear, something magical might be hiding behind that. Also, learn to distinguish genuine feedback and advice from opinions.
What has been the most positive experience of this whole journey?
I think it’s the feeling that you get when people understand our mission, how important equality is and their empathy with the harsh reality many women are putting up with. In Finland or some western countries, people may care about other tangible facts, like the payment gap between genders, while closing their eyes to the oppression that women are facing, for example, a little girl being raped by her cousins yet being punished by that. This is what we’re pioneering to solve.
Do you think entrepreneurship can be learned or is it a talent?
I think entrepreneurship seems to be a mindset rather than a talent. You need to be a creator, eager to try things and stand the uncertainty. Interestingly, entrepreneurship can be learned along the journey, as long as you have a concrete idea and passionately execute it. Obviously, you are the one who takes full responsibility for whatever you do.
Thank you very much for sharing your story with us, Anna!
Ready to conquer all the barriers and fuel your success? If yes, you own the chance to build your dream venture. Turn your idea into a viable business through our upcoming 3-week entrepreneurship program B.Y.O.B (Be Your Own Boss) from The Shortcut. Applications are open now! Find more information here. | https://medium.com/the-shortcut/i-would-regret-more-for-not-trying-than-for-failing-anna-juusela-founder-ceo-of-we-encourage-dae635bb616a | ['Michelle Tran'] | 2020-09-08 10:28:20.982000+00:00 | ['Entrepreneurship', 'Impact Startup', 'Be Your Own Boss'] |
Bank Data: Classification Part 3 | This blog is part 3 out of 4 and we will be discussing Boosting.
Gradient Boosting
Gradient Boosting is a machine learning technique for classification and regression used to turn weak learners to strong learners.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

gb_clf = GradientBoostingClassifier()

# Grid Search
param_gb = {"n_estimators": [100, 300, 500], "max_depth": [3, 5]}
grid_gb = GridSearchCV(gb_clf, param_grid=param_gb)
grid_gb.fit(X_train_new, y_train_new)
grid_gb.cv_results_
We continue to use grid search and take a look at the cross-validation performance. Looking at the results above, we should be obtaining a test score of around 76%.
Confusion Matrix
# Confusion Matrix
print('Confusion Matrix - Testing Dataset')
print(pd.crosstab(y_test, grid_gb.predict(X_test),
                  rownames=['True'], colnames=['Predicted'], margins=True))
confusion_matrix_metrics(TN=719, FP=3790, FN=829, TP=10166, P=10995, N=4509)
The confusion matrix above shows an F1 Score of 81%, where Recall is the main contributor.
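The confusion_matrix_metrics helper is not defined in this part of the series; it presumably comes from an earlier installment. The following is only a sketch of what such a function might compute, and the implementation details are my assumption rather than the author's code:
def confusion_matrix_metrics(TN, FP, FN, TP, P, N):
    # Illustrative sketch: derive the usual metrics from raw confusion-matrix counts
    precision = TP / (TP + FP)
    recall = TP / (TP + FN)  # equivalently TP / P
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (TP + TN) / (P + N)
    print(f"Precision: {precision:.2f}, Recall: {recall:.2f}, F1 Score: {f1:.2f}, Accuracy: {accuracy:.2f}")
With the counts above, such a sketch would report roughly 0.73 precision, 0.92 recall and an F1 of about 0.81, consistent with the figure quoted here.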
Feature Importance
# Graphing
fig, ax = plt.subplots(figsize=(15, 10))
ax.barh(width=gb_clf.feature_importances_, y=X_train_new.columns)
By using feature importance we can see what features our Gradient Boosting model believes to be paramount. The main distinction between Gradient Boosting and Random Forest is that Gradient Boosting is more specific/clear when deciding which features are important; there is no thin line, the distinction between important features is clear as day.
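To read the same ranking off numerically rather than from the bar chart, the importances can be placed into a labelled, sorted Series. This is an illustrative sketch, and it assumes pandas is already imported as pd, as in the earlier parts of this series:
# Rank features by importance, highest first
importances = pd.Series(gb_clf.feature_importances_, index=X_train_new.columns)
print(importances.sort_values(ascending=False))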
Gradient Boosting on Important Features
We will be selecting a threshold and then performing feature selection to perform Gradient Boosting on our new features.
# Selecting the top features at a cap of 0.08
gb_important_features = np.where(gb_clf.feature_importances_ > 0.08)
print(gb_important_features)
print(len(gb_important_features[0]))  # Number of features that qualify

# Extracting the top feature column names
gb_important_feature_names = [columns for columns in X_train_new.columns[gb_important_features]]
gb_important_feature_names
The code above shows important features above our threshold of 0.08. These will be the features that will be selected and tested.
# Creating new training and testing data with top features
gb_important_train_features = X_train_new[gb_important_feature_names]
gb_important_test_features = X_test[gb_important_feature_names]
The code above only includes the selected features in the dataframe.
We will now run through our training data with a grid-search and look at the results:
# Grid search
param_gb = {"n_estimators": [100, 500, 700], "max_depth": [3, 7, 9]}

grid_gb = GridSearchCV(gb_clf, param_grid=param_gb)
grid_gb.fit(gb_important_train_features, y_train_new)
grid_gb.cv_results_
The results of the cross-validation show that we should expect a score of around 72%.
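For a quicker read than the full cv_results_ dictionary, GridSearchCV also exposes the best parameter combination and its mean cross-validated score directly (a small addition for illustration, not shown in the original article):
# Best hyper-parameters and their mean cross-validated score
print(grid_gb.best_params_)
print(grid_gb.best_score_)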
Confusion Matrix for Important Features
# Confusion Matrix
print('Confusion Matrix - Testing Dataset')
print(pd.crosstab(y_test, grid_gb.predict(gb_important_test_features), rownames=['True'], colnames=['Predicted'], margins=True))
confusion_matrix_metrics(TN=593, FP=3916, FN=825, TP=10170, P=10995, N=4509)
By looking back, we can see that feature selection showed little to no difference in our F1 Score results. | https://medium.com/analytics-vidhya/bank-data-classification-part-3-21aaa23c5ea4 | ['Zaki Jefferson'] | 2020-10-01 16:03:47.957000+00:00 | ['Machine Learning', 'Boosting', 'Python', 'Gradient Boosting', 'Data Science'] |
Trapped in “The Yellow Wallpaper” | In these recent times, I cannot stop thinking about “The Yellow Wallpaper” by Charlotte Perkins Stetson. It’s an old story. Of a trapped woman.
The woman feels ill but has been told she is not ill. Her husband is a doctor. She must stay in the room with the yellow wallpaper. She must wait it out, this is what must be done.
But in our hearts, we question this remedy. Who is her husband anyway?
In our hearts, we know she is disobedient. She cannot quite follow the rules. Of being female. She is unhappy with her situation.
So she must stay by herself in the room with the yellow wallpaper.
Maybe this is how we feel lately: Crazy. Trapped. Unhinged. Ugly.
Eventually she claws at the walls.
And behind the walls, something taunts her. She knows she is missing something, so she stretches her body to reach the highest places on the wall to tear it down, the paper comes off in strips of ugly, marred yellow and underneath! Underneath, there are images jeering at her. Bared teeth. She tries to get behind it. She crouches to see better.
Is there another woman back there? Some shadow of herself?
It is only temporarily soothing to tear at the paper.
She cannot stop. Underneath, there are bars, keeping her contained. There’s no escape. So she circles the room, develops a path that becomes well-marked, ingrained, a trail around the edge. Now she is a trapped animal with one thing in mind, Get out.
She can’t stop moving, pacing, wondering if she has done something wrong.
Of course she thinks she is to blame. For wanting to find an opening, a space, a window, a breath of fresh air. To get away from the ugliness of the wallpaper. To get away from something inside herself. To jump? Maybe.
Yes. To jump. | https://medium.com/literally-literary/remember-the-yellow-wallpaper-d89c48c35fe | ['Kristin Doherty'] | 2020-05-06 04:27:24.650000+00:00 | ['Depression', 'Nonfiction', 'Women', 'Literally Literary', 'Feminism'] |
The Top “Exploring Sobriety” Posts of 2020 | I’ve been writing more than ever this year and want to thank each of you for reading. As the year wraps up, I’ve gathered the ten most popular “Exploring Sobriety” posts of 2020 into the list below.
If you’re a new reader, this list is the perfect starting place. If you’ve been following me for a while, it’s a great way to catch up on any posts you missed or wanted to read again. I hope you all enjoy these and find them interesting or helpful.
10. Why Addicts Do Endless Mental Math
“One of the things that non-addicts rarely realize about alcoholism is how much planning goes into it. This is especially true among so-called ‘high-functioning alcoholics.’ My drinking days required constant calculations, as I tried to balance law school and work with getting drunk every single night.”
Read more:
9. Why Recovering Addicts Become Obsessed With Healthy Living
“After quitting drinking, other health changes followed. I ditched the cigarettes, started eating better, and became an avid runner. Today, I’m living a healthier lifestyle than ever before.
I’ve become a bit of a stereotype. There’s a trend among recovering addicts to get obsessed with healthy living, and I’m certainly a part of it. My drinking years had been on one extreme of the health spectrum, and now I’ve swung to the other.”
Read more:
8. My Biggest Personality Change Since Quitting Drinking
“Although I didn’t realize it at the time, years of heavy drinking had turned me into an incredibly dull person. I didn’t have any serious interests or hobbies. I worked hard in school and at my job, but spent the rest of my time mostly just watching television.
Quitting drinking gave me the room I needed to discover who I really am. I became happier and more optimistic. I also started to develop interests outside of work, and I stopped sitting in front of the TV all night.
Among all the changes that my personality went through though, there’s one that stands out above the rest: I gained an extraordinary amount of self-discipline.”
Read more:
7. How to Relax at Home Without Alcohol
I finally quit drinking around three-and-a-half years ago. At the time, one of my biggest fears about sobriety was whether I could ever learn to relax without alcohol.
I was an incredibly tense and high-strung person. When I pictured life without booze, I imagined one long anxiety attack.
Read more:
6. The One Thing Harder Than Getting Sober
“My sober life was nothing like I imagined it would be. I had been blaming alcohol for all of my problems, and I thought that everything would get better as soon as it was gone. The reality was that my life had even gotten a bit worse since I had quit drinking.
Getting through those months was tough. I came up with every excuse I could think of to go back to drinking. I considered throwing in the towel and resigning myself to a life as a drunk.”
Read more:
5. The Most Dangerous Risk a Recovering Addict Can Take
“One of the things that helped me to finally stay sober this time, after failing again and again in the past, was that I learned to stop sabotaging myself.
Whenever I’ve tried to quit drinking, it’s always felt like a war raging between two halves of my mind. One side recognizes the danger of alcohol and is ready to quit. The other side is still ruled by my addiction, and wants more than anything to go back to drinking.”
Read more:
4. The Biggest Mistake I’ve Made Since Quitting Drinking
“Throughout most of my adult life, I’ve struggled with two serious addictions: alcohol and tobacco.
My drinking and smoking habits grew hand-in-hand, both becoming daily vices during college. By the time I graduated, I was smoking a pack of cigarettes a day and getting drunk every night.”
Read more:
3. How Long Does It Take Until Sobriety Starts to Feel Good?
“Like a lot of addicts, I regularly daydreamed about how perfect my life could be if only I could give up alcohol. I imagined living every moment to its fullest, reaching my full potential, and never having another regret.
Often I’d even plan out my hypothetical schedule for my first days sober: waking up early to exercise, cooking every meal from scratch, staying focused at work and spending all my free time productively.
Of course, the reality of being newly sober didn’t live up to those over-the-top fantasies.”
Read more:
2. Why I’ll Never Go Back to Alcohol
“Drinking in moderation sounded so easy in theory. It was the same thing I was already doing, just less. How hard could that be? For me though, it wasn’t just hard — it was impossible.
I think non-addicts might never quite be able to understand how strong an addict’s compulsions are. No amount of logic, reasoning, or desire was ever enough for me to resist the pull of another drink.”
Read more:
1. How An Alcoholic Thinks About Drinking
“Whenever I used to drink (which was daily for many years), my goal was always the same: to get as drunk as possible.
I would say that I liked the taste of beer, or that I needed to unwind, but the real point of my drinking was to get as intoxicated as I could by the end of the night. The only lines I tried not to cross were getting sick and passing out, and I even crossed those lines with embarrassing frequency.
In my mind, the drunker I was, the better. My thinking was really as simple as that.”
Read more: | https://medium.com/exploring-sobriety/the-top-exploring-sobriety-posts-of-2020-acaf5259adf2 | ['Benya Clark'] | 2020-12-21 19:34:09.253000+00:00 | ['Addiction', 'Mental Health', 'Sobriety', 'Alcoholism', 'Self Improvement'] |
The Dark Side of Design | Note: I was inspired to write this after reading the findings on dark patterns by researchers at Princeton University and the University of Chicago. Check it out!
At its core, UX design is about helping people to do what they want in the simplest way possible. The goal is to make people’s lives a little bit easier by removing any unnecessary frustration in the way of their goal. Designers try to do the thinking for their users, since thinking is hard and makes your brain hurt. 🤯
When external powers get involved, they can force designers to stray away from this happy view of UX. KPIs and sales targets begin to enter the equation, and some designers are put into a tricky situation where they have to intentionally hinder the user experience to get more dollars into the corporate bank. This notion is sometimes referred to as “Dark Design”.
No, dark design is not about designing your product to support a dark mode. (Fun fact, every system has a dark theme — just close your eyes! 🙈) Dark design is about making the conscious decision to make users go out of their way to avoid unintended or detrimental choices that benefit the product owner.
Think of dark design as the opposite of the aforementioned “good” UX definition. It’s about restricting people to do what they want in the most complicated way possible. It wants to make people’s lives a little bit harder by adding unnecessary frustration in the way of their goal. It makes people think a lot, since thinking is hard and they’ll hopefully just cough up dollars instead.
Get it yet? Too bad, I’m still going to go through some examples of the various types of dark design so you can be aware of the tricks that some systems are trying to pull on you. I’m not going to name and shame because blame is bad for tha’ game, yo. 🎤 | https://medium.com/swlh/the-dark-side-of-design-137f9ba37c54 | ['Joshua Braines-Mead'] | 2019-12-18 00:20:19.194000+00:00 | ['Design', 'UI', 'User Experience', 'User Interface', 'UX'] |
Practical Model-Based Testing — Say “Hello MBT” | Time to Code
Throughout the rest of this post I will discuss all the steps necessary to complete the Traffic-Light example I described above.
We start by setting up the development environment, beginning with the Model-Based Testing tool: GraphWalker.
GraphWalker is an open-source MBT tool, driven and developed by Kristian Karl and others (I also used Kristian’s previous MBT tool, org.tigris.mbt, about 13 years ago).
GraphWalker takes a user-friendly-first approach, sacrificing some of MBT’s more advanced capabilities. This approach is what makes MBT with GraphWalker a breeze.
Workspace
Create a workspace where you will have the project folder, as well as a lib folder that holds GraphWalker’s jar files.
$ pwd
/Users/me/workspace
$ mkdir lib
$ cd lib/
2. GraphWalker
Download GraphWalker’s jars into lib.
3. Java
GraphWalker needs JDK 1.8, preferably Oracle’s (especially if you want to build the jars from source): https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
An alternative Java could be AdoptOpenJDK at: https://adoptopenjdk.net/ (unless you want to build GraphWalker from source).
If Java 1.8 is not your system’s default, after installing JDK 1.8, you have to run the following command at the beginning of every GraphWalker session (or the proper command for your system to have a shell session with Java 1.8)
$ export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
4. Maven
We will be using Maven, as this is GraphWalker’s default implementation environment.
On Mac you can install Maven through Homebrew:
$ brew install maven
I recommend on having the ‘jq’ Json utility as well:
$ brew install jq
5. Default Maven GraphWalker Project
Create the Maven HelloMbt project and cd into the HelloMbt project’s directory:
$ pwd
/Users/me/workspace
$ export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
$ mvn archetype:generate -B \
    -DarchetypeGroupId=org.graphwalker \
    -DarchetypeArtifactId=graphwalker-maven-archetype \
    -DgroupId=com.mbtroads -DartifactId=HelloMbt \
    -DarchetypeVersion=LATEST
$ cd HelloMbt
6. GraphWalker Studio
Change the default model that was generated by GraphWalker.
Launch the GraphWalker studio:
$ java -jar ../lib/graphwalker-studio-4.2.0.jar
Open your browser at: http://localhost:9090/studio.html
In GraphWalker-Studio click the ‘Open’ icon on the left, and choose the model file:
...workspace/HelloMbt/src/main/resources/com/mbtroads/SmallTest.json
For the simplicity of the “Hello MBT” example that we build today, we will use this generated model with small changes in order to adapt it to our example application.
In the studio open the ‘Properties’ panel (click the 3-dots-lines button on the left)
At the top of the Properties panel, under ‘Model’ → ‘Name’, rename the model to “TrafficLightModel”.
Next, we will rename all of the model’s generated elements: the edges and vertices.
A Vertex is the rectangle that represents a state in which the model can be during runtime.
An Edge is the arrow that connects 2 vertices in a certain direction, and which represents a transition from one state to another during runtime.
To rename a model’s element just click it and rename it in the Properties panel, under ‘Element’ → ‘Name’.
The result should look like this:
Make sure that the ‘Start element’ button at bottom of the Properties panel is ‘On’ for the ‘v_Start’ element and ‘Off’ for all the rest (this what makes v_Start becomes green).
Your model should have the following 3 vertices: v_Start, v_Red and v_Green, and the following 2 edge names: e_Toggle and e_TurnGreen, where e_Toggle is used in three different edges in the model.
We will leave the chosen test-generator ‘random(edge_coverage(100))’ (at the bottom of the Properties panel) for this post. The generator is a directive to GraphWalker for how to generate the tests and when to stop. This generator will cause GraphWalker to randomly choose the next edge (when it has choices) and to stop once it has visited all the edges. We will look at other generators in a future post.
Click the ‘Play’ button to test that it doesn’t have any errors
Click the ‘Save’ button that saves the model as test.json in your ‘Downloads’ folder.
Exit the studio and copy the test.json file into the ‘resources’ folder in the HelloMbt project. Delete the SmallTest.json file (this is the generated model that we don’t need anymore) and rename the test.json model file to: TrafficLightModel.json
7. Test Cases Generation
At this point we can already generate the test-cases for our requirements.
At the root of the HelloMbt project run:
$ java -jar ../lib/graphwalker-cli-4.2.0.jar offline -m src/main/resources/com/mbtroads/TrafficLightModel.json "random(edge_coverage(100))" | jq '.currentElementName'
The generated test suite
GraphWalker can generate and execute tests that span several connected models, to enable the representation of a large application or a suite of applications.
8. Model’s Interface Generation
Run the following line to generate the interface from the model:
$ mvn clean graphwalker:generate-sources
* The first Maven run may take longer since Maven downloads all the required dependencies.
If you use Mac and get errors on .DS_Store, just remove those .DS_Store files.
The generated interface can be found at:
target/generated-sources/graphwalker/com/mbtroads/TrafficLightModel.java
Later we will see how our test implements this model’s interface.
9. Write the Target Application
Let’s add the TrafficLight application code which we plan to test with our MBT test.
Create the file: src/main/java/com/mbtroads/TrafficLight.java
With the following application code:
package com.mbtroads;

public class TrafficLight {

    public static enum Color {
        RED, GREEN
    }

    private Color myColor;

    public TrafficLight() {
        myColor = Color.GREEN;
    }

    public Color getCurColor() {
        return this.myColor;
    }

    public void setColor(Color color) {
        this.myColor = color;
    }

    public void toggleLight() {
        if (this.myColor == Color.GREEN) {
            this.myColor = Color.RED;
        } else {
            this.myColor = Color.GREEN;
        }
    }

    public static void main(final String[] args) {
        final TrafficLight target = new TrafficLight();
        target.setColor(Color.RED);
        System.out.println(target.getCurColor());
    }
}
10. Test Code
We are going to re-write the default test that was generated for us by GraphWalker. In a future post I plan to show how to generate the test skeleton from the model.
Replace the code in the generated test file: src/main/java/com/mbtroads/SomeSmallTest.java
with the following code:
package com.mbtroads;

import org.graphwalker.core.machine.ExecutionContext;

public class SomeSmallTest extends ExecutionContext implements TrafficLightModel {

    TrafficLight trafficLight = null;

    @Override
    public void e_Toggle() {
        System.out.println("Running: e_Toggle");
        trafficLight.toggleLight();
    }

    @Override
    public void e_TurnGreen() {
        System.out.println("Running: e_TurnGreen");
        trafficLight.setColor(TrafficLight.Color.GREEN);
    }

    @Override
    public void v_Start() {
        System.out.println("Running: v_Start");
        trafficLight = new TrafficLight();
        assert(trafficLight.getCurColor() == TrafficLight.Color.GREEN);
    }

    @Override
    public void v_Red() {
        System.out.println("Running: v_Red");
        assert(trafficLight.getCurColor() == TrafficLight.Color.RED);
    }

    @Override
    public void v_Green() {
        System.out.println("Running: v_Green");
        assert(trafficLight.getCurColor() == TrafficLight.Color.GREEN);
    }
}
During test execution, GraphWalker decides which transition to perform on the model's graph, and in parallel the same transition is executed on the test by calling the corresponding test method.
Let’s examine the code in the SomeSmallTest.java test file.
Test methods that implement edges perform actions that make the target application transition from one state to another:
@Override
public void e_Toggle() {
    System.out.println("Running: e_Toggle");
    trafficLight.toggleLight();
}
Here the Traffic-Light application transitions from one light to another.
Test methods that implement vertices check (using assertions) that the target application is at the expected state (same state as the model at this point of the test execution):
@Override
public void v_Green() {
    System.out.println("Running: v_Green");
    assert(trafficLight.getCurColor() == TrafficLight.Color.GREEN);
}
Here the test checks that the Traffic-Light application has the Green light.
11. Test Execution
Since the test in this example is not a JUnit test (and is under the ‘main’ code), we have to explicitly ask Maven to enable the assertions in the test code. In a future post we will see how to have the MBT tests executed through JUnit.
Enable assertions:
$ export MAVEN_OPTS="-ea"
Remember to set the right Java environment for the current shell session:
$ export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
Execute the GraphWalker test:
$ mvn clean graphwalker:generate-sources compile exec:java -Dexec.mainClass="com.mbtroads.Runner"
The tests should pass.
You can now play with the code to see how it works, for instance, in TrafficLight.java change the code in toggleLight() from:
else {
this.myColor = Color.GREEN;
}
into this:
else {
this.myColor = Color.RED;
}
This will cause the TrafficLight application to toggle from Red to Red instead of Green.
When you execute the test again you should get an assertion error originating in the v_Green() method (you have to enable assertions as I showed above):
@Override
public void v_Green() {
    System.out.println("Running: v_Green");
    assert(trafficLight.getCurColor() == TrafficLight.Color.GREEN);
}
When the model transitions to v_Green, the test's v_Green() method is executed; it expects the traffic light colour to be Green but finds it to be Red.
Summary
While this was a very simple example, I hope it showed the potential of Model-Based Testing on a large-scale cloud application where you may have frequent business requirements updates. With every requirement change you only need to update the model to immediately get a running full-coverage generated test suite.
I plan to cover some more advanced features of GraphWalker in future posts.
My experience with Model Based Testing | https://medium.com/cyberark-engineering/practical-model-based-testing-say-hello-mbt-b16292ffff06 | ['Ofer Rivlin'] | 2020-07-08 15:57:10.335000+00:00 | ['Model Based Testing', 'Test Automation', 'Test Driven Development', 'Formal Verification', 'Java'] |
15 Productivity Hacks to Get More Done Each Day | In an ideal world, you world breeze through your todos and still have time to do everything you enjoy. Unfortunately, many of us come home from working feeling discouraged because we wish that we could have gotten more done. Instead of beating yourself up for not getting enough done, start using these 15 productivity hacks to get everything you want, accomplished.
1. Become a time management ninja using your Calendar.
If you want to be more productive then you should only be focusing on activities that deserve your time. The best way to achieve this is by effectively managing your calendar.
As Renzo Costarella writes in a previous Calendar post, “On a basic level, most people use their calendar to schedule meetings, with each empty slot representing a time when you’re available. If you only had a couple of meetings scheduled in a day, this leaves considerable free time.”
“A great strategy to use for calendar management is time blocking. As you schedule meetings on your calendar block out times throughout the day for finishing specific tasks,” adds Costarella. “That way you’ll accomplish what you need without over-extending yourself to meetings or unfocused tasks.”
2. Stick to a “work uniform.”
Yes, he gets in hot water over and over now. But, as a business professional, did you ever wonder why Mark Zuckerberg used to wear the same outfit over and over again? Wearing the same grey t-shirt saved him time and stress. Instead of spending 20 minutes looking for an outfit and worrying how he looked in it, Zuck already knew what he was going to wear and how he’d look.
As an added perk, wearing the same thing ensured that he was saving his energy for more important decisions. “I really want to clear my life to make it so that I have to make as few decisions as possible about anything except how to best serve this community,” Zuckerberg said. He also mentioned that he had “multiple grey shirts.”
When it comes to your work outfit, it can actually be whatever you prefer. Steve Jobs wore turtlenecks, while some professionals have a handful of suits on rotation. The idea is to have a minimalist and comfortable wardrobe that also reflects your industry. Grey t-shirts may be acceptable in Silicon Valley, but times are changing. And even an incredible t-shirt is not okay in the courtroom. I see the millennials looking quite snappy these days. Do cast a critical eye around every once in a while — so you don’t look like you are homeless.
3. The Pomodoro Technique
This is one of the most well-known hacks out there. And, for good reason, it’s been used by successful and productive people for decades because it helps:
You focus on the task at hand.
Eliminates multitasking.
Develops a sense of urgency.
Helps you stop being a perfectionist.
Reduces stress because you’re doing one thing at a time.
Gives your brain a chance to relax and recharge.
If you’re new to this concept, it’s simply where you break all of your tasks into 25-minute blocks of time. After those 25-minutes are up, you take a 5-minute break. After four of these 25-minute blocks you take a longer break — usually 15–30 minutes. Of course, people have modified this technique to better fit their own personal preference habits — but the idea is the same.
4. Keep one-day a week meeting free.
“One of my favorite hacks is No Meeting Wednesdays, which we borrowed from Facebook,” writes Dustin Moskowitz, CEO of Asana. “With very few exceptions, everyone’s calendar is completely clear at least one day out of the week. Whether you are a Maker or a Manager, this is an invaluable tool for ensuring you have some contiguous space to do project work. For me personally, it is often the one day each week I get to code.” This also eliminates the time wasted at unproductive meetings.
5. Group “like” jobs.
Also known as batching, this is a productivity hack where you group similar tasks together and complete them at the same time. This way you’re using the same frame of mind and not constantly shifting focus. For example, checking all of your emails, texts, and social messages first thing in the morning. Another example would be doing most of your cooking for the week on a Sunday since this involves not just preparing the meal, but also cleaning up afterward.
6. Follow the two-minute rule.
From David Allen’s bestselling book, Getting Things Done, this rule simply states that if something takes under two minutes to complete, then you should just do it. With the two minute rule, you can cross off all of these small tasks before they consume your thoughts and time.
7. Limit your phone usage.
What’s your biggest distraction? It’s most likely your phone, and that makes sense. Every time your phone buzzes, you stop what you’re doing and check out the notification — whether it’s a text, email, or social media notification. That may not sound like a big deal, but considering that it takes 25 minutes to return to the original task, you can now see why you should limit your phone usage.
If you have the self-discipline, put your phone on silent or airplane mode. If you can’t, then use the Moment app. If you’re on Android 6.0 Marshmallow, then make use of the Do Not Disturb mode. Schedule in specific times to check your phone, like eight a.m., noon, and 4:30 p.m., and you’ll still remain in the loop without checking your phone every five minutes.
8. Listen to music.
Music has been found to maintain focus and help you stay productive, but, pick your music wisely. Listening to pop music may not be effective since you’re singing along. There’s a nifty app called Focus At Will that can help determine which type of music helps you concentrate best. As a result, you’ll boost your productivity.
9. Use a bullet journal.
Most productive use notebooks to jot down their thoughts and ideas. It’s a surefire way to help them remember these things. But, instead of a notebook, start bullet journaling. This strategy is basically an empty notebook that is your calendar, to-do list, sketchbook, and a diary in one location. What makes it so useful is that it can be organized any way you want.
10. Set macro goals and micro quotas.
There was a study on motivation that shows abstract thinking can be an effective method to help with discipline. In other words, you need to balance “dreaming big” with intrinsic motivators, aka the self-determination theory.
The best course of action here is to set “micro quotas” and “macro goals.” While your goals should relate to your big picture, the quotas are the minimum amount of work you must do daily to achieve those goals. For example, if you’re writing a book, then your quota could be writing two pages a day.
11. Chew more gum.
I love coffee. It’s delicious and gives me a much-needed boost — like I’m sure it does for you. However, like everything else in life, too much of it can be a bad thing. For example, coffee has been found to trigger anxiety — and you don’t want that when you’re already stressed. Instead of pouring another cup of Joe, chew some gum. Studies have found that chewing gum can helps with concentration and retaining information.
12. Use red and blue.
Your workspace has a major influence on your productivity. That’s why you should always keep it organized and clutter-free — along with getting some plants and exposing yourself to natural sunlight. However, you should also incorporate some red and blue around your workspace. According to a Science Daily study, red can help increase attention to details while blue can spark creativity.
13. Procrastinate productively.
You turn on Netflix to decompress or clear your head. Next thing you know you just watched an entire season of a show. That’s not good. Here’s the thing. We need to procrastinate occasionally. It’s a great way to recharge and refocus. But, you should be procrastinating productively. As opposed to watching Netflix, pick-up a book or take your dog for a log-walk.
14. Find your “golden hours.”
You’ve probably heard that you should eat a frog each morning. Not literally. It actually means that you should get your most important task done and over with first thing in the morning. We tend to be focused and energetic in the morning. Instead of eating that frog, schedule your most important tasks for the time that works best for you. If you’re unsure about your own “golden hours” then check out “The Perfect Workday to Maximize Motivation.”
15. Just be you.
Darrin Brege, the Creative Director and strategist at HelloWorld, encourages his team to design, build, and race paper boats. Adrienne Weissman, the chief customer officer of G2 Crowd, choreographs a dance routine to her favorite song in her head.
While these hacks are able to make you more productive, the truth is they may not work for you. If there’s something you do that keeps you pushing forward — then go ahead and keep doing it. | https://medium.com/calendar/15-productivity-hacks-to-get-more-done-each-day-aad1f5b6f380 | [] | 2019-07-05 22:54:57.588000+00:00 | ['Work', 'Productivity', 'Pomodoro Technique', 'Time Management'] |
The Magpie Means Nothing To Me | I walk my dogs down the ravine everyday
And I will see a magpie.
First of all I want to say that I’m not superstitious
That would be bad luck
Anyway…
The magpie will trigger a thought
‘It’s doom again for me!’
I’m not afraid, I’m just anxious
The past has crept up stealthily
Beyond me again and
Lies crouched in the future
Ready to ambush me
Doom! Doom! Doom!
I’m not crazy, I just live in
An alternative reality
And it’s unhealthy…
The bloody magpies have it in for me!
I remind myself I don’t believe in magpies,
I scan the air for another one,
I definitely don’t believe two will bring joy,
It just cancels out the first one,
There’s always at least five hanging around here,
God what have you done to me?
I’m sorry, it was my fault, I know that,
It’s my fault I’m crazy,
Maybe I could see a dove?
That would really mean something,
I never see a dove around here,
A seagull would do,
They’re white too,
Which means something pure,
I lost touch with the inside of me,
Can anything save me?
Count my steps,
Feel the breeze on me,
Come out of my head,
It’s not too late surely?
Remember reality?
There’s three magpies now,
Whatever, it means nothing to me… | https://medium.com/literally-literary/the-magpie-means-nothing-to-me-7aab81429c8f | ['John Horan'] | 2020-07-17 22:20:32.973000+00:00 | ['Poetry', 'Ocd', 'Mental Health', 'Faith', 'Superstition'] |
From Pandas to Scikit-Learn — A new exciting workflow | The new ColumnTransformer will change workflows from Pandas to Scikit-Learn
Scikit-Learn’s new integration with Pandas
Scikit-Learn will make one of its biggest upgrades in recent years with its mammoth version 0.20 release. For many data scientists, a typical workflow consists of using Pandas to do exploratory data analysis before moving to scikit-learn for machine learning. This new release will make the process simpler, more feature-rich, robust, and standardized.
Black Friday Special 2020 — Get 50% Off — Limited Time Offer!
If you want to be trusted to make decisions using pandas, you must become an expert. I have completely mastered pandas and have developed courses and exercises that will massively improve your knowledge and efficiency to do data analysis.
Get 50% off all my courses for a limited time!
Summary and goals of this article
This article is aimed at those that use Scikit-Learn as their machine learning library but depend on Pandas as their data exploratory and preparation tool.
It assumes you have some familiarity with both Scikit-Learn and Pandas
We explore the new ColumnTransformer estimator, which allows us to apply separate transformations to different subsets of your data in parallel before concatenating the results together.
estimator, which allows us to apply separate transformations to different subsets of your data in parallel before concatenating the results together. A major pain point for users (and in my opinion the worst part of Scikit-Learn) was preparing a pandas DataFrame with string values in its columns. This process should become much more standardized.
The OneHotEncoder estimator was given a nice upgrade to encode columns with string values.
estimator was given a nice upgrade to encode columns with string values. To help with one hot encoding, we use the new SimpleImputer estimator to fill in missing values with constants
estimator to fill in missing values with constants We will build a custom estimator that does all the “basic” transformations on a DataFrame instead of relying on the built-in Scikit-Learn tools. This will also transform the data with a couple different features not present within Scikit-Learn.
Finally, we explore binning numeric columns with the new KBinsDiscretizer estimator.
A note before we get started
This tutorial is provided as a preview of things to come. The final version 0.20 has not been released. It is very likely that this tutorial will be updated at a future date to reflect any changes.
Continuing…
For those that use Pandas as their exploratory and preparation tool before moving to Scikit-Learn for machine learning, you are likely familiar with the non-standard process of handling columns containing string columns. Scikit-Learn’s machine learning models require the input to be a two-dimensional data structure of numeric values. No string values are allowed. Scikit-Learn never provided a canonical way to handle columns of strings, a very common occurrence in data science.
This lead to numerous tutorials all handling string columns in their own way. Some solutions included turning to Pandas get_dummies function. Some used Scikit-Learn’s LabelBinarizer which does one-hot encoding but was designed for labels (the target variable) and not for the input. Others created their own custom estimators. Even entire packages such as sklearn-pandas were built to support this trouble spot. This lack of standardization made for a painful experience for those wanting to build machine learning models with string columns.
Furthermore, there was poor support for making transformations to specific columns and not to the entire dataset. For instance, it’s very common to standardize continuous features but not categorical features. This will now become much easier.
Upgrading to version 0.20
conda update scikit-learn
or pip:
pip install -U scikit-learn
Introducing ColumnTransformer and the upgraded OneHotEncoder
With the upgrade to version 0.20, many workflows from Pandas to Scikit-Learn should start looking similar. The ColumnTransformer estimator applies a transformation to a specific subset of columns of your Pandas DataFrame (or array).
The OneHotEncoder estimator is not new but has been upgraded to encode string columns. Before, it only encoded columns containing numeric categorical data.
Let’s see how these new additions work to handle string columns in a Pandas DataFrame.
Kaggle Housing Dataset
One of Kaggle’s beginning machine learning competitions is the Housing Prices: Advanced Regression Techniques. The goal is to predict housing prices given about 80 features. There is a mix of continuous and categorical columns. You can download the data from the website or use their command line tool (which is very nice).
Inspect the data
Let’s read in our DataFrame and output the first few rows.
>>> import pandas as pd
>>> import numpy as np >>> train = pd.read_csv(‘data/housing/train.csv’)
>>> train.head()
>>> train.shape
(1460, 81)
Remove the target variable from the training set
The target variable is SalePrice which we remove and assign as an array to its own variable. We will use it later when we do machine learning.
>>> y = train.pop('SalePrice').values
Encoding a single string column
To start off, let’s encode a single string column, HouseStyle , which has values for the exterior of the house. Let’s output the unique counts of each string value.
>>> vc = train['HouseStyle'].value_counts()
>>> vc 1Story 726
2Story 445
1.5Fin 154
SLvl 65
SFoyer 37
1.5Unf 14
2.5Unf 11
2.5Fin 8
Name: HouseStyle, dtype: int64
We have 8 unique values in this column.
Scikit-Learn Gotcha — Must have 2D data
Most Scikit-Learn estimators require that data be strictly 2-dimensional. If we select the column above as train['HouseStyle'] , technically, a Pandas Series is created which is a single dimension of data. We can force Pandas to create a one-column DataFrame, by passing a single-item list to the brackets like this:
>>> hs_train = train[['HouseStyle']].copy()
>>> hs_train.ndim
2
Master Machine Learning with Python
Master Machine Learning with Python is an extremely comprehensive guide that I have written to help you use scikit-learn to do machine learning.
Import, Instantiate, Fit — The three-step process for each estimator
The Scikit-Learn API is consistent for all estimators and uses a three-step process to fit (train) the data.
Import the estimator we want from the module it’s located in Instantiate the estimator, possibly changing its defaults Fit the estimator to the data. Possibly transform the data to its new space if need be.
Below, we import OneHotEncoder , instantiate it and ensure that we get a dense (and not sparse) array returned, and then encode our single column with the fit_transform method.
>>> from sklearn.preprocessing import OneHotEncoder
>>> ohe = OneHotEncoder(sparse=False)
>>> hs_train_transformed = ohe.fit_transform(hs_train)
>>> hs_train_transformed array([[0., 0., 0., ..., 1., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 0., ..., 1., 0., 0.],
...,
[0., 0., 0., ..., 1., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.]])
As expected, it has encoded each unique value as its own binary column.
>>> hs_train_transformed.shape (1460, 8)
If you are enjoying this article, consider purchasing the All Access Pass! which includes all my current and future material for one low price.
We have a NumPy array. Where are the column names?
Notice that our output is a NumPy array and not a Pandas DataFrame. Scikit-Learn was not originally built to be directly integrated with Pandas. All Pandas objects are converted to NumPy arrays internally and NumPy arrays are always returned after a transformation.
We can still get our column name from the OneHotEncoder object through its get_feature_names method.
>>> feature_names = ohe.get_feature_names()
>>> feature_names array(['x0_1.5Fin', 'x0_1.5Unf', 'x0_1Story', 'x0_2.5Fin',
'x0_2.5Unf', 'x0_2Story', 'x0_SFoyer', 'x0_SLvl'], dtype=object)
Verifying our first row of data is correct
It’s good to verify that our estimator is working properly. Let’s look at the first row of encoded data.
>>> row0 = hs_train_transformed[0]
>>> row0 array([0., 0., 0., 0., 0., 1., 0., 0.])
This encodes the 6th value in the array as 1. Let’s use boolean indexing to reveal the feature name.
>>> feature_names[row0 == 1] array(['x0_2Story'], dtype=object)
Now, let’s verify that the first value in our original DataFrame column is the same.
>>> hs_train.values[0] array(['2Story'], dtype=object)
Use inverse_transform to automate this
Just like most transformer objects, there is an inverse_transform method that will get you back your original data. Here we must wrap row0 in a list to make it a 2D array.
>>> ohe.inverse_transform([row0]) array([['2Story']], dtype=object)
We can verify all values by inverting the entire transformed array.
>>> hs_inv = ohe.inverse_transform(hs_train_transformed)
>>> hs_inv array([['2Story'],
['1Story'],
['2Story'],
...,
['2Story'],
['1Story'],
['1Story']], dtype=object) >>> np.array_equal(hs_inv, hs_train.values) True
Applying a transformation to the test set
Whatever transformation we do to our training set, we must apply to our test set. Let’s read in the test set and get the same column and apply our transformation.
>>> test = pd.read_csv('data/housing/test.csv')
>>> hs_test = test[['HouseStyle']].copy()
>>> hs_test_transformed = ohe.transform(hs_test)
>>> hs_test_transformed array([[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 0., ..., 1., 0., 0.],
...,
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 1., 0.],
[0., 0., 0., ..., 1., 0., 0.]])
We should again get 8 columns and we do.
>>> hs_test_transformed.shape
(1459, 8)
This example works nicely, but there are multiple cases where we will run into problems. Let’s examine them now.
Trouble area #1 — Categories unique to the test set
What happens if we have a home with a house style that is unique to just the test set? Say something like 3Story . Let's change the first value of the house styles and see what the default is from Scikit-Learn.
>>> hs_test = test[['HouseStyle']].copy()
>>> hs_test.iloc[0, 0] = '3Story'
>>> hs_test.head(3) HouseStyle
0 3Story
1 1Story
2 2Story >>> ohe.transform(hs_test) ValueError: Found unknown categories ['3Story'] in column 0 during transform
Error: Unknown Category
By default, our encoder will produce an error. This is likely what we want as we need to know if there are unique strings in the test set. If you do have this problem then there could be something much deeper that needs investigating. For now, we will ignore the problem and encode this row as all 0’s by setting the handle_unknown parameter to 'ignore' upon instantiation.
>>> ohe = OneHotEncoder(sparse=False, handle_unknown='ignore')
>>> ohe.fit(hs_train) >>> hs_test_transformed = ohe.transform(hs_test)
>>> hs_test_transformed array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 0., ..., 1., 0., 0.],
...,
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 1., 0.],
[0., 0., 0., ..., 1., 0., 0.]])
Let’s verify that the first row is all 0's.
>>> hs_test_transformed[0] array([0., 0., 0., 0., 0., 0., 0., 0.])
Trouble area #2 — Missing Values in test set
If you have missing values in your test set (NaN or None), then these will be ignored as long as handle_unknown is set to 'ignore'. Let’s put some missing values in the first couple elements of our test set.
>>> hs_test = test[['HouseStyle']].copy()
>>> hs_test.iloc[0, 0] = np.nan
>>> hs_test.iloc[1, 0] = None
>>> hs_test.head(4) HouseStyle
0 NaN
1 None
2 2Story
3 2Story >>> hs_test_transformed = ohe.transform(hs_test)
>>> hs_test_transformed[:4] array([[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0.]])
Trouble area #3 — Missing Values in training set
Missing values in the training set are more of an issue. As of now, the OneHotEncoder estimator cannot fit with missing values.
>>> hs_train = hs_train.copy()
>>> hs_train.iloc[0, 0] = np.nan
>>> ohe = OneHotEncoder(sparse=False, handle_unknown='ignore')
>>> ohe.fit_transform(hs_train) TypeError: '<' not supported between instances of 'str' and 'float'
It would be nice if there was an option to ignore them like what happens when transforming the test set above. As of now, this doesn’t exist and we must impute it.
Must impute missing values
For now, we must impute the missing values. The old Imputer from the preprocessing module got deprecated. A new module, impute , was formed in its place, with a new estimator SimpleImputer and a new strategy, 'constant'. By default, using this strategy will fill missing values with the string ‘missing_value’. We can choose what to set it with the fill_value parameter.
>>> hs_train = train[['HouseStyle']].copy()
>>> hs_train.iloc[0, 0] = np.nan >>> from sklearn.impute import SimpleImputer
>>> si = SimpleImputer(strategy='constant', fill_value='MISSING')
>>> hs_train_imputed = si.fit_transform(hs_train)
>>> hs_train_imputed array([['MISSING'],
['1Story'],
['2Story'],
...,
['2Story'],
['1Story'],
['1Story']], dtype=object)
From here we can encode as we did previously.
>>> hs_train_transformed = ohe.fit_transform(hs_train_imputed)
>>> hs_train_transformed array([[0., 0., 0., ..., 1., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.]])
Notice, that we now have an extra column and and an extra feature name.
>>> hs_train_transformed.shape (1460, 9) >>> ohe.get_feature_names() array(['x0_1.5Fin', 'x0_1.5Unf', 'x0_1Story', 'x0_2.5Fin',
'x0_2.5Unf', 'x0_2Story', 'x0_MISSING', 'x0_SFoyer',
'x0_SLvl'], dtype=object)
More on fit_transform
For all estimators, the fit_transform method will first call the fit method and then call the transform method. The fit method finds the key properties that will be used during the transformation. For instance, with the SimpleImputer , if the strategy were ‘mean’, then it would find the mean of each column during the fit method. It would store this mean for every column. When transform is called, it uses this stored mean of every column to fill in the missing values and returns the transformed array.
The OneHotEncoder works analogously. During the fit method, it finds all the unique values for each column and again stores this. When transform is called, it uses these stored unique values to produce the binary array.
Apply both transformations to the test set
We can manually apply each of the two steps above in order like this:
>>> hs_test = test[['HouseStyle']].copy()
>>> hs_test.iloc[0, 0] = 'unique value to test set'
>>> hs_test.iloc[1, 0] = np.nan >>> hs_test_imputed = si.transform(hs_test)
>>> hs_test_transformed = ohe.transform(hs_test_imputed)
>>> hs_test_transformed.shape (1459, 8) >>> ohe.get_feature_names() array(['x0_1.5Fin', 'x0_1.5Unf', 'x0_1Story', 'x0_2.5Fin',
'x0_2.5Unf', 'x0_2Story', 'x0_SFoyer', 'x0_SLvl'],
dtype=object)
Use a Pipeline instead
Scikit-Learn provides a Pipeline estimator that takes a list of transformations and applies them in succession. You can also run a machine learning model as the final estimator. Here we simply impute and encode.
>>> from sklearn.pipeline import Pipeline
Each step is a two-item tuple consisting of a string that labels the step and the instantiated estimator. The output of the previous step is the input to the next step.
>>> si_step = ('si', SimpleImputer(strategy='constant',
fill_value='MISSING'))
>>> ohe_step = ('ohe', OneHotEncoder(sparse=False,
handle_unknown='ignore'))
>>> steps = [si_step, ohe_step]
>>> pipe = Pipeline(steps) >>> hs_train = train[['HouseStyle']].copy()
>>> hs_train.iloc[0, 0] = np.nan
>>> hs_transformed = pipe.fit_transform(hs_train)
>>> hs_transformed.shape (1460, 9)
The test set is easily transformed through each step of the pipeline by simply passing it to the transform method.
>>> hs_test = test[['HouseStyle']].copy()
>>> hs_test_transformed = pipe.transform(hs_test)
>>> hs_test_transformed.shape (1459, 9)
Why just the transform method for the test set?
When transforming the test set, it's important to just call the transform method and not fit_transform . When we ran fit_transform on the training set, Scikit-Learn found all the necessary information it needed in order to transform any other dataset containing the same column names.
Transforming Multiple String Columns
Encoding multiple string columns is not a problem. Select the columns you want and then pass the new DataFrame through the same pipeline again.
>>> string_cols = ['RoofMatl', 'HouseStyle']
>>> string_train = train[string_cols]
>>> string_train.head(3) RoofMatl HouseStyle
0 CompShg 2Story
1 CompShg 1Story
2 CompShg 2Story >>> string_train_transformed = pipe.fit_transform(string_train)
>>> string_train_transformed.shape (1460, 16)
Get individual pieces of the pipeline
It is possible to retrieve each individual transformer within the pipeline through its name from the named_steps dictionary atribute. In this instance, we get the one-hot encoder so that we can output the feature names.
>>> ohe = pipe.named_steps['ohe']
>>> ohe.get_feature_names() array(['x0_ClyTile', 'x0_CompShg', 'x0_Membran', 'x0_Metal',
'x0_Roll', 'x0_Tar&Grv', 'x0_WdShake', 'x0_WdShngl',
'x1_1.5Fin', 'x1_1.5Unf', 'x1_1Story', 'x1_2.5Fin',
'x1_2.5Unf', 'x1_2Story', 'x1_SFoyer', 'x1_SLvl'],
dtype=object)
Use the new ColumnTransformer to choose columns
The brand new ColumnTransformer (part of the new compose module) allows you to choose which columns get which transformations. Categorical columns will almost always need separate transformations than continuous columns.
The ColumnTransformer is currently experimental, meaning that its functionality can change in the future.
The ColumnTransformer takes a list of three-item tuples. The first value in the tuple is a name that labels it, the second is an instantiated estimator, and the third is a list of columns you want to apply the transformation to. The tuple will look like this:
('name', SomeTransformer(parameters), columns)
The columns actually don’t have to be column names. Instead, you can use the integer indexes of the columns, a boolean array, or even a function (which accepts the entire DataFrame as the argument and must return a selection of columns).
You can also use NumPy arrays with the ColumnTransformer , but this tutorial is focused on the integration of Pandas so we will stick with just using DataFrames.
Master Python, Data Science and Machine Learning
Immerse yourself in my comprehensive path for mastering data science and machine learning with Python. Purchase the All Access Pass to get lifetime access to all current and future courses. Some of the courses it contains:
Exercise Python — A comprehensive introduction to Python (200+ pages, 100+ exercises)
— A comprehensive introduction to Python (200+ pages, 100+ exercises) Master Data Analysis with Python — The most comprehensive course available to learn pandas. (800+ pages and 300+ exercises)
— The most comprehensive course available to learn pandas. (800+ pages and 300+ exercises) Master Machine Learning with Python — A deep dive into doing machine learning with scikit-learn constantly updated to showcase the latest and greatest tools. (300+ pages)
Get the All Access Pass now!
Pass a Pipeline to the ColumnTransformer
We can even pass a pipeline of many transformations to the column transformer, which is what we do here because we have multiple transformations on our string columns.
Below, we reproduce the above imputation and encoding using the ColumnTransformer . Notice that the pipeline is the exact same as above, just with cat appended to each variable name. We will add a different pipeline for the numeric columns in an upcoming section.
>>> from sklearn.compose import ColumnTransformer >>> cat_si_step = ('si', SimpleImputer(strategy='constant',
fill_value='MISSING'))
>>> cat_ohe_step = ('ohe', OneHotEncoder(sparse=False,
handle_unknown='ignore')) >>> cat_steps = [cat_si_step, cat_ohe_step]
>>> cat_pipe = Pipeline(cat_steps)
>>> cat_cols = ['RoofMatl', 'HouseStyle']
>>> cat_transformers = [('cat', cat_pipe, cat_cols)]
>>> ct = ColumnTransformer(transformers=cat_transformers)
Pass the entire DataFrame to the ColumnTransformer
The ColumnTransformer instance selects the columns we want to use, so we simply pass the entire DataFrame to the fit_transform method. The desired columns will be selected for us.
>>> X_cat_transformed = ct.fit_transform(train)
>>> X_cat_transformed.shape (1460, 16)
We can now transform our test set in the same manner.
>>> X_cat_transformed_test = ct.transform(test)
>>> X_cat_transformed_test.shape (1459, 16)
Retrieving the feature names
We have to do a little digging to get the feature names. All the transformers are stored in the named_transformers_ dictionary attribute. We then use the names, the first item from the three-item tuple to select the specific transformer. Below, we select our transformer (there is only one here — a pipeline named ‘cat’).
>>> pl = ct.named_transformers_['cat']
Then from this pipeline we select the one-hot encoder object and finally get the feature names.
>>> ohe = pl.named_steps['ohe']
>>> ohe.get_feature_names() array(['x0_ClyTile', 'x0_CompShg', 'x0_Membran', 'x0_Metal',
'x0_Roll','x0_Tar&Grv', 'x0_WdShake', 'x0_WdShngl',
'x1_1.5Fin', 'x1_1.5Unf', 'x1_1Story', 'x1_2.5Fin',
'x1_2.5Unf', 'x1_2Story', 'x1_SFoyer', 'x1_SLvl'],
dtype=object)
Transforming the numeric columns
The numeric columns will need a different set of transformations. Instead of imputing missing values with a constant, the median or mean is often chosen. And instead of encoding the values, we usually standardize them by subtracting the mean of each column and dividing by the standard deviation. This helps many models like ridge regression produce a better fit.
Using all the numeric columns
Instead of selecting just one or two columns by hand as we did above with the string columns, we can select all of the numeric columns. We do this by first finding the data type of each column with the dtypes attribute and then testing whether the kind of each dtype is 'O'. The dtypes attribute returns a Series of NumPy dtype objects. Each of these has a kind attribute that is a single character. We can use this to find the numeric or string columns. Pandas stores all of its string columns as object which have a kind equal to ‘O’. See the NumPy docs for more on the kind attribute.
>>> train.dtypes.head() Id int64
MSSubClass int64
MSZoning object
LotFrontage float64
LotArea int64
dtype: object
Get the kinds, a one character string representing the dtype.
>>> kinds = np.array([dt.kind for dt in train.dtypes])
>>> kinds[:5] array(['i', 'i', 'O', 'f', 'i'], dtype='<U1')
Assume all numeric columns are non-object. We can also get the categorical columns in this manner.
>>> all_columns = train.columns.values
>>> is_num = kinds != 'O'
>>> num_cols = all_columns[is_num]
>>> num_cols[:5] array(['Id', 'MSSubClass', 'LotFrontage', 'LotArea', 'OverallQual'],
dtype=object) >>> cat_cols = all_columns[~is_num]
>>> cat_cols[:5] array(['MSZoning', 'Street', 'Alley', 'LotShape', 'LandContour'],
dtype=object)
Once we have our numeric column names, we can use the ColumnTransformer again.
>>> from sklearn.preprocessing import StandardScaler >>> num_si_step = ('si', SimpleImputer(strategy='median'))
>>> num_ss_step = ('ss', StandardScaler())
>>> num_steps = [num_si_step, num_ss_step] >>> num_pipe = Pipeline(num_steps)
>>> num_transformers = [('num', num_pipe, num_cols)] >>> ct = ColumnTransformer(transformers=num_transformers)
>>> X_num_transformed = ct.fit_transform(train)
>>> X_num_transformed.shape (1460, 37)
Combining both categorical and numerical column transformations
We can apply separate transformations to each section of our DataFrame with ColumnTransformer . We will use every single column in this example.
We then create a separate pipeline for both categorical and numerical columns and then use the ColumnTransformer to independently transform them. These two transformations happen in parallel. The results of each are then concatenated together.
>>> transformers = [('cat', cat_pipe, cat_cols),
('num', num_pipe, num_cols)]
>>> ct = ColumnTransformer(transformers=transformers)
>>> X = ct.fit_transform(train)
>>> X.shape (1460, 305)
Machine Learning
The whole point of this exercise is to set up our data so that we can do machine learning. We can create one final pipeline and add a machine learning model as the final estimator. The first step in the pipeline will be the entire transformation we just did above. We assigned y way back at the top of the tutorial as the SalePrice . Here, we will just use the fit method instead of fit_transform since our final step is a machine learning model and does no transformations.
>>> from sklearn.linear_model import Ridge >>> ml_pipe = Pipeline([('transform', ct), ('ridge', Ridge())])
>>> ml_pipe.fit(train, y)
We can evaluate our model with the score method, which returns the R-squared value:
>>> ml_pipe.score(train, y) 0.92205
Cross-Validation
Of course, scoring ourselves on the training set is not useful. Let’s do some K-fold cross-validation to get an idea of how well we would do with unseen data. We set a random state so that the splits will be the same throughout the rest of the tutorial.
>>> from sklearn.model_selection import KFold, cross_val_score
>>> kf = KFold(n_splits=5, shuffle=True, random_state=123)
>>> cross_val_score(ml_pipe, train, y, cv=kf).mean() 0.813
Selecting parameters when Grid Searching
Grid searching in Scikit-Learn requires us to pass a dictionary of parameter names mapped to possible values. When using a pipeline, we must use the name of the step followed by a double-underscore and then the parameter name. If there are multiple layers to your pipeline, as we have here, we must continue using double-underscores to move up a level until we reach the estimator whose parameters we would like to optimize.
>>> from sklearn.model_selection import GridSearchCV >>> param_grid = {
'transform__num__si__strategy': ['mean', 'median'],
'ridge__alpha': [.001, 0.1, 1.0, 5, 10, 50, 100, 1000],
}
>>> gs = GridSearchCV(ml_pipe, param_grid, cv=kf)
>>> gs.fit(train, y)
>>> gs.best_params_ {'ridge__alpha': 10, 'transform__num__si__strategy': 'median'} >>> gs.best_score_ 0.819
Getting all the grid search results in a Pandas DataFrame
All the results of the grid search are stored in the cv_results_ attribute. This is a dictionary that can get converted to a Pandas DataFrame for a nice display and it provides a structure that is much easier to manually scan.
>>> pd.DataFrame(gs.cv_results_)
Lots of data from each combination of the parameter grid
Building a custom transformer that does all the basics
There are a few limitations to the above workflow. For instance, it would be nice if the OneHotEncoder gave you the option of ignoring missing values during the fit method. It could simply encode missing values as a row of all zeros. Currently, it forces us to fill the missing values with some string and then encodes this string as a separate column.
Low-frequency strings
Also, string columns that appear only a few times during the training set may not be reliable predictors in the test set. We may want to encode those as if they were missing as well.
Writing your own estimator class
Scikit-Learn provides some help within its documentation on writing your own estimator class. The BaseEstimator class found within the base module provides the get_params and set_params methods for you. The set_params method is necessary when doing a grid search. You can write your own or inherit from the BaseEstimator . There is also a TransformerMixin but it just writes the fit_transform method for you. We do this in one line of code below, so we don’t inherit from it.
The following class BasicTransformer does the following:
Fills in missing values with either the mean or median for numeric columns
Standardizes all numeric columns
Uses one hot encoding for string columns
Does not fill in missing values for categorical columns. Instead, it encodes them as a 0's
Ignores unique values in string columns in the test set
Allows you to choose a threshold for the number of occurrences a value must have in a string column. Strings below this threshold will be encoded as all 0's
It only works with DataFrames and is just experimental and not tested so it will break for some datasets
It is called ‘basic’ because, these are probably the most basic transformations that typically get done to many datasets.
from sklearn.base import BaseEstimator class BasicTransformer(BaseEstimator):
def __init__(self, cat_threshold=None, num_strategy='median',
return_df=False):
# store parameters as public attributes
self.cat_threshold = cat_threshold
if num_strategy not in ['mean', 'median']:
raise ValueError('num_strategy must be either "mean" or
"median"')
self.num_strategy = num_strategy
self.return_df = return_df
def fit(self, X, y=None):
# Assumes X is a DataFrame
self._columns = X.columns.values
# Split data into categorical and numeric
self._dtypes = X.dtypes.values
self._kinds = np.array([dt.kind for dt in X.dtypes])
self._column_dtypes = {}
is_cat = self._kinds == 'O'
self._column_dtypes['cat'] = self._columns[is_cat]
self._column_dtypes['num'] = self._columns[~is_cat]
self._feature_names = self._column_dtypes['num']
# Create a dictionary mapping categorical column to unique
# values above threshold
self._cat_cols = {}
for col in self._column_dtypes['cat']:
vc = X[col].value_counts()
if self.cat_threshold is not None:
vc = vc[vc > self.cat_threshold]
vals = vc.index.values
self._cat_cols[col] = vals
self._feature_names = np.append(self._feature_names, col
+ '_' + vals)
# get total number of new categorical columns
self._total_cat_cols = sum([len(v) for col, v in
self._cat_cols.items()])
# get mean or median
num_cols = self._column_dtypes['num']
self._num_fill = X[num_cols].agg(self.num_strategy)
return self
def transform(self, X):
# check that we have a DataFrame with same column names as
# the one we fit
if set(self._columns) != set(X.columns):
raise ValueError('Passed DataFrame has different columns
than fit DataFrame')
elif len(self._columns) != len(X.columns):
raise ValueError('Passed DataFrame has different number
of columns than fit DataFrame')
# fill missing values
num_cols = self._column_dtypes['num']
X_num = X[num_cols].fillna(self._num_fill)
# Standardize numerics
std = X_num.std()
X_num = (X_num - X_num.mean()) / std
zero_std = np.where(std == 0)[0]
# If there is 0 standard deviation, then all values are the
# same. Set them to 0.
if len(zero_std) > 0:
X_num.iloc[:, zero_std] = 0
X_num = X_num.values
# create separate array for new encoded categoricals
X_cat = np.empty((len(X), self._total_cat_cols),
dtype='int')
i = 0
for col in self._column_dtypes['cat']:
vals = self._cat_cols[col]
for val in vals:
X_cat[:, i] = X[col] == val
i += 1
# concatenate transformed numeric and categorical arrays
data = np.column_stack((X_num, X_cat))
# return either a DataFrame or an array
if self.return_df:
return pd.DataFrame(data=data,
columns=self._feature_names)
else:
return data
def fit_transform(self, X, y=None):
return self.fit(X).transform(X)
def get_feature_names():
return self._feature_names
Using our BasicTransformer
Our BasicTransformer estimator should be able to be used just like any other scikit-learn estimator. We can instantiate it and then transform our data.
>>> bt = BasicTransformer(cat_threshold=3, return_df=True)
>>> train_transformed = bt.fit_transform(train)
>>> train_transformed.head(3)
Columns of the DataFrame where the numerical and categorical columns meet
Using our transformer in a pipeline
Our transformer can be part of a pipeline.
>>> basic_pipe = Pipeline([('bt', bt), ('ridge', Ridge())])
>>> basic_pipe.fit(train, y)
>>> basic_pipe.score(train, y) 0.904
We can also cross-validate with it as well and get a similar score as we did with our scikit-learn column transformer pipeline from above.
>>> cross_val_score(basic_pipe, train, y, cv=kf).mean() 0.816
We can use it as part of a grid search as well. It turns out that not including low-count strings did not help this particular model, though it stands to reason it could in other models. The best score did improve a bit, perhaps due to using a slightly different encoding scheme.
>>> param_grid = {
'bt__cat_threshold': [0, 1, 2, 3, 5],
'ridge__alpha': [.1, 1, 10, 100]
} >>> gs = GridSearchCV(p, param_grid, cv=kf)
>>> gs.fit(train, y)
>>> gs.best_params_ {'bt__cat_threshold': 0, 'ridge__alpha': 10} >>> gs.best_score_
0.830
Binning and encoding numeric columns with the new KBinsDiscretizer
There are a few columns that contain years. It makes more sense to bin the values in these columns and treat them as categories. Scikit-Learn introduced the new estimator KBinsDiscretizer to do just this. It not only bins the values, but it encodes them as well. Before you could have done this manually with Pandas cut or qcut functions.
Let’s see how it works with just the YearBuilt column.
>>> from sklearn.preprocessing import KBinsDiscretizer
>>> kbd = KBinsDiscretizer(encode='onehot-dense')
>>> year_built_transformed = kbd.fit_transform(train[['YearBuilt']])
>>> year_built_transformed array([[0., 0., 0., 0., 1.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0.],
...,
[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.]])
By default, each bin contains (approximately) an equal number of observations. Let’s sum up each column to verify this.
>>> year_built_transformed.sum(axis=0) array([292., 274., 307., 266., 321.])
This is the ‘quantile’ strategy. You can choose ‘uniform’ to make the bin edges equally spaced or ‘kmeans’ which uses K-means clustering to find the bin edges.
>>> kbd.bin_edges_ array([array([1872. , 1947.8, 1965. , 1984. , 2003. , 2010. ])],
dtype=object)
Processing all the year columns separately with ColumnTransformer
We now have another subset of columns that need separate processing and we can do this with the ColumnTransformer . The following code adds one more step to our previous transformation. We also drop the Id column which was just identifying each row.
>>> year_cols = ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt',
'YrSold']
>>> not_year = ~np.isin(num_cols, year_cols + ['Id'])
>>> num_cols2 = num_cols[not_year] >>> year_si_step = ('si', SimpleImputer(strategy='median'))
>>> year_kbd_step = ('kbd', KBinsDiscretizer(n_bins=5,
encode='onehot-dense'))
>>> year_steps = [year_si_step, year_kbd_step]
>>> year_pipe = Pipeline(year_steps) >>> transformers = [('cat', cat_pipe, cat_cols),
('num', num_pipe, num_cols2),
('year', year_pipe, year_cols)] >>> ct = ColumnTransformer(transformers=transformers)
>>> X = ct.fit_transform(train)
>>> X.shape (1460, 320)
We cross-validate and score and see that all this work yielded us no improvements.
>>> ml_pipe = Pipeline([('transform', ct), ('ridge', Ridge())])
>>> cross_val_score(ml_pipe, train, y, cv=kf).mean()
0.813
Using a different number of bins for each column might improve our results. Still, the KBinsDiscretizer makes it easy to bin numeric variables.
More goodies in Scikit-Learn 0.20
There are more new features that come with the upcoming release. Check the What’s New section of the docs for more. There are a ton of changes.
Conclusion
This article introduced a new workflow that will be available to Scikit-Learn users who rely on Pandas for the initial data exploration and preparation. A much smoother and feature-rich process for taking a Pandas DataFrame and transforming it so that it is ready for machine learning is now done through the new and improved estimators ColumnTransformer , SimpleImputer , OneHotEncoder , and KBinsDiscretizer .
I am very excited to see this new upgrade and am going to be integrating these new workflows immediately into my projects and teaching materials.
Master Python, Data Science and Machine Learning
Immerse yourself in my comprehensive path for mastering data science and machine learning with Python. Purchase the All Access Pass to get lifetime access to all current and future courses. Some of the courses it contains:
Exercise Python — A comprehensive introduction to Python (200+ pages, 100+ exercises)
— A comprehensive introduction to Python (200+ pages, 100+ exercises) Master Data Analysis with Python — The most comprehensive course available to learn pandas. (800+ pages and 300+ exercises)
— The most comprehensive course available to learn pandas. (800+ pages and 300+ exercises) Master Machine Learning with Python — A deep dive into doing machine learning with scikit-learn constantly updated to showcase the latest and greatest tools. (300+ pages)
Get the All Access Pass now! | https://medium.com/dunder-data/from-pandas-to-scikit-learn-a-new-exciting-workflow-e88e2271ef62 | ['Ted Petrou'] | 2020-11-26 00:16:52.997000+00:00 | ['Machine Learning', 'Pandas', 'Scikit Learn', 'Python', 'Data Science'] |
How to build trust on your team in a few minutes a day | As a leader, it can be easy to miss the fact that your team is struggling to build trust with one another. There can be a lot going well: high quality work, goals met, etc. But you may start to notice that your team just doesn’t feel like a team.
This lack of trust can manifest in different ways like:
A lack of vulnerability : The team may not share things about their personal lives or not go into much detail. They may not talk about things going wrong with their work or with the team.
: The team may not share things about their personal lives or not go into much detail. They may not talk about things going wrong with their work or with the team. The absence of conflict : The team may not feel safe to disagree with one another or with the team leader. They may only share disagreements 1:1 or in private.
: The team may not feel safe to disagree with one another or with the team leader. They may only share disagreements 1:1 or in private. For more symptoms, check out Five Dysfunctions of a Team.
And for teammates that don’t yet trust each other, it can be a difficult, unhappy experience. I once worked on a team that had all the strikes against us: we had a leader who kept us in the dark about what was happening at the company, we had unclear and shifting expectations for our team’s work, and there was a complete lack of resources to actually get the work done.
And as a result, our working relationships were almost non-existent. That meant we didn’t feel safe asking one another for help, we couldn’t open up about how we were feeling, and we ended up competing with each other instead of collaborating.
We didn’t trust each other, and our work and happiness suffered.
Critically, this situation wasn’t a failure of the team but rather a failure of leadership. It’s a leader’s most important responsibility to build trust on a team. Without trust, a team can’t collaborate and can’t achieve their goals.
The importance of trust on teams
The reason trust matters it that trust is closely linked to team performance, especially when the team is engaged in interdependent tasks common to cross-functional teams.
Trust is even more important for teams that are coming up with novel solutions or innovative products because trust creates a collaborative environment that enhances a team’s ability to be creative.
Going beyond just performance, trust is a key component in friendship — and friendship is strong predictor of engagement at work. Not to mention, the need for connection with other people is almost on par with our need for food and shelter.
And while building trust on a team does require intention, it doesn’t have to mean huge behavioral shifts or expensive team off-sites. Instead, you can do small activities with your team each day that build trust over time.
Building trust on a daily basis
As a leader, building trust on a team is about setting the right conditions for teammates to connect and build relationships with one another.
Team off-sites and activities are one way to do this, but they often take place only once a quarter or even once a year. And that makes it harder for the team to maintain the effects. Annual company retreats are a great example of this — the team often comes back feeling motivated, energized, and highly connected to one another. But within a few weeks, most of the team will be back to their normal day-to-day and will have lost touch with that sense of connection.
At my company, Range, this is why we think about how we can integrate trust-building activities into our daily workflow.
Small, daily trust-building is powerful because it’s easy for a team to do, which means we can consistently make time for it and prioritize it. And, it’s connected to the work, which means that it doesn’t feel like a distraction, but instead directly impacts our ability to work together.
How you can build trust on your team
Daily trust building doesn’t have to be heavyweight, and at Range, it isn’t. We do two things every day:
Answer a team building question. A team building question can cover information about yourself, which helps to humanize you to one another. For example, “What was you favorite class in high school?” Team questions can also focus on how you work together, which helps start conversations about how you work. For example, “How do you prefer to communicate?” Answering a question takes just few minutes and can even be done asynchronously. If you’re short on ideas, Plucky created cards to help. Daily emotional check in. As part of using Range’s product (of course we dog food our own product 🐶), we choose an emoji each to day that represents our mood. This sounds simple, but we’ve found it lowers the weight of talking about emotions and how we’re doing. It also gives the rest of the team permission to ask how someone is doing or to offer help. You can also accomplish this by adding a check in round to your morning standup or sharing an emoji over Slack.
I hope these tips help you to build trust on your team starting tomorrow. And we’d love to hear from you — what daily activities do you do on your team to build trust? If you’re curious to learn more about a tool that makes trust-building easy, check out what we’re building at Range. | https://medium.com/range/how-you-can-build-trust-on-your-team-in-a-few-minutes-a-day-8b7f09055ff0 | ['Jennifer Dennard'] | 2018-08-21 14:01:02.001000+00:00 | ['Trust', 'Leadership', 'Teamwork'] |
A Comprehensive Hands-on Guide to Transfer Learning with Real-World Applications in Deep Learning | The three transfer categories discussed in the previous section outline different settings where transfer learning can be applied, and studied in detail. To answer the question of what to transfer across these categories, some of the following approaches can be applied:
Instance transfer: Reusing knowledge from the source domain in the target task is usually an ideal scenario. In most cases, the source domain data cannot be reused directly. Rather, there are certain instances from the source domain that can be reused along with target data to improve results. In the case of inductive transfer, modifications of AdaBoost, such as TrAdaBoost by Dai and their co-authors, help utilize training instances from the source domain for improvements in the target task.
Reusing knowledge from the source domain to the target task is usually an ideal scenario. In most cases, the source domain data cannot be reused directly. Rather, there are certain instances from the source domain that can be reused along with target data to improve results. In case of inductive transfer, modifications such as AdaBoost by Dai and their co-authors help utilize training instances from the source domain for improvements in the target task. Feature-representation transfer: This approach aims to minimize domain divergence and reduce error rates by identifying good feature representations that can be utilized from the source to target domains. Depending upon the availability of labeled data, supervised or unsupervised methods may be applied for feature-representation-based transfers.
This approach aims to minimize domain divergence and reduce error rates by identifying good feature representations that can be utilized from the source to target domains. Depending upon the availability of labeled data, supervised or unsupervised methods may be applied for feature-representation-based transfers. Parameter transfer: This approach works on the assumption that the models for related tasks share some parameters or prior distribution of hyperparameters. Unlike multitask learning, where both the source and target tasks are learned simultaneously, for transfer learning, we may apply additional weightage to the loss of the target domain to improve overall performance.
This approach works on the assumption that the models for related tasks share some parameters or prior distribution of hyperparameters. Unlike multitask learning, where both the source and target tasks are learned simultaneously, for transfer learning, we may apply additional weightage to the loss of the target domain to improve overall performance. Relational-knowledge transfer: Unlike the preceding three approaches, the relational-knowledge transfer attempts to handle non-IID data, such as data that is not independent and identically distributed. In other words, data, where each data point has a relationship with other data points; for instance, social network data utilizes relational-knowledge-transfer techniques.
The following table clearly summarizes the relationship between different transfer learning strategies and what to transfer.
Transfer Learning Strategies and Types of Transferable Components
Let’s now utilize this understanding and learn how transfer learning is applied in the context of deep learning.
Transfer Learning for Deep Learning
The strategies we discussed in the previous section are general approaches that can be applied to machine learning techniques. This brings us to the question: can transfer learning really be applied in the context of deep learning?
Deep learning models are representative of what is also known as inductive learning. The objective for inductive-learning algorithms is to infer a mapping from a set of training examples. For instance, in cases of classification, the model learns mapping between input features and class labels. In order for such a learner to generalize well on unseen data, its algorithm works with a set of assumptions related to the distribution of the training data. These sets of assumptions are known as inductive bias. The inductive bias or assumptions can be characterized by multiple factors, such as the hypothesis space it restricts to and the search process through the hypothesis space. Thus, these biases impact how and what is learned by the model on the given task and domain.
Ideas for deep transfer learning
Inductive transfer techniques utilize the inductive biases of the source task to assist the target task. This can be done in different ways, such as by adjusting the inductive bias of the target task by limiting the model space, narrowing down the hypothesis space, or making adjustments to the search process itself with the help of knowledge from the source task. This process is depicted visually in the following figure.
Inductive transfer (Source: Transfer learning, Lisa Torrey and Jude Shavlik)
Apart from inductive transfer, inductive-learning algorithms also utilize Bayesian and Hierarchical transfer techniques to assist with improvements in the learning and performance of the target task.
Deep Transfer Learning Strategies
Deep learning has made considerable progress in recent years. This has enabled us to tackle complex problems and yield amazing results. However, the training time and the amount of data required for such deep learning systems are much more than that of traditional ML systems. There are various deep learning networks with state-of-the-art performance (sometimes as good or even better than human performance) that have been developed and tested across domains such as computer vision and natural language processing (NLP). In most cases, teams/people share the details of these networks for others to use. These pre-trained networks/models form the basis of transfer learning in the context of deep learning, or what I like to call ‘deep transfer learning’. Let’s look at the two most popular strategies for deep transfer learning.
Off-the-shelf Pre-trained Models as Feature Extractors
Deep learning systems and models are layered architectures that learn different features at different layers (hierarchical representations of layered features). These layers are then finally connected to a last layer (usually a fully connected layer, in the case of supervised learning) to get the final output. This layered architecture allows us to utilize a pre-trained network (such as Inception V3 or VGG) without its final layer as a fixed feature extractor for other tasks.
Transfer Learning with Pre-trained Deep Learning Models as Feature Extractors
The key idea here is to just leverage the pre-trained model’s weighted layers to extract features but not to update the weights of the model’s layers during training with new data for the new task.
For instance, if we utilize AlexNet without its final classification layer, it will help us transform images from a new domain task into a 4096-dimensional vector based on its hidden states, thus enabling us to extract features from a new domain task, utilizing the knowledge from a source-domain task. This is one of the most widely utilized methods of performing transfer learning using deep neural networks.
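To make this concrete, here is a minimal Keras sketch of the feature-extractor approach (using VGG-16, which ships with keras, rather than AlexNet). The input size, the classifier layers stacked on top, and the optimizer settings are illustrative assumptions rather than prescriptions.

```python
# Minimal sketch: using a pre-trained VGG-16 (without its top layers) as a
# fixed feature extractor for a new binary classification task.
# Input shape, dense layer sizes and optimizer are illustrative choices.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

# Load VGG-16 trained on ImageNet, dropping its final classification layers
base_model = VGG16(weights='imagenet', include_top=False,
                   input_shape=(224, 224, 3))

# Freeze all convolutional layers so their weights are not updated
for layer in base_model.layers:
    layer.trainable = False

# Stack a new, trainable classifier on top of the frozen feature extractor
model = models.Sequential([
    base_model,
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.3),
    layers.Dense(1, activation='sigmoid')  # e.g. a cats vs. dogs classifier
])

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-4),
              metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)
```

Because only the small classifier on top is trained while the convolutional base stays frozen, this typically needs far less data and compute than training a deep network from scratch.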
Now a question might arise, how well do these pre-trained off-the-shelf features really work in practice with different tasks?
It definitely seems to work really well in real-world tasks, and the following figure should make things clear with regard to their performance on different computer vision-based tasks!
Performance of off-the-shelf pre-trained models vs. specialized task-focused deep learning models
Based on the red and pink bars in the above figure, you can clearly see that the features from the pre-trained models consistently out-perform very specialized task-focused deep learning models.
Fine-tuning Off-the-shelf Pre-trained Models
This is a more involved technique, where we do not just replace the final layer (for classification/regression), but we also selectively retrain some of the previous layers. Deep neural networks are highly configurable architectures with various hyperparameters. As discussed earlier, the initial layers have been seen to capture generic features, while the later ones focus more on the specific task at hand. An example is depicted in the following figure on a face-recognition problem, where initial lower layers of the network learn very generic features and the higher layers learn very task-specific features.
Using this insight, we may freeze (fix weights) certain layers while retraining, or fine-tune the rest of them to suit our needs. In this case, we utilize the knowledge in terms of the overall architecture of the network and use its states as the starting point for our retraining step. This, in turn, helps us achieve better performance with less training time.
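As a rough illustration, the following sketch continues from the previous feature-extractor snippet and unfreezes only the last convolutional block of VGG-16. The specific layer name and the reduced learning rate are typical choices for this model, not requirements.

```python
# Minimal sketch: selectively fine-tuning the last convolutional block of
# VGG-16 while keeping the earlier (more generic) layers frozen.
# Assumes `base_model` and `model` were built as in the previous snippet.
from tensorflow.keras import optimizers

base_model.trainable = True
set_trainable = False
for layer in base_model.layers:
    # Unfreeze everything from the last convolutional block onwards
    if layer.name == 'block5_conv1':
        set_trainable = True
    layer.trainable = set_trainable

# Re-compile with a much smaller learning rate so the pre-trained weights
# are only gently adjusted towards the new task
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-5),
              metrics=['accuracy'])
# model.fit(...) as before, typically for a few additional epochs
```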
Freezing or Fine-tuning?
This brings us to the question, should we freeze layers in the network to use them as feature extractors or should we also fine-tune layers in the process?
The answer typically depends on how much labeled data you have for the target task and how similar that task is to the source task: with limited data or a very similar task, freezing the network and using it as a feature extractor usually works well, while larger datasets or more dissimilar tasks tend to benefit from fine-tuning some of the later layers. This should give us a good perspective on what each of these strategies is and when it should be used!
Pre-trained Models
One of the fundamental requirements for transfer learning is the presence of models that perform well on source tasks. Luckily, the deep learning world believes in sharing. Many of the state-of-the art deep learning architectures have been openly shared by their respective teams. These span across different domains, such as computer vision and NLP, the two most popular domains for deep learning applications. Pre-trained models are usually shared in the form of the millions of parameters/weights the model achieved while being trained to a stable state. Pre-trained models are available for everyone to use through different means. The famous deep learning Python library, keras, provides an interface to download some popular models. You can also access pre-trained models from the web since most of them have been open-sourced.
For computer vision, you can leverage some popular pre-trained models, such as VGG-16 and Inception V3 (both of which we use later in this article).
For natural language processing tasks, things become more difficult due to the varied nature of NLP tasks. You can leverage word embedding models, including Word2vec and FastText.
But wait, that’s not all! Recently, there have been some excellent advancements towards transfer learning for NLP. Most notably, newer models like the Universal Sentence Encoder and BERT.
They definitely hold a lot of promise and I’m sure they will be widely adopted pretty soon for real-world applications.
Types of Deep Transfer Learning
The literature on transfer learning has gone through a lot of iterations, and as mentioned at the start of this chapter, the terms associated with it have been used loosely and often interchangeably. Hence, it is sometimes confusing to differentiate between transfer learning, domain adaptation, and multi-task learning. Rest assured, these are all related and try to solve similar problems. In general, you should always think of transfer learning as a general concept or principle, where we will try to solve a target task using source task-domain knowledge.
Domain Adaptation
Domain adaptation is usually referred to in scenarios where the marginal probabilities between the source and target domains are different, such that P(Xₛ) ≠ P(Xₜ). There is an inherent shift or drift in the data distribution of the source and target domains that requires tweaks to transfer the learning. For instance, a corpus of movie reviews labeled as positive or negative would be different from a corpus of product-review sentiments. A classifier trained on movie-review sentiment would see a different distribution if utilized to classify product reviews. Thus, domain adaptation techniques are utilized in transfer learning in these scenarios.
Domain Confusion
We learned different transfer learning strategies and even discussed the three questions of what, when, and how to transfer knowledge from the source to the target. In particular, we discussed how feature-representation transfer can be useful. It is worth re-iterating that different layers in a deep learning network capture different sets of features. We can utilize this fact to learn domain-invariant features and improve their transferability across domains. Instead of allowing the model to learn any representation, we nudge the representations of both domains to be as similar as possible. This can be achieved by applying certain pre-processing steps directly to the representations themselves. Some of these have been discussed by Baochen Sun, Jiashi Feng, and Kate Saenko in their paper ‘Return of Frustratingly Easy Domain Adaptation’. This nudge toward the similarity of representation has also been presented by Ganin et al. in their paper, ‘Domain-Adversarial Training of Neural Networks’. The basic idea behind this technique is to add another objective to the source model to encourage similarity by confusing the domain itself, hence domain confusion.
Multitask Learning
Multitask learning is a slightly different flavor of the transfer learning world. In the case of multitask learning, several tasks are learned simultaneously without distinction between the source and targets. In this case, the learner receives information about multiple tasks at once, as compared to transfer learning, where the learner initially has no idea about the target task. This is depicted in the following figure.
Multitask learning: Learner receives information from all tasks simultaneously
One-shot Learning
Deep learning systems are data-hungry by nature, such that they need many training examples to learn the weights. This is one of the limiting aspects of deep neural networks, though such is not the case with human learning. For instance, once a child is shown what an apple looks like, they can easily identify a different variety of apple (with one or a few training examples); this is not the case with ML and deep learning algorithms. One-shot learning is a variant of transfer learning, where we try to infer the required output based on just one or a few training examples. This is essentially helpful in real-world scenarios where it is not possible to have labeled data for every possible class (if it is a classification task), and in scenarios where new classes can be added often. The landmark paper by Fei-Fei and their co-authors, ‘One Shot Learning of Object Categories’, is supposedly what coined the term one-shot learning and the research in this sub-field. This paper presented a variation on a Bayesian framework for representation learning for object categorization. This approach has since been improved upon, and applied using deep learning systems.
Zero-shot Learning
Zero-shot learning is another extreme variant of transfer learning, which relies on no labeled examples to learn a task. This might sound unbelievable, especially when learning using examples is what most supervised learning algorithms are about. Zero-data learning, or zero-shot learning, methods make clever adjustments during the training stage itself to exploit additional information to understand unseen data. In their book on deep learning, Goodfellow and their co-authors present zero-shot learning as a scenario where three variables are learned: the traditional input variable, x, the traditional output variable, y, and the additional random variable that describes the task, T. The model is thus trained to learn the conditional probability distribution P(y | x, T). Zero-shot learning comes in handy in scenarios such as machine translation, where we may not even have labels in the target language.
Applications of Transfer Learning
Deep learning is definitely one of the specific categories of algorithms that has been utilized to reap the benefits of transfer learning very successfully. The following are a few examples:
Transfer learning for NLP: Textual data presents all sorts of challenges when it comes to ML and deep learning. These are usually transformed or vectorized using different techniques. Embeddings, such as Word2vec and FastText, have been prepared using different training datasets. These are utilized in different tasks, such as sentiment analysis and document classification, by transferring the knowledge from the source tasks. Besides this, newer models like the Universal Sentence Encoder and BERT definitely present a myriad of possibilities for the future.
Transfer learning for Audio/Speech: Similar to domains like NLP and Computer Vision, deep learning has been successfully used for tasks based on audio data. For instance, Automatic Speech Recognition (ASR) models developed for English have been successfully used to improve speech recognition performance for other languages, such as German. Also, automated-speaker identification is another example where transfer learning has greatly helped.
Transfer learning for Computer Vision: Deep learning has been quite successfully utilized for various computer vision tasks, such as object recognition and identification, using different CNN architectures. In their paper, How transferable are features in deep neural networks, Yosinski and their co-authors (https://arxiv.org/abs/1411.1792) present their findings on how the lower layers act as conventional computer-vision feature extractors, such as edge detectors, while the final layers work toward task-specific features.
Thus, these findings have helped in utilizing existing state-of-the-art models, such as VGG, AlexNet, and Inceptions, for target tasks, such as style transfer and face detection, that were different from what these models were trained for initially. Let’s explore some real-world case studies now and build some deep transfer learning models!
Case Study 1: Image Classification with a Data Availability Constraint
In this simple case study, we will be working on an image categorization problem with the constraint of having a very small number of training samples per category. The dataset for our problem is available on Kaggle and is one of the most popular computer vision based datasets out there.
Main Objective
The dataset that we will be using, comes from the very popular Dogs vs. Cats Challenge, where our primary objective is to build a deep learning model that can successfully recognize and categorize images into either a cat or a dog.
Source: becominghuman.ai
In terms of ML, this is a binary classification problem based on images. Before getting started, I would like to thank Francois Chollet, not only for creating the amazing deep learning framework keras, but also for talking about the real-world problems where transfer learning is effective in his book, ‘Deep Learning with Python’. I have taken that as an inspiration to portray the true power of transfer learning in this chapter, and all results are based on building and running each model in my own GPU-based cloud setup (AWS p2.x).
Building Datasets
To start, download the train.zip file from the dataset page and store it in your local system. Once downloaded, unzip it into a folder. This folder will contain 25,000 images of dogs and cats; that is, 12,500 images per category. While we can use all 25,000 images and build some nice models on them, if you remember, our problem objective includes the added constraint of having a small number of images per category. Let’s build our own dataset for this purpose.
import glob
import numpy as np
import os
import shutil

np.random.seed(42)
Let’s now load up all the images in our original training data folder as follows:
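Loading the file names boils down to something like the following sketch (it assumes the Kaggle files follow the cat.*.jpg / dog.*.jpg naming pattern and sit inside a train folder):

files = glob.glob('train/*')

# separate the file names per category based on the file-name prefix
cat_files = [fn for fn in files if 'cat' in os.path.basename(fn)]
dog_files = [fn for fn in files if 'dog' in os.path.basename(fn)]
len(cat_files), len(dog_files)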
(12500, 12500)
We can verify with the preceding output that we have 12,500 images for each category. Let’s now build our smaller dataset, so that we have 3,000 images for training, 1,000 images for validation, and 1,000 images for our test dataset (with equal representation for the two animal categories).
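A simple way to carve out these subsets is to randomly sample file names per category, roughly along these lines (the variable names are my own):

cat_train = np.random.choice(cat_files, size=1500, replace=False)
dog_train = np.random.choice(dog_files, size=1500, replace=False)
cat_files = list(set(cat_files) - set(cat_train))
dog_files = list(set(dog_files) - set(dog_train))

cat_val = np.random.choice(cat_files, size=500, replace=False)
dog_val = np.random.choice(dog_files, size=500, replace=False)
cat_files = list(set(cat_files) - set(cat_val))
dog_files = list(set(dog_files) - set(dog_val))

cat_test = np.random.choice(cat_files, size=500, replace=False)
dog_test = np.random.choice(dog_files, size=500, replace=False)

print('Cat datasets:', cat_train.shape, cat_val.shape, cat_test.shape)
print('Dog datasets:', dog_train.shape, dog_val.shape, dog_test.shape)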
Cat datasets: (1500,) (500,) (500,)
Dog datasets: (1500,) (500,) (500,)
Now that our datasets have been created, let’s write them out to our disk in separate folders, so that we can come back to them anytime in the future without worrying if they are present in our main memory.
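One possible sketch for this step, assuming the folder names training_data, validation_data and test_data:

train_dir = 'training_data'
val_dir = 'validation_data'
test_dir = 'test_data'

train_files = np.concatenate([cat_train, dog_train])
validate_files = np.concatenate([cat_val, dog_val])
test_files = np.concatenate([cat_test, dog_test])

for folder, file_list in [(train_dir, train_files), (val_dir, validate_files), (test_dir, test_files)]:
    if not os.path.isdir(folder):
        os.mkdir(folder)
    for fn in file_list:
        shutil.copy(fn, folder)   # copy each image into its dataset folder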
Since this is an image categorization problem, we will be leveraging CNN models or ConvNets to try and tackle this problem. We will start by building simple CNN models from scratch, then try to improve using techniques such as regularization and image augmentation. Then, we will try and leverage pre-trained models to unleash the true power of transfer learning!
Preparing Datasets
Before we jump into modeling, let’s load and prepare our datasets. To start with, we load up some basic dependencies.
import glob
import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img

%matplotlib inline
Let’s now load our datasets, using the following code snippet.
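A sketch of the loading step could look as follows; it assumes the folder layout created above and extracts the label (cat or dog) from each file name (adjust the path separator if you are on Windows):

IMG_DIM = (150, 150)

train_files = glob.glob('training_data/*')
train_imgs = np.array([img_to_array(load_img(img, target_size=IMG_DIM)) for img in train_files])
train_labels = [fn.split('/')[1].split('.')[0].strip() for fn in train_files]

validation_files = glob.glob('validation_data/*')
validation_imgs = np.array([img_to_array(load_img(img, target_size=IMG_DIM)) for img in validation_files])
validation_labels = [fn.split('/')[1].split('.')[0].strip() for fn in validation_files]

print('Train dataset shape:', train_imgs.shape)
print('Validation dataset shape:', validation_imgs.shape)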
Train dataset shape: (3000, 150, 150, 3)
Validation dataset shape: (1000, 150, 150, 3)
We can clearly see that we have 3000 training images and 1000 validation images. Each image is of size 150 x 150 and has three channels for red, green, and blue (RGB), hence giving each image the (150, 150, 3) dimensions. We will now scale each image with pixel values between (0, 255) to values between (0, 1) because deep learning models work really well with small input values.
The preceding output shows one of the sample images from our training dataset. Let’s now set up some basic configuration parameters and also encode our text class labels into numeric values (otherwise, Keras will throw an error).
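Both the scaling and the label encoding can be done roughly like this (the configuration values match what we use below; the printed slice indices are just illustrative):

train_imgs_scaled = train_imgs.astype('float32') / 255.
validation_imgs_scaled = validation_imgs.astype('float32') / 255.

# basic configuration parameters
batch_size = 30
num_classes = 2
epochs = 30
input_shape = (150, 150, 3)

from sklearn.preprocessing import LabelEncoder

# encode the text labels ('cat'/'dog') into 0/1
le = LabelEncoder()
le.fit(train_labels)
train_labels_enc = le.transform(train_labels)
validation_labels_enc = le.transform(validation_labels)

print(train_labels[1495:1505], train_labels_enc[1495:1505])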
['cat', 'cat', 'cat', 'cat', 'cat', 'dog', 'dog', 'dog', 'dog', 'dog'] [0 0 0 0 0 1 1 1 1 1]
We can see that our encoding scheme assigns the number 0 to the cat labels and 1 to the dog labels. We are now ready to build our first CNN-based deep learning model.
Simple CNN Model from Scratch
We will start by building a basic CNN model with three convolutional layers, coupled with max pooling for auto-extraction of features from our images and also downsampling the output convolution feature maps.
A Typical CNN (Source: Wikipedia)
We assume you have enough knowledge about CNNs and hence, won’t cover theoretical details. Feel free to refer to my book or any other resources on the web which explain convolutional neural networks! Let’s leverage Keras and build our CNN model architecture now.
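A minimal version of this architecture might look as follows; the exact filter counts (16, 64, 128) and the 512-unit dense layer are assumptions, but they reproduce the 17 x 17 feature maps mentioned below:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras import optimizers

model = Sequential()
model.add(Conv2D(16, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dense(1, activation='sigmoid'))    # binary output: dog (1) or cat (0)

model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(), metrics=['accuracy'])
model.summary()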
The preceding output shows us our basic CNN model summary. Just like we mentioned before, we are using three convolutional layers for feature extraction. The flatten layer is used to flatten out 128 of the 17 x 17 feature maps that we get as output from the third convolution layer. This is fed to our dense layers to get the final prediction of whether the image should be a dog (1) or a cat (0). All of this is part of the model training process, so let’s train our model using the following snippet which leverages the fit(…) function.
The following terminology is very important with regard to training our model:
The batch_size indicates the total number of images passed to the model per iteration.
The weights of the units in layers are updated after each iteration.
The total number of iterations is always equal to the total number of training samples divided by the batch_size.
An epoch is when the complete dataset has passed through the network once, that is, all the iterations are completed based on data batches.
We use a batch_size of 30 and our training data has a total of 3,000 samples, which indicates that there will be a total of 100 iterations per epoch. We train the model for a total of 30 epochs and validate it consequently on our validation set of 1,000 images.
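The training call itself is a one-liner with the parameters we just discussed:

history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
                    validation_data=(validation_imgs_scaled, validation_labels_enc),
                    batch_size=batch_size, epochs=epochs, verbose=1)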
Train on 3000 samples, validate on 1000 samples
Epoch 1/30
3000/3000 - 10s - loss: 0.7583 - acc: 0.5627 - val_loss: 0.7182 - val_acc: 0.5520
Epoch 2/30
3000/3000 - 8s - loss: 0.6343 - acc: 0.6533 - val_loss: 0.5891 - val_acc: 0.7190
...
...
Epoch 29/30
3000/3000 - 8s - loss: 0.0314 - acc: 0.9950 - val_loss: 2.7014 - val_acc: 0.7140
Epoch 30/30
3000/3000 - 8s - loss: 0.0147 - acc: 0.9967 - val_loss: 2.4963 - val_acc: 0.7220
Looks like our model is kind of overfitting, based on the training and validation accuracy values. We can plot our model accuracy and errors using the following snippet to get a better perspective.
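A simple matplotlib sketch for these plots (older keras versions store the accuracy under the 'acc' key, as in the logs above):

f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
epoch_list = list(range(1, epochs + 1))

ax1.plot(epoch_list, history.history['acc'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_acc'], label='Validation Accuracy')
ax1.set_title('Accuracy')
ax1.legend(loc='best')

ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_title('Loss')
ax2.legend(loc='best')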
Vanilla CNN Model Performance
You can clearly see that after 2–3 epochs the model starts overfitting on the training data. The average accuracy we get in our validation set is around 72%, which is not a bad start! Can we improve upon this model?
CNN Model with Regularization
Let’s improve upon our base CNN model by adding in one more convolution layer and another dense hidden layer. Besides this, we will add dropout of 0.3 after each hidden dense layer to enable regularization. Basically, dropout is a powerful method of regularization in deep neural nets. It can be applied separately to both input layers and the hidden layers. Dropout randomly masks the outputs of a fraction of units from a layer by setting their output to zero (in our case, it is 30% of the units in our dense layers).
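A sketch of this regularized architecture (the layer sizes mirror the previous model and are assumptions on my part):

from keras.layers import Dropout

model = Sequential()
model.add(Conv2D(16, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))   # the additional convolution layer
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(512, activation='relu'))    # the additional dense hidden layer
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(), metrics=['accuracy'])
history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
                    validation_data=(validation_imgs_scaled, validation_labels_enc),
                    batch_size=batch_size, epochs=epochs, verbose=1)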
Train on 3000 samples, validate on 1000 samples
Epoch 1/30
3000/3000 - 7s - loss: 0.6945 - acc: 0.5487 - val_loss: 0.7341 - val_acc: 0.5210
Epoch 2/30
3000/3000 - 7s - loss: 0.6601 - acc: 0.6047 - val_loss: 0.6308 - val_acc: 0.6480
...
...
Epoch 29/30
3000/3000 - 7s - loss: 0.0927 - acc: 0.9797 - val_loss: 1.1696 - val_acc: 0.7380
Epoch 30/30
3000/3000 - 7s - loss: 0.0975 - acc: 0.9803 - val_loss: 1.6790 - val_acc: 0.7840
Vanilla CNN Model with Regularization Performance
You can clearly see from the preceding outputs that we still end up overfitting the model, though it takes slightly longer and we also get a slightly better validation accuracy of around 78%, which is decent but not amazing. The reason for model overfitting is that we have much less training data and the model keeps seeing the same instances over time across each epoch. A way to combat this would be to leverage an image augmentation strategy to augment our existing training data with images that are slight variations of the existing images. We will cover this in detail in the following section. Let’s save this model for the time being so we can use it later to evaluate its performance on the test data.
model.save('cats_dogs_basic_cnn.h5')
CNN Model with Image Augmentation
Let’s improve upon our regularized CNN model by adding in more data using a proper image augmentation strategy. Since our previous model was trained on the same small sample of data points each time, it wasn’t able to generalize well and ended up overfitting after a few epochs. The idea behind image augmentation is that we follow a set process of taking in existing images from our training dataset and applying some image transformation operations to them, such as rotation, shearing, translation, zooming, and so on, to produce new, altered versions of existing images. Due to these random transformations, we don’t get the same images each time, and we will leverage Python generators to feed in these new images to our model during training.
The Keras framework has an excellent utility called ImageDataGenerator that can help us in doing all the preceding operations. Let’s initialize two of the data generators for our training and validation datasets.
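The generators can be set up roughly as follows; the transformation values match the list below (the shear_range value is an assumption), and the batch sizes of 30 and 20 match what we use later during training:

train_datagen = ImageDataGenerator(rescale=1./255, zoom_range=0.3, rotation_range=50,
                                   width_shift_range=0.2, height_shift_range=0.2,
                                   shear_range=0.2, horizontal_flip=True,
                                   fill_mode='nearest')
val_datagen = ImageDataGenerator(rescale=1./255)   # validation images are only rescaled

train_generator = train_datagen.flow(train_imgs, train_labels_enc, batch_size=30)
val_generator = val_datagen.flow(validation_imgs, validation_labels_enc, batch_size=20)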
There are a lot of options available in ImageDataGenerator and we have just utilized a few of them. Feel free to check out the documentation to get a more detailed perspective. In our training data generator, we take in the raw images and then perform several transformations on them to generate new images. These include the following.
Zooming the image randomly by a factor of 0.3 using the zoom_range parameter.
Rotating the image randomly by 50 degrees using the rotation_range parameter.
Translating the image randomly horizontally or vertically by a 0.2 factor of the image’s width or height using the width_shift_range and the height_shift_range parameters.
Applying shear-based transformations randomly using the shear_range parameter.
Randomly flipping half of the images horizontally using the horizontal_flip parameter.
Leveraging the fill_mode parameter to fill in new pixels for images after we apply any of the preceding operations (especially rotation or translation). In this case, we just fill in the new pixels with their nearest surrounding pixel values.
Let’s see how some of these generated images might look so that you can understand them better. We will take two sample images from our training dataset to illustrate the same. The first image is an image of a cat.
Image Augmentation on a Cat Image
You can clearly see in the previous output that we generate a new version of our training image each time (with translations, rotations, and zoom) and also we assign a label of cat to it so that the model can extract relevant features from these images and also remember that these are cats. Let’s look at how image augmentation works on a sample dog image now.
Image Augmentation on a Dog Image
This shows us how image augmentation helps in creating new images, and how training a model on them should help in combating overfitting. Remember for our validation generator, we just need to send the validation images (original ones) to the model for evaluation; hence, we just scale the image pixels (between 0–1) and do not apply any transformations. We just apply image augmentation transformations only on our training images. Let’s now train a CNN model with regularization using the image augmentation data generators we created. We will use the same model architecture from before.
We reduce the default learning rate by a factor of 10 here for our optimizer to prevent the model from getting stuck in a local minimum or overfitting, as we will be sending a lot of images with random transformations. To train the model, we need to slightly modify our approach now, since we are using data generators. We will leverage the fit_generator(…) function from Keras to train this model. The train_generator generates 30 images each time, so we will use the steps_per_epoch parameter and set it to 100 to train the model on 3,000 randomly generated images from the training data for each epoch. Our val_generator generates 20 images each time so we will set the validation_steps parameter to 50 to validate our model accuracy on all the 1,000 validation images (remember we are not augmenting our validation dataset).
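Putting that together, the training step could look like this (RMSprop's default learning rate is 1e-3, so 1e-4 reflects the factor-of-10 reduction):

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['accuracy'])

history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=100,
                              validation_data=val_generator, validation_steps=50,
                              verbose=1)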
Epoch 1/100
100/100 - 12s - loss: 0.6924 - acc: 0.5113 - val_loss: 0.6943 - val_acc: 0.5000
Epoch 2/100
100/100 - 11s - loss: 0.6855 - acc: 0.5490 - val_loss: 0.6711 - val_acc: 0.5780
Epoch 3/100
100/100 - 11s - loss: 0.6691 - acc: 0.5920 - val_loss: 0.6642 - val_acc: 0.5950
...
...
Epoch 99/100
100/100 - 11s - loss: 0.3735 - acc: 0.8367 - val_loss: 0.4425 - val_acc: 0.8340
Epoch 100/100
100/100 - 11s - loss: 0.3733 - acc: 0.8257 - val_loss: 0.4046 - val_acc: 0.8200
We get a validation accuracy jump to around 82%, which is almost 4–5% better than our previous model. Also, our training accuracy is very similar to our validation accuracy, indicating our model isn’t overfitting anymore. The following plots depict the model accuracy and loss per epoch.
Vanilla CNN Model with Image Augmentation Performance
While there are some spikes in the validation accuracy and loss, overall, we see that it is much closer to the training accuracy, with the loss indicating that we obtained a model that generalizes much better as compared to our previous models. Let’s save this model now so we can evaluate it later on our test dataset.
model.save('cats_dogs_cnn_img_aug.h5')
We will now try and leverage the power of transfer learning to see if we can build a better model!
Leveraging Transfer Learning with Pre-trained CNN Models
Pre-trained models are used in the following two popular ways when building new models or reusing them:
Using a pre-trained model as a feature extractor
Fine-tuning the pre-trained model
We will cover both of them in detail in this section. The pre-trained model that we will be using in this chapter is the popular VGG-16 model, created by the Visual Geometry Group at the University of Oxford, which specializes in building very deep convolutional networks for large-scale visual recognition.
A pre-trained model like the VGG-16 is an already pre-trained model on a huge dataset (ImageNet) with a lot of diverse image categories. Considering this fact, the model should have learned a robust hierarchy of features, which are spatial, rotation, and translation invariant with regard to features learned by CNN models. Hence, the model, having learned a good representation of features for over a million images belonging to 1,000 different categories, can act as a good feature extractor for new images suitable for computer vision problems. These new images might never exist in the ImageNet dataset or might be of totally different categories, but the model should still be able to extract relevant features from these images.
This gives us an advantage of using pre-trained models as effective feature extractors for new images, to solve diverse and complex computer vision tasks, such as solving our cat versus dog classifier with fewer images, or even building a dog breed classifier, a facial expression classifier, and much more! Let’s briefly discuss the VGG-16 model architecture before unleashing the power of transfer learning on our problem.
Understanding the VGG-16 model
The VGG-16 model is a 16-layer (convolution and fully connected) network built on the ImageNet database, which is built for the purpose of image recognition and classification. This model was built by Karen Simonyan and Andrew Zisserman and is mentioned in their paper titled ‘Very Deep Convolutional Networks for Large-Scale Image Recognition’. I recommend all interested readers to go and read up on the excellent literature in this paper. The architecture of the VGG-16 model is depicted in the following figure.
VGG-16 Model Architecture
You can clearly see that we have a total of 13 convolution layers using 3 x 3 convolution filters along with max pooling layers for downsampling and a total of two fully connected hidden layers of 4096 units in each layer followed by a dense layer of 1000 units, where each unit represents one of the image categories in the ImageNet database. We do not need the last three layers since we will be using our own fully connected dense layers to predict whether images will be a dog or a cat. We are more concerned with the first five blocks, so that we can leverage the VGG model as an effective feature extractor.
For one of the models, we will use it as a simple feature extractor by freezing all the five convolution blocks to make sure their weights don’t get updated after each epoch. For the last model, we will apply fine-tuning to the VGG model, where we will unfreeze the last two blocks (Block 4 and Block 5) so that their weights get updated in each epoch (per batch of data) as we train our own model. We represent the preceding architecture, along with the two variants (basic feature extractor and fine-tuning) that we will be using, in the following block diagram, so you can get a better visual perspective.
Block Diagram showing Transfer Learning Strategies on the VGG-16 Model
Thus, we are mostly concerned with leveraging the convolution blocks of the VGG-16 model and then flattening the final output (from the feature maps) so that we can feed it into our own dense layers for our classifier.
Pre-trained CNN model as a Feature Extractor
Let’s leverage Keras, load up the VGG-16 model, and freeze the convolution blocks so that we can use it as just an image feature extractor.
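A sketch of this step; we cut the network at its last pooling layer, flatten that output, and mark every layer as non-trainable:

from keras.applications import vgg16
from keras.layers import Flatten
from keras.models import Model

vgg = vgg16.VGG16(include_top=False, weights='imagenet', input_shape=input_shape)

output = Flatten()(vgg.layers[-1].output)   # flatten the block5_pool feature maps
vgg_model = Model(vgg.input, output)

vgg_model.trainable = False
for layer in vgg_model.layers:
    layer.trainable = False

vgg_model.summary()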
It is quite clear from the preceding output that all the layers of the VGG-16 model are frozen, which is good, because we don’t want their weights to change during model training. The last activation feature map in the VGG-16 model (output from block5_pool ) gives us the bottleneck features, which can then be flattened and fed to a fully connected deep neural network classifier. The following snippet shows what the bottleneck features look like for a sample image from our training data.
bottleneck_feature_example = vgg.predict(train_imgs_scaled[0:1])
print(bottleneck_feature_example.shape)
plt.imshow(bottleneck_feature_example[0][:,:,0])
Sample Bottleneck Features
We flatten the bottleneck features in the vgg_model object to make them ready to be fed to our fully connected classifier. A way to save time in model training is to use this model and extract out all the features from our training and validation datasets and then feed them as inputs to our classifier. Let’s extract out the bottleneck features from our training and validation sets now.
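A small helper along these lines does the job (the function name is my own):

def get_bottleneck_features(model, input_imgs):
    # run the frozen VGG-16 feature extractor over a batch of scaled images
    return model.predict(input_imgs, verbose=0)

train_features_vgg = get_bottleneck_features(vgg_model, train_imgs_scaled)
validation_features_vgg = get_bottleneck_features(vgg_model, validation_imgs_scaled)

print('Train Bottleneck Features:', train_features_vgg.shape)
print('Validation Bottleneck Features:', validation_features_vgg.shape)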
Train Bottleneck Features: (3000, 8192)
Validation Bottleneck Features: (1000, 8192)
The preceding output tells us that we have successfully extracted the flattened bottleneck features of dimension 1 x 8192 for our 3,000 training images and our 1,000 validation images. Let’s build the architecture of our deep neural network classifier now, which will take these features as input.
Just like we mentioned previously, bottleneck feature vectors of size 8192 serve as input to our classification model. We use the same architecture as our previous models here with regard to the dense layers. Let’s train this model now.
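A sketch of the classifier and its training loop (the 512-unit layers and 0.3 dropout mirror the earlier models and are assumptions):

from keras.layers import InputLayer

input_dim = vgg_model.output_shape[1]    # 8192 flattened bottleneck features

model = Sequential()
model.add(InputLayer(input_shape=(input_dim,)))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['accuracy'])

history = model.fit(x=train_features_vgg, y=train_labels_enc,
                    validation_data=(validation_features_vgg, validation_labels_enc),
                    batch_size=batch_size, epochs=epochs, verbose=1)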
Train on 3000 samples, validate on 1000 samples
Epoch 1/30
3000/3000 - 1s 373us/step - loss: 0.4325 - acc: 0.7897 - val_loss: 0.2958 - val_acc: 0.8730
Epoch 2/30
3000/3000 - 1s 286us/step - loss: 0.2857 - acc: 0.8783 - val_loss: 0.3294 - val_acc: 0.8530
Epoch 3/30
3000/3000 - 1s 289us/step - loss: 0.2353 - acc: 0.9043 - val_loss: 0.2708 - val_acc: 0.8700
...
...
Epoch 29/30
3000/3000 - 1s 287us/step - loss: 0.0121 - acc: 0.9943 - val_loss: 0.7760 - val_acc: 0.8930
Epoch 30/30
3000/3000 - 1s 287us/step - loss: 0.0102 - acc: 0.9987 - val_loss: 0.8344 - val_acc: 0.8720
Pre-trained CNN (feature extractor) Performance
We get a model with a validation accuracy of close to 88%, almost a 5–6% improvement from our basic CNN model with image augmentation, which is excellent. The model does seem to be overfitting though. There is a decent gap between the model train and validation accuracy after the fifth epoch, which kind of makes it clear that the model is overfitting on the training data after that. But overall, this seems to be the best model so far. Let’s try using our image augmentation strategy on this model. Before that, we save this model to disk using the following code.
model.save('cats_dogs_tlearn_basic_cnn.h5')
Pre-trained CNN model as a Feature Extractor with Image Augmentation
We will leverage the same data generators for our train and validation datasets that we used before. The code for building them is depicted as follows for ease of understanding.
Let’s now build our deep learning model and train it. We won’t extract the bottleneck features like last time since we will be training on data generators; hence, we will be passing the vgg_model object as an input to our own model. We bring the learning rate slightly down since we will be training for 100 epochs and don’t want to make any sudden abrupt weight adjustments to our model layers. Do remember that the VGG-16 model’s layers are still frozen here, and we are still using it as a basic feature extractor only.
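A sketch of this end-to-end setup; the frozen vgg_model acts as the first "layer" of a Sequential model, and the exact reduced learning rate (2e-5 here) is an assumption:

model = Sequential()
model.add(vgg_model)                       # frozen VGG-16 feature extractor
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=2e-5), metrics=['accuracy'])

history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=100,
                              validation_data=val_generator, validation_steps=50,
                              verbose=1)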
Epoch 1/100
100/100 - 45s 449ms/step - loss: 0.6511 - acc: 0.6153 - val_loss: 0.5147 - val_acc: 0.7840
Epoch 2/100
100/100 - 41s 414ms/step - loss: 0.5651 - acc: 0.7110 - val_loss: 0.4249 - val_acc: 0.8180
Epoch 3/100
100/100 - 41s 415ms/step - loss: 0.5069 - acc: 0.7527 - val_loss: 0.3790 - val_acc: 0.8260
...
...
Epoch 99/100
100/100 - 42s 417ms/step - loss: 0.2656 - acc: 0.8907 - val_loss: 0.2757 - val_acc: 0.9050
Epoch 100/100
100/100 - 42s 418ms/step - loss: 0.2876 - acc: 0.8833 - val_loss: 0.2665 - val_acc: 0.9000
Pre-trained CNN (feature extractor) with Image Augmentation Performance
We can see that our model has an overall validation accuracy of 90%, which is a slight improvement from our previous model, and also the train and validation accuracy are quite close to each other, indicating that the model is not overfitting. Let’s save this model on the disk now for future evaluation on the test data.
model.save('cats_dogs_tlearn_img_aug_cnn.h5')
We will now fine-tune the VGG-16 model to build our last classifier, where we will unfreeze blocks 4 and 5, as we depicted in our block diagram earlier.
Pre-trained CNN model with Fine-tuning and Image Augmentation
We will now leverage our VGG-16 model object stored in the vgg_model variable and unfreeze convolution blocks 4 and 5 while keeping the first three blocks frozen. The following code helps us achieve this.
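A sketch of the unfreezing logic; everything from block4_conv1 onward is made trainable, while blocks 1 to 3 stay frozen:

vgg_model.trainable = True

set_trainable = False
for layer in vgg_model.layers:
    if layer.name == 'block4_conv1':
        set_trainable = True        # start unfreezing from the first layer of block 4
    layer.trainable = set_trainable

for layer in vgg_model.layers:
    print(layer.name, layer.trainable)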
You can clearly see from the preceding output that the convolution and pooling layers pertaining to blocks 4 and 5 are now trainable. This means the weights for these layers will also get updated with backpropagation in each epoch as we pass each batch of data. We will use the same data generators and model architecture as our previous model and train our model. We reduce the learning rate slightly, since we don’t want to get stuck at any local minimum, and we also do not want to suddenly update the weights of the trainable VGG-16 model layers by a big factor that might adversely affect the model.
Epoch 1/100
100/100 - 64s 642ms/step - loss: 0.6070 - acc: 0.6547 - val_loss: 0.4029 - val_acc: 0.8250
Epoch 2/100
100/100 - 63s 630ms/step - loss: 0.3976 - acc: 0.8103 - val_loss: 0.2273 - val_acc: 0.9030
Epoch 3/100
100/100 - 63s 631ms/step - loss: 0.3440 - acc: 0.8530 - val_loss: 0.2221 - val_acc: 0.9150
...
...
Epoch 99/100
100/100 - 63s 629ms/step - loss: 0.0243 - acc: 0.9913 - val_loss: 0.2861 - val_acc: 0.9620
Epoch 100/100
100/100 - 63s 629ms/step - loss: 0.0226 - acc: 0.9930 - val_loss: 0.3002 - val_acc: 0.9610
Pre-trained CNN (fine-tuning) with Image Augmentation Performance
We can see from the preceding output that our model has obtained a validation accuracy of around 96%, which is a 6% improvement from our previous model. Overall, this model has gained a 24% improvement in validation accuracy from our first basic CNN model. This really shows how useful transfer learning can be. We can see that accuracy values are really excellent here, and although the model looks like it might be slightly overfitting on the training data, we still get great validation accuracy. Let’s save this model to disk now using the following code.
model.save('cats_dogs_tlearn_finetune_img_aug_cnn.h5')
Let’s now put all our models to the test by actually evaluating their performance on our test dataset.
Evaluating our Deep Learning Models on Test Data
We will now evaluate the five different models that we built so far, by first testing them on our test dataset, because just validation is not enough! We have also built a nifty utility module called model_evaluation_utils , which we will be using to evaluate the performance of our deep learning models. Let's load up the necessary dependencies and our saved models before getting started.
It’s time now for the final test, where we literally test the performance of our models by making predictions on our test dataset. Let’s load up and prepare our test dataset first before we try making predictions.
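A sketch covering both steps, reusing the file names we saved earlier and the LabelEncoder le fitted on the training labels (the path handling is an assumption; adjust the separator on Windows):

from keras.models import load_model

basic_cnn = load_model('cats_dogs_basic_cnn.h5')
img_aug_cnn = load_model('cats_dogs_cnn_img_aug.h5')
tl_cnn = load_model('cats_dogs_tlearn_basic_cnn.h5')
tl_img_aug_cnn = load_model('cats_dogs_tlearn_img_aug_cnn.h5')
tl_img_aug_finetune_cnn = load_model('cats_dogs_tlearn_finetune_img_aug_cnn.h5')

IMG_DIM = (150, 150)
test_files = glob.glob('test_data/*')
test_imgs = np.array([img_to_array(load_img(img, target_size=IMG_DIM)) for img in test_files])
test_labels = [fn.split('/')[1].split('.')[0].strip() for fn in test_files]

test_imgs_scaled = test_imgs.astype('float32') / 255.
test_labels_enc = le.transform(test_labels)

print('Test dataset shape:', test_imgs.shape)
print(test_labels[0:5], list(test_labels_enc[0:5]))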
Test dataset shape: (1000, 150, 150, 3)
['dog', 'dog', 'dog', 'dog', 'dog'] [1, 1, 1, 1, 1]
Now that we have our scaled dataset ready, let’s evaluate each model by making predictions for all the test images, and then evaluate the model performance by checking how accurate the predictions are.
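The model_evaluation_utils module itself is not reproduced here; as a stand-in, sklearn's metrics give the same kind of report. The sketch below evaluates the models that take raw images as input (the bottleneck-feature model would first need its features extracted with vgg_model):

from sklearn.metrics import accuracy_score, classification_report

def evaluate_model(model, name):
    probs = model.predict(test_imgs_scaled, verbose=0).ravel()
    preds = ['dog' if p >= 0.5 else 'cat' for p in probs]   # 1 = dog, 0 = cat from our label encoding
    print(name, '- accuracy:', accuracy_score(test_labels, preds))
    print(classification_report(test_labels, preds))

evaluate_model(basic_cnn, 'Basic CNN')
evaluate_model(img_aug_cnn, 'Basic CNN with image augmentation')
evaluate_model(tl_img_aug_cnn, 'Pre-trained CNN with image augmentation')
evaluate_model(tl_img_aug_finetune_cnn, 'Fine-tuned pre-trained CNN with image augmentation')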
Model 1: Basic CNN Performance
Model 2: Basic CNN with Image Augmentation Performance
Model 3: Transfer Learning — Pre-trained CNN as a Feature Extractor Performance
Model 4: Transfer Learning — Pre-trained CNN as a Feature Extractor with Image Augmentation Performance
Model 5: Transfer Learning — Pre-trained CNN with Fine-tuning and Image Augmentation Performance
We can see that we definitely have some interesting results. Each subsequent model performs better than the previous model, which is expected, since we tried more advanced techniques with each new model.
Our worst model is our basic CNN model, with a model accuracy and F1-score of around 78%, and our best model is our fine-tuned model with transfer learning and image augmentation, which gives us a model accuracy and F1-score of 96%, which is really amazing considering we trained our model from our 3,000 image training dataset. Let’s plot the ROC curves of our worst and best models now.
ROC curve of our worst vs. best model
This should give you a good idea of how much of a difference pre-trained models and transfer learning can make, especially in tackling complex problems when we have constraints like less data. We encourage you to try out similar strategies with your own data!
Case Study 2: Multi-Class Fine-grained Image Classification with Large Number of Classes and Less Data Availability
Now in this case study, let us level up the game and make the task of image classification even more exciting. We built a simple binary classification model in the previous case study (albeit we used some complex techniques for solving the small data constraint problem!). In this case study, we will be concentrating on the task of fine-grained image classification. Unlike usual image classification tasks, fine-grained image classification refers to the task of recognizing different sub-classes within a higher-level class.
Main Objective
To help understand this task better, we will be focusing our discussion around the Stanford Dogs dataset. This dataset, as the name suggests, contains images of different dog breeds. In this case, the task is to identify each of those dog breeds. Hence, the high-level concept is the dog itself, while the task is to categorize different subconcepts or subclasses — in this case, breeds — correctly.
We will be leveraging the dataset available through Kaggle. We will only be using the train dataset since it has labeled data. This dataset contains around 10,000 labeled images of 120 different dog breeds. Thus our task is to build a fine-grained 120-class classification model to categorize 120 different dog breeds. Definitely challenging!
Loading and Exploring the Dataset
Let’s take a look at what our dataset looks like by loading the data and viewing a sample batch of images.
Sample dog breed images and labels
From the preceding grid, we can see that there is a lot of variation in terms of resolution, lighting, zoom levels, and so on, along with the fact that many images do not contain just a single dog but also other dogs and surrounding items. This is going to be a challenge!
Building Datasets
Let’s start by looking at what the dataset labels look like to get an idea of what we are dealing with.
import pandas as pd

data_labels = pd.read_csv('labels/labels.csv')
target_labels = data_labels['breed']

print(len(set(target_labels)))
data_labels.head()
------------------
120
What we do next is to add in the exact image path for each image present in the disk using the following code. This will help us in easily locating and loading up the images during model training.
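Something along these lines works, assuming the Kaggle labels file has an id column and the unzipped training images sit in a train folder (both are assumptions here):

train_folder = 'train/'
data_labels['image_path'] = data_labels['id'].apply(lambda x: train_folder + x + '.jpg')
data_labels.head()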
It’s now time to prepare our train, test and validation datasets. We will leverage the following code to help us build these datasets!
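A sketch of the dataset preparation; images are loaded at 299 x 299 (the input size Inception V3 expects) and split with stratification so every breed shows up in each portion (the exact split fractions are assumptions chosen to roughly match the sizes printed below):

from sklearn.model_selection import train_test_split

# this loads the full dataset into memory, which is sizeable; it matches the GPU cloud setup used here
train_data = np.array([img_to_array(load_img(img, target_size=(299, 299)))
                       for img in data_labels['image_path'].values.tolist()]).astype('float32')

x_train, x_test, y_train, y_test = train_test_split(train_data, target_labels,
                                                    test_size=0.3,
                                                    stratify=np.array(target_labels),
                                                    random_state=42)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train,
                                                  test_size=0.15,
                                                  stratify=np.array(y_train),
                                                  random_state=42)

print('Initial Dataset Size:', train_data.shape)
print('Train, Test and Validation Datasets Size:', x_train.shape, x_test.shape, x_val.shape)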
Initial Dataset Size: (10222, 299, 299, 3)
Initial Train and Test Datasets Size: (7155, 299, 299, 3) (3067, 299, 299, 3)
Train and Validation Datasets Size: (6081, 299, 299, 3) (1074, 299, 299, 3)
Train, Test and Validation Datasets Size: (6081, 299, 299, 3) (3067, 299, 299, 3) (1074, 299, 299, 3)
We also need to convert the text class labels to one-hot encoded labels, else our model will not run.
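pd.get_dummies handles this; because the splits are stratified, every breed is present in each portion, so the one-hot column ordering stays consistent:

y_train_ohe = pd.get_dummies(y_train.reset_index(drop=True)).values
y_val_ohe = pd.get_dummies(y_val.reset_index(drop=True)).values
y_test_ohe = pd.get_dummies(y_test.reset_index(drop=True)).values

print(y_train_ohe.shape, y_test_ohe.shape, y_val_ohe.shape)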
((6081, 120), (3067, 120), (1074, 120))
Everything looks to be in order. Now, if you remember from the previous case study, image augmentation is a great way to deal with having less data per class. In this case, we have a total of 10222 samples and 120 classes. This means, an average of only 85 images per class! We do this using the ImageDataGenerator utility from keras.
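The generators follow the same pattern as in the first case study (the specific transformation values here are assumptions):

BATCH_SIZE = 32

train_datagen = ImageDataGenerator(rescale=1./255, zoom_range=0.3, rotation_range=50,
                                   width_shift_range=0.2, height_shift_range=0.2,
                                   horizontal_flip=True, fill_mode='nearest')
val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow(x_train, y_train_ohe, shuffle=False,
                                     batch_size=BATCH_SIZE, seed=1)
val_generator = val_datagen.flow(x_val, y_val_ohe, shuffle=False,
                                 batch_size=BATCH_SIZE, seed=1)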
Now that we have our data ready, the next step is to actually build our deep learning model!
Transfer Learning with Google’s Inception V3 Model
Now that our datasets are ready, let’s get started with the modeling process. We already know how to build a deep convolutional network from scratch. We also understand the amount of fine-tuning required to achieve good performance. For this task, we will be utilizing concepts of transfer learning. A pre-trained model is the basic ingredient required to begin with the task of transfer learning.
In this case study, we will concentrate on utilizing a pre-trained model as a feature extractor. We know that a deep learning model is basically a stack of interconnected layers of neurons, with the final one acting as a classifier. This architecture enables deep neural networks to capture different features at different levels in the network. Thus, we can utilize this property to use them as feature extractors. This is made possible by removing the final layer or using the output from the penultimate layer. This output from the penultimate layer is then fed into an additional set of layers, followed by a classification layer. We will be using the Inception V3 Model from Google as our pre-trained model.
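A sketch of the model; the Inception V3 convolutional base is frozen and a small classification head (the 512-unit layers are an assumption) is added for the 120 breeds:

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

base_inception = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))

out = base_inception.output
out = GlobalAveragePooling2D()(out)
out = Dense(512, activation='relu')(out)
out = Dense(512, activation='relu')(out)
predictions = Dense(120, activation='softmax')(out)   # one output per dog breed

model = Model(inputs=base_inception.input, outputs=predictions)

for layer in base_inception.layers:
    layer.trainable = False         # use Inception V3 purely as a feature extractor

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()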
Based on the previous output, you can clearly see that the Inception V3 model is huge with a lot of layers and parameters. Let’s start training our model now. We train the model using the fit_generator(...) method to leverage the data augmentation prepared in the previous step. We set the batch size to 32 , and train the model for 15 epochs.
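The training call, with the step counts derived from the dataset and batch sizes (6081 // 32 gives the 190 steps per epoch seen in the logs):

train_steps_per_epoch = x_train.shape[0] // BATCH_SIZE
val_steps_per_epoch = x_val.shape[0] // BATCH_SIZE

history = model.fit_generator(train_generator,
                              steps_per_epoch=train_steps_per_epoch,
                              validation_data=val_generator,
                              validation_steps=val_steps_per_epoch,
                              epochs=15, verbose=1)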
Epoch 1/15
190/190 - 155s 816ms/step - loss: 4.1095 - acc: 0.2216 - val_loss: 2.6067 - val_acc: 0.5748
Epoch 2/15
190/190 - 159s 836ms/step - loss: 2.1797 - acc: 0.5719 - val_loss: 1.0696 - val_acc: 0.7377
Epoch 3/15
190/190 - 155s 815ms/step - loss: 1.3583 - acc: 0.6814 - val_loss: 0.7742 - val_acc: 0.7888
...
...
Epoch 14/15
190/190 - 156s 823ms/step - loss: 0.6686 - acc: 0.8030 - val_loss: 0.6745 - val_acc: 0.7955
Epoch 15/15
190/190 - 161s 850ms/step - loss: 0.6276 - acc: 0.8194 - val_loss: 0.6579 - val_acc: 0.8144
Performance of our Inception V3 Model (feature extractor) on the Dog Breed Dataset
The model achieves a commendable performance of more than 80% accuracy on both train and validation sets within just 15 epochs. The plot on the right-hand side shows how quickly the loss drops and converges to around 0.5 . This is a clear example of how powerful, yet simple, transfer learning can be.
Evaluating our Deep Learning Model on Test Data
Training and validation performance is pretty good, but how about performance on unseen data? Since we already divided our original dataset into three separate portions, we can use the held-out test dataset for this. The important thing to remember here is that the test dataset has to undergo pre-processing similar to the training dataset. To account for this, we scale the test dataset as well, before feeding it into our model for predictions.
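A sketch of the evaluation; the predicted probabilities are mapped back to breed names using the same sorted column order that pd.get_dummies produced for the training labels, and sklearn computes the aggregate metrics:

x_test_scaled = x_test / 255.                      # same scaling as the training images

test_predictions = model.predict(x_test_scaled)
breed_columns = pd.get_dummies(target_labels).columns
predictions = pd.DataFrame(test_predictions, columns=breed_columns).idxmax(axis=1).tolist()

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

test_labels_list = list(y_test)
print('Accuracy:', accuracy_score(test_labels_list, predictions))
print('Precision:', precision_score(test_labels_list, predictions, average='weighted'))
print('Recall:', recall_score(test_labels_list, predictions, average='weighted'))
print('F1 Score:', f1_score(test_labels_list, predictions, average='weighted'))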
Accuracy: 0.864
Precision: 0.8783
Recall: 0.864
F1 Score: 0.8591
The model achieves an amazing 86% accuracy as well as F1-score on the test dataset. Given that we just trained for 15 epochs with minimal inputs from our side, transfer learning helped us achieve a pretty decent classifier. We can also check the per-class classification metrics using the following code.
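With the predicted and true breed labels from above, sklearn's classification_report gives the per-class breakdown:

from sklearn.metrics import classification_report

print(classification_report(test_labels_list, predictions))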
We can also visualize model predictions in a visually appealing way using the following code. | https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a | ['Dipanjan', 'Dj'] | 2018-11-17 00:15:09.056000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Data Science', 'Deep Learning', 'Towards Data Science'] |
DataBasic: A suite of data tools for the beginner | DataBasic: A suite of data tools for the beginner
Easy-to-use web tools that help data newbies (and the more experienced) grasp & learn the basics
By Matt Carroll <@MattatMIT>
I’m a data geek from way back at The Boston Globe. Naturally enough, I have a deep interest in data, data visualizations, and the tools used to build them.
That includes starting to review new data and data viz tools as they become available. This is my first review, and I’m looking at a suite of three tools for data beginners, released under the name DataBasic.io. The basic idea is to introduce “easy-to-use web tools for beginners that introduce concepts of working with data,” as the site explains.
The tools are: Word Counter, WTFcsv, and SameDiff. Each aims to solve a particular data problem, and they do their work well. But what’s of particular interest to me is what all three accomplish in a very deep way — they are easy to use.
That’s unlike many other data tools, which are geared for hard-core users. Not that there’s anything wrong with complicated tools for hard-core users. Sometimes the tools do complicated data work, and by necessity are fairly complex. I’ve used data tools for a long time, so am not put off (too much) by sharp learning curves and crappy user experiences, as long as the tool does what’s promised. Plus, frankly, I’m paid to put up with the pain, so I work through it.
But I also understand how new users can be intimidated and turned off by these types of tools and experiences. That’s really too bad, because there’s so much data available that can help people understand so much more about their community, politics, the environment, even their own finances. The possibilities are limitless. What’s needed are simple tools that can be mastered easily so more people can participate in data surfing.
That’s where the DataBasic tools fit in. They are beautiful and simple, easy and intuitive to use, and are great for beginners. Who are they useful for? Anyone who might want to dive into data, but is unsure where to start, including students, community groups, or journalists (that’s me). The tools were tested in classrooms and workshops to make sure they worked well and were easily understandable. The developers get that learning data tools can be a miserable experience. As the web site says: “These tools were born out of frustration with things we were trying to use in our undergraduate classes, so we feel your pain.”
They’re even determined to put some fun into data. For training, they even include some data sets such as song lyrics from Beyonce and a list of survivors from the Titanic.
Here’s a quick rundown, with a more detailed description below:
Word Counter analyzes your text and tells you the most common words and phrases. Putting on my journalist hat, right away I can see reporters using this as a standard tool for basic analysis of politicians’ speeches.
analyzes your text and tells you the most common words and phrases. Putting on my journalist hat, right away I can see reporters using this as a standard tool for basic analysis of politicians’ speeches. Same Diff compares two or more text files and tells the user how similar or different they are. Of the three, this tool offers the most intriguing possibilities. For example, it could be used to analyze how a politician’s stump speech evolves over time. Or how witness’ statements have changed, or how any particular set of statements has changed.
compares two or more text files and tells the user how similar or different they are. Of the three, this tool offers the most intriguing possibilities. For example, it could be used to analyze how a politician’s stump speech evolves over time. Or how witness’ statements have changed, or how any particular set of statements has changed. WTFcsv is more traditional, in that it’s geared towards spreadsheets, the data maven’s basic tool. It’s designed for the data newbie who has no idea what to do with a spreadsheet. It helps a user peek behind all those columns and rows through some simple analysis. BTW, “Csv” refers to a common text file that can easily be imported into a spreadsheet program like Excel. “WTF” stands for … well, you know.
The tools were developed by two people with a deep interest and knowledge of data and data visualizations, Rahul Bhargava, a research scientist at the MIT Media Lab’s Center for Civic Media, and Catherine D’Ignazio, an assistant professor in the Journalism Department at Emerson College in Boston. (BTW, big transparency alert: I’m hopelessly conflicted writing this review, as I’m friends with both Catherine and Rahul. Also, I’m Catherine’s partner in creating a cool photo engagement app for newsrooms called NewsPix, and I work with Rahul in Civic Media.)
I’m looking at these tools from the point of view of a beat reporter wondering how to use them to help find stories or dive deeper into material. And while they are designed for beginners, I can see Word Counter and SameDiff gaining traction with experienced reporters, as well.
Word Counter: A ‘word cloud’ is only the beginning
Word Counter takes text and analyzes it in several ways. It creates your basic word cloud, but also does word counts, in several interesting ways.
For instance, let’s take the speech Donald Trump’s speech in June when he announced his candidacy for president.
WordCounter breaks its analysis into a few different pieces — A word cloud, Top Words, Bigrams, and Trigrams.
Top Words is a basic word count. In Trump’s speech, the most common words were People (47 mentions), Going (44), Know (42), and Great (34).
Bigrams and Trigrams are counts on two- and three-word combos. Top Bigrams: Don’t (39), going to (38), I’m (37)
And top Trigrams: “I don’t” and “(We’)re going to”, tied at 11; and “that’s the,” “going to be,” and “in the world,” tied at 10.
A nice touch: Each of the lists of the Top Words, Bigrams and Trigrams can be downloaded as a csv.
It’s a nice tool that might help quickly analyze or provide some insight into someone’s speech, talk, or writing. Unfortunately, this tool didn’t help our aspiring data journo come up with a story.
Too bad there wasn’t a tool in this suite that would let us take Trump’s speech announcing his candidacy and compare it with Obama’s speech doing the same.
Oh wait — there is, and it’s called…
… SameDiff: Let’s compare speeches by Obama & Trump
I took the speeches Obama and Trump made announcing their candidacy for president and easily imported them as text documents into SameDiff.
The first thing SameDiff does is give us a report, which announces: “These two documents are pretty different.” A shocker, huh? Who would think that Obama and Trump might share widely divergent views? (Yes, that’s a joke.) But that’s good. Big differences means that there is a lot of contrast between the two speeches, which gives us something to write about.
SameDiff creates three columns: The first and third columns are basically word clouds noting the top words used by each candidate (you can get the exact word count by hovering your cursor over a word.) The middle column lists the words in common.
It’s interesting to see the differences in what the two men talk about. With Obama, top words include: Health, future, divided, opportunities, family and children.
With Trump, some top words: Going, China, Trump, Mexico, stupid, Obamacare.
What does this tell us? We can see that Obama in his speech was focused very much on individual needs, including healthcare, and was concerned about divisions in the country.
Trump focussed more on problems in the international arena (not to mention he likes talking about himself). And he’s not above throwing around derogatory terms like “stupid.”
The column that shows which words they used in common? Not that illuminating, in this case: Great, country, jobs… Words that you might find in most political speeches given by any random politician.
As a reporter, I can’t say I could write a story based on this analysis. But it does help shine a little light on the slant each candidate was pushing, which could help inform whatever story I do write. This tool was helpful.
WTFcsv: Helping the beginner database maven
The goal of WTFcsv is to show the data newbie what’s inside a spreadsheet, or a file that can be imported into a spreadsheet. Like the other apps, it is well designed and simple, with a clean interface.
It was simple and intuitive to import a file (it can take a csv, xls, or xlsx, which are three common spreadsheet file types).
Once a file is imported, it provides basic information about what’s in the file. To test it, I imported a US Census file on college graduation rates at the state level. The state-by-state data shows the education levels of state residents, 25 or older. There are 12 columns of information, which range from state names and populations to education levels, such as “Less than 9th grade,” and “Bachelor’s or higher.”
The analysis, provided in a nice card display, broke down each column. For instance, it looked at the percent of the 25-and-older population with a bachelor’s degree or higher education. The WTFcsv histogram broke the info into ranges; for example, four states had between 18 and 22 percent. (Ahem, and using my own analysis: For Bachelor’s or higher, Massachusetts led all states with 39.4 percent.)
It also provides a “What do I do next?” set of questions that can help prod the beginner.
Bottom line: Maybe of use for people who are new or are intimidated by spreadsheets, but that’s about it. It will be of some use for awhile, but as data users become more experienced, they will turn to other, more powerful tools, such as pivot tables, to do the same types of analysis.
Overall, the DataBasic suite
I like all three tools. All are all simple to use. The design and user experience is great. “Intuitive” is the key word. I had no trouble importing files. My favorite was SameDiff, because I can see how useful this would be for even an experienced reporter, but I can see how all three would benefit wanna-be data reporters.
Matt Carroll runs the Future of News initiative at the MIT Media Lab and writes the “3 to read” newsletter, which is a weekly report on trio of stories and trends from across the world of media. | https://medium.com/3-to-read/databasic-a-suite-of-data-tools-for-the-beginner-5165651e46a5 | ['Matt Carroll'] | 2016-01-14 17:16:08.693000+00:00 | ['Journalism', 'Data', 'Data Visualization'] |
Hitchhiker’s Guide to Developing Ontology Smart Contracts in SmartX (Python) | 2. Cyano Wallet
Wait! Before we get into the details of smart contract development, you need to install Cyano Wallet.
Cyano Wallet is a web extension wallet which holds Ontology’s two tokens (ONT and ONG), as well as any tokens that you decide to create following Ontology’s OEP4 token protocol.
You can install Cyano Wallet for Chrome here:
Once installed, Cyano Wallet can be accessed from where all your other Chrome extensions are.
It will ask you to create an account and memorize a mnemonic phrase and a private key — make sure to keep these recorded in a safe place.
Once you’re past this phase, Cyano Wallet should look like this:
While many of your smart contracts will depend on Cyano Wallet for handling token transfers to and from wallet addresses, Cyano Wallet is most important for holding ONG, which is the Ontology token that serves as “Gas” for smart contract deployment and invocation.
For those unfamiliar with Gas, it’s basically a transaction fee, paid out when a developer deploys a smart contract to the Ontology blockchain, or when a user invokes a function of a deployed smart contract.
For Ethereum, the fee is a small fraction of ETH; for Ontology, the fee is ONG.
So at this point, you may be wondering: “How am I supposed to get ONG? Do I even want to make the investment of purchasing ONG if I’m just trying out developing smart contracts on Ontology in the first place?”
These are all valid concerns that are also immediately addressable.
First, in your Cyano Wallet settings, you can select “TestNet” in the first dropdown menu and choose any node address in the second. | https://medium.com/better-programming/hitchhikers-guide-to-developing-ontology-smart-contracts-in-smartx-python-6a152fe8aad7 | ['Tim Feng'] | 2019-08-06 23:30:01.785000+00:00 | ['Python', 'Smart Contracts', 'Blockchain', 'Ontology', 'Programming'] |
Why Your Friends Might Be Holding You Back From Living Your Best Life and What to Do About It | Why Your Friends Might Be Holding You Back From Living Your Best Life and What to Do About It Gaby Spadaro Follow Dec 29 · 4 min read
Photo by Tim Mossholder on Unsplash
You hear you're the average of the five people you hang out with, and you're scared to death; maybe your friends do not help you get to where you want. You want to become the best version of yourself but deep down, you know you are not hanging out with the right people, so you stay where you are.
You stick around because going out of your comfort zone to meet new people scares you, or you don’t want to put in the effort.
So, there you are, feeling bad for yourself, knowing again that maybe if you had a healthier community around you, it would be easier to achieve the goals you want and feeling happier with yourself.
When you feel tired of the place you are in and you commit yourself to making a significant change in your life, people will often lash out if you start acting differently. You begin to realize that the new values you are forming for yourself might not align with your current group.
Here is when it’s time to decide what's best for you. To make hard choices and maybe say goodbye to all those friends that aren't adding any value to your life or those friends that are only sucking your energy.
But friends are an essential part of your life. If you find good friends, they will lift you and help you get in the right direction; good friends will support you and be there for you when you don’t feel right and are going to genuinely be interested in your projects and help you achieve your dreams.
They will be honest with you and communicate when they believe things aren't going well, and they will respect your time. It's essential to hang out with friends that you admire, friends who inspire you to be the best version of yourself.
Friends are not only for sharing sorrows and pains.
Friends are essential for enjoying and creating experiences, and for reminding us that we are strong. Friends are that space where we can be a child again and feel that we belong. Most importantly, friends have to help us grow, because if you don't grow together, it will be hard to become the person you have always wished to be.
It will be hard to see some people go out of your life, but life is much better when you have a community that supports you, where you help each other, day to day, to bring out the best versions of yourselves.
Here are some truths I finally accepted
You Should Let Go of Toxic Friendships You Love When You Know They Are Not Adding Value to Your Life
It sounds harsh, but if you love yourself, you will realize that sometimes you have friends you love who are not fighting for their dreams or are simply not bringing anything interesting to the table. You are not growing together or supporting each other, and although you might have fun together, every time you see them you leave feeling worse about yourself. | https://medium.com/datadriveninvestor/why-your-friends-might-be-holding-you-back-from-living-your-best-life-and-what-to-do-about-it-33b25efb5f4a | [] | 2020-12-29 17:38:18.823000+00:00 | ['Self-awareness', 'Self Improvement', 'Self', 'Friendship', 'Love Yourself']
How To Salt Away Those Stains | Simple tips to make everyday living a breeze and more stain-free
While salt is a staple in your recipes and flavors your food, there are other ways to use salt that won't affect your health or your recipes. Salt is a preservative in almost all foods, but check out what else salt can do beyond the plate. Salt can check freshness, refresh things, remove stains, balance color, and so much more.
Photo by Jason Tuinstra on Unsplash
Light bulb moments can be as simple as looking into your kitchen cabinets for everyday solutions and revelations. Seek and you will find, open and it will be given unto you.
Photo by Matias Wong on Unsplash
Photo by EPMcKnight
Got fresh or not-so-fresh eggs? Check that your eggs are fresh before cooking them in your favorite recipe or frying them, only to discover a smell that turns your stomach or a yolk too thick to fry. You can test their freshness by adding two tablespoons of salt to a cup of water and popping your eggs in. The fresher the egg, the lower it will sit in the water. Fresh eggs will sink right to the bottom, while older, less fresh eggs will float on top. If an egg floats in the middle, it is halfway between. Eat up!!
Photo by Terry Vlisidis on Unsplash
Got red wine on your carpet? White wine will dilute the red wine, but not entirely. To totally eliminate the residual stain, add salt and banish the red wine from your carpet. After the white wine application, wipe down the area with a sponge and cold water. Next, sprinkle some salt and wait around 10–15 minutes. Finally, vacuum the area and watch the stain disappear right before your eyes. This salt treatment may also be applied to fabric, clothes, and couch cushions.
Photo by Duncan Sanchez on Unsplash
Got pain from stings and bites? Relieve it easily with a little salt. Stung by a bee? Cover the area in salt to reduce the swelling and stop or minimize the pain. For insect bites, soak the entire area in saltwater and then finish off with a coating of vegetable oil.
Photo by Uwe Jelting on Unsplash
Got frost on your windows? Salt has always been the greatest enemy of ice: put salt on ice and watch the ice dissolve. Ice cream makers of yesterday used salt on ice during the ice cream making process. And when roads are covered with ice, what do crews use to melt it? Salt! Frost on windows? Gone with salt.
Photo by Salmen Bejaoui on Unsplash
Got ants annoying you around the house? Salt is kryptonite to ants; they run for their lives to get away from it. Ant battle? Win it with salt. Sprinkle salt directly on their path, or create a barrier around your door frames and skirting boards that diverts their route.
Photo by Etienne Girardet on Unsplash
Got lipstick stains on your glasses or cups? Rub the edges of your glasses with salt before placing them in the dishwasher or washing them. The stains come off effortlessly.
Photo by Sonny Ravesteijn on Unsplash
Got smelly, overcooked coffee? Banish that awful, bitter taste with a little bit of salt. Adding a pinch of salt to your cup of coffee makes it less bitter. Bitter coffee from an overused percolator? Add four tablespoons of salt, fill with water, then percolate as normal. Rinse, and no more bitter coffee.
Photo by Clem Onojeghuo on Unsplash
Got a slow-cooking pot of water or food? Throw in a pinch of salt, just that simply. Salt allows the water to boil at a higher temperature, so your food cooks faster once it gets going.
Photo by Dave Lastovskiy on Unsplash
Got warm drinks for a party? Fill a bucket with water and some ice; adding salt will make the drinks cool faster. Pour in some salt, stir the drinks around, and they will turn icy cold in minutes.
Photo by Nathan Dumlao on Unsplash
Got no dishwasher soap? Make your own! Take three or four drops of dish soap, add a large tablespoon of baking soda, and then add salt. Pour this mixture into the dishwasher powder container. You get sparkling dishes and save a few dollars.
Photo by Lucas George Wendt on Unsplash
Got grease on your dishes, pots, and/or pans? Salt soaks up grease. Sprinkle the pan with salt until it absorbs all the grease and then wash it. As a preventive remedy for grease splattering, sprinkle salt in your pan before you fry. This will help prevent grease from splattering all over the stove and kitchen worktops.
Photo by Artem Makarov on Unsplash
Got grimy sponges? Restore the sponge and remove that grime by soaking it overnight in saltwater. Add 1/4 cup of salt per quart of water, and even stubborn grime will be removed.
Photo by Mick Haupt on Unsplash
Got clogged drains? White vinegar, baking soda, and salt will wash your drains clean as a whistle. Add one cup each of baking soda and salt to a cup of vinegar, let the bubbling begin, and allow it to work for ten minutes before pouring some boiling water down the drain.
Photo by Dan Gold on Unsplash
Got a wimpy salad? Give your salad a makeover with salt and make it crisp. Sprinkle a little salt on top of the salad and the freshness will last for a few hours.
Photo by Giorgio Trovato on Unsplash
Got browning fruit? Saltwater is a preservative. Soak fruit in saltwater for a short period of time; this will help it retain its color. Great for a fruit salad or platter.
Photo by Steve Long on Unsplash
Got melted particles on your iron's metal soleplate? No more ruining your clothes. Sprinkle salt all over a newspaper on your ironing board. Turn the iron to high and run it over your salty newspaper. This will help remove all that residue from your iron.
Photo by Mel Poole on Unsplash
Got faded towels after a few washes? When washing new towels, add a cup of salt to the load. This will help set the colors on your towels and help them to look brighter longer.
Photo by Gene Pensiero on Unsplash
Got slugs and snails? As a kid, I learned this trick from my mother. We sprinkled the salt directly on the snail and it just dissolved or exploded. Ugly!! If they're on your plants, sprinkle salt around the plant to stop them from climbing onto it.
Photo by Aziz Acharki on Unsplash
There are so many other uses for salt and not enough time to convey them all, but I have listed enough to show you the benefits of using salt and how you can save dollars by doing so. There are tons more uses, such as salting out weeds from pavement cracks, banishing that stinky-sneaker smell, cleaning up egg spills from your floor or counter, removing mildew from clothes and bathroom walls, keeping cut flowers fresh longer, soothing poison ivy, putting out a fireplace fire more quickly and cleanly, stopping candles from dripping, and easing bad breath, congestion, muscle pain, bee and wasp stings, nail biting, bruises, pestering insects, and many, many more.
In conclusion, these are just a few lessons to convey the importance of salt and how it can make a significant difference in many household chores, all within reach in your kitchen cabinets.
For additional reads: | https://medium.com/illumination-curated/how-to-salt-away-those-stains-4fa8cbbf0883 | ['Ep Mcknight'] | 2020-12-27 18:13:21.388000+00:00 | ['Self Improvement', 'Startup', 'Life Lessons', 'Education', 'Life'] |
How About NASA Sending a Woman to the Moon by 2024? | How About NASA Sending a Woman to the Moon by 2024?
A grand gesture that is way overdue
Buzz Aldrin walking on the Moon in 1969 — Image by Pixabay
It's been quite a few years since NASA and the United States space program have sent astronauts to the Moon. To be more specific, December 12, 1972, was the last time.
In those days, female astronauts were nonexistent. Finally, in 1978, NASA selected 35 new candidates, six of whom were women¹. Five years later, one of them, Sally Ride, became the first American woman in space.
NASA plans for a female moon landing
Recently, NASA formally outlined a $28 billion plan for returning to the Moon by 2024². Under the program, called Artemis, NASA plans to land a woman and a man on the lunar surface.
However, the agency’s timeline depends on Congress releasing $3.2 billion to construct a landing system.
The plan calls for two astronauts to travel in Orion, an Apollo-like capsule that will blast off on a massively powerful rocket known as SLS.
NASA discusses details
The NASA administrator, Jim Bridenstine, added these details, “The $28 billion represents the costs associated for the next four years in the Artemis program to land on the Moon. SLS funding, Orion funding, the human landing system, and of course, the spacesuits — all of those things that are part of the Artemis program are included.”
And he also pointed out, “The budget request that we have before the House and the Senate right now includes $3.2 billion for 2021 for the human landing system. It is critically important that we get that $3.2 billion.”
The US House of Representatives previously passed a bill to allocate $600 million for building the lunar lander. Yet NASA is going to need even more funds to complete the vehicle.
Bridenstine continued, “I want to be clear, we are exceptionally grateful to the House of Representatives that, in a bipartisan way, they have determined that funding a human landing system is important — that’s what that $600 million represents. It is also true that we are asking for the full $3.2 billion.”
Astronaut Bonnie Dunbar, wearing an extravehicular mobility unit spacesuit — Image by Public Domain
CNN interview
Bridenstine first informed CNN in 2019 about their plans to put the first woman astronaut on the Moon by 2024³. He stressed that this person would be someone “who has been proven, somebody who has flown, somebody who has been on the International Space Station already.” He further stated it would be someone who is already an astronaut.
When that interview aired, there were a total of twelve active woman astronauts. Since that time, five more female astronauts have been added to the NASA roster. But it’s still unclear if these newest astronauts would qualify to be on that first landing mission.
When asked when crew members for Artemis would be named, the NASA chief said he would like to create that team at least two years before the mission.
However, he also stressed, “I think it’s important we start identifying the Artemis team earlier than not… primarily because I think it will serve as a source of inspiration.”
White House views on the NASA lunar mission
The White House is striving to renew the United States as the world leader in space. They believe that sending astronauts back to the Moon is a great starting point. Additionally, the mission intends to extract valuable water-ice deposits from the Moon's South Pole.
It is believed that these deposits could be used to make rocket fuel right on the Moon. This would accomplish two things: 1) produce fuel at lower costs than transporting it from Earth, 2) serve as a foundation for a future lunar economy.
However, Vice President Mike Pence has voiced concerns regarding China’s space ambitions. In January 2019, they became the first nation to land a robot rover on the Moon’s far side⁴. The Chinese are now prepping for its initial mission to bring lunar soil samples back to Earth.
China has been working on new-generation spacecraft for its astronauts that could travel to deep space destinations. Although the Chinese are not on a timeline to achieve this by 2024, it is expected that they will make notable progress towards such a goal within the decade. | https://medium.com/discourse/how-about-nasa-sending-a-woman-to-the-moon-by-2024-fcce07903869 | ['Charles Stephen'] | 2020-11-04 16:04:08.392000+00:00 | ['Women', 'Women In Tech', 'Space', 'Government', 'Science'] |
Offline: When Your Apps Can’t Connect to the Internet | Image credit: photosteve101
Internet access is not ubiquitous. Everyone experiences the frustration of being offline. When you design a mobile app which doesn’t work without internet, you’ve made a decision about when and where your app may be used.
But consider:
You’re on a flight, and you don’t want to pay for Wi-Fi.
You’re on a camping trip in a remote location.
You want to unplug for awhile and get work done.
You can’t afford a data plan for your phone.
There are many reasonable instances where users won’t have Internet access. Knowing this, how will you design your apps differently?
Does “offline” mean “no content”?
Not necessarily.
Whenever you open an app, it has the opportunity to download content. When offline, apps can use this content to improve the experience. And most do, but some go further than others.
Image credit: Kārlis Dambrāns
Let’s do a walkthrough of some common social, mapping, and productivity apps, to see how they function offline.
Social apps
The typical use cases of a social app are to browse your feed, post comments and items of interest, and manage your network of friends. Let’s turn on airplane mode and observe.
FACEBOOK
First up, Facebook:
We see pretty much what we’d expect to see: our news feed and an alert that we’re not connected to the Internet. There’s a fair bit of content too. Well done, Facebook.
Now, let’s page through the various sections of the app, and see what we find. First up, friend requests:
This isn’t a great experience. The explicit “No Internet Connection” alert is gone, replaced by a spinning indicator that spins forever. That’s not very helpful or informative. There’s a lot of unused whitespace that could be better utilized as well.
Next up, notifications:
This is slightly better. At least I can see some cached content, though it’s not a lot. Unfortunately, there are now two spinners. It’s not clear what the difference is between them, or what is happening. And that’s still a lot of wasted space.
Finally, the miscellany menu:
The most obvious thing missing is thumbnails for each item. This seems strange, as those thumbnails are static and shouldn’t change often.
Now, let’s try to post something:
This is really clear behavior. My post has been queued for submission with a message (not even an error!) giving me the opportunity to retry submission.
Since this is a post I don’t plan to keep, I’ll hit the “x” and discard it.
Well done, once again. I get a popup reassuring me that the post will go live as soon as I have an Internet connection, though I can delete it if I wish. Facebook provides an excellent experience while offline.
The good:
Newsfeed is populated with posts
Offline posting sets clear expectations with very good messaging
The bad:
Inconsistent alerts for “No Internet Connection”
Loading spinners which never stop
Poor use of available space on loading screens
Missing menu thumbnails
PINTEREST
Next up, Pinterest:
This is a very underwhelming screen to see when I first open the app. There is zero content here for me to browse. In addition, the app doesn’t explain why there’s a lack of content. There isn’t any indication that I’m offline.
As with Facebook, let’s page through the different sections. First, search:
Oddly enough, there is content here. I don’t expect search to work, but it’s interesting that there’s a landing page at all. Again, no indication that I’m offline.
Let’s type in a search keyword, just for kicks:
The search result screen is basically identical to my home page. It tells me there are no pins, but not why. Did I mistype my search term?
Let’s try to add a pin and see what happens:
So far, so good. I want to pin a photo, so I’ll click “Photos”.
I was able to easily select a photo. Let’s add my pin to a board:
Bummer. Pinterest doesn’t cache my existing boards. I’ll have to create a new one:
That’s very bad. Pinterest allows me to get through that entire workflow before telling me that offline pinning is impossible. It won’t even let me save a draft of the pin for later.
Let’s see what my profile looks like:
It looks like I have no pins and no boards, which is false, but Pinterest isn’t telling me why. Ironically, my stats are accurate (number of pins, likes, followers, following), but are in stark contradiction with the list of pins and boards below.
What happens if I click “47 Pins”?
Yet another inconsistent view. The counter in the top right shows “47”, but the list below tells me there are no pins to display. This is a very poor offline experience.
The good:
There’s really nothing great about using this app offline
The bad:
No cached content (and no explanation why)
I’m allowed to get far into the pinning process before learning it’s impossible
Profile stats contradict what is shown
INSTAGRAM
So far, we’ve seen a big difference in the offline experience of social apps. How does Instagram fare?
This feels very similar to Facebook. There’s a good bit of content preloaded, and a prominent “No Internet Connection” alert.
Let’s page through the sections and see what we find. “Explore” first:
Not great, as there’s no content here, but at least we don’t have the infinite spinners of the Facebook app. A simple option to refresh the page is much more sensible.
What about my user profile?
Similar to Pinterest, none of this content is loaded. However, neither are the stats, which means they aren’t in conflict. Also, Instagram remains clear about the lack of an Internet connection.
Can we edit the profile?
Nope. In contrast to the clear “No Internet Connection” message in the previous screens, this one is very ambiguous. And a terrible use of whitespace.
Finally, let’s try the direct messaging feature:
A similar result to the “Edit Profile” screen. However, the messaging (and message styling) is different for no good reason.
Overall, I’d place the Instagram offline experience somewhere between Facebook and Pinterest. Good, but not great.
The good:
The “No Internet Connection” signs are mostly consistent across the app
The bad:
Can’t edit a user profile
The only cached content is from the main feed
Poor use of whitespace
TWITTER
Let’s look at one more social app: Twitter.
As with Facebook and Instagram, there’s a good bit of cached content available. Unlike those apps, there is no messaging to indicate that you’re offline. That’s not necessarily a problem, but I prefer to be told that the app is offline.
Let’s take a look at notifications:
It’s clear that Twitter takes caching seriously. The number of notifications here is impressive.
What about messages?
Here we have a clear message prompting us to retry. It should probably indicate the reason that messages aren’t loading, but that’s a minor quibble.
I didn’t have any direct messages when I took this screenshot. Later, I checked this screen, and it showed a list of direct messages while offline, with no message indicating any loading problems. This is very consistent with the timeline and notification sections.
Next, let’s look at my user profile:
As with the other social apps, none of my personal content has been cached. But here, there’s a clear message that content isn’t loading.
Can I edit my profile?
Maybe? No error message yet. Let me make a change and click “Save”.
Nope. And an incomprehensible error message at that. It would be better to disable the actions that are impossible to perform in offline conditions, like “Edit Profile”, and explain why, rather than letting users click them.
Now, I’m going to try composing a new tweet:
Click “Tweet”:
And… nothing happens. That’s not very reassuring. I expect confirmation that my tweet will post as soon as I have Internet connectivity, as Facebook does.
Let’s scroll through the different timeline views now. First, “Discover”:
Cached content. Fantastic. Now, “Activity”:
No cached content here.
Let’s click the “Follow people” button, just for fun:
Not only are there no recommendations, this is a horribly-designed empty state. I’ve complained about poorly-used whitespace in other apps, but this space has been filled with useless UI, which is far worse.
The good:
Lots of cached content. Far and away the best.
Clear and (mostly) consistent Internet connectivity errors
Offline posting works, though with the caveat above
The bad:
Caching is handled very differently across the app
Looks like you can edit a profile offline, until you can’t
Awful “Can’t edit profile” message
No confirmation after composing a tweet offline
Poorly designed “Follow people” view
To sum up, none of these social apps nail the offline experience, but some are clearly better than others. In general, there’s lots of room for improvement.
Mapping apps
The typical use case of mapping apps is to get you where you need to go. Apps won’t get you new directions while offline, but how well do Google Maps and Apple Maps use their cache?
GOOGLE MAPS
Remember, in offline mode, apps don’t know your current location. Let’s open Google Maps and see how it handles that:
It remembers my last location, and starts me there. We’re zoomed in pretty far though. Mapping apps have access to a lot of data, and have to be selective about what they cache. Let’s zoom out and see what Google Maps has cached:
Downtown Portland. Can we zoom further?
Portland metro area. More?
And there’s the cutoff. Not bad, all things considered, though I don’t know why they didn’t include a high-level map of the entire Earth, if only for context.
What else can we do offline?
Let’s try switching to Satellite view:
That doesn’t work. We have a cached street map view, but no other views.
Let’s look into the settings:
There are lots of pages here. Since most of these aren’t settings that people will change often, I won’t go through these now. Some options work offline, others don’t, and the only way to tell is to try accessing each section, which feels disconcerting.
Once I finished with the settings, I switched back to map mode. To my surprise, the map had entirely disappeared. Bad news if I had been depending on it.
All in all, a reasonable offline experience, with a few nasty surprises.
The good:
Remembers your last location
Holds an impressive amount of map detail in cache
The bad:
Forces me to guess what works online and what doesn’t
Maps disappear and won’t come back
APPLE MAPS
Google Maps has a reasonable offline experience, so what about Apple Maps? Let’s open it and find out:
First, Apple Maps refuses to show my last position on the map. Also, it doesn’t appear to cache any of the map, at this zoom level anyway.
And now I get a popup telling me that I need to turn off airplane mode to see my current location. I click “Cancel” instead. Let’s zoom out and see if any of the map has been cached.
The answer is “yes, but not much”. Only Portland, Oregon and the surrounding area. Let’s zoom out farther.
This is strange. If you’re going to show a pin for my last search result, it would make sense to cache that part of the map.
Let’s try Satellite view:
Nothing. Not terribly surprising, given that Google Maps didn’t cache Satellite view either. All in all, a pretty lackluster performance.
The good:
Not much good to be said about this offline experience
The bad:
Very limited caching of the map
Doesn’t show your last position
Productivity apps
Let's finish with a productivity app. The primary use case of a productivity app is to sync files across your devices, and let you open and work with those files.
DROPBOX
How well does Dropbox perform?
Everything looks great. What about photos?
Same here. Of course, this is what I would expect, since these files already exist on my device. Let’s look at the “Favorites” section now:
I don’t have any favorite files, but if I click “Learn more”…
…I get a “No Internet Connection” message. It seems like a strange choice to not cache instructions for using the app. Regardless, this is one of the nicer error messages I’ve seen. It’s informative, makes good use of the space, and is clever as well.
Let’s look at the “Settings” section:
Lots of options here. I won’t go through these, but as with Google Maps, the only way to know which options work offline and which don’t is to click them.
Now, let’s try to upload a file to Dropbox:
I choose “Add Files”…
…“Photos”, select a photo, and…
…get an error message. In this case, the error message is misleading. The message tells me that the upload failed, but when I close it…
…Dropbox queues the photo for upload later, like Facebook and Twitter do. The behavior is good, but the messaging around it is confusing. But all in all, a reasonable experience.
The good:
File uploads are queued while offline
“No internet connection” error messages are well-designed
The bad:
Help content is not available offline
“Upload failed” message is misleading
Principles of good offline design
AGGRESSIVE CACHING IS A BETTER USER EXPERIENCE
When I open an app, I expect to see lots of content there, regardless of whether I’m connected to the Internet or not. If the content isn’t there, I’ll get frustrated and switch to a different app which does a better job of caching the information I want to see.
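As a rough illustration of what "aggressive caching" means in code, here is a small, hypothetical cache-first read, written as a Python-style sketch rather than any particular app's implementation; the function names and the cache object are invented for the example.
def load_feed(fetch_remote, cache):
    # Cache-first read: always have something to show, refresh only when the network allows.
    cached_items = cache.get("feed", [])
    try:
        fresh_items = fetch_remote()   # may raise if there is no connection
        cache["feed"] = fresh_items    # overwrite the cache only once new data actually arrives
        return fresh_items
    except ConnectionError:
        return cached_items            # offline: fall back to whatever was cached last time
The point is simply that the cached copy is the default, not the fallback of last resort: the user sees content first, and the refresh happens opportunistically.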
LET PEOPLE INTERACT WITH THEIR CONTENT. ALWAYS.
One solution to caching is to show the information, but keep it read-only. This isn’t adequate. If I’m reading a post on my social feed, I want to comment while the thought is fresh in my mind. If you make me wait, I won’t interact with it at all.
It’s okay if the information won’t be uploaded until I have an Internet connection. I just want to record my thoughts. Instead, remind me that I’m offline and that my posts will go online when I do.
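One way to picture that "record now, upload later" behavior is a simple local action queue. The sketch below is hypothetical Python; the file name and function names are invented for illustration, not taken from Facebook, Twitter, or Dropbox.
import json
import os

QUEUE_FILE = "pending_actions.json"  # hypothetical local store for queued actions

def load_pending():
    if not os.path.exists(QUEUE_FILE):
        return []
    with open(QUEUE_FILE) as f:
        return json.load(f)

def queue_action(action):
    # Persist the action locally so nothing is lost if the app closes while offline.
    pending = load_pending()
    pending.append(action)
    with open(QUEUE_FILE, "w") as f:
        json.dump(pending, f)

def flush_queue(send, is_online):
    # Call this whenever connectivity returns: submit each queued action in order.
    if not is_online():
        return
    remaining = []
    for action in load_pending():
        try:
            send(action)
        except Exception:
            remaining.append(action)  # keep anything that still fails for the next retry
    with open(QUEUE_FILE, "w") as f:
        json.dump(remaining, f)
Paired with clear messaging ("your post will go live when you're back online"), this is all it takes to let people keep interacting with their content.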
MAKE ERROR MESSAGES INFORMATIVE AND CONSISTENT
A loading spinner that never stops isn’t helpful. If I don’t have Internet connectivity, and you don’t have my content cached, tell me. I won’t be offended, and I’ll be grateful that you set clear expectations for the experience.
DON’T LET PEOPLE START SOMETHING THEY CAN’T FINISH
If I get to the end of a multi-step workflow, only to find that I can’t complete it while offline, I’m going to be ticked off. Tell me up front. Better yet, don’t let me start the workflow. Consider hiding the workflow from the interface until I’m back online.
DON’T SHOW CONTRADICTORY INFORMATION
If you tell somebody that they have a new message, but when they open their inbox, there’s nothing there, they’ll be confused. Make sure all portions of your app are telling the same story.
WHEN CACHING, CHOOSE BREADTH OVER DEPTH.
If I can browse my feed, but not my own profile page, that’s a disjointed experience. It’s better to cache a little of everything than a lot of some things and nothing of others.
DESIGN YOUR EMPTY STATES WELL
They’re easy to overlook, but empty states often look… empty. Vast swaths of unused screen real-estate is wasteful and looks bad. Instead, use the potential frustration of an empty view to surprise and delight your users.
That said, even though an empty screen is bad, a screen filled with pointless UI is worse. Make your empty states simple and elegant, not cluttered and redundant.
NEVER SHOW THE RAW ERROR MESSAGE
Messages like Error: The operation couldn’t be completed. (kCFErrorDomainCFNetwork error 2.) are cryptic and scary. If you can’t connect to the Internet, tell people so in simple words.
REMEMBER WHAT YOUR USERS WERE DOING LAST
If your app is sensitive to location, be responsible and take users to where they were last. It’ll help them regain their bearings and know where they are. If you regain Internet connectivity, allow them to jump to their present location.
NEVER PURGE THE CACHE WHILE OFFLINE
I depend on my data to keep me informed, safe, and cognizant of my surroundings. If it suddenly disappears, I'll be lost, perhaps literally. Wait until I'm back online before you refresh my data.
And that’s it. Designing offline apps can feel like an edge case, but your users will be grateful that you took the time to give them what they needed, when they needed it.
Additional resources
Here are some other articles about designing offline mobile apps. Mention your favorite articles in the comments, and I’ll add them here: | https://uxdesign.cc/offline-93c2f8396124 | ['Daniel Sauble'] | 2015-12-01 22:56:56.339000+00:00 | ['Offline', 'Design', 'UX'] |
Jack Parsons and the Real “Suicide Squad” | Jack Parsons and the Real “Suicide Squad”
The Story of a Sorcerous Scientist
On October 2nd of 1914, a boy named Marvel Whiteside Parsons was born in Los Angeles, California. He later came to be known as John Whiteside “Jack” Parsons. Jack came from a very wealthy family. He grew up on Orange Grove Avenue in Pasadena, otherwise known as “millionaire mile”. The street was lined with extravagant mansions. More to the point, as a result of his privileged upbringing and daring intellect, Jack became a chemist, rocket engineer, and propulsion researcher. He would even go on to become associated with Caltech and was one of the principal founders of both the Jet Propulsion Laboratory (JPL) and the Aerojet Engineering Corporation. Jack Parsons also invented the first castable, composite rocket propellant. He pioneered the advancement of liquid-fuel and solid-fuel rockets. This was cutting-edge technology at the time. Jack was at the forefront of the field of rocketry. He even invented the propellant technology that was used on the Space Shuttle and in the Minuteman missile. Of course, the thing was that along with having a penchant for physics, Jack Parsons was also a Thelemite occultist who was deeply devoted to Aleister Crowley and his teachings. In 1939 he went to the Church of Thelema on Winona Boulevard, in Hollywood, and witnessed a ceremonial performance of the Gnostic Mass. Shortly thereafter Parsons came to strongly believe in the power of magick. Moreover, he felt that it was a force that could be explained by quantum mechanics. In many ways, Jack was a chemist by day and an alchemist by night. Parsons even went so far as to recite the “Hymn to Pan” prior to every test launch of a new rocket. He made absolutely no distinction between being a scientist and an occultist.
In his youth, Jack was utterly fascinated by stories of supernatural powers, as well as scientific wonders. He once tried to summon the Devil right in his own bedroom. Jack Parsons was more than willing to sell his soul to Satan in order to learn the secrets of the universe. As a boy, Jack dreamed about one day building the first rocket ship that would take men to the Moon. Although at the time, the idea was considered to be nothing more than spectacular science-fiction. Back then most people thought that space travel was impossible, but not Jack. He was a radical visionary thinker. Unfortunately for him, as a result of being a bit of a bookworm and a spoiled brat, Parsons was often bullied in school. He had but one friend, a boy named Edward Forman. They shared a similar passion for sci-fi pulp fiction, plus Ed stood up for Jack on more than one occasion. In 1928 they even went so far as to adopt the Latin motto per aspera ad astrata (through hardship to the stars) at which point they became amateur rocketeers. They soon pockmarked the Arroyo Seco canyon with their dangerously explosive experiments. This is around the time when Jack began using glue as a binding agent for the loose powder in his DIY rockets. Jack’s single-minded passion for building spaceships even distracted him from doing his schoolwork. So, ultimately, Jack’s mother decided to send him to a military boarding school to straighten him out but in typical fashion, Parsons was expelled for blowing up his dormitory toilets.
Although his mother and father had been divorced for quite some time when Jack’s father died in 1931, he and his mother basically lost everything at that point. This was all the more important because Parsons was attending college to study physics and chemistry at the time but he had to drop out because they could no longer afford his tuition. In the end, he was forced to stop attending Stanford University during the Great Depression. Then, in 1934 Jack reunited with his childhood friend Ed Forman and a graduate student named Frank Malina to form the Caltech-affiliated Guggenheim Aeronautical Laboratory Rocket Research Group (GALCIT) supported by chairman Theodore von Karman. As their experiments got more and more dangerous they had to move out into the desert to an area called the Devil’s Gate Dam. In time they even became known around campus as the “Suicide Squad”. In 1939 the GALCIT Group gained funding from the National Academy of Sciences to work on the Jet-Assisted Take Off (JATO) project for the US military. Their rocket experiments were the cover story of the August 1940 edition of Popular Mechanics, which discussed the prospect of being able to ascend above Earth’s atmosphere and one day even reach the Moon. Then, in 1942 during WWII they founded Aerojet to develop and sell the technology. In this way, the Suicide Squad became JPL in 1943.
Two years before JPL emerged Jack Parsons and his wife Helen Northrup joined the Agape Lodge, the Californian branch of the Thelemite Ordo Templi Orientis (OTO). Then, at Aleister Crowley’s bidding, Jack Parsons replaced Wilfred Talbot Smith as its leader in 1942. Parsons ran the Lodge from his mansion on Orange Grove Avenue, otherwise known as the “Parsonage” in Pasadena. In the end, though, Jack was expelled from JPL and Aerojet in 1944 due to the OTO’s rather infamous reputation. Needless to say, the academic community wasn’t happy about working with a drug-addicted sex-crazed sorcerous scientist. They were also really just tired of Parsons’ hazardous workplace conduct. To make matters worse, the following year Jack separated from Helen after having an affair with her sister Sara. Then Sara left Jack for his so-called friend L. Ron Hubbard. Around that time, Jack performed the “Babalon Working” to bring an incarnation of the Thelemic goddess Babalon into his life. In 1946, Jack went out into the Mojave Desert taking down seventy-seven clauses of what came to be known as his Book of Babalon. Then, shortly after he summoned the Scarlet Woman, Jack returned home where he met Marjorie Cameron. She immediately became his new lover. Following that, Hubbard defrauded Parsons of his life savings to start Scientology, and Jack resigned from the OTO. As time went on Jack Parsons tried to keep his childhood dream alive by working as a consultant for the Israeli rocket program, but he was soon accused of espionage and was forced to abandon his lifelong career. Then, at the age of only 37, the enigmatic founder of the Suicide Squad perished in a catastrophic home laboratory explosion. In a death most befitting of a mad scientist, Jack Parsons went out with a bang. | https://joshuashawnmichaelhehe.medium.com/jack-parsons-and-the-real-suicide-squad-28450f9e0a6 | ['Joshua Hehe'] | 2018-10-16 20:27:29.577000+00:00 | ['Religion', 'Biography', 'History', 'NASA', 'Science'] |
What My Time as a Management Consultant Taught Me About Parenting | What My Time as a Management Consultant Taught Me About Parenting
Somehow the “balls to the wall” culture made me a better mom.
Photo by Markus Spiske on Unsplash
When I finally graduated from my doctoral program, I was living the dream. Well, maybe not the dream you’re thinking of. I didn’t have much money. But I was living the dream I embraced as a young impressionable kid watching the Mary Tyler Moore Show. I was a single girl living in a teeny tiny one-bedroom apartment, making my way in the big city.
And I was ready to make my mark.
My first job out of grad school was as an associate consultant in a large international management consulting firm. My job was probably exactly like you’d imagine. Lots of smart, driven, competitive people traveling a lot, working hard and playing hard. At the end of each month, all of the consultants were rank-ordered in terms of billable hours. And the list was posted in the break room. True story. It was an intense culture.
I used the term “balls to the wall” in the title of this piece. I didn’t even know that phrase until I took this job. But I quickly figured out what it meant and how to live that motto even without the requisite balls.
I’ve often joked that years spent working in management consulting are like dog years. It felt like I got 7 years of experience in that very first year. I did some really cool projects and have some great “in the trenches” stories from those days. Like the time I arrived in Mexico City without a single peso. But anyway… Let’s just say, I learned a ton of things.
However, I wouldn’t say I learned many things that prepared me to be a mother. Quite the contrary, I might say that I learned a lot of things that had the potential to make me a very very bad mother. For example, I learned to push through even when I was exhausted. When I was a young fresh-faced associate consultant, that was fine. But given how short my patience tends to be when I’m tired, it probably wouldn’t be my best strategy for parenting.
As a consultant, I also learned how to manage projects like a boss. Kids are not projects, however. And they really don’t take kindly to being treated as such. There was no spreadsheet in the world that was going to get my toddlers potty trained any faster. Or make my first grader a better reader. Or make my anxious child adjust to a new school more easily. Or raise those scores on their college entrance exams.
Sadly, when I became a mom, I realized that some of my hard-won consulting skills needed to go. I worked really really hard at unlearning some of those very same skills I had been so proud of back in my crazy management consulting days.
I learned to stop looking at everything as a deliverable. I stopped seeing time as money. I learned to slow down and just enjoy the moment. Not perfectly… by any stretch, because some of those things weren’t really skills, but personality traits. Or just my temperament. If I’m honest, I have to admit that I’m pretty controlling and type-A by nature.
Traits and temperaments tend to be difficult to change. But little by little I’ve gotten better. My kids are teenagers now and I’m still unlearning some of those skills, traits, and lessons from my consulting days.
However, there is actually one skill from those days that has continued to serve me well as a mother.
Back when I started my career, corporate America was in the heyday of the vision statement. Do you remember the vision statement? Lots and lots of organizations hired people like me to help them craft the perfect set of words. Every single one of my clients assured me that their vision statement would not simply be something that they hung on the wall and printed in their annual reports.
However, that’s exactly what most vision statements ended up being.
It would have been easy to become disillusioned with the entire idea of a vision statement after seeing the fate of almost every one I helped create. I definitely started to doubt that they mattered or that they affected corporate performance in any way. After a while, I started to feel a little like a snake oil salesman when I worked on yet another vision statement project.
However, deep down I still believe in the power of the vision statement. Maybe it’s more accurate to say that I believe in the power of a clear vision.
The day-to-day of parenting isn't particularly glamorous. On those days when you feel like you can't look at another sticker chart, or listen to your burgeoning reader read another book, or play one more game of chutes and ladders, or even cook one more stinking dinner for people who would prefer a happy meal from McDonald's anyway, a clear vision can really help you pull through.
A clear vision is also a great aid when making the 700 billion decisions and choices parents make for their kids or regarding their kids. Visions help us know what things to care about, what issues to take seriously, and which ones really aren’t all that important in the grand scheme of your vision. (And spoiler alert, it turns out most problems fall into the latter category.)
My vision as a parent is that I hope to raise nice, self-sufficient humans who have a pretty good idea of who they are, how they work, and how to be good members of society. Now there are lots of details that fall under that big vision. But in a broad sense, that vision has been my touchstone for many years.
When the birthday party invitation didn’t arrive or the teacher assignment wasn’t what I had hoped, I asked myself, “will this prevent my kids from becoming nice humans?” “will they be unable to become self-sufficient adults?” Usually, the answer was no.
Of course, there have been times I’ve lost sight of the vision. I found myself twisted in knots over something that my vision would have said wasn’t worth getting so worked up about. Those are the times I am grateful that my wise friends would remind me to lighten up, or to sleep on it, or just asked me if I would still care about whatever I was driving myself crazy thinking about in a year or two.
And occasionally, there have been issues, problems, or setbacks that felt as if they might postpone the vision. But as I said, I stopped thinking about time in terms of billable hours quite some time ago. When it comes to parenting, time really isn’t money. Time is a gift. I tend to think of setbacks just as things unfolding on a different schedule now.
No one is more surprised than I am to realize that I actually learned something as a management consultant that has been so core to my life as a parent. It’s even more surprising when I consider the fate of most vision statements out there in the world.
Perhaps that’s the way life goes. Sometimes the things we least expect to have value actually stick with us for years. Sometimes the people we least expect to teach us things are the ones who impart the biggest lessons. And sometimes the things we thought might be a complete waste of time (or our client’s money), actually turn out to be valuable.
I’ll always be grateful for those “dog years” I spent as a management consultant. And perhaps I’m most grateful for what I learned about creating a clear vision.
And sticking with it.
*******************
I help parents of anxious kids and teenagers relax and enjoy parenting again while helping their children create the bright future they deserve. If you’re struggling to parent a perfect but anxious child, I’d love to give you a free downloadable copy of my book, Stop Worrying About Your Anxious Child. | https://medium.com/candour/what-my-time-as-a-management-consultant-taught-me-about-parenting-65e1a14442bd | ['Dr. Tonya Crombie'] | 2020-07-16 17:39:41.685000+00:00 | ['Self-awareness', 'Self Improvement', 'Life Lessons', 'Personal Development', 'Parenting'] |
The Common Sense of Science | The Common Sense of Science
Jacob Bronowski’s meditation on the history and philosophy of science
International travel has its benefits. Somewhere between Sydney, Toronto and back again, I found time to read Jacob Bronowski’s The Common Sense of Science. First published in 1951, it’s as profound and relevant today — essential reading for anyone interested in science and science communication (Bronowski would doubtless argue that the two are inseparable).
The book focuses on three key themes in science — order, cause, and chance — introduced during a historical trip from Euclid to Einstein, via Renaissance Europe and the Industrial Revolution.
Science, according to Bronowski, begins with the notion of order — the idea that there are distinct categories of things that share important properties and so can be treated as interchangeable. This ordering is itself an experimental act. It begins with intuitions but these need to be tested:
“It is an experimental activity of trial and error. We must from the outset underline its empirical nature, because there is no test for what is like and what is unlike except an empirical one: that the arrangement of things in these groups chimes and fits with the kind of world, the kind of life which we act out.”
Ordering also depends on the question at hand. We often talk of falsely comparing apples with oranges. Yet Newton’s key insight was that, in the context of gravity, apples and moons may be ordered together.
Newton is also credited with the modern notion of cause, in which the universe is conceived as a machine whose future can, at least in theory, be predicted if its current state is known in sufficient detail:
“Our conception of cause and effect is this: that given a definite configuration of wholly material things, there will always follow upon it the same observable event… As the sun sets, radio reception improves. As we press the switch the light goes on. As the child grows, it becomes able to speak. And if the expected does not happen, if reception does not improve, or the lamp does not light up, or the child persists in gurgling, then we are confident that the configuration from which we started was not the same… The present influences the future and, more, it determines it.”
However, Bronowski points to flaws in this characterisation of cause and effect. The problem is that we cannot fully know the present, even hypothetically, and so our predictions of the future will always carry uncertainty.
This is the third theme — chance — the idea that things are never fully knowable or predictable. The best science can hope for is to reduce uncertainty, to characterise the likelihood of different potential outcomes. It is the most recent of the concepts and the least appreciated, by scientists and non-scientists alike.
“The future does not already exist; it can only be predicted. We must be content to map the places into which it may move and to assign a greater or less likelihood to this or that of its areas of uncertainty.”
Beyond order, cause, and chance, there are other key themes running throughout. Bronowski begins by drawing historical parallels between science and the arts. There was never a “golden age” of the arts that has been somehow supplanted by science. In every great civilization, scientists and artists have “walked together”. Ultimately, science is like any form of human thought or culture. What sets it apart is only its organisation.
The pay-off comes in Chapter 8, when Bronowski finally cements the link between science and literature in perhaps my favourite passage of the book:
“We cannot define truth in science until we move from fact to law. And within the body of laws in turn, what impresses us as truth is the orderly coherence of the pieces. They fit together like the characters in a great novel, or like the words in a poem. Indeed, we should keep that last analogy by us always. For science is a language, and like a language, it defines its parts by the way they make up a meaning. Every word in a sentence has some uncertainty of definition, and yet the sentence defines its own meaning and that of its constituents conclusively. It is the internal unity and coherence of science which gives it truth, and which make it a better system of prediction than any less orderly system” (p136).
The book ends where it began with a defence of science. Bronowski was writing just after the end of World War II and the unleashing of nuclear weapons that owed their existence to Einstein and his famous equation, E = mc². Science stood accused of providing the means of our own destruction.
Bronowski argues that the blame really lies with a society that determines the direction that science takes and provides a context in which science is used to nefarious ends. But he does not absolve scientists entirely:
“They have enjoyed acting the mysterious stranger, the powerful voice without emotion, the expert and the god. They have failed to make themselves comfortable in the talk of people on the street; no one taught them the knack, of course, but they were not keen to learn. And now they find the distance which they enjoyed has turned to distrust, and the awe has turned to fear; and people who are by no means fools really believe that we should be better off without science” (p146).
The Common Sense of Science is now over 65 years old. While some of the language is quite dated and at times pompous, the central message still rings true. | https://medium.com/dr-jon-brock/the-common-sense-of-science-c463af768296 | ['Jon Brock'] | 2019-01-10 12:16:30.737000+00:00 | ['Science', 'History', 'Science Communication', 'Philosophy'] |
Visual design is not just making things pretty | Most laymen have this idea that designers are just operating complicated software and pushing pixels on the screen to make pretty things.
Sometimes, you get feedback from teammates who are not designers telling you:
How can we make this look better? Currently, it is almost there, but I need it to be prettier and more oomph! Can you help me to fill up this space? I’m not sure how, it looks very empty now. Workaround it.
There are times I feel flattered but speechless. Being that designer in the office who makes things pretty, while receiving feedback for changes that are ridiculously vague, can be difficult. 😅
“How can we make this look better? Currently, it is almost there, but I need it to be prettier and more oomph!”
Visual design is more about:
Communicating the function and solution of how things work
Establishing hierarchy and order through a visual system
Optimising the user’s experience
Strengthening brand perception
Simplifying complex issues
Creating a balance of consistency and delight
And less about…
The likes and dislikes
Making things look attractive or better
Slapping trendy colours, fonts, and patterns
Filling up every space just because it is “too empty”
Aesthetics in visual design
What about aesthetics? Isn’t it important? Yes, the function of aesthetics plays an important role in visual design. Aesthetically pleasing design stimulates and satisfies our senses. It gives us pleasure, evokes positive emotions, and eases the user’s experience. For example, a website with a pleasant user interface and seamless experience leaves a better impression and tends to have more credibility.
However, aesthetics is not only about attractive design but also about the perception of usability. People perceive attractive things as more usable, which forms a bond between the user and the design.
As designers, we can leverage design to influence human behaviour and emotion.
Visual design matters all the time
It is a process that needs to account for the entire lifecycle of a product's engagement. Visual design has to be integrated from beginning to end, as deliverables can take a long time. Not forgetting that pushing pixels is time-consuming, since iterations take time during the wireframing and prototyping stages.
Because visual design hardly happens by accident, this process has to be thoughtfully laid out and executed.
Communicate a consistent brand personality: Avoid adding too many elements and visual styles. Sometimes less is more. Sometimes design refactoring is needed if the project gets bigger: cleaning up and revising the style guide/mockup might be necessary.
Avoid adding too many elements and visual styles. Sometimes less is more. Sometimes design refactoring is needed if the project gets bigger– cleaning up and revising the style guide/mockup might be necessary. Establish a clear visual and typographic hierarchy: This ensures you are communicating your design effectively, how you want to influence your users to view them. Below is a list of key principles in visual design.
– Size and scale to draw the user’s attention.
– Pairing typefaces increases visual hierarchy. The size and weight of fonts play a part in how you want the content to be perceived. For example, H1 (Heading 1) is the largest font, used for the headline; H2 (Heading 2) is the secondary font, used for subheadings; and H3 and other styles cover other areas of your content.
– Colours to add contrast and give importance to elements in your design.
– Perspective to create an illusion of depth that draws focus to important areas of your design.
– Viewing patterns such as the F and Z patterns are used generally. Your design and content flow are crucial in helping the user to have a better reading experience.
– Spacing, such as breathable white space, creates overall balance and makes calls to action stand out.
– Symmetry and balanced composition ensure no single area of the design overpowers other areas. Elements are organised, balanced, and fit together in a seamless whole, which makes the design easy to navigate without feeling clueless.
– Proximity of elements gives the user the perception of which group of elements are related and allows them to engage further.
– Motion effects can engage the user in a compelling way when done subtly and meaningfully. This is another way to call for attention as well.
Follow consistent standards and patterns: Users bring their own expectations from things they have already experienced elsewhere, so confusion occurs when they have to work too hard to piece together information. This may lead to getting stuck for too long and feeling frustrated, resulting in an unpleasant user experience. Make things easier for users, and don't force them to learn new tricks and representations for each task. It will reduce the thinking required and eliminate confusion.
Research in visual design
Lastly, we must not underestimate the power of research! Research allows room for exploration and provides fuel for new ideas.
Doing research is vital before presenting your design to stakeholders and owners. It helps you support your design solutions and processes, and builds shared debate and understanding of how the project should be done.
For example, when developing a product, insights into users' behaviour provide an understanding of how they act, which can lead to empathy. Research can also reveal clues and patterns to help you create a better user interface for your product. Ultimately, it gives you as a designer, and the stakeholders, a more nuanced understanding of the needs of the target audience you are designing for, while providing knowledge backed by research and addressing questions throughout the design process.
Nonetheless, research is not exclusive to visual design; it applies to all other fields of design too. :)
In conclusion, visual design is much bigger than pixel perfection and making things pretty. It is about problem-solving and creating better experiences and places for people through thoughtful design work! | https://uffingoffduty.medium.com/visual-design-is-not-just-making-things-pretty-4828e49a570e | ['Mabel Ho'] | 2020-12-30 04:27:24.067000+00:00 | ['UI Design', 'Design', 'Visual Design'] |
Dear Universe | Dear Universe
I demand a refund for having a terrible year
Photo by NASA on Unsplash
Dear Universe
Please can you review my year as I’m sure it was mishandled by your staff. They have failed to maintain the level of good fortune and positive vibes that I’m accustomed to.
I have found myself constantly surrounded by idiots. Why on Earth would you think this is a great initiative? They can barely answer my questions, consistently look at me dumb-founded and I’ve caught at least two of them fornicating in the stockroom.
As you know, this year hasn’t been easy. I’ve cajoled Gary in accounts to finally wear deodorant but have still to address the halitosis situation. Our Miami office is knee deep in crime and I fear for the safety of our staff as the city continues to sink. This was not advertised in your 84 page glossy brochure.
You recommended I live in Australia and I quote:
“It is actually undoubted that people in Australia enjoy high-quality life. A low population level, with low pollution and plenty of fresh air available along with some great natural landscapes and beautiful scenery.”
What a joke that is! For the last month I’ve suffered asthma attacks and struggled for breath as bush fires raged near my home. The fresh air is chocked full of smoke and I feel the natural landscape is constantly threatening my life.
You didn’t even mention that 9 of the world’s deadliest snakes live here.
Quite simply, I demand a refund!
When I signed up for a ‘good time’ at the start of 2019 in what you described as ‘Deal of the Century!’ and ‘Not to be Missed!’ I expected a whole lot more.
The service is beyond a joke.
Did you really expect me to say nothing about the long response time? I was left on hold late June when I rang in to complain about my lack of net-worth. It was almost 9 hours before you sent a sign to me by cutting off my electricity. It wasn’t quite the response I was expecting Universe.
Your unprofessional approach in emergencies simply cannot continue. Every interaction has been met with silence. I even tried the emergency ‘Reach-out to G-d’ hotline. I followed the instructions carefully. I said five hail-mary’s and delivered a sermon to my next door neighbor who was greedily fornicating. I even got on my knees for additional prayer servicing. Yet I still received the cold-shoulder.
It makes me believe, that despite being in this industry for billions of years, that you lack experience in dealing with people. How could you let life get so out of control? You told me to sacrifice everything.
“You do you,” you said, “and everything in the Universe will fall into place,” you said. When? What’s the timeline for this? What’s your ETA? How long do I need to wait?
I would try emailing Head-office but conveniently there’s no address. Do you think it’s a joke calling the CEO the ‘Big Guy in the Sky’? How can I possibly get in touch? I spoke to Belinda and she clearly believes he doesn’t exist. Even worse, Accounts have suggested it’s a front for Him downstairs. I have no idea what they’re talking about and all this conjecture is making me nervous.
Can I add that your attention to detail is non-existent (like your Head-office). When I ask for a Triple, Venti, Half Sweet, Non-Fat, Caramel Macchiato, I expect to receive EXACTLY that. Not get back a Flat White and a half-arsed apology about the state of employment and how you can’t trust the youth’s to do a proper job. I PAID YOU IN ADVANCE!
These situations are usually considered universally unacceptable Universe.
I’ve endured all the poor results, negative reviews, pay-cuts, decommissions, unemployment, bad pay-outs, divorce, every single Miami Dolphin loss and an ever receding hairline, assuming you would take action.
This is so damaging to your brand Universe. How do you expect to run a successful business on smoke and mirrors?
You promised me the world but all I got was a dumpster fire.
I will not recommend ANYBODY to live life as part of your Universal program. It’s beyond a joke.
My bad experience with your outfit this year has left me no option but to complain. I’ve already written half a dozen bad reviews on Amazon. I’ve even posted to Trip Advisor and made sure ALL the local media knows how poorly you treat your customers.
Please can you ensure standards are raised for 2020.
I expect a full refund by the end of the week.
Yours Angrily
Reuben Salsa | https://medium.com/the-bad-influence/dear-universe-d5297dec0a47 | ['Reuben Salsa'] | 2019-12-03 17:24:35.166000+00:00 | ['Humor', 'Ideas', 'Satire', 'Entrepreneurship', 'Life'] |
Machine Learning | Let us consider that we are designing a machine learning model. A model is said to be a good machine learning model, if it generalizes any new input data from the problem domain in a proper way. This helps us to make predictions in the future data, that data model has never seen. Now, suppose we want to check how well our machine learning model learns and generalizes to the new data. For that we have over fitting and under fitting, which are majorly responsible for the poor performances of the machine learning algorithms.
Underfitting:
A statistical model or a machine learning algorithm is said to have underfitting when it cannot capture the underlying trend of the data. (It’s just like trying to fit undersized pants!) Underfitting destroys the accuracy of our machine learning model. Its occurrence simply means that our model or the algorithm does not fit the data well enough. It usually happens when we have less data to build an accurate model and also when we try to build a linear model with a non-linear data. In such cases the rules of the machine learning model are too easy and flexible to be applied on such a minimal data and therefore the model will probably make a lot of wrong predictions. Underfitting can be avoided by using more data and also reducing the features by feature selection.
Overfitting:
A statistical model is said to be overfitted, when we train it with a lot of data (just like fitting ourselves in an oversized pants!). When a model gets trained with so much of data, it starts learning from the noise and inaccurate data entries in our data set. Then the model does not categorize the data correctly, because of too much of details and noise. The causes of overfitting are the non-parametric and non-linear methods because these types of machine learning algorithms have more freedom in building the model based on the dataset and therefore they can really build unrealistic models. A solution to avoid overfitting is using a linear algorithm if we have linear data or using the parameters like the maximal depth if we are using decision trees.
How to avoid Overfitting:
The commonly used methodologies are:
Cross- Validation:
A standard way to find out-of-sample prediction error is to use 5-fold cross validation.
Early Stopping:
Its rules provide us the guidance as to how many iterations can be run before learner begins to over-fit.
Pruning:
Pruning is extensively used while building related models. It simply removes the nodes which add little predictive power for the problem in hand.
Regularization:
It introduces a cost term for bringing in more features with the objective function. Hence it tries to push the coefficients for many variables to zero and hence reduce cost term
Good Fit in a Statistical Model:
Ideally, the case when the model makes the predictions with 0 error, is said to have a good fit on the data. This situation is achievable at a spot between overfitting and underfitting. In order to understand it we will have to look at the performance of our model with the passage of time, while it is learning from training dataset. With the passage of time, our model will keep on learning and thus the error for the model on the training and testing data will keep on decreasing. If it will learn for too long, the model will become more prone to overfitting due to presence of noise and less useful details. Hence the performance of our model will decrease. In order to get a good fit, we will stop at a point just before where the error starts increasing. At this point the model is said to have good skills on training dataset as well our unseen testing dataset.
Introduction to Dimensionality Reduction
In statistics, machine learning, and information theory, dimensionality reduction or dimension reduction is the process of reducing the number of random variables under consideration by obtaining a set of principal variables. It can be divided into feature selection and feature extraction.
Machine Learning:
As discussed in this article, machine learning is nothing but a field of study which allows computers to “learn” like humans without any need of explicit programming.
What is Predictive Modeling:
Predictive modeling is a probabilistic process that allows us to forecast outcomes, on the basis of some predictors. These predictors are basically features that come into play when deciding the final result, i.e. the outcome of the model.
Dimensionality Reduction
In machine learning classification problems, there are often too many factors on the basis of which the final classification is done. These factors are basically variables called features. The higher the number of features, the harder it gets to visualize the training set and then work on it. Sometimes, most of these features are correlated, and hence redundant. This is where dimensionality reduction algorithms come into play.
Dimensionality reduction is the process of reducing the number of random variables under consideration, by obtaining a set of principal variables. It can be divided into feature selection and feature extraction.
Dimensionality Reduction important in Machine Learning and Predictive Modeling
An intuitive example of dimensionality reduction can be discussed through a simple email classification problem, where we need to classify whether the e-mail is spam or not. This can involve a large number of features, such as whether or not the e-mail has a generic title, the content of the e-mail, whether the e-mail uses a template, etc. However, some of these features may overlap. In another condition, a classification problem that relies on both humidity and rainfall can be collapsed into just one underlying feature, since both of the aforementioned are correlated to a high degree. Hence, we can reduce the number of features in such problems. A 3-D classification problem can be hard to visualize, whereas a 2-D one can be mapped to a simple 2 dimensional space, and a 1-D problem to a simple line. The below figure illustrates this concept, where a 3-D feature space is split into two 1-D feature spaces, and later, if found to be correlated, the number of features can be reduced even further.
There are two components of dimensionality reduction:
Feature selection:
In this, we try to find a subset of the original set of variables, or features, to get a smaller subset which can be used to model the problem. It usually involves three ways:
1. Filter
2 .Wrapper
3 .Embedded
Feature extraction:
This reduces the data in a high dimensional space to a lower dimension space, i.e. a space with lesser no. of dimensions.
Methods of Dimensionality Reduction
The various methods used for dimensionality reduction include:
Principal Component Analysis (PCA)
Linear Discriminant Analysis (LDA)
Generalized Discriminant Analysis (GDA)
Dimensionality reduction may be both linear or non-linear, depending upon the method used. The prime linear method, called Principal Component Analysis, or PCA, is discussed below.
Principal Component Analysis
This method was introduced by Karl Pearson. It works on a condition that while the data in a higher dimensional space is mapped to data in a lower dimension space, the variance of the data in the lower dimensional space should be maximum.
It involves the following steps:
Construct the covariance matrix of the data.
Compute the eigenvectors of this matrix.
Eigenvectors corresponding to the largest eigenvalues are used to reconstruct a large fraction of variance of the original data. Hence, we are left with a lesser number of eigenvectors, and there might have been some data loss in the process. But, the most important variances should be retained by the remaining eigenvectors.
Advantages of Dimensionality Reduction
• It helps in data compression, and hence reduced storage space.
It reduces computation time.
It also helps remove redundant features, if any.
Disadvantages of Dimensionality Reduction
• It may lead to some amount of data loss. | https://medium.com/analytics-vidhya/the-ultimate-guide-of-over-fitting-under-fitting-and-dimensionality-reduction-28f44f632768 | ['Sameer Sitoula'] | 2020-09-08 15:56:54.191000+00:00 | ['Machine Learning', 'Overfitting', 'Data Science', 'Underfitting', 'Dimensionality Reduction'] |
Intelligent virtual reality to support learning in authentic environments | Do you know how to improve learning through VR?
Next year, the number of VR users will grow to 171 million. VR market expected to increase to a $15.9 billion industry by 2019. This technology may be helpful for different areas including learning. A 3D generator of environments can be used to make education more efficient. VR technology is already used in hundreds of classrooms in the U.S. and Europe.
Support learning
Many learners in Primary school are struggling to reach appropriate levels of understanding and skills in basic subjects. Google has started to create VR-education that teaches using students’ preferences to play. Also, they made the Cardboard — VR viewer for mobile devices that allows exploring different countries in various centuries.
The company Discovr works on VR software, and they found that VR helped to retain knowledge 80% more than by using traditional ways to teach. Also, there is education app named AI that develops avatars and communicating as in social media to assess learners with questions that require searching the answers. Users get valuable feedback to help to cope with difficult issues.
Virtual reality also used to teach not only basic subjects but helps to explore different cultures, opinions and so on. For example, the company Diageo wants to teach through virtual reality about negative results of driving when you drunk. They suggest experience becoming a passenger seat of a car with a drunk driver by wearing a VR headset.
Educators have used VR tools to create recreations of historic sites and engage learners in subjects such as economics, literature, and history as well. Learners may receive transformative experiences through different interactive resources.
Use simulations to implement new skills
Virtual reality brings immersive experiences that create situations to which the learners would not have access. It helps to provide ‘what if’ scenarios to discover and manipulate aspects of virtual reality. Students learn to implement new skills and notice their weaknesses.
For example, a virtual submarine helps to discover natural processes that appear under the surface of a rock pool. Learners may explore things at a microscopic level, investigate ancient countries.
AI can supplement virtual reality, providing a possibility to interact with and give feedback to the learner’s actions as in real life. Intelligent Tutoring Systems lead the process as well, give advices for the students, ensure that they reach appropriate learning goals without becoming frustrated.
Apply virtual pedagogical agents
A social context will maintain cooperative activities and it will create better personal interactions. Virtual agents are aimed to build long-term cooperation with the learners. New technology allows creating such personal tutors to improve learning outcomes.
Virtual coaches are important elements in digital environments such as edutainment applications. They help to lead the learning process and motivate students. The virtual characters provide cognitive support, decrease learners’ frustration and act as educational companions. Virtual assistants may be present verbally, through video, as virtual reality avatars or artificial characters.
These coaches require human-like behavior with appropriate movements and gestures. Virtual agents need to support conversations and use speech to explain something. The researches proved that human voice helps to remember and understand better the material.
You need to use virtual characters for delivering guidance, not only for engagement. The pedagogical agents may educate how to cope with difficult situations as well such as bullying incidents. For example, FearNot is a virtual environment where students who have this negative experience play the role of an invisible advisor to a character in the play who is bullied. It helps to discover bullying issues and be prepared.
Many learners in Primary school are struggling to reach appropriate levels of understanding and skills in basic subjects. Google has started to create VR-education that teaches using students’ preferences to play. Also, they made the Cardboard — VR viewer for mobile devices that allows exploring different countries in various centuries.
Virtual reality also used to teach not only basic subjects but helps to explore different cultures, opinions and so on. For example, the company Diageo wants to teach through virtual reality about negative results of driving when you drunk. They suggest experience becoming a passenger seat of a car with a drunk driver by wearing a VR headset.
Virtual coaches are important elements in digital environments such as edutainment applications. They help to lead the learning process and motivate students. The virtual characters provide cognitive support, decrease learners’ frustration and act as educational companions. Virtual assistants may be present verbally, through video, as virtual reality avatars or artificial characters.
©Itsquiz — Be competent! | https://medium.com/age-of-awareness/intelligent-virtual-reality-to-support-learning-in-authentic-environments-263c14fa4640 | [] | 2017-02-13 11:56:33.367000+00:00 | ['Technology', 'Software', 'Education', 'AI', 'Virtual Reality'] |
Image Creation for Non-Artists (OpenCV Project Walkthrough) | Image Creation for Non-Artists (OpenCV Project Walkthrough)
An attempt at exploring how computer vision and image processing techniques can allow non-artists to create Art
The end result
This project stemmed from my predilection of the visual arts — as a computing student, I’ve always envied art students in their studios with splodges of colors and pristine white canvases. However, when I’m faced with a blank canvas and a (few) bottle(s) of paint, I don’t know where to start. Without the hours of training Art students have, I simply can’t go very far unless I’m following some Bob Ross tutorial. That’s why I’ve decided to explore and see if I can code something that will enable me to create art “automatically”.
I did this project a looong time ago for my final year project when I wanted to create some software program that can allow its users to sketch something stickman-like on a pre-selected background, select the category of what was drawn, and the programme will intelligently figure out which image to poisson-blend in. After that, the user can also select a neural transfer style to be applied on the image.
So in this article I will be giving a brief walkthrough of this project. Because there are many parts to it, I won’t be going in-depth with the technical implementations. However, if you’d like me to do so, leave a comment and let me know! :)
Part 1 — Selection of the background image
Because I wanted the user to have a range of images to choose from, I’ve coded a python script that acted as an image search engine. If an image is selected as the user browses through a range of available background images, similar looking background images would be retrieved. The similarity is calculated based on colour histograms in the HSV colour space. A more detailed explanation can be found here.
Here’s the result. If a user selects this background image:
Similar looking images would be retrieved:
Part 2 — Start drawing!
This is the fun part! The program seeks to match whatever you’re drawing with the set of images in the database. So if you draw an image with the wings spread out versus if you draw an oval (for example) the retrieved results would be different. Additionally, because the programme is not as robust, the user would have to specify the category (for example bird or boat, or car). For this particular project, I have six categories. The drawing function is also coded in Python with openCV. Once the outline is drawn, the list of coordinates would be saved. 100 sample points (the number I used for my testing, this can vary — the more points the more accurate but the programme would take longer) was taken from this outline of the drawing.
The drawing did not have to be this specific, it could be just a stickman bird figure too (or stickbird lmao)
Part 3 — Comparing the sample points with pre-processed image descriptors
Initially, I spent a very long time trying to figure out how to use image segmentation methods to get a clean cut of the target image objects so I could crop them out of their original images, figure out which one(s) matched my sketch the most, and them poisson-blend into my result image. However, this was really tricky to do and OpenCV’s image saliency techniques only worked well with certain types of images. For example,
For these images, clean masks can be obtained
However, if the background gets a little complicated or if the subject is not as clear, this happens:
Hence, I decided to use the COCODataset instead.
These are some of the masks of the “birds” category
With these masks, I can use the outline of my sketch to compare against the shape context histogram descriptors of these masks. How it works can be broken down into four steps:
Step 1: Finding the collection of points on shape contour Step 2: Computing the shape context Step 3: Computing the cost matrix Step 4: Finding the matching that minimizes total cost
BY THE WAY, shape context is another fun topic to discuss that’ll take a full article by itself, so I shall not be discussing it here — but do let me know if that’ll be of interest!
Here’s some results:
So it works OKAY. The problem with the Hungarian matching algorithm is that it is extremely time costly — its O(n³)! This is also the part where I’m thinking of implementing some ML/DL techniques to speed up the process.
Part 4 — Blending the closest match into the selected background image
The last part of this project (after using the target mask to “crop” the image out) is to use poisson blending to blend the image into our initially selected background image.
Poisson blending is one of the gradient domain image processing methods. Pixels in the resultant image are computed by solving a Poisson equation.
For each of the final pixels:
if mask(x,y) is 0:
final(x,y) = target(x,y)
else:
final(x,y) = 0
for each neighbor (nx,ny) in (x-1,y), (x+1,y), (x,y-1), (x,y+1):
final(x,y) += source(x,y) - source(nx,ny)
if mask(nx,ny) is 0:
final(x,y) += target(nx,ny)
else:
final(x,y) += final(nx,ny)
All masked pixels with a non-zero value affects each other’s final value. The matrix equation Ax=b is solved to compute everything simultaneously. The size of vectors x and b are both number of pixels in the target image — vector x is what we are solving for and contains all pixels in the final image while vector b is the guiding gradient plus the sum of all non-masked neighbour pixels in the target image. These non-masked neighbour pixels are the values of pixels at the mask boundary and the guiding gradient defines the second derivative of the final mask area. Matrix A is a sparse matrix and computes the gradient of the final image’s masked pixels.
The equation for x is solved to get the final image. If the guiding gradient is zero, we are just solving a Laplace equation and the values at the mask boundary are blended smoothly across. With the gradient of the source image as the guiding gradient, the mask area takes on the appearance of the source image but is smoothly blended at the boundary.
OpenCV’s seamless clone function is an implementation of the algorithm mentioned above.
Some results:
The birds were not in the original background image — they were “blended” in
Part 5 — Applying neural style transfer on the image with pre-trained models
I did a separate article on this here! Do check it out if interested. But in a nutshell, what it does is it takes the image above and apply some cool art style to it. There is of course a range to choose from and the range largely depends on which pre-trained neural models you use.
Here’s some examples: | https://medium.com/swlh/image-creation-for-non-artists-opencv-project-walkthrough-d56ee21db5b6 | ['Ran', 'Reine'] | 2020-08-15 18:07:23.112000+00:00 | ['Python', 'Art', 'Software Development', 'Technology', 'Computer Science'] |
Spring Boot Microservices — Containerization | Spring Boot Microservices — Containerization
In this article, we will understand the advantages of containerizing Spring Boot Microservices. We will also do sample implementation, based on Docker and Spring Boot. This is the 7th part of our series — Spring Boot Microservices — Learning through examples
Photo by Sebastian Herrmann on Unsplash
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
This is very similar to the package we build for Spring Boot Microservices. Our Spring Boot applications are packaged as Jar files which assume that the runtime environment has Java installed. Containers are packaged in such a way that they can run directly at operating system level. This helps to isolate the bundled softwares from its environment and ensure that they work uniformly. We will understand this topic in more detail with the help of following sections —
The background — This part will discuss the evolution of deployment patterns and technologies, resulting in the concept of Containers. I have already covered similar content in the article on Kubernetes. If you have already gone through this, you can skip this part.
How Containers Work? — This part will discuss Containers in more detail including its core components and functions.
Sample Implementation — We will do sample implementation based on Docker along with our Spring Boot Microservices. We will create and run containers for our already built services.
The background
Age of Physical Machines
In the early era, applications were deployed on the physical machines. Deploying one application per machine was very costly in terms of resource utilization. On the other hand if we try to optimize this, by deploying multiple applications on the same machine, it was not possible to define the resource boundaries. Failure in one application could have propagated to others systems.
To understand this, lets say our physical machine is hosting two applications — E-commerce Application and the Inventory Management System and if the latter is eating up all the memory, most often our e-commerce system will stop working as well. Virtualization brought the much needed reform!
Age of Virtual Machines
The technology allowed us to run multiple Virtual Machines (VM) on the single physical machine with the clear separation of resources, platform, libraries and security.
This facilitated better utilization of resources. Enterprises invested in the shared pool of Virtual Machines, which provided more flexibility and scalability to the software systems.
If the holidays season require our e-commerce system to work with two machines, we can get the additional machine(virtual) much faster. When the holiday season is over, I can return back the additional Virtual machine, releasing the resources back to the shared pool.
The technology worked great, but soon the enterprises realized this is relatively heavier. It required the whole operating system to be installed, on each of the virtual machines. Additionally, though the procurement time was greatly reduced, but the pace did lag behind in the age of Agile Development and Microservices.
Agile Development & Microservices
At the time when we were moving from physical to virtual machines, the software development was going through the transformation as well. The development model changed with the introduction of Agile methodology. We started updating and deploying the applications much faster.
It became difficult for the monoliths to survive the Agile world. The fast pace development set the foundation for Microservices. Our e-commerce system was no more a monolith now. It got divided into multiple microservices — Product Catalog, Inventory, Shopping Cart, Order Management and many others. With the Microservices, the traditional deployment model did not work, and it posed new challenges.
The number of microservices, each with a different set of technologies and dependencies, needed a different environment. The earlier adopted options included “one service one host” or “one service one vm”. Again, this was a costly option, and on the other hand deploying “multiple services per host or vm” created similar challenges as in the case of applications.
Microservices were broken down in smaller services so that they can evolve independently. Each of service needs were unique, in terms of technologies and resources, and it needed its own isolated space. Containers provided the lightweight virtualization.
How Containers Work?
Lightweight Containers
Containerization provided the means to partition the underline system (physical or virtual) into the virtual containers. They are lightweight compared to the virtual machines because they don’t need the extra load of a hypervisor, but run directly within the host machine’s kernel.
Containers share the operating system but they can have their own filesystem, memory and CPU. This facilitate flexible and faster deployments, which in turn helps in achieving continuous development and deployment practices.
Containers solved the fundamental problem of resource sharing and isolation with the much efficient mechanism.
Docker
There are many container technologies available in the market but Docker is a clear winner. We will be using this for our sample implementation. Before jumping to the exercise, lets get familiar with the core components of this technology.
Containers are realized with the help of images which are nothing but the build and run instructions for a software application. We need to bundle all the dependencies as part of the container image. In case of Spring Boot Microservices this will typically contain the Java environment and the executable Jar. You will soon see the running example as part of the Sample Implementation.
courtesy: docker.com
Docker CLI (Command Line Interface) provide the options to create and run the container images. But the real work is done by Docker Engine which keeps running as a long-running daemon process. Docker provide the APIs which the programs can use to talk and instruct the Docker daemon.
Images typically reside in a central repository called Docker Registry. Any user or group interested in running the container image can pull it from here.
Benefits
Containerization provides many benefits in the development of software systems —
Isolation and security allow us to run multiple containers at the same time on a given host. For instance, we can deploy one or more instances of Product Catalog Service and Inventory Service on the same machine.
The container becomes the unit for distributing the application. This means, we can create container image of Product Catalog Service and give it to other teams for further validation and deployment steps.
Container images are standardized and portable. We can use the same image of Product Catalog Service to deploy in the production environment. Be it local data center or the remote cloud, it works consistently.
Now that we understand container and its benefits, lets dive into the practical world of it.
Sample Implementation
With the sample implementation, we will try to understand how the container technology works with our Spring Boot based Microservices. I have segregated the exercise in easy to follow steps —
Basic Setup
In the sample implementation we will use Docker, which is the most popular container technology. You can visit https://www.docker.com/get-started to get the appropriate docker setup on your machine. If you are on linux you just need a binary, but for Mac or Windows, you need to install Docker Desktop.
We also need Java, Git, Maven as we will be containerizing our Spring Boot Microservices. We already created multiple services during our previous exercises in this series — Learning through Examples. Lets get the microservices code from Github, by running the following command —
This will get the code for all the services. We will be creating the container image for our product catalog service. For this, lets browse to the directory spring-boot/product-catalog . All the commands in subsequent sections have to be executed in the context of this directory.
Creating Container Image
Docker can build images automatically by reading the instructions from a Dockerfile . A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession. Here is the sample Dockerfile —
FROM adoptopenjdk:11-jre-hotspot as builder
ARG JAR_FILE=target/product_catalog-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
In this file, we are giving following instructions
FROM command instructs to use base image as the one available for adoptopenjdk:11-jre-hotspot .
command instructs to use base image as the one available for . ARG command instructs to use JAR_FILE as a variable. Default value specified is target/product_catalog-0.0.1-SNAPSHOT.jar . This refers to the executable jar which will be generated once we run mvn install command
command instructs to use JAR_FILE as a variable. Default value specified is . This refers to the executable jar which will be generated once we run command ADD command instructs to copy the jar file to the image filesystem with the new name as app.jar
command instructs to copy the jar file to the image filesystem with the new name as ENTRYPOINT command allow you to configure a container that will run as an executable. In our case, we are instructing to execute the jar file with the help of java command.
With all the build instructions written in Dockerfile , we will create the container image with the help of docker build command. This will generate the container image and tag it with the name spring-boot-examples/product-catalog
docker build -t spring-boot-examples/product-catalog .
You can check the available image with the docker command docker images . This will list down all the docker images registered locally. One of them should be with the name above. Now that you have the container image ready for Product Catalog Service, you can start the service with docker run command
docker run -p 8080:8080 spring-boot-examples/product-catalog
The service will work the same way as it does through mvn spring-boot:run . We can access the create/update/delete and get apis. Play around with the service to ensure its apis are working correctly. As a sanity check, I can create a test-product-123 with the following post request.
{
"title":"test-product-123",
"desc":"test product 123",
"imagePath":"http://test-image-path",
"unitPrice":10.00
}
and use the get api to view the product details by accessing http://localhost:8080/product/test-product-123 . If both the api work, it means you have successfully created the container image of our Spring Boot Microservice.
We can also create the container image with the help of Spring Boot build plugin, Spotify Maven plugin, Palantir Gradle plugin, and Jib from Google. You can check out the spring guide on this here. The guide also provides suggestions to optimize the image through multi-stage builds.
We touched upon some basic commands in dockerfile to create the image. To make the images more robust and flexible, you should check the full list of commands available here.
Running the Container
We already did a sample run of our container image in the previous section. ENTRYPOINT command helps in defining the executable instructions for the image. In this section, we will focus on running the container with some additional options.
Lets say we want to set the initial memory size for the Product Catalog Service. We can update the dockerfile by adding this option with the ENTRYPOINT command.
FROM adoptopenjdk:11-jre-hotspot as builder
ARG JAR_FILE=target/product_catalog-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Xmx500m","-jar","/app.jar"]
This will start the service and ensure that the maximum size of the memory allocation for JVM is not exceeded beyond 500 MB. If we want to keep this value flexible, we can refer an environment variable JAVA_OPTS with the sh command as below.
FROM adoptopenjdk:11-jre-hotspot as builder
ARG JAR_FILE=target/product_catalog-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["sh","-c","java ${JAVA_OPTS} -jar /app.jar"]
Once we build the image, we can pass the value of this environment variable at the time of running the container.
docker run -p 8080:8080 -e "JAVA_OPTS=-Xmx500m" spring-boot-examples/product-catalog
You can define the environment variable in dockerfile with the help of ENV instruction, but it will restrict the value at build time only. With -e option, you can define the environment variable in docker run command. We used it to pass arguments to Java process. The approach is bit different if we want to pass the properties in spring context. To pass those values, we must provide placeholders ${0} and {@} as below in the dockerfile
ENTRYPOINT ["sh","-c","java ${JAVA_OPTS} -jar /app.jar ${0} ${@}"]
Once the dockerfile is updated, build the image again, and run the docker container. We can pass the value ${JAVA_OPTS} as part of the run command.
docker run -p 9000:9000 -e "JAVA_OPTS=-Xmx256m" spring-boot-examples/product-catalog --server.port=9000
Docker supports the command in two forms — exec and shell. We already used exec form in our example. Usage of shell form looks like this -
ENTRYPOINT command param1 param2
Docker suggests to use exec as the preferred form. This gives complete control on the process execution unlike the shell form. For instance we can stop the service by pressing CTRL-C in case of exec form which is not possible through shell form.
Publishing Container Image
Container images are like any other system file which can be exchanged across users and groups. We did learn to create the image in the previous section. To reuse the images, we need to publish them to a central repository. Once the image is published, it can be pulled by any authorized user.
Docker Hub is a hosted repository service provided by Docker for finding and sharing container images. Typically enterprises need private repository to push and pull the container images for their services and applications. There are many hosted providers providing this feature including Docker Hub. You can also create the repository in your local data center. Irrespective of where the repository is, lets see how we can publish the image so that its available to the other users and groups in your enterprise.
If we are publishing the image first time, we must save the image by passing its container id. This can be found with the help of docker ps command. Considering our Product Catalog Service , we can use commit command to achieve this.
$ docker commit c16378f943fe product-catalog
Now, push the image to the registry using the image ID. In this example the registry is on host named ecommerce-doc-registry and listening on port 5000 . To do this, tag the image with the host name or IP address, and the port of the registry
$ docker tag product-catalog ecommerce-doc-registry:5000/spring-boot-examples/product-catalog $ docker push ecommerce-doc-registry:5000/spring-boot-examples/product-catalog
Once you get the confirmation, other people or groups such as testing team can use pull command to get the image and use it for starting the service as discussed previously.
docker pull ecommerce-doc-registry:5000/spring-boot-examples/product-catalog
Running Multiple Instances
We successfully created and ran the container image of Product Catalog Service. Its time to see some real benefits of containers. In this section, we will try to run multiple instances of the service. Lets start the two instances of the service as below.
docker run --name instance-1 -p 8081:30001 spring-boot-examples/product-catalog docker run --name instance-1 -p 8082:30001 spring-boot-examples/product-catalog
The docker run command is bit different than we saw above. Here we are specifying names for each of the running containers — instance-1 & instance-2 . Also we are using different host ports for each of these containers. So the instance-1 is running at 8081 port on the machine whereas instance-2 is running at 8082. If you do docker ps , you will see the instances running as below.
You can access each of the instance separately. For instance, you can call http://localhost:8081/product/test-product-123 to get the product details for the product id — test-product-123 . Both the instances are completely isolated. They are using their own runtime environment. You can also check the resource consumption with the help of docker stats command.
Multiple instances of a service demands additional support like Service Discovery, Load Balancing and Container Orchestration. Docker Swarm and Kubernetes provide the required platform to cater these needs.
Similar to the multiple instances, we can also run multiple services on the same machine. It does not matter what the technology platform of other service is. For instance you might develop Inventory service based on Java 14 or you might completely change the technology stack to Node.js. Containers can provide the isolation to each of these services so that all of them can coexist without impacting each other.
Next Steps
Docker is a very wide topic and touches upon multiple aspects of application deployment. The objective of this article is to give you a high level view on containerization. This should be good enough from a developer perspective.If you are into software architecture, you must check out more specific docker topics to develop an holistic approach towards the technology.
You can check Docker Compose which is a tool for defining and running multi-container Docker applications. With Compose, using a single command, you can create and start set of services.
You can check Docker Swarm, which provides many features to build docker cluster and orchestration capabilities. You can also check Kubernetes which is one of the most popular container orchestration framework. I have already covered this topic in detail and can be checked out here — Spring Boot Microservices — Kubernetes-ization. | https://medium.com/swlh/spring-boot-microservices-containerization-963d77348fa0 | ['Lal Verma'] | 2020-10-19 13:44:38.279000+00:00 | ['Virtualization', 'Software Engineering', 'Docker', 'Spring Boot', 'Containers'] |
Blackbox Weekly | Your regular dose of fintech news, views and insights
Can Robinhood Give Coinbase a Run for Its Money?
With the announcement and roll out of its commission-free cryptocurrency product Robinhood Crypto, Robinhood Markets Inc. is gaining ground on Coinbase.
Fintech Around The Web
1. Coinbase Has Made It Easy for Online Businesses to Accept Cryptocurrencies
Last week, popular cryptocurrency exchange Coinbase launched “Coinbase Commerce”, a service allowing businesses to directly accept bitcoin, bitcoin cash, ethereum and litecoin payments right in their online checkout flow.
2. Venezuela Has Launched The Presale of Its Digital Token, Making It the First Country to Do So
Called the petro, critics have said the petroleum-backed digital currency is an attempt by the cash-strapped country to borrow credit against its untapped oil. The move is also said to be an attempt to circumvent Western sanctions imposed against Venezuela for its autocratic leadership last August. While President Nicolas Maduro claims the petro cryptocurrency raised $735 million in the first day of a pre-sale, a currency is only as legitimate as the people who hold it believe it to be. Venezuela has had serious problems both being transparent and in managing inflation. And the currency is already at risk of losing legitimacy if Maduro isn’t reelected come April.
3. Amazon found a partner in Bank of America for its lending to small businesses
With the launch of Amazon Lending in 2011, Amazon has been originating loans, ranging from $1,000 to $750,000, to small business owners that are top sellers on Amazon.com. Last week, CNBC reported Amazon has been in partnership with Bank of America Merrill Lynch since October 2016 where the bank serves as a “$500 million revolving credit facility”, helping the tech giant reduce its risk while still providing credit to merchants for the acquisition of inventory.
4. House Backs Bill That Would Benefit Bank/Fintech Partnerships
The House recently approved a bill that would make it so high-interest rate loans could be sold to non-bank entities, such as fintechs like LendingClub, at interest rates higher than state caps. That means a debt collector or fintech in a state where interest is capped at 16% could both buy debt and issue new credit to borrowers at higher interest rates than that 16% cap. For fintech firms, this legislation could provide an easier way for them to get one of the perks — and added revenue stream — of being a bank without actually being a bank, which could be an appealing opportunity for the myriad of fintechs looking to monetize their products. | https://medium.com/blackbox/blackbox-weekly-ab6e332639da | ['Cadence Bambenek'] | 2018-02-26 14:41:36.473000+00:00 | ['Fintech', 'Newsletter', 'Blockchain', 'Amazon', 'Cryptocurrency'] |
Science Writing: Guidelines and Guidance | These are notes for a class called “Writing about Science, Medicine, and the Environment,” which I have taught for several years at Yale. I update them here from time to time. They are published under Creative Commons Licence (CC BY-NC-ND)
Structure
In this class, we are writing stories. Without structure, stories are random sentences and fragments of scenes. Here are some thoughts about how to give a story an effective overall structure:
The All-Important Introduction
Within a few paragraphs, a reader will decide whether to finish reading your story or to move on to something else. In this brief preliminary time, you should give readers a clear idea of where the story is heading, an idea compelling enough to keep them with you.
Journalists call the sentence or two where you sum up the gist of your story the nut graf. Your nut graf should be intriguing, perhaps even surprising. If it states something most readers already know, they won’t feel the need to keep reading. If it is too obscure, readers won’t know why they should care enough to invest more time with your story.
You should be able to underline your nut graf. Scattering it in bits and pieces around the first half of an article is not acceptable. Don’t leave it up to your reader to gather those bits and pieces together and assemble your nut graf for you.
As you write the rest of your story, make sure that it really lives up to the nut graf. Otherwise, your readers will feel like they’ve been the victims of a bait-and-switch.
If you find yourself struggling to come up with a nut graf, that may tell you something important. You may not actually have a story yet. You may only have a topic.
Think of it this way: Ebola is not a story. How health workers and scientists together stopped an Ebola outbreak is a story.
Think about your reader’s brain, part 1: no mind-reading required.
Once you have done the research for a story, all its pieces are accessible to you at once. If you write some of that material down on the page but leave other things out, your expert mind will automatically fill in the gaps. It can be hard to realize how much you're filling in — it's the journalistic equivalent of an optical illusion.
Your readers don’t have access to that knowledge. They have to rely for the most part on what you put on the page. If you leave a crucial piece of information out, or omit some crucial event in a sequence, you leave your reader to struggle. Be kind!
Think about your reader’s brain, part 2: no ships in bottles.
Let’s say you’ve taken part 1 of this guidance to heart as you work on a story about solar power. Determined to leave no gap in the story, you start explaining physics, from Archimedes to Newton to Bohr. By the time you’re done with the ancient Greeks, you hit your word limit.
There’s a paradox at the heart of science writing. On the one hand, you have to make sure that you include essential pieces of information in a story. But you cannot try to write everything about your story. In fact, a substantial amount of the work in giving a story structure is figuring out all the stuff you can throw out and still get away with a successful narrative. Rather than building a perfect miniature ship in a bottle, think of what you’re making as a low-dimensional representation of reality: a well-made shadow.
Time is your friend.
Introductions distill your story’s point at the outset, perhaps with a compelling scene or anecdote. Once you get past the introduction, give the overall story a structure that readers can follow intuitively. Stories take place in time, so use time as a tool.
Pick a point in time to begin telling your story. Let the reader know when that time is. Then move forward through time, in clearly marked steps. If you jump forward a year between paragraphs, let us know. Otherwise, we’ll have to guess if you’ve jumped forward an hour or a decade. Again, be kind!
For the most part, narrative time should flow in one direction. Jumping back and forth in your chronology can be effective, but only if readers can keep up with your temporal acrobatics. Just as importantly, you need to make it clear why you’re abandoning a strict timeline for flashbacks or flash-forwards. Otherwise, it may simply seem like bad organization. Remember that you alone can see your timeline clearly in your head. Readers have only what you put on the page.
Transitions
Why does one paragraph follow another? Why does one sentence follow another? Readers should be able to see for themselves the way that the parts of a story link snugly together. Otherwise, it’s easy to get lost among disconnected passages.
When you follow a timeline, you can use chronological order to make these links clear. But if you are shifting from one aspect of a person’s life to another, you can’t rely on time. You need to find other ways to make the shift logical and compelling. If you are working your way through an explanation or an idea, you will have to show the conceptual links between the parts.
Magazine articles and books sometimes use line breaks and drop caps to divide stories into major sections. But these breaks are not an excuse to start up from some other, arbitrary place. To keep students focused on the importance of transitions, I don’t allow section breaks in class assignments.
Scenes
Novels, short stories, and plays are all organized around scenes — focused moments in which people do and say things that advance the overall narrative. As reporters, we don’t make up scenes. Instead, we reconstruct them from our reporting of real events.
In some cases, we can write about things we observed ourselves. While planning your research, think about the opportunities you will have to observe parts of your story unfold. There may be an event already planned (a demonstration, a trial, a game). Or you can arrange a scene yourself, such as asking an ecologist you want to profile to take a hike together so you can observe how they make sense of the natural world.
Some scenes you’ll write about took place long ago. You’ll have to piece them together from whatever evidence remains — memories, videos lingering on YouTube, diaries in university archives.
Before you add a scene to a story, make sure it matters. What event or point is it illuminating? If you cut a scene out, does the story still hold together? If it doesn’t, the scene is essential. If it does, then the scene is a digression. It may be funny, cool, amazing. But it has to go.
When you begin a scene, set it. Provide enough detail so that the reader knows where and when it’s happening. To make it evocative, take advantage of the cinematic power of our brains. Give readers things to see, hear, touch, smell. But make sure these sensory details are relevant to the story and not random details.
Find ways to convey the humanity of people in your scenes. Use their words, appearances, and actions. A poorly developed scene will read like a procession of faceless ghosts drifting through a phantom world. Give your scenes life.
Paragraphs
Paragraphs are lovely, underappreciated units of narrative — bigger than sentences but smaller than stories. Take full advantage of their power.
Each paragraph should occupy the right logical place in a story. The internal structure of paragraphs matters, too. From the first sentence, we should understand why the paragraph flows from the last sentence of the previous one, and each subsequent sentence should also follow logically from the sentence before it.
Each paragraph should have a unifying point. Don’t start talking about one thing at the outset of a paragraph and then unwittingly slide into another topic midway through.
Endings
Give careful thought to where your story will stop, and how. Sometimes a quote from the main subject of your story will beautifully sum up the whole tale. Sometimes you build up to a climax, and then jump forward a few years to a brief scene that acts as a powerful postscript. An ending can be an opportunity to zoom out from the particulars of your story to the bigger picture (say, an archaeologist’s work on an island and what it means for our understanding of the peopling of the world).
Resist the temptation of suddenly veering off onto a related topic at the last moment, leaving the reader hanging.
Style
Stories are about people.
This is important to bear in mind when writing about science. It’s all too easy to forget that science doesn’t happen by itself. To say, “A study found that salt is bad for you,” is problematic. Studies don’t do anything. People run studies. And people find out things.
Interview people to understand their experiences as human beings. Scientists are not robots chunking out new bits of knowledge. Doctors are not packages of software spitting out diagnoses. Asking a simple question such as, “How did you end up spending your life studying quantum computers?” or “What was the most important experience you’ve had as a hospice worker?” can uncork powerful human stories.
Writing about people also helps pull in readers who might not otherwise think that your subject is interesting. People like to read about people. To get readers to care about something — say, leeches — try to make them care about the people who care about leeches. (These people do exist, and they can be a lot of fun to hang out with.)
Active voice, not passive
Scientists favor writing in the passive voice. They shouldn't, nor should you. The passive voice dissolves the power of narrative. It destroys the impact of action. It sows confusion about who did what. Sometimes the passive voice cannot be avoided. (See what I did there.) But for the most part you can find an active-voice alternative. This is not a meaningless grammatical game. By making an effort to create active prose, you will end up discovering more about the actions — and the people behind those actions — that give power to your story.
Quotes
People speak, write, sign, or otherwise communicate. No story can make sense without some sort of communication. Imagine a novel where characters don’t say or think a single word. Imagine a movie without a line of dialogue. A reported story without any quotes can feel just as odd.
There are exceptions, of course — business analysis stories, for example, or short crime reports. But the long-form journalism we’re working on in this class needs the spoken words of its characters. So quote people, quote them well, and quote them often.
It’s important to start putting quotes into a story as early as possible. An opening scene, for example, is a natural opportunity to introduce voices to the narrative. As you introduce a main character into a story, you can find a good quote from them that sums up an important aspect of the story to come.
If you wait till halfway or more through a story to start quoting people, it will be a disconcerting surprise. By then, readers will have come to assume that you’re telling a story without quotes. Why, after so many silent paragraphs, are you quoting people now?
Lots of quotes help a story, but they have to be good quotes. A good quote is pungent. It brings the speaker to life in the reader's mind. It doesn't meander. If you drop a long paragraph of someone's words into your story, it's usually a good idea to see if you can cut most of it away, leaving behind the best part.
If you’re narrating an event, set quotes off in their own paragraphs. Use fictional dialogue as a guide. It is confusing to read a paragraph in which a quote is wedged in the middle of exposition. It’s even more confusing to have two or more quotes from a person sprinkled in a paragraph. And quoting two or more people within one paragraph is unreadable.
With few exceptions, quotes should only be full sentences, not fragments dropped in the middle of a sentence. Readers need to shift clearly into the voice of a character for a time, and then back to the voice of the narrator. Fragments of quotes leave readers flipping back and forth in unnecessary confusion.
Here are some more rules for quotes:
— No brackets or ellipses. (Remember, these are not term papers.)
— You can trim out um’s and ah’s and other stammerings, but you can’t leave out words or replace them with words you wish your subject had actually said.
— If a quote sounds like drab exposition, just use your own words to move that part of the narrative forward.
— Quotes should, whenever possible, have this format:
“Quod erat demonstrandum,” said Irving Euclid, a geometer at the University of Athens.
They should not be set up backwards:
Irving Euclid, a geometer at the University of Athens, said “Quod erat demonstrandum.”
or
Quod “erat demonstrandum,” said Irving Euclid, a geometer at the University of Athens.
— Also, avoid needless alternatives to said, such as enthused, opined, ululated. These words typically end up just sounding arch and arbitrary. It’s the quote that matters, not the verb next to the quote.
— Avoid an “unquote.” — i.e., Euclid explained that quod erat demonstrandum. It feels odd to have a narrator tell us what people said, rather than quoting them directly. It’s as if the character is talking in another room. All we hear is muffled speech, with the narrator running in from time to time to let us know second-hand what they said.
— Introduce speakers clearly by name the first time you quote them. If you quote them again later, use their name if they haven’t appeared in the story for a while. (Last name only for adults, first name for children and pets.)
Can you write in first person? If you’re Joan Didion, definitely.
Students often want to write in the first person, especially when writing about scenes they witness during in-person reporting. I generally discourage this. Using the first person turns the writer into a character. Is the writer important enough to the story to warrant that special role? If not, the writer becomes an awkward guest.
There are, of course, exceptions to this rule. Sometimes first person is the right choice. A story focused on a writer’s own experiences with a disease, for example, obviously requires the first person. For the most part, though, “I” am best left out of stories, even if that means making the extra effort to write around oneself.
Rhetorical questions
Try to avoid them. They are the empty calories of science writing. Replace rhetorical questions with declarative sentences that advance the story.
Jargon
Scientists invent words, which they use to talk to each other efficiently. But most people outside a scientist’s subspecialty have no idea what many of these words mean — including other scientists. Tritrophic, metamorphic, anisotropic — these are not the words to tell stories with.
Everyday language has a wonderful power to express the gist of scientific research without forcing readers to hack through a thicket of jargon. But there’s no algorithm you can use to determine what’s jargon and what isn’t. You need to develop your mind-reading abilities. Ask yourself if readers will know what you’re talking about. If you need help, find a friend who is not an expert on your story’s subject and conduct a little vocabulary quiz.
(I keep a running list of words I’ve encountered in assignments that are examples of unacceptable jargon. You can find it here.)
There may be times when you absolutely have to use jargon. These times are far rarer than you may think. If you choose to introduce a term, do not simply throw out it out in a sentence and then explain it later. Do the reverse. Until readers grasp the concept behind jargon, it acts as dead weight that pulls your story down into the murk of confusion.
Formality, jargon’s dangerous cousin
Even if you don’t use a single word of jargon, you can still use language in a way that’s confusing and unwelcoming. Scientists, for example, will sometimes say that a drug works “in mouse.” In is a familiar word. So is mouse. But “in mouse” only makes sense to certain scientists. The rest of your readers will have to struggle to figure out that you mean that the drug had promising results in experiments on lab mice. You want your readers flying forward, relishing your metaphors and dramatic turns. You don’t want them puzzling over obscure phrases and trying to guess their meaning.
Formality is also dangerous because it drains the passion from prose. It is entirely possible to let readers experience wonder, sadness, fear, outrage, and joy when reading about science — without sacrificing accuracy.
Don’t presume readers think just like you.
A lot of journalism has behind it a moral mission. Reporters want to tell stories that are important. Reading a story may improve people’s lives. It may lead to changes in law, shifts in political priorities, or improvements in how people treat each other or the environment. Essays, op-eds, and other opinion-based pieces can also change how people think, using rhetoric, storytelling, and argument.
In order to change minds and move readers, you must recognize that many of your readers may not think the way you do. They may not rank their values in the same order as you. Things that you take for granted as being important may not seem that way to many of your readers. This doesn’t make your readers monsters. In fact, if you dig down deep enough, you’ll find a lot of common ground between you and many of your readers. But if you presume to think for them, you may alienate them.
Overlooking these facts can lead writers to mistakenly assume their readers share all their own values and have reached the same conclusions about the issues they’re writing about. They end up preaching to the converted. If you’re writing about someone trying to make a city more “sustainable,” explain what you actually mean by that buzzword, and explain why it’s important. Do not expect such words to light up a whole network of meaning and values in your reader’s mind. They may not be familiar with the word, or they may not value it as you do.
Reporting tips
On reporting trips, bring a small, reliable digital recorder. Olympus makes good devices, but other companies do too. Bring fresh batteries as backup.
Bring a notebook, too. Make note of things that won’t get picked up by your recording — facial expressions, color of the sky, etc. Note good quotes to remember to use later. Jot down the time if you can.
Do not rely on phones as recorders. They are not as reliable as digital recorders. But use them to take photographs, which can serve as visual aids as you write your piece.
For phone calls, Skype and other software can work well on your computer — use them with recording plug-ins. In my experience, recording apps on phones are unreliable, but there may be something out there that works well.
Under Connecticut law, it is illegal for a person to record a telephone conversation without the knowledge of all parties to the conversation. Ask for permission. (Check your own state’s laws if you’re reporting elsewhere.)
A subject can request for a conversation to be off the record at the beginning. They cannot retroactively declare something off the record.
Checking stories
It’s dangerously easy to make factual mistakes in science writing, because we deal in so many facts — from the number of insects on the Earth (about 10 quintillion) to the year in which Henrietta Leavitt discovered a pattern in the brightness of stars that made it possible to measure the universe (1912).
Some publications will employ fact-checkers to check your work. Otherwise, check it yourself. If you check facts against published sources, go as far upstream as you can — to journal papers, government web sites, and other authoritative information. Don’t rely on another reporter.
Do not simply send your drafts to sources to check. In doing so, you are giving away the responsibility for your work to someone else. It’s fine to call a source and paraphrase information or quotes.
For more on checking, consult The Chicago Guide to Fact-Checking, by Brook Borel.
Formatting checklist
Before submitting a story (to a teacher or an editor), go through a final checklist. Even the most gorgeous piece of prose can be ruined by careless handling. Here are some crucial items to cross your list:
Is your file properly formatted? (File format, name, word count, etc.)
Is your in-person reporting on display?
Do you have a nut graf?
Are you close to the assigned word count?
Are your quotes properly formatted?
Is your fact-checking annotation in good shape?
Double-check your spelling and grammar. Kill that last dangling participle.
(Thanks to guests who have visited my class over the years and helped shape my thoughts on these matters, including Joshua Foer, Maryn McKenna, Annie Murphy Paul, Michael Specter, Florence Williams, and Ed Yong.) | https://medium.com/swlh/science-writing-guidelines-and-guidance-8c6a6bc37d75 | [] | 2019-12-07 16:57:56.269000+00:00 | ['Writing'] |
Extract data from Crunchbase API V4.0 using Python | Photo by Carlos Muza on Unsplash
Given the current pandemic situation, there aren’t a lot of companies actively hiring. Alongside applying to career sites, I wanted to have a data-driven approach to my problem. Later in this series, I’ll use NLP and a Jobs API to find jobs that best match my profile. But for now, this post focusses on identifying high growth companies from CrunchBase, which I can then target for jobs.
Recently, Crunchbase released its API v4.0 and there isn’t any comprehensive documentation available online to take advantage of this powerful platform using python. So I thought to take a stab at it.
What is Crunchbase?
If you are reading this post I am going to assume that you already know what CrunchBase is. But if you don't, in simple terms it is a platform that helps users get all the information about companies all over the world. It includes information such as Revenue, Investors, Number of employees, contact information, and more. You can visit Cruncbase Data to get a complete list of data points.
CrunchBase API allows the developers to leverage this same data that powers CrunchBase.com and allow them to make their own application or website.
For my use-case, I am going to extract information for all the companies in Los Angeles
Let's get started
Step 1: Get CrunchBase API key and request URL
Again I am going to assume you have the API key handy, but if you don't, you can visit Crunchbase Data and register to get access to the API.
Next, we need the request URL. You can visit SwaggerHub on Crunchbase Data to get a complete list of Crunchbase endpoints along with their own examples that you can try yourself.
SwaggerHub
The API URL will be of the following format:
https://api.crunchbase.com/api/v4/entities/organizations/crunchbase?user_key=INSERT_YOUR_API_KEY_HERE
Since I am searching for organizations in LA, I will be using “POST /search/organizations” URL.
Now let's get to the actual coding part
Step 2: Request data using python
Import all necessary Packages
import requests
import json
import pandas as pd
from pandas.io.json import json_normalize
We will use the request module to send an API request to Crunchbase.
Define your API user key
userkey = {"user_key":"INSERT_YOUR_API_KEY_HERE"}
Define search query parameter
The search query parameter is the query that you will pass to the request API to get the required data. (You can see example queries on SwaggerHub)
Since I am finding companies in LA, my query will look something like this:
query = {
"field_ids": [
"identifier",
"location_identifiers",
"short_description",
"categories",
"num_employees_enum",
"revenue_range",
"operating_status",
"website",
"linkedin"
], "limit": 1000,
"query": [
{
"type": "predicate",
"field_id": "location_identifiers",
"operator_id": "includes",
"values": [
"4ce61f42-f6c4-e7ec-798d-44813b58856b" #UUID FOR LOS ANGELES
]
},
{
"type": "predicate",
"field_id": "facet_ids",
"operator_id": "includes",
"values": [
"company"
]
}
]
}
The first part of the query is “field_ids”. Here I mention all the entities that I need from CrunchBase. which in this case is [identifier, location_identifier, short_description, categories, num_employees_enum, revenue_range, operating_status, website, LinkedIn].
You can get a complete list of field_ids from SwaggerHub by clicking on the required API example and then switching to “schema” under description as shown below.
How to get the complete list of field_ids
“limit”: 1000 defines the number of results the query returns. In this case, it is 1000 which is the maximum limit for Crunchbase Pro.
“query”: {} defines the actual query part. In this case, I want to find companies in Los Angeles. So I have defined the “location_identifier” value as
“4ce61f42-f6c4-e7ec-798d-44813b58856b”
The above string is the UUID (universally unique identifier) of Los Angeles.
To get UUID of anything:
Go to SwaggerHub -> GET /autocomplete -> Click “Try it out” -> type in query in the querybox -> Execute -> copy the UUID in response body.
Get UUID for anything
The second part of the query is what we want Crunchbase to return, which in this case the company data. Hence we have set “facet_id” value as “company”.
Now that we have our query setup, we will now create 2 functions that will return the number of companies and extract data and save it as a pandas data frame. Here, POST request with API URL, userkey as a parameter, and passing query as json.
def company_count(query): r = requests.post("https://api.crunchbase.com/api/v4/searches/organizations", params = userkey , json = query)
result = json.loads(r.text)
total_companies = result["count"]
return total_companies def url_extraction(query): global raw r = requests.post("https://api.crunchbase.com/api/v4/searches/organizations", params = userkey , json = query)
result = json.loads(r.text)
normalized_raw = json_normalize(result['entities'])
raw = raw.append(normalized_raw,ignore_index=True)
Now since there are more than 1000 companies, I had to loop my query till I get all my results. The way I did this was by adding “after_id” key in the query part and the last UUID as the key. By doing this the loop will fetch new data after the last UUID that was fetched.
raw=pd.DataFrame() comp_count = company_count(query) data_acq = 0 # data_acq while data_acq < comp_count:
if data_acq != 0:
last_uuid = raw.uuid[len(raw.uuid)-1]
query["after_id"] = last_uuid
url_extraction(query)
data_acq = len(raw.uuid)
else:
if "after_id" in query:
query = query.pop("after_id")
url_extraction(query)
data_acq = len(raw.uuid)
else:
url_extraction(query)
data_acq = len(raw.uuid)
This is how the “raw” data frame looks after extraction. | https://medium.com/priyanshumadan/extract-data-from-crunchbase-api-using-python-8e99ed6bc73e | ['Priyanshu Madan'] | 2020-06-11 20:47:17.664000+00:00 | ['API', 'Python', 'Crunchbase', 'Data Science', 'Job Hunting'] |
Weekly update #15 | Another week has passed and it is time to share the results with our community.
During the last two weeks, a lot of work has been done concerning development and testing of essential Live Stars webcam platform features, such as streams and LIVE token processing. We have reworked model account dashboard and completed the development of one of the most important parts of the platform — administration panel.
At the moment we are testing out the visual side of the website, mark-up, and user interface.
We are now conducting closed beta test of platform functions. We still have to complete the stress-test of our new streaming system.
During the next several weeks we will finish our redesign tests and will be ready for public testing. | https://medium.com/live-stars/weekly-update-15-fa0f88cb6a22 | ['Live Stars'] | 2018-08-14 10:53:33.897000+00:00 | ['Startup'] |
This Is Why You’ll Forever Be an Ignorant Person | You Think You're Smarter Than You Actually Are
On April 19, 1995, one Mcarthur Wheeler robbed two banks in Pittsburgh. What makes his case unique is that seeing how lemon juice makes ink invisible, he smeared his own face with lemon juice, believing that this would make him invisible to surveillance cameras.
He then carried out the robberies full of confidence, in broad daylight, and he didn’t even bother wearing a mask to hide his face from cameras. Why would he? To him, that would be like superman wearing a bullet-proof vest.
Of course, he was quickly identified and arrested by the police later that day. And to prove just how confident he was in his little magic trick when he was being taken away by the cops he mumbled, "but I wore the juice."
Now, you think of him and laugh at just how ignorant he was and even wonder if he was high on drugs. But how many times have you been so damn sure about something… and still ended up being wrong? Cause I for one know that has happened to me a lot. And this is what the Dunning-Kruger effect aims to explain.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - Bertrand Russell
Named after social psychologists David Dunning and Justin Kruger, the Dunning-Kruger effect is a cognitive bias that shows that people tend to overestimate their own knowledge and abilities in areas where they are incompetent, while those who are more competent do the opposite; thereby creating an ironic scenario where the less we know about something, the more confident we are about that subject, and vice-versa.
To put it in other words, the Dunning-Kruger effect shows that when people lack basic knowledge on a subject, they also lack the experience to understand how little they know. As a result, they vastly overestimate their comprehension of the subject.
To justify this claim, Dunning and Kruger conducted numerous studies and the results all pointed to a single truth: we are all Mcarthur Wheeler in one area of our lives or the other. | https://medium.com/live-your-life-on-purpose/this-is-why-youll-forever-be-an-ignorant-person-223f78c1e4fc | ['Raphael Paul'] | 2020-12-18 12:31:31.696000+00:00 | ['Life', 'Self Improvement', 'Psychology', 'Advice', 'Life Lessons'] |
“I hate myself”: The Parasite of Self-Hatred | Original article written by Dr. Samar Habib at https://conscioused.org/wiki/i-hate-myself/
Everyone has a Parasite living inside their brain. And mine was running the show.
Standing in front of a mirror in my parents’ home, aged 30, single and shamefully broke, I slapped myself in the face as hard as I could. How could you be so stupid? And again, slap! You’re so stupid! And again, and again, and again, and again, until I couldn’t take the pain anymore.
You’re such an idiot. I hate you. I hate you so much. How could you be so weak? How could you be so stupid?
And out of the silence came one last slap to sear the pain in a ripe, red seal.
Rewind back a couple of years: one of the courses I was teaching at a local university was about sexuality, it caused controversy and I got hated on big time. Later that year I was assaulted. I held on for another two semesters before, on an impulse, I resigned.
Ever since that moment, a voice in my head kept hating me for throwing away everything I worked so hard for. For not being strong enough to weather through.
The rumination was never-ending and the Parasite was eating me alive, day and night. The only way to escape the incessant negative thinking was to sleep. And I slept most of the time. When awake, I was paralyzed by fear and anxiety — I didn’t answer calls, go anywhere or see anyone. Even sunlight felt hostile to the skin. Everything hurt.
According to the Toltec Tradition, everyone has a Parasite just like this. She might be domesticated and not giving too much trouble, but all it takes is one mistake, and she’ll spare no expense ravaging your soul. In times of trouble these Toltec Parasites can become inflamed, and when they do, they take over our inner worlds.
You’ll know your Toltec Parasite is inflamed when you find yourself stuck in a repetitive thought-loop and the unstoppable rumination debilitates you. The anxiety triggered by the looping thoughts forces you to stay in ruts longer than you need to, prevents you from taking action, and sows seeds of self-doubt, puncturing holes in your self-esteem.
Man, I wish I knew back then what I know now. But there were no shortcuts to figuring out what to do with my inflamed Parasite. I can’t say I have it all sorted out, but I have learned some valuable lessons to keep my rumination in check.
Lesson 1: Realize that it’s no big deal
I sink in my mattress, overwhelmed with depression and both the inability and unwillingness to do anything. I don’t realize that I am doing something by listening to Tibetan Buddhist nun, Pema Chodron. That doesn’t seem productive to me. I think of it as just a way to shrink from my responsibilities. And my Parasite lets me know it.
You’re a loser! You’re finished. Die already. Why are you even still here?
I hate you.
Nobody cares about you.
You don’t deserve to live.
Kill yourself while you still can.
That last one is especially weird. Hurry! Senility is only 60 years away! But I guess no one said your inner critic had to be logical.
Pema Chodron makes me realize that nothing is permanent. From her I learn that impermanence is one of the Buddhist precepts.
Can you imagine the relief I feel the moment she reminds me that circumstances, feelings, preferences, environments, desires, and everything under the sun are all impermanent? Nothing lasts forever, everything is ever-changing, Pema says. The teaching sinks into my chest like the good medicine it is.
I relax into my pain. I stop resisting, knowing it’s not forever. Relaxing into pain is like gliding a hot knife through butter. Relaxing into pain is the first few moments of getting into a warm bath. Something heavy and awful releases its hold on me. And as I relax into my pain I take to heart the words of the Beatles song: “There will be an answer, let it be.”
The first rule of Parasite is not to resist Parasite. Let it be.
Lesson 2: Separate Self From Toltec Parasite
You know when someone says they’re “in two minds” about something? It’s because one part of their personality is saying one thing, and the other is saying something else. So, like with me, one part of me thinks I dodged a bullet (probably literally) leaving that job, and another part is bitch-slapping me for quitting.
So one day I decide to start talking to my Parasite. By isolating her as a part and having a conversation with her, I expose her to the other parts of me that are logical and playful. I take the venom out of her sting, slowly, over time. I play the long game. Patient and persevering, like the tortoise from Aesop’s fable.
Take for example the time I was invited to Turkey to give lectures there. My Parasite finds a way to rain on my parade. Your research is world-famous, but there isn’t a single university with balls big enough to hire you, she says.
“Well, if you’re not going to let us do anything other than lie in bed all day, I’m going to listen to more Pema when I get home,” I say to her.
“Shut up, you’re a loser anyway,” she shoots back.
“Get lost.”
“I see your ‘Get lost,’ and I raise you ‘’Forget everyone and everything”. “I don’t want to play!” Parasite exclaims. “I want to stay angry, abrasive and mean.”
Then she falls silent.
Just like with any bully, it helps if you can prepare your responses in advance. Hypnotherapist Marisa Peer teaches her clients to say this to the mean people in their lives: “thank you for sharing that.” It shuts them down, and shuts them up, she says.
I’m standing in line at the Virgin Airlines check-in counter, minding my business and she hits me with I hate you! Just like that, spontaneously.
I hate you. You’re a loser. Kill yourself while you still can.
Every word sends a venomous shiver down my spine.
You’re worthless. Nobody cares about you. They screwed you out of house and home and you let them. Handed them what they wanted on a silver platter.
“Is that so? Please, tell me more.” I surrender to the fact of my experience. I can’t fight it anymore. I’m exhausted. I let her tear me up. And yet a strange thing happens as I surrender. She stops.
The Parasite notices I’ve shifted gears, so she adapts.
My sister, feeling sorry for me, asks me to join her for a vacation after my trip in Turkey finishes. That’s how you know people love you. They still want to hang out with you even when you’re a lava lamp of misery and a tesla coil of emotional electrocution.
The Tesla Coil of Emotional Electrocution
On a train from London to Paris and completely unexpectedly, my Parasite flares up: What now, Einstein? Seriously, what are we going to do with all that time we still have to live? You can’t kill yourself; it’s too selfish and you’re too much of a chicken to do it. Besides, you’d probably mess it up, anyway.
To be perfectly honest, she’s right. I don’t see suicide as an option. I don’t kill spiders or cockroaches, how could I bring myself to kill an entire human being? It’s impossible. But there is an insidious taunting going on. She’s subtly pointing out my incompetence. Even in suicide, I’d fail.
Then it hits me. What if I could play with the voice of my inner Parasite? What if I could raise her pitch so she starts to sound like Chip or Dale from the Chipmunks? What if I imagine her as shredder from Teenage Mutant Ninja Turtles?
I am now waiting eagerly, silently for her to come on. She’s onto me. She lays low for the rest of the trip. The more I lay in wait for my rumination to start, the more it subsides.
Lesson 3: Identify with the Listener Not The Part
Over time, I start to see that the Parasite is a part of me, and that I am the whole. I start to ask the question who am I? It’s actually a special type of meditation taught by a Hindu mystic named Ramana Maharshi. I would close my eyes and ask who am I? and wait for the answer. All thinking would come to a stop and what I’m left with is who I am.
In the ancient spiritual traditions of Hinduism and Buddhism we are divine beings. Likewise for Don Miguel Ruiz Jr. of the Native American Toltec Tradition. He’s the first one to teach about about The Parasite. We are not our thoughts, we are the witness to our thoughts, he says.
This witness is cosmic consciousness. It is everywhere. It is never born, never dies, always was and always will be. That’s who I am.
This new realization transforms my relationship to my life and the people who venture into it. I don’t go to bed resenting villains or cursing enemies. I don’t thirst for revenge at some theoretical point in the future. I understand that we are all precious and that our actions don’t define our divinity.
If you don’t want to get into spirituality then consider this idea from communication and listening expert, Julian Treasure, instead:
You are not the inner voice. You are the one listening.
You have no idea how much that inner shift has saved my life. I did it in meditation by separating my awareness from my mind, but that seems a little abstract, so here’s a more fun exercise to try out:
Go to a busy place but where you would still feel safe. Close your eyes and focus on the sounds around you.
Pick a single sound, a nearby chirping perhaps, the overall noise of the crowd, the wind, whatever. Focus on nothing but the sound. When I do this exercise, I focus so much on the sound I have the experience of disappearing. I no longer have biographical thoughts. Thoughts about who and where and what and when I am cease. I’m just receiving the sound. I am the sound. There’s nothing else. Just sound. No judgment. No self. No chance for me to beat-up or uplift myself.
I practice this listening meditation whenever I need reminding that I’m not the voice. I’m the listener. When the voice begins to overwhelm, I know that I can take the position of the listener, rather than identify with what the voice is saying. The more you practice this, the better you’re able to disengage from the distress. It’s like a muscle. The more you practice, the better you get.
Ideally and at all times, I want to be able to know that I’m the listener, not the voice.
Try it. Go to a busy place. Set a timer if you have to. Close your eyes. Become the sound.
Lesson 4: Investigate the Origin of the Part…
Continue reading at https://conscioused.org/wiki/toltec-parasite/ | https://medium.com/the-post-grad-survival-guide/the-parasite-295c5b392122 | [] | 2019-04-16 06:02:53.647000+00:00 | ['Self Improvement', 'Personal Development', 'Life Lessons', 'Inspiration', 'Psychology'] |
Emails From A CEO Who Just Has A Few Changes To The Website | Date: May 14th, 2016 at 2:44:05 AM PST
From: Ed Pratt <[email protected]>
To: [email protected]
Subject: Website Changes
Hey guys,
Just looking at the website. I know I said I loved it yesterday, but looking at it now, I hate it. It feels too cluttered. Too many words.
@Design team — pls mock up what it would look like with fewer words.
-E
Date: May 14th, 2016 at 2:45:08 AM PST
From: Ed Pratt <[email protected]>
To: [email protected]
Subject: RE: Website Changes
And fewer images.
-E
Date: May 14th, 2016 at 2:48:13 AM PST
From: Ed Pratt<[email protected]>
To: [email protected]
Subject: RE: RE: Website Changes
Hey guys,
The navigation is confusing. We should make it all one page like everybody else.
@Design team — make it all one page like everybody else.
-E
Date: May 14th, 2016 at 2:50:19 AM PST
From: Ed Pratt <[email protected]>
To: [email protected]
Subject: RE: RE: RE: Website Changes
And add infinite scrolling. I never want to reach the end of the page. Ever.
Date: May 14th, 2016 at 2:51:02 AM PST
From: Ed Pratt <[email protected]>
To: [email protected]
Subject: RE: RE: RE: RE: Website Changes
Actually don’t.
-E
Date: May 14th, 2016 at 2:51:02 AM PST
From: Ed Pratt <[email protected]>
To: [email protected]
Subject: RE: RE: RE: RE: RE: Website Changes
What if the homepage was just a picture of a white wall? Think about it. People will be like “Where am I? What’s happening?” and isn’t that the whole point of having a site?
@Design team — white wall it.
-E
Date: May 14th, 2016 at 2:54:32 AM PST
From: Ed Pratt <[email protected]>
To: [email protected]
Subject: RE: RE: RE: RE: RE: RE: Website Changes
This “About Us” section is a fucking mess.
@Design team — white wall it.
Date: May 14th, 2016 at 2:57:21 AM PST
From: Ed Pratt <[email protected]>
To: [email protected]
Subject: RE: RE: RE: RE: RE: RE: RE: Website Changes
You know what? Scrap the words altogether. Words are cliché. You know who hated words? Hemingway.
-E
Date: May 14th, 2016 at 3:17:01 AM PST
From: Ed Pratt<[email protected]>
To: [email protected]
Subject: RE: RE: RE: RE: RE: RE: RE: RE: Website Changes
Hey,
So I’ve been looking over the site a little more and here are a few more changes:
-More words.
-More images.
-Separate pages for each section.
-Add an “About Us” page.
@Design team — pls mock that up and get it over to me by EOM (end of morning).
Thx
-E
Date: May 14th, 2016 at 3:20:12 AM PST
From: Ed Pratt <[email protected]>
To: [email protected]
Subject: RE: RE: RE: RE: RE: RE: RE: RE: RE: Website Changes
Hey guys,
Just took another look at the website. Looks great.
Thanks,
-E | https://medium.com/slackjaw/emails-from-a-ceo-who-just-has-a-few-changes-to-the-website-43ccb7b31709 | ['Amanda Rosenberg'] | 2016-07-09 20:58:10.460000+00:00 | ['Tech', 'Humor', 'Design'] |
I Won’t Be Telling My Daughter to Unlock Her Potential | I was always going to be a ballerina. Unlike every other 6-year-old girl in a pink tutu, I was actually going to make it. The fantasy would dance through my head while I listened to Swan Lake twirling around the lounge room.
Why not me? I was bursting with potential.
I was a unique snowflake, destined for the stage. I would embody grace and the extraordinary potential of the human body.
The reality came crashing down one way or another, in the same way I kept crashing into furniture. I can’t entirely recall the moment but I do remember thinking I didn’t have the body type for it.
The rakishly thin, delicately boned porcelain doll. I was destined to embody a generously sized booty.
Dylan Moran riffs on potential: “You should stay away from your potential. You’ll mess it up! It’s potential, leave it! It’s like your bank balance— you always have much less than you think. You don’t want to find out that the most you could possibly achieve, if you harvested every screed of energy within you, that all you would get to would be maybe eating less cheesy snacks.”
The glut of posts online compelling you to ‘unlock your potential’ is enough to drive anyone into a hole to curl up and be mediocre the rest of your life.
This mysterious box buried deep within you which holds the keys to your best life.
Who wouldn’t be desperately trying to crack the code?
Last night in yoga, a dim-lit room with thunderstorm soundtrack, the teacher with the soothing voice and shaved head was telling us about our in-built search engines.
That in our brains, we have these little Googles constantly searching, ruminating, debating, contemplating.
And the practice of yoga is to be still. Silent. At rest.
I’m sitting cross-legged which is sending shooting pains up my back and I’m trying not to wriggle.
I’m listening to her words but my internal Google is spiralling out of control: that’s interesting, be still, I really need to get better at being still. Maybe I should start mediating again, that would help. I probably need to work out some way to do it, I don’t want Matilda to learn that she always needs to be busy achieving things and..what is this music anyway? Why does yoga music always have to have weird chanting in it?
I’m so conditioned to achieving, improving, that even talk of letting go and being still sends me into action-planning mode, turning it into a large-scale self-improvement project. Maybe I can wake up at 7am every day and track in my bullet journal how still I’m being then I’ll be able to see my progress.
Since becoming a Mum I feel increasingly frustrated by the unicorn of potential, now seemingly even further out of my grasp.
It’s buried under piles of dirty nappies and time lost, shimmering on the horizon while I retrace my steps looking for another sock my baby kicked off 5 kilometres ago.
There’s this German word, fernweh meaning homesickness for a place you’ve never been to. I wonder if that captures how I feel about my potential. I miss it, I haven’t reached it and yet I long to go back to it, like the heady pink tutu days when it was as bounteous as my backside.
Perhaps that maddening unicorn is always out of reach. You climb to the next rung of your potential, and hey! There’s another one. And another, and another. You reach the top, you’re literally the king of the world and there before you is another ladder.
What I’m after is contentment.
The more the days roll into months with my daughter, the more I’ve realised this is life now, it’s happening. I look into her tiny little face while she’s feeding and every now and then feel a flush of deep contentment.
I forget about my plans to study, write a book or finally start that NGO for kids who can’t read good and think, ‘if this is it, this is good.’
I want those moments to punctuate my day, not be the exception.
I don’t want someone with a megaphone shouting down my ear: YOU MUST UNLOCK YOUR POTENTIAL BEFORE YOU DIE!!
I hope my daughter explores her potential. I dearly hope she chases the unicorn, for a time.
But I will be compelling her to be still, let go, enjoy what you’ve got now, because this is life, you’re living it. Don’t wait until you reach that next rung to be content.
But try to eat less cheesy snacks. | https://cherielee-37146.medium.com/i-wont-be-telling-my-daughter-to-unlock-her-potential-c1e71cbf47a0 | ['Cherie Gilmour'] | 2019-10-23 23:56:59.003000+00:00 | ['Mindfulness', 'Self', 'Parenting', 'Productivity', 'Life Lessons'] |
Via Negativa | 2. Reduction adds resilience
In 2009, two pilots from Air France flight 447 were in serious trouble. After flying through some storms earlier during their journey, some sensors on the wing caught ice and stopped working. This by itself, was not a big deal. As with most aviation accidents, it was a series of small errors that ultimately lead to a catastrophic outcome.
When some of the airspeed sensors stopped working, the autopilot disabled itself to give control back to the pilots. They themselves went on and were confronted with an array of different signals that were all simultaneously competing for their attention. A focus on the wrong signals lead to wrong interpretations. This on the other hand lead to wrong decisions that eventually resulted in the plane to stall.
It wasn’t until the very end that the pilots realized what was happening. The last recorded exchange was “Damn it, we’re going to crash… This can’t be happening!.” A few seconds later, the sound of desperate insight turned into fatal silence.
That day, all of the 228 passengers on board of Air France 447 died.
We increasingly depend on automated systems. As long as everything works as expected, they bring us great comfort, increased safety, and reliability. The moment systems become “safer”, however, is the moment we become negligent of what they actually do. Once they fail, we fail spectacularly, or even worse: we crash.
A large number of safety features and signals can easily become so complex that it’s hard to make sense of the overwhelming information. All these well intentioned additions and precautions may have well added to the cause of the failure itself. The more complex something gets, the harder it is to understand what’s happening.
When I started designing websites, life was easy. All you needed was an editor, an HTML, and a CSS file. On days when I felt witty, I even added some JavaScript. Today, I feel like I need to go through a regimen of tasks before I get to write my first line of code.
It usually involves setting up NPM, Webpack, Babel, React JS and all the other technical mumbo jumbo. All these layers come together to build a pyramid of inter-dependent blocks. This approach has greatly increased the efficiency of our craft but came at the cost of fragility. Once a module or dependency breaks, the whole system falls apart. We easily forget that the underlying foundation of the web is plain old simple HTML, CSS, and JS — even if some of the most avant-garde technologists want us to believe they aren’t.
Go to awwwards.com and take a minute to browse through the latest featured websites. The pattern is obvious. Most of them have vivid imagery, fullscreen autoplay videos, parallax-backgrounds, large JavaScript files, and artfully spinning loading indicators that build great suspense for the site’s content to finally appear.
Most websites get awwwarded to do more, not less. There is nothing wrong with that, as long as nothing goes wrong.
More than ever, it feels like we’re building and designing with the assumption that everything just works. That everyone will be using the same class of device, browser, and network connection we have. And how are many of us dealing with this ever increasing challenge? By reducing complexity where it’s easiest: empathy. Instead of designing and building for others, we’re designing and building for slightly altered versions of ourselves.
Through Via Negativa, we are getting into similar territory to what Jeremy Keith calls Resilient Web Design.
Resilient Web Design is about starting with the essence, and enhance from there. Instead of building an array of features simultaneously, you start with what’s needed and augment it with all the nice-to-have’s. When someone uses the crappiest browser we shall not mention by name, the core of the experience still works:
Lots of cool features on the Boston Globe don’t work when JavaScript breaks; “reading the news” is not one of them. Mat Marquis
By forcing us to focus on what’s needed, we ensure that we get our priorities straight. We all prefer boarding a plane that is safe, over a plane that hast the best entertainment system. The same applies to the way we build products — even when it’s not about life or death.
Make no mistake, that doesn’t just apply to the web. It applies to any technology, whether it’s development for iOS, Android or any other platform out there. If you are thinking about replacing a standard OS component or navigation pattern with your own, think again. Reinventing existing components is more expensive than we think. It’s not just that custom code needs to be maintained but users need to learn it too. Overall consistency often trumps individual greatness.
In iOS, swiping to the left is a system wide gesture convention to navigate to an app’s previous screen. In Gmail however, instead of going back to my inbox, it opens my previous email. Gmail behaves like Photos, instead of most other productivity apps. Swiping through emails is a well intentioned idea, but it quickly results in confusion since it clashes with pre-existing OS patterns.
When I started turning my website into a chat, I originally started with a version that was technically much more complex than what I have now. It used Natural Language Understanding so users could speak freely instead of the constrained version I eventually went for. The problem was that it became technically so difficult to manage, that I couldn’t ensure the experience was robust and consistent. I limited the functionality for the sake of the interface’s predictability. Interestingly enough, very few people ever complained about the fact that they couldn’t type whatever they wanted.
Takeaway: A reduction of complexity doesn’t necessarily lead to a reduction of usefulness and delight. By removing what isn’t essential, we’re adding resilience to the things that are. The baseline experience is the same for everyone. From there, everyone is on their own. | https://uxdesign.cc/via-negativa-4bb536f235d5 | ['Adrian Zumbrunnen'] | 2018-08-20 12:20:28.592000+00:00 | ['Product Management', 'Design Thinking', 'Startup', 'User Experience', 'UX'] |
On The Conference Trail: Cryptolina, USAID Blockchain4ID, and Techtonic Summit | There’s this joke that people who think traveling for work is fun, probably don’t travel for work enough. Getting to visit new places on the company’s expense account sounds great in theory, but tight schedules and constant jetlag tend to reduce the fun factor substantially.
I was recently reminded of how challenging business travel can actually be when you’re doing so on a budget. Over a period of 12 days this June, I traveled to 5 destinations, attended over a dozen meetings, and spoke at 3 conferences. I did this with a single 20" Samsonite Evoa spinner and a Nomatic backpack. (For travel nerds, that’s a total of 60 liters of carry and around 16kg of weight, and it included a hundred metal BX8 souvenir coins, a hundred postcards, and our official BloomX tablecloth for conference tables.)
First rule of efficient business travel: carry-on is the only way to go
Day 1: Manila to San Francisco (Travel time: 13 hours)
I landed in SFO at 9:30pm with no accommodations. I had been wanting to try HotelTonight for some time and had decided that I was going to finally give it a shot this trip.
HotelTonight collects all of the hotels that still have spare rooms on a given evening and puts them together for easy perusal. True to their name, there were about a dozen hotels to choose from when I landed, all of them discounted by at least 30%. I picked the cheapest one in the Union Square area, and was checked in about an hour later.
After a very pleasant bespoke Old Fashioned at nearby Biig, I took a nap that unfortunately lasted all of four hours. I listened to the early morning mewling of the Tenderloin as I waited for the sun to come up.
This was the easy part of the trip, I only had two meetings in the city — one of them with our friends at the Stellar Foundation — before I needed to head back to the airport.
Day 2: San Francisco to Los Angeles (Travel time: 1 hour)
I landed in LAX by 10pm and checked in to the Sheraton closest to the airport. I’m not in Los Angeles very often, but whenever I am, I’m reminded of how huge it is. At 1,300 square kilometers, you could fit two Metro Manilas inside its land mass. (Bangkok has it beat though, at 1,500 sq. km.)
All of my appointments the following day were in Santa Monica so I thankfully didn’t have to make my way into the downtown area. I got to meet and hang out with some great folks from Strong Ventures, Wavemaker, Veritoken, and Science Inc.
By this time, the crypto bear market was in full swing, but with perfect LA weather it was hard to be depressed about it.
24 and sunny at the Santa Monica Pier
(Speaking of depression: I was back in LAX by that same evening, waiting for a 10pm red-eye to Raleigh, North Carolina. It was an otherwise great day to be connecting with crypto folks though.)
Day 3: Los Angeles to Raleigh (Travel time: 5.5 hours)
The Raleigh/Durham airport is small relative to the city’s aspirations. It’s one of the final candidates to host the second campuses of both Apple and Amazon, and its Research Triangle area already has major facilities from brands like IBM, Google, and Cisco.
Raleigh is also the site of one of the industry’s longest running conferences, Cryptolina, and Bloom was one of the exhibitors and speakers.
BloomX’s conference table at Cryptolina. I had a few dozen conversations while sitting comfortably behind these chairs, probably my most relaxed conference experience this year.
I got to meet some fine folks over the two conference days, including Jameson Lopp and Tatiana Moroz, the latter of which even interviewed me for her crypto show. I didn’t get to socialize much in the evenings, but I did get to eat a lot of pulled pork and check out a couple of local pubs.
I’m guessing this wasn’t the quintessential North Carolina pub
I was scheduled to speak on the second day of Cryptolina, and I don’t think I’ve ever spent more time and energy mentally preparing myself. Not because of the talk itself, mind you, but because of what was going to happen after.
I spoke about the continuing growth of crypto in the ASEAN region and how we were building the BloomX network to usher in a new generation of crypto fans
My presentation that day was scheduled to end at 3:30pm, and I had a flight out of Raleigh at 7. On its own, that wasn’t a big deal. All I had to do was finish my talk, pack up all our remaining conference merch, say goodbye to some folks, grab my suitcase from my hotel, and head to the airport.
But once I set foot on that plane was when the most ridiculous part of my whole trip was about to begin: now I was heading to Thailand.
Day 6: Raleigh to New York to Hong Kong (Travel time: 24 hours)
When I was planning all this a few weeks ago, it seemed like a funny challenge that only a crazy person would have accepted: I had booked two speaking engagements within 36 hours of each other, in cities that were separated by over 15,000 kilometers. (The first was the two-day Cryptolina conference in Raleigh that I had just finished, and the second one was a three-day private USAID blockchain workshop in Bangkok.)
I suppose I could have just declined one of the invitations, but that solution just seemed so much less interesting than mustering the Herculean (Sisyphean?) energy necessary to do both. And so I packed myself into three consecutive economy-class flights, and spent half the time wondering why I was doing this to myself, and the other half genuinely curious whether I would even survive the whole affair.
Digression №1: Some thoughts on pay lounges
I had a long layover at JFK, prior to transforming myself into a flying sardine for 15 hours, and got to try LoungeBuddy for the first time. As implied by its name, it’s an app for finding and booking pay-lounges in hundreds of terminals around the world, and is a great solution if you’re not lucky enough to have a spot at any of the more exclusive areas. It’s not cheap at $40–50 a pop, but you’ve got food and drinks for four hours, and is theoretically less annoying than having to sit and wait at your gate.
Unfortunately, I found the pay lounges in JFK to be overrun with people, such that it was impossible to find a quiet corner. Perhaps LoungeBuddy has done its job too well, but when I showed up at my chosen lounge, they had stopped accepting walk-ins and were very close to maximum capacity. Not exactly the most relaxing way to wait for your long-haul.
Day 7: Hong Kong to Bangkok (Travel time: 2.5 hours)
I finally landed in Bangkok at 10:30am on Monday morning. I checked in to my hotel, showered and changed, ran to the USAID offices near Sukhumvit, and was giving a presentation on Bloom by 2pm.
I spoke about our experiences running a crypto remittance platform for the past few years, and answered a lot of familiar questions about how it all worked. How do you mitigate volatility risk? Why is this better than using banks and the SWIFT network? How do you reduce costs if you’re converting to and from fiat on both ends of the transfer?
I gave away copies of my book Reinventing Remittances with Bitcoin to the folks who wanted to do a deep dive, although I unfortunately only had half a dozen left. For the rest of the attendees (and really, for anyone else reading this), I gave out links to the fully-illustrated PDF at bloom.solutions/book, or the Amazon Kindle edition.
I remember being light-headed when I first arrived at the conference venue, but adrenaline and a genuine fear of embarrassment helped me power through the rest of the day. The other resource speakers included folks from ConsenSys and Civic Ledger.
I ended up facilitating a couple of remittance-focused breakout sessions and ideating on various blockchain solutions that USAID and its partner organizations could potentially deploy or support in target regions.
USAID, UNESCAP, UNCDF, ILO, and a handful of blockchain specialists
Digression №2: Some thoughts on jetlag
There are tons of techniques to fight jetlag while traveling. Ironically the one that I’ve found that works best for me is not to fight it all, and instead control your itinerary as much as you can.
For me, that means limiting non-Asian destinations to a maximum of 1 week. (Of course when I say “non-Asian,” I really just mean locations that are greater than 4 hours off from your home timezone.) Don’t bother acclimating to US timezones and just accept that you’ll be sleeping terribly, so that when you return to Asia your body doesn’t feel like it’s even left.
In practice, this means avoiding late evening US meetings, because there’s a high likelihood that you’ll be asleep by 8 or 9pm each night.
Day 11: Manila (Total flight time: 3.5 hours)
I arrived in Manila on the heels of a pretty bad typhoon, and was sad to see that the complaints about Grab’s steadily worsening service were true. It took 30 minutes to find a Grab car from the airport at 11pm, and the total cost ended up more expensive than the official airport taxis.
The next morning I was heading to the annual Techtonic Summit in SMX Convention Center. I was scheduled to participate in a mock debate, that funnily enough involved me attacking cryptocurrency and blockchain proponents. The goal of the whole debate format was to create an opportunity for the audience to hear both sides of the cryptocurrency discussion, instead of just hearing the unbridled enthusiasm.
Basically it was the polar opposite of the last two speaking engagements I’d just been on, which was a weird change of pace.
The Blockchain debate at Techntonic Summit (photo: @bitpinas)
Of course, four years worth of defending crypto meant that I was basically an encyclopedia for every criticism ever levied against our community, and it was an interesting experience to temporarily embody the opposition.
After the debate, I went straight home and entered into a 12-hour coma.
Digression №3: Some Numbers
As a quick recap: my total displacement over 12 days was 35,000 kilometers via 7 separate flights. I spent a little over 43 hours in the air, and carried 60 liters of luggage.
It’s hard to recommend airlines or hotels because half of my flights and accommodations were free, either through my mileage program or paid for by the conference, so I wouldn’t be giving a very accurate analysis of price vs. performance. I can however wholeheartedly recommend HotelTonight, and will try it again in every city that it covers. I can give only a middling recommendation for LoungeBuddy though. It really needs a more real-time view of how many people are currently in a given lounge.
I’m not sure if I’ll ever attempt a trip that’s quite as hectic as this again, but it was certainly a memorable two weeks. I’m already packing for the next one!
### | https://medium.com/bloomx/on-the-conference-trail-cryptolina-usaid-blockchain4id-and-techtonic-summit-33cf9d3e27c2 | ['Luis Buenaventura', 'Bloomx'] | 2018-06-26 06:19:48.017000+00:00 | ['International Development', 'Cryptocurrency', 'Blockchain', 'Travel', 'Startup'] |
#50 Growth Weekly News | Learn the latest news in Digital Analytics world and join us on this week’s stories in the comments.
Last blog post for 2019 and we wanted to make it a little bit Christmassy.🎅🏼 May the new year brings us health, joy and exciting new updates from Google. So, grab your cocoa, sit next to your fireplace and enjoy the last stories for this year…
Last week in summary: A quick flashback in Google Cloud’s year, along with an algorithm walk-through to save memory for count-distinct calculations. Also, BigQuery Reservations is now available in beta for US and EU regions. | https://medium.com/growth-analytics/50-growth-weekly-news-6cd5b5471eb5 | ['Alexandra Poulopoulou'] | 2019-12-19 10:47:49.757000+00:00 | ['Google Cloud Platform', 'Holiday Shopping', 'News', 'Google Analytics', 'Cloud Computing'] |
Selecting Subsets of Data in Pandas: Part 4 | The dreaded SettingWithCopy warning when doing chained indexing
This is the fourth and final part of the series “Selecting Subsets of Data in Pandas”. Pandas offers a wide variety of options for subset selection, which necessitates multiple articles. This series is broken down into the following topics.
Become an Expert
If you want to be trusted to make decisions using pandas, you must become an expert. I have completely mastered pandas and have developed courses and exercises that will massively improve your knowledge and efficiency to do data analysis.
Master Data Analysis with Python — My comprehensive course with 800+ pages, 350+ exercises, multiple projects, and detailed solutions that will help you become an expert at pandas.
Get a sample of the material by enrolling in the free Intro to Pandas course.
Learning what not to do
In all programming languages, and especially in pandas, the number of incorrect or inefficient ways to complete a task far exceeds the number of efficient or idiomatic ones. The term idiomatic refers to code that is efficient, easy to understand, and a common way (among experts) to accomplish a task in a particular library/language.
The first three parts of this series showed the idiomatic way of making selections. In this section, we will cover the most common ways users incorrectly make subset selections. Some of these bad habits might be perfectly acceptable in other Python libraries, but will be unacceptable with pandas.
Getting the right answer with the wrong code
One of the issues that prevents users from learning idiomatic pandas is that it is still possible to get the correct final result while using highly inefficient and non-idiomatic code. Completing a task isn’t necessarily an indication that your code is well written.
Here are a few reasons why a solution that gives the correct result might not be good:
A slow solution might not scale to larger data
A solution might work in this particular instance but fail with slightly different data
A solution might be very fast, but hard for others to interpret
Chained indexing with lists
Chained indexing is the first and most important subset selection problem we will discuss. Chained indexing occurs whenever one subset selection immediately follows another.
To help simplify the idea, we will look at chained indexing with Python lists. Let’s first create a list of integers:
>>> a = [1, 5, 10, 3, 99, 5, 8, 20, 40]
>>> a
[1, 5, 10, 3, 99, 5, 8, 20, 40]
Let’s make a normal subset selection by slicing from integer location 2 to 6.
>>> a[2:6]
[10, 3, 99, 5]
Chained indexing occurs whenever we make another subset selection immediately following this one. Let’s select the first element from this new list in a single line of code:
>>> a[2:6][0]
10
This is an example of chained indexing.
Master Data Analysis with Python
Master Data Analysis with Python is an extremely comprehensive course that will help you learn pandas to do data analysis.
I believe that it is the best possible resource available for learning how to do data analysis with pandas, and I provide a 30-day 100% money back guarantee if you are not satisfied.
Assigning a new value to a list with chained indexing
Let’s say, we wanted to change this value that was selected from above from 10 to 50. Let’s attempt to do this assignment with chained indexing:
>>> a[2:6][0] = 50
>>> a
[1, 5, 10, 3, 99, 5, 8, 20, 40]
Nothing happened???
As you can see, the list a was not modified at all. The reason for this is that Python created a temporary intermediate list directly after the first subset selection. It might be easier to write out the execution of the above as two separate steps:
>>> a_temp = a[2:6]
>>> a_temp[0] = 50
>>> a_temp
[50, 3, 99, 5]
>>> a
[1, 5, 10, 3, 99, 5, 8, 20, 40]
The temporary object was the only one modified
As you can see, the intermediate object, a_temp , was the only object modified. The original was left untouched.
But doesn’t Python modify objects that are the ‘same’?
Let’s take a look at a closely related example where Python will modify two variables at the ‘same’ time. Let’s create a new list, a1 and set it equal to b1 and then modify the first element of it:
>>> a1 = [0, 1, 2, 3]
>>> b1 = a1
>>> b1[0] = 99
>>> b1
[99, 1, 2, 3]
>>> a1
[99, 1, 2, 3]
Both have changed!?!
In the above example, we made a single call to change the first element of b1 . This modified both b1 and a1 . This happened because a1 and b1 are referring to the exact same object in memory. a1 and b1 are simply names that are used to refer to the underlying objects, which in this case, are the same.
Proof they are the same with the id function
We can prove that a1 and b1 are referring to the same object with the built-in id function, which returns the memory address of the object.
>>> id(a1)
4501625608
>>> id(b1)
4501625608
>>> a1 is b1
True
So, why did our assignment with chained indexing fail?
Whenever you take a slice of a list, Python creates a brand new copy (a shallow copy, to be exact) of the data. A copy of an object is completely unrelated to the original and has its own place in memory.
Whenever we write a[2:6] , the result of this is a brand new list object in memory unrelated to the list a . The statement a[2:6][0] = 50 does actually make an assignment to that temporary list copy, but it is not saved to a variable, so there is no way to track it.
To properly assign 50 to the element at index 2 of the list a, you would simply write a[2] = 50 instead of using chained indexing.
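For example, the direct assignment modifies the original list as expected:

>>> a = [1, 5, 10, 3, 99, 5, 8, 20, 40]
>>> a[2] = 50
>>> a
[1, 5, 50, 3, 99, 5, 8, 20, 40]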
Shallow vs Deep Copy (Advanced)
You can safely skip this section as it won't be relevant to our subset selection. Python creates a shallow copy when slicing a list. If you have mutable objects within your list, those inner objects won't be copied and will still refer to the same objects.
Let’s create a list with a another list inside of it.
>>> a = [7, [1, 2], 5, 6, 10, 14, 19, 20]
>>> a
[7, [1, 2], 5, 6, 10, 14, 19, 20]
Let’s take a slice of this list:
>>> a_slice = a[1:4]
>>> a_slice
[[1, 2], 5, 6]
Let’s look at the id of the inner list from both a and a_slice
>>> id(a[1]) == id(a_slice[0])
True
They are the exact same! Python has created a shallow copy here, meaning every mutable object inside of the slice is still the same as it was in the original.
Making an assignment to the inner list
Let’s change the first value in the inner list of a_slice and see if it changes the inner list of a :
>>> inner_list = a_slice[0]
>>> inner_list
[1, 2]
>>> inner_list[0] = 99
>>> a_slice
[[99, 2], 5, 6]
>>> a
[7, [99, 2], 5, 6, 10, 14, 19, 20]
The inner list for both variables had its first element changed. That inner list was never copied when taking the first slice and therefore exists in only one place in memory.
Chained indexing assignment in one step
We can modify this inner list in a single chain of indexing in one line of code.
>>> # output our current list
>>> a
[7, [99, 2], 5, 6, 10, 14, 19, 20]
>>> a[1:5][0][0] = 1000
>>> a
[7, [1000, 2], 5, 6, 10, 14, 19, 20]
Using the copy module to create a deep copy
The standard library comes with the copy module to make a deep copy of your object. A deep copy creates a copy of every single mutable object within your object.
Let’s re-run the code from a couple sections above where we take a slice of a list containing an inner list and this time make a deep copy before checking the id of each inner list.
>>> import copy
>>> a = [7, [1, 2], 5, 6, 10, 14, 19, 20]
>>> a_slice = copy.deepcopy(a[1:4])
>>> id(a[1]) == id(a_slice[0])
False
Chained Indexing in pandas
Chained indexing happens analogously with pandas DataFrames and Series. Whenever you do two (or more) subset selections one after the other, you are doing chained indexing. Note that this isn't 100% indicative that you are doing something wrong, but in the vast majority of cases I have seen, it is.
Let’s walk through several examples of chained indexing on a pandas DataFrame. To simplify matters, we will use some fake data on a small DataFrame.
>>> import pandas as pd
>>> df = pd.read_csv('../../data/sample_data.csv', index_col=0)
>>> df
Chained Indexing Example 1
Let’s select the columns food , age , and color and then immediately select just age using chained indexing:
>>> df[['food', 'age', 'color']]['age']
Jane 30
Niko 2
Aaron 12
Penelope 4
Dean 32
Christina 33
Cornelia 69
Name: age, dtype: int64
It might be easier to store each selection to a variable first:
>>> a = ['food', 'age', 'color']
>>> b = 'age'
>>> df[a][b]
Jane 30
Niko 2
Aaron 12
Penelope 4
Dean 32
Christina 33
Cornelia 69
Name: age, dtype: int64
Chained Indexing Example 2
Let’s use .loc to select Niko and Dean along with state , height , and color . Then, let's chain just the indexing operator to select height and color .
Some code snippets, such as this one, appear as images in the original article in order to fit more characters on a single line.
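Written out in full, the chained expression is the same one flagged as bad in the idiomatic rewrite further below:

>>> df.loc[['Niko', 'Dean'], ['state', 'height', 'color']][['height', 'color']]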
That’s a lot of brackets in the above expression. Let’s separate each selection into their own variables. Below, the variable a is technically a two-item tuple of lists.
>>> a = ['Niko', 'Dean'], ['state', 'height', 'color']
>>> b = ['height', 'color']
>>> df.loc[a][b] # outputs same as previous
Chained Indexing Example 3
Let’s use .iloc first to select the rows 2 through 5 and then chain it again to select the last three columns.
>>> df.iloc[2:5].iloc[:, -3:]
Chained Indexing Example 4
Let’s select the rows Aaron , Dean , and Christina with .loc and then the columns age and food with just the indexing operator.
>>> df.loc[['Aaron', 'Dean', 'Christina']][['age', 'food']]
Chained Indexing Example 5
Select all the rows with age greater than 10 with just the indexing operator and then select the score column.
>>> df[df['age'] > 10]['score']
Jane 4.6
Aaron 9.0
Dean 1.8
Christina 9.5
Cornelia 2.2
Name: score, dtype: float64
Identifying Chained Indexing
First, all the examples from above are things you should strive to avoid. All the selections from above could have been reproduced in a much simpler and more direct manner.
As mentioned previously, chained indexing occurs whenever you use the indexers [] , .loc , or .iloc twice in a row.
If you are having trouble identifying chained indexing you can look for the following:
A closed bracket followed by an open bracket — Look for ][ as with df[a][b]
.loc or .iloc following a closed bracket like in example 3: df.iloc[2:5].iloc[:, -3:]
Another way to determine if you have chained indexing is if you can break the operation up into two lines. For instance, df[a][b] can be broken up into:
>>> df1 = df[a]
>>> df1[b]
Thanks to Tom Augspurger for the first bullet. See his blog post on indexing for more.
Making the examples idiomatic
Let’s re-write all of the above examples idiomatically.
Chained Indexing Example 1 — Idiomatic
>>> # df[['food', 'age', 'color']]['age'] - bad
>>> df['age']
Jane 30
Niko 2
Aaron 12
Penelope 4
Dean 32
Christina 33
Cornelia 69
Name: age, dtype: int64
Chained Indexing Example 2 — Idiomatic
>>> # df.loc[['Niko', 'Dean'], ['state', 'height', 'color']][['height', 'color']] - bad
>>> df.loc[['Niko', 'Dean'], ['height', 'color']]
Chained Indexing Example 3 — Idiomatic
>>> # df.iloc[2:5].iloc[:, -3:] - bad
>>> df.iloc[2:5, -3:]
Chained Indexing Example 4 — Idiomatic
>>> # df.loc[['Aaron', 'Dean', 'Christina']][['age', 'food']] - bad
>>> df.loc[['Aaron', 'Dean', 'Christina'], ['age', 'food']]
Chained Indexing Example 5 — Idiomatic
>>> # df[df['age'] > 10]['score'] - bad
>>> df.loc[df['age'] > 10, 'score']
Jane 4.6
Aaron 9.0
Dean 1.8
Christina 9.5
Cornelia 2.2
Name: score, dtype: float64
Why is chained indexing bad?
There are two primary reasons that chained indexing should be avoided if possible.
Two separate operations
The first, and less important reason, is that two separate pandas operations will be called instead of just one.
Let’s take example 4 from above:
>>> df.loc[['Aaron', 'Dean', 'Christina']][['age', 'food']]
When this code is executed, two independent operations are completed. The following is run first:
>>> df.loc[['Aaron', 'Dean', 'Christina']]
The result of this is a DataFrame, and on this temporary and intermediate DataFrame the second and final operation is run to select two columns: [['age', 'food']] .
Let’s see this operation written idimoatically:
>>> df.loc[['Aaron', 'Dean', 'Christina'], ['age', 'food']]
There is exactly one operation, a call to the .loc indexer that is passed both the row and column selections simultaneously.
SettingWithCopy warning on assignment
The major problem with chained indexing arises when assigning new values to the subset, in which case pandas will usually emit the SettingWithCopy warning.
Let’s use example 5 from above with its chained indexing version to change all the scores of those older than 10 to 99.
>>> df[df['age'] > 10]['score'] = 99

/Users/Ted/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
The SettingWithCopy warning was triggered. Let's output the DataFrame to see if the assignment happened correctly.
>>> df
If you are enjoying this article, consider purchasing the All Access Pass! which includes all my current and future material for one low price.
Failed Assignment!
Our DataFrame failed to make the assignment. Let’s break this operation up into two steps to give us more insight into what is happening.
>>> df_temp = df[df['age'] > 10]
>>> df_temp['score'] = 99

/Users/Ted/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
The warning is triggered yet again even though the code now spans two lines. df_temp is a subset of a DataFrame, and any assignment made to such a subset is liable to trigger the warning. It doesn't matter whether it happens on the same line or not.
Let’s take a look at df_temp :
>>> df_temp
The assignment completed correctly for the intermediate DataFrame but not for our original. The reason for this, is the same reason as to why the chained indexing did not work for the list at the top of this tutorial.
When we run df[df['age'] > 10], pandas creates an entirely new copy of the data. So when we try to assign to the score column, we are modifying this new copy and not the original. Thus, the name SettingWithCopy makes sense: pandas is warning you that you are setting (making an assignment) on a copy of a DataFrame.
How to assign correctly?
You should never use chained indexing to make an assignment. Instead, make exactly one call to one of the indexers. In this case, we can use .loc to properly make the selection and assignment.
>>> df.loc[df['age'] > 10, 'score'] = 99
>>> df
SettingWithCopy example that does assignment
Let’s do another nearly identical chained indexing as the previous example, except we will reverse the order of the chain. We will first select the score column and use boolean indexing to choose the people older than 10 and assign them a score of 0.
First, we will just make the selection (without assignment) so you can see what we are trying to assign.
>>> df['score'][df['age'] > 10]
Jane 99.0
Aaron 99.0
Dean 99.0
Christina 99.0
Cornelia 99.0
Name: score, dtype: float64
Now, let’s make the assignment:
>>> df['score'][df['age'] > 10] = 0

/Users/Ted/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
The warning is triggered again. Let’s output the DataFrame:
>>> df
What happened this time?
The first part of the above operation selects the score column. When pandas selects a single column from a DataFrame, it creates a view and not a copy. A view means that no new data has been created; df['score'] references the data of the score column in the original DataFrame.
This is analogous to the list example where we assigned an entire list to a new variable. No new object is created, just a new reference to the one already in existence.
Since no new data has been created, the assignment will modify the original DataFrame.
Why is a warning triggered when our operation completed successfully?
Pandas does not know if you want to modify the original DataFrame or just the first subset selection.
For instance, you could have selected the score column as a Series to do further analysis with it without affecting the original DataFrame.
Let’s get a fresh read of our data and see this example:
>>> df = pd.read_csv('../../data/sample_data.csv', index_col=0)
>>> s = df['score']
>>> s
Jane 4.6
Niko 8.3
Aaron 9.0
Penelope 3.3
Dean 1.8
Christina 9.5
Cornelia 2.2
Name: score, dtype: float64
Let’s set all the values of scores that are greater than 5 to 0.
>>> s[s > 5] = 0

/Users/Ted/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
Why was the warning triggered here?
This last assignment does not use chained indexing. But, the variable s was created from a subset selection of a DataFrame. So it's really not any different than doing the following:
>>> df['score'][df['score'] > 5] = 0
Pandas can’t tell the difference between an assignment like this in a single line versus one on multiple lines. Pandas doesn’t know if you want the original DataFrame modified or not. Let’s take a look at both s and df to see what has happened.
>>> s
Jane 4.6
Niko 0.0
Aaron 0.0
Penelope 3.3
Dean 1.8
Christina 0.0
Cornelia 2.2
Name: score, dtype: float64
s was modified as expected. But, what about our original?
>>> df
Our original DataFrame has been modified, which means that s is a view and not a copy.
Why is the warning message so useless?
Let’s take a look at what the warning says:
A value is trying to be set on a copy of a slice from a DataFrame
I’m not sure what copy of a slice actually means, but it isn’t what we had in the previous example. s was a view of a column of a DataFrame and not a copy.
What the warning should really say
A better message would look something like this:
You are attempting to make an assignment on an object that is either a view or a copy of a DataFrame. This occurs whenever you make a subset selection from a DataFrame and then try to assign new values to this subset.
Summary of when the SettingWithCopy warning is triggered
In summary, whenever you make a subset selection and then modify the values in that subset selection, you will likely trigger the SettingWithCopy warning.
It might help to see one more example of when the SettingWithCopy is triggered.
Let’s begin by selecting two columns from df into a new variable:
>>> df1 = df[['color', 'age']]
>>> df1
Let’s display the age column from this new DataFrame:
>>> df1['age'] # no warning for output, there is no assignment here
Jane 30
Niko 2
Aaron 12
Penelope 4
Dean 32
Christina 33
Cornelia 69
Name: age, dtype: int64
Let’s add a new column weight :
>>> df1['weight'] = [150, 30, 120, 40, 200, 130, 144]

/Users/Ted/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
We triggered the warning because df1 is a subset selection from df and we subsequently modified it by adding a new column.
>>> df1
The original is left unchanged.
>>> df
Let’s continue and change all the ages to 99, which will again trigger the warning.
>>> df1['age'] = 99

/Users/Ted/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.

>>> df1
The original was also left unchanged:
>>> df
Master Python, Data Science and Machine Learning
Immerse yourself in my comprehensive path for mastering data science and machine learning with Python. Purchase the All Access Pass to get lifetime access to all current and future courses. Some of the courses it contains:
Exercise Python — A comprehensive introduction to Python (200+ pages, 100+ exercises)
Master Data Analysis with Python — The most comprehensive course available to learn pandas. (800+ pages and 300+ exercises)
Master Machine Learning with Python — A deep dive into doing machine learning with scikit-learn constantly updated to showcase the latest and greatest tools. (300+ pages)
Get the All Access Pass now!
How does pandas know to even to trigger the warning?
In the above example, we created df1 , which when modifying the age column, triggered the warning. How did pandas know to do this?
df1 was created by df[['color', 'age']] . During this selection, pandas alters the is_copy or _is_view attributes.
If we call is_copy like a method, we will get back the object it was copied from if it is a copy; otherwise None is returned. Let's see its value for df1:
>>> df1.is_copy()
The private attribute _is_view is a boolean:
>>> df1._is_view
False
Let’s check these same attributes for df . Since df was read in directly from a csv it should not be a copy or a view. If it indeed is a copy, then it will return None .
>>> df.is_copy is None
True
>>> df._is_view
False
Let’s find out if a single column as a Series is a view or a copy.
>>> food = df['food']
>>> food.is_copy is None # not a copy
True
>>> food._is_view
True
Selecting a single column returns a view and not a copy
False Negatives with SettingWithCopy with .loc and .iloc
Unfortunately, when using chained indexing where .loc and .iloc are used as the first indexer, the warning will not get triggered reliably. Let's take a look at an example where no warning is triggered and no change is made to the data.
Let’s change the ages of Niko and Dean to 99.
>>> df = pd.read_csv('../../data/sample_data.csv', index_col=0)
>>> df.loc[['Niko','Dean']]['age'] = 99
>>> df
Let’s make a slight change and use slice notation to select all the names from Niko through Dean instead.
>>> df.loc['Niko':'Dean']['age'] = 99

/Users/Ted/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.

>>> df
WTF????
By changing from a list to a slice within .loc , the warning is triggered AND the DataFrame is modified. This is craziness.
Some good news
None of the stuff that we have done for SettingWithCopy needs to be memorized. Even I don't know whether a given subset selection will return a view or a copy. You don't have to worry about any of that.
Two common scenarios
You will almost always find yourself in one of two scenarios:
1. You want to work with the entire DataFrame and modify a subset of it
2. You want to work with a subset of your original DataFrame and modify that subset
Scenario 1 is solved by using exactly one indexer to make the selection and assignment.
Scenario 2 is solved by forcing a copy of your subset selection with the copy method. This will allow you to make changes to this new DataFrame without modifying the original.
Scenario 1 — Working with the entire DataFrame
When you are doing an analysis on a single DataFrame and want to work only on this DataFrame in its entirety then you are in scenario 1.
For instance, if we want to change the color of all the people who live in Texas to maroon, we would do this in a single call to the .loc indexer.
>>> df = pd.read_csv('../../data/sample_data.csv', index_col=0)
>>> df.loc[df['state'] == 'TX', 'color'] = 'maroon'
>>> df
We add 5 to the score of Jane , Dean , and Cornelia like this:
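The snippet for this appears as an image in the original article; a minimal sketch of the same single-indexer assignment looks like this:

>>> df.loc[['Jane', 'Dean', 'Cornelia'], 'score'] += 5
>>> df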
We can change a single cell, such as the age of Aaron to 15:
>>> df.loc['Aaron', 'age'] = 15
>>> df
All of these examples involve simultaneous selection of rows and columns, which are the most tempting to do chained indexing on.
Scenario 2
In scenario 2, we would like to select some data from our original DataFrame and do an independent analysis on it separate from the original. For instance, let’s say we wanted to select just the food and height columns into a separate DataFrame.
Let’s go ahead and make this selection:
>>> df1 = df[['food', 'height']]
>>> df1
As we saw from above, this DataFrame is still linked to the original DataFrame. We can check the is_copy attribute.
>>> df1.is_copy()
Use the copy method
To get an independent copy, call the copy method like this:
>>> df1 = df[['food', 'height']].copy()
>>> df1.is_copy is None
True
df1 is no longer linked to the original in any way. We can now modify one of its columns without getting the SettingWithCopy warning.
Let’s change the height of every person to 100:
>>> df1['height'] = 100
>>> df1
Avoiding ambiguity and complexity
We will now move away from the SettingWithCopy warning and cover some subset selections that I tend to avoid. These are selections that I personally find either ambiguous or that add complexity to pandas without adding any additional functionality.
However, this does not mean that these subset selections are wrong. They were built into pandas for a purpose and many others make use of them and don’t have a problem using them. So, it will be up to you whether or not you decide to use these upcoming subset selections.
Selecting rows with just the indexing operator
The primary purpose of just the indexing operator is to select column(s) by passing it a string or list of strings. Unexpectedly, this operator completely changes behavior whenever you pass it a slice. For instance, we can select every other row beginning with integer location 1 to the end like this:
>>> df = pd.read_csv('../../data/sample_data.csv', index_col=0)
>>> df[1::2]
Even stranger is that you can make selections by row label as well. For instance, we can select Niko through Dean like this:
>>> df['Niko':'Dean']
Even more bizarre….partial string subset selection
The most bizarre thing is that you can use partial string matches on the index when using a slice with just the indexing operator. For this to work, the index will need to be sorted. Call the sort_index method to do so:
>>> df_sort = df.sort_index()
>>> df_sort
Now, you can use slice notation with partial strings. For instance, if we wanted to select names that begin with 'C' and 'D', we would do the following:
>>> df_sort['C':'E']
Technically this slices from the exact label ‘C’ to the exact label ‘E’.
I never do this
I never pass slices to just the indexing operator, as in these examples, for the following reasons:
It is confusing for there to be multiple modes of operation for an operator — one for selecting columns and the other for selecting rows
It is not explicit — slicing can happen by both integer location and by label.
If I want to slice rows, I always use .loc/.iloc
The .loc/.iloc indexers are explicit and there will be no ambiguity with them. This follows from the Zen of Python that explicit is better than implicit.
Scalar selection with .at/.iat
Both .at/.iat are indexers that pandas has available to make a selection of one and only one single value. Each of these indexers works analogously to .loc/.iloc .
.at makes its selection only by label and .iat selects only by integer location. They each select a single value. The term scalar is used to refer to a single value.
Let’s take a look at an example of each. First, let’s select Dean’s age.
>>> df.at['Dean', 'age']
32
Notice that a scalar value was returned and not a pandas Series or DataFrame.
We can select the cell at the 5th row and 2nd column like this:
>>> df.iat[5, 2]
'Melon'
What is the purpose of .at/.iat ?
These indexers offer no additional functionality. You can select a single value with .loc or .iloc. But what they do offer is a performance improvement, albeit a minor one. So, if there is a performance-critical part of your code that does lots of scalar selections, you could use .at/.iat.
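As a rough illustration (exact numbers will vary by machine and pandas version), you can compare the two lookups yourself with the timeit module:

>>> import timeit
>>> timeit.timeit(lambda: df.loc['Dean', 'age'], number=10_000)
>>> timeit.timeit(lambda: df.at['Dean', 'age'], number=10_000)

The .at version typically comes back faster, but for most code the difference is negligible.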
Personally, I never use them as they add needless complexity to the library for a small performance gain.
Summary
Idiomatic pandas is efficient, easy to read, and common among experts
It is easy to write non-idiomatic pandas — lots of ways to do the same thing
Getting the right answer does not guarantee that you are using pandas correctly
Assignment with chained indexing with a list does not work: a[2:6][0] = 5 - NEVER DO THIS!
Assigning an entire list to a new variable does NOT create a new copy. Both variable names will refer to the same underlying object
Slicing a list creates a shallow copy
To make new copies of any mutable objects within a list, you need to do a deep copy
Chained indexing happens when you make two successive subset selections one directly after the other
Avoid chained indexing in pandas
Identify chained indexing — closed then open brackets ][ or .loc/.iloc following a bracket like this: df[a].loc[b]
Another way to identify chained indexing is if you can break up the indexing into 2 lines of code.
Chained indexing is bad because it uses two calls to the indexers instead of one and more importantly triggers the SettingWithCopy warning when doing an assignment
The SettingWithCopy warning gets triggered when you make a subset selection and then try to assign new values within this selection. i.e. you have done chained indexing!
Chained indexing is the cause of the SettingWithCopy warning. Chained indexing can happen in the same line, consecutive lines, or two lines very far apart from each other.
Whenever you make a subset selection, pandas creates either a view or a copy
A view is a reference to the original DataFrame. No new data has been created.
A copy means a completely new object with new data that is unlinked from the original DataFrame
The reason the SettingWithCopy warning exists, is because you might be trying to make an assignment that actually fails or you might be modifying the original DataFrame without knowing it.
Regardless of why the SettingWithCopy warning was triggered, you should not ignore it.
You need to go back and understand what happened and probably rewrite your pandas so you don't trigger the warning
One of the most common triggers of the SettingWithCopy warning is when you do boolean selection and then try to set the values of a column like this: df[df['col a'] > 5]['col b'] = 10
Fix this chained assignment with .loc or .iloc like this: df.loc[df['col a'] > 5, 'col b'] = 10
You can use is_copy or the private attribute _is_view to determine if you have a view or a copy
It is not necessary to know whether you have a view or a copy to write good pandas, because you will likely be in one of two scenarios.
In Scenario 1, you will be working on one DataFrame and want to change values in it and continue to use it in its entirety
In Scenario 2, you will make a subset selection, store it to a variable, and want to continue working on it independently from the original data.
To avoid the SettingWithCopy warning, use .loc/.iloc for scenario 1 and the copy method with scenario 2.
Personally, I avoid using just the indexing operator to select rows with slices because it is ambiguous.
I also avoid using .at/.iat because it adds unneeded complexity and adds no increased functionality.
Master Python, Data Science and Machine Learning
Immerse yourself in my comprehensive path for mastering data science and machine learning with Python. Purchase the All Access Pass to get lifetime access to all current and future courses. Some of the courses it contains:
Exercise Python — A comprehensive introduction to Python (200+ pages, 100+ exercises)
Master Data Analysis with Python — The most comprehensive course available to learn pandas. (800+ pages and 350+ exercises)
Master Machine Learning with Python — A deep dive into doing machine learning with scikit-learn constantly updated to showcase the latest and greatest tools. (300+ pages)
Get the All Access Pass now!
Time Series Analysis; Applying ARIMA Forecasting Model to the U.S. Unemployment Rate Using Python
In this thread, I'm going to apply the ARIMA forecasting model to the U.S. unemployment rate as time-series data. I'll also include the Python code I used to run the model (in the Jupyter Notebook IDE). At the end of this thread, I link two YouTube videos for training purposes.
Step 1- Data preparation
First, we need to import the necessary dependencies into the Jupyter Notebook. The first group of libraries is needed for data manipulation, and the second set is needed for model development.
Import dependencies for data manipulation
Dependencies to develop the ARIMA forecasting model
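The exact import cells appear as images in the original post, so which libraries the author actually loaded is not shown here; treat the following as an assumption-laden sketch of the typical imports rather than the exact cells:

# Data manipulation
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Model development
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX
from pmdarima import auto_arima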
Data for the three cases were pulled from the Economic Research services of the Federal Reserve Bank of St. Louis (fred.stlouisfed.org). The data include the seasonally adjusted unemployment rates for the United States, the State of Oregon, and the State of Nevada. I put the links to the data at the bottom of the thread; however, you can also download the data in CSV format from the GitHub user content link here.
Importing data to Jupyter Notebook
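A sketch of the loading step is shown below; the file name and column names are placeholders, so adjust them to match the CSV you downloaded from FRED:

# Load the seasonally adjusted unemployment rates into a DataFrame
df = pd.read_csv('unemployment_rates.csv', parse_dates=['DATE'], index_col='DATE')
df.head()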
Step 2- Data Visualization
The following line graph displays the unemployment rate trends for the three time series: the U.S., Oregon, and Nevada. I also added some annotations above the line graph that may provide more information for readers. It's just a sample of producing a line graph using Python, the matplotlib package, and the ggplot style.
Line graph for three unemployment rates in the U.S., State of Oregon and Nevada
Stationarity
Time-series data should be stationary. A stationary series is one whose properties [mean, variance, and covariance] do not change over time. Note that series with seasonality or trends are not stationary because their values depend on when they are observed [e.g., the temperature in winter is always low]. In principle, time-series data should be stationary before running the ARIMA forecasting model. Autocorrelation plots and order differencing are used to visualize the stationarity status. As displayed in the graph below, the autocorrelation of the State of Nevada dataset shows that it is nonstationary. This means the Nevada series changes over time: its mean, variance, and covariance are not constant, so this stationarity issue should be resolved before using the data. Toward this end, first- and second-order differencing were applied to display the stationarity of the dataset, with the first-order difference taken by shifting the series down 6 rows and the second-order difference by shifting it down 12 rows. Both orders are stationary, but the second order is more stationary than the first.
Autocorrelation and two orders of differencing to test the stationarity
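A sketch of how such a check is commonly coded is shown below; the column name is a placeholder and the exact lags used in the post may differ:

# Augmented Dickey-Fuller test for stationarity
result = adfuller(df['NEVADA'].dropna())
print(f'ADF statistic: {result[0]:.3f}, p-value: {result[1]:.3f}')

# Autocorrelation plot and two orders of differencing
plot_acf(df['NEVADA'].dropna())
first_diff = df['NEVADA'].diff().dropna()
second_diff = df['NEVADA'].diff().diff().dropna()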
ARIMA Parameters
AR(p) Auto Regression: This component refers to using the past values as its own lags in a linear regression model for the prediction.
I(d) Integration: It uses differencing of observations, subtracting an observation from the one at the previous step, to make the time series stationary.
MA(q) Moving Average: It models the dependency between an observation and the residual errors from a moving average model applied to lagged observations.
A basic model was run with order (p,d,q) = (1,1,1) for the U.S. unemployment rate data. As noted, the U.S. unemployment dataset was stationary, so this first model yields a reasonable result. The AIC obtained was approximately 776 for the first model.
The left graph below displays the KDE of the residuals. Kernel Density Estimation (KDE) is a non-parametric way to estimate the probability density function (PDF) of a random variable. This function uses Gaussian kernels and includes automatic bandwidth determination.
Residual Results for the KDE Test
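A sketch of fitting this baseline model with statsmodels and inspecting its residuals might look like the following; the series name is a placeholder, not the column used in the post:

# Fit ARIMA(1,1,1) on the U.S. unemployment rate series
us_rate = df['UNRATE']
model = ARIMA(us_rate, order=(1, 1, 1))
fitted = model.fit()
print(fitted.summary())  # the summary reports the AIC (~776 in the post)

# Residual diagnostics, including the KDE plot
residuals = pd.DataFrame(fitted.resid)
residuals.plot(kind='kde')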
Step 3- Running ARIMA model
Types of Forecasting
Univariate Forecasting: in this method, the forecasting model is applied to a single time-series dataset. In this thread, the stationary time series is modelled with a univariate forecasting model.
Multivariate Forecasting [exogenous variables]: this sort of forecasting model relies on a multivariate dataset. In other words, each time series has dependencies on other time series, such as forecasting hourly weather based on temperature, pressure, wind speed, and wind direction.
The U.S. unemployment rate dataset was divided into training and test sets: 80% of the data was used for training and the remaining 20% for testing. The result demonstrates the forecast from 2012 to 2020. It means that, regardless of the seasonal variation, the U.S. unemployment rate moves within the forecast endpoints.
Line Graph for the Forecasting, Actual, and Training Unemployment Dataset
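A sketch of the split-and-forecast step, continuing with the same placeholder names as above:

# 80/20 train/test split on the U.S. series
train_size = int(len(us_rate) * 0.8)
train, test = us_rate[:train_size], us_rate[train_size:]

# Fit on the training data and forecast over the test period
model = ARIMA(train, order=(1, 1, 1))
fitted = model.fit()
forecast = fitted.forecast(steps=len(test))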
The next model was created with order (p,d,q) = (2,0,1). This model was selected as the best fit because the U.S. unemployment rate series was already stationary and did not need the differencing component. Three lines are visualized in this graph: the training data, the actual data, and the forecast. Running the AutoARIMA search produced the most optimized result with the lowest AIC; the AIC obtained was 775.9 for the optimized model.
ARIMA Model Statistics
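A sketch of the automated order search with pmdarima; the non-seasonal setting here is an assumption:

# Let auto_arima search for the lowest-AIC order
stepwise_model = auto_arima(us_rate, seasonal=False, trace=True)
print(stepwise_model.order)  # the post reports (2, 0, 1) with an AIC of about 775.9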
A seasonal ARIMA model is applied in the next run. In the SARIMA final forecasting model, the line graph predicts the future trend of the U.S. unemployment rate. However, the unemployment rate is in an unusual situation because of the COVID-19 pandemic, and it will take time for the trend to return to its previous state. In other words, some businesses will have to adapt to new conditions, particularly labour-intensive businesses. Seasonal variation aside, the final ARIMA forecasting model shows the U.S. unemployment rate moving slightly downward.
SARIMA Seasonal Final Forecasting Model for the U.S. Unemployment Rate
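A sketch of a SARIMA fit and forecast with statsmodels; the seasonal order (1, 1, 1, 12) and the 24-month horizon are assumptions, not values taken from the post:

# Seasonal ARIMA on the monthly U.S. series
sarima = SARIMAX(us_rate, order=(2, 0, 1), seasonal_order=(1, 1, 1, 12))
sarima_fit = sarima.fit(disp=False)

# Forecast the next two years with confidence intervals
future = sarima_fit.get_forecast(steps=24)
mean_forecast = future.predicted_mean
conf_int = future.conf_int()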
You can use the following tutorial videos on YouTube to find the Python code. Data preparation is illustrated in video part 1, and running the ARIMA model using Python is illustrated in video part 2.
Video part (1) includes the data preparation and data wrangling using Python (Jupyter Notebook)
Writer's Blog
I joined Medium to make money
The ‘I made x amount in a month’ made me do it.
Medium was on my radar for a while before I actually started to write on it. A good friend and writing partner suggested that I start putting my articles up here. I had already started a blog and was trying to write articles, so she figured she would tell me about it.
She told me that I could make money writing on here, and having been scorned by the ‘you can make money’ crowd in the past I took what she said with a pinch of salt.
I had glanced at the platform, read a few articles and decided that it wasn’t for me. I didn’t give it a chance. I didn’t believe that it could make me money, at least not enough to make it worth my time.
It wasn’t until I stumbled across an article that told me how much it was possible to make in a month that I began to do more research.
Then something changed.
Seeing just how much money it was possible to make, made Medium’s potential, in my eyes, so much greater. The numbers were ridiculous, and I knew I was unlikely to get there. Even a few extra dollars a month would be great for pocket money though.
Knowing what was possible on Medium made me excited to write. Every new clap (back when we were paid by claps) made my heart soar! This was a platform where people came to read and write and learn, and it was perfect for me.
It was because of Medium that I started a 30-day challenge, one where I forced myself to publish every day.
This helped me establish a writing habit that I’m still holding up to this day. Granted it’s changed slightly, I’m better organised, and only posting every weekday instead of every day, but those early days on Medium have taken me from writing every now and then, to incorporating writing into the largest part of my work week.
I’m not against advertising salaries on Medium.
Some people find articles that advertise how much money you are making annoying. I find them inspiring! I feel like I learn something new every time I look at an article like that.
One thing that I’ve noticed is that people who make money from writing never just make their writing money from Medium. The people who are really successful diversify.
Looking at their success can help me decide for myself what I want to pursue financially.
These stories aren’t made for the sake of bragging or celebrating. They are there for teaching. Open and honest transparency about what is working in the writing market and what isn’t is vital in today’s day and age!
Websites are filled with ‘get rich quick’ schemes, and passive income plans, but it’s difficult to sort through what works and what doesn’t. I can’t count the number of times that I’ve fallen for something that has cost me time and energy but never paid up.
A lot of the Medium articles have things in common. So if it works for many it should be worth trying out!
Transparency is a good thing.
Not only can advertising what works help new writers figure out what will work for them, but transparency also inspires new writers, to actually start writing.
When I saw the articles advertising their salaries, I didn’t immediately assume I would be making the same amount in a few short months.
It’s not false advertising.
The people making these articles didn’t hide the fact that they had been working for months or years. I knew that it would take a long period of time before anything would come of my writing.
It’s important that when reading these articles, you don’t skim over all of the hard work that people put into making their writing careers make money for them!
Don’t be disheartened.
Different things work for different people. If you're not making the money you want to make now, figure out what you can do to change your monetary situation. Don't get down and give up because you're not making the amount you want to!
Like I said earlier, the articles are supposed to be inspiring.
If you search through enough of them, you’ll get clues about how success works for these people and start implementing those practices for yourself!
Looking at success stories help us understand what works and what doesn’t.
These people shouldn’t be shunned for sharing their monetary success. They should be celebrated for it! I know how much time and energy they’ve saved me on money-making schemes in the long run! So get your notebook ready and go study what works and what doesn’t!
As always, I cannot wait to see you on the bookshelf!
Top 5 AI applications to look for in 2021
"Artificial Intelligence is the new electricity." - Andrew Ng
Introduction
Artificial Intelligence and machine learning are now among the most talked-about technologies in the world. They are no longer buzzwords or puzzles. Organizations across the globe have started implementing these technologies to create innovative products. AI and machine learning have taken on important roles in banking and finance, travel and transport, insurance, and public safety, as well as in the healthcare domain. Even various NGOs are harnessing this technology to assist nature and its beings in the best possible ways.
In 2021, AI is going to go several steps further and play a significant role in business transformation. Here are five ways AI will be used in applications.
AI- Powered chips
source : google
AI chips could play an important part in economic growth going forward, as they are already widely used in cars that are becoming more autonomous and in smart homes where appliances are getting more intelligent. AI chips are a new generation of microprocessors specially designed to carry out AI tasks faster while using less power.
Graphics-processing units are particularly good at AI-like tasks, which is why they form the basis for many of the AI chips being developed and offered today.
Companies like Apple, Nvidia, Alphabet, Arm, Intel and many more have already done a significant amount of research, and we are curious to see the upcoming applications based on AI chips in 2021.
Cybersecurity
Source : Google
We all know data is the new oil, and its consumption is growing day by day. This gives rise to more cyber attacks. In response to this challenge, AI tools for cybersecurity have emerged to help information security teams reduce breach risk and improve their security posture. Companies are investing in improving their cybersecurity infrastructure, and AI will play a key role in facilitating it. The technology will also reduce companies' response times and expenses. It will help companies achieve the following:
Gaining access to an inventory of the devices and users that access information systems, which helps with categorizing assets and measuring their business criticality.
AI-based cybersecurity systems provide up-to-date knowledge of global and industry-specific threats, helping us make critical prioritization decisions based not only on what could be used to attack our enterprise, but on what is likely to be used to attack it.
AI-powered systems can provide the context needed to prioritize and respond to security alerts, respond quickly to incidents, and surface root causes in order to mitigate vulnerabilities and avoid future issues.
Some of the companies that have started implementing this are Google, IBM Watson, Juniper Networks, and Balbix.
Voice Search
Source: Google
We know that voice search is widely used and is the talk of the town. Google and Amazon have already captured most of the market for smart, voice-based home products. Apple is entering the race with its own smart speakers. In future, companies will work on applications that support this voice-based technology: Google Assistant, Siri by Apple, Alexa, Cortana, and Bixby. Companies have also started building applications that work as customer service representatives or assistants.
Chatbots for customer support
A chatbot is artificial intelligence (AI) software that can simulate a conversation (or a chat) with a user in natural language through messaging applications, websites, mobile apps or over the telephone. In the digital-driven world, companies use AI-powered chatbots in business communication. These bots are mainly used to increase customer engagement, and they minimise the need for human intervention.
For several years chatbots were typically used in customer service environments but are now being used in a variety of other roles within enterprises to improve customer experience and business efficiencies.
Chatbots built using some of the bot frameworks currently available may offer slightly more advanced features like slot filling or other simple transactional capability, such as taking pizza orders.
But, it’s only advanced conversational AI chatbots that have the intelligence and capability to deliver the sophisticated chatbot experience most enterprises are looking to deploy.
Data highways
A data highway is a large-scale communications network providing a variety of often interactive services, such as text databases, electronic mail, and audio and video materials, accessed through computers, television sets, and other devices.
Data needs to be valued, since its use is rising exponentially. AI-based systems and solutions will enable startups to implement data mining, business data analysis and predictive analytics. There will be multiple projects focusing on data highways in 2021.
Hope you like this article …!
Thanks
Saurav Anand
Gain Access to Expert View — Subscribe to DDI Intel
Spotlight on Jeffrey Miller
Class of 1987
Jeffrey Miller, class of 1987
Jeffrey Miller is a 1987 Eureka College grad now living in Daejeon, South Korea. After completing his bachelor's at Eureka, he enrolled in a master's program in English at Western Illinois University. He took two of the graduate classes that I taught, and I was a member of his thesis committee. His thesis was a collection of short stories. In addition, he read fiction manuscripts for The Mississippi Valley Review, a little magazine published by Western's English Department.
Jeffrey then went to South Korea, where he has been teaching for twenty-six years. Currently employed by Solbridge International School of Business, Jeffrey teaches advanced writing courses for students who plan to study in the United States. He also teaches reading and history classes. Most of his students are from Korea, China, and Kazakhstan. This semester he has selected Fahrenheit 451 as required reading for his students.
Jeffrey has been married for eight years. He and his wife have four children — three boys and a girl. He writes, "They keep me young chasing them around the house." He also admits, "It's about time for me to come home." He is actively searching for a job in the States. In addition to his teaching, Jeffrey published two works of fiction in 2012: War Remains: A Korean War Novel; and Ice Cream Headache. His collection of short stories will be published later this year. One of those stories is a reworking of the prize-winning story he wrote for EC's Impressions. He is currently at work on four projects: three novels and a novella. In his own words, "I bounce from one to the other whenever I experience writer's block."
What is the moss of New York?
Photo by Madina on Unsplash: https://unsplash.com/photos/0cBKrqmlJrE
Though moss is a simple plant, we shouldn’t underestimate its wisdom. It’s one of the oldest, wisest, and most experienced forms of life. Moss is an opportunist making the most of what’s available. It lies in wait, sometimes for years, for the right conditions to grow and reproduce.
Moss exhibits the skill of anabiosis. When water is scarce, moss will completely dry out and play dead for as long as needed. But they aren't dead at all. In their drying, they lay the groundwork for their renewal. Sprinkle them with a little water and the moss will spring back to life as if nothing had happened. They can also regenerate themselves from just a minuscule fragment.
Moss is the first life form to reinhabit an area that’s experienced devastation and loss. It’s the plant of second chances. Moss is hopeful. Despite destruction and stress, moss finds a way in and sets the stage for more life to return. Through their actions of collecting and holding water and contributing to the nutrient cycle of land, the will of moss changes the world, turning barren rock to gardens with enough time.
As I walk through my city of New York now, devoid of so much life and so much of what I love about it, I wonder what our proverbial moss will be. What will come back first to literally create the conditions that will seed the path for restoration and revitalization? What will create a haven for life and growth on the cultural bedrock of our city? What and who will set down roots here to build an enduring legacy for others, and how?
If only humans and our dreams could be as resilient as moss. If only we could find a way to see this time of COVID-19 only as a holding pattern. Not something that destroys us but something that makes us stronger, more resilient, more determined to thrive in the days ahead. Unfortunately we don't have that seemingly-magical power of anabiosis. We can't curl up in a ball, dry out, and wait for better times. We have to keep living, breathing, moving, working, eating, and growing. We'll have to make a way out of no way, and the only way we can do that is together.
Reader-funded journalism in a crisis: lessons from OKO.press
In a nutshell
A mix of search-friendly articles and in-depth coverage — both written and audio — are helping this Polish investigative news outlet expand beyond its traditional audience.
What is OKO.press?
Founded in June 2016, OKO.press is one of Poland’s first non-profit investigative journalism site. Its goal is to promote democratic values, human rights and government transparency and does so by publishing fact checks, investigations and analysis about the state of politics in Poland. Employing 28 people, OKO.press was the 2020 recipient of Index on Censorship’s prestigious Freedom of Expression Award Fellowship for Journalism.
In recent years, Poland’s press freedom ranking has plummeted following measures put in place by the current Law and Justice party (PiS). In 2016, 163 journalists from public television lost their jobs or were forced to quit because of their critical attitude towards the government while the Polish National Council of Radio Broadcasting and Television, an independent supervisory body, was abolished and replaced by a national media council, mostly staffed by PiS or its supporters. Although independent media in Poland scores higher in public trust, according to the Reuters Institute 2020 Digital News Report, the country still fell to 62nd out of 180 countries in the World Press Freedom Index by Reports Without Borders. This is its fifth consecutive decline and the country’s lowest ever position.
OKO.press is funded by a mix of individual donations and grants. In 2019, 80% of its revenue came from individual donations. This included voluntary, regular and one-off donations. Grants meanwhile made up 20% of its revenue. OKO.press currently carries no ads. It offers a free membership where readers can register to receive its weekly newsletter and gain early access to articles. In 2020, OKO.press projects 85% of its funds will come from reader donations and 15% from grants.
In terms of its digital readership, 93% of OKO.press’ unique visitors are based in Poland, with the remainder coming mainly from Germany, the UK and the United States. Recently, the publication has started to attract more readers (54%) outside of Poland’s five biggest cities — Warsaw, Krakow, Wroclaw, Poznan and Gdansk. In a reader survey conducted in May 2019, most respondents said they read OKO.press for quality reporting about politics, LGBTQ rights, women’s rights, social and political movements, ecology and climate.
How did OKO.press handle the COVID-19 crisis?
During the initial stage of the pandemic, like many publications, the team sought to explain the impact of the virus in Poland and abroad. The team published daily COVID-19 special reports every evening for more than three months with analysis of the government data and infographics. These drove a significant amount of organic search traffic; one report alone had over 400,000 page views. Months on, these articles continue to drive significant website traffic from search engines and provide a sideways entrance into the OKO.press site for new readers.
As an outlet with a strong focus on human rights, OKO.press’ team focused on monitoring constitutional restrictions as a result of COVID-19. For instance, it reported about how young Poles feared the government using the pandemic to curb their freedom and investigated how Polish women on maternity leave would have their allowances cut because of the pandemic. Workers’ rights were an ongoing theme and it looked at how a quarter of employed people earned less than before COVID-19. One of the most popular articles in April and May was a leaked government report on a proposal to introduce an extended 60-hour working week. This story received over 15,000 shares on Facebook and was viewed over 550,000 times.
OKO.press also had success with several types of written articles:
The result of OKO.press’ extensive coverage was that overall traffic went up by over 200% between February (1.7 million unique visitors) and April (5.1 million). 37% of traffic between March and May 2020 came from articles about health compared to just 4% between December 2019 and February 2020. OKO.press’ social media followers have also increased substantially; for example, Twitter followers rose by 92% in the period from January to June 2020 while its Facebook page saw a 37% increase, despite having a large number of followers before the pandemic started.
Financially, OKO.press saw significant gains between April and June. One-off donations increased by 75% compared to the previous quarter while new regular donors saw a 25% spike against the same period. This was surprising to the team because the only donation messaging on the website was a pop-up window and a pandemic banner.
A pop-up banner on the OKO.press website (translated)
How has COVID-19 changed the future of OKO.press?
OKO.press’ readership has changed dramatically since the pandemic. Before COVID-19 hit, according to reader surveys the team had conducted, OKO.press’ typical audience type was a 40–50-year-old with anti-government views and an interest in news covering justice and the rule of law in Poland. Now, looking at comments on social media, OKO.press has gained a new audience, some of whom identify as pro-government supporters. Although OKO.press will continue to try and attract readers that share its core values — pro-human rights, pro-democracy and liberal — the team also hopes that its matter-of-fact analysis and fact checks will attract a wider range of readers from across the political spectrum.
The OKO.press team will also approach remote work differently after the pandemic. At the moment, the team cannot work from the office because the office does not allow for social distancing. However, in a recent staff survey, half of the staff said they wanted to return to the newsroom. A phased return means the team will work for three days from home and two days from the newsroom. Once all COVID-19 restrictions are lifted, the team will allow staff to continue flexible working patterns.
Given the jump in readership since COVID-19, OKO.press understands how critical it is to invest more in better IT infrastructure. Although the website has been accessible throughout the pandemic, there is a risk that surges in traffic could send it offline in the future. As a result, the team will upgrade servers, hire dedicated IT specialists and conduct a website upgrade in 2021 to support its future ambitions.
What have they learned?
“First of all, we learned that flexibility and the digital model of our organisation was our advantage. An engaged team is the foundation of our organisation, so we need to care about people more in the new remote mode. We learn to adapt to a VUCA world (Volatility, Uncertainty, Complexity, Ambiguity). Early risk assessment helped us to implement solutions to counter the negative effects of the pandemic and economic crisis. We have organised three fundraising campaigns to mitigate the risk of a decrease in donations and currently we focus on applying for grants to strengthen our financial situation.”
Marek Pękalski, Chief Executive Officer, OKO.press | https://medium.com/we-are-the-european-journalism-centre/reader-funded-journalism-in-a-crisis-lessons-from-oko-press-5447910a41b6 | ['Tara Kelly'] | 2020-09-17 14:19:56.165000+00:00 | ['Resilience', 'Case Study', 'Covid 19', 'Journalism', 'Journalists'] |
Tutorial: Receive and send Bitcoin by Java | The Mixin team has published a tutorial based on Java. After reading it, developers can receive and send Bitcoin.
Create a bot in Mixin Messenger to receive user’s message
Receive & send Bitcoin
After reading the tutorial, a Java developer can create a bot in Mixin Messenger. The bot can receive messages and Bitcoin from Mixin Messenger users, and it can send text and Bitcoin back to them.
There are many SDK, tutorials, and open source projects created by Mixin Network community. | https://medium.com/mixinnetwork/tutorial-receive-and-send-bitcoin-by-java-7c49d6a4523d | ['Lin Li'] | 2019-02-13 07:46:03.068000+00:00 | ['Java', 'Open Source', 'Programming', 'Mixinnetwork', 'Bitcoin'] |
Read This When You Feel Like You’re Not Enough | Read This When You Feel Like You’re Not Enough
“There’s no point if other people are better than me.”
Growing up as an Asian Canadian, I was often compared to another kid. It was the norm. It’s the culture.
Who’s doing better in school? Who got fatter? Who is better at XYZ?
And when you grow up with this mentality like myself, your self-esteem can become fragile because it all depends on how you compare to the person beside you.
So if you ever feel like you’re not enough because someone else is better, please read this.
No one in this world has your experience, your history, or your story.
No one is you!
Your mom’s egg (which only gets released once a month…if maybe) and your dad’s single sperm (out of 200 million) came together and miraculously created YOU.
For those who have struggled to conceive, you know what I’m talking about.
There will always be people who are better off than you. There will always be people who are worse off than you.
Take a moment to look up from your phone.
See that guy or girl waiting in line, that mom crossing the street or that old man waiting for the bus? They may or may not be having a better day than you.
You have your own unique set of skills and abilities, strengths and weaknesses, values and beliefs.
Sure, you may share some of these things with another person but the combination is what makes you who you are.
So stop comparing yourself to others. It’s not like apples and oranges, more like numbers and colours.
So Readers, What were you compared to as a kid? Were you the better kid or the one that needed “improving”? | https://medium.com/change-becomes-you/read-this-when-you-feel-like-youre-not-enough-545036a1e410 | ['Katharine Chan', 'Msc', 'Bsc'] | 2020-12-26 15:19:20.007000+00:00 | ['Self Improvement', 'Motivation', 'Life Lessons', 'Personal Growth', 'Inspiration'] |
StaggeredVerticalGrid of Android Jetpack Compose | Learning Android Development
StaggeredVerticalGrid of Android Jetpack Compose
Using Jetpack Compose Layout to make StaggeredVerticalGrid
Photo by Julian Hochgesang on Unsplash
In Jetpack Compose, even though we have LazyColumnFor to do what RecyclerView does, there's no readily available StaggeredGridLayout provided.
Fortunately, Google provided a sample of how this can be implemented, as shared in this StackOverflow answer.
With that, I used it to randomly generate some staggered cards and explore how it is done.
Using StaggeredVerticalGrid
Using it is simple. Just provide the maxColumnWidth and then declare the items (e.g. Card) inside it accordingly.
StaggeredVerticalGrid(
maxColumnWidth = columnWidth.value.dp,
modifier = Modifier.padding(4.dp)
) {
(1..100).forEach {
Card(it.toString(), columnWidth)
}
}
In my example, I generate a random maxColumnWidth so we can get a different column count each time I regenerate. I also provide 100 Cards, with randomly generated colors and heights, to make the staggered view nicer.
Understand how Staggered is done
I have modified the original StaggeredVerticalGrid to be more compute efficient by storing the initially computed locations, but overall it is very similar to the original algorithm.
It is using the Layout composable function.
@Composable
fun StaggeredVerticalGrid(
modifier: Modifier = Modifier,
maxColumnWidth: Dp,
content: @Composable () -> Unit
) {
Layout(
content = content,
modifier = modifier
) { measurables, constraints ->
// The algorithm that generates the staggered vertical grid
}
}
The algorithm is broken into 4 parts.
1. Calculate the right width for the columns
val columns =
ceil(constraints.maxWidth / maxColumnWidth.toPx()).toInt()
val columnWidth = constraints.maxWidth / columns
val itemConstraints = constraints.copy(maxWidth = columnWidth)
Given the maxColumnWidth, the algorithm will first calculate the maximum width it can stretch each of the items to.
First, calculate the total number of columns possible by dividing maxWidth by maxColumnWidth. From the column count, we can then get the exact width that can be allocated to each item without violating maxColumnWidth. Finally, copy all the constraints over, modifying only the maxWidth for each item.
2. Compute the total height of the layout
val colHeights = IntArray(columns) { 0 }
val placeables = measurables.map { measurable ->
val column = shortestColumn(colHeights)
val placeable = measurable.measure(itemConstraints)
placeableXY[placeable]
= Pair(columnWidth * column, colHeights[column])
colHeights[column] += placeable.height
placeable
}
For each of the measurables (e.g. Cards), we'll try to place it in the column with the shortest height.
From the measurable, we'll get the exact item, i.e. the placeable, whose height is then added to colHeights.
The colHeights array is used to keep track of the height of each column, so that we can place each placeable into the shortest column, and also so that we can derive the final height needed for the layout.
As shown in the diagram above, you'll notice each placeable is always placed in the shortest column first.
3. Get the height of the layout
And finally, the longest column height, i.e. colHeights.maxOrNull(), is used to determine the layout height.
val height = colHeights.maxOrNull()
?.coerceIn(constraints.minHeight, constraints.maxHeight)
?: constraints.minHeight
Notice the layout is represented in yellow color
4. Place all the placeables (i.e. cards) in the layout
Lastly, after the layout size is determined, we can then place each of the cards according to the X, Y location that was previously stored in placeableXY.
Supporting Celebrity Addicts | Yesterday, news broke that an incredibly famous celebrity had entered rehab for cocaine and alcohol addiction, following a recent relapse. Within hours, the internet had drowned under a flood of support for him.
Celebrities and non-celebrities alike tweeted out well-wishes. A thousand blog posts were written about destigmatizing addiction. Even the gossip magazines seemed to take a supportive tone.
However, I have to admit that my own first response was jealousy. I’m not proud of it, but I couldn’t help comparing his attempt to get sober with my own.
When I quit drinking, there were no tweets of support — not even a call or text. Few people in the world knew that I had developed a drinking problem, and nobody knew that I was attempting to get sober.
Like many addicts, I had grown incredibly isolated during my years of drinking. My friendships weren’t exactly strained; instead, they had just faded away.
Another key difference is that this celebrity has the money and time to enter in-patient rehab for the next two months. If I had tried to do that, I would have derailed my entire career. When I quit, I was forced to squeeze sobriety into my life, and didn’t have the benefit of being able to devote my full attention to it.
It’s no good, though, to take someone else’s problems and make them all about myself. The truth is that I’m happily sober and he’s currently suffering, and I certainly wouldn’t want to trade places with him in a million years. From the bottom of my heart I wish him the best, as I do for all struggling addicts.
We Need Better Systems
I think the real lesson to take from this outpouring of support is that we need better support systems for all addicts. We need to find a way for people to extend the empathy they feel for a charismatic celebrity to friends, family, and strangers.
It’s easy to love an addict from a distance. It becomes a lot harder when you get up close.
Most addicts are more similar to me than they are to this celebrity. By the time they’re ready to quit, they’ve likely alienated themselves to a great degree. They won’t have hordes of friends and fans wishing them well.
It’s great that we have so many peer-based support groups like AA. They provide free and cheap ways for recovering addicts to rapidly build a support network. But on their own, they aren’t always enough.
There are many addicts who need rehab to get better, but they simply can’t afford it. Many more can’t risk disrupting their lives. When a celebrity leaves rehab, they’re often welcomed back to the industry with open arms. Things don’t go as smoothly in most professions.
I don’t mean to make it sound as if celebrities with addictions have it easy — it’s obviously still a struggle, no matter how strong your support system is. I just want all addicts to have access to the same resources that the richest addicts have.
How do we actually get there?
More public funding would be a great start. Rehab programs should be 100% government funded for anyone who has financial need. Over the long run, these programs typically save taxpayers money due to lower medical costs down the road. It's a win-win option that most governments have completely ignored.
Changes to professional organizations would also help. There should be systems in place to protect an addict’s job while they are recovering. It should be treated no differently than if someone had been in the hospital for a physical ailment.
Showing support. As much as I hate to admit it, those tweets which made me jealous are actually a great step in the right direction. It really is wonderful that so many people are on board with destigmatizing addiction. Every time that someone publicly proclaims their support for recovering addicts, it makes it a little bit easier for another addict to get help.
The outpouring of support that this celebrity received on the internet yesterday is a clear sign of progress. Society is increasingly understanding that addiction is not a moral failure.
I hope that within my lifetime we see the stigma surrounding addiction disappear completely. I hope that the same empathy which is extended to a high-profile celebrity can be extended to the homeless addicts that so many of us pass each day without a second thought. Let’s continue to support the celebrities, but let’s fight for systemic changes too. | https://medium.com/exploring-sobriety/supporting-celebrity-addicts-b250fe3d20e5 | ['Benya Clark'] | 2020-12-23 00:59:12.312000+00:00 | ['Sobriety', 'Celebrity', 'Mental Health', 'Addiction', 'Recovery'] |
What does the global economic recovery look like in the next two years? | GLOBAL ECONOMY
What does the global economic recovery look like in the next two years?
Visualizing the economic projection for the year 2020 with the expected rebounds in 2021 & 2022
It has been an unprecedented year in so many ways. Although the biggest concern was health-related, the resulting economic fallout has been equally devastating. The general consensus right now is that the global economy will see a rebound in growth albeit at a gradual pace. According to the data by The Organization for Economic Co-operation and Development (OECD), the global GDP will rise by 4.2% in 2021 after a decline of 4.2% this year.
The V-shaped recovery that many analysts projected earlier, based on extraordinary stimulus measures, concerted enforcement of health policies and an aggressive search for a vaccine, hasn't really materialized, as a much larger second wave of infections took the globe by storm. Even though three vaccines have moved into the deployment stage, things might get worse before they get any better. Renewed lockdowns are endangering the fragile economic recovery.
Along with the current economic projections, the OECD has also forecast an upside and downside scenario (chart above) — for the latter, delayed vaccinations or new outbreaks could cause a $4 trillion GDP loss by 2022 compared to current projections. On the flip side, vaccine rollout and boosted consumer and business confidence, could add $3 trillion to the global economy.
These current economic projections are based on the premise that renewed virus outbreaks remain contained, and a widely available vaccine by the end of 2021 helps to support consumer & business confidence. With this in mind, let’s look at three infographics visualizing the projected GDP growth/decline for 2020, 2021, and 2022.
2020 | https://medium.com/technicity/how-does-the-global-economic-recovery-look-like-in-the-next-two-years-49d01de066d9 | ['Faisal Khan'] | 2020-12-18 23:22:08.707000+00:00 | ['Health', 'Investing', 'Business', 'Finance', 'Economics'] |
Predicting Newspaper Sales with Amazon SageMaker DeepAR | A passion for print media
At Sales Impact, a 100% subsidiary of Axel Springer, we are all about sales of print media. We provide regional sales activities and wholesale communication for the supervision of retail sales, along with the logistics involved in domestic and overseas delivery. We also handle the planning and execution of sales marketing measures, customer acquisition within the scope of direct sales, coordination of the German "Sunday market" and much more.
On my team, market analytics, we evaluate, advise and control what happens in the German print media market in terms of sales, logistics and advertisement. This happens at an international, national, regional, wholesale and shop level for print media such as WELT and BILD. My work as a Data Scientist mainly revolves around predicting the market and calculating key figures for the market.
Vast complexity, vast opportunities
Our shop-level sales data is among our most valuable assets. Without going too deep into detail, we know the sales of some 100,000 shops for Axel Springer's print media with some delay. Making use of this data is hugely important for understanding our print media sales. But sometimes a delay in shop-level sales data is unacceptable, for instance when the editorial department of the BILD wants to know how well it performed last week in terms of sales. We can solve this and other related problems by predicting the sales for these 100,000 shops!
Your friend in the cloud
Using Amazon Web Services, we can leverage their machine learning solution Amazon SageMaker in order to make such a prediction. But then, how would you predict some 100,000 shops without losing the information that exists among these shops? Fortunately, there is an algorithm out there that takes exactly this into account: the Amazon SageMaker DeepAR forecasting algorithm. And this really translates to any problem that has at least several hundred concurrent time series, for example one with many products.
The DeepAR forecasting algorithm is a supervised learning algorithm for forecasting scalar (one-dimensional) time series using recurrent neural networks (RNN), and it is astonishingly sound. You can hand this algorithm tens of thousands of time series, possibly together with time-independent categories and additional time-dependent information for each time series, and it will train a model that is then able to predict the potential future of a time series (possibly together with its specific time-independent categories and additional time-dependent information).
Predicting Germany’s major newspapers’ sales
So we adopted and automated this RNN-based algorithm as shown in Figure 1 and saw some major improvements in our prediction quality compared to the single-series approaches, such as ARIMA or exponential smoothing, that were previously in place. The original paper suggests a general improvement in accuracy of around 15% for the prediction of related time series compared to state-of-the-art methods. If you need a starting point for the implementation of DeepAR using SageMaker, I recommend this notebook from Amazon.
Figure 1: Exemplary sequence of the workflow.
It is amazing how much you can automate with a little help from the boto3 library (the AWS SDK for Python). For your ease, the boto3 workflow is outlined below (though the data pre- and post-processing part is missing). Please note that we used .json files as the input data type.
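A condensed sketch of such a workflow, with placeholder bucket, role and DeepAR image values and illustrative hyperparameters, looks roughly like this:
import boto3

ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder
DEEPAR_IMAGE = "<deepar-image-uri-for-your-region>"        # placeholder
S3_TRAIN = "s3://my-bucket/deepar/train/"                  # JSON lines input
S3_OUTPUT = "s3://my-bucket/deepar/output/"
JOB_NAME = "print-sales-deepar-job"

sm = boto3.client("sagemaker")

# 1. Launch the DeepAR training job on the prepared .json input data.
sm.create_training_job(
    TrainingJobName=JOB_NAME,
    AlgorithmSpecification={"TrainingImage": DEEPAR_IMAGE, "TrainingInputMode": "File"},
    RoleArn=ROLE_ARN,
    HyperParameters={"time_freq": "D", "prediction_length": "7",
                     "context_length": "14", "epochs": "100"},
    InputDataConfig=[{"ChannelName": "train",
                      "DataSource": {"S3DataSource": {
                          "S3DataType": "S3Prefix", "S3Uri": S3_TRAIN,
                          "S3DataDistributionType": "FullyReplicated"}}}],
    OutputDataConfig={"S3OutputPath": S3_OUTPUT},
    ResourceConfig={"InstanceType": "ml.c5.2xlarge", "InstanceCount": 1,
                    "VolumeSizeInGB": 50},
    StoppingCondition={"MaxRuntimeInSeconds": 4 * 3600},
)

# 2. Wait for the training job to finish.
sm.get_waiter("training_job_completed_or_stopped").wait(TrainingJobName=JOB_NAME)

# 3. Register the trained model so it can be used for (batch) predictions.
artifacts = sm.describe_training_job(TrainingJobName=JOB_NAME)["ModelArtifacts"]
sm.create_model(ModelName=JOB_NAME, ExecutionRoleArn=ROLE_ARN,
                PrimaryContainer={"Image": DEEPAR_IMAGE,
                                  "ModelDataUrl": artifacts["S3ModelArtifacts"]})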
This is what a normal run can look like on my laptop:
Figure 2: Exemplary workflow as log output.
Implications for our business
With more accurate predictions of sales, we are able to give even more accurate projections to the editorial departments. We are also working on taking these sales predictions into account to improve the logistical key figures we provide to our business partners.
Summary
In this article, we write about predicting newspaper sales using Amazon SageMaker DeepAR. After a short company and team introduction, we give a shallow description of our shop-level sales data and the related problem. We then describe how DeepAR is a suited algorithm for this problem, followed by an overview of our solution together with some sample code to reproduce our solution. Finally, we claim that such a prediction with DeepAR is beneficial to our business.
About the author: Justin Neumann is a Data Scientist and MS in Predictive Analytics helping to transform companies into analytical competitors. He works at Sales Impact, a subsidiary of Axel Springer. | https://medium.com/axel-springer-tech/predicting-newspaper-sales-with-amazon-sagemaker-deepar-dffde3af4b20 | ['Justin Neumann'] | 2020-02-19 12:44:29.394000+00:00 | ['Machine Learning', 'Technology', 'Artificial Intelligence', 'Time Series Analysis', 'Deep Learning'] |
System thinking for designers | What is happening in the world?
In our work we regularly deal with asks from clients like:
What is the best experience we can provide to the end-user with this new service?
What is the outcome we are trying to achieve in the online experience?
How might we design a tool that bridges customer expectation and company vision?
These questions ask us to create for a user. In response, we will give form to a concept and endeavour to make it useable, valuable and desirable to them. We will develop the experience surrounding that entity; determining how it might respond and react to continuously meet demands.
This is design thinking as is being used in and by businesses; prioritising the user needs to build products and services. We have trained, studied and practiced to have the means and know-how to address these. Sometimes we get the answer just right and have the most wonderful sense of accomplishment.
However, we’re finding that not all our clients questions are answerable in this way, life has become more complicated.
“Thinking through the ecosystem of touch points is simply more complicated than in the past.” — Olof Schybergson, CXO Accenture Interactive in conversation with The Wall Street Journal (Rhodes, 2020).
We are now designing products that are increasingly reliant on a myriad of tangible and intangible interdependencies. This has led to a change in the questions we’re getting from clients, for instance:
How can postal companies meet the demand for faster, cheaper delivery?
How can we transform transparency in support of sustainability targets in the fashion industry?
How might we identify the most informed targets for drug discovery using human data?
The scale, velocity and difficulty of business continues to escalate, therefore so will the problems we are using design thinking to address.
“A full 80% of respondents believe systems will interact seamlessly with humans, and 78% think these systems will embrace the way humans work.” — Accenture’s Future Systems (Burden, et al., 2019)
The Future Systems research demonstrates client expectation for the solutions they buy into. Companies are making vast investments in technology but need it to be integrated across their organisation and not just in “pockets” (Burden, et al., 2019). That can mean multiple geographies, infrastructures, cultures, users and needs.
There has been a fundamental shift in what we are being and should be held accountable for. This year in Fjord Trends we advocated for life-centred design, inspired by John Thackara’s call to design for all life on earth and not just us, the humans (Fjord, 2020). It is a large and daunting ask but we need to acknowledge that the scope of our responsibilities has expanded not just as professionals but as individuals.
We’re dealing with AI, machine learning, voice/facial recognition, AR/VR, ethics, privacy, sustainability, bottom lines, accessibility, inclusivity… and that all needs to fit together and be managed. We’re building, changing and working with systems.
The ask from us has changed rapidly. Yet, how do we evolve from being user experience experts to system level decision making enablers for our teams? How do we build systems that improve and consider experiences beyond the direct end users? We need to level up. We can do that by using not just design thinking but systems thinking as well.
Hold up, what is systems thinking?
System thinking has many explanations. For Peter Senge, who is a leading system scientist in his field:
“Systems thinking is a discipline for seeing wholes. It is a framework for seeing interrelationships rather than things, for seeing patterns of change rather than static “snapshots.” — Peter Senge in The Fifth Discipline (Senge, 2006)
Each discipline in the systems community has its own definition of systems design, depending on its expertise and focus. It is constantly redefined by those thinkers and is difficult to explain in just one sentence, as well as being sure to start an argument if you tried.
The question shouldn’t be ‘what systems thinking is.’ It should be ‘What system thinking means for designers.’ As designers, we need to define systems thinking in a way based on how design practice perceives the world.
It is not a definition but a lens you use to examine a problem. Using systems thinking in design means not looking at individual elements like an interface, product, user journey or service but the whole system of which all these parts are. As Senge has stated above, we need to watch a system to grasp how it’s working; how actions play out overtime, the push and pull between parts, what causes reactions and maintains balance.
Using systems thinking with design thinking means we switch between the holistic and the user view continuously. We endeavour to harmoniously blend both of these mentalities to build mental models of that system to inform us on how and where we should intervene.
But how do we move our approach from design thinking?
Why we need to evolve as designers
“For years, the application of user-centered and human-centered design advocated by so many has often separated people from ecosystems. Now, designers must start to address people as part of an ecosystem rather than at the center of everything. This means designing for two sets of values: personal and collective.” — Fjord Trends 2020 (Fjord, 2020)
Let’s look at this in practical terms of how we need to work with these “two values” (Fjord, 2020). Design thinking starts with the user. We try to discover and understand their needs in order to meet them somehow.
Take a chair for example, a solitary chair, chosen by the user for form, function or both to fulfill their needs.
This chair is connected to nothing apart from the person sitting in it, but what might it mean for that person if it did more? Jacinta Dixon was diagnosed with a rare form of Alzheimer's. For an avid reader, one of the harshest realities of that diagnosis is dyslexia. She lost her ability to read, something that was so important to her.
The Dock, Accenture’s Global Innovation hub, in collaboration with Design Partners and Amazon, worked with Jacinta to create a new chair (featured on the Big Life Fix) that might bring back that joy we find in new worlds, characters and realities.
Using voice technology offered a promising possibility for the team but because this was very personal, they needed it to feel comfortable for Jacinta. Their concept re-imagined a reading chair; by embedding this technology into it they designed a way for her to access an ecosystem of audiobooks, music and radio as well as voice assistant feedback for weather, time and calendars. The team engineered custom tactile controls, with each having its own unique signature to create recall with muscle memory rather than standard visual cues.
The work and result achieved by this team proves the value of user-centered design. But why are we discussing this in relation to systems-centered design? The project team consciously made the decision to limit how the chair was connected to other products and services, as they felt it could be unmanageable for their user and contrary to the comfortable (non-intrusive) environment she wanted within her home.
However, let us imagine what it might be like in the right circumstances, for the once simple chair to be plugged into a much wider ecosystem. We will now zoom out and look at this with a system-centered design mindset. After all many of our projects still start with a user focus to gain an initial understanding before we move to a systems view.
Here we can see the chair evolves and the implications of it being connected to this wider network. This is inspired by the work of Michael E. Porter, James E. Heppelmann and the diagram they used in How Smart, Connected Products Are Transforming Competition — HBR to demonstrate connected products on farms (Hepplemann & Porter, 2014). Illustration: Daniel Eames Visual designer at Fjord Dublin.
From the systems lens we start to look at how a chair like this needs to fit and connect within a wider ecosystem, in ways that are both mandatory, essential and beneficial. For example, integrating with other connected products or services; like smart thermostats to keep warm while being stationary or the doorbell if someone calls or remote home security monitoring.
And then what about the integration between different companies for the different parts, and how it affects their product ecosystems? It is not just about visualising a spider web of connections but also the knock-on effects between them. We need to understand the story being told between system parts. What is the cost of creating this chair to the planet, the workers and end users? When we zoom out we may need to think of the wider community: what is its effect on hospitals, as well as on the state systems, such as healthcare, that it will reach?
Extending our thinking beyond the user, the product and the service, and into the wider system that connects or disconnects these multiple entities, allows us to start looking at the problem from a new perspective. This is the systems practice: if we think of the user as the center, we are starting from the outside and working in. It requires us to understand the interdependencies between the questions and problems we have at each level. As we move through those levels, from product to business to community systems, changes emanate and ripple. We need to be cognisant of the trickle-down or upstream flow of what we do, and systems thinking provides the context for us to do that.
With questions coming at us from a systems level we need to equip ourselves with the means to answer them in new ways. We have great skills but in terms of complex systems we need to extend our design capabilities by learning systems theory, adapting our mindset and adding complimentary methods.
How do we do this without becoming overwhelmed or maintain the ability to critically assess the problem when it extends like this? System design can provide the tools to help us navigate this complexity.
First we need to adjust our mindset
Design thinking absolutely works but working this way will give us the flexibility to weave in and out of the different perspectives. System-centered design is more than a toolkit that we can pick and choose from, it requires a fundamental shift in how we think. We need to align our mindset with the approach.
System and design thinking work together harmoniously. Illustration: Daniel Eames visual designer at Fjord Dublin.
Design thinking is user centred; its methods ask us to identify and understand users' problems to provide better and novel experiences from the bottom up. With systems thinking we flip that perspective. The lens is to understand how the consequences of a solution affect the overall system, based on a top-down, holistic view.
System-centered design is user-centred but at scale. We continuously need to zoom-in and zoom out to recognise and look for dependencies between different parts of a system. For designers, the required mindset shift is not about leaving design thinking behind, it is more about incorporating system thinking mindset so that we can cover more ground with the impact our work has. It is about asking even bolder questions at system level.
The system scientist Barry Richmond puts it simply, “That is, people embracing Systems Thinking position themselves such that they can see both the forest and the trees (one eye on each).” (Richmond, 1994). Our design thinking is extremely good at seeing and understanding the individuals but here’s the shift we need to make:
System and design thinking mindset characteristics. Some of these characteristics are referencing Hugh Dubberly’s presentation, “Compostmodern: Think Even Bigger. Framing design as conversations about systems” and specifically the transition to a systems approach for designers, (slide 30). Here he talks about moving from the need to ‘seek simplicity” to “embrace complexity” and moving to a “good enough for now” attitude. (Dubberly, 2016) Illustration: Daniel Eames visual designer at Fjord Dublin/
The ability to constantly zoom in and out will help in adjusting your approach and be system focused. That is the most important piece of information you need to take from this.
Here’s what this approach looks like
This is not just a theory; the Fjord team at the Dock has incorporated this into our practice. Always successful? Absolutely not, it's been all of the clichés: continuous learning curve, uphill battle etc.
At the Dock our aim is to extend service and product design from focusing on the human experience of a product or service to considering the purpose of a system. The intention is to address the whole by devising and understanding its individual parts; that way we can work out how we can better facilitate the system's desired purpose.
What does a systems practice mindset look like on the job? Let’s go back to the three how might we statements from the beginning that exemplified the need for something more.
How might we identify the most informed targets for drug discovery using human data?
The ask was to investigate introducing AI as an assistive technology in drug discovery but how? And more importantly why? As with many projects this started with the introduction of a new technology but while AI may be able to improve many things it must be integrated correctly. The incorporation of new technology to a current system requires an understanding of the whole, we needed a systems focus and mindset.
We would need to design future processes and products. This would require a change in their ways of working and the technological infrastructure, not only to fit but to provide value to everyone involved. We created system blueprints detailing pain points (zoomed in) with their system-level root causes (zoomed out) to look at how we would address these at a systemic level; the approach was problem oriented. This created a narrative around the current system to influence decision makers and show them the implications as well as possibilities of change.
How can we transform transparency in support of sustainability targets in the fashion industry?
We brought together an ecosystem of partners in the retail value chain to reframe the problem to be solved. The research was conducted at a systems level and having this holistic view allowed us to design and propose concepts for new ways of working.
The research explored the extensive intricacies of supply chain within the textile industry and captured the as is state based on 21 different companies. What the team built using that research facilitated discussion amongst experts. When we shift our mindset towards system thinking it becomes easier to accept ‘good enough for now’ (Dubberly, 2016) because you are striving less for experience perfection and more for knowledge.
How can postal companies meet the demand for faster, cheaper delivery?
We explored how Accenture postal, courier and retail clients can address the challenges posed by a rapidly evolving eCommerce delivery environment. Starting with the postal industry as our focal point, a cross-disciplinary team collaborated closely with partner clients to create a next-generation delivery system by augmenting the workforce to help them deliver a better service with AI amplified route optimisation and demand forecasting.
This new system would need to balance the conflicting goals of all the different elements and actors; like the consumers demand for speed and flexibility with the challenging cost and logistics environment that delivery providers need to operate within. An example of this balance was considering the goals of the AI powered demand forecasting module and the reality faced by delivery agents in the field such as traffic, incorrect addresses and bad weather. Demand forecasting needed to accommodate such variables, for example by avoiding routes with heavy traffic or enabling delivery agents to manually override suggested routes or sequences when appropriate and re-factor these changes into the broader system.
At Fjord, systems design is essential to our Designed Intelligence capability that focuses on human collaboration with artificial intelligence. AI, like any technology should not be designed in isolation. Its successful deployment and adoption depends on how it fits with existing people, practices and processes. In our experience Systems design gives us the right context to know how AI can be integrated into an organisation responsibly and effectively.
“If we can design systems that effectively blend peoples skills with AI, we’ll be able to devise disruptive business strategies, empower people to cope with increasing complexity in the workplace and enhance the human experience.” — Fjord Trends 2020 (Fjord, 2020)
System-centered design in necessary
We can see from these design questions that complexity will be the new constant for designers (and if you think we’ve always tackled complexity, think of this as complexity 2.0). To respond we need to take the dual approach of looking at problems from the bottom-up and top-down. Systems thinking will more and more become a necessary extension to our design thinking practice.
At the Dock some of our team have taken on this role of Service and Systems designers where we tackle problems with this dual approach. In that role we’re learning to come at challenges from the human experience but also investigating the ripple effect of introducing or altering an element in a system and seeking to understand dependencies between different parts.
This is our strategy to level up and meet these new challenges head on; systems thinking, addressing the complex whole and remembering to constantly zoom in and zoom out. We are designers and putting the user at the centre is what we do. It’s engraved in our understanding of the world. By incorporating system-centered design into our practice we can deliver better experiences and outcomes at each level. We can make it work. | https://medium.com/design-voices/system-thinking-for-designers-e9f025698a32 | [] | 2020-06-19 13:14:25.664000+00:00 | ['System Thinking', 'Design', 'Design Thinking', 'Mindset', 'Experience'] |
Diabetes Prediction | Diabetes is a serious problem that many people face nowadays and that can lead to other serious health conditions. During the Covid-19 period we also came to know that the condition of a diabetic patient is much more critical than that of a non-diabetic patient. So if we can take the help of deep learning to predict the risk of getting diabetic, this early prediction will help people take care of their health and prevent themselves from becoming diabetic.
Table of Contents
Introduction to cAInvas
Importing the Dataset
Data Analysis and Data Cleaning
Trainset-TestSet Creation
Model Architecture and Model Training
Introduction to DeepC
Compilation with DeepC
Introduction to cAInvas
cAInvas is an integrated development platform to create intelligent edge devices. Not only can we train our deep learning model using TensorFlow, Keras or PyTorch, we can also compile our model with its edge compiler, called DeepC, to deploy our working model on edge devices for production. The Diabetes Prediction model which we are going to talk about is also developed on cAInvas. All the dependencies which you will be needing for this project are also pre-installed.
cAInvas also offers various other deep learning notebooks in its gallery which one can use for reference or to gain insight about deep learning. It also has GPU support, which makes it the best of its kind.
Importing the dataset
While working on cAInvas, one of its key features is the UseCases Gallery. When working on any of its UseCases you don't have to look for data manually. The data is in the form of a table and is present in CSV format. We will load the dataset through pandas as a dataframe in our workspace.
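In sketch form (the CSV file name is an assumption):
import pandas as pd

# Load the diabetes dataset into a dataframe (file name assumed).
df = pd.read_csv("diabetes.csv")
print(df.shape)
df.head()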
Loading dataset
Data Analysis and Data Cleaning
To gain information about the data that we are dealing with, we will use the command df.info() and we get the following information.
RangeIndex: 2000 entries, 0 to 1999
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Pregnancies 2000 non-null int64
1 Glucose 2000 non-null int64
2 BloodPressure 2000 non-null int64
3 SkinThickness 2000 non-null int64
4 Insulin 2000 non-null int64
5 BMI 2000 non-null float64
6 DiabetesPedigreeFunction 2000 non-null float64
7 Age 2000 non-null int64
8 Outcome 2000 non-null int64
dtypes: float64(2), int64(7)
Next we will remove the duplicate data, replace zero with NaN, then drop all NaN values from the dataframe and again obtain the information. For this we can run the following commands:
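A rough sketch of these commands (the exact list of columns where zero is treated as missing is an assumption):
import numpy as np

# Remove duplicate rows.
df = df.drop_duplicates()

# Zero is not a valid reading for these measurements, so treat it as missing.
zero_as_missing = ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]
df[zero_as_missing] = df[zero_as_missing].replace(0, np.nan)

# Drop all rows that now contain NaN values and re-check the information.
df = df.dropna()
df.info()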
Data analysis and cleaning
TrainSet-TestSet Creation
The next step is to create the train data and test data. For this we will drop the 'Outcome' column from the dataframe and store the result in a variable. For the labels we will use the 'Outcome' column and store it in another variable. Then we will use scikit-learn's train-test split module to create the train and test datasets.
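A sketch of this step (the test size and random state are assumptions):
from sklearn.model_selection import train_test_split

X = df.drop("Outcome", axis=1)   # features
y = df["Outcome"]                # labels: 1 = diabetic, 0 = non-diabetic

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)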
Train test split
Model Architecture and Model Training
After creating the dataset, the next step is to pass our training data to our Deep Learning model so it learns to classify Diabetic and Non-Diabetic Patients. The model architecture used was:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 8) 72
_________________________________________________________________
dense_1 (Dense) (None, 8) 72
_________________________________________________________________
dense_2 (Dense) (None, 4) 36
_________________________________________________________________
dense_3 (Dense) (None, 1) 5
=================================================================
Total params: 185
Trainable params: 185
Non-trainable params: 0
_________________________________________________________________
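A minimal Keras sketch that matches this summary (the layer sizes reproduce the parameter counts above; the activation functions, epochs and batch size are assumptions):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(8, activation="relu", input_shape=(8,)),  # 8 input features
    Dense(8, activation="relu"),
    Dense(4, activation="relu"),
    Dense(1, activation="sigmoid"),                 # probability of being diabetic
])

model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                    epochs=100, batch_size=16)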
The loss function used was "binary_crossentropy" and the optimizer used was "Adam". For training the model we used the Keras API with TensorFlow as the backend. The model showed good performance, achieving a decent accuracy. Here are the training plots for the model: | https://medium.com/ai-techsystems/diabetes-prediction-234fa946b2fe | ['Ai Technology'] | 2020-12-21 11:23:22.395000+00:00 | ['Tinyml', 'Machine Learning', 'Artificial Intelligence', 'IoT', 'Deep Learning']
Building a Scalable Data Pipeline | A good data pipeline is one that you don’t have to think about very often. Even the smallest failure in getting data to downstream consumers can be noticed immediately, so it’s important to build a system that is able to deliver events quickly and accurately.
Requirements
When building a data pipeline, we kept the following 2 requirements in mind:
1. Allow for many downstream consumers of data. We have several different data systems in place past the ingestion pathway, and all of them need to receive data sent through the pipeline.
HBase, which serves our results page in real time
A Hadoop data mining cluster, which we use to ingest data into Druid and also to run batch analysis jobs
Amazon S3, which we use for backups in the event of failure of either of the other 2 systems
Potential future systems
2. Ability to build streaming applications. At Optimizely, we are striving to build a strong internal shared data infrastructure to run computations in real time.
Architecture
Old Architecture:
Initially, we were using Apache Flume to ingest data from our application servers into HBase and S3. While this worked reasonably well for some time, there were some major operational issues we encountered using Flume to ingest directly into the downstream systems.
Flume required us to use one channel per downstream sink. As a result, we were sending 3 copies of every event through Flume; one for S3 and one for each of our two HBase clusters that we had at the time. Furthermore, there were no guarantees that all 3 copies would end up on different hosts, so it was possible for us to lose events entirely in the event of a host going down. We used Flume’s file channels for buffering events between our log servers and HBase/S3. While these proved to be very handy in the event of temporary process failures, they did not provide us much buffer in the event of extended failure of a downstream system. Furthermore, any exceptions or errors thrown in Flume would stop other events from being processed. As a result, we would have to rush frantically to fix the Flume issue before upstream systems started feeling back pressure.
In order to avoid data duplication and provide a more reliable source of truth for our data streams, we decided to go with Kafka as a message bus between our log servers and our downstream data systems. Using Kafka has provided us with immense benefits, particularly the following:
1. We no longer send 3 copies of our data over the network into Kafka, which reduces our network traffic.
2. Kafka provides us a single source of truth for our event stream — previously it was difficult to debug if we had events appearing in S3 but not HBase or vice versa.
3. It is much easier to recover from downstream failures, as Kafka retains 7 days of data by default. With Flume file channels we had about 6 hours of data retention at a maximum.
4. Having data in Kafka has opened up a wide variety of applications, including stream processing using Samza. For instance, one application we have is calculating the top experiments on Optimizely by traffic in real time using Samza on top of our event stream. Furthermore, this greatly aids our high level goals of giving our engineering teams access to our raw data streams and enabling them to turn data into action.
While Kafka has provided us with accessibility and reliability for our data, we decided to retain Flume in our pipeline, inspired by this blog post on Cloudera's page. Using "Flafka" rather than Kafka alone enabled us to migrate quicker than expected and reuse several components already in our codebase. In particular, Flume's existing integration with both HDFS and HBase allowed us to seamlessly transport data from Kafka into those systems.
New Architecture:
Migration
Migrating our infrastructure over from Flume to Flafka was a tricky problem at first. We decided to coordinate this migration task with a separate effort to modify our HBase schema and make it more compact. One great blessing during the migration process was that both HBase schemas were idempotent. That way, we would be able to run both the old ingest pipeline (without Kafka) and the new one (with Kafka) simultaneously. Thanks to idempotence, we lost no events and incurred no downtime as a result.
Moving to the Kafka-based ingest path had other positive effects as well. We run two HBase clusters in parallel called Prod and Loki (don't ask me why we didn't name the first one Thor). Running 2 clusters has been very convenient for us when we need to do routine maintenance. Previously, we used two separate Flume channels to write data to Prod and Loki, so it was possible that events would appear in one cluster, but not in the other. Now, we have two separate Flume clusters to write to Prod and Loki, but they both consume the same Kafka topic. Therefore, it is easy for us to keep the two clusters synced, since we have Kafka as a single source of truth for both.
Monitoring
We monitor all our backend systems using a combination of Nagios for alerting and OpenTSDB for monitoring and metrics collection. In order to gather metrics, we run tcollector on our Flume and Kafka hosts. tcollector picks up Flume metrics via the HTTP JSON endpoint and Kafka metrics via JMX. We wrote a few custom tcollector modules to pick up metrics like Kafka consumer lag and augment the existing collectors.
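For illustration, a custom tcollector module is essentially a script that keeps printing OpenTSDB-style lines (metric, timestamp, value, tags) to stdout; a minimal sketch for consumer lag, with a hypothetical get_consumer_lag() helper standing in for the actual offset lookup, could look like this:
#!/usr/bin/env python
import sys
import time

COLLECTION_INTERVAL = 15  # seconds between samples

def get_consumer_lag():
    # Hypothetical helper: in a real collector this would compare the
    # brokers' latest offsets against the consumer group's committed offsets.
    return [("events", 0, 1234), ("events", 1, 987)]

def main():
    while True:
        ts = int(time.time())
        for topic, partition, lag in get_consumer_lag():
            # OpenTSDB line format: metric timestamp value tag=value ...
            print("kafka.consumer.lag %d %d topic=%s partition=%d"
                  % (ts, lag, topic, partition))
        sys.stdout.flush()
        time.sleep(COLLECTION_INTERVAL)

if __name__ == "__main__":
    main()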
Deployment
We need to be sure that the deployment process for Flafka remains relatively painless, just in case we need to update our libraries. For deployments and configuration management, we utilize Cloudera Manager heavily. Using Cloudera Manager has enabled us to spin up Flume and Kafka clusters very quickly and get the project moving without much initial overhead. For automatic provisioning and host setup, we use Chef heavily in our stack. When an instance boots up in AWS, we assign it a tag that corresponds to the Chef role that the instance needs to run once it’s bootstrapped with Chef. The Chef role also handles automatically registering instances into Cloudera Manager and starting up the necessary service; thus, we can have hosts running Flume or Kafka automatically within about 15 minutes of us bringing them up in AWS. This functionality has been a massive benefit for us in scaling our stacks up to meet traffic.
Issues
We don’t have Flume or Kafka hooked up to a cluster manager like Mesos; as a result it’s hard to scale out our cluster in case of emergency. We mainly rely on AWS autoscaling groups for our scaling needs. We might use Mesos with Kafka eventually, but so far our cluster has been sufficiently large to handle the scale of traffic we process (and even was able to handle a 3x traffic spike a few days ago)
We would have difficulty handling any network partition that might render Kafka inaccessible.
We use Flume’s failover sink functionality to try and get around this by dumping to S3 whenever Kafka is offline; while that would prevent any data loss, it does mean that downstream consumers of the data will not be able to receive any data until we restore Kafka and read the data back in from S3.
Our Cloudera Manager auto registration has been a bit flaky; we have occasionally run into issues where it marks a healthy host as dead.
If you liked this blog post, we’re hiring. | https://medium.com/engineers-optimizely/building-a-scalable-data-pipeline-bfe3f531eb38 | ['Krishnan Chandra'] | 2016-02-01 23:24:13.267000+00:00 | ['Big Data', 'Data Science'] |
How You Can Use Social Media To Get More Freelance Clients | Why You Should Use Social Media
There are hundreds of millions of people that browse social media every single day, and you can take advantage of that. On Instagram alone, there are over 25 million businesses that you could do some kind of freelance work for.
Another great thing about social media is that it is really easy to reach out to these businesses. You can also have businesses discover you so that you have freelance work coming in without you having to do anything.
If you can create a really nice profile you can grow quickly. There are some great ways to grow fast, and if you can get one of your posts to go viral, you can grow significantly.
Another amazing thing about social media is that if people follow you for a while, they may see more of the orders you have completed in the past. This means that potential clients will be able to trust you more and therefore more likely to buy your service. | https://medium.com/the-kickstarter/how-you-can-use-social-media-to-get-more-freelance-clients-5c56491e494d | ['Ewan Mcbride'] | 2020-11-28 13:35:12.055000+00:00 | ['Digital Marketing', 'Entrepreneurship', 'Freelancing', 'Social Media', 'Social Media Marketing'] |
3 Tricks to Scaling Onboarding with Google Admin | Here’s how to do it:
(1) Automatically update all@ by nesting your Google Groups
A Group can be added to another Group a member, just like an email address.
For example:
Create your team groups, say team-a@ and team-b@
Chose “Team” access level: “Only managers can invite new members, but anyone in your domain can post messages, view the members list, and read the archives.”
Do NOT check the box “allow anyone on the internet to post messages”
Add each team member to their correct group
1 team member per team group
Create your company-wide group, say all@
Add team groups to all@
It looks like this:
Now, if you add a new team member to team-a@, their email address will automatically be reflected in all@ and receive emails sent to all@!
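If you ever want to script this instead of clicking through the Admin console, a rough sketch with the Admin SDK Directory API in Python (the domain, group addresses and credentials setup below are placeholders) might look like this:
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.group.member"]

# Placeholder: a service account with domain-wide delegation,
# impersonating a super admin of your domain.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

# Nest each team group inside all@ by adding it as a member,
# exactly like adding an individual email address.
for team_group in ["team-a@example.com", "team-b@example.com"]:
    directory.members().insert(
        groupKey="all@example.com",
        body={"email": team_group, "role": "MEMBER"},
    ).execute()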
(2) Automatically update Calendar events by adding Groups to events as guests
A Google Group can be added to a Calendar event as a guest, just like an email address.
Here’s what your all-hands Calendar event could look like:
Here’s what your team meeting events could look like:
Now, if you add a new team member to team-a@, their email address will automatically become a guest of your All-Hands Weekly Meeting and the Team A Meeting. Easy.
(3) Automatically update Drive folder viewing and editing permissions with Groups
Last, but not least, a Google Group can be added to a Drive folder to set the level of viewing and editing access for that folder, also just like an email address.
For example, the structure could look like this:
Company Drive Folder : all@ view only access, yourname@ edit access
Team A Drive Folder : team-a@ view and edit access
Team B Drive Folder : team-b@ view and edit access
This translates to:
Everyone in all@ can VIEW what’s in the Company Drive Folder, including what’s in:
Team A Drive Folder
Team B Drive Folder
Only you can EDIT what’s in the Company Drive Folder, including what’s in:
Team A Drive Folder
Team B Drive Folder
Only team members in team-a@ can EDIT what’s in the Team A Drive Folder
Only team members in team-b@ can EDIT what’s in the Team B Drive Folder
Here’s what your company drive folder could look like:
This means that if you add a new team member to team-a@, they will automatically have viewing permissions to everything in the Company Drive Folder AND editing permissions to everything in the Team A Drive Folder. Neat.
As a bonus, this structure gives all and only the right team members editing access to folders. When a company hits 100 employees, you start to see files and folders go missing because it’s too easy to move things around in Drive without training… and who has time to spend on that?! | https://leiarollag.medium.com/3-tricks-to-scaling-onboarding-with-google-admin-b644d44ead5d | ['Leia Rollag'] | 2018-08-27 00:55:12.227000+00:00 | ['Startup', 'People Operations', 'Google Calendar', 'Google Drive', 'Onboarding'] |
[JS/TS] Simple App State Management | Provider
First, Provider must be unique in the application. It makes sure all the listeners have the same data.
To make it happen, I used a design pattern: Singleton. It creates a private instance and provides a public method to make the instance accessible from the outside. This prevents the instance from being created more than once.
class Provider {
  private static instance: Provider;
  public static getInstance() {
    if (this.instance) {
      return this.instance;
    } else {
      this.instance = new Provider();
      return this.instance;
    }
  }
}
Provider should prevent data from being modified from the outside without control. Therefore, Provider keeps its data private and provides public methods to manipulate it, such as addItem, deleteItem, getItems, updateItem, etc. To keep the demo simple, I wrote one method, addItem. Input will use it.
Finally, Provider should hold the listeners' functions. The listeners tell Provider what to do when data is updated, so Provider can notify the listeners by running all of these functions. These functions ask for the data in Provider, and Provider always sends them a copy of the data to protect it from being polluted.
The code is presented as followed:
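What follows is a minimal sketch reconstructed from the description above, since the original embed does not render here. The Item type, the Listener signature, and every method name other than addItem and getItems are my own assumptions for illustration.

type Item = { id: number; text: string };
type Listener = (items: Item[]) => void;

class Provider {
  private static instance: Provider;
  private items: Item[] = [];          // private data, never exposed directly
  private listeners: Listener[] = [];  // callbacks registered by components

  private constructor() {}

  public static getInstance(): Provider {
    if (!this.instance) {
      this.instance = new Provider();
    }
    return this.instance;
  }

  // Components register what they want to do when the data changes.
  public subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  // The Input component calls this to add an item.
  public addItem(text: string): void {
    this.items.push({ id: this.items.length + 1, text });
    this.notify();
  }

  // Always hand out a copy so callers cannot pollute the internal state.
  public getItems(): Item[] {
    return [...this.items];
  }

  private notify(): void {
    const copy = this.getItems();
    this.listeners.forEach((listener) => listener(copy));
  }
}

A consumer would call Provider.getInstance().subscribe(render) once and then re-render whenever the callback fires.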
In conclusion, there are 3 key points of Provider: | https://medium.com/swlh/js-ts-simple-app-state-management-c04c2a89155d | ['Jijie Liu'] | 2020-11-09 20:17:49.883000+00:00 | ['Typescript', 'Provider', 'State Management'] |
Crossing the bridge of memory | I grew up in Lebanon, less than a kilometre away from a Palestinian refugee camp — one of many that were set up across the country to accommodate hundreds of thousands of refugees who were forced into exile in 1948. These camps were supposed to be temporary.
More than 70 years later, tents — which were once pitched across the country — have been replaced with concrete walls and roofs which, in turn, have gradually morphed into a dense and shockingly dire urban space. Generations have since been born into this state, somehow belonging to un-belonging — this flesh of the Palestinian diaspora, a blistering wound of exile.
As a Lebanese who has stood at the border and witnessed the Israelis building settlements on occupied territories, wondering how Palestinians’ homes and lands can be so close yet so unreachable, Mourid Barghouti’s memoir consolidated my perspective on displacement. It also evoked a strangely conflicting set of emotions: joy intermingled with an overwhelming feeling of loss.
I Saw Ramallah (1997) is Barghouti’s personal testament to the cruelty of exile, of living on the periphery of belonging as it synchronously knocks on the door of the Palestinian collective in the diaspora.
This postcolonial work is a riveting rendering of the Palestinian predicament as it presents an experiential account of displacement as a direct and tragic result of the Israeli occupation that caused the mass exodus of Palestinians in 1948, or what is otherwise referred to today as the Nakba.
An Account of Palestinian Displacement
After 30 years in exile, moving from one city to another and setting roots nowhere, the poet Mourid Barghouti finally returns home in 1996, for the first time since the Israeli occupation of the West Bank in 1967. As he crosses a bridge from Jordan to Ramallah, he is struck by the changes the city has undergone. His mind drifts to the past as he tries to absorb everything around him but struggles to reconcile this “idea of Palestine” with the one from his memories.
Photo by Ahmed Abu Hameeda on Unsplash
In the book’s foreword, Edward Said praises the book as “one of the finest existential accounts of Palestinian displacement”. Barghouti’s memoir provides an experiential narrative on living outside of borders and existing in fragments of time as he tries to “put the displacement between parenthesis, to put a last period in a long sentence of the sadness of history, personal and public history.” Reading this autobiography is not reading one man’s story; it is an insight into the Palestinian actuality.
Crossing the bridge of memory
The narrative begins on a bridge over the Jordan River as Barghouti prepares to cross over to the West Bank, nothing short of symbolic of the point of return, of entering one’s memories as if teleporting into a timeline of historical events. In this scene, Barghouti is the traveler who cannot interfere with the timeline or geography, or stay for that matter, as the situation is impermanent, and he remains somewhat displaced in spite of his presence in the setting.
Barghouti lets us in on his thoughts as he feels compelled to contemplate his own existence, and to shed light on the ambiguity of his condition like many other displaced Palestinians when he raises these questions:
“Here I am walking toward the land of the poem. a visitor? A refugee? A citizen? A guest? I do not know. Is this a political moment? Or an emotional one? Or social? A practical moment? A surreal one? A moment of the body? Or of the mind?” I Saw Ramallah by Mourid Barghouti
This excerpt appears to be a commentary on the conditions of the displaced, who, in relation to their homeland, are overcome with the feeling of loss. Barghouti wrote about this land for decades. She was the subject of his poetry, but now, as she concretized before his eyes, he was struck by a sense of foreignness (perhaps, because of the impermanence of his visit) which intermingled with the disconcerting, albeit familiar, feeling of not having a definitive status.
Painting an Emotional Landscape
In I Saw Ramallah, Barghouti paints a sentimental landscape of his homeland, comparing the Palestine of his memories with the present one as an “idea of Palestine” — one that cannot be materialised in the shadow of the Israeli occupation.
Photo by Jakob Rubner on Unsplash
For instance, his emotional reaction to the chopping of the fig tree, which no longer had an apparent function, shows that Palestinians have been holding on to the symbols of Palestine because they have been forced to leave her. The fig tree was a remnant of the past, the Palestine of writer’s childhood, before it was robbed from him.
However, Barghouti believes the homeland should constitute more than a bunch of symbols in its people’s memory. It is not merely an abstraction or a metaphor, as it can be experienced with the senses. “It stretches before me,” the Palestinian author writes, “as touchable as a scorpion, a bird, a well; visible as a field of chalk, as the prints of shoes.”
He argues that the Israeli occupation relies on the reduction of Palestine into symbolism, which can fade with time and subsequently dissipate from the mass recollection, as more and more settlements are built on the very soil that is metaphorized in the Palestinian collective memory.
On writing
“Writing is estrangement,” writes Barghouti. Aware of his own work’s vulnerability, he compares writing to the state of being far from home.
Is I Saw Ramallah a tragic telling of the story of Palestine? Probably not. Though he weaves the threads of sorrow throughout the memoir, Barghouti contends that the tragic circumstances of the Palestinian reality do not necessarily result in exclusively tragic writing.
The writing isn’t tragic per se, but he ingeniously merges the poetic and the political in his narrative.
It is this precise convergence of politics and sentiment through first-person narration that makes the book so enthralling. Barghouti is a skilled storyteller, a refugee “hungry for his own borders” who finds shelter in the land of poetry, and a citizen of his memories. I Saw Ramallah is his attempt to build a bridge between the past and present. | https://medium.com/the-open-bookshelf/book-review-crossing-the-bridge-of-memory-in-i-saw-ramallah-by-mourid-barghouti-1812198aa225 | ['Fatima H. Elreda'] | 2020-06-01 18:13:01.779000+00:00 | ['War', 'Books', 'Politics', 'World', 'Palestine'] |
The Top 10 Keyword Research Tools and How To Use Them | Ubersuggest has been a favorite of many online entrepreneurs and bloggers worldwide, and for a reason. It is a fast and free keyword research tool. While most keyword research tools assume you know the keywords to use even when you don’t, Ubersuggest gives you keyword ideas.
You only need to enter a simple generic term, and this keyword research tool will help you narrow down the options with its suggestions. Upon loading the webpage, you can begin your keyword research immediately.
Just enter a URL or seed keyword, set your region preferences, and click Search. What’s more, it gives you related keyword suggestions and long-tail keyword suggestions.
How Ubersuggest works
Ubersuggest breaks down keyword research into three main categories: overview, keyword ideas, and SERP analysis.
Domain overview
This is a graph that presents the search volume over time. It will show you the search volume for any keyword in any country in the last 12 months. You will be able to see if the keyword popularity is increasing or declining.
The overview also shows you the difficulty score for your desired keywords, ranging between 1 and 100. The higher the number, the harder it will be to compete for the keyword and vice versa.
Keyword ideas
The second part of Ubersuggest’s keyword research displays a list of keyword ideas. The keywords are sourced from Google Ads and Suggest recommendations, making it easy for you to identify potential keywords for your business.
You will also have the ability to see the search volume data for each of the keywords as well as the paid difficulty, cost per click, and SEO difficulty data.
SERP analysis
This section shows you the top 100 brands that are ranking for a particular keyword. It goes a step further and shows the countries and regions where those brands are located.
Source: Neil Patel
If anything, Ubersuggest is one of the best keyword research tools available today. The best part of it is that it offers a comprehensive list of potential keywords as well as brands that are outranking you for any given keyword. | https://medium.com/better-marketing/worlds-top-10-keyword-research-tools-in-the-marketing-today-8d49abfd9c0c | ['Alphan Maina'] | 2019-11-01 04:39:31.955000+00:00 | ['SEO', 'Data', 'Google', 'Keyword Research Tools', 'Keyword Research'] |
Today’s Rembrandts in the Attic: Unlocking the Hidden Value of Data | De Nacthwacht by Rembrandt, 1642 (Rijksmuseum Amsterdam)
A shorter version of the below appeared in The Harvard Business Review on May 15, 2020
Twenty years ago, Kevin Rivette and David Kline wrote a book about the hidden value contained within companies’ underutilized patents. These patents, Rivette and Kline argued, represented “Rembrandts in the Attic” (the title of their book*). Patents, the authors suggested, shouldn’t be seen merely as defensive tools but also as monetizable assets that could be deployed in the quest for profits and competitive dominance. In an interview given by the authors, they referred to patents as “the new currency of the knowledge economy.”
We are still living in the knowledge economy, and organizations are still trying to figure out how to unlock under-utilized assets. But today the currency has shifted: Today’s Rembrandts in the Attic are data. At the same time, the currency and the means of unlocking the value of data are quite different than with patentable innovations. Unlike patents, the key to harnessing data’s value is unlikely to be found in restrictive licensing approaches. Unlocking the value of data requires a new approach, one that recognizes that the value of data ultimately lies in access, collaboration, and application of data to solving a wide range of problems using tools like machine learning.
The vast amounts of data being generated today represent a vast repository of potential value (and danger), and not just monetary: there is tremendous social good to be unlocked as well. But do organizations — and more importantly, do we as a society — know how to unlock this value? Do we know how to find the insights hidden in our digital attics and use them to improve people’s lives? And how do we unlock these treasures in a manner that is societal beneficial?
At The GovLab, an action-oriented think tank located within NYU, we are dedicated to unleashing the societal value of data to improve decision making. Our work with corporations, governments, and non-profit organizations has convinced us of the tremendous potential (and risks) of this data, and also that the potential remains largely unfulfilled.
In what follows, we outline four steps that we think could help data stewards (we resist using the term “data owners”) to maximize the potential.
If there is an overarching theme that emerges, it is about the value of re-using. In recent years, several countries have witnessed the rise of an open data movement, and a growing number of organizations have taken steps to release or made accessible previously siloed data sets. Despite occasional trepidation on the part of data holders, our research has repeatedly shown that such efforts can be value-enhancing — both for data holders and for society at large. Better and more transparent re-use of data is arguably the single most important measure we can take to unleash the full possibilities of data.
1. Develop new methodologies to identify and measure the value of data
The first step required to fulfill this potential is for all stakeholders to arrive at a better understanding of just what we mean by value. Today there exists widespread consensus that data is valuable. Despite such agreement, however, there exists no equally accepted method for calculating or estimating the value of data. Such a consensus must be arrived at through a broad process of consultation that involves data holders and users from all sectors, as well as policymakers, researchers and academics, and civil society or other groups representing the public interest.
One important consideration is what variables or indices to use. While data may have monetary worth, it can also have what a recent report by the Bennett Institute for Public Policy refers to as “social welfare” value, which means that unlocking or sharing data could contribute to “the wellbeing of all society.” These two forms of value — societal and monetary — may not always coincide. For example, in re-using data, an organization may surrender a certain amount of financial advantage (or incur an opportunity cost) even while contributing to the broader social welfare. To guide such difficult decisions, policymakers and society at large must determine broader metrics of valuation and consider how various metrics interact and sometimes clash. In addition to social and monetary value, other metrics to consider include potential harms that may result from releasing or sharing data, and the opportunity costs of not re-using data. All these metrics co-exist in a delicate balance. Considered together, they can help organizations determine the true value of data.
2. Develop enabling ecosystems and collaborative frameworks to move from extraction to co-creation of value
Unlike physical assets, data goods are non-rivalrous and intangible, which means that they can be shared without depriving their original holders of benefit. The process of maximizing under-utilized data assets will therefore often involve arriving at new institutions and frameworks to enable data collaboration and what we call “co-creation of value.” This concept of co-creation is not new and various experts have called for the creation of new institutions to facilitate it in different sectors. In her book, The Entrepreneurial State, University College London Professor Mariana Mazzucato argues that such a framework is necessary to bring the public and private sectors together to spur innovation. She writes:
“Creating a symbiotic (more mutualistic) public–private innovation ecosystem thus requires new methods, metrics and indicators to evaluate public investments and their results. Without the right tools for evaluating investments, governments have a hard time knowing when they are merely operating in existing spaces and when they are making things happen that would not have happened otherwise. The result: investments that are too narrow, constrained by the prevailing path-dependent, techno-economic paradigm.”
Data use can operate the same way, bringing together different institutions from the public and private sector to find new, innovative approaches through what we call “data collaboratives.” We outline some specifics of these institutions and frameworks below. The broader goal is to create new ecosystems of sharing that move beyond legacy models of value extraction and asset hoarding.
Drawing on the analogy with patents (those earlier “Rembrandts in the attic”), it is worth pointing out in this context the dangers and risks of not sharing. While patents can be competitive assets for companies, they also often block innovation and prevent true competition from emerging. In much the same way, data hoarding can result in broader societal and monetary losses. These losses may ultimately rebound on the data holders themselves, who fail to benefit from missed-out innovations or breakthroughs.
3. Innovate with new data collaborations and re-use conditions
In order to enable sharing, we need new structures that foster partnerships and more collaborative approaches. The old model of single-ownership is outdated and no longer conducive to maximizing the value of data assets. Data holders are also unlikely to foster public value creation by simply auctioning off their “Rembrandts” to the highest, most well-resourced bidder. Several structures have been proposed, including data co-ops, data commons and (our preferred term at The GovLab) data collaboratives.
Data collaboration can take many forms. In our typology, we generally focus on two defining variables: engagement and accessibility. The first variable, engagement, refers to the degree to which the data supply and demand actors co-design the use of corporate data assets. We find that collaboration is often independent, in that the private-sector holder has little to no involvement in data re-use, cooperative, in that data suppliers and data users work together, and directed, in that the data holder seeks a specific product. The second variable, accessibility, refers to the extent to which external parties can access private data. Within it, we find that data is either open access, in that there are few restrictions on who can see it, or restricted, in that only pre-selected partners receive unfettered access.
From these variables, a variety of models emerge. For data collaboratives where there is both independent data usage and open access, we see public interfaces such as APIs, which the startup Numina uses in its Street Intelligence initiative, and data platforms, such as Uber Movement. Data collaboratives with open access and directed use often are prizes and challenges in which companies make data available to individuals who compete to develop apps. These competitions can be open challenges that allow participation from the public, such as the LinkedIn Economic Graph Challenge, or more selective challenges directed at a few trusted partners, such as the Orange Telecom Data for Development Challenge. Other models include data pools, trusted intermediaries, research and analysis partnerships, and intelligence generation ventures.
While each of these frameworks has its distinguishing characteristics, they all share a commitment to developing fresh forms of data management and re-use conditions, including new sharing agreements and licensing provisions. They begin from a recognition that data is in many respects unlike conventional assets, and that more sophisticated forms of governance and value-maximization are required in order to work through some of the tradeoffs and competing valuation metrics. In particular, these new partnerships and holding entities can help balance the private (often monetary) value of data holders and the wider societal benefits of sharing.
A variety of barriers stands in the way of streamlining the opening and re-use of data, including the absence of an enabling and scalable policy and legislative agenda, the lack of internal capacity, and limited access to external expertise and resources. At a time of increased data asymmetries and a growing need for data to develop artificial intelligence, we need new intermediaries that can help lower these barriers, build a center of expertise that is available to all, and democratize data availability.
Those focused on enabling data collaboration should also take a page from the open source software movement. Open source has become the default business model thanks to the existence of flexible yet trusted policy and legal instrument, concepts of good stewardship, and an ethos of collaboration and peer-learning.
4. Identify and nurture data stewards
As data collaboratives and other similar structures gain increasing validity, it is becoming clear that new human and institutional roles will be required to foster them (and more generally to encourage a culture of sharing). In our work at the GovLab, we have identified a key role within data holding organizations for what we call data stewards. As the European Commission’s High-Level Expert Group on Business-to-Government Data Sharing recognizes, these individuals or teams empowered to proactively initiate, facilitate, and coordinate data sharing are essential to using cross-organizational and cross-sector data toward the public interest.
Data stewards can be seen as the curators of the “Rembrandts” held by a business or institution. They are individuals or groups who manage data within their organizations, and whose specific remit is to maximize both the societal and monetary value of data assets by fostering collaboration and sharing, with an eye to maximizing both societal and monetary value. Among other responsibilities and roles, data stewards can identify under-utilized data that may have potential value; locate and foster partnerships to help unlock that value; and ensure a responsible framework that balances potential benefits of sharing against possible risks such as harms to privacy or security.
*Venkatesh Hariharan reminded me of the book during a fascinating discussion at The GovLab on governance and data collaboratives, which was in part the inspiration for this blog.
** Thanks also to Andrew Young, Dave Green, Akash Kapur, Michelle Winowatan and Andrew Zahuranec for their input. | https://medium.com/data-stewards-network/todays-rembrandts-in-the-attic-unlocking-the-hidden-value-of-data-e98bc308fd9e | ['Stefaan G. Verhulst'] | 2020-06-18 14:46:16.426000+00:00 | ['Data4good', 'Data Science', 'Data', 'AI'] |
THE ETERNAL PARTY: Understanding My Dad, Larry Hagman–By Kristina Hagman | When you have a very famous father, like mine, everyone thinks they know him.
My dad, Larry Hagman, portrayed the storied, ruthless oilman J.R. on the TV series Dallas. He was the man everyone loved to hate, but he had a personal reputation for being a nice guy who fully subscribed to his motto: DON’T WORRY! BE HAPPY! FEEL GOOD! Dad had a famous parent, too―Mary Martin, known from many roles on Broadway, most memorably as Peter Pan. Off-stage she was a kind, elegant woman who maintained the down home charm of her Texas roots. Both were performers to the core of their beings, masters at crafting their public images. They were beloved. And their relationship was complex and often fraught.
My father never apologized for anything, even when he was wrong. But in the hours before he died, when I was alone with him in his hospital room, he begged for forgiveness. In his delirium, he could not tell me what troubled him, but somehow I found the words to comfort him. After he died, I was compelled to learn why he felt the need to be forgiven.
As I solved the troubling mystery of why my happy-go-lucky, pot-smoking, LSD-taking Dad had spent his last breaths begging to be forgiven, I also came to know my father and grandmother better than I had known them in life.
KRISTINA HAGMAN is an artist who has been featured in the Los Angeles Times, USA Today, People magazine, Closer magazine (UK), and more. She is the daughter of Larry Hagman and granddaughter of Mary Martin and lives in Santa Monica, California.
You can purchase your own copy of The Eternal Party on Amazon, Barnes and Noble, or at your local indie bookstore. | https://medium.com/carol-mann-agency/the-eternal-party-understanding-my-dad-larry-hagman-the-tv-star-america-loved-to-hate-by-bede5c38ee46 | ['Carol Mann Agency'] | 2016-09-27 17:49:38.436000+00:00 | ['Books', 'Celebrity', 'Television', 'Love', 'Family'] |
Construction Tech Startups Are Poised To Shake Up A $1.3-Trillion-Dollar Industry | Originally published in TechCrunch on November 17, 2020.
In the wake of COVID-19 this spring, construction sites across the nation emptied out alongside neighboring restaurants, retail stores, offices and other commercial establishments. Debates ensued over whether the construction industry’s seven million employees should be considered “essential,” while regulations continued to shift on the operation of job sites. Meanwhile, project demand steadily shrank.
Amidst the chaos, construction firms faced an existential question: How will they survive? This question is as relevant today as it was in April. As one of the least-digitized sectors of our economy, construction is ripe for technology disruption.
Construction is a massive, $1.3 trillion industry in the United States — a complex ecosystem of lenders, owners, developers, architects, general contractors, subcontractors and more. While each construction project has a combination of these key roles, the construction process itself is highly variable depending on the asset type. Roughly 41% of domestic construction value is in residential property, 25% in commercial property and 34% in industrial projects. Because each asset type, and even subassets within these classes, tends to involve a different set of stakeholders and processes, most construction firms specialize in one or a few asset groups.
Regardless of asset type, there are four key challenges across construction projects:
High fragmentation: Beyond the developer, architect, engineer and general contractor, projects could involve hundreds of subcontractors with specialized expertise. As the scope of the project increases, coordination among parties becomes increasingly difficult and decision-making slows.
Poor communication: With so many different parties both in the field and in the office, it is often difficult to relay information from one party to the next. Miscommunication and poor project data account for 48% of all rework on U.S. construction job sites, costing the industry over $31 billion annually, according to FMI research.
Lack of data transparency: Manual data collection and data entry are still common on construction sites. On top of being laborious and error-prone, the lack of real-time data is extremely limited, therefore decision-making is often based on outdated information.
Skilled labor shortage: The construction workforce is aging faster than the younger population that joins it, resulting in a shortage of labor particularly for skilled trades that may require years of training and certifications. The shortage drives up labor costs across the industry, particularly in the residential sector, which traditionally sees higher attrition due to its more variable project demand.
A construction tech boom
Too many of the key processes involved in managing multimillion-dollar construction projects are carried out on Excel or even with pen and paper. The lack of tech sophistication on construction sites materially contributes to job delays, missed budgets and increased job site safety risk. Technology startups are emerging to help solve these problems.
Image Credits: Bain Capital Ventures
Here are the main categories in which we’re seeing construction tech startups emerge.
1. Project conception
How it works today: During a project’s conception, asset owners and/or developers develop site proposals and may work with lenders to manage the project financing.
Key challenges: Processes for managing construction loans are cumbersome and time intensive today given the complexity of the loan draw process.
How technology can address challenges: Design software such as Spacemaker AI can help developers create site proposals, while construction loan financing software such as Built Technologies and Rabbet are helping lenders and developers manage the draw process in a more efficient manner.
2. Design and engineering
How it works today: Developers work with design, architect and engineering teams to turn ideas into blueprints.
Key challenges: Because the design and engineering teams are often siloed from the contractors, it’s hard for designers and engineers to know the real-time impact of their decisions on the ultimate cost or timing of the project. Lack of coordination with construction teams can lead to time-consuming changes.
How technology can address challenges: Of all the elements of the construction process, the design and engineering process itself is the most technologically sophisticated today, with relatively high adoption of software like Autodesk to help with design documentation, specification development, quality assurance and more. Autodesk is moving downstream to offer a suite of solutions that includes construction management, providing more connectivity between the teams.
3. Pre-construction
How it works today: Pre-construction is the translation of designs into highly detailed, executable construction plans. This includes many subprocesses such as finding labor, bid management, takeoff estimation and scheduling.
Key challenges: The pre-construction process is highly complex, yet one of the least digitized elements of the construction process. Subjective decision-making and Excel sheets still drive most of pre-construction, which makes it both time-consuming and difficult to update with new requests or changes.
How technology can address challenges: Marketplaces like Sweeten help connect contractors to projects, while digital workflow platforms like SmartBid and Building Connected help general contractors reduce the time and administrative burden of managing complex bid processes. Solutions like Alice Technologies take a predictive approach, using machine learning to optimize productivity.
4. Construction execution
How it works today: The execution of a construction project is usually performed by a combination of a general contractor’s in-house labor and/or trade-specific subcontractors. Even simple builds may require dozens of contractors.
Key challenges: With so much complexity on the physical job site, it is difficult for contractors to get a sense of real-time progress. Decision-making is more often reactionary than proactive, and it is hard for contractors to assess the full downstream impacts of each decision.
How technology can address challenges: There are several software-led approaches to managing execution complexity, including field management tools like Rhumbix, on-site safety management software like Safesite, or site-visualization tools like Openspace, OnSiteIQ or Smartvid.io. Other companies have taken a full-stack approach to disrupting the construction process. Mosaic, for example, uses proprietary software to turn construction plans into detailed manuals that allow a build to be performed with fewer, less specialized laborers.
5. Post-construction
How it works today: Even before the build is completed, contractors will start preparation of operating manuals, inspections and testing, close-out documentation and other tasks to prepare the asset for turnover.
Key challenges: Document management, reporting and handover can be time-consuming, as it involves coordination across all parties involved with the build.
How technology can address challenges: Most commonly, project management tools will offer a module to help manage this process, though some targeted solutions like Pype or Buildr focus on digitizing the closeout process.
6. Construction management and operations
How it works today: Construction management and operations teams manage the end-to-end project, with functions such as document management, data and insights, accounting, financing, HR/payroll, etc.
Key challenges: The complexity of the job site translates to highly complex and burdensome paperwork associated with each project. Managing the process requires communication and alignment across many stakeholders.
How technology can address challenges: The nuances of the multistakeholder construction process merit value in a verticalized approach to managing the project. Construction management tools like Procore, Hyphen Solutions and IngeniousIO have created ways for contractors to coordinate and track the end-to-end process more seamlessly. Other players like Levelset have taken a construction-specific approach to functions like invoice management and payments.
Key barriers holding back construction tech
It’s no secret the construction industry has been slow to innovate. Despite the advancement of heavy machinery, drone technology and computer vision, construction productivity per worker has barely changed for decades. Today, the majority of construction firms spend less than 2% of annual sales volume on IT (according to the 2019 JB Knowledge Construction Technology Report). Yet technology has the potential to help firms save significantly across other cost categories like overhead labor, materials and construction labor, so why have construction firms been so slow to adopt technology?
First, the fragmentation and complexity of so many involved parties creates complexity around the ultimate decision to adopt new solutions. Construction projects are the ultimate example of “too many cooks in the kitchen.” It can be hard to pinpoint a single decision-maker among the owner, general contractor and subcontractors involved. On top of that, there are often misaligned incentives, as a product’s end user is likely different than the party that ultimately bears the cost.
Second, many firms have an aversion toward adopting point solutions, which most existing products are today. With very lean technology teams, it can be burdensome for construction companies to integrate new solutions. Because of this, most construction firms shy away from solutions that only serve one workflow, as potential productivity benefits are often offset by the time and effort required to set up the software and train teams how to use it.
Lastly, many construction firms have a low risk tolerance given the heavy liabilities associated with building projects, therefore they lack a culture of innovation. Some firms, however, are beginning to change this. Firms like Suffolk and Turner Construction have dedicated chief innovation officers to evaluate and implement construction technology.
Turning the corner on tech adoption
Just as incumbents in industries like financial services and real estate came to embrace innovation over time, we believe that now more so than ever, construction companies will begin to embrace construction technology at an accelerated rate — not because they want to, but because they have to. One of the primary problems facing the construction industry is a labor shortage, so firms across the industry must deploy new technologies to increase productivity amid an ever-shrinking workforce.
Recessions of the past have taught us that construction jobs tend to come back much slower than construction volume. This is evident particularly after the 2008 Great Recession. Construction volume (measured by overall project value) returned to its pre-2008 peak in about five years, while construction jobs had still not fully recovered even a decade later. With the rise of the gig economy, construction laborers — both existing and potential — have an increasing number of alternative job options, many with similar pay and more stable hours. For that reason, the construction labor force is aging quickly, with fewer young people joining the industry.
Since COVID-19 took hold in the United States this spring, it reinforced the pressure for construction companies to improve productivity. Mandated shutdowns cut nearly one million construction jobs in April 2020, and while many have returned to work, BLS data shows that September 2020 construction employment is still lower than February 2020 levels by 394,000 jobs. Nearly five years of construction job gains were wiped out in one month, and if the industry behaves anything like prior recessions, it will take many years to fully recover.
To remain competitive, improving productivity is a necessity for construction firm’s long-term survival, and adoption of construction technology solutions is one of the ways to do so. Much like how COVID accelerated tech adoption in industries like retail and banking, we expect to see construction companies accelerate adoption of software that helps them turn manual processes into digital, cloud-based workflows.
While perhaps simple in theory, this shift from pen and paper to mobile devices is a large undertaking that requires retraining entire workforces. Construction workers are extremely skilled in their crafts and learn on the job from their mentors. Skilled laborers are often resistant to change, preferring age-old and tried-and-true methods such as paper, pen, string and tape measures. But almost everyone is now comfortable with mobile applications, so construction workers are likely at a point where they’ll be ready to jump into the digital world. Once construction firms make a digital shift, they will unlock a treasure trove of data and analytics. We’re excited for the next phase of construction technology, where firms can use data-driven insights to make better decisions that reduce costs, improve productivity, streamline projects and increase worker safety.
We already see startups beginning to enable this digital shift across all phases of the construction process. Whether it’s pre-construction plans optimized by Alice Technologies’ predictive intelligence platform, or real-time progress tracking visualized through OpenSpace’s onsite capture images, as investors, we’re bullish on the next wave of construction tech companies. | https://medium.com/ideas-from-bain-capital-ventures/construction-tech-startups-are-poised-to-shake-up-a-1-3-trillion-dollar-industry-9aab629e570a | ['Allison Xu'] | 2020-12-02 17:03:55.939000+00:00 | ['Startup', 'Innovation', 'Insights', 'Construction Tech', 'Construction'] |
Angular — A Comprehensive guide to unit-testing with Angular and Best Practices | Angular — A Comprehensive guide to unit-testing with Angular and Best Practices
A step-by-step guide to unit testing an Angular project, with some examples
Unit Testing ecosystem
Unit testing is as important as development itself nowadays and has become an integral part of the development process. It boosts both the quality of the code and the confidence of developers. Unit tests are written with the Jasmine framework and executed by Karma, a test runner, which runs them in the browser.
Sometimes we write tests before we even start developing, which is called TDD (test-driven development). Here we mostly follow BDD (behavior-driven development), since we are using the Jasmine framework.
Let’s jump in right away!! Here are the concepts that we will cover in this article
Features of Unit Testing
When can we write Unit Tests
Why use Jasmine
Why use Karma
Setup an Example Project with karma and jasmine
Testing Configuration in Angular Project
Key Concepts
Testing Component Class
Testing Component DOM
Testing pipes
Testing services
Code Coverage
Best Practices
Conclusion
Check out how to execute protractor e2e tests in Docker.
Features of Unit Testing
We have two kinds of testing: end-to-end testing and unit testing. There are specific features that characterize unit tests. Let’s go through those.
Used for behavior tests, Jasmine encourages Behavior-driven development.
Should run very fast
Should be tested in isolation without any dependencies
Should define clear output for the input
The smallest unit can be tested as a unit
When can we write Unit Tests
We have seen features of unit tests, let’s see where we can use these
At the beginning and completion of initial development
If we add any feature to the project, we have to write unit tests for it
If we change any existing feature, we have to write unit tests for it
Why use Jasmine
Jasmine is a behavior-driven javascript testing framework. Here are some features of Jasmine
Default integration with karma
Provides additional functions such as spies, fakes, and pass-throughs
easy integration with reporting
Why use Karma
Karma is a test-runner for javascript apps. Here are some features of Karma
Angular recommends Karma and CLI create it out of the box.
We can automate tests in different browsers and devices
We can watch files without manual restart
Online documentation is good
We can easily integrate with a continuous integration server
Setup an Example Project with karma and jasmine
We don’t have to set up anything to start unit testing. Thanks to the Angular CLI, it creates all the setup (Jasmine and Karma) for us. Let’s test.
// create a new angular project
ng new angular-unit-testing

// start the test
npm test
And here is the result in the browser
test results in the browser
For testing different features of Angular, I created a sample project, a to-do list app. Here is a screenshot; you can clone it from here.
// clone it from here
git clone https://github.com/bbachi/angular-unit-testing.git

// install dependencies and start the app
npm install
npm start
todo App
Testing Configuration in Angular Project
When we run npm test or ng test, Angular takes the configuration from angular.json. Look at the test section of the angular.json file.
We specify that test.ts is our main file and karma.conf.js is our Karma configuration file.
testing part of angular.json
In the test.ts file, we load all the spec files from the project for testing; the other file, karma.conf.js, is where we specify configuration such as reporters, port, browsers, etc.
// Then we find all the tests.
const context = require.context('./', true, /\.spec\.ts$/);

// And load the modules.
context.keys().map(context);
karma.conf.js
Key Concepts
There are some key concepts that we need to learn before we begin testing the app. Let’s see what are those
TestBed
This is the basic building block of Angular testing. It creates a dynamically constructed test module that emulates an Angular NgModule. The metadata that goes into TestBed.configureTestingModule() and @NgModule is pretty similar, and this is where we actually configure the spec file.
If we look at the below file, TestBed.configureTestingModule() takes a similar configuration to @NgModule. It creates a dynamic testing module, and we always put this configuration in beforeEach(), which runs before every test method.
sample file
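In case the embedded file does not load, a minimal sketch of such a configuration is shown below; the declarations, imports, and providers are placeholders for whatever the component under test actually needs, and FormsModule is only an assumed example.

import { TestBed, async } from '@angular/core/testing'; // in newer Angular versions, async is renamed waitForAsync
import { FormsModule } from '@angular/forms';
import { AppComponent } from './app.component';
import { AppService } from './app.service';

beforeEach(async(() => {
  TestBed.configureTestingModule({
    declarations: [AppComponent], // components under test, like @NgModule declarations
    imports: [FormsModule],       // modules the template relies on (assumed here)
    providers: [AppService],      // real services, or mocks/spies instead
  }).compileComponents();
}));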
compileComponents()
If we look at the above file again, we are using the compileComponents() method. If a component uses external files such as styleUrls and templateUrl, those files need to be read from the file system before the component can even be created. That is an asynchronous operation, which is why the entire configuration is placed inside an async function.
What it actually does is read the external files from the file system asynchronously and compile the corresponding component. If we are running the tests with the Angular CLI, we don’t need to do this, because the CLI compiles all the code before testing.
why two beforeEach functions
If we look at the below file, we are using two beforeEach functions. Why do we need two?
Remember, compileComponents() is asynchronous. So we want to make sure the component is compiled before we create a fixture to access the component instance. The second beforeEach() runs after the first one.
app.component.spec.ts
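As a rough sketch of that structure (the component name follows the sample project, everything else is generic):

import { ComponentFixture, TestBed, async } from '@angular/core/testing';
import { AppComponent } from './app.component';

let fixture: ComponentFixture<AppComponent>;
let component: AppComponent;

// first beforeEach: asynchronous, compiles external templates and styles
beforeEach(async(() => {
  TestBed.configureTestingModule({
    declarations: [AppComponent],
  }).compileComponents();
}));

// second beforeEach: synchronous, runs only after compilation has finished
beforeEach(() => {
  fixture = TestBed.createComponent(AppComponent);
  component = fixture.componentInstance;
  fixture.detectChanges();
});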
Why do we use NO_ERRORS_SCHEMA
Let’s select the app.component.ts component from the above project and test it.
Look at the spec file that the Angular CLI generated for us when the component was created. I added f (line 5) to the describe global function to run only this file when we execute tests. These are called focused tests.
spec file angular CLI generated for us
Let’s run npm test; we will see a bunch of errors because of nested components, since the app component contains other components such as the header, footer, and add-item components. We would have to declare those in the app.component.spec.ts declarations array (line 11).
karma showing errors because of nesting components
Another way is to use NO_ERRORS_SCHEMA. With this, we tell angular “hey angular, ignore all the unrecognized tags while testing app.component”
Let’s use this and see the results.
usage of NO_ERRORS_SCHEMA
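The change amounts to one extra property in the testing module; roughly:

import { NO_ERRORS_SCHEMA } from '@angular/core';

TestBed.configureTestingModule({
  declarations: [AppComponent],
  // tell Angular to ignore unknown elements such as <app-header> or <app-footer>
  schemas: [NO_ERRORS_SCHEMA],
}).compileComponents();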
app component test is successful
All the introduction and configuration part is done. Let’s dive into some examples!!
Testing Component Class
With this in place, let’s go ahead and test the app component, starting with the deleteItem method in app.component.ts.
deleteItem method in app.component.ts
app.component.spec file
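Since the embedded gists may not render here, the spec below is only a sketch of how such a test could look. It assumes deleteItem removes the given item from an items array with an id/value shape, which may differ from the actual implementation in the repo, and it runs inside the describe/beforeEach setup shown earlier.

it('should remove the given item from the list', () => {
  // hypothetical shape of the todo items
  component.items = [
    { id: 1, value: 'buy milk' },
    { id: 2, value: 'pay bills' },
  ];

  component.deleteItem(component.items[0]);

  expect(component.items.length).toBe(1);
  expect(component.items[0].value).toBe('pay bills');
});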
Testing Component DOM
Let’s test the footer component, where we have footerText and numberOfItems. We will set these on the component and test the footer template DOM.
the footer of the app
footer spec file
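As a sketch of such a DOM test (the CSS selector and the exact property names are assumptions based on the description above):

import { By } from '@angular/platform-browser';

it('should render the footer text and item count in the template', () => {
  component.footerText = 'My Todo Footer';
  component.numberOfItems = 3;
  fixture.detectChanges(); // run change detection so the DOM picks up the new values

  const footerEl: HTMLElement = fixture.debugElement.query(By.css('.footer')).nativeElement;
  expect(footerEl.textContent).toContain('My Todo Footer');
  expect(footerEl.textContent).toContain('3');
});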
Notice the fixture.detectChanges() calls: we initialize footerText or numberOfItems in the test spec and then invoke Angular’s change detection cycle to make sure the DOM has these values before we test it.
Here is the result in the browser. Since the footer is a self-contained component, we can even see it rendered in the Karma browser from the last spec.
test results in the browser
Testing pipes
Angular pipes are easier to test since they normally don’t have any dependencies.
I have a custom pipe that shows the current year after the footer text like below. Let’s test this one.
a pipe which displays current year after footer text
copyright.pipe.spec.ts
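The spec itself is tiny. A sketch, assuming the pipe class is called CopyrightPipe and simply appends the current year to whatever string it receives:

import { CopyrightPipe } from './copyright.pipe';

describe('CopyrightPipe', () => {
  it('should append the current year to the footer text', () => {
    const pipe = new CopyrightPipe(); // no TestBed needed, the pipe has no dependencies
    const result = pipe.transform('My Footer');
    expect(result).toContain(new Date().getFullYear().toString());
  });
});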
Here are the results
Test result in the browser
Testing services
Services are the easiest to test. We have one service in our project, app.service.ts; let’s test it. There are two kinds of tests, synchronous and asynchronous, and we don’t have any async calls to test in this one.
We have an addItem method in the service; let’s test that one.
app.service.spec.ts
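A sketch of such a spec, assuming addItem pushes the item into an internal array that is exposed through getItems (the item shape is made up):

import { TestBed } from '@angular/core/testing';
import { AppService } from './app.service';

describe('AppService', () => {
  let service: AppService;

  beforeEach(() => {
    TestBed.configureTestingModule({ providers: [AppService] });
    service = TestBed.inject(AppService);
  });

  it('should add an item', () => {
    service.addItem({ id: 1, value: 'buy milk' }); // assumed item shape
    expect(service.getItems().length).toBe(1);
  });
});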
Here are the results in the browser.
test results in the browser
Code Coverage
Code coverage represents what percentage of the code is covered by unit tests. Usually, we aim for a target of 80%. The Angular CLI generates the coverage report in a separate folder called coverage. Let’s see how we can generate it.
Run the following command and it will generate the coverage folder as well.
ng test --no-watch --code-coverage
coverage report in the console
coverage folder created
If you load the index.html from this folder in the browser, you can see the full report.
full coverage report
Every folder in the coverage report has its own index.html; we can load that in the browser to see the report for that particular folder or module. For example, let’s load the app folder.
only app module files
Best Practices
Make sure unit tests run in an isolated environment so that they run fast.
Make sure we have at least 80% code coverage.
When it comes to testing services, always use spies from the Jasmine framework for the dependencies; this makes testing a lot faster (see the sketch after this list).
When we subscribe to observables while testing, make sure we provide both success and failure callbacks. This will remove unnecessary errors.
When testing components with service dependencies, always use mock services instead of real services.
All the TestBed configuration should be placed in beforeEach so as not to repeat the same code for every test.
While testing components, always access the DOM with debugElement instead of directly calling nativeElement. This is because debugElement provides an abstraction for the underlying runtime environment, which will reduce unnecessary errors.
Prefer By.css instead of querySelector if you are running the app on the server as well. This is because querySelector works only in the browser; if we are running the app on the server, it will fail. But we have to unwrap the result to get the actual value.
Always use fixture.detectChanges() instead of ComponentFixtureAutoDetect. ComponentFixtureAutoDetect doesn’t know about synchronous updates to the components.
Call compileComponents() if we are running the tests in a non-CLI environment.
Use RxJS marble testing whenever we are testing observables.
Use PageObject model for reusable functions across components.
Don’t overuse NO_ERRORS_SCHEMA. This tells Angular to ignore the attributes and unrecognized tags; prefer to use component stubs instead.
Another advantage of using component stubs instead of NO_ERRORS_SCHEMA is that we can interact with these components if necessary. It’s better to use both techniques together.
Never do any setup after calling compileComponents(). compileComponents is asynchronous and should be executed in async beforeEach. Maintain the second beforeEach to do the synchronous setup.
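To make the spy/mock-service advice above concrete, here is a minimal sketch of stubbing a service dependency with a Jasmine spy object. The method names mirror the AppService used in this article, but whether AppComponent actually calls getItems() on init is an assumption.

import { NO_ERRORS_SCHEMA } from '@angular/core';
import { TestBed } from '@angular/core/testing';
import { AppComponent } from './app.component';
import { AppService } from './app.service';

it('should render items provided by a spied service', () => {
  // a fake AppService whose methods are Jasmine spies
  const appServiceSpy = jasmine.createSpyObj('AppService', ['getItems', 'addItem']);
  appServiceSpy.getItems.and.returnValue([{ id: 1, value: 'buy milk' }]);

  TestBed.configureTestingModule({
    declarations: [AppComponent],
    providers: [{ provide: AppService, useValue: appServiceSpy }],
    schemas: [NO_ERRORS_SCHEMA],
  });

  const fixture = TestBed.createComponent(AppComponent);
  fixture.detectChanges();

  expect(appServiceSpy.getItems).toHaveBeenCalled();
});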
Conclusion
Unit testing an Angular application is fun once you understand the core features of the Angular testing module, Jasmine, and Karma. The Angular testing module provides utilities to test the app, Jasmine is a BDD testing framework, and Karma is a test runner.
Surgical Strike in Pakistan a Botched Operation? | Indian fighter jets carried out strikes against targets inside undisputed Pakistani territory, but open-source evidence suggested that the strike was unsuccessful.
At around 3:30 a.m. local time on February 26, a flight of 12 Mirage 2000 multi-role fighters reportedly attacked facilities of the terrorist group Jaish-e-Mohammad (JEM) near Balakot town in Mansehra District, Pakistan. Using open-source evidence, @DFRLab was able to geolocate the site of the attack and provide a preliminary damage assessment.
While most sources at the time identified the strike as taking place in the Pakistani town of Balakot, others located the strike at “Jaba Top,” a likely reference to a mountain top near the village of Jaba, about 10 kilometers south of Balakot.
Identification
On the morning of the February 26, after daybreak, a spokesperson for the Pakistan Armed Forces, Major General Asif Ghafoor, tweeted images from what he alleged to be the impact area of the Indian bombs.
Initial tweet showing damage left by Indian strike. (Source: @OfficialDGISPR/archive)
Ghafoor claimed that the payload was dropped in a hurry and impacted in an open area. The immediate imagery seemed to corroborate the statement, as the photographs were taken in a wooded area and did not show any damaged structures.
Two photos included fragments from the ordnance supposedly dropped by the Indian fighter jets. One image contained what looked like a set of tail fins and another what appeared to be a component of the targeting system.
The most identifiable photo was of the fins, which closely resembled those on the Israeli-produced SPICE-2000 precision-guided munitions (PGM). SPICE uses INS/GPS coordinates for autonomous navigation and electro-optical seekers for target identification and terminal navigation. According to Rafael, the manufacturer of the SPICE series of PGMs, the electro-optical sensor matches the scene with pre-loaded imagery for accurate target acquisition and homing.
Bomb components compared against a mockup SPICE-2000 among other Rafael products mounted on an F-16 model during the 2017 Paris Air Show. (Source: AIN Online/archive)
Initial reports stated that Mirage 2000s dropped 1,000-kilogram (approximately 2,000 pound) bombs on their targets, which is the same size as the SPICE-2000. This was corroborated by later reports, which stipulated that the SPICE-2000 was used in the strike. These munitions are compatible with the fighter jets, lending credibility to this assessment.
Location of the Strike
In addition to the imagery posted by the Pakistan Armed Forces spokesperson, video footage emerged of the area near the strike, showing a second point of impact. The video showed locals talking about the strike and claimed that only one man received injuries as a result of being startled by the bomb blasts.
Video showing additional damage in the target area. (Source: @shamajunejo/archive)
The video posted on social media showed a valley similar to the one appearing in the imagery shared by the Pakistan Armed Forces spokesperson. A single cluster of buildings was visible on top of the mountain ridge. Imagery posted by Uppsala University professor and UNESCO chair Ashok Swain on Twitter showed a closeup of this building, allowing for a clearer view of the area.
Overlap of closeup view and a still from the video footage, confirming that both images are from the same location. (Sources: @shamajunejo/archive; @ashoswai/archive)
Using this more detailed imagery, @DFRLab was able to confirm conclusively that this strike did in fact take place near Jaba Top. A comparison between tree clusters and the buildings visible in the imagery provided a positive match. | https://medium.com/dfrlab/surgical-strike-in-pakistan-a-botched-operation-7f6cda834b24 | [] | 2019-03-05 21:04:17.597000+00:00 | ['India', 'Pakistan', 'Security', 'Military', 'Social Media'] |
Don’t Rely on New Year’s Resolutions | Don’t Rely on New Year’s Resolutions
Your life is happening today.
Photo by Isaac Smith on Unsplash
I am not here to bash on New Year’s Resolutions. They’re great and they can be very useful. There is something extremely comforting and revitalising about them. Just like most people, I like starting fresh on the first day of the new year. I just think we might need to tweak resolutions a little bit to actually get them to work.
Boiling Frog and Resistance
It takes about two months (a little over 60 days) to form a new habit and, depending on the habit, it may take even longer to reap the benefits of it.
So, let’s say that you decide to meditate every day starting in January. You will sit down on January 1st, or maybe 2nd, with some guided meditation you found on YouTube after you realised that you had decided to try this thing but didn’t do any research on it in advance, and you’re pretty hungover and tired, and all you really want to do is just stay on the couch and watch Home Alone for the gazillionth time… ahem, what were you supposed to be doing? Oh, right! Meditation. Maybe you could just postpone it until tomorrow. Or, like, next week. You’ll have some more time and inspiration then. Maybe you’ll do it after the movie, or after you check your phone, or after dinner, or after you brush your teeth, or after…
Once “after” finally arrives, you will sit down and try to follow the guided meditation, and it will be boring. Trust me, it will be boring. People say that it’s amazing and that they feel great after it, they are so calm and centered, but the first time (and probably the next few times), it will be boring.
You will want to get up and do something else. Your brain will go into overdrive and a seemingly endless stream of consciousness will drown you. You will get up from your chair, sofa, or floor and decide that this whole thing is just overrated, so you’ll be better off doing something else.
Now that might be true. Meditation is not for everyone.
But this could be applied to any resolution you may have right now. You might try journaling, exercising every day (ambitious and probably not a great idea, you need rest days), eating healthier, cutting out some food, drink, or activity. Whatever it is, the first time you try it, you will not like it because your brain is not used to the effort it takes to do it, which causes resistance.
To make sure you stick with your New Year’s Resolutions, start preparing your brain for them today. You want to exercise every day (you should really think about some rest days) and now you are not exercising at all? Start by exercising once this week, then twice next week, three times the week after that. You see where I’m going with this.
Think of your brain as the proverbial boiling frog. When a frog is in a pot of water that is being slowly heated, it will (according to the fable, though not reality) not jump out. You need to warm up your brain to your resolutions.
Think BIG and Be Prepared for the Worst
Another way to go about it might be making a big resolution. Something truly life-changing. Perhaps you want to change your career, go back to school, learn a new language, leave your partner, or find a partner.
I’m thinking something really big.
Usually, we are advised against setting goals that are too large because it’s easy to become overwhelmed and give up too soon. But that too can be avoided with a little bit of preparation.
So take some time and think of a big goal. The sky is the limit — well, my front door is my limit currently, but one can hope.
When the goal is set, because it’s so large, it’s impossible to tackle it all at once. It’s necessary to create a strategy, a plan of attack. Think about how you can go about achieving this goal. Take this month to research it. Join an online community of like-minded people, read an inspiring biography or story, learn from others who have managed to achieve what you want to achieve. Write it all down.
Now, look at your plan and think about all the things that could go wrong. I mean, everything that could cause you to quit, everything that could cause problems for you in achieving it. Your fears, money, other people and their negativity, your negativity (we can all get into our own heads sometimes), lack of knowledge.
What are you afraid of? What do you think is missing? What do you need to move forward? What is stopping you from achieving your goal? Think of worst case scenarios. Will you need more money, more time, more patience, a better support system?
Analyse your “worst case scenarios”. Scrutinise every little thing about them and learn from them. Create several strategies that will have you prepared for almost anything life may throw at you.
Visualise and Work Hard
Now that you are prepared for the worst case, think of the best case scenario. Write it down. Put it on your fridge. Stick it on the wall above your desk at home. If you are a particularly visual person (as is the majority of the people), try making a vision board by drawing or making a collage of cut-outs.
While visualisation is really important in achieving your goals, it is only important insofar as it motivates you to do the work that needs to be done. The same is true for big goals and even, more generally, for setting New Year’s Resolutions. They should be an inspiration for you to do more work to become the person you want to be.
But you also need to realise that you are that person today. The you of today has the potential, the discipline, the strength, the perseverance to be the you you want to be next year.
So, since your life is happening today, start being that person today by slowly incorporating small changes that will prepare you for the big change you are hoping will come in a month. | https://medium.com/modern-woman/dont-rely-on-new-year-s-resolutions-f63ac207d05d | ['Kristina Hemzacek'] | 2020-12-01 20:42:32.379000+00:00 | ['Self Improvement', 'Motivation', 'Self', 'Strategy', 'New Year Resolution'] |
There’s an Addiction Epidemic in the Trans Community | I’m a gender-fluid addict — and there’s a lot more of me. I first noticed just how many of us there are when COVID hit. Suddenly there were more online support groups than ever before. More importantly, there were more groups specifically for trans and gender-nonconforming people — honestly, so many more than I ever could even imagine.
All these trans-specific meetings got me started thinking. Are trans people at a higher risk for addiction? Statistics would say so.
“Although data on the rates of substance abuse in gay and transgender populations are sparse, it is estimated that between 20 percent to 30 percent of gay and transgender people abuse substances, compared to about 9 percent of the general population.” — Jerome Hunt
That’s almost triple the rate for cishet people. Minority stress could be a significant factor for LGBTQ+ reaching our breaking point and using drugs. Social pressures for transgender people, such as people deadnaming us regularly and never using the correct pronouns, can be a considerable stressor too.
In a study by Tara Lyons, et al., A qualitative study of transgender individuals’ experiences in residential addiction treatment settings: stigma and inclusivity, it states
“Fourteen participants had previous experience of addiction treatment and their experiences varied according to whether their gender identity was accepted in the treatment programs. Three themes emerged from the data that characterized individuals’ experiences in treatment settings: (1) enacted stigma in the forms of social rejection and violence, (2) transphobia and felt stigma, and (3) “trans friendly” and inclusive treatment. Participants who reported felt and enacted stigma, including violence, left treatment prematurely after isolation and conflicts. In contrast, participants who felt included and respected in treatment settings reported positive treatment experiences.”
In my rehab experience, I received zero respect. No one asked me my pronouns; they all assumed I was a girl. I went to an eating disorder wing of the rehab, and when I tried to explain to them that I didn’t have body dysmorphia, I had body dysphoria, I received strange looks. I actually had to explain to them what dysphoria was. I’ve been to nine mental hospitals and rehabs, and only one of them gave a damn about my gender identity and pronouns.
So why does no one care about transgender drug addicts? I think there’s a simple yet incredibly sad answer to that question. Society doesn’t care about the transgender community, and it also doesn’t care about drug addicts. So a transgender drug addict? Forget it. Society has already turned their head the other way.
With the higher rates of addiction in the trans community, we need to be doing more outreach in those communities. A great example of this would be the Morris Home, in Philadelphia, which provides trans-affirming care for transgender addicts.
Why are there not more places like the Morris Home? Why did I only find out about it after I went to rehab? Probably because none of my social workers at any of the nine hospitals and rehabs I went to were competent in transgender-affirming care.
Mental healthcare fails us at every turn, so it’s no surprise that the addiction rehabilitation industry would fail us too.
There is a real and genuine epidemic out there for transgender people who are addicted today.
Society needs to stop ignoring trans people when they say they are hurting. It might save someone’s life. We can begin by using a trans person’s preferred name and proper pronouns — both of which have been proven to prevent suicide (especially in trans youth).
As far as addiction, I believe that it all begins with harm reduction. Who do you think is more at risk of being picked up by the police — a cis addict, or a transgender addict? Having a safe place for those addicts to go is crucial. At the very least, if you feel moved by this piece, help out with your local harm reduction coalition. They might do things such as hand out clean needles on the streets, or maybe even hand out free Narcan — a life-saving drug that can reverse the effects of an opioid overdose.
By giving trans people the love and respect they deserve before their addiction develops, we may be able to stop an epidemic entirely. And if we can’t, we can at least use harm reduction methods to make sure that those trans addicts are as safe and well taken care of as possible. | https://medium.com/an-injustice/theres-an-addiction-epidemic-in-the-trans-community-faf02a629525 | ['Elliot Geliebter'] | 2020-09-03 19:18:57.655000+00:00 | ['Addiction', 'Mental Health', 'Injustice', 'LGBTQ', 'Transgender'] |
Why Too Much Self-Help Is Not Working Well for You | Why Too Much Self-Help Is Not Working Well for You
And how to stop productivity guilt from hurting your mindset.
Photo by Tim Mossholder on Unsplash
If you try to follow all the self-help you’ve been reading, then you must:
Wake up at 5 am.
Take a cold shower.
Drink two gallons of water.
Journal for twenty minutes then meditate.
Do intermittent fasting.
Exercise four times per week.
Read one book a week. Etc.
But here’s the thing:
Whenever you miss or don’t keep up, your mind says: “You should work harder instead of taking a day off. You’re not serious about getting fit if you skipped today at the gym.”
Consuming self-help without moderation can hurt your mental well-being. You’ll feel guilt build up whenever you miss all the expectations you’re setting for yourself.
Scott Young, the author of Ultralearning, calls it the productivity guilt.
This begs the question: how can you get better in your life and career without judging and talking to yourself harshly?
To answer this question, I want you to look closely at this quote from Sam Harris, the creator of Waking Up meditation app. He says:
“The most important relationship in your life is your relation with your mind.”
When your relation to your mind isn’t a healthy one, nothing else will work — self-help advice, your friendships, your family bonds — nothing.
The key point is this:
Go easy on yourself when reading and following all that self-help advice. | https://medium.com/age-of-awareness/why-too-much-self-help-is-not-working-well-for-you-fe9bcec8644f | ['Younes Henni'] | 2020-12-09 12:05:46.261000+00:00 | ['Self Improvement', 'Personal Development', 'Productivity', 'Mindfulness', 'Life'] |
A short summary of Java coding best practices | Documentation and Comments
It’s worth mentioning that almost all code goes through changes throughout its lifetime, and there will be times when you or someone else is trying to figure out what a complex block of code, a method, or a class is intended to do unless it is clearly described. The reality is almost always as follows
There are times when the comment on a complex piece of code, method, or class does not add any value or serve its purpose. This usually happens when commenting for the sake of it.
Comments should be used to give overviews of code and provide additional information that is not readily available in the code itself. Let’s get started. There are two types of comments
Implementation Comments — are meant to comment out code or comment about a particular implementation of the code.
Documentation Comments — are meant to describe the specification of the code from an implementation-free perspective to be read by developers who might not necessarily have the source code at hand.
The frequency of comments sometimes reflects poor quality of code. When you feel compelled to add a comment, consider rewriting the code to make it clearer.
Types of Implementation Comments
There are four (4) types of implementation comments as shown below
Block comment — see example below
Single line comment — when the comment is not longer than a line
Trailing comments — very short comment moved to the right end
End of line comment — begins a comment that continues to the newline. It can comment out a complete line or only a partial line. It shouldn’t be used on consecutive multiple lines for text comments; however, it can be used in consecutive multiple lines for commenting out sections of code.
// Block comment
/*
 * Usage: Provides description of files, methods, data structures
 * and algorithms. Can be used at the beginning of each file and
 * before each method. Used for long comments that do not fit a
 * single line. 1 Blank line to proceed after the block comment.
 */

// Single line comment
if (condition) {
    /* Handle the condition. */
    ...
}

// Trailing comment
if (a == 2) {
    return TRUE;        /* special case */
} else {
    return isPrime(a);  /* works only for odd a */
}

// End of line comment
if (foo > 1) {
    // Do a double-flip.
    ...
} else {
    return false;       // Explain why here.
}

//if (bar > 1) {
//
//    // Do a triple-flip.
//    ...
//}
//else
//    return false;
Documentation comments (i.e. Javadoc)
Javadoc is a tool that generates HTML documentation from your Java code using the comments that begin with /** and end with */ — see Wikipedia for more details on how Javadoc works or just read along.
Here is an example of Javadoc
/**
 * Returns an Image object that can then be painted on the screen.
 * The url argument must specify an absolute {@link URL}. The name
 * argument is a specifier that is relative to the url argument.
 * <p>
 * This method always returns immediately, whether or not the
 * image exists. When this applet attempts to draw the image on
 * the screen, the data will be loaded. The graphics primitives
 * that draw the image will incrementally paint on the screen.
 *
 * @param  url  an absolute URL giving the base location of the image
 * @param  name the location of the image, relative to the url argument
 * @return      the image at the specified URL
 * @see         Image
 */
public Image getImage(URL url, String name) {
    try {
        return getImage(new URL(url, name));
    } catch (MalformedURLException e) {
        return null;
    }
}
And the above would result in HTML as follows when javadoc is run against the code that contains the above
See here for more
Here are some key tags that you can use to enhance the quality of the generated java documentation.
@author => @author Raf
@code => {@code A<B>C}
@deprecated => @deprecated deprecation-message
@exception => @exception IOException thrown when
@link => {@link package.class#member label}
@param => @param parameter-name description
@return => What the method returns
@see => @see "string" OR @see <a ...></a>
@since => To indicate the version when a publicly accessible method is added
For a complete listing and more detailed description see here
Twitter’s coding standard advises against the use of @author tag
Code can change hands numerous times in its lifetime, and quite often the original author of a source file is irrelevant after several iterations. We find it’s better to trust commit history and OWNERS files to determine ownership of a body of code.
Following are examples of how you could write a documentation comment that is insightful as described in Twitter’s coding standard
// Bad.
//   - The doc tells nothing that the method declaration didn't.
//   - This is the 'filler doc'. It would pass style checks, but doesn't help anybody.
/**
 * Splits a string.
 *
 * @param s A string.
 * @return A list of strings.
 */
List<String> split(String s);
// Better.
//   - We know what the method splits on.
//   - Still some undefined behavior.
/**
 * Splits a string on whitespace.
 *
 * @param s The string to split. An {@code null} string is treated as an empty string.
 * @return A list of the whitespace-delimited parts of the input.
 */
List<String> split(String s);
// Great.
//   - Covers yet another edge case.
/**
 * Splits a string on whitespace. Repeated whitespace characters
 * are collapsed.
 *
 * @param s The string to split. An {@code null} string is treated as an empty string.
 * @return A list of the whitespace-delimited parts of the input.
 */
List<String> split(String s);
It’s important to be professional when it comes to writing comments
// Avoid (x)
// I hate xml/soap so much, why can't it do this for me!?
try {
    userId = Integer.parseInt(xml.getField("id"));
} catch (NumberFormatException e) {
    ...
}

// Prefer (✔️)
// TODO(Jim): Tuck field validation away in a library.
try {
    userId = Integer.parseInt(xml.getField("id"));
} catch (NumberFormatException e) {
    ...
}
So, You Want to Change the World | Bad things happen and are a normal part of life.
Unfortunate events occur all the time and are waiting around every corner. Most of the time, the best decision we can make is to stop and deal with what we’re feeling right away. Because bottling it up will only make the situation worse down the line.
Whether it’s grieving or simply taking some time to recollect ourselves, it’s the most natural element of our humanity and therefore necessary.
Our emotions and reactions toward these happenings should be allowed to last longer than the one-year timeframe stipulated in the Diagnostic and Statistical Manual (DSM) without us having to worry that something out of the ordinary is wrong.
The DSM is, in a nutshell, the widely influential bible of mental disorders. And if we take a look, we’d see that basically anyone who has lost someone or something they genuinely cared about will be diagnosed as mentally ill after one year (if they still display the same symptoms).
This latest version of the DSM — DSM-5 — was published in 2013 by the American Psychiatric Association. It is flawed, to say the least, because it takes time — certainly more than one year — for us humans to acknowledge and come to terms with particular challenges that we face.
Subjectively, notions such as these — that oversimplify human behaviours — irk me, because they too, enforce the “toxic positivity mentality” we’re all dealing with today.
Are you feeling down? Something must be wrong if you’re not happy all the time or if you’re not pursuing it. When dealing with our pain, we might have caring people that say things that they assume will cheer us up. Most of the time, though, they don’t really help and actually make us feel worse. However, we can’t be too mad at these people as they — out of the goodness of their hearts — are just trying to make us feel better. They’re not experts, and even those that are the actual experts believe that one can deal with loss or other hefty circumstances and be over them within a quite short period.
Now, why am I mentioning this?
I believe that mourning a loved one, coping with illness or unemployment, coming out of a bad relationship, suffering the aftermath of a dysfunctional family, disturbed childhood or abuse in any way or form, are all valid reasons that can cause turmoil in our lives. I’m sure the way we initially felt when it happened won’t go away anytime soon. This is perfectly normal, and people should have the freedom to come to terms with whatever it is they’re dealing with at their own pace.
The connection we all share.
As a person that was depressed for over 20 years, aside from my own experiences, I have done some thorough reading on the subject.
Despite the growth in depression rates, my viewpoint remains that not everyone is depressed. However, it doesn’t mean that life is significantly better for those that aren’t. The isolation and disconnection that comes from occurrences and hardships we face are what connects all of us in general.
The said shared connection lies in what’s holding us back. At this point, it is safe to say that this is Resistance. The same one I’ve discussed in past articles. And pretty much the same one most of us would rather keep putting up with, instead of taking responsibility for things we can control within our own lives.
As mentioned in a piece of writing I wrote titled On quitting; why it is and will be okay.;
"I like to see Resistance as the Universe’s way of testing us; it’s that thing that prevents us from doing what we have to. It’s mostly internal, but can be external as well. Regardless, it’s very potent and can manifest into procrastination, addiction, conflict, illnesses and other circumstances that level up the more we progress."
It’d be quite hypocritical of me to downplay someone’s experiences, especially after criticising the DSM. But realising that we’re connected in this way, means that at least we know what we’re dealing with and that together we can start taking steps in the right direction.
The first and foremost step is that of Acceptance. It’s the only step that allows us to move forward, make progress and grow. The things we lament and struggle with really would’ve gone otherwise if we had the means, power and control to do them differently. But as this is Fate, we do not have much of a choice. We either accept it to the point that we fall in love with it — the ancient Stoics called this Amor Fati — or we proceed to live the rest of our lives hurt, angry, bitter and unfulfilled all the while tediously trying to control what we simply cannot.
The other step is the Realisation that our life and time here on Earth is finite. Despite us desperately believing otherwise, we’re not immortal. Sometimes we’re naïve to think that we deserve a break because we’re tired of being kicked from every angle. So because we’re overwhelmed, we take some time off. Over time we lose focus and our ego kicks in. Taking care of ourselves becomes self-preservation as we sink deeper into the abyss.
Weirdly this isn’t something new that just our current society deals with. The early Stoics being aware of the different temptations Resistance could bring, prevented the above by keeping the ego at bay by remembering that they too would have to die one day (Memento Mori). This thought didn’t only make them haul ass, but also kept them humble.
These two steps are fundamental, especially if we want to continue on with our lives and eventually contribute to something larger than ourselves.
It’s worth noting that I’m not claiming that depression or mental disorders can be tackled this easily with just two steps. I am, however, merely proposing that as humans, we stop looking at adversity the same way we’re accustomed to. Reprogramming our thoughts and behaviours is not an easy task. But it is possible to train ourselves to expose the truth in the thought patterns that accompany our challenges. In doing so, we start noticing them for what they indeed are, and move forward quicker.
Just how important is this goal to you anyway?
Steve Maraboli once wrote the following;
“Cemeteries are full of unfulfilled dreams… countless echoes of ‘could have’ and ‘should have’… countless books unwritten… countless songs unsung… I want to live my life in such a way that when my body is laid to rest, it will be a well needed rest from a life well lived, a song well sung, a book well written, opportunities well explored, and a love well expressed.”
I heard this quote many times, and something about it empowers, yet frightens me at the same time.
See, there’s no way to know for sure that any of us will accomplish significant goals such as these even if we abide by Amor Fati and Memento Mori every single day. And then we’re stuck thinking “what’s the point anyway?”.
Going back to Steve’s quote, aside from its purpose of empowering and frightening, I believe that he wanted to create a sense of urgency in those who read it. An urgent call for us to be more aware and consciously make the decision to lead a life of purpose and meaning rather than being whirled around by society.
He’s daring us to make priorities despite all the Resistance we deal with, and to also determine which activities matter the most.
I happen to have written a bit about these topics. Feel free to read some of my other ramblings. But the point I wanted to reach is that which I arrived at in one of my other articles — here’s a summary;
When you’re meant to do something, when you feel this deep urge to do something, there are a few ways things can play out; you either pursue it, or you resist it.
Most of us choose the latter, though, such a feeling — as it is our purpose — is not easy to get rid of, so it persists at the back of our minds until one day it, sadly, becomes a regret if not acted upon.
I don’t know whether this is ultimately driven by fear or not in your case, it’s up to you to know or figure this out. What I’m sure about is that instead of dealing with just Resistance, you’re also resisting yourself. Impeding, as a result, the connection between who you currently are and who you aspire to be. To that, I keep saying “resist your intrinsic resistance itself because this is what convinces you not to do the things you want”.
Nevertheless, your thoughts, emotions and feelings — no matter how shitty — still reveal something beautiful about you as a person. Be it a commitment to someone or something, you’re capable of empathy — a trait that not everyone has the privilege to say they have.
You might think you’re not capable, but it’s precisely because you have these thoughts that you’re more than capable of achieving them and making a difference.
You are here, you’ve made it this far in spite of circumstances you have difficulty looking past. And you know what? That’s a good thing because it’s the exact point you have to be at, at this point in time.
I mentioned this in my previous article, and I think it’s fitting to end this one on the same note;
“Accept and be at peace with your decisions and sacrifices you make for what IS, even if undesired.”
I invite you to check out these three articles next as I believe that they take the thought further. | https://medium.com/the-ascent/so-you-want-to-change-the-world-578fe8e20b59 | ['Ilsmarie Presilia'] | 2019-10-30 12:16:01.502000+00:00 | ['Self Improvement', 'Life Lessons', 'Self', 'Mental Health', 'Stoicism'] |
It’s Better to Regret Something You’ve Done Than Something You Haven’t | “I made you something. I spent weeks on it. I’m still not happy with it but I’ve erased so many times the paper is about to tear.”
Drawing by Melancholicanna
“Oh wow!” I gushed. “That’s incredible! You really spent some time on this, didn’t you? Very impressive.”
She asked a bunch of questions about the things I wrote. Smart questions like why I took the story down the path I did. Asking about the origin of my more insane ideas.
I answered all of her questions, even though some answers might have been considered inappropriate to tell some one of her age. But the kid was going to die sooner than later so I did. She seemed to have that deep understanding of writing it took me years to develop.
I showed her the outline of my upcoming book and the completed parts. She showed me some things she wrote. Very wholesome.
It seemed like there was something she wanted to ask but was afraid to say. I asked my embarrassing question first.
“You could have wished for anything and they would have tried to make it happen. Why me?”
While still looking me dead in the eye she said.
“I want you to fuck me.”
Image author
Lexi said it firmly and directly. The kid didn’t miss a beat. She wasn’t scared, just waiting for the right time. There is no right time on that one.
“Uh… That’s very flattering and you are a beautiful girl, but that’s going to be a hard no.”
Her face crumbled. “Why not?”
“How old are you?”
“I’ll be 16 soon, maybe. In Ohio that’s the age of consent.”
“I get it. I do. I’m sorry but I can’t. No. The foundation said not to get too close. What you are asking from me would be crossing a line that’s bad for both of us.”
She gave me the sick kid sad eyes. My heart was breaking but I had to be firm. I’m the adult here.
“No! Look, I’m sorry. I would really like to help you, I would. But I’m a public figure and even if I wasn’t, no. You are a beautiful woman, and some day in the futureee…”
Oh shit. Some fucking wordsmith I am.
“Ehhh.. I… I’m going to smoke and pee. But I promise I’ll be back.”
When I looked back, I saw her scrunched-up sour face. Then she started having a coughing fit from her heart breaking. The image of her trying not to cry is forever burned in my mind. But hell nope.
I broke character when I said no. Because Hogan Torah wouldn’t deny someone especially under these circumstances just because the law said it was wrong. But I didn’t say no because of the law, I said no because you don’t have sex with kids. Period.
I’m not going to be arrested and go to jail for being a fake. If I have sex with a minor I will.
Her parents were waiting outside the door for me.
“Did she ask you?” Said her stepdad.
“Huh?”
“What did you say?”
Image Author
“No!” Oh my god. That’s why they picked me! Son of a bitch! I’m an actor playing a character! Though it’s more of a gimmick really.
I am morally ambiguous about damn near everything. But I’m not a murderer and I don’t fuck kids.
Lexi’s parents were both now giving me the face. | https://hogantorah.medium.com/its-better-to-regret-something-you-ve-done-than-something-you-haven-t-5f9f0ad8f9d2 | ['Hogan Torah'] | 2020-11-17 23:22:07.531000+00:00 | ['Humor', 'Death', 'Health', 'Art', 'Fiction'] |
3D modeling for UI Designers | 3D modeling for UI Designers
Part 1 — Creating 3D objects
Recently I’ve had the opportunity to work on an awesome project at HE:labs. It was an interactive isometric map where each building represented a project, and on mouse over we displayed a short looping animation as a preview of what that project was about. I got some really good feedback for this app and the client was really happy with it, so I felt like it could be a good opportunity to explain a really simple approach for creating 3D images and animations for apps using Maya.
If you never ever opened Maya, the interface can be a bit scary. With so many little buttons and options, it’s quite easy to get lost.
This was me, opening Maya for the first time
But I'll try my best to make this tutorial simple, so let's focus on what's important for us. I'm going to split this tutorial into 3 parts. The first one for creating 3D models, the 2nd one for rendering out images and the third part for creating simple 3D animations.
I’m using Maya 2015. The interface tends to change a bit from one version to the other, so if you don’t have Maya installed on your computer, I suggest installing the same version to follow along :)
Understanding the interface
First of all, make sure that the dropdown menu on the top left corner is set to "polygons". Each option of this dropdown unlocks several other options on the top menu, so it's important that it's set to polygons while we're creating 3D objects.
When you change the option on the dropdown, you'll notice that the top menu also changes
On the top menu, if you select Create > Polygon primitives, you'll see a bunch of options for creating basic objects like cubes, spheres and cylinders. Every time that you create a primitive, you'll notice that inside the Channel box/Layer editor tab on the right side of your monitor, a new Input with the name of the primitive will be created ( something like polyCube1 ). Clicking on this name, you'll be able to modify some attributes of the primitive, like width, height and depth.
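If you prefer scripting to clicking, the same primitives can also be created through Maya's Python API (maya.cmds) in the Script Editor. Here is a minimal sketch; the object name and attribute values are just examples, and the exact node names Maya returns may differ in your scene:
import maya.cmds as cmds

# Create a cube primitive; Maya returns the transform and the polyCube input node
transform, cube_node = cmds.polyCube(name='myCube', width=2, height=2, depth=2)

# Tweak the same attributes you would otherwise edit under INPUTS in the Channel Box
cmds.setAttr(cube_node + '.width', 4)
cmds.setAttr(cube_node + '.height', 2)

# Move the cube up so it sits on the grid
cmds.move(0, 1, 0, transform)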
Now that we have created our first primitive, let's explore the main tools for modelling. On the left side of your monitor, you'll notice 3 buttons with primitives and red arrows. Those buttons allow you to Translate ( Move ), Rotate or Scale an object in a Maya scene.
Using the scale tool to change the size of a cube
These tools can also be used for vertices, edges and faces. If you right click on an object, you'll see some options that allow you to change the selection mode between them.
Scaling a face, rotating an edge and moving a vertex
By default, you should see a viewport with 4 different cameras. I like to work using a single panel with a perspective view. If you’d like the same, you can change it by selecting Panels > Layouts > Single Pane.
You should also try holding down alt and middle mouse dragging to move the camera around. If you hold down alt and left drag instead, you’ll rotate the camera. If you want to set the camera to an object, just select the object and press F :)
Modeling an old Factory
Now that we understand a bit better how things work, let's create this factory icon to explore modelling techniques.
The final result should look like this :)
We can start by creating a cube. Select an edge and hold down command ( or ctrl if you're using Windows ) and right click on the cube. You'll see another menu show up. While still holding down the right mouse button, move the mouse to Edge ring utilities and then to Edge ring and split. Inside INPUTS, you can change the split type from relative to multi and select how many divisions you want. We'll use these divisions to create the roof.
Hovering to select can be tricky at first but it makes the flow faster once you get it :)
Now you can select the faces on top of the cube, and extrude them ( edit mesh > face > extrude ) This will allow you to pull those faces out, creating the roof. You should also turn off the option "Keep Faces Together" on the side panel and increase the Offset on the black and yellow menu close to the control arrows. After that, if you delete one edge from each "roof" you'll get a triangle shape.
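For reference, the extrude-and-offset step can be scripted as well. This is only a rough sketch, assuming the cube from earlier; the face indices will differ depending on how you split your own mesh:
import maya.cmds as cmds

# Extrude a top face; keepFacesTogether=False mirrors turning off the
# "Keep Faces Together" option, and offset shrinks the face before it is pulled out
cmds.polyExtrudeFacet('myCube.f[1]', keepFacesTogether=False,
                      offset=0.2, localTranslateZ=0.5)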
To create the door, for example, you can start with a cube, then extrude and offset the front face to create the door trim. You can detach the front face from the door trim by going to Mesh > Separate > Extract. Keeping it separated will allow you to model the door without affecting the trim. To create the metal bars of a rolling door, use the edge ring and split, and then extrude the faces. Once you have one door ready, you can duplicate it by pressing Ctrl + D. | https://medium.com/helabs/3d-modeling-for-ui-designers-3c59a75caad0 | ['Krystal Campioni'] | 2016-07-01 14:29:16.462000+00:00 | ['3d', 'Design', 'UI', '3d Design', 'English'] |
3 Reasons Why You Don’t Get Things Done With Your To-Do List | 3 Reasons Why You Don’t Get Things Done With Your To-Do List
And how to fix them.
Photo by Breakingpic on Pexels
To-do lists are one of the most popular productivity tools ever, and that is for a good reason. They're crazy simple: you write your tasks, and then you check them off after you finish them. But the question is, are they really effective in helping you get your tasks done? Well, the answer depends on who you are asking.
There's really a mixed consensus. There are people who view to-do lists as pointless because they find it much more effective to schedule their time using a calendar. Although these people are not wrong in any way (because what's important is what works for you), there are just ways that make to-do lists better and not at all futile.
On the other hand, there are people who actually use to-do lists and get a ton of stuff done, and they cite reasons on how it works for them. | https://medium.com/the-innovation/3-reasons-why-you-dont-get-things-done-with-your-to-do-list-338f11bc6312 | ['Ronald Mathew'] | 2020-07-13 13:29:43.055000+00:00 | ['To Do List', 'Productivity'] |
Famous Dairy Restaurant | On my very first visit to New York City with my friends Les and Martha Jane, the city was as alive as I’d imagined it, but far more exotic, and I thought I could imagine exotic pretty clearly before entering this world. We had chosen this trip to further bond our relatively new grad school friendship. Les and I met in his first year of a Ph.D program and my first year in a Master’s program at Tennessee. We both had unwisely chosen to take a seminar in Shakespearean comedy. Unwisely not because of the Bard or the genre, but because the professor — a renowned visiting scholar — thought it appropriate to read to us every class day from his own stage history section of the Riverside Shakespeare.
These would have made nice bedtime stories, had we been reclining.
Our only grade that term was the seminar paper that he cheerfully assigned. Fifteen pages on the comedies.
I asked for a meeting with him to get further direction:
“Professor, could you give me a couple of ideas to pursue, because I’m at a loss for a suitable topic?”
He looked at me and smiled.
“Just think about it. I’m sure you’ll do fine,” and then he waved from his desk.
I was dismissed.
I did not do fine on that paper.
I received a “B,” which might sound fine to you, but in a grad program, students are allowed only two “B’s.” Only. Any more, and it’s kaput, finito. All done and over.
I think Les made an “A,” but it was his commiseration for my haplessness that began our friendship.
And so, New York.
My childhood best friend, Jimbo, had moved to the City only a month earlier, so we had a place to stay, sort of. Jimbo shared a two-bedroom apartment on 96th Street with a friend from his undergrad days, Jack Hitt.
THE Jack Hitt.
Sorry Jack, but it was you lounging in your bathrobe, welcoming us and disparaging, with Les, the dulcet sounds of Depeche Mode.
Random memories of that trip include: smoking pot on the building’s roof; seeing a performance by the Alvin Ailey Dancers; attending my first Broadway play or musical, Amadeus, starring Tom Hulce; riding in a cab through Central Park to the East Side to view Man of Iron, one of those trendy Polish films that were the rage as Solidarity reigned; having drinks at Sardis and a hot pastrami sandwich at the Stage Door Deli.
Our quarters were cramped, but we had a ball.
And then Jimbo suggested a place to eat on our last day:
the Famous Dairy Restaurant on West 72nd Street.
As readers of this series know, I grew up eating in Jewish delis. But I had never before eaten in a dairy restaurant, and only vaguely understood what was happening. And what was happening was this:
Check out the menu; savor the aromas; make a decision in your mind as to what you would choose had you been there with the four of us. Only in your mind, too, because the place closed years ago. And that kills me.
What do I wish I had ordered? Well, start with the stuffed cabbage. You know you’ve reached a certain level of maturity when cabbage sounds almost too appealing (and it would have come with a potato pancake!). Or, what about the hot noodle pudding, the pirogies, the kreplach? The Kasha Varnishkhas with mushroom sauce.
I think Jimbo ordered the vegetarian cutlet, and Martha Jane, the blueberry blintzes. I got the cheese blintzes and a side of potato pancakes. We shared each other’s treats and in our fairly cramped table for four, with waiters mumbling and other patrons slurping, we had a fine meal, even though I really felt like a bumpkin in this strange place that was almost my birthright.
That fineness, of course, included Les and his meal, too, and there was a part of me that admired him, and another part that envied him for braving whatever goes into vegetarian liver. My memory says that on the sandwich, there was a healthy, one-inch thick slice of raw white onion. Les ate the whole thing, and I’ve hardly seen before or since one man look so satisfied with his choice and its taste.
So very fulfilling.
But as I intimated at the onset, it was the gift that kept on giving. I don’t mind hearing the random belch or burp. I can even take a bit of…gas. But it wasn’t just when he belched that we smelled the liver and onion. It reeked off of my forever grad school buddy for every single second of our togethered being.
It reeked the entire twelve hour journey that night — twelve because of the snow which ended somewhere in Virginia, but not before it caused us to miss the cutoff from I-95 to I-85. The trip should have taken eight hours, ten with snow. We chose to drive all night, too. That’s what I mean about being young and fairly stupid.
Les ached so badly that he couldn’t drive a mile. And poor Martha Jane had to be the one to share the back seat with him. Les is six feet four, too, so he and back seats don’t coexist well. But that’s where he sat on that entire journey (it was his car anyway), and, of course, his vegetarian liver and onion sandwich sat right alongside with him.
And with us.
We even lowered the windows for a few minutes, even in the blizzard.
Jimbo and I shared the driving, and I remember so well that I brought a cassette of some of our music, a taste of which can be found in the links below:
Lost somewhere in North Carolina, in a car with your friends, listening to various appropriate sounds, or not, the world seems so close, so clear. Jimbo looked at me once,
“This playlist…”
And even Les agreed.
Here’s to us and the Famous Dairy Restaurant. More famous and memorable than we ever would have guessed.
And to car rides after dark. | https://medium.com/one-table-one-world/famous-dairy-restaurant-b93e24aa8eb8 | ['Terry Barr'] | 2020-12-18 19:43:54.287000+00:00 | ['Friendship', 'Deli', 'One Table One World', 'New York', 'Food'] |
Yesterday |  | https://medium.com/afwp/yesterday-556f0a33024b | ['Sean Zhai'] | 2020-10-30 15:31:02.130000+00:00 | ['Poetry', 'Writing', 'Politics', 'Covid 19', 'Philosophy'] |
Learn to Calculate your Portfolio’s Value at Risk | Learn to Calculate Your Portfolio’s Value at Risk
Step by Step Guide to Risk Managing Your Portfolio with Historical VaR and Expected Shortfall
In this article, we are going to learn about risk management and how we can apply it to our equity portfolios. We are going to do that by learning about two risk management metrics, Value at Risk (VaR) and Expected Shortfall (ES) while also going through a step by step guide on how you can build a model to calculate these metrics specifically for your portfolio.
Photo by Nathan Dumlao on Unsplash
What is VaR?
The best way to explain VaR is to pose the question it helps answer:
What is the maximum loss I can expect my portfolio to have with a time horizon X and a certainty of Y%?
In other words, a one-day 99% VaR of $100 means that on 99% of days, my portfolio's one-day loss would be less than $100.
We can essentially calculate VaR from the probability distribution of the portfolio losses.
Visual representation of the portfolio returns probability distribution.
How do you calculate VaR?
There are several different methodologies that one can use to calculate VaR. The three most common ones are:
Historical Simulation
Monte Carlo
Variance Covariance
In this blog, we will only cover Historical Simulation.
What is VaR Historical Simulation?
The Historical Simulation Method entails calculating daily portfolio changes in value to determine the probability distribution of returns. It does that by looking at how your portfolio would have behaved historically.
Once you have your portfolio’s returns or losses, you can calculate within a confidence interval the worst possible outcome.
What are the mechanics of calculating VaR using Historical Simulation?
1. Using historical data, determine your portfolio's value for a number of days (typically around 500)
2. Calculate the % change between each day
3. Using your current portfolio valuation, calculate the monetary impact of the % change
4. Sort your results from the highest loss to most profit
5. Depending on your confidence interval, pick the nth value that corresponds to that percentage — this is your one-day VaR
6. Multiply it by the square root of the number of days you want your time horizon to be, i.e. 5-day VaR = 1-day VaR * sqrt(5) (This is because the returns are assumed to be independently and identically distributed (normal with mean 0)) (Note: I will show you how we can check this at the end of our implementation)
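To make those steps concrete before wiring up real market data, here is a minimal, self-contained sketch of the calculation using placeholder portfolio values (the numbers, the 99% confidence level and the 5-day horizon below are illustrative assumptions, not real data):
import numpy as np

# Step 1: historical portfolio values, oldest to newest (placeholder numbers)
portfolio_values = np.array([100000.0, 100500.0, 99800.0, 101200.0,
                             100900.0, 99500.0, 100700.0])

# Step 2: daily % changes
pct_changes = np.diff(portfolio_values) / portfolio_values[:-1]

# Step 3: monetary impact based on the current (latest) valuation
dollar_changes = portfolio_values[-1] * pct_changes

# Steps 4-5: sort from worst loss to best gain and pick the cut-off observation
confidence = 0.99
sorted_changes = np.sort(dollar_changes)                      # worst losses first
cutoff_index = int(round(len(sorted_changes) * (1 - confidence)))
one_day_var = sorted_changes[cutoff_index]

# Step 6: scale to a longer horizon via the square root of time
five_day_var = one_day_var * np.sqrt(5)

print('1-day VaR:', round(one_day_var, 2))
print('5-day VaR:', round(five_day_var, 2))
With only a handful of observations the cut-off is simply the worst day; with roughly 500 days of history it behaves as described in the steps above.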
What are the limitations of VaR?
As with any metric, there are advantages and disadvantages. VaR is so widely used because it’s straightforward to understand. However, it does come with some drawbacks:
VaR assumes slim tails, i.e. Tail risk is not sufficiently captured.
VaR doesn’t consider black swan events, i.e. things that happen rarely and unexpectedly (unless within your look-up set)
Historical VaR is slow to capturing changing market conditions as it assumes past performance represents future performance.
What is Expected Shortfall (ES)?
Expected Shortfall, is a risk metric that attempts to address one of the drawbacks of VaR. VaR assumes that the risk in the tail-end of the distribution is improbable with a thin tail. However, not surprisingly, in the real world we have seen distributions where the tail is quite fat:
To address that, ES takes the average of the tail.
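In code, the only difference from VaR is that instead of reading off the single cut-off observation, you average everything beyond it. A minimal sketch, using randomly generated placeholder dollar changes rather than real portfolio data:
import numpy as np

rng = np.random.default_rng(0)
# Placeholder daily dollar changes for ~500 historical scenarios (illustrative only)
dollar_changes = rng.normal(loc=0.0, scale=1500.0, size=500)

confidence = 0.99
sorted_changes = np.sort(dollar_changes)                        # worst losses first
tail_size = int(round(len(sorted_changes) * (1 - confidence)))  # 5 observations here

one_day_var = sorted_changes[tail_size]            # the cut-off observation (VaR)
one_day_es = sorted_changes[:tail_size].mean()     # average of the tail (Expected Shortfall)

print('1-day VaR:', round(one_day_var, 2))
print('1-day ES :', round(one_day_es, 2))
The ES figure will always be at least as severe as the VaR figure, because it averages the losses that VaR ignores.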
Calculating the Historical VaR and ES for our portfolio in Python
First up, we need to define our portfolio holdings.
import pandas as pd

data = {'Stocks':['GOOGL', 'TSLA','AAPL'], 'Quantity':[100, 50, 300]} #Define your holdings

# Create a DataFrame of holdings
df = pd.DataFrame(data)
With our holdings defined, we need to source historic prices for each of our stocks that will allow us to value our portfolio (if you need to see different options and sourcing for sourcing historical data, check out my previous blog). In this instance, I am using the Tiingo API, which will result in a pandas dataframe:
from tiingo import TiingoClient

def SourceHistoricPrices():
    if info == 1: print('[INFO] Fetching stock prices for portfolio holdings')
    #Set Up for Tiingo
    config = {}
    config['session'] = True
    config['api_key'] = 'private key'
    client = TiingoClient(config)
    #Create a list of tickers for the API call
    Tickers = []
    i=0
    for ticker in data:
        while i <= len(data):
            Tickers.append(data[ticker][i])
            i=i+1
    if info == 1: print('[INFO] Portfolio Holdings determined as', Tickers)
    if info == 1: print('[INFO] Portfolio Weights determined as', data['Quantity'])
    #Call the API and store the data
    global HistData
    HistData = client.get_dataframe(Tickers, metric_name='close', startDate=dateforNoOfScenarios(today), endDate=today)
    if info == 1: print('[INFO] Fetching stock prices completed.', len(HistData), 'days.')
    return(HistData)
One thing to note here is that we need to define the start and end date for which we require historical data, which will drive the number of historical simulations. This should be a user-defined input. A slight problem with that, however, is that the stock market is not open every day. Hence, we need to calculate the business days involved. Without including a holiday calendar it is quite hard to be precise, but for our purposes, a quick approximation should suffice.
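If you want to sanity-check that approximation, pandas can also count business days directly between two dates. A small sketch (the 730-day look-back window is arbitrary, and this still ignores market holidays):
import datetime
import pandas as pd

end = datetime.date.today() - datetime.timedelta(days=1)
start = end - datetime.timedelta(days=730)   # arbitrary look-back window

business_days = len(pd.bdate_range(start, end))
print(business_days)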
ScenariosNo = 500 #Define the number of scenarios you want to run

today = datetime.date.today() - datetime.timedelta(days=1)

def is_business_day(date):
    return bool(len(pd.bdate_range(date, date)))

def dateforNoOfScenarios(date):
    i=0
    w=0
    while i < ScenariosNo:
        if (is_business_day(today - datetime.timedelta(days = w)) == True):
            i = i+1
            w = w+1
        else:
            w = w+1
            continue
    #print('gotta go back these many business days',i)
    #print('gotta go back these many days',w)
    #remember to add an extra day as percentage difference will leave first value blank (days +1 = scenario numbers)
    return(today - datetime.timedelta(days = w*1.04 + 1)) #4% is an arbitrary number I've calculated the holidays to be in 500days.
Next up, we need to value our portfolio.
def ValuePortfolio():
    HistData['PortValue'] = 0
    i=0
    if info == 1: print('[INFO] Calculating the portfolio value for each day')
    while i <= len(data):
        stock = data['Stocks'][i]
        quantity = data['Quantity'][i]
        HistData['PortValue'] = HistData[stock] * quantity + HistData['PortValue']
        i = i+1
which results in:
So now, we can go ahead and calculate the percentage change:
def CalculateVaR():
if info == 1: print('[INFO] Calculating Daily % Changes')
HistData['Perc_Change'] = HistData['PortValue'].pct_change() #calculating percentage change
Then the portfolio’s valuation change:
HistData['DollarChange'] = HistData.loc[HistData.index.max()]['PortValue'] * HistData['Perc_Change'] #calculate money change based on current valuation
if info == 1: print('[INFO] Picking', round(HistData.loc[HistData.index.max()]['PortValue'],2),' value from ', HistData.index.max().strftime('%Y-%m-%d'), ' as the latest valuation to base the monetary returns')
Then determine the n-th value we need to pick for our confidence interval:
ValueLocForPercentile = round(len(HistData) * (1 - (Percentile / 100)))
if info == 1: print('[INFO] Picking the', ValueLocForPercentile, 'th highest value')
Sort the data and select the value:
global SortedHistData
SortedHistData = HistData.sort_values(by=['DollarChange'])
if info == 1: print('[INFO] Sorting the results by highest max loss')
and finally, calculate our portfolio’s VaR value:
VaR_Result = SortedHistData.iloc[ValueLocForPercentile + 1,len(SortedHistData.columns)-1] * np.sqrt(VarDaysHorizon)
print('The portfolio\'s VaR is:', round(VaR_Result,2))
Resulting in:
Putting it all together gives:
from tiingo import TiingoClient
import pandas as pd, datetime, numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore') #Tiingo API is returning a warning due to an upcoming pandas update

#User Set Up
data = {'Stocks':['GOOGL', 'TSLA','AAPL'], 'Quantity':[100, 50, 300]} #Define your holdings
ScenariosNo = 500 #Define the number of scenarios you want to run
Percentile = 99 #Define your confidence interval
VarDaysHorizon = 1 #Define your time period
info = 1 #1 if you want more info returned by the script

# Create a DataFrame of holdings
df = pd.DataFrame(data)

print('[INFO] Calculating the max amount of money the portfolio will lose within', VarDaysHorizon, 'days', Percentile, 'percent of the time.')

today = datetime.date.today() - datetime.timedelta(days=1)

def is_business_day(date):
    return bool(len(pd.bdate_range(date, date)))

def dateforNoOfScenarios(date):
    i=0
    w=0
    while i < ScenariosNo:
        if (is_business_day(today - datetime.timedelta(days = w)) == True):
            i = i+1
            w = w+1
        else:
            w = w+1
            continue
    #print('gotta go back these many business days',i)
    #print('gotta go back these many days',w)
    #remember to add an extra day (days +1 = scenario numbers)
    return(today - datetime.timedelta(days = w*1.04 + 1)) #4% is an arbitary number i've calculated the holidays to be in 500days.

def SourceHistoricPrices():
    if info == 1: print('[INFO] Fetching stock prices for portfolio holdings')
    #Set Up for Tiingo
    config = {}
    config['session'] = True
    config['api_key'] = 'private key'
    client = TiingoClient(config)
    #Create a list of tickers for the API call
    Tickers = []
    i=0
    for ticker in data:
        while i <= len(data):
            Tickers.append(data[ticker][i])
            i=i+1
    if info == 1: print('[INFO] Portfolio Holdings determined as', Tickers)
    if info == 1: print('[INFO] Portfolio Weights determined as', data['Quantity'])
    #Call the API and store the data
    global HistData
    HistData = client.get_dataframe(Tickers, metric_name='close', startDate=dateforNoOfScenarios(today), endDate=today)
    if info == 1: print('[INFO] Fetching stock prices completed.', len(HistData), 'days.')
    return(HistData)

def ValuePortfolio():
    HistData['PortValue'] = 0
    i=0
    if info == 1: print('[INFO] Calculating the portfolio value for each day')
    while i <= len(data):
        stock = data['Stocks'][i]
        quantity = data['Quantity'][i]
        HistData['PortValue'] = HistData[stock] * quantity + HistData['PortValue']
        i = i+1

def CalculateVaR():
    if info == 1: print('[INFO] Calculating Daily % Changes')
    #calculating percentage change
    HistData['Perc_Change'] = HistData['PortValue'].pct_change()
    #calculate money change based on current valuation
    HistData['DollarChange'] = HistData.loc[HistData.index.max()]['PortValue'] * HistData['Perc_Change']
    if info == 1: print('[INFO] Picking', round(HistData.loc[HistData.index.max()]['PortValue'],2),' value from ', HistData.index.max().strftime('%Y-%m-%d'), ' as the latest valuation to base the monetary returns')
    ValueLocForPercentile = round(len(HistData) * (1 - (Percentile / 100)))
    if info == 1: print('[INFO] Picking the', ValueLocForPercentile, 'th highest value')
    global SortedHistData
    SortedHistData = HistData.sort_values(by=['DollarChange'])
    if info == 1: print('[INFO] Sorting the results by highest max loss')
    VaR_Result = SortedHistData.iloc[ValueLocForPercentile + 1,len(SortedHistData.columns)-1] * np.sqrt(VarDaysHorizon)
    print('The portfolio\'s VaR is:', round(VaR_Result,2))

def CalculateES():
    ValueLocForPercentile = round(len(HistData) * (1 - (Percentile / 100)))
    ES_Result = round(SortedHistData['DollarChange'].head(ValueLocForPercentile).mean(axis=0),2) * np.sqrt(VarDaysHorizon)
    print('The portfolios\'s Expected Shortfall is', ES_Result)

SourceHistoricPrices()
ValuePortfolio()
CalculateVaR()
CalculateES()
Finally, we can also use a histogram to see how our portfolio returns fit with the assumption that we’ve made in point 6 above:
The reason we can extend one-day VaR or ES by multiplying by the square root of days is that the returns are assumed to be independently and identically distributed (normal with mean 0).
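As a quick illustration of that square-root-of-time rule, using a hypothetical one-day figure (plain arithmetic, not output from the code above):
import numpy as np

one_day_var = -2500.0                    # hypothetical one-day VaR in dollars
ten_day_var = one_day_var * np.sqrt(10)  # scale to a ten-day horizon

print(round(ten_day_var, 2))             # approximately -7905.69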
import scipy
import scipy.stats, matplotlib.pyplot as plt

def plotme():
    data1 = HistData['Perc_Change']
    num_bins = 50
    # the histogram of the data
    n, bins, patches = plt.hist(data1, num_bins, density=True, facecolor='green', alpha=0.5)
    # add a 'best fit' line
    sigma = HistData['Perc_Change'].std()
    data2 = scipy.stats.norm.pdf(bins, 0, sigma)
    plt.plot(bins, data2, 'r--')
    plt.xlabel('Percentage Change')
    plt.ylabel('Probability/Frequency')
    # Tweak spacing to prevent clipping of ylabel
    plt.subplots_adjust(left=0.15)
    plt.show()

plotme()
Returning:
We are hence validating our assumption. | https://medium.com/financeexplained/learn-to-calculate-your-portfolios-value-at-risk-e1e2c5c68456 | ['Costas Andreou'] | 2020-05-15 19:32:21.568000+00:00 | ['Risk', 'Fintech', 'Personal Finance', 'Python', 'Stocks'] |
Set Up AWS CLI on Your Mac in 2 Steps | Set Up AWS CLI on Your Mac in 2 Steps
Deploy from your command line, manage your AWS instances, and more
Photo by Pankaj Patel on Unsplash.
Amazon Web Services (AWS) is one of the market leaders in cloud computing. It has lots of services, ranging from infrastructure as a service (IAAS) to platform as a service (PAAS) and some software as a service (SAAS).
AWS services can be accessed in two ways: via the GUI (the AWS management console) and programmatic access via API, CLI, SDK, etc.
For this article, we will be focusing on setting up the CLI interface.
There are two major benefits of using the AWS CLI interface: | https://medium.com/better-programming/set-up-aws-cli-on-your-mac-in-2-steps-78faa3ea8a1e | ['Okoh Anita'] | 2020-11-13 15:38:25.329000+00:00 | ['Programming', 'Cli', 'Aws Cli', 'Software Development', 'AWS'] |
As Israel Turns 71, Here Are Some Reasons Behind Its Economic Success | The State of Israel was established on May 14, 1948. Today, the country turns 71. As far as nations go, Israel is still a teenager. But the country’s economic parameters do not reflect that of one. In all measures, Israel is among the most world’s most developed nations.
GDP per capita (current US$). Data from the World Bank.
Israel’s GDP per capita at the end of 2017 was $40,270, which is higher than the EU average of $33,836. It is also more than all of its immediate Arab neighbours put together. It has a low and stable inflation rate of 1.4 percent and an unemployment rate of 4 percent. Israel is ranked 22 out of 189 countries in the Human Development Index — a metric that captures a country’s performance in three dimensions: life expectancy, education and gross national income per capita. The start-up scene in Israel is unparalleled. It has the highest number of start-ups per capita in the world and the per-capita venture capital investment in 2008 was “2.5 times greater than in the United States, more than 30 times greater than in Europe, 80 times greater than in China, and 350 times greater than in India.”¹
At 71 years of age, isolated from its neighbours and constantly embroiled in border tensions, these statistics are astonishing.
Much of the research done on this subject has provided us with conventional reasons to explain Israel’s success. These reasons include the role of state intervention in the early years, the benefit of foreign aid and the effects of the 1985 Stabilization Plan. But some questions remain unanswered. For example, how did Israel meet the sudden increase in labour demand after the Six Day War and how have cultural factors acted in tandem with government policies to give Israel a unique advantage over other countries with similar policies. This article answers these questions and provides a more comprehensive explanation for the economic success of Israel.
Israel’s economy is best studied in three phases: 1948–1965, 1967–1973 and 1985 — now.
The groundwork for the first phase was laid by the Yishuv, the Jewish community in Palestine before the State of Israel was established. The land was not richly endowed with natural resources and the agricultural economy was abandoned by the Arab population under the Ottoman rule. Despite this, the first two waves of Jewish immigrants in 1882 (known as First Aliyah) and 1904 (Second Aliyah) sought to revive it.² These attempts to settle would have failed, if not for Baron Edmond Rothschild, a member of a rich European banking family, who financed settlements of the First Aliya, and then once again, of the Second Aliya.³
When the state of Israel declared independence in 1948, the economy was guided by the Yishuv’s socialist ideology. The two pillars of socialism in the Yishuv period were Histadrut, the trade union that organised all economic activities of Jewish workers, and kibbutzim, the collective agriculture societies that contributed the lion’s share to the economy in the first phase.⁴ This is different from the state intervention that one would normally associate socialism with. There was direct state intervention as well, which began after independence, but it was more of a necessity for the primitive economy, rather than an ideological stance. Large-scale infrastructure projects required massive funding and could not be carried out by private industries.
Soon after the Declaration of Independence, the new state was left shattered from the 1948 Arab-Israeli War. Many of the enterprises were destroyed and the country was isolated from its neighbours. There was large-scale immigration and the population almost doubled in less than five years. When the government needed to raise money to rebuild the economy it turned towards the Jewish communities abroad for help and received around $750 million in aid from the US and Europe from 1949–1965.⁵ This aid was of significant help in building the newly established state. However, this alone does not explain the 10 percent annual growth rate that Israel enjoyed in this period. The most unlikely stimulus for the growth of Israel was the reparations paid by West Germany to compensate Jewish victims of Nazi crimes. These paid for most of the capital goods that the newly established state needed. In the research published by the Royal Institute of International Affairs titled German Reparations to Israel: The 1952 Treaty and Its Effects, the importance of these reparations is well documented. The reparations accounted for 30 percent of Israel’s budget in currency and 10 percent of her import budget for ten years, and “these percentages, in fact, give only an inadequate idea of the economic benefits.” The author of this article calls the nearly $800 million worth of free capital input “a vital transfusion of blood.”⁶ Despite providing a comprehensive account of the reparations, the research does not go into how the capital was efficiently paired with human capital to result in such rapid development. This missing explanation is provided by the authors of Start-up Nation and will be looked at a later section of this article. With the input of capital from these reparations and the foreign aid, along with the influx of immigrants for labour, Israel was able to build its nascent economy and grow rapidly in the first phase until 1966.
In 1966 the rapid growth came to a halt, partly due to the new policies implemented by the government. In order to control the rising inflation and widening trade deficits, the government tightened credit and reduced the budget. Unfortunately, these policy changes came at a time when major infrastructure projects were still being carried out, leading to a sudden decline in construction and a less than 2 percent growth rate for two years.⁷ The brief recession ended after the Six-Day War when the acquisition of new territories presented opportunities for new development projects. Furthermore, the decision by France to stop supplying arms led to the emergence of Israel’s own defence industry, creating growth in the electronics and metal sectors.⁸ Yossi Vardi, one of Israel’s foremost entrepreneurs, puts it best: “The two real fathers of Israeli hi-tech are the Arab boycott and Charles de Gaulle because they forced on us the need to go and develop an industry.”⁹ But there’s still an element missing here. Whilst new development opportunities emerged in the captured territories, the nation was able to take advantage of this opportunity and grow at 12 percent per year because of an increase in labour supply as well. Immigration was low during this period, so where did labour come from? The answer to this is outlined by economists Ali Kadri and Malcolm MacMillen in their study The Political Economy of Israel’s Demand for Palestinian Labour.
According to their study, the labour demand required to carry out the new construction projects was fulfilled by the Palestinians from the occupied territories of West Bank and Gaza Strip. This labour was not only cheaper than Jewish labour but could also be used as a mechanism to influence the wage rates and employment in the occupied territories. The study remarks that “Jewish labour from the Eastern bloc cannot cope with the hardships of the work carried out by Palestinian labour” and because of Israel’s small labour market the occupied territories have become a “pool of surplus labour.”¹⁰ However, this study does not explain how the labour requirements were fulfilled in instances when Israel barred Palestinian labour in the interest of national security. Israel periodically stopped Palestinian labour from entering Israel during heightened tensions and this would have been a hindrance to any major projects that were carried out during the time. Perhaps, new settlers and Asian labourers were the other sources that continued to help meet the labour requirements.¹¹ Fueled by new opportunities and the supply of cheap labour, Israel’s economy rebounded from the brief recession. This second phase of economic growth, however, lasted merely six years, coming to a halt after the Yom Kippur War in 1973.
After the war began a decade-long period of slow growth and high inflation. The war expenses and the rising energy costs were a catalyst, but the policy of the government to index wages allowed deficits to grow out of control. The government was forced to rely on printing money to cover spending that drove the inflation rate to as high as 450 percent in 1984 and reduced the growth rate to a mere 3 percent.¹² People joked that it was cheaper to take a taxi than a bus because the fare would be cheaper by the end of the ride when the currency would be worthless.¹³ The most pressing concerns required massive reforms in government and public thinking. The interventionist model was beginning to fail. The heavy government influence in the markets was proving to be inefficient in the era after massive infrastructure projects. Finally, in July 1985, the government introduced the Economic Stabilization Plan that led the way to reform the market structure and encouraged private industries. The government slashed the budget deficit, devalued the currency, deregulated financial markets and foreign trade, and most importantly, privatized most of the state-owned companies.¹⁴ This marked the end of a state-led economy in Israel that lasted for over three decades and ushered in the era of a free market.
The new policy changes, along with an influx of skilled and educated Soviet immigrants, led Israel back to the path of rapid expansion for the decade since 1985. The new environment unleashed the entrepreneurial spirit of the country and led to the emergence of one of the world’s most advanced technology industry. Israel’s new economy was able to absorb the shocks of the economic recessions that occurred in 1997, 2001 and 2008 and emerge stronger than most other developed economies.¹⁵ Most economists attribute the current status of Israel’s economy to the implementation of the 1985 plan. Barry Rubin, the author of Israel: An Introduction, does a thorough job comparing the state of the leading industries before and after 1985. The leading industries in Israel, high technology, chemicals and pharmaceuticals, defence, textiles, and energy have all benefitted from the stabilization plan and the subsequent policies that encouraged entrepreneurship. Each of these industries has seen a growth in the number of companies and an increase in exports.¹⁶
While Barry Rubin, like many other researchers, highlights the role the government played in helping establish new start-ups by providing venture capital, the authors of the bestselling book Start-Up Nation, Dan Senor and Saul Singer, provide a different perspective by focusing on the culture and the nature of the people of Israel. Their perspective runs like a thread connecting all the elements that have led to Israel’s economic success, but most importantly, explains the start-up success that this young nation enjoys. The authors summarize that Israel’s success is “a story not just of talent but of tenacity, of insatiable questioning of authority, of determined informality, combined with a unique attitude toward failure, teamwork, mission, risk, and cross-disciplinary creativity.” Senor and Singer describe the entrepreneurial drive of the people, their commitment to an idea and their persistence as the Israeli chutzpah — “gall, brazen nerve, effrontery, incredible ‘guts,’ presumption plus arrogance.” This chutzpah is seen in the everyday life of an Israeli and is a key ingredient to the start-up success, as well as the economic success in the earlier years. Bitzu’ist is another Hebrew word that the authors use to describe the people of Israel. It loosely translates to pragmatic. Bitzu’ists are seen as “crusty, resourceful, impatient, sardonic, effective, not much in need of thought but not much in need of sleep either.” Another intangible feature that drives the people is davka. Davka can be roughly translated to mean “despite” with a “rub their nose in it” twist.¹⁷ This attribute drove the people of Israel to work in factories that were in areas frequently bombed by enemies.
A key ingredient is the role of the military in the lives of the people. Most of the founders of successful start-ups have come from the military. The time spent in the military provides the youth with maturity and experience like none other and the elite units of the military have served as incubators for many of the tech start-ups. Immigration played a major part in Israel’s success as well. Gidi Grinstein, the President of Reut Institute, explains that immigrants are risk takers and are not averse to starting over and as a result “a nation of immigrants is a nation of entrepreneurs.”¹⁸ The government policy to assimilate these immigrants further enabled these people to thrive and contribute to the economy.
The features highlighted by Senor and Singer have shown the cultural and personal attributes that have contributed to Israel’s economic success. Although their book has many stories of how these factors have come to play an important role, these features are mostly intangible and can only be measured by proxy of economic parameters. So, it is difficult to establish a causal relationship between concepts such as chutzpah, bitzu’ism and davka and economic success. But having said that, these features make Israel unique compared to nations with similar economic policies but less economic success.
While this article has focused on research that highlights the economic success of Israel, there are opposing views that have arguments of merit. Perhaps the most supporting evidence for these arguments is the annual growth rate that has been between a low 2 to 4 percent since 2015. Shir Hever, the author of The Political Economy of Israel’s Occupation, says that the macro level indicators published by the Central Bank of Israel hide the widening income inequality that is among the worst in the developed world. Hever also finds Israel’s GDP to be a misleading figure that is only high during wars and crisis when it increases as a result of spending in war industries.¹⁹ This brings into question the sustainability of Israel’s economic success. One factor that researchers across the board agree on is the deteriorating quality of Israel’s education system, which once provided quality human capital, propelling the high-tech industry of Israel. Hever highlights the widening gap between Israel and the other OECD countries and Dan Ben-David, a leading economist in Israel, supports this with statistical evidence from math and science test scores.²⁰ An article published by Wharton Business School also talks about the duality of Israel’s economy. It argues that while high-tech export is flourishing, the domestic economy is not, and this could lead to the downfall of Israel’s economy in the long-term.
Notwithstanding the criticism, Israel’s economic progress will remain a case study for years to come. Born only 71 years ago, Israel has embraced both socialism and capitalism and overcome significant hurdles to stand as one of the world’s most developed economies today. The key factors leading to this success include the foreign aid and reparations received in the early years, the availability of Palestinian labour when needed most, the landmark policy shift in 1985, the immigrants, the role of mandatory military training, and the cultural factors deeply embedded in the Israeli way of life. These elements, identified by different people at different times, all come together in explaining Israel’s past economic success. | https://sarveshmathi.medium.com/as-israel-turns-71-here-are-some-reasons-behind-its-economic-success-8f354fe9cb27 | ['Sarvesh Mathi'] | 2019-05-13 21:01:01.809000+00:00 | ['Startup', 'Israel', 'Culture', 'History', 'Economics']
The Master is a Must | In a certain town a very beautiful young lady suddenly arrived out of the blue. Nobody knew from where she came; her whereabouts were completely unknown. But she was so beautiful, so enchantingly beautiful, that nobody even thought about where she had come from. People gathered together, the whole town gathered — and all the young men almost three hundred young men, wanted to get married to the woman.
The woman said, “Look, I am one and you are three hundred. I can be married only to one, so you do one thing. I will come again tomorrow; I give you twenty-four hours. If one of you can repeat Buddha’s Lotus Sutra, I will marry him.
All the young men rushed to their homes; they didn’t eat, they didn’t sleep, they recited the sutra the whole night, they tried to cram it in. Ten succeeded. The next morning the woman came and those ten people offered to recite. The woman listened. They had succeeded.
She said, “Right, but I am one. How can I marry ten? I will give you twenty-four hours again. The one who can also explain the meaning of the Lotus Sutra I will marry. So you try to understand — because reciting is a simple thing, you are mechanically repeating something and you don’t understand its meaning.”
There was no time at all — only one night — and the Lotus Sutra is a long sutra. But when you are infatuated you can do anything. They rushed back, they tried hard. The next day three persons appeared. They had understood the meaning.
And the woman said, “Again the trouble remains. The number is reduced, but the trouble remains. From three hundred to three is a great improvement, but again I cannot marry three persons, I can marry only one. So, twenty-four hours more. The one who has not only understood it but tasted it too, that person I will marry. So in twenty-four hours try to taste the meaning of it. You are explaining, but this explanation is intellectual. Good, better than yesterday’s, you have some comprehension, but the comprehension is intellectual. I would like to see some meditative taste, some fragrance. I would like to see that the lotus has entered into your presence, that you have become something of the lotus. I would like to smell the fragrance of it. So tomorrow I come again.”
Only one person came, and certainly he had achieved. The woman took him to her house outside the town. The man had never seen the house; it was very beautiful, almost a dreamland. And the parents of the woman were standing at the gate. They received the young man and said, “We are very happy.”
The woman went in and he chitchatted a little with the parents. Then the parents said, “You go. She must be waiting for you. This is her room.” They showed him. He went, he opened the door, but there was nobody there. It was an empty room. But there was a door entering into the garden. So he looked — maybe she has gone into the garden. Yes, she must have gone because on the path there were footprints. So he followed the footprints. He walked almost a mile. The garden ended and now he was standing on the bank of a beautiful river — but the woman was not there. The footprints also disappeared. There were only two shoes, golden shoes, belonging to the woman.
Now he was puzzled. What has happened? He looked back — there was no garden, no house, no parents, nothing. All had disappeared. He looked again. The shoes were gone, the river was gone. All that there was emptiness — and a great laughter.
And he laughed too. He got married.
This is a beautiful Buddhist story. He got married to emptiness, got married to nothingness. This is the marriage for which all the great saints have been searching. This is the moment when you become a bride of Christ or a gopi of Krishna.
But everything disappears — the path, the garden, the house, the woman, even the footprints. Everything disappears. There is just a laughter, a laughter that arises from the very belly of the universe.
But when it happens for the first time, if you have not been led slowly, slowly, you will go mad.
This Buddhist story says that he was led slowly, slowly. The woman was the master. The woman is symbolic of the master. She led him slowly, slowly. First, recite the sutra; second, understand intellectually; third, give a sign that you have lived it. These are the three stages. Then she led him into nothingness.
The master leads you slowly, slowly; makes you by and by ready.
—
From Osho, Tao: The Pathless Path, Vol. 2, Chapter 9 | https://medium.com/devansh-mittal/the-master-is-a-must-7fcd6a45f525 | ['Devansh Mittal'] | 2019-10-14 14:03:07.533000+00:00 | ['Truth', 'Education', 'Spirituality', 'Fiction', 'Psychology'] |
One Important Case Where Trump Was Right About Fake News | The Code Of Ethics
The code of ethics breaks down into four major points.
Seeking the truth and reporting it
Meaning ethical journalism needs to be accurate and fair, with as little bias as possible. It also requires journalists to be “honest and courageous in gathering, reporting, and interpreting information.”
Minimize harm
This aspect is constantly overlooked in the age of social media. Journalists need to weigh how much harm a potential story can cause, and whether it is worth publishing at all. It also requires knowing how to handle sensitive subject matter, such as stories involving victims of sex crimes or juveniles.
Act independently
Journalism is on its own island. You’re not supposed to work with the police or other government authorities. You shouldn’t receive gifts, favors, or any special treatment from anyone (especially if you’re doing a story on said groups). Journalists are to serve the public, and acting in the best interest of the public requires journalism to be its own entity.
Be accountable and transparent
The last requirement of ethical journalism is for journalists to take responsibility for their work while also being as transparent as possible to the public. While some stories are rather straightforward, there are also murkier, more ethically complicated stories that require clear and honest explanations from journalists. This also requires journalists to expose organizations or individuals who aren’t acting within these guidelines.
Data Scientists Are Not Magicians | Why does this happen to data scientists so often?
Photo by Tim Gouw on Unsplash
Even after hearing horror stories from data scientists and analysts, I never believed it was too bad.
As a newcomer in the industry, I thought that a lot of what I was hearing was simply a gross exaggeration of events.
That is…until it happened to me.
My Experience
As a final year computer science student, all of us are required to complete a capstone project.
Since I major in data science, I was keen on taking up a project that involved data analysis and machine learning.
After reading the scope of the project, I had multiple ideas in mind and was pretty excited about moving forward with it. My teammates and I already had a list of questions mapped out for our project supervisor and client.
However, the first meeting with the project supervisor was a complete disaster. Without disclosing too much, here is a brief overview of how the meeting went:
Supervisor: “Client A has a lot of data they are collecting through a variety of channels. They don’t know what to do with the data. Can you do some data analysis on it?”
Student: “Alright, that seems fine. Are there any specific questions they want to answer with this data?”
Supervisor: “Not exactly, maybe just build dashboards and apply machine learning techniques on it.”
After realizing that we were stuck in the “data analyst and rock” situation, my teammates and I decided to just go along with the flow and work with what we had.
Student: “Sure, that seems doable. When can we have access to this data?”
The supervisor explained that it would take some time for us to get access to this data. In fact, we may never get our hands on it at all till the last few weeks of the project submission.
After trying to explain multiple times that it wasn’t possible to start without knowing at least the variables present or the type of data being collected, we gave up.
Based on the details given to us in the project description and sample datasets online, we created a vague proposal and presented it to the supervisor.
Just like the “data analyst and the rock” story, this didn’t go too well either.
Supervisor: “This is great and all, but can you create a generic machine learning model that the client can use?”
What this means: He wanted us to use supervised machine learning techniques to create a machine learning model that would work on data it has never seen before.
We don’t have labels, nor do we have any idea of what the data looks like or what prediction is to be made.
In short, he expected magic.
If one machine learning model could be created and used on any kind of data, and predict anything in the world, machine learning engineers wouldn’t have jobs. | https://towardsdatascience.com/data-scientists-are-not-magicians-130c3cdcdbeb | ['Natassha Selvaraj'] | 2020-09-24 21:54:58.853000+00:00 | ['Machine Learning', 'Technology', 'Artificial Intelligence', 'Data Science', 'Programming'] |
An Alternative View on Transsexuality | Having lived in New York’s Greenwich Village for almost 50 years — and sold advertising to the Big Apple’s escorts of all genders and orientations for 20 of those years, I am certainly not a clueless heterosexual oblivious to today’s alternative sexualities and fluidities. Not only have some of my best friends been gay — some have been transgendered as well!
I know the initial theory on transsexuality which espouses that some individuals are born identifying as the other gender and are thus locked in the wrong body. It’s not that I dispute the theory. But I’ve known many gay men who appeared and acted more womanly than half the shemales I knew. So exactly what is the difference between a gay man who’s content to live his life as a gay man versus a transsexual who yearns to change her gender?
This is complicated. There is so much penumbral activity in the community it’s difficult to decipher the essence of the situation. Let me give you a quick example.
Brothels in New York are sublimely progressive. Not only do you have your female whorehouses. But you can also visit a transsexual bordello — and even a place where both sit on the couch side by side awaiting your choice of companions.
This can lead to some odd pairings. Many years ago while working for a sex magazine, I had a “client,” a madam who ran a small brothel that featured females, males, and trannies. One day as I entered to collect the ad money, there was a distinct air of unease among the residents. And the madam was looking very agitated.
“So what gives?” I asked, concerned about the boss (who was as much a friend as a customer). The answer: “Apple has a crush on Rachel. But Rachel doesn’t want her.”
This was not the first time I’d heard of one person falling for another who wasn’t interested. But Apple was a shemale! And falling for a woman? I expressed my confusion with the situation when another transsexual in the house overheard and duly informed me out loud — and with disdain — “there’s such a thing as a transsexual lesbian!”
It was about then that my head exploded. I get the old “different strokes for different folks” ethos. But it was difficult to fathom (and still is) why a man would want to become a shemale so she could then fall in love with other females. Wouldn’t it just be easier to stay a heterosexual man? I mean…what’s the point?
Then there was the reality that a small but significant percentage of my shemale advertisers would book couples and have sex with a man’s wife while he watched. And that doesn’t sound all that out of the mainstream to me. Let’s see. Human born a male. Changes to female…but can still get and maintain an erection for a female?
Ok! So back to the issue. What’s my alternative view — beyond these mind-boggling anecdotes? What I found with most of my transsexual clients and friends was one thing they shared in common more than anything else. They wanted to attract a straight man!
And here was their MO: First, become as “passable” as possible via endless surgeries and procedures until they look better than half their female counterparts. Once having achieved that goal, the girls could then attract straight men who don’t realize that the gorgeous woman who so hypnotizes him is actually a transsexual.
Then there’s the flirting and courtship phase during which the shemale teases her pursuer to the point at which he’s half-crazy with lust. She might even offer that she’s on her period but would love to blow him until she’s ready for sex.
The hope is that by the time the shemale is ready to reveal the truth, the “straight” man is willing to overlook the surprise and venture forth regardless. After all…she has an orifice for him to fill. And as we all know, a significant percentage of men crave filling that opening. So what’s the big deal?
More often than you think, this actually works, and the shemale “turns” the straight guy. Mission accomplished. She has now entered a relationship with a man who’s essentially straight — but enlightened enough to clear a small (or maybe large) hurdle. You get the idea. Again…the shemale doesn’t want a gay man. She wants a straight man. And accomplishing that goal is a lot easier for a tranny than it is for a gay man.
Conversely, a gay man, no matter how flamboyant and feminine, is not an individual who craves a straight counterpart. He’s perfectly happy to partner with another gay man. And thus, has no need to change his gender to score a straight guy.
That gender change costs a fortune. So why bother — especially when gay men are what appeal to him, and making that change would effectively eliminate a man of that sexual orientation.
So yes, I understand all about a man born in a woman’s body and vice versa. But I don’t think that’s all of it. Transsexuality is a complicated issue — and probably one better accepted than analyzed. I dealt with all manner of genders, orientations, and nationalities while selling all those “bodywork” ads. And I counted trannies as no better or worse than any other. Hey! They advertised and paid their bills. Who am I to judge on any other criteria?
More about transsexuals: | https://medium.com/everything-you-wanted-to-know-about-escorts-but/an-alternative-view-on-transsexuality-f0dc94f338d4 | ['William', 'Dollar Bill'] | 2020-11-09 18:08:24.939000+00:00 | ['Life', 'Transgender', 'Psychology', 'Culture', 'Sex'] |
Books That Made 2020 Less Terrible | Because, it’s been pretty hard, right?
Photo by Jason Leung on Unsplash
2020 has been a pretty shitty year for a lot of us, I’d go as far as saying that everybody in some vein has been affected by the devastating effects of a global pandemic.
A lot of people have tried to see the good in adversity, which I can highly respect but struggle to empathise with.
Being someone who is naturally a realist (and often, a pessimist) a glass half empty approach is how I viewed the most part of 2020. However, one thing I can agree — is that it’s ignited an old flame of mine that was slowly burning out: reading.
At the beginning of 2020, I had an unnecessary obsession with Instagram, and although I can blame it on the fact that I was backpacking and wanted to capture each moment — I do think being so engrossed in technology stops you from feeling present in the moment.
When lockdown struck, and I was forced to fly home from Malaysia, I became increasingly anxious and worried about the future, like many others. This stress turned me into somewhat of an introvert, for the first time in my life.
The last time I’d felt like this was probably as a child/teenager, whilst I was trying to navigate being a young adult. Reading has always been a huge part of my life, but it’s only really in the last year or two that I’ve fallen back in love with it.
This year, I read about fifteen books in total, all varying in terms of genre, length and subject matter. Although the below books weren’t released in 2020, they were the books that gave me a glass half full approach and made 2020 feel less depressing.
So, without further ado — here are the books that made 2020 less terrible!
Everything I Know About Love — Dolly Alderton
After being recommended this book by one of my girlfriends nearly two years ago, I bought it and immediately fell in love with its pace and how relatable it was, given Dolly’s age and experiences. With Dolly only being five years older than me, there were a lot of synergies with growing up in the boom of the internet (yes, I’m that old that dial-up internet worked through a phone) and the world of MySpace and MSN.
It felt very nostalgic, and a window into Dolly’s life and her experiences as a woman growing up in London. Although I left London at 18 to go and study, I still consider myself a Londoner at heart and felt right in the midst of this book the whole way through. It made me laugh, cry and reflect on the importance of female friendships — which this book encapsulates beautifully. Strangely, it was a book that helped me to feel present despite what was going on around the world.
My Sister The Serial Killer — Oyinkan Braithwaite
This is a book that definitely leans into dark humor and taboo topics, and you become so invested in the protagonist and her sister! Set in Nigeria, what I loved about this book isn’t just the hilarity of it, and beautifully short chapters (great if you have a short attention span like me) — is that it isn’t set in the classic locations you always get with every book.
There’s something about learning about a new culture or location that appeals to me, and although there’s a nostalgic familiarity with reading something set in London — somewhere new is always something that I love when reading. I wish I found this book sooner, and I’m hoping that Braithwaite releases another book soon!
Crazy Rich Asians — Kevin Kwan
I committed the unthinkable crime last year, and that was watching the film before the book. Although this is suicide amongst avid readers, hear me out — I didn’t realise that it was a book until after I watched the film. Not only did I fall in love with the film, but I honestly think the book is better and juicier than the movie.
After visiting Singapore in March 2020 (shortly before I had to fly home) I fell in love with everything there. The food, the grandness of it all and of course, the glamour. This book had me in stitches, and is definitely one of my favourite books of all time. The film is also brilliant — and makes you wish that you were in every single scene!
Before The Coffee Gets Cold — Toshikazu Kawaguchi
After walking into Waterstones in the summer, I randomly decided to pick up the first book that I saw, which led me to Before the Coffee Gets Cold. I had no idea what to expect as I didn’t even read the blurb — instead I just purchased it and went into it with an open mind.
Translated from Japanese, the story is (in essence) about time travel, and that’s the only spoiler I’m going to give! It’s a short book with four interlinking stories, and I really found it a beautiful book that evoked emotions that I wasn’t expecting. I planned to go to Japan in August for my 25th birthday, so reading this book was the next best thing!
Overall, beautiful stories — and I hope to read more from this author when his books are translated!
Conclusion
Although I read a lot of heavy books this year, the four mentioned brought me the most joy, and ultimately helped me to get through some pretty tough months this year.
I’d love to hear which books should be in my “to be read pile” for 2021 — ideally ones that aren’t hyped too much as they usually end up being a bit disappointing… | https://medium.com/curious/books-that-made-2020-less-terrible-d53d5b67a6e2 | ['Claire Stapley'] | 2020-12-17 22:08:35.836000+00:00 | ['Reading', 'Books', '2020'] |
Shots Fired: Brands & Clap Back Culture | Sassy burgers and deodorant are in a war for your love
“Delete your account.”
“Turn your hat around. You aren’t Bart Simpson and this isn’t 1997.”
“No, but your opinion is (trash) though.”
“Oh look…another Twitch streamer we haven’t heard of.”
Anger can often seem like the tone of the internet. “Never read the comment section” is uttered as if a digital-age proverb. What made these fiery retorts notable is that they weren’t from a salty no-name troll, but from the maker of sea-salt fries, the international fast-food chain, Wendy’s. In 2016 the brand’s Twitter account took on a decidedly off-brand, off-color approach to trolls — they trolled them back. People loved it. Since then, many-a-brand has joined the skirmish, and brand feuds and customer-directed clap backs have become news-worthy beyond the headlines of ad trades.
Brands today are in a desperate fight for your love, throwing shade for attention. We are in a new era of brand jockeying: the participation in clap back culture.
Brands have engaged in feuding for as long as they’ve had competitors. Take, for example, our national entanglement in the Cola Wars in the late 70s to 90s, which drafted battalions of newly-chubby Americans into its long and bitter fight. Coincidentally, the end of the Cola Wars was also when the first known usage of the term “clap back” appeared, in 1990. For the less on fleek among us, clap back refers to a “a quick, sharp, and effective response to criticism,” often deployed on internet haters. As the internet has grown louder, so has the resounding clap.
The year Twitter hit mainstream, 2012, was the same year we saw the first of the online brand clap backs, between Taco Bell and Old Spice. The traction from the exchange opened the comeback-floodgates, as we collectively cheered cookies picking fights with movie theaters and cellphones insulting the size of a customer’s manhood. As customers engaged with brands in less than polite terms — clearly looking for a reaction — well, they got a comeback, too. Insulting rude customers would be unthinkable IRL (in real life), but is simply the law of the jungle on social media.
The tempo of social media — the immediacy, casual tone, insider internet language, and the expectation of broadcasting whatever trivial thought was on your mind — forced brands to contend with a new aspect of their identity. No one wanted to hear from a corporate-sounding brand talking about themselves from a script. Hucksterism isn’t content, so brands needed to be entertaining, responsive, and real. This revelation made brands evaluate the need for a more human tone; that is, a more imperfect tone. Brands participating in social media had to integrate into culture’s topic and nature of conversation, not the other way around…And the general nature of that conversation wasn’t always polite.
In a crowded space, brands resorted to tactics used by attention-starved denizens of social media everywhere — they behaved badly. In hot pursuit of “engagement metrics” brands took note of the cult of personality around unstable, norm-defying personas. Hungry brands followed the drama like blood in the water. People piling on comments like children in a school yard yelling “Fight! Fight!” was metric gold, and became further encouragement for brands to keep the clap backs coming.
An unintended consequence of feeding the comeback beast was that in doing so, brands have started to blur their identifiable edges and lose their established brand voice. The tone of social media can often feel like a young (arguably white) male by default, and as brands try to perfect the choice comeback — terse, informal, and aided with timely memes — they start to sound the same; like a pissy juvenile in a dark bedroom. Those Wendy’s quotes at the top of this article could have well been from any troll…not what you want when establishing your unique brand.
Advertising occupies a strange space; we both shape culture and reflect it. We’re swept up in the zeitgeist of the day, where bullying and increasingly inflammatory remarks are normal parts of conversation. People with opposing views are dummies, news is what we read, and all else, fake. Every day we choose sides, and sharp criticism flows ceaselessly about the Other.
Coming at the Other isn’t without consequences. Brands have the power to legitimize and validate through their large footprint, spend, and impact on mainstream conversations. To fuel a manufactured clap back culture is to give credibility to attention-seeking public confrontation; even by the standards of a large corporation with an image to maintain, an aggressive legal team, and the advisement of brand strategists such as myself at their back. Brands are giving meanness for affection merit.
This next generation seems to have internalized this lesson. In my research with Generation Z, a group of high-school-aged respondents in Oklahoma extolled the importance of society being more accepting of others. Five minutes later, the exact same group all said they had fake social media profiles, in order to anonymously bully peers. Like a brand that adorns their stores with signs about Having it your Way, spends countless millions on Customer Service, and then turns around and cuts you down on social media, this next generation is learning that having multiple brand identities — one polished, one Mr. Hyde — is normal. It’s how you get love and fame. This research also points to Finstagram-loving Gen Z as compartmentalizing their identities, which they treat as a suite of personal brands…which only makes one wonder what role these brands have played in Z doing so.
Marketers aim to give consumers what they want. With the amount of sharp commentary and provocation of Others online, it’s natural to assume that means comebacks and a feed full of sass. But it appears Americans are looking for reprieve from the negative tide. On Reddit, r/MadeMeSmile, r/Aww, /HumansBeingBros and r/WholesomeMemes content regularly hit the front page, and feel good stories circulate the internet to live news reports with mind-boggling speed. CEB Gartner, who tracks changing American values in their Values and Lifestyle Survey, have found that among the top 10 values for 2019 are Courtesy (5) and Happiness (10). The values that have moved up the most are values around serenity, safety, and relaxation. Clap backs are in opposition to all of these values.
There is certainly good to be found in brands exploring complex emotions and diversity in their identity, including the occasional anger. And seeing a witty and cutting responses will always be a guilty pleasure. However, brands need to take another beat before they engage in competitor (or customer) beat-downs just for cheap likes. Not all brands can or should dilute their personality by assuming the identity of a faceless, sophomoric troll. The kids are watching and internalizing our salty, fractured-identity endorsements. And, perhaps most of all, America is looking for courtesy and safety in their lives, not ceaseless and undiscerning anger.
And the customer is always right. | https://jesswatts.medium.com/shots-fired-brands-clap-back-culture-62aa17926b9 | ['Jess Watts'] | 2019-03-01 02:21:34.752000+00:00 | ['Marketing', 'Internet Culture', 'Bullying', 'Twitter', 'Social Media'] |
SaaS Metrics: Benchmarking Your Churn Rates | Churn is probably one of the most documented SaaS metrics. The aim of this article is not to define and speak about the general concept of churn (you can find plenty of outstanding resources about that) but to analyse 4 SaaS benchmarks and see what they say about real churn figures.
The 4 benchmarks we’ll analyse are: Pacific Crest Survey, Open startups, Totango annual SaaS metrics report and Groove SaaS small business survey (if you know more of them please => @clemnt)
But before digging into the actual numbers I would like to stress on a very important point: before you compare churn numbers you have to clearly define which churn you’re talking about.
User or revenue churn? Annual or monthly? Gross or net?
When you see a churn figure you have to ask yourself whether:
It’s revenue or user churn
It’s annual or monthly churn
It’s gross or net churn
Revenue or user: pretty straightforward, in one case you measure the revenue lost from a cohort during a specific time range and in the other the number of paying customers lost from a cohort during that period of time.
Annual or monthly: meaning the churn is measured over a year or a month. Since both are very often used, here is a table with monthly churn and its annualised equivalent. I just use the formula: Annual Churn = 1 - (1 - monthly churn)^12.
Be careful though as you shouldn’t mix monthly plans churn with annual plans churn (different beasts).
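Here is a small sketch that applies the formula to a few illustrative monthly churn rates (plus the inverse conversion, annual to monthly), standing in for the table mentioned above:
def monthly_to_annual(monthly):
    # Annual Churn = 1 - (1 - monthly churn)^12
    return 1 - (1 - monthly) ** 12

def annual_to_monthly(annual):
    # inverse of the formula above
    return 1 - (1 - annual) ** (1 / 12)

for m in (0.01, 0.02, 0.03, 0.05, 0.07):
    print(round(m * 100, 1), '% monthly ~', round(monthly_to_annual(m) * 100, 1), '% annual')

print('8% annual ~', round(annual_to_monthly(0.08) * 100, 2), '% monthly')  # the conversion used in the benchmark sections below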
Gross or Net churn: probably the trickiest of the 3. Gross revenue churn doesn’t take into account revenue expansion (= some users of your cohort can actually end up paying more at the end than at the beginning of the period considered, because they have upgraded their account) while net churn does.
So it’s totally possible to have a 6% monthly gross churn and a 2% monthly net churn. So if you see a 2% revenue churn alone, without context, it’s hard to compare it with other figures. Here is a good post on Chartmogul explaining it. And yes with net churn it’s possible to have a negative churn rate.
Now that we’ve made that clear let’s start.
TL;DR
The “general observations about customer account churn rates by segment” that Tomasz Tunguz shared are confirmed by ¾ of the benchmarks analysed. We cannot draw any meaningful conclusion from Totango survey, hence the ¾.
From Tomasz Tunguz
Couple of comments after receiving some messages / DM on Twitter:
of course the industry you are in can play a role. Especially in industries where the competition is not as fierce as in others and where customers have less choice and as a consequence cannot change easily, your churn rate can be lower.
that said there is still a very strong correlation between the segment you’re going after and churn. Higher ACV generally means real sales / field sales, account managers, longer term contracts etc… and logically lower churn.
I’ve received a couple of messages from startups saying that they are on the SMB segment (with ACV <$1k) and have monthly user churn between 1% and 1.5%. So yes it’s totally possible, they are doing an awesome job :-) The 3% - 7% range is where most of the companies targeting this segment are, but you can be far lower (and that’s awesome).
Pacific Crest Survey
About this benchmark
Conducted by David Skok and Matrix Partners, the Pacific Crest Survey is the most complete and detailed benchmark in the SaaS industry. You won’t find better than this currently (publicly available, at least). A must read.
Survey participants
The benchmark is based on the answers from 306 companies:
$4MM median revenues, but nearly 50 companies with >$25MM and 80 with <$1MM
46 median full-time employees
284 median customers, with 25% having >1K customers
$21K median annual contract value (ACV), with 30% below $5K and 20% above $100K
Good mix of sales channels including field sales, inside sales and mixed distribution models
Participation from around the world, though primarily U.S.
So we’re speaking mostly about companies selling software to the enterprise and mid-market segments here, and probably fewer to SMB customers (especially since, in the results below, companies with less than $2.5M of revenue are excluded).
Results
The 2 graphs above are very interesting to get a bit more details:
the churn rate from customers with an ACV (annual contract value) < $1K is much higher than customers with an ACV > $1K
ACV < $1K generally corresponds to SaaS companies with a “self serve - low price” model where there’s generally no direct sales / few inside sales.
Comments
Churn figures:
Median annual gross dollar churn = 6% ~ 0.51% monthly
Median annual customer churn = 8% ~ 0.69% monthly
Knowing that:
it excludes companies with revenue less than $2.5M
the median annual contract value (ACV) is $21K
the median number of customers per company is 284
We can infer that these churn results are representative of SaaS companies targeting mid-size / enterprise customers, generally with field sales / inside sales.
Open startups
About this benchmark
I’ve basically collected the churn data from the “open” SaaS companies (except Outreach Signals which is more an online agency than a SaaS).
Benchmark Participants
median MRR = $18.9k, average MRR = $72.8k ( completely distorted by Buffer revenue)
median ACV = $552, average ACV = $941
These are mainly SaaS companies targeting SMBs, with a much lower price point and ACV, and they are also more early-stage compared to the Pacific Crest survey ones.
Results
median monthly gross revenue churn = 11.1% ~ 75% annual (huge)
median monthly user churn = 5.4% ~ 46% annual
We can also notice that revenue churn is higher than customer churn for these companies; it’s the opposite for the Pacific Crest Survey ones.
Conclusion
These numbers are interesting because they confirm that at an early stage and with a lower price point, SaaS companies tend to have higher churn rates (user and revenue) compared to enterprise / mid-market ones, which was expected.
Totango Annual SaaS Survey
About this benchmark
Totango is a customer success software (one of its aims is to lower churn) which runs a survey every year among SaaS companies to benchmark several metrics.
Survey Participants
“500 SaaS professionals”, whose revenue distribution looks like this:
60% of participants have an ARR < 10M.
Results
A first problem is that the survey doesn’t say clearly whether we’re talking about gross or net revenue churn. Since they don’t have negative churn on the graph I infer that it’s gross churn.
So basically:
34% of companies have an annual gross revenue churn < 5% (= 0.43% monthly)
31% of companies have an annual gross revenue churn between 5% and 10% (= 0.43% — 0.87% monthly)
31% of the companies have an annual gross revenue churn > 15% (> 1.3% monthly)
Conclusion
Unfortunately, we don’t have enough information about the precise profile of these companies or the distribution of churn by ACV to draw real conclusions.
Groove Small Business Conversion Survey
About this benchmark
In 2013 Groove, a customer support software, ran a survey to benchmark several SaaS metrics among small business SaaS companies.
Survey participants
“712 respondents who have reached Product/Market Fit, have been in business for at least 6 months and have at least $1,000 (but less than $500K) in monthly recurring revenue.”
Average MRR = $10.5K
average ACV (for B2B participants) = $1,680
Results
Again the problem is that Groove doesn’t specify which churn they are talking about (revenue, net, gross or user churn). Especially since, in their survey, the question asked was quite open: “What is your churn rate?”.
That being said we can infer that it’s monthly user churn (from the context).
3.2% monthly user churn (= 32% annual)
Conclusion
The participants of this survey are very similar to the ones from the open startups (around the same ACV and MRR), though with a lower monthly user churn: 3.2%, compared to the 5.4% of the open startups.
But again we can only assume that it’s user churn and we don’t really know the methodology used by each company to calculate their churn. So these results should be taken cautiously.
So finally, what should we think of these numbers?
Tomasz Tunguz shared a table that shows his “general observations about customer account churn rates by segment”
This is also what I regularly see among SaaS startups, so let’s compare that to our 4 benchmarks.
Pacific Crest Survey:
Mid-Market / Enterprise segment
Median annual customer churn = 8% ~ 0.69% monthly
That’s in line with the 0.5–1% monthly and 6% — 10% customer churn from the table above.
Open Startups
SMB segment
Median annual customer churn = 46% ~ 5.4% monthly
That’s in line with the 3–7% monthly and 31% — 58% customer churn from the table above.
Totango
We cannot draw meaningful conclusion here.
Groove Survey
Big warning, as we don’t have enough context and info on what churn was measured and how it was calculated. If we assume it’s consistent and refers to monthly user churn, then:
SMB segment
Median annual customer churn 32% ~ 3.2% monthly
That’s in line with the 3–7% monthly and 31% — 58% customer churn from the table above.
So if you are a pre-P/M fit and early-stage startup (less than 1-2 years with paying customers):
it’s normal if your churn fluctuates a lot (even from month to month)
it’s likely that your churn rates are higher than the ones above
If you have reached P/M fit and that you are running your business for more than 2 years and: | https://medium.com/point-nine-news/saas-metrics-benchmarking-your-churn-rates-e9ae2c7129b5 | ['Clement Vouillon'] | 2016-06-13 08:30:14.597000+00:00 | ['Churn', 'Startup', 'SaaS', 'Practical Guides', 'Metrics'] |
On Being Invisible | Photo by Stefano Pollio on Unsplash
On being invisible: Have you felt this?
Several years ago, I was in a restaurant in Branson, Missouri with my beautiful, vibrant, and 25 years younger, friend. She is one of those women who attract men like butterflies to their favorite Black-Eyed Susan flowers.
This was a common occurrence for us…men flock to her, barely noticing that I am at the table. Pheromones, I’m told. They affect the behavior of the opposite sex. My friend has lots of pheromones. I, apparently, do not.
One day I attended a book signing. The author is handsome and young-ish. Most of the women in line were smiley, flirty, and obvious…I remember acting and Being like that. The author was smiley, flirty, and very talkative with those younger women. When it was my turn, he took my book, asked my name (almost without looking up), signed his book and handed it back to me. Next.
Wow! That was fun…not!
You know, I’ve done this. When I was younger and hiring employees for my company, well, back then it was still OK to ask a candidate’s age. I’d look through the pile of applications and almost immediately dismiss anyone whose age exceeded 40. In my opinion, they were most likely not trainable, set in their ways, and overly opinionated.
Of course, I senselessly overlooked potentially tremendous employees. Back then, they were invisible to me. Why? Because, to me, they were old.
I was invisible when I was with my pheromone laden friend and to the handsome young author. Why? Because I am old.
Have you experienced being invisible? I know I’m not the only one who has behaved this way and who has experienced not being seen. Although older men are typically held in higher esteem than older women, I suspect older men experience moments of being invisible, too.
Shockingly (because it “feels like” people born in 1969 should be about 25!), people born in 1969 and earlier are now 50-plus. According to Google, with research from AARP, there are 108.7 million people who are 50-plus. This includes 76.4 million boomers (born 1946–64), compared with 49 million Gen-Xers (born in the 1960s and 70s) and 82 million millennials (born between 1981 and 1996). Moreover, the population of people 50-plus will continue to grow over the next decade to the tune of 19 million, versus growth of only 6 million for the 18–49-year-old population.
Marketing media recognizes the buying power of our 50-plus age group. More and more companies are using older actors to hawk their goods and services. We see gray hairs on television, magazines, and online; they are far from invisible. Apparently, we 50-plus folks find them to be honest and believable. Is it that old know me-like me-trust me factor? Probably.
Older models are the new rage. It makes sense, doesn’t it? I’m 70. I don’t want to purchase clothes that look fabulous on a 20-something woman. Take a look at China Machado: she was born in 1928. Carmen Dell’Orefice appeared on Vogue when she was 15…she is modeling again at 85. Twiggy is now in her 60s and still working. These women never expected to be working in their industry as senior citizens. It was unheard of for (practically) anyone over 25 to still be modeling. Not today. Older models speak to the 108.7 million people who are 50-plus.
Now, let’s consider the senior citizens we see in advertising. What do they have in common? Almost every one of them presents as fabulously confident and each one is good looking. They sport expensive haircuts. They are in great physical shape, thin and tone. Maybe the photos are airbrushed, maybe the camera angles are ideal, but I sure have never seen wrinkly arms like mine…or a pot belly, so common to older men and post-menopausal women.
Even the older, not famous actors used in advertising, the ones who are supposed to look like us and our neighbors…their teeth are perfect. Their makeup hides the reality of spotted and wrinkled skin. Their hair is just slightly not perfect. Right?
Those 50-plus people who are featured are our age but they don’t look like us…not in real-life.
“So, what’s your point, Mickie?”, you may be thinking. Is this the green monster of jealousy raising its ugly head? Are you just venting about being an older, sometimes invisible woman?
No.
My point is that even though we are the largest population segment, as we age we become more and more invisible. In spite of the power of AARP and the use of Boomers in advertising, in real life, there are some people who don’t see us.
As a 20-something, those people over 40 were invisible to me.
Now it’s my turn to be the one not seen.
It’s human nature in the United States, I guess. The elderly are not valued here the way we are in many other cultures. Families are spread all across the Country. Many children today don’t know their grandparents; perhaps they have no interaction with grandparent-age adults.
When people are undervalued in their families and cultures, their personal value is eroded. As we accept that lower position in society, we become more invisible.
When we are undervalued, we don’t shine. You know, that vibrant energy that emanates from excited, focused, directed people. You’ve seen it…you’ve probably experienced that level of happiness yourself.
Many older people have allowed that Light to become dull. Many seniors don’t smile a lot…because they are ashamed of their teeth (which really does not matter) or because they have lost that inner glow that smiling creates and shares with others.
A lot of 50-plus people have retired and chosen to sit in a comfy chair rather than continue interacting in their communities, sharing their gifts through volunteering, or finally living their dream of entrepreneurship. Inactivity, isolation, and removing ourselves from personal interaction allow our Light to become dull…and we allow ourselves to become invisible.
My suggestions to avoid being invisible?
· Choose to smile a lot…smiling is the fastest way to increase your positive energy level and smiling is contagious. Practice smiling on the telephone…people can feel the energy of a smile in a telephone conversation.
· Join a yoga and/or a meditation group … you’ll find a free group for seniors, or a low-cost offer near you. OR, consider Googling yoga and meditation and begin home practice until you’re ready to join a group. When you choose to be active, you’ll feel much better about yourself and your Light will shine brighter.
· Make a list of your strengths, business and personal. Find a volunteer opportunity to share all that experience and positive input you have to offer. Or, find a part-time job.
· Wear only clothes that feel good on you. If you have lost or gained weight, choose clothes that fit you now. Repair some zippers, replace buttons, and pull out that iron that’s gathered dust. Remember that book, Dress for Success? There is still a lot to be said for presenting yourself the best you can. When we look good, we feel good. That does not mean wearing designer clothes. It means being You. Dress to present your authentic self.
· Every evening, write a list of five things you are grateful for on that day. Could be a beautiful sunrise or sunset; watching birds in your yard; talking on the phone with a grandchild; baking cookies for your neighbor, a conversation with a friend, enjoying whipped cream on your morning coffee…simple things that we pay attention to and savor.
Recap: Decide to smile more, become involved, choose an activity you enjoy, dress as well and comfortably as you are able, keep a gratitude journal, and let your Light shine.
Yes, I did all of this. And I was still invisible around my high-pheromone friend and to the author. Yes, not being seen…feeling less-than…hurts and is undermining. One thing I’ve learned in the past few years is this brilliant philosophy: Other people’s opinion of me is none of my business.
The more I smile, continue to explore and learn, share my gifts, do things I enjoy, stay involved in my community and continue creating my authentic life, the more brightly my Light shines and the less likely I will be identified as an invisible old woman.
I’m learning to embrace my journey into Being 70 and remaining visible. Thank you for being on this Journey with me! | https://mickiezada7.medium.com/on-being-invisible-492645476bf6 | ['Mickie Zada'] | 2019-07-16 14:53:02.671000+00:00 | ['Personal Growth', 'Seniors', 'Being Invisible', 'Aging', 'Self-awareness'] |
Build Habit-Forming Products with Hook Model & Lean Startup | Trigger
A trigger, whether internal or external, leads to an action which is undertaken whether or not the user is aware of the impulse. External triggers such as a notification, an app icon on a smart phone or an e-mail sent to the user can evoke certain repeated behaviours until a habit is formed. Meanwhile, as Eyal indicated, “negative emotions frequently serve as internal triggers”: uncertainty may lead us to Google things, and boredom can send us falling down a YouTube rabbit hole.
Loneliness & boredom can be internal triggers to use Facebook.
Action
In this case, the Action is understood as the minimum interaction the user needs to have with the product to obtain a reward. For it to take place, it has to require less effort than thinking, as a routine activity is carried out almost unconsciously. In other words: the action should be as simple to do as possible. BJ Fogg, Director of the Persuasive Tech Lab at Stanford University, formed a theoretical behaviour model explaining that three elements must occur simultaneously for an action to happen: motivation, ability, and a trigger. He argues that if the response does not occur, one of the listed elements has to be missing.
With Google, the action is as simple as typing a query.
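If it helps to see Fogg’s three elements outside of prose, here is a toy sketch of the model in Python. The multiplication, the threshold, and the numbers are purely illustrative assumptions for the sake of the sketch, not Fogg’s actual formulation.

```python
# Toy illustration of the behaviour model described above: an action fires only
# when motivation, ability, and a trigger are present at the same time.
# The threshold and the arithmetic are made-up simplifications.

def action_happens(motivation: float, ability: float, trigger_present: bool) -> bool:
    ACTIVATION_THRESHOLD = 1.0   # arbitrary "action line" for this sketch
    if not trigger_present:      # no trigger, no action
        return False
    return motivation * ability >= ACTIVATION_THRESHOLD

# A motivated user facing an easy action (typing a query) acts; the same user
# facing a hard action (a 20-field signup form) may not.
print(action_happens(motivation=2.0, ability=0.9, trigger_present=True))   # True
print(action_happens(motivation=2.0, ability=0.3, trigger_present=True))   # False
```

The only point of the sketch is that removing any one element (motivation, ability, or the trigger) is enough to stop the action, which is why designers work so hard on making the action as simple as possible.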
Variable Reward
The variable character of the reward increases user engagement as it creates the need for feedback, focuses attention and gives pleasure. Studies show that our actions are carried out to handle stress caused by our desire for three types of rewards: the tribe, the hunt and the self. Their combination boosts their effectiveness in creating user habits. For instance, email is an excellent example of incorporating such a strategy, as it delivers all three types of rewards irregularly: we have a social obligation to answer our emails (the tribe), received emails may contain information that is of value to us such as business opportunities (the hunt), and going through, removing and organizing emails gives a sense of fulfilling a task (the self).
On a side note, a variable reward is not necessarily needed in products that already operate in a variable situation, such as Uber. I guess we wouldn’t like to introduce variability into the experience of “Can I get where I need to go and do it on time?”
Pinterest as another example of the variable rewards of the “hunt”.
Investment
As users invest in the product — their time, money, emotional commitment, effort, etc. — the probability of them coming back and using it again increases. Investment creates preference due to our tendency to overvalue the work we put in, our need to be consistent, and our desire to avoid cognitive dissonance.
On Twitter, the investment comes in the form of following another user.
Implementing Hook Model in the Lean process
Now as we grasped the basics of the Hook model we can go onto trying to blend it into our Lean Startup methodology.
Naturally, not all products are meant or even need to be habit-forming. If, however, that is something you are aiming for, it is best to think about it right from the start. The development of a habitual product requires many iterations and user behaviour analysis followed by constant experiments. Now isn’t that something that sounds just perfectly in line with the lean process?
The lean approach makes startups less risky by helping launch the product faster and cheaper than traditional methods. In the process, an experiment is viewed as the venture’s first product. Therefore, if at a given or any upcoming stage it proves to be successful, the manager will be able to take specific actions: gather early-market participants, conduct new experiments and further iterations, and finally build the actual product. This way, when launched, it has a significant number of users, it is based on validated hypotheses, it has a proven capability of solving real problems, and the venture is equipped with a functional specification of what to implement next.
If there is one idea which has transformed the way we pursue innovation today more than any other, it’s the idea of using the scientific method to handle uncertainty.
The build-measure-learn feedback loop, a tool enabling to test the vision continuously, is at the core of the Lean Startup methodology. We don’t need the best possible hypothesis we want to test or the best possible plan of how to proceed. What needs to be done is getting through the build-measure-learn feedback loop with maximum speed to ensure product-centric learning. | https://medium.com/elpassion/build-habit-forming-products-with-hook-model-lean-startup-d6be3e0b911 | ['Aleksandra Bis'] | 2018-01-22 13:43:55.708000+00:00 | ['Nir Eyal', 'Startup', 'UX Design', 'Lean Startup', 'MVP'] |
The Ultimate Guide to Fulfillment, Pt. 1 | Real life insights into fulfillment and logistics for startups and subscription box companies
We’ll start with an overview of how fulfillment centers work, then delve into the steps of finding the right partner in future posts.
After a product has been manufactured, all companies face the same challenge: How to get the goods to the end customer?
For startups that want to scale beyond shipping in-house, Third Party Logistics companies (3PL) offer a way to outsource the fulfillment of their products.
Fulfillment partners will store inventory in their warehouse, package the product, and ship it, which can save a startup a boat load of time and money. As you can imagine, this dramatically shifts how your business is set up from an operational standpoint.
You’ll want to be prepared in your search, because the right partner will be a trusted resource that helps you grow, while choosing the wrong option can lead to dissatisfied customers, delays and countless headaches.
Pro-tip: It’s not all about the cost, you want to make sure you find a quality partner and think about the long-term. Anthony Thomas, Founder of Sticker Mule, offers his reasoning for thinking long-term,
“when you’re small it’s almost impossible to achieve a major cost savings, but the upside is that as you grow costs won’t grow linearly if you’re constantly striving to keep things simple. This is why it’s so important to build rapport with your partners early on. If you beat them up over price when you’re small you might get a small win, but then they won’t care to work with you to architect solutions that can deliver bigger wins down the road.”
Fulfillment 101: Fulfillment is generally understood as the steps between processing an order and making sure the product reaches the intended customer.
On a grander scheme, the actual process can include coordinating freight from your factory (shipping from manufacturer to your warehouse), final assembly of the product, storage, packaging, labeling, and more.
Here’s an overview of the positives and negatives of working with a fulfillment warehouse. First, there are a number of advantages:
Potentially cheaper fulfillment rates — By having lower cost of labor and cheaper real estate, fulfillment companies can have significant savings in their operations.
Cheaper Shipping Rates — Fulfillment warehouses ship large quantities of items and receive cheaper shipping rates from the major couriers. They’re also well versed in the shipping logistics landscape, and can give you access to the widest range of shipping options.
Shorter Shipping Times — Strategically choosing your fulfillment partner closer to the bulk of your customers means your customers get their orders faster.
Freedom of time — As a growing business, your time is your most limited factor. In all likelihood, fulfillment is not your core competency, and by outsourcing this component of the business you free up more time for the things you are really good at.
Some downsides to consider:
Expensive — Fulfillment centers need to make money, and they charge for everything from storage fees to how the shipment is packed.
Less control — It’s your face on the product, but you no longer have direct control of when shipments get processed and sent out. If there are delays in the warehouse, the customer comes to you (and they won’t care about why your warehouse couldn’t ship it out).
Locked in contracts — Typically, 3PLs want a contract with startups because getting vendors set up takes a lot of time, and they make their money through volume over time.
Long on-boarding process — You have to negotiate rates, send product to the warehouse, wait for them to input it into their system, and get their software to talk with your eCommerce site and other software. It can take weeks to get up and running.
How warehouses work: Fulfillment centers house products from many vendors, and while they suggest they can handle anything, you’ll find that each warehouse has its own strengths and weaknesses.
Some 3PLs will specialize their offering (i.e. only do bottled beverages, hardware with lithium-ion batteries, dry packaged goods, etc.), while others focus on a particular size of company.
If you have a more complicated product or packaging procedure (think, subscription box), you’ll want to focus on warehouses that cater to this, such as Dotcom, or negotiate lower pick-and-pack fees.
Most warehouses will be open to all sorts of custom experiences, but you will often have to pay for variations. Warehouses make money in a variety of ways:
A one-time setup charge (from a few hundred dollars to thousands)
Monthly account management fee
Freight delivery charge
Pallet breakdown and storage fees ($12 to $20+ per pallet per month)
Charges for repackaging items and shipping, and the little-discussed price gouging on shipping rates.
Fun fact: While warehouses receive discounts on shipping rates, they don’t often pass on all the savings to you, their customer.
As a practice, many 3PLs like predictability in their business (such as consistent sales volume or weekly deliveries) because it helps them to budget time and schedule labor appropriately.
It is this consistency that allows them to optimize their operation and save money. For an overview of the inner workings of a 3PL, Matthew Carrol provides an excellent graphic:
(Source: Quora)
There are many variables to consider when evaluating whether to work with a fulfillment center. Jeff Becker, Co-Founder of Earhoox, notes,
“outsourcing fulfillment can be very beneficial, but it’s important to understand the costs to do it yourself versus a third party. The volumes need to make sense, the storage fees depending on your particular product need to make sense, and your market or where you are shipping have to make sense.”
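To make that “do the volumes make sense” question concrete, here is a minimal break-even sketch in Python. Every rate in it (rent, labor, storage, pick-and-pack, shipping) is a made-up placeholder rather than a quote from any real warehouse, so swap in your own numbers before drawing conclusions.

```python
# Hypothetical per-order cost of in-house vs. 3PL fulfillment at different volumes.
# All fees below are illustrative assumptions, not real quotes.

def in_house_cost(orders: int) -> float:
    space = 1500.0                      # assumed monthly rent for storage space
    labor = (10 / 60) * 15.0 * orders   # assume 10 minutes of packing at $15/hour
    shipping = 8.50 * orders            # assumed retail shipping rate per parcel
    return space + labor + shipping

def third_party_cost(orders: int) -> float:
    account_fee = 300.0                 # assumed monthly account management fee
    storage = 10 * 15.0                 # assume 10 pallets at $15 per pallet per month
    pick_pack = 2.50 * orders           # assumed pick-and-pack fee per order
    shipping = 7.00 * orders            # assumed discounted shipping rate
    return account_fee + storage + pick_pack + shipping

for volume in (200, 500, 1000, 2000):
    print(volume,
          round(in_house_cost(volume) / volume, 2),
          round(third_party_cost(volume) / volume, 2))
```

In this made-up scenario the 3PL wins mostly because of its discounted shipping rate, which echoes the “fun fact” above; with different assumptions the answer can easily flip, which is exactly Jeff’s point.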
Next, we’ll talk about how to refine your current operations in order to make a smooth transition to outsourcing.
//
Glossary of terms:
Pick-and-Pack: The process of taking an item from the shelf and putting it into the shipping package (parcel).
Lick-and-Ship: Applying a label to the packaged item and shipping out the parcel.
Pick or Touch fees: A fee associated with each time the warehouse “touches” an item that goes into the final package, such as taking an item out of a “pick location” i.e. a shelf in a warehouse or placing a sticker on the outside of the box.
Kitting: When you need to combine individual items to be grouped, packaged, and supplied together. This often happens when products are first received, and need to be grouped together to be sold.
A subscription box is a good example: For Spoil, the warehouse would need to pick the tea infuser, the hydrating mask, the disc of soap, the tea, etc., with each item representing a separate “pick.”
(Source: Spoil)
WMS (Warehouse Management System): The software system warehouses use to track inventory levels.
Case or Case Pack: Shipping packages that contain individual SKUs grouped together.
Pallet: A flat transport structure that is used to transfer large amounts of goods (often stacked Case Packs) from your manufacturer to the warehouse, within the warehouse, and often for large orders to retailers. They require a lifting device such as a forklift, pallet jack, or front loader. These have become popular interior design elements of late.
(Source: Joanne Cleary)
Tools and resources mentioned:
Sticker Mule: Custom die-cut stickers for startups.
Dotcom Distribution: eCommerce logistics partner specializing in high volume subscription boxes. | https://medium.com/the-chain/how-to-find-your-first-warehouse-pt-1-2d974e262a2a | ['Akin Shoyoye'] | 2016-08-05 00:43:21.024000+00:00 | ['Tech', 'Startup', 'Fulfillment', 'Ecommerce', 'Subscription Boxes'] |
Why Sexual Pen Names Are So Important — And Should Be Respected | Pen Names Are More Than Just A Name
Recently, I have begun pondering the ethics of sexual entitlement. It’s a creative venture that has taken me to an unusual place in my writing life. It has also forced me to begin asking myself a simple question that's increasingly sounding like a Batman riddle.
“What is in a name?”
As I continue to throw myself down the rabbit hole of sex blogging, I can’t help but notice a strange trend — many sex writers don’t use their real names. More often than not, they are women writing under stylish profile pictures. These are people who live enchanting double lives on the internet. On one hand, their vulnerable, often exciting tales of sexual exploration capture your attention, gripping you with sheer, brutal honesty on a taboo subject. You feel enthralled and compelled to keep reading because of the taboo nature of sex, especially in the world of women. While there is a weird disconnect in knowing the name is a pen name, that doesn’t discount the fact that the education and stories they write are often fascinating and enlightening.
However, I have come to accept that there is a darker side to this lifestyle. Once again, societal double standards come into play here. For men, most of us wouldn’t give a shit about oversharing. When men give sex advice, which is usually just a form of egotistic braggadocio, we’re often praised and put on a pedestal. Rarely is there any fear of repercussions when a man overshares about his sexual interests.
Women, unfortunately, do not have this same luxury. For all of the joy that the internet has brought us, it can very easily lead to a dangerous realm for my female peers. Time and time again, society has proven unkind to the female perspective on many important issues — with sexuality being somewhere near the top of that list. With the rise of social media, comes the painful reality that safety is a very real concern for many. From possibly deranged stalkers to ridiculous violent threats that should be taken seriously, there is no telling who may pop out of the dark corners of the world wide web and attempt to ruin your life.
Pen names are more than just a fancy pseudonym — they’re a real-world shield against very real-world problems. While I do believe that certain social standards should be upheld, I’m also self-aware enough to know that being a woman adds an extra level of risk to many activities of life. A risk that many sex writers are cognizant of at all times.
Women, Sex Writing and Career Safety
In the world of career work, taboo subjects and workplace social habits simply do not mix.
Whether you work as a school teacher or at a powerful company, it’s a difficult fact of life that certain things are borderline forbidden in the workplace. Whether that’s sexuality, social justice movements, or fighting toxic workplace cultures, there’s always the heavy risk of committing career suicide. Particularly in the case of women and sex, it’s more than just career suicide — it could be the spark to a misogynistic fire if they’re not careful.
As this pandemic continues to ravage the world, the value of job security and the benefits that come with it cannot be overstated. Most writers, sexual or otherwise, are not financially independent people. They are not out here making Tim Denning levels of cash through sheer writing alone. Actually, most of us can’t realistically pay the bills with our writing. And to even achieve that level of financial success through content writing, it takes a long period of hustle and grind that’s not mentioned enough around here. When compounded with the life pressures of womanhood, having a job that can back you up is an absolute blessing.
Thankfully, sex writing is more than just money — it’s art. It’s expression on sexual matters that has taken on increased importance in today’s society. It’s sexual education for those who never received the proper real-world talk that one should have growing up. It’s therapy for those with sexual traumas and healing for the broken-hearted.
Sex writing may not be good for the bank account, but it’s incredible for the human soul.
While the world of career work still caters to the old-fashioned view that women are sluts for speaking on sexuality, sex writing can have a positive effect.
I mean yeah you have your derivative “how to give better sex and blowjobs” reading that we’ve seen a million times, but there is a lot of important information out there for those exploring sexuality.
No One Is Entitled to Your Personal Life
At the end of the day, no one is entitled to someone’s personal life — fake name or otherwise.
As scandalous as some of their writing may be, the truth is we’re not owed anything. This extends to a much deeper conversation on the nature of Parasocial relationships:
Parasocial relationships are one-sided relationships, where one person extends emotional energy, interest and time, and the other party, the persona, is completely unaware of the other’s existence. Parasocial relationships are most common with celebrities, organizations (such as sports teams) or television stars. In the past, parasocial relationships occurred predominantly with television personas. Now, these relationships also occur between individuals and their favorite bloggers, social media users, and gamers.
Source: National Register of Health Service Psychologists
These types of relationships are the literal definition of socially-distanced. It used to be the world would gush over celebrities and high-profile individuals like Will Smith, Keanu Reeves, and Kim Kardashian. However, with the rise of social-media centric society, it’s much easier to cultivate a connection with your favorite online personality. Whether it’s on Twitter and Facebook, the streaming community of Twitch and OnlyFans, or popular YouTubers, they are the ones most frequently using this style of relationship. Or even a certain writing platform we all know and love.
However, due to the one-way nature of this connection, it’s easy to demand more out of our favorite personalities. When we read a juicy sex story, we want more details. When they detail an amazing sexual encounter, we may wish we were there. More and more, we may push the boundary of expectation from our favorite writer — and not even realize it.
Pen names exist to protect people from that level of social demand. When someone can just as quickly look you up on a social media platform, it demands an extra layer of security. While social media engagement has given way to increased creative freedom, it has also created another avenue of social stalking. Specifically, if you’re a woman oversharing her sex life, it’s all too easy to fall into the stress of toxic perceptions.
Pen Names Are More Than Just Sex
Pen names exist for a reason — if you want vulnerable, honest, and truly informed writing, a pen name helps. It inspires confidence and liberation in the minds of those who use them. A pseudonym not only gives you the ability to become someone else, but it also allows you to craft powerful critiques that wouldn’t otherwise be possible.
And there is an incredible historical precedent to support this.
Ayn Rand is not a real name — it’s a pseudonym that was adopted because her real name is too complicated for American readers.
Dr. Seuss is also not a real name (God I hope people knew that) — it’s a pseudonym created due to a college drinking incident. He would go on to become one of the most popular kids' writers of all time.
Stan Lee is also not a real name — Well, he created Marvel Comics and we all saw how that went.
Mark Twain, Voltaire, Toni Morrison, and Lemony Snicket are — you guessed it — not real names, but pen names for their careers as writers. For various reasons, they all chose a pen name that was more favorable. Their work also influenced history in one form or another.
Taken one step further, the made-up names of social media influencers, YouTubers, and music artists are all a form of a pen name. Using a made-up name sounds way cooler than most of our average sounding names.
Source: Electric Literature (more sources at bottom of page)
Conclusion
In the world of sexual writing and blogging, pen names serve an important purpose. It’s more than just creating a sexy alter ego — it’s often necessary insurance against discrimination and judgment.
Personally, I don’t think it makes you a pussy for hiding behind a pseudonym. In fact, as long as you’re speaking your truth, I think it shouldn’t matter really.
So, to answer my earlier question… “What is in a name?”
A human being. And that’s enough. | https://medium.com/sexography/why-sexual-pen-names-are-so-important-and-should-be-respected-ea90317f6646 | ['Dayon Cotton'] | 2020-12-28 20:09:45.740000+00:00 | ['Equality', 'Women', 'Writing', 'Sexuality', 'Sex'] |
My Favorite “What If”… | I’ve never really been a fan of Christmas. Unlike most people, I’ve always dreaded the holidays and I honestly find this season of the year quite depressing. The financial strain, the numerous obligations you wish you could get out of but mostly, for me, I’ve always disliked the holidays because I rarely got to spend it with my whole family under one roof. The holidays make me sad, plain and simple.
Two years ago, Christmas day really gave me a reason to be miserable. My husband and I had the best gift of all to give to our family and friends. We were 2 weeks away from that famous 3-month baby announcement. Except that day never happened, and now I have all the reasons in the world to hate Christmas.
The first few months of 2017 were extremely painful both physically and psychologically. Fortunately, the physical pain following a miscarriage is somewhat manageable. The psychological pain though, that is a different story. I didn’t handle it very well… I couldn’t comprehend the sadness I was experiencing over something I’ve never had, over someone I never got to hold.
But being pregnant even for only a few weeks was the most amazing feeling I’ve ever experienced. I felt so strong: a baby was growing inside of me, a new dream was in the making. My plan for the months to follow was set in stone. I would study part-time, I would prepare his or her nest. My husband and I started thinking of names. We ended up just calling our unborn baby “Peanut”. Peanut became our biggest obsession. EVERYTHING revolved around Peanut.
But Peanut ended up being just a series of symptoms, the most beautiful series of symptoms nonetheless. I didn’t get to carry him or her in my arms, but I’ll always carry my Peanut in my heart. It will forever be my favorite “what if”.
I trusted my body to care for my unborn baby, but after the miscarriage, I couldn’t trust it anymore. I felt like a failure and kept asking myself what I might have done wrong. I had so many questions but no one had answers for me. “It happens more often than you’d think. One in four women go through a miscarriage”, my doctor tells me. One in four women??? How come I’ve never met anyone who’s had a miscarriage before?
Chances are though that one of your aunts, your mom or grandma, a distant friend or a colleague has miscarried at least once in their lifetime. But no one talks about it because hello, who likes to talk about their failures? No one, yet you shouldn’t think of a miscarriage as a defeat. It just means that the pregnancy was not a viable one. As Michelle Obama superbly puts it:
“ I think it’s the worst thing that we do to each other as women, not share the truth about our bodies and how they work, and how they don’t work”.
Yes ladies, please speak up. Don’t be ashamed or scared to share your story. Let’s help each other get through something so traumatic, yet so common. Let’s stop making this topic a taboo one. It is (almost) the year 2019, for F*** sakes!
Almost 2 years have passed since that stormy night in 2016 and I am still waiting for my bright rainbow. Only now I am not as sad anymore because I am grateful for the things Peanut was able to teach me. It taught me how strong I could be and how much my body is able to handle. It taught me that a storm doesn’t last forever. I might not have come across a rainbow yet, but I sure have seen lots of rays of sunshine since December 25th, 2016. And for that, I am extremely thankful.
If you or someone you know has gone through a miscarriage, I strongly recommend reading Ariel Levy’s memoir “The Rules Do Not Apply”. But really, it is a must-read for all women. | https://nellydaou.medium.com/my-favorite-what-if-5439e639d0f1 | ['Nelly Daou'] | 2019-02-25 05:44:41.437000+00:00 | ['Holidays', 'Family', 'Miscarriage', 'Psychology', 'Infertility'] |
30 Lessons About Life You Should Learn Before Turning 30 | 30 Lessons About Life You Should Learn Before Turning 30
Lessons I eventually learned, but wish I could have mastered a long time ago…
Photo by Julian Hochgesang on Unsplash
1. Life is a delicate balance between “ready, aim, fire” and “ready, fire, aim.”
Chances are you naturally lean toward one of these approaches: either unnecessarily cautious or foolishly ambitious.
The reality is that some situations call for great care and deliberation, while others call for you to dive right in before you have time to convince yourself not to.
If you’re taking out a huge loan to start a business, you need to be cautious. If you’re thinking of starting a YouTube channel you just need to get started.
Either way, you better learn the difference.
2. You should automate as many things as possible
Steve Jobs developed a uniform of a black turtleneck, Levi’s blue jeans, and white New Balance sneakers.
Mark Zuckerberg mostly sticks to a gray t-shirt.
Barack Obama would choose between a charcoal gray and a navy suit every day.
The more time and energy you spend on things that don’t matter, the less you will have for the things that do matter.
Develop defaults in every area of your life: outfits, meals, morning routines, etc. You want as much of your life running on auto-pilot as possible so that you have the ability to manage the parts that most people never get under control.
3. Journey>Genesis
I’m not talking about the bands here (although I think the point would still stand). What I mean is that the process is more important than the act of starting.
We spend a lot of time celebrating the start of things, but not nearly enough time encouraging people when things get tough. And they always get tough.
4. You’ll be happier if you love the journey more than the destination
Not only is the journey more important than the starting line, it’s more important than the finish line.
If all you care about is the mountaintop, the climb is going to be rough.
The other thing about the finish line is that it’s a moving target. Once you get close to one finish line, you’ll spot another one just beyond it that’s even more appealing.
5. Your relationships will all change drastically
In college, I had an amazing community of friends including a close-knit inner circle and I figured that we were going to stick together for the long haul.
But life happens and for the most part we’ve gone our separate ways. There’s no hard feelings here. I realize this is the way life works.
I also have several family members who have recently moved further away. Those relationships will be different now that we can’t see them as often.
As long as you fight to maintain several relationships that are deep enough to sustain you in the present, you’ll be able to stand firm amid the ebb and flow of people coming and going from your life.
6. The past is best used as a wellspring of gratitude
There are two enormous mistakes when it comes to the past: dwelling on the negative aspects or romanticizing the positive ones.
But I don’t believe you should never look back. I’m all about living in the moment, but all that lead you to this moment was important too.
When you look back, focus on the things that you can be grateful for now. Every one of us has had ten thousand amazing things happen to us that we didn’t deserve. We’ve been treated to moments of true beauty worth cherishing. Don’t turn your back on all you’ve been given.
7. Procrastination is okay if it works
Everyone seems to hate procrastination, but it’s not always bad. Most people who procrastinate come through in the 11th hour and things work out fine.
Yes, it might be stressful getting there, but you have lots of areas in your life that need work. If procrastination is getting the job done, maybe it’s not the first thing you should look to fix.
That being said…
8. Procrastination is a silent killer of your deepest dreams
When it comes to your dreams, there’s no external accountability. You can’t come through at the 11th hour because there is no defined 12th hour. There are no deadlines unless you create them yourself.
Most people never stop and define their deepest, most important goals. This allows them to procrastinate on them indefinitely.
If you don’t come out and say that you want to write a novel, it’s not going to happen.
This is why I created a 10 Year Plan for a Remarkable Life, and why I think you should create one too:
9. You can be spontaneous even when you have plans
When it comes to creating a 10 Year Plan, one of the biggest objections people have is that they value spontaneity and don’t want to be tied down.
Fair enough, but it’s important to remember that your plans, your goals, and your calendar are there to serve you, not the other way around.
If you want to change something, change it. If you want to be spontaneous, be spontaneous.
Plans set you in the right direction so you don’t drift off course. If you need to pivot because your values and priorities change, go ahead.
It’s just as spontaneous to seize an unexpected opportunity when you had plans as when you didn’t.
10. Money can buy a lot of things, but the most important thing it can buy is freedom
Call it Financial Independence, call it retirement, call it escaping the rat race, call it whatever you want. Having enough money saved up gives you the freedom to live life on your terms.
You can do the work that you want to do when you want to do it and in the style you want to do it.
This is attainable for most people, but you have to be intentional about it. I lay out the mechanics of it in this article:
11. The people that are holding you back mean well
When you decide to make the leap from average to awesome, you’ll run into a surprising amount of resistance from people who should love and support you.
They don’t think they are discouraging your dreams, they think they are saving you from disappointment. They have no idea that they are pulling you back toward mediocrity.
Be grateful that they care, but have the wisdom to know when not to listen.
12. Getting older isn’t a bad thing
I used to dread turning 30, but my 30’s are shaping up to be the best decade of my life so far.
Yes there are inevitable declines that come with age and yes, seasons do change in life. Your 30’s probably won’t have the same vibe as your 20’s. But each season has its advantages and you learn to appreciate them in their time.
13. Happiness should be a central focus in marriage: the other person’s happiness
What I hate about the concept of looking for “the one” is that it sets up this idea that marriage is about finding someone who completely meets your needs. Not only is this never going to happen, it’s not what marriage is in the first place.
If you choose to get married, do it with the mindset of having an opportunity to love the other person well.
If you think that marriage was designed to make you happy, well, you know what the divorce rate is…
14. Finding yourself is great, losing yourself is better
This piggybacks off the last point. Ironically, the best way to be happy is to pursue the happiness of others instead of your own.
There’s a lot of focus today around self-discovery and “finding yourself” has always been something that young people were stereotypically supposed to do.
I think introspection and self awareness are very important. But my advice is as soon as you’ve learned what you need to know, get the focus off yourself as quickly as possible.
15. It’s okay to fail at stuff
I’ve spent way too much of my life being afraid to fail at things. This fear leads to inaction and inaction stunts your growth.
Here’s a news flash: no one has ever gotten good at anything without failing. Ever.
How do you think you learned how to walk?
Sometime in our childhood, we pick up this idea that failing is something to be embarrassed about. The sooner you banish that silly idea from your brain the better.
16. The one thing you should be afraid of is regret
Too many people have this backwards. They fear failure but not regret.
Once you have truly experienced both, you’ll realize that there’s no comparison: regret is a million times worse than failure.
Regret is a problem that becomes worse and worse with age. Regret at 30 isn’t so bad, there’s plenty you can still do about it. Regret at age 70 or 80 is brutal.
The choices that you make now really matter. Give some thought as to how your future self will evaluate them.
17. How you manage your energy is even more important than how you manage your time
When it comes to productivity, the focus is often on time management. This makes sense since using your time well is certainly a component of productivity, but it’s far from the full picture.
If you want to get important things done, you also have to manage your energy and attention. It doesn’t matter how much time you allocate to something if your brain is fried and you can’t concentrate on the task at hand.
If you haven’t learned to take a mental rest and prepare yourself for another bout of work, your productivity will suffer.
18. Your health is your wealth
Time is even more important than money.
The most likely thing that will rob you of your time is poor health. The leading causes of death are all chronic diseases. If you die early, there’s a small chance it will be a fluke accident and a large chance it will be the culmination of poor habits.
Do what you can to severely limit your consumption of sugar and processed foods. Go on walks. Eat home cooked meals. Spend time with people you love.
The default way of living is the path to chronic disease. If you want a different outcome you need different habits.
19. Everyone knows something you don’t
People say this all the time and don’t mean it.
I mean it.
The people you think are stupid. Your political enemies. The people who got brainwashed by a cult.
Everyone.
I’ll go one further: even if you are on the right side of a debate, there’s an overwhelming chance that your opponent knows something about the debate that you do not.
Have the humility to listen and you’ll keep growing.
20. Experimentation is the best way to vet advice
You’re going to be exposed to a lot of advice, much of it will be contradictory.
Avoid fat — no, carbs — wait, calories — excuse me, I meant nightshades — wait, really it’s the omega-6’s that are the problem.
Large scale studies are expensive and the science behind complicated issues moves slowly. That’s not a knock on science, that’s just a suggestion that the fastest way forward is to conduct a series of n=1 experiments.
This applies to all areas. Some personal finance gurus say to cut lattes and invest the difference. Some say it’s not worth the effort. You have no idea whether or not it is until you try it out. Try giving up something that’s a habitual part of your life for a month. If you don’t miss it, great. If it’s the worst month of your life, maybe look somewhere else to save a few bucks.
21. Focus on the smallest changes that make the biggest difference
The Pareto principle — also known as the 80/20 rule — says that 80% of the results come from 20% of the causes.
In other words, you can almost always find opportunities where small changes make an enormous difference.
For instance, when it comes to losing weight, cutting soda and juice is just one change, but it can completely transform your health.
22. Keep a journal
It can take a little time to get used to the habit of keeping a journal, but it’s worth it.
Most people are in a reactive mode all day long. The practice of journaling teaches you to turn inward in reflection and to proactively bring something to the page.
23. If you can be grateful for how far you’ve come, excited about where you are going, and are in love with the journey, you are winning.
This represents a healthy relationship with time: past present and future.
Grateful for the role of the past.
Excitement for a future that you are actively trying to create.
A love for the journey you are on and the precious present moment you are in.
That’s winning.
24. Find meaning in your obstacles
Consider this powerful section from the book Quiet: The Power of Introverts in a World That Can’t Stop Talking by Susan Cain:
Unhappy people tend to see setbacks as contaminants that ruined an otherwise good thing (“I was never the same again after my wife left me”), while generative adults see them as blessings in disguise (“The divorce was the most painful thing that ever happened to me, but I’m so much happier with my new wife”). Those who live the most fully realized lives — giving back to their families, societies, and ultimately themselves — tend to find meaning in their obstacles.
25. If you don’t like the story you are telling yourself, tell yourself a different story
The only lens you have through which to view your existence is that of a story.
You are constantly narrating your own life to yourself.
You get to control the story.
The facts never change, but it’s the story that resonates with you.
Telling yourself that you are a loser is a story. Telling yourself you used to be a loser but are turning it around is also a story. Same facts, different results.
Choose wisely.
26. Your weaknesses are your strengths
When I was a kid, I took medication for ADHD.
I’m not sure if I ever had ADHD, but even if I did, there’s no way I’d ever take medication again.
What I have isn’t a weakness, it’s a competitive advantage. It’s a superpower.
I couldn’t pay attention for eight straight hours of school even if you paid me, but I can focus plenty well on the things that I care about.
My inability to sit and behave for eight hours might not fit the modern education system, but why should I care about the modern education system?
27. Try to separate how you spend your time from how you make your money
There’s only one way to make money: selling.
Most people sell their time for money.
A better approach is to invest your time creating something that you can sell besides time.
This was a core thesis of my book on money.
28. The only sensible attitude is gratitude
You will often be tempted to complain about your life.
This isn’t a great look.
You have been given so much that you don’t deserve.
What did you do to deserve your beating heart?
Everything you have is a gift.
Not only is gratitude right, it’s therapeutic and healing.
Gratitude is the greatest psychological hack on the market.
29. The only thing that goes up with age is maturity
As you get older, every single natural ability begins to decline.
Your physical and mental potential peak in your early 20’s.
It’s still possible to improve your body and mind with age because chances are you weren’t anywhere close to your potential in your early 20’s. For instance, I’m in better shape in my 30’s than I was for most of my 20's.
But my body will eventually decline.
My mind will cease to work as quickly.
The areas where I will always be able to improve are my wisdom and character.
Take care of your body, but if you want to peak as a person later than 21, you should focus on your character and wisdom.
30. Lessons aren’t learned from lists
Yes, I recognize the irony of concluding a listicle post with this point.
Too many people read lists like this, feel good for a minute, then move along unchanged.
The only way you will grow is to get out there, experiment, and put things into practice.
I really hope you liked this list. But more importantly, I hope you do something with it. | https://medium.com/the-post-grad-survival-guide/30-lessons-about-life-you-should-learn-before-turning-30-6249873501e5 | ['Matthew Kent'] | 2018-10-16 15:15:57.687000+00:00 | ['Personal Growth', 'Life Lessons', 'Personal Development', 'Productivity', 'Life'] |
Finding Yourself Again on the Road to Burnout | The 5 Stages
Stage 1: The honeymoon
This stage is defined by high job satisfaction, energy, and creativity.
A few months ago, I took a new job in project management that I was extremely excited about. Everything was great — I was working on many different IT projects and learning a lot. I was also working with many different personalities, and I enjoyed it.
Then, Stage 2 happened.
Stage 2: The balancing act
Stage 2 is about being aware that some days are more stressful than others. In this stage, you start to feel job dissatisfaction and inefficiency, fatigue, sleep disturbances, etc.
My optimism began to slowly disappear.
With time, I started to understand that I wasn’t going to be doing what I wanted, and that my role in this job wasn’t as clear as I had expected. Every day, new projects I had to work on would appear from nowhere, while others I had to stop without knowing why.
People started coming to me with requests regularly, and I wasn’t even sure if I was the person in charge.
At this point, I was feeling :
Work inefficiency: I needed more time to achieve the same tasks
Moral fatigue: I was putting too much energy focusing on unessential tasks
Avoidance: I was avoiding some work by focusing on something else. For this reason, I had to regularly work extra hours to finish my work.
Stage 3: Chronic symptoms
This stage is an intensification of Stage 2.
No matter how much I tried to escape this new situation of extreme stress, I was struggling even more. At this point, I felt I had lost my sense of purpose at work, and I had to admit I was stuck in a pattern of disappointment.
I was regularly exhausted after a workday — to the point of chronic exhaustion. But I also noticed I was feeling some anger. I started reacting disagreeably to every suggestion or behavior I didn’t like.
Stage 4: Crisis
Now it’s the time when the symptoms become critical…
At this stage, I remember noticing the following.
Every little thing could change my emotions radically. I started feeling overwhelmed just by hearing email notifications. It was enough to make me feel angry, talk in a different way, and become kind of disagreeable.
. I started feeling overwhelmed just by hearing email notifications. It was enough for me to feel angry, talk in a different way, and become kind of disagreeable. I became more and more pessimistic. No matter the situation, I was thinking about the worst-case scenario, and I often thought about being fired.
No matter the situation, I was thinking about the worst-case scenario, and I often thought about being fired. Obsessing about work frustration . I started talking a lot to myself about my frustrations without even noticing how long I could talk about the same issue.
. I started talking a lot to myself about my frustrations without even noticing how long I could talk about the same issue. I started to say “no” to almost everything. I’m usually the one who accepts to help everyone, but at this point, I was unable to deal with everything.
At this stage, I learned:
How hard it can be to admit you reached your limits
How important it is to share your emotions with others and put into words what you feel about the situation
It can be helpful to talk about what you’re feeling with people that’ll support you during this difficult time. I started to share my difficulties with my colleagues and my family. The more you open up, the more you’ll be able to put into words the emotions you’re dealing with.
Stage 5: Enmeshment
At this point, the symptoms are embedded in your life, and you’re likely to develop a significant physical or emotional disease.
I don’t think I’ve ever reached the stage of enmeshment. But I feel like I’m close to it at the time of writing this article, and I’m doing everything I can to avoid getting there.
This journey to burnout is like a road — you can always choose your final destination.
As experience can be a hard teacher, it taught me how to deal with some uncomfortable truths, especially the following:
I’m not a superhero; I should accept and recognize my breaking point
Perfectionism can become my worst enemy
can become my worst enemy Saying “yes” to everyone and everything in the workplace isn’t possible
Looking back, I also realize I cared a lot about my responsibilities in this new job and about pleasing others.
It makes sense to me now that the very fact that you care about your job and other people puts you at greater risk of burning out than not caring. Christina Maslach, an American psychologist, explained this in her book “Burnout: The Cost of Caring.”
As wisdom comes from experience, we should always learn from those difficult experiences.
No matter how hard it can be to recognize you’re on the road, you’re not alone — and you can use this situation to learn more about yourself. | https://medium.com/better-programming/engineers-on-the-road-to-burn-out-86490f065362 | ['Siana Macdad'] | 2020-07-22 14:15:06.466000+00:00 | ['Mental Health', 'Work Life Balance', 'Burnout', 'Engineering Mangement', 'Programming'] |
How to Run a Marathon in Just Under Seven Hours | I usually don’t mention my completion of a marathon unless someone asks me directly, but my husband, a runner himself, brags about it on my behalf.
“You’ve done one!” he says when I try to minimize mention of the race. “It didn’t kill me (barely)!” I retort.
But the benefits of running a marathon in just under seven hours are numerous. It requires very little training. It takes you long enough to reach the next aid station that you do not need to worry about overhydration. There is plenty of time to chat with people moving along at a similar pace. And, while I’m sure that the people at the front of the race are friendly too, everyone at the back has a fascinating tale of why they are there.
There is a sense of humility and purpose amongst those people trudging along in front of the time cutoff-pick-up van. | https://medium.com/runners-life/how-to-run-a-marathon-in-just-under-seven-hours-e56173c9f480 | ['Allison Fleck'] | 2020-10-28 17:37:07.152000+00:00 | ['Running', 'Motivation', 'Marathon', 'Walking', 'Inspiration'] |
6 reasons why you need branded content now | Your marketing team can’t do it all.
You invest in traditional and digital advertising while dedicating long hours to tweaking your strategy in hopes of boosting your return on investment. You do all this while trying to keep up with the latest trends and watching your budget like a miserly Disney villain.
Wouldn’t it be nice to have a go-to tactic in your ever-changing media arsenal that’s both high-quality and enriches the lives of your core audience? Something you can deftly recycle without fear of the message going stale or engagement tapering off?
You need branded content. And you need it now.
We get it. Branded content is a prime target when you have to cut 10 percent of your marketing budget. It’s not an easy tactic to execute well, either. And to top it off, these thoughtful, often multilayered storytelling campaigns designed to attract and educate your core audience usually don’t even include a revenue producing call-to-action. On purpose.
But the risk is low compared to the potential reward. Here are six reasons why.
1. Trust
Branded content is necessary specifically because it isn’t selling to your audience. Branded content is sharing and teaching. You’re adding value to someone’s life. In many cases, you’re creating opportunities for your current and future customers to experience something that benefits them before buying your product or converting in some other way. And that engenders trust.
Don’t get us wrong. Ads are great. And you absolutely need them to sell your product. But branded content earns you fans. And you need fans to thrive in the long term.
A 2017 Time Inc. study showed roughly two out of three people polled trusted branded content more than ads. While that’s not shocking, the same pool of respondents went on to emphasize that the creativity and alternative approaches that go into making quality branded content, versus advertising products, are significant in that equation. That was especially true among younger respondents.
2. Audience retention
Your ad may be great, but it’s still an ad. You’re going to need to expose people to it over and over, hammering them with your message until you finally hook them. And does breaking your customer’s resolve sound like a great way to start a long-term relationship? Branded content, on the other hand, lets you create stories people remember. It lets you stretch your creative legs. Would you rather see 40 Lexus commercials or watch this spoof crime drama starring Colin Quinn and Seth Meyers? We’re not suggesting you need to choose one over the other. We’re saying you need both. And studies show you need fewer touch points for it to be effective. (A 2016 IPG/Forbes/Newhouse study showed brand recall was 59 percent higher on branded content compared to ads.) | https://medium.com/creative-lab/6-reasons-why-you-need-branded-content-now-88709dc0a114 | ['Eric Brandner'] | 2019-02-27 00:42:24.198000+00:00 | ['Branded Content', 'Marketing', 'Content Marketing', 'Digital Marketing', 'Advertising'] |
Sharing Cars with Strangers. Much of my life has been an exercise in… | Days blur into weeks and months into seasons. I can’t remember how I imagined 2015, but it looks different from the other side. I’ve made and lost friends and watched others slip away. I’ve seen familiar buildings torn down and grown used to the vacant spaces they’ve left behind. I’ve found comfort in recognizing the same faces along my morning commute.
Last week I got into a car with two strangers, both artists. The driver made whimsical etchings of skeletons doing things like driving a convertible over the Bay Bridge. The passenger was a graphic designer. I don’t remember the details. We dropped her off at the Academy of Art and headed north, toward my apartment.
Traffic was heavy and it had been a long week; I’d woken up that Tuesday thinking it was Saturday. We crawled up Third Street and I regretted leaving the office during rush hour. The driver sat beside me, quiet, and I noticed his three watches — two on the right wrist and one on the left.
“We have so much time,” he said.
I didn’t detect any irony.
On the face of one watch, Salvador Dalí looked up at me with his crazy eyes wide and mustache erect. The antennae of his whiskers marked the minute and hour. An ant marched around the perimeter, telling the seconds.
“If you wear three watches, you’ll always have time to do the things you want to do,” he said.
I laughed. He didn’t.
“Are you a fan of Dalí?” I asked.
“Yes, very much,” he said, nodding at a photo he’d taped to the inside of his windshield. This version of the painter held a cat — a pet ocelot, I recalled, named Babou. How strange the things we remember.
I wasn’t paying attention when the driver asked me a question.
“Have you ever been in the eye of a hurricane?” he asked again.
I recalled sitting at home during heavy rains — but no, that’s not the same.
“It’s otherworldly. Like being in a dream,” he said.
He told me about sitting in his backyard during the middle of a storm. He lived in the south and thunderstorms were common, but this was a big one. I imagined him in a plastic chair in the middle of an unkempt lawn, eyes closed.
“Not even the birds are chirping,” he said. “They know before we do.”
I asked how long he stayed outside, but he didn’t know.
“I sat there until it was stupid to stay,” he said. “At a certain point, you have to go back inside. I didn’t want to, though.”
Much of my life has been an exercise in leaving at the right time.
“Time stands still,” he said. “I can’t explain it to you. If you ever experience it, you’ll understand.”
I wished desperately to understand. I still do. The ant kept marching around his Dalí watch.
Tableau Operational Dashboards and Reporting on DynamoDB — Evaluating Redshift and Athena | Organizations speak of operational reporting and analytics as the next technical challenge in improving business processes and efficiency. In a world where everyone is becoming an analyst, live dashboards surface up-to-date insights and operationalize real-time data to provide timely decision-making support across multiple areas of an organization. We’ll look at what it takes to build operational dashboards and reporting using standard data visualization tools, like Tableau, Grafana, Redash, and Apache Superset. Specifically, we’ll be focusing on using these BI tools on data stored in DynamoDB, as we have found the path from DynamoDB to data visualization tool to be a common pattern among users of operational dashboards.
Creating data visualizations with existing BI tools, like Tableau, is probably a good fit for organizations with fewer resources, less strict UI requirements, or a desire to quickly get a dashboard up and running. It has the added benefit that many analysts at the company are already familiar with how to use the tool. We consider several approaches, all of which use DynamoDB Streams but differ in how the dashboards are served:
1. DynamoDB Streams + Lambda + Kinesis Firehose + Redshift
2. DynamoDB Streams + Lambda + Kinesis Firehose + S3 + Athena
3. DynamoDB Streams + Rockset
We’ll evaluate each approach on its ease of setup/maintenance, data latency, query latency/concurrency, and system scalability so you can judge which approach is best for you based on which of these criteria are most important for your use case.
Considerations for Building Operational Dashboards Using Standard BI Tools
Building live dashboards is non-trivial: any solution needs to support highly concurrent, low-latency queries for fast load times (or usage and efficiency suffer) and live sync from the data sources for low data latency (or stale data drives incorrect actions and missed opportunities). Low latency requirements rule out directly operating on data in OLTP databases, which are optimized for transactional, not analytical, queries. Low data latency requirements rule out ETL-based solutions, which push your data latency above the real-time threshold and inevitably lead to “ETL hell”.
DynamoDB is a fully managed NoSQL database provided by AWS that is optimized for point lookups and small range scans using a partition key. Though it is highly performant for these use cases, DynamoDB is not a good choice for analytical queries, which typically involve large range scans and complex operations such as grouping and aggregation. AWS knows this and has answered customer requests by creating DynamoDB Streams, a change-data-capture system which can be used to notify other services of new/modified data in DynamoDB. In our case, we’ll make use of DynamoDB Streams to synchronize our DynamoDB table with other storage systems that are better suited for serving analytical queries.
Building your live dashboard on top of an existing BI tool essentially means you need to provide a SQL API over a real-time data source, and then you can use your BI tool of choice (Tableau, Superset, Redash, Grafana, etc.) to plug into it and create all of your data visualizations on DynamoDB data. Therefore, here we’ll focus on creating a real-time data source with SQL support and leave the specifics of each of those tools for another post.
Kinesis Firehose + Redshift
We’ll start at this end of the spectrum by considering using Kinesis Firehose to synchronize your DynamoDB table with a Redshift table, on top of which you can run your BI tool of choice. Redshift is AWS’s data warehouse offering that is specifically tailored for OLAP workloads over very large datasets. Most BI tools have explicit Redshift integrations available, and there’s a standard JDBC connection that can be used as well.
The first thing to do is create a new Redshift cluster, and within it create a new database and table that will be used to hold the data to be ingested from DynamoDB. You can connect to your Redshift database through a standard SQL client that supports a JDBC connection and the PostgreSQL dialect. You will have to explicitly define your table with all field names, data types, and column compression types at this point before you can continue.
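Because Redshift speaks the PostgreSQL wire protocol, this step can be scripted with a standard client library. The sketch below uses psycopg2 to create a hypothetical page_views table with column encodings, a distribution key, and a sort key; the cluster endpoint, credentials, table name, and columns are all placeholder assumptions rather than values from any real setup.

```python
# Hypothetical sketch: create a Redshift table to receive DynamoDB data.
# Host, credentials, table name, and columns are placeholders; adjust to your schema.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123xyz.us-west-2.redshift.amazonaws.com",  # assumed cluster endpoint
    port=5439,
    dbname="analytics",
    user="admin",
    password="********",
)

ddl = """
CREATE TABLE IF NOT EXISTS page_views (
    view_id   VARCHAR(64)  ENCODE zstd,
    user_id   VARCHAR(64)  ENCODE zstd,
    page      VARCHAR(256) ENCODE lzo,
    viewed_at TIMESTAMP    ENCODE az64
)
DISTKEY (user_id)
SORTKEY (viewed_at);
"""

with conn, conn.cursor() as cur:
    cur.execute(ddl)  # Redshift is PostgreSQL-compatible, so psycopg2 works for DDL
conn.close()
```

Since Firehose will load this table with a COPY command, the column names need to line up with the JSON keys (or the column list) that the COPY is given.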
Next, you’ll need to go to the Kinesis dashboard and create a new Kinesis Firehose, which is the variant AWS provides to stream events to a destination bucket in S3 or a destination table in Redshift. We’ll choose the source option Direct PUT or other sources, and we’ll select our Redshift table as the destination. Here it gives you some helpful optimizations you can enable like staging the data in S3 before performing a COPY command into Redshift (which leads to fewer, larger writes to Redshift, thereby preserving precious compute resources on your Redshift cluster and giving you a backup in S3 in case there are any issues during the COPY). We can configure the buffer size and buffer interval to control how much/often Kinesis writes in one chunk. For example, a 100MB buffer size and 60s buffer interval would tell Kinesis Firehose to write once it has received 100MB of data, or 60s has passed, whichever comes first.
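The same delivery stream can be created with boto3 instead of the console. The sketch below is illustrative only: the ARNs, JDBC URL, credentials, and names are placeholders, and it simply mirrors the 100MB/60s buffering example above; consult the boto3 documentation for the full parameter set.

```python
# Hypothetical sketch: a Direct PUT Firehose delivery stream with a Redshift destination.
# All ARNs, URLs, names, and credentials below are placeholders.
import boto3

firehose = boto3.client("firehose", region_name="us-west-2")

firehose.create_delivery_stream(
    DeliveryStreamName="dynamodb-to-redshift",
    DeliveryStreamType="DirectPut",
    RedshiftDestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-redshift-role",
        "ClusterJDBCURL": "jdbc:redshift://my-cluster.abc123xyz.us-west-2.redshift.amazonaws.com:5439/analytics",
        "Username": "admin",
        "Password": "********",
        "CopyCommand": {
            "DataTableName": "page_views",
            # COPY options must match how the staged files are written (JSON + GZIP here)
            "CopyOptions": "json 'auto' gzip",
        },
        # Firehose stages Redshift-bound data in S3 first, then issues the COPY
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-s3-role",
            "BucketARN": "arn:aws:s3:::my-firehose-staging-bucket",
            "BufferingHints": {"SizeInMBs": 100, "IntervalInSeconds": 60},
            "CompressionFormat": "GZIP",
        },
    },
)
```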
Finally, you can set up a Lambda function that uses the DynamoDB Streams API to retrieve recent changes to the DynamoDB table. This function will buffer these changes and send a batch of them to Kinesis Firehose using its PutRecord or PutRecordBatch API. The function would look something like the sketch below.
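The original code listing doesn’t survive in this copy of the article, so here is a minimal Python sketch of what such a function might look like. It assumes the function is attached to the table’s stream via an event source mapping (so Lambda hands the handler batches of change records) and that a Firehose delivery stream named dynamodb-to-redshift already exists; both names are placeholders.

```python
# Hypothetical sketch of the Lambda that forwards DynamoDB stream records to Firehose.
# The delivery stream name is a placeholder; error handling is kept minimal.
import json
import boto3

firehose = boto3.client("firehose")
DELIVERY_STREAM = "dynamodb-to-redshift"  # assumed name


def lambda_handler(event, context):
    records = []
    for record in event.get("Records", []):
        # Only forward inserts/updates; deletes would need separate handling
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        new_image = record["dynamodb"].get("NewImage", {})
        # NewImage is in DynamoDB's attribute-value format, e.g. {"user_id": {"S": "u1"}}.
        # This naive flattening keeps numbers as strings; a real implementation
        # would convert types properly before loading into Redshift.
        flattened = {k: list(v.values())[0] for k, v in new_image.items()}
        records.append({"Data": (json.dumps(flattened) + "\n").encode("utf-8")})

    # PutRecordBatch accepts at most 500 records per call, so send in chunks
    for i in range(0, len(records), 500):
        firehose.put_record_batch(
            DeliveryStreamName=DELIVERY_STREAM,
            Records=records[i : i + 500],
        )

    return {"forwarded": len(records)}
```

A newline is appended to each record so the staged S3 objects read as newline-delimited JSON, which both Redshift’s COPY and (later) Athena handle cleanly.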
Putting this all together, we get the following chain reaction whenever new data is put into the DynamoDB table:
1. The Lambda function is triggered, and uses the DynamoDB Streams API to get the updates and writes them to Kinesis Firehose
2. Kinesis Firehose buffers the updates it gets and periodically (based on buffer size/interval) flushes them to an intermediate file in S3
3. The file in S3 is loaded into the Redshift table using the Redshift COPY command
4. Any queries against the Redshift table (e.g. from a BI tool) reflect this new data as soon as the COPY completes
In this way, any dashboard built through a BI tool that is integrated with Redshift will update in response to changes in your DynamoDB table.
Pros:
Redshift can scale to petabytes
Many BI tools (e.g. Tableau, Redash) have dedicated Redshift integrations
Good for complex, compute-heavy queries
Based on familiar PostgreSQL; supports full-featured SQL, including aggregations, sorting, and joins
Cons:
Need to provision/maintain/tune a Redshift cluster, which is expensive, time-consuming, and quite challenging
Data latency on the order of several minutes (or more depending on configurations)
As the DynamoDB schema evolves, tweaks will be required to the Redshift table schema / the Lambda ETL
Redshift pricing is by the hour for each node in the cluster, even if you’re not using them or there’s little data on them
Redshift struggles with highly concurrent queries
TLDR:
Consider this option if you don’t have many active users on your dashboard, don’t have strict real-time requirements, and/or already have a heavy investment in Redshift
This approach uses Lambdas and Kinesis Firehose to ETL your data and store it in Redshift
You’ll get good query performance, especially for complex queries over very large data
Data latency won’t be great though and Redshift struggles with high concurrency
The ETL logic will probably break down as your data changes and need fixing
Administering a production Redshift cluster is a huge undertaking
For more information on this approach, check out the AWS documentation for loading data from DynamoDB into Redshift.
S3 + Athena
Next we’ll consider Athena, Amazon’s service for running SQL on data directly in S3. This is primarily targeted for infrequent or exploratory queries that can tolerate longer runtimes and save on cost by not having the data copied into a full-fledged database or cache like Redshift, Redis, etc.
Much like the previous section, we will use Kinesis Firehose here, but this time it will be used to shuttle DynamoDB table data into S3. The setup is the same as above with options for buffer interval and buffer size. Here it is extremely important to enable compression on the S3 files, since that leads to both faster and cheaper queries given that Athena charges you based on the data scanned. Then, like the previous section, you can register a Lambda function and use the DynamoDB Streams API to make calls to the Kinesis Firehose API as changes are made to your DynamoDB table. In this way you will have a bucket in S3 storing a copy of your DynamoDB data over several compressed files.
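For this variant the delivery stream points at S3 rather than Redshift. The sketch below (placeholder ARNs, bucket, and names) shows where the compression setting and buffering hints mentioned above live in the boto3 call; treat it as an assumption-laden starting point rather than a complete configuration.

```python
# Hypothetical sketch: Firehose delivery stream that writes compressed JSON to S3.
# ARNs, bucket, and stream names are placeholders.
import boto3

firehose = boto3.client("firehose", region_name="us-west-2")

firehose.create_delivery_stream(
    DeliveryStreamName="dynamodb-to-s3",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-s3-role",
        "BucketARN": "arn:aws:s3:::my-dynamodb-lake",
        "Prefix": "dynamodb/page_views/",  # keeps files grouped under the Athena table's LOCATION
        "BufferingHints": {"SizeInMBs": 100, "IntervalInSeconds": 60},
        "CompressionFormat": "GZIP",  # Athena bills by bytes scanned, so compress
    },
)
```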
Note: You can additionally save on cost and improve performance by using a more optimized storage format and partitioning your data.
Next, in the Athena dashboard, you can create a new table and define its columns either through the UI or using Hive DDL statements. Like Hive, Athena has a schema-on-read system, meaning the schema is applied to each record as it is read (vs. being applied when the file is written).
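The DDL can also be submitted programmatically. The sketch below uses boto3 and the OpenX JSON SerDe to lay a table over the compressed JSON files Firehose wrote; the database, table, columns, and S3 paths are all made-up examples.

```python
# Hypothetical sketch: define an Athena table over the GZIP'd JSON files in S3.
# Database, table, columns, and S3 paths are placeholders.
import boto3

athena = boto3.client("athena", region_name="us-west-2")

ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS page_views (
    view_id   string,
    user_id   string,
    page      string,
    viewed_at string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://my-dynamodb-lake/dynamodb/page_views/'
"""

athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```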
Once your schema is defined, you can submit queries through the console, through the JDBC driver, or through BI tool integrations like Tableau and Amazon QuickSight. Each of these queries will lead to your files in S3 being read, the schema being applied to all of the records, and the query result being computed across the records. Since the data is not stored in a query-optimized database, there are no indexes, and reading each record is more expensive because the physical layout is not tuned for scans. This means that your query will run, but it can take on the order of minutes to potentially hours.
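Programmatic queries follow the same submit-then-poll pattern, which is roughly what the JDBC driver does under the hood. Again, the query, database, and result location below are placeholders.

```python
# Hypothetical sketch: run an aggregation in Athena and wait for it to finish.
import time
import boto3

athena = boto3.client("athena", region_name="us-west-2")

resp = athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS views FROM page_views GROUP BY page ORDER BY views DESC",
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state; expect long waits on unoptimized data
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(5)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```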
Pros:
Works at large scales
Low data storage costs since everything is in S3
No always-on compute engine; pay per query
Cons:
Very high query latency (on the order of minutes to hours); can’t be used with interactive dashboards
Need to explicitly define your data format and layout before you can begin
Mixed types in the S3 files caused by DynamoDB schema changes will lead to Athena ignoring records that don’t match the schema you specified
Unless you put in the time/effort to compress your data, ETL your data into Parquet/ORC format, and partition your data files in S3, queries will effectively always scan your whole dataset, which will be very slow and very expensive
TLDR:
Consider this approach if cost and data size are the driving factors in your design and only if you can tolerate very long and unpredictable run times (minutes to hours)
This approach uses Lambda + Kinesis Firehose to ETL your data and store it in S3
Best for infrequent queries on tons of data and DynamoDB reporting / dashboards that don’t need to be interactive
Take a look at this AWS blog for more details on how to analyze data in S3 using Athena.
Rockset
The last option we’ll consider in this post is Rockset, a serverless search and analytics service. Rockset’s data engine has strong dynamic typing and smart schemas that infer field types as well as how they change over time. These properties make working with NoSQL data, like that from DynamoDB, straightforward. Rockset also integrates with both custom dashboards and BI tools.
After creating an account at www.rockset.com, we’ll use the console to set up our first integration: a set of credentials used to access our data. Since we’re using DynamoDB as our data source, we’ll provide Rockset with an AWS access key and secret key pair that has properly scoped permissions to read from the DynamoDB table we want. Next we’ll create a collection (the equivalent of a DynamoDB/SQL table) and specify that it should pull data from our DynamoDB table and authenticate using the integration we just created. The preview window in the console will pull a few records from the DynamoDB table and display them to make sure everything worked correctly, and then we are good to press “Create”.
Soon after, we can see in the console that the collection is created and data is streaming in from DynamoDB. We can use the console’s query editor to experiment/tune the SQL queries that will be used in our live dashboard. Since Rockset has its own query compiler/execution engine, there is first-class support for arrays, objects, and nested data structures.
Next, we can create an API key in the console which will be used by the dashboard for authentication to Rockset’s servers. Our options for connecting to a BI tool like Tableau, Redash, etc. are the JDBC driver that Rockset provides or the native Rockset integration for those that have one.
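If you’re wiring up a custom dashboard rather than a BI tool, the same API key can be used to hit Rockset’s SQL query endpoint directly over HTTPS. The sketch below is an assumption-laden illustration: the API server hostname depends on your region, and the workspace, collection, and key are placeholders, so check Rockset’s API documentation for the exact endpoint for your account.

```python
# Hypothetical sketch: run a SQL query against a Rockset collection over the REST API.
# The hostname varies by region; the API key and collection names are placeholders.
import requests

ROCKSET_API_SERVER = "https://api.rs2.usw2.rockset.com"  # assumed region endpoint
API_KEY = "YOUR_API_KEY"

resp = requests.post(
    f"{ROCKSET_API_SERVER}/v1/orgs/self/queries",
    headers={
        "Authorization": f"ApiKey {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "sql": {
            # Collections are addressed as workspace.collection; "commons.page_views" is made up
            "query": "SELECT page, COUNT(*) AS views FROM commons.page_views GROUP BY page ORDER BY views DESC LIMIT 10"
        }
    },
    timeout=30,
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row)
```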
We’ve now successfully gone from DynamoDB data to a fast, interactive dashboard on Tableau, or other BI tool of choice. Rockset’s cloud-native architecture allows it to scale query performance and concurrency dynamically as needed, enabling fast queries even on large datasets with complex, nested data with inconsistent types.
Pros:
Serverless: fast setup, no-code DynamoDB integration, and 0 configuration/management required
Designed for low query latency and high concurrency out of the box
Integrates with DynamoDB (and other sources) in real-time for low data latency with no pipeline to maintain
Strong dynamic typing and smart schemas handle mixed types and works well with NoSQL systems like DynamoDB
Integrates with a variety of BI tools (Tableau, Redash, Grafana, Superset, etc.) and custom dashboards (through client SDKs, if needed)
Cons:
Optimized for active datasets, not archival data, with a sweet spot up to 10s of TBs
Not a transactional database
It’s an external service
TLDR:
Consider this approach if you have strict requirements on having the latest data in your real-time dashboards, need to support large numbers of users, or want to avoid managing complex data pipelines
Built-in integrations to quickly go from DynamoDB (and many other sources) to live dashboards
Can handle mixed types, syncing an existing table, and tons of fast queries
Best for data sets from a few GBs to 10s of TBs
For more resources on how to integrate Rockset with DynamoDB, check out this blog post that walks through a more complex example.
Conclusion
In this post, we considered a few approaches to enabling standard BI tools, like Tableau, Redash, Grafana, and Superset, for real-time dashboards on DynamoDB, highlighting the pros and cons of each. With this background, you should be able to evaluate which option is right for your use case, depending on your specific requirements for query and data latency, concurrency, and ease of use, as you implement operational reporting and analytics in your organization.
Other DynamoDB resources: | https://medium.com/rocksetcloud/tableau-operational-dashboards-and-reporting-on-dynamodb-evaluating-redshift-and-athena-b4dbb7135f15 | ['Ari Ekmekji'] | 2019-09-09 17:35:58.283000+00:00 | ['Dashboard', 'Tableau', 'Analytics', 'Sql', 'Dynamodb'] |