Hope Amongst Her Lands | There’s no greater truth found atop the peak of a mountain than in the darkness of a hallowed valley. Yet, there’s a moment of perspective that is sought and revealed in the two locations. In the valley, the wanderer is looking up, hoping to overcome fear while peering up at those impossible heights — and the treacherous trail ahead. And upon the mountaintop, the wanderer is looking down, feeling that accomplishment brewing inside their souls — as they spread their spirits toward the sky and grasp those infinite possibilities waiting for them below. From the sunset to the sunrise, the light understands the dark.
They see life as specks of spheres tossed around like a great Go game of existence. Yet neither view is more distinguished than the other. Each is as significant as the other — giving us the interconnected viewpoints needed to understand the truth of our being in the world. The fact that we must excavate it from nature, that we are an integrated wholeness with her. That truth, beauty, and meaning are not something she bestowed upon us but something we must gather from the riddles scattered in her skies, along with her deserts of ice and sand, drifting in her azure seas, waiting upon barren mountains, and resting serenely in swaying prairies.
In this civilization, we have lost these perspectives. Through fire, asphalt, coal, steel spires, and carbon skies, we have shielded our sight from those scattered revelations. We built skyscrapers hoping to see further, yet we see only the shadows of smog. We built houses, hoping to weather the storms of life, yet we contain it only within these walls with an existential dread that binds us from stepping outside our front doors. The journey is lost in the falsehood of comfort.
We seek certainty, but we don’t know what it is. So, we replaced it with materialism and psychologically manipulative messages on our television screens. Our airwaves are cluttered with the wrong ideas — calling out for help in our multifaceted ways. Our brains yearn for calmness to see within their vast inner space, yet we keep turning up the volume. Life is so loud. I can barely hear my thoughts as I write these words.
We need to find a balance of the soul and mind. Those worlds of logic, emotion, art, science, civilization, and nature seem separate, but that is a grave illusion. Nothing is separate, just like nothing is permanent. Everything is flux. Everything is connected. Our problem is believing the illusion as reality. The more separation between what we believe is and what actually is, the more we suffer. The valley and the mountain are the same.
Hence, we must co-exist and embrace impermanence. Right now, we’re trying to be conquerors of our world, but we’re failing. The consequence will be that she will exile us from these fertile lands, and we will wander the stars as lost memories of an ancient land that gave us so much. We will saunter until we forget her blue oceans and immense green forests — until we forget just how amazing it tastes to breathe her rich decadent air.
I’m sitting on a bench listening and watching the birds sing and fly from tree limb to tree limb — orange-chested robins chirp, communicating in their frantic way. Azure bluejays mimic predators to scare off other birds, homing in on what they truly want. The vermillion glow of luxurious cardinals sits quietly in maple trees, as red-winged blackbirds sit atop cattails. They sing and mate and dance. They live upon electrical wires, thinking them tree limbs — immersed in this new world of ours, and I wonder how loud it is to them. The cars that wisp by or the roar of planes mimicking them overhead.
Above us, turkey vultures scan and wait. For them, life and death are the same. The world has yet to change because no species has transcended this natural divide between the withering realms of entropy and time — the gods of impermanence.
Red-tailed hawks see it all. They know the truth, in a vole’s final soliloquy. They see the fall and rise of civilizations crumble in a winter’s repose. And I wonder what this exact scene will be like in a hundred years. Will the birds still sing? Will the predators again eat? How loud will life have gotten? Or will it have grown quiet, like the jungles just before the tiger pounces?
Right now, America has grown desperate. The very foundations are cracking, and we are all falling into the void. A pandemic rages across the world. And systemic racism and injustice burn up our hearts and minds. I’m afraid it’s all just too loud and painful. We need to unite. We need to come together. We need to learn to live in harmony. To grow like these trees and vines and burn bright like the fireflies at night. We need a revolution. To grow as one instead of drifting apart as none. We have to change the very foundations we are too lazy to change. We have to quiet the world while making sure we hear every voice equally. Only then can empathy connect the valleys within us to the mountaintops we long for — only then can our world truly prosper and grow.
I know these words are nothing. That I’m just an impermanent being echoing into that void, just more noise within the 1’s and 0’s of our virtual reality. I know that someday, I too will be forgotten — if not, already. But I still believe in both the valley and the mountain. I still believe in seeing ourselves today but looking ahead to the future. A future that we can write together.
I love humanity. It enrages me, yes. I’m always disappointed in humans, yet only because I care deeply. I still have hope. And I know, together, we can view ourselves — and this beautiful Earth and the universe — from both perspectives. From the perspective of the valley of our inner worlds to the peak of the mountains and our external world. Both viewpoints need to be our guides for tomorrow. We need to see the love and beauty within. We need to sing our songs and the poetry of our hearts to each other. Compassion and love are the only ways forward. Only then can we see the truth for what it is: bountiful in mystery and promise.
I still have hope. Do you? | https://medium.com/scribe/hope-amongst-her-lands-3fbd679480b8 | ['Bradley J Nordell'] | 2020-07-19 19:44:55.454000+00:00 | ['Creative', 'Existentialism', 'Nonfiction', 'Humanity', 'Nature'] |
Basics of graph plotting | Most of us data scientists go into the industry because we love data (whatever that means? No, I don’t know either!). The ability to create easily readable plots is often an afterthought. Most job descriptions will mention that being able to visualise data is important but I have never had a sensible conversation with anyone either at interview or on the job about best practices around visualisation. Given how many bad plots are out there it is most definitely not because we’re all experts at it!
There’s no point in spending 2 months doing some analysis only to have all your stakeholders stare blankly at your presentation. While creating a fantastically complicated interactive d3 visualisation may look impressive, if the underlying graph is poorly executed then it’s rather a waste of time trying to improve things with some JavaScript code. There’s a famous expression that involves glitter and a dog that I won’t repeat here.
To me, a graph should be minimalist and, where possible, self-explanatory. Therefore, I have decided to share my opinions in a blog. I have included some approaches I consider when I plot a graph (a small sketch of the idea follows below). | https://towardsdatascience.com/basics-of-graph-plotting-7eaadd11a8d | ['Matt Crooks'] | 2019-07-11 16:01:55.989000+00:00 | ['Python', 'Data Analysis', 'Data Science', 'Data', 'Data Visualization'] |
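To make “minimalist” concrete, here is a small sketch of the kind of cleanup meant above. It uses matplotlib, and the sales numbers are made up purely for illustration:

```python
import matplotlib.pyplot as plt

# Made-up monthly sales figures, purely for illustration
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
sales = [12, 15, 14, 19, 23, 22]

fig, ax = plt.subplots()
ax.plot(months, sales, color="tab:blue", linewidth=2)

# Minimalism: remove chart junk that carries no information
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.grid(axis="y", alpha=0.3)

# Self-explanatory: label the line directly instead of using a legend
ax.text(months[-1], sales[-1], "  Sales (units)", va="center")
ax.set_title("Monthly sales, Jan to Jun")

plt.show()
```

Stripping the top and right spines and labeling the line directly removes ink that carries no information, which is most of what minimalism amounts to in practice.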
Mistakes Made by Product Managers (original title: Kesalahan yang Dilakukan oleh Product Manager) | Easy read, easy understanding. Good writing is writing that can be understood in easy ways
| https://medium.com/easyread/kesalahan-yang-dilakukan-oleh-product-manager-1922980e0225 | ['Fitra Akbar'] | 2020-09-22 05:11:18.181000+00:00 | ['Product Manager', 'Indonesia', 'Startup Life', 'Startup', 'Product Manager Indonesia'] |
Being in Your Own League Makes You Thrive in All Seasons | Simona Rich · Oct 20
Photo by Arno Senoner on Unsplash
Today’s society is full of guilt so those with self-confidence are often seen as egotistic. There are of course egotistic people in the world, and many of them, yet self-confidence is different from egotism.
Self-confidence allows you to be in your own league, when you don’t look at what others are doing but do your own thing. This makes you able to create something very different from the rest, and therefore you get a premium for it.
Those who are at ease with themselves don’t try to mould their natures according to the latest trends and opinions of the world. They become the masters of their lives by growing from within — by learning to listen to the inner nature and not to that which is external.
They attract wealth and opportunities by focusing on developing themselves and mastering their art. Then all the blessings come as a by-product, which is the natural order.
I don’t follow the numbers [money]. I focus on developing my acting skills. Then the numbers naturally come. — Anil Kapoor, Indian actor
If people have identical looks, identical employments and identical goals, they will experience identical lifestyles and identical problems: money shortage, unhappiness and lack of meaning.
There’s no need for such self-sacrifice, however. By simply being yourself you are already in your own league. All you need is to own it — detach from external influences and be led by what’s in your heart.
Early decision that made me thrive
Early in life, from the age of 22, I chose to be in my own league. I did what was in my heart — to write, make videos and offer consultations.
Because I naturally did what I loved to do, blessings in the form of loyal readers and income started reaching me six months into my career, allowing me to travel the world and live in my favorite country of all — India.
South India — my favorite place in the world. Image source: my own.
Twelve years later I’m still enjoying the dividends of my choice. I’ve just settled in a pretty studio apartment in the heart of Vilnius Old Town, my favorite place in my home country Lithuania:
My studio in Vilnius Old Town. Image source: allowed for public use by my landlord.
The reason I survived and thrived all these twelve years as a blogger was that I never looked at the competition and only did what was in my heart. I never followed the trends and I never looked at what other people were doing.
Because I thoroughly enjoy my work, customers, readers and viewers feel it. This kind of energy attracts and therefore most of my customers become loyal clients and supporters.
What it means to be in your element
Being your own unique self means you’re at ease with what you are. You don’t try to change yourself by looking at what the world is doing, but you focus on the person that you are and your growth is about improving what’s already good and reducing that which is undesirable.
There’s so much power in this kind of being. Any person who becomes at peace with what she is and learns to use what she was given to the best of her ability will develop the magnetic power which is what amazes the world.
When you no longer look outside of you for guidance but are led by that which is within, you naturally attract attention. When everyone is distracted by the trendy things to do and be, isn’t it natural to stop and take interest in that which is so unlike anything else?
Being your own unique self is especially important for entrepreneurs as there’s so much competition today. If you compete, you are just like everyone else. It’s not possible to survive for long without offering something truly unique. The world is changing so fast that in its turns many businesses die — unless they have something which others do not.
How to trust yourself
Being in your own league requires having trust in yourself. Trust comes from going through various events and, instead of allowing obstacles to beat you down, learning to overcome them — by yourself.
Trusting yourself comes from independence. If you live a totally independent life — living in your own place, earning your own money from your own business, solving your own problems — this eventually helps you to get to know yourself and to make wise choices.
Trusting yourself makes you unique, the center and not the periphery, which has attractive power. Your business is bound to reflect what you are.
Isn’t it natural for people to appreciate something so unique that it can’t be compared with anything else? Something that knows its value and owns its style?
Surely people want to get that which is different, one of a kind, and they are ready to pay a premium for it. And this is what keeps you thriving even in the most difficult times. | https://medium.com/the-inspired-mind/being-in-your-own-league-makes-you-thrive-in-all-seasons-c023f04fc328 | ['Simona Rich'] | 2020-10-20 14:21:07.618000+00:00 | ['Life Lessons', 'Self Help', 'Self Confidence', 'Business Lessons', 'Entrepreneurship'] |
DreamTeam Development Update #12’2019 | Dear Contributor,
In this month’s Development Report, you will notice that we didn’t include the typical bulleted list describing what we will focus on in the upcoming month. Instead, we will detail what we’ve been working on in both November and December. As a result, in January, you will not receive a December update. Instead, you will receive a year-in-review report detailing DreamTeam’s 2019 development. Over the last six weeks, DreamTeam has added a new game to the platform, released quite a few unified feature updates, started the development of two new features, released the updated Tokenomics report, and much more. Come with us as we discuss DreamTeam’s activity in November and December.
Game Features
In November, the Games Features team focused on scalability. As a result, many of the updates cover unified features. As the new unified features include all games, the updates may seem like a platform feature. However, as the features relate to the games themselves, the Game Features team will develop and release the updates.
Unified Player Profiles — All Games
As DreamTeam continues to add games, more and more users play multiple DreamTeam supported games. The new Player Profiles allow users to connect all of the DreamTeam games with a single profile. Having all games on a single profile is useful, not only from a navigation perspective but also in terms of building a profile that showcases who each user is as a gamer. And with the shift of becoming a more social platform, this is very important. The new Player Profiles contain four sections: Personality, Games, Achievements, and Activity. Let’s take a look at each one.
Personality
The personality section helps users showcase who they are as a gamer. It includes:
Some basic info about the user.
A quote — users can write something inspirational or funny.
Connect social platforms (Discord, Mixer, Twitch, Twitter, and YouTube) — helps users get more subscribers and views.
Toxicity control — Other gamers can now rate and review users as a gamer. Hopefully, this will help control the rising level of toxicity in the gaming world.
Games
The games section allows users to connect the games they play. DreamTeam currently supports Apex Legends, CS:GO, Fortnite, LoL, and Call of Duty: Modern Warfare. However, an additional 30+ games can be added to the Player Profile, more on that later. Once users connect a game, they’ll be able to click on that game to view all of their stats and see their progress.
Achievements
The achievement section shows off users’ Challenge Badges. Challenge Badges are awarded by competing in and winning DreamTeam Challenges. As we’ve described challenges in quite a few of our previous reports, we won’t go into detail about them now. A summary can be found in the Call of Duty section below.
Activity
The activity section displays live and upcoming challenges, challenge results, and allows users to manage their cash rewards.
Unified Navigation
As DreamTeam is becoming a game agnostic platform, the navigation menu has been updated to include all games and features. We won’t go into detail how it works as it is quite simple; users can choose a game and then select any feature for that game. Having a unified navigation not only helps users understand what we have for each game but helps them quickly find the feature.
Challenge Optimization
In December, the Game Features team optimized the Challenge feature. The first thing the team did was run a special event for Apex Challenges. For the entire month of December, the event will test whether larger prize pools and more frequent Prime Challenges have an impact on the conversion and retention rates of Prime members.
The team will also start redesigning the challenge page, specific challenge cards, instruction, and a few other UX and UI details. The new Challenge page design will likely be released in January.
Prime Upgrade
In December, the team will also release the new Prime Upgrade feature. This will allow users to upgrade from basic Prime memberships to Prime Gold memberships. The ability to upgrade will make the transition from Prime to Prime Gold much more convenient and should improve our metrics.
Platform Features
In November, the Platform Features team released the new unified Registration with the option to sign up via Discord. By allowing users to sign up via Discord, the registration process becomes faster and easier. This will not only increase registration rates but also give DreamTeam an additional channel to reach out to its users.
Along with signing up and signing in via Discord, DreamTeam Profiles now features Discord integration. This allows DreamTeam to send relevant notifications directly to users’ Discord messengers. This feature will result in a rise in our retention metrics and help users stay connected to their DreamTeam accounts, even when they are signed out.
Call of Duty: Modern Warfare
DreamTeam released its fifth platform game in November and added additional features in December. To save everyone time, we’ll just detail the new game as a whole. If you have never heard of the Call of Duty series, here is some basic info. CoD was one of the most anticipated games of 2019 and is a first-person shooter developed by Infinity Ward and published by Activision. Modern Warfare is the sixteenth overall installment in the Call of Duty series.
CoD users can enjoy the same features as Apex Legend and Fortnite users. Lets quickly go over the basics of those features.
Stats
CoD Stats on DreamTeam consist of two sections: “overall stats” and “last week stats,” which are both broken down into two categories: “gaming experience” and “gaming skills.”
Quick Glance Stats
Users’ most essential stats are shown at the top of their CoD page. The “quick glance” stats include rank, level, and DT Rating. The DT Rating is a unique feature only found on DreamTeam. It shows a player’s complete skill level. Many factors are taken into account to calculate the DT Rating, including game experience, K/D, kills, many percentages, and much more.
Gaming Experience
The gaming experience section of the stats page displays the following gaming experience info:
Matches played
Winrate
Time played
Gaming Skills
The gaming skills section of the stats page displays the following gaming skills info:
K/D ratio
Score per minute/match
Total kills, deaths, assists
And much more
The “last week results” displays all of the same stats but for the previous week.
The Most Advanced LFG
The CoD LFG makes finding players even easier and faster. Here’s how:
Our LFG has the option to view posts based on when they were posted or by which players are currently online. The option to see the online players first is important because players use LFG to find someone to play with at that moment. Players no longer need to scroll through posts to find the players who are currently online or wait until an offline player replies to a message. Players can connect and start playing sooner. And as a bonus, we’ve also added a spam filter. Say goodbye to pointless posts forever.
Quick Skill Understanding
Each post displays two pieces of information related to a player’s skill: the DreamTeam Rating and CoD level. The DreamTeam Rating uses a complex algorithm to calculate a player’s overall skill based on stats and play experience. If you’re looking for a player and have a DT Rating of 8.5, you can quickly understand that it is probably a waste of time to reply to a post of a player who has a DT Rating of 0.6.
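DreamTeam has not published the DT Rating formula, so the following is only a hypothetical sketch of how a stats-based 0–10 skill score could be aggregated; the stat names, normalization caps, and weights below are all invented for illustration:

```python
# Hypothetical sketch of a 0-10 skill rating; NOT DreamTeam's actual formula.
# Stat names, normalization caps, and weights are all invented.
def skill_rating(stats: dict) -> float:
    # Normalize each raw stat into [0, 1] against an assumed "elite" cap
    caps = {"kd_ratio": 3.0, "win_rate": 0.6, "score_per_min": 400, "matches": 1000}
    weights = {"kd_ratio": 0.4, "win_rate": 0.3, "score_per_min": 0.2, "matches": 0.1}
    score = 0.0
    for name, weight in weights.items():
        normalized = min(stats.get(name, 0) / caps[name], 1.0)
        score += weight * normalized
    return round(10 * score, 1)

# A mid-skill player under these invented weights lands around 6.6
print(skill_rating({"kd_ratio": 1.8, "win_rate": 0.45, "score_per_min": 310, "matches": 420}))
```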
Filter System
Users can quickly filter LFG posts by “online on top,” platform, mic, game mode, and, most importantly, server region.
Sharing
Users can share that they are looking for players across Facebook groups and Twitter. Hopefully, users won’t need this as our LFG finds players pretty quickly. However, we like giving our users the option to promote us when we have the chance.
Prime LFG
CoD Prime Members are able to place their LFG post at the top of the list, remove post limitations, and have all of their info displayed in gold.
Daily Challenges
DreamTeam CoD Challenges are a way for users to compete and earn some cash by simply playing the game they love. There are two free 24-hour challenges and one Prime Challenge each day.
Cash and Badges
Users can receive cash prizes and unique CoD badges for their profile.
Leaderboards
The DreamTeam CoD Leaderboards display who’s the best and, well, who’s not. Users can follow their rise to the top of each leaderboard in real-time. DreamTeam CoD Leaderboards include:
DreamTeam Ratings
Average Win rate
K/D Ratio
And many more categories
The Leaderboards can be filtered by platform, period, and game mode.
In December, the Game Features team will also start the implementation of the Friends, Friends’ Leaderboards, and PayPal integration features. As these are new features, we will be sure to update you in future Development reports as their release dates near.
Game Analytics
In November and December, The Game Analytics team worked on launching several sales, events, and promos on the DreamTeam platform:
The Black Friday Sale
Apex Raffle
The Winter Challenge Event
The Xmas Sale
The US Magazine Promo
All of these sales, events, and promos are aimed at increasing several platform metrics. The only events to have concluded are the Black Friday sale and the Apex Raffle. As a result of those two events, DreamTeam saw a significant boost in acquisition, retention, and Prime conversion rates.
In addition to those sales, events, and promos, the Game Analytics team also ran a couple of pricing experiments across several geo tiers to optimize premium conversion rates, implemented DreamTeam’s first partner advertisement placement for the WESG championship, and launched a new desktop & mobile homepage for DreamTeam.gg
Throughout December, the Game Analytics team will be focused on the new platform currency and DreamTeam shop development. We will update you on this as its release date nears.
Overwolf
The Overwolf team has completed the development of the full scope, including comments and suggestions received from Overwolf reviewers. As a result, we have successfully submitted the application for the launch approval process. In short, the app is an Apex Legends companion app that PC players can use to assist them in getting a greater understanding of their skills and improving their game to increase their chance of winning.
DesignLab
In November and December, the Design Lab:
Conducted research of CoD player needs and preferences: created CoD player personas, defined stats on user profile preferences
Conducted usability testing of Challenges
Created illustrations of CoD Challenge badges
Created UX DreamTeam Shop design concept
Updated UI Challenges Concept design
Created a Stats comparison design concept
Created a Basic Friends functionality design concept
Created a Friends’ leaderboards design concept
Created UI CoD Challenges design
Blockchain and Payments
The Blockchain and Payments team released the updated DreamTeam Tokenomics paper. It can be found here.
If you missed our previous reports, you can find them here.
About DreamTeam:
DreamTeam — infrastructure platform and payment gateway for esports and gaming.
Stay in touch: Token Website | Facebook | Twitter | LinkedIn | BitcoinTalk.org
If you have any questions, feel free to contact our support team any time at [email protected]. Or you can always get in touch with us via our official Telegram chat. | https://medium.com/dreamteam-gg/dreamteam-development-update-122019-83500ef11bd3 | [] | 2019-12-27 06:51:00.961000+00:00 | ['Gaming', 'Dreamteam Development', 'Blockchain', 'Startup', 'Esport'] |
I got fooled once with my data. Not this time. | I got fooled once with my data. Not this time. Pawtocol Follow Jan 10 · 4 min read
Until recently, I never really thought about the value of data. Whenever I needed to let a new app get my info from Facebook, I agreed. It was just easier that way. I knew Amazon and Alexa were tracking every move I made when it came to their technology, and I let them. Then there is Google. I’d just about let them have everything only so I could make my life easier by using their suite of apps and software.
I didn’t mind Facebook selling access to my newsfeed only to use that data to show me specific ads I enjoyed or clicked on. I had no problem with Facebook reporting advertising revenue of nearly $50 billion last year. I mean, they are a for-profit business.
I never understood the value of it, but here is how they monetize data: As mentioned, they use it to sell ads. Their data shows marketers that you’re in a certain geographic region, your recent searches (mine were about white sneakers), and all other information pertinent to making that sale to you. And it works because they know what you want. In a total act of brazen, egregious behavior, they sell all your data outright to aggregators who are then able to do just about anything they want with it. They collect the data for their internal product development and then give you access to their products, such as Google, who has developed software we use every day thanks to our data points.
I never cared about any of this until I started noticing a growing trend in the news of some incredible abuses of our data that were downright scary:
Equifax data breaches of 150 million Americans’ credit data, including Social Security numbers and bank accounts
Capital One, with 106 million banking and credit card records
4 billion social media account hacks in 2019
The Cambridge Analytica scandal, where millions of Facebook accounts were surreptitiously accessed to sway an election
That last one was particularly troubling. Not because my Facebook profile contains sensitive information, but because I now questioned what Facebook was doing with all that profit if it wasn’t protecting me from something like this.
Then Andrew Yang came along and talked about the importance of data privatization and, most recently, the California Consumer Privacy Act was passed, which protects consumers’ data rights. All of this made me rethink data ownership. However, I was too far down the rabbit hole to backtrack, but am now far enough ahead not to make the same data privacy mistakes — both personally and professionally.
Now, working in the pet industry, which generates huge amounts of data (there are approximately 800 million pets globally), I know from the onset that their data has value, which makes me more motivated to protect their information.
At Pawtocol, pets own their data. It’s a complete 180 from everything we’re used to in the world of data.
With Pawtocol all your data is in one safe place.
Pawtocol is an online platform bringing blockchain and AI to the entire $100B+ pet industry, built on privacy-first design principles: all users retain full custody of all platform data and the rights to anonymously sell their data for direct compensation.
These pets will keep their data and make money.
In this new pet economy that Pawtocol will introduce to the world, Pet parents can earn Universal Pet Income (UPI) by sharing their pet’s data with veterinarians, retailers, manufacturers, or researchers who can then use that data to improve their operations. In a way, this means pets can help to support themselves, and the community at large.
That UPI can then be spent by their two-legged companions on anything from treats to vet bills or traded on an exchange for cash. Read more here about data monetization.
We’re making this easy and familiar with our app. Our interface makes it simple to be part of this next tech generation by making it easy to input data and gather your results. You’d never even know you’re using a crypto app.
When it comes to that data, we don’t store nor do we manage it. Most importantly we don’t sell your data.
Learn more about Pawtocol: | https://medium.com/age-of-awareness/i-got-fooled-once-with-my-data-not-this-time-10a1473f5e00 | [] | 2020-01-10 17:22:40.744000+00:00 | ['Privacy', 'Technology', 'Pets', 'Tech', 'Startup'] |
AI and Data Science for Dummies — Chat with Classmates | These are my answers to questions about AI and its business practice, discussed among ~200 of my fellow classmates from IIT Bombay. They are modified slightly to protect privacy, to remove specific references and for better narration. This is the first part of a series of these posts. The second part discusses insights about ‘Why Doesn’t AI Work?’ , the third about ‘AI Hacks That Do Work.’ and the last about ‘Why and How to Get Started with AI.’
Data Science
At the simplest level data science is just that — a scientific analysis of data. In the fourth grade, when we all learned how to make simple graphs, we had become data scientists already.
You would think that I am exaggerating to make a point. Well, look up Microsoft’s corporate strategy and its focus on a new product called PowerBI — they are making a massive push on it as a way to cement Windows-based systems in enterprises. Then look up demos they have for PowerBI. There is plenty available on YouTube. These demos talk about dash-boarding and how the extremely powerful software can visualize your darkest, deepest data to make excellent plots. And then tell me if a fourth grader can’t make those dashboards.
Of course, there is a lot more to PowerBI than making bar graphs, but the point is that even at that simplest level data science can be very powerful. Add mean and standard deviation to it, and you have covered almost everything in the world of business analytics. Sure, the size of data has bloated recently, particularly because of a take off in deployment of sensors and embedded devices (IoT). Still, your biggest intellectual problem as a data analyst is how to clean the various formats of data, rather than how to process it.
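As a tiny, hedged illustration of that cleaning problem (the column names and messy values below are invented; the pattern of coercing mixed formats into usable types is what matters):

```python
import pandas as pd

# Invented example of the messy, mixed-format data analysts actually receive
raw = pd.DataFrame({
    "date": ["2019-07-01", "07/02/2019", "July 3, 2019"],
    "revenue": ["1,200", "$950", "1100.5"],
})

# Coerce everything into consistent types before any "science" happens
raw["date"] = pd.to_datetime(raw["date"], format="mixed")  # pandas >= 2.0
raw["revenue"] = raw["revenue"].str.replace(r"[$,]", "", regex=True).astype(float)

print(raw.dtypes)  # both columns now have proper datetime and float types
```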
Artificial Intelligence
There is a small portion of data science world that focuses on using data to write better programs. Here is the intuition behind it. The simplest programs are ‘Do X’. They are very powerful and make up the foundation of the programming world.
Smarter programs say ‘If A do X else do Y.’ I don’t have to explain this, except to say that almost all programming in the last century, and most of programming in this, is as simple as that. Rules engines, and the so-called expert systems are but a set of chained, nested and looped if-else statements.
The breakthrough behind the field of artificial intelligence started with a simple question — can a machine automatically figure out the condition A in that statement and write these rules itself. We can convert ‘If A do X else do Y’ to ‘c = Cx if A else Cy’ and then depending on the value of c we can perform X or Y. Suddenly this is as simple as a classification problem. If we are given a set of pre-labelled data points, can we find a model, A, which can classify a new data point to Cx or Cy (or one of a number of classes in the generalized case)?
If we can do that then we don’t have worry about the if-else statements. All we need to do is to get that set of pre-labelled data points, also called training data, run the machine, and go home. We have learnt so many techniques to do classification from the fields of algebra and statistics — Naïve Bayes, logistic regression, decision trees, and what not.
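Here is that whole idea in a few lines, as a sketch assuming scikit-learn is available; the toy points and labels are invented:

```python
from sklearn.tree import DecisionTreeClassifier

# Pre-labelled training data: each row is a data point, each y is its class
# (0 = "do X", 1 = "do Y"); the numbers are invented toy features.
X = [[25, 0], [35, 0], [46, 0], [55, 0], [47, 1], [52, 1], [56, 1], [60, 1]]
y = [0, 0, 0, 1, 1, 1, 1, 1]

# The machine figures out condition "A" itself -- no hand-written if-else
model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[40, 1]]))  # class for a brand-new data point
```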
Congratulations! If you have ever fit a line to some data, you have programmed an artificially intelligent system.
Why is this important?
So, what’s the big deal? Three things — one, this is a big deal by itself. You have no idea how many artificially intelligent systems seldom use anything more than probabilities. If you want to get more complexity, a popular machine learning algorithm is called Random Forest. It involves making decision trees based on multiple samples of the data, hence the forest, and then taking the mode or the median of the decisions by each of the trees. It’s pure statistics, nothing fancy. However, this is now empowering almost every aspect of human life. Turn anywhere, and it is likely that an intelligent machine like this is helping you along.
Neural Networks
Second, they figured something called a neural network. Each node in this network is essentially a weighted sum. You take a set of inputs, you weigh each of them and you sum them up. Simple.
Let’s make it real. In the fourth year at college one of my friends John (name changed) was really trying to impress this girl, Jane (name changed), who was a co-volunteer at a non-profit called Magic Bus. Magic Bus works for under-privileged children and organizes various camps and events in its efforts. John’s decision tree to go or not to go to an event was simple — if she was coming, John would brave everything and go. Otherwise if the event was a party (vs. a hike or a camp) and it was not raining, John would go.
Let’s say a bright-eyed data scientist plotted John’s behavior over the year. He/she could have taken three binary variables, a = whether Jane was going to attend, b = whether the event was a party, and c = whether it was going to rain. It would be very simple to write an equation p = w1.a + w2.b + w3.c, and set a threshold to predict if John was going to that event or not. That is the simple neuron in data science that everyone seems so crazy about. With the right set of weights, it would have predicted John’s behavior accurately.
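As a sketch, John’s “neuron” is nothing more than the following; the weights are hand-picked for illustration, not learned:

```python
# John's decision as a single neuron: weighted sum plus a threshold.
# a = Jane attending, b = event is a party, c = rain forecast (all 0 or 1)
def john_goes(a: int, b: int, c: int) -> bool:
    w1, w2, w3 = 2.0, 0.6, -0.6   # hand-picked weights, for illustration only
    threshold = 0.5
    p = w1 * a + w2 * b + w3 * c
    return p > threshold

print(john_goes(a=1, b=0, c=1))  # Jane is coming: John braves the rain -> True
print(john_goes(a=0, b=1, c=1))  # a rainy party without Jane -> False
```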
Let’s say Jane was also deciding based on weather forecast and the type of the event. Then there are two independent inputs, one hidden layer with a node for her decision (+ two to pass original inputs) and then one node for the final decision. How about whether John was going to wear his new jeans or not — so now we are talking about two nodes in the output layer. You can see how quickly it becomes a network of neurons.
The important thing is that we need to find the right set of weights. There are multiple algorithms to automatically detect these weights based on a given set of inputs and corresponding outputs. Something called Gradient Descent rules the roost.
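A minimal sketch of that weight-finding loop: stochastic gradient descent on a single logistic neuron, trained on a made-up version of John’s year:

```python
import math

# Each row: ((Jane attending, party, rain), did John go?) -- made-up history
data = [((1, 0, 1), 1), ((1, 1, 0), 1), ((0, 1, 0), 1),
        ((0, 1, 1), 0), ((0, 0, 0), 0), ((0, 0, 1), 0)]

w = [0.0, 0.0, 0.0]   # one weight per input, learned from data
bias = 0.0
lr = 0.5              # learning rate: how big each correction step is

for _ in range(2000):                     # many passes over the data
    for x, target in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + bias
        pred = 1 / (1 + math.exp(-z))     # sigmoid squashes z into (0, 1)
        error = pred - target             # gradient of log-loss w.r.t. z
        w = [wi - lr * error * xi for wi, xi in zip(w, x)]
        bias -= lr * error

print([round(wi, 2) for wi in w], round(bias, 2))
```

After training, the learned weights reproduce John’s pattern without anyone writing an if-else statement by hand.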
It turns out that neural networks can transparently replace most statistical classification algorithms. This is very powerful, because now you can focus on one technique for a wide variety of problems. We should be teaching neural networks in seventh grade instead of linear regression. With one hidden layer between input and output, a neural network can also approximate any continuous (even polynomial) relationship given sufficient data. This is called a Multi-Layer Perceptron with one hidden layer, or MLP1.
Deep Learning
Does anyone remember Newton and his iterative method of finding answers? For complex equations of the type x = f(x), with x on both sides, you would assume a value of x for the RHS, compute x on the LHS and then use that value for the RHS, and so on. You would continue till the difference between the values of x in subsequent iterations was near zero.
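That iterative trick in a few lines, e.g. solving x = cos(x):

```python
import math

# Fixed-point iteration for x = cos(x): feed each answer back in as the guess
x = 1.0
for _ in range(50):
    x = math.cos(x)
print(x)  # converges to ~0.739, where x equals cos(x)
```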
Same deal here — why do we have to decide directly on the inputs? We will find interim values, and then use those values to find the next set of interim values, and only after doing that 100 times will we decide on the output. In other words, you are adding more and more layers of neurons between the input and the output layer. This is called a Deep Neural Network, and the process of training it is called Deep Learning. It is very useful for non-linear classifications, like predicting whether a set of pixels represents a nose.
Complex AI Models
Here is the third big deal with AI, and it’s not that intuitive. To make any neural network work we must train it and get the right set of weights in the network. It turns out that the weights themselves contain a lot of value.
There is a very popular model in NLP called Word2Vec. It comes up with a set of numbers (a vector) for each word. Vectors for words with similar meaning will have numbers very close to each other. You can also do things like [King] — [Man] + [Woman] and get the vector for [Queen]. These vectors are, in fact, the weights from certain neural networks built for some task like predicting a word from its surrounding context.
Once scientists figured out how the weights in neural networks carry so much value, they went crazy. Many of the most advanced models are a stack of neural networks where the weights are passed from one to another to get very sophisticated things done.
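The famous analogy query looks roughly like this with the gensim library, assuming the pre-trained “glove-wiki-gigaword-50” vectors can be downloaded (any word2vec-style model would do, and exact results vary by model):

```python
import gensim.downloader

# Download pre-trained word vectors (~66 MB); any word2vec/GloVe model works here
vectors = gensim.downloader.load("glove-wiki-gigaword-50")

# [King] - [Man] + [Woman] ~= [Queen]
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```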
The Promise
The promise is insane. Now, as long as you have sufficient data, you can teach a machine to program itself and learn the most sophisticated, convoluted, non-linear relationships. The beauty is that you don’t have to understand those relationships yourselves, let alone articulate them. You can now afford to be completely ignorant. It’s not hard to imagine that in the near future machines will be collecting all the data and making all the predictions, while humans will be focused on making smarter machines. Take any problem, select some [hyper-]parameters of a neural network, go to bed. Now, in fact, they have begun automating the process of selecting these hyper-parameters as well.
That is the promise. The reality? Coming up.
Next part in the series: ‘Why Doesn’t AI Work?’ | https://medium.com/ai-in-plain-english/ai-and-data-science-for-dummies-chat-with-classmates-359e18dcc529 | ['Praful Krishna'] | 2020-10-06 03:54:28.947000+00:00 | ['Machine Learning', 'Data Science', 'Neural Networks', 'Artificial Intelligence', 'Programming'] |
How to Stop Procrastinating in 15 Minutes | The Everlasting Battle of Motivation vs. Resistance
Whenever you decide if you’re going to do something or not, there’s a tug-o-war inside your brain between two opposing forces — motivation and resistance.
Motivation is everything that pushes you towards action. Your bad conscience, the good feeling afterward, and the promise you’ve given your accountability partner.
Resistance is everything that holds you back. Your warm bed, your desire to relax, all the other fun things you could do, and the horror you experience when you think about the huge mountain of work that seems impossible to dig through.
If motivation is greater than resistance, you act. If resistance is greater than motivation, you don’t.
In my roommate’s case, his resistance was bigger than his motivation. He knew he needed to get things done, but the thought of spending the next four hours glued to a laptop screen instead of enjoying his weekend made him resign. You can’t blame him.
Motivation is a finite resource. You can boost it by watching an inspirational video on YouTube, but on some days even the “Motivational Speech Megamix Vol. 15” won’t do the trick. Plus, you’re stuck on YouTube again, a prime source of distraction. Instead, you have to lower the resistance.
When you’re stuck with a task you dislike or dread, even the tiniest bit of resistance can make the difference between action and avoidance.
How can you lower the resistance? By drastically reducing the amount of work you set out to do.
Commit to only 15 minutes.
You greatly reduce the resistance and turn the towering mountain of work into a molehill you can easily take care of. Even the most dreadful task will seem doable when you know it’s only for 15 minutes. I could even talk to my future mother-in-law for that time.
Now you’re of course wondering how you’ll get all of your work done since your two-hour tasks can’t just be cut down to 15 minutes. This is where the power of default options comes into play.
What physics has to say about procrastination
In physics, the resistance you have to overcome to drag an object over a surface is calculated using friction coefficients. The coefficient of static friction is almost always bigger than the kinetic (sliding) one, which means that giving an object the initial push is much harder than keeping it moving. If you ever pushed a dead car down the road you know what I mean.
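In symbols (a standard textbook relation, added here for illustration; N is the normal force pressing the object onto the surface):

$$F_{\text{static,max}} = \mu_s N \;>\; F_{\text{kinetic}} = \mu_k N \qquad \text{because almost always } \mu_s > \mu_k$$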
Procrastination is much like physics — getting started is the hard part, staying in motion not so much.
Once you’re immersed in an activity, your default option is to keep going. When you watch YouTube or Netflix and have autoplay enabled, you’re much more likely to stay for “just one more video.” When you’re already in your PJs, you’ll go to bed instead of hitting the bars. And when you’re already working, you’re much more likely to go on even after the 15 minutes have passed.
My roommate took this to the extreme. He committed to doing just two minutes every evening — almost zero resistance. But on average, he ends up working for about 45 minutes and accumulates three solid hours of work on his thesis before the weekend even comes around. Good for him, bad for me — kicking his ass was fun.
Commit to small efforts and then keep going. It’s much easier than facing everything at once. | https://medium.com/illumination/how-to-stop-procrastinating-in-15-minutes-b07ac4c25cde | ['Moreno Zugaro'] | 2020-11-13 03:12:31.536000+00:00 | ['Work', 'Procrastination', 'Advice', 'Productivity', 'Self Improvement'] |
Hello Gail, My First Winter Storm. | Hello Gail, My First Winter Storm.
I was a child again, for a day.
Photo by Kelly Sikkema on Unsplash
Snow day.
It started in a flurry — the storm I’ve been waiting for.
I stood by the window, working, one eye on the situation, watching fat flakes fall from the sky, powdering the ground in white.
My eyes delighted at the sight, for I’m from a tropical island, witnessing the first snowstorm of my life.
So new to me, I feel like a child.
And like a child, I left the warmth of the house and stepped onto the porch.
Snowflakes swirled in the wind, so clean and cold, deliciously refreshing. The tiny things caressed my cheeks, clung to my hair, lightly, gently. Such a cute feeling!
It’s unlike the storms I know, the tropical rainstorms, which come down heavy and mad. They drench bodies in an instant and drown all sounds with their ferocity.
A snowstorm is quiet, except for the wind.
I left footprints in the pristine whiteness, in my down jacket and unbearably thin lounge pants. Face upturned, there was no need to catch a flake with my tongue. They flurried into my mouth, slightly parted in a smile. Oh yes, I’m a dork. And the snow tasted like freshness.
Soon the cold caught up, work beckoned. I left the cold, but not the window, I wanted to soak up every second of it even as the light faded from the sky.
The night that followed was bright. Snow blanketed the streets, the houses decked in lights and the tops of cars, like the set of a Christmas movie. It looked both cold and cozy at the same time.
Later, I listened to the patter of the snow shower, the soft whipping of the wind. I imagined being out tomorrow, kicking up the powdery snow, a laugh escaping from my throat. Except that’s not my laugh. Just a memory from some TV show.
Experiencing my first winter storm, my wonder rekindled. The snowstorm had reduced me to a child again. I smiled, happy and warm under the covers. | https://medium.com/literally-literary/hello-gail-my-first-winter-storm-36b9768bed5c | ['Julie X'] | 2020-12-19 05:32:01.238000+00:00 | ['Happiness', 'Nonfiction', 'Joy', 'Life', 'Winter'] |
The Importance of Research in University Essay Writing | With university essay writing, the word count of the paper can often be deceptive. You might look at a 2,500 or 3,000-word paper and feel your heart sink at the thought of undertaking something of that size. But the most important qualifying factor, and the one which will really determine how much work is going to go into the paper, is, without a doubt, the research component. Let’s face it, if you are a young person at university nowadays, you can probably already type around 100 words-per-minute. If it was just a matter of writing 2,500 words, you could satisfy the word count requirement in under half an hour.
But being able to write both a good and a fast essay requires being able to do good research. Time constraints and competing obligations are part and parcel of university life. If you are taking a full course load, you will, inevitably, find yourself trying to determine whether you should sacrifice time on project A and devote it to project B, simply because dedicating an equal amount of attention to both isn’t workable. Below are some tips and considerations for doing good research and writing a solid, time-sensitive research paper.
The find function of “ctrl+f”
For anyone unfamiliar, the find function or keyboard shortcut “ctrl+f” will be one of the most useful research tools you have at your disposal during your university career. The find function allows you to quickly locate vital information in a text of any size so that you can extract it and make it part of your paper. A side note: the find function only works on Word and PDF documents. Sometimes you will be given texts to read that are made up of photocopies of pages from a book, paper, anthology, etc. When that is the case, they are most likely .jpg files and the find function won’t work. You’ll have to employ your skim-reading abilities here.
Let’s say you are writing a paper about the effects of new media technologies (e.g. Facebook) on depression among high school students. You have a section in your paper dedicated to the effects on girls, and another to its effects on boys. You open a peer-reviewed journal article you found on your library’s website. It’s 25 pages of dense information. You could read the whole thing, although that would take hours. You could try to skim read it, though you might end up missing what you’re looking for a couple of times before you eventually find it. Or, you could employ the find function, search the word “girls” and then “boys” and then cycle through every instance of the word in the paper until you find something that seems useful.
Wikipedia first
Throughout your undergraduate career you will be inundated with new information, concepts, and ideas. That is, after all, the point. Discovering all that new information can be both thrilling and intimidating. It is humbling to be constantly finding out how limited your knowledge and understanding of the world is. You might begin to write a paper on a topic you have never encountered before and feel stuck.
A good way to help you set up a framework for subsequent research is to see what Wikipedia has to say on it. While you are often expressly forbidden to cite Wikipedia as a source in your papers (often for good reason: the review process for articles is not that rigorous, and the articles themselves are often not written by experts), you can use Wikipedia to help structure your paper. As an aside, Wikipedia is often over-eagerly dismissed by academics on principle alone, despite a 2005 study in the journal Nature finding Wikipedia nearly as accurate as the Encyclopedia Britannica.
Wikipedia is a great starting point for research because it provides you with easy-to-digest background information, as well as breaks down topics into their constituent parts. If you are trying to write the standard five-paragraph paper, Wikipedia can provide subheading ideas, as well as keywords that you can then search in more trustworthy research and library search engines.
Knowing how to find information is important in and out of school
Knowing how and where to look for information is a skill that is acquired and sharpened over time. There is a reason so many courses, especially first year courses, have an entire introductory assignment whose purpose is to familiarize you with basic research methods and resources. If you pay attention to anything during the first couple weeks of your course, it should be this. Knowing how to search out and find information is half the battle when it comes to research (extracting and utilizing the information is the other). Additionally, knowing how to do your own, independent research will provide you with skills that your future employers will find valuable — such as the ability to find creative solutions to problems.
If you know where information is to be found, and how to quickly sort through available resources, you will cut hours off the essay-writing process. Some professional essay writers and essay writing services are so adept at conducting research, that they are able to complete relatively large last minute research essays for students (i.e. essays that are due within six hours of being requested).
Why University Essay Writing Matters
Conducting research is an essential part of university essay writing, and one of the most important skills you will be given an opportunity to work on and sharpen throughout your undergraduate degree. These are skills that will not only help you write a coherent, well-planned, and often much quicker paper, but that will serve you well when you eventually make your way out into the real world and start living your adult life.
Keep the above research tips and considerations in mind when writing essays at university, and if you still feel that your assignment or term paper is lacking, reach out to Homework Help Global and let one of our professional essay writers get you the mark you deserve.
References:
Hsu, J. (2009). “Wikipedia: how accurate is it?” Live Science. Retrieved from: https://www.livescience.com/7946-wikipedia-accurate.html
Randal, D. (2013). “Independent research cultivates skills employers value.” The Huffington Post. Retrieved from: https://www.huffingtonpost.com/donna-randall/independent-research-cult_b_2914807.html | https://medium.com/the-homework-help-global-blog/why-university-essay-writing-depends-on-conducting-good-research-27baf19def2b | ['Homework Help Global'] | 2019-07-22 17:24:57.585000+00:00 | ['Essay', 'Essay Writing', 'Custom Essay Writing', 'Custom Essay', 'Writing'] |
The Start | One
this is the start of pages and pages
poems in poems
a story
sloppy sometimes
mostly
uncertain spaces
misguided underscores
I don’t know what I am doing
but I am doing
something. everything.
this is the middle and
a shifting vantage point and
my whole heart except
the parts I hide (most of them)
I am a bird with chopped remiges
fidgeting in your love, hurting more
in metal skin
this is the way
murky, nebulous, inchoate
yes, I am afraid. | https://medium.com/meri-shayari/the-start-e3d02586fd8e | ['Rebeca Ansar'] | 2020-05-05 23:18:44.891000+00:00 | ['Storytelling', 'Self', 'Emotions', 'Poetry', 'Poem'] |
Hooked on Social J | My decision to become a journalist is, what I see as, the coalescence of my previous career choices, each inspiring and bolstering the next. Although I feel confident with my decision, it took me working many different jobs to reach it.
I attended Boston University for my undergrad, where I focused on film and television communications. Upon graduating in 2015, I was fortunate to land a job at an entertainment PR firm, publicizing independent films and feature documentaries, such as Class Divide and Zero Days. The exposure to incredible filmmakers and their powerful works inspired me to move from publicity to production.
I left my job in PR to pursue a career as a freelance video producer. Reaching out to all my connections in the production community, I managed to pick up a semi-steady stream of gigs, from PAing commercials to editing start-up promotions, to shooting music videos. With my budding skillset, I managed to land a job on the production of HBO’s Rolling Stone: Stories From the Edge and even produced two documentary shorts of my own.
Despite my growing network, I was not making enough money to support myself. So, I took a part-time job at the school where my mother works as a physical therapist: the Henry Viscardi School (HVS) in Long Island, New York. The school offers students with severe disabilities a traditional, yet specialized, educational setting along with “a variety of therapies, assistive technology and medical supports.” I worked as a teaching assistant there, helping feed, transfer and teach students with disorders such as spinal muscular atrophy (SMA) and cerebral palsy (CP).
Please ignore Mr. Danny’s lopsided collar
I gained an extraordinary perspective at HVS. Connecting with these students helped me understand their experiences. It taught me what it’s like for them to access our inaccessible world. Above all, it demonstrated to me the importance of finding the humanity in others. Working at the Henry Viscardi School boosted me over what Arlie Russell Hochschild describes as my empathy wall. Equipped with this new perspective, I applied to the Craig Newmark Graduate School of Journalism at CUNY.
Hoping to gain a fundamental knowledge of journalism, I originally applied and was accepted into the MA in Journalism program. However, after hearing Jeff Jarvis and Carrie Brown discuss an empathy-based approach to journalism that puts the communities’ needs first, I was hooked on Social Journalism.
With the techniques I acquire moving through the program, I hope to serve my community of differently-abled friends, particularly those living with neuromuscular disorders, like SMA and CP. Empathizing with this community of people is what exposed and encouraged me to correct my own misconceptions. Now my goal in the social journalism program is to understand how to replicate this experience through my work.
Wish me luck. | https://medium.com/access-granted/hooked-on-social-j-d8f4a92ee4e4 | ['Daniel Laplaza'] | 2018-12-13 23:23:13.786000+00:00 | ['Social Journalism', 'Cerebral Palsy', 'Spinal Muscular Atrophy', 'Socialj19', 'Journalism'] |
Journalists and Techies: An Important Alliance in Press Freedom, Safety | When journalists go missing — or worse, are murdered for investigating and reporting on government corruption or drug cartels — their investigative work and documents often disappear along with them. The work for which the journalist risked their life or freedom ends.
Efforts are underway to ensure that an investigative journalist’s work continues and is brought to light in the event a reporter’s life is tragically cut short, according to Javier Garza, a former Knight Fellow at the International Center for Journalists and former editorial director of El Siglo de Torreón. Garza said Andres D’Alessandro, executive director of the Newspaper Editors Association of Argentina, had an idea: What if a journalist’s notes, documents, etc. were preserved in a space, akin to a cloud, but with far more limited access and far greater security? In the event of a journalist’s jailing, disappearance or murder, another trusted journalist with access could pick up where the jailed or murdered journalist left off. Communication, trust, and the proper tech tools are key in making this journalistic handoff possible.
Keeping Investigative Information Safe
D’Alessandro asked Garza to partner with him to help give shape to this idea and bring it to fruition. The app or tech tool is still in its planning stages, but is necessary for journalists risking their lives to expose criminal behavior and corruption, said Garza, a fellow at the International Center for Journalists, who also does security work for journalists of the World Association of Newspapers and News Publishers.
“There’s a need to protect materials that are part of an investigation,” said Garza, who is also a working journalist and hosts a radio show in his native Mexico. “This way, if a journalist senses that an investigation might be dangerous, they can protect their materials by loading to a server and then that server could be accessed by other journalists.”
Keeping journalists safe and their work secure is a topic Garza has researched extensively and continues to work toward. Last year, he published Journalists Security in the Digital World: A Survey: Are We Using the Right Tools?
A Matter of Life and Death
Security and secure tools are important life and death topics impacting journalists across the globe, particularly in Latin America, the Middle East, Central and Southeast Asia and Western Europe. Garza’s survey revealed that 70 percent of journalists do not use secure file storage and sharing and even fewer use encryption, geo-tracking, and risk-assessment tools. This, despite 45 percent of respondents indicating “they’ve had a security experience that could have been improved by a digital tool,” according to the survey.
According to the Committee to Protect Journalists, ten of the 30 journalists who have been killed this year were killed in Mexico. That rate is on par with the one dozen journalists killed in 2016.
Garza knew Javier Valdez, a fellow Mexican journalist, who was gunned down in broad daylight in May. Valdez was one of the founders of Ríodoce, a weekly that reported on crime and corruption in Sinaloa, a state known for rampant drug trafficking and violence. Valdez’s highly publicized murder was condemned by international groups, but like 90 percent of journalists’ murders, remains unsolved.
Speaking the Same Language
Garza hopes there will be a secure tool that would work much like Google Drive or Dropbox, which could be accessed from anywhere should a journalist be driven into exile, go missing or be murdered.
Among the biggest hurdles he faces, Garza said, is having journalists and software developers communicate and understand one another. “The main issue has been to try to get understanding between the people who are developing circumvention technologies that help provide protection and avoid censorship and those who need and would use those technologies to speak the same language.” Journalists, particularly old-school journalists, are not tech-savvy — “they came of age between Atari and Nintendo” — while some developers lack understanding of the particular needs of journalists and human rights defenders and activists, Garza said.
A Work in Progress
While social media has helped add a layer of security, the need for tech tools to enhance journalists’ safety is still a work in progress.
“What would (the proper tools) look like in terms of architecture?” Garza asked. “Once we have a clear idea, then we can bring in some journalists, do some pilot runs, and make journalists aware that the tool is available.”

Source: https://medium.com/iff-community-stories/journalists-and-techies-an-important-alliance-in-press-freedom-safety-40fc8af7fb88 (Sylvia A Martinez, 2017-10-30). Tags: Journalism, Death, Security, Tech, Digital Marketing
Detect Your Face Parts on Your Browser

By dannadori, Nov 3
Introduction
I’d like to enable all the machine learning and AI models in the world to work in a browser! So for a little while now, I’ve been working diligently on creating an npm package that works with tensorflowjs + webworker [Here]. In the last article, I wrote about animating images in the browser using the White-box-Cartoonization model.
In this article, I‘m going to try to detect the parts of the face on the browser. This is what it looks like. Note that the picture on the left was created at https://thispersondoesnotexist.com/.
Goal of this article
There are several types of AI models that detect parts of the face.
The model for detecting landmarks was briefly introduced in a previous article. So we won’t deal with this one in this article.
In this article, we will try to segment the parts of the face as shown in the above “Introduction”. The model we will use is BiseNetV2. As you can see in the table, it seems to run super fast.
Specifically, for a 2,048×1,024 input, we achieve 72.6% Mean IoU on the Cityscapes test set with a speed of 156 FPS on one NVIDIA GeForce GTX 1080 Ti card, which is significantly faster than existing methods, yet we achieve better segmentation accuracy.
Even with a GPU, is 156 FPS at 2048x1024 really achievable? I’m not sure. Well, how fast will it be in the browser? In this article, I’d like to use BiseNetV2-Tensorflow, an unofficial tensorflow implementation, because it’s easy to convert to tensorflowjs.
Approach
Basically, you do the following
clone the repository
download the checkpoint
freeze the checkpoint
convert to tensorflowjs
However, for some reason, the freeze script in the above repository has the optimization step commented out (*1), so we need to turn it back on. Also, the input resolution (shape) is fixed at 448x448, so I’d like to make it variable.
*1 3rd/Nov./2020, commit 710db8646ceb505999b9283c0837c0b5cf67876d
Operation
(1) First, go to github and clone BiseNetV2-Tensorflow.
(2) Next, we will download the trained checkpoints described in the Readme. In this case, we’ll create a checkpoint folder and put the files in it.
$ ls checkpoint/ -lah
-rw-r--r-- 1 33M 11月 3 06:24 celebamaskhq.ckpt.data-00000-of-00001
-rw-r--r-- 1 36K 11月 3 06:24 celebamaskhq.ckpt.index
-rw-r--r-- 1 11M 11月 3 06:24 celebamaskhq.ckpt.meta
-rw-r--r-- 1 43 11月 3 06:24 checkpoint
(3) Next, freeze the checkpoint. If you don’t want to change the code yourself, please clone this repository, which includes the files I edited, and skip to (3–1).

We will use the script `tools/celebamask_hq/freeze_celebamaskhq_bisenetv2_model.py` to freeze the checkpoint. However, in the current version, the optimization step is commented out, so we need to uncomment it. After the change, that part reads:
optimize_inference_model(
    frozen_pb_file_path=args.frozen_pb_file_path,
    output_pb_file_path=args.optimized_pb_file_path
)
Also, in this script, the input resolution (shape) is fixed at 448x448, so you can change it to accept any resolution.
input_tensor = tf.placeholder(dtype=tf.float32, shape=[1, 448, 448, 3], name='input_tensor')
-> input_tensor = tf.placeholder(dtype=tf.float32, shape=[1, None, None, 3], name='input_tensor')
With this change, the network is built with a tensor of an unknown shape, so some of the network definition process will not work. Let’s fix that part too.
The target file is `bisenet_model/bisenet_v2.py`. The parts named ’semantic_upsample_features’ and ’guided_upsample_features’ cannot handle an unknown shape, so we fix them to look up the shape at runtime. You can use the tf.shape method.
x_shape = tf.shape(detail_input_tensor) # <------ here
semantic_branch_upsample = tf.image.resize_bilinear(
semantic_branch_upsample,
x_shape[1:3], # <------ here
name='semantic_upsample_features'
)
The upsampling process also cannot handle unknown shapes. Fix it as well:
input_tensor_size = input_tensor.get_shape().as_list()[1:3]
-> input_tensor_size = tf.shape(input_tensor)
output_tensor_size = [int(tmp * ratio) for tmp in input_tensor_size]
-> output_tensor_size = [tf.multiply(input_tensor_size[1],ratio), tf.multiply(input_tensor_size[2],ratio)]
(3–1) Now it’s time to freeze the checkpoint. Execute the following command:
$ python3 tools/celebamask_hq/freeze_celebamaskhq_bisenetv2_model.py \
--weights_path checkpoint/celebamaskhq.ckpt \
--frozen_pb_file_path ./checkpoint/bisenetv2_celebamask_frozen.pb \
--optimized_pb_file_path ./checkpoint/bisenetv2_celebamask_optimized.pb
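Before running the converter, it can help to sanity-check the frozen graph. The snippet below is a minimal sketch assuming TensorFlow 1.x (which this repository targets); the tensor names come from the placeholder defined earlier and the converter’s output node.

import tensorflow as tf

# Load the optimized frozen graph and confirm the input/output tensors exist
with tf.gfile.GFile('./checkpoint/bisenetv2_celebamask_optimized.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    input_tensor = graph.get_tensor_by_name('input_tensor:0')
    output_tensor = graph.get_tensor_by_name('final_output:0')
    print(input_tensor.shape, output_tensor.shape)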
(4) Next, we convert the frozen graph to tensorflowjs:
$ tensorflowjs_converter --input_format tf_frozen_model \
--output_node_names final_output \
checkpoint/bisenetv2_celebamask_optimized.pb \
webmodel
This completes the creation of the tensorflowjs model.
In addition, as I mentioned in (3), it’s a little troublesome to edit the source code, so if it feels difficult, I suggest you clone this repository and follow the readme.
Run
Now, let’s use this model to detect the parts of the face.
The processing time and quality will vary depending on the size of the input image, so I’d like to try this with several different sizes. I’m experimenting with a Linux PC & Chrome with GeForce GTX 1660.
First of all, this is for a 448x448 image as input. It’s about 12FPS. Huh! Much slower than I expected.
The accuracy seems to be there.
Next, I tried inputting a 1028x1028 image. It’s so slow…it’s only 0.5FPS.
The accuracy was not much different from 448x448.
Next I’m going to try 288x288 and it looks like I’m getting about 20FPS. Finally, it’s getting to a realistic speed.
However, the accuracy is still a bit low. Too bad.
So, in this operation check, the results were significantly below my expectations in terms of speed. It looks like it’s better to use the default 448x448.
You can find a demo at the following page, so please give it a try.
Also, the source of the demo we used in this article is available in the following repository. The npm module to use this model as a webworker is also available in the same repository, please try it!
I found a performance evaluation in the repository of another implementation (pytorch), but it seems to be only about 16FPS with FP32. Anyway, it is far from 156FPS. I may have misunderstood something.
Additional evaluation
As with last time, I also checked it on a ThinkPad and a MacBook.

The MacBook has a Core i5 at 2.4GHz. If you try the demo above, please turn on processOnLocal from the controller in the upper right and press the reload model button, because Safari cannot use WebGL inside a WebWorker.

The ThinkPad has a Core i5 at 1.7GHz.

With the input set to 448x448, I got about 0.8 FPS on the MacBook and about 0.4 FPS on the ThinkPad. That is difficult to use in real time.
Finally
I took on the challenge of detecting the parts of the face in the browser, and the model seems able to detect them in real time with decent performance on a PC with a GPU. I’m looking forward to improvements in the performance of PCs and smartphones in the future.
And I am very thirsty. If you found this article interesting or helpful, please buy me a coffee.
I am very thirsty!!
Reference
I used the images from the following pages.

Source: https://medium.com/the-innovation/detect-your-face-parts-on-your-browser-67b904f5c8fe (2020-11-05). Tags: Machine Learning, Computer Vision, TensorFlow, JavaScript, Tensorflowjs
Fiction Opens Hearts and Minds

Let me tell you a story…
Let me paint you a whole new world to immerse yourself in for just a little while. A world where justice prevails and conflict is resolved. A world where people live happily ever after.
We know these stories aren’t real, but they allow us to immerse ourselves in dreams of the world we wish we lived in. They also allow us to experience things we will never experience in real life.
I discovered the power of fiction as a child, and fell in love with the stories that changed my life. Fiction gave me the opportunity to envision a future beyond the farm where I grew up. Fiction showed me a world beyond the small farming community where I never quite felt like I fit in. Fiction even gave me my first glimpse into cultures and people across time and space.
I devoured the stories I read. Looking back I realize I was searching for my place in the world. I was searching for a place where I would fit in because I never felt like I really fit in anywhere. I searched for a connection to people who had been described to me as the “enemy”. I wanted to know those people. I wanted to understand why they were my enemy beyond some adult telling me they were. So I read stories set in places all around the world.
I read stories about women who controlled their own lives and who overcame great obstacles. I read stories about women who were rescued by one Prince Charming or another never seeing their own power and felt discomfited by them. I read stories about faraway places where people lived differently than me and who spoke different languages. I read and I read and I read some more. And I wrote. I wrote my stories about lives that I imagined for myself and for other people.
I discovered an insatiable thirst to understand the why of human nature. This led me to study Psychology as well as criminality and juvenile delinquency. And still I read. I read romances but cared more about the life of the main character than the romance. In fact, I often skimmed over the romantic sections of such books with a “yeah, yeah, yeah,” like they were something that had to be there but didn’t pertain to my life.
I started to identify with people who had nothing in common with me, at least on the surface.
This is the power of fiction. Fiction has the power to cast us in starring roles in other people’s stories as we read them. We find we can connect with their stories, empathize with their plight, and understand their decisions. Fiction has a way of conveying information to us that debunks misconceptions without feeling threatening.
Fiction can help us dream a little bigger. Fiction can help us reach a little farther. Fiction can help us embrace a little stronger. Fiction can help us let go a little easier.
Fiction holds within it the power to change how we interact with the world and therefore to change the world itself.
Fiction opens our hearts and minds to the experiences of other people in ways that can help us understand the humanity that binds us all.

Source: https://tlcooper.medium.com/fiction-opens-hearts-and-minds-503434da19d5 (TL Cooper, 2019-08-23). Tags: Life Lessons, Reading, Fiction, Writing
Anxious Parents Don’t Have to Raise Anxious Kids

Tips for Parents Who Worry Too Much
Since I know other parents must also worry about how their anxieties affect their kids, I wanted to share how I approach parenting.
1. Pay Attention to the Messages You Give Out
According to the Anxiety and Depression Association of America: “Children of anxious parents are more at risk for developing anxiety disorders. This is because they will have both a genetic predisposition to developing an anxiety disorder and their environment may emphasize hyper-vigilance to risk cues.”
The thing is, kids are already hyper-vigilant — they are sensitive, smart, and they pay attention to everything going on around them. When there’s a new situation at hand, they look to their parents for clues about how to respond.
Now, I tend to get stressed out when I don’t have perfect control over what’s going on. It used to be more intense when I was younger, but I still don’t enjoy entirely new situations. I get antsy when we move to a new place, when we go on vacation, or even when we meet new people.
My kids soak this up like a sponge. If I’m snappy, they’re snappy too. If I want to avoid certain people, they demand to know why. If I get nervous about a doctor’s checkup, they don’t want to go at all.
I try to keep in mind that I have full control over my actions. Even if I feel hesitant, I can rein it in and let my kids see my calm, confident side. I don’t want my anxieties to keep them back from experiencing the world.
2. Decide Which Issues Need Serious Attention
The question now is: “Wait, should I pretend I’m never worried?”
That’s not a good approach, in my experience. Children don’t think you’re infallible, and they won’t fall for fake bravado. At least, not for long.
What they need from you is clarity. You have to make a difference between rational worries and irrational anxieties.
For example, if your kid is nervous on the first day of school, don’t talk about all the ways their school year could go wrong. Even if you’re privately worried about your kid fitting in or keeping up with their coursework, these worries aren’t based on reality. So you need to put a stop to your anxiety spiral and tell your kid that everything is going to be fine.
On the other hand, if your kid has been bullied or they’re failing a class, that is a good reason to worry. It’s something you need to talk about with your kid and their teachers, and together, you need to work on fixing the problem.
But you won’t be able to fix anything if your judgment is clouded by anxiety. So before you talk to your child about a serious issue, give yourself a chance to calm down.
3. Get a Second Opinion
I won’t lie — whenever a big crisis pops up, my wife is the first one to respond. She and I worry about different things, and we don’t have the same reactions to stress. Between us, we’re (almost) always able to come up with a measured response to our kids’ problems.
There are also other people we ask for advice at times. If you’re worried about being overprotective, it can be a good idea to ask friends and family for their insights.
I know that asking for advice can be a double-edged sword when you’re a parent. But you don’t have to blindly accept any advice you receive. Just listen to what people close to you have to say. From the outside, they might be able to spot something you missed — or they’ll tell you you’re worrying too much about the situation at hand. Being too close to the problem can skew your perspective.
4. Take Your Kids Seriously and Give Them the Tools They Need to Cope
Unfortunately, anxiety has a genetic component. If you worry about inconsequential things too much, there’s a chance your kids have inherited the same issue.
Over 7% of children in the US are diagnosed with clinical anxiety, and many more have anxious tendencies that don’t reach clinical levels. Some kids find social situations very stressful, others worry about bad things happening to the family. Some are irrationally afraid of certain situations (like public speaking or picking up the phone), while others are generally wary of the world.
This can lead to a dangerous feedback loop if you’re both anxiety-prone. Your kid comes to you about something they find scary, even though objectively, it’s no big deal. But you find the same thing stressful because of your own anxiety. So you validate your kid’s fears, making it seem like they are right to be worried. This makes it seem like they have no chance of dealing with the problem by themselves.
But you can avoid that feedback loop. You need to:
Listen to your kid. Don’t brush off their worries, and definitely don’t mock them. If your kid can’t come to you for advice, they’ll just keep ruminating about their worries all by themselves.

Make sure you’re focusing on the right problem. Let them tell you exactly what they’re worried about — but be prepared to ask follow-up questions. For example, if your child is anxious about going to a party, they might feel nervous about the crowd… or about the gift they’re bringing, a particular child who is going to be there, etc. You need to understand exactly what’s going on.

Be kind, but don’t let them just avoid the problem. Experts say that “constantly rushing to fix a child’s problems can perpetuate a lifelong cycle of dependence and resentment”.
If you can’t let them succumb to their anxiety, what should you do instead?
Talk to kids about self-soothing methods. Discuss what they can do in a stressful situation they can’t avoid. Explain that even if they can’t control how others react to them, they can still be in charge of their own actions. You can even look at some worst-case scenarios and add humor to the situation.
Just like overwhelming shyness, unchecked anxiety can have lasting consequences on your child’s development. But if you give them the tools they need to manage their anxiety, they’re going to handle life’s challenges with grace.

Source: https://medium.com/publishous/anxious-parents-dont-have-to-raise-anxious-kids-dd16ad7bab2a (Eric Sangerma, 2020-12-18). Tags: Mental Health, Family, Relationships, Parenting, Anxiety
Detecting Pneumonia in X-ray Images

Photo by National Cancer Institute on Unsplash
My primary goal with this article is to highlight a practical application as a result of building a convolutional neural network (CNN) model. In general, CNN models have a wide variety of applications; in this case, it’s building a model that can accurately detect pneumonia in x-ray images. Sounds cool, right? But why would we need a convolutional neural network when we have medical experts that can perform the same task?
Why would we need a CNN to detect pneumonia?
Across the world, there is a general shortage of radiologists, and their numbers continue to diminish, which means significant resources are spent interpreting medical imaging. In many cases, the lack of a radiologist delays test results. It can also mean relying on medical professionals without expertise in radiology, leading to misinterpreted results. Getting accurate results within a short period of time can be a difference-maker, and possibly a life-saver, for certain patients.
The images used in this particular project were for pediatric patients under 5 years old. According to the World Health Organization, pneumonia accounts for 15% of deaths in the world for children under 5 years old. Pneumonia that is caused by bacteria can be treated with antibiotics, but only one third of children receive them (https://www.who.int/news-room/fact-sheets/detail/pneumonia). Streamlining the process of accurately detecting pneumonia in children is a necessity, and it truly could save lives.
So now that we understand the need for a CNN, let’s look at some details on the project and final model results. Before moving on, if you’re not interested in some of the code used to build the project and just want to see final results (which is understandable if you don’t know Python), then please feel free to scroll to the end of the article.
The Data:
The data for this project was downloaded directly from https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia. The data contains 5,856 x-ray images with a mix of RGB and grayscale images.
Resizing Images:
Images were resized to 75x75 for efficiency when running on a local machine. Additionally, transfer learning was used with a pre-trained network, InceptionResNetV2, which requires that the minimum image size is 75x75.
Train, Validation and Test Sets:
The original train, test, and validation sets were 5,216, 624, and 16 images. All images and labels were combined and then resplit in order to increase the size of the test set; 16 images alone would not give a clear enough picture for evaluating results. The final training set was slightly reduced to 5,153 images, the validation set of 632 images was used during the modeling process to gauge accuracy and tune the model further, and the test set of 71 images was held out to gauge how the model would handle unseen data.
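The resplit code itself isn’t shown here; the sketch below illustrates one way it could be done with scikit-learn, assuming the images and labels have already been loaded into arrays (the variable names and exact split sizes are assumptions based on the counts above).

import numpy as np
from sklearn.model_selection import train_test_split

# Combine the original train/val/test arrays (hypothetical names),
# then resplit into 5,153 train / 632 validation / 71 test images
X_all = np.concatenate([X_train, X_val, X_test])
y_all = np.concatenate([y_train, y_val, y_test])

X_train, X_holdout, y_train, y_holdout = train_test_split(
    X_all, y_all, test_size=703, stratify=y_all, random_state=123)
X_val, X_test, y_val, y_test = train_test_split(
    X_holdout, y_holdout, test_size=71, stratify=y_holdout, random_state=123)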
Data Augmentation:
Data Augmentation was implemented to increase the size of the training set and give the model additional diversity of images to improve accuracy (you may have noticed the resplit training plot above was larger than the original). The initial training set was doubled, with pixel values under 25 replaced with 0 in the copies. Essentially, this converts darker gray areas to black and allows the model to focus on the more important, lighter areas. Below is an original image compared to an altered image, along with the code used to make this alteration (the differences are very subtle, but effective):
#Change pixel values for data augmentation
i = (X_train >= 0) & (X_train < 25)
altered = np.where(i, 0, X_train)
Additionally, here is a plot of the original vs. augmented training set:
Building the Initial Model:
The first function below was created to visualize a confusion matrix in order to understand the breakdown of true positives, true negatives, false positives and false negatives. The second function below was created for the modeling process. As the documentation states, this function was built to build the neural network model, return classification reports and confusion matrix, and save the best model using a model checkpoint callback based on validation accuracy.
#Build Plot Confusion Matrix Function
def plot_confusion_matrix(cm, classes=[0, 1], normalize=False, title=None,
                          cmap=plt.cm.Blues, ax=None):
    """
    Print and plot a confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes)
    plt.yticks(np.arange(0, 1), [0, 1])

    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    else:
        pass

    thresh = cm.max() / 2.
    j_list = []
    for i in cm:
        for j in i:
            j_list.append(j)
    zero = j_list[:2]
    one = j_list[2:]
    for i, j in enumerate(zero):
        plt.text(x=i, y=0, s=j, horizontalalignment="center", fontsize=16,
                 color="white" if j > thresh else "black")
    plt.text(x=0, y=0.2, s='True Negatives', horizontalalignment="center",
             fontsize=16, color="white" if j > thresh else "black")
    plt.text(x=1, y=0.2, s='False Positives', horizontalalignment="center",
             fontsize=16, color="white" if j > thresh else "black")
    for i, j in enumerate(one):
        plt.text(x=i, y=1, s=j, horizontalalignment="center",
                 verticalalignment="center", fontsize=16,
                 color="white" if j > thresh else "black")
    plt.text(x=0, y=1.2, s='False Negatives', horizontalalignment="center",
             fontsize=16, color="white" if j > thresh else "black")
    plt.text(x=1, y=1.2, s='True Positives', horizontalalignment="center",
             fontsize=16, color="white" if j > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
Function to build model:
layers_list = []

# Create Model Checkpoint
mc = ModelCheckpoint('best_model_test.h5', monitor='val_accuracy', mode='max',
                     verbose=1, save_best_only=True)

def build_model(optimizer, epochs, batch_size, callbacks=mc, weights={0: 1, 1: 1}):
    """
    Build a neural network model, returning classification reports, confusion matrix,
    and save best model using model checkpoint based on val_accuracy.
    Input Parameters: optimizer, epochs, batch_size, callbacks, weights
    """
    # Initialize a sequential model
    model = Sequential()

    # Add layers
    for i in layers_list:
        model.add(i)

    # Compile the model
    model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
    results = model.fit(X_train, y_train, callbacks=callbacks, class_weight=weights,
                        epochs=epochs, batch_size=batch_size,
                        validation_data=(X_test, y_test))
    build_model.results = results

    # Output (probability) predictions for the train and test set
    y_hat_train = model.predict(X_train)
    y_hat_test = model.predict(X_test)
    build_model.y_hat_train = y_hat_train
    build_model.y_hat_test = y_hat_test

    #Visualize Results
    history = results.history
    plt.figure()
    plt.plot(history['val_loss'])
    plt.plot(history['loss'])
    plt.legend(['val_loss', 'loss'])
    plt.title('Loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.show()

    plt.figure()
    plt.plot(history['val_accuracy'])
    plt.plot(history['accuracy'])
    plt.legend(['val_accuracy', 'accuracy'])
    plt.title('Accuracy')
    plt.xlabel('Epochs')
    plt.ylabel('Accuracy')
    plt.show()
    print('-----------------------------------\n')

    # Print the loss and accuracy for the training set
    results_train = model.evaluate(X_train, y_train)
    print('Train Results', results_train)
    print('-----------------------------------\n')

    # Print the loss and accuracy for the test set
    results_test = model.evaluate(X_test, y_test)
    print('Test Results', results_test)
    print('-----------------------------------\n')

    # Print Classification Reports
    print('Train Classification Report')
    print(classification_report(y_train, np.round(y_hat_train, 0),
          target_names=['Normal (Class 0)', 'Pneumonia (Class 1)']))
    print('-----------------------------------\n')
    print('Test Classification Report')
    print(classification_report(y_test, np.round(y_hat_test, 0),
          target_names=['Normal (Class 0)', 'Pneumonia (Class 1)']))
    print('-----------------------------------\n')

    # Load the saved model
    saved_model = load_model('best_model_test.h5')

    # Evaluate the model
    _, train_acc = saved_model.evaluate(X_train, y_train, verbose=0)
    _, test_acc = saved_model.evaluate(X_test, y_test, verbose=0)
    build_model.saved_model = saved_model
    print('Best Model Results\n')
    print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
    print('-----------------------------------\n')

    #Create Confusion Matrices
    train_cm = confusion_matrix(y_true=y_train, y_pred=np.round(y_hat_train, 0))
    test_cm = confusion_matrix(y_true=y_test, y_pred=np.round(y_hat_test, 0))
    build_model.train_cm = train_cm
    build_model.test_cm = test_cm

    #Plot Train and Test Confusion Matrices
    plt.figure(figsize=(12, 6))
    plt.subplot(121)
    plot_confusion_matrix(cm=train_cm, cmap=plt.cm.Blues)
    plt.subplot(122)
    plot_confusion_matrix(cm=test_cm, cmap=plt.cm.Blues)
    plt.subplots_adjust(wspace=0.4)
Now that the functions have been set up, here is the structure of the first model:
#Add layers
layers_list = []

layer1 = layers.Conv2D(75, (2, 2), padding='same', activation='relu', input_shape=(75, 75, 3))
layer2 = layers.MaxPooling2D((2, 2), padding='same')
layer3 = layers.Conv2D(75, (2, 2), padding='same', activation='relu')
layer4 = layers.MaxPooling2D((2, 2), padding='same')
layer5 = layers.Conv2D(75, (2, 2), padding='same', activation='relu')
layer6 = layers.MaxPooling2D((2, 2), padding='same')
layer7 = layers.Flatten()
layer8 = layers.Dense(75, activation='relu')
layer9 = layers.Dense(1, activation='sigmoid')

layers_list = [layer1, layer2, layer3, layer4, layer5, layer6, layer7, layer8, layer9]

#Utilize Stochastic Gradient Descent Optimizer
opt = keras.optimizers.SGD(learning_rate=0.01, momentum=.9)

#Build model with pre-built function
build_model(optimizer=opt, epochs=50, batch_size=100, callbacks=mc)
This model performed well with 96.7% accuracy for the validation set. I tried a second model without transfer learning in an attempt to improve these results but to no avail. Truthfully, I tried what felt like millions of different parameters and hyperparameters and did achieve better results in some cases, but the best model results overall were determined with the prebuilt InceptionResNetV2 model. I won’t print the entire model structure here as it’s quite large, but if you’re interested, you can view the structure in Python with this code (and in my GitHub repo which is included at the end of this article):
#Import InceptionResNetV2
from keras.applications import InceptionResNetV2

#Build the model base with required input shape 75x75x3
cnn_base = InceptionResNetV2(weights='imagenet',
                             include_top=False,
                             input_shape=(75, 75, 3))

#View base structure
cnn_base.summary()
Typically, most would freeze the pre-trained network or at least part of the network to use the prebuilt model weights and reduce training time. I decided to be different and retrain the entire model to improve accuracy. This was possible to run on my Mac with the size of the dataset and using a smaller image size of 75x75x3. I included the base model as my first layer as shown below:
#Set random seed
np.random.seed(123)

#Add layers including InceptionResNetV2 base
layers_list = []

layer1 = cnn_base
layer2 = layers.Flatten()
layer3 = layers.Dense(75, activation='relu')
layer4 = layers.Dense(1, activation='sigmoid')

layers_list = [layer1, layer2, layer3, layer4]

#Utilize Stochastic Gradient Descent Optimizer
opt = keras.optimizers.SGD(learning_rate=0.01, momentum=.9)

#Build model with pre-built function
build_model(optimizer=opt, epochs=50, batch_size=100, callbacks=mc)
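As an aside: the conventional transfer-learning setup would freeze the pre-trained base and train only the new top layers, which cuts training time at some cost in flexibility. That is not what the model above does; the sketch below just shows the alternative for reference.

#Freeze the pre-trained InceptionResNetV2 weights (do this before compiling),
#so only the Flatten/Dense top layers are trained
cnn_base.trainable = False
print(len(cnn_base.trainable_weights))  # 0 once frozen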
Final Model Results:
The function for making predictions on our unseen test set is below. The test set was held out until the final prediction in order to remove any bias that might be introduced while training the model.
#Build a function to make predictions on unseen data
def predict_new_images(test_img, test_lbls):
    '''Predict saved model results on unseen test set, print classification report and plot confusion matrix.'''
    #Transpose val labels
    test_lbls = test_lbls.T[[1]]
    test_lbls = test_lbls.T

    #Standardize the data
    test_final = test_img / 255
    predictions = build_model.saved_model.predict(test_final)
    predict_new_images.predictions = predictions
    test_cm = confusion_matrix(y_true=test_lbls, y_pred=np.round(predictions, 0))

    print('Classification Report')
    print(classification_report(test_lbls, np.round(predictions, 0),
          target_names=['Normal (Class 0)', 'Pneumonia (Class 1)']))
    print('-----------------------------------\n')

    plt.figure(figsize=(10, 6))
    plot_confusion_matrix(cm=test_cm, cmap=plt.cm.Blues)
    plt.savefig('images/final_model_result.png')
The function plots a confusion matrix showing the final results:
In some ways I’m a perfectionist, so having a result of 100% accuracy on the unseen test set made me happy. Personally, I find it amazing that out of 71 x-ray images in the unseen test set, this convolutional neural network model is able to detect whether or not a pediatric patient has pneumonia with 100% accuracy. Like I said, it is pretty cool.
For more details and the full notebook for this project, please visit my GitHub repo here: https://github.com/dbarth411/dsc-mod-4-project-v2-1-online-ds-sp-000.

Source: https://medium.com/analytics-vidhya/detecting-pneumonia-in-x-ray-images-with-a-convolutional-neural-network-735f68f40564 (David Bartholomew, 2020-12-23). Tags: Machine Learning, Neural Networks, Artificial Intelligence, Technology, Data Science
Secrets to successful growth: Clearly communicating your vision and trusting your team. | So, you’ve done it. You’ve built a company from the ground up, proved product-market fit, and are preparing to grow. Regardless of what led you to this moment, one thing always remains the same: scaling up a business marks the end of an era.
Despite the feeling of success, for you, or your business partners, it’s easy to be feeling more out of sorts than ever. You’ve noticed it’s impossible to continue to be everywhere at once (though let’s be real, was it ever really possible in the first place). You can feel your company’s culture is growing beyond your direct influence. And it’s normal to feel lost as your direct impact on your business’ core operations is less and less.
This all is exciting, but it can also be hard. It can make you question your identity, your value, and your role in your business’ future. How do you usher in a new era of growth and what is your role in it?
From working with a wide range of companies — from small startups to corporates — I’ve noticed a few commonalities in the ones that had successful growth. I wanted to share a few of the observations with those of you who are at the cusp of scaling, to hopefully keep in mind as you move forward into this exciting new adventure.
Vision. Vision. Vision.
Not surprisingly, all of these companies have a strong, clear, easy-to-communicate vision backing them. This is essential for success as it’s a unifying factor (remember, you can’t be everywhere at once).
A strong vision allows the team to make day to day decisions independently, and act autonomously, because it’s clear to them why the business exists (and why they are there). A question I like to ask on team engagement surveys is “Do you feel your work contributes directly to the success of the company?” This score is often lower when the vision is unclear, as the team doesn’t understand how their skills and effort contributes to the company’s purpose.
When the team is on the same page, moving toward the same destination, it becomes much easier to empower them to make decisions without you having to be a part of every single meeting. Instead, your effort goes into establishing communication channels that allow knowledge to be shared throughout the organization, and prevent best practices from getting lost in the shuffle.
Yes, for a long time you might have been able to look across your desk to your colleagues to share new thoughts, discoveries and ideas, but as your business grows, that becomes more difficult, if not impossible. Focus on communicating about and embodying your overall vision, and trust the wonderful team you hired to make the day to day choices that will lead you to fulfilling your purpose.
You need to let go (give autonomy) and trust.
Here’s the bottom line. If you’re afraid to let go and allow your team to be autonomous, or feel in constant conflict with the rest of the company, then it’s time for some honest self reflection.
Accept that your vision many not be clear.
It’s okay. This happens. Check in with what’s driving you forward. Why are you doing this? It’s easy to get lost in motivations only concerning you or your reputation, because the line between your identity and your company’s identity is blurry at best. But a vision is not a dream end state.
Take time to sort out your vision. What was that real belief, that contribution, the reason you brought this company into existence in the first place? If you struggle to find it, bring in the team. They helped you get this far, even if they can’t quite express it themselves. Trust them to help you guide yourself back to the vision. You don’t have to, and shouldn’t, do it alone.
I know it may feel like a step back, but it’s truly a giant leap forward to revisit and clarify this.
Check-in on your company culture & growth plan.
If you don’t trust your team you may have made some very off-base hiring decisions. I won’t go into much detail on this here, as this is an article within itself, but check-in with yourself on any team members that you struggle to trust.
Often it can be as fundamental as a difference in values, or the wrong skill set at the wrong time. Realign your values, get back in touch with your company culture, and freeze hiring until you’ve figured out how to find the right match.
Embrace that your way is no longer the only or best way.
This is hard. As entrepreneurs we build things from the ground up, we know the ins and outs of our business, and are all hands on deck for a large portion of its early life. We believe we know what’s best.
When things were just starting, yes, you did need to have that control and influence in order to learn and create traction in your business — but the time has come to let that go. The vision must speak for itself, and your team will have their own way of doing things.
They may not do it like you would do it, or maybe they can’t even do it with the skill level that you could, but you’re just going to have to get used to that. It’s necessary for people to be able to work how they work best (just like you had to!). It’s not about creating clones of yourself, or just hiring a bunch of cogs in a machine. It’s about finding people with aligned values.
You want people who think. You want to give them space and motivation to do this. You don’t want a team that’s only acting to make you happy. It’s not about you anymore. It’s about something bigger, the vision.
So, your job has changed, and that’s more than OK.
This means you have to change your own definition of success and understanding of the value you are contributing to your company. You need to update your own metrics of success.
Whether you like it or not you need to take a step back. You’re just not going to be able to work on your company or your product in the same detailed, on the ground, level you have been for so long.
Your job is to live in the future, in the big picture, as a leader and spokesperson for the vision of your company. For some, you will need to have both feet in the future door, others of you will still need one foot in the present. You need to always be thinking about how to translate your knowledge into process and vision and company culture.
Resist the temptation to meddle in the day to day. Too many leaders find new purpose in running into brainstorms, providing (often ungrounded) perspectives or advice, leaving bewildered teams torn between following word for word the decision of the leader or their own beliefs in achieving the vision.
I cannot emphasize this enough: Your job is not to meddle, it is to mentor, to guide, to support.
Your visionary skills need to be focused on your realm, communicating and developing the big picture. Yes, provide mentorship and support, but, give your team the space to think and succeed for themselves. Otherwise they will shut down.
Companies fail when founders or leadership starts competing with the team (often because of their own insecurities) because they feel the need to prove to the team that they have value. Your team knows you have value. Your leadership and your vision inspired them to join in the first place.
Don’t end up with a team that’s stressed, immobile and dependent on you for every tiny movement. This is a team that will likely leave for another company in order to tackle a challenge where they feel trusted and creative.
Your team is where innovation is going to come from.
Innovation doesn’t happen in a vacuum. It happens on the ground floor with the people working on your business on the day to day.
Again, being “visionary” doesn’t mean intruding into your team’s space and trying to unnaturally insert innovation yourself — that’s not your job as a leader. It’s to facilitate your team, grow them as individuals, to be able to find innovations.
Your new role is an exciting one, an essential one, where you keep your eyes set on the future horizon, and where a new observation, perspective, or idea can inspire a path forward.
—
Based on initial observations made at the 2019 BASE conference on how to Build, Advance, Sustain and Elevate businesses. A contribution to the Create Converge project which is supported by the European Union North Sea Region Programme. | https://medium.com/swlh/scale-up-step-back-move-forward-ed6458512832 | ['Melinda Jacobs'] | 2020-03-25 15:28:41.822000+00:00 | ['Vision', 'Growth Mindset', 'Startup', 'Company Culture', 'Scaleup'] |
AI Deserts

Yes, it’s true: Artificial Intelligence is coming and it’s going to change the world around us. Actually, AI and ML (machine learning) are already here, and we’re failing to appropriately grapple with the ramifications, especially ethical concerns. I don’t feel compelled to explain or support the above statements. Open any magazine, click randomly on any article on Medium, visit any public event at a think tank; chances are, concerns raised by the age of AI is the topic. Some of it will be bunk, some of it very thoughtful, but the topic is not exactly underdiscussed. What is under-discussed is how unevenly this change will happen, because we misunderstand and overestimate the preconditions for AI outside the private sector.
The first precondition for AI is data — lots of it. The private sector has digitized and enabled the collection, transport, and everyday use of massive amounts of previously inaccessible information. Google, Facebook, and by extension their advertising customers and applications that use their platforms, now have enormous amounts of information about us not just because we share this data explicitly, but also because these systems monitor what else we’re doing when we’re online. That’s why when you put a pair of shoes in your cart online, an ad for those shoes follows you around everywhere.
That’s not AI, but those retargeting ads are the tip of a huge iceberg. There is a hidden world of cooperating applications, passing data back and forth through a kind of digital bloodstream, constantly making connections. The digital transformation of legacy industries combined with greenfield digital environments like social networks has resulted in systems of systems that talk to each other because of how they are architected. AI and ML are possible in these environments because they consume and learn from the vast stores of data captured and connected across applications. If AI were a plant, data would be its air, soil, and water, and in these connected ecosystems, AI’s roots can reach far and wide to access resources. Massive amounts of data are needed to make these technologies work well. Remember what Google’s Peter Norvig once said: “We don’t have better algorithms than anyone else; we just have more data.”
But digital transformation hasn’t happened evenly across our society, and it’s particularly the public and social sectors that have been left behind. Here, large-scale systems of systems that talk to each other are few and far between, and the low availability and connectedness of data means that these sectors may become AI/ML deserts, so to speak. Visit the offices of your local homeless shelter and ask to see their data. (Of course, they won’t let you do so for privacy reasons, but pretend with me for a moment.) Their data is on paper, or trapped in a proprietary database that only one person in the organization still knows how to do exports from, and that person is retiring next month. They likely receive significant amounts of data from the other organizations and agencies they collaborate with in unstructured formats: email, fax, and phone calls logged by humans.
Visit the open data portal of any government website and you’ll see lots of data sets; they are usually manually exported from standalone systems on a regular basis (or once were, until someone stopped updating them after a personnel change) and they are almost all of the size you could easily download to your laptop in a minute or less with a reasonable internet connection. Open Data Barometer found in a study of 115 governments’ publicly available data that only half of it was machine readable. If a machine can’t read the data, it can’t be used to train an algorithm. AI will not grow in soil this thin and dry.
If a machine can’t read the data, it can’t be used to train an algorithm.
Take for example the question of how AI will be used in the military. There’s a lot of concern, and understandably so. Killer robots making independent decisions are the stuff of dystopian science fiction. So advocates for AI in the military are likely to tout the benefits of something like predictive maintenance for aircraft. Those kinds of applications raise fewer ethical complications, and paint a positive picture of servicemen and women enabled and empowered by technology to keep our military competitive. As the story goes, AIs will consume historical datasets of aircraft maintenance logs and begin to tell us when parts need to be replaced before they fail, reducing costs and increasing our military readiness.
But go into the field and try to find those historical data sets. If you are assuming that data is held in some sort of enterprise IT system, you will be very surprised. What you are likely to find (and forgive a little creative license to make my point — this is not meant to map exactly to any given military unit) is a spreadsheet that goes back six months, to when either the computer or the personnel responsible for maintaining it was assigned to that station. Perhaps it goes back earlier than that, but the earlier data is in a different format, or on paper logs. And each unit in each of the services maintains its data separately, with no standard across them. People much smarter about AI than me will point out that computers will soon be clever enough to overcome the standards problem; they’ll simply learn to infer what different fields mean and do the normalization themselves. But they may not have that chance; gaining access to that data from each individual unit is possibly the hardest challenge, because it’s not a technical problem, but a legal, bureaucratic, and human one. When we say things like “the Department of Defense has that data,” it makes it sounds like any given human of sufficient rank could assemble it. That simply isn’t true in practice. It’s not just the Department of Defense; what digital means is closer to the fragmented spreadsheet scenario than the systems of systems scenario in most government contexts, and certainly in the majority of nonprofits.
One theory goes that the benefits of AI will be so great that governments in particular will invest in the systems they need to take advantage of these technologies. In short, they’ll finally modernize. But governments have had plenty of reasons to modernize up until now, and to be fair, they’ve been trying, to the tune of $200B annually. (That’s what the US government spends on digital technology.) The problem is not how much government spends; it’s how government spends it.
Despite some bright spots of change, government remains constrained by antiquated procurement practices that focus on processes, rules, and mandated outputs rather than outcomes, and consistently result in mega-projects that take years and ultimately fail to produce working software. If you decided that the way to enable predictive maintenance of aircraft in our armed services was to build an enterprise system to capture that data (and therefore avoid the scenario in which contractors own the data and sell it back to government), you’d be staring down the barrel of a procurement process that would easily take six years, likely get delayed past that, and stood a small chance of ever being rolled out past a small pilot, if it even worked well enough to get that far. And that might only put you at the start of collecting the necessary data. Remember, algorithms are only as good as the data we feed them. What they want is many years of historical data across as many circumstances as possible. That’s very unlikely for many of the functions government is responsible for.
All this speaks to the second precondition for AI that we often fail to account for: organizational competence at digital that’s rooted in enduring structures and cultures. This is important because it’s a key driver of data availability, but it’s also important in its own right if you care about ethics in AI, because government is at least theoretically accountable to a democratic process. For example, on the issue of predictive maintenance of aircraft, you might argue that our military will have this capability because the contractors will start to put sensors on the aircraft they sell. That’s certainly possible, but while sensor-enabled devices proliferate rapidly in the consumer sector because consumer devices are replaced every year or two, most hardware used by the military has an operational life measured in decades, so it’s not going to happen soon. More to the point, when or if that does happen, the contractors will almost certainly try to sell the data and predictions back to the military rather than enabling them to have direct access, because that’s a better business model for them, and government will stay behind once again in its digital competency. Many cases like this just devolve to the same problem we started with: the private sector has the capabilities and therefore the power, while government, charged with, among other things, regulating the private sector, is fighting on an increasingly uneven playing field.
There are many exceptions to the rule of slow adoption of AI in government, of course, and many of the examples one might point to will suggest government’s lag would be a feature, not a bug. For example, the data collected through Project PRISM, the National Security Agency’s mass surveillance project, are ripe for AI and ML applications, and have almost surely been mined using these technologies. Remember, though, that these data were collected by systems built by the private sector; the NSA just accessed them. The reason government can do this is that it was sufficiently valuable for someone else to collect and retain enormous amounts of information about people for a wide variety of reasons; because, as many others have pointed out, we are the product. There are many areas where the value to society of collecting and making use of data is very high, but there’s insufficient profit incentive. That’s where government is supposed to act, but can’t, if it’s not digitally competent or competitive.
Government will try to do AI, of course, even without sufficient data or digital competence. Virginia Eubanks gives us just one chilling example in her book Automating Inequality: a predictive algorithm used in Allegheny County, PA, aimed at projecting which children are likely to become victims of abuse, and therefore should be removed from their families. In a classic case of failing to account for bias, the algorithm used proxies instead of actual measures of maltreatment. Eubanks explains “One of the proxies it uses is called call re-referral. And the problem with this is that anonymous reporters and mandated reporters report black and biracial families for abuse and neglect three and a half more often than they report white families.” There must be nuanced and thoughtful debate about whether algorithms should have any role in removing a child from their family, but let’s not judge the potential of AI from examples like this, in which government buys what’s sold as AI but is an impoverished cousin of AIs that leverage larger data sets, do a better job accounting for bias, and are much more thoroughly tested. This is not to say that AI in the private sector is always good (of course!), just that an organization without core digital competence is doomed to use AI badly, with potentially disastrous consequences.
This all probably sounds like great news to those eager to slow our march towards an AI future. The problem is, it won’t. The benefits companies stand to gain putting AI to use towards their goals will assure that (including those who sell to the public sector). What it does mean is that the gap between the sectors of our society is likely to grow to such a size that the sectors that haven’t already undergone a digital transformation — much of the public and social sectors — may literally never be able to catch up. The ethics of AI are just as complicated in public and social sectors as they are in the private sector, but the growing divide in capabilities is problematic in its own right. Do we want vulnerable populations and national issues to be beholden to the self-interested decisions of the private sector? There are consequences we don’t fully grasp to the spread of AI; there are also consequences we don’t fully grasp to the hugely uneven advance of it.
AI could mean the gap between the sectors of our society is likely to grow to such a size that much of the public and social sectors may literally never be able to catch up.
Efforts around “AI for good” are aimed at enabling AI to have positive impact, and these are laudable; but targeted efforts here will one by one face the barriers of poor data environments and low digital competence in government and nonprofits. What’s needed are more comprehensive efforts to deal with the root causes of the public and social sectors’ data and digital handicap — the work we need to be doing anyway to make government work for people regardless — before the AI revolution sends this whole problem into massive overdrive. That’s enormously hard work, but it’s work we must do.
We need an effective, capable public and social sector. Government and nonprofits play a critical role in what matters most in our lives: our health, our safety, our vulnerable kids, our friends and neighbors recovering from natural disasters, our veterans, our national infrastructure, our response to the climate crisis. Governments and advocates are also meant to serve as an important check on corporate power. We already live in a world with an enormous asymmetry between the capabilities of the private sector and the public and social sectors, and we are confronting a future in which that asymmetry will grow exponentially year over year.
How to address this must be among the questions we ask ourselves as we confront an AI age.

Source: https://medium.com/code-for-america/ai-deserts-fc210fc2fd41 (Jennifer Pahlka, 2019-10-07). Tags: Machine Learning, Government Innovation, Artificial Intelligence, Government, Digital Transformation
50+ Java Collections Interview Questions for Beginners and Experienced Programmers

By javinpaul, Jun 26
image_credit — Java collections Fundamentals by Pluralsight
Java Collections and Generics are very important topics for Java interviews. They also present some of the hardest questions a programmer faces in interviews, especially Generics.

It’s not easy to first understand what a particular piece of code is doing with all those question marks and other signs, and the pressure of an interview makes it even harder to answer complex usage of Generics.

But with proper preparation and attention to both Java Collections and Generics, you can clear that hurdle. If you are looking for a Java job but haven’t done well in the interviews you have given so far, then you have come to the right place.
In this article, I have shared a lot of Java interview questions on various topics and difficulty levels.
There are Java questions for beginners as well as expert programmers. There are theoretical questions based upon Java programming concepts as well as coding and data structure algorithm questions, and this article is only going to make that collection even more valuable.
In this article, I am going to share some of the frequently asked Java Collections and Generics questions from interviews. These are the questions you have often seen in a telephonic round of a Java interview as well as in face-to-face interviews.
It’s useful both for beginners with 2 to 3 years of experience and for experienced Java programmers with 5 to 6 years of experience.
This list contains both easy and tough questions, but the most important thing is that most of them have already been asked in interviews. I am sure you might have seen some of them in your own interviews.
Knowing the answers to these questions will not only help you to crack your Java interview but also to understand the Java Generics and Collections topics in depth, which will eventually help you to write better Java code.
Btw, if you are new to Java or want to solidify your Java knowledge then you should check out a comprehensive course like The Complete Java Masterclass before attempting to solve these questions. It will help you immensely by filling gaps in your knowledge and going back and forth. It’s also the most up-to-date course and covers every new feature introduced in new Java releases
50+ Java Collection and Generic Interview Questions
Without wasting any more of your time, here is my list of 50+ Java interview questions on Collection and Generics.
If you have done some work in Java, then you should know the answers to these questions, but if you don’t, you can always see the answer.
Instead of writing answers here, I have linked them to relevant posts so that you can try to solve the problem by yourself first, and if you need to, you can read an in-depth discussion in the individual post to learn the topic thoroughly.
1) What is the Java Collection Framework and How do you choose different collections? (answer)
Here is the diagram which answers this question:
2) What are Generics in Java? (answer)
hint: Java feature to ensure type safety at compile time.
3) Which are your favorites classes from Java Collection Framework? (answer)
hint: Collection, List, Set, Map, ArrayList, Vector, LinkedList, HashMap, etc.
4) When do you use Set, List, and Map in Java? (answer)
hint — use Set when you don’t need duplicates, use List when you need ordering with duplicates allowed, and use Map when you need to store key-value pairs.
5) Which Sorted Collection have you used? (answer)
hint — TreeSet is one example of a sorted Collection
6) How HashSet works in Java? (answer)
hint — same as HashMap, using hashing with the equals() and hashCode() methods. HashSet is actually backed by a HashMap where the keys are the elements you store in the HashSet and the values are all the same shared dummy object.
7) Which two methods you should override for an Object to be used as a key in hash-based Collections? (answer)
hint — equals and hashcode
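A minimal sketch of what that looks like in practice (the Employee class here is just an illustration):

import java.util.Objects;

public class Employee {
    private final String name;
    private final int id;

    public Employee(String name, int id) {
        this.name = name;
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Employee)) return false;
        Employee other = (Employee) o;
        return id == other.id && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        // equal objects must return equal hash codes, or lookups will fail
        return Objects.hash(name, id);
    }
}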
8) Can you use HashMap in a concurrent application? (answer)
hint — Yes, but only if you are reading from the HashMap and it is initialized by a single thread; otherwise no.
9) What is the difference between HashMap and Hashtable in Java? (answer)
hint — HashMap is fast but not thread-safe; Hashtable is slow but thread-safe
10) What is the difference between synchronized and concurrent Collection in Java? (answer)
11) How ConcurrentHashMap works in Java? (answer)
hint — it partitions the map into segments and locks them individually instead of locking the whole map.
12) What is PriorityQueue in Java? (answer)
A data structure that always keeps the highest or lowest element at the head so that you can access it in constant time (removal takes logarithmic time).
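A quick illustration of that behavior (a toy example, not from the linked answer):

import java.util.PriorityQueue;
import java.util.Queue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        Queue<Integer> pq = new PriorityQueue<>(); // min-heap by default
        pq.offer(5);
        pq.offer(1);
        pq.offer(3);
        System.out.println(pq.peek()); // 1, the smallest element sits at the head
        System.out.println(pq.poll()); // removes and returns 1; 3 becomes the new head
    }
}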
13) What is type-erasure in Generics? (answer)
It’s the part of the Java compiler which removes all type-related information after compilation so that the generated bytecode is the same as pre-Generics code.
14) What is the difference between ArrayList and Vector in Java? (answer)
hint — ArrayList is not synchronized hence fast, Vector is synchronized hence slow
15) What is the difference between LinkedList and ArrayList in Java? (answer)
hint — ArrayList is backed by an array while LinkedList is backed by a linked list, which means index-based access is only fast (constant time) in ArrayList.
16) What is the difference between Hashtable and ConcurrentHashMap in Java? (answer)
hint — ConcurrentHashMap is a newer concurrent class with better scalability, as only a portion of the map called a segment is locked, while Hashtable is an old class where the whole map is locked for synchronization. See the Java Collections: Fundamentals course for more details.
By the way, you would need a Pluralsight membership to join this course, which costs around $29 per month or $299 per annum (a 14% discount), but it’s completely worth it. Alternatively, you can also use their 10-day free trial to watch this course for FREE.
17) What is the difference between LinkedHashSet and TreeSet in Java? (answer)
hint — TreeSet is a sorted set where elements are stored in their natural or custom sorting order (depending upon Comparable or Comparator) while LinkedHashSet is just an ordered collection that maintains insertion order.
18) Difference between extends and super in Java Generics? (answer)
19) What do you mean by thread-safe collection? Give an example of 2 thread-safe Collection in Java? (answer)
20) What is the relationship between equals and compareTo in Java? (answer)
21) What is the default size of ArrayList and HashMap in Java? (answer)
22) What is the load factor, capacity, and Size of the Collection in Java? (answer)
23) What is the difference between Iterator and Enumeration in Java? (answer)
24) When does ConcurrentModificationException occur? (answer)
25) What is the difference between fail-safe and fail-fast Iterator in Java? (answer)
26) What is CopyOnWriteArrayList in Java? (answer)
27) When do you use BlockingQueue in Java? (answer)
28) What is the difference between the peek() and poll() method of the Queue interface? (answer)
29) How do you find if an ArrayList contains an Object or not? (answer)
30) Can we store Integer in an ArrayList<Number> in Java? (answer)
31) How get method of HashMap works in Java? (answer)
hint — hashing; the hashCode() method is used to find the bucket location for the key, and equals() is used to find the right key inside that bucket for retrieval.
32) How do you sort a Collection in Java? (answer)
33) What is the difference between ListIterator and Iterator in Java? (answer)
34) What is the difference between HashSet and LinkedHashSet in Java? (answer)
35) When do you use EnumSet in Java? (answer)
36) List down 4 ways to iterate over Map in Java? (answer)
hint —
1. a for-each loop over entrySet() (since JDK 5)
2. a for-each loop over keySet(), looking each value up
3. an explicit Iterator over entrySet()
4. the forEach() method with a lambda (since JDK 8)
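Here is a small sketch showing all four in one place (the map contents are arbitrary):

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class MapIterationDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // 1) for-each over entrySet() (JDK 5)
        for (Map.Entry<String, Integer> entry : map.entrySet()) {
            System.out.println(entry.getKey() + "=" + entry.getValue());
        }

        // 2) for-each over keySet(), looking each value up
        for (String key : map.keySet()) {
            System.out.println(key + "=" + map.get(key));
        }

        // 3) explicit Iterator over entrySet()
        Iterator<Map.Entry<String, Integer>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Integer> e = it.next();
            System.out.println(e.getKey() + "=" + e.getValue());
        }

        // 4) forEach() with a lambda (JDK 8)
        map.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}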
You can further see The Complete Java Masterclass course for more details. It’s also the most up-to-date course and covers every new feature introduced in new Java releases.
37) How to create read-only Collection in Java? (answer)
38) What is IdentityHashMap in Java? (answer)
39) Difference between IdentityHashMap and WeakHashMap in Java? (answer)
40) What is the difference between Comparator and Comparable in Java? (answer)
41) What is DeQueue? When do you use it? (answer)
42) How do you remove an Object from Collection? (answer)
43) What is the difference between the remove() method of Collection and Iterator in Java? (answer)
44) What is the difference between ArrayList and ArrayList<?> in Java? (answer)
45) What is the difference between PriorityQueue and TreeSet in Java? (answer)
46) How can I avoid “unchecked cast” warnings? (answer)
47) What is the “diamond” operator in Java? (answer)
48) What is the covariant method overriding in Java? (answer)
50) What is the difference between bounded and unbounded wildcards in Java generics? (answer)
That’s all in this list of 50 Java Generics and Collections Interview Questions. They are a very important topic from the Java Interview point of view, especially collections. Make sure you prepare them well before going for any interviews. If you need further preparation you can also check out these Java Interview books and courses:
Further Learning
Java Interview Guide: 200+ Interview Questions and Answers
Java Programming Interview Exposed by Markham
Cracking the Coding Interview — 189 Questions and Answers
Data Structure and Algorithms Analysis for Job Interviews
Other Interview Questions Articles you may like to explore
Thanks for reading this article so far. If you like these Java Generics and Collections interview questions then please share with your friends and colleagues. If you have any questions or feedback then please drop a note.
P. S. — If you are serious about mastering Java Collections — one of the most important Java APIs — then I also suggest you check out the Java Collections: Fundamentals course by Richard Warburton, a Java Champion, on Pluralsight. It’s a great course to learn why and which collections Java programmers should use.
P. P. S — Quick Update, Pluralsight free weekend is here and you can access all 7000+ Pluralsight courses and projects for FREE this weekend. Make this count and learn a new skill or level-up the existing one. Don’t miss this out, it’s only for this weekend. And here is the link again: | https://medium.com/javarevisited/50-java-collections-interview-questions-for-beginners-and-experienced-programmers-4d2c224cc5ab | [] | 2020-12-11 08:57:57.748000+00:00 | ['Algorithms', 'Software Development', 'Coding', 'Java', 'Programming'] |
How to configure Gmail to work for you, not against you | How to configure Gmail to work for you, not against you
Email should be a tool, not a nuisance. Here’s how it works.
Photo by Webaroo.com.au on Unsplash
“You have read all messages in your inbox.”
To many of us, this is porn. The infamous Inbox Zero has been reached. All email has been taken care of. Nothing to see here anymore.
Inbox Zero comes at a cost: “During the workday, respondents reported spending an average of 209 minutes checking their work email and 143 minutes checking their personal email, for a total of 352 minutes (about five hours and 52 minutes) each day,” according to CNBC.
Can you believe that? Assuming an 8-hour workday, this leaves about two hours each day for everything else, including coffee with colleagues, chit-chat in the office and, well, a tiny bit of actual meaningful work.
Essentially, the average American office worker does nothing but write and read email. The Germans are a little better with 2 hours per day, but the data is 5 years old and probably not super representative.
While communication is an important part of working in today’s world, it shouldn’t be the only thing we do. There are more important things out there that need to be done. | https://medium.com/swlh/how-to-configure-gmail-to-work-for-you-not-against-you-906e706eadbf | ['Dominik Nitsch'] | 2019-12-08 20:22:30.095000+00:00 | ['Lifehacks', 'Productivity', 'Gmail', 'Inbox Zero', 'Email'] |
The Weekly Authority #52 | How to Scale Content Marketing: 5 Tips for Creating Better Content While Conserving Your Resources
Content marketing can generate three times as many leads as outbound marketing at only 1/3 of the price (according to the Content Marketing Institute).
Creating great content day in and day out, however, can be a challenge. This is especially true for those who aren’t scaling their content marketing efforts.
To overcome this challenge and help you see a better ROI for your content marketing, this week, I’m sharing a handful of proven tips for effectively scaling content marketing.
5 Proven Tips for Scaling Content Marketing
1. Use a content calendar — A good content calendar can be a powerful roadmap to success when it comes to content marketing. Spanning several weeks to months, a content calendar can help you maintain a consistent brand voice, a regular publishing schedule and an effective plan for developing an array of content (from blogs and eBooks to social media content, videos and other visual content, Podcasts and much more).
2. Plan to atomize or repurpose your content — With content atomization, bigger pieces of content (like an eBook, for instance) are broken down into smaller pieces that are serialized (like a series of blog posts, for example). Repurposing content is similar and basically involves finding new ways to use, refresh and/or reformat existing content (like creating a series of Podcasts or videos out of a blog series, for instance). When you plan to atomize or repurpose content, you can squeeze the most out of every piece of content you create.
That can make it easier for any audience to digest (because the content is in various forms, suiting any audience’s preference whether it be reading, listening to or watching content). And that, in turn, can expand the reach and impact of your content.
3. Use the right tools — The right tools (in pretty much any setting) can be key to scaling the effort it takes to get something done. And content marketing is no different. Just a couple of the tools I use at Digital Authority to scale content marketing (and for various other things) are CoSchedule and Curata.
4. Figure out what’s working and replicate it — On a regular basis, check out the analytics data to evaluate which pieces of content are getting more (and less) traffic, clicks and conversions. When you know what’s working, you can give your audience more of what they like and what they’re looking for (both in terms of subjects/topics and format/types of content).
5. Get others to write for you — Another great way to scale content marketing while building authority, clout and attention for your brand online is to get other people to develop content for you. This can be guests (like guest bloggers, for instance) or even your own audience (i.e., user-generated content). If others are creating some of your content, it takes some of the burden off you — and it can make them (and their audience) more invested and interested in your content.
Have you tried these or any other tactics for scaling content marketing?
Tell me about your content marketing experiences, challenges and successes on Facebook and LinkedIn. And don’t hesitate to get a hold of me on social media to ask any digital marketing question or just to say ‘hi.’ I look forward to hearing from you! | https://medium.com/digitalauthority/the-weekly-authority-52-baf267b495bd | ['Digital Authority Co'] | 2017-03-29 12:02:01.847000+00:00 | ['Content', 'Digital Marketing', 'Marketing', 'Content Marketing'] |
Cambridge Analytica Explained: Data and Elections | We found this here.
Disclaimer: This piece was written in April 2017. Since publishing, further information has come out about Cambridge Analytica and the company’s involvement in elections.
Recently, the data mining firm Cambridge Analytica has been the centre of tons of debate around the use of profiling and micro-targeting in political elections. We’ve written this analysis to explain what it all means, and the consequences of becoming predictable to companies and political campaigns.
What does Cambridge Analytica actually do?
Political campaigns rely on data operations for a number of decisions: where to hold rallies, which states to focus on, and how to communicate with supporters, undecided voters and non-supporters. Essentially, companies like Cambridge Analytica do two things: profile individuals, and use these profiles to personalise political messaging.
What some reporting on Cambridge Analytica fails to mention is that profiling itself is a widespread practice. Data brokers and online marketers all collect or obtain data about individuals (your browsing history, your location data, who your friends are, or how frequently you charge your battery etc.), and then use these data to infer additional, unknown information about you (what you’re going to buy next, your likelihood to be female, the chances of you being conservative, your current emotional state, how reliable you are, or whether you are heterosexual etc.).
Cambridge Analytica markets (!) itself as unique and innovative because they don’t simply predict users’ interests or future behaviour, but also psychometric profiles (even though the company later denied having used psychographics in the Trump campaign and people who have requested a copy of their data from the company have not seen psychographic scores.). Psychometrics is a field of psychology that is devoted to measuring personality traits, aptitudes, and abilities. Inferring psychometric profiles means learning information about an individual that previously could only be learned through the results of specifically designed tests and questionnaires: how neurotic you are, how open you are to new experiences or whether you are contentious.
That sounds sinister (and it is), but again, psychometric predictions are a pretty common practice. Researchers have predicted personality from Instagram photos, Twitter profiles and phone-based metrics. IBM offers a tool that infers personality from unstructured text (such as Tweets, emails, your blog). The start-up Crystal Knows gives customers access to personality reports of their contacts from Google or social media and offers real-time suggestions for how to personalise emails or messages.
From a technical perspective, it doesn’t matter whether you predict gender, interests, political opinions or personality, the point is that you are using some data (your keystroke speed, your browsing history, your location) to learn additional, unknown information (your sexual orientation, your interests etc.).
This is terrifying! So everything can be predicted?
Well, yes, but also not quite. Profiling feels creepy (and it is), because it allows anybody with access to enough personal data to learn highly intimate details about you, most of which you never decided to disclose in the first place. This is worth repeating: someone can use your data to find out whether you are gay, even though you’ve never shared this information. Now here’s where it gets tricky: this derived information is often uncannily accurate (which makes profiling a privacy nightmare) but by virtue of being predictive, predictions also sometimes get it wrong. Also, a lot of things are inherently subjective. Who defines what is reliable or suspicious in the first place?
Think about the targeted ads you see online: how often do they misjudge your interests, or even your entire identity? From the perspective of an advertiser this is not a problem, as long as enough people click on ads. For you and me, and every single one of us, systematic misclassifications can have real-life consequences.
Even worse, profiling and similar techniques are increasingly used not just to classify and understand people, but also to make decisions that have far-reaching consequences, from credit to housing, welfare and employment. Intelligent CCTV software automatically flags “suspicious behaviour”, intelligence agencies predict internet users’ citizenship to decide whether they are foreign (fair game) or domestic (usually not fair game), and the judicial system claims to be able to predict future criminals.
As someone once said: it’s Orwell when it’s accurate and Kafka when it’s not.
So profiling is widespread. But did Cambridge Analytica influence the Brexit vote and the US election?
This is my favourite question because the answer is so simple: this is very unlikely.
It’s one thing to profile people, and another to say that because of that profiling you are able to effectively change behaviour on a mass scale. Cambridge Analytica clearly does the former, but only claims (!) to succeed in the latter. Even before the company was in the news, their methods raised a lot of eyebrows amongst experts on data-driven campaigning, with one consultant claiming that “everyone universally agrees that their sales operation is better than their fulfilment product”.
The idea that a single company influenced an entire election is also difficult to maintain because every single candidate used some form of profiling and micro-targeting to persuade voters — including Hillary Clinton and Trump’s competitors in the primaries. Not every campaign used personality profiles but that doesn’t make it any less invasive or creepy!
Evangelicals use data mining to identify unregistered Christians and get out the vote through the non-profit United In Purpose. The organisation profiles individuals and then uses a scoring system to measure how serious they take their faith.
As early as 2008, the Obama campaign employed a data operation to assign every voter in the country a pair of scores that predicted how likely they were to cast a ballot, and whether they supported him. The campaign was so confident in its predictions that the Obama consultant Ken Strasma has been quoted as boasting: “[w]e knew who … people were going to vote for before they decided.” Before Cambridge Analytica worked for Trump, the company supported Ted Cruz, who described his data operation as “very much the Obama model — a data-driven, grassroots-driven campaign”. By the time Trump hired Cambridge Analytica in 2016, Clinton employed more than 60 mathematicians and analysts.
Voter tracking also doesn’t end online. Shortly after the Iowa caucus in early 2016, the CEO of “a big data intelligence company” called Dstillery told the public radio program Marketplace that the company had tracked 16,000 caucus-goers via their phones to match them with their online profiles. Dstillery was able to learn curious facts, such as that people who loved to grill or work on their lawns overwhelmingly voted for Trump in Iowa.
All of these efforts to use data, profiling, and targeting to change voters’ minds make it incredibly hard for any one of these data companies to singlehandedly manipulate the outcome of an entire election.
So Cambridge Analytica is a snake oil vendor and I shouldn’t be worried?
No, no, you should definitely be worried!
Using profiling to micro-target, manipulate, and persuade individuals is still dangerous and a threat to democracy. The entire point of building intimate profiles of individuals, including their interests, personalities, and emotions, is to change the way that people behave. This is the definition of marketing — political or commercial. When companies know that you are depressed or feeling lonely to sell you products you otherwise wouldn’t want, political campaigns and lobbyists around the world can do the same: target the vulnerable, and manipulate the masses.
We are moving towards a world where your hairbrush has a microphone and your toaster a camera; where the spaces we move in are equipped with sensors and actuators that make decisions about us in real time. All of these devices collect and share massive amounts of personal data that will be used to make sensitive judgements about who we are and what we are going to do next.
Is this even legal?
Good question that begs a lawyer-answer: it depends. There are vast differences in the way that data is regulated in the US, around the world, and currently even within different countries of the EU.
Nearly every single 2016 US presidential candidate has either sold, rented, or loaned their supporters’ personal information to other candidates, marketing companies, charities, or private firms. Marco Rubio alone made $504,651 by renting out his list of supporters. This sounds surprising but can be legal as long as the fine print below a campaign donation says that the data might be shared.
Under UK and European data protection law, the situation is slightly different. Data protection regulates the way in which organisations can process personal data. You need some legal grounds for obtaining, analysing, selling, or sharing data and even then, the processing needs to be fair and not excessive. This is why the UK Information Commissioner’s Office is currently investigating whether Cambridge Analytica and others might have violated these rules, and some have argued that there is evidence they did.
What is also important to know: according to the UK Data Protection Act 1998 implementing EU Data Protection Directive 95/46/EC, any individual whose data is processed in the UK has the right to access it (Article 7), regardless of nationality.
Profiling is specifically addressed by the upcoming General Data Protection Regulation (GDPR), which gives citizens more rights to information and objection. It contains more explicit requirements for consent than previous regulations and the penalties for violations of the law can be much higher. The regulation is a good start but won’t solve all problems.
I want to read more about this!
Sure, here are some more resources.
We have an irregular newsletter that informs you about recent news on data exploitation.
If you want to understand the legal basis for profiling in the UK, the Information Commissioner’s Office has some good resources on their website, including a guide on how to file a data subject access request and raise a concern. We are contributing to ongoing consultations about profiling under GDPR, both in Brussels and with the ICO — check our website for updates.
Here’s an excellent Twitter feed that argues how Cambridge Analytica might have violated the UK Data Protection Act 1998.
David Carroll filed a data subject access request to Cambridge Analytica and shared some of his data on Twitter.
Wolfie Christl and Sarah Spiekermann wrote a superb report on corporate surveillance and digital tracking with lots of timely examples from finance to employment and marketing.
In 2012, ProPublica investigated how political campaigns use data about voters to target them in different ways.
In 2014, the US Federal Trade Commission published a report on data brokers in the US, called “A Call for Transparency and Accountability”. Tighter regulations of data brokers would affect the way that campaigns use data. | https://medium.com/privacy-international/cambridge-analytica-explained-data-and-elections-6d4e06549491 | ['Privacy International'] | 2018-04-12 14:41:43.181000+00:00 | ['Surveillance', 'Big Data', 'Privacy', 'Cambridge Analytica', 'Elections'] |
Day 20: Project, Project, Project | Me from 9am to 4pm today
It is so fun to get to work on a project that I am excited about! Part of the reason today was so fun is that I was actually able to make some decent progress on my canvas Avalanche game.
I was able to draw all the icicles at the top of the canvas and draw our little stickman hero at the bottom, and I succeeded in making our hero move left and right when the arrow keys are pressed.
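In case it helps anyone else learning canvas, the movement logic looks roughly like this (a simplified sketch, not my exact code; the hero object’s fields are made up):

const canvas = document.querySelector('canvas');
const hero = { x: 200, width: 20, speed: 5 };
const keys = {};

document.addEventListener('keydown', (e) => { keys[e.key] = true; });
document.addEventListener('keyup', (e) => { keys[e.key] = false; });

function update() {
  // clamp the hero inside the canvas while an arrow key is held down
  if (keys['ArrowLeft']) hero.x = Math.max(0, hero.x - hero.speed);
  if (keys['ArrowRight']) hero.x = Math.min(canvas.width - hero.width, hero.x + hero.speed);
  requestAnimationFrame(update);
}
update();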
Here’s a screen cap of my progress so far:
It doesn’t exactly capture our little hero cruising left and right, but there is the general idea. Next is the (even) harder part: Getting the icicles to fall at random intervals and then replicate in a way that makes sense.
Then comes the hardest part: Implementing collision detection so the game ends when an icicle comes down on our poor hero’s head! And then there’s the whole thing about keeping score by seeing how long the user can stay alive. Also, storing a high score with Local Storage.
Woooo. I have a lot of work to do, and I’m excited to continue over the weekend and next week! | https://medium.com/the-road-to-code/day-20-project-project-project-6629f83b54ec | ['Dylan Thorwaldson'] | 2017-09-01 23:45:42.215000+00:00 | ['Design', 'Software Development', 'JavaScript', 'Coding', 'Learning To Code'] |
Geocoding and Reverse Geocoding in Python | Recently, I was attending a predicting house prices hackathon. That was the first time I was dealing with a dataset having geographic coordinates - latitude and longitude . While working on this hackathon I have understood about Geocoding, Reverse Geocoding, and finding the distance between two coordinates. In this article, you are going to learn these three techniques.
We will make use of the geopy and reverse_geocoder libraries in this article. Let’s get started.
Geocoding
Geocoding is the process of converting addresses into geographic coordinates (i.e. latitude and longitude).
Geocoding is provided by different service providers such as Nominatim, Bing, Google, etc. These services provide APIs which can be used by anyone for Geocoding. Here, geopy is just a library that provides these implementations for many different services in a single package.
In the example below, we use the Nominatim service for geocoding. The result is stored in the location variable. We can then use location to get the required values such as latitude, longitude, etc. The raw attribute returns a dictionary containing all the returned values, and we can access any required field from location.raw .
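A minimal sketch of that example (the address and user_agent string are placeholders):

from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_geocoding_app")  # Nominatim requires a user agent
location = geolocator.geocode("175 5th Avenue, New York, NY")

print(location.latitude, location.longitude)
print(location.address)
print(location.raw)  # the full response as a dictionary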
Note: Different geocoding services such as Nominatim and Bing come with their own limitations, pricing, quotas, etc. For example, Nominatim is free, but there is a limit on the requests it can process.
Reverse Geocoding
Reverse Geocoding is the process of converting geographic coordinates (latitude & longitude) into a human-readable address.
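A minimal sketch (the coordinates here are arbitrary, pointing at central Berlin):

from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_geocoding_app")
location = geolocator.reverse("52.509669, 13.376294")  # latitude first, then longitude
print(location.address)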
Note that we need to pass latitude and longitude, in that order, to the reverse function.
Another alternative library for reverse geocoding is reverse_geocoder . Let’s look at an example:
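A minimal sketch with an arbitrary coordinate pair:

import reverse_geocoder as rg

coordinates = (52.509669, 13.376294)
results = rg.search(coordinates)  # also accepts a list of (lat, lon) tuples
print(results)  # nearest place name, admin regions and country code

Unlike the online services used through geopy, reverse_geocoder works offline against a bundled place-name database, which makes it convenient for bulk lookups.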
Distance between two coordinates
Suppose you want to find the distance between two coordinates (i.e. the distance between two locations given their latitudes and longitudes); you could use the geodesic function from geopy to calculate the distance.
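A minimal sketch, using the coordinate pairs from geopy’s documentation:

from geopy.distance import geodesic

newport_ri = (41.49008, -71.312796)
cleveland_oh = (41.499498, -81.695391)

print(geodesic(newport_ri, cleveland_oh).miles)  # roughly 538 miles
print(geodesic(newport_ri, cleveland_oh).km)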
Conclusion | https://medium.com/towards-artificial-intelligence/geocoding-and-reverse-geocoding-in-python-c0112b8679c2 | ['Chetan Ambi'] | 2020-10-29 21:02:27.908000+00:00 | ['Machine Learning', 'Data Science', 'Programming', 'Python'] |
When the material meets the immaterial | When the material meets the immaterial
With incentives, things are not always what they seem
“Most of economics can be summarized in four words: ‘People respond to incentives.’ The rest is commentary,” Steven Landsburg writes on the very first page of The Armchair Economist (perhaps not without a hint of provocation).
He has a point. In fact, the observation stretches well beyond economics as we generally understand it: much of biology and even evolution can be explained on the basis of incentives and disincentives. It is because organisms tend to repeat behaviour that provides them with a benefit, and to avoid behaviour that is disadvantageous that they survive, prosper, procreate, evolve and persist.
But the term ‘commentary’ does a lot of the work in that quote. ‘Incentives’ are often — certainly implicitly — interpreted as material incentives, i.e. money, or the things that are typically bought with money. It is, in a sense, at the heart of the assumption of rationality: if we get more money by doing something, we’ll do more of it (work harder, for example), and if we get less (or need to pay) if we do certain things, we’ll do less of them (committing crime, say).
Immaterial drivers
In practice, we are often motivated or discouraged by other drivers than material (dis)incentives, of course, and that is certainly part of the commentary. A sense of duty may stimulate us to volunteer or donate to charity, friendship may make us help a friend move house, and guilt aversion, rather than the fear of punishment, may prevent us from taking advantage of a colleague’s purse being left unattended on her desk. We may buy products of a ‘trusted’ brand, one that we are ‘loyal’ to, rather than buy a much cheaper, but otherwise mostly equivalent, alternative from a German discounter. All this is really comprised in Landsburg’s commentary.
Damn, 25 minutes late again (image: Melissa Maples CC BY)
Yet it can be interesting to distinguish between material incentives and immaterial influences on our choices. A famous and often quoted example of the remarkable interaction between the two is found in a paper by the economists Uri Gneezy and Aldo Rustichini, A fine is a price. They performed a field study in 10 day-care centres, where on average about 6% of pickups were up to 30 minutes late. There was no cost to being late, so the 94% timely pickups were obviously not motivated by a material incentive. The authors then introduced a fine in some of the locations, and found that in those centres the number of late pickups did not diminish, but instead roughly doubled. One explanation is that any original guilt of being responsible for a carer to stay late was crowded out by the payment of a fine. For some of the parents this was clearly a superior deal.
Some employers looking for staff offer their employees incentives to encourage people from their social networks to apply for jobs. This approach is motivated by the belief that such candidates tend to fit better, are of higher quality, and stay longer. An important side effect is that the cost of recruiting in this way is lower too. Overall this appears to be a win-win-win arrangement. Aside from the benefits to the employer, the existing employee gets a cool bonus — which can be as high as a few thousand dollars, euros or pounds — and the new recruit gets a great job.
Could a similar incentive work the other way around — i.e. if you’re looking for a job, would it make sense to incentivize the people in your network to introduce you to their own employers (or potential employers in their network)? At first sight, commentary-less incentives would seem to make it, at least in principle, a workable proposition. A quick word with HR, or with the recruiting manager of a suitable department, on behalf of your friend is such a small effort that even the smallest compensation would vastly outweigh it, even if the chance of success is low. (And anyone lucky enough to have an employer with an employee referral scheme may even benefit twice.)
But when we also look at the commentary, a different picture might emerge. Referring a friend or an acquaintance to a prospective employer is in the first place a favour, inspired by immaterial, social motives rather than material ones.
Material dominance?
That does not necessarily mean an additional material incentive might not boost that motivation, especially in the area of employment. Most of us choose to work where we work, and do what we do, as a result of both kinds of motives. We always face a trade-off, even if we are not entirely conscious of it: often we could earn more, but that would mean doing a job that is not as pleasant or rewarding as our current one. Likewise, we can envisage more appealing work, but it would not pay as much as we earn now. And still, between two jobs that are similar in all other respects, most of us would choose the one that pays more.
A case of incentive (image: Maklay)
It is not obvious how that material element would translate to the favour of helping someone find a new job, though. Imagine the amount on offer is small but not insignificant, say £50 or $50 — a nice extra for the referrer. But compared to the referral bonuses employers pay it is pitiful. It would be like asking a friend to spend a weekend helping you move house in return for £50. There are things we would do for free as a favour, or at a proper market rate, but not for something in between. Yet even an incentive comparable to an employer’s bonus might backfire. Leaving aside whether it would make economic sense to pay a friend £2,000 if their referral led to your finding a new job, they might question your friendship: do you really believe they would need that kind of incentive for what is really a favour between friends?
Interestingly, even in conventional employee referral schemes the bonus on offer is probably not the principal motivator, Laszlo Bock, Google’s former Senior VP of People Operations, says. People refer their friends because they like working for their company, not because they’re after a bonus.
Steven Landsburg’s observation is a fine heuristic for understanding and influencing people’s behaviour. But when you wonder whether material incentives are the best way to make people respond, don’t forget to check the commentary. | https://koenfucius.medium.com/when-the-material-meets-the-immaterial-f79707d16415 | ['Koen Smets'] | 2018-10-26 06:27:31.216000+00:00 | ['Recruiting', 'Economics', 'Psychology', 'Behavioral Economics'] |
James Has Fallen | James Gilliand was one of those kids who, on the first day of school in any random grade, appeared at a desk, having moved from some other school district far far away. He was a tall kid with close-cut and somewhat oily black hair. His complexion was a tint or two darker than most of ours, too, but even on that first day, he smiled wide and seemed ready to belong.
James wasn’t a loud guy. He didn’t make trouble either, and on the playground fields, he accounted well for himself. Tall enough to be a basketball go-to; rangy enough to be uncoverable in football; and agile enough to make infield plays, though never too fast.
What I’m saying is that James exuded a non-threatening air; he fit into our boy clique of Keith and Mark and Randy and Reggie well enough. I suppose I learned where his family lived, maybe way down Clarendon Avenue, but that didn’t matter so much. Our school friend group didn’t always carry over into neighborhood games. Many of my neighborhood friends were older, anyway, so there was enough “play” to go around.
So James became an easy, comfortable, uniform school friend. I feel like once, on some early fall day, I threw a touchdown pass to a wide-open James and we celebrated as boys do afterward. I wish I could see the moment as clearly as I do his smile, but with memory, nothing stays certain for very long anyway.
What I do remember, to my shame and near disbelief, was the day I saw James get up from his desk behind me and walk up to the teacher’s desk at the front of the room. Maybe he was asking for permission to use the restroom; maybe he was heading to the blackboard to solve a division problem. Maybe I went temporarily insane. We’ll likely never know.
But on his way back to his desk, as he was clearly minding his own business and thinking nothing of what could happen to a tall, well-liked new boy, I took a very strange, and uncharacteristic for me, chance.
I stuck my leg out in the aisle just as he got to my desk.
Now, boys in my grade often did this, usually to no success. The best that had ever happened was someone stumbled or hopped up, causing a distraction that our teacher, Mrs. Shivers, chalked up to no harm, no foul. Of course, we boys did much worse to each other: using two fingers to snap down on each other’s arms; popping each other on the arms for no reason other than suspected cooties.
Or, in the worst of near-puberty games, hiding behind doors and cracking each other in the “nuts” as we entered some unsuspecting space.
Who really understands boys of this age anyway?
So, did I think my leg trick would work? Did I want it to work? Did I think James Gilliand would see my leg and stand there smiling at me until I moved it? Did I convince myself that James was such a friend now that any trick on him would be friend-fitting?
I no longer remember what I thought before, but I do remember what I thought as I saw James hit my leg at ankle-level. I remember what I felt as I saw what happened next.
I truly never had and have never since seen a live human being imitate a felled tree before. It’s like he stopped dead, teetered for only a second, and then fell straight and true, forward onto his face. Nothing broke his fall or tried to: not his hands, not his knees, not anyone else’s mercy.
Right on his face, he fell.
He wasn’t hurt, and he wasn’t mad, either, when he rose. He just looked at me, shook his head, and grinned.
I want to tell you that Mrs. Shivers yanked me up from my seat, whacked me, and then sent me to Coach Howell’s, our principal’s, office. But she didn’t. Did she even scold me? Did she make me write sentences on the board or for homework:
“I will not trip James Gilliand or anyone else ever again?”
No, she didn’t.
I know that I apologized to James, and though I was mighty impressed by his fall and my power in that moment, I also know that I have felt appropriately bad about my act ever since.
Sometime later, though not that day, James and I got into an argument. It wasn’t about religion or politics or even about Alabama-Auburn football. I really don’t know what had happened, but I do know that for the only time in my life, I neither backed down from a “fight” nor could release myself from the headlock James gripped me in. In our boyhood wrestling matches, my brother Mike, our friend Robert, and I had imposed imitation headlocks, but when James caught me, I understood their power.
“Stop fighting, Terry,” he said. “I don’t want to hurt you.”
I continued struggling for another few seconds, but I’ve always been a realist, and I knew in those seconds that James Gilliand’s hold on me couldn’t be broken.
And that’s an apt way of sealing this memory, I think. A way I can try to make peace with. | https://medium.com/weeds-wildflowers/james-has-fallen-63ce77298ea6 | ['Terry Barr'] | 2020-11-10 06:02:33.007000+00:00 | ['Weeds And Wildflowers', 'Schools', 'Religion', 'Nonfiction', 'Friendship'] |
Basketball Stories That Have Almost Nothing To Do With Basketball | Freshman year we were blowing out a team, so they put the scrubs — me — in the game. For some reason, the other team continued to intentionally foul. I just happened to have the ball every time they fouled, so I ended up taking twelve free throws in just a couple minutes. I made eight of them, and considering a few years before I’d only scored eight points the entire season, this was one of the most prolific games of my career.
Immediately after the game, my mom drove me to visit my sister at college by myself for the first time. It was also the first time I drank alcohol.
“If Mom asks what we did, tell her we saw a movie,” my sister told me after Mom drove away. She was on the diving team and wasn’t supposed to drink during “dry season.”
“What movie did we see?” I asked, already paranoid about an interrogation.
“I don’t know. Big Fish.”
Then about a dozen of us crammed into someone’s dorm room, and I had four Coronas and an entire bottle of Boone’s Farm Melon Ball, which I haven’t consumed — or even seen — since that night. On a trip from the dorm room to the bathroom, I passed out on the floor. The next day I swore to myself I’d never drink again. My mom drove me home and never asked what we did. A decade later I saw Big Fish and loved it. | https://humanparts.medium.com/basketball-stories-that-have-almost-nothing-to-do-with-basketball-2955fa829d9c | ['Ben Kassoy'] | 2015-09-10 15:28:28.210000+00:00 | ['Storytelling', 'Humor', 'Sports'] |
Amrapali and Buddhist Monk | A beautiful story is told about a disciple of Gautam Buddha. He was a young monk, very healthy, very beautiful, very cultured. He had come — just like Gautam Buddha — from a royal family, renouncing the kingdom.
In the West, just as Cleopatra is thought to be the most beautiful woman in the whole past of humanity, in the East, a parallel woman to Cleopatra is Amrapali. She was a contemporary of Gautam Buddha. She was so beautiful that there were always golden chariots standing at the gate of her palace. Even great kings had to wait to meet her. She was only a prostitute, but she had become so rich she could purchase kingdoms. But deep down, she suffered. In that beautiful body there was also a beautiful soul which hankered for love.
When a man comes to buy the body of a woman, she may pretend great love for him because he has paid for it, but deep down she hates him because he is using her as a thing, as an object — purchasable; he is not respecting her as a human being. And the greatest hurt and wound that can happen to anybody is when you are treated as a dead thing and your integrity, your individuality, is humiliated.
This young monk went into the city to beg. Not knowing whose palace it was, he passed by so many chariots of gold and beautiful horses that he was amazed: “Who lives in this palace?” As he looked upward, Amrapali was looking from the window, and for the first time love arose in her heart — for the simple reason that the moment the young monk saw Amrapali, he bowed down to her with deep respect. Such beauty has to be respected, not to be used. It is a great gift of existence to be appreciated — but not to be humiliated.
At the moment this young, beautiful monk bowed down, suddenly a great upsurge of energy happened in Amrapali. For the first time somebody had looked at her with eyes of respect, somebody had given her the dignity of being a human being. She ran down, touched the feet of the monk and said, “Don’t go anywhere else; today be my guest.”
He said, “I am a bhikkhu, a beggar. In your great palace, where so many kings are waiting in a queue to meet you, it won’t look good.”
She said, “Forget all about those kings — I hate them! But don’t say no to my invitation, because for the first time I have given an invitation. I have been invited thousands of times by kings and emperors, but I have never invited anybody. Don’t hurt me, this is my very first invitation. Have your food with me.” The monk agreed.
Other monks were coming behind him, because Buddha used to move with ten thousand monks wherever he went. They could not believe their eyes, that the young monk was going into the house of the prostitute. With great jealousy, anger, they returned to Gautam Buddha. With one voice they said, “This man has to be expelled from the commune! He has broken all your discipline. Not only did he bow down to a prostitute, he has even accepted her invitation to go into her palace and have his food there.”
Buddha said, “Let him come back.”
For the first time Amrapali herself served food into the bowl of the monk. With tears of joy she said, “Can I ask a favor?”
The young monk said, “I don’t have anything, except myself. If it is in my capacity, I will do anything you want me to do.”
She said, “Nothing has to be done. The season of rains is going to start within two, three days…” And it was the rule of Buddhist monks that in the rainy season they stayed in one place for four months; for eight months of the year they were continually moving from one place to another, but for the four months of the rains it was absolutely necessary for them to stay somewhere where they could get a shelter.
Amrapali said, “In the coming four months, this palace should be your shelter. I don’t ask anything. I will not disturb you in any way. I will make everything as comfortable as possible for you, but don’t go for these four months.”
The monk said, “I have to ask my master. If he allows me, I will stay. If he does not allow me, you will have to forgive me: it is not in my hands, it is my master who decides where one has to stay.”
He came back. Everybody was angry, jealous, and they were all waiting to see if Gautam Buddha was going to punish him. Buddha asked, “Tell me the whole thing. What happened?”
He told Buddha everything. He also said that Amrapali… He did not use the word prostitute — that is a judgment. You have already condemned a woman by the very word, condemned her that she sells her body, that she sells her love, that her love is a commodity, if you have money you can purchase it.
He said, “Amrapali has invited me for the coming rainy season, and I have told her that if my master allows me, I will stay in her palace. It does not matter…”
There was great silence among the ten thousand monks. Nobody had thought that Gautam Buddha would say, “You are allowed to stay with Amrapali.” They could not believe their own ears; what were they hearing? A monk who has renounced the world is going to stay for four months in the house of a prostitute?
An old monk stood up and said, “This is not right! This man is hiding a fact. He says a woman, Amrapali, has invited him. She is not a woman, she is a prostitute!”
Gautam Buddha said, “I know, and because he has not used the word prostitute I am allowing him to stay there. He has respect — no judgment, no condemnation. He himself does not want to stay, that is why he has come here to ask his master. If you asked me to stay there, I would not allow you.”
Another monk said, “It is a strange decision. We will lose our monk! That woman is not an ordinary woman but an enchantress. This man, in four months, will be completely lost to the virtuous life, the good life, the life of a saint. After four months he will come as a sinner.”
Gautam Buddha said, “After four months you will be here, I will be here; let us see what happens, because I trust in his meditations and I trust in his insight. Preventing him will be distrusting him. He trusts me; otherwise there was no need to come. He could have thrown away the begging bowl and remained there. I understand him, and I know his consciousness. This is a good opportunity, a fire test, to see what happens. Just wait for four months.”
Those four months, for the monks, were very long. Each day was going so slowly, and they were imagining what must be happening, they were dreaming in the night about what must be happening. And after four months, the monk came back with a beautiful woman following him. He said to Buddha, “She is Amrapali. She wants to be initiated into the commune. I recommend her — she is a unique woman. Not only is she beautiful, she has a soul as pure as you can conceive.”
She fell at Gautam Buddha’s feet. This was even a bigger shock to those ten thousand people! And Buddha said to them, “I know these four months have been very long and you have suffered much. Day in and day out your mind was thinking only about what was happening between the monk and Amrapali, that he must have fallen in love with the woman and gone down the drain; four months will pass, the rains will stop, but he will not return — with what face?
“But you see, when a man of consciousness enters the house of a prostitute, it is the prostitute that changes — not the man of consciousness. It is always the lower that goes through transformation when it comes in contact with the higher. The higher cannot be dragged down.”
Her name, Amrapali, means “of the mango grove.” She had the biggest mango grove, perhaps one hundred square miles, the most beautiful place, and she presented it to Gautam Buddha. And she presented her palace, all her immense resources, for the spread of the message of Buddha.
Buddha said to his sangha, to his commune, “If you are afraid to be in the company of a prostitute, that fear has nothing to do with the prostitute; that fear is coming from your own unconscious because you have repressed your sexuality. If you are clean, then all judgment disappears.”
So the awakened has no judgments of what is good and what is bad, and the child has no judgment because he cannot make the distinction — he has no experience. In this sense it is true that every awakened person becomes a child again — not ignorant, but innocent. But every old person is not an awakened being. It should be so; if life has been lived rightly — with alertness, with joy, with silence, with understanding — you not only grow old, you also grow up. And these are two different processes. Everybody grows old, but not everybody grows up.
—
From Osho, Reflections on Khalil Gibran’s The Prophet, Chapter 33 | https://medium.com/devansh-mittal/amrapali-and-buddhist-monk-e91123ab4569 | ['Devansh Mittal'] | 2019-10-14 14:01:41.775000+00:00 | ['Education', 'Buddhism', 'Spirituality', 'Psychology', 'Osho'] |
Spark versus cuDF and dask | When working with a large amount of data, we often spend time analyzing and preparing the data. The purpose of this article is to compare the performance of two technologies very present in the big data universe. I will use Spark and cuDF to understand which commands are faster on both technologies.
Apache Spark is a general-purpose cluster computing system. It delivers speed by providing in-memory computation capability. Whereas a CPU uses a few cores focused on sequential serial processing, a GPU has thousands of smaller cores made for multitasking.
Environment
For this test, I'll use a Cloudera Hadoop environment with six datanodes running Spark version 2.3; we also recently purchased new servers with Tesla V100 GPU cards.
Spark Environment
Nvidia Environment
To get started, I prepared a dataset with just over 150 million rows and a few columns. The purpose here was to have a sizable database to generate some numbers.
CSV and Parquet files
I stored this database in two different formats, parquet and csv, in order to evaluate the early reading stages when we received data from the various systems. It is well known that parquet format brings us many advantages, but we do not always receive data already in this format, and it is very common to receive raw data in delimited text files.
Reading Data
As a first test, I read the files in both environments and in both formats, see the result.
Spark Read csv and counting records
cuDF read csv and counting records
Spark read parquet and counting records
cuDF read parquet and counting records
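The commands behind those screenshots look roughly like this (a sketch: spark is the active SparkSession, and the file paths are placeholders for the real dataset):

# PySpark: read each format and count the rows
df_csv = spark.read.csv("data.csv", header=True, inferSchema=True)
df_csv.count()
df_parquet = spark.read.parquet("data.parquet")
df_parquet.count()

# cuDF: the same operations on the GPU
import cudf
gdf_csv = cudf.read_csv("data.csv")
len(gdf_csv)
gdf_parquet = cudf.read_parquet("data.parquet")
len(gdf_parquet)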
The time to read files does not differ much between the technologies tested, as we have to take into account that this requires disk I/O operations. Because parquet files are very compact, reading time is much better than for csv files. The new GPU servers also came with SSD disks, which speeds up this kind of reading.
But the count operation was much faster on the GPU compared to Spark: a few milliseconds instead of seconds. Here we begin to see the processing power of the GPU.
Group by mean()
The group by command is often used in early analysis. The idea here was to perform some commands, summarize data and evaluate performance times.
Spark group by CSV example
Spark group by parquet example
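A rough reconstruction of the Spark commands above (the grouping and value columns are placeholders for the real ones):

# PySpark: mean of a numeric column per group
df_csv.groupBy("group_col").mean("value").show()
df_parquet.groupBy("group_col").mean("value").show()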
However, when I tried to do the same in the cudf environment, I had some problems.
cuDF group by csv example
cuDF group by csv kernel restarting
The same error happens for the parquet dataframe, and for that reason, I won’t even show the same error here.
All was not lost: I could try using Dask, because in this environment I have two GPUs with 32 GB of memory each.
Local CUDA Cluster
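The cluster setup behind that screenshot looks roughly like this (a sketch; the file path and column names are placeholders):

from dask.distributed import Client
from dask_cuda import LocalCUDACluster
import dask_cudf

cluster = LocalCUDACluster()  # starts one worker per visible GPU
client = Client(cluster)

ddf = dask_cudf.read_csv("data.csv")
ddf.groupby("group_col").mean().compute()  # the work is partitioned across both GPUs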
dask cudf group by csv example
dask cudf group by parquet example
Summary of execution times
execution times for command mean
Group by Max()
After checking that it is possible to perform group by operations on the same dataset that was used in Spark, but this time using Dask, I ran a few more commands.
Spark group by max operations
Dask cuDF group by max csv
Dask cuDF group by max parquet
Summary of execution times
execution time for max command
Conclusion
Obviously, processing data on a GPU is much faster than on a CPU, but we have to consider volumes and needs. Data scientists and engineers are known to spend a lot of time preparing data before even feeding it into machine learning models. In my tests, times improved a lot, but I also had problems with the volume of data.
While writing this article and running into problems with cuDF, I found Dask, and noticed that scalability issues can be solved with this framework. GPU servers are much more expensive than Hadoop servers if you look at them individually, but I think it is possible to achieve significant time and cost savings by using GPU servers properly. I can see that cuDF is developing every week, and in a short time we will have simple ways to process large volumes of data.
References
[1] https://rapids.ai/start.html
[2] https://docs.rapids.ai/start
[3] https://rapidsai.github.io/projects/cudf/en/0.10.0/10min.html
[4] https://rapidsai.github.io/projects/cudf/en/0.10.0/dask-cudf.html
[5] https://docs.dask.org/en/latest/dataframe.html
[6] https://spark.apache.org/docs/2.3.0/
[7] https://data-flair.training/blogs/apache-spark-ecosystem-components/
[8] https://www.tomshardware.com/reviews/gpu-graphics-card-definition,5742.html | https://medium.com/datalab-log/spark-versus-cudf-and-dask-4be71a45c055 | ['Amilton Pimenta'] | 2019-12-19 17:20:59.543000+00:00 | ['Software Engineering', 'Cuda', 'Spark', 'Gpu', 'Hadoop'] |
Practical Machine Learning Tutorial: Part.3 (Model Evaluation-1) | Practical Machine Learning Tutorial: Part.3 (Model Evaluation-1)
Multi-class Classification Problem: Geoscience example (Facies)
In this part, we will elaborate on some model evaluation metrics specifically for multi-class classification problems. Accuracy, precision, recall, and the confusion matrix are discussed below for our facies problem. This post is the third part of the series, following part1 and part2. You can find the Jupyter notebook file for this part here.
When I was new to machine learning, I considered constructing a model the most important step of any ML task; now I see it differently: model evaluation skill is the fundamental key to modeling success. We need to make sure that our model works well with new data. We also have to be able to interpret various evaluation metrics to understand our model's strengths and weaknesses, which gives us hints for model improvement. As we are dealing with a multi-class problem in this tutorial, we will focus on the related evaluation metrics, but before that, we need to get familiar with some definitions.
3–1 Model Metrics
When we are working with classification problems, there are four kinds of possible model outcomes:
A) True Positive (TP) is an outcome where the model correctly predicts the positive class. In our dataset, the positive class is the label whose prediction we are specifically examining. For example, if we are analyzing "Dolomite" class prediction, TP is the number of test samples correctly predicted as Dolomite by the model.
B) True Negative (TN) is an outcome where the model correctly predicts the negative class. For Dolomite prediction, the negative class covers the samples that are truly not Dolomite and are correctly predicted as one of the other facies classes.
C) False Positive (FP) is an outcome where the model incorrectly predicts the positive class. In our dataset, these are samples of other facies classes that are incorrectly predicted as Dolomite.
D) False Negative (FN) is an outcome where the model incorrectly predicts the negative class. Again, for Dolomite prediction, FN counts the Dolomite samples predicted as non-Dolomite classes.
1. Accuracy: it is simply calculated as the fraction of correct predictions over the total number of predictions.
Accuracy = (TP+TN) / (TP+TN+FP+FN)
2. Precision: this metric answers the question: what proportion of positive predictions is actually correct?
Precision = TP / (TP+FP)
Looking at the equation, we can see that if a model has zero false positive predictions, the precision will be 1. Again, for Dolomite prediction, this index shows what proportion of the samples predicted as Dolomite are truly Dolomite (and not other facies classified as Dolomite).
3. Recall: recall answers the question: what proportion of actual positives is classified correctly?
Recall= TP / (TP+FN)
Looking at the equation, we can see that if a model has zero false negative predictions, the recall will be 1. In our example, recall shows the proportion of the Dolomite class that is correctly identified by the model.
Note: to evaluate model efficiency, we need to consider precision and recall together. Unfortunately, these two parameters tend to act against each other: improving one often decreases the other. The ideal case is for both of them to be near 1.
4. f1_score: The F1 score can be interpreted as a weighted average of precision and recall, where the F1 score reaches its best value at 1 and its worst at 0. The relative contributions of precision and recall to the F1 score are equal. The formula for the F1 score is:
F1 = 2 * (precision * recall) / (precision + recall)
Let’s see one example of Logistic Regression classifier performance:
Run:
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, precision_recall_fscore_support

model_log = LogisticRegression(C=10, solver='lbfgs', max_iter=200)
model_log.fit(X_train, y_train)
y_pred_log = model_log.predict(X_test)
print(classification_report(y_test, y_pred_log, target_names=facies_labels))
To evaluate the Logistic Regression classifier's performance, let's look at the first facies class, Sandstone (SS). When this model predicts a facies as SS, it is correct 75% of the time (precision). On the other hand, the model correctly identifies 89% of all SS facies samples (recall). As expected, the f1_score falls between these two metrics. Support is the number of samples of each class in the test set.
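These report numbers can also be derived by hand from the confusion matrix. The sketch below assumes the y_test and y_pred_log variables from the block above, and that SS is the first label (index 0):

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred_log)   # rows are true classes, columns are predictions
k = 0                                       # index of the class to inspect, here SS
tp = cm[k, k]                               # SS correctly predicted as SS
fp = cm[:, k].sum() - tp                    # other facies predicted as SS
fn = cm[k, :].sum() - tp                    # SS predicted as other facies
precision = tp / (tp + fp)                  # ~0.75 in the report above
recall = tp / (tp + fn)                     # ~0.89 in the report above
f1 = 2 * precision * recall / (precision + recall)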
Let's write a block of code that implements the above procedure for all models and plots the averaged results. Up to line 15, we define the model objects with the hyper-parameters we already obtained from the grid-search approach. Then (lines 16 to 25) the models are appended to a list so we can fit and cross-validate them in order. After cross-validation, we store the metric results in a list for each model. In lines 37 to 52, we set up a for loop to calculate the average value of each metric for each model. The rest of the code is a plotting task.
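Since that gist is not embedded here, a condensed sketch of the loop could look like the following; the model list and scoring keys are illustrative stand-ins for the tuned estimators described above:

from sklearn.model_selection import cross_validate

models = [("Logistic Regression", model_log)]   # append the other tuned models here
scoring = ["accuracy", "precision_weighted", "recall_weighted", "f1_weighted"]

results = {}
for name, model in models:
    cv = cross_validate(model, X_train, y_train, cv=5, scoring=scoring)
    results[name] = {m: cv["test_" + m].mean() for m in scoring}  # averages to plot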
€940,000 of grants awarded to media organisations to report on global development challenges
Supporting sustainable and impactful journalistic coverage is essential to increase public awareness of, and interest in, the many challenges and solutions within global development and the Sustainable Development Goals. The European Development Journalism Grants programme supports journalistic media organisations in year-long reporting projects about global development topics.
We are therefore excited to be able to announce the 8 awarded projects in this new round of the funding programme.
De Volkskrant (The Netherlands) will investigate the consequences of world trade and globalisation in the least developed countries by producing relatable stories, starting and ending with a specific product such as the battery in a smartphone or a t-shirt, and easy-to-understand standalone data visualisations.
“In 2018 de Volkskrant investigated the challenges of food security in Africa with our (EJC funded) project De Voedselzaak. Our harsh conclusion was that the continent would be able to feed itself if the West and multinational corporations would give it a fair chance. We’re thrilled that this grant enables us to pick up where we left off and delve deeper into the complex and abstract systems that cause hunger and inequality. With on the ground reporting and consumer-centered journalism we want to show our readers what the consequences of their actions are.” — Stan Putman, editor
Disclose (France) will investigate French development aid in African countries, and the lack of transparency in the use of public funds, through on the ground reporting and a series of investigative stories.
“At a time when information for citizens is essential to assert their rights and hold power to account, this fund will allow us to investigate in depth the use of public funds but also to reveal the private interests that sometimes hide behind development aid. A strong democratic society requires independent journalism able to conduct public interest investigations. For this support, thanks to the EJC.” — Geoffrey Livolsi, editor-in-chief
Euronews (Europe) will report on the issue of toxic masculinity in some African countries because the expectations surrounding manhood can be an obstacle to achieving a more egalitarian society. In addition to a podcast and opinion pieces from the continent, the reporting will be done entirely in collaboration with local journalists in an effort to listen to as many African voices as possible.
“We’re a diverse team, working in 12 languages and representing many more cultures. Gender equality is non-negotiable to us in our work so we are grateful to the EJC for the opportunity to investigate the issue of toxic masculinity. In the wake of the #MeToo era, we think it is vital to engage men in the conversation by showing that gender-based pressure harms them too.” — Euronews team
New Internationalist (United Kingdom) will address the looming hunger crisis for the world's poorest people, seeking out the key ingredients for a more equitable and sustainable food system. The Seat at the Table series will trace the food supply chain from international, national, and informal markets down to consumers, highlighting dangers and risks and pointing to long-term, sustainable solutions to this dysfunctional food system.
“It’s hugely exciting to have the chance to dig deeper into what is sadly looking set to be humankind’s primary challenge in the years ahead — how to make sure that everyone gets to eat. Covid-19 was a real wake-up call, revealing how millions are just one shock away from hunger, especially in low-income countries. Our reporting will focus on these intense vulnerabilities — and solutions to these — in the Global South and also look at how in wealthy nations too, millions of families struggle to access nutritious food. Thanks to EJC, New Internationalist can stay with this story throughout the coming year — a journalist’s dream! — to unearth stories from across sub-Saharan Africa and the UK.” — Hazel Healy, co-editor
RiffReporter eG (Germany) will investigate how the protection of wetlands, rivers and other natural ecosystems is connected to the supply of clean water and sanitation. Showing the interconnection between biodiversity and development to highlight scalable solutions for one of the most important yet highly underrated topics: the supply of clean water.
“One of the biggest challenges in the coming years is to provide drinkable water to a growing world population. But this challenge is not only about wells, pipes and water purification. The real sources of drinkable water are ecosystems like wetlands, rivers and areas with large groundwater reservoirs. Our team of 10 reporters will investigate in Europe, Africa and Asia how these ecosystems can be protected.” — Christian Schwägerl, co-founder and CEO “We are delighted to receive support for RiffReporter’s independent science and environment journalism on such an important topic. With our newly started syndication ‘Marketplace’ for distributing RiffReporter content, we will work with a growing network of publishing partners to provide independent, high-quality reporting to the public during this important phase.” — Tanja Krämer, co-founder and CEO
SciDev.Net (United Kingdom) will launch a weekly podcast looking at science and health in sub-Saharan Africa, giving African journalists a platform to talk about how science affects their communities and giving African researchers an opportunity to highlight their work to European and African audiences.
“We are really excited to be selected by the European Journalism Centre for this grant and we hope it will give us a springboard to launch a self-sustaining podcast focusing on the really exciting work being done by African scientists and innovators.” — Ben Deighton, managing editor
Tageszeitung (Germany) will follow the trail of German development money in projects focusing on access to clean water and sanitation. The investigation, with a solutions-oriented approach, aims to reveal both the problems and the potential solutions around water in the least developed countries.
“We are honored to be selected for this grant. In the current Corona-crisis we have seen that access to clean water and sanitation infrastructure is limited in most of the least developed countries but that it is essential for public health, the development of poor communities and gender equality. The funds will allow us to dig deep into the major challenges of access to clean water and availability of sewerage infrastructure in four different regions of the world. We are grateful to have more resources to allow us, as reporters, to do mainly on-the-ground reporting and to come up with a multimedia approach to publishing our findings.” — Simone Schlindwein, Tageszeitung-correspondent in the Great Lakes Region in Africa
Vanity Fair (France), with the project ‘Raise your Voices’, will focus on twelve resilient young women in different parts of the world who have by their actions changed their communities on issues such as poverty, hunger, education, health, gender equality and sanitation. The print edition will each month publish a four-page-feature. Podcasts and videos will be shared through its website and social media channels.
“We are really glad to be able to start this journey thanks to the grant! Vanity Fair is recognised for its detailed investigations, but the French edition is also in the process of evolving into a direction where we will focus more on current issues. The project ‘Raise your Voices’, which will be telling the story of twelve young fearless girls committed to a cause, will play a big role in this process.” — Elvire Emptaz, editor
(None of the 8 awarded projects will be published behind paywalls; all will be freely accessible to a national or global online audience.)
Leads, Leads, Leads, Leads, Leads! Brand Strategy?
Leads are the thing we all need to keep the sales pipeline filled. There is no denying that. As a business, you generally know how many leads it'll take for you to close a certain number of deals. (If you don't, stop right now and figure it out before you run out of money.) You hustle to get those leads and concentrate on tactics that will drive as many people to your website as possible. You create forms to capture email addresses. You post ad, after ad, after ad. You attend trade shows, create videos, the list goes on and on. And, at the end of all of this, not much happens. Why? Because leads don't bring you money. Clients do. And those people visiting your site are possibly not quality leads.
You can have 100,001 leads with 0 new clients. Or, you could have 10 leads with 1 new client. I hear people talk about it all the time as it relates to social media. The term “friends” on Facebook has warped our whole idea of what an actual “friend” is. Would you rather have 1,239 contacts on your phone you can’t rely on that heavily or 5 contacts on your phone you could call anytime you needed anything? We all know the answer. We like the world to see quantity, but we really care about quality.
Don’t Fall For It
I get a bit angry when I hear about marketing companies that approach businesses and promise a certain number of leads for a certain dollar amount. These businesses get soooooo excited about the prospect of hundreds (or thousands) of new leads per month! Then, they run ads, or whatever, to get those leads. They need to live up to their end of the deal, right? They promised leads, and by golly, they will get some leads! But those leads visit your website, maybe try out what you have to offer, only to realize that it's not a good fit at all. Those promised leads are horrible. They do nothing but justify the existence of that marketing company.
What has this “marketing company” failed to do? They have failed to embrace and execute your brand in a way that targets, attracts, and delivers the right people at the right time to your business. These fly-by-night “marketing companies” don’t care about your brand. They say they do “branding”, but I’m not sure they know what that means. How long do they spend with you getting to know your company before launching the first ad, or website change? How much effort do they put into generating clear and unique copywriting that speaks to your audience? Are the photos, typography, or website templates they use pulled from a mass-produced library of designs they’ve used for 1,000 other businesses like yours? Don’t fall for it. Don’t be another tally on their whiteboard. You deserve better.
The Struggle Is Real
65% of companies struggle to generate traffic and leads. No surprise, right? Driving leads is hard! That’s why we are so eager to pay someone else to get us more traffic and more leads!
We get it. We like leads. Like I said before, without those leads we all die. The point here is to make sure we are getting the right kinds of leads. This is where a real brand strategy can help. Taking the time to define who you are, what you are, why you exist, who you’re targeting, what you’re selling, and what you want people to say about you when you’re not around will pay off for you in the end. We are so eager to get that logo done, get some business cards, launch a website, set up a Facebook page, and get t-shirts printed, that we don’t take the time to understand where we’ve been, where we are, where we’re going, and why we exist in the first place!
The Challenge
I want to leave you with this challenge. Show your website to a complete stranger and give them 7 seconds to tell you:
1. what you do, and
2. how you will make their life better.
Next, ask 5–10 people inside or outside of your business what you do. How many different answers do you get?
Just start there. Think about what was said. Is that who you want to be? If not, then it's time to step back and strategize. Tactics without strategy just leave you feeling empty and lost. Tactics fail; strategy is for the long haul.
Let’s take time to think about strategy. Then, we can use that info to drive the right kinds of leads. The leads that turn into clients. Because they are the ones that pay us real money.
—
Want to be more effective with the time you have creating marketing strategies? You can read The 5 Step Process For a More Structured Marketing Strategy eBook for a more in-depth discussion of these concepts and how you can begin to implement them.
Looking Back
Medium has been one of those pleasant surprises in my life. And maintaining interest for a whole year was completely unexpected. I figured I would run out of material or interest after a few months. Yet here I am. Still enjoying it. Still writing.
The simple reason is, I have found an audience that responds. I don’t get too hung up on numbers. Audience size is not that important if they honestly like your efforts. I learned that as a musician. Polite applause stinks. I want people to stand and cheer, or get up and dance, or come up to me personally after a gig just to say how much they enjoyed it. Then I know their appreciation is real. It’s a good feeling.
I feel the same way when I get responses on Medium. Claps are nice and I greatly appreciate them. But if someone makes the effort to actually respond to one of my stories then I feel like I have accomplished something. Sure, I’d love to get thousands of reads on every story. I’d also love to win the lottery. But neither is likely so I am thankful for the reads I get.
I have a small group of regular readers on Medium. I consider them my friends as I have gotten to know them through their responses. A person's personality eventually comes out in their response writing, which is not usually as edited and polished as their published stories. It is more revealing. So I feel like I get to know them after a while. And I enjoy writing for them. I won't start listing them. You know who you are. But I will say thank you, Medium friends, for reading and responding. You bring me joy.
Pallet story #3 — Design system in a small team
In the previous episodes, we explained the processes behind component creation and design system management, but from a design perspective.
In this article, we will describe how we manage to push these components into production in a fast-growing and changing environment.
Commute routine
Commuting, everyday, the same way, without any issue is probably the biggest satisfaction of my daily routines.
And when it comes to the components we plan to add in Pallet (our design system, you remember ?), I expect the same satisfaction, a routine with no issue, a clearly defined process that repeats over time, which could be summarized as follows:
Theory often works… in theory
With a clearly defined path to follow, all we have to do now is to dedicate resources in order to get things done.
On some occasions, we can allocate resources to that scheme.
But most of the time, we have high-business-impact features to deliver, code to maintain, code to review, specs to detail, documentation to write, etc…
Basically, we often run out of time… like, most of the time.
That’s a real issue.
But there is a bigger problem. If we don't improve and rely on our design system, we trigger issues like design inconsistencies and code repetition, and we fail to provide reusable UI parts that increase development speed.
This is why we can’t do without it. And we have to find another path to support design, engineering and business at the same time.
Don't stick with a single plan, or you'll get stuck
Our new features generally can’t wait for the design system, and the traditional design system release is not flexible enough.
We need another workflow, which allows us to feed the design system with components while making features.
We experimented with another approach: developing design system components directly in our application codebase, and moving them to the design system with the smallest effort once they work as expected.
This strategy implies a strong decision: the app and the design system share the same UI dependencies, at the same versions. This decision may appear risky, short-term, or unscalable to some of you.
Well, in fact, it works great.
There is no perfect decision; focus on the benefits
Of course, this decision is questionable (mostly from a developer's perspective), but what initially looked like a scrappy workaround turned out to be profitable in many aspects:
- we can deliver missing design system pieces on demand.
- improved feedback loops during real feature development vs. a classic release flow.
- better debugging and polishing than with Storybook stories, as we test and use components in their real usage context.
- moving the components to the design system is a low-cost process.
Find your way
Here we are: after a year and a half, we only ship UI made with Pallet components. We save a lot of time when building a feature by just composing Pallet elements.
We gained a lot of confidence in design consistency, with quicker design reviews. And finally, users and the business get the most out of it, thanks to the design's impact on the user experience.
Depending on your business, your team and your growth, there must always be an alternative path if the standard way is blocked.
You may think you’re heading the wrong way, but there’s no wrong way when multiple benefits await you by the end of the journey.
Unlike my daily commute, processes and flows are not fixed routines or repeating schemas; they evolve with your environment, and satisfaction comes once you find the right path for your company.
Thank you for reading.
Interfaces in Golang
INTERFACES
Before learning anything about what interfaces are, let's take a look at an example to clearly understand why we need interfaces and how they can save us a lot of trouble when coding in Go.
Create a new directory and inside it create a main.go file. Paste the following code inside it.
package main

import "fmt"

type grade11Marks struct {
    math    int
    physics int
}

type grade12Marks struct {
    math     int
    computer int
}

func main() {
    ram := grade11Marks{50, 80}
    shyam := grade12Marks{60, 70}

    ram.printMarks()
    shyam.printMarks()
}

func (m grade11Marks) printMarks() {
    fmt.Println("math: ", m.math, " physics: ", m.physics)
}

func (g grade12Marks) printMarks() {
    // Note: print the computer field with its own label, not "physics"
    fmt.Println("math: ", g.math, " computer: ", g.computer)
}
Go ahead and run the code with the command go run main.go.
Basically, we declare two different struct types, grade11Marks and grade12Marks, with one variable of each: ram and shyam, respectively.
If you are not familiar with structs in Go, please refer to the previous article that covers them.
We create a receiver function named printMarks() for each struct separately, which is used to print the values inside the struct variable.
Now, these two structs are very similar, and we could have a lot of common functions for them to share, like, say, storing them in a database, or even printing the values.
But one of the major restrictions in Go is that it is a strictly typed language: we cannot pass variables of two different types to the same function, even if the action performed inside the function is essentially the same for both.
Below is an example where we use an interface to represent both struct types, grade11Marks and grade12Marks, so we can pass values of either type and perform actions on them. The example might seem trivial, but in bigger projects with many more functions and variables, this is a very useful method for code reusability.
Go ahead and add the following code before the struct definitions:
type marks interface {
    printMarks()
}
This is the definition of an interface type named marks. It basically tells Go that any type that has a receiver function named printMarks satisfies the marks interface. So an interface in Go is a type that can stand in for multiple concrete types.
If you do not understand the concept behind receiver functions, please read the previous article.
Note: if the receiver function returns a value, the return type has to be mentioned as well. For example, if our function had a return type of string, then instead of just printMarks() we would need to write printMarks() string.
Now, add the following function, at the end of the code:
func print(b marks) {
    b.printMarks()
}
What this function does is take a value b of type marks, the interface that covers all types that have the printMarks receiver function.
Then we call the printMarks() function to perform the action.
In more complex situations this pattern could save us a lot of code redundancy, but for the sake of simplicity we are just calling the printMarks() receiver function.
Now, to actually use this function, remove the following lines:
ram.printMarks()
shyam.printMarks()
And, add the following lines:
print(ram)
print(shyam)
Now, run the program and see the output. The same output as before is seen.
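If everything compiles, both runs should print the two records in order, something like:

math:  50  physics:  80
math:  60  computer:  70

(fmt.Println inserts a space between adjacent arguments, hence the double spaces.)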
How I Got Promoted in a Year from Junior to Mid Senior Developer
Effective Tips From Being Promoted at Twitter
Photo by cgower on Unsplash
Everyone wants to get promoted. You get a title change. You get a compensation increase. You can work on larger-scope projects. But what does it take to get promoted? What are the steps to take so that one does not stagnate in his or her career?
There is no one perfect way to get promoted as a software engineer, but following these tips should set any developer up for success!
1. Work on Multiple Projects
The main objective is to ensure that you can work on multiple projects and drive them to completion. You need to prove that you are a competent coder.
When selecting which projects you want to work on, you must ask yourself these questions and make sure the project satisfies these requirements. If not, discuss with your manager to see if it is possible to work on a different project that can meet these requirements.
i.) Is it significantly impactful?
This question can be answered in numerous ways. Does it improve the team’s operational load significantly? Does it improve the reliability of your services? Does it support new, important use cases for other external teams?
All projects are impactful in some sense. The keyword here is significant. If you are not sure about the impact of the project, ask your manager for clarity.
ii.) Is it high visibility?
You want to work on a project that many people are waiting on and are excited for it to be completed. Working on a high visibility project will require you to collaborate with multiple teams, which has several benefits.
First, it will foster relationships with various engineers and teams. You will start to understand what are the different projects that each team is working on and how they all fit together.
Second, you will increase your scope of influence. External teams may start asking you for feedback in evaluating their new services or feature changes.
These skills are essential characteristics of any staff engineer, and even of a senior engineer. They take a long time to build, which is why it is important to start early.
iii.) Will I be the project lead?
You need to get clarity if you are leading the project or if someone else is leading the project. It may be that in your first project or first two projects, you are not the project lead, which is expected, since you may still be onboarding onto the team.
However, in subsequent projects, you may want to start taking on project lead responsibilities one at a time. You want to show proof that you can be an independent engineer as well as an engineer that can drive projects to completion. That would include tasks, such as writing technical design documents, planning the milestones, communicating to external teams for collaboration if necessary, etc.
If the project satisfies all 3 checkboxes, then it is a good project to work on! This will help you write an effective promotion document when the time comes.
7 Common SEO Mysteries Solved
Photo by Diggity Marketing on Unsplash
Even for experienced SEO professionals, search optimization can be confusing. There are hundreds, if not thousands of different factors that can enter into how your site ranks, and those factors change frequently (and oftentimes, without warning).
If you’re adhering to best practices — to the best of your ability — and there’s a sudden disruption in your progress, it’s common to feel disheartened. But take comfort knowing that it’s happened to all of us, and those pesky SEO mysteries aren’t always as mysterious as they may first seem.
Draw insight from these seven common SEO “mysteries” that often plague new campaigns:
1. Why did my traffic suddenly drop?
You've been seeing steady results for a while now, but all of a sudden, your organic traffic has declined. What could be the cause? It depends on how severe the decline is. If you notice a decline of 10 percent or less, it's probably nothing to worry about; you should expect some natural fluctuations due to index refreshes, new competitors, and new factors.
At the other extreme, if your traffic drops to almost nothing (which is extremely rare), you have a serious problem. It could mean your site is down or you’re facing a manual Google penalty (you can check to see if either of these are affecting your site in Google Search Console).
If you're somewhere in the middle, check for any recent "bad" inbound links that could be considered spam by Google, any recent content changes to your site that may have changed your page URLs, or a new Google update that may have significantly changed your rankings.
2. Why aren't my pages showing up in search results?
If your pages aren’t showing up in Google search at all, it means they haven’t been indexed. If you’ve created a new site, don’t worry — it typically takes between 4 and 28 days for Google to index new web content. If you want to speed up the process, you can submit an XML sitemap through your Search Console (which is a good measure to take in general).
If you’re still having trouble with certain pages showing up, check your robots.txt file to make sure you haven’t accidentally blocked search bots from seeing your pages. As a last resort, you can check for crawl errors in Google Search Console to pinpoint the root cause of the problem.
3. What happened to my link?
If you built a link pointing to your site, but it’s suddenly disappeared, the solution is usually simple; the site that hosted it removed it. They may have found the link irrelevant, they may have removed your content entirely, or they may have replaced it with a “nofollow” link.
Double check with the publisher, and attempt to build a replacement link elsewhere.
4. Why do my rankings keep changing?
It’s natural to expect some kind of volatility in your rankings. It would be strange, in fact, if your rankings weren’t changing at all. Don’t drive yourself crazy by checking your rankings every day; instead, shoot for bi-weekly or monthly check-ins. Like the stock market, rankings will go up and down over time; what you’re looking for is an overall uptrend.
However, if you’re facing extreme volatility (drastic ups and downs on a regular basis), it means something in your strategy is inconsistent (such as alternating between black hat and white hat techniques, or producing both low-quality and high-quality content).
5. Why aren't I seeing better SEO results?
This is a more open-ended problem than the others on this list. If you just started a campaign, remember that SEO is a long-term strategy, and depending on your niche, budget, and competition, it could take months before you start to see results.
If you’ve been at it for a few months and you aren’t satisfied with the results, consider upping your budget — more money means higher quality (in many cases), and higher volume. Don’t be afraid to consult with an expert if you can’t seem to build momentum.
6. Why is my traffic so volatile?
See my answer to “rankings” in point four. Volatility isn’t specific to rankings; it will affect your traffic as well. However, traffic bears an additional consideration; the ebb and flow of your business.
Does your industry have a “peak” season that could be responsible for driving more traffic, or does your traffic seem to be correlated with specific events (such as more “air conditioning” searches on especially hot days)?
7. Why is my site running slow?
This isn’t an analytics issue like the other mysteries on this list, but your site speed does have an impact on your rankings and performance. If you know your site loading speed is a problem but you can’t get it to load faster, consider downsizing the image files on your site and stripping any plugins you don’t use regularly.
Then, delete any meta information or drafts you don’t need and optimize your caching plugins so you can load faster on previous visitors’ devices. If speed continues to be a problem, consider upgrading your hosting provider.
These aren’t the only issues you could run into while managing an SEO campaign, but they are some of the most common. Your solution may not be obvious, but as long as you keep digging, eventually you’ll find the root cause — or at least, some way to reverse the situation.
There’s usually more than one culprit and more than one way to fix the problem — a gracious side effect of SEO’s complexity — so the next time you face an optimization enigma, remain calm and start troubleshooting.
For more content like this, be sure to check out my podcast, The Entrepreneur Cast!
Identifying & Understanding Algorithmic Bias
Introduction
With the Digital Revolution well under way, we are interacting with algorithms on a daily basis. These algorithms make everyday decisions for us such as which posts we see as we scroll through social media and which websites we browse when searching for information online. They also help us make significant decisions such as who is eligible for a bank loan, who gets parole, and who will get interviewed for a job at a company.
For both small and significant decisions, algorithms are ever-present, and for many, they are a cause for worry. After all, what if these algorithms, which are often kept behind closed doors at the companies that created them, are not treating people or information equitably? This question drives the worry that algorithms may be perpetuating existing divisions in broad areas such as gender, race, and political ideology. However, before considering the implications of algorithms in all of these areas, it is important to pinpoint exactly what "fair", or unbiased, treatment would look like.
Defining Bias
There are currently two competing definitions of algorithmic bias when it comes to making predictions about people: equality across groups and equality across classifications. The first definition means that an algorithm’s error rate should be equal across different groups (e.g the algorithm should misclassify women just as much as it misclassifies men). The second definition means the algorithm’s prediction should have the same meaning across different groups (e.g. a risk score of 7 should correspond to the same risk level regardless of the individual’s race).
Both of these definitions seem entirely logical and consistent with each other, but it is mathematically impossible for both to be satisfied at once if two different populations (races, genders, etc.) have different base rates (source). In other words, if two groups already have different rates of something happening (paying back a loan, committing a crime, getting hired, etc) because of existing societal divisions, then the algorithm will be unable to satisfy both definitions of equality at once.
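A quick numeric sketch makes this concrete (the numbers are toy values I picked, not from any real system). For a classifier whose scores mean the same thing in both groups (equal precision) and which misses true positives at the same rate, the implied false positive rate still grows with the group's base rate:

def implied_fpr(base_rate, precision, miss_rate):
    # Algebra from the confusion matrix: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
    return base_rate / (1 - base_rate) * (1 - precision) / precision * (1 - miss_rate)

for name, base_rate in [("group A", 0.3), ("group B", 0.5)]:
    print(name, round(implied_fpr(base_rate, precision=0.7, miss_rate=0.2), 3))
# group A 0.147 vs. group B 0.343: same classifications, unequal error rates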
In light of this mathematical fact, the most operable and long-term definition of bias with which systems should be tested is the equality across classifications. This approach creates simpler systems which produce outputs with consistent meanings rather than conditional meanings depending on which group the system is making a prediction for.
Case Study: The COMPAS Parole Algorithm
One of the central cases which perfectly encapsulates the issue surrounding algorithmic bias is the COMPAS parole algorithm. Using a series of metrics, none of which is race, COMPAS assigns criminals a risk score which measures their probability of committing a crime if they are given parole. ProPublica found that the algorithm made more errors for African American men by placing more of them in the “high risk” category, thereby denying them parole, than it did for Caucasian men. However, they also reported that for each risk score, an equal proportion of African Americans and Caucasians reoffended. Thus the algorithm was equal across classifications, but not equal across groups (source).
While ProPublica’s concerns are certainly valid, the COMPAS algorithm should not be altered to be equal across groups. If it was, a judge viewing COMPAS’ report would have to interpret it differently based on the race of the individual since now each score would mean something different for different races. This goes against a very basic tenet of equality that groups should be treated equally regardless of race.
However, something still needs to be done to address the fact that the algorithm makes more errors for one group than another. Rather than changing the algorithm, we should instead focus on changing the circumstances which cause the algorithm's biases in the first place. For example, African Americans may have a higher measured rate of reoffending because predominantly Black areas are policed more heavily and certain police officers may have bias against African Americans. Giving police officers different training or reorganizing how police are distributed across neighborhoods might be measures which help bring the base rate of reoffending for African Americans closer to that of Caucasians. This would mitigate the discrepancy in the error rate between the two races.
Case Study: Amazon’s Hiring Algorithm
Another modern example of computerized systems exhibiting bias in making critical decisions concerning human lives is in hiring. A couple of months ago, Reuters released a report revealing that Amazon had attempted to create an automated resume ranking system to help them fill their technical roles but ultimately abandoned it because the system did not perform well with women’s resumes. It actively penalized resumes which included gendered words such as “women’s” and favored resumes including words frequently used by men.
Of course, Amazon was not trying to actively build a system which favored men over women; it happened because in technical roles, the gender gap makes it impossible for the algorithm to assign scores which are both equal across classifications as well as equal across groups. Having recognized this, Amazon acted in the right manner by disbanding the project and using a reduced version of the algorithm to automate basic tasks which do not directly impact the recruiting process.
This case highlights another approach to resolve problems related to using algorithms in human-related decisions: to reduce their functionality until they can be made in a way which the inequality among groups is negligible. In this case, any hiring algorithm for technical positions will not be equal among groups in gender (i.e it will more frequently score qualified women as lower than it does for men) because it is learning from existing biases in hiring practices. By significantly reducing the algorithm’s impact on actual human beings, hiring still adheres to the principles of equality (i.e equality across classification) while reducing the impact of existing divisions in our social structures.
What Does This Mean For Automating Human-Decisions?
Both the COMPAS and Amazon cases demonstrate that it is possible to maintain the philosophical ideals of equality as our standard for determining whether or not a machine is biased, as long as we take measures to mitigate their impact on real humans and work towards mitigating the social factors which lead to the inequality across groups.
As a society, we need to realize that our algorithms are not made pure. They are always a reflection of the choices we have made, whether that is the data they are trained on, the operating assumptions that the engineers made, or existing societal norms as a whole. When companies like Amazon or public entities such as prisons decide to incorporate algorithms into their decision-making process, they need to do so with caution, because while their outputs might mean the same thing for all groups, they may nevertheless perpetuate divisions which should be eradicated. Thankfully, the technology is not at a point yet where these decisions can be completely automated; so as the technology grows, society still has time to grow with it.
Also posted at https://mdb.dev/ethics-in-technology/defining-algorithmic-bias/
My Year in Data: a Visual Reflection on 2019
If you lurk in the shadows of the data viz world, you're no doubt familiar with this kind of exercise. Health and productivity apps offer more ways to track daily data than ever before. As people naturally obsessed with quantifying, many data visualization practitioners have used it as an exercise in reflection.
This was my 2019. It was the first year I decided to intentionally track activities like working out, computer productivity, and listening to music. Most notably, I was inspired by the legendary Feltron reports. But at the time of starting this project, I was also still riding high off my recent foray into creative coding during the annual Codevember challenge. So I wanted an opportunity to focus my experience with Javascript back on data visualization and to push its limits into creating non-traditional charts and visualizations. I also wanted to embark on a more personal exploration of my data.
Here’s how I did it (as well as some tips towards the end on how to do one yourself!).
Data tracking
It’s frankly scary how much data you can download about yourself without even trying. For example, you can request things like location data and daily steps with Google Takeout. But I also wanted to be able to track other things that aren’t as passive. If you own a smartphone, odds are that your steps are being tracked through some native health app (Google Fit or Apple Health, most likely). But if you want to track details for running or cycling, you are better off using something like Strava, which you have to “start” and “stop” at the end of each workout.
The Gyroscope desktop dashboard.
My central tracking app was Gyroscope. It's a quantified self app that will aggregate data from multiple sources into one place. Even better, since it's a small startup that is committed to privacy, they also have a dedicated data export page to download a CSV file of your data for any category at any time. There are a few specific apps that I used to track other things, all of which fed into Gyroscope:
Strava: running, cycling, hiking, other workouts
Last.fm: integrates with Spotify to passively track listening history
Google Fit: steps
RescueTime: desktop app to track app usage on my laptop
My system wasn’t perfect though. I still had to remember to track a workout before I started, which didn’t always happen. I also use multiple computers (work and personal), which ended up skewing my productivity data as well. My step data didn’t truly start until March, so I had to do some backfilling with averages to fill in the gaps. But overall I was happy with the amount of data I collected.
You’ll notice that location is missing from my list. This was both an intentional and unintentional choice: I can download my location data (and have) but I didn’t want to dive down that rabbit hole for this first year. Maybe next time.
Analysis
While I could have done this in a cleaner way, I ended up tidying up my data in Google Sheets. Making pivot tables helps to total up distances per day and also to format things like numbers, strings, and dates.
Doing some light analysis allowed me to pull out interesting stats like this for each sketch (shown bottom right). This viz shows all of my tracked physical activity last year. View the full version here.
I also used pivot tables to quickly find things like averages, totals, outliers, etc. for each topic. The smarter, more responsible version of myself would have written a Python script to do this instead so I can use it again next year. But hey, sometimes you're just in the zone and you forget to be an efficient human. Maybe next time.
Design
Most of my designs ended up relying on circular concepts, since a year by nature takes a circular form. Radial shapes felt evocative of the circular nature of seasons and the passing of time.
When I didn’t use a radial shape I also tried to include circles and bubbles as a way to evoke a “point” in time. Each of my datasets was by definition a timeline, so I wanted to retain this feeling throughout.
A snippet of my “Year of Running” viz. Full version.
These posters favor reflection and expression over precision, but each still has a legend for how to compare values. You'll also notice that many circles overlap. I could have used a smaller total scale for each circle to make each individual event (circle) easier to see. But for this project I was less interested in seeing individual events, and more interested in overall patterns.
For example, when I moved to Boston I started biking to work every day. So the fact that there are tons of dark blue dots for the “Workouts” viz, all in a row at the same size, isn’t particularly interesting. That’s just my work commute. What is interesting however, is the giant blue circle in July (my long distance ride to Walden Pond).
I wanted to incorporate a vibrant color scheme and to play with inverted shading as a way of categorizing activities. In the "Workouts" viz, each color represents a different activity, but when the activity is explored in depth (like "Running" above) this color becomes the background.
Although it goes against my usual workflow, I actually didn’t start with any sketches before I began designing and coding. I felt comfortable enough with creating graphics using p5.js that I started prototyping with the data straight away. I wouldn’t usually recommend this, but I actually found it helpful to use “real” data right from the start instead of sketching on paper without any data.
Code
Every poster is made using p5.js. No touch-ups in Illustrator. I’m not gonna lie, this ended up being a real pain in the ass a few times, but I’m pleased that I pushed through and stuck to code-only for this project. This was a personal challenge to brush up on my Javascript, but also a bit of a statement: coders can design and designers can code. Don’t let your lack of tool-specific knowledge stop you from making something great.
You can take a look at the code for each sketch in this repo (sorry for the parts that are a bit of a mess). It’s hacky in a few places, but it works!
This project reinforced my new love for the p5.js graphics library. It also gave me new ideas for how to use it. Despite not having the same interactive capabilities as a library like d3.js, p5 holds enormous opportunity for creative data visualization and nontraditional methods of graphicacy. And the learning curve is much less steep than something like d3.
Learnings
After diving into my data from last year, I started to realize all the other things I could have been tracking. Location, for example, would have been a great one to keep a more annotated log of rather than the massive Google Takeout data dump.
Sleep is another notable area that would be great to track, but I don't have an Apple Watch or similar to do so right now. I also would like to track my reading through Goodreads, which I do currently, but my reading last year was fairly substandard … not a lot of data to viz from 2019. This year will be different though!
I also got the chance to reflect on certain peaks and troughs in the patterns that emerged from my data. None of these were surprising per se, but instead acted as a memory-booster. Similar to how a photograph can jog your memory of a recent vacation, I found that these visualizations took me back to memories of activity through each season, high and low.
Why did my physical activity spike so much after May? Probably because I moved from the massive urban city of London to the smaller, more balanced Somerville just outside of Boston. There’s also a state park close to where I live. Good sign for the move.
Why was I running so much in August? I think I had just gotten some new running shoes, so was feeling particularly motivated. The days were really long too, but once winter came it got harder. Does that make me a “fair-weather” runner? Perhaps. Maybe I should try to make running more part of my routine by run-commuting to work.
The questions and answers go on. And if I had never started this side project, the data would have sat in databases and never really meant anything to me.
I learned that visualizing personal data can be both a creative and personal challenge. You confront certain patterns that defined your year, whether you meant to develop them or not. And then you have a choice for next year: How will my 2020 visualizations look in comparison?
Next year I look forward to seeing how my data has changed and what new types of data I may be generating. After completing this first year, here are my top tips for anyone interested in analyzing their own activity data:
Automate as much as possible: You will forget to track things—it’s inevitable. So as much as possible, use services like Gyroscope or a tool like IFTTT to automate data storage when possible
You will forget to track things—it’s inevitable. So as much as possible, use services like Gyroscope or a tool like IFTTT to automate data storage when possible Analyze your data before you start designing: If you know where the interesting patterns lie, you can design a visualization that highlights this point.
If you know where the interesting patterns lie, you can design a visualization that highlights this point. It’s ok to start late: My step tracking was all over the place for the first part of the year, for various technical reasons. But after analyzing my totals, it was clear that my daily numbers were fairly consistent. So I wrote an averaging function with a confidence interval of +/- 2000 steps and generated the data myself. The viz was still informative, even if only nine out of 12 months were “real” data.
My step tracking was all over the place for the first part of the year, for various technical reasons. But after analyzing my totals, it was clear that my daily numbers were fairly consistent. So I wrote an averaging function with a confidence interval of +/- 2000 steps and generated the data myself. The viz was still informative, even if only nine out of 12 months were “real” data. Keep asking why: maybe you had a few low months of activity in a row. Or maybe you had a strong start to the year, but then lost steam. Why? The answer might help to plan more realistic goals for next year, or to make a lifestyle change to change the future pattern.
I have never been one for New Year's Resolutions, but this has got to be my closest effort to make one. Instead of letting tech companies harvest all the value from my data, I now hope to continue extracting some meaning for myself—to improve, to explore, and to celebrate another year.
by Martino Pietropoli
First thing in the morning: a glass of water and a cartoon by The Fluxus.
Too Many Damn Articles on Product-Market-Fit
By Gil Rosen
What is product market fit?
How do you know when you have it?
Why does it even matter?
I do not think it means what you think it means
I thought to write an article on Product-Market-Fit based on my own experience as an entrepreneur, investor, advisor, and founder of headandheart.capital. A quick Google search found 17 articles on this very topic — half semi-vague, half contradictory, some relevant yet incomplete, and most highly redundant. So I decided to write yet another, synthesizing my own thoughts with relevant insight from those very articles (tldr and links at the end). But you'll like this one, I promise. And when you're done you'll hopefully know what (I think) PMF is, how you know if you have it, and how it should affect your venture's priorities.
Why should I, or you, even care? Because people make investment, partnership, sales, acquisition, and countless other decisions based on having it or not, and if you’re wrong, you might just make the wrong decision. And making wrong decisions can hurt. I say this from experience as I lived through it with one of my first investments whom I worked closely with.
They were a B2B software venture, let’s call them MIMFO (that’s not their real name, but MIMFO rhymes with nympho and they definitely screwed things up for a bit). MIMFO seemed to have everything going for them — multimillion dollar revenues, nearly 100 employees, marquee customers. They were convinced they had PMF and raised financing, kicked off campaigns, opened sales offices, and made strategic acquisitions based on this premise — yet when the rubber met the road they failed to sell their product beyond their first dozen customers. It was a great product, their customers loved it, it was timely and innovative, and yet they struggled. They missed targets. They had to lay off staff. Eventually, management was replaced. Why? They tried to scale before actually having PMF, which meant they burned cash on an assumption of customer value that wasn’t actually there. The first action the new CEO took was to determine how to get to PMF.
We’ll dive more into MIMFO later in this piece, but the lesson is to beware of scaling prematurely. A venture’s longevity is determined by its cash flow, and as soon as it begins to hire and scale, its burn rate increases. If a venture can’t show revenues or progress quickly, its cash runs out and it will find it hard to raise more. It is thus critical to scale only once PMF is actually found.
Product Market Fit?
So what is product market fit then? It is NOT as simple as “A product that fits a market”. Through my own experience as an entrepreneur, advisor, and investor in dozens of ventures, as well as my graduate studies and research, I’ve refined the definition I use to the following:
“A product, business model, and customer engagement method that consistently meets the needs of a specific customer segment or persona better than the alternatives, such that they’re willing to pay for it.”
I had the pleasure of learning about Product-Market-Fit from Andy Rachleff, CEO of Wealthfront, and co-founder of Benchmark Capital, in his course on Aligning Startups with their Markets at Stanford’s Graduate School of Business. Rachleff is credited with coining the term, but from his telling he was heavily influenced by Don Valentine of Sequoia as well as the lean startup ideologies of Steve Blank and Eric Ries.
Rachleff describes a company with PMF as one that has proven its Value Hypothesis: why a customer is likely to use a given product. Rachleff continues that knowing the value hypothesis identifies the requisite features, audience, and business model.
Steve Blank describes the criticality of both Value and Growth Hypotheses as testing whether a product or service delivers value to customers using it, and how new customers will discover a product or service, respectively.
Eric Ries describes PMF as when a widespread set of customers resonates with the product.
Alex Schultz, CMO of Facebook, describes PMF as nonzero retention over time.
Marc Andreessen of a16z describes PMF as being in a good market with a product that can satisfy that market. This means being able to answer the “why” (problem/need), the “how” (business model), and the “what” (product/service).
While these definitions make sense, their qualitative nature doesn’t help quantify what PMF actually is and how you know if you have it.
When asked for indicative benchmarks, Rachleff relayed:
For consumer focused companies, PMF is generally found when you have organic viral growth. That is, people love your product enough that they use it, and then engage their friends to use it, who then engage their friends to use it.
For B2B focused companies, PMF is generally found when you have 7–10 repeatable sales to customers similar to each other.
Alex Schultz, in a separate lecture at Stanford, details:
If retention over time (how many customers remain from time of acquisition) asymptotes to a percentage common to a given industry, it’s indicative of PMF. I.e., if over time retention stabilizes at a reasonable number for the industry: this might be ~30% for an eCommerce company, or ~80% for a social media company.
This last point is a critical addition, as high churn shortly after customer acquisition implies that the customer value isn’t actually there — be it for a dozen successful enterprise sales or exponential viral growth.
Andrew Chen of a16z describes metrics of PMF for SaaS companies as 5% free to paid conversion, 3x LTV to CAC ratio, and <2% churn.
While this is just an example, having a viable business model means having a product whose revenues will be meaningfully greater than the cost of sale. If a sale requires months of work for an expensive account executive but only yields $10K in revenue it’s likely not a viable business model — but if the annual subscription is >$100K or if no account executive is needed and users can learn of and download the product on their own, then it may be.
SaaS metrics of PMF such as NPS, and questions such as “would you be very disappointed if this product were to be discontinued” are great indicators of value, potential growth, and potential churn, but nothing beats actual numbers of true growth and churn from paying customers.
Using the above descriptions we can begin to triangulate on what PMF actually IS by also considering what it’s used for.
Why does PMF Matter?
The Rachleff/Blank/Ries camp believes a venture should prove its Value Hypothesis before testing and investing in its Growth Hypothesis, as it makes little sense to invest in attracting customers that either won’t convert into, or won’t remain, paying customers. Testing and proving growth hypotheses involves significant marketing and sales efforts; this means truly understanding a customer segment/persona, their needs, the value you’re creating for them, and how to engage with them.
The idea of hypothesis testing is critical here, and Rachleff provides insight that has served me well. Ventures, by definition, have limited resources. This means you can’t test every customer segment or every business model simultaneously. The value propositions and features required by different customer segments will necessarily be different, and you only have so many engineering hours. The stories that resonate with different customer segments will be different. The channels they use will be different. Their buying personas will be different. And if you try to build and test for them simultaneously, you can’t differentiate between a failed hypothesis and a shoddy effort. Giving someone half of their requisite features likely won’t solve their true need. Sending out one email blast to six different segments will yield little return, as the average prospect needs >8 touchpoints for conversion. 100 impressions on Facebook for 10 different personas won’t give you as much valuable information as 1000 impressions for one persona, which is a large enough sample size to understand if an audience does or doesn’t resonate with a product and its messaging.
Where investors love to diversify, operators necessarily need focus. It is critical to come up with tests that will allow for minimal investment, yet still yield ample information on whether a value hypothesis has succeeded or failed.
By understanding the ramifications of PMF on the actions and investments a venture subsequently takes, we can understand the insufficiency of solely having exponential growth, low churn, or a dozen sales to enterprise customers. Rather, we need exponential growth for a well understood market segment — what pain or need is common across those customers such that we can build an effective marketing campaign that will resonate with them and reach them on fewer channels. We need enterprise sales to customers that are similar to one another, so that we can confirm our value proposition and ability to sell is repeatable and predictable. We need to understand which customer personas have low or high churn to invest in features for those that will remain customers.
Taking all of this into consideration yields the definition provided earlier:
“A product, business model, and customer engagement method that consistently meets the needs of a specific customer segment or persona better than the alternatives, such that they’re willing to pay for it.”
PMF Broken Down
Let’s unpack this a bit.
PMF denotes:
There is a concrete identifiable customer segment or persona in focus
We understand their specific needs and pains
We have a product AND business model which address those pains
We understand how to communicate and engage with these customers so they understand and appreciate our value proposition
They care enough about our solution meeting their needs that they’re willing to pay for it — either directly, or indirectly (eg advertising, information, etc)
The above does not imply that you have a successful company, as perhaps the market you have found is small or dying. Nor does it imply that one should start with a customer segment/market and then define a product — that is an entire debate in and of itself which we’ll save for another article. Rather this merely indicates that a venture has found PMF for a specific product in a specific and clearly identifiable market.
The metrics around NPS and organic viral growth indicate that a product or service has evident enough value that people can recognize it, use it, pay for it, continue to use it, and recommend it to the peers they feel would benefit from it. For this to happen the product must really speak to their common yet specific needs.
The metrics around multiple sales to a similar customer segment indicate that multiple customers in that segment share in the same challenge, that your team is able to communicate that challenge and your value proposition, and that multiple customers in that segment find your product or service as solving that challenge to a degree they’re willing to pay for. One or two sales aren’t enough because perhaps you’re solving a fringe challenge that isn’t actually common for the segment, but if you get to 7–10 customers, it’s highly indicative that you’re on to something.
Once we are in a position where we are indeed clear and confident in the ability for our product/service and business model to meet the needs of a specific market, and our ability to understand and communicate with that market, we can begin to test our growth hypotheses and invest in scale.
What could go wrong?
There are countless stories of companies that lacked one or more of the PMF criteria, scaled early, and then struggled — some pivoting successfully, others less so. I’ve analyzed a few below; read them as they interest you.
Dropbox found viral growth quickly, yet when it tried to monetize its consumer offering, people switched to free alternatives such as Google Drive. Most consumers weren’t willing to pay for the value Dropbox was offering. Ultimately Dropbox pivoted to focus on enterprise vs consumer needs and was able to profitably scale, but this is a clear example of over-indexing on one metric (growth and users) without considering the bigger picture — value people are willing to pay for.
This doesn’t mean all companies need to show early revenues either — Facebook and Snapchat are two examples of companies that waited to focus on revenues, but they understood that their value was held in user data that could be monetized for hyper-focused advertising. They continued to focus on user value and growth, all the while aggregating valuable user data they believed would translate into advertising revenue.
C3IoT, the IoT platform started by Tom Siebel, tried to circumvent the usual playbook by using Siebel’s deep financial resources and connections to sell their product to customers across multiple industries. Yet ultimately the engineering and customer support resources required to meet the needs of disparate customers across disparate industries with different priorities proved intractable at such an early stage, and they scaled back to focus on the Energy space.
Indeed this was the issue MIMFO faced as well. The CEO and COO were highly intelligent and highly technical problem solvers. They were able to successfully sell their software to a dozen companies in their geography across the Media, Energy, Finance, and Telecom verticals. Their customers were happy. They and their investors were sure they were ready for scale. Yet a closer look indicated that it was only the founders that were successfully selling because they could confidently tailor the product to customers’ needs. They would then engage product and professional services to build what they had promised. But this approach is not sustainable. You can’t expect your average sales person to reinvent the product for every customer, and building bespoke products presents tremendous challenges when you scale and need to meet the disparate product needs of hundreds of customers across a dozen customer segments. Once the company had exhausted their immediate network of prospects, they struggled to convert marketing leads into sales leads because they didn’t have a clear value proposition for a given customer persona. What they had actually built was a software enabled services company, which is indeed a viable business, but a very different one than a B2B software business. Palantir is one such example of a successful software enabled professional services business, and while it has found success, most Venture Capitalists are not interested in professional services heavy ventures, as scaling people is notoriously difficult and professional services margins (and valuation multiples) are significantly lower than software. After multiple quarters of missed targets, the board replaced the management team and refocused the company on the consumer banking vertical, righting the ship.
Groupon is famed for having grown virally and even IPO’d without truly having found a compelling value proposition for its paying clientele — the vendors selling products/services. Groupon promised to attract new customers to vendors with compelling deals, often sold at a loss for the vendor. While people indeed took advantage of the deals, there was no guarantee they weren’t already existing customers, nor that they would remain customers at the regular price. In fact, with competing services on Groupon, a customer could simply go from deal to deal and never remain loyal or pay full price. Moreover, while there are industries where Groupon provided value, e.g. selling outstanding inventory of perishable/expiring goods, be they food, flights, or hotel rooms, for many industries it simply exposed businesses to customer segments that weren’t in their price category to begin with and that quickly churned.
Succeeding in customer growth without a viable business model has been a recurring theme in meal-kit delivery services such as Sprig, Chef’d, and Munchery. They addressed a real problem, cooking regular meals, but could not do so profitably given the high costs of, well, everything, especially delivery. That, coupled with a low barrier to entry, makes it difficult to succeed from the get-go — perhaps an Uber or Postmates that have a reusable delivery infrastructure, or grocery retailers that have cold storage chains and access to raw materials and distribution, would fare better. Indeed, Blue Apron tackled high delivery costs by using the mail and is finally showing profitability post-pandemic.
One company I invested in focused on CPG planning software. They had a strong product that saved millions in logistics and production costs, and multiple big brands as successful customers. Yet the value was distributed across multiple disparate orgs, and the sales and implementation cycles were complex and lengthy, making it challenging to find a customer champion to push the sale through and ensure they could realize its value. The product was great, the market was large, the value was there, but engaging the customer in a manner to realize that value proved challenging.
Another two of my portfolio companies, Oxygen and Bellhop, found success after multiple pivots, finding their market fit both in segment and in business model.
Oxygen.us started as a banking platform for the gig economy — banking and credit facilities are notoriously difficult for those not on a W2 with proof of regular salary. Oxygen initially focused on the Uber/Postmates/Taskrabbit driver segment and saw customer growth offering banking and credit services. Yet their possible value propositions were limited when compared to the freelancer/creative segment, where they could offer both personal and business banking services including invoicing, budget management, and cash flow projections for the digital-first generation, while monetizing card transactions and credit.
Bellhop began as an aggregator for travel services — a one-stop shop for the local Uber/DoorDash/OpenTable. It quickly streamlined its focus to just ridesharing — providing users with options to choose between Uber/Lyft/Curb/Via etc., optimizing for price or speed. While it saw growth, margins remained tight as Lyft/Uber had the lion’s share of rides. Lyft and Uber’s loyalty programs and drivers operating on multiple platforms began to eat away at both retention and value. Bellhop has since expanded in two critical ways: 1) onboarding legacy black car companies and taxi fleets as a monetizable distribution channel for their services — true airport pickup for city taxis; limo/van/scheduled pickup/long trips for the black car companies; and 2) aggregation for bikes and scooters, which cannot come to you and where closest proximity is advantageous. Moreover, Bellhop uniquely has aggregate user mobility data of value to utilities planning for an electric future, or to urban transport planners. These changes enable both a singular focus on the user’s mobility challenge — how to get from point A to point B — and a way to better monetize the traditionally low margins of ridesharing to build a more profitable business model.
Go forth and fit
So, to summarize, Product-Market-Fit describes:
A product, business model, and customer engagement method that consistently meets the needs of a specific customer segment or persona better than the alternatives, such that they’re willing to pay for it.
This means:
There is a concrete identifiable customer segment or persona in focus
We understand their specific needs and pains
We have a product AND business model which address those pains
We understand how to communicate and engage with these customers so they understand and appreciate our value proposition
They must care enough about our solution meeting their needs that they’re willing to pay for it — either directly, or indirectly (eg advertising, information, etc).
Indicative benchmarks for PMF are:
For consumer focused companies — organic viral growth. That is, people love your product enough that they use it, and then engage their friends to use it, who then engage their friends to use it.
For B2B focused companies, when you have 7–10 repeatable sales to customers similar to each other.
Reasonable customer retention for the relevant industry (you can find industry benchmarks)
Until a venture team finds PMF they should likely remain lean and test hypotheses for different product/market/business model combinations sequentially, investing enough in each to know as quickly as possible if they’re viable or not, and moving on to the next one if not. Once there’s enough confidence in PMF, they should invest in hiring the right personnel to test and execute the growth hypothesis for how to best scale the company.
Assuming PMF before you’re there is the surest way to burn cash and crash.
References
“Aligning Startups With Their Market” — Stanford GSB course notes and content
https://a16z.com/2017/02/18/12-things-about-product-market-fit/
https://medium.com/evergreen-business-weekly/product-market-fit-what-it-really-means-how-to-measure-it-and-where-to-find-it-70e746be907b
https://www.fastcompany.com/3014841/why-you-should-find-product-market-fit-before-sniffing-around-for-venture-money
https://andrewchen.co/zero-to-productmarket-fit-presentation/
https://www.slideshare.net/evanish/getting-to-product-market-fit
https://leanstartup.co/a-playbook-for-achieving-product-market-fit/
https://blog.hubspot.com/sales/product-market-fit
https://www.ycombinator.com/library/5z-the-real-product-market-fit
https://www.forbes.com/sites/hayleyleibson/2018/01/18/how-to-achieve-product-market-fit/?sh=372a8cbc476b
https://brianbalfour.com/essays/market-product-fit
https://sparktoro.com/blog/product-market-fit-is-a-broken-concept-theres-a-better-way/
https://fourweekmba.com/product-market-fit/
https://www.unusual.vc/field-guide-consumer/finding-product-market-fit-2
https://segment.com/academy/intro/measuring-product-market-fit/
https://techcrunch.com/2020/01/31/you-need-a-minimum-viable-company-not-a-minimum-viable-product-2/
https://growthmarketingconf.com/how-to-find-product-market-fit/ | https://gilrosen.medium.com/too-many-damn-articles-on-product-market-fit-16d0fe63b406 | ['Gil Rosen'] | 2020-12-18 21:29:40.747000+00:00 | ['Venture Capital', 'Product Market Fit', 'Entrepreneurship', 'Startup Lessons'] |
5 Top Artificial Intelligence Frameworks for 2021 | Introduction
Let’s say you’ve decided to practice and develop yourself in this field. Today we will see how software engineers can apply deep learning and artificial intelligence in their programming work.
The first thing that we must know is how it can be applied, and here is a good question to research: “What are the most useful frameworks/libraries to begin learning in 2021?” This is exactly the question I asked myself.
This is just what we’re going to address today in this article: I gathered the five most popular Artificial Intelligence frameworks and libraries that every software engineer/developer needs to know about. You will also find the official documentation pages and some practice applications showing how to apply them.
This will help us to know them more than just their names. Enough with the introduction. Let me introduce you to the squad! 😄 | https://towardsdatascience.com/5-top-artificial-intelligence-frameworks-for-2021-7d3bf8e12ed1 | ['Behic Guven'] | 2020-12-26 15:30:12.613000+00:00 | ['Machine Learning', 'Data Science', 'Technology', 'Artificial Intelligence', 'Education'] |
Tapping emotions to build virality. Does it work? | Have you been noticing how great advertisements have increasingly become more and more emotional. Some of my favourite ones in the recent past have been major tear-inducers. Give these a look to see what I mean:
Fortune’s attempt at making us cry (and buy their oil).
A grandmother (Daddi) insists on feeding her bedridden grandson a home-cooked meal, while the dutiful nurse won’t let that happen. Fast forward through 5–6 iterations of the same routine with no luck yet; then the grandmother brings the nurse a meal as well. The next day the nurse melts and lets the grandmother feed the grandson, and the screen then reads Fortune Oil.
My first reaction to this was: well, I could have pasted any other home-cooking-related food brand on it and the ad would have worked just as well as it did for Fortune Oil. But still, it made me teary-eyed.
The same happens with the Riso Ricebran Oil ad, even though this is about women living their lives on their own terms and their empowerment, see below:
A married girl, mother of one, is backpacking through her holiday, talking about living each day as it comes. Walking by the river and the paddy fields, traveling by train, and visiting chocolate stores, she is having the time of her life traveling solo. Bang, the screen reads “live 100 percent” and the Riso Ricebran Oil logo appears. Could this ad not be for any product that a typical Indian lady uses at home? It surely could be.
So here’s the point that I am making. Do these ads inspire us to buy a particular brand, not so much. But do they elicit a strong emotion, very much so. So what is happening here? Do they even work?
Yes, these ads are received well by the audiences, comments on Youtube and Facebook will tell you as much. However, will they remember which brand’s ad did they shed a tear for? Most certainly not. And that is because these are one time stories that the brand chose to run and plaster their brand at the end of the story.
This is part of a trend that the industry has seen globally for about half a decade now. And it isn’t about the sad emotion only; it is about emotion in general. There are many to tap into: emotions such as inspiration, care, love for parents, perseverance, etc. In the same light, see how Procter & Gamble inspires us with this powerful ad about the best job in the world.
There are various things at play here. Let’s discuss a few of these and then perhaps we could suggest a few to-dos for brands with respect to these ads.
How come these story-like / short-film-type ads are all the rage now? Many brands have given it a go. Let me show you some good ones. Google started it all with their Sofie ad in 2011, see here. Ad makers embraced it all too well in the South East Asian markets too: see here a sad ad by MetLife, another one about an unsung hero by Thai Life Insurance in 2014, and another one by True Move H about giving unconditionally.
British Airways has done a lot of them in the same stride too over the years:
see here about Helena, the air hostess falling in love with India’s hospitality (inspired by a true story),
here about Sumeet and Chetna (husband and wife) coming closer in their travels sponsored by British Airways,
here when British Airways sponsored a trip for Esme’s grandparents to Australia. Esme and her parents shifted to Australia due to work and missed being together, and another one (our last British Airways example)
here when they sponsored a trip for a Non-Resident Indian man to go meet his mother in India from the USA.
So what can we tell you about these? Let’s see.
Content itself, not an unwanted break between content.
These ads aren’t breaks in between two segments of content you watch on TV, but content itself. The brands now make short films, run them on TV a few times, and then bank on the power of sharing over digital/social media platforms. Some of the ads you see in this discussion have several million views, and that is proof of the concept.
No real brand advantage. Hence creating a bond over time.
Immense competition in almost all categories has forced brands and ad makers to look beyond just talking about the product/service. In an almost commoditised state of competition in most product categories, what choice does a brand have other than to take the conversation away from facts and reason? See here how Vicks, an over-the-counter medication and ointment brand, takes the persuasion to the emotion of care over the years rather than logic and reason.
Science / Psychology has backed Storytelling
Reason, evidence, facts, arguments, etc. do not affect us the way a well-told story does. Our attitudes, fears, hopes, and values are strongly influenced by a story, and it changes our beliefs much more effectively than reason. You should notice it in yourself. Next time someone starts an attempt at persuasion with reason and evidence, notice how you become vigilant and want to understand each and every detail. However, also notice how, when you start listening to a story, you put your intellect-guards down and settle in for a warm and fuzzy tale that melts you. And in the process you forget to notice any logical leaps the storyteller may be taking.
However, can there be better product integration
At the outset we spoke about whether or not these ads inspire us to buy a particular brand, and that’s one thing that is easy to question. However, for certain product categories it may be difficult to integrate the product in a way that makes the ad inspire a brand preference. For instance, consider these latest ads (first, second, and third) from Amazon in India. They’ve integrated their product well in the stories, but couldn’t any other e-commerce brand, such as Flipkart, be used to resolve the tension in the ads? This is a challenge all ad makers and brands should take upon themselves.
Millennials are digital natives
For the first time, there’s been a generation that has grown up with social media, smartphones, instant access to the internet, and instant messaging and sharing. This changes the game in many ways. Humor me: it has changed how people seek information and make decisions. For instance, they always have access to someone who surely would have used the product or brand in question, so word of mouth is easy to find. Similarly, detailed technical information about the product is available to all in a few taps of their smartphone screens. With all this about millennials’ lifestyle put together, there is all the more reason to take the conversation beyond facts and logic to emotion and feelings.
What can brands do:
Does it work with all brands? Well, it doesn’t always work. Just imagine this ad or this one, which Google launched not too long ago, being built for another search engine, say Microsoft’s Bing. Would it have worked? I think it would not. Market leaders are better placed to connect with their customers in this way, since customers already interact with the brand in the situations the brand shows in its ads, and are so used to the brand in their day-to-day lives that the cultural intensity is defused time and again by the same brand. You ask: so what option do smaller or non-market-leader brands have? Well, we think consistency is key. If a brand is able to align its marketing activities, including its ads, with the core emotion it wants to elicit over a period of time, then over time audiences find it second nature to think of that brand whenever the emotion presents itself. Consider Coke and happiness here, Nike and perseverance here, Apple and innovativeness here, and even British Airways’ example discussed earlier, in how they consistently built on the emotion over many ads and stories.
It goes without saying, though, that genuine corporate alignment with an emotion and vision, shown by consistently telling emotional stories of how the brand enhances its customers’ lives, will work naturally and much better than telling someone else’s emotional story and plastering your brand on it.
Let me leave you with this gem of a clip from our beloved adman show Mad Men; see how Don Draper’s pitch to Kodak (a client) is on point and highly emotional. Could we say then that good ads will always relate to you on an emotional level instead of a transactional level?
Getting Started With Concurrency in Python: Part I — Threads & Locks | The Basics of Threads
Python has two main modules that implement threads — thread (renamed _thread in Python 3) and threading. The difference between them is that the latter is an object-oriented implementation — and that is what we will be using in this article.
A Thread class is instantiated as follows (target is the function we want to execute in the thread):
Thread Class Instantiation in Python
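A minimal sketch of what that instantiation looks like (the do_work function is an illustrative stand-in):

import threading

def do_work():
    print("doing some work")

# target is the callable the thread will run once started
t = threading.Thread(target=do_work)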
In basic terms, once we create a thread instance, we have to start it with the .start() method. Take a look at the very simplistic example of just one thread below, where an I/O-bound task is simulated with time.sleep().
Example of a Thread in Python
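A minimal single-thread sketch along those lines, where a two-second sleep stands in for a blocking I/O call:

import threading
import time

def io_task():
    print("task started")
    time.sleep(2)  # simulate waiting on I/O
    print("task finished")

t = threading.Thread(target=io_task)
t.start()            # the thread is now alive
print(t.is_alive())  # True while io_task is still sleeping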
A thread is considered to be alive once it has started. The thread stops being alive after its run is terminated (upon either successful job completion or an interrupt/error). We can check the state of the thread using .is_alive() .
Building on this simplistic example, if we want to run multiple threads concurrently, we have to start each one and then join them at the end, using the .join() method. It’s important to understand that all threads belong to the same process, which means they all share the same data (variables, resources, etc.) — and we shall see how this can create problems later on.
Multiple Threads Example in Python
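A sketch of that start/join pattern with several threads (the one-second sleeps are illustrative):

import threading
import time

def io_task():
    time.sleep(1)  # simulate an I/O-bound wait

threads = [threading.Thread(target=io_task) for _ in range(4)]

for t in threads:
    t.start()  # start every thread

for t in threads:
    t.join()   # block until each thread has finished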
Threads can have names (either default or custom names passed as an argument) and machine-oriented identification numbers. We can access these using t.name and t.ident, respectively. Moreover, any given thread can know its own name — this is possible with the threading.current_thread() module function. We can also pass arguments to threads as in the (rather silly) example below:
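The greet function here is an illustrative stand-in, showing the args and kwargs mechanics along with a thread looking up its own name:

import threading

def greet(name, punctuation="!"):
    me = threading.current_thread()  # the thread can look itself up
    print(f"{me.name} says hello, {name}{punctuation}")

t = threading.Thread(target=greet, args=("world",),
                     kwargs={"punctuation": "?"}, name="greeter")
t.start()
t.join()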
| https://medium.com/swlh/getting-started-with-concurrency-in-python-part-i-threads-locks-50b20dbd8e7c | ['Narmin Jamalova'] | 2020-11-20 18:34:45.974000+00:00 | ['Python', 'Threads', 'Programming', 'Python3', 'Concurrency'] |
21 Running Resolutions for 2021 | 21 Running Resolutions for 2021
It’s resolution time!
Photo by Yasemin K. on Unsplash
What are goals anymore? We set them last year, and look what happened.
Don’t let 2020 discourage you from aiming high. Not having a goal at all will guarantee stagnancy, and who wants to be stuck in this mess?
Here are 21 running goals for 2021. You might not be able to train for that race, so take this opportunity to switch up your mindset and improve your athleticism in different ways.
1. Run the year in miles: 2021 miles in 2021.
2. Set a birthday goal to train for. Turning 30? A 30k might be the perfect way to celebrate.
3. Run every street in your city. Find a map and go for the goal of exploring every sidewalk, cul-de-sac, and pedestrian bridge inside your city limits.
4. Run streak! For a month, a season, or the full year, see how many days in a row you can run at least one mile.
5. Go for speed. Can you run your fastest mile, 10k, or marathon this year?
6. Check out Strava segments. Are you the fastest Strava user on your block?
7. Speaking of Strava, their new Local Legend feature allows for a new type of goal setting. You might not be the fastest in town, but can you run segments the most? Pick one route and conquer it more times than anyone else in a 90-day period.
8. Pay it forward. Do you have a friend who is interested in running, but doesn’t know where to start? Take them under your wing for a few runs a month and show them how rewarding this sport is.
9. Push your limits. Put something on the calendar that’s totally out of your comfort zone, like two races in one week or back-to-back long runs.
10. Set a world record! Did you know there are all sorts of Guinness Book of World Records events for “Fastest marathon in (insert silly costume here)”? I’ve known record holders for both fastest marathon in a toga and fastest marathon pushing triplets.
11. Switch up your routine. Do you run after work, but aspire to be a morning runner? Do you always listen to music during exercise, but envy those people who can work out to just the sounds of nature? Use this year to shake up your paradigm.
12. Start stretching. I don’t think I’ve ever met a runner who has their recovery strategy completely on point. Get on a strict schedule with your stretching, yoga, and foam rolling.
13. Document your training. Take an extra 30 seconds to log your workout and note how you felt before, during, and after the run. Having this data is invaluable. If you feel really good, or if you face an injury, you’re able to look back at the months prior and have a great idea of what works for you.
14. Go somewhere new. Find a trail network you’ve always wanted to explore, and plan yourself a running vacation. Find a campground or Airbnb near the trailheads, and create your own training camp.
15. Find your perfect gear setup. Make this the year that you truly commit to getting rid of those shorts that always make you chafe or the shoes that don’t quite fit. Build a relationship with a local running shop with a good return policy, and commit to getting your favorite gear down to a science. A subscription to Ultrarunning Magazine or Runner’s World is a fun way to kickstart your research.
16. Get comfortable with maps. Map reading might be a thing of the past for the general population. For runners, especially trail runners, knowing your way around a map is important. Get used to the ones provided on your watch or phone, and then upgrade to the old-fashioned paper maps and trailhead signs.
17. Elevation challenge. You might not be able to travel to Mt. Everest right now, but can you scale the equivalent 29,032 feet on your local hill?
18. Set a running-focused reading goal. There are so many amazing books about running. Lots of pro athletes have gotten into the cookbook scene as well. See if you can tackle one book a month and gain inspiration for your own training.
19. Master mobility. Running is a repetitive motion, and it can cause stiffness and muscular imbalances if you don’t do your mobility exercises. I recommend runners work through this book to learn practical mobility tips.
20. Get to know your body through running. Have you experimented with heart rate training? Do you know what it feels like to do barefoot strides in a field after a run? Endurance sports allow you to spend hours discovering more about your own physiology. What can you learn about yourself in 2021 on the run?
21. Spread the love. This is an isolating time for everyone. A smile and wave from a distance goes a long way. Set a goal to give a welcoming nod or joyful hello to someone on each of your runs. You could change their day for the better.
Photo by Fitsum Admasu on Unsplash
Happy 2021, runners. What goal do you have your eye on this year? | https://medium.com/in-fitness-and-in-health/21-running-resolutions-for-2021-c2a840ba0bd0 | [] | 2020-12-23 16:43:47.326000+00:00 | ['Goals', 'Running', 'Productivity', 'Training', 'New Year'] |
Holiday Programming Is Soooo Important to Me | While beginning to browse online as I do every year around this time looking for airing dates and times for my cherished television specials, I said “Oh, crap” aloud as I remembered something: Charlie Brown airs on the ‘ABC’ channel. The majority of other favored holiday specials also air on channels of the like: NBC, CBS, etc. Since I canceled the household cable earlier in the year and switched solely to streaming services in order to save some money, I don’t receive those regular channels anymore. This is where the big realization hit me: I need my Peanuts specials. They cannot be skipped over.
Panicked, I went straight to Amazon. I purchased the Peanuts holiday edition on Blu-ray and it arrived two days later. I have some relief because it’ll do the job, but it won’t be exactly the same — and here’s why:
For me, whether it was the years watching with my auntie or the ones spent watching alone, part of the annual tradition is actually watching it on the TV when the show is airing on a live broadcast.
I remember throughout my life, if I’d say something along the lines of “I gotta be home to watch ‘so-and-so’ tonight at 8,” my mom would respond with “Okay, but don’t you already own it on video?”
Yes, Mom, I do… but it’s not the same.
I prefer watching the holiday programs on the same night that everyone else is watching. Makes me feel all warm and cozy.
Knowing that other folks are watching the same animated, claymation, or musical holiday movie at the same time as me in their own living rooms across the country makes me feel like part of something special, as if others are watching along with me.
Somewhere, in another home — maybe down the street, or across town, or even a few states over — there’s someone who’s sprawled across their couch with their fuzzy-socked feet up on the armrest, and they’re watching too. There might be somebody in a big puffy recliner chair rocking back and forth (as I am), with the flames from their faux fireplace and remote-control candles flickering and creating ambience in the room, and they, too, are awaiting one of the much-loved specials to begin.
Maybe there’s a child somewhere sitting on the floor of a small above-garage condo loft, with their hard-working single mom on the other side of the wall making their dinner, and this child is getting to see the show for their very first time, beginning yet another personal lifetime of loving this seasonal experience. | https://beesbuzz.medium.com/holiday-specials-are-soooo-important-to-me-db71fc87240f | [] | 2020-10-22 04:11:51.122000+00:00 | ['Self', 'Family', 'Love', 'Wellness', 'Holidays'] |
Surprised That Big Tech Failed Us? Here Is How We Could Have Known. | Surprised That Big Tech Failed Us? Here Is How We Could Have Known.
While the recent headlines about Facebook, Google, and other tech giants seem to cause outrage, their ethical failings were predictable from their business models. Long time ago.
Show me your business model, and I’ll tell you about your ethics. It is often literally that simple. According to Milton Friedman and his school of economics, a firm’s purpose is to use its resources to increase its profits, with the kind addition that it does so without deception and while staying within the boundaries of the law.
That is what companies do — and what we usually expect of them. No one would assume Exxon to be after anything other than higher profits. Yet, for some reason, technology companies in the more recent decades have been seen in a different light. There seemed to be something revolutionary about these geeks starting companies in Silicon Valley. They seemed to be more innocent; they appeared to be following values.
Photo by Mitchell Luo on Unsplash
From Google’s “Don’t Be Evil” to Facebook’s “Bring The World Closer Together,” these companies seemed to build around purpose. They wanted to sort the world’s information and make it accessible. They wanted to connect people. Those were higher dreams than money, and people believed the techies who peddled them.
Yet beneath the surface, there were always more ambitions lurking — not necessarily money per se, but a desire to shape the world. Latest since Steve Jobs wanted to “put a dent in the universe,” tech folks were high on their Kool-Aid, thinking they could cure the world. The kind of Silicon Valley libertarianism expressed publicly by its prophets, is way more dogmatic and absolutist than anyone should be comfortable with.
While many of the founders of these companies earnestly believe that they live in the future and that they can help bring that future to the world as a good deed, they are also entirely trapped in their bubble. Within their friend circle, investor circle, and companies, there is little diversity to speak of. The people they claim to want to help are mostly not represented at all, other than in purpose statements.
And then comes Wall Street
While those idealistic founders can sustain their loss-making endeavors through the support of equally idealistic Venture Capitalists for a long time, at some point, that becomes unsustainable. Ever-larger rounds of money are needed, early VCs are pushing for exits, and early employees want to see returns as well. So eventually, they do an IPO.
And when they start to consider going public, the rules of the game change. They are advised to speak to Goldman Sachs and Morgan Stanley about helping them with the listing and start to hear about what investors want to see. And that is a sustainable business model, stellar revenue growth, and ideally profitability.
Hence, 1–2 years before the IPO, many revenue mechanisms have been tried, and the strongest ones are doubled down on and ramped up. The game demands it.
At that point, even the most idealistic founders are confronted with Friedman’s logic (or face the choice of being replaced). So, they compromise — in the name of the mission, they must monetize as much as possible. And so, the same geniuses who created the core product to change the world, now put their brains to work to make the most money possible. Of course, in the name of making the world a better place.
Photo by Aditya Vyas on Unsplash
Hence the revenue model before IPO, which they start doubling down on, becomes the fate of the company. It becomes what everyone optimizes around. And this to the extent, as Friedman described, that the law permits. Which, in the very underregulated digital world, is quite a sizeable gray area. A gray area that has been really exploited.
The Insidious Attention Economy
A lot has been talked about, that when the product is free, the user is the product. In the context of Facebook or Google, who monetize through advertising, this is often read as the user trading their data for services. These companies are hoarding data, collecting it on their sites and across the web, and getting rich off of that.
Yet that is not true. They are not getting rich off of the data itself. The user’s data is not being sold. That would be the least of their interests. They are hoarding the data only to make the actual product more valuable — which is the user’s attention itself.
The time people spend on the services, where they get exposed to targeted advertising based on that data, is the actual product, which is much more insidious.
If they were selling off the user’s data, it would be enough to collect a good quantity of it through various interactions. According to one study, with 300 likes, Facebook knows you better than your spouse. So, they could make just the data collection a key part of their activity, and not care about the time you spent on the service. At some point, the amount of data also has diminishing returns, and won’t improve the price.
Your attention, though, does not have diminishing returns. They can sell as much of that as there is a demand for — which is enormous. And the better information they have on you, the higher they can sell your attention. Hence the service is not optimized for data collection per se, but human engagement. Spend as much time as possible.
Photo by Austin Distel on Unsplash
Therefore, key algorithms that are now being criticized, such as the algorithm curating the news feed, are fundamental revenue mechanisms. As such, it is very predictable what their fitness function is, hence what they are being optimized for and what the likely outcome is. Engagement at all (human) cost. All it takes is thinking through the business model, to understand to what lengths, within the boundaries of the law, Facebook would go.
Look At The Income Streams
If you would be concerned about which products to use, because of all the headlines — have a look at the companies’ business models.
When a business is over 80% dependent on advertising revenue — your attention is the product, and your data makes it valuable. That is most likely not a trustworthy technology provider to deal with.
When a business sells products or services at considerable premiums, and that is their primary income stream, they would be hell-bent on earning and retaining customer trust.
Especially when free or low-cost alternatives, who monetize differently, are available, premium players are in the trust and security game. Those are better bets.
The recent branding emphasis of Apple, going all-in on Security and Privacy, is hence both honest and brilliant. In the age of surveillance capitalism, in which everyone is after your data and attention to sell you more stuff, the company selling their products in a classy environment and at a premium, has a niche to claim in being different.
Photo by Johny vino on Unsplash
Similarly, Microsoft has relevant messages around security and privacy, which resonate with many businesses. With their diversified income streams, which are dependent on them selling software and hardware, they are in a position to play that game well. Additionally, their focus is selling to enterprise customers primarily, who are highly concerned about security at all levels, and have no problems paying a premium for technology. Hence the business model can fully unfold around those needs and requirements.
It is, to my mind, no surprise that we are ending the last decade with Apple and Microsoft as trillion-dollar companies, while the rest of tech is trailing behind. I see their positioning as being highly relevant for the decade to come and foresee substantial challenges for Facebook and Google. While there are antitrust and regulatory probes into them, Apple and Microsoft seem to be out of the line of fire for now. For a good reason.
In Summary
While technology companies have been seen in a different light for a long time, as founders claimed higher motives and groomed their image well, it has become clear that they are slaves to the profit motive as well. Just like any other corporation, especially in the lead up to and after a public listing, they are subject to the forces of Wall Street.
These forces are looking for them to monetize to the highest degree possible — stellar revenue growth and profitability. Hence, they put their smartest teams to work on doubling down on the revenue streams they have figured out up until then. And they will exploit them as much as possible, within the confines of the law, in a legal and regulatory environment which, for much of new technology, is still to be defined. There is a whole host of gray areas being exploited to the maximum.
Therefore, we can predict how a company will act not by the prophetic words of the founders, but by its business model.
In the case of advertising, this model is selling the user’s attention and making that more valuable through collecting tons of data to better target. Hence, they will go to the limits on gathering information about people across the web and optimizing their services for time spent, not value created.
If we want to see technology companies act more ethically, we therefore need either to tighten up regulations, which has to happen, or, as consumers, to be prepared to pay for services that are currently free to us. We all have the choice to leave companies’ services which turn our attention into the product being sold.
We need to start choosing. | https://medium.com/swlh/surprised-that-big-tech-failed-us-here-is-how-we-could-have-known-86ddfb2fdd3 | ['Sebastian Mueller'] | 2020-03-08 12:17:26.959000+00:00 | ['Society', 'Economy', 'Business', 'Ethics', 'Technology'] |
COVID Destroyed My Social Life | With a desire to connect faces to this insidious disease, people where asked to participate in a project. Everyone who participated were asked the same questions and their responses developed into poems sharing their experiences.
| https://medium.com/faces-of-coronavirus/covid-destroyed-my-social-life-4f6b8be7842a | ['Brenda Mahler'] | 2020-12-20 21:30:06.854000+00:00 | ['Faces Of Covid', 'Reflection', 'Poetry', 'Coronavirus', 'Covid 19'] |
Shuffle, Split, and Stack Numpy Arrays | Although there are packages such as sklearn and Pandas that manage trivial tasks like randomly selecting and splitting samples, there may be times when you need to perform these tasks without them.
In this article we will learn how to randomly select and manage data in NumPy arrays for machine learning without scikit-learn or Pandas.
Split and Stack Arrays
In machine learning, a common way to think about data structures is to have features and targets. In a simple case, let’s say we have data about animals that are either dogs or cats. The task at hand is to prepare an array for machine learning without the use of helpful libraries.
In this example, consider a spreadsheet-like array were each row is an observation and each column has data about that observation. The rows represent samples and the columns contain data about each sample. Finally, the last column is the target, or label for each sample.
Figure 1 — One way to think about features and targets in an array for machine learning. Image from the author, credit Justin Chae
To get started on a machine learning project that predicts cats and dogs, the array might have a few columns and rows or thousands (or millions!) — whatever the case, the major steps are going to be the same: split and stack.
Split Dataset
You may need to split a dataset for two distinct reasons. First, split the entire dataset into a training set and a testing set. Second, split the features columns from the target column. For example, split 80% of the data into train and 20% into test, then split the features from the columns within each subset.
import numpy as np

# given a one dimensional array
one_d_array = np.array([1,2,3,4,5,6,7,8,9,10])

# randomly select without replacement
train = np.random.choice(one_d_array, size=8, replace=False)
print(train)

""" output
[ 3 5 10 9 6 8 4 7]
"""
Moreover, instead of always picking the first 80% of samples as they appear in the array, it helps to randomly select subsets. As a result, when we split, we actually want to randomly select and then split.
To randomly select, the first thing you might reach for is np.random.choice(). For example, to randomly sample 80% of an array, we can pick 8 out of 10 elements randomly and without replacement. As shown above, we are able to randomly select from a 1D array of numbers.
Random sampling is especially desired if the first half of the data contains all cats, since it prevents us from training on only cats and no dogs.
# an example array of data extended from Figure 1
# with shape (10, 4)
animals = np.array([[1,0,1,0],
                    [1,1,0,1],
                    [1,0,1,0],
                    [1,1,0,1],
                    [1,1,0,1],
                    [1,0,1,0],
                    [1,1,0,1],
                    [1,0,1,0],
                    [1,0,1,0],
                    [1,1,0,1]])

train = np.random.choice(animals, size=8, replace=True)
print(train)

""" output
ValueError Traceback (most recent call last)
in <module>()
----> 1 train = np.random.choice(animals, size=8, replace=True)
      2 print(train)
mtrand.pyx in numpy.random.mtrand.RandomState.choice()
ValueError: a must be 1-dimensional
"""
Oops — np.random.choice() only works on 1D arrays. As a result, it fails to sample from our animals array and returns an ugly error message. How to work around this issue?
First option. Turn the problem sideways and instead of sampling the array directly, sample the array’s index, then split the array by index.
Figure 2 — Randomly sample the index of integers, then use the result to select from the array. Image from the author, credit Justin Chae
If the array has 10 rows, the idea is to randomly select numbers from 0 through 9 and then index the array by the resulting list of numbers.
# length of data as indices of n_data
n_data = animals.shape[0]

# get n_samples based on percentage of n_data
n_samples = int(n_data * .8)

# make n_data a list from 0 to n
n_data = list(range(n_data))

# randomly select from range of n_data as indices
idx_train = np.random.choice(n_data, n_samples, replace=False)
idx_test = list(set(n_data) - set(idx_train))

print('indices')
print(idx_train, idx_test)
print('test array')
print(animals[idx_test, ])

""" output of split indices and the smaller test array
indices
[5 4 6 3 2 1 7 0] [8, 9]
test array
[[1 0 1 0]
 [1 1 0 1]]
"""
Second option. If the goal is to return random subsets of an array, another way to accomplish the goal is to first shuffle the array and then sample it. Note that unlike some of the other methods, np.random.shuffle() performs the operation in place. Given the shuffled array, slice and dice it however you want to return subsets.
Figure 3 — Randomly shuffle the entire array, select from the array. Image from the author, credit Justin Chae
With this second method, since the array is shuffled, simply taking the first 80% of rows represents a random sample.
# shuffle the same array as before, in place
np.random.shuffle(animals)

# slice the first-n rows (80%) as train, the rest as test
trn = animals[:8, ]
tst = animals[8:, ]
Split Array
Previously, we split the entire dataset, but what about the array, column-wise? In the example animals array, columns 0, 1, and 2 are the features and column 3 is the target. Sure, we could just return the 3rd column, but what if we have 5 or 100 features? In this case, negative indexing is a wonderful friend.
# negative index to slice the last column
# works, no matter how many columns
trgts = animals[:,-1]
print(trgts)

""" output is a flattened version of the last column
[0 1 1 1 0 0 1 0 1 0]
"""
In the example above, the negative index slices the last column off, but the result is now a 1D array. In some cases, this is desirable; however, the features and targets arrays now have different shapes — and this is a problem if we want to put them back together again. Instead, we can take care to slice with negative indexing (using -1: rather than -1) to preserve the 2D shape.
# len data as indices of n_data
n_data = animals.shape[0]

# n_samples based on percentage of n_data
n_samples = int(n_data * .8)

# m_samples as the difference (the rest)
m_samples = n_data - n_samples

# slice n_samples up until the last column as feats
train_feats = animals[:n_samples, :-1]

# slice n_samples of only the last column as trgts
train_trgts = animals[:n_samples, -1:]

# ... repeat for m_samples
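For completeness, the elided repeat step, which the stacking example below relies on, would look like this:

# slice the remaining m_samples rows the same way
test_feats = animals[n_samples:, :-1]
test_trgts = animals[n_samples:, -1:]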
Stack Array
At this point, we’ve shuffled and split the dataset and split the features from targets. Now, how about putting everything back together again? To stack left-right and up-down, we can use np.hstack() and np.vstack().
Figure 4— A single array split four ways to train and test with features and targets. Image from the author, credit Justin Chae
To put Humpty Dumpty back together again, stack horizontally and then stack vertically.
# combine side-by-side
train = np.hstack((train_feats, train_trgts))
test = np.hstack((test_feats, test_trgts))
# combine up-down, returns the original array
orig = np.vstack((train, test))
Structure Arrays
If you can’t or don’t use Pandas and only have NumPy, there are some ways to leverage the power of NumPy with the ease of Pandas without actually importing Pandas. But how?
Figure 5 — Make a NumPy array structured with column names instead of just indices. Image from the author, credit Justin Chae
I found there are some cases where it is important to track the actual name of the feature (or the column) throughout the program. One way to do this is to pass around a list of names with the array, but it is a lot to keep track of. Instead, I found it extremely helpful to transform the array to be structured.
With a structured array, column index 0 is also indexed by the word ‘fur’ and so on.
# import a new library in addition to numpy
import numpy.lib.recfunctions as rfn
# column names as a list of strings
col_names = ['fur', 'meow', 'bark', 'label']
# an array
animals = np.array([[1,0,1,0],
[2,1,0,1],
[3,0,1,0],
[4,1,0,1]])
# necessary to set the datatype for each cell
# set n dtypes to integer based on col_names
dtypes = np.dtype([(n, 'int') for n in col_names])
# use the recfunctions library to set the array to structured
structured = rfn.unstructured_to_structured(animals, dtypes)
print(structured['fur'])
""" output
[1 2 3 4]
"""
For more on operations with structured arrays, see Joining Structured Arrays; these methods were discovered via Stack Overflow at https://stackoverflow.com/questions/55577256/numpy-how-to-add-column-names-to-numpy-array.
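Since that reference can be terse, here is one hedged sketch of a column-wise join with the same recfunctions module; the extras array and its 'tail' column are hypothetical stand-ins, not part of the original example.
# build a second structured array to join with the first
extras = rfn.unstructured_to_structured(np.array([[0], [1], [0], [1]]), np.dtype([('tail', 'int')]))
# merge the two structured arrays column-wise into one
merged = rfn.merge_arrays((structured, extras), flatten=True)
print(merged.dtype.names) # ('fur', 'meow', 'bark', 'label', 'tail')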
Summary
In this story, I present some of the NumPy functions that I learned and relied on while taking a university course in machine learning. The course restricted the use of pre-built libraries such as scikit-learn and Pandas to reinforce specific learning objectives. These are just some of my notes that I hope are helpful to others seeking a few tips and tricks on NumPy with machine learning.
Thanks for reading; I hope it works for you. Let me know if I can make any improvements or cover new topics. | https://medium.com/python-in-plain-english/shuffle-split-and-stack-numpy-arrays-83f82033bf17 | ['Justin Chae'] | 2020-12-26 17:13:09.432000+00:00 | ['Machine Learning', 'Python', 'Numpy', 'Programming', 'Data Structures']
Everyday Color Theory | Everyday Color Theory
A poetic crash course on the history of color
A version of this essay was presented at SPAN 2019 during the “Hue & Glue: Hands-On Color Theory” workshop. Read to the end to find instructions for an exercise that explores the relationship between color and light.
Throughout the essay, the accompanying images are scenes from the short film Everyday Color Theory by Cortney Cassidy and Jefferson Cheng.
The color you see is only the color you think you see. Your interpretation of light landing on a surface depends on your frame of reference and your frame of mind—both can be altered at the speed of light.
When I’m overwhelmed in a crowd I distract myself by looking for as many red-colored objects as I can. Red is not typically a calming color, especially if you’re staring at miles of brake lights in traffic, but it’s really easy for me to see. Red becomes the most dominant thing on my mind.
“Someone who speaks of the character of a colour is always thinking of just one particular way it is used.” — Ludwig Wittgenstein
Humans have used red since the Neolithic era, as seen in prehistoric cave drawings. When developing languages, cultures typically name red first after black and white. It’s now used so frequently in advertising—because it attracts the most attention—that people have learned to ignore it. The ad industry has successfully made a highly visible color…invisible.
In grade school, I learned that yellow was the most soothing to color with when I brought the expensive markers I wasn’t allowed to use at home to school. I colored in a picture of Paddington wearing a raincoat well enough for my teacher to hang it on the wall for parent-teacher day. The comfort I experienced from quietly meeting the black lines with a high-contrast yellow, disappeared as I waited for my mother to find me out.
In her piece commemorating a decade of internet colors, designer Laurel Schwulst reflects that “yellow tries to show you the way” in Google Maps. Artist Ian Whittlesea describes yellow as the easiest to mentally conjure in his seven breathing exercises to become invisible—inspired by the literature of Rosicrucianism, theosophy, and esoteric yoga. According to Whittlesea, indigo is the most difficult color to generate and green can be the most difficult to keep stable.
Colors can be hard to see. To the chemist John Dalton, red, orange, yellow, and green all appeared the same. The rest of the color spectrum appeared as gradients of blue and purple. Dalton went on to write the first scientific paper on the subject of color blindness, “Extraordinary facts relating to the vision of colours” in 1798.
Ludwig Wittgenstein — the Austrian philosopher who popularized the rabbit-duck illusion as a means of describing two different ways of seeing — wrote in Remarks on Colour, that “not every deviation from the norm must be a blindness.” He pointed out that the “normal” sighted and the color-blind do not have the same concept of color-blindness.
People with decreased ability to differentiate between hues experience color through a series of judgments. My colleague, for example, explained that for red, green, and brown to function for him, he has to consider their context. These judgments may be “wrong” when they need to be “right,” like the order of a traffic light, the alternating red and green battery light on a vape pen, or the order of a color legend matching the clockwise color placement in a pie chart.
“If one says ‘red’ and there are fifty people listening, it can be expected that there will be fifty reds in their minds,” wrote the artist and educator Josef Albers. We see fifty different reds because we each perceive and experience color differently.
The artist Wassily Kandinsky could practically hear and taste color because of his synesthesia. To him yellow was a trumpet’s C note, black was the end of things, “blue is cold, red is a square, and green is a feeling,” as summarized by the painter Amy Sillman who called Kandinsky’s philosophy a kind of color astrology in her essay “Drug, Poison, Remedy, Talisman, Cosmetic, Intoxicant.”
In 1810, writer Johann Wolfgang von Goethe decided that an equilateral triangle was the most effective form for his ideas on the psychological effect of colors. This new way of organizing color came about a hundred years after Sir Isaac Newton first arranged colors on a disk, to create an early form of the color wheel. The triangle served Goethe as a modular system of desirable and dissonant color relationships that evoked vibes like serious, mighty, serene, and melancholic.
Colors can change depending on the nature of surfaces, like the atmospheric oxidation of the Statue of Liberty’s plating from shiny copper to verdigris (a bluish-green patina). In 1906, the Army Corps of Engineers vetoed a proposal from the United States Congress to restore the statue, concluding that the patina protecting the underlying metal from corrosion “softened the outlines, and made it beautiful.”
We can see those softened outlines because light helps us discern forms. I find Vantablack—one of the darkest known substances, absorbing 99.96% of visible light—unsettling. As Kassia St Clair explains in The Secret Lives of Color, “black is an expression of light, in this case, it’s absence.” My eyes hunt for the surface, and Vantablack leaves nothing to see.
White surfaces reflect and scatter visible light, and according to Wittgenstein “very few people have seen pure white.” In the essay “In Praise of Shadows” on traditional Japanese aesthetics, Jun’ichirō Tanizaki wrote that “western paper turns away light, while [Japanese] paper seems to take it in, to envelope it gently, like the soft surface of a first snowfall.” If you take a piece of computer paper that you know is white in its normal surroundings and place it next to snow, the paper may appear grey. A white can look light grey in poor lighting, and a light grey can look white in good lighting.
Or a dress can look black and blue under yellow light and white and gold in blue light.
In full sunlight, the petals of a red flower appear bright red against duller green leaves. At dusk, the red flowers become darker while blue flowers appear brighter than they did in full daylight. This effect is called the Purkinje shift.
Albers based his career on studying these kinds of shifts in color. In his book Interaction of Color, first published in 1963, he created color theory exercises that could make “colors lie”—as Tamara Shopsin put it in her essay “Homage to an Homage of an Homage.” By itself, a color appears dominant, but when placed next to an even stronger hue, we can see its more diminished true nature.
Sixty years before Interaction of Color, the artist and historian Emily Noyes Vanderpoel published color studies in her forgotten book, Color Problems: A Practical Manual for the Lay Student of Color. Her work predicted trends that wouldn’t occur for several decades, like the concentric square format of Albers’s Homage to the Square.
In color therapy, also known as chromotherapy, shifts in color can change minds. Each color is believed to have healing energies that can affect the body, like red for passion, green for harmony, and violet for intuition. One method for administering chromotherapy includes eating foods of a specific color. The artist Sophie Calle does this in her series The Chromatic Diet. Inspired by Paul Auster’s Leviathan, in which a character inspired by Sophie Calle herself restricts her diet to foods of a single color on certain days, Calle recreates and photographs each meal in the book as an act to bring herself and the fictional character closer together.
An inventor in Arkansas advertised Vision-Dieter, his special two-color tinted glasses that deter shoppers from buying brightly-colored food packages. He boasted that “you won’t believe your eyes” and that the combination of a blue and brown tint was a “secret European color technology.” But the product didn’t work and the FDA destroyed most of the glasses.
Before the computer, the physical production of pigments limited our access to color. Naturally occurring, finely ground minerals mixed with toxic solvents — or the urine of cows fed mango leaves — were expensive. Ultramarine was the most expensive blue used by Renaissance painters and remained available only to the rich, or those backed by wealthy benefactors, until the invention of a synthetic version in 1826.
In “On Color,” Sillman notes that most oil painters can tell the difference between colors from their weight alone, but that also means we’re “somewhat doomed to the palette provided by manufacturers.” Even our ready-made digital palettes are predetermined for us by the choices of both software and hardware manufacturers. (Or whoever we borrow from when using the eyedropper tool.)
Our perceptions of people or objects in photographs change dramatically from black and white to color. If I see myself in a black and white photo, I am far less critical about my polychromatic flaws because they’re filtered out. And if I see a colorized photograph of a historical figure or event, I’m surprised by how real they seem.
With the invention of movable type, the colorful heraldic language for coats of arms was transferred into a one-color hatching system for books, wax seals, and coins. The graphic design duo Dexter Sinister tested the transference of color data to one-color hatching by converting a László Moholy-Nagy panel that he originally “painted” over the phone with a manufacturer.
In the Farnsworth-Munsell test, in which a subject has to arrange 100 hues on a continuous gradation scale, less than sixteen percent achieve a perfect score. According to Johannes Itten, one of Albers’s instructors at the Weimar Bauhaus, distinguishing the many shades of a color depends on the sensitivity of the eye and “the response threshold of the observer.” In English, we only use about thirty names for colors in daily vocabulary, and only a “trained artist can discriminate and name a great many hues,” wrote Umberto Eco in “The Colors We See.”
Albers believed that developing a sensitive eye for color takes practice, and it doesn’t come without a bit of work: Illustrator Tamara Shopsin writes that Albers’s exercises “were hard and time-consuming”; Sillman compared his Interaction of Color to notes from a test kitchen: “To do his exercises, you first have to gather color swatches like ingredients, splice and dice them, layer them and shift them around to test them out on your eyeballs.”
“When one becomes infatuated with the seven [spectral] colors, the mind is easily distracted,” wrote Masanobu Fukuoka in his manifesto The One-Straw Revolution. One of the most important organic farmers of the twentieth century, Fukuoka believed that viewing the colors of the world with “no-mind” — a state that recognizes the insufficiency of intellectual knowledge — helps one see the color of the colorless as color.
A flat gray surface can come to life through its small modulations of shading, which requires a visual sensitivity to tonal differences. The act of arranging the subtle differences is like arranging lengths of sticks or consecutive numbers, according to Wittgenstein, who once asked, “To what extent can we compare black and white to yellow, red, and blue?”
To use his own words in response, “Colors are the children of light, and light is their mother.” Notably, Vanderpoel also called color “the music of light.” Black and white, yellow, red, and blue preserve their relationship with light through the scales of tones between their lightest and darkest. The continuous scale does not change in saturation, but changes in brilliance. As all colors we can and cannot see differ depending on their surface, surrounding, and our state of mind—they will always in some way share their brilliance. | https://medium.com/google-design/everyday-color-theory-59c1ca0770cb | ['Cortney Cassidy'] | 2020-04-10 14:36:58.370000+00:00 | ['Color Theory', 'Design'] |
3 Lessons from 100+ Design Interviews | We start every interview with the same question: “Can you tell me what you had for breakfast?” This gives us time to adjust sound levels, change the framing, and dim the lights. Today, almost everyone responds with, “Just coffee.”
It’s the final day of Design Week, a 3-day conference for the Workday design community. And to be fair, last night was a late one. Visiting from a remote office in Victoria, BC to Workday headquarters in Pleasanton, California, my favorite part of the week was getting to know my new colleagues outside of work.
Understandably, everyone’s a little slower today. The lights we’ve set up in the hallway are too bright for this early in the morning. But we soon get to talking.
Best Laid Plans
All week, my team has been recording interviews with Workday Design during the breaks between presentations. The idea started simply enough. With over one hundred team members gathered together for the first time, we wanted to ask them a set of open-ended questions about design. Just a casual chat on camera. When would we get an opportunity like this again?
Over the next few months, I’d also fly to two more of our international offices, comb through weeks of footage, and eventually produce an interview mini-series exploring some of the biggest questions facing our designers: what design means to them, how to be successful, and what impact they want to have on the world.
I want to share three lessons I learned along the way about the importance of listening to your community. More than anything else, I want to encourage you to try this with your team at your company. I’m not saying film an interview miniseries. I’m not even saying write a blog post. But give your people an opportunity to speak their mind freely and you’ll be amazed at the things you’ll learn.
What is Design Week?
Let’s start from the beginning. What is Design Week? Design Week is an internal conference put on by and for members of Workday’s global design community. We brought together people who had worked together for years but only ever spoken through video conference calls. Now they were singing karaoke duets or playing cornhole together. “You’re much taller than I imagined!” I heard several people remark.
For speakers, it was an incredible opportunity to present your ideas to the entire design community — an incredibly enthusiastic and encouraging captive audience. Think of it like giving a great TED Talk at a really supportive family reunion. It made the interviews we recorded feel like home movies. We wanted to document the energy and excitement of the conference and still give the people who hadn’t presented an opportunity to share their ideas.
Six Questions and a Softball
The plan? Ask 100+ designers across Workday the hard questions.
We asked over 100 of our colleagues, from senior leadership to fresh interns, to answer the following questions in 1 word:
1. What does design mean to you?
2. What does design mean at Workday?
3. What is the culture of design?
4. What makes a designer successful?
5. What impact should design have on the world?
6. How was Design Week?
We wrote the questions as a way to get to know our colleagues — what makes them get out of bed in the morning and come to work? All told, we recorded over 50 hours of footage from people passionate about design, and broke each question down into its own video. Here’s what we learned from that process.
Lesson 1: Don’t Think Twice
Sometimes constraints promote creativity. Cynthia and Stella consider their one-word replies carefully.
One way we kick-started creativity was to pose an impossible constraint. Answer a broad question like “What is design?” with just one word. Of course, there was pushback. And cheating. But limiting people to one word helped in two ways.
First, it gave us short and sweet soundbites with which to organize and structure our videos. Later we could comb through the footage and place similar answers together. We color coded the responses until a human heat map appeared. Putting similar sets of soundbites together brought forward themes which we then linked together into a natural storyline.
Second, posing a one-word constraint forced our participants to focus. Trying to distill really big ideas into a single word sparked a lot of creativity in their answers. It led them to reflect deeply on what was truly important. We had five minutes with each person and six questions, so paring things down to the essentials meant we could interview everyone who signed up. It also meant that each person thought hard about what the question meant to them. That dramatically improved the quality of answers we received.
Lesson 2: Let Go and Trust
“When you trust people, you get great things done.” — Beth Budwig
If you’re anything like me, the idea of asking difficult questions about people’s jobs fills you with anxiety. What if the answers aren’t usable?
Personally, I’ve seen too many corporate culture videos where the answers feel inauthentic and the participants feel coached. The fun feels staged. This was the last kind of video we wanted to make. Authenticity and spontaneity requires tremendous trust between interviewer and interviewee. At the time, I was also brand new to design. So how could I get people I was meeting for the first time to trust me and really speak their minds?
One of the first things the team did to promote trust was to let go of our expectations. We had no message in mind to convey through the video; we’d let the participants tell the story. No coaching, no staging. We left the answers open-ended, we let anyone sign up, and we listened, really hard, to what each and every person had to say.
We also made it our goal to interview everyone in the organization, regardless of tenure and title. Several participants, each new to the company or still working as interns, were surprised we wanted to hear their opinions just as much as the opinions of the more senior leadership. In many cases, those new employees gave the strongest answers. It surprised me how committed the whole team was to this idea of recording everyone. Even after Design Week was over, we flew to remote offices and recorded hours of footage with anyone we had originally missed.
So in short, we built trust by removing expectations, creating a safe space to share, and actively seeking input. In return, everyone we interviewed gave us honest, thoughtful answers. Of course, opening up the interview process like that carried risks — it meant giving up control over what was said. But one interviewee, Beth Budwig, said it best: when you trust people, you get great things done.
Lesson 3: Ask, Don’t Tell
“Design is a communication medium, it is a language that gives meaning to articulate ideas and concepts.” — Ben Taylor
Finally, the main lesson from this interview marathon was the importance of asking, not telling. In early brainstorm meetings, we came up with a lot of great ideas for what we might want to say in a Workday design video. But those ideas were nowhere near as interesting, honest, or insightful as the answers we received. Sometimes it’s worth asking a question, even when you think you know the answer.
There was also a lot of value for participants. We wanted to hear from everyone, and everyone had something they wanted to say. Even when we ran late into the evening and the rest of the teams had gone out to dinner, people stayed in the hallway, lining up for their opportunity to talk with us. They had thought long and hard about these questions and valued the chance to share their ideas with the rest of the company, and, as of today, the public.
Asking your colleagues, leaders, and peers about what they find important is a great way to get to know them. From my own experience I feel much more engaged when I feel heard and understood by my colleagues and my company. There’s a lot to be learned by talking to our people this way. We spend a great deal of effort recruiting top talent and surrounding ourselves with brilliant people, so we shouldn’t be surprised that they have inspiring answers. We just need to ask more.
Summary
“There’s a saying: a rising tide lifts all boats. That’s sort of what I feel like the design team community is here for.” — Angelina Di Francesco
So, if there’s one thing to take away from our experiment, it’s to try it yourself. If you haven’t already, ask yourself and your team: Why do you do what you do? What impact do you want to have? What is your culture really like? It doesn’t have to be a video series or a blog post, but start asking questions. And most importantly, start listening.
Watch the full video series here and let us know what you think! | https://medium.com/workday-design/3-lessons-from-100-design-interviews-4bca49f25ab1 | ['Workday Design'] | 2020-03-24 16:01:01.067000+00:00 | ['Design Videos', 'UX Design', 'Design', 'Video Series', 'Design Media'] |
Can we agree ASMR is just brainwashing which leaves gullible people psychologically naked? | You can’t navigate Youtube without some bumbling fuckwit tapping a microphone knowingly. Sometimes they brush the microphone with a pleasant but eerie smile. They often whisper non-sequiturs and stir yoghurt. It’s like society has fallen into a Monty Python sketch. But no. This is the world of ASMR creation and some people are making a fortune.
The ONE formal study into ASMR (2015) describes it thus:
Autonomous Sensory Meridian Response (ASMR) is a previously unstudied sensory phenomenon, in which individuals experience a tingling, static-like sensation across the scalp, back of the neck and at times further areas in response to specific triggering audio and visual stimuli. This sensation is widely reported to be accompanied by feelings of relaxation and well-being. The current study identifies several common triggers used to achieve ASMR, including whispering, personal attention, crisp sounds and slow movements
And so the big game of ‘I can’t see the Emperor’s scrotum’ begins. Hundreds of thousands of people now claim that they can’t sleep without the soothing sounds of ASMR.
This is despite the fact that five years ago everyone was sleeping fine. I don’t remember people keeling over on a daily basis from sleep deprivation. I don’t remember everyone needing a prescription to Bob Ross to get through the night.
Which is why I’m calling bullshit on ASMR
I don’t doubt that ASMR is a thing. I imagine that a social species like ours would have a positive feedback response to specific stimuli. It also makes sense for that to be whispering and soothing close personal behaviours. It makes perfect sense. I’m not trying to overturn the ONE scientific study into this phenomenon. I don’t have a problem with the science. I have a problem with people’s thinking.
What’s wrong with ASMR and people’s thought processes?
Human beings suffer from many brain failings, some of which I highlight in my articles. We have a tendency to conform to what other people think and hate feeling excluded. This means if a twenty-something is offering head orgasms from her bedroom-cum-foley studio, you’ll sign up. You’ll sign up because your friends did. If they feel it. You feel it too. Not at first. At first, you’ll believe everyone is fucking mental. Then, faced with the cognitive distress of exclusion, you’ll feel it too.
Before you know it, you’ll become an acolyte in the sinister and ridiculous world of ASMR
You’ll believe that you can’t sleep unless someone smacks their lips or uses a brush to clean a microphone. You’ll find yourself waxing lyrical about being whispered at whilst you fall asleep. People like me will point out that this was previously the preserve of horror movies.
You won’t care. You’ve got a prescription to Bob Ross.
Conformity is strong. The desire to belong is strong. The ASMR effect is optional bullshit on which you can base your relaxation habits. If you enjoy funnelling money towards people who are dicking around then go ahead. I’m not going to stop you.
A note of caution for ASMR enthusiasts
If you create a habit, and you reinforce that habit then there’s a chance that you’ll get stuck. Convince yourself that you can’t sleep without a chittering human-cicada deep-throating a microphone and you won’t be able to. Your brain doesn’t like you to be wrong. Your insane devotion to this mad fad will stick around. A lifetime of worshipping at the altar of the bizarre.
Youtubers cashing in are like the tailors in ‘The Emperor’s New Clothes’. They will leave town with piles of cash before the delusion breaks. Those people who swear by ASMR will find themselves feeling ridiculous. Caught standing around without their psychological clothes on.
In ‘The Emperor’s New Clothes’, it takes a child to point out that the eponymous hero is nude. Suddenly, the spell breaks. Everyone laughs and claims to have been able to see his hairy scrotum the entire time. There is a great deal of mirth.
ASMR isn’t a naked middle aged man with vanity issues, but it’s not far off that level of ridiculous. Keep watching those videos if you want. Keep convincing yourself that you’ve got a wonderful new relaxation hack for your psychological wellbeing. On behalf of the estate of Bob Ross and the hundreds of youtubers laughing their way to the bank, thank you.
I will continue to sit over here and laugh. ASMR in my mind stands only for Asinine Stupid & Mentally Retrograde. | https://medium.com/pickle-fork/can-we-agree-asmr-is-just-brainwashing-which-leaves-gullible-people-psychologically-naked-d6d9c93eca4 | ['Argumentative Penguin'] | 2018-12-20 21:21:09.147000+00:00 | ['Humor', 'Social Media', 'Psychology', 'YouTube', 'Asmr'] |
Far away from India, a Vedic ecosystem rises in Texas Gaushala | Meanwhile, two individuals, one in Russia and one in California (who happened to read my Medium article on Abhinav in January 2020) decided to donate one cow each to the gaushala. This time Abhinav selected two brown cows from the Sahiwal breed. They were named Asha and Kamala. The new cows made themselves comfortable faster than the first ones had, and soon they were all feeding and wandering together.
“They might be just five cows but I get the satisfaction that we saved them from slaughter,” said Pratibha.
The cows were still not being milked. Both Abhinav and Pratibha were unwavering in their decision to not use force. “Getting cow’s milk requires tapasya,” said Abhinav. “Until the cows willingly allow us to take their milk, we will wait,” he added. “And even when they get old, this is their home.”
But milk is just one of the gifts given by cows. Dried cow dung has been used as fuel, manure, an ingredient in Panchagavya (Ayurvedic formulation), pesticide, antibacterial cleaner and plaster for millennia in India. It is the stuff of folklore. Pratibha and her children lost no time in collecting cow dung and drying it into cakes of various sizes. It was hard work. Since the cows were free to walk all over the estate, they could drop their dung anywhere, which meant that collecting it involved a great deal of walking.
During one of my walks with Abhinav, he pointed out that wherever the cows had dropped their dung, the grass around it had grown in thick clusters. It was indeed true! There were clumps of grass sticking out in different parts of the meadow. Suddenly I remembered the thick grass clusters I had seen in childhood but never thought about the phenomenon. How oblivious we are of the subtle workings of nature.
Once the flyers announcing the sale of cow dung cakes were circulated, many Hindus began placing orders to buy them from mid-October. Gomaya (cow dung) is considered very auspicious to use in yajna or fire ceremonies. During the Navaratri festival, many individuals and temples perform yajna. The Gomaya from Texas Gaushala was donated by many Hindus to temples.
“Until now, we were only using Gomaya substitutes in our yajnas; we are glad there is a gaushala now to supply us with real Gomaya,” said the priest of Sri Krishna Vrundavana, a Hindu temple in Houston. Orders began to come even from other parts of the US. | https://medium.com/age-of-awareness/far-away-from-india-a-vedic-ecosystem-rises-in-texas-gaushala-8ac73444e734 | ['Sahana Singh'] | 2020-11-27 19:15:36.787000+00:00 | ['Environment', 'Nature', 'Agriculture', 'Culture', 'Lifestyle'] |
🎛 Manager Updates: Token exchange, unique tokens (NFTs), support for multiple languages & the addition of Trezor Wallet | We really appreciate the work StevenJNPearce did on this.
🐱 Crypto Kitties, Unique Tokens or “Non-Fungibles”
We love our kitties and really dislike the acronym NFT (Non-Fungible Token). Crypto-assets are already confusing enough. When I first heard the term fungible I thought the topic of conversation had moved to mushrooms. 🍄
We need to use clear language to help people grasp these concepts and acronyms seriously suck. We like the term collectible but that doesn’t work well for tokens that represent debt, insurance and options. We have decided to call them Unique Tokens. Although the community understands what NFTs are, our customers do not. People understand the concept of unique tokens a lot more easily. They grasp the underlying concept quickly and see how exciting these unique digital assets are.
A huge thank you to Palevoo and Igorline for their time and effort. We connected with them through Gitcoin and they have been great.
🗣 Managing Multiple Languages
The Balance Community has translated Manager into 🇪🇸 Spanish, 🇩🇪 German, 🇷🇺 Russian and 🇮🇹 Italian.
We have open issues on GitHub for 🇰🇷 Korean, 🇯🇵 Japanese, 🇵🇹 Portuguese, 🇵🇱 Polish, 🇧🇷 Brazilian and 🇨🇳 Mandarin. If you would like to help us translate the product into your native language, please get in touch.
💻 Contributors & Collaborators
Now that Balance Manager is open source, we welcome any open source contributions from the community. If you would like to collaborate with us and earn money, we would love to work with you. We are using Gitcoin to help us attract and reward remote collaborators. As Gitcoin matures and Balance starts to generate free cashflow we plan to deploy a lot of capital very quickly to help us design the best user interfaces for promising protocols. If you want to earn tokens, check out: https://gitcoin.co/profile/balance-io
Forgive the typo. Gitcoin needs to pull in my correct spelling of the word #BUIDL.
🤓 How can I help if I am not an engineer?
The biggest non-technical things we need help with are in community buidling. The only moat in the world of open economic protocols is the community of people who believe in the code. We need to widen that moat as quickly as possible as people start to move their assets out of closed source banks and into open source ones.
Never fork with the community.
If I forked Bitcoin and called it Burtoncoin, none of you would support it. In the open source world, the community has to be earned; it cannot be bought. That is what makes this movement so hard for traditional entrepreneurs, economists, bankers, politicians, investors & journalists to understand. Economic protocols are digital religions that people can believe in and benefit from.
A lot of the discussion goes on in our forum. We would love to welcome you to the Balance Community: https://spectrum.chat/balance
We really need help with our Clothing. If you can help us, we would love to talk.
📈 Better Beta Data
Nearly all of the customer support requests we have received have fallen into two categories: feature requests and data issues. We are working incredibly hard on getting clean data out of Ethereum. It turns out that counting all of the tokens within an Ethereum wallet is not an easy task. We are partnering with The Graph Protocol to get an open interface for token balances specified and shipped as quickly as possible.
We also have lots of issues with the price data. This is a really hard problem to solve. Lots of tokens trading in lots of places makes it hard to find a good provider. Lots of people refer to CoinMarketCap.com but they have a bunch of issues. We hope that a combination of efforts in this space from projects like Messari and MakerDAO will help us improve our pricing information.
When we are happy with the quality of our data sources, Manager will finally lose the Beta tag and we will call it a 1.0.
😍 Thank you for all of your help
We really appreciate all of the people who are helping us buidl Balance. Your friendly tweets, feature ideas, thoughtful links and recommendations really motivate us.
Onwards 🖖 | https://medium.com/balance-io/manager-updates-token-exchange-unique-tokens-nfts-support-for-multiple-languages-the-2c20601a9cc0 | ['Richard Burton'] | 2018-06-15 09:45:49.896000+00:00 | ['Cryptocurrency', 'Blockchain', 'Startup', 'Ethereum', 'Bitcoin'] |
10 Things I Wish You Knew About Borderline Personality Disorder | For at least 20 years — though I suspect much longer — I’ve been battling mental health issues. Initially, I was diagnosed with depression. That never seemed quite right, or all, and my next diagnosis was dysthymia.
Years passed with more problems and zero relief. I was then diagnosed with bipolar type two, and for a little while that seemed alright. Like I had some answers… until I didn’t because treatment wasn’t really doing anything for me.
For a long time, I felt stuck as if I had to resign myself to an unhappy life where adequate treatment wasn’t an option for me because nothing seemed to work. But finally, I got a new diagnosis — borderline personality disorder.
That diagnosis was much scarier than any of the others because I had all of these notions that BPD meant I would never have a shot at a healthy life. In reality, I had been misinformed about the condition. Once I began learning what it really meant to be borderline, everything clicked for me.
Here’s what I didn’t know, and what I wish more people understood today.
1. We can get better.
With proper treatment and personal responsibility, a person may not even continue to have symptoms. That’s right — despite what you have heard, BPD is not a life sentence. My treatment has involved a lot of personal work of reframing my perspective and getting rid of “stinking thinking.”
It’s helped me to know there’s a reason my mind panics and fears the worst if I let it run wild.
Of course, individual results vary, but remission from most symptoms is entirely possible for many people with borderline. But don’t just take it from me.
2. Treatment isn’t always that hard.
For me, the hardest part about being borderline was going through decades of misdiagnosis. Once I finally knew what I was dealing with, changing my ways was so much easier than I ever imagined.
I no longer felt guilty or helpless because meds didn’t work for me, and in fact, I felt empowered about my mental health for the first time in my life.
3. It’s not “worse” than any other mental illness.
There are no drugs approved by the FDA to treat borderline personality disorder. None! Doctors might prescribe meds for comorbid conditions like depression or anxiety, but treatment for BPD itself is individual and mostly about changing your thinking. Shifting your perspective.
Personally, I feel lucky having BPD since so many of my symptoms are now under my control.
4. Being borderline doesn’t make us manipulative, crazy, or psycho.
Lay people and even some health professionals make unhelpful assumptions about people with BPD. Some of the most fraudulent myths about the mental illness come from movies like Fatal Attraction or Psycho.
The criteria for borderline means you have at least 5 out of 9 possible symptoms, and honestly, I manage all 9 symptoms without living out any of the false stereotypes.
5. Men can be borderline too.
BPD is not an illness that impacts only women. Men have it too, but unfortunately, we don’t know how many due to misdiagnosis. Men with BPD are frequently diagnosed with PTSD or basic depression instead.
I hope that more people can talk about the condition and raise awareness to help end a great deal of needless suffering among men who may not even know why they feel so lost and empty.
6. “Emptiness” may be the most misunderstood symptom of BPD.
Chronic emptiness is different than depression. I read one man describe it as more of a longing and I’m inclined to agree. I also think of it as a sort of ennui or restlessness of the soul.
You want to feel happiness and connection, but no matter how much you try, you feel dissatisfied and empty. Perhaps even bored of yourself, in part because you don’t have a great sense of who you really are.
That’s why effective treatment often hinges upon developing greater self-awareness.
7. It is not always an obvious affliction.
Readers sometimes tell me that I don’t seem or sound borderline, as if it’s something you could easily catch in a person’s writing. I think the assumption stems from the way the media has often portrayed the disease.
I am an introverted person who’s spent her entire life feeling like she must keep a cap on her wild and effervescent emotions. Very few people have ever seen me at my worst when I feel unable to contain my feelings. The pressure I feel to properly control the rumbling within me is in essence why I have so often failed to have healthy relationships.
8. Treatment has made me a much more chill and practical person.
The question has come up how I can write so much about relationships when I freely admit to so many failed ones. The thing is, I have had to do so much research and exercise in managing my own emotions and expectations just to treat my BPD symptoms, that I now enjoy an entirely new perspective about love and human connections.
I used to be the least chill person when it came to dating. Yes, I “obsessed” over each crush. I got carried away and tried to fit each guy into my fantasy of ideal love.
Once I got my borderline diagnosis and began working on my unhealthy attitudes, there was an enormous shift in how I look at love. I was finally able to relax and quit thinking that my worth was dependent upon my relationship status.
9. Childhood trauma might have a lot to do with it.
When I first wrote about this, my friend Judy McLain brought up good points about childhood trauma and BPD. Experts are still divided about whether or not childhood trauma causes borderline personality disorder, but there seems to be (at least) a strong correlation.
A lot of us who have had traumatic childhoods didn’t know how bad they were until we grew up and learned more about what’s healthy and what’s not. And plenty of borderline folks whom I’ve met over the years have struggled with attachment issues to their parents first.
It makes sense to me that BPD could be related to a dysfunction in a person’s relationship with their caregiver(s). So much of the borderline struggle connects to a lack of security in who we are and how we relate to others.
10. There is more hope than people think.
Right now, there’s still a ton of stigma surrounding borderline. The way the world sees those of us who’ve been diagnosed with the disorder can be shockingly sharp. I occasionally ask questions on Quora about life with BPD, and I’m taken aback by how many people insist that a person with borderline can never get better. They are convinced it’s impossible.
That’s a lie, and I’m so glad I did my research before I listened to the stigma and threw in the towel.
If you or someone you love struggles with the symptoms of borderline personality disorder, there is hope for a better life. You just need the right information to begin making progress. | https://medium.com/honestly-yours/10-things-i-wish-you-knew-about-borderline-personality-disorder-9299e52a2ad9 | ['Shannon Ashley'] | 2019-11-21 22:10:34.494000+00:00 | ['Personal Development', 'Culture', 'Mental Health', 'Self Improvement', 'Life Lessons'] |
What Is a Bear?. A private zoo owner thought he was… | Bradley Gerwig’s relationship with animals is complex. Over the course of his life, they’ve served as sources of income, nutrition, entertainment, and companionship. Some are nuisances; others are worthy prey. Asked if he considers himself an animal lover, he responds with an anecdote about how, as a high school student, he’d wake up at 4 a.m. and milk 40 cows for $20 a week. He’s raised everything from dogs and cats to lemurs and lions, and he’s a skilled hunter, fisher, and trapper. In the past few years, however, Gerwig hasn’t picked up his rifle, partly because of an aging body and partly because of an evolving conscience.
“I don’t have a desire to go out and hunt and kill anymore,” he says. “I’d sooner just take care of an animal.”
It’s a damp December morning, and Gerwig is at his kitchen table, sporting a T-shirt with a dramatic illustration of three bears cast in moonlight. He talks with a gravelly drawl that makes the word “tire” sound like “tar.” His white hair is brushed to the side, and semicircles of dirt line his fingernails. Seated to his right is his wife, Lurline. They rode the bus together in high school but didn’t start dating until their mid-twenties. “I felt sorry for him,” she jokes between sips of Folgers. They married, had some kids, and in the late 1970s built a small zoo on three of their five acres of property. For them, it seemed like a no-brainer. “We like foolin’ with animals, and little kids love seein’ animals,” Gerwig explains.
Opening a zoo in your backyard may sound like an insane, high-liability undertaking, but it’s not that unusual or legally complex. One of the most important steps in the process is obtaining an exhibitor’s license from the U.S. Department of Agriculture. Doing so requires completing an application and passing an initial inspection to show that you meet the agency’s minimum standards for care and housing. In other words, your fence must be tall enough, and your cages must be locked.
PETA and other animal law experts have for years been sounding the alarm over what they say is a tragically lax permitting process. “The agency automatically renews it once you get it, so it’s really the only hurdle you have to clear,” says Delcianna Winders, vice president and deputy general counsel of captive animal law enforcement at PETA. “It’s a very inexpensive license, and once you have it, it’s pretty much carte blanche. You can violate the law as much and as seriously as you want, and they will still renew your license year after year.”
Of course, not all zoos limit their aspirations to the USDA’s minimum standards. The big, well-known ones, such as the San Diego Zoo and the Bronx Zoo, belong to the Association of Zoos and Aquariums (AZA), a nonprofit accrediting organization that sets stringent regulations. Many of these institutions have missions of science and conservation at their core and go to great lengths to ensure their animals live in healthy, enriching habitats.
They’re the exceptions. Fewer than 250 zoos worldwide are accredited by the AZA. Meanwhile, more than 2,000 operations in the United States have active USDA exhibitor’s licenses. They range from traveling carnivals to roadside zoos like Gerwig’s, who says he didn’t even know he needed a USDA license until several years after opening up, when an agency inspector showed up one day and told him so. He invested roughly $10,000 to put up new fences and improve his animals’ enclosures and was then legit in the eyes of the federal government.
At no point in time was the zoo meant to be a full-time job or source of income, Gerwig says. Both he and Lurline had steady gigs with the U.S. Postal Service; he also delivered newspapers in the early mornings. The zoo never turned an annual profit, Lurline says, as the costs of feeding dozens of animals always outstripped the modest income generated by admissions. “If you want to call losing money every year a business, then I guess it was a business,” she quips.
Initially, the Gerwigs’ zoo consisted of mostly farm animals — goats, lambs, pheasants, and pigs — and was akin to a petting zoo. There were some picnic benches and a play area for kids in the front yard. Signs on the animals’ pens offered basic information about what they eat and how long they live.
About a year after opening, Gerwig took a few of his animals to a nearby carnival and set up a small display to promote the zoo. There, he says, an acquaintance approached him with a gift: a lion cub stashed in a cardboard box. They plopped the cute ball of tawny fur in Gerwig’s exhibit, and the kids went nuts. It was Gerwig’s first real exposure to the allure of exotic animals, and his zoo would never be the same. The lion, which his kids named Simba despite it being female, ended up becoming a full-time resident in Gerwig’s backyard for the next 18 years.
During that time, Gerwig steadily grew his collection of exotic wildlife. Arctic foxes, bobcats, coatimundis, lemurs, and, of course, bears followed.
“Lions and bears,” Gerwig says, “that’s what the kids wanted to see.” | https://gen.medium.com/how-a-bear-in-a-backyard-in-maryland-triggered-a-national-brawl-over-the-inner-lives-of-animals-bb55ecb69d3d | ['Chris Sweeney'] | 2019-09-27 00:46:28.292000+00:00 | ['Environment', 'Reasonable Doubt', 'Animals', 'Zoo', 'Animal Rights'] |
Flexible journalism to get past the wall | For decades now, Arab governments have been building a massive wall to obstruct the establishment of independent journalistic platforms. The State’s ownership of the major print houses helped consolidate its absolute control over the publishing industry in Egypt. Even privately-owned newspapers have had to be printed at State-owned printing houses due to the high cost of establishing private printing houses and because of the difficulty in obtaining the special security permits required by the insurance and State authorities.
Despite these restrictions, some private newspapers managed to acquire the necessary permits and secured a large readership. However, another barrier remained in their way to success: advertisements. For dozens of years — and it is still the case today — private companies with close ties to the State have dominated the advertising sector in Egypt.
Private newspapers that displayed objectivity in covering current events were deprived of ads. Some newspapers were able to overcome this deprivation, up until the year 2013, which saw the State gain total control over the advertising sector, whether print or digital. The message was clear: If you want your share of ads for your paper, be friendly to the oppressive regime; otherwise, don’t cover news about the President and his government.
Shortly after the start of the Arab Spring, many independent media-related initiatives were launched with a common and constantly evolving aim: online journalism. Those initiatives were funded by donor institutions or businessmen. When those news outlets started reaching large audiences and having a tangible influence on a society with no access to objective journalistic sources covering politics and the economy, the State initiated a severe crackdown on them. Blocking Egyptian and other Arab news websites in Egypt proved to be the most successful way of curbing this new wave before it would spark more enthusiasm for political change in the hearts of people angered by daily oppression and poverty.
Three years have passed since the Egyptian government took a decision to block 21 independent news websites. When that happened, these websites shifted to social media platforms, including Facebook and Twitter, as their outlets. However, following the 2016 US election and the increasingly worrisome “fake news” phenomenon, Facebook and other social media platforms took a decision to change their algorithms such that video and entertainment content would become more favored and the reach of written news more limited.
In light of this complicated situation, and as part of a journey full of challenges and experimentation, I am trying to find answers and solutions to the crisis of media in the Arab world. I want to develop a model designed to get past the massive wall blocking the independent press from reaching Arab citizens. Here are my findings so far:
1. Networks, not platforms
According to an Arab proverb, the lighter your burden, the easier you can move. Establishing a new media platform requires a lot of funding, as well as building a wide network of journalists and managing a huge workflow to produce dozens, or even hundreds, of news pieces daily. All this can be lost in the blink of an eye if the government decides to block your website or storm your headquarters and shut them down.
To avoid these scenarios, I believe that building and developing a network of small journalistic initiatives is more suitable for countries going through political transition. Such initiatives would allow for a variety of news products, not all of which would be focused on political content. They would also help develop the journalistic skills of many reporters who have a lack of technical knowledge due to working for many years in huge news institutions.
In addition to that, a multitude and diversity of news outlet brands would give confidence to audiences and provide many opportunities to choose between different news products. The variety of content, such as media initiatives, comedy shows and chat bots, would help circumvent attacks by the government, which always aim to slander journalism brands by dubbing them traitors or foreign agents.
2. Comedy. A lot of comedy.
Bassem Youssef’s show “Al-Bernameg” shook the Arab media scene. Youssef always described his show as entertainment, but its news role was never a secret. Information and videos featured on the show were usually taken from news and media platforms. The show was discontinued due to the Egyptian State forcing TV networks to cancel it. Youssef then left Egypt to avoid arrest or at the minimum a travel ban.
Later, many other political comedy shows emerged, but they were also discontinued, either because of TV networks’ fear of facing severe consequences if these shows were too critical of the regime, or as a result of lack of funding.
Comedy plays a major role in revealing the fragility of dictatorships that cannot bear criticism. Investing in comedy shows that significantly make use news content can have a great future, especially in light of the lack of objective journalism in the Arab world, and of the counter-revolutionary forces’ control over most media platforms in the region.
Source: Media Use in the Middle East from Northwestern University
3. Remaking the news
Newscasts are still engaged in monologues. Who has thirty minutes in the morning to watch a newscast that could completely change by midday!? People are disillusioned with the classic 30-minute newscast involving short, anonymous headlines that might even be fake. How do they go back to following news with more credibility? By “remaking the news”.
AJ+ generated 2.2 billion Facebook video views in 2015
Let’s make daily news easy and accessible. A few years ago, short one or two-minute videos that briefly explain a single event or issue went viral. But how many events take place in the world every day? Dozens. What if we make one-minute videos to cover world news? What if we create this minute several times a day? There’s a chance to remake short newscasts that tether users to a certain time to follow the news. Scrolling down through many videos on social media platforms leads to a lot of high-quality videos going unnoticed. Tethering the remaking of the news to specific times every day could reinvent digital news.
4. Artists and the making of followers
Celebrity news has always been very popular, especially with teenagers and young adults. People like to follow actors, singers and models, and the involvement of such social influencers in political life makes their followers much more interested. What if those celebrities use their fame to help deliver the news to their fans and a larger number of people, especially teenagers and young adults?
Jennifer Lawrence is ‘Unbreaking America’s Political System Failure
Both the Arab spring and the recent US elections have seen involvement by celebrities in political events or awareness campaigns related to voting or combating corruption. We need to educate such influencers about how to take part in current events to reach wider audiences and to build more confidence in journalistic products and the news industry in general. Followers are usually loyal to those celebrities. Therefore, investing in building confidence in journalistic brands through constant contributions from influencers could have a great effect in bringing back society’s confidence in journalism.
5. Solutions journalism
The last eight years have brought about depressing news to hundreds of millions of Arab people. Disillusionment with the state of affairs and a state of frustration and despair have led many Arabs to stop following the news, particularly young people who have seen revolutions being defeated in Arab States in real time. Although many people are still interested in following the news, the news is often presented to them in a gloomy tone and entwined with words like depression, hopelessness and a dark future.
This is why it is so important to invest in “solutions journalism” that is hopeful. Such journalism should be developed by combining it with general policies people hope to see in their countries. Due to Arab governments constantly putting experts in the spotlight who say what the ruling regimes want to hear, many other analysts and experts have been left in the dark, especially in the fields of the economy, public health, housing and employment.
By producing a kind of journalism that helps citizens improve their professional lives, interact with their society and get involved in efforts to improve the general living conditions in the Arab world, journalism could regain its status in the life of Arab people. This time, it would be tied to finding a better life and being involved in building societies able to peacefully come together to create a better future. | https://medium.com/journalism-innovation/flexible-journalism-to-get-past-the-wall-8c5f82941a6d | ['Abdelrahman Mansour'] | 2019-04-05 02:25:28.340000+00:00 | ['News', 'Innovation', 'Journalism', 'Fake News', 'Politics'] |
Reflections
My Weekly Roundup
Photo by Matheus Bertelli from Pexels
I’m doing my weekly roundup again! All my posts from the previous week, together in one place for anyone who’s interested.
Here’s last week’s featuring poetry.
I appreciate all the kind feedback on these posts, and if it’s your first time reading them, I hope you’ll enjoy them too.
Magnificent is the Quran
Breathtaking words have come from above;
Full of guidance and love.
Like a rain that makes barren land alive;
Rain that helps you survive.
Written by the master of the worlds;
Giving a hope to fallible slaves.
Teaching you to become slaves of the merciful;
Freeing you from the slavery of mortals.
Guiding you to become thankful;
Making your life wonderful.
Words that bring the dawn;
Magnificent is the Quran.
How To Use the Superstructure Method to Boost Your Productivity
Image courtesy of Pixabay
Over the years, I’ve noticed that when colleagues have low productivity and low achievement, they also have high stress levels.
This tells me that although they want to get as much work done as possible, their efforts are actually reaping small rewards.
For example, I remember a colleague of mine who always appeared to be busy. He never liked to be disturbed, and never had time for any small talk. You would think that he was a productivity machine. But you’d be wrong!
In reality, he was busy but he struggled to get much done, and often missed his deadlines. He lacked the ability to work efficiently and productively.
As I often mention in my articles, learning how to work productively is something that should be taught in all schools and colleges. Sadly, it’s taught in very few.
However, it’s never too late to learn how to boost your productivity. And if you can spare a few minutes now, I’ll introduce you to the Superstructure Method — which will help you to save hours every week.
Taking Back Control of Your Time
The Superstructure Method is a tried-and-tested technique for organizing your work and boosting your productivity.
If you’re fighting a losing battle when it comes to mastering your day-to-day tasks, then this method will put you back in control of your time and help you accomplish whatever you want.
Before I show you the 4 simple steps that make up the Superstructure Method, it’s important for you to know that every task contains 3 components:
Intention: Why you are doing it
Value: What benefits this task brings you
Cost: What you have to give up or invest to achieve the value (your resources and time, etc.)
To pinpoint the best tasks to focus on — as well as how much time to spend on them — you’ll need to assess their value to you and your work.
This is exactly what the Superstructure Method will enable you to do.
Let’s dive straight in and take a look…
Boosting Your Productivity in 4 Simple Steps
Imagine being able to get your daily tasks done in half the time it normally takes you.
This might seem pie-in-the-sky, but stay with me, and I’ll show you how to make this a reality in your life.
It all has to do with the four steps that make up the Superstructure Method.
Step 1: Start with a clear intention.
The first step is to spend some time looking at all the tasks you have on hand and think about why you need to do these tasks.
I suggest you mentally ask yourself the following two questions about each task: “What benefit am I getting out of this task?” and “Will this action help me make progress towards my goal or my company’s goal this week?”
If you’re looking at all your tasks for the year ahead, then this step will probably take you a few hours. But if you just want to look at your day ahead, it should just take you a few minutes.
Step 2: Decide the task’s value.
The next step is — based on your short and long-term goals — to sort the list of tasks into one of these 3 categories:
Must Haves : Critical to achieving your objective. Without the successful completion of these tasks, your goals will remain dreams.
: Critical to achieving your objective. Without the successful completion of these tasks, your goals will remain dreams. Should Haves : Important but not critical. However, if you leave these tasks out you may lessen the impact of your goals.
: Important but not critical. However, if you leave these tasks out you may lessen the impact of your goals. Good to Haves: These tasks are nice to have, but not including them won’t have any negative impact on your goals.
Let me give you an example of these in action.
It’s early Monday morning, and you’ve just logged into your computer to check your work emails and your calendar.
You notice immediately that you have several important tasks to complete, as well as 2 essential meetings that you need to attend. In addition to this, you have multiple tasks that you’d like to get done and a couple of meetings that you’d be keen to attend.
Spend a few minutes putting your tasks and meetings into the Must Haves, Should Haves, and Good to Haves categories.
In the Must Haves category, you’d definitely want to include your important tasks and essential meetings. In the Should Haves category, put in the tasks and meetings that will help you and your organization achieve your goals. In the Good to Haves category, list anything else that is left over. (This might include meetings that you’d like to attend, but don’t really need to.)
Once you’ve listed out your tasks in this way, the next thing to do is to quantify them so you can rank them using numbers.
Let me explain what I mean…
Start by assigning a number value to each of your tasks. The higher the number, the more important/urgent/valuable it is.
Instead of a linear scale like 1 to 10, I prefer to use the Fibonacci sequence (1, 2, 3, 5, 8, 13, 21). Because the numbers go up in increasingly larger intervals, it’s much easier to visualize the difference between them.
For example:
Must Haves:
Write project document (21)
Create PowerPoint presentation (13)
Attend management meeting (8)
Should Have:
Respond to project management emails (5)
Good to Have:
Attend meeting with colleagues to discuss Christmas party arrangements (1)
Step 3: Evaluate the task’s cost and prioritize.
After determining the priority of your tasks, the next thing you should do is to look at each task’s cost — the time cost.
Some tasks are difficult. Some require external help. Some require extreme focus. This is usually reflected in the time required to complete the task.
At this stage of your tasks evaluation, you only need to make rough estimates. Personally, I like to split cost estimates into half-hour intervals: 0.5, 1, 1.5, 2, 2.5, 3.
Why have I stopped that list at 3? That’s because I don’t suggest that you have any tasks that last longer than 3 hours. If you’ve calculated that a task will take longer than 3 hours, then it’s too big and you should break it down into smaller tasks — which will also bring the advantage of making it feel less daunting.
Once you’ve done this evaluation, you’ve now quantified the value and time cost of your tasks. All that’s left is for you to calculate the final score of each task and then to order them into priority, from highest to the lowest.
How to calculate the final score?
You just need to divide the task’s value by its time cost.
Take a look at the worked example below to see exactly how this works:
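To make the arithmetic concrete, here’s a small Python sketch of the ranking (the hour estimates are invented for illustration; only the value numbers come from the lists above):

```python
# Rank tasks by final score = value / time cost (higher score = do it sooner).
# Values use the Fibonacci-style scale above; the hour estimates are made up.
tasks = [
    ("Write project document", 21, 3.0),
    ("Create PowerPoint presentation", 13, 2.0),
    ("Attend management meeting", 8, 1.5),
    ("Respond to project management emails", 5, 0.5),
    ("Christmas party planning meeting", 1, 1.0),
]

for name, value, cost in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
    print(f"{name}: {value} / {cost}h = {value / cost:.1f}")
```

Notice how the quick email task (5 / 0.5h = 10.0) outranks the big project document (21 / 3h = 7.0): a high-value task can still drop down the list if it eats too much time.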
Step 4: Schedule the tasks.
If you’ve followed all the steps above, you’ll now know the priority of your tasks and approximately how much time you’ll be spending on each task. What’s next? It’s time to put all these things into action!
Fortunately, it’s as easy as scheduling these tasks on your daily/weekly/monthly planner.
“But which day and at what time should I tackle my tasks?” I hear you ask.
Well, if you’ve read my advice on categorizing your tasks into Must Haves, Should Haves and Good to Haves, you’ll know that your high-priority tasks should be tackled first. When you start doing this, you’ll no longer be overwhelmed with loads of tasks. And you’ll always have an organized plan that allows you to master your time and to consistently finish your critical and important tasks on or ahead of schedule.
From my experience of following the Superstructure Method, I noticed that after a few weeks I’d begun to create a solid routine for some recurring tasks such as replying to emails or having regular team discussions. And the good news about this is that routines save your time and energy, as you don’t have to think about them every time.
Oh, and one more thing…
Whenever you think of a new task, or if there are low-priority tasks that can’t fit in your schedule, I suggest you write them down in your task backlog file.
This file, which can be a paper notebook or a task scheduling app, is the place to record all your tasks. It’s a way for you to see at a glance what tasks you have completed — and what tasks are still to come. It will also make it easy to prioritize your tasks.
Succeeding in Life
When we live an unorganized life, we can expect unorganized and disappointing results.
For example, if you were due to sit a college exam, but had done zero studying in advance, you’d probably fail. But if you’d attended all the classes, did your research and studied enthusiastically for your exam — you’d be likely to pass with flying colors.
Disorganized people waste their time and energy; organized people profit from their time and energy. A recent Harvard Business Review study backs up this statement.
I really hope that you put the Superstructure Method into action in your life. If you do, I think you’ll be amazed by the results. You’ll have direction. You’ll have achievement. You’ll have success!
From the Blackfeet Reservation to Great Falls, Stephen Graham Jones’s The Only Good Indians Will Scare You Good
The Only Good Indians, by Stephen Graham Jones. Saga/Simon & Schuster, July 2020.
The inimitable and prolific Blackfeet master of horror is back with a killer of a book. The Only Good Indians (out with Saga/Simon & Schuster on July 14, 2020) will take you up to the Blackfeet reservation and down to Great Falls, and along the way, it’ll scare you good. Four men are being haunted — and hunted — by a vengeful spirit who won’t rest until she takes them down, one by one.
Lewis is your average rez dude turned sort-of-suburbanite, with his super-athletic, super-cool white wife Peta, his job at the Post Office, and his bumbling-yet-charming crush on the Native girl who works with him. But one day, while climbing the stepladder to fix a light, in between the blades of the fan, he sees an animal he killed back home — in a way he still knows was wrong. So begins his descent into a world of paranoia that brings out all of his fears and insecurities about the choices he’s made to get to this point in his life. Lewis loves his wife, but wonders what it means as a Native man to have chosen to be with a white wife — one who doesn’t want children. He also can’t help but wonder if it was the right move for him to even leave the rez — and the spirit that haunts him pushes on these insecurities until they become full manifestations, with gory consequences. What follows are the stories of the friends he hunted with years ago — most of whom stayed, some of whom made bad choices — and the fight for their lives that all of them, and their children, must now face.
Subtly funny, and trigger warning: definitely gory, this novel is engaging — and as per Jones’ usual, beautiful and strange. In many ways, this novel moves Native literature in a direction only Jones knows how, as it allows for the kind of writing and world-building that moves beyond the traditional, and into the new, without losing its Native roots.
Jones, a winner of the Texas Institute of Letters Award, a recipient of the National Endowment for the Arts in fiction, and winner of the Bram Stoker Award for long fiction, has given us a story that will stay long in our minds — even if in nightmare form.
The ‘UI and UX Design’ Pocket Guide (Volume One) 📘
1. Why Creativity is more important than the Design Tool 🛠
“From the Darkside, to the light… and relax. Just don’t mention The Last Jedi.”
In a recent article I read, they touched upon tools for Wireframing, Prototyping, and those tried and trusted UI Design tools that we eat, sleep and breathe (you know the ones). Oh boy, there’s quite a few to choose from, and they cook up all sorts of debates in #DesignTwitter and beyond!
Now, I won’t be touching on the Wireframing and Prototyping tools portion of the article I mentioned, but will concentrate on the UI Design tools that are mentioned briefly in the article, and why you need to just choose the one that suits you and roll with it. There’s no magic tool to suit all types of designers, and you need to remember that, however many times you may be bashed over the head by advocates of a certain design tool.
Photoshop is what many designers still turn to, and what many, should we say, time-served designers still use on a regular basis, even with the introduction of Adobe XD in recent-ish times. And this can be for all manner of reasons. Something like Sketch not being available to Windows users, for a start. The unity between applications inside of the Adobe Creative Suite. Stronger design tools for working with text and imagery. The list goes on.
The same goes for users of Sketch, for example. They may have been looking for something more suited to UI Design today, an ability to move more swiftly through a project, or they just hate Adobe’s walled garden. Who knows. But like I’ve always said, the skills maketh the designer, not the tools.
Every single one of the design tools available to us, either desktop or cloud based, is just that, a tool. If a designer has the will and determination to be the very best they can be, then they will become that with whatever tool they decide to go with. Certain tools may place restrictions on how far you can push your creativity, but they’ll never completely stifle it.
Great, no, amazing work can be produced in any design tool that you decide to go with if you’re willing to push yourself as far as you can go creatively.
EXCLUSIVE: This Is How the U.S. Military’s Massive Facial Recognition System Works
Over the last 15 years, the United States military has developed a new addition to its arsenal. The weapon is deployed around the world, largely invisible, and grows more powerful by the day.
That weapon is a vast database, packed with millions of images of faces, irises, fingerprints, and DNA data — a biometric dragnet of anyone who has come in contact with the U.S. military abroad. The 7.4 million identities in the database range from suspected terrorists in active military zones to allied soldiers training with U.S. forces.
“Denying our adversaries anonymity allows us to focus our lethality. It’s like ripping the camouflage netting off the enemy ammunition dump,” wrote Glenn Krizay, director of the Defense Forensics and Biometrics Agency, in notes obtained by OneZero. The Defense Forensics and Biometrics Agency (DFBA) is tasked with overseeing the database, known officially as the Automated Biometric Information System (ABIS).
DFBA and its ABIS database have received little scrutiny or press given the central role they play in the U.S. military’s intelligence operations. But a newly obtained presentation and notes written by the DFBA’s director, Krizay, reveal how the organization functions and how biometric identification has been used to identify non-U.S. citizens on the battlefield thousands of times in the first half of 2019 alone. ABIS also allows military branches to flag individuals of interest, putting them on a so-called “Biometrically Enabled Watch List” (BEWL). Once flagged, these individuals can be identified through surveillance systems on battlefields, near borders around the world, and on military bases.
“It allows us to decide and act with greater focus, and if needed, lethality.”
The presentation also sheds light on how military, state, and local law enforcement biometrics systems are linked. According to Krizay’s presentation, ABIS is connected to the FBI’s biometric database, which is in turn connected to databases used by state and local law enforcement. Ultimately, that means that the U.S. military can readily search against biometric data of U.S. citizens and cataloged non-citizens. The DFBA is also currently working to connect its data to the Department of Homeland Security’s biometric database. The network will ultimately amount to a global surveillance system. In his notes, Krizay outlines a potential scenario in which data from a suspect in Detroit would be run against data collected from “some mountaintop in Asia.”
The documents, which are embedded in full below, were obtained through a Freedom of Information Act request. These documents were presented earlier this year at a closed-door defense biometrics conference known as the Identity Management Symposium.
ABIS is the result of a massive investment into biometrics by the U.S. military. According to federal procurement records analyzed by OneZero, the U.S. military has invested more than $345 million in biometric database technology in the last 10 years. Leidos, a defense contractor that primarily focuses on information technology, currently manages the database in question. Ideal Innovations Incorporated operates a subsection of the database designed to manage activity in Afghanistan, according to documents obtained by OneZero through a separate FOIA request.
These contracts, combined with revelations surrounding the military’s massive biometric database initiatives, paint an alarming picture: A large and quickly growing network of surveillance systems operated by the U.S. military and present anywhere the U.S. has deployed troops, vacuuming up biometric data on millions of unsuspecting individuals.
Diligence at Social Capital Part 3: Cohorts and (revenue) LTV
[Note from the author: See an update to the thinking presented in these articles here. I no longer work at Social Capital; if you want to reach me, you can email me at [email protected]]
We’ve gotten a ton of great feedback on these posts and are really excited that it’s providing a useful and valuable set of tools with which to understand your business. Feel free to email me at [email protected] if you have questions. I can’t guarantee that I’ll respond, but I’ll certainly try!
In the first two parts of this series we described how growth accounting can be applied to understand the sub-components of both user growth and recurring revenue growth. We mentioned that growth accounting has a shortcoming in that it doesn’t give us a sense of the lifecycle of a customer. In particular it doesn’t help answer questions such as: “Do customers spend more early vs. later in life?” Or perhaps “Does churn occur abruptly at some point or does it steadily occur through the life of a customer?”
To get started, let’s pretend we have a business that sells something to customers. The following description applies regardless of whether the revenue is subscription or transaction based. We are interested in the lifetime value of customers in our business, i.e. the cumulative revenue realized per customer.
Most descriptions of lifetime value (LTV) use a model which ends up with a formula based on a combination of contribution margin (m), retention rate (r) and discount rate (d) which encapsulates the infinite time horizon LTV (e.g. the wikipedia article).
LTV = m * r / (1 + d - r)
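For intuition, plug in some made-up numbers: with a monthly contribution margin m = $50, retention r = 0.9 and discount rate d = 0.01, the formula gives LTV = 50 × 0.9 / (1 + 0.01 − 0.9) ≈ $409.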
This model turns out to be of limited usefulness for understanding early stage companies because of the following assumptions that are built into it’s derivation:
Retention is constant both across cohorts and, perhaps more importantly, throughout the lifetime of a customer (i.e. it assumes that if you have a probability r of being retained from month 1 to 2 then you also have the same probability r of being retained from month 20–21)
both across cohorts and, perhaps more importantly, throughout the lifetime of a customer (i.e. it assumes that if you have a probability r of being retained from month 1 to 2 then you also have a the same probability r of being retained from month 20–21) Constant unit economics throughout cohorts and customer lifetime which leads to a constant contribution margin.
throughout cohorts and customer lifetime which leads to a constant contribution margin. It assumes that these quantities are sufficiently constant over long enough time spans such that including the discount rate is sensible.
When we look at a company, it’s usually the case that none of these assumptions hold. Early stage companies have only a few month- or week-sized cohorts that usually vary significantly in retention as the underlying product is changing across cohorts. Early stage companies also usually haven’t settled on unit economics. These uncertainties are large enough that it’s not sensible to forecast so far into the future where discounting would matter.
We strongly prefer to look at empirically realized cohort LTV as opposed to imputed LTV based on a formula.
Getting back to our fictional revenue generating company, let’s look at some sample LTV curves.
Sample LTV curves
The parenthetical number is the size of the cohort in week zero. Note that younger cohorts appear as shorter lines because we don’t know much about their LTV yet. Also note that there is inherent ambiguity in a quantity such as “9 month (36 week) LTV”. In this data set it’s anywhere between $160-$280, which is quite a large range. The older cohorts appear to have been higher and the more recent ones lower. Also note that the 2014–03–24 cohort is unusually strong, possibly due to some unusually large customers in that cohort who are spending a lot. Overall, these LTV curves are linear which is to say that customers are consistently spending as they age. If customers were not getting value out of the product they would presumably slow their spend leading to a flattening of the LTV curve. Of course, any single LTV curve is always increasing. Also note that we decided to show only a few cohorts as showing them all would make the graph unreadable. We also indexed against the week of first spend rather than registration because different products encourage/require registration at different times relative to convincing customers to spend actual money.
Something to watch out for: the LTV for a given cohort at age T is computed as the total revenue realized by that cohort up to T divided by the total number of customers in the cohort, including customers who may have churned out. Sometimes we see companies exclude churned customers and compute LTV at T based only on customers who are still paying at time T. This is not the way to go, because you had to pay to acquire all those customers up-front (via marketing, etc.) and that expenditure doesn’t disappear when the customers churn out.
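For concreteness, here’s a minimal pandas sketch of this computation (the transaction-log schema and column names are my assumptions, not from any particular dataset):

```python
import pandas as pd

# Assumed schema: one row per transaction, with columns
# customer_id, week (integer week index) and revenue.
tx = pd.read_csv("transactions.csv")

# A customer's cohort is their week of first spend.
first_spend = tx.groupby("customer_id")["week"].min().rename("cohort")
tx = tx.join(first_spend, on="customer_id")
tx["age"] = tx["week"] - tx["cohort"]

# Denominator: the FULL cohort size, churned customers included.
cohort_size = first_spend.value_counts()

revenue = tx.groupby(["cohort", "age"])["revenue"].sum()
ltv = revenue.groupby(level="cohort").cumsum().unstack("cohort") / cohort_size
```

The key detail is the denominator: it stays fixed at the original cohort size, so customers who stop paying keep dragging the per-customer average down, exactly as they should.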
Also note that we typically only look at the top-line revenue LTV per cohort and leave the contribution margin/unit economics discussion to a separate conversation. There’s the question of how customers are reacting to your proposed offering, which is separate from your ability to deliver that offering with reasonable unit profitability. For the purposes of this discussion, we are only seeking to understand the first part of this question.
There are four types of behavior that any cohort can exhibit:
Flat LTV : The cohort spent once up-front and never spent again. The cohort is generating no further incremental revenue. Not necessarily bad if the flat LTV is at a value high enough to be very profitable. For example, eBay Motors probably exhibits this behavior.
: The cohort spent once up-front and never spent again. The cohort is generating no further incremental revenue. Not necessarily bad if the flat LTV is at a value high enough to be very profitable. For example, eBay Motors probably exhibits this behavior. Sub-linear LTV : The cohort continues to spend as time goes on although the spend decreases over time. Such cohorts approach Flat LTV after some time. Most businesses are in this category. Customers spend initially and then spend less and less as time goes on.
: The cohort continues to spend as time goes on although the spend decreases over time. Such cohorts approach Flat LTV after some time. Most businesses are in this category. Customers spend initially and then spend less and less as time goes on. Linear LTV : The cohort consistently spends the same amount per user in the cohort. This is probably what Spotify looks like. There is likely some fall-off in the first month but after that their cohorts probably evolve to linear or just under sub-linear assuming that the core Spotify user doesn’t intend to ever cancel their subscription. Truly linear LTV would be something like your relationship with a utility such as PG&E which is extremely high retention. Also note that there are different classes of linear LTV growth. A business can be linear with a large positive slope or linear with a smaller positive slope. If a business has a core of recurring customers in each cohort that continue to spend indefinitely then the LTV will be linear. The magnitude of the slope will be determined by how many non-core customers are in each cohort diluting the LTV of those core customers.
: The cohort consistently spends the same amount per user in the cohort. This is probably what Spotify looks like. There is likely some fall-off in the first month but after that their cohorts probably evolve to linear or just under sub-linear assuming that the core Spotify user doesn’t intend to ever cancel their subscription. Truly linear LTV would be something like your relationship with a utility such as PG&E which is extremely high retention. Also note that there are different classes of linear LTV growth. A business can be linear with a large positive slope or linear with a smaller positive slope. If a business has a core of recurring customers in each cohort that continue to spend indefinitely then the LTV will be linear. The magnitude of the slope will be determined by how many non-core customers are in each cohort diluting the LTV of those core customers. Super-linear LTV: Customers spend more as they age. For example, this is almost surely what Amazon sees. In your first month on Amazon you spend some money and in later months you spend much more. For another example, consider Slack. Each paying customer is a company that pays for some number of seats. In successive months a customer may purchase more seats as more people adopt the service in the company. These cases are the most exciting. They suggest the possibility of almost limitless LTV per customer.
Needless to say, what we really want to see are businesses that exhibit some strong evidence of linear to super-linear LTV in at least some of the cohorts of customer.
The visualization above of the LTV curves is good and easy to understand but it doesn’t give a good sense of whether the LTVs are getting better or worse. If all the LTV curves looked the same and everything was stable then we’d just use the formula above to compute full LTVs. However, we’re usually looking at companies that are fluctuating a lot in their early days so we’re really interested in the trends of LTV. Here’s how we prefer to look at LTVs to get a sense of those trends.
Sample LTV Trends
This image is a bit tricky at first so we’ll describe it in detail. In this image, the x-axis is the calendar week of the cohort. The bars in the background are the sizes of the cohorts. They’re included for reference purposes. The lines show successive LTV points as the cohort ages. For instance, the 2014–11–03 cohort (arrow 1 in the above figure) had about 350 customers and spent an average of $44 per customer in the initial week. After a month, the 4-week LTV for that cohort was a bit higher at $55 (on the green line). This cohort just passed 6 months (24 weeks) of age and thus the 24 week LTV for this cohort was just determined at $125. We do not yet know the 36 week LTV for this cohort because the cohort is not yet 36 weeks old and the above image does not attempt to extrapolate it for us (recall, that’s essentially the problem with the formula based approach which mixes in extrapolation with actual observation). Note that the lines in this figure can never cross each other because for every cohort the N+1 week LTV is greater or equal to the N week LTV. Also note that any given line is made up of different customers on successive data points. The N week LTV line measures this quantity for each successive cohort which are made up of distinct customers.
This visualization is good because it shows us trends in the LTV. For instance, you can see that the 12 week LTV started trending down a bit as the cohorts got larger starting in late 2014 (arrow 2). The earliest cohorts in early 2014 had very high LTVs (arrow 3). This is pretty common as early adopters are usually more inclined to use a product. The 2014–03–24 cohort that appeared oddly in the original LTV graph appears here as a clear spike (arrow 4). Note that the spike didn’t occur until sometime between 24 and 36 weeks in the lifetime of that cohort (arrow 5). That is also apparent in the original LTV figure where that cohort started growing strongly at around 28 weeks.
In terms of what we want to see when we look at the LTV trends of a company:
We want to see increasing LTV both for later larger cohorts and later in the customer lifecycle.
As the business attracts larger cohorts of customers it’s often the case that LTV degrades because the larger cohorts are made up of later adopting customers who are less inclined to use the product. If the product has truly great product-market fit then the later larger cohorts will monetize at even higher rates both as the cohorts get larger and as the cohorts age.
Another view that is sometimes useful is the heatmap view.
Truncated heatmap view of LTV
For this figure, the cohort week of first spend is on the y-axis. The x-axis along the top is weeks since first spend and the color is the cumulative LTV of that cohort. So the topmost row is the 2013–12–30 cohort which has 110 customers and whose LTV goes up as one goes to the right. The bars on the left are the sizes of each successive cohort. More recent cohorts have not yet revealed their LTV. As time passes diagonal lines are added to this figure. This is showing only the first few cohorts as if time stopped in March 2014 for readability. The full heatmap for all the cohorts looks like this:
Heatmap view of LTV corresponding to LTV trends above
Note that you can see the outlier 2014–03–24 cohort here as the green horizontal stripe. Fixed calendar phenomena (such as holidays or sales) manifest themselves as diagonal features in this image. Phenomena that affect fixed cohorts (such as customers gained via a burst of paid acquisition expenditure) appear as horizontal features. If there was a significant drop in, say, N-week retention, it would make itself apparent as a change in the color where cohorts would be taking longer to reach the color yellow for instance.
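If you want to reproduce this kind of view, a couple of lines on top of the ltv frame from the earlier sketch will do (again, just a sketch):

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Rows = cohort weeks, columns = weeks since first spend, color = cumulative LTV.
sns.heatmap(ltv.T, cmap="viridis", cbar_kws={"label": "cumulative LTV ($)"})
plt.xlabel("weeks since first spend")
plt.ylabel("cohort week of first spend")
plt.show()
```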
Each of the above visualizations has a strength and a weakness. The LTV curves give you a good sense of the shape of the curves but it starts becoming unreadable after a small handful of cohorts and it doesn’t show trends well. The LTV trends gives a good sense of the trend and shows all cohorts, but only gives a hint of the shape of the LTV curve via the spacing between the lines for each cohort (it’s essentially a contour plot). Also, the LTV trends doesn’t show the LTV at all points in the cohort age but rather at selected age milestones. The heatmap allows you to see all the cohorts at all points in their lifecycle, but does not give a good sense of how the values are increasing as it’s encoded in the color change. The heatmap also does a good job of showing seasonal effects. When we do diligence we typically prefer to see the curves themselves and the LTV trends and only occasionally use the heatmap view. If you’re coming to pitch your company to us you’d do well to make these graphs ahead of time.
So that’s the story for revenue lifetime value. Next time we’ll take this framework and use it to understand cohort level engagement and retention.
Edit: For reference, here’s the full table of contents.
[Note from the author: See an update to the thinking presented in these articles here. I no longer work at Social Capital; if you want to reach me, you can email me at [email protected]]
China’s Social Credit Score is a Wrathful God
Believe it or not, many self-identified Christians have their faith because they love God/the idea of God/creation itself/the data stream governing the functions and actions of the perceived and unseen Universe. Naturally, others have faith (or try to have it) out of a fear of hell or fear of not getting to heaven. In other words, the faithful do what they do because they either love the “big, bearded man in the sky” or they’re afraid of him and the idea of Him.
China’s social credit score system is a Chinese version of the big, bearded man in the sky, except that it’s a massive Artificial Intelligence in the cloud connected to Big Data and biometric databases with an established, proven capability for profound personal, social, and financial repercussions if a citizen breaks whatever rules and guidelines exist at that moment. Currently, journalists with connections to dissidents can be prevented from flying commercial or renting an apartment, along with numerous other punishments for their “antisocial” associations. Being the mother of an activist son lowers the mother’s social credit score, and being the daughter of a father convicted of fraud lowers the daughter’s social credit score. And let’s not speak of the repercussions of associating with homosexuals or the Muslim minority in China …
The Chinese version of the Sistine Chapel’s “bearded man in the sky” giving life to Adam also points the finger, both for the creation of order in a social hierarchy and to shame men and women publicly for their sins against the state and the state’s perception of order. Large digital billboards connected to face-and-stride recognition cameras in public places single out the individuals in crowds with low social credit scores, and that garners them jeers, taunts, and the physical radius of an untouchable. The “social credit score god” of China is a punishing one, a demonstrable god that knows and remembers everyone’s sins, and most in China obey it, not out of love, but out of fear and a logical protection of their own score at the expense of compassion.
Nothing is new under the sun. Christian faith in Europe and beyond was once (and often still is) based largely on fear. A cheating man huddling in a dark corner of a church silently making deals with God to forgive him his lustful transgressions is no uncommon sight to the leadership of cathedrals and churches throughout time. Terrified men have stumbled and quivered their way toward inevitable death during religious battles and wars across the centuries, not out of a love for God, but out of fear that their faults would send them to one of hell’s nine circles. Fear itself, while being a motivator to do less bad, is not a motivator to do well at doing good, and certainly not with the confidence of the assertively virtuous.
My friend on Chaffee County, Colorado’s search and rescue team recently helped save a former astronaut who had fallen down several hundred feet of an icy slope, breaking bones and needing an evac. She regularly helps rescue drowning kids and floundering drunks from the cold headwaters of the Arkansas river. Lost hikers, bicyclists with altitude sickness, and fishermen having heart attacks all need help, and she does it gracefully and well, not because she is Christian (she isn’t), but because her selfless nature compels her to help anyone in need, from astronauts to those gross meth people while she’s on call (and even when she isn’t but happens to be around).
Now imagine her being in China and not saving a drowning mother of five with a low social credit score because of her children’s multiple fathers. Interacting with that woman, spending time performing rescue breathing on her, and calling for additional help might lower my friend’s credit score. Because of her compassionate nature, I know she would still do the right thing, and I would be the weak one to cut her off as a friend because I wouldn’t want to lower my own “Trust Score” (Facebook’s and Apple’s current term for a new, different, and distinctly American version of China’s social credit score).
China wants to keep their families together (the fundamental time-tested force of social stability), so they’ve imposed an incredibly intelligent, wrathful, technological god on their 1.3 billion people that can undoubtedly identify the patterns, purchases, and locations of a cheating husband and lower his score. That god can record and downgrade the score of a child stealing gum and never forget it, even after his death far in the future. That god can punish the excesses of wives spending money frivolously to the detriment of their family’s financial well-being. That god is a scary and awesome power.
While it is certain that all in China believe in the evidence-based existence of China’s social credit score god, it’s doubtful that any Chinese people will ever do what that massive, interwoven series of surveillance programs and behavioral algorithms requires because of a love for it. It will never compel a Chinese individual to act out of a love for their neighbors. And mostly, that technological god of punishment and vengeance will never practically evoke a love for compassionate action towards those of a lesser caste.
The future must remember that people once existed who did good deeds for sinners, the sick, and the dying, no matter who they were, and they … we … performed those acts of kindness not out of a “supernatural” love, but out of a supremely natural love and compassion for the living — not artificial — creations of the world.
How to Stay Awake For Date Night
Try this for 10 minutes, so you can Netflix and Chill.
Photo by SeventyFour
Sleeping beauty
There were a few fairly awkward points during my early 20s when I found myself without anything to do on a Friday evening. On those occasions, I would sometimes tag along with my cousin Conrad and his girlfriend to see whatever movie was opening that night. Yes, I was the ‘third wheel’ on their date, but I never felt the least bit guilty.
Why?
Well, when the lights went down in the theater, and the movie started playing, I could guarantee that the nice young lady (who Conrad eventually married) would fall into a deep unwakeable sleep.
Every. Single. Time.
Clearly, the demands of the week had taken a toll on her, and she utilized the darkness of the theatre to catch up on some much-needed shut-eye.
But as the prices of movies started to rise over the years, I often lamented the financial waste that was occurring each time he paid for her movie ticket. He was essentially paying double the price for a movie that only one of them was actually seeing.
The whole thing seemed like a travesty to me, but it didn’t appear to bother my cousin. As she quietly snored in the seat next to him, the two of us would have a great time whispering about the movie, making jokes, and sharing popcorn.
Nonetheless, I couldn’t help wondering if there was anything she could do to stay awake for their date.
Introduction To PYCARET For Your First Data Science Project | Beginners Guide
In this introduction to PyCaret, you will learn how to automate your data science workflow with PyCaret, an automated machine learning library for data science projects. PyCaret makes work easier by automating your exploratory data analysis (EDA) process and giving you results in minutes.
What is PyCaret?
PyCaret is an open-source, low-code, end-to-end machine learning library in Python. Its primary objective is to reduce the cycle time from hypothesis to insights and make data scientists more productive in their experiments. It does this by providing a high-level API that is sophisticated yet easy to use for data scientists and analysts who seek to perform iterative, end-to-end data science experiments in a very efficient way. Through the use of PyCaret, the amount of time spent on coding experiments drops drastically.
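To make “low code” concrete, here is roughly what a first experiment looks like with PyCaret’s classification module (a minimal sketch in the style of the 2.x API; the ‘juice’ dataset ships with the library):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, compare_models, predict_model

# Load a sample dataset bundled with PyCaret.
data = get_data("juice")

# One call handles preprocessing: imputation, encoding, train/test split, etc.
clf = setup(data=data, target="Purchase", session_id=123)

# Train and cross-validate a whole zoo of models, ranked on a leaderboard.
best = compare_models()

# Score the hold-out set with the top model.
predictions = predict_model(best)
```

Three function calls replace what would otherwise be dozens of lines of scikit-learn boilerplate.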
Who should use PyCaret?
PyCaret is a free and open-source library that is easy to install and can be set up either locally or on any cloud service within minutes. The licensing agreement also allows for the commercial use of the software. While there is no limitation of use, the ideal target audience is as follows:
Citizen data scientists and analysts who want to easily implement end-to-end data science projects in a low-code environment.
Data scientists who want to increase the productivity and efficiency of their experiments.
Data science students and analytics practitioners with no prior background in coding.
Small to midsize companies looking to implement data science projects without committing significant amounts of resources.
To help you make the best of this tutorial, I have also put it in the form of a video for better understanding.
Watch and practice along
Crown Development Update 21.07.20
NFT resync functionality coming along. Trezor testers wanted. Crown Bounty program launched. Website drafts evolving.
Keep calm and buidl
Ashot
has been working on the nftoken database resync functionality. The release candidate should be ready for public testing by the end of the week. A non-mandatory update will follow shortly. This new version will resync the NFT database to avoid tx-nft db conflicts that led to client instability for some users.
New Crown-Electrum builds
are available for testing. You can get all the information in the Discord channel #bitcore-electrum-testing.
A Crown bounty program
has been launched by several contributors. Read all the details here and engage in open issues to receive rewards!
Bitcore
is progressing slowly; there are no relevant updates on this front.
Website
development for the new community presence has started after intensive feedback rounds on design and structure. You can have a look at the designs and discussions in Discord.
Stay tuned for more development updates and general news.
How Netflixonomics and Fashionomics are Converging.
Most would agree that Netflix has changed the way we watch television, but that would be only half the story. What Netflix has changed in monumental proportions is the economics of entertainment. Netflix represents a class of digitally native companies that have built a deeply personal relationship with the consumer and continue to improve that affinity with every byte of data they collect. Amazon, Spotify, and Google are other examples of such companies.
Back to the economics of entertainment — let’s look at the two key aspects of the value chain — Production (producers, studios) and Distribution (networks, theatres, and streaming services such as Netflix). While Netflix started purely as a distribution company (distributing content that others produced to end consumer through mail and streaming), it has very rapidly evolved into a production powerhouse with an estimated spend of $12–13B in 2018–19, expected to grow to $22.5B by 2022. To give you some perspective, this number is just shy of the total currently spent on entertainment by all of America’s networks and cable companies. Take a moment and let that sink in.
Netflix expanding to capture a larger share of the value chain.
But that’s not the most remarkable part of the story; what is remarkable is that it can produce and distribute content more profitably than any of its peers. If you asked why, then I applaud your curiosity — in simplistic terms, Netflix understands the calculus of whether a show or film is worth making better than any other player.
Here’s how: Netflix has created some 2,000 “taste clusters” by watching its watchers. Analysis of how well a show will reach, attract and retain customers in specific clusters lets Netflix calculate what sort of acquisition cost is justified for such a show. It can thus target quite precise niches, rather than the broad demographic groups that broadcast television depends on.
Example of Taste Clusters — Source: https://bit.ly/2Ta2UXf
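To make that calculus concrete, here is a purely illustrative back-of-the-envelope sketch (every number below is invented; Netflix’s actual model is certainly far richer):

```python
# For each taste cluster: members, probability they watch the show, and the
# lift in the probability they stay subscribed because of it (all invented).
clusters = {
    "teen-romcom": (4_000_000, 0.60, 0.05),
    "true-crime":  (2_500_000, 0.10, 0.01),
}
monthly_fee, horizon_months = 12.0, 12

ceiling = sum(n * p_watch * retention_lift * monthly_fee * horizon_months
              for n, p_watch, retention_lift in clusters.values())
print(f"Justified acquisition cost ceiling: ${ceiling:,.0f}")  # ~$17.6M
```

If the projected retention value across the relevant clusters clears the show’s price tag, the show is worth making for those niches, however small they look in aggregate.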
With quantitative understanding and personalized marketing, Netflix has managed to revive canceled shows with loyal fan bases, such as “Gilmore Girls”, and take up shows others turned down, such as “The Unbreakable Kimmy Schmidt”. Documentaries such as “Wild Wild Country” have become hot not just by word of mouth, but by being pushed on the home screen poster by individualized poster. The Economist states that “Netflix can take risks on such projects because failure costs it less than it does others. It lets the company get better results for a lesser-quality show than its peers can by showing it only to those who will like it.”
Another great example is “The Kissing Booth”, a romantic high-school comedy released in 2018. Critics hated it. But it has been seen by more than 20m households; millions of teenagers targeted by algorithms seem to love its leads, Jacob Elordi and Joey King. (Source: Economist)
If at this stage you are wondering what does this have to do with Fashionomics, then let’s dive right into answering that question. From how I see it, Fashionomics is converging with Netflixonomics in 3 ways:
1. Digital native retailers in Fashion will scale their own production just as Netflix has.
Large digitally native (Pure Play) retailers such as Amazon in the more mass/premium market and Net-a-Porter in the luxury market will begin to build strong portfolios of their own brands, powered by their deep understanding of customer tastes and behaviors.
Just like Netflix, the retailers can understand very precisely what product attributes (color, size, style, fit, etc.) and customer interests (brands, categories, trends, etc.) work for which segments, using such information to estimate precise demand and produce only the necessary quantities to test, learn and iterate. Hence, reducing the risk of overstocks and obsolescence costs, a major drag on the profitability for any retailer.
Multi-brand digital retailers expanding to capture a larger share of the value chain
For example, Amazon’s activewear brand, Peak Velocity, sells a $79 hoodie that has a Best Seller rank of #38 in the Active Hoodies category, one of Nike’s strongest categories. While Peak Velocity’s ranking of #38 (as of Dec 6th) may seem unimpressive, what may change your perspective is that the brand was launched only in November and quickly climbed up the ladder to #38 in a category where 62% of the revenue comes from a long tail of brands other than the top 5 brands. Why? A highly targeted product with personalized marketing increases sales conversion.
And Amazon is not alone in following this strategy, leading digital luxury retailers such as Yoox-Net-a-Porter, with brands such as Iris and Ink (Outnet), MR. P (Mr. Porter) and Matches Fashion with Raey, are beginning to make serious investments in this space, learning from the retailers such as Asos, whose private label business contributes almost 50% of its revenue.
This is quite a departure from the traditional wholesale model, where a brand mandates a certain mix of products that a retailer must buy even though the selection is not entirely corroborated by consumer behavior on the platform. While Private Label strategy is not new to the playbook of retailers,
the large digital retailers are building a new moat — customer data and personalized marketing, something that traditional brands lack due to the absence of direct, measurable access to digital consumers at scale.
2. Increasingly influenced by data and algorithms, Production will become more agile and targeted
Just as Netflix is able to create “The Kissing Booth” and find 20M viewers for the movie through its intimate understanding of its customers, large digital retailers can find niche targets to cater to and fill the small white spaces that large brands are less likely to worry about.
One can argue that Fashion is seasonal and the historical data may not be the best predictor of future demand. That could have been true a decade ago, when fashion products, as a complex amalgamation of attributes such as color, style, silhouette, pattern, etc. could not be decoded and decoupled.
However, with advances in machine learning and image recognition algorithms, a fashion product can be decoded into multiple attributes, and a more sophisticated predictive model can be created to suggest which attributes, based on onsite behavior and third-party data, are likely to be preferred by customers in the near future.
Retailers can then start with cherry-picking the most popular attributes and overlay with the customer segments that have the highest affinity for such attributes to start sketching out the blueprints of their private label investments.
Agility is the key here. Rachael Proud, the designer who is leading Matches Fashion Private Label — Raey, says “If we’ve got a jumper and it’s a best seller and we’ve only ever done it in blue, we are immediately thinking: let’s do it in black,” she says, adding that if fabrics and trimmings are in stock, Raey can deliver product in as little as four weeks. (Source: BOF)
Proud and the Raey team have data on their side: they have a deep level of information evaluated on a weekly basis, from cost-per-click to real-time sell-through.
The unprecedented access to data allows these retailers to react in ways traditional brands can’t.
Example: In 2016, MatchesFashion shifted Raey’s deliveries from seasonal to monthly collections that arrive on the site each week. These more frequent deliveries also help drive traffic to the site, says Proud. (Source: BOF)
While most of these retailers are focusing their initial efforts on the Basics (low risk, high turnover category), they will eventually start pushing the boundaries to scale their private label contribution. Something that a player such as Zalando is already demonstrating by conjuring up 17 private labels since 2010 and now generates 500 million euros ($599 million) of its 3.64 billion euros in annual sales from them, offering everything from Pier One sweaters costing less than 30 euros to Mai Piu Senza high-heeled boots at 170 euros or more. (Source: Bloomberg)
Obviously, these retailers will have to find the balance between scaling their own labels and protecting relationships with the brands that represent a sizeable share of their revenue. If retailers start to eat the share of the brands, it may take away the preferential treatments that these retailers get from the brands such as exclusive collections or early deliveries.
A more likely scenario is that the retailers and brands will co-develop capsule collections by combining their respective core expertise in consumer behavior mapping and product development. As an example, Calvin Klein partnered with Amazon recently to launch an exclusive collection online and in Amazon pop-ups. A trend that is definitely likely to continue.
Finally, just as Netflix is able to attract both successful writers/directors and identify and bet on new/emerging talent, digital retailers can command the same advantage. Net-a-porter recently launched a group of emerging designers through The Vanguard Program. While it is projecting this program as a mentorship program, it is really a way to both cater to the needs of the niche and emerging segments (white spaces) and to get greater control of their supply chains and thus improve profitability.
3. Distribution will increasingly become cheaper, personal, and global
Netflix is increasingly becoming a global household name in entertainment. It has very successfully created regional content and found an audience for such content globally, hence, improving the ROI on the content investments. For example, Dark is a German original released in the fourth quarter of 2017 that did well in its home country, and, according to the company, “has also been viewed by millions of members in the US and has outsized watching throughout Europe and Latin America.”
With e-commerce penetration expected to get into double digits across the globe, fuelled both by in-country and cross-country growth, the cost of logistics (cost per order) will continue to decline. Amazon is already achieving such scale and costs, where you can get free next-day delivery (in India, with Prime) and cross country delivery for under $10 (US to Dubai for less than 500 GMS of package).
Potential of Cross Border E-commerce. Source: DHL
While the last decade was about marketing to segments based on demographics, the sophisticated personalization and marketing tools can allow you to do so based on specific interests.
This means that a product sitting in any part of the world can be matched with the interests of a person sitting in another part of the world, and not only marketed but also shipped seamlessly. Farfetch is one of the leading retailers that is connecting this global supply with global demand, unlocking value for local boutiques and brands through global exposure.
Very soon, if not already, a customer living in Australia can discover and buy a local designer dress (say, an Abaya) by a Lebanese designer operating out of Dubai, all because of the power of personalization algorithms and cheap global shipping. While these cross border sales are still a small portion of the overall e-commerce, this share will continue to grow rapidly.
The Bottomline
If you are a digital retailer, the playbook to scale profitably and thrive is emerging clearly. To fight reliance on markdowns or promotions and boost profitability, you need exclusivity and scarcity as weapons. Not placing bets in the areas above will only make it harder to fend off the competition and to grow in a world where customers are overwhelmed with digital noise, and thus
will increasingly give their business to retailers that intimately understand their tastes and can respond to their Insta-fashion needs that can change in a snap. (pun intended)
For brands, it’s time to accept that your turf is under siege.
The new breed of tech businesses has clearly demonstrated that platforms which own the consumer relationship and leverage data to deliver a highly personalized experience will continue to eat away large portions of the value chain.
If the majority of your business is driven by wholesale, it's time to think about diversification and find ways to get closer to the customer. One sure way is switching to the marketplace model with digital retailers in exchange for customer insights and data, leading to co-development of products, to reduce the inventory risk and obsolescence cost in the value chain.
Co-developed, smaller, exclusive and non-seasonal collections can help brands maintain both novelty and profitability. Obviously, this requires rethinking the brands' supply chains, as the long lead times are a challenge, especially for luxury brands. Needless to say, it's not going to be easy, but who said it's easy to thrive in an industry that is silently being disrupted?
While Netflix has reinvented the economics of the entertainment business and started a gold-rush of original content distributed through personalized marketing, can large Fashion digital retailers follow this lead and achieve such scale? | https://amitrawal.medium.com/how-netflixonomics-and-fashionomics-are-converging-120acf55b4f0 | ['Amit Rawal'] | 2019-03-16 16:41:58.706000+00:00 | ['Retail', 'Personalization', 'Netflix', 'Amazon', 'Ecommerce'] |
Scaling the translation process | Going global is a very attractive strategy for companies and a great challenge for product and engineering teams. Usually when we start building products for international markets, the first step is to work on translation. I would like to share with you how we made this process simple and scalable, making life easier for developers and translators, as well as delivering a better experience for our customers.
Context
In 2019, when the RD Station International Product team was formed, there were already more than 120 RDoers in the product and engineering department, executing more than 15 deploys per day. In addition, the strings had already been separated from the source code into translation files and our digital marketing software — RD Station Marketing — was available in English and Spanish.
However, we didn't have a template to follow for how the translation process should work. The responsibility for generating English and Spanish translations belonged to whichever team was editing or adding an i18n key. Developers could request these translations from our translators via a form or Slack. In practice, however, they would often use Google Translate or even replicate the Portuguese keys across all languages.
Given this scenario, we identified four main problems:
Mixed experience
Parts of the product with text in different languages. In the email below, for example, we have text in both Portuguese and Spanish.
Example of an email with mixed experience
2. Dissatisfied external customer
Hello, good morning. I would like information on how to cancel my RD Station account, because I feel it has many shortcomings in Spanish; it is hard to find help or tutorials in this language. [Customer]
3. Development team with questions
Hello, good afternoon. I'm new here in product and I'm adding a long translation key. How do I proceed to get these texts translated? [Developer 1] What is the current process for requesting translations? I need the translations (English and Spanish) of these 3 keys below. [Developer 2]
4. Translation team with questions
Hi, I was just about to write you now. Do you have any context for the task you sent? Screenshots or where to find these strings in the product? [English translator] Hi, good morning. A quick question: where can I edit the pop-ups section in the Station? Something looks off. [Spanish translator]
There were several available options to solve the problems mentioned above, but we decided to consider two of our engineering principles as the basis for creating the solution and we chose to automate the process.
Reducing global complexity is more important than reducing local complexity. Automate everything.
Automating the translation process
Solution
Centralize the responsibility for generating translations in Crowdin, a translation management system, by creating a mechanism that listens to the work-in-progress branch and delivers the translations within that same branch. Simple and scalable!
Solution architecture
To make this possible, we created a solution called the translation manager, which had some requirements:
Be a shell script or binary, supporting different operating systems like Linux, Mac and Windows;
Be installed magically, with no manual setup;
Be able to connect to more than one GitHub repository;
Have no external dependencies.
Thus, we created an application in GoLang that was installed as the .git/hooks/pre-push hook whenever the developer interacted with the application in the development environment: `rails s`, `rails c`, `rails *`.
How it works
A developer changes or adds new strings in Portuguese to the application in their development branch;
When performing a git push… strings are sent by the translation manager and automatically translated into English and Spanish by Crowdin using the translation memory + translation machine (Google, Amazon, Microsoft, etc.);
That done, the already-translated strings are exported to the developer’s development branch by the translation manager. If there are any deleted strings, the translation manager also cleans the translation files.
Finally, changes are automatically submitted to the developer’s development branch in GitHub.
Translation process step by step
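The real translation manager is a binary written in GoLang, but the essence of the hook fits in a few lines. Here is a rough Python sketch of the flow; the file paths and the two Crowdin calls are illustrative stubs, not the actual implementation:

import subprocess

SOURCE_FILE = "config/locales/pt-BR.yml"  # assumed path to the source strings
TARGET_FILES = ["config/locales/en.yml", "config/locales/es.yml"]

def upload_strings(path):
    # stub: the real tool calls the Crowdin API here
    print(f"uploading {path} to Crowdin")

def download_translations(path):
    # stub: the real tool fetches translation memory + machine translations
    print(f"downloading translations into {path}")

def pre_push():
    upload_strings(SOURCE_FILE)            # step 1: send source strings
    for target in TARGET_FILES:
        download_translations(target)      # step 2: pull translated files
    # step 3: commit the translated files back to the developer's branch
    subprocess.run(["git", "add", *TARGET_FILES], check=True)
    subprocess.run(["git", "commit", "-m", "Update translations"], check=True)

if __name__ == "__main__":
    pre_push()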
After this process is completed, the translators receive email notifications about the new strings. If the branch has not yet been merged, their corrections are incorporated into the same PR; otherwise, a new PR is generated from master, which can be merged later on. This way, the human translation process doesn't create a bottleneck in the development process.
Translation workflow
The automation of this process ensured the quality of new translations going into production. However, it didn't solve the problem of existing poor-quality translations in the software. To solve this problem, we used the in-context translation solution provided by Crowdin.
In-context translation
In-context translation allows you to translate texts directly from within the software in real time, by making them editable. Using this feature, translators were able to revise strings within the context of the page containing them, resulting in a much faster and more accurate translation.
In-context translation in RDSM
Conclusion
Demonstration of the new translation process
It was an incredible experience to have participated in the construction of the translation process and to have been able to solve the four problems listed at the beginning of this article. Remember:
Automate to ensure quality and gain scale. Create a globalized product and not just a globalization team.
Special thanks to all the people who participated in this project: Paula Hurtado, Peter Stanley, Jacobo Leen, Carlos Cuzik, Marlon Schweigert and Danilo da Silva. | https://medium.com/rd-shipit/scaling-the-translation-process-5cdf28ae268f | ['Danielle Moreira'] | 2020-11-20 14:08:09.887000+00:00 | ['Software Engineering', 'Translation', 'Globalization'] |
Dr. Timnit Gebru, Big Tech, and the AI Ethics Smokescreen | Photo by Echo Grid on Unsplash
This past week, news about Dr. Timnit Gebru, the eminent and beloved AI Ethics scholar fired by Google roiled the industry and highlighted the unsavory reality of being Black and ethical in a space dominated by powerful white men. It revived traumatic memories for many women of color who have faced gaslighting, exploitation, and erasure in the toxic tech industry. It also brought to light the broader issue of credibility and objectivity of AI ethics research funded by big tech.
Earlier this year, my colleague Ian Moura and I called out how elite institutions, the self-appointed arbiters of ethics are themselves guilty of racism and unethical behavior with zero accountability. A recent study unearthed that a significant number of faculty at top universities have received some form of financial support from big tech. Insidious influence of big tech shows up in framing of AI Ethics research, most of which is focused on solving ethical issues in such a way that AI development can continue unabated. Much of it is centered around risk mitigation on behalf of tech companies rather than well-being of marginalized communities.
This incident coincided with our annual summit where we brought together women working in the AI Ethics space to learn from each other and celebrate the lesser-known voices in this space. As the selection committee was vetting the annually published 100 Brilliant Women in AI Ethics™ list, there was a vigorous debate on how to decide whether someone working for big tech with "ethics" in their title was genuinely doing the work or just engaging in ethics washing. How many have the courage to call out unethical tech developed by their employer, or to straight up admit that the right solution is to ban said technology rather than try to redeem it?
Every so often these companies trot out AI Ethics luminaries to make an eloquent speech on the need for more ethical AI but when experts like Dr. Gebru point out the ethical flaws of these technologies, they are attacked, discredited, and discarded with impunity. The inevitable conclusion is that AI Ethics initiatives by big tech are designed to make problematic tech more palatable and they are used merely as a smokescreen to hide their transgressions.
Earlier this year, the AI Ethics world rejoiced as IBM left the facial recognition business and other tech companies signaled their willingness to follow suit. This glimmer of hope was due to the hard work of Black scholars like Dr. Timnit Gebru, Joy Buolamwini, and others. These past few years have been remarkable in the number of highly visible employee push-backs and protests against surveillance technologies used to track and incarcerate marginalized groups. However, as media attention waned and public pressure lessened, the tech companies have gone back to business as usual and are again courting deep-pocketed government agencies with new lucrative contracts for the same malevolent technologies.
Dr. Gebru’s firing is the latest in a series of efforts by big tech to squelch dissent within their ranks. Last year, Google allegedly fired multiple folks for worker activism and Meredith Whittaker, the co-founder of AI Now Institute parted ways with Google when all her paths to career progression at the tech giant were blocked after she led the employee walkout demanding structural change. Other tech giants are increasingly engaging in union-busting activities and there have been disturbing reports that Amazon has hired spies to surveil its workers and track labor movements.
With so many powerful forces working to suppress marginalized voices, how can we make any meaningful progress on AI Ethics?
We should start by protecting ethical whistle-blowers like Dr. Gebru and strengthening our labor laws so they protect the workers, not the employers. The 'hire and fire' culture of the tech industry is inherently dangerous, as it enables abuse of workers at the hands of powerful tech companies. Recently, the National Labor Relations Board (NLRB) filed a surprise complaint accusing Google of illegally surveilling and firing two workers who tried to form a union at the tech giant. While this sounds promising, resolution of such complaints takes a long time.
To get public support for such protections and resolutions would require articulating the harms in a way that the public can understand and rally behind. This means dismantling the stranglehold of AI Ethics gatekeepers in tech and academia, including more marginalized voices in the development and usage of AI technologies instead of only exploiting them for labor and data, and forcing big tech to share the resulting benefits with everyone instead of restricting access to the wealthy and privileged.
Technologies reflect the priorities and ethics of those building and funding them. We need to stop acting as though these technological outcomes are somehow separate from the environments in which technology is built. Elon Musk, who forced workers to go back to work during the pandemic, has now surpassed Bill Gates as the second-richest man. We need to dismantle the incentive structures designed to reward those benefiting from exploitation of workers and stop glorifying the hoarding of wealth as if it were some heroic accomplishment. There is an urgent and critical need to divert resources to technologies that benefit humanity over the bottom line. We need to nurture alternative funding sources so that AI Ethics research doesn't become the pet project of some billionaire or the redemption for big tech.
Those who suggest that Dr. Gebru should just find another job are missing the point. When there are no safeguards for highly credible, well-known scholars who speak up against unethical misdeeds, what hope is there for lesser-known voices from marginalized communities? Even if Dr. Gebru and others were to leave Google, given the dominance of this industry by a handful of tech companies who have become more powerful during the pandemic, the number of opportunities is shrinking very quickly.
Audre Lorde said, “For the master’s tools will never dismantle the master’s house. They may allow us to temporarily beat him at his own game, but they will never enable us to bring about genuine change.”
While it may seem naïve to try and change a powerful company from the inside, having ethical voices from marginalized and minoritized communities with a strong moral compass inside these companies and institutions may at the very least, slow the onslaught of questionable technologies and buy us some time to collectively figure out other sustainable solutions.
Dr. Gebru and others are the last line of defense in our quest for ethical and inclusive tech. If we don’t stand up for them now, soon there will be no one left to fight for us. | https://miad.medium.com/dr-timnit-gebru-big-tech-and-the-ai-ethics-smokescreen-45eb03d1fe6d | ['Mia Dand'] | 2020-12-08 00:14:40.619000+00:00 | ['Big Tech', 'Artificial Intelligence', 'Moral Responsibility', 'Ai Ethics', 'Ethics In Tech'] |
In The End | In The End
The voice inside our heads keeps us busy, distracted. But to what ultimate end?
As I carry on my daily work, my mind is focused. I’m immersed in the detail and the demand of the task.
But in the moments of reduced attention, in recent weeks particularly, it wanders to thoughts of my demise. I realise that I, my parents, my wife and children, and everyone we know, will all one day be gone.
Today I thought of a business friend who lost her child 10 or so years ago and the pain that she must know, and I hope that I never find myself there.
It’s a remarkable thought to contemplate one’s own mortality, and although it’s unnerving, it captures my interest. I had pneumonia a few years ago and I couldn’t take three steps without gasping for breath. I remember thinking, this must be what it feels like to be on the brink.
I often ask myself; who or what am I? Really, what am I?
I have no clue. It’s a pursuit of something I can never catch up to. It’s always just out of reach.
No, too scary.
Ok, just a little look.
No! Back to work now.
Yet, it pursues me, and away from the realisation of it I cannot get. Paradoxically, I seem to both know it and not, and it seems to know me.
Without this realisation, I could never come to terms with my own foolishness, the thoughts that rush in, my screaming and shouting, either on my own in the van as I drive, or at my son for his seeming unwillingness to embrace my allegedly solid advice towards efficiency.
Change that word, you use it too much…
Absent of the recognition and acceptance of my inevitable death, I would never understand that my demands on myself and others are utterly pointless. Success, happiness, money, recognition, reward and applause, every one ultimately a fool’s errand.
The disorganised dishwasher, the untidy hallway entrance, the shoes strewn around the living room, my insufferable insistence on timeliness and attention to detail, and my impatience at others’ apparent inability to meet my impeccably high standards — bollocks.
Every bit.
But I persist.
I think somehow it might end differently and my children will thank me for being as I am. As if that is a reasonable excuse.
Whatever it is — the fundamental basis of this monologue inside my skull — it doesn’t accept the wafer thin surface reasoning that it presents itself.
The fact is, none of us are getting out of here alive, so it matters. | https://medium.com/the-reflectionist/in-the-end-844be87c620 | ['Larry G. Maguire'] | 2020-11-18 20:09:57.425000+00:00 | ['Philosophy', 'Psychology', 'Reality', 'Life', 'Death'] |
Functional Programming illustrated in Python: Part 5 | Functional Programming illustrated in Python: Part 5
The IO Monad — laid bare
From the Functional Programming illustrated in Python series
But I don’t like Monads!
Monads, Monads, Monads… have you got anything without Monads?
Well, there’s Direct Function Application. That doesn’t have much Monad in it.
The problem is, functions don’t do anything. Sooner or later you’re going to want to write a program which interacts with the real world. It reads and writes to the terminal. It writes to the filesystem. It updates a SQL database. It turns a little red LED on and off. All of these things are decidedly stateful and side-effect-ful, and they are implemented in dirty, impure languages like C and ultimately machine code. That code needs to be boxed up and presented in a functional way in order to interact safely with functional code. That box is conventionally¹ a Monad.
The pure ValueAndLog Monad we’ve been using all along captures the idea of “outputting”: it starts with an empty buffer, and the “side effect” of bind is to add strings together as it goes along².
Now imagine a similar class where instead of a functional side-effect, a real-world side-effect takes place, like writing to the terminal. That’s it. From the outside, its API might look like ValueAndLog. On the inside, instead of appending to a hidden buffer, it actually writes to the terminal. Functional world, meet real world.
You’ll also see another explanation:
When performing impure operations, it’s important that they are performed in the right sequence. For instance, you need to insert a customer into your database before you can insert their first order.
A pure functional language only evaluates things. Since pure functions depend only on their arguments, and not any external system state, they could be evaluated in different orders, or in parallel — or even not at all, if the result isn’t used anywhere.
The Monad’s bind operation a >> b can be used to enforce ordering, in the same way that h(g(f(v))) implies that you must evaluate f before g before h.
That is all true too. But the way I prefer to think of it is that in a purely functional Monad (like ValueAndLog), its bind operation reads and/or updates hidden state in the wrapper class. In a real-world Monad, its bind operation reads and/or updates state in the real world. That’s why a Monad is such a good container for stateful behaviour.
Warning: I am now going to unpick some Haskell code to expose the plumbing in Python. Any inaccuracies are entirely my fault. I welcome corrections from experts.
Let’s do it
Let’s wrap I/O so that it can be used by pure functional code. The end result should do the same as this imperative Python code:
print("What's your name?")
name = input()
print("Hello " + name)
The corresponding Haskell looks remarkably similar:
main = do
putStrLn "What's your name?"
name <- getLine
putStrLn ("Hello " ++ name)
But underneath it’s very different.
getLine is not a function! It's a value from the IO class. This value causes the bind operation to read a line of text, and pass it to the function on the right-hand side of the bind.
putStrLn is a function. But it doesn't print a line of text! Rather, it returns a value from the IO class. That value tells the bind operation to print a particular line of text, and then call the function on the right-hand side.
Each of these IO values represents an “action”, something “to be done”, followed by doing “the next thing” (a.k.a. “the continuation”)
Nonetheless, this can still be translated into Python.
A class that represents Actions
Here is a naïve, but easy-to-understand, implementation of an IO action class.
class IO:
    def __init__(self, action, arg):
        self.action = action
        self.arg = arg

    def __rshift__(self, func):  # this is "bind" (>>)
        if self.action == "getLine":
            line = input()
            return func(line)
        elif self.action == "putStrLn":
            print(self.arg)  # always returns None
            return func(None)
        elif self.action == "return":
            return func(self.arg)
        else:
            raise RuntimeError("oops")

    @staticmethod
    def unit(v):
        return IO("return", v)

def putStrLn(text):
    return IO("putStrLn", text)

getLine = IO("getLine", None)
(Runnable code here)
The IO class contains a description of an action to be done, and the bind operation performs it. In each case, the value resulting from the action (if any) is given as the argument to the next function, and the return value of that function is returned, unchanged. The no-op action “return” just passes a wrapped value straight through.
That’s fine, although I don’t like that putStrLn and getLine are global, so I am going to move them inside the IO class for tidiness:
class IO:
    ...

    @staticmethod
    def putStrLn(text):
        return IO("putStrLn", text)

IO.getLine = IO("getLine", None)
That’s a bit better. Remember that getLine is not a function: it’s a constant value, an instance of the IO class, so it can’t be created until the class has been created. putStrLn is a function which returns an IO value.
(Updated code here)
A better class
But there’s a more compact and natural way to do this. Each action can be represented as a function: an impure function, with no parameters, just like say_hello from part 0 of this series. We can store the action function directly inside the IO wrapper. It boils down to just this:
class IO:
    def __init__(self, action):
        self.action = action

    def __rshift__(self, func):
        return func(self.action())

    @staticmethod
    def unit(v):
        return IO(lambda: v)

    @staticmethod
    def putStrLn(text):
        return IO(lambda: print(text))

IO.getLine = IO(lambda: input())
Think about this carefully. Consider these examples:
v = IO.putStrLn("Hello")  # what is the value of v?
                          # does anything get printed yet? why?

def dummy(x):
    pass  # do nothing

v >> dummy  # what does this do?

IO.getLine  # what is this value?
            # does it read anything yet? why?

IO.getLine >> dummy  # what does this do?
If that’s not clear, look again at the one-line body of the bind operation:
def __rshift__(self, func):
    return func(self.action())
self is the IO value on the left-hand side of >> , and func is the function value on the right-hand side.
Step one is to pick out the action which this IO wrapper contains:
return func(self.action())
            ^^^^^^^^^^^
Step two is to execute it, which will do some action and give a result³:
return func(self.action())
                       ^^
Step three is to pass that value to the right-hand function:
return func(self.action())
       ^^^^^             ^
And step four is to return the value returned by that function, to the caller of bind.
Therefore, when we create an instance of the IO class, we just need to provide a lambda which does the action we want to do, and returns a value to be passed on to the function on the right.
See how the ordering is enforced. The action must be executed before its result is passed to func , since its return value forms the argument to func .
The main program
You can’t see any bind operations in the Haskell code, because they are hidden within the special syntax of the do block⁴.
To rewrite the do block with its funny <- as plain lambdas and binds, the lines are transformed one by one as described in the Assignments article. To recap:
do                           expr >> (lambda x:
  x <- expr            ⟾       do ...more code...
  ...more code...            )
There is an extra case: you may have an expression whose unwrapped value is not used. Treat a bare expr as if it were ignore <- expr , unless it’s the last one.
do                           expr >> (lambda ignore:
  expr                 ⟾       do ...more code...
  ...more code...            )
The ignored parameter is conventionally named _ (an underscore).
The result of these transformations is the following Python:
main = (
    IO.putStrLn("What's your name?") >> (lambda _:
    IO.getLine >> (lambda name:
    IO.putStrLn("Hello " + name)
    ))
)
At each stage, the value is an IO “action”. When bound ( >> ), the bind operator performs that action and passes its result (if any) as the argument to the function on the right-hand side (the continuation). That’s how getLine >> (lambda name: ...) assigns the parameter name to the result.
Finally, we need to run the program. What is main anyway? Is it a function? No — for one thing, it doesn’t have any parameters, and a pure function with no parameters is a constant. It’s a value of some sort. It’s an IO action value: a chain of actions for the whole program.
To perform that action, we have to bind it — to a dummy function which does nothing and returns a dummy IO value⁵.
main >> (lambda _:
    IO.unit(None)
)
But in Python that explanation is not quite true. Python is an "eager" language, meaning it evaluates things as it goes along. In the process of calculating a value for main, it performs the side effects of printing text and reading a line. The only action it doesn't perform is the final one, which is the value assigned to main. So by the time we have assigned a value to main, it has done all the actions apart from the last one. The final bind does that.
In contrast, Haskell is a “lazy” language, which means it doesn’t evaluate expressions until it needs their value.
Anyway, here is the full code:
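# The IO class and the main program, assembled from the snippets above:
class IO:
    def __init__(self, action):
        self.action = action

    def __rshift__(self, func):  # "bind" (>>)
        return func(self.action())

    @staticmethod
    def unit(v):
        return IO(lambda: v)

    @staticmethod
    def putStrLn(text):
        return IO(lambda: print(text))

IO.getLine = IO(lambda: input())

main = (
    IO.putStrLn("What's your name?") >> (lambda _:
    IO.getLine >> (lambda name:
    IO.putStrLn("Hello " + name)
    ))
)

main >> (lambda _: IO.unit(None))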
To make it more compact, you can rewrite those static method definitions as lambdas:
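# One possible compact form: the static methods become plain lambdas,
# attached to the class in the same style as IO.getLine.
class IO:
    def __init__(self, action):
        self.action = action

    def __rshift__(self, func):
        return func(self.action())

IO.unit = lambda v: IO(lambda: v)
IO.putStrLn = lambda text: IO(lambda: print(text))
IO.getLine = IO(lambda: input())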
That is getting seriously terse. Welcome to functional programming.
I believe that’s an accurate translation of the Haskell shown earlier, but as I said before, I welcome corrections from those who know better.
Stripped bare like that though, it also shows there's really nothing to it. The actions are still the same original impure actions, input() and print(...). The difference is that the result of each action is passed along by invoking the next function in the chain — the "continuation passing" style.
By the way, do you see how easy it is to add new actions? Try adding IO.readFile , which takes a filename as its parameter and yields the contents of that file. In main , you should then be able to replace IO.getLine with IO.readFile("/etc/hostname") .
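One possible solution, as a sketch with no error handling, is to wrap the impure file read in a lambda, just as getLine wraps input():

IO.readFile = lambda filename: IO(lambda: open(filename).read())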
Unit in action
The IO.unit function hasn’t served much purpose so far. To demonstrate it, I’m going to steal another example directly from the Haskell Wiki: the function promptTwoLines asks for two pieces of information, and returns the concatenation of them.
promptLine prompt = do
  putStrLn prompt
  getLine

promptTwoLines prompt1 prompt2 = do
  line1 <- promptLine prompt1
  line2 <- promptLine prompt2
  return (line1 ++ " and " ++ line2)

main = do
  both <- promptTwoLines "First line:" "Second line:"
  putStrLn ("You said " ++ both)
Now a literal translation, where Haskell’s return is our IO.unit :
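# One possible rendering, applying the do-block transformation described above:
def promptLine(prompt):
    return (
        IO.putStrLn(prompt) >> (lambda _:
        IO.getLine
        ))

def promptTwoLines(prompt1, prompt2):
    return (
        promptLine(prompt1) >> (lambda line1:
        promptLine(prompt2) >> (lambda line2:
        IO.unit(line1 + " and " + line2)
        )))

main = (
    promptTwoLines("First line:", "Second line:") >> (lambda both:
    IO.putStrLn("You said " + both)
    ))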
Note the final expression in promptTwoLines :
IO.unit(line1 + " and " + line2)
This is the value which will be returned from promptTwoLines . The final value of one of these chains of IO actions must be an IO action, but rather than actually perform any IO, in this case we just need to wrap a calculated value (as if we’d just received it using getLine , say). Haskell’s return means “prepare this value to be returned”. It’s not the same as Python’s return , which is a flow-control construct (“stop executing this function right now”).
The value wrapped in IO is then unwrapped by bind at the point it is used, in this case as the parameter both of the next lambda:
promptTwoLines(....) >> (lambda both:
    IO.putStrLn("You said " + both)
)
Is IO a Monad?
Or is it a burrito? You decide.
Actually, all you have to do is to check whether the IO class constructed above fulfils the Monad Laws given before. This is left as an exercise for the reader.
You can understand how this code works without knowing whether or not IO is a Monad. But if you wanted to pass the IO class to something else which works with Monads in general, then it would be important.
Acknowledgements and further reading | https://brian-candler.medium.com/function-programming-illustrated-in-python-part-5-90c4882b21b7 | ['Brian Candler'] | 2020-11-01 13:09:01.313000+00:00 | ['Functional Programming', 'Python', 'Computer Science', 'Monads'] |
A Brief History of the Beer Game | A Brief History of the Beer Game
by Larry Snyder
I’m getting pretty excited about the upcoming release of the Opex Analytics Beer Game — it’s scheduled to go live in about three weeks! While the Opex version is brand new and uses cutting-edge algorithms and Artificial Intelligence, the Beer Game itself has quite an extensive history. I learned all about it while developing the Opex version and, to tide you over before the release, I’ll share a brief history with you.
Beer Game screenplay.
So… What is The Beer Game?
The Beer Game is a widely used in-class game that’s played in supply chain management and system dynamics classes. Instructors use it to demonstrate the Bullwhip Effect, the impact of hidden information and the importance of coordination across the supply chain.
It all began in 1956 when managers at General Electric noticed huge swings in production levels at one of their factories — swings much larger than the swings in consumer demand. A few years later, motivated by their discussions with the General Electric managers, professors at MIT began developing the original Beer Game.
It started with MIT professor Jay Forrester, regarded as the founder of system dynamics and author of the well-known book Industrial Dynamics. First, Forrester developed a simulation of a production–distribution system inspired by the GE factory. Forrester’s simulation was essentially a pen-and-paper spreadsheet. In the summer of 1958, MIT’s summer session used this production–distribution system as an in-class demonstration, rather than as a competitive game.
Schematic diagram of the three-stage production–distribution system described by Forrester, HBR, 1958.
Forrester’s 1958 Harvard Business Review article showed this model (left), which used a three-stage supply chain consisting of a retailer, a distributor and a factory.
Increase in order volatility due to 10% increase in retail sales. Forrester, HBR, 1958.
Forrester’s article demonstrated that a small increase in the volume of retail sales can make the retailer’s orders more volatile, the distributor’s orders even more volatile and the factory’s production more volatile still. This pattern came to be known as the bullwhip effect, though it wasn’t named that until a few decades later.
During MIT’s summer 1960 session, Forrester’s simulation became an actual game; players used a physical board and cards. Over the next decade or so, various aspects of the game evolved, including the number of stages (players), lead times and costs.
MIT professor J. Miller first specified the product in the simulation/game as beer in 1973. Miller explained this choice of product by noting that:
“In order to meet customer demand, many beer companies have to maintain large inventories of beer,” and that, “a significant activity of the beer company is to maintain the minimum amount of beer necessary to satisfy customer demand reasonably quickly.”
(To be honest, I always assumed the game was given its name as a mild attempt to pander to college students. We professors think we’re pretty clever when we do this sort of thing.)
The “standard” version of the Beer Game, if there is such a thing, was codified by another MIT professor, John Sterman, in a Management Science article. That version uses four stages, lead times of (mostly) 2 periods, holding and stockout costs of $0.50 and $1.00, and a (mostly) stable demand pattern with a demand “shock” a few periods into the game. This version of the game is the “Classic” setting in the Opex Analytics Beer Game.
While Forrester focused mainly on how the dynamics of the system itself cause instability, Sterman was interested in the ways that managerial behavior, especially irrational, "panicky" behavior, causes instability. Sterman proposed a simple formula that captured players' panicky behavior. Fun fact: the "human-like" computerized players in the Opex Analytics Beer Game follow this formula. In his 1990 book The Fifth Discipline, Peter Senge (another MIT professor) gave two rules to prevent this panicky behavior:
“(1) Keep in mind the beer that you ordered, because of the delay, has not yet arrived, and (2) Don’t panic.”
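For the curious, the kind of anchoring-and-adjustment ordering rule Sterman described can be sketched in a few lines of Python; the parameter names and values below are illustrative assumptions, not Sterman's estimates:

# Sketch of an anchoring-and-adjustment ordering heuristic in the spirit of
# Sterman's formula. All parameter values are illustrative assumptions.
def order_quantity(expected_demand, stock, supply_line,
                   desired_stock=12, alpha=0.3, beta=0.2):
    # anchor on the demand forecast, then adjust toward the desired stock,
    # only partially accounting for orders already in the pipeline
    stock_adjustment = alpha * (desired_stock - stock - beta * supply_line)
    return max(0, expected_demand + stock_adjustment)

# A "panicky" player under-weights the supply line (beta near 0),
# double-ordering and amplifying the bullwhip effect.
print(order_quantity(expected_demand=4, stock=2, supply_line=8))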
The System Dynamics Society began selling Beer Game kits in 1992. The Society sold about 20 kits that year; in 2004, it sold over 7,000.
There have been several computer implementations of the Beer Game. The Opex Analytics Beer Game is the latest, plus the only one that includes an AI-powered player. Using Reinforcement Learning (RL), the AI player learns optimal strategies and plays against you or, if you elect to be on the same team, plays alongside you to help improve your score. While testing the game, we found that eight out of ten Opexers failed to beat the AI player. We're looking forward to watching many others try their hand at it soon.
Sadly, we must admit, no actual beer is involved. But it’s a fun way to learn about supply chain management and system dynamics, plus it uncovers ‘the possible’ in terms of the great value AI (even more specifically, RL) can create when used within operations. What more could you ask for in a game?
Stay tuned for the release of our Opex Analytics Beer Game coming August 9th, 2018!
Crediting: I learned most of what I discuss in this post from the article The Beer Game: Its History and Rule Changes.
_________________________________________________________________
If you liked this blog post, check out more of our work, follow us on social media (Twitter, LinkedIn, and Facebook), or join us for our free monthly Academy webinars. | https://medium.com/opex-analytics/a-brief-history-of-the-beer-game-7dd3c325766e | ['Opex Analytics'] | 2019-07-25 15:07:20.480000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Logistics', 'Reinforcement Learning', 'Videogames'] |
Physics Puzzles: A Pebble Thrown in the Air | Physics Puzzles: A Pebble Thrown in the Air
Will It Take Longer to Go Up or Down?
I have a habit of throwing into the air whatever I hold. I do it so often that some call it my addiction; yeah, it is that common for me. Depending on the initial conditions, on how it left my hand, I make rough predictions: how many rotations it will make, what height it will reach, and so on.
Often, when I go across a city, it is just a bottle of water or any other drink (closed, of course: I do not want to get wet 😄), which makes it even funnier, because a decrease in the water level changes the behavior of the bottle, and drastically so.
On the other hand, when I walk along a river, where many pebbles lay down, I pick the most rounded one for a ride to the air and back. And, here comes the puzzle.
Bob has thrown a pebble into the air from an ideally horizontal ground. It means that it will cover exactly the same vertical length during its up and down motions. Will it take longer for the pebble to rise or fall? (Also, say hello to Bob because he will return every now and then with new puzzles. ✋)
The solution
The velocity of an object is a vector that contains information about its speed and direction of motion. Galileo's principle of inertia, Newton's first law of motion, and common sense all state that an object at rest, or one moving with constant velocity, will keep its state (staying at rest, or going in a straight line) in the absence of a net-force.
Every time you are about to describe the behavior of an object, you should start by answering if there is a net-force.
How?
Just do an experiment once and the answer pops out immediately.
If the object’s velocity changes, there is a net-force.
If it goes in a straight line without changing its velocity, there is no net-force. And the object will maintain its motion forever and ever until some force breaks it out of its state.
So, back to the puzzle.
In the absence of a net-force, the pebble would continue its motion in a straight line, like that:
Yet, Bob's jaw would drop to the ground (mine too) if that happened! Because he knows that it cannot be the case!
Every massive body produces a gravitational acceleration that depends on its own mass and on the position of the body it acts on. For objects near the Earth, it is the well-known g = 9.81 m/s². And this value stays constant for any mass. For dust, a pebble, you, and me: as long as the distance to the center of our planet stays roughly the same, it is always just g. That is a geometric property of the fabric of space. Many have tried to show that it depends on the falling body's mass, and they all failed. Some scientists still give it some thought, but, most likely, their efforts are destined to fail.
On a planet without an atmosphere, the object's motion would depend only on the gravitational pull, and its height over time would trace a parabolic arc. Because a quadratic function describes the position within the gravitational field, the rise and the fall would take equal time.
But here on Earth, we have the atmosphere, which exerts an additional air drag on our pebble and thus makes these times unequal.
The drag force always opposes the object's velocity. If the pebble moves up, the drag acts on it downward. The faster the object goes, the more air molecules it bounces off, which means the drag grows with the velocity.
While the pebble goes up, gravity and drag reinforce each other, since they act in the same direction: downward. But during the fall, the velocity flips direction, which makes the drag force flip as well. Hence, the forces now oppose each other.
So, during the fall, the net-force is smaller, which means the acceleration is smaller, which means the pebble will need more time to cover the same distance as during the rise.
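You can check this numerically with a tiny simulation. Here is a minimal sketch, assuming drag proportional to v·|v|; the launch speed and the drag constant are illustrative values:

# Simulate a pebble thrown straight up, with drag opposing the velocity.
g = 9.81     # m/s^2, gravitational acceleration
k = 0.05     # 1/m, drag constant divided by mass (illustrative)
dt = 1e-4    # s, time step

v, y, t = 20.0, 0.0, 0.0   # thrown straight up at 20 m/s
t_up = None
while y >= 0.0:
    a = -g - k * v * abs(v)   # drag always opposes the velocity
    v += a * dt
    y += v * dt
    t += dt
    if t_up is None and v <= 0.0:
        t_up = t              # the moment the pebble reaches the top

print(f"rise: {t_up:.3f} s, fall: {t - t_up:.3f} s")
# The fall consistently comes out longer than the rise.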
The bonus
Do you know why parachutists cannot accelerate beyond a certain speed?
Because after reaching a certain velocity (the terminal velocity), the drag force matches the gravitational force in magnitude while opposing it in direction, which means the net-force acting on the parachutist is 0; they cannot accelerate anymore; the forces "kill" each other, so to speak. | https://medium.com/cantors-paradise/physics-puzzles-a-pebble-thrown-in-the-air-63c1f84f86bf | ['Wojciech Wieczorek'] | 2020-07-30 05:34:30.304000+00:00 | ['Self', 'Education', 'Math', 'Physics', 'Science'] |
Building an Algorithmic Trading Strategy with Enigma Catalyst | Building an Algorithmic Trading Strategy with Enigma Catalyst
Walking you through the Enigma project and using Catalyst to build an automated trading strategy.
Introduction to Enigma
Blockchains, in their current form, don’t handle privacy well. The data that is stored on a blockchain is available for everyone to inspect — a side-effect of radical transparency. Although this is still better than the current data model (whose problems I summed up below), some data is just not suitable to put on display.
Problems with the current model:
No Privacy ~ theft (hacks), selling to other parties etc.
~ theft (hacks), selling to other parties etc. Lost Natural Income ~ your data is a natural resource
~ your data is a natural resource Sensitive Product Problem ~ some data is extremely personal
~ some data is extremely personal Aggregated Power ~ large companies own basically all of our data
Machine Learning, and especially Deep Learning, has given us insights on our data that are invaluable. With our rapid increase in computational power, together with the enormous growth of data, there is no better time to train these models. Deep Learning is on its way to reshape almost every industry out there:
Industries disrupted by Deep Learning [Source: Insight AI whitepaper]
The need is huge for a new way of storing our personal data: one that addresses the 4 main problems I mentioned above while still allowing Machine Learning models to be trained on it. This is where Enigma comes in.
Enigma is a decentralized computation platform with guaranteed privacy. Data can be stored either on the public blockchain (non-sensitive data like ENG token transactions) or on the private Enigma network (sensitive data like medical logs). The private Enigma network architecture, which is not a blockchain, is that of a Distributed Hash Table, which is also a huge part of the InterPlanetary File System (IPFS). Read about that here. You own your data, so in essence, if a large company wants to train a model (like a targeted advertising model), you would sell your data to that company in order for them to use it. Computation on sensitive data is still possible, thanks to Enigma's secure multi-party computation. If you want to know more about Enigma, read their whitepaper.
Catalyst
Inspired by the rapid growth and proliferation of crypto-assets, we propose Catalyst — the first investment platform that enables developers to build, test, and execute micro crypto-funds.
This extract is from the Catalyst whitepaper, and it pretty much sums up what they aim to do. Catalyst is the first Application on the Enigma protocol, since the crypto-data that powers the platform comes from the Enigma decentralized data marketplace.
It aims to be the one-stop shop for quantitative traders to test their strategies in the crypto-asset domain. It also aims to be a platform on which people can buy strategies from Catalyst developers. The Python Software Development Kit is a Zipline-based engine in which strategies can be back-tested or live-traded.
Building the Strategy
The first step will be to install Catalyst on your system. Follow the guidelines their documentation provides here.
What we’ll build is a simple strategy that utilizes the RSI or Relative Strength Index. This is a momentum indicator that signals the strength of price movements. This will be the logic:
If we are not in a position and the RSI is oversold (≤ 30), go long
If we are in a long position and the RSI reaches 60, close long
If we are not in a position and the RSI is overbought (≥ 70), go short
If we are in a short position and the RSI reaches 40, close short
These exit-position thresholds (40 for the short, 60 for the long) are chosen arbitrarily; they simply make sure that the strategy can actually exit a position once the RSI has pulled back. One thing you may try is to optimize these values for maximal profit. Not that it will be a whole lot.
This is what the complete strategy looks like in Python, and we’ll break it down part by part:
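Here is a sketch of that logic, written against the Catalyst (Zipline-style) API; the trading pair, data frequency and RSI period are assumptions, so adapt them to your own setup:

import talib
from catalyst.api import order_target_percent, record, symbol

RSI_PERIOD = 14  # assumed RSI lookback

def initialize(context):
    context.asset = symbol('btc_usdt')   # assumed trading pair
    context.in_long = False
    context.in_short = False

def handle_data(context, data):
    prices = data.history(context.asset, 'price',
                          bar_count=RSI_PERIOD + 1, frequency='1D')
    rsi = talib.RSI(prices.values, timeperiod=RSI_PERIOD)[-1]

    if not (context.in_long or context.in_short):
        if rsi <= 30:                     # oversold: go long
            order_target_percent(context.asset, 1)
            context.in_long = True
        elif rsi >= 70:                   # overbought: go short
            order_target_percent(context.asset, -1)
            context.in_short = True
    elif context.in_long and rsi >= 60:   # RSI pulled back: close long
        order_target_percent(context.asset, 0)
        context.in_long = False
    elif context.in_short and rsi <= 40:  # RSI pulled back: close short
        order_target_percent(context.asset, 0)
        context.in_short = False

    record(price=data.current(context.asset, 'price'), rsi=rsi)

def analyze(context, perf):
    pass  # plot portfolio value, price with trades, RSI and returns here

# Back-test by passing initialize/handle_data/analyze to
# catalyst.run_algorithm(...) along with exchange, dates and capital.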
I suggest just copying lines 1 through 12. This will basically be any strategy’s skeleton (along with initialize, handle_data and analyze) and I won’t go deeper into these.
If I run this algorithm with the catalyst.run_algorithm function (providing the right parameters), this is the output. First graph being our portfolio value, second graph price, with the buys and sells displayed as arrows. The third graph plots the RSI, and the last graph shows our percentage gain (Blue) compared to just holding the asset (Orange). Courtesy of our analyze function. | https://medium.com/coinmonks/building-an-algorithmic-trading-strategy-with-enigma-catalyst-1a407e9c02f8 | ['Jonas Bostoen'] | 2018-05-30 04:05:33.107000+00:00 | ['Python', 'Blockchain', 'Trading', 'Algorithmic Trading', 'Bitcoin'] |
Accidentals By Susan M. Gaines — Review | An epic coming-of-age story that interweaves science, birds, politics and romance whilst confronting some of today’s most important environmental issues
by GrrlScientist for Forbes | Twitter
Accidentals (Torrey House, 2020: Amazon US / Amazon UK) is a lovely story narrated by Gabriel, the 23-year-old son of a naturalized American who suddenly decides to leave California after residing there for more than 30 years to return to her native Uruguay. After some cajoling, Gabe leaves his high-paying but boring data analysis job to help his mother realize her dream to grow organic vegetables on the family’s abandoned estancia.
“You’ll like the estancia,” Mom said. I hadn’t even looked at her but she knew she was getting to me. “It’ll be spring, summer. If you don’t want to help with the farm, you can go exploring. Borrow a horse, or go hiking and birding.” (P. 12)
A month later, Gabe and his mother, Lili, are residing in his Abuela’s family house in Montevideo busily restoring order to the chaos reigning there, but soon, they are spending increasing amounts of time on the family estancia, a few hours’ drive away. At first, he feels like he’s on an obligatory holiday, but Gabe’s interest in birds tempts him to explore the ranch’s marshes and fields, and soon, he’s filling notebooks with detailed sketches and notes about his many fascinating avian discoveries and observations. On one of his birding expeditions, he meets a local microbiologist, Alejandra, who’s searching for undiscovered microbes on the estancia.
Gabe also becomes swept up in the ongoing family drama over what to do with the land. One of Lili’s brothers, Juan Luis, is determined to bulldoze the estancia so he can grow rice and sell it at a profit to a European market. Complicating (or perhaps clarifying) the discussion about the fate of the family estancia, Gabe stumbles across a species of rail (that’s a bird) lurking in the dense vegetation on the estancia’s wetlands that does not appear in any of his field guides. Is this an accidental: a bird that pops up far out of its range for reasons unknown? Or is this a species that is new to science?
Appropriately enough, accidents are critical to this novel. Not just rails, but as the story unfolds, other accidents play pivotal roles in driving the plot, too. I particularly enjoyed how this novel took its time to tell the story, and to tell it well. It starts slowly, conversationally, and gently builds to a surprising conclusion. Every word, every sentence, every scene adds depth and intensity to the story. Throughout the entire book, many seemingly disparate themes whose hidden and often intricate connections are melded into one story. The characters, both human and avian, were complex and believable, and I ended up liking them all. Gabe’s inner monologues were enlightening, and his growing love for Alejandra humanized him and made him vulnerable, transforming him from a detached observer to an active, passionate participant in his own life.
This modern coming-of-age story is intelligent and epic in scope: presenting a thoughtful commentary on intimate family relationships; an investigation into how repressive government regimes and political violence that have sabotaged the lives of so many families continue to reverberate generations later; a sharp criticism of globalization and a warning about the growing threat of environmental devastation that is steadily bulldozing its way into our everyday lives.
The author, Susan Gaines, writes exceptionally well: her short stories have been twice nominated for the Pushcart Prize. Her meticulous research into microbiology, ecology, rice cultivation, and the politics and history of Uruguay provides additional authenticity. Her elegant prose is so powerfully evocative that the landscape, the people and the birds bounced off the page, surrounding and immersing me in the unfolding story.
If I could sum up this multifaceted story in just one word, that would be love. From its first sentence to its last, this book focuses on what we love — children, spouses, family, friends, nature, the environment, country — and the many ways that we show our love. Highly recommended. | https://medium.com/swlh/accidentals-by-susan-m-gaines-review-74510eb700cc | ['𝐆𝐫𝐫𝐥𝐒𝐜𝐢𝐞𝐧𝐭𝐢𝐬𝐭', 'Scientist'] | 2020-11-08 13:56:15.576000+00:00 | ['Environmental Issues', 'Birding', 'Books', 'Book Review', 'Uruguay'] |
How can you promote a healthy work-life balance? | Finding balance in our lives. It certainly seems like an unattainable goal but I do believe employers can play an important role in promoting a healthy work-life balance.
So, what can management do to help their employees find balance in their lives? Here are a few ideas:
Encourage workday recovery breaks
Workday recovery breaks are very necessary for employees; they boost employee productivity and overall well-being. A study led by John Trougakos at the University of Toronto found that people who take restorative breaks benefit from increased focus and resilience, while those who used the time for chores or work simply miss out. Organizations that don’t provide opportunities for employees to recover from work during the day risk lower employee effectiveness and productivity, leading to burnout, absenteeism, and higher staff turnover, said Trougakos.
Essentially, breaks are set aside for you, so take them!
Employers need to do what they can to encourage their employees to take breaks (crunch time or not). Employees could grab a bite to eat with their family or friends, have a nice peaceful picnic, read a great book, or go exercise at the gym or park — anything but eating lunch at their desk.
Stefan Sagmeister, owner of New York studio Sagmeister & Walsh, even goes so far as to give his employees an entire year off every seven years! When his employees reconvene, their creativity is refreshed and they are able to bang out some pretty awesome work.
Check out his TED Talk here.
Establish a flexible work policy
Trying to effectively balance work and life can be a huge challenge. We often feel like we are being pulled in multiple directions and can experience stress and guilt because of it. Establishing a flexible work policy would help ease this stress and give employees more control over their lives. The result is a generally happier, more satisfied, and productive workforce.
Employers can offer flexihours or telecommuting options. The benefits of such flexibility are endless. According to hrcouncil.ca, employers can have better staff coverage, more efficient use of facilities and keep valued staff who have other life commitments.
It’s a win-win!
Promote vacation days
Completely shutting off and taking a vacation can be a challenge for some. However, it is necessary to take time off work for your mental, emotional, and physical health. If employees take meaningful vacation time, employers will see a brighter, more accomplished workforce.
Some start-ups are even offering employees an unlimited vacation policy. The idea is you take off as much time as you need as long as you get the job done. Joshua Reeves, co-founder of ZenPayroll feels that a flexible vacation policy helps build ownership mentality and strengthens employee commitment to the company. Other companies reaping the benefits of an unlimited vacation policy include Netflix, Prezi, and Sailthru.
All in all, there are a ton of ideas out there! I suggest taking the time to listen to and observe your staff. Be creative and find ways to help them find balance. I can promise that you will have a happier workforce because of it!
Originally published on alongside.com | https://medium.com/alongside/how-can-you-promote-a-healthy-work-life-balance-413108095584 | ['Emily Brennan'] | 2017-01-04 15:37:38.022000+00:00 | ['Work', 'Human Resources', 'Work Life Balance', 'Startup', 'Company Culture'] |
All You Need to Know About the Lambda Functions in Python | All You Need to Know About the Lambda Functions in Python
Practical examples and real usage of lambda functions in Python
Photo by Clint Patterson on Unsplash
If you’ve heard of lambda functions in Python but not sure about how to use them, you’re in the right place.
This article will provide all you need to know about the lambda functions in Python:
What are lambda functions in Python, how they differ from the normal functions?
Why lambda functions are useful and when to use them?
Practical examples and real usage of lambda functions in Python
What are lambda functions in Python, how they differ from the conventional functions?
Lambda functions are defined without a name, so they are also called anonymous or nameless functions. We do not have to assign a name to a lambda function as we do when defining normal functions.
# lambda function syntax
# lambda arguments: expression
lambda x, y: x + y

# conventional function definition
def add_x_y(x, y):
    return (x + y)
In addition, with lambda functions, we can have any number of arguments but we can only define a single line expression. It is not possible to define multi-line expressions as we can do when creating normal functions.
Why lambda functions are useful and when to use them?
When you need a single-expression function that will be used once in your code, there is no need to define a normal function. You can define a lambda function whenever and wherever it is needed.
Lambda functions are also used when a function is needed as an argument to another function. Instead of defining a normal function, using a lambda function is more convenient and simpler in such cases.
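For example, the built-in sorted() function takes a function through its key argument, and a lambda is the natural fit there:

# use of a lambda function as the key argument of sorted()
people = [("Alice", 32), ("Bob", 25), ("Carol", 40)]
by_age = sorted(people, key=lambda person: person[1])
print(by_age)

# output
# [('Bob', 25), ('Alice', 32), ('Carol', 40)]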
Practical examples and real usage of lambda functions in Python
You can find below several examples of how lambda functions are used.
# use of lambda functions
# x is the argument
# x * 2 is the expression which will be returned by lambda function

double_x = lambda x: x * 2
print(double_x(4))

# output
# 8

# use of lambda functions
# x and y are the arguments
# x * y is the expression which will be returned by lambda function

mult_x_y = lambda x, y: x * y
print(mult_x_y(4, 5))

# output
# 20
Lambda functions are generally used in higher-order functions (filter(), map(), reduce() are very good examples of such functions) which take in another function as an argument. Let’s see how lambda function is used in such cases.
Example use with filter()
The filter() function takes a function and a list as arguments, then it returns the list items for which the function evaluates to True.
# Use of lambda functions with filter() function
# Program to filter out only the items which are greater than 5

my_list = [1, 2, 3, 4, 5, 6, 7, 8]
new_list = list(filter(lambda x: x > 5, my_list))
print(new_list)

# output
# [6, 7, 8]
In the example above, the filter function gets my_list and the lambda function as arguments. It evaluates every list item with the expression (x > 5) provided in the lambda function. If the evaluation result is True, the new list contains this specific element. If not, the function filters out the element and excludes it from the new list.
Instead of the lambda function, we could be providing a normal function as an argument. But this would increase the complexity of the code. Whenever it is possible, using a lambda function makes the code simpler.
Example use with map()
Similarly, the map() function takes a function and a list as arguments. The function provided as an argument is called for each list item, and the returned results are collected in a new list.
# Use of lambda functions with map() function

my_list = [1, 2, 3, 4, 5, 6, 7, 8]
new_list = list(map(lambda x: x * 5, my_list))
print(new_list)

# output
# [5, 10, 15, 20, 25, 30, 35, 40]
Example use with reduce()
The reduce() function also takes a function and a list as arguments. This time, instead of returning a list, it returns a single value by applying the lambda function cumulatively to the items in the list provided. Note that in Python 3, reduce() must be imported from the functools module.
from functools import reduce

my_list = [1, 2, 3, 4, 5, 6, 7, 8]
result = reduce(lambda x, y: x + y, my_list)
print(result)

# output
# 36

my_list = [1, 2, 3, 4, 5, 6, 7, 8]
result = reduce(lambda x, y: x * y, my_list)
print(result)

# output
# 40320

my_list = [1, 2, 3, 4, 5, 6, 7, 8]
result = reduce(lambda x, y: x if x > y else y, my_list)
print(result)

# output
# 8
Summary and Key Takeaways
In this short article, I have explained what lambda functions are and how to use them in Python. The key takeaways are:
Lambda functions are defined without a name, so they are also called anonymous or nameless functions.
We can have any number of arguments in lambda functions, but we can only define a single-line expression.
You can use a lambda function when you need a single-expression function which will be used once in your code.
Lambda functions are also used when a function is needed as an argument to another function.
I hope you have found the article useful and you will start using lambda functions in your own code. | https://medium.com/python-in-plain-english/all-you-need-to-know-about-the-lambda-functions-in-python-53db0fa7d02e | ['Erdem Isbilen'] | 2020-12-25 19:45:48.365000+00:00 | ['Python', 'Python3', 'Lambda', 'Data Science', 'Programming'] |
Facebook Develops An AI-Translator For Software Programmers | TransCoder is highly versatile
Currently, TransCoder can translate freely between C++, Java and Python, but the research team behind TransCoder says that it will be able to adapt to any programming language pair and fluently translate in between them.
Software with the sole purpose of translating between languages is already available, but the results are more often than not underwhelming and can’t be used in a fire-and-forget manner due to the differences in how each language is structured. These so-called S2S (source to source) compilers are far from compiling code errorfree and need extensive bugfixing to get the final result to work. It’s often easier to just rewrite the code from scratch in the desired language.
Any programmer who needs to change code from let’s say C# to C++ or the other way around will tell you the same thing: “Learn both C# and C++, then spend a whole lot of time rewriting code from scratch.”
TransCoder learns by translating not only from a source language to a target language but also reverse translating it back. If you have used Google translator in the past, then you surely know that hitting “reverse translate” a few times can give you weird results as the translator pulls more things out of context.
TransCoder translates code both ways to pick up on these differences and tweaks the code until both ways give the expected results. This way, it ensures coherence. | https://medium.com/illumination-curated/facebook-develops-an-ai-translator-for-software-programmers-c514f16d2fc5 | ['Kevin Buddaeus'] | 2020-06-09 09:51:49.637000+00:00 | ['Technology', 'Software Development', 'Business', 'AI', 'Programming'] |
Ex Libris Life | An evolving list of my literary heroes
Photo by Ivo Rainha on Unsplash
"When a griot dies, it is as if a library has burned to the ground."
-- Alex Haley
griot (pron.: /ˈɡri.oʊ/; French pronunciation: [ɡʁi.o]), jali or jeli (djeli or djéli in French spelling) is a West African historian, storyteller, praise singer, poet and/or musician.
The griot is a repository of oral tradition. As such, they are sometimes also called bards. According to Paul Oliver in his book Savannah Syncopators, "Though [the griot] has to know many traditional songs without error, he must also have the ability to extemporize on current events, chance incidents and the passing scene. His wit can be devastating and his knowledge of local history formidable". Although they are popularly known as "praise singers,” griots may also use their vocal expertise for gossip, satire, or political comment.
As one of my college poetry professors once said, "Write, Rite, Right...Always write how and what you feel, but show me rather than tell me." And, he would always tell me to read the works of others, whether old or new, just to get a feel of what else is out there and what has and hasn't been done.
My Top 5 Influences:
1. Langston Hughes
2. James Baldwin
3. Pablo Neruda
4. Amiri Baraka
5. Charles Bukowski
A Few Honorable Mentions:
Alex Haley, Ralph Ellison, Cornel West, George Orwell, Richard Wright, Albert Murray | https://medium.com/the-bazaar-of-the-bizarre/ex-libris-life-e820447d814c | [] | 2020-12-13 07:01:37.191000+00:00 | ['Literary', 'Lists', 'Bazaar Of The Bizarre', 'Heroes', 'Writing'] |
Stop Cutting Corners | Stop Cutting Corners
The hard path is always worthwhile.
Photo by TJ Dragotta on Unsplash
Try if you’d like, but you’ll find there are no cutting corners.
All the “hacks” and “quick tips” and “secrets” won’t save you from the inevitable need to see your goals through to completion.
When you look over your shoulder (stop looking over your shoulder!) at the success of others, and you grow envious, remind yourself to run your own race.
*They* didn’t get there overnight.
As Lynda Weinman once said, “I’m a 20-year overnight success story.”
No one gets there overnight.
No one.
No one-hit wonder shows up in the recording studio one-day, and has a gold record the next.
No writer pens a bestseller by taking the easy way out.
No entrepreneur goes from struggling to soaring in a few weeks.
No singer receives a standing ovation by faking it.
No business gets “raving fans” by selling out and compromising their values.
Even businesses that put their customers needs over their employees are doomed to fail.
Now that is cutting corners.
There’s no half-written articles.
No almost-finished operas.
No overnight gurus.
No 18-year old life coaches.
No half-baked marketing plan that doesn’t have a real “reason to believe.”
No personal brand that doesn’t have heart and soul.
There’s no worthy action without blind faith and passionate hope. You have to have blind faith. No false hope. Otherwise, why do you even care?
You can’t skip the third-quarter of a four-quarter game.
A project isn’t finished until everything has closed out.
You have to go “all in.” Deceive yourself all you want, but there’s no other way.
If you think you’re winning when you’re not living each moment with maximum attitude, effort and energy, then you’re mistaken.
The scoreboard of life says otherwise. What it says is:
“You’re losing.”
Hate to break it to ya.
Going “all-in” means getting to the depth and roots of exactly why you’re doing, what you’re doing.
Going “all in” means that you have a game plan. You have clarity and determination. You have goals, motivation, determination and direction.
You’re focused, man.
Going “all in” is leading from the heart — your heart. This is leadership at its finest. Going “all in” means there’s no shortcuts, no gimmicks, no B.S. excuses.
There really are no excuses. Eliminate them from your thoughts and vocabulary.
No excuses for others.
But most importantly, no excuses for yourself.
Going all-in means honesty, integrity and discipline. Then, there’s no cutting corners.
There are ways to become more efficient, and there are ways to improve to save your time, but ultimately you have to give your all to everything you do.
So tell the quick-fix salesman and that cranky voice inside your head that you will not be sold on half-ass measures and unfinished solutions.
Look back to where you’ve been. Look inside you to where you are now. And look forward with optimism and joy in your heart.
Do it with all your heart- with the spirit of a warrior- and you will never live with regret.
Stop cutting corners. Enjoy the ride. Join my newsletter for the best, free emotional intelligence and productivity content on the Internet! | https://medium.com/real-1-0/stop-cutting-corners-8c414582ba16 | ['Christopher D. Connors'] | 2020-10-20 20:59:36.646000+00:00 | ['Inspiration', 'Life Lessons', 'Motivation', 'Personal Development', 'Self Improvement'] |
What is Set in Java? | Photo by Joe Green on Unsplash
To understand what a set is, let’s understand what collections are.
A collection is a group of elements bundled into one instance. This can be a set, an arraylist, a linkedlist or a map. Each represents a different way to handle all these elements
A set is a collection that cannot contain duplicate elements.
Hm….what does that mean? Sounds straightforward but aren’t all elements unique in a sense?
It means in a set there will never exist two values in the same set.
Let’s take this array as an example: [1,3,3,4,2,5]. If we insert this array into the set, the set will contain [1,3,4,2,5] as its values.
Here is an example of how a set can be instantiated and iterated to get all of its values:
Notice a few interesting things here:
I instantiated mySet as a HashSet, not a regular set.
I added null as a string and null itself both twice into the set.
First of all, set is an interface and therefore cannot be instantiated directly. You have to instantiate it either as a HashSet or a LinkedHashSet, the ordered and the unordered version of a Set. I will talk about the difference between the two later on.
Now to the second point. Notice I added null as a string twice and the null value itself twice as well. What will actually happen is the value null and the string null will both appear in the set once:
If you recall, I did mention about how HashSet is unordered. What does that really mean?
It means that the order in which I add the items will not be persisted.
When I create the iterator to iterate through the items above, the items are not accessed or printed in the order in which we add in(we add in 3 and then “apple” but in reality we get null and then “apple pie”).
If instead of instantiating a hashset, we set mySet to a new LinkedHashSet instance, the iteration order would be as following instead: 3, apple, null as the value, null as the string, and then apple pie. The order in which it is inputed into the LinkedHashSet would be persisted.
That’s pretty much it to sets in general but here are a few additional things I want to cover: how do you convert a set to list and how can we convert a list to a set, vice versa?
Here’s how to convert a set to a list, vice versa:
If there are any questions or comments, please comment below.
Happy coding! | https://medium.com/dev-genius/what-is-set-in-java-5d8f8b6f35da | ['Michael Tong'] | 2020-09-16 19:00:40.742000+00:00 | ['Data Structures', 'Java8', 'Java'] |
Seduced by the Sea | Silence is all she wanted, anonymity she craved
Emerging from the ice-cold water, Jo lifts her face towards the sky, takes a gulp full of air and plunges back into the silence of the water.
Just above the surface the sounds pierce her ear drums, the noise of the seagulls squawking, and the pandemonium on the beach.
Jo swims to escape from it all, she feels more at home amongst the sea creatures.
Silence and anonymity are what draws her to the sea.
Pretending to be a dolphin, slipping her flippers onto her feet, yanking a bathing cap over her long hair, tucking the strands in to hide her humanness.
In hopes that a family of dolphins may mistake her for their own.
She was obsessed by their permanent smile, nonchalant demeanour and sleek shape.
Dolphins lacked the pressure Jo faced, language that contained high-pitch screeches was sweet music to her ears; the simplicity of it.
A Childish fantasy that has never left her.
Diving into the waves, Jo felt no emotional tugs to the world out there.
To live on land was hard work, words staggered out like drunken soldiers, when beneath the surface no words were needed.
Making friends was equally laborious.
Oh, to be a dolphin, free to be themselves, slipping and twirling through the water, flippers and bathing caps already built in.
Little did she know a dolphin’s permanent grin did not signify an easy life.
They too competed for a place in the school of dolphins, adolescence could be as traumatic for them as for her.
It was the silence and anonymity Jo envied.
Life beneath the sea served as only a temporary refuge until the water puckered her skin and the cold entered her bones.
She knew she could not remain there.
Walking towards the water’s edge, looking up towards the sky, it was time to return home. | https://medium.com/afwp/seduced-by-the-sea-6f53d3d51767 | ['Rebecca Jane Warrington'] | 2020-12-28 15:32:46.274000+00:00 | ['Mindfulness', 'Mental Health', 'Life Lessons', 'Life', 'Self Improvement'] |
Leadership: Expectations vs Reality | Daydreaming is part of the human experience. It helps us get through difficult days, giving us hope for the future. As kids, we daydream about fantastical scenarios of our adult life. We dream of living in Africa, running wild with our pet cheetah. Instead, we grow up to live in Brooklyn hunched over a laptop 12 hours a day.
When we want something, we imagine how good we’ll feel, how having this thing will change our lives. When reality doesn’t match our imagination, we’re disappointed. One of the best film representations of the chasm between expectations and reality is 500 Days of Summer. In the film Tom spends the better part of a year courting Summer. He puts all his hopes and dreams into building a relationship with her. She breaks up with him. He’s bitterly disappointed. A few months later they run into each other at a wedding. After they spend the evening laughing, dancing and reminiscing about their past she invites him to a party at her apartment. He arrives excited about the promise of a reunion. He imagines all the fun they’ll have, expecting they’ll be back together for good. His expectations fall apart almost as soon as he enters the apartment. Instead of a passionate kiss she greets him with a friendly hug. Rather than spending the party arm-in-arm he pours stiff drinks by himself. The final gut punch comes when he realizes she’s engaged to someone else. Reality takes over, his hopes for the relationship dashed. He sinks into a depression, leaving his bed only for junk food and whiskey.
We may not be lovesick young adults hoping our crush likes us back. Still, how we feel when starting a new relationship is similar to how we feel taking our first leadership position. We bring a load of expectations along with us. We’re hopeful this role will give us the career satisfaction we seek. The gap between expectations and reality is one of the most common reasons for coaching. While a leader might say they want help with working with other execs, having influence or making better decisions, what they often mean is: leadership isn’t what I expected and I don’t know how to deal with it. The dream slides away, leaving shadows of what could have been. More than one has confessed that the reality of leadership was too much, they longed to return to being part of the team.
Common leadership expectations vs reality gaps
Expectation: More autonomy
Reality: Less control than you think
You do get some more autonomy in your area, it’s not as much as you hope. You’ll have to include outside factors like other teams or larger company goals and often won’t be the final decision maker. While it might be due to a controlling boss, the bigger factor is a larger role in organizational planning. Leadership increases organizational demands, widening the complexity of decision making, reducing an individual leader’s autonomy. As you work through the tangle of organizational conflict and decision making you need to hone those collaboration skills. Instead of focusing on having control, you need to concentrate on building strong relationships. The faster you lose the ego and focus on others, the better experience you’ll have as a leader. | https://medium.com/swlh/leadership-expectations-vs-reality-766f746925dd | ['Suzan Bond'] | 2020-12-22 17:23:33.858000+00:00 | ['Management', 'Startup', 'Self', 'Leadership', 'Work'] |
Python eval() built-in-function | Let us understand the eval() built-in-function in python.
This would be a short article about eval function in python, wherein I would be explaining to you about eval function, its syntax, and few questions that are often asked in interviews so that you clearly understand it and answer those questions in ease. To get the full code, click on my GitHub repository down below:
Let's get started:
1. What is eval () in python and what is its syntax?
Answer: eval is a built-in- function used in python, eval function parses the expression argument and evaluates it as a python expression. In simple words, the eval function evaluates the “String” like a python expression and returns the result as an integer.
Syntax
The syntax of the eval function is as shown below:
eval(expression, [globals[, locals]])
Arguments or Parameters
The arguments or parameters of eval function are strings, also optionally global and locals can be used as an argument inside eval function, but the globals must be represented as a dictionary and the locals as a mapped object.
Return Value
The return value would be the result of the evaluated expression. Often the return type would be an integer. | https://towardsdatascience.com/python-eval-built-in-function-601f87db191 | ['Tanu N Prabhu'] | 2019-10-19 20:14:29.375000+00:00 | ['Python', 'Python3', 'Functional Programming', 'Programming', 'Python Programming'] |
The Best (and Worst way) of Solving the Palindrome Algorithm Question | Some silly palindromes
I have had a few technical interviews lately asking questions that are variations or actually include the palindrome question. This is why I thought it would be relevant to shed some light on how I learned to solve this interview question in the hopes that someone could benefit from how to solve it by reading how I did it.
As far as I know there are 4 ways of solving this question, but when I refer to the best or worst way I am referring to best and worst big-O notation in both time and space complexity. I should also note that I will be solving this problem in JavaScript.
The Problem
A typical string manipulation question, the palindrome question states that if given a string that is not empty, to write a function that will determine if the word or words is spelled the same forwards and backwards.
The final output to determine this could be true or false as anything in JavaScript has a truthy or falsy value anyways. This doesn’t necessarily mean that you have to return the boolean true or false each time however. In other words if you are trying to test an if condition in a function by saying that it needs to return some kind of specific answer, but the input given as an argument would not make it possible to return that answer, then it would return false if tested in a JavaScript console.
Worst way — Making a new reversed string
Approach Summary: The worst way would be to create a new string and go through each letter backwards from the original string to put every letter in reverse and then compare the two strings.
How to solve: We’ll start by defining the function’s name which it is determining if a string is a palindrome so it is named relevantly. Our input would be the string every time the function is called.
function isPalindrome(string) {
Next we’ll set a variable for our eventual reversed string:
const reversedString = ‘’
Now to go through the string itself to concatenate the reversed string we will make a for loop, but it will start from the last letter of the string (let i = string.length -1), then it will keep going until it reaches the first letter of the input (i>=0), and i which represents each index, will decrease every time it runs (i --).
for (let i = string.length -1; i>=0; i — ){
Now to put what goes inside the loop I am using the += operator to concatenate every letter into my empty string which will represent my first string backwards. String[i] represents each letter and it is important to remember that reversedString += string[i] is the same as reversedString = reversedString + string[i].
reversedString += string[i];
}
This next line will be written to return a truthy value where the original string is the same as the reversed string.
return string === reversedString
}
Altogether it looks like the following:
function isPalindrome(string) {
const reversedString = ‘’
for (let i = string.length -1; i>=0; i — ){
reversedString += string[i];
}
return string === reversedString
}
Where this solution fails to be the best would be the fact that it has a big-O notation of O(n^2) for time complexity, not the best on the charts, and O(n) on the space complexity side, pretty good. However this compared to the second solution, clearly shows the other solution having a better time and space complexity, which I will get into soon. The important takeaway to see here is this solution has a bad of a time complexity mainly because the program has to create a brand new string which takes longer.
Best way — Using the pointer system to compare the furthest left and right side
Approach Summary: On the other hand, the best way to solve this problem would be to define a left and right pointer and compare the letter that each is pointing to. Then using a while loop, while the right pointer is on the right and while the left pointer is on the left, if at any point in time the letters the pointers are pointing to are not the same then return false. Otherwise, return true.
How to solve: To begin solving this, it starts off the same way as the last to set up a function where we choose an appropriate name and the input is still the string given.
function isPalindrome(string) {
This time however we are going to set two variables to represent the left and right pointers. The left pointer will represent the first index of the string which starts at 0. To compliment the left, we have a right pointer set equal to the string’s length -1 which represents the furthest number on the right, or last index.
let leftPointer = 0
let rightPointer = string.length-1
Now using the while loop we will say that we want to run the following logic while the left pointer is on the left side compared to the right pointer.
while (leftPointer < rightPointer){
Our logic will contain an if condition that says that if the left letter is not the same as the right letter the first time the loop goes through, then return false. Which means that if our string was “abcbz” and if “a” was not the same as “z” then it would return false at this point.
if(string[leftPointer] !== string[rightPointer]) return false;
Let’s say our string was “abcba” though. The letter “a” and letter “a” on the both ends are the same so it would pass the return false line area and go to the next line which will increment or decrement depending on what it is. If it is a left pointer it will move more to the right and vice versa so that the pointers can compare both sides of the string.
leftPointer++;
rightPointer — ;
}
Now once the pointers are at the same point in the middle, the program will break out of the while loop and will run into the return true statement because that would indicate it never made the if condition true where both sides aren’t the same. This would mean it is a palindrome.
return true
}
Final results for this solution look like this:
function isPalindrome(string) {
let leftPointer = 0
let rightPointer = string.length-1
while (leftPointer < rightPointer){
if(string[leftPointer] !== string[rightPointer]) return false;
leftPointer++;
rightPointer — ;
}
return true
}
At the end of it all, we are left with a time complexity of O(n) and a space complexity of O(1). Why does this approach have better time and space complexity? This is because it doesn’t have to create a new string, all the program has to do is use some simple pointers working with the same size of an input and ultimately return true or false which takes less time than the first solution.
The palindrome question does a great job testing developer’s basic understanding of how to manipulate a string. It is a problem simple enough to solve to have a clearer chance to understand the reasons behind the big-O notations given to each which developers can use in more complicated problems down the road. I hope that this shorter blog was simple enough to help you understand how to solve this question by ultimately breaking down the parts of each solution and give some insight into how big-O notations are given. Until next time! | https://medium.com/javascript-in-plain-english/the-best-and-worst-way-of-solving-the-palindrome-question-4b7d2f9ada06 | ['Irene Scott'] | 2020-12-28 07:59:11.049000+00:00 | ['Software Development', 'Software Engineering', 'Algorithms', 'JavaScript', 'Web Development'] |
Design Stories: How Creative Melissa Lissone’s past has shaped her present | Adyen, for all intents and purposes, is a fintech company. But in industry-label only.
The company is more than just a payment solution. There’s a certain culture behind Adyen — one without hierarchy, without ego. But with plenty of style and panache. This really is a band of rebels, or self-proclaimed misfits.
From designers to developers to marketing geniuses. This is the real competitive advantage for the company known for dominating an industry.
One incredible team member is Creative, Melissa Lissone.
Melissa’s tale is the first of many stories we’ll share in our Design Stories — a new way to hear from Adyen about their experience, early successes, and tips for creating great designs.
How it all began
Melissa always loved to create things and started studying at a school where she was introduced with different ways to be creative — including photography, 3D furniture design, creating videos, graphic design and even setting up window shops.
Though she really wanted to get her bachelor’s, Melissa felt an ‘art academy’ would be too artsy as she tended to lean more towards the commercial side of design — focusing more on problem solving and work with briefings.
“For me it’s important to make sense of the design I’m creating and have a good concept behind it.”
So she ditched the traditional route of art school and instead studied at the prestigious University of Lincoln, where the focus was more on concept development. This piqued an interest in the theory where Melissa went on to study applied creativity at Amsterdam’s Hallo Academy.
Keeping it in the family
Growing up with two older brothers, Melissa was interested in technology from a young age. As she explains,
“My brothers would explain how technology worked — from DVD players to computers. But they always said each lesson would happen only once. So I needed to pay close attention, and that’s what I did.”
At the ripe age of 11, Melissa was installing CD drives for friends. But it wasn’t just her sibling’s fascination that influenced her love affair with technology.
A photographer from Indonesia — a part of the world synonymous with being head of the latest tech trends — Melissa’s grandfather was usually one of the first to implement color into photographs in the Netherlands. Melissa remembers visiting her grandfather and every week he’d pull out a new lens or filter or any myriad of photography-related tech. They would ooh and ahh and play around with the latest innovation together.
Her father was also influential in her life — always bringing home old computers from work. In fact, there wasn’t a time in her life that she can’t remember tinkering around with gadgets. To this day, her father still proudly shows off his latest geeky find from eBay.
Ironically enough, her father was an IT specialist at a bank. So when Melissa had the chance to work for in fintech, it was the perfect continuation to her upbringing and interests, and she felt at home in the industry.
From freelancing to full-time
Upon the arrival of her first born (Melissa is actually expecting number two in September), she realized that with freelancing, there wasn’t much of a routine, often making it difficult to find a rhythm for her son.
Having noticed that more and more companies and brands were establishing their own creative departments in-house made her wonder how it would be on the “other” side.
But just how much greener the grass is on the other side of the “freelance” fence?
Melissa, with plenty already on her table as full-time mom and full-time art director, also found herself cofounder of a new business venture. This is where the pros of structure overcame the cons of routine. | https://medium.com/adyen-design/design-stories-how-creative-melissa-lissones-past-has-shaped-her-present-95620583d65e | [] | 2017-09-18 12:49:07.377000+00:00 | ['Fintech', 'Design'] |
My journey to learn Python as a Petroleum Engineer | Photo by Patrick Tomasso on Unsplash
To be called literate in the 2020s there is a good chance you must know how to code. It may seem an exaggeration, but I certainly believe it would be true for engineering roles. I still remember vividly when in 2014 President Obama participated in the ‘Hour of Code’ to encourage students to pick up coding. That students are entering the job market now and in the coming years and I cannot even imagine how much of impact they would have in the way we live and work. One thing I know is learning how to code is fun, liberating and can save you lots of trouble in the long run while makes you look smarter than you are (certainly true in my case).
Now if you want to start learning Python, you do not have a problem of finding resources, you will have a unique problem of picking the right one. Sometimes it seems to me there are more python tutorials online than the population of earth! (a bit exaggeration but you get my point). My experience in learning new skills over the last few years has thought me one thing, the most time-consuming yet crucial part is mapping the learning path. If I get that part right, things fall in place nicely. I spent a lot of time online and offline discussing with experienced users to map the path for my learning and over the last few years, I have helped many friends and colleagues with where to start Python and how to approach it. In my previous workplace, I advocated for a formal python training to leadership, once I got their approval I researched, picked and tailored the course to suits our Petro-technical engineers and the course was successfully delivered on February 2019.
Photo by Dlanor S on Unsplash
Today I am going to share with you some of the frequent questions I receive and my answers to them. My hope is these questions and answers would ease your way on your journey to learn Python. I share the post on my LinkedIn and my Medium page, and I hope to update it regularly. Please use these as a guideline alongside your research. If you find alternative resources that were helpful please feel free to share them with me or comment them down below for everyone’s benefit. Also please do not hesitate to ask away your questions in the comment or direct message, I would be happy if I can help.
Question 1: Python or R, which one should I learn?
If you are an engineer or want to code for an engineering solution, my answer is Python. Python is the second most popular programming language now while R is 14th and they were in similar rank not long ago. You can read about some of their differences here. As a beginner for at least the first year of learning python, there is a very good chance that any problem you face has an answer ready for you on the internet and that makes the learning process a smoother journey.
Question 2: Python2 or Python3?
Python 2 is dead. So, if you are about to start learning Python, don’t even consider this question as part of your research.
Question 3: Where to start learning Python?
I picked up a few different courses, to begin with. As an engineer, I work with excel sheets, PDF files, office documents every day. Why should I pick up a course that teaches python very well, but the projects are about tic-tac-toe and some other random games? This is why “Automate Boring Stuff with Python” is my top recommendation. Al’s book and delivery are great. More importantly, the course is very practical which allows you to start coding on your small projects very quickly. Those small wins hopefully are going to motivate you and make it easy for you to commit to learning.
Question 4: I finished “Automate Boring Stuff with Python”, now what?
If writing scripts is all you want to know from Python, then “Automate Boring Stuff with Python” should be enough.
If you want to learn python more in-depth or you are thinking on developing applications with more complexity than automating scripts and such, then you may want to have a look at a computer science course with a focus on Python. I highly recommend the 2-part series “Introduction to Computer Science and Programming Using Python” by MITx which is available through the MIT website and Edx. What you get out of the course is how to frame your mind to code like a computer scientist. It is invaluable in making you a more efficient programmer and more comfortable with Python and its libraries’ documentation.
Question 5: Is there any other resources to learn from.
Yes, plenty, I list them here and try to explain in a few short sentences why they have been shortlisted and worth your time:
Think Python:
Great book for an introduction to Python, available for free. What I liked the most was how each chapter has some exercises to help you to judge your understanding.
A whirlwind Tour of Python:
It is a fast-paced introduction to Python and it is tailored to those who are new to Python but have programming background in other languages. If you are profieceint with VBA and you want to pick up Python, maybe this would be a good starting point for you. It is available for free from various sources, such as here and here.
Python for Data Analysis:
If you are ready to learn pandas, then why not learning it from the person who created it. Wes’s book is the go-to for learning pandas, combined that with his videos on YouTube and then learning pandas would be fun. You can purchase the book here.
Medium:
I found Medium website and “towards data science” publishing page particularly useful for finding like-minded people, latest trends and general coding/python tips and tricks that would be handy.
Reddit:
Reddit needs no introduction. I found learnpython and datascience subreddits very useful forums to follow.
Stackoverflow:
Last but not least is stackoverflow. There is a very good chance that any python questions that come to your mind have already been answered here. So, you would learn to rely on this very early in your learning journey.
Question 6: How can I practice.
Try to find easy projects around you. I know this is easier said than done. Finding a good personal project would be one the most challenging part to you in this journey as I had a hard time finding a project that worth doing, yet can deduce to simple challenges so it fits in my skill set and I can approach it.
If you cannot think of any projects early on, don’t panic, it is as natural as losing breath after running a marathon( in my case around the block). The solution is codewars. I found codewars around 2 years ago and that helped me solidify many of my learnings. It gives you small projects to practice your python skills, try to start from basics and gradually increase the level of difficulties.
I have developed a few small and big projects with Python over the last few years, I share two of my favourites below to hopefully gives you some motivation and idea.
The first one is a personal project I did at home using Raspberry Pi and my Solar system. My python code requests the energy output of my solar system every minute, collects it in a database and plots them for me on my command. I can check how many sunny days I had in a month and monitor the quality of my panels over time.
The second one is work-related and it is a project I did at my current role. My application successfully reduced the time required to generate gas and water type curves from over a week for our big fields to minutes, by automating a lot of calculations which were previously run in Excel. | https://aedalat.medium.com/my-journey-to-learn-python-as-a-petroleum-engineer-dfb5a2bbbe88 | ['Amin Noor'] | 2020-08-15 09:31:15.579000+00:00 | ['Python', 'Hour Of Code', 'Petroleum Engineering', 'FAQ', 'Learning To Code'] |
Discretisation Using Decision Trees | 1. Introduction
Discretisation is the process of transforming continuous variables into discrete variables by creating a set of contiguous intervals that span the range of variable values.
1.1 Discretisation helps handle outliers and highly skewed variables
Discretisation helps handle outliers by placing these values into the lower or higher intervals together with the remaining inlier values of the distribution. Thus, these outlier observations no longer differ from the rest of the values at the tails of the distribution, as they are now all together in the same interval/bucket. In addition, by creating appropriate bins or intervals, discretisation can help spread the values of a skewed variable across a set of bins with an equal number of observations.
1.2 Discretisation approaches
There are several approaches to transform continuous variables into discrete ones. This process is also known as binning, with each bin being each interval. Discretization methods fall into 2 categories: supervised and unsupervised.
Unsupervised methods do not use any information, other than the variable distribution, to create the contiguous bins in which the values will be placed.
Supervised methods typically use target information in order to create bins or intervals.
We will only talk about supervised discretisation method using decision trees here in this article
But before moving to the next step, let’s load a dataset on which we will perform the discretisation.
Discretisation with decision trees
Discretisation with Decision Trees consists of using a decision tree to identify the optimal splitting points that would determine the bins or contiguous intervals:
Step 1: First it trains a decision tree of limited depth (2, 3 or 4) using the variable we want to discretize to predict the target.
Step 2: The original variable values are then replaced by the probability returned by the tree. The probability is the same for all the observations within a single bin, thus replacing by the probability is equivalent to grouping the observations within the cut-off decided by the decision tree.
Advantages :
The probabilistic predictions returned decision tree are monotonically related to the target.
The new bins show decreased entropy, this is the observations within each bucket/bin are more similar to themselves than to those of other buckets/bins.
The tree finds the bins automatically.
Disadvantages :
It may cause over-fitting
More importantly, some tuning of tree parameters might need to be done to obtain the optimal splits (e.g., depth, the minimum number of samples in one partition, the maximum number of partitions, and a minimum information gain). This it can be time-consuming.
Let ’s see how to perform discretization with decision trees using the Titanic dataset.
Import useful Libraries
IN[1]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
2. Load the dataset
IN[2]:
data = pd.read_csv('titanic.csv',usecols =['Age','Fare','Survived'])
data.head()
3. Separate the data into train and test set
IN[3]:
X_train, X_test, y_train, y_test = train_test_split(data[['Age', 'Fare', 'Survived']],data.Survived , test_size = 0.3)
So, assuming that we do not have missing values in the dataset (or even if we have missing data available in the dataset, we have imputed them ). I am leaving this part because my main goal is to show how discretisation work.
So, Now let’s visualize our data such that we gain some insights out of it and understand the variables
4. Let’s build a classification tree using the age to predict Survived in order to discretise the age variable.
IN[4]:
tree_model = DecisionTreeClassifier(max_depth=2) tree_model.fit(X_train.Age.to_frame(), X_train.Survived) X_train['Age_tree']=tree_model.predict_proba(X_train.Age.to_frame())[:,1] X_train.head(10)
Now that we have a classification model using the age variable to predict the Survived variable.
The newly created variable Age_tree contains the probability of the data point belonging to the corresponding class
5. Checking the number of unique values present in Age_tree variable
IN[5]:
X_train.Age_tree.unique()
Why only four probabilities right?
Above in input four, we have mentioned max_depth = 2. A tree of depth 2, makes 2 splits, therefore generating 4 buckets, that is why we see 4 different probabilities in the output above.
6. Check the relationship between the discretized variable Age_tree and the target Survived .
IN[6]:
fig = plt.figure()
fig = X_train.groupby(['Age_tree'])['Survived'].mean().plot()
fig.set_title('Monotonic relationship between discretised Age and target')
fig.set_ylabel('Survived')
Here, we can see a monotonic relationship between the discretised variable and Age_tree the target variable Survived . That plot suggests that Age_tree seems like a good predictor of the target variable Survived .
7. Checking the number of passengers per probabilistic bucket/bin to under the distribution of the discretized variable.
IN[7]:
X_train.groupby(['Age_tree'])['Survived'].count().plot.bar()
Let's check the Age limits buckets generated by the tree by capturing the minimum and maximum age per each probability bucket to get an idea of the bucket cut-offs.
8. Checking Age limit buckets generated by the tree
IN[7]:
pd.concat( [X_train.groupby(['Age_tree'])['Age'].min(),
X_train.groupby(['Age_tree'])['Age'].max()], axis=1)
Thus, the decision tree generated the buckets : 0–11 , 12–15 , 16–63 and
46–80 , with probabilities of survival of 0.51 , 0.81 , 0.37 and 0.10 respectively.
9. Visualizing the tree.
IN[8]:
with open("tree_model.txt", "w") as f:
f = export_graphviz(tree_model, out_file=f) from IPython.display import Image
from IPython.core.display import HTML
PATH = "tree_visualisation.png"
Image(filename = PATH , width=1000, height=1000)
Tree Visualisation
As we can see from the plot, we obtain 4 bins for max_depth=2 .
As I mentioned earlier, there are a number of parameters that we could optimise to obtain the best bin split using decision trees. Below I will optimise the tree depth for a demonstration. But remember that you could also optimise the remaining parameters of the decision tree. Visit sklearn website to see which other parameters can be optimised.
10. Selecting the optimal depth of the tree
I will build trees of different depths, and will calculate the roc-auc determined for the variable and the target for each tree I will then choose the depth that generates the best roc-auc
IN[9]:
score_ls = [] # here I will store the roc auc
score_std_ls = [] # here I will store the standard deviation of the roc_auc for tree_depth in [1,2,3,4]:
tree_model = DecisionTreeClassifier(max_depth=tree_depth)
scores = cross_val_score(tree_model, X_train.Age.to_frame(),
y_train, cv=3, scoring='roc_auc')
score_ls.append(np.mean(scores))
score_std_ls.append(np.std(scores))
temp = pd.concat([pd.Series([1,2,3,4]), pd.Series(score_ls), pd.Series(score_std_ls)], axis=1) temp.columns = ['depth', 'roc_auc_mean', 'roc_auc_std'] print(temp)
Here, we can easily observe that we obtained the best roc-auc using depths of 1 or 2. I will select depth of 2 to proceed.
11. Transform the Age variable using tree
IN[10]:
tree_model = DecisionTreeClassifier(max_depth=2) tree_model.fit(X_train.Age.to_frame(), X_train.Survived) X_train['Age_tree'] = tree_model.predict_proba(X_train.Age.to_frame())[:,1] X_test['Age_tree'] = tree_model.predict_proba(X_test.Age.to_frame())[:,1]
12. Inspecting the transformed age variable in the train set
IN[11]:
X_train.head()
13. Checking the unique values of each bin in the train set
IN[12]:
X_train.Age_tree.unique()
14. Inspecting the transformed age variable in the test set
IN[13]:
X_test.head()
15. Checking the unique values of each bin in the train set
IN[14]:
X_test.Age_tree.unique()
Now, we have successfully discretize the Age variable into four discrete values that might help our model to make better predictions. | https://towardsdatascience.com/discretisation-using-decision-trees-21910483fa4b | ['Akash Dubey'] | 2018-12-24 14:30:40.902000+00:00 | ['Machine Learning', 'Data Science', 'Model', 'Artificial Intelligence', 'Feature Engineering'] |
How to Prevent Broken Data Pipelines with Data Observability | How to Prevent Broken Data Pipelines with Data Observability
And other important lessons for data teams
Image courtesy of Amarnath Tade on Unsplash.
If you work in data, these questions are probably a common occurrence:
“What happened to my dashboard?” “Why is that table missing?” “Who in the world changed the file type from CVS to XLS?!”
And these just scratch the surface. As the number of data sources and complexity of data pipelines increase, data issues are an all-too-common reality, distracting data engineers, data scientists, and data analysts from working on projects that actually move the needle.
In fact, companies spend upwards of $15 million annually tackling data downtime, in other words, periods of time where data is missing, broken, or otherwise erroneous, and 1 in 5 companies have lost a customer due to incomplete or inaccurate data.
So, how do you prevent broken data pipelines and eliminate downtime? The answer lies in traditional approaches to reliable software engineering.
Introducing Data Observability
Developer Operations teams have become an integral component of most engineering organizations. DevOps teams remove silos between software developers and IT, facilitating the seamless and reliable release of software to production.
Observability, a more recent addition to the engineering lexicon, speaks to this need, and refers to the monitoring, tracking, and triaging of incidents to prevent downtime. In the same way that New Relic, DataDog, and other Application Performance Management (APM) solutions ensure reliable software and keep application downtime at bay, Data Observability solves the costly problem of unreliable data.
Instead of putting together a holistic approach to address data downtime, teams often tackle data quality and lineage problems on an ad hoc basis. Much in the same way DevOps applies observability to software, I think it’s about time we leveraged this same blanket of diligence for data.
Data Observability, an organization’s ability to fully understand the health of the data in their system, eliminates data downtime by applying best practices of DevOps Observability to data pipelines. Like its DevOps counterpart, Data Observability uses automated monitoring, alerting, and triaging to identify and evaluate data quality and discoverability issues, leading to healthier pipelines, more productive teams, and happier customers.
To make it easy, I’ve broken down Data Observability into its own five pillars: freshness, distribution, volume, schema, and lineage. Together, these components provide valuable insight into the quality and reliability of your data.
Image courtesy of Barr Moses.
A robust and holistic approach to data observability requires the consistent and reliable monitoring of these five pillars through a centralized interface that serves as a central source of truth about the health of your data.
Data Observability provides an end-to-end solution for your data stack that monitors and alerts for data issues across your data warehouses, data lakes, ETL, and business intelligence, using machine learning to infer and learn your data, proactively identify data issues, assess its impact, and notify those who need to know. By automatically and immediately identifying the root cause of an issue, teams can easily collaborate and resolve problems faster.
Data observability facilitates greater collaboration within data teams by making it easy to identify and resolve issues as they arise, not several hours down the road. Image courtesy of Barr Moses.
Such an approach to data quality and reliability uniquely delivers:
End-to-end observability into all of your data assets. A strong Data Observability solution will connect to your existing data stack, providing visibility into the health of your cloud warehouses, lakes, ETL, and business intelligence tools.
A strong Data Observability solution will connect to your existing data stack, providing visibility into the health of your cloud warehouses, lakes, ETL, and business intelligence tools. ML-powered incident monitoring and resolution. It automatically learns about data environments using historical patterns and intelligently monitors for abnormal behavior, triggering alerts when pipelines break or anomalies emerge. No configuration or threshold setting required.
It automatically learns about data environments using historical patterns and intelligently monitors for abnormal behavior, triggering alerts when pipelines break or anomalies emerge. No configuration or threshold setting required. Security-first architecture that scales with your stack. Implicitly, Data Observability intelligently maps your company’s data assets while at-rest without requiring the extraction of data from your environment and scalability to any data size.
Implicitly, Data Observability intelligently maps your company’s data assets while at-rest without requiring the extraction of data from your environment and scalability to any data size. Automated data catalog and metadata management. Real-time lineage and centralized data cataloguing provide a single pane-of-glass view that allows teams to better understand the accessibility, location, health, and ownership of their data assets, as well as adhere to strict data governance requirements unlike manual catalogs.
Real-time lineage and centralized data cataloguing provide a single pane-of-glass view that allows teams to better understand the accessibility, location, health, and ownership of their data assets, as well as adhere to strict data governance requirements unlike manual catalogs. No-code onboarding. Code-free implementation for out-of-the-box coverage with your existing data stack and seamless collaboration with your teammates.
In the same way that software engineering teams shouldn’t have to settle for buggy code, data engineering teams don’t have to settle for broken data pipelines. By applying the same principles of software application observability and reliability to data, these issues can be identified, resolved and even prevented, giving data teams confidence in their data to deliver valuable insights.
As companies continue to move to the cloud, embrace more distributed data stacks (see: data mesh) and increasingly rely on AI to power previously manual functions (i.e., metadata management), I expect that data will increasingly rely on best practices of DevOps and software engineering to accommodate the growing data needs of the enterprise.
I don’t know about you, but I’m looking forward to a world in which the “why, how, who, and where?” of your data is much easier to answer.
To learn more about data observability, reach out to Barr Moses and the Monte Carlo team. | https://towardsdatascience.com/how-do-you-prevent-broken-data-pipelines-326f3c6d239e | ['Barr Moses'] | 2020-12-03 00:29:24.122000+00:00 | ['Data Analysis', 'Data Quality', 'Data', 'Data Engineering', 'Data Science'] |
The Focusing Illusion: How You Fool Yourself Into Happiness | We can see from the graphs above that reported life satisfaction increases up to the beautiful day, but then declines in the proceeding years. The apparent novelty seems to wear off as the reality of life and all its challenges take hold. Kahneman suggests that perhaps we get married in the hope the future will be better than today, or that we may maintain the blissful status quo. He cites Daniel Gilbert and Timothy Wilson’s research and asks if we have become victim to the “massive error” of affective forecasting [3] which accounts for the error in forecasting our future feelings about life.
For example, we want to be able to predict whether we will get married and have children, because we believe these life events are crucial determinants of happiness. On the marriage study results, Kahneman says that on the day a couple marry, they may know that rates of divorce and separation are high, but don’t apply this apply this to themselves. He says we can, of course, explain this data as representing a normal adjustment to life.
But Kahneman says we instead need to examine “the heuristics of judgement”. Or, how it is we arrive at answers to questions such as; “How satisfied are you with your life? and “How happy are you these days?” He says that these questions are not as straightforward as those such as “what is your telephone number?” but respondents often arrive at answers to all of these questions in only a few seconds. Kahneman says that people tend to have ready-made answers, or answers that recent events influence us. He says that this represents the fast acting System 1 which jumps into gear with little conscious control or deliberation on our part.
The Focusing Illusion: Take Happiness With A Pinch of Salt
Any apparent single event or series of events can influence our perception of life satisfaction and happiness. They are the perceived outcomes of what Kahneman calls The Focusing Illusion. More often, he says, we are not even aware that our minds have taken over. System 1 substitutes our interpretations of simpler life events for global evaluations of life. An illustration of this comes from a study by Norbert Schwarz in his 1988 examination of priming and communication [4].
Schwarz and colleagues ran experiments that examined the way people use information when making global judgments. In particular, they explored the way in which a question about a specific component of life satisfaction influenced a subsequent judgment of overall life satisfaction. Before completing the questionnaire, participants were asked to photocopy a document as a favour. Perceiving it not part of the study, they obliged. Half of the participants found coins on the photocopier planted by Schwarz, and results subsequently showed the lucky ones were higher on life satisfaction.
Of course, chance events of good or bad fortune are not the only influences on our perception of global life happiness. Recent history, as shown in the marriage data above, life tragedies, health, career success, family and financial circumstances, peer group influences and global events all bear heavy on our perception of happiness. However, Kahneman cautions that our evaluation of overall happiness likely comes down to a small sample of available concepts rather than a measured evaluation. Therefore, we might be better served taking these snapshot evaluations with, as he suggests, a pinch of salt.
“Even newly weds who are luck enough to enjoy a state of happy preoccupation with their love will eventually return to earth, and their experienced wellbeing will again depend, as it does for the rest of us, on the environment and activities of the present moment” - Daniel Kahneman | Cognitive Psychologist
Attention: The Key To Happiness
According to Kahneman, attention is the key to the question of life happiness and suggests that it is the events of now that really matter. Where we consider our happiness about life, we are bound to be influenced by recent events, and the marriage graphs reinforce this idea. Kahneman’s studies have shown that by measuring the speed of participant responses, considered evaluation of life happiness is generally absent.
When it comes to the focusing illusion, nothing in life is as important as you think it is when you are thinking about it, Kahneman says. The basis of the focusing illusion, he continues, is “what you see is all there is”. Or, you’re giving too much weight to a single factor as a determinant of wellbeing and happiness.
So what’s the bottom line?
Seems like a roundabout way of saying, if you want happiness, climb down from your head and get into the present moment. It seems to me that in all psychological investigation, we come to the same inevitable answer. That is, happiness is available now. As we allow our minds to drift to fanciful notions of the future, we take our eyes off the ground and miss the holes in the road. Same goes for lamenting the past–we fail to be present.
Where is our attention set? To what are we giving our focus and time?
We seem forever in need of escape from the only moment that life occurs. The car will make me happy, or the wife or husband. The new job will make me happy, the TV or the movies or some gadget or other. New lips or new tits, a rock hard six pack or bronzed skin. Whatever it is that will make me happy, fulfilled and whole, it’s not here and I don’t already have it. It’s somewhere out there.
So we keep looking, meanwhile creating grand illusions to keep us from ourselves. | https://medium.com/the-reflectionist/the-focusing-illusion-how-you-fool-yourself-into-happiness-e340b184d9df | ['Larry G. Maguire'] | 2020-11-15 21:25:31.006000+00:00 | ['Self', 'Wellbeing', 'Happiness', 'Psychology', 'Focus'] |
37 Video Marketing Stats You Need To Know For 2017 | With the meteoric rise of video, It is not difficult to understand why it is becoming a necessity for marketers in 2017. This comprehensive list of video statistics will help you understand why video is such a big deal.
For example, mobile video is expected to increase 11x between 2016 and 2020, and 67% of Millennials agree that they can find a YouTube video on anything they want to learn.
This just goes to show that people are looking for video.
To communicate with their audience, marketers are realizing more and more that video is the way to get their message across. The problem in the past has been that video takes too much time and costs too much money.
2017 is the year marketers can no longer ignore video. The numbers don’t lie when it comes to video.
To learn more about how video marketing can help convert customers and increase brand awareness, see our infographic below. It breaks down 37 compelling video marketing statistics into eight different categories: video views by social network, most popular forms of online content, conversion rates, who uses video, your audience, content, video is the new TV, and video ad spending.
The 37 Must Know Statistics About Video Marketing:
1. Between Snapchat (10 billion), Facebook (8 billion), and YouTube alone (4 billion), there are 22 billion daily video views.
2. Be sure to choose the right kind of content for your video. The 3 most popular forms of content are comedy (39%), news (33%), and music (31%).
3. YouTube reports mobile video consumption rises at least 100% year over year.
4. Video in email leads to a 200–300% increase in click-through rates.
5. Including video on a landing page can increase conversions by 80%.
6. After watching a video, 64% of users are more likely to buy a product online.
7. Real estate listings that include a video receive over 400% more inquiries than those that don’t.
8. Combining video with a full-page ad boosts engagement by at least 22%.
9. When you use video on social media, your audience is 10 times more likely to engage with it in some way.
10. 65% of executives will visit a website and 39% will call a vendor after viewing a video.
11. 50% of executives look for more information after seeing a product / service in a video.
12. When you use video, you have a 53 times higher likelihood of ranking on the first page of Google.
13. 86% of colleges and universities have a presence on YouTube.
14. 65% of marketers plan to increase their mobile ad budgets to account for video.
15. 86% of online marketers use video content.
16. 22% of small businesses plan to post a video in the next 12 months.
17. 66% of B2B organizations use video in some capacity in their marketing campaigns, of which 73% report positive results on their ROI.
18. 1 minute of video is equivalent to 1.8 million words to your audience.
19. 90% of users say that product videos are helpful in the decision process.
20. 1/3 of all online activity is spent watching video.
21. 80% of users recall a video ad they have viewed online in the past month.
22. 92% of mobile video consumers share videos with others.
23. 36% of online consumers trust video ads.
24. 75% of executives watch work-related videos on business websites at least once a week.
25. 46% of users take some kind of action after viewing a video ad.
26. 75% of online video viewers have interacted with an online video in the past month.
27. 90% of online video viewers are between the ages of 18 and 34.
28. This same group between the ages of 18 and 34 is collectively expected to spend more than $200 billion annually starting in 2017 and $10 trillion in their lifetimes.
29. Enjoyment of video ads increases purchase intent by 97% and brand association by 139%.
30. 5% of viewers will stop watching a video after 1 minute, and 60% will stop watching by 2 minutes.
31. The average user spends over 16 minutes watching online video ads every month.
32. 4 out of 5 users will click away if a video stalls while loading.
33. 300 hours of video are uploaded to YouTube every minute.
34. 59% of executives would rather watch video than read text.
35. More video content is uploaded every 30 days than all 3 major U.S. TV networks combined have created in the past 30 years.
36. Online video ads receive over 18 times more viewer engagement than TV commercials.
37. In 2017, video ad spending is expected to top $11.4 billion, and video ads make up 35% of total online ad spending.
Bonus: | https://medium.com/rendrfx/37-video-marketing-stats-you-need-to-know-for-2017-452e27d09bff | ['Peter Schroeder'] | 2017-03-21 13:29:45.243000+00:00 | ['Advertising', 'Marketing', 'Digital Marketing', 'Video', 'Video Marketing'] |
Marketers: 4 Ways to Prove to Millennials You’re Paying Attention | Now over 80 million members strong, millennials have beaten out baby boomers as the largest living generation in the U.S. And with their digital-savvy mindsets and devices constantly at their fingertips, they’re also the most distracted generation of all time. 95% of millennials are doing other things while shopping — watching TV, waiting in line, dining with friends, Ubering to their next destination, and even working. They’re also shifting between media platforms up to 27 times per hour.
When it comes to successfully selling to millennials and driving them along the purchase funnel, keeping them focused and engaged is key. Hyperconnectivity and digital distraction continue to be constant hurdles for brands across all industries, who all seem to be asking the same thing…how do we crack the millennial code?
The best way to figure this out: go straight to the source. The SmarterHQ Millennial Report surveyed 1,000+ millennials on their shopping habits and marketing preferences to help brands everywhere understand how to better cater to their needs. Here are the top takeaways for marketers to start implementing, stat:
1. Stop Sending So Many Emails
Email is one of the top-ranked revenue-driving channels out there. Brands everywhere understand that email marketing is a solid strategy for driving engagement. But while marketers have mastered the use of emojis in subject lines and how to drive higher click-throughs, they don’t quite understand how often millennials want to see these messages in their inboxes. 74% of millennials report they receive too many emails — they feel bombarded by marketing messages and are frustrated with the number of emails they receive from brands. Instead, the majority would rather receive 1–3 marketing emails per month.
How many of you are sending emails multiple times a week, or even multiple times a day? If brands keep sending endless emails, they risk this audience getting overwhelmed and glazing over them altogether. Dial back those sends to deliver only your most important offers and the content millennials will actually care about. Which brings us to…
2. Provide Personalized Content & Product Recommendations
Mass newsletters are a continuing strategy for marketers. But millennials want you to know something: don’t waste your time on batch-and-blasts, because it wastes their time, too. 70% of millennials say they are frustrated by brands sending irrelevant emails; they prefer to receive personalized emails offering certain information, such as discount notifications for previously browsed items or categories, sale notifications for previously carted items, and recommended products based on their interests. And with 70% of millennials also saying they are comfortable with brands tracking their purchasing and browsing behaviors if it means they’ll receive more relevant communications, this is a must-adopt strategy for marketers.
Sending messages catered to individual customers rather than the masses will also help you create brand loyalty with this crowd. Though millennials admit they’re not hardcore brand loyalists per se, their brand loyalty does increase by 28% on average if they receive personalized marketing communications.
3. Make In-Store Experiences Matter More
Probably the most common misconception about millennials’ shopping habits? They no longer shop in stores (which is so very wrong). Despite the growth of online shopping, a whopping 50% of millennials report they still prefer to shop primarily at physical locations. After in-store, 27% prefer desktop/laptop, 22% prefer phone/tablet, and 1% still browse physical catalogues.
Marketers’ priorities and strategies continue to shift to digital, but it’s important for brands to place an emphasis on brick and mortar as well. Better yet, they need to understand how to marry online and offline data to create a seamless experience for customers who prefer to interact with brands on more than one channel (which is also overwhelmingly common). This way, marketers can encourage in-store shoppers to make their next purchases online, and send online shoppers to storefronts for same-day returns.
4. Keep the Deals Coming
Millennials shop for all sorts of reasons. Of the shopper personalities out there, 30% of millennials refer to themselves as a bargain shopper, followed by those shopping for a specific purpose (18%), the researcher types (17%), the casual browsers (14%), and so on. With sale-seekers at the top of this list, if there’s a deal out there, millennials want to know about it.
Over 60% of these bargain shoppers prefer to shop in-store versus online — yet another reason why in-store experiences matter greatly to millennials. Millennials who prefer shopping online are typically researching a product or casually browsing, while the more serious shoppers (the seasonal, bargain, goal-oriented, impulse, or brand loyalist types) favor storefronts. To truly appeal to this audience, brands need to effectively identify and track the purchase behaviors of their customers and communicate future offers based on those behaviors in a timely manner.
Prove to Millennials You’re Paying Attention
Ultimately, brands need to take a step back and make sure their strategies align with what millennials really want: fewer emails in their inboxes, more relevant and personalized messages, prioritized in-store experiences, and timely communication on deals. If marketers start here, you’ll prove to millennials you’re not only listening, but you’ll show you understand and respect their wishes, too. And with that, let the engagement and revenue roll in.
Perspective from Kristen Hamerstadt, VP of Marketing at SmarterHQ | https://medium.com/element-three/marketers-4-ways-to-prove-to-millennials-youre-paying-attention-e35c285caff5 | ['Element Three'] | 2018-06-22 13:41:01.585000+00:00 | ['Personalization', 'Millennials', 'Marketing', 'Digital Marketing', 'Ecommerce'] |
Instant Worldbuilding | You can build a world with only four numbers. Simply put a date at the top of your story. This technique is especially useful for flash or micro-fiction. Or any time you have a word count limitation.
If I put:
- 1776 -
at the top of a story, most American readers will envision the setting of the American Revolution: powdered wigs, horse-drawn carriages, tall-mast sailing ships, etc.
Adding a city to the date can also be helpful. If I put:
Chicago, 1926
at the top of a story, gangsters, flappers, jazz, and prohibition come to mind. | https://medium.com/mark-starlin-writes/instant-worldbuilding-3a42b6c775ea | ['Mark Starlin'] | 2019-09-20 16:51:15.670000+00:00 | ['Worldbuilding', 'Tips', 'Tips And Tricks', 'Other Stories', 'Writing'] |
J.K. Rowling Can’t Be Cancelled | Feminist epic success story J.K. Rowling is a former abuse victim who escaped and never let it happen again, and is now a not-so-fantastic beast to a small tranche of the trans community.
Rowling is the most visible face in the list of people attacked or cancel-bullied by extremist transgender activists — a case study in how vicious a certain segment of the trans community is, aided by natal women lacking basic common sense, or perhaps still fundamentally afraid to challenge anything that is or used to be a man, as too many women are wont to do.
Every time trans-activists try to bully Rowling into silence, she proves she’s more man than they ever were and more woman than they’ll ever be.
She’s got the money, the power, and the courage to stand up to male aggression, however it identifies.
The Harry Potter author and now Public Enemy #1 again for writing a grown-up novel featuring a cross-dressing serial killer originally rose to infamy for stating basic incontrovertible facts about a subject with which she has lifelong experience: Being a woman.
Unlike many of her detractors.
Rational-thinking feminists and abuse victims will recognize in Rowling’s haters the privileged narcissistic certainty that the world was designed for and must be maintained to please those born with XY chromosomes.
Ex-men who hate women and the women who love them
Anyone who thinks the far-right has cornered the market on misogyny, science denial, and ‘fake news’ has never visited the more extreme corners of trans-activism, where (mostly) men-turned-women do what men have done for thousands of years: Lecture women on what being a woman means, and defining who is a ‘real woman’.
Biological, scientific reality be damned.
Rowling has run afoul of trans-extremists and their loyal lapgirls many times, including this summer after she posted her reasons for speaking out on sex and gender issues.
They were already incensed over her biologically impeccable point:
Here’s an inconvenient biological truth: People born with XY chromosomes can’t menstruate. Ever.
Score one for J.K. Rowling‘s acknowledgement of a scientific reality that hasn’t changed for millions of years.
Then there was this Rowling pearl:
The Twitstorm vitriol strongly resembled what you see at Donald Trump’s organized hatefests.
Having just finished Why Does He Do That?: Inside the Minds of Angry and Controlling Men by Lundy Bancroft, I see a strong similarity between abusive cis-heteronormative men and trans-activist extremists.
The extremists share that same sense of entrenched entitlement stemming from being born male into a world designed for them. They may want to be women, but psychologically they’re unwilling to give up their birthright in a world ordered to suit them, and the hell with what anyone else wants.
Too often, their allies on the Regressive Left, as Sam Harris likes to call them, are willing to go along. The Regressive Left’s feminism often capitulates to patriarchal dictates originating in a twisted idea of ‘progressivism’. Meaning an overabundance of tolerance, even for toxic behavior and values they’d never condone from their adversaries on the right. On some fundamental level, Regressive Left feminists cringe from challenging any XY who can make them feel guilty. Easy enough to do: Just claim victimhood, Regressive Left catnip.
Regressive Left feminists won’t challenge female genital mutilation in service to ‘cultural relativism’; are less inclined to condemn ‘honor killings’; and in the U.K., they ignored young female sexual abuse victims because the perpetrators weren’t white enough or Christian enough for them.
They are, therefore, willing to throw their natal sisters under the bus in service to proving how ‘woke’ or progressive they think they are. They’ll ignore the same abuses against women they’d never tolerate from cis-heteronormative males.
Regressive Left feminists repeat what extremist trans-activists have trained them to say — that any man who declares himself a woman is a woman. No backtalk, young lady!
Willing to erase, as Rowling put it, natal women’s lived reality, all to please women who act an awful lot like abusive men.
#IStandWithJKRowling
If you think Greta Thunberg-haters and COVIDiots are out of their minds, or that QAnon’s belief in a Democratic cult of baby-eaters is insane, consider extremist trans-activist reactions to Rowling’s entirely reasonable explanation for why she criticizes the excesses of trans-activism, and why she doesn’t accept a man as a woman on his say-so.
Rowling is hardly ‘transphobic’ or a TERF (Trans-Exclusionary Radical Feminist) for pointing out there should be considerations and perhaps limits to trans women’s rights to women’s safe spaces. At least while we work to define and understand what transitioning means, including questioning whether someone still with a penis is a woman, and how it affects natal women who’ve been badly treated by penis-owners in the past. It’s created moral dilemmas for which there are no easy answers, and you can’t say, ‘Everyone is allowed to define themselves,’ while expecting everyone else to accept their definition. No one’s obligated to validate another’s self-perception.
Rowling also observes medical professionals and scientists are afraid to speak out for children who may be harmed by unquestioned non-medical dogma. Trans-activists have shut down important conversations on medical treatments for confused children and adolescents who think they’re trans. Medical history shows around 80% of them will outgrow their temporary gender dysphoria and the rest, when it’s clear they’re genuinely trans, can then be treated medically as they see fit.
Intelligent questions have been raised, in the meantime, on whether children and young people are pushed into it by well-meaning adults or perhaps even from their peer groups. The latter syndrome is called Rapid Onset Gender Dysphoria and the very idea incenses trans-activist extremists.
For those of us who favor critical thinking and debate, it’s troubling to think we can’t even talk about it without being labeled haters and bigots.
The more I delve into the way so-called ‘trans’ children are being treated medically, before their bodies develop on their own, and the more I read of kids wanting to transform for dysfunctional reasons — like they’re gay and they fear homophobia, or they think males have easier lives than women — the more I believe altering children’s bodies before they’re mature enough to make these decisions themselves amounts to unconscious child abuse.
What trans-extremists don’t care to understand — since when have entitled (ex) men ever listened to women anyway? — is you can critique excesses, especially the ignorance and ignoring of science and informed medical opinion, without denying there’s real gender dysphoria, that a certain number won’t ‘outgrow it’, and when people are old enough to make their own decisions, they can then move forward with whatever they deem they need or want, as adults.
Until then, nothing stops them from ‘social transitioning’, so they can try on their new identity, or several, and see how it fits.
We don’t allow children to make certain decisions for themselves because we realize they don’t have the experience or maturity yet: They can’t vote, drink alcohol, buy tobacco products, join the military, get married, get tattoos or body piercings, or engage in consensual sexual relations until they reach a certain age. Yet some believe impressionable children and youth can make informed decisions about altering their bodies in ways they can’t reverse if they do in fact ‘outgrow’ their trans identification.
We now understand how the brain doesn’t stop developing until around age 25, so it seems foolhardy and downright cruel to push often irreversible surgical procedures on the very young, as we now recognize arbitrarily assigning and surgically ‘fixing’ an infant born with intersex characteristics is unintentionally harmful.
Teenage girls are ‘binding’ their breasts and even exploring removing them. How many adults would support cutting off breasts to prevent cancer if it ran in a girl’s family? Feminist outrage would be justified. But it’s okay when she’s not comfortable with being a girl and no one wants to ask whether there are psychological reasons, before she loses her ability to ever feed her own child?
Rational-minded feminists want to know: Is this how you ‘smash the patriarchy’, by becoming a man? Why is it the female body, once again, is held responsible and not an overly-sexualized view of women in a patriarchal culture?
Similar practices abound in some African countries where female relatives ‘iron’ or flatten a developing girl’s breasts to make her less attractive to men, rather than questioning why the men in their culture think they’re entitled to have sex with her.
J.K. Rowling’s questions about ‘trans’ kids and adolescents shut down brains still entrenched in male privilege. Trans-activist extremists react as any abusive man would when challenged, with threats, vitriol and holy crusades.
Running with the Trumpies
Is there any difference between Trumpoplectic far-right language fits vs. trans-extremists’ and their allies’ colorful responses to Rowling on Twitter? | https://medium.com/illumination-curated/j-k-rowling-cant-be-cancelled-9a9557cf2bb8 | ['Nicole Chardenet'] | 2020-11-30 14:27:26.735000+00:00 | ['Gender Dysphoria', 'Feminism', 'Misogyny', 'Science', 'Transgender'] |
19 Best JupyterLab Extensions for Machine Learning | The ML workspace is an all-in-one web-based integrated development environment dedicated to machine learning and data science.
It is simple to deploy and lets you productively build ML solutions on your own machines. This workspace is a universal solution for developers preloaded with a variety of popular data science libraries (e.g., Tensorflow, PyTorch, Keras, Sklearn) and dev tools (e.g., Jupyter, VS Code, Tensorboard) perfectly configured, optimized, and integrated.
System Monitor is a JupyterLab extension that displays system information (memory and CPU usage). It allows you to monitor your own resource usage.
The extension gives you insight into how many resources your current notebook server and its children (kernels, terminals, etc.) are using, so you can optimize your ML experiments and better manage your work.
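Installation details shift between JupyterLab versions, so treat the exact package names below as assumptions and check the project README for your setup. Around the time of writing, the usual route paired a server-side resource reporter with the front-end extension:

pip install nbresuse
jupyter labextension install jupyterlab-topbar-extension jupyterlab-system-monitor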
LSP (Language Server Protocol) is a JupyterLab extension that enables inter-process communication to support multiple languages you may want to use.
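A typical installation pairs the extension with at least one language server. The commands below assume a recent JupyterLab that supports pip-installed, prebuilt extensions; older versions used the jupyter labextension install route instead:

pip install jupyterlab-lsp
pip install 'python-lsp-server[all]'

The second line installs a language server for Python (earlier setups used the python-language-server package instead); other languages each need their own server.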
LSP integration has several detailed but helpful features:
Hover shows a tooltip with function/class signature, module documentation or any other piece of information that the language server provides
Diagnostics — colors for critical errors, warnings, etc.
Jump to definition — use the context menu entries to jump to definitions
A highlight of references — all usages are highlighted when the cursor is placed on a variable, function, etc.
Automatic completion for certain characters when triggered
Automatic signature suggestion
Advanced static-analysis autocompletion without a running kernel
Rename variables, functions and more, in both notebooks and the file editor
Diagnostic panel
Debugger is a JupyterLab extension that works as a visual debugger for Jupyter notebooks, consoles, and source files. It can help you identify and fix bugs so your machine learning models can work properly.
You can use the kernelspy extension for JupyterLab to inspect debug messages sent between the debugger UI and the kernel.
The JupyterLab debugger can also be helpful when you’re working with VS Code, as you can inspect the debug messages to understand when debug requests are made and compare the behavior of the JupyterLab debugger with the Python debugger in VS Code.
This one is a JupyterLab extension for Git — a free and open-source distributed version control system. It allows you for version controlling. You simply use it by opening the Git extension from the Git tab on the left panel.
This extension gives you flexibility in use as its behavior can be modified via different settings.
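The extension has both a front-end and a server-side piece. A common install sequence (assuming a pip-based setup; older JupyterLab versions also required a lab rebuild) looks like this:

pip install jupyterlab-git
jupyter serverextension enable --py jupyterlab_git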
This extension adds a few Jupytext commands to the command palette. You can use it to select the desired ipynb/text pairing for your notebook. It’s a small functionality but can help you navigate through your notebooks.
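The same pairing is also available from the command line. These are standard Jupytext CLI calls (the notebook name is a placeholder):

jupytext --set-formats ipynb,py:percent notebook.ipynb
jupytext --sync notebook.ipynb

The first command pairs a notebook with a percent-format Python script; the second keeps the paired files in sync after edits on either side.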
nbgather is a JupyterLab extension that has tools for cleaning code, recovering lost code, and comparing versions of code in Jupyter Lab. The extension saves you a history of all code you’ve executed and the outputs it produces to the notebook’s metadata.
After you download the extension, you can clean and compare versions of your code.
nbgather is in an alpha stage of development, so it may still have some glitches. Anyway, it’s worth giving it a shot if you want uncluttered and consistent notebooks.
Variable Inspector is a helpful extension for JupyterLab that shows currently used variables and their values. It’s inspired by the variable inspector extension for jupyter notebooks and by the inspector extension included in jupyterlab.
As for now, it’s still being developed, so you may experience some glitches. Here’s what you can do with it:
Inspect variables for python consoles and notebooks
Inspect matrices in a datagrid viewer (though it may not work for large matrices)
Inspect Jupyter Widgets inline and interactively
This JupyterLab extension gives you functionalities helpful in diffing and merging of Jupyter Notebooks. It understands the structure of notebook documents so it can make intelligent decisions when diffing and merging notebooks.
Here’s a short summary of the main features; the matching command-line calls are sketched after the list:
Compare notebooks in a terminal-friendly way
Merge notebooks in a three-way with automatic conflict resolution
View a rich rendered diff of notebooks
Have a web-based three-way merge tool for notebooks
View a single notebook in a terminal-friendly way
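Each bullet above corresponds to one of nbdime’s command-line entry points (these commands come from the nbdime documentation; the notebook names are placeholders):

nbdiff notebook_a.ipynb notebook_b.ipynb
nbdiff-web notebook_a.ipynb notebook_b.ipynb
nbmerge base.ipynb local.ipynb remote.ipynb
nbshow notebook_a.ipynb

nbdiff prints a terminal-friendly diff, nbdiff-web opens the rich rendered diff in a browser, nbmerge performs the three-way merge, and nbshow gives the terminal-friendly single-notebook view.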
Voyager is a JupyterLab MIME renderer extension to view CSV and JSON data in Voyager 2. It is a simple solution that allows you to visualize data.
This extension provides a bare minimum integration with Voyager.
LaTeX is a JupyterLab extension that lets you live-edit LaTeX documents.
The extension runs xelatex on the server by default, but you can customize the command by editing the jupyter_notebook_config.py file. When it comes to bibliography, it runs bibtex by default, but you can customize that as well.
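As a concrete example, the two settings I’ve seen in the extension’s README for jupyter_notebook_config.py look like this (the values shown are illustrative, so verify the trait names against the README for your version):

c.LatexConfig.latex_command = 'xelatex'  # e.g. 'pdflatex' or 'lualatex'
c.LatexConfig.bib_command = 'bibtex'     # e.g. 'biber'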
Another element that you can customize is the ability to run arbitrary code by triggering external shell commands.
This one is a JupyterLab mimerenderer extension that renders HTML files in an IFrame tab. It lets you view rendered HTML by double-clicking .html files in the file browser. Files are opened in a JupyterLab tab.
Plotly is a JupyterLab extension for rendering Plotly charts.
To watch for changes in the extension’s source and automatically rebuild the extension and application, you can watch the jupyter-renderers directory and run JupyterLab in watch mode.
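In practice that means two long-running processes, one rebuilding the extension and one rebuilding JupyterLab. Assuming a development checkout of the jupyter-renderers repository, the pair of commands is roughly:

jlpm run watch      # in the extension directory: rebuild on change
jupyter lab --watch # in a second terminal: rebuild and serve JupyterLab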
Another position on our list is a Jupyter extension for rendering Bokeh visualizations.
A Table of Contents extension for JupyterLab may not seem as much of a technical thing, but it can save you a lot of trouble when scrolling down and looking for information.
It auto-generates a table of contents in the left area when you have a notebook or markdown document open. The entries are clickable, and you can scroll the document to the heading in question.
Collapsible Headings is a helpful extension that lets you make headings collapsible. A selected header cell (i.e. markdown cell starting with some number of “#”) can be collapsed / uncollapsed by clicking on the caret icon created to the left of header cells or by using a shortcut.
Jupyter Dash is a library that makes it easy to build Dash apps from Jupyter environments (e.g. classic Notebook, JupyterLab, Visual Studio Code notebooks, nteract, PyCharm notebooks, etc.).
It has many helpful features (a minimal usage sketch follows the list):
Non-blocking execution
Display modes: external, inline, JupyterLab
Hot reloading: the ability to automatically update a running web application when changes are made to the application’s code.
Error reporting: a small user interface to display errors that result from property validation failures and exceptions raised inside callbacks
Jupyter Proxy Detection
Production deployment
Dash enterprise workspaces
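Here is the minimal usage sketch promised above. The jupyter_dash API shown is real; the layout itself is just a placeholder, and the dash_html_components import reflects the Dash releases that were current when this list was written:

from jupyter_dash import JupyterDash
import dash_html_components as html

app = JupyterDash(__name__)
app.layout = html.Div([html.H1("Hello from JupyterDash")])

# mode can be "inline" (in the notebook), "external", or "jupyterlab"
app.run_server(mode="inline")

Switching the mode argument is how you pick between the display modes listed above.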
The last one is a jupyterlab-sql extension that adds a SQL user interface to JupyterLab. It allows you to explore your tables with a point-and-click interface, and read and modify your database with custom queries.
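Per the project README (as I recall it; double-check for your JupyterLab version), installation is a pip install plus enabling the server extension, followed on older versions by a lab rebuild:

pip install jupyterlab_sql
jupyter serverextension enable jupyterlab_sql --py --sys-prefix
jupyter lab build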
Conclusion
The list of JupyterLab extensions is quite extensive, so there are many tools to choose from. You can use one, two, or all of them. Just make sure they’re not cluttering your Jupyter space and slowing down processes.
Happy experimenting with extensions! | https://medium.com/neptune-ai/19-best-jupyterlab-extensions-for-machine-learning-f203598cdfc1 | ['Patrycja Jenkner'] | 2020-11-26 15:10:03.898000+00:00 | ['Machine Learning', 'Data Science', 'Jupyterlab', 'Jupyter Notebook', 'Extension'] |
How to Become a Better Writer — Even If You Just Started Out | Ask for feedback
“Do you mean that I should share my non-perfect work with others so that they can judge it and perhaps share their feedback with me?”
Yes. That’s exactly what I mean.
Writing online can feel lonely at times. I can call myself lucky living together with 2 friends who both work from home, but I can imagine that fellow writers without this luxury can find themselves isolated.
Look for ways to collaborate and engage with fellow writers. There are many online communities. They might look hidden and they don’t appear to you automatically, but I know for a fact that there are many communities you could join. Even if you’re just starting out.
(Leave a comment below if you’re having a hard time finding like-minded people or writing communities, I’m happy to help you out)
Don’t forget — it all starts with you. You have to make the ask. It’s not hard. Every writer had to start somewhere. Tim Denning didn’t start with 139,000 followers either.
Recently, I set myself the goal of reaching out to at least 1 writer a week, and I’m grateful to have already spoken to some great minds out there. We talk about our progress, challenges, and achievements and we’re just having a good time. Writing online is much more than just the cycle of write — edit — publish.
Besides that, we agreed on checking each other’s work to make it even better. This is personally a challenge for me to overcome and that’s because of 2 reasons:
1. I’m pretty impatient and eager to publish my work. However, I learned along the way that waiting is also a skill a great writer needs to possess.
2. I don’t want to bother others with my work all the time. However, if someone else agreed to revise it, that’s their decision and I should make use of it.
Receiving feedback from fellow writers is super valuable. Yes, it feels great when your mom or spouse tells you that your work is great, but let’s be honest: they won’t tell you when it’s shit or missing the point. Fellow writers will help you out, so make use of it. And don’t forget to pay back the gesture. | https://medium.com/the-brave-writer/how-to-become-a-better-writer-even-if-you-just-started-out-398c7db1e2e | ['Jessie Van Breugel'] | 2020-12-01 17:02:48.768000+00:00 | ['Freelancing', 'Writing Tips', 'Life Lessons', 'Personal Development', 'Writing'] |
A Fantastic Way to Programmatically Create Diagrams for Different Cloud Architectures | Docker Solution for Graphviz, Diagram, and Cluster
I have posted several articles on how to create development and test Docker images [see references 4, 5, and 6 below]. I assume you know of Docker and have read them.
Docker is used for encapsulating an individual image of your application.
Docker-Compose is used to manage several images at the same time for the same application. This tool offers the same features as Docker but lets you run more complex, multi-container applications.
Figure 2. Docker-Compose. Illustration by Rachel Cottman
Diagrams and Graphviz for a Docker image: steps required
Based on the instructions in the articles mentioned above, Graphviz is added to the Docker image by modifying the end of the Dockerfile:
# Step 1
.
.
.
USER root
# Install Graphviz and the Python bindings in a single layer so that
# cleaning the apt cache actually shrinks the image
RUN apt-get update && \
    apt-get -y install python-pydot python-pydot-ng graphviz && \
    rm -rf /var/lib/apt/lists/*
I put the modifications at the end of the Dockerfile because the USER root command changes the permissions, and I was too impatient to switch back to the default user.
Note: You may want to change the USER value back.
The requirements.txt file is:
# Step 2
numpy==1.19.2
numba==0.51.2
matplotlib==3.3.2
diagrams==0.17.0
dev and test are subdirectories of the docker subdirectory of the clouds project directory:
# Step 3
.
.
.
|--clouds
|-- docker
|-- dev
|--- Dockerfile
|--- docker-compose.yml
|--- README.md
|--- requirements.txt
|-- test
|--- Dockerfile
|--- docker-compose.yml
|--- README.md
|--- requirements.txt
|-- src
|-- test
|--- requirements.txt
|--- README.md
.
.
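For orientation, a minimal docker-compose.yml for the dev service could look like the sketch below. This is an assumption pieced together from the tree above, the dev_dev_1 container name, and the port note later in the article; it is not the author's actual file:

# docker/dev/docker-compose.yml (hypothetical sketch)
version: "3.8"
services:
  dev:
    build: .
    ports:
      - "8889:8888"    # host 8889 -> container 8888, matching the URL note below
    volumes:
      - ../../:/clouds # assumed mount point for the project source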
Testing Graphviz in a Jupyter notebook
Diagrams depends on the Graphviz runtime. The previous section showed, step by step, how to create a Docker image with both Diagrams and Graphviz.
Note: Change the Dockerfile to add any Jupyter extensions you want or remove any you don’t.
I run Jupyter notebook inside the dev_1 Docker container, where all the Jupyter notebook extensions I want are already installed.
(base) Macie:~ brucecottman$ updev
[1] 38060
(base) Macie:~ brucecottman$ Creating network "dev_default" with the default driver
Creating dev_dev_1 ... done
Attaching to dev_dev_1
.
.
dev_1 | Or copy and paste one of these URLs:
dev_1 | or http://127.0.0.1:8888/?token=4a9e25ae109156cc1ab8d5f363b990f65e306f93305e62a4
You must actually use port 8889 instead of port 8888 when you copy and paste the URL into a browser, because Docker maps host port 8889 to the container’s port 8888:
dev_1 | or http://127.0.0.1:8889/?token=4a9e25ae109156cc1ab8d5f363b990f65e306f93305e62a4
Now when I run the Python code in a Jupyter notebook:
from graphviz import Digraph

# Build a small directed graph
dot = Digraph(comment='The Round Table')
dot.node('A', 'King Arthur')
dot.node('B', 'Sir Bedevere the Wise')
dot.node('L', 'Sir Lancelot the Brave')
dot.edges(['AB', 'AL'])
dot.edge('B', 'L', constraint='false')

print(dot.source)

# render() takes a filename stem plus an explicit format;
# passing 'round-table.jpg' directly would produce 'round-table.jpg.pdf'
dot.render('test-output/round-table', format='jpg', view=True)
dot
I get:
Figure 3. A successful test run of Graphviz
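With Graphviz rendering confirmed, Diagrams itself can be smoke-tested with the quick-start example from its own documentation (the AWS node classes shown ship with diagrams 0.17):

from diagrams import Diagram
from diagrams.aws.compute import EC2
from diagrams.aws.database import RDS
from diagrams.aws.network import ELB

# writes web_service.png next to the notebook; show=False skips opening a viewer
with Diagram("Web Service", show=False):
    ELB("lb") >> EC2("web") >> RDS("db")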
Looks good so far! | https://medium.com/better-programming/a-fantastic-way-to-programmatically-create-diagrams-for-different-cloud-architectures-33b32a3d6cdc | ['Bruce H. Cottman'] | 2020-10-19 14:44:31.438000+00:00 | ['Python', 'Architecture', 'Docker', 'Programming', 'Cloud'] |