title | text | url | authors | timestamp | tags
---|---|---|---|---|---
Pitchfork’s Best New Markov Chains | I am an avid Pitchfork reader; it is a great way to keep up to date on new music. Pitchfork lets me know which albums to listen to and which ones not to waste my time on. It’s definitely one source I love to go to when I need something new.
One way Pitchfork distills down all the music they review and listen to is to award certain albums (and more recently tracks) “Best New Music.” Best New Music, or BNM as I’ll start calling it, is pretty self-explanatory. BNM is awarded to albums (or reissues) that are recently released but show an exemplary effort. BNM is loosely governed by scores (the lowest BNM was a 7.8), but I noticed that I would see some of the same artists pop up over the years. This got me wondering: if an artist gets a BNM, is their next album more likely to be BNM or meh?
We need data. Unfortunately Pitchfork doesn’t have an API and no one has developed a good one, so that led me to scrape all the album info. Luckily, all album reviews are listed on this page http://pitchfork.com/reviews/albums/. To get them all I simply iterated through each page and scraped all new albums. I scraped the artist name, album name, genre, main author of the review, and year released. BNM started back in 2003, so I had a natural endpoint. In order to go easy on Pitchfork’s servers I built in a little rest between requests (don’t get too mad, Pitchfork).
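The original post doesn’t include the scraper itself, but the pagination-plus-delay approach it describes looks roughly like the sketch below. It is written in JavaScript purely for illustration; the page count, the query parameter and the parseAlbums helper are assumptions, not Pitchfork’s actual markup or API.
// Minimal polite-scraping sketch: iterate review pages with a pause between requests.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
async function scrapeReviews(lastPage) {
  const reviews = [];
  for (let page = 1; page <= lastPage; page++) {
    // Assumed pagination parameter; the real site may differ.
    const response = await fetch(`http://pitchfork.com/reviews/albums/?page=${page}`);
    const html = await response.text();
    // parseAlbums is a hypothetical helper that pulls artist, album, genre,
    // reviewer and year out of the page's HTML.
    reviews.push(...parseAlbums(html));
    await sleep(1000); // rest between requests to go easy on the servers
  }
  return reviews;
}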
Now that I have the data, how should I model it? We can think of BNM and “meh” as two possible options or “states” for albums (ignoring scores completely). Markov chains allow us to model these states and how artists flow through them. Each pass through the chain represents a new album being released. A conventional example is weather. Imagine there are only rainy days and sunny days. If it rained yesterday, there may be a stronger probability that it rains again tomorrow; the weather could also change to sunny, but with a lower probability. The same goes for sunny days. For my model, just replace sunny days with BNM and rainy days with meh.
Sunny “S”, Rainy “R”, and the probabilities of swapping or staying the course
With all my data, I was able to calculate the overall Markov models. I took all artists that had at least 1 BNM album, 2 albums minimum, and at least 1 album after the BNM album. This ensures that these probabilities actually mean anything: I can only tell what the probability of staying BNM is if you have at least one more album after your first BNM. Once I distilled all the artists down using the above criteria, getting the probabilities was easy. I simply iterated through each artist’s discography, classifying the “state” change between consecutive albums (meh to meh, meh to BNM, BNM to BNM, BNM to meh).
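The article doesn’t show the calculation, but counting those state changes and turning them into transition probabilities only takes a few lines. Here is a minimal sketch, again in JavaScript, with a made-up input format of one ordered list of states per artist:
// Each artist is an ordered discography of states: 'BNM' or 'meh'.
function transitionProbabilities(artists) {
  const counts = { BNM: { BNM: 0, meh: 0 }, meh: { BNM: 0, meh: 0 } };
  for (const discography of artists) {
    for (let i = 1; i < discography.length; i++) {
      counts[discography[i - 1]][discography[i]] += 1; // classify the state change
    }
  }
  // Normalize each row so the probabilities out of a state sum to 1.
  const probabilities = {};
  for (const from of ['BNM', 'meh']) {
    const total = counts[from].BNM + counts[from].meh;
    probabilities[from] = {
      BNM: total ? counts[from].BNM / total : 0,
      meh: total ? counts[from].meh / total : 0,
    };
  }
  return probabilities;
}
// Example: transitionProbabilities([['meh', 'BNM', 'meh'], ['BNM', 'BNM']])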
Finally, with all the numbers crunched I plugged them into the visualization at the top. NOTE: the visualizations were NOT created by me. I simply plugged in my calculated probabilities and labels. The original visualization, along with a fantastic explanation of Markov chains, can be found at http://setosa.io/blog/2014/07/26/markov-chains/. The visualization and all the code behind it were created by its author, NOT me. As I said before, I only supplied the probabilities.
Overall Best New Music
If you look at the size of the arrows you can tell the relative probability of each state change. As you can see, BNM albums are pretty rare and artists don’t stay that way for long (thin arrow). What is much more common, as you probably guessed, is meh albums leading to more meh albums (thick arrow). It is more likely that an artist will produce a meh album after a BNM. What is interesting is that an artist is more likely to release a BNM after a BNM than to go from meh to BNM. These conclusions seem pretty obvious in retrospect; however, since we lumped all artists together, we might be missing some nuance.
Now the above metrics are for all artists, but it is probably unfair to lump in Radiohead (who churn out BNM like it’s nothing) with the latest EDM artist. I redid my analysis, this time further splitting the artists by their genre. Below are the three most interesting genres.
METAL | https://medium.com/datahungry/pitchforks-best-new-markov-chains-340c09214f73 | ['Marcello Ricottone'] | 2018-01-04 14:02:04.122000+00:00 | ['Music', 'Data Science', 'Projects', 'Data Visualization', 'Markov Chains'] |
3 Machine Learning Books that Helped me Level Up | There is a Japanese word, tsundoku (積ん読), which means buying and keeping a growing collection of books, even though you don’t really read them all.
I think we Developers and Data Scientists are particularly prone to falling into this trap. Personally, I even hoard bookmarks: my phone’s Chrome browser has so many open tabs, the counter was replaced with a “:D” emoji.
In that zeal for reading and learning most of us experience, we usually end up lost, not sure of which book to pick up next. That’s why today I’ll give you a very short list: just 3 Machine Learning books, so that you won’t just bookmark it and forget it.
Each of these books has helped me immensely in different stages of my career as a Data Scientist, particularly in my role as a Machine Learning Engineer.
Here come the books!
O’Reilly: Data Science from Scratch with Python
I have a very personal attachment to this book, since it’s the one that got me my job. That’s right! I knew next to nothing about Data Science, even what Data Science was, before picking up this book.
I did have a pretty strong Probability and Statistics background, and knew enough Python to defend myself. However, I was missing the practical side of it.
This book did many things for me. It:
Showed me how to process data in Python efficiently and elegantly (following Python’s good practices).
Taught me how to implement most simple Machine Learning algorithms from scratch.
Showed me what the day-to-day job of a Data Scientist may look like.
Taught me how to communicate my results to others clearly.
I wholeheartedly recommend it if you’re new to the Data Science community. It will give you a clear overview of most topics you’ll need in order to start being a productive Data Scientist.
It will also showcase Python’s most commonly used libraries and expose you to a lot of idiomatic code, which is always a plus.
Here’s a link to Data Science from Scratch on Amazon.
Springer: Introduction to Statistical Learning
This book is the most comprehensive Machine Learning book I’ve found so far. I learned a lot from it, from Unsupervised Learning algorithms like K-Means Clustering, to Supervised Learning ones like Boosted Trees.
The first chapters may feel a bit too introductory if you’re already working in this field (at least that was my experience). However, they also sum up many things you may not have learned in such an organized way before.
The later chapters are, however, where I think this book really shines. Its explanations of random forests, boosted trees and support vector machines are spot on.
Here are some of the topics you can learn from Introduction to Statistical Learning:
Regression and Supervised Learning algorithms: from Linear Regression and SVMs to tree-based methods.
Unsupervised Learning techniques: especially clustering, including the K-Means algorithm.
Sampling methods, and other general Machine Learning core concepts.
The meaning, advantages and disadvantages of metrics such as accuracy, recall, precision, etc.
I think this book has been my best read so far this year, and it’s made me into a more well-rounded Data Scientist. I recommend it if you have a bit more experience, but want to polish your edges. It is also a very good reference book to keep on your shelf.
It also shows the implementation of everything in R, which I didn’t find particularly useful, but it didn’t hurt. You’ll probably import most of this code from scikit-learn anyway.
As before, here’s a link to Springer’s Introduction to Statistical Learning on Amazon.
Deep Learning by Goodfellow, Bengio et al.
This book blows my mind every time I open it. I’ll be the first to admit I haven’t really read it from start to finish. Yet.
The only reason it’s the last one in the list is because of its very specific scope: Artificial Neural Networks or Deep Learning.
However its first chapters, with an overview of Deep Learning’s precursors and what makes it different, and then the explanation of how Deep Learning works, are marvelous.
It even starts off by explaining everything you need to know before studying deep learning, with whole chapters dedicated to linear algebra, probability and information theory, and numerical computation methods.
The next chapters, which I’ve only partially read, serve as an awesome reference whenever you need to dive deeper into a particular Neural Network architecture.
They include in-depth explanations of Convolutional Neural Networks and Recurrent Neural Networks, along with many regularization or optimization methods.
The third and last section, which revolves around cutting-edge technology, explains Generative models, Autoencoders and many other interesting algorithms. Adding them to your own toolkit will probably give you a great boost!
The authors of this book are the rock stars of Machine Learning right now. One of them even won a Turing award recently, so I can’t think of better people to teach this subject.
Here’s an Amazon link if you’re interested in the Deep Learning book.
Conclusion
I went from a broad, introductory book to an advanced, specific one.
Each of these Machine Learning books has had a profound impact in my career and, to some degree, the way I see the world.
I really hope at least some of them will have the same positive impact on your life!
And if you’ve already read, or are reading, any of them, tell me what you think of them in the comments! I’d love to discuss any of them further, especially the Deep Learning book.
We can also discuss them on Twitter, Medium or dev.to if you’re interested.
I want to hear your opinions!
(small disclaimer: all of these links are Amazon affiliate links, which means I get a small commission if you buy the books. However, I’ll only review books I’ve actually read, and have genuinely recommended to people in real life) | https://towardsdatascience.com/3-machine-learning-books-that-helped-me-level-up-a95764c458e3 | ['Luciano Strika'] | 2019-04-29 04:26:36.268000+00:00 | ['Machine Learning', 'Data Science', 'Deep Learning', 'Book Review', 'Python'] |
Pyramid of Doom — the Signs and Symptoms of a common anti-pattern | Pyramid of Doom — the Signs and Symptoms of a common anti-pattern
with some tips on how not to code yourself into a corner
Anti-patterns. They are the bane of many developers who’ve had the misfortune of meeting one. The pyramid of doom is one that a lot of new JavaScript developers write. Most of the time, it’s written in innocence with no code janitor to tell them otherwise.
I’ve written pyramids of doom in my early days and I’ve experienced them from others. The anti-pattern starts as a few levels of functions, loops and if-else statements — until the levels turn into an endless maze of curly braces and semi-colons that somehow magically works on the condition that no one touches it.
What exactly is a pyramid of doom?
A pyramid of doom is a block of code that is so nested that you give up trying to mentally digest it. It usually comes in the form of a function within a function within a function within a function of some sort. If not, then it’s a loop within a loop within a 3 level nested if statement.
When faced with a pyramid of doom, we often ignore the code and just begin again. However, sometimes that’s not feasible because the entire application is written in this anti-pattern style.
It’s an anti-pattern because there is no pattern. It is simply the transmission of a developer’s train of thought as is without any categorization or organization.
Here’s an example of a multi-level beginning of a potential pyramid of doom:
function login(){
  if(user == null){
    //some code here
    if(userName != null){
      //some code here
      if(passwordMatch == true){
        //some code here
        if(returnedval != 'no_match'){
          //some code here
          if(returnedval != 'incorrect_password'){
            //some code here
          } else {
            //some code here
          }
        } else {
          //some code here
        }
      } else {
        //some code here
      }
    } else {
      //some code here
    }
  } else {
    //some code here
  }
}
There are other ways to code pyramids of doom such as through nested anonymous functions and callback nesting. In fact, if you nest something enough you’ll be sure to create a pyramid from it.
Here are some signs and symptoms that often lead to pyramids of doom and how to cure them.
Lack of planning
Sometimes, developers hit their favorite code editor and start tapping away. It’s alright. We’ve all done it. We take a quick look at the requirements and if there is none, we make it up as we code.
This results in unplanned functions, loops, and statements that need to be written somewhere. Why not just put it right where you’re coding right now?
As a result, we end up building our application in an ad-hoc manner that results in unnecessary code — sort of like if you were to build a house without a plan and just keep rocking on back to the shop to buy more timber. Next thing you know, your budget is blown because you bought too much of the wrong things and you can’t return it.
Cure: pseudo code out your plan
I have my juniors do this all the time to prevent wasted time trying to unravel the nest they’ve written. They don’t get to code anything unless they show me a plan first — even if it’s scribbled down with pen and paper with cross-outs and coffee stains.
The point of the plan is to help structure your thoughts and ensure that you understand the direction of your code. When you are able to do this, it allows you to pre-plan what kind of functions you’re going to write, how your objects are structured and if your inheritance is logical in terms of classification and flexibility.
Basic syntax knowledge only
Many of us jump right into coding because we’re excited. We’ve just figured out how to do a few things and it works. We end up with a pyramid of doom because we don’t know how else to solve the problem.
In this situation, we don’t recognize our anti-pattern because we don’t know any better. However, you can’t build large and complex applications with just basic functions.
Cure: check out OOP JavaScript, inheritance patterns and promises
Upgrade your skills by learning the higher level coding paradigms like object oriented. Although JavaScript is often presented as a series of functions, they are all objects with rules and inheritance patterns.
Understanding the concept of promises will also help you flatten your chain when it comes to writing callbacks and prevent your code from blimping out if something goes wrong. Throw errors when things go wrong so you know when and where things happened rather than having to sit for hours tracing through your code.
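To make the promise point concrete, here is a rough before-and-after sketch. The helper names (fetchUser, fetchOrders, render) are made up for illustration and assumed to return promises; the idea is simply that each step stays at the same indentation level instead of nesting, and a thrown error surfaces loudly instead of vanishing inside a callback.
// Nested version: each .then callback nests inside the previous one.
function loadDashboard(userId) {
  return fetchUser(userId).then(function (user) {
    return fetchOrders(user.id).then(function (orders) {
      return render(user, orders).then(function (page) {
        return page;
      });
    });
  });
}
// Flattened version with async/await: same steps, one level of indentation.
async function loadDashboardFlat(userId) {
  const user = await fetchUser(userId);
  if (!user) throw new Error('User not found'); // fail loudly instead of nesting another if
  const orders = await fetchOrders(user.id);
  return render(user, orders);
}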
Complicate is smart
When starting out and without much guidance, we often create large and complicated blocks of code. Some people do it because they think that’s how code is supposed to be: complicated.
We get this misconception that the harder the code is to understand, the smarter we are for creating such a beast. But that is often the sign of inexperience and hubris.
It doesn’t matter how many months or years you’ve been coding. If your main aim is to make the code as complicated as possible, then it means you’re not versed in programming paradigms. When things get complicated and intertwined, the code becomes much more fragile and prone to breakage. There is no resilience to change, and the code decays at a faster rate.
Cure: Simplify and learn your SOLID principles
Flatten your code and learn to use callback methods instead of nesting functions. Use SOLID principles to guide your choices and the relationships you create.
If you start to see more than one level, you should stop and evaluate your code choices. Most of the time, you can abstract it out — even if you think you’re only going to use it once and never again.
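As an illustration of what “flatten” can mean in practice, the nested login example from earlier could be rewritten with guard clauses (early returns). The helper names and messages below are hypothetical; the point is only the shape of the code.
// One possible flattening of the earlier example using guard clauses.
function login(user, userName, passwordMatch, returnedval) {
  if (user != null) return handleExistingSession(user);      // already logged in
  if (userName == null) return fail('Missing user name');
  if (passwordMatch !== true) return fail('Password does not match');
  if (returnedval === 'no_match') return fail('No matching account');
  if (returnedval === 'incorrect_password') return fail('Incorrect password');
  // Only the happy path is left at the bottom, with no nesting.
  return createSession(userName);
}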
Fix it later mindset
We often tell ourselves that we’ll do it later — but from past experience, later often never materializes. It happens all the time. You promise yourself, or get given the promise, that you’ll have time at a later date to fix it. But that time never happens. It gets pushed back. It gets re-prioritized. Next thing you know, you’re stuck with a smelly piece of fragile code and you’ve forgotten how it works.
Not only that, you’ve just spent your time further entrenching bad patterns by writing more of the same.
Cure: do it now
It might take more time initially but once you get the hang of how to write flat and clean code, you get better at it. Every time you refactor your own code as you’re working on it, the better you become at detecting smelly code and anti-patterns as you write them.
It helps you build the muscle memory. Even if no one will ever see your code, it is best to keep applying SOLID principles, cohesive design and flat levels. Good patterns are as much a habit as anti-patterns. Name your constants. Abstract your SQL commands. Keep your scopes simple and contained. Avoid anonymous functions as callbacks.
Love ’em globals
Global variables are easy to create and deal with when you’ve got nested code. But bad things happen when you litter your code with them. It might feel safe to do so when your application is small. However, as the code base grows and multiple people start working on it, things can get complicated really quickly.
You don’t know what side effects you’ll have if you modify a global. You’ll need to go variable hunting and figure out how it’s going to impact the rest of the application. You don’t know exactly what it’s going to break, how things are going to break and if there’s going to be a cascading effect.
Then there’s your nested pyramid to deal with. If you need to set a global to use inside your function within a function, then you need to stop right there and rethink your game plan.
Cure: use more local variables
When you use more local scopes, your code becomes isolated and less fragile to change. Your logic gets contained and it forces you to rely on the variables that are immediately available within your scope rather than what’s external.
When you’re not relying on global variables, you can pass states and results between different functions through return rather than worry about how your global state is going to impact on other functions.
Having global variables isn’t bad but they’re best kept in the realm of immutables where things aren’t expected to change.
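A tiny sketch of the difference (the shopping-cart example is invented, not from the article): instead of several functions reading and writing one shared global, each function works on its own local variables and hands results on through return values.
// Global-state style: any function anywhere can change `total`,
// so every caller has to worry about hidden side effects.
let total = 0;
function addItemGlobal(price) {
  total += price;
}
// Local/return style: the state lives in local variables and is passed explicitly.
function addItem(currentTotal, price) {
  return currentTotal + price; // no side effects, easy to test
}
let cartTotal = 0;
cartTotal = addItem(cartTotal, 25);
cartTotal = addItem(cartTotal, 10);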
Final words
You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains. - Steve Jobs
If you find yourself working with a function that feels overly complicated, chances are, it is complicated.
Pyramids of Doom got their name because it only takes one break in the nest to have the whole thing collapse into itself. You might be able to put in struts and stickers to prevent its downfall, but the bigger your pyramid, the bigger the collapse.
Beautiful code is complexity simplified. It takes more effort, thinking, and skills upfront to create something that is easily understood by others. But your investment will pay off in the long run with a much more robust piece of code that ages gracefully. | https://medium.com/madhash/pyramid-of-doom-the-signs-and-symptoms-of-a-common-anti-pattern-c716838e1819 | ['Aphinya Dechalert'] | 2019-03-26 01:24:14.796000+00:00 | ['Technology', 'Programming', 'Software Development', 'Productivity', 'JavaScript'] |
Design Pattern: Billowing Curtains | Curtains, in the US usage of the term, are differentiated from drapes in that they are sheer and translucent, whereas in UK English the term refers to all forms of loosely hung window fabric.
Modern architecture and curtains didn’t go well together as loose draped fabric is inherently decorative, non geometric and baroque, and so blinds became the norm.
Curtains used properly should embrace the fact that they are loose fabric, and either be overly long so they break over the floor, if heavy and velvet or hung loosely so that they can capture the wind if light and sheer.
Curtains which move lightly in a breeze, over an open window, soften the edges of openings and can create the effect of a room breathing in the wind. This creates pattern in shadows and dappled light, which we are possibly instinctively attracted to, having evolved taking shade under trees. | https://medium.com/a-pattern-language/design-pattern-billowing-curtains-c77ec401f83 | ['David Galbraith'] | 2020-06-05 09:56:08.781000+00:00 | ['Design Patterns'] |
Crip brilliance transcending the theater | The Forgotten (Maria Palacios)
Maria Palacios in a later act, dancing flamenco to Seema Bahl’s accompaniment. ©NINE LAM, 2020 via Sins Invalid
Crips can’t afford to live being prepared for the worst
although the worst will always hit us harder…
The truth is, being prepared & being disabled
means mentally prepared
to be abandoned & left to die. — Maria Palacios, “The Forgotten”
Sins did not choose an image from “The Forgotten” for promotional materials, so the image above captures a later performance. Maria appeared in her chair on stage, mostly in darkness, her outfit off-white. With urgency, despair, and grief, she shared her and her Houston community’s experiences of Hurricane Harvey. The performance’s visual plainness focused attention on her words and emotions.
This performance, the second of the night, hit my partner and I hard. She made the point that crips, disabled folks like us, can’t afford to prepare. Some of us are on fixed incomes and can’t buy extra food. Or we eat our emergency food supplies before a natural disaster. We can’t predict some medical expenses, or other everyday disasters that cut into our finances. Most of us are living in constant isolation and social disaster.
Some of us are lucky. We have actual accommodations, ones we fought hard for. “Privileges” the government and insurance companies and other bureaucracies tried to keep from us. Or we’re Deaf and have a close local network of ASL interpreters who the Deaf community knows and can call on.
And for the same reasons, these disasters will hit us harder. Some people can’t listen to the radio. Some need help to get out of the house. If we can escape, some aren’t able to save all our specialized equipment — sometimes it’s too unwieldy to do alone. A government registry of disabled people doesn’t create a plan to save us. It doesn’t make the crip community even trust the government enough to add our names to the list.
Maria brought up someone she knows, who so long after Harvey is still in a nursing home. They lost their accessible housing in the community in the hurricane, and they’re still waiting for a house they can live in. After losses like this, survivors live outside their interdependent communities against their wills.
In nursing homes, they have fewer people making sure they’re safe. They face a higher risk of caregiver abuse/neglect, not to mention COVID-19 exposure. And the able-bodied world expects them to show gratitude for the simple fact that they’re alive. They act as though that’s an act of great charity and not the lowest possible bar to clear. An actual act of great charity would be restoring the living conditions we had pre-disaster as much as possible.
I find myself using the present tense about this particular performance, because it has occupied so much space in my brain for the past week. It feels too real for me and my partner. We keep our wheelchair in the car, partly for convenience and partly so we have it if — God forbid — we ever have to escape with only ourselves, our cat, and our car. We don’t have an accessible vehicle or an accessible apartment, we can’t bring the wheelchair inside anyway.
But in the heat of disaster, would we remember all our medications, electrolyte drink mix, medical records, legal name change orders? Our collection of joint braces, wraps, and similar accommodations? Would disaster recovery volunteers make sure I continued to have access to vegan food? Would I remember my rollator, even if I hadn’t used it in a while?
Could we find a crip friend to take us in? Or would we end up in a shelter, separated because we aren’t related? Not considered a unit because getting married would limit the SSDI benefits my partner might qualify for? Would first responders understand that he is an ambulatory wheelchair user? Could they discern that I’m his partner and also disabled, not his caregiver?
I felt, and feel, for Maria. We are both crips facing climate chaos. We both have reasons to be afraid for our futures, though the specific reasons vary. We could find ourselves in acute disaster zones while already trying to survive in an everyday disaster. | https://ryanthea.medium.com/crip-brilliance-transcending-the-theater-b0ecd0ea7242 | ['Ryan Theodosia'] | 2020-11-14 00:51:37.815000+00:00 | ['Environment', 'Theater', 'Disability', 'Performing Arts', 'Environmental Justice'] |
Designing and validating a Conversational UI | Designing and validating a Conversational UI
H&M, the leading fashion destination, as an example
H&M is an amazing fashion destination and one of my favourite go-to places for those shopping sprees.
Recently, I was looking for a knitted cardigan at H&M. It was quite an engaging conversation with the employee. He was friendly and quite helpful. Amazing experience, but hey …..w-a-i-t; this time it was on their app, and the person I was in conversation with wasn’t a human, it was a bot! Did that ruin the experience? Not really. Rather, it was just like visiting one of their stores, where one of the employees helps you find the best garment for you. :)
Conversational UI is already making waves in tech circles with organisations automating their processes around how chat bots interact with their customers. Today, we’re witnessing an exponential growth of applications that no longer have a graphical user interface (GUI). What they have is a conversational UI.
Having said that, designers often struggle with designing it because it isn’t as easy as it might look. Structuring and shepherding a seamless flow demands a lot of homework. Back in 1995, Jakob Nielsen formulated the 10 Usability Heuristics for User Interface Design, which hold true even today and are fairly applicable to designing conversational UIs.
So let’s scan the H&M bot through these usability heuristics. I have clubbed a couple of them together to make the study a little more crisp.
H&M Bot
Before we get into breaking down the design, you’d love to experience the entire prototype of the H&M bot. This prototype is made on CanvasFlip.
This flow is made for a girl, aged 25, looking for a knit cardigan. Open the prototype in a new link and play with the app prototype!
(Open prototype in a new link)
These conclusions on the bot are drawn after studying the user experience test on the prototype. The UX analysis was recorded on CanvasFlip.
1. Visibility of system status & recognition rather than recall
The system should always keep users informed about what is going on, through appropriate feedback within reasonable time. Minimize the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
✅ H&M has done a great job at using structured messages to guide a user through the interaction. It streamlines the requirements and search of the user beautifully.
Structuring and streamlining the flow
2. Match between system & real world.
The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
✅ The app uses words and phrases that are relevant to the initial information it collects. For example, for a female in her 20s, words and emoticons are wisely chosen!
Today, from teenagers to people in their 30s, everyone uses emojis. And it shouldn’t be a surprise. They’re universal and extremely useful, and they add a non-verbal layer to written communication. That is something H&M has held onto.
Wisely crafted language
3. User control and freedom & Error Prevention
Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo. Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
⁉️ H&M does have options for user control, i.e., the user can revert and start a new search all over again. But it would be better still if there were an option to undo just the last message (user input).
User feedback control
4. Flexibility and efficiency of use
Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
✅ H&M has executed a nice use of humans to fill in the gaps; novice users would appreciate the human touch, while power users can get right to ordering.
It gives a flexibility of options while making choices.
Flexibility of choices and default states available
5. Consistency and standards & Aesthetic and minimalist design
Users should not have to wonder whether different words, situations, or actions mean the same thing. Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
✅ It’s consistent and also has a voice that directs the user to what they are looking for. It isn’t like browsing through never ending options. Honestly it is quite similar to trying to buy a commodity offline in the stores.
Conclusion
The dream of conversational interfaces is that they will finally allow humans to talk to computers in a way that puts the onus on the software, not the user, to figure out how to get things done. There are amazing bots trying to replicate humans. Know of any such bot? Let me know in the comments section.
The best way to predict if the bot is human enough or not is to test — test with a large section of users to understand how human it is. Here’s what I turn to when I need to test — CanvasFlip | https://uxplanet.org/designing-and-validating-a-conversational-ui-70294766ba9c | [] | 2017-04-06 06:23:57.338000+00:00 | ['Conversational UI', 'Design', 'UX', 'User Experience', 'Bots'] |
Silk Screen Printing Vs Digital Printing on Fabric | In the past few years, technology has developed massively.
Things that used to be impossible are now easily doable without too much hassle.
The printing industry has benefited from the latest advancements in print technology.
Many printers have moved away from the traditional analogue approach, and some of the biggest firms are making use of digital equipment that is distinct from t-shirt printing which uses traditional silk screens.
There are different aspects to take into account for the people who are unfamiliar with the printing industry.
In today’s era, there are tonnes of options out there.
All the approaches are different and come with their individual characteristics.
Direct to Garment (DTG) and screen printing are the most popular and certainly deliver the highest quality.
In the past few years, t-shirt printing agencies have been making use of silk screen printing as their preferred method for designing t-shirts.
The only other substitute used was iron-on designs or transfers that produced unique products.
The biggest problem with iron-on was the fact that it was a time-consuming process when it comes to designing many t-shirts.
On the other hand, the screen printing process allowed the distribution agencies to create many t-shirts in short duration.
However, with the introduction of digital printing, things changed.
More and more people are using this process to create unique designs.
Digital printing is a new era of printing that allowed the print agencies to print designs from a computer, directly onto a shirt.
Screen Printing
The screen printing process involves creating a screen and utilises it as a stencil for applying layers of ink on the printing surface.
In this procedure, you require various screens for distinct colours that are used in the design and merged to get the final look.
Screen printing is the ultimate option for designs that need a top level of vibrancy especially when printing on dark background or product.
Used for larger orders, screen printing applies the dye colour more thickly than digital printing, which results in brighter colour even on darker shades.
It is simpler and more cost-effective to utilise silk screen printing artists for mass production, rather than digital printing on fabric.
Related Article: How The Right Logo Design Can Increase Sales
With the continuous change in the fashion industry, it is becoming an important factor for market domination.
As a result, many retailers are planning to go for minimum stocks and give repeat orders.
Advantages of Screen Printing
Highly cost-efficient process for bulk orders.
Easy to print on specified areas.
Huge range of printable fabrics such as wood, textiles, glass and more.
Top-quality output.
Long-lasting prints.
Digital Printing
Digital printing is a direct-to-garment printing process that is preferred for smaller orders.
In case a design needs a full-colour spectrum, it might be listed as a full-colour print and will be published in this form as well.
The primary benefit of digital printing is that multiple colour palettes allow people to recreate anything virtually.
This is an ultimate process that calls for artwork which needs to be handled by a computer and printed directly to the surface of the garment.
Imagine digital printing like printing out a paper from a printer except on larger scale with ink made for fabric.
Digital printing is ideally suited on a light coloured base as the ink is applied thinner that enables the design to shine through.
As there are no screens or physical set up, the fact that the design is processed and printed digitally brings the quality output.
Advantages of Digital Printing
Easy to print various colours.
Changeable data and personalisation options.
Minimum set up costs.
The design does not bump out.
Good for short runs.
Now, as we have discussed what these two techniques mean, let’s see the major differences between them.
The size of the order
It totally depends on the number of units you need when deciding on which technique to use.
The screen printing process is arduous and involves extensive preparation and comes with minimum order requirements.
On the other hand, digital printing has a lot more scope for flexibility because of lesser techniques required.
Another major point to take into consideration is the price.
Digital printing comes at a flat rate which means that the price per unit remains the same no matter how many pieces are printed.
In the case of screen printing supplies, the price per unit decreases as you produce more units, mainly because once the screen is made, it can be used countless times.
Design
If you want to decide which technique to utilise that does not affect your design, then it is preferable to split this section into two categories.
Related Article: What is Brand Marketing?
Detail
Digital printing manages detailed designs better than silk screen printing equipment when it comes to deciding between both the techniques.
Images are sharper with more clarity.
So, in case your designs have small touches or letters, digital is the way to go.
Colour
One major drawback of digital is that it is limited regarding colour.
While screen printing makes use of pure colour (often Pantone), digital opts for CMYK, which mixes tones to mimic the right one.
Moreover, screen printing machines produce more vivid colours.
The entire process of pouring every colour layer through the screens offers screen-printed products a vibrant and lasting colour.
If the process is done properly, the outfit can last for a longer duration even though there are some limitations to colour possibilities.
In digital printing on fabric, the image is directly transferred onto the material, and complete detail can be captured.
However, the quality of the image degrades, and some details may not get transferred correctly if the picture resolution is not high enough.
Material
The vital difference between the two techniques is the breadth of materials that can be printed on.
Screen printing can be done on any material ranging from cotton to polyester and from Teflon to nylon whereas digital is limited to exclusively 100% cotton.
However, it must be said that for both techniques, cotton is ideal.
Quick And Cheaper
Silk screen printing is still being used by t-shirt printing companies and is the most used process, but the digital process has also become the main alternative for small runs.
The set up for digital is much easier and cost-efficient.
Talking about the screen process, one has to set up a screen for every colour which indicates that the more complicated the design, the more screens the printer will need.
This leads to the increase in manual labour which will automatically raise the cost.
However, when it comes to digital printing on fabric, there are numerous colours and no screens, only one flat fee per t-shirt.
So, instead of preparing all the screens beforehand, the only setup digital needs is a pre-treatment of the t-shirt, which gives the shirt a solid base and protects the digitally printed ink from smearing.
Time Consumed
Another important factor to take into consideration is the total time consumed while using these techniques.
If you are looking to completing a project within a short span of time, you must go with digital technology.
Related Article: Colours In Logo Design
This method can handle a huge quantity in a short duration.
As discussed earlier, the screen printing process is complicated.
The preparation time will alone consume a significant amount of your time frame.
Quality
When it comes to quality, then silk screen printing equipment is ahead.
It offers better quality imaging as the ink gets completely absorbed and lasts for the longer duration.
Moreover, it provides clear edges to the image printing mainly because of the accuracy that specifically created stencils offer.
On the other hand, with digital printing the ink does not spread, since the image is printed directly onto the fabric, but it tends to lose its colour more quickly compared to screen-printed images.
However, if anyone has vivid images to imprint, then this is an ideal option as all the colours are present in the single image and the user does not need to separate screen for the same.
Wrapping Up
Finally, to conclude, digital printing on fabrics is the most popular approach today.
In the past, there were rarely any digital printers, but now, most of the printing companies have a digital printer and make use of it for all single garment orders.
The capability of designing outstanding graphics on computers is one of the skills highly in demand these days.
However, even without digital printing, computer-generated drawings can still be printed on apparel with the newest screen technology.
Digital printing is the best alternative to screen printing.
Author Bio: Jennifer Adam is a highly trained T-shirt designer software developer who is currently working with a highly reputed organisation named inkyROBO. Owing to an amazing skill set and expertise in the related domain, she has achieved great results till date. This blog has been penned down with an aim to deliver useful knowledge to the readers. | https://medium.com/inkbot-design/silk-screen-printing-vs-digital-printing-on-fabric-c62b9a13e88a | ['Inkbot Design'] | 2017-05-10 09:33:13.660000+00:00 | ['Screenprinting', '3D Printing', 'Branding', 'Design', 'Digital Marketing'] |
What Surfing Taught Me About Grief and Guilt | For the last month or so, most mornings have begun with a cup of tea in bed while I log on to local webcams. There are four permanently monitoring the beaches near my apartment. If the surf looks good I slip into my wetsuit, jump on my bike and in ten minutes I am in the surf. If the surf is not looking good, I slump back into bed and slowly shake off waves of grief and depression that have become a regular part of my life.
As I fought the Atlantic breakers this morning, I considered the three ways I was dealing with the physical waves in the ocean and began relating them to dealing with the waves of emotion in my own life.
It starts with making the decision to engage.
How do you deal with your emotions? Do you engage with them? Are you afraid of them? Do you feel you have to conquer or manage them? Do you desensitise yourself and avoid them?
I recognised that often I am unaware of emotional waves approaching and I ignore them until they overwhelm me and I can do nothing but collapse in a heap with Netflix and chocolates.
I related this to my encounter with a wave in Hawaii. It was my first experience of a wave there. I froze. It was too big to go over, but I did not fancy being under that breaker when half a tonne of water pummeled itself down to the sand. In a millisecond of indecision, it hit me full-on, scrambled me up like morning eggs and deposited me back on the beach. I was convinced it had dislocated my shoulder, but only my dignity was damaged.
This is inaction. You feel that if you do nothing, the uncomfortable feelings will somehow stop of their own accord. They won’t. There are many different kinds of inaction. Many different ways we try to avoid feeling any problematic emotion.
Feelings that are not acknowledged have many ways of making their presence felt. Physical ailments can have their root cause in suppressed emotions, as Bessel A. van der Kolk describes in his book The Body Keeps The Score.
Traumatized people chronically feel unsafe inside their bodies: The past is alive in the form of gnawing interior discomfort. Their bodies are constantly bombarded by visceral warning signs, and, in an attempt to control these processes, they often become expert at ignoring their gut feelings and in numbing awareness of what is played out inside. They learn to hide from their selves.
If you related to this, waves of emotion come, you stand there, you let them hit you, and they knock you for six. You have developed your ways of dealing with this, but it is all torturous. Here are three more useful strategies to consider.
They are ways of accepting that these waves of powerful emotions will always be with you, but you can survive them and even begin to enjoy the ride.
Surf Strategy #1: Jump over the wave
You jump over the wave, which in itself is quite an art. I hold my bodyboard in front of me. I push off and then allow the surf to sweep my legs from under me like the tendrils of a jellyfish.
Your curiosity and willingness to explore within, puts you in front of the emotional waves. You have recognised that you have been avoiding dealing with personal issues that arise in your life. You have stepped into the ocean, ready to feel the emotions. You have acknowledged their existence.
As you walk deeper into the ocean, you encounter the waves that have already broken. You cannot ride these waves, but if you stand still and do nothing, you will be battered and knocked off your feet by them.
I experience great joy and freedom diving over waves. I push off with my feet, and my torso is above the turbulence of the white frothing water. As I dive into the stillness on the other side, my legs and feet feel the energy of the wave tugging and pushing, but they are loose and free.
I could not enjoy the pleasure of diving into the cool water on the other side, without having the courage to face the wave and dive over it.
I have many examples in my life of confronting demons, and finding the process has given me freedom and vitality.
I would be interested in your responses to this image. Have you faced up to something traumatic and painful that has been haunting you? Rather than avoiding it, have you thrown yourself into it and experienced that joy and freedom on the other side?
Surf Strategy #2: Duck under the wave
If a wave is too big to jump over, I duck down in the water and allow the surf to break over my head. This manoeuvre needs careful timing, or your head and body can feel like they could part company with each other.
In the surfing analogy, you duck under a wave when it has already broken, but it is too large to jump over. You preserve your safety by ducking under it. For a few seconds, you are aware of the turbulence above you, but you are safe in the darkness of the deep water.
On the emotional roller coaster of life, you may often want to duck for cover. Sometimes it is essential to preserve your safety. Some struggles and confrontations are just too big. Please don’t feel you have to stand there and let them crash into you. There is no shame in dropping out of the firing line. You preserve your mental health by stepping away for a moment.
Ducking away is a temporary avoidance, but you are acknowledging the issue. You know it is there, but you are choosing to deal with it at another time when maybe you have more energy and resilience.
Surf Strategy #3: Ride the wave
You ride the wave, which takes great skill. You need to read the currents and the flow of the water around you and make sure you are in the right place to catch a wave just as it breaks. You can improve your chances with a lot of observation and knowledge of the ocean and the layout of your beach.
You may be familiar with the “ride the wave” metaphor concerning emotional and mental health. You either ride the wave, or you are overwhelmed by it. Some writers present it as a binary distinction, and I don’t think this reflects the depth and complexity of human emotion.
In my surf this morning, I had to deal with over a hundred waves. I got a good ride on four. I had numerous aborted attempts, and for all of the rest, I had to use the other two strategies.
Jumping over and ducking under waves is all part of the surfing experience. If you do not master those two skills, then you will never ride a wave.
Riding the wave of your emotions is not possible every hour of every day. I am not going to give you handy tips or a prescribed method in a “one size fits all” format. It takes study and practice.
Study the waves of emotion throughout the day. Notice when you are reacting to circumstances, when you are not speaking your truth, when you are filled with anxiety or crying uncontrollably, or whatever your emotional manifestation may be.
Practice noticing the emotions, acknowledging them and then carrying on with the day.
With study and practice, you will learn to recognise the waves when they are building up. You will learn to recognise the instinctual fight, flight, or freeze reflex and make a considered decision on the path to take.
You have no hope of riding the wave if you do not study the patterns. Develop the self-knowledge that will help you see the wave beginning to break. You will then be giving yourself the opportunity to ride the wave. You will not always be successful, and you may have to abort at any time, but without the willingness to explore your inner life you will remain standing there being battered and bruised until the tide goes out and you drag yourself off the beach.
Your curiosity and engagement with your emotional life are what puts you in the right position to kick-off and ride down the glassy front of the wave and feel the power behind you driving you forward.
Your emotional struggles become the driver of your life rather than the obstacle you can never hope to surmount.
Finally, recognize when you’ve left the beach.
In terms of mental and emotional health, you need to accept that “riding the wave” is a metaphor for a state of wellness. You are functioning joyfully in life alongside the waves of difficult emotions. The two cannot be separated. If you feel you have eradicated the difficult emotions from your life, then you have probably left the beach.
We desensitise ourselves in many ways using alcohol, drugs, sex and other addictions. That is leaving the beach.
Some people retreat into an intellectual bubble to avoid the emotional world. That is leaving the beach.
There is a phenomenon called ‘spiritual bypass’.
a “tendency to use spiritual ideas and practices to sidestep or avoid facing unresolved emotional issues, psychological wounds, and unfinished developmental tasks” Fossella & Welwood 2011.
That is leaving the beach.
An Invitation
I invite you to join me on the beach, to overcome your fears and dive into the water. To run joyfully into the surf and experience fully the energy of being alive.
I would love to hear your responses to this imagery. I don’t feel I have any answers to the specific issues you are facing in your life, but I write about things that are meaningful to me in the hope that they light a spark in others. Your responses then spark new creations from me, and so we bring life into the community. | https://medium.com/invisible-illness/what-surfing-taught-me-about-grief-and-guilt-a80799a2a5c0 | ['John Walter'] | 2020-09-20 17:17:51.346000+00:00 | ['Grief', 'Guilt', 'Personal Development', 'Mental Health', 'Self'] |
Science needs the Breaking of Symmetries | Photo by mahdis mousavi on Unsplash
Those trained as physicists strive to see the world in terms of symmetries and processes that break symmetries. It’s a meta-principle that guides their questioning.
This is very different from how many of us are taught physics (I guess in high school). We come to know physics as consisting of laws that describe the behavior of inanimate (or is it indifferent?) objects. These laws are described as expressions of equalities (i.e. equations).
But laws like Newton’s f=ma or Einstein’s e=mc² are derived from a higher meta-principle. These principles are known as conservation laws. Energy, momentum, and angular momentum are all conserved.
Conservation laws, however, are also derived from a higher principle. A conservation law implies that something remains the same as the situation under study changes. Energy conservation implies that there is something that does not change in time. In short, a symmetry.
This idea that laws are derived from conservation principles that is derived from symmetries is at the core of the study of physics. The laws of physics are only authoritative as long as they don’t violate a higher principle. Ultimately, that principle is all about symmetry.
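A textbook illustration of that symmetry-to-conservation step (not from the original thread) is Noether's theorem. If the Lagrangian L(q, \dot{q}) of a system has no explicit time dependence (time-translation symmetry), the quantity
E = \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L
is conserved, because along any trajectory satisfying the Euler–Lagrange equation
\frac{dE}{dt} = \dot{q}\left(\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q}\right) = 0 .
For L = \tfrac{1}{2}m\dot{q}^{2} - V(q), E is just the familiar kinetic-plus-potential energy, so "the laws don't change with time" directly yields "energy is conserved."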
In fact, scientific method itself is all about the quest for symmetry.
Science: the Quest for Symmetry | 3 Quarks Daily by Yohan J. John
Reproducibility implies that a result of an experiment remains the same regardless of who performs it. Predictability implies the effect of a system remains the same given the same subset of inputs.
We can think of nature as having 3 kinds of things. Things that are (1) inanimate that behave in a deterministic manner to physical interactions, (2) alive and execute algorithms to remain alive and (3) alive but are conscious of their aliveness.
Each kind has its own conservation laws and thus its own symmetries. The symmetries of (2) and (3) can be understood in terms of the concept of individuality.
The Fluid Nature of Individuality
The last kind, the conscious kind, the kind with sophisticated brains are driven by homeostasis and the drive to conserve a complex milieu of selves.
Homeostasis and a Definition of Intelligence
Morality, described by Haidt as consisting of kinds such as fairness, loyalty, authority, and sanctity, is based on the conservation of a particular feature of society. A kind of feature whose utility is also the preservation of society.
The game that is played that is most evident in philosophy but also pervasive in science is the preservation of a philosophical take or a model of reality. All too often, the thinking style of a scientist is molded by the education he received in his discipline.
To succeed in science (i.e. get tenure) one has to conform one’s practices to the prevailing fashionable approach at the time. It is only the lucky few (usually the financially secure) who have the freedom to explore truly revolutionary new approaches.
As a consequence, a majority of science is practiced like it is a performance art. Most papers exist because they exhibit the technical and intellectual skill of their authors and not for any groundbreaking new insight.
This state of affairs has resulted in immense tunnel vision that is shared by researchers in many fields. There is a general lack of understanding of the big picture. When pressed to critique their approach to that of an adjacent field, one is left with a shallow explanation.
Too many scientists are too involved in the trenches to even understand why they are involved in the war. It’s become a force of habit that perhaps when they retire they will write something about the big picture.
Even worse is this accepted idea by many that a big picture doesn’t even exist! “Shut up and compute” is the modus operandi.
But what then is the big picture? Symmetry and therefore conservation.
If we are the only existential proof of a being that is able to attempt to understand reality then we have the unimaginably huge responsibility to ensure that we make it past the great filter. en.wikipedia.org/wiki/Great_Fil…
What you will find in science is that there are many who have staked out the hill that they plan on defending to their death. The hill that is chosen is a consequence of personality, educational upbringing, and their field of inquiry. All too often, that hill is more like a cult.
Let’s say you’ve invested for generations in a field of inquiry that assumed the world was flat. Then along comes this chap who argues a non-intuitive description of reality: that it’s actually a globe. Those people are standing upside down from you at the other end.
The people who made the investment are thus in a very tight spot. Either throw away all investment and start from scratch or double down throwing more into the money pit. History has shown that the more prevalent option happens to be the latter option.
The other behavior that is characteristic of humans is that projects with simple explanations tend to get the most funding.
Documentary follows implosion of billion-euro brain project
One of the great innovations of civilizations is excess capital. Funding without conditions or even credit is the great catalyst for technological innovation. Unfortunately, what gets funded is what is the present norm of understanding.
What I’m trying to get at is that the scientific projects that do get funded are the kinds where the model of reality matches the model of reality of the people who control the funding. Often it is the case that that model is either outdated or wrong.
It tends to be outdated because social pressures tend to preserve the status quo. It tends to also be wrong because the status quo tends to be several decades behind the leading edge.
A glaring example is research on complex adaptive systems that originated in the 1980s by the @sfiscience . Half a century later ideas of complexity have yet to be accepted or even known by the general science community.
A majority of science is conducted using a naive reductionist approach where progress is measured by incremental improvements in the small. Constantly directed exploitation is favored over directionless exploration.
One cannot judge performance art if the movements are too Avante garde. Yet innovation happens elsewhere because elsewhere does not constrain exploration.
Science makes progress by finding new symmetries but to do so requires the breaking of existing symmetries. | https://medium.com/intuitionmachine/science-needs-the-breaking-of-symmetries-25bffe79bdd1 | ['Carlos E. Perez'] | 2020-12-22 14:15:00.905000+00:00 | ['Science', 'Science Policy'] |
New Writers on Medium MUST keep writing and publishing | New Writers on Medium MUST keep writing and publishing
If you’re looking to make any sort of income with your writing on Medium, you have to do the hard yards
Photo by Max Ilienerwise on Unsplash
My ‘career’ started on Medium in January 2019, when I wrote my very first article. Surprisingly, it was picked up by The Writing Cooperative. At the time, I didn’t realise I could publish my work in publications, so it was a pleasant surprise to get a message from them asking to add my article to their publication.
It wasn’t until May 2019 when I began to fully understand how Medium works by paying its writers, that I joined the Medium Partner Program. That’s when I started to think I could actually make money from my writing and possibly even make a living out of it.
Fast forward to July 2020 and my last monthly earning was US$1.80. Not exactly a sustainable salary for a writer with three young kids living in Australia. To be honest, I haven’t made enough money to sustain a writing career solely on Medium, and to be even more honest, I haven’t exactly been trying too hard to accomplish that career.
What I discovered during my year and a half of writing on Medium is that there was one significant thing that affected my earnings.
You might have already guessed it, but that one thing was how many articles I published on Medium. It all comes down to simple output, at least for new writers who don’t have a huge subscriber base or for those of us who aren’t famous people.
My highest earning month was from late August to late September 2019, when I earned US$29.63. During that month I published eight articles, which isn’t a huge amount, compared to many writers on Medium, but one of the articles earned US$9 that month, which was a pretty big deal for me (and still is).
My last monthly paycheck of $1.80 is a result of me not writing for over two months. No, let me rephrase that, it is a result of not publishing for over two months. I was surprised to still make money when I wasn’t writing, but that’s the beauty of Medium because your articles can keep earning money many months or even years after you’ve published them. I’ve still been writing during those two quiet months — but not publishing my drafts because I don’t feel proud of them.
Perhaps it’s the perfectionist in me, but I find it really hard to finish writing articles. I get about halfway through and then I go back to the beginning and start editing the mistakes, or rephrasing paragraphs. Then I get side-tracked by another shiny new topic that’s trending on Medium and think, oh, I can write about that, too!
Putting my perfectionist tendencies aside, the key takeaway for new writers on Medium is to be consistent. Keep writing, and keep publishing. One of the most helpful pieces of advice I received was from Shaunta Grimes, who said that new writers should create their own publication, rather than always trying to get their work published in ‘big-name’ Medium publications.
I listened to her advice and started Mama Write, a publication about being a mum of three young sons living in Australia and trying to build a writing career. I managed to get 16 subscribers and those subscribers are the ones who gave me the time of day to read my articles. I probably made my first paycheck off them.
While my career on Medium hasn’t exactly turned out the way I expected, the past year and a half has taught me a lot about writing online and how much effort and consistency is actually required to achieve a decent result. If money is your key objective, then you need to put in as much effort into writing on Medium as you would into your regular full-time day job.
That effort was too much for me as a mum of three kids, working at home for my family’s concreting company, as well as doing part-time educational sales work. I couldn’t publish consistently, but that didn’t stop me from writing every day. I wrote drafts which will probably never be published on Medium. I wrote drafts that one day may be tweaked into a publishable article.
I also wrote outside of Medium. I did a bit of copywriting, which pays a lot more than Medium, but it’s also not as fun as writing on Medium. The writing that is most fun for me is when I write my middle-grade historical fiction manuscript. I’ll stay up until the early hours of the morning drawing character costumes and researching pioneer Australia. I’ll tell myself I’m only going to write one more chapter and then go to bed, but stay up until 3am and only stop because one of the kids has woken for a toilet run. That manuscript is my true passion, but it’s still writing, one way or another.
In the end, it comes down to asking yourself: What do I want to achieve from Medium?
Do you want to make a writing career out of Medium? If so, you need to treat it like a full-time job. Make a schedule, stick to your routine and get those interesting or even quirky articles out there to be read.
Perhaps you’re like me and want to write part-time because you just don’t have the time to write every day or to publish articles regularly. You’re happy to make enough money each month to cover the cost of your Medium subscription, and hopefully one day you’ll write that article that takes off and launches your writing career.
But while you’re waiting for that one day, keep writing today. Keep publishing today, because once you stop it’s so hard to start up again. However, if you’re like me and you need to take a two-month break to recharge, reset your positive headspace and take time to be with your family, then that’s okay, too.
If your passion is writing, you already know you’ll never stop doing it, no matter what form it’s in, or how much you get paid for doing it. It’s finding that motivation to keep going, even when you feel like it’s too hard or no one is reading your work. Because one day you can look back and be proud of all the work you’ve put in and what you’ve accomplished. | https://medium.com/mama-write/new-writers-on-medium-must-keep-writing-and-publishing-5159901b369b | ['Lana Graham'] | 2020-07-09 12:18:55.554000+00:00 | ['Publishing', 'Writing Tips', 'Consistency', 'Careers', 'Writing'] |
Improve Your MongoDB Performance Using Index Selectivity | Performance Experiment
Before we start any experiment, let’s ensure the setup is correct. There are no indexes created for the collection yet except the default _id field.
The experiments I would like to perform here are:
Experiment 1. Evaluate query performance using the destination and stop indexes
Experiment 2. How index selectivity affects compound indexes
Experiment 1. Evaluate query performance with created indexes
Before we start the query and evaluate the performance, let’s create an index for the stop field.
Use the command below to create an index for the stop field.
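In the mongo shell, an ascending single-field index on the stop field would look something like this (the collection name bookings is an assumption based on the queries later in the piece):
// single-field index on the stop field (collection name assumed)
db.bookings.createIndex({ stop: 1 })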
Next, we will see the performance where we query the bookings with destination “Gerlachmouth” and more than one stop. From the screenshot below, we can see that the query performance is not efficient, as we are scanning through 262K index keys and documents, and in the end, it only returned 12K documents.
We only need 4% of what we examined, and this is not cool.
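For reference, the query being profiled would look roughly like this in the mongo shell; the destination value and the stop condition come from the article, while the collection name is an assumption:
// bookings to "Gerlachmouth" with more than one stop; executionStats shows keys and documents examined
db.bookings.find({ destination: "Gerlachmouth", stop: { $gt: 1 } }).explain("executionStats")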
Now let’s try indexing the destination field. Use the command below to create an index for the destination field.
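Similarly, a single-field index on the destination field can be created along these lines (same assumed collection name):
// single-field index on the destination field
db.bookings.createIndex({ destination: 1 })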
Query Performance using Destination Index
The query performance when we use the destination index is far better than with the stop index we created above. Refer to the screenshot above. Now, we’re only examining 25K index keys and documents, which is almost 10 times fewer than when using the stop index.
If you think about it, it is acceptable, and pretty common sense too, when we query using destination. The destination is way more specific compared to whether the number of stops is more than one. This is what we call index selectivity in MongoDB.
It means the higher the index selectivity, the easier it is for MongoDB to narrow down the query results, which translates into a significant improvement in performance. In this example, the execution time for the query is 7.6 times faster than when using the stop index.
Experiment 2. How index selectivity affects compound indexes
Up to this point, you might be thinking that we can solve this index selectivity problem by using a compound index. We can create a compound index with both the stop and destination fields.
Let’s try it out by creating a compound index using the command below.
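Such a compound index, with the stop field first as the name stop_destination suggests, would look roughly like this (collection name still assumed):
// compound index: stop first, then destination
db.bookings.createIndex({ stop: 1, destination: 1 })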
Query Performance Using Compound Index stop_destination
From the screenshot above, the performance and execution time look excellent compared to the indexes we created in Experiment 1. However, it is a little odd that we’re examining four more index keys than the total documents returned.
You might be thinking it’s only four extra keys. But I have seen a scenario where 90K index keys were examined and only 10K documents returned. This isn’t ideal, and we can fix it using the index selectivity theory.
Let’s move on by creating a compound index based on the strength index selectivity. We start with the strongest selectivity.
We can create an index starting with destination, followed by stop, using the command below.
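A sketch of that command, ordering the fields from most to least selective (collection name assumed as before):
// compound index ordered by selectivity: destination first, then stop
db.bookings.createIndex({ destination: 1, stop: 1 })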
From the screenshot below, we achieved the result where the number of index keys examined equals the number of documents returned. Although it is just a very small improvement that seems negligible in this example, it’s good practice to order compound indexes according to index selectivity. | https://medium.com/better-programming/improve-your-mongodb-performance-using-index-selectivity-17a3747ea437 | ['Tek Loon'] | 2020-08-18 14:28:21.150000+00:00 | ['Programming', 'Mongodb', 'Software Engineering', 'DevOps', 'Database'] |
5 Reasons Why You Need a Planner | I survived all of high school without a planner. I started using one and my life changed.
I used to think that using a planner was useless. I thought to myself, Why do I have to write everything down if I could just remember it. Remembering everything usually worked for me, but there were some instances that a friend reminded me about a homework assignment, I saw projects posted on SnapChat, and I didn’t know there was homework until the day it was due. Using a planner would have saved me from these dilemmas, but I didn’t use one.
I started using a planner over the summer, and there have been several improvements in my organization. Based on my experience, I’m listing five reasons why having a planner is a life-saver and game-changer.
5 Top Reasons:
1. You won’t forget what you need to do.
This seems like a no-brainer, but it’s one of the main reasons why you should use a planner. I wanted to be productive over the summer to reduce my workload for the next semester in college. By listing activities that I needed to accomplish, I remembered what I needed to do day-by-day, which helped in cutting my workload.
For example, I needed to accomplish a preparatory chemistry course before the upcoming fall semester. I started doing this prep course three months before it was due. Then, I proceeded to list “CHEM PREP” on my planner every day, which forced me to complete at least three chemistry lessons per day. Because of this, I do not need to cram all of the lessons a week before the prep is due since I spaced out the lessons throughout the summer. This method made me remember what I needed to do every day, which is extremely important.
2. You can see into the future.
Most, if not all, planners have a general monthly calendar. I use a planner from TUL; it’s efficient and cheap, and I highly recommend it. A monthly calendar shows all the days in a span of 1–2 pages. An example of one is shown below:
With a monthly calendar, I list all the significant days and deadlines so that I’m aware of it. By writing these dates down, you can work around your schedule and even plan days. For example, if I have a big Calculus exam on January 24th, I would write it down on my calendar so I can see the deadline visually. Then, I would most likely start studying two weeks before the test date, so I would write on January 10th to start studying for Calculus. This strategy works for most activities that have a deadline!
3. You can prioritize what you need to do.
Because you can visually see what you need to do, you can prioritize your tasks for the day. For example, I will make a sample list below:
Monday:
Buy groceries
Pay rent
Work out
Study for Psychology
Do Calculus Homework
Based on the sample list above, I would most likely pay rent and then buy groceries first, since I see them as the most urgent activities I need to accomplish. Paying the rent on time saves me money on late fees and other expenses, so I see this activity as extremely urgent (plus I also need to keep paying if I want to live somewhere). Then, I would buy groceries after paying rent since it sets me up for the entire week (and without groceries, I won’t have food, which is an issue). After accomplishing these urgent activities, I would do my calculus homework next because I would not be able to fully concentrate on studying for psychology while worrying about my calculus homework. Afterward, I would work out, adjusting the time depending on how much time I have left to study for psychology. If I have to spend more time studying, I will work out for a shorter amount of time.
As seen in the example, I used the strategies of “urgent versus not urgent” and “busy work or not busy work” whenever I plan. If I flag an activity as “urgent,” I try my best to do it before the other activities I need to do (e.g. buying groceries). If I flag an activity as “busy work,” I try my best to get it out of the way AFTER I finish the urgent activities (e.g. calculus homework = busy work). Busy work takes away my focus from big tasks (e.g. studying for psychology), so I try to get it out of the way. For me, urgent work and busy work are usually the same thing, but this strategy might work differently for other people. Try it out!
4. You will save time.
While you write all the things you need to do, it starts to feel like you’re running out of time. Fear not! You’re actually saving time.
By listing the things you need to do, you become more aware of how much time you need to spend to do these things. For instance, if I need to study for an exam and I know it will take me at least two hours, I will only go to the gym for 30 minutes so I can have time for studying. By only going to the gym for 30 minutes, I’m saving time for myself for more important tasks. Additionally, you become more organized, so you spend less time trying to sort things together since you already have your to-do-list written down. For example, instead of figuring out what you need to do for the day (which takes up time), all you need to do is look at your planner and voila! Lastly, by being aware of what you need to do, you save yourself some time since you do not worry about tasks you are missing or activities you forgot you needed to do.
By being organized, you’re saving more time for yourself!
5. You can feel productive.
Do you know that feeling when you cross out something from a to-do-list? Another perk of using a planner is experiencing that feeling of relief mixed with pride. Whenever I cross out an activity, I feel accomplished for the day, which makes me feel more productive. When I feel more productive, I have more energy to finish my to-do-list for the day or whatever I have listed on that planner. This chain reaction is another reason why you need to use a planner.
Wrap it up…
Based on these five reasons, using a planner has made me more productive, organized, and successful in completing my activities for the day. I highly recommend one if you’re extremely busy or have a lot of responsibilities (job, parenting, studying, pets, etc.), since it will keep you organized. Take this from a person who hasn’t used a planner since middle school. (I only used it because we were required to.)
Time to run to your nearest office supplies store! | https://svph300.medium.com/5-reasons-why-you-need-a-planner-107226028715 | [] | 2019-08-12 03:18:49.815000+00:00 | ['Life', 'College', 'Education', 'Schools', 'Productivity'] |
Podcast Episode #3: AI Inquiry with Janelle Shane, Optics and AI Research Scientist | Janelle Shane is an optics and artificial intelligence research scientist, as well as the author of the AI Weirdness blog, where she writes about the sometimes hilarious, weird ways that machine learning algorithms get things wrong. She received her PhD in electrical engineering — photonics from UCSD. In this episode, she shares about her current research, her educational background, and her writing endeavors and perspectives on AI.
As only a middle schooler, Janelle became very interested in electrical engineering and optics thanks to her aunt, then an optics professor at The Ohio State University who ran a laser lab. The fun of the lab helped to inspire Janelle’s further academic pursuits in electrical engineering and optics.
Though she always had many interests, Janelle primarily stuck with the field of electrical engineering throughout her education, proceeding to obtain an MPhil in Photonics from the University of St. Andrews as well as a PhD in electrical engineering — photonics from UCSD. During her experience at UCSD, she designed microscopic lasers with the purpose of sending information at faster speeds on computer chips. Her PhD thesis at UCSD involved conducting simulations with “coke can lasers,” where a laser material is encased in a metal shell while being amplified.
Janelle now works at Boulder Nonlinear Systems, an optics company that specializes in non-mechanical beam steering, which moves or shapes light without physically moving parts like a mirror. Janelle explains that the field of optics “covers a really broad bunch of areas” such as physics, chemistry, and electrical engineering. It essentially can be summed up as the science of light. At her company, their current goal is to get quick updating speeds to keep up with processes such as brain activity. Janelle and her collaborators “are using computer generated holograms…to zap individual brain cells in the brain of a mouse” in order to figure out how different brain cells interact with each other. Optics is used to read the signals that come off the brain cells because a lot of them are engineered to fluoresce when activated.
As for how she integrates AI in this research, Janelle describes that “AI is a useful approach if you don’t know much about the problem you’re trying to solve.” For instance, she found AI to be useful when, among very many possible shapes, she needed to identify what shapes might be useful when breaking apart molecules in a particular way. In this instance, AI helped her to recognize the pattern of simply adjusting the power of the laser.
When attempting to apply AI to her projects, however, she often discovered that AI really wasn’t necessary to reach the desired result efficiently. “AI really is, in some cases… an approach of last resort.” One major concern for the problems with AI is that we can’t always tell how the AI got to the answer it did. She goes further to state that “the danger of AI is not that it’s too smart, but that it’s not smart enough.” We shouldn’t rely on AI nor assume it’s perfectly accurate. Especially since AI can and often picks up on human bias and takes advantage of programming loopholes which provide inaccurate or unintended results. Janelle explains that these potential errors or blind spots of AI make it more essential for people to incorporate human judgment and use discretion when designing and using the results of AI models.
On the topic of writing, she originally started her AI weirdness blog to document her electrical engineering material and projects — even when tests failed, the results could still be interesting and cool to document. She then branched out to write about and share funny or weird outcomes of different AI experiments she conducted.
One of her favorite, unexpected examples of this experimentation with AI weirdness is human collaboration with AI to create something silly. For example, she might use AI to come up with strange combinations of words and then human artists will draw those words to result in some humorous drawings.
Her general advice for students interested in AI is to utilize existing resources or programs such as runway ML and get started working with AI in our own time.
Co-written by Emily Zhao | https://medium.com/ds3ucsd/podcast-episode-3-ai-inquiry-with-janelle-shane-optics-and-ai-research-scientist-6173b3054805 | ['Derek Leung'] | 2020-05-16 21:40:16.212000+00:00 | ['Optics', 'Data Science', 'Artificial Intelligence', 'Interview', 'Podcast'] |
How to Write for the Big Self | Important note: Your piece should offer a clear benefit to the reader. It can be a practical step they can take, an inspirational story, reassurance that they’re not alone — or whatever else they may need to move towards their Big Self.
Whatever it is, make sure they’re better off after reading your article than before.
The best way to get a feel for what we’re looking for is to read what we published. If you aren’t sure where to start, try these:
What we aren’t looking for
In short, we don’t publish surface-level content that already floods the Internet.
What we care about the most is our readers’ trust. We go the extra mile to make sure our content isn’t misleading, manipulative, or overly simplistic.
Please don’t send us:
Clickbait — don’t make promises that you can’t fulfill in your writing. This applies to headlines and subheadings, as well as avoiding other tricks writers sometimes use to keep readers engaged without offering value.
— don’t make promises that you can’t fulfill in your writing. This applies to headlines and subheadings, as well as avoiding other tricks writers sometimes use to keep readers engaged without offering value. Absolutist pieces — in the realm of human psychology, few things work the same way for everyone. We don’t publish pieces that claim to have found “one secret trick” to solve the puzzle of life.
— in the realm of human psychology, few things work the same way for everyone. We don’t publish pieces that claim to have found “one secret trick” to solve the puzzle of life. Lifehacks — again, we’re not interested in surface-level solutions that don’t address deeper, underlying problems. 90% of the time, this means no “lifehacks.” We may make an exception if you can talk about them with nuance and relevant perspective.
— again, we’re not interested in surface-level solutions that don’t address deeper, underlying problems. 90% of the time, this means no “lifehacks.” We may make an exception if you can talk about them with nuance and relevant perspective. Unsupported claims — if you cite statistics, numbers, or theories, we want to see the source they’re coming from. We don’t publish generalized claims that are extrapolated from your individual experience.
Our Style Guide
The best way to summarize our style guide is this:
We want your writing to be as simple as possible, without compromising on the depth of your thinking.
Going a bit more into detail, here are a few guidelines we ask you to follow:
Support your claims. Whether it’s citing research, referring to your experience, or someone else’s work — make it clear where your claims come from.
Whether it’s citing research, referring to your experience, or someone else’s work — make it clear where your claims come from. Check your writing for grammar and spelling errors. While we’re happy to do some light editing for you, we expect you to edit your piece first. The least you can do is run it through Grammarly — even if it’s just the free version.
While we’re happy to do some light editing for you, we expect you to edit your piece first. The least you can do is run it through Grammarly — even if it’s just the free version. Adjust your voice. We want to sound conversational and relatable. But we’re also not afraid of using big words. Our readers are savvy and can handle diverse language, provided that you’re not just using it to sound smart! 😉
We want to sound conversational and relatable. But we’re also not afraid of using big words. Our readers are savvy and can handle diverse language, provided that you’re not just using it to sound smart! 😉 Use jargon in moderation. Unlike many self-improvement publications, we don’t ban jargon. We understand that, sometimes, specialized language can be helpful to explain a psychological theory or concept. If you need to use it, please do so in moderation — and explain those less obvious words to the reader!
Unlike many self-improvement publications, we don’t ban jargon. We understand that, sometimes, specialized language can be helpful to explain a psychological theory or concept. If you need to use it, please do so in moderation — and explain those less obvious words to the reader! Keep the formatting simple. Bolding and italicizing only make sense if you don’t overuse them. The same goes for bullet points and quotes. For the title and subheadings, use the “big T” formatting in the Medium editor. To capitalize the title correctly, you can use this simple and free tool.
If you include a subtitle, format it with the small “T” — just like we did here:
How your title and subtitle should look.
Use five relevant tags. Tags help your story to be seen by the right audience. You can select your tags by clicking on the three dots in the top-right corner of your draft and selecting “Change tags.” If you don’t choose your tags, we will. 😉
Tags help your story to be seen by the right audience. You can select your tags by clicking on the three dots in the top-right corner of your draft and selecting “Change tags.” If you don’t choose your tags, we will. 😉 Keep it on the longer side. We don’t have strict guidelines on the word count of our stories. That said, pieces under 4-minutes reading time are less likely to be accepted, simply because they usually lack depth.
We don’t have strict guidelines on the word count of our stories. That said, pieces under 4-minutes reading time are less likely to be accepted, simply because they usually lack depth. We accept both published and unpublished stories. If you already published the piece on your own, we’ll still happy to consider it. You can send it over by email (if you never published with us before) or by adding it to the publication just as you would an unpublished draft (if you’re already added as a Big Self writer).
If you already published the piece on your own, we’ll still happy to consider it. You can send it over by email (if you never published with us before) or by adding it to the publication just as you would an unpublished draft (if you’re already added as a Big Self writer). We accept both locked and unlocked stories. As per the recently updated rules, both locked and unlocked stories are now eligible for Medium distribution. The advantage of publishing a locked story is that you can be paid for it through the Medium Partner Program. At the moment, the Big Self isn’t able to pay you for your work upfront.
To publish your work with us, you first need to have your own Medium account. It’s free (unless you want to be a Medium Member) and it only takes a few minutes to set up.
How we’ll edit your story
We expect you to do most of the editing before you submit your story. Once again, Grammarly is your friend.
That said, we may still make the following edits before publishing your piece:
Tweak your title, subtitle, and subheadings, as well as change the tags if we believe that this will contribute to the exposure of your story.
Change the feature image to better fit our branding.
Do some light copy-editing, such as rephrasing a wordy sentence, breaking down a long paragraph, or deleting redundancies.
Add a short call to action at the end of your piece (this will not affect your story’s chance for distribution).
Our turnaround time is usually up to 3 working days. We may schedule stories for publishing during the weekend, but we don’t look at new submissions on Saturdays and Sundays. (trying to practice what we preach here by setting boundaries between work and rest!)
How to submit your piece
If it’s your first time submitting to Big Self, send an email to Marta at [email protected]. In the email, please include:
“Big Self submission: [the title of your piece]” as the email subject;
Your name, link to Medium profile, and a sentence or two about yourself (we like to know at least a little bit about our writers!);
Link to the Medium draft you’re submitting + a 1–3 sentence synopsis of the piece;
Your portfolio OR 1–2 links to your published work (relevant to the Big Self topics).
Please only send us complete drafts, not article proposals or outlines.
After you’ve been added as a writer and we published your first piece, you can keep sending new ones directly via Medium. There’s no need to email us about each submission.
Simply go to your draft, click on the three dots in the top-right corner and select “Add to publication” → “Big Self.”
Any other questions?
If you have questions or suggestions about publishing with us, we want to hear from you!
Send an email to [email protected] and I’ll reply as soon as I can.
Thanks for your interest and we’re looking forward to your drafts! | https://medium.com/big-self-society/how-to-write-for-the-big-self-9fe4e7415b9f | ['Marta Brzosko'] | 2020-11-30 11:58:29.427000+00:00 | ['Writing', 'Submission Guidelines', 'Publication', 'Writing Tips', 'Big Self'] |
The Chicago Race Riots, July 1919 | BOOKS
The Chicago Race Riots, July 1919
“Destruction is like a snow-ball rolled down a Hill, for its bulk increases by its own swiftness and thus disorder spreads.”― Peter Ackroyd
White gang looking for blacks during Chicago riots of 1919. Public domain. Source: The West Virginian.
One of the best ways to find good books on the cheap is at late spring garage sales near college campuses. Graduating students are oft in a hurry to discard some of the belongings they accumulated. If you’re lucky, and it happens, you’ll find some real gems for pennies on the dollar.
Last night I started re-reading a short volume called The Chicago Race Riots, July 1919 by Carl Sandburg. It was one of a dozen books I picked up for a dime each from a Hamline grad when we were living in the Midway in St. Paul.
If the Sandburg name sounds familiar, it may be because of the two Pulitzer Prizes he won as a poet. Or it may be because of the Pulitzer he won for his renowned biography of Lincoln. I had not realized at the time I picked up this volume that he was initially a journalist. This printing includes a preface by Ralph McGill and an intro by the legendary Walter Lippmann.
I fetched the book off my shelf after reading a section of Don’t Know Much About History pertaining to the post-WWI South. It’s painfully depressing to read about the raw treatment blacks have received at the hands of whites.
In the Deep South cotton was king, until the boll weevil came along. Few of us today realize how devastating the boll weevil infestation was. If you’re like me, you may have thought boll weevils were a problem cotton growers had always had to contend with.
Chicago Daily News. Public domain.
The weevil had been a plague in South America but over time came north through Latin America and Mexico to become a major problem after the First World War. The way this critter works is that it lays its egg in a cotton boll. The newly hatched baby weevil then chews up the inside a bit and thereby kills the boll. Farms that produced thousands of bales of cotton were soon producing hundreds of bales. While the Roaring Twenties roared up North, the Southern economy was in a tailspin.
This, combined with Jim Crow laws, now set in stone, led to an exodus of workers seeking employment in Northern Rust Belt cities.
This, however, produced another problem. Racism in the North wore a different face. If you were black, you couldn’t live just anywhere you wanted. The Chicago black population had been 50,000 at the beginning of the century, but with this influx of families thru the decade there were 125,000 blacks in the Windy City by 1919. (It took more than four decades to place laws on the books that would permit a black family to choose where they could live.*) The lack of housing, Chicago politics and post-war psychology all contributed to the events that happened in July 1919.
For blacks who stayed in the South at this time, prospects weren’t exactly comforting. Ralph McGill, in his preface to this book, cites three incidents. In Blakely, Georgia, on April 5, 1919, Private William Little returned to his hometown after the war via train. He was “met by a band of whites who ordered him to remove his uniform and walk home in his underwear.” When he continued to wear his uniform (because he had no other clothes), he was found dead, “his body badly beaten, on the outskirts of town. He was wearing his uniform.”
A few weeks later, in Shreveport, Louisiana, a train was held up by an armed mob in order to lynch a black man who had written a note to a white woman. Only after he was shot did anyone seek to find out whether he could read or write. He could not.
Another example from two weeks after that was cited in McGill’s preface but it was so horrible I’m not even going to share it. The account begins, “Lloyd Clay, Negro laborer, was roasted to death last night.” A mob of 800 to a thousand men and women removed him from a jail…
McGill’s preface was to the re-issued 1969 publication of Sandberg’s account, 50 years after its original publication. He laments that race relations, in spite of the Civil Rights Acts of 1964 and 1965, were not wholly better. This (1969) was only a year after the assassination of Dr. Martin Luther King, Jr. and the race riots that shook more than 100 cities. | https://ennyman.medium.com/the-chicago-race-riots-july-1919-922065ac22d2 | ['Ed Newman'] | 2019-11-15 03:55:58.495000+00:00 | ['Racism', 'History', 'Books', 'Ideas', 'Culture'] |
Starting Over — What Now?. Grow to think bigger and better past… | Starting Over — What Now?
Grow to think bigger and better past your loss and insecurities.
Photo by Karim MANJRA on Unsplash
When you’ve experienced a loss from a job, a relationship, your health, or a situation you didn’t expect to lose, you’re left with a form of pain and hurt.
Giving yourself tough love discipline and jumping back on the saddle isn’t going to snap you out of your sad memories immediately, but will help speed things up for your healing and moving on.
Suffering beyond letting out the emotional cry for a week, is optional. You don’t want to stay immobilized from life any longer than you have to.
You have more fearlessness inside you than you may think, if you intently search without a map, for the internal treasure chest full of love.
And if you want a speedy recovery, you’re smart to challenge yourself to live every day joyfully. As you daily distance yourself from the thing that caused you pain that you can’t control, then you heal yourself back to good health and life.
When I was going through difficulties as a young adult, I had intentions to think happy thoughts just to be able to function whole, but I didn’t know how with all the messy life problems I faced.
Even with knowledge of visually minimizing negative thoughts down from a large to small television screen, and other available proven and effective methods, I couldn’t change my negative and unhelpful thoughts (…and maybe you’ve tried similar techniques that didn’t work?).
None of that helped, as the advice went in one ear and out the other. And then made me wonder if something was wrong with me, to add insult to injury.
Well… until I was ready.
I wish I could tell you that getting ready only took me a year or two, or a few seasons of wardrobe changes. Only two decades later can I tell you that there is no magic ingredient to maturing faster than when you’re ready.
Maybe you’re getting ready sooner or want to be there already?
There’s a process to it: slow cooking, learning, and enjoying along the way. Finding your own successful path boils down to your commitment to the results you desire (and what you’re willing to do). It’s the same general formula for any of life’s great successes.
That can be achieved by radical choice and actions, or a gradual shift like I did.
Personal Transformation From Within
Using the metaphor of your young self as a caterpillar, you can emerge into a transformed and whole living butterfly, if you have a personal growth desire. In your new beautiful form, you can fully enjoy the earth and also the sweet, colorful flowers.
There may be a transitional cocoon period, or two or more, where nothing significant may seem to be happening, and where you may experience silence and setback. That’s where most of the transformational growth happens.
Changing your mind’s ways takes transformation, that starts as an inside job.
You want to cut out the diseased parts that by no fault of your own, permeated your heart and soul, while growing your spirit to influence your heart, that can reprogram your mind.
Without these gentle and hidden parts on your side, you don’t stand a chance to fight against your old brain trying to protect you from hurt, and your ego spinning the truth.
How to Use Your Past Effectively
Let Go of Your Recent Past.
“The past is already gone, the future is not yet here. There’s only one moment for you to live, and that is the present moment.” — Buddha
But remind yourself what you learned...
Remember a time when something didn’t work out and reflecting back, that was the best thing that could’ve happened to you? Because otherwise you wouldn’t have moved onto be the person you are today.
But you forgot about those steps. So remind yourself so you can feel good again and maybe take a slightly different step.
Bring your journal out and write or scribble down all those present thoughts, whether it’s a messy or neat handwriting day.
Pour out your emotions and get new revelations in your thoughts and spirit.
If you have a health situation, pray for your healing. Are your thoughts making the matter worse? Calm your stress, worry, or anxiety, so you can reduce inflammation and weight gain.
Instead of pity, be grateful you have free time away from your calendar, to do whatever you want. Some people dream of having time back for themselves or a life of their own.
If you lost a job, this is an opportunity to pivot and go in another direction. If you wanted to do something big in your life, now is the time to reinvent yourself. Don’t go back to your old career you weren’t happy with. That’s thinking small. Think bigger, so you can get bigger.
For the first time, you’re encouraged to show up as your unique self. The opportunities are limitless as the world is dying for innovation, creativity, and new ways.
Discover Your Gains.
You’re best to move forward and think of what you’ve gained (over your loss). If holding onto past memories isn’t helping you grow, it’s doing no good. Period.
Because those negative tapes turn into an insecurity loop in which you become the only one suffering, spiraling downward, and your loss becomes additional lost time and energy (the most valued resources).
You’re best to turn your past painful experiences around. Your mess is your message. Use it to learn from, be better, and do better.
Ask yourself: what if you never experienced your past?
Answer: Then you would not be the person you are today, who’s kinder, more humble, and compassionate. The traits that get you further in this life.
Then, go and heal yourself. You do this from the courage born within you, and the life that was breathed in you. And believe there’s something better on the other side, because there is.
You won’t successfully open a new door, until the past one shuts. How that happens doesn’t matter, but be glad in the surprise ending.
Because there is greater wisdom in the world than you possess in your own mind. When you embrace, trust and believe you’re being led to a positive future, then you can feel empowered to take action now.
You can feel connected to your soul and spirit, and anything you want to happen has a chance to birth and take shape, despite all the chaos around. Your new desires and beliefs bring you to your best life.
Remind Yourself of the Good
Remember all the wonderful memories you had, that you forgot about. Did you celebrate? And commemorate what’s still working and who’s alive?
Listen to people who come with messages in seasons and for reasons to help lead you on your path and give direction. Sometimes these people come in the form of love relationships who aren’t with you anymore.
Believing and seeing they made you a better person is the greatest gift. You experienced the best that there was to offer from that relationship. Capture those good memories in your peaceful mind and a kind heart.
When you need to cry today, know that tomorrow you have a chance to start over. | https://medium.com/change-becomes-you/starting-over-what-now-7033769afd01 | ['La Dolce Vita Diary'] | 2020-12-28 14:59:43.811000+00:00 | ['Mindfulness', 'Personal Development', 'Spirituality', 'Mental Health', 'Self'] |
Is There a Writer in the House? | Photo by Debby Hudson on Unsplash
The descriptions of the room gave out first. Tight but detailed sentences imparting the tactile feel of the long mahogany bar, its intricate wood grain deeply etched with the hopes, the dreams, the pains, and the pleasures of an untold number of people who had ordered drinks there over the last century, were suddenly missing from the narrative. The dozen-or-so patrons on this late Friday night in an unspecified season did not at first notice the textual mise en scène was missing, but the barroom they inhabited was fading quickly. The author of the short story found herself blocked.
The red brick walls, preserved for decades to provide an old-world ambiance lost color, began to fade, then blurred into flat, grey, amorphous material lacking vivid, descriptive under-painting. The floorboards, hundred-year-old oak planks, lost their patina and gave no distinctive feel to the key location of the story. The people went on murmuring general small-talk, oblivious to their plight. After a time that the author could have mentioned but didn’t, the bartender raised up from serving a bourbon and a white wine to a distinctly generic couple who would have no significant parts in this story and said, “Hey folks, everything okay?”
What caused his unease was the very lack of any reason he should feel so. The author wrote nothing to give a segue into such a turn of events. His question brought all conversation to a halt as every character in the scene affected some vague look of confusion, befuddlement, or simple surprise. Everything was most definitely not okay. Something was very wrong, but no one could quite say what or how.
The bartender realized that his unease would be much more interesting given better explication, perhaps several paragraphs, but he had nothing to offer. Given a new prompt from the author, he inexplicably knew he owned this establishment. He learned he had worked countless years as dishwasher, busboy, waiter, bartender. He found out that he parlayed his lifelong experience and life savings of an unknown currency into buying and running the most popular bar in this city/ town/ village/ starship, but he had no further back-story, no web of relationships, no ethnicity or even a name.
For all he lacked, however, his strength lay in his being the cliché of a wise, older man. He knew what to do. “Is there a writer in the house?” he shouted.
The rock/ blues/ folk/ pop/ generically alien music that was, for the first time in the story, said to be blasting from the jukebox/ playlist/ hologram in the bar drowned out his strong and authoritative voice, so he pressed the button or flipped the switch or waved his hand or did whatever task would stop the music in whatever time period and technology level the story was set, and repeated his question loudly.
“Is there a writer in the house?”
“I’m a writer” came a voice from the back of the room. All the patrons turned, predictably, toward the voice. All, that is, except one man, — the man everyone called Phil. He remained hunched over his whiskey at the bar and would not look into the bar back-mirror, even though this might provide a plot twist and heighten the excitement of the story. The author left this character behind earlier and had no idea of his importance. She was unprepared for the idea that he could become a great vehicle for an interesting turn, so he remained a mere passing mention.
“We’re stuck”, the bartender said. “Can you help us?” The nondescript patrons of the still vaguely sketched barroom hummed with muddled but hopeful approval of the bartender’s question.
The voice hesitated a second before answering. “Yes, I can” it said. It was at this point the author decided that the voice was female, so the answer “Yes, I can” was described as low, strong, and feminine, like a steam-engine burning honey as fuel. The author liked that terrible phrase but knew it would have to go in the next edit. She decided to let it stand until she could, she hoped, come back later, and write something better.
The bartender merely nodded. The room fell silent, or more accurately the still undefined characters stopped saying the lines the author had not yet penned.
A woman walked forward from the shadows of the tables in the darkened back of the barroom. People still without the slightest hint of their place in this scene moved back to make way for the stranger. She walked into the dim light near the long, ancient, grooved and deeply stained mahogany bar (the author made a note here to revisit this description), and leaned against the corner nearest the bartender’s station.
She was harshly beautiful and full of life but carried the scars of some loss no one spoke of and no one knew about except Phil, who would prove to be more important in this story now that the author saw a way to put them together. She was brunette with inexplicable blonde and red highlights, blessed with a muscular build softened with womanly curves so as to be every feminine characteristic at once until the author could sort out what to emphasize.
“I can help” the woman said simply, gravely, or some other adjective, emphasizing the word “can”. “But you will all need to trust me.”
After a moment of poignant silence, the mutterings of the still amorphous crowd registered their acceptance of the vague terms the mysterious woman asked.
“Very well” she said quietly. “Let’s begin.”
A blackness overtook the bar, the patrons, the bartender, Phil, and the woman.
The author closed the document file and opened another, saving it with the title ‘Is There a Writer in the House — 2’.
She typed the first line, a brief opening naming the bar and the bartender, then leaned back in her chair. She wondered who the writer was, and if she knew Phil. | https://medium.com/pickle-fork/is-there-a-writer-in-the-house-e6990372c7d4 | ['Craig Allen Heath'] | 2018-08-19 00:19:45.365000+00:00 | ['Humor', 'Satire', 'Writers Block', 'Fiction', 'Writing'] |
Chat generator | Photo by Austin Distel on Unsplash
Is Artificial Intelligence(AI) making us lazy or efficient?
I think it’s making us efficient. Due to COVID-19, people interact with their peers via social media and text messages more often. For instance, my push notifications are up by 37%, and on the positive side I have reconnected with my school friends, old friends per se. However, this gave rise to the problem of constantly being stuck to my phone and suffering from Nomophobia and Phantom vibration syndrome.
Nomophobia — a term describing a growing fear in today’s world — the fear of being without a mobile device, or beyond mobile phone contact. The Post Office commissioned YouGov, a research organization, to look at anxieties suffered by mobile phone users. The study found that about 58 percent of men and 47 percent of women suffer from the phobia, and an additional 9 percent feel stressed when their mobile phones are off. The study sampled 2,163 people. Read more here. Phantom vibration syndrome — where you think your phone is vibrating but it’s not — has been around only since the mobile age. Nearly 90 percent of college undergrads in a 2012 study said they felt phantom vibrations. Get more insights here.
I was indeed almost always on the phone, and even while sleeping, I used to wake up hastily to check my phone as well, and being an introvert I love to sleep. So, I decided to make AI work for me.
With Recurrent Neural Networks (RNN), I decided to train my machine to generate automatic replies, trained based on my personal chats/replies/forwards, etc.
By contrast, there are many fledgling chatbots trained on humongous text corpora. However, they lack the human touch and the word and sentence formations one uses while texting. When sending short messages, for instance, fewer people write “See you Later”; my personal network uses “c u l8r”. Both convey the same message, but with different semantic and syntactic structuring.
Dataset :
I have 881 text messages, which are basically interactions between 11 different participants from India (most of them), Germany, and the USA. Due to the time differences, not all are active at once. Some are more gregarious, some more taciturn. So this data is a good mix of human interactions, full of the sarcastic and sassy replies that are typical of texting.
The main reason for training on this data is that it’s the most active group in my network and the one closest to me, since I do not want to sound like a bot when I am “replying”.
LETS TALK
# importing necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import nltk
import string
import unidecode
import random
import torch
import torch.nn as nn  # used later for the model (nn.Module, nn.Embedding, nn.GRU, nn.CrossEntropyLoss)
from torch.autograd import Variable  # used later in init_hidden()
After importing, we check whether a GPU is available, as an RNN (or any deep neural network) requires heavy computation and takes a long time on a CPU. Additionally, GPUs have advantages over CPUs: more computational units and higher bandwidth to retrieve data from memory.
train_on_gpu = torch.cuda.is_available()
if(train_on_gpu):
    print('Training on GPU!')
else:
    print('No GPU available, training on CPU; consider making n_epochs very small.')
This code will tell you whether you have a GPU or not. Even if you don’t have one, training will just take longer, but it still gives you results.
train_df = pd.read_csv("WhatsappChat.csv")
author = train_df["Content"]
This is what the data frame looks like; I did some data processing and exploratory data analysis to bring it into this shape. For the code to turn a WhatsApp chat into a similar pandas data frame, visit here. As I am training on the content of the chats, we will just be working on that column.
text = list(author)

def joinStrings(text):
    return ' '.join(string for string in text)

text = joinStrings(text)
# text = [item for sublist in author[:5].values for item in sublist]
len(text.split())

test_sentence = text.lower().split()

trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
            for i in range(len(test_sentence) - 2)]
chunk_len = len(trigrams)
print(trigrams[:3])
After joining everything into one large text, I train on trigrams, since most replies (at least in my network) are about three words long.
To train the RNN, I also need the vocabulary size, so that the generated replies don’t go out of bounds.
vocab = set(test_sentence)
voc_len = len(vocab)
word_to_ix = {word: i for i, word in enumerate(vocab)}

# making input and their respective replies
inp = []
tar = []
for context, target in trigrams:
    context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
    inp.append(context_idxs)
    targ = torch.tensor([word_to_ix[target]], dtype=torch.long)
    tar.append(targ)
RNN
It’s time we define our neural network class and see what it can do for us.
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, n_layers=1):
        super(RNN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.n_layers = n_layers

        self.encoder = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size*2, hidden_size, n_layers, batch_first=True,
                          bidirectional=False)
        self.decoder = nn.Linear(hidden_size, output_size)

    def forward(self, input, hidden):
        input = self.encoder(input.view(1, -1))
        output, hidden = self.gru(input.view(1, 1, -1), hidden)
        output = self.decoder(output.view(1, -1))
        return output, hidden

    def init_hidden(self):
        return Variable(torch.zeros(self.n_layers, 1, self.hidden_size))
Here is the RNN class, a standard object-oriented way to bundle the model’s layers and the methods that operate on them.
The forward() method is our forward pass through the network (embedding, GRU, then a linear decoder), and init_hidden() creates the initial hidden state for the GRU.
def train(inp, target):
    hidden = decoder.init_hidden().cuda()
    decoder.zero_grad()
    loss = 0

    for c in range(chunk_len):
        output, hidden = decoder(inp[c].cuda(), hidden)
        loss += criterion(output, target[c].cuda())

    loss.backward()
    decoder_optimizer.step()

    return loss.data.item() / chunk_len
Now we need to reduce the loss to get optimized replies and to check how well the model is doing. The code above runs one full pass over the trigrams, backpropagates, and returns the average loss per trigram.
import time, math

def time_since(since):
    s = time.time() - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)
This is a simple helper function to check how much time it takes to run the program, i.e. the time taken to train the model.
n_epochs = 50
print_every = 10
plot_every = 10
hidden_size = 100
n_layers = 1
lr = 0.015

decoder = RNN(voc_len, hidden_size, voc_len, n_layers)
decoder_optimizer = torch.optim.Adam(decoder.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss()

start = time.time()
all_losses = []
loss_avg = 0

if(train_on_gpu):
    decoder.cuda()

for epoch in range(1, n_epochs + 1):
    loss = train(inp, tar)
    loss_avg += loss

    if epoch % print_every == 0:
        # elapsed time, epoch, percentage complete, current loss
        print('[%s (%d %d%%) %.4f]' % (time_since(start), epoch, epoch / n_epochs * 100, loss))
        # print(evaluate('ge', 200), '\n')

    if epoch % plot_every == 0:
        all_losses.append(loss_avg / plot_every)
        loss_avg = 0
This is where the magic happens: for 50 epochs I let the machine read the chat and learn what the best reply to a message is. It also prints the loss every 10 epochs, along with the time taken so far.
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
%matplotlib inline

plt.figure()
plt.plot(all_losses)
This plots the losses, for those who, like me, appreciate plots and like visual representations more than raw numbers.
def evaluate(prime_str='this process', predict_len=100, temperature=0.8):
    hidden = decoder.init_hidden().cuda()

    for p in range(predict_len):
        prime_input = torch.tensor([word_to_ix[w] for w in prime_str.split()], dtype=torch.long).cuda()
        inp = prime_input[-2:]  # last two words as input
        output, hidden = decoder(inp, hidden)

        # Sample from the network as a multinomial distribution
        output_dist = output.data.view(-1).div(temperature).exp()
        top_i = torch.multinomial(output_dist, 1)[0]

        # Add predicted word to string and use as next input
        predicted_word = list(word_to_ix.keys())[list(word_to_ix.values()).index(top_i)]
        prime_str += " " + predicted_word
        # inp = torch.tensor(word_to_ix[predicted_word], dtype=torch.long)

    return prime_str
We need to define an evaluation function to check whether we are getting any tangible replies. It takes the prime string, the number of words to predict, and a temperature, which controls how adventurous the sampling is: lower values stick to likely words, higher values take more risks.
print(evaluate('trip pe',11, temperature=1)) # output
trip pe shuru ? dekh na😅. bumble to sanky bhi use kar sakta
Voila! A message is generated. It makes little sense, but the words are tangible. Interestingly, it learned the smileys as well, and we use a huge number of emoticons in our chats.
Future work
Now, all I need is to work on APIs to embed this code in WhatsApp, let it train over the span of a month, and generate the messages, so I don’t have to look at my phone. This will fix my sleep cycle and let me interact more with the people around me than with my phone. Hopefully, with more epochs (say 100) and more data over time, this will give fewer errors and more personalized replies, which will trick my friends into wondering whether I’m a BOT or replying with my own conscience.
If you are interested, you can get the code here.
Do let me know if you think this method lacks some ideas, or how I can optimize it further to get to being a humanlike BOT and let this AI take over my communication. | https://medium.com/the-innovation/chat-generator-d61cc5a1d1df | ['Baban Deep Singh'] | 2020-09-27 14:44:36.278000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Neural Networks'] |
Steel Stapler, Wooden Box, and Ceramic Jar Walk Into a Story | CC0 Source
The Best day of His Life
When Steel Stapler was first brought to The Desk, shiny and new from the stationery store, he was sure he’d arrived in Paradise. After filling him with a row of bright new staples, the Woman set him down near the most beautiful Wooden Box he’d ever seen.
Having waited to be purchased for many months on a shelf near the decorative desk organizers, he knew a thing or two about beauty. But here, so near he could almost touch her, was the Box of his dreams. Small and delicate, her tiny boards of cherry wood finished with dark lacquer and hand-painted blossoms, she caught his eye and he could not look away.
He wished to be cool about greeting her, wanted to act the suave gentleman and not the bumbling schoolboy, but his excitement in her presence was too great.
“Clack clack!” he said. “Pardon me, but you are so beautiful, I cannot help but introduce myself. I am Steel Stapler. May I speak with you awhile?”
Box enjoyed Stapler’s metallic approach to wooing. She had been taught to prefer the softer tones of wood and paper, having once been courted by a Cigar Box of the finest Spanish Cedar and another time by a large and learned Dictionary. But her mother, the Dowager Ceramic Jar, had raised her to be a lady, and she wouldn’t return rudeness for awkward courtesy. After all, in her heart, she found his shiny steel body fascinating. She recalled the tales she’d heard in the Great Parlor that housed The Desk, about knights in armor pledging faithful love to beautiful damsels. She wondered if he was a knight, and more, if she was a damsel.
“How do you do?” she replied with soft taps of her delicate lid, “I am Box, Wooden Box, and I am pleased to make your acquaintance.”
The effect of her soft, dulcet voice told Stapler he had been right, this was Paradise. He shined a bright smile and was about to speak again, to speak his heart’s wish that Box would be his Eve, he her Adam, and together they would live in this lovely garden of Desk, when a voice, low and brittle-sounding, stopped him.
“What are you doing, daughter?” said the Dowager Jar. She stood nearby, three times as tall as Stapler, her ornate, ceramic body a molded relief of grape vines heavy with fruit, painted in regal golds and greens and blues. When she spoke, her scraping lid reminded Stapler of dry bones rubbed together.
“Clack clack!” he said, “Pardon, good Lady Jar, but I was just speaking with…”
“I did not,” Jar said, now echoing the sound of a stone coffin lid slid back to reveal the corpse within, “address you.” She had lived too long, taking care her daughter be protected against all manner of new things brought to The Desk by The Woman, to allow a metallic beast like Stapler to insinuate himself between them.
“Daughter?” Jar repeated, “What have I told you about newcomers, especially those of metal? Leave this modern monstrosity alone. Do you so soon forget what occurred with the arrival of Iron Letter Opener?”
Box shrank with disappointment. Stapler could see she would surrender to her mother’s will. The knowledge inflamed his desire to rise and fight for his love.
“Box!” He cried out, “Clack clack, Box, do not listen to her! You know, deep within, that we were meant to be, don’t you? Stay with me, and I will protect you from anything and everything that would do you harm! I swear it on my hinge!”
Box wavered, but knew she would do as her mother said. It would come to no good to be with someone so different, so hard and unbending, so harsh and loud. Her mother reminded her of Letter Opener, and the memory stabbed at her breast. For a brief second her hope had flickered, as it had that one time long ago, but two more words from the Dowager Jar blew out the tiny flame.
“Daughter? Come.” | https://medium.com/literally-literary/steel-stapler-wooden-box-and-ceramic-jar-walk-into-a-story-cab409bb23ff | ['Craig Allen Heath'] | 2019-02-07 03:11:19.985000+00:00 | ['Creative Writing', 'Literally Literary', 'Fiction', 'Margaret Atwood', 'Writing'] |
Remember the Simple Truths | When the moose carcass was brought back to the village, Sylvia’s grandfather insisted on distributing much of the meat to other community members.
“My father asked why, because our own family was having a pretty hard time of it,” says Sylvia. “My family needed that meat. And my grandfather said, ‘This is how everyone in the village survives — by sharing. A lot of people are hungry. So you share, and it comes back to you.’ And my father emphasized that lesson to us kids, and we’ve never forgotten it.”
While Sylvia learned how to hunt, fish and trap from her father, she lacked the skills typically practiced and taught by women in Upper Tanana Athabascan culture.
“So when I was 11, my dad sent me to my aunt so I could learn about beadwork, birch bark basket making, and things like that,” Sylvia says. “I loved working with beads, though I was never too thrilled about basket making. But it was important that I learned how to do it.”
As Sylvia grew up, her dad and uncle felt she needed to broaden her horizons beyond Northway. So after graduating from eighth grade, she was sent away to the Mt. Edgecumbe Boarding School in Sitka for her high school education. Many native kids from Northway followed the same educational path, and some found separation from family and familiar environs jarring, acknowledges Sylvia.
“But I really liked it,” she says. “I met kids from all around Alaska, and I made a lot of new friends. Then I got to spend my junior year at the Chemawa Indian School in Salem, Oregon, and that was fantastic. The weather was so nice compared to Sitka! And when some friends and I got good grades in math, our teacher took us to a Creedence Clearwater Revival concert. I remember everything about that concert — and I still like listening to Creedence today.”
Sylvia returned to Mt. Edgecumbe the following year and earned her high school diploma.
“Then I went back home,” she says. “Back home to Northway. And I’ve been here ever since.”
Sylvia worked for a while with the Youth Conservation Corps, and then took a position as a community health aide. It was challenging but rewarding work, and she stayed with it for 32 years.
“In a small community like Northway, you do it all if you’re in health care,” Sylvia says. “I’d draw blood, do lab work, help deliver babies, take care of people who were ill, respond to accidents, make home visits. Sometimes I’d get up at 2 or 3 in the morning due to some emergency, and I wouldn’t get home until 9 or 10 at night. I enjoyed it, and I enjoyed serving my community. But it was stressful, and it took a toll. At a certain point my doctor told me the stress was getting to be too much, and that I needed to find something else to do.”
Sylvia had always wanted to work for the U.S. Fish and Wildlife Service, so she applied for a position at the Tetlin National Wildlife Refuge after retiring from health care. | https://alaskausfws.medium.com/remember-the-simple-truths-5668b33731df | ['U.S.Fish Wildlife Alaska'] | 2020-11-18 18:45:56.631000+00:00 | ['Diversity And Inclusion', 'Environment', 'Culture', 'Native Americans', 'Alaska'] |
Testing your Hadoop program with Maven on IntelliJ | Set your project name, project location, groupId, and artifactId. Leave the version untouched and click finish.
Now we are ready to configure our project dependencies
Configuring Dependencies
Open the pom.xml file. This file is often the default opening screen after clicking finish. Click enable Auto-import but you can also import changes if you prefer to be notified every time you edit your pom.xml file.
In your pom.xml file post the following blocks before the project closing tag </project>
<repositories>
<repository>
<id>apache</id>
<url>http://maven.apache.org</url>
</repository>
</repositories> <dependencies>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>1.2.1</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>3.2.0</version>
</dependency>
</dependencies>
The final pom file should look like the following
Below is the full pom.xml file
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.Word</groupId>
<artifactId>WordCount</artifactId>
<version>1.0-SNAPSHOT</version>
<repositories>
<repository>
<id>apache</id>
<url>http://maven.apache.org</url>
</repository>
</repositories>
<dependencies>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>1.2.1</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>3.2.0</version>
</dependency>
</dependencies>
</project>
Now we are ready to create classes for our sample test project WordCount.
Creating a WordCount class
Proceed to src -> main -> java package and create a new class
Name the class and click and enter
Paste the following Java code in your wordCount class.
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class wordCount {
public static class TokenizerMapper
extends Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(Object key, Text value, Context context
) throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken());
context.write(word, one);
}
}
}
public static class IntSumReducer
extends Reducer<Text, IntWritable, Text, IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(wordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
The wordCount class code includes both the main method, the map class and the reduce class. It scans all text files in the folder defined by the first argument, and outpout the frequencies of all words into a folder defined by the second argument.
We are almost ready to run the program….
First we must create our text input file. In your project package create new folder and name it input. Then within the input folder/directory create your txt file or drag one if you already have.
Copy and paste some texts within this file
Almost ready be patient…
We have not set our program arguments. Select Run → Edit Configuration.
Add a new Application Configuration by selecting “+” then Application.
Set the Main class be wordCount, set Program arguments be input output. This allows the program to read from input folder and save the result to output folder. Do not create the output folder, as Hadoop will create the folder automatically. If the folder exists, Hadoop will raise an exception. When done select apply then ok.
Now we are ready to run our program….
Select Run → Run 'WordCount' to run the Hadoop program. If you re-run the program, delete the output folder before.
An output folder will appear. On each run your results are saved in output→part-r-00000. | https://medium.com/analytics-vidhya/testing-your-hadoop-program-with-maven-on-intellij-42d534db7974 | ['Frazy Nondo'] | 2020-01-15 04:52:28.151000+00:00 | ['Mapreduce', 'Intellij', 'Data Science', 'Hadoop', 'Java'] |
Flutter Performance Optimization | Ever wondered how flutter handles all your UI building and events like Futures, taps, etc.. on a single thread( yes it does all that on a single thread 😮😮😮 until and unless explicitly done).
What is Thread/Isolates ?
Thread is an independent process that has its own chunk of memory and executes the given instructions on that memory , It can work parallelly with other threads hence can reduce execution time of multiple process on a single thread .
Let’s understand this with an example :
In Fps games like counter strike, Call of duty, etc. you can see that as soon as you fire a weapon few tasks executes simultaneously like playing of bullet sound, change of bullet count and reduction in opponent health , All these things happens parallelly these are basically threads which execute parallelly and execute their task on separate isolates(isolates and threads can be used interchangeably as isolate is a Dart way of multi threading more on that below) which have its own memory.
Languages like JAVA and C++ Share Their heap memory with threads, but in case of flutter, every isolate has its own memory and works independently. As it has its own private space this memory doesn’t require locking, as if a thread finishes its task it already means that the thread has finished utilizing its memory space and then that memory can go for garbage collection.
To maintain these benefits flutter has a separate memory for every isolate(Flutter way of multi-threading) that’s why they are called isolate 🙂.
Learn more about isolates below.
How can it be helpful to me and where should I use isolates/Threads?
When to use isolates/threads ?
There are a few situations where isolates can be very handy.
Let say you want to execute a network call and you want to process that data that you just received . and that data contains about million records that alone will hang your UI. You have some image processing tasks that you want to do on-device these kinds of tasks are highly computational as they have to deal with lots of number crunching operations which may lead to frozen UI or legginess in UI.
So to conclude when to use isolates, We should use them whenever you think there is a lot of computation that needs to be offloaded from the main thread.
How to use isolates ?
Flutter team has designed a very elegant and abstract way of using isolates/threads in a flutter, Using compute we can do the same task which isolates does but in a more cleaner and abstract way. Let’s take a look at the flutter compute function.
Syntax:
var getData = await compute(function,parameter);
Compute function takes two parameters :
A future or a function but that must be static (as in dart threads does not share memory so they are class level members not object level). Argument to pass into the function, To send multiple arguments you can pass it as a map(as it only supports single argument).
compute function returns a Future which if you want can store into a variable and can provide it into a future builder.
Let’s start by analyzing a sample problem:
In the above code pausefunction() is called just below the build method which pauses the execution of code for 10 seconds. And because of that when you try to navigate to this page from a previous one there will be a delay of ten seconds before our page gets pushed on to the widget tree.
We can try to resolve this issue by using async.
As you can see now we have declared our pause function as async even doing this will not help
As async in dart is basically puts our code in ideal until there is something to compute so it seems to us that dart is executing these on a different thread but actually it’s just waiting for some event to occur in that async function.
More on async below :
Let’s solve the above issue using compute.
In the above code, we basically passed our function in compute() function and that creates a separate isolate to handle the task and our main UI will still run without any delay (check the debug console for response ).
Summary:
Dart is by default executes all its code on a single-threaded. Every function and every async-await calls work only on the main thread(until and unless specified). We can create multiple threads using compute( Future function/normal function, argument). You can use compute for executing network calls, performing number-crunching calculations, image processing, etc.
This is all about compute to learn more about isolates (the underlying architecture of computing function) check out isolate .
Thanks for reading this article.
If you find it interesting Please Clap! and if you found anything wrong please let me know I would appreciate it for your contribution. | https://medium.com/flutterdevs/flutter-performance-optimization-17c99bb31553 | ['Utkarsh Sharma'] | 2020-08-22 02:11:01.426000+00:00 | ['Mobile App Development', 'Flutter', 'Multithreading', 'Flutter App Development', 'Performance'] |
The State of Small Businesses Report in 2017 [Infographic] | For the third year in a row, Wasp Barcode Technologies conducted a survey to identify how small business owners felt about their growth, confidence in the economy, employment, technology use, marketing tactics, and government impact.
The State of Small Business Report research is based on a random online sample of 1,127 U.S. small business owners/managers with companies with five to 499 employees. The anonymous survey was conducted via the Internet by Survey Monkey from November 10–18, 2016. The survey has a margin of error of +/- 2.9 at the 95% level of confidence.
Sample characteristics: 1,102 surveyed. All own or manage a small business. All headquarterd in the U.S.
Important Highlights for Small Business Growth in 2017
Approximately 42% of small businesses plan to increase IT spending in 2017. As a comparison, in the year before, 44% of companies surveyed planned to increase investments in IT.
Network security, upgrading networks and replacing computer hardware were among top changes in 2015 and 2016.
In the early 2016 report, 62% of larger small businesses (company size of 101 to 499 employees) were using or planning to use web-based or subscription-based software, while in the 2017 report numbers dropped to 57%. Out of smaller businesses (5–10 employees), 36% were thinking of taking this step in the previous year report. We see an increase in numbers, with 38% thinking of implementing web-based or subscription-based software in 2017.
Results are highly encouraging especially for startups, usually consisting of small teams (5–10 people), who are more oriented towards IT and cloud solutions in their growth strategy.
The full report can be found on their page, here. | https://medium.com/social-media-growth-hacking-hub/the-state-of-small-businesses-report-in-2017-infographic-12dc6c9e158c | ['Roxana Nasoi'] | 2017-01-17 20:11:54.074000+00:00 | ['Growth', 'Startup', 'Small Business', 'Digital Marketing', 'Infographics'] |
The POM — to Play ‘Pick Three’. Extending the Politically Speaking game… | The Pick Three challenge:
Increased engagement on our publication helps ALL of our writers! So let’s make this a game. Visit our homepage or work off this list of the curated articles from August. Pick three articles/poems that stand out to you. This is your chance to curate THREE and get the word out to others about your picks!
Pick three — and then read each article or poem, comment on each of your picks (link this post if you want to challenge them), share each poem or article to your social media to encourage other people to read it.
Simple! PICK THREE — READ — COMMENT — SHARE
Want to make it more fun? Let them know they are one of your “Pick Three” picks and plop the link to this newsletter post to challenge them to Pick Three! Or, write up a Pick Three for the publication Top 3! Have fun selecting YOUR Pick Three picks!!
Check out these curated articles in The POM!
August pub-wide curation rate: So far in August we have 192 posts (WHOA!) and 18 of them have been curated. Please read ALL newsletters — I often include tips that can help move toward curation.
For example:
Did you know that if you include ANY friend links within your story — like those self-promo links at the bottom or within your posts — that it makes your entire post exempt from curation? It is IN the curation rules:
From the curation guidelines: | https://medium.com/the-pom/the-pom-to-play-pick-three-87e0465b2f0 | ['Christina M. Ward'] | 2020-08-28 16:14:25.288000+00:00 | ['The Pom', 'Writing', 'Poetry', 'Newsletter', 'Pick Three'] |
Translating Your Web App Via Flask-Babel 🌎 | While building my first web app I decided to have a language feature that allowed users to pick between Spanish or English. Following the Flask-Babel documentation was confusing for me since it was my first time using it. Now that I have implemented this library, I wanted to create a blog that can be used by developers who are also learning Babel. It is important to know that this blog is targeted towards developers that can manually create their translations and for those using Flask as their framework, so here we go!
Installation 🚀
$ easy_install Flask-Babel
or
$ pip3 install Flask_Babel
Configuration File 📁
Once, you have installed Babel you will move on to creating a babel.cfg configuration file and add the following:
babel.cfg lets Babel know where to look for your translations.
In your server file you will add the following to instantiate flask and babel. I only used gettext() for flash messages so that is why I imported that as well.
Backend 🔙
Through Babel you can make your translations based on the locale of the user or the user preferred browser language. However, I designed my web app to allow users to make those changes. To do this I created the following POST request and then used the babel.localeselector to make those changes.
There are many ways to do this so feel free to use what makes sense in your web app.
gettext()
Any messages that will be rendered to your user via Flask (backend) will be formatted this way.
Here are some examples:
Frontend (HTML & Jinja)
The way you will communicate to Babel that you want something translated in your frontend will be by using the following format (parentheses preceded by an underscore),
{{ _(‘Translate Me’) }}
Here are some examples of what I translated. You can translate the placeholders, labels, heading, inputs. EVERYTHING! As long as you wrap it around the example format. DON’T translate “name” as that can create errors with POST and GET requests.
Creating messages.pot & messages.po 💻
Now that you have arranged all the strings you want translated we will compile them and create our translations folder. To do this we will return to the terminal and type the following.
Make sure you do this in your project directory. This will allow babel to extract all the strings you want translated (both in your server file and Jinja templates)
$ pybabel extract -F babel.cfg -o messages.pot
This will create a new file called messages.pot which is the template.
Then, type this to your terminal
$ pybabel init -i messages.pot -d translations -l es
es is for Español but you can choose your preferred language
Here are other languages you can choose: https://flask-user.readthedocs.io/en/v0.6/internationalization.html
This will create a new file called messages.po and will be in a translations folder. Messages.po will include all the strings that you want to translate. It looks like this:
Looks very similar to messages.pot but it is not the same. Here you will make translations.
In this file you will make the translations in msgstr.
Translations Completed ✅
Once you have translated all your strings you will compile them with pybabel
$ pybabel compile -d translations
YAY! You have translated your Web app! 😁 | https://medium.com/datadriveninvestor/translating-your-web-app-via-flask-babel-a1561376256c | ['Alejandra Lopez'] | 2020-01-11 10:46:57.478000+00:00 | ['Flask', 'Translation', 'Python', 'Babel', 'Web App Development'] |
Three Steps To Setting An Effective Marketing Budget | This article is a spin-off from our podcast episode titled, “How Do You Set Up An Effective Marketing Budget?”
It’s is a common topic for most first time founders and digital marketers. Many startups make a mistake of putting in X amount for channels as the official budget without any correlation to its effectiveness or any method of measuring the success (or failure) of the Marketing Campaigns. This often leads to them spending more than they need to in the initial months, which is also when cash burn really does matter.
To answer this question we’ve got Ahmad our Director of Marketing & Learning Programs at AstroLabs. He’s also our lead instructor in the Digital Marketing Track and has advised startups on their Marketing strategy.
Raunak: Why do you think entrepreneurs have challenges with setting up an effective marketing budget?
Ahmad: It is because they come at it from the point of view of, “I wanna have a certain budget per month and I want to spend this budget on Marketing.” The problem is you cannot know if the budget you have set is effective for your business until you find a way to track your goals. Only they can you determine if your marketing spend is effective for your goals. So people formalize a budget without even determining a way to measure the effectiveness of the spend versus their goals.
There is an old saying that goes like this, “Half of your marketing budget is wasted, but you just don’t know which half.” Now with Digital Marketing, you can know what’s wasted and what’s not. I would recommend anyone setting up a marketing budget to follow these three steps.
The first step is to clearly define what you want to track as a goal. So a goal is something that is valuable to the business, for example, if you are an e-commerce store, your goal would be to get someone to buy something from your shop. You might also have secondary goals like to have someone to sign up for your newsletter. Typically, primary goals are revenue driven. If you are a consultancy, your goal would be for customers to contact you on some kind of a form and then after that, you would try to sign them up as clients.
The next step is to figure out is how much are you willing to spend to get one goal. Let’s say you sell sneakers online and are trying to understand how much should you budget for. Let’s say you sell one pair for a $100 including margins. In that, you want to decide you want to allocate 20% of that to marketing spend. So you are willing to spend 20$ for every sale you make. This is your initial benchmark spend per
Once you start testing campaigns, you would look at the marketing channels to see if you are reaching a $20 Cost Per Acquisition or CPA in the campaign you are running. This is where the third steps happen, testing and measuring if your campaigns are reaching your goals or not.
If you are a founder or a marketer looking to upgrade your skillset and get certified, check out our upcoming Digital Marketing tracks here: astrolabs.com/academy or reach out to me at [email protected] | https://medium.com/astrolabs/three-steps-to-setting-an-effective-marketing-budget-ba70f9e2f28a | ['Raunak Datt'] | 2017-11-23 08:47:27.810000+00:00 | ['Budget', 'Digital', 'Digital Marketing', 'Goals', 'Marketing'] |
How to Convert Your Creative Ideas into Action | Don’t we all have innovative ideas? Every human being comes across a good idea, especially while taking a shower in the morning or before going to bed at night. Presumably, the one reading this article at the moment has a brilliant idea too, do not you? But that is not innovation. Millions of random ideas and thoughts pop into millions of heads now and then.
The term innovation by the virtue of its definition can only be used when we put our ideas into action. Each of us is capable of bringing a change to this world with all the available resources and a little push towards the path paved with creative ideas. But if this idea is not backed by action, the idea perishes along with the possessor.
Rabindranath Tagore in one of his poems said:
“Spring has passed. Summer has gone. Winter is here… and the song I meant to sing remains unsung. For I have spent my days stringing and unstringing my instrument.”
People waste their entire time in preparation and planning and never prioritize action, and consequently, they lose their purpose in life. They find ways to distract themselves, and undeniably today’s generation has got countless options to do so. This makes it even easier for our “Lizard Brain” to control us. Often while strolling around, we come across strangers to whom we want to say hello, but something within us restricts us, and eventually, we miss that opportunity to make a new contact. Can you guess what made us bury our emotions? The influence of our lizard brain multiplied by our fear stops us from taking any mandatory step. Now the question arises, is there a way to tackle our lizard brain?
Of course, we can. Remember, man can move mountains by faith. So, the next time the lizard brain tries to manipulate us, we need to remember these three spells:
• Believe in yourself
• Eliminate your distractions
• Act yourself into feeling than feel yourself into action
Believe in Yourself
Confidence is the key to self-belief. It comes when we know we are worthy and can contribute positively to society.
Eliminate Your Distractions
Patience is the key to eliminate distractions. We must keep patience while performing a task. After ten to fifteen minutes our lizard brain will get restless and will try to distract us. But remember, patience is the key. Once the lizard brain is defeated, guess what? From this point, our idea will demand action from us, and we no longer will hesitate to follow it.
Act Yourself into Feeling than Feel Yourself into Action
Often people procrastinate because of unfavorable moods. “I will do it the day I feel good” “I will go to the gym the day I feel active” are among the most frequently used lines. But in reality, the existence of that faithful day is uncertain. Therefore, we shouldn’t wait to feel good to act well. Our priority should be action. Often we see artists lose themselves totally into their work and fail to notice anything else. This happens on reaching the state of mind called “flow”. Flow is a magical state where individuals operate from muscle memory. When a dancer steps on the stage, in the beginning, s/he may have to struggle with the moves, but after a short warm-up, they dance with their eyes closed.
The death of an idea is worse than the death of a person. Human beings are mortal, but an idea can be brought to life by putting our energy and efforts into them. So, have faith in your ideas. As a child, we didn’t give up our ideas without a fight. Then why as an adult, should we let go of our ideas? Keep them in your heart, expect that the world will not embrace your ideas easily, pursue them, and turn them into reality. | https://medium.com/age-of-awareness/how-to-convert-your-creative-ideas-into-action-e7ce1b4fa179 | ['Gunjan Phukan'] | 2020-12-15 03:32:51.967000+00:00 | ['Self Improvement', 'Mindfulness', 'Education', 'Learning', 'Productivity'] |
Post-Traumatic Stress Disorder and Its Relationship to Childhood Trauma | Photo by Jakub Kriz on Unsplash
Most people know that some Veterans returning home from overseas where they served under combat conditions are diagnosed with post-traumatic stress disorder (PTSD). However, did you know that PTSD can also affect those who have never served in the military?
This site is all about raising awareness about complex post-traumatic stress disorder (CPTSD) that is caused by repeated childhood trauma. However, those who have been exposed to a single event of trauma in childhood also can exhibit the symptoms of post-traumatic stress disorder.
In this article, we shall examine together post-traumatic stress disorder and complex post-traumatic stress disorder and their relationship to childhood trauma.
The Definitions of Post-Traumatic Stress Disorder and Complex Post-Traumatic Stress Disorder
To better understand the connection that childhood trauma can have to both PTSD and CPTSD, it is important we examine their definitions.
Post-traumatic stress disorder is a mental health condition that’s triggered by a traumatic (fear-filled) event that is either experienced or witnessed. Symptoms may include flashbacks, nightmares, and severe anxiety.
Complex post-traumatic stress disorder (also known as complex trauma disorder) is a mental health disorder that develops in response to prolonged, repeated interpersonal trauma in which the child feels they have little or no chance of escape.
Put side by side it is easy to see the differences between the two diagnoses. However, people can have co-occurring experiences where they suffer the effects of both diagnoses at once with overlapping symptoms.
PTSD Isn’t Just for Veterans
Photo by Spencer Imbrock on Unsplash
Although PTSD was first described by military physicians trying to understand why some soldiers who had been exposed to battle behaved irrationally upon arriving back behind the lines. Many men were given dishonorable discharges or sent home in disgrace due to their brain malfunctioning after being in situations where they experienced extreme fear and saw things in a battle no one should ever see.
However, children living in homes where terror is a way of life as they experience child abuse themselves or see a loved one injured also can form PTSD. It is when these events are continuous that you add on the hell that is CPTSD. The combination is a deadly mix of despair and fear-ladened living that can lead to suicide.
To be clear, one does not need to experience abuse to form PTSD. Witnessing or being a victim of a car accident, or an incident where the person was filled with fear can cause PTSD.
However, there are far more consequences to trauma for children because of their developing brains and minds. Trauma in childhood is a leading cause of many of our modern diseases and a leading cause of death in the United States and around the world.
PTSD: The Single Event Disorder
One of the greatest differences between PTSD and CPTSD, as we have stated before, is the number of events that cause each. While complex post-traumatic stress disorder is caused by repeated trauma, post-traumatic stress disorder is related to one occurrence happening sometime in a person’s life.
PTSD has horrific symptoms ranging from moderate to severe and includes all of the following each arranged by category (Sourced and Quoted from the Mayo Clinic1)
Symptoms of intrusive memories including:
· Recurrent, unwanted distressing memories of the traumatic event
· Reliving the traumatic event as if it were happening again (flashbacks)
· Upsetting dreams or nightmares about the traumatic event
· Severe emotional distress or physical reactions to something that reminds you of the traumatic event
Avoidance
Symptoms of avoidance may include:
· Trying to avoid thinking or talking about the traumatic event
· Avoiding places, activities or people that remind you of the traumatic event
Negative changes in thinking and mood
Symptoms of negative changes in thinking and mood may include:
· Negative thoughts about yourself, other people, or the world
· Hopelessness about the future
· Memory problems, including not remembering important aspects of the traumatic event
· Difficulty maintaining close relationships
· Feeling detached from family and friends
· Lack of interest in activities you once enjoyed
· Difficulty experiencing positive emotions
· Feeling emotionally numb
Changes in physical and emotional reactions
Symptoms of changes in physical and emotional reactions (also called arousal symptoms) may include:
· Being easily startled or frightened
· Always being on guard for danger
· Self-destructive behavior, such as drinking too much or driving too fast
· Trouble sleeping
· Trouble concentrating
· Irritability, angry outbursts, or aggressive behavior
· Overwhelming guilt or shame
For children 6 years old and younger, signs and symptoms may also include:
· Re-enacting the traumatic event or aspects of the traumatic event through play
· Frightening dreams that may or may not include aspects of the traumatic event
Clearly, PTSD is a serious condition that we must not ignore at any cost.
Co-Occurrence of PTSD and CPTSD in Veterans
Photo by Nijwam Swargiary on Unsplash
Not all the men and women of the military who are sent into dangerous situations come from well-established, non-traumatic, happy homes. Many come from homes that are dysfunctional beyond what every family experiences.
Therefore, our military personnel might be entering a war zone where PTSD may occur already affected by CPTSD. The combination of the two leads to serious mental and physical consequences that can be fatal.
It is only with more research and treatment of veterans that we will truly understand all the health consequences they must endure as the result of living in hell not just as children, but also in combat.
Acknowledging Childhood Trauma Exists
Society as a whole has a great deal of trouble even acknowledging that childhood trauma exists, let alone stand up for the rights of children for safety and health. We tend to want to hide our heads in the proverbial sand and say to ourselves it happens to others in other people’s families, never our own.
However, we are not only speaking of child abuse that can lead to CPTSD but of other trauma’s a child might experience that will haunt them in the form of PTSD.
If a child receives a shock or life-changing event, such as the death of a parent, they are in great danger of forming PTSD either then or later in life. Only the loving, caring, and safety of an adult can mitigate the consequences of forming and living with PTSD.
We must as a society stop hiding from the truth that many children live with every day, that life isn’t a safe thing to go through but that with some prevention we can make the future brighter for children who have been traumatized.
The Injury to the Brain Caused by PTSD
Childhood trauma has grave psychological consequences for its victims and, unfortunately, interpersonal violence that is either experienced or seen by children is common. While the emotional consequences are well-documented, less is known and reported on the biological damage done especially to a child’s brain.
A paper published in journal Dialogues in Clinical Neuroscience explored the brain regions involved in the stress response to seeing or experiencing childhood trauma including the amygdala, hippocampus, and prefrontal cortex plus their connection to PTSD.
The damage to these vital brain regions seems to be caused by traumatic stress with increased release of cortisol and norepinephrine, hormones that ready the body for the fight/flight/freeze/fawn response.
According to the paper, findings from animal studies have been extended to patients with post-traumatic stress disorder and have shown other regions of the brain are affected as well such as the anterior cingulate, decreased medial prefrontal/anterior cingulate function. In laymen’s terms, people who have experienced childhood trauma have an incidence of memory and emotional regulation problems because of the brain regions that are affected. (Bremner, 2006)2
Undetected (Delayed) PTSD
Photo by Olivier Piquer on Unsplash
Some people do not show symptoms of post-traumatic stress disorder until years later. This finding comes from a paper published in World Psychiatry, which states that delayed-onset PTSD is deadly because of the physical harm it does to a person who has experienced trauma. (McFarlane, 2010)3
These physical problems are accompanied by brain changes that even those who have been traumatized but seemingly have no symptoms that are like those who have been diagnosed and are being treated for PTSD.
Clearly, PTSD at any age must not be ignored. Anyone who is involved in a traumatic event, especially children, are subject to severe brain changes that can become real challenges later.
Occurrences of delayed-onset PTSD are only now coming to light as a problem, so more research is needed to truly understand and identify how delayed-onset PTSD affects people’s lives.
References:
1. Post-Traumatic Stress Disorder (PTSD). Mayo Clinic. Retrieved from: https://www.mayoclinic.org/diseases-conditions/post-traumatic-stress-disorder/symptoms-causes/syc-20355967
2. Bremner, J. D. (2006). Traumatic stress: effects on the brain. Dialogues in clinical neuroscience, 8(4), 445. Retrieved from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3181836/#:~:targetText=PTSD%20is%20characterized%20by%20specific,and%20concentration%2C%20and%20startle%20responses.&targetText=Brain%20regions%20that%20are%20felt,amygdala%2C%20and%20medial%20prefrontal%20cortex.
3. McFarlane, A. C. (2010). The long‐term costs of traumatic stress: intertwined physical and psychological consequences. World Psychiatry, 9(1), 3–10. Retrieved from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2816923/ | https://shirleydavis-23968.medium.com/post-traumatic-stress-disorder-and-its-relationship-to-childhood-trauma-9f03362ffb6c | ['Shirley J. Davis'] | 2019-11-19 18:51:26.123000+00:00 | ['Childhood Trauma', 'Cptsd', 'PTSD', 'Veterans', 'Mental Health'] |
Case study: laying down the building blocks | Design principles
From the company’s inception in 2016, all design decisions for Studio were made by front-end developers, the chief product officer, and A Million Ads’ founder/chief executive officer. Even though everyone’s contribution helped launch a business, it did so at a cost. Throughout Studio design inconsistencies exist due to conflicting design decisions. In order to ensure alignment from everyone Studio needed a clear set of design principles. “Design principles are a set of considerations that form the basis of any good product.” (Design Principles) Most importantly, Design Principles can help teams make decisions.
Working closely with both Mohan Taylor, A Million Ads’ Chief Product Officer, and Richard Taylor, the organization’s Senior Design Engineer, we began laying down a list of design principles we believed were in-sync to the way design was being approached within the product/engineering team and the organization as a whole. We had generated up to ten principles during those sessions.
Bob Baxley
One morning on my way to the office I was listening to the Design Better podcast episode, What it takes to build a connected workflow, featuring Bob Baxley. A Silicon Valley icon, Bob worked at Yahoo!, Apple, Pinterest, and is currently the Senior Vice President of Design and Experience at ThoughtSpot. Bob’s story inspired me to reach out to him. We connected over a Zoom call where we talked for over an hour about our journey into this field, as well as design leadership. Just before our conversation was coming to a close I asked if he had any advice with regards to establishing a set of design principles for a software application. Bob had bestowed three key points:
Your design principles need to reflect your organization’s own principles. In many ways, they need to be an extension of that. These principles are the de facto rules/constraints that nobody in the team can argue with. Once it’s set, that’s it. Finally, your design principles need to be easy to remember. Hence, these principles need to be ingrained in the heads of everyone that is involved in building the product.
Studio’s design principles
The following day I revisited the list of proposed design principles and cross-referenced them with A Million Ads’ own principles:
An audience-first, creative-led approach will make our product and work stand out. We can use technology to enhance human interaction, not replace or undermine it. We have a responsibility to people and the planet by treating both with respect.
After some adjustments, and suggestions from Mohan and Richard, we had devised four design principles:
1.) The WOW factor!
“There are three responses to a piece of design — yes, no, and WOW! WOW is the one to aim for.” — Milton Glaser
A Million Ads aims to stand-out from the rest of the ad-tech community since its focus is on personalizing the advertising experience by making it dynamic. As a result, Studio should reflect that from its functionality all the way to its micro-interactions. The goal is WOWing its users by demonstrating how Studio differs from any software used in the ad space.
2.) Consistency
“A consistent experience is a better experience.” — Mark Eberman
From the use of language to the specifics of styling, consistent design patterns employ familiar mental models in order for Studio’s users to navigate with ease. Without consistency, Studio can appear daunting and confusing; hence, causing a detrimental hit towards the usability of the web application.
3.) Progressive disclosure
“Simplicity is not the goal. It is the by-product of a good idea and modest expectations.” — Paul Rand
To ensure that Studio doesn’t overwhelm its users of its many possible options, only the most relevant functionality is featured initially. Examples of progressive disclosure can be found throughout Studio revealing the many ways users could include further dynamic features to an ad that would align with their client’s needs.
4.) Design with the end user in mind
“Want your users to fall in love with your design? Fall in love with your users.” — Dana Chisnell
Studio was designed with its users in mind. They are made up of creative professionals who are familiar with a suite of software that allows them to create digital audio-based and/or video-based advertisements. Without 1.) the WOW factor, 2.) consistency and 3.) progressive disclosure working in unison, the overarching design principle, 4.) design with the end user in mind, would not be realized.
The culmination of these four design principles went beyond defining the look and feel of Studio, but was also how the product/engineering team would devise solutions for the business. With the design principles set in stone, it was a good opportunity to lay down the groundwork for Studio’s design system. | https://medium.com/design-bootcamp/laying-down-the-building-blocks-4e8f16efbfec | ['Nikin Nagewadia'] | 2020-11-03 05:13:02.220000+00:00 | ['Design Principles', 'User Experience', 'Design Systems', 'Startup', 'Case Study'] |
A Self-sorting and Self-balancing Tree (2–3 Tree for geeks) | There is so much data that needs to be stored and accessed daily, from student data to medical data. All of these data needs to be accessed in some logical order, and very quickly. Various data structures are invented, implemented, and used for this purpose, where each structure has its specialties and is used for those specific use cases. 2–3 Trees are one of those structures, with its specialty being that it is always sorted and it is always balanced, thus very efficient (logN lookups and inserts, to be specific, where N is the number of items stored).
What is a tree and what are its nodes?
The above diagram shows the structure of a tree. Each bubble in the diagram above is called a node. This is where a data element is stored. Each tree has a root which references the top-most node. This is what is used to access the tree. If this is your first time seeing a tree, look up binary trees and binary search trees to better understand how trees work at their core.
What is a 2–3 Tree?
2–3 tree is a data structure in computer science where every node has either two or three nodes, and each node has either one or two elements. Every addition made from the client side is added to the lowest valid leaf node, where a leaf node is defined as a node with no children, and the tree self-balances when 3 items are reached in a node, dividing between left, right, and middle nodes. The five cases in the image below demonstrate how this tree functions. | https://medium.com/datadriveninvestor/a-self-sorting-and-self-balancing-tree-for-geeks-b886817abcf5 | ['Anisha Jain'] | 2019-05-09 05:06:39.937000+00:00 | ['Programming', '2 3 Trees', 'Trees', 'Python', 'Data Structures'] |
Possible Hotel Bookings | Problem
A hotel manager has to process N bookings of rooms for the next season. His hotel has K rooms. Bookings contain an arrival date and a departure date. He wants to find out whether there are enough rooms in the hotel to satisfy the demand. Inputs:
- First list for arrival time of booking
- Second list for departure time of booking
- Third is K which denotes the count of rooms Output:
- A boolean which tells whether its possible to make a booking
false means there are not enough rooms for N booking
true means there are enough rooms for N booking Example: Inputs:
- arrivals = [1, 3, 5]
- departures = [2, 6, 10]
- K = 1
Output: false. At day = 5, there are 2 guests in the hotel. But we have only one room.
Solving Process
This problem is interesting in my opinion because there are many different ways to solve it. Let’s see a possible process.
Structure Storing Each Day Count
Our first idea might be to have a structure to store the number of bookings for each day. This structure could be an array with a fixed size (the maximum departure day).
Inputs:
- arrivals = [1, 3, 5]
- departures = [2, 6, 10]
- k = 1
This example would lead to having an array of size 10 (because the last departure is at day 10). To construct this array we iterate over each arrival and departure and we either increment or decrement the corresponding day. In pseudo-code:
int[] counts = new int[maxDepartures(departures)] for each arr in arrivals {
counts[arr]++
} for each dep in departures {
counts[dep]--
}
At the end we have the following array:
value: 1 0 1 1 2 1 1 1 1 0
index: 1 2 3 4 5 6 7 8 9 10
Once the array is built, we just have to iterate on it and check if all the elements are smaller than k (the number of rooms).
In the previous example, the maximum number of rooms was 1. Because on day 5 we have 2 bookings, we return false.
The solution is O(n) in time with n the number of bookings but O(m) in space with m the maximum departure day. Not bad in theory but we can potentially allocate a very large array even though most of the space is not really useful. For example:
Inputs:
- arrivals = [1, 3, 5]
- departures = [2, 10000, 10]
- k = 1
Would lead to allocating an array of 10k integers.
Let’s see the other options.
Storing a Collection of Events
What are the other options? Let’s check again what we produced with the previous structure:
value: 1 0 1 1 2 1 1 1 1 0
index: 1 2 3 4 5 6 7 8 9 10
We can see that some information are kind of duplicated. For instance, between day 6 and day 9, the number of bookings does not change as we know that nothing happened during this time frame.
Would it help to store some sort of events instead? Let’s take again the same example:
Inputs:
- arrivals = [1, 3, 5]
- departures = [2, 6, 10] Day 1: +1 booking
Day 2: -1 booking
Day 3: +1 booking
Day 6: -1 booking
Day 5: +1 booking
Day 10: -1 booking
The solution would be to iterate over those events and to either increment or decrement a counter. If at some point, the counter is greater than k , we return false. Yet, to iterate over this collection of events we need it to be sorted.
What is the best structure here? Let’s summarize our requirements:
Search to check whether a day already exists
Add a new day
Browse the structure to iterate over each sorted day
What about using a Binary Search Tree (BST)?
Each node could be represented this way:
class Node {
int day
int count
Node left
Node right
}
The sorting would be done per day .
Let’s see the impacts in terms of time complexity:
Search to check whether a day already exists: O(log(n)) average case, O(n) worst case
Add a new day: O(log(n)) average case, O(n) worst case
Browse the structure to iterate over each sorted day: O(n) using an in-order strategy (Depth-First Search)
As we have to iterate over each element and insert them in the BST, the algorithm complexity is O(n log(n)) average case, O(n²) worst case.
Another option is to use a hash table and to sort the keys once we have added all the events:
Search to check whether a day already exists: O(1) average case, O(n) worst case (the probability depends on the map capacity)
Add a new day: O(1) average case, O(n) worst case
Browse the structure to iterate over each sorted day: O(n log(n)) to sort the keys and O(n) for the iteration
In the end, the solution is O(n log(n)) average case (due to the sorting operation), O(n²) worst case. This solution appears to have the same complexity than the one using the BST.
Let’s see a possible implementation in Java using a sorted map:
Constant Space Complexity
If we want to optimize our algorithm, we need to think whether it is really mandatory to store those events? Can’t we just simply iterate over the given collections (arrivals and departures) and check the booking constraint on the fly?
A solution would be possible but it would require to simplify the inputs by sorting them up front.
If both collections are sorted, we can simply iterate over each element using two pointers (one on the arrivals, one on the departures) and perform the constraint check on the fly:
As you can see, during each iteration we still have to check what is the minimum between arrivals.get(indexArrival) and departures.get(indexDeparture) to know what pointer to update.
Overall, the algorithm has a constant space complexity and an O(n log(n)) time complexity due to the sorting operations. | https://medium.com/solvingalgo/solving-algorithmic-problems-possible-hotel-bookings-fa3a544c6683 | ['Teiva Harsanyi'] | 2020-12-09 19:29:10.137000+00:00 | ['Arrays', 'Programming', 'Java', 'Coding', 'Algorithms'] |
Case Study: An Exploratory Data Analysis Of Netflix Content (2008–2020) with Python | Introduction
Exploratory Data Analysis, or EDA as it is commonly called, is a process or stage in any data science project that cannot be overlooked or talked about enough.
This is where the data scientist or data analyst (as the case may be) "gets a feel for" and understands the data he/she wants to build a model on.
In cases where the end product of such a project isn't some ML or AI product, EDA can still result in great insights and recommendations about business problems through pattern discovery, hypothesis testing, and the checking of assumptions. All of these are usually achieved with the help of summary statistics and data visualizations.
The ultimate aim here is to aid effective and efficient decision making which may affect businesses positively.
In this article, we are not going to be deploying any model to create an ML or AI product after our EDA. We’re simply going to use EDA to “explore” and gain an understanding of a dataset containing Netflix’s contents between 2008 and 2020. We’re basically going to be using EDA to get a “quick summary” of what the dataset contains.
Netflix is a content platform that offers subscription-based streaming services, giving subscribers online access to a library of films and TV series. The popularity of Netflix no doubt hit new levels this year due to the "almost" worldwide lockdown imposed in response to the Covid-19 pandemic that hit most countries of the world.
Importing Libraries
The first step towards any form of EDA is the importation of the necessary Python libraries we will be needing for our analysis. While the majority of these libraries are imported at the start of the analysis, some libraries are imported "as needed" as the analysis progresses. Our imported libraries are shown below;
As seen in the image above, all imported libraries have their specific uses from data preprocessing and analysis to data visualizations.
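For readers following along without the screenshots, a typical import cell for this kind of analysis might look like the sketch below. The exact set of libraries in the original notebook may differ slightly, so treat this as a reasonable starting point rather than a copy of the original code.

```python
# Core data-wrangling libraries
import numpy as np
import pandas as pd

# Visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns

# Word cloud library used later for the categories section (pip install wordcloud)
from wordcloud import WordCloud

# %matplotlib inline  # uncomment when working inside a Jupyter notebook
```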
Data Type, Null Values & Summaries
After importing our libraries, we then take a quick look at what our data looks like. This is shown in Image 1 below. Images 2, 3 & 4 give a brief summary of what the columns of our data contain. From Image 4 we can see that the dataset contains over 6 thousand records, based on the total number of IDs, which is the primary key.
Image 1
Image 2
Image 3
Image 4
There are usually discrepancies in the dataset in any data science project, whether the data is structured or unstructured. The EDA stage is where these discrepancies are usually resolved. These discrepancies may include, but are not limited to, null values, duplicates, wrong data, incorrect values, etc. Let's see what kind of discrepancies our Netflix dataset contains;
The only discrepancy our data contains is null values, which are found in 5 columns. We've discussed some ways in which we can treat them in an earlier article which can be found here.
A method I prefer to use when performing EDA on any dataset is to pose a series of questions I want the dataset to answer. These questions serve as guidelines for gaining simple yet massive insights into the data in question. Thus, we would be adopting a “question-method” in order to show how valuable EDA is to data science projects and solving business problems in general.
Movies or TV-Shows?
Netflix primarily lists two types of contents, movies and TV-shows, on its streaming site. The simple donut chart below shows at a glance that Netflix uploaded more movies than TV-shows between 2008 and 2020.
We've already seen from the image above that movies are the most uploaded content. The image below gives us some extra information: it shows the trend of the number of contents added year-on-year. As seen, little to no content was added between 2008 and 2014. The number of contents added, however, began to rise from 2015 to mid 2019, which is the peak, before dipping from late 2019 to early 2020.
What are the top producing countries?
The streaming site from Netflix lists contents from different countries and in different languages. The bar chart below shows us the top 10 countries with the most contents on Netflix. The United States is first, with roughly double the content of India, which is second. The United Kingdom comes third, while Turkey and Mexico complete the top 10 list.
Monthly and Daily Trend
We’ve seen how most of the contents added peaked at mid 2019. It would be more insightful if we go further and see which month of the year and day of the week most contents were uploaded.
As seen from the line chart above, the majority of Netflix's content was uploaded from the middle of the year to the end of the year, as the holidays approach, with the most uploaded in December. This is understandable, as many people spend more time at home with family and friends during this time. Or it could also just be Netflix following another set of data that shows the majority of its customers watch more content towards the end of the year. Or simply the calendar of movie producers who spend all summer shooting movies.
From the bar chart above, we can see that content was uploaded fairly steadily every day of the week, with minimal differences except for Friday, when the number is significantly higher. Unfortunately, our data does not include the actual timestamp at which content was uploaded; with it, we could have gained more insight into whether uploads were timed against the weekend, when traffic may be higher than on weekdays.
Duration of contents?
The histogram above shows us that the majority of the movies uploaded have a duration between 70 minutes and 120 minutes, with the actual median duration being 98 minutes. We have a number of outliers below 40 minutes and above 160 minutes.
TV-shows, on the other hand, mostly run between 1 and 3 seasons, with 1-season TV-shows making up a significantly high number.
What are the top categories and directors?
All contents on Netflix are grouped into categories, regardless of whether the title is a movie or a TV-show. The word-cloud below gives a brief overview of the top categories. The bigger the word, the higher the number of contents.
Top Categories
The word-cloud above shows us that the top categories include international movies, TV-shows, independent movies, action, adventure, romantics movies, comedies, dramas, etc. Some other categories with low contents as shown by the size of their text include anime series, documentaries science, faith spirituality, adventure anime, etc.
Top Directors
The word-cloud above shows the directors with the highest number of contents. Raul Campos and Jan Suter come in first position. Other top directors include Steven Spielberg, Jay Chapman, Shannon Hartman, Marcus Raboy, etc.
Type of contents Netflix uploads?
One very important factor to consider when watching any Netflix content, especially when watching with family where kids are involved, is the rating of that content. The chart below shows that the majority of Netflix content, whether movies or TV-shows, is rated TV-MA. This type of content is for mature, adult audiences and is therefore unsuitable for children. The second highest type of content is for children that are 14 years old and above. These are rated TV-14. The meaning of other ratings can be found here
Conclusion
So far, we've been able to give a brief description of our data and get some valuable insights, which have been communicated via visualizations. However, please note that EDA is not limited to all we have discussed here. To understand more about these visualizations and the little data cleaning that went into them, please check the notebook here
Thank you for your time. To see more about this analysis, see the link to my GitHub here. You can connect with me on LinkedIn + Twitter | https://medium.com/python-in-plain-english/an-exploratory-data-analysis-of-netflix-content-with-python-a637d28bcade | ['Emmanuel Ayeni'] | 2020-12-16 14:03:05.424000+00:00 | ['Data Science', 'Python', 'Netflix', 'Data Analysis', 'Data Visualization'] |
Essential Ingredients for Watering Your Mental Health and Promoting Growth | An Objective Perspective
According to Psychotherapist, Tina Gilbertson, the first step of personal growth is identifying our flaws. It’s by improving them that we better ourselves. And, according to Gilbertson, this requires us to gain a new perspective and separate ourselves from our previously entrenched viewpoints. Rather than maintaining positive and false biases about ourselves, a new view is an opportunity to identify things that used to be invisible to us.
Gilbertson draws on an example of self esteem. Early on in her development, she would internalize the harsh and negative things that were said to her. She took it as a direct reflection of her worth, and as a result, she perceived herself as lower than others.
Because she internalized it, she failed to recognize the counterproductive habit that caused her to take other people’s comments personally. It became a normal part of her daily routine, so she failed to acknowledge how damaging this mentality was to her growth. In short, her internalizing that behavior made it invisible to her. She had no idea she was doing it.
According to Gilbertson, it was by taking a step back that she was able to objectively see others' cruel words as a reflection of the speaker rather than of her. In doing so, she stopped taking things personally, and gained key insight that was fundamental to her growth.
Gaining that Perspective
We’re all blind to our flaws and bad habits. To give another example, Gilbertson associated drinking alcohol with her emotional state of being — whenever she was anxious, she would drink. Over time, that tendency became normal and invisible to her.
It’s by gaining an objective opinion, that we can begin to recognize our habits are abnormal and counterproductive. At times, gaining this can be difficult. As mentioned, we all carry biases about ourselves, and tend to ignore negative things that counteract those thoughts. But to help my growth and identify my flaws, I tend to:
Thoroughly introspect on my thoughts, feelings, and emotions. What do I feel guilty, or ashamed of? What am I proud of? Doing so helps me to admit the personal flaws I was already aware of.
To identify the flaws that I’m oblivious to, I occasionally ask friends and family to share their honest opinions about how they perceive me.
I listen to feedback from my superiors. Be that in education or work. If they don’t think I’m pulling my weight or reaching my full potential, I welcome that feedback.
We can never truly know how we look or how our behavior is perceived. Our self biases are so defined, that some scientists argue that if you saw yourself on a train — you wouldn’t even recognize yourself. So it’s by gaining this new and fresh perspective that we are able to recognize our flaws, and nurture our growth. | https://medium.com/mind-cafe/essential-ingredients-for-watering-your-mental-health-and-promoting-growth-b2beef77bae3 | ['Jon Hawkins'] | 2020-11-10 16:52:44.129000+00:00 | ['Life', 'Life Lessons', 'Psychology', 'Self', 'Advice'] |
“Printing Money” with Operational Machine Learning | “Printing Money” with Operational Machine Learning
Can businesses finally generate revenue from big data?
By Thomas Davenport and Rich Masi
Organizations have made large investments in big data platforms, but many are struggling to realize business value. While most have anecdotal stories of insights that drive value, most still rely only upon storage cost savings when assessing platform benefits. At the same time, most organizations have treated machine learning and other cognitive technologies as “science projects” that don’t support key processes and don’t deliver substantial value.
However, there are a growing number of large but innovative companies that are driving measurable value through “operational machine learning” — embedding machine learning on big data into their business processes. They’re employing a new generation of software, skills, and infrastructure technologies to solve complex, detailed problems and deliver substantial business value. One company found the approach so successful that a manager said it was like “printing money” — a reliable, production-based approach to generating revenue.
Beyond Decision Management
Take, for example, an investments firm that needed to create personalized cross-channel customer experiences. In the past, the company used “decision management” technology to create offers based on scores computed from past investments and the company’s perceptions of net worth. Today, however, the problem is much more complex. The company had tried to create cross-channel versions of the same idea, but it had never been successful because both the available technology and the collaboration between marketing and technology groups were lacking.
Over the past year, the firm created a cross-channel approach to personalized customer offers. It uses data from the customer’s website clickstreams, investing behaviors, and call centers. It can create both emailed offers and personalized, optimized website content. Personalized offers can also be made in call center interactions.
The solution learns from the responses of customers and tunes offers over time. It includes machine learning models to customize offers, an open-source solution for run-time decisioning, and a scoring service to match customers and offers. It supports millions of customer offers a day, and customer response is improved significantly over the single-channel legacy system. In order to help create these capabilities, the company created both a Chief Data Officer and a Chief Loyalty and Analytics Officer within the marketing function.
Driving Value With Machine Learning
With the adoption of big data platforms, many companies are experimenting with machine learning as a means of dealing with all the data. Data scientists, who are typically key to making machine learning work for organizations, have been described as holding “the sexiest job of the 21st century.” With the prominence of machine learning and the data scientist, why isn’t there a continuous benefit stream of value that flows from big data?
Part of the reason is the labor-intensive nature of early machine learning initiatives. In practice, the majority of machine learning initiatives follow the traditional resource consuming process of discover, model, deploy, monitor, and update that has been used for decades. Today, modern data and analytics architecture components can be used to infuse automation into each step of this process and embed scalable machine self-learning into operational processes.
Embedded business rules and predictive analytics that drive operational decisions is not new, and there have been product offerings in this space with robust functionality for years. However, this technology has gained limited adoption, due to both cost barriers and the complexity of deployment and support. Today’s contemporary big data architecture and open source software may be the gateway to more widespread adoption. The data management vendor space in this brave new world of data and analytics is crowded, but the area of real-time decision management that allows for production scoring and learning within analytical assets is much less populated.
There is a large opportunity for organizations to build these types of applications on top of their big data stack and an even bigger opportunity for vendors in the data management space to extend their offerings to address real-time decision management.
There are three core functional capabilities that need to be developed to support real-time decision management: a decision service, a learning service, and a decision management interface.
The decision service determines the array of possible outcomes of a process. It accepts decision requests from business processes, applies business rules to filter a decision set, scores predictive analytics for the decision set, arbitrates by a business-defined strategy, and returns an optimized result back to the business process. This is typically a rules engine of some kind, either proprietary or open source.
The learning service improves statistical predictions or categorizations over time. It maintains analytical assets for the decision set, updates predictive assets when responses are available, and passes production-ready predictive models to the decision service. This would be a machine or statistical learning offering, also available from both proprietary vendors and in several open source versions.
The decision management interface allows the business to define and update a decision set and/or decision set metadata, define business rules, and define a segmented decision-making strategy that includes rules, predictive analytics, and other key decision metrics. This could be adapted from existing decision management tools or built from scratch.
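As a purely illustrative sketch, the three capabilities could be pictured as three narrow service interfaces. The names and signatures below are assumptions made for the example, not the API of any particular product:

```java
import java.util.Map;
import java.util.function.Predicate;

// Illustrative only: a decision is represented as a plain String and the
// request context as a simple key-value map to keep the sketch self-contained.
interface DecisionService {
    // Applies business rules to filter the decision set, scores the remaining
    // candidates, arbitrates by the defined strategy, and returns the winner.
    String decide(Map<String, Object> requestContext);
}

interface LearningService {
    // Feeds an observed customer response back so the predictive assets
    // behind this decision can be updated and promoted to production.
    void recordResponse(String decision, boolean positiveResponse);
}

interface DecisionManagementInterface {
    // Lets the business define a candidate decision and the rule that gates it.
    void defineDecision(String decision, Predicate<Map<String, Object>> eligibilityRule);
}
```

In practice, the decision service sits in the request path of the business process, while the learning service works off the stream of responses in the background.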
Building these capabilities on top of a big data stack (including data lake storage and data transformation capabilities) is transformational in terms of the availability of information to support the decision, the performance of the decision request, and the performance of the learning service. We have seen cases where the data query run time to support a decision has been reduced tenfold (for example, from around fifty milliseconds down to less than five milliseconds per query). Applications that used to only consider one month of customer history due to performance constraints can now include all customer history. In other situations, the learning service previously choked on the volume of responses, but when moved to a Hadoop data cluster, the distributed nature of the environment is not overly taxed. With the potential for processing thousands of concurrent requests per second, these big data-driven benefits change the game in operational contexts.
Exploratory analytics and machine learning can certainly generate insights that may be turned into actions that may drive value. On the other hand, operational machine learning that can scale within an embedded business process can drive value without ongoing human intervention. While your company may not feel it has become a money printing press, this capability does offer the potential to generate massive and ongoing business value.
Rich Masi heads NewVantage Partners’ data science and analytics practice and its Charlotte, NC, office.
Tom Davenport, the author of several best-selling management books on analytics and big data, is the President’s Distinguished Professor of Information Technology and Management at Babson College, a Fellow of the MIT Initiative on the Digital Economy, co-founder of the International Institute for Analytics, and an independent senior adviser to Deloitte Analytics. He also is a member of the Data Informed Board of Advisers.
This article first appeared on the Data Informed site Dec. 13. 2016, here. | https://medium.com/mit-initiative-on-the-digital-economy/printing-money-with-operational-machine-learning-115d03eeeeff | ['Mit Ide'] | 2017-01-02 16:30:28.488000+00:00 | ['Machine Learning', 'Data Science', 'Big Data'] |
E.E. Cummings’ Poem Ripped by His Feedback Partner, F.F. | E.E. Cummings’ Poem Ripped by His Feedback Partner, F.F.
“i carry your heart with me” doesn’t cut it with collaborator
Photo by Andraz Lazic on Unsplash
e.e.:
So far I’ve just got this first verse. Let me know if you’re feeling it:
i carry your heart with me(i carry it in
my heart)i am never without it(anywhere
i go you go,my dear;and whatever is done
by only me is your doing,my darling)
F.F.:
I hate to say it, but you lost me right off. What’s with that lower case “i”? It literally SCREAMS “low self-esteem.” You don’t want to start off sounding like a starving artist who can’t afford to repair his wonky shift key.
e.e.:
My shift key works fine. I was experimenting with a new style of . . . never mind, it’s hard to explain . . . How about the content? I’m onto something there, right??
F.F.:
If you mean my nerves, maybe. You’re gonna carry HER heart in YOUR heart? How’s that gonna work?? Why, even your largest chamber — that’d be your left ventricle — couldn’t possibly fit an entire —
e.e.:
It’s METAPHORICAL, man. I’m trying to get at how she’s just always there, you know, deep inside me, like a . . .
F.F.:
Like a what? Like a tumor? It’s gross, man. She’s in there with your pumping blood and your plaque and all the —
e.e:
(miffed) Plaque is in the arteries. I’m pretty sure —
FF:
My point is: plaque isn’t pretty. And ventricles are revolting. Say something complimentary. “My love is like a red, red rose/That’s newly sprung in June” — that sort of thing.
e.e:
That’s been done.
F.F.:
NOT Donne. It was Robbie Burns who wrote “A Red, Red Rose.” Just switch up that sort of thing. Maybe say: My love is like . . . an awesome amaryllis. My girlfriend grew this amazing “Apple Blossom” Amaryllis, it was two feet tall —
e.e:
No flowers! I’m not a florist — I’m a modernist. And you’re my feedback partner. You’re not supposed to tell me WHAT to write. Just give me notes on what works for you, and what doesn’t. Does anything work for you??
F.F.:
Sorry. Yes, absolutely, the “my dear” part works. And the “my darling.” Strong! Chicks eat that stuff up. But . . . that part where you lug her around — anywhere you go, she goes?? Women want freedom, their own space — they don’t want to just tag along, squished into some guy’s left ventricle.
e.e.:
Hold on. You’re being literal again. It’s poetic. I’m a freakin’ poet —
F.F.:
And I’m NOT? I seem to recall my “Ode to an Adder’s Tongue” got into the same anthology that rejected you. Anyway, you ASKED for my feedback. If you can’t even take a few notes—
e.e.:
— I can take . . . I WILL take them . . . under advisement.
F.F:
Good. Now, maybe rethink the whatever is done by only me is your doing. Like, what if you did something wrong? — say you screwed up and forgot your anniversary — it sounds like you’d be saying it’s her fault.
e.e.:
(deflated) What I meant was . . . never mind. Is that it?
F.F.:
Pretty much. Just massage it a bit, is all I’m saying. Oh, and expand that scrunched-up spacing. You’ve got to let a poem B R E A T H E . . . | https://medium.com/jane-austens-wastebasket/e-e-cummings-poem-ripped-by-his-feedback-partner-f-f-315d63b62d6c | ['Judy Millar'] | 2020-10-09 00:14:59.425000+00:00 | ['Literature', 'Writing', 'Satire', 'Poetry', 'Humor'] |
Czech Carmaker Škoda will use Israeli AI Solution to Optimize Engine Production | The Israeli pioneer in process-based industrial artificial intelligence, Seebo, announced a partnership last week with the leading carmaker Škoda Auto. The collaboration aims to use Seebo's unique process-centric AI solution to predict and prevent losses in automotive production lines.
By employing Explainable AI technology, Seebo enables process manufacturers to predict and prevent unexpected process inefficiencies that continually damage production yield and quality. The Seebo solution empowers production teams to discover process inefficiencies and their operational impact, pinpoint why these process inefficiencies happen and predict when they will happen next.
Seebo was set up in 2012 by Lior Akavia and Liran Akavia. Seebo has over 50 employees, working out of offices in San Francisco, Tel Aviv, and Shenzen. The startup has raised $16.5 million to date, according to Start-Up Nation Central. The company provided its solutions to several manufacturing sites worldwide, such as Hovis, Nestle, and PepsiCo.
Much has been predicted and discussed about the transformation AI will bring to autonomous vehicles and how it will completely change the driving experience, but not many have emphasized how AI will transform most aspects of the auto-manufacturing process. AI will not just change the vehicles that are built; it will also change the entire business of how they get built. Increasingly, AI applications are supported by the adoption of devices and sensors connected to the Internet of Things (IoT). As companies rush to apply AI to high-value industrial tasks such as predictive maintenance or performance optimization, we are seeing a rush of investment in AI technologies.
“The use of AI in the automotive industry is expanding beyond autonomous vehicles into the production plants, to attain smarter, data-driven manufacturing processes. This collaboration demonstrates Škoda’s continued commitment to remain innovative while excelling in production technology and we are proud to be part of their smart manufacturing strategy.”, said Seebo CEO and co-founder Lior Akavia. | https://medium.com/jewish-economic-forum/czech-carmaker-%C5%A1koda-will-use-israeli-ai-solution-to-optimize-engine-production-2e462f63ad1e | ['Liran Zitser'] | 2019-11-11 10:12:51.628000+00:00 | ['Israeli Startups', 'Automotive', 'Innovation', 'Startup Nation', 'Artificial Intelligence'] |
B2B eCommerce Marketing Strategy: 2021 | B2B eCommerce Marketing Strategy: 2021
Exciting stuff on the horizon for every online retailer.
Photo by Oleg Laptev on Unsplash
The future of the world is changing at a rapid pace, and the demand for B2B eCommerce is quickly outweighing the supply. Today you can order takeout with just a button press, get an Amazon delivery the next day, and even chat with friends and coworkers digitally without having to leave your bed. Whether we like it or not, the world is changing and the new norm is here.
There is no denying it, now is the time of convenience, and up to 66% of consumers are more likely to buy from a business based on availability and accessibility, while 47% will choose based on price/value; people are seeking ways to make their lives easier.
This means your business needs to take the necessary time to learn about the new emerging trends for the next year to come, and gear your business towards success over your competition who will still be living in the past.
Now let's dive into the new B2B eCommerce trends for the coming years.
Voice Search Is On The Rise!
Think about how many times you have personally searched for something using your voice, whether to find that perfect YouTube video, get directions on Google Maps or find a product online, voice commerce is slowly becoming more prominent.
Nowadays, with the increased acceptance of using smart home devices like Google Home or Alexa, or even Apple’s Siri, people are finding that the accuracy and convenience of this new technology is making texting more and more obsolete.
47% of all voice searches are aimed at finding a product, which means your business must optimize for those commonly searched voice queries by including them as long-tail keywords within your future blogs and image descriptions. Doing so gives your business an edge and taps into this massive emerging market of voice-driven eCommerce consumers.
SMS and Mobile Messaging Shouldn’t Be Overlooked
Although it might seem like outdated technology, SMS and mobile messaging are fast becoming the easiest way to reach out to your customers. With the focus on everything being mobile-friendly (mobile shopping, mobile checkouts, and mobile payments), your business can now take advantage of direct-to-consumer texting!
Consumers love it when brands take the extra steps to make it personal, and with people spending on average 4 hours a day on their smartphones, it would be a complete shame for your business to miss out on this opportunity for increased brand awareness and connection with your customer base.
Moving forward, try an SMS marketing campaign and see how the results will completely blow most email marketing campaigns out of the water!
Optimizing For Mobile Shopping
As an eCommerce business, you need to make sure to have your business optimized with the mobile shopper in mind, this means:
Having responsive, eye-catching website design User-friendly navigation Fast website load times An intuitive checkout process Compressed images and videos No noisy and invasive pop-ups and ads High-quality content without too much fluff
By making these necessary changes to your current eCommerce store, you are guaranteed to see an increase in how well your page ranks on Google, as well as increased visibility to potential customers!
Making Your Products Available On Google
The above point talks about how to optimize your eCommerce shopping experience organically, which due to the nature of SEO, will always be a long-term game. However, you can quickly transform your eCommerce business by listing your products on Google directly!
In recent years, Google marketplace has emerged as one of the most powerful sales and marketing channels for eCommerce retailers to boost their online visibility and attract a massive audience for increased site traffic and sales.
To get started, first set up a Google Merchant Center account, and a Google AdWords account. From there, you can create your first Google Shopping campaign and begin advertising your products (if you’re struggling to find products to sell, check out Google Trends and see what people are searching for!)
Remember, the power of Google Marketplace is:
Its ability to allow your business to visually represent a product, which in a sea of text, while help make your business stand out
Allows for your business to show up in various parts of a webpage, whether as an actual page result, a text-only PPC result and a Shopping result
It will boost your audience, reaching across your region and internationally, unlocking huge growth for your brand
Design, Design, Design!
The design of your brand will make a big difference in revenue and profitability beyond simple first impressions. Subtle changes in the colour of your website, the nuances of a hover animation over a “buy now” button, and even the layout of your store, will all play a role in increasing conversion for your business!
94% of a website user’s first impression are design related
- Kinesis Inc
Great design doesn’t just stop at your store. Think about how Amazon has their logo on everything from their storefront, advertising, packaging, and other materials. This is your chance to build brand awareness among consumers through intentional and strategic design, which in turn will connect you with your customers on a deeper level, and boost repeat sales.
Remember, design is everything for an eCommerce brand, and arguably the most important part is the packaging combined with the unboxing experience.
Diversification In Your eCommerce Strategy
If there is one lesson that can be taken away from the COVID-19 pandemic, it is to not have all your eggs in one basket.
Businesses that relied on Amazon FBA learned this the hard way when Amazon shut down all FBA shipments for non-essential items from third-party sellers. This meant that if a business relied solely on Amazon FBA, then that business was in major trouble.
So the takeaway here is to not be afraid to branch out!
2021 is going to be a fantastic year, full of growth and prosperity, and with the plethora of alternative marketplaces such as Walmart, Google, and eBay, your online eCommerce business will have no trouble reaching more customers, building brand awareness, and, ultimately, increasing sales.
Handling eCommerce Payments
Once you diversify your business, it will be important to also update your accounts receivable infrastructure so that it scales with your new eCommerce business model.
It’s time to move away from manual processes like mailing invoices, and rather, setting up online payment options through your bank, or third-party software like Stripe or PayPal.
This shift towards electronic disbursement will save your business time and money and will keep your business thriving in the "era of convenience" we are all living in now. People want an easy checkout process, and this gives them just that.
Recap
2021 is gearing up to be an explosive year full of insane growth for the eCommerce industry, but with that, come’s change.
Make sure your business is ready by adapting with the times and looking into the following trends:
Voice commerce SMS and Mobile Messaging For Connecting With Your Customers Optimize For The Mobile Shopper Leverage Google Marketplace Diversify Where Your Business Sells Set Up Different eCommerce Payment Options
By adapting to this new era of convenience, you are setting your business up for success, and building infrastructure that will establish your business and brand as a leading player in your industry. | https://medium.com/the-innovation/b2b-ecommerce-marketing-strategy-2021-17d8dfbf66c2 | ['Lev Markelov'] | 2020-12-24 20:02:15.988000+00:00 | ['Trends', 'B2B', 'Ecommerce', 'B2b Marketing', 'Marketing'] |
Almost Love Letter | I’m trying so hard to believe there’s enough love in the world.
For both of us.
I grit my teeth and silently wish I was magic.
Able to slip in and out of this reality.
Into another where rejection was someone I barely knew.
A brunette passing me by in the night.
Blurred movements as I rush to the show.
Those tickets were so damn expensive.
It’ll be so much fun to experience this with you though.
Moon and stars hanging over us.
Dressed to the nines.
We’ll retreat to my place after it’s over.
Watch a movie and share our smiles.
Intimate images flicker through my mind.
In and out they go.
Frequent and vivid.
I wish they’d coalesce into a bridge.
Shimmering under the twilight hour.
Letting me walk across and away.
From this world where I wasn’t good enough.
Deficient and unwanted.
I’ll probably never know exactly why you said no.
But I know I have control over my feet.
As long as this bridge keeps forming.
I can someday make it to the world in my dreams.
The one where I was finally good enough.
The sun finally able to set on my isolation. | https://jeauxzephwrites.medium.com/almost-love-letter-32634fa083e8 | ['Joseph Coco'] | 2019-06-16 23:40:47.167000+00:00 | ['Love Letters', 'Writing', 'Poetry', 'Love', 'Breakups'] |
Battling Social Media Addiction | “Shot through the heart, and you’re to blame, darlin you give love a bad name”, wakes me from a deep sleep that I’m pretty sure I just fell into about an hour ago. (I’m 40, of course I love Bon Jovi!)
Rolling over, I tap my watch on the stand and spend a few seconds focusing in on the screen; 3:30am, again. Ugh! My body is screaming at me to hit snooze, but my brain quickly steps in, “not happening girl, that app you’re using has no snooze remember?”.
Why do I torture myself like this? No sane person in this world gets up at 3:30 in the morning to workout before having to be at work before 6:30am, right?
I drag myself out of the bed, place my feet on the cold floor, and grab my phone from the flat wireless charger it stays on beside me all night. I hit the button on the app that gives me about 45 seconds to get to the kitchen and take a picture of my cabinet before it starts going off again and rush to the kitchen.
Although I didn’t accomplish much in 2019, I am thankful that I at least beat the obsessive snooze epidemic by finding that app. As soon as I have the all clear from my alarm, it starts.
I can feel the heat in my body start to rise into my throat. My fingers start to twitch just a little. The overwhelming urge to check every social media newsfeed that I follow is taking control of my body.
I resist long enough to make it to the restroom and as soon as I sit down, it’s on. Telling myself I am only going to check notifications, I grab my phone and log into Facebook. Just as I start to check into Twitter to see what’s going on, my toes start tingling. I just barely notice it, but don’t give it much thought. Next, I’m checking my Medium stats, then Instagram, LinkedIn, and on to my emails.
By now, my feet are completely asleep and I finally realize that I’m still sitting on the toilet and it’s already 4:00am. Chastising myself, I put my phone down, wash my hands and swear to myself I am just going to throw on my workout clothes and get busy.
Another 30 minutes passes by as I put one sock on, check my phone, another sock, check my phone, my pants, read an article on Medium, then my shirt, post on Facebook; another morning, another hour completely wasted. Now, I am going to end up being late getting the kids to school, AGAIN and probably late for work too!
This is my morning. EVERY morning.
For the rest of the day, as I am sitting or standing at my desk at work, it’s basically the same thing. It’s even worse if I am having to deal with a difficult task that day or a stressful situation. I must check my phone before I get started and then every time I feel just an ounce of stress or anxiety, I grab it again.
My evenings are pretty much the same. If I had a dime for every time my husband has asked me to put the phone down, I would be one seriously rich woman by now.
It has taken me a very long time to be able to admit that I have a serious problem. It’s embarrassing, quite frankly, and it means I have to acknowledge that my husband was right every time he said something to me about it, which is not something I am very proficient at doing (haha).
I’ve done my homework. I’ve researched social media addiction, tried every recommendation on the World Wide Web of knowledge and opinion, and have even talked to my therapist about it multiple times.
I have checked Facebook 10 times just while writing this article.
Apparently, it is estimated that 5 to 10% of all Americans suffer from this same problem. If the time you are spending on social media is impacting other areas in your life such as work or time with family, I would highly recommend that you consider the possibility that you are suffering from social media addiction as well.
I have withdrawal symptoms if I can not access my social media for any length of time. I get pretty defensive if anyone tries to tell me to put it down. Every time I logon, I feel a brief sense of happiness and stress relief. For me, it’s often a coping mechanism; an escape from life and what I may be dealing with at that moment.
In the past, I would get the same sensation every time I would take a drag off of a cigarette, back when I used to smoke.
Social media addiction produces the same neurological results as gambling, alcohol, and even drug use such as cocaine because it affects the same part of the brain due to a high increase in dopamine.
I do not have any answers on how to best address this issue as I am still battling it every day. I have found that deliberately spending time writing and exercising does tend to pull me away from social media more than anything else. I have also found that turning off the notifications from my social media accounts helps just a little, but a little is better than nothing.
Since becoming addicted to social media, I have developed/been diagnosed with anxiety and depression. From all I have read on this topic and my discussions with my therapist, it is believed that excessive social media use does increase the risk of developing both of these issues, so it’s very likely that my addiction has caused this.
Hopefully, sometime later in 2020, you’ll see an article from me on how I beat social media addiction. If this is something you suffer from, or have suffered from in the past, I would LOVE to hear from you! Let me know what you tried and what has helped. I am all ears! | https://medium.com/what-doesnt-kill-you/battling-social-media-addiction-db1f11058d3e | ['Brooke Moore'] | 2020-01-03 22:48:00.514000+00:00 | ['Addiction', 'Anxiety', 'Social Media', 'Depression', 'Mental Health'] |
Blockchain as the Next Evolutionary Step of the Open Source Movement | There’s little argument that open source has transformed our world. As a developer, I cannot recall a single day in the last few years where I did not rely on open source software. I’m not the exception. The majority of software engineers today rely on open source daily in their professional lives.
For one, open source is dominating developer infrastructure. From operating systems (Linux in the cloud) to databases (MySQL, MongoDB, Redis) to programming languages themselves (JavaScript, Python, Java, C, PHP). It’s not just developers, it’s consumers as well. From what they run on their phones (Android) to how they access the web (Chrome, Firefox).
The motivation is clear. Open source is good for humanity. It is making technology more accessible and open — anyone can build anything.
Open source was not always mainstream
If you had asked a random developer 20 years ago whether this idea of open software would ever catch on, they would have laughed. Sharing intellectual property, your competitive advantage? absurd. Would it affect real business? barely, it’s a niche. The ones leading it? anarchists, trying to tear down establishments.
This is not very far from how many people view blockchain today. Decentralizing control when you can hold on to it? absurd. What is the business use-case? not mainstream, a niche. The ones leading it? anarchists, trying to tear down institutions.
With blockchain, it’s actually worse. The inflated cryptocurrency bubble and its recent recession, the abundance of opportunism and over-speculation, are all adding even more suspicion to the mix.
Open source and for-profit companies
In the beginning it seemed that open source and for-profit companies were mutually exclusive. Corporations like Microsoft were hailed as the enemies of open source. Companies saw code as their secret-sauce, which sharing would bring-about their downfall or destroy their competitive edge. Today, nothing is further from the truth.
The biggest contributors to open source today are for-profit enterprises like Microsoft, Google, IBM and Facebook. These companies are leading many of the most popular projects like React and TensorFlow. Personally, I was lucky to be part of such a company, Wix.com, and help take it from a walled-garden to number 11 in some obscure ranked list of global open source contributors in 2017.
Why are these companies choosing to open parts of their IP? Well, it’s certainly not because of ideology. Open source is making these companies more competitive.
A good example is Google with Android. Google was late to the coveted space of mobile already predominated by Apple with the transformative release of the iPhone. Microsoft was late as well (technically they were there first but with the wrong product), with more years of operating system domination than the other two combined. Penetrating this emerging market was no easy task.
Part of Google’s strategy was relying on a largely open source operating system, Android. Manufacturers like Samsung would have otherwise found it hard to join — basing critical parts of their business on an ecosystem they have zero control over would be unwise. This strategy paid off. The Android ecosystem grew as the open answer to Apple’s closed garden. Google did forgo the ability to sell licenses for this property, but gained something much more valuable: presence in the pockets of over a quarter of the population of Earth.
The same debate is taking place today about blockchain. Why would for-profit companies, market leaders especially, ever opt to decentralize any part of their business? Isn’t their position of power linked to centralized control?
I argue that they will do so for the same exact reason. Not because of ideology, but because it will make them more competitive. To maintain their positions of influence they will have to make ecosystems that are more open. Otherwise, their competitors will, and win.
Forks, control and the balance of power
We’ve seen why a company like Facebook would release IP like React — a project that transformed the way web frontend is built. What is less clear is why other companies would adopt Facebook’s native technology for their own critical-business paths.
I was fortunate to have had a front row seat to such a decision. When I was at Wix.com there was a debate on whether to base the Wix.com website editor on React. For a company that creates websites for a living, risking the editor is risking the lifeline of the company as a whole.
Imagine that one day Facebook decides to compete with Google for web domination and releases its own web browser as an alternative to Chrome. Significant parts of the web are based on React. What if, in this dystopian future, Facebook decides to make React incompatible with Chrome? This decision could jeopardize Wix.com’s business.
The governance of open source is successful mainly due to the concept of forks. Anyone can take the entire source code of any open source project and make a copy that they control, at the click of a button. If Facebook would ever make React incompatible with Google Chrome, Wix.com can fork React and create a version that is compatible. If the community favors this fork to the one Facebook is maintaining, they would adopt it. At some point, the more popular fork would actually become “the” React in the eyes of the public.
This delicate balance keeps Facebook in check. Facebook may maintain its position of influence as long as it doesn’t abuse it. Where does the line cross? where the consensus says it does.
This sounds awfully close to how blockchain governance works. This same guarantee of the ability to fork is one of the core guarantees this technology provides to its users. One thing to notice is that this guarantee is much stronger under blockchain. Beyond the source code of the system, you can fork all of its data as well.
A continuation of the open source movement
We’ve drawn several parallels between open source and blockchain. We’ve seen similar regard to both movements in their early days. We’ve seen similar motivations of openness. We’ve seen the same questions whether for-profit companies fit in. We’ve seen the same governance and balance of power.
I argue that it’s more than mere coincidence. I see Blockchain as a continuation of the open source movement, picking up where this left off.
There’s a clear limit to what can be shared with open source. Open source cannot open up live systems, and it cannot open their data. You can share the source code for a server, but naturally you cannot share a running instance of that server.
Blockchain is making this next step technologically possible.
A concrete example
Let’s go back to Android. We’ve seen the value the ecosystem derives from having control of the operating system source code — this value allowed companies like Samsung to join in and made this ecosystem attractive.
But Android is not just source code. There are many living services required for the ecosystem to function. Android relies on push notifications, it relies on payments, it relies on apps being downloaded from Google Play. These services are all running instances, not just code. Billions of users query them daily. They hold data.
Who is running those services? Let’s focus on Google Play. With a name like that the answer is self evident. Google is running these services on private infrastructure that isn’t shared with anyone.
What is the cost of having Google in sole control of Google Play? For starters, a 30% fee every developer pays Google for the benefit of distributing their app digitally. But that’s just money. Every mobile developer has felt the uneasiness with the app approval process. Less fortunate ones have experienced app rejection and the 3 strike suspension policy. Absolute control of a distribution channel where said company’s own products are being sold and distributed can’t be good for competition. See what’s going on with Spotify in the Apple store.
What about competing app stores on Android? They’re possible, Amazon did a great job of building an alternative. Unfortunately these alternatives cannot resolve the fundamental problems and offer little differentiation.
Being able to run a service like Google Play together would provide a great value to the Android ecosystem. Such an idea is not technically possible with open source alone.
It is technically possible with blockchain though, I will be more than happy to show a proof of concept in one of my future posts. But let’s be honest, we won’t see this happening any time soon. A community run decentralized Google Play isn’t practical enough today primarily for business reasons.
Instead, I see a different opportunity. An opportunity for a for-profit giant like Microsoft, desperate to carve some niche in the mobile space. Building an app distribution channel that is not completely centralized. Giving a few more guarantees to the developers who are relying on it. Not because of ideology, but because it will make the offering more competitive — more competitive than Google Play and Amazon Appstore at least.
Blockchain will become mainstream
I believe that history will show that every system that has multiple parties relying on it will eventually have to provide its users with some hard guarantees. Not for ideology, but to remain competitive.
This doesn’t mean that every system will run on blockchain. Just like every piece of software doesn’t have to be open source. But there is some critical part of the world that has to be open. Just like some critical parts of software must be open source for companies to be successful. | https://medium.com/orbs-network/blockchain-as-the-next-evolutionary-step-of-the-open-source-movement-96158f46d0d7 | ['Tal Kol'] | 2019-03-27 17:05:39.087000+00:00 | ['Blockchain', 'Open Source', 'Identity', 'Software', 'Development'] |
Internship testimonial: Sara Neves | Get to know our Summer Interns Class of 2017 and their feedback on the “endless summer” they had here at the office.
Could you please introduce yourself?
SN: Hi! I’m Sara, I’m 21 years old and I come from a small town named Ourém. I’ve just finished my Bachelor’s Degree in Design and Multimedia at the University of Coimbra, which offered me the opportunity to merge two things I’m passionate about — design and coding.
How did you get to GetSocial?
SN: I was looking for an internship and Faber’s Summer Internship was my number one choice. I found out about it through an event called DotWorks in Coimbra, where I had my interview. I was sure I would have people helping me and willing to share their experience. I knew I would learn and have the opportunity to apply what I know, but this time in a real company. GetSocial was the startup they chose for me and that allowed me to do so.
Sara doing it again! Works like these: About Us, Shares Report and much more.
What have you done during the internship?
SN: At GetSocial I was as a UI/UX Developer slash Designer. My main project was to improve GetSocial’s User Experience. I also did some promotional content for our social media channels and I have just finished rebuilding the onboarding experience for new users, so they have a smoother welcome to the GetSocial platform.
Did this experience meet your expectations? Would you recommend to future interns?
SN: I chose this internship because I was sure I would learn, but I never thought I would learn THIS MUCH! Every Friday I went home with my mind so overwhelmed. Being immersed in the startup’s world is far better than what I was expecting, and I already had high expectations… I really appreciate the fact that they are always willing to help, sit next to you and give some feedback or guidance if you’re feeling lost.
It was also good to see how a startup environment actually is and to work with a small but diversified team. I definitely recommend it! | https://medium.com/getsocial-io/internship-testimonial-sara-neves-2d82b31bdb6 | ['João Correia'] | 2017-09-13 15:57:34.504000+00:00 | ['Getsocial', 'Startup Life', 'Faber Ventures', 'Internships', 'Startup'] |
8 best practices of high-converting websites | Your website is the center of your online presence, and the center for all your marketing efforts. Attracting potential customers is great, but once a visitor lands on your website, you need to have a strategy to convert them into a paying customer. To increase the chance of conversion, you must:
understand your ideal customer
understand their intent
understand their challenges
understand what they expect from a website like yours
Every customer’s journey starts by researching online, and your brand’s website must guide potential customers through that journey. Design it to tell an engaging story, highlight a problem, provide the solution, call them to action, and promise a happy ending.
In order to design a marketing-activated website that converts visitors into customers, keep these 8 best practices of high-converting websites in mind.
Design for customers, not designers
Left to their own devices, web designers will craft a website for design’s sake, not necessarily for the customer. For instance, remember Flash? Now almost extinct, Flash initially turned the heads of designers and programmers as it provided immense visual capabilities. However, in the new era of SEO and mobile-friendly sites, Flash can’t deliver what’s required in the market. A user-friendly website has an impressive yet simple design. It’s easy to navigate, quick to load, and efficient in guiding the customer towards conversion.
Place visible calls-to-action
Even the most perfect piece of content balanced with flawless SEO tactics will fail to convert visitors into customers if it’s missing a strategically placed call-to-action (CTA). Visitors will read your amazing content, but without a push in the right direction, they may go to another website where your competitor will be happy to assist them. A well-placed CTA will drive visitors to take the next step. It’s wise to use only one CTA per page to avoid confusion. Don’t forget: the CTA should create a sense of urgency that compels readers to act.
Aim for user-friendly navigation
Simple navigation can make a huge difference in your customer’s journey, and conversion often depends on whether your visitors can find what they are looking for. When designing your website’s navigation, place each section where it feels natural and will be easiest to find. Try to limit the number of menu items to seven or fewer. If your brand logo is in the header (which it should be), link it to your homepage so visitors can revisit it easily.
Provide relevant, fresh content
Creating a solid content strategy is key to ensuring your website stays relevant and valuable to your audience. In order to create useful content, you must first understand the needs of your target audience. Consider creating content that answers questions already asked by your visitors. Use emotion, sincerity and authenticity to empathize and connect with your audience. Crucially, if you want your content to get discovered, make sure your posts and pages are SEO-optimized with smart keyword usage, metadata, and other on-page elements.
Don’t compromise on speed
People are not patient, and slow-loading webpages will almost certainly lead to a higher bounce rate. If your page takes longer than five seconds to load, it’ll frustrate your visitors and give them a reason to search elsewhere. To increase the loading speed of your webpages, consider removing any nonessentials, such as videos or large images that take extra time to load. Compressing images will also reduce loading time. Finally, utilize browser caching for storing cached versions of static resources to speed up your pages significantly.
Showcase your offerings
If you don’t do anything else, at the very least, showcase your products and services on your homepage. You have only a few seconds to make a first impression, so make it count. Professional photos and spotless copy will help you put your brand’s best foot forward. Product pictures and descriptions should be detailed, useful and appropriate. You may have a great website design, but if your photos look fuzzy, potential customers will think twice about buying your products. However, take care not to overload your website with graphics. Select a few good images and feature them on the homepage.
Establish trust & credibility
Potential customers are less likely to enter their contact information or make a purchase if they suspect that your website is not secure or trustworthy. Communicate your trustworthiness by featuring customer testimonials, case studies, reviews, security badges and your privacy policy. Make sure your contact information is easy to find so visitors know they can reach you. All of these signals will help you establish trust and credibility as a reputable brand.
Communicate your value proposition
Use compelling language to convince and show readers how your brand will add value to their lives or resolve their problems. What benefits can customers expect to enjoy by making a purchase or signing up for your service? What features make your products better than what your competitors offer? If you can excite your visitors with your value proposition, you will see your conversion rates improve.
Even after you’ve followed all of these practices, review your website regularly to create a list of changes and optimizations. Be innovative about delivering the best experience possible, and never stop iterating. Optimization is an ongoing process, but the rewards are well worth the effort.
About Olivia Carter
Olivia Carter is a freelance writer and short story author. She is a performer, singer and a fitness freak. She loves to write about technical stuff, the internet and other trending topics. She lives in Portland and has a degree in English literature. | https://medium.com/lucidpress/8-best-practices-of-high-converting-websites-6c03d2ef7fbb | [] | 2017-10-03 17:05:13.245000+00:00 | ['Marketing', 'Ecommerce', 'Lead Generation', 'Conversion Optimization', 'Web Design'] |
The Sun Isn’t Hot Enough to Shine | The Sun Isn’t Hot Enough to Shine
Nuclear fusion requires 100 million degrees Kelvin, yet the Sun's core can only reach 15 million. How then does it create light? A quantum phenomenon known as quantum tunneling is the answer.
Students are taught that a star has enough gravity from its immense size to create high enough pressure and temperature to overcome the natural repulsion of protons in hydrogen atoms and fuse them into helium. The helium weighs slightly less than the original hydrogen atoms, and the missing mass is released as an enormous amount of energy, as dictated by Einstein's E=mc^2. According to this equation, even a tiny amount of mass (m) becomes a lot of energy (E) when multiplied by the speed of light squared (c^2), which is roughly 9 x 10^16 m^2/s^2.
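As a rough worked example using round figures: fusing four hydrogen nuclei (about 6.69 x 10^-27 kg of protons in total) into one helium-4 nucleus (about 6.64 x 10^-27 kg) leaves roughly 0.7% of the original mass unaccounted for. Plugging that missing mass into E=mc^2 gives on the order of (5 x 10^-29 kg) x (9 x 10^16 m^2/s^2), or about 4 x 10^-12 joules per reaction. That sounds tiny, but multiplied across the staggering number of reactions happening every second, it adds up to the Sun's entire output.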
While this sounds like a solid explanation, there’s a massive problem: the Sun’s core doesn’t get anywhere near hot enough for nuclear fusion to occur. Here on Earth, our fusion reactors, which may actually power the grid soon, need much higher temperatures.
The mind-bending world of quantum mechanics, though, provides a solution.
Quantum Mechanics, a Crash Course
Quantum mechanics is the science of the very small, where the rules of reality are very different from what we expect. The laws that govern our everyday world don’t make sense at this scale, as particles can appear on the other side of insurmountable obstacles, don’t exist in a specific location until observed, can interact instantaneously with other particles over long distances, among many other counter-intuitive principles. Despite being so weird, quantum mechanics has been a gem for modern science, as it provides incredibly precise solutions to many problems in the classical world of physics.
The crux of quantum mechanics is wave-particle duality, as explained by Schrödinger’s equation. This explains that particles exist as a wave of probability until they are detected, at which point they choose a location. Yes, it sounds insane, but it’s been proven again and again.
This was first shown in Young’s famous double slit experiment. In this experiment, photons were fired at a barrier with two slits in it. Behind this barrier was a wall that could detect where the particles hit. If they behaved as solid particles, then they would pass through the two rectangular slits and produce two corresponding rectangles on the back wall. However, if they behaved as a wave, then they would pass through the slits, begin to propagate again on the other side, and interfere with each other, producing an interference pattern of dark and light bands.
When the crests of one wave meet the crests of the other (and troughs meet troughs), they add. When crests meet troughs, though, they cancel. (Creative Commons License)
During the experiment, when the only detector was the back wall, the photons produced a wave pattern. But, when the particles were detected as they were going through the slits, they produced a particle pattern. The pattern produced on the back wall depended entirely on the particles being observed as they went through the slits. This forced them to stop existing as a wave of probability and to choose a specific location.
So what does this have to do with solar fusion? Thinking of hydrogen atoms — more specifically the single protons in their nuclei — as particles means they don’t have enough energy to get close enough to each other for the strong nuclear force to take over. This force only operates within 10^-15 meters. Thinking of them as a probability wave, though, means they can tunnel through this energy barrier via a strange phenomenon of wave mechanics known as evanescent waves.
Evanescent Waves
When a wave traveling through one medium hits another medium it will do two things: reflect and refract. Imagine pointing a laser at a pool of water. Some of the light will bounce off and some will go through. How much reflects and how much refracts depends on the angle of the light and the properties of the two mediums. When it comes to air and water, the common explanation is that the angle at which 100% of the light is reflected is 48.5 degrees. This is known as total internal reflection.
However, it’s more complex than that. Looking at Maxwell’s equations, which explain all of classical electromagnetism, at the point where the wave hits the new medium, a tiny, fleeting wave is produced. It usually only lasts a few wavelengths, but this evanescent wave can continue much further under the right conditions.
The infrared beam is displaying total internal reflection, in which the beam is supposed to stay contained within the crystal, reflecting off of the boundaries between it and surrounding media. Even at the perfect angle for total internal reflection to occur at the intersection of these particular media, the beam emits an evanescent wave. (Creative Commons License)
Quantum Tunneling and Solar Fusion
So far, we have quantum particles existing as a probability wave, and the fact that when a wave should be 100% reflected, it isn’t entirely, because of evanescent waves. Based on this, if quantum particles are, for example, contained in a box, they will have certain probabilities of being found in different areas of the box and a non-zero possibility of being found outside the box. They have a small chance of appearing outside of where they are supposed to be, on the other side of barriers that should be insurmountable.
The red line is a probability wave, with quantum particles being more likely to be found at the peaks. Like all waves, this probability wave can extend through a barrier due to evanescent waves. Therefore, there is a small probability of quantum particles being found, quite surprisingly, on the other side of an impenetrable barrier. (Creative Commons License)
Applying this to solar fusion, the quantum particles in question are the protons of hydrogen atoms and the insurmountable barrier is the large energy spike needed to fuse them. Because they behave as a probability wave and have a non-zero chance of getting through it, some will inevitably get to the other side, allowing the strong nuclear force to take over and solar fusion to happen. Think of it a different way: protons have a tiny chance of being places they shouldn’t be, including right next to each other without the necessary energy to do so.
It sounds crazy, but the math works. The Sun’s core has about 10^56 hydrogen atoms, and the chances that two of these protons will fuse because of quantum tunneling is about 1 in 10^28. For the Sun to output the energy that it needs, roughly 3.7 x 10^38 fusion reactions per second need to occur. Even though quantum tunneling is exceedingly rare, the Sun has enough protons within such a tightly packed core that the tiny odds of quantum tunneling can easily be overcome.
| https://medium.com/discourse/the-sun-isnt-hot-enough-to-shine-3585af6fdddd | ['The Happy Neuron'] | 2020-12-18 05:21:11.617000+00:00 | ['Quantum Mechanics', 'Science', 'Astrophysics', 'Astronomy', 'Quantum Physics']
When your home is a far away land | Sometimes the only way to realize you have grown up in tragedy, is to move to a far away land
Sometimes the only way to get curious about your origins, is to leave the land of your origin
Sometimes the only way to forgive your parents, is to get to know them from thousands of miles away
Sometimes the only way to meet yourself, is to leave the place that shaped parts of you, and also obscured parts of you
Sometimes the only way to see the beauty in your culture, is to wash yourself with that foreign culture
Sometimes the only way to fall in love with your heritage, is to realize that where you live is broken too, everywhere is a little broken in its own way
Sometimes we have to leave our country, our home, our parents, our city, our habits
Sometimes we have to hate, feel angry, reject, ignore, forget
Before we can look back at home with gratitude and watery eyes that only can see that broken is beautiful | https://jessicasemaan.medium.com/when-your-home-is-a-far-away-land-b2e0062f63e3 | ['Jess Semaan'] | 2018-12-29 21:11:34.961000+00:00 | ['Poetry', 'Immigration', 'Identity', 'Culture', 'Psychology'] |
The roadmap metaphor: speculative alternatives | (This analysis is in part based on a Twitter thread started by Cameron Tonkinwise)
Metaphors change based on the way they are used. As Ian Hacking says of the categories we use to name different kinds of people (people who have autism, child viewers of television, women refugees, etc.), they are more interesting from the perspective of their dynamics, rather than semantics (Hacking 1999). In other words, how terms or categories are used in turn changes the meaning of the term, which continues to inform a dynamic feedback cycle as terms with changed meanings continue to change. In this sense, what a roadmap means changes depending on the instances where the term is used.
Presumably, one of the advantages of the roadmap metaphor is that it makes an abstract, unfamiliar notion like ‘a vision for the future’ more tangible. This is in part why metaphors for abstract concepts are often anachronistic: the past can appear more solid and reassuring than the present or the future which might seem, as Karl Marx put it, to be melting into air. The same impulse informs skeuomorphism, a practice in digital interface design whereby so called real-world or analogue objects are used to represent digital alternatives: the rubbish bins, folders and hourglasses that populate our computer desktops — not to mention the notion of a desktop itself!
For this reason, using the metaphor of ‘a pattern book’ has a certain appeal as the source domain for the target domain of ‘navigating towards future goals’ (as an alternative to ‘roadmap’, in other words). It doesn’t, however, avoid the problem of anachronism and in a sense, a pattern book is no different to a roadmap with regard to linearity: neither tell you where to go, but offer a range of different options for different navigational alternatives. A roadmap is no more or less inherently linear than a pattern book, constellation or a plan. Due to contingencies of usage, the roadmap metaphor may have drifted towards more deterministic conceptualisations of planning and rigid linearity, it is, however, simply a pattern book by another name.
One of the problems with the roadmap metaphor relates to a misunderstanding of what maps do. As noted by November et al (2010), maps are better understood as having a navigational rather than mimetic relationship with the places they are said to represent. The authors refer to the example of a yachtsmen in her cabin looking at a map while navigating the high seas: “The relation she is looking for is based not on some resemblance between the map and the territory but on the detection of relevant cues allowing her team to go through a heterogeneous set of datapoints from one signpost to the next” (2010, 585). While the mimetic or resemblance model of the map suggests a direct correspondence between map and territory, what November et al call the navigational model “emphasizes the establishment of some relevance that allows a navigator to align several successive signposts along a trajectory” (2010, 586).
The authors suggest that a more extensive consideration both of how maps are produced and used highlights the limitations of the mimetic notion of mapping. The “miracle of reference” to which maps attest is not the outcome of a great leap from territory to representation, but the product of a network of “explorers, navigators, cartographers, geometers, mathematicians, physicists, military personnel, urban planners, and tourists that have `logged in’, so to speak, on those `platforms’ in order to feed the `databanks’ with some piece of information, or to draw the maps, or to use them in some way to solve their navigational problems” (2010, 586).
November et al (2010) use a number of terms that seem like laudable alternatives to the mimetic temptations of the roadmap metaphor, including: navigational platform, and dashboard. While capturing the interactive aspect of making and using maps, these two terms have the disadvantage of already being quite common in the vernacular of digital computing. Though perhaps for some audiences this might be an advantage on account of familiarity and a sense of contemporaneity that is lacking from the roadmap and pattern book alternatives.
My personal favourite candidate as an alternative to roadmap is ‘smartphone’. A clunky and no doubt in most circumstances impractical candidate: in present circumstances it is hard to imagine a consultancy, for example, delivering a metaphorical ‘smartphone’ to a client in place of a metaphorical ‘roadmap’ without some laborious exegesis. Imagine politicians talking about their ‘smartphone for a hydrogen future’ or a ‘smartphone to better aged care’. There would be a few raised eyebrows.
The superficial clunkiness of the smartphone metaphor masks a deeper interpretative fecundity, at least this is what I will attempt to argue in what follows. And remember, at one point in history, using ‘roadmap’ in place of ‘smartphone’ in the above examples might have seemed just as perplexing.
While it might lack the romantic undertones of large-scale organic metaphors like constellations, trees and landscapes, the smartphone is nonetheless deceptively complex and large in scale, despite also fitting in a palm or pocket. Furthermore, this lack of romanticism is, I would suggest, also one of its advantages. The world of digital entrepreneurship has pillaged the so called natural world for cosily familiar, seemingly benevolent things and places (e.g. Apples). Better to sacrifice the smartphone to this cause than the sky or the forest.
Presently, it is almost impossible to ‘black box’ the smartphone metaphor as something that is taken for granted in terms of its interpretative affordances. Unlike ‘roadmap’, which is close to becoming so conventional as to qualify as ‘dead’, the smartphone is, in this context, alive with a sense of the inappropriate. As the source domain for a metaphor describing future planning, it is at once familiar and yet so obscure as to demand unpacking. Granted, this is not advantageous in circumstances where brevity is the top priority. In the more gratuitous contexts of academic theorising, however, the smartphone metaphor comes into its own.
The smartphone is among the most widespread, most advanced pieces of navigational equipment that has even been created in the Western world. While the pocket globes of the 18th and 19th centuries might have been advanced in their time, the multifunctional, computational and creative powers of the smartphone testify to the massive advances in navigational technologies over the intervening centuries. Furthermore, not only does the smartphone allow people to navigate through and between places, like previous mapping technologies, it also possesses unprecedented affordances for logging into different platforms. In this sense it is not a bounded, static, or even well understood technology, but a mutable, polymorphous, adaptable digital guardian.
Smartphones are cameras, photo albums, navigational tools, telephones, internet browsers, alarm clocks, messaging tools, music archives, music players and measuring devices for how far we walk — just to name some of the more obvious functions. Considering this polyvalence, I’m reminded of a story from The Bible, Mark 5:1–20, when Jesus encounters a man possessed by demons in the region of Gerasenes and asks his name. “My name is Legion,” the man replies, “for we are many.”
Smartphones are inadequately grasped by the same metaphysics that has a stock and trade in what the linguistic philosopher J. L. Austin so quotably called “moderate-sized specimens of dry goods” — that is, tables, chairs, rocks, vases, balls, jugs and so on (Austin 1976, 8). Better analogies for the smartphone can be found in spiritual and mythological realms, such as the Roman birth gods Genius and Juno, which Peter Sloterdijk includes in his speculative archeology of phenomena that exemplify the primary condition of withness. Sloterdijk characterises the culturally widespread notion of such a spirit being as “a mysterious union of the wonderful and the reliable” which “ensures the psychological space inhabited by the ancient subject discretely and continuously borders on proximate transcendence” (2011, 425).
Sloterdijk uses the example of Andy Warhol’s relationship with his tape recorder as a technological manifestation of a guardian spirit. Warhol was an obsessive recorder, he created over 4000 hours of recordings and referred to the recorder as his ‘wife’ and claimed that it solved all of his emotional problems. Rather than objects, Sloterdijk prefers to call such quasi-objective forms ‘nobjects’: “things, media or persons that fulfil the function of the living genius or intimate augmenter for subjects” (Sloterdijk 2011, 467).
The aptness of the guardian angel concept as metaphor for the smartphone also highlights its advantages for characterising the same target domain as roadmap. Why would any organization want a roadmap when they could have their own institutional genius; a guardian being that facilitates projective problem solving?
Like many familiar nouns, the smartphone is itself metaphoric. Due to the multi-functionality of the device, a more straightforwardly literal name is hard to conceive. ‘Intimate Augmenter’ or ‘Inti-Aug’ for short, seems a reasonably decent alternative, or perhaps Micro-Media-Factory (MMF). Nonetheless, while ‘phone’ might still register a referential sense of the more recent, now largely defunct, location-specific technological ancestors of the smartphone, the longer etymological history of the word can be traced back to the Greek word for ‘voice’, which, when combined with ‘smart’, captures a sense of an intangible, advisory being offering navigational help in an unpredictable context.
No metaphor is perfect. Furthermore, different appetites for obscurity, on the one hand, or clarity and directness on the other hand, tend to divide different users and voters on what is best when it comes to the poetics and performance of language. To once again echo Hacking (1999), perhaps the most innovative and interesting alternative to a single metaphor is not a word or phrase that is more meaningful in terms of its applicability and adequacy, but one that awakens users, institutions and communities to the dynamism of language.
As a closing gesture, below is a list of three criteria that I’ve used in my teaching practice for metaphor design, or, if you want the deluxe version, try this free, self-paced, online learning experience, which takes roughly 2–4hrs
Adequacy: the extent to which the metaphor accounts for all of the elements of the target domain
Applicability: the extent to which the source domain is readily understandable as a metaphor for the target domain
Surprise: the extent to which the metaphor uncovers fresh interpretations of the target domain
List of works cited
Austin, J. L. (1976). Sense and Sensibilia. London: Oxford University Press.
Hacking, I. (1999). The Social Construction of What? Cambridge, Mass.: Harvard University Press.
November, V., Camacho-Hübner, E., & Latour, B. (2010). Entering a risky territory: Space in the age of digital navigation. Environment and planning D: Society and space, 28(4), 581–599.
Sloterdijk, P., (2011) Spheres: Microspherology. Vol. I: Bubbles. Translated by W. Hoban. Los Angeles, CA: Semiotext(e). | https://tomlee-64741.medium.com/the-roadmap-metaphor-speculative-alternatives-86324e3963ef | ['Tom Lee'] | 2020-11-25 04:14:47.760000+00:00 | ['Scenarioplanning', 'Futuring', 'Metaphor', 'Roadmaps', 'Design'] |
Dependencies between DAGs in Apache Airflow | A DAG that runs a “goodbye” task only after two upstream DAGs have successfully finished. This post explains how to create such a DAG in Apache Airflow
In Apache Airflow we can have very complex DAGs with several tasks, and dependencies between the tasks.
But what if we have cross-DAGs dependencies, and we want to make a DAG of DAGs? Normally, we would try to put all tasks that have dependencies in the same DAG. But sometimes you cannot modify the DAGs, and you may want to still add dependencies between the DAGs.
For that, we can use the ExternalTaskSensor.
This sensor will lookup past executions of DAGs and tasks, and will match those DAGs that share the same execution_date as our DAG. However, the name execution_date might be misleading: it is not a date, but an instant. So DAGs that are cross-dependent between them need to be run in the same instant, or one after the other by a constant amount of time. In summary, we need alignment in the execution dates and times.
Let's see an example. We have two upstream DAGs, and we want to run another DAG after the first two DAGs have successfully finished.
This is the first DAG. It has only two dummy tasks.
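In sketch form (the DAG id, task ids and start date here are placeholders; the real values are in the code on Github), it could look like this:

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

with DAG(
    dag_id="upstream_dag_1",            # placeholder name
    schedule_interval="*/1 * * * *",    # every minute, only for this demo
    start_date=datetime(2020, 6, 1),    # must match the other DAGs
    catchup=False,
) as dag:
    first_task = DummyOperator(task_id="first_task")
    last_task = DummyOperator(task_id="last_task")

    first_task >> last_task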
The second upstream DAG is very similar to this one, so I don't show the code here, but you can have a look at the code in Github.
The important aspect is that both DAGs have the same schedule and start dates (see the corresponding lines in the DAG 1 and in the DAG 2). Notice that the DAGs are run every minute. That's only for the sake of this demo. In a real setting, that would be a very high frequency, so beware if you copy-paste some code for your own DAGs.
The downstream DAG will be executed when both upstream DAGs succeed. This is the code of the downstream DAG:
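In sketch form (ids are placeholders; there is one ExternalTaskSensor per upstream DAG, followed by the final task):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.sensors.external_task_sensor import ExternalTaskSensor

with DAG(
    dag_id="downstream_dag",
    schedule_interval="*/1 * * * *",    # same schedule as the upstream DAGs
    start_date=datetime(2020, 6, 1),    # same start date as the upstream DAGs
    catchup=False,
) as dag:
    # Each sensor waits for the upstream DAG run that shares this run's execution_date
    wait_for_dag_1 = ExternalTaskSensor(
        task_id="wait_for_upstream_dag_1",
        external_dag_id="upstream_dag_1",
        external_task_id="last_task",
    )
    wait_for_dag_2 = ExternalTaskSensor(
        task_id="wait_for_upstream_dag_2",
        external_dag_id="upstream_dag_2",
        external_task_id="last_task",
    )

    goodbye = BashOperator(task_id="goodbye", bash_command="echo 'Goodbye!'")

    [wait_for_dag_1, wait_for_dag_2] >> goodbye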
Some important points to notice: the schedule and start date are the same as in the upstream DAGs. This is crucial for this DAG to respond to the upstream DAGs, that is, to add a dependency between the runs of the upstream DAGs and the run of this DAG.
And what if the execution dates don't match but I still want to add a dependency? If the start dates differ by a constant amount of time, you can use the execution_delta parameter of ExternalTaskSensor. For more details, check the documentation of ExternalTaskSensor.
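For example, if an upstream DAG runs five minutes before this one, the sensor could be configured roughly like this:

from datetime import timedelta

wait_for_dag_1 = ExternalTaskSensor(
    task_id="wait_for_upstream_dag_1",
    external_dag_id="upstream_dag_1",
    external_task_id="last_task",
    execution_delta=timedelta(minutes=5),  # the upstream DAG runs 5 minutes earlier
)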
The documentation of Airflow includes an article about cross DAG dependencies: https://airflow.apache.org/docs/stable/howto/operator/external.html | https://towardsdatascience.com/dependencies-between-dags-in-apache-airflow-2f5935cde3f0 | ['Israel Herraiz'] | 2020-07-02 13:32:16.463000+00:00 | ['Python', 'Apache Airflow'] |
Create an SMTP server with NodeJS | In this article, I will show you how simple it is to create an SMTP server using NodeJS and be up and running in a few minutes. You can then send emails to your node application, either as a method of remote control or for 1001 other reasons.
An SMTP receives emails from other email servers or email clients. If you want an application to process emails in realtime, then your own SMTP server is the way to go. Writing your own server provides some added benefits, like being able to use your own domain name with unlimited email addresses at no extra cost.
There is a bit more work to get it up and running, but I promise, it’s not that much.
What you're going to need.
Your own domain name with the ability to edit DNS records.
NodeJS
A fixed IP Address
If you're on a home network you will need to be able to set up port forward rules on your Internet Router.
DNS setup
In order for a domain to receive emails, it needs to know about the mail server that will handle incoming email.
I already have a domain hackmail.net which I will use as an example throughout this tutorial.
Here is how you need to configure your domain records. I use 123-reg.co.uk for domain registration. They provide their own guide on mail server setup. Your registrar may have a similar guide.
Step 1
Delete any existing MX records on your domain.
Step 2
Add a new A record that points to your internet router IP Address.
Hostname: mail
Type: A
Destination IP Address: <Your router’s Public IP Address>
Step 3
Create a new MX record that points to the fully qualified hostname
Hostname: @
Type: MX
Priority: 10
Destination MX: mail.hackmail.net.
Notice the full stop (period) after the Destination. MX records must be terminated with a period or they will not work.
That’s it. Your DNS is configured.
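You can check that the records have propagated with any DNS lookup tool, for example dig:

dig MX hackmail.net +short
dig A mail.hackmail.net +short

The first command should return 10 mail.hackmail.net. and the second should return your router’s public IP address.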
Port Forwarding
You only need to do this if the server, computer, laptop that you will be running the server on, is behind a network router. How you configure this will be different for every router brand. At a high level, you need to tell the router to forward all traffic arriving on port 25 to the internal IP address of the computer running the SMTP server.
Forward all traffic arriving on port: 25 TO IP Address: <Your internal IP Address>
port: 25
Firewall configuration
If you have a firewall set up on your network or computer, you will need to make sure that traffic on port 25 is allowed through. Consult the docs for the firewall you’re using.
Writing the code
Create a new folder called node_smtp_server
In the new folder, run the following command
npm init -y
Create a new file called app.js
touch app.js
Before I add any code, I am going to install two npm modules. Run the following command to install them.
npm install smtp-server mailparser --save
Now open app.js in your editor of choice and add the following code.
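Something along these lines will do (the IP address on the last line is a placeholder — replace it with the internal IP of the machine running the server):

const { SMTPServer } = require("smtp-server");
const { simpleParser } = require("mailparser");

const server = new SMTPServer({
  // stream is the raw incoming message; call callback() once you are done with it
  onData(stream, session, callback) {
    simpleParser(stream)
      .then((parsed) => {
        console.log(parsed); // the email as an easy-to-use JSON object
      })
      .catch((err) => console.error(err))
      .finally(() => callback());
  },
  // Disable authentication, otherwise incoming mail is bounced with a 530 error
  disabledCommands: ["AUTH"],
});

server.listen(25, "192.168.0.10"); // the optional IP address to listen on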
On line 1, I import smtp-server this is the core of the application.
On line 2, I have imported mailparser , this will convert the received emails that are a simple string into a JSON object. This makes it easy to get the data I need or save them to a database like MongoDB.
Line 4 creates a new SMTP Server. It takes an object containing configuration as the constructor. In this example, I am passing in two options. I will cover the second option first.
disabledCommands is an array of commands you want to disable. In this case I am disabling AUTH. If you don’t do this, any emails you send to this server will be bounced with an error: 530 Error: authentication Required
onData is a method that can take 3 parameters: stream, session, and callback. In this case, I only require the stream and callback. The session is used for email clients, when a user is logging into the server with a client.
stream is a readable stream for the incoming email.
callback is run when the stream is ended. If you return an error object, it will reject the email.
Line 18 puts the server online. It only requires one parameter the port number In this case, I am also passing the optional IP address to listen on.
You can read more about the smtp-server module and it’s many options here https://nodemailer.com/extras/smtp-server/
Testing it out
I am going to send an email from my Gmail Account to [email protected] . When the message is received a few seconds later, the following is outputted to the terminal.
As you can see it’s an easy to use JSON object, thanks to the mailparser module.
Conclusion
Writing a mail server in NodeJS is really simple and can provide some really powerful features for an application.
Let me know in the comments, your ideas for using an SMTP server in an application.
As long as you can edit your domain’s DNS records, you can be up and running a few minutes.
If you liked this article, please show your appreciation and leave me a message. If you didn’t like it, I still appreciate the feedback, it keeps me on my toes 🙂 | https://medium.com/the-innovation/create-an-smtp-server-with-nodejs-5688d8fd882e | ['Simon Carr'] | 2020-08-22 18:25:31.265000+00:00 | ['Smtp', 'Programming', 'Nodejs', 'JavaScript', 'Startup'] |
The Best Way To Learn A New Skill Fast | The Best Way To Learn A New Skill Fast
Why this counter-intuitive process is one of the fastest ways to learn.
Photo by NESA by Makers on Unsplash
When the pandemic closed offices earlier this year, I was one of the millions who suddenly had to learn how to use video conferencing. I had only ever used video chats a couple of times in my entire life, so I was way behind the curve.
When I first opened Zoom, I had no idea how to connect to a room or turn on my camera. I wasn’t even aware of the more complex tools like annotations and breakout rooms.
I might have been able to fake it if I was just sitting in on meetings, but I was going to have to run them. I needed to master Zoom fast. So, I turned to an unusual tool that’s helped me through these situations before: teaching.
How Teaching Can Help Us Learn
One of the fastest ways that I’ve ever learned to master a new skill is to start teaching it as soon as possible. I’m using the term “teaching” broadly here, to mean any method of instructing others: from conducting a lecture to writing an essay explaining what you learned.
It may sound counter-intuitive at first; after all, how can we teach what we haven’t already learned? The key is to turn teaching and learning into an iterative process.
With Zoom, I began by watching the training videos in the Getting Started guide and following along with the software. As soon as I was done, I wrote down what I had learned in the format of an instructive article. My goal was to write it clearly enough that a total beginner could catch up with everything I had learned so far.
During the writing process, I discovered some steps that I didn’t completely remember. This provided the clue that I needed to go back and refresh my memory.
By experimenting with the software, I also discovered buttons in Zoom that I hadn’t learned yet. This told me that I needed to find additional articles, explaining how those tools worked.
I went back and forth, writing my article and learning the software for myself. In addition to reading guides and watching videos, I spent plenty of time just playing with the software and running demo meetings.
By the time I had to run my first real video conference, I felt confident that I knew Zoom more than well enough to get by. I actually continued to “teach” Zoom by writing more after I got an even better feel for the software.
Through this process, I had essentially mastered Zoom, and I had written about two thousand words explaining what I knew. As a bonus, I even ended up making a couple of hundred bucks through the experience, by editing down the material into a few articles and selling them to a blog.
The Benefits of Teaching
There are several reasons that teaching is such a powerful tool for learning:
You will quickly discover the limits of your knowledge as you run out of material to teach, or are asked questions to which you don’t know the answer.
You can learn an immense amount from your students, particularly if you’re teaching a live class.
Forcing yourself to explain concepts in simple terms will help boost your own comprehension of the material.
New concepts will become more ingrained in your memory through repetition: you’ll be exposed to them once as you learn them, once as you prepare a lesson, and once as you deliver it. Editing and revisions can add even more exposure.
Where to Teach
For many concepts that I’ve learned, I’ve had the benefit of being able to teach in live classrooms, but this option isn’t always available (particularly during a pandemic).
Fortunately, thanks to the internet, there are limitless opportunities for “teaching.” You can write an article, record a YouTube video, or even create an entire online course.
You don’t need to be an expert to start teaching. Some of the best teachers have the least experience, while we all know the stereotype of the professor who leads their field but has terrible lectures. If you’re committed to learning and careful preparing your material, you can help others study while you learn for yourself. | https://benyaclark.medium.com/the-best-way-to-learn-a-new-skill-fast-357d80927b0f | ['Benya Clark'] | 2020-10-30 06:31:18.447000+00:00 | ['Self Improvement', 'Teaching', 'Learning', 'Productivity', 'Advice'] |
What can Byebug do for your debugging? | Byebug is an all-in-one debugging utility for Ruby. It lets you:
1. Stop execution anywhere in any piece of code to look around and see what’s going on
2. View a complete backtrace of every bit of code leading up to where you are (including any framework code)
3. Navigate around, step into, and continue through any additional code calls.
The benefits are immediate and immense. Without the advantage of a massively comprehensive test suite, you’ve probably, at least once, said “What the heck is going on?” when you observe how a piece of code behaves. Byebug helps to solve that mystery.
There are several ways to use byebug, but we’ll demonstrate the simplest and most common case: Dropping a byebug somewhere within your lines of code to see what’s going on.
Let’s say we’re testing a new factorial function, with an example test case of fac(4) printing out 12. Something is obviously amiss. Let’s look at what code we’ve set up to do this (hopefully you’ll be able to spot the bug here pretty quickly):
def fac(num)
if num <= 2
1
else
num * fac(num - 1)
end
end
puts(fac(4))
Instead of relying on our ability to manually read this out and come to a conclusion about what the bug is, let’s plop a byebug at the top of this function and see what we get at each step…
require "byebug"

def fac(num)
byebug
if num <= 2
1
else
num * fac(num - 1)
end
end
puts(fac(4))
Once we get the program running, we’ll see what’s going on under the hood. First by checking what num is (to make sure that we aren’t seeing any funky values) and then checking the backtrace of where the breakpoint is:
matthewk@matthewk-bonanza ~ $ ruby fac_example.rb
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
4: byebug
=> 5: if num <= 2
6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
(byebug) num
4
(byebug) backtrace
--> #0 Object.fac(num#Integer) at /home/matthewk/fac_example.rb:5
#1 <main> at /home/matthewk/fac_example.rb:12
Nothing sticks out so far. We’ll hit next to navigate further in the code, and then eventually step to see what’s going on when we step into that recursive fac call…
[3, 12] in /home/matthewk/fac_example.rb
3: def fac(num)
4: byebug
5: if num <= 2
6: 1
7: else
=> 8: num * fac(num - 1)
9: end
10: end
11:
12: puts(fac(4))
(byebug) step
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
=> 4: byebug
5: if num <= 2
6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
(byebug) num
3
Still looking okay, and the number goes down by one, as we’d expect… so let’s keep going.
(byebug) next
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
4: byebug
=> 5: if num <= 2
6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
(byebug) next
[3, 12] in /home/matthewk/fac_example.rb
3: def fac(num)
4: byebug
5: if num <= 2
6: 1
7: else
=> 8: num * fac(num - 1)
9: end
10: end
11:
12: puts(fac(4))
(byebug) step
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
=> 4: byebug
5: if num <= 2
6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
It’s still looking good. Let’s just verify that we get the number (2) that we expect, and that the backtrace looks OK.
(byebug) num
2
(byebug) backtrace
--> #0 Object.fac(num#Integer) at /home/matthewk/fac_example.rb:4
#1 Object.fac(num#Integer) at /home/matthewk/fac_example.rb:8
#2 Object.fac(num#Integer) at /home/matthewk/fac_example.rb:8
#3 <main> at /home/matthewk/fac_example.rb:12
Looks like the number is correct and we have all of our recursive calls. Let’s continue:
(byebug) next
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
4: byebug
=> 5: if num <= 2
6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
(byebug) next
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
4: byebug
5: if num <= 2
=> 6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
Wait a minute.
At this point, you should notice that heading into this branch (returning 1 when the number is 2 ) seems faulty. When we peer into that if statement, we observe that the number 2 isn’t the correct number to place there — it’s 1. So we replace it with 1, rerun the program, and all is well.
One neat thing that byebug offers us is the fact that hitting the return key repeats the last command entered. For example, we could be using our next command repeatedly to see what our (now fixed) result gets used as. Say that we update our byebugged code to be this:
require "byebug"

def fac(num)
byebug
if num <= 1
1
else
num * fac(num - 1)
end
end
fac_4 = fac(4)

if fac_4 == 24
puts("It worked!")
else
puts("It's broken!")
end
We can then step through the call and observe which `puts` line it gets to via this (note the blank lines, then the `continue` at the end, which moves to the end of the execution):
matthewk@matthewk-bonanza ~ $ ruby fac_example.rb
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
4: byebug
=> 5: if num <= 1
6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
(byebug) next
[3, 12] in /home/matthewk/fac_example.rb
3: def fac(num)
4: byebug
5: if num <= 1
6: 1
7: else
=> 8: num * fac(num - 1)
9: end
10: end
11:
12: fac_4 = fac(4)
(byebug)
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
4: byebug
=> 5: if num <= 1
6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
(byebug)
[3, 12] in /home/matthewk/fac_example.rb
3: def fac(num)
4: byebug
5: if num <= 1
6: 1
7: else
=> 8: num * fac(num - 1)
9: end
10: end
11:
12: fac_4 = fac(4)
(byebug)
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
4: byebug
=> 5: if num <= 1
6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
(byebug)
[3, 12] in /home/matthewk/fac_example.rb
3: def fac(num)
4: byebug
5: if num <= 1
6: 1
7: else
=> 8: num * fac(num - 1)
9: end
10: end
11:
12: fac_4 = fac(4)
(byebug)
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
4: byebug
=> 5: if num <= 1
6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
(byebug)
[1, 10] in /home/matthewk/fac_example.rb
1: require "byebug"
2:
3: def fac(num)
4: byebug
5: if num <= 1
=> 6: 1
7: else
8: num * fac(num - 1)
9: end
10: end
(byebug)
[9, 18] in /home/matthewk/fac_example.rb
9: end
10: end
11:
12: fac_4 = fac(4)
13:
=> 14: if fac_4 == 24
15: puts("It worked!")
16: else
17: puts("It's broken!")
18: end
(byebug)
[9, 18] in /home/matthewk/fac_example.rb
9: end
10: end
11:
12: fac_4 = fac(4)
13:
14: if fac_4 == 24
=> 15: puts("It worked!")
16: else
17: puts("It's broken!")
18: end
(byebug) continue
It worked!
We can also modify variables as we execute our program. In this example, we print out a set of factorials with our now-working program:
require "byebug"

def fac(num)
if num <= 1
1
else
num * fac(num - 1)
end
end
numbers = [ 3, 4, 8, 11 ]

byebug

puts( numbers.map { |number| fac(number) } )
Let’s run it and then change numbers to a simpler array, then see what we come up with:
matthewk@matthewk-bonanza ~ $ ruby fac_example.rb
[6, 15] in /home/matthewk/fac_example.rb
6: else
7: num * fac(num - 1)
8: end
9: end
10:
11: numbers = [ 3, 4, 8, 11 ]
12:
13: byebug
14:
=> 15: puts( numbers.map { |number| fac(number) } )
(byebug) numbers = [ 1, 2, 3, 4 ]
[1, 2, 3, 4]
(byebug) continue
1
2
6
24
In fact, you can run literally any command you’d use at any point of your script. This makes it quite possible to observe what would happen if you added different variations of a piece of code after a byebug breakpoint.
This is a great way to get you going with byebug and use it in a way that helps you find and squash bugs quickly in your application. Interested in going into more detail? See the official docs. If you’re interested in a comparable inline debugger, you can check out pry. Note, however, that if you’re developing for Rails, byebug is considered the default, well-supported debugger so you’ll probably get a lot more support for debugging within byebug than pry in that setting.
If you’re interested in getting even more out of your code, at GitClear, we make the process of understanding changes in the code and measuring productivity incredibly easy. One click gets you a full analysis of each change in your repository instantly available. We support 6 languages (including Ruby) with more on the way. | https://medium.com/static-object/what-can-byebug-do-for-your-debugging-5d1ab25e0c01 | ['Matthew Kloster'] | 2019-06-17 17:52:50.209000+00:00 | ['Programming', 'Productivity', 'Ruby', 'Rails', 'Debugging'] |
VC Corner Q&A: Madison McIlwain of Defy VC | Madison McIlwain is an Associate at Defy, where she works alongside her team to source, invest in, and help amazing companies grow. She’s passionate about retail innovation, supply chain, and consumer technology.
Formerly a product manager at Gap Inc, Madison managed a team of over forty engineers to modernize Gap’s order management system and customer communication channels. There, she drove numerous technology initiatives, such as enabling SMS communication for customers and launching the first website-wide chatbot. Before Gap, Madison worked at Rent the Runway and an AI start-up working to create a shoppable virtual closet.
Read on to learn more about Defy’s mission and the most important question Madison asks herself before committing to an investment!
— What is Defy’s mission?
Being an entrepreneur means questioning everything. It means pushing back on all the smart, well-meaning people who tell you you’re wrong. At Defy, our mission is to help entrepreneurs Defy convention and expectation. We’re an early stage venture firm focused across consumer, enterprise/saas, and deep tech. We love people who are positioned to uniquely disrupt the industry they’ve grown up in. We hope to back and empower the next generation of startup leaders who defy all odds and build impactful, enduring companies.
— What was your very first investment? And what struck you about them?
The first investment I sourced for Defy was Thrilling. Thrilling is bringing vintage retailers online and enabling consumers to shop vintage from the comfort of their home — all while enabling more sustainable shopping and the circular economy. Honestly, I was drawn to the founder, Shilla, and her magnetic energy and passion for the problem. Shilla herself is an avid vintage shopper who wanted a better experience finding vintage treasures online. She’s built a marketplace that supports small businesses and reduces waste on our planet by leveraging technology to digitize thousands of single SKU items. From my time at Gap and Rent the Runway, I knew SKU management for resale was very challenging and believe Shilla will be the one to turn these challenges into scalable solutions.
— What is one thing you’re excited about right now?
I am really excited about the circular economy and how technology is enabling a more sustainable supply chain. I explored this in detail recently here. When I was at Gap, return rates were better than the average but still sad. What most customers don’t realize about returns is that they are unprofitable and unsustainable for retailers in a myriad of ways. Retailers lose on shipping items back and forth. They also lose on the restocking labor. Worst of all, retailers usually have to mark down inventory once it’s returned to them because it’s often no longer in season. Returns are a side effect of a burgeoning ecommerce ecosystem. With innovation in reverse return logistics and end of clothing life management, we have an opportunity to disrupt the returns status quo.
— Who is one founder we should watch?
Kimberly Shenk! I want to be Kimberly when I grow up! Not only is she a kick butt founder as CEO of Novi Connect, but she is also a thoughtful, compassionate and kind person. With Novi, she is powering ingredient and supply chain transparency. Consumers are increasingly demanding transparency around what’s going into all of the products they touch, eat, wear, etc and the many companies that make/sell all of these products are struggling to deliver. Novi’s software solves this problem through a SAAS-enabled network. I’ve learned a lot from her by the way she breaks down big problems into small manageable pieces and works her way back to a solution.
— What are the 3 top qualities of every great leader?
Tenacity
Humility
Kindness
— What is one question you ask yourself before investing in a company?
The question I always ask myself is “would I invest my own savings into this business?” If the answer is no, it’s a signal to me I don’t have enough conviction on the product, market, or team.
— What is one thing every founder should ask themselves before walking into a meeting with a potential investor?
What is one key objective I’m hoping to get out of this meeting? It might be funding. But more often than not a first meeting is a stepping stone to establishing a relationship with that investor and firm. Capital may come, but this person might be helpful in other avenues as well; customer introductions, hiring, or connections to a more suited firm.
— What do you think should be in a CEO’s top 3 company priorities?
Building product and culture
Hiring great leaders
Retention both of customers and employees
— Favorite business book, blog or podcast?
Is it cheating if I say my podcast? The Room is a podcast with your favorite founders and funders where we discuss what it was like to be in The Room where it happens. Our target audience is first-time founders and young funders. My co-host, Claudia Laurie, and I are both curious digital natives navigating our careers in the Valley asking the same questions as our listeners. We felt there was an opportunity to bring to life the conversations and the creation stories which historically happen behind doors closed to groups across age, gender and race. If you like How I Built This, our podcast is for you!
— Who is one leader you admire?
Sally Gilligan. Sally is the CIO of Gap Inc. Sally gave me my first job as a product manager in Gap’s supply chain. At our company all-hands and during our one on ones, she taught me both how to command a room and make an individual feel worth listening too. She continues to lead Gap Inc. through a compelling digital transformation with her keen insights for where the future of retail aided by technology is heading.
— What is one interesting thing most people won’t know about you?
Most people wonder how I have so much “energy”! I think it baffles people because it’s pretty relentless and honestly sometimes I exhaust myself. I think my energy comes from being an extreme extrovert. I genuinely derive the most energy when I’m around others. Thankfully, in venture, my job is to talk to people which consistently fuels me, hence the energy.
— What is one piece of advice you’d give every founder?
Lean into curiosity and stay determined to build a better experience for your customers. | https://medium.com/startup-grind/vc-corner-q-a-madison-mcilwain-of-defy-vc-5cc04d605585 | ['The Startup Grind Team'] | 2020-11-19 10:00:54.438000+00:00 | ['Vc Corner', 'VC', 'Investing', 'Startup', 'Startup Lessons'] |
Flight Data Analysis Using Spark GraphX | Spark GraphX Tutorial — Edureka
GraphX is Apache Spark’s API for graphs and graph-parallel computation. GraphX unifies ETL (Extract, Transform & Load) process, exploratory analysis and iterative graph computation within a single system. The usage of graphs can be seen in Facebook’s friends, LinkedIn’s connections, internet’s routers, relationships between galaxies and stars in astrophysics and Google’s Maps. Even though the concept of graph computation seems to be very simple, the applications of graphs are literally limitless with use cases in disaster detection, banking, the stock market, banking, and geographical systems just to name a few. Through this blog, we will learn the concepts of Spark GraphX, its features, and components through examples and go through a complete use case of Flight Data Analytics using GraphX.
We will be covering the following topics in this Spark GraphX blog:
1. What are Graphs?
2. Use Cases of Graph Computation
3. What is Spark GraphX?
4. Spark GraphX Features
5. Understanding GraphX with Examples
6. Use Case — Flight Data Analysis using GraphX
What are Graphs?
A Graph is a mathematical structure amounting to a set of objects in which some pairs of the objects are related in some sense. These relations can be represented using edges and vertices forming a graph. The vertices represent the objects and the edges show the various relationships between those objects.
In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from mathematics, specifically the field of graph theory. A graph data structure may also associate to each edge some edge value, such as a symbolic label or a numeric attribute (cost, capacity, length, etc.).
Use Cases of Graph Computation
The following use cases give a perspective into graph computation and further scope to implement other solutions using graphs.
Disaster Detection System
Graphs can be used to detect disasters such as hurricanes, earthquakes, tsunami, forest fires, and volcanoes so as to provide warnings to alert people.
Page Rank
Page Rank can be used in finding the influencers in any network such as the paper-citation network or social media network.
Financial Fraud Detection
Graph analysis can be used to monitor the financial transactions and detect people involved in financial fraud and money laundering.
Business Analysis
Graphs, when used along with Machine Learning, helps in understanding the customer purchase trends. E.g. Uber, McDonald’s, etc.
Geographic Information Systems
Graphs are intensively used to develop functionalities on geographic information systems like watershed delineation and weather prediction.
Google Pregel
Pregel is Google’s scalable and fault-tolerant platform with an API that is sufficiently flexible to express arbitrary graph algorithms.
What is Spark GraphX?
GraphX is the Spark API for graphs and graph-parallel computation. It includes a growing collection of graph algorithms and builders to simplify graph analytics tasks.
GraphX extends the Spark RDD with a Resilient Distributed Property Graph. The property graph is a directed multigraph which can have multiple edges in parallel. Every edge and vertex has user-defined properties associated with it. The parallel edges allow multiple relationships between the same vertices.
Spark GraphX Features
The following are the features of Spark GraphX:
Flexibility:
Spark GraphX works with both graphs and computations. GraphX unifies ETL (Extract, Transform & Load), exploratory analysis, and iterative graph computation within a single system. We can view the same data as both graphs and collections, transform and join graphs with RDDs efficiently and write custom iterative graph algorithms using the Pregel API.
Speed:
Spark GraphX provides comparable performance to the fastest specialized graph processing systems. It is comparable with the fastest graph systems while retaining Spark’s flexibility, fault tolerance, and ease of use.
Growing Algorithm Library:
We can choose from a growing library of graph algorithms that Spark GraphX has to offer. Some of the popular algorithms are page rank, connected components, label propagation, SVD++, strongly connected components and triangle count.
Understanding GraphX with Examples
We will now understand the concepts of Spark GraphX using an example. Let us consider a simple graph as shown in the image below.
Looking at the graph, we can extract information about the people (vertices) and the relations between them (edges). The graph here represents the Twitter users and whom they follow on Twitter. For e.g., Bob follows Davide and Alice on Twitter.
Let us implement the same using Apache Spark. First, we will import the necessary classes for GraphX.
//Importing the necessary classes
import org.apache.spark._
import org.apache.spark.rdd.RDD
import org.apache.spark.util.IntParam
import org.apache.spark.graphx._
import org.apache.spark.graphx.util.GraphGenerators
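Next, we define the vertices and edges of the Twitter graph above. The exact ages of Alice and Bob and the edge attributes below are assumptions — any values consistent with the outputs that follow will do:

val vertexArray = Array(
  (1L, ("Alice", 28)),
  (2L, ("Bob", 27)),
  (3L, ("Charlie", 65)),
  (4L, ("David", 42)),
  (5L, ("Ed", 55)),
  (6L, ("Fran", 50))
)

val edgeArray = Array(
  Edge(2L, 1L, 7),
  Edge(2L, 4L, 2),
  Edge(3L, 2L, 4),
  Edge(3L, 6L, 3),
  Edge(4L, 1L, 1),
  Edge(5L, 2L, 2),
  Edge(5L, 3L, 8),
  Edge(5L, 6L, 3)
)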
Displaying Vertices: Further, we will now display all the names and ages of the users (vertices).
val vertexRDD: RDD[(Long, (String, Int))] = sc.parallelize(vertexArray)
val edgeRDD: RDD[Edge[Int]] = sc.parallelize(edgeArray)
val graph: Graph[(String, Int), Int] = Graph(vertexRDD, edgeRDD)
graph.vertices.filter { case (id, (name, age)) => age > 30 }
.collect.foreach { case (id, (name, age)) => println(s"$name is $age")}
The output for the above code is as below:
David is 42
Fran is 50
Ed is 55
Charlie is 65
Displaying Edges: Let us look at which person likes whom on Twitter.
for (triplet <- graph.triplets.collect)
{
println(s"${triplet.srcAttr._1} likes ${triplet.dstAttr._1}")
}
The output for the above code is as below:
Bob likes Alice
Bob likes David
Charlie likes Bob
Charlie likes Fran
David likes Alice
Ed likes Bob
Ed likes Charlie
Ed likes Fran
Now that we have understood the basics of GraphX, let us dive a bit deeper and perform some advanced computations on the same.
Number of followers: Every user in our graph has a different number of followers. Let us look at all the followers for every user.
// Defining a class to more clearly model the user property
case class User(name: String, age: Int, inDeg: Int, outDeg: Int)
// Creating a user Graph
val initialUserGraph: Graph[User, Int] = graph.mapVertices{ case (id, (name, age)) => User(name, age, 0, 0) }
// Filling in the degree information
val userGraph = initialUserGraph.outerJoinVertices(initialUserGraph.inDegrees) {
case (id, u, inDegOpt) => User(u.name, u.age, inDegOpt.getOrElse(0), u.outDeg)
}.outerJoinVertices(initialUserGraph.outDegrees) {
case (id, u, outDegOpt) => User(u.name, u.age, u.inDeg, outDegOpt.getOrElse(0))
}
for ((id, property) <- userGraph.vertices.collect) {
println(s"User $id is called ${property.name} and is liked by ${property.inDeg} people.")
}
The output for the above code is as below:
User 1 is called Alice and is liked by 2 people.
User 2 is called Bob and is liked by 2 people.
User 3 is called Charlie and is liked by 1 people.
User 4 is called David and is liked by 1 people.
User 5 is called Ed and is liked by 0 people.
User 6 is called Fran and is liked by 2 people.
Oldest Followers: We can also sort the followers by their characteristics. Let us find the oldest followers of each user by age.
// Finding the oldest follower for each user
val oldestFollower: VertexRDD[(String, Int)] = userGraph.mapReduceTriplets[(String, Int)](
// For each edge send a message to the destination vertex with the attribute of the source vertex
edge => Iterator((edge.dstId, (edge.srcAttr.name, edge.srcAttr.age))),
// To combine messages take the message for the older follower
(a, b) => if (a._2 > b._2) a else b
)
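To print the result, we can join it back onto the user graph (a sketch of one way to do it):

userGraph.vertices.leftJoin(oldestFollower) { (id, user, optOldestFollower) =>
  optOldestFollower match {
    case None => s"${user.name} does not have any followers."
    case Some((name, age)) => s"${name} is the oldest follower of ${user.name}."
  }
}.collect.foreach { case (id, str) => println(str) }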
The output for the above code is as below:
David is the oldest follower of Alice.
Charlie is the oldest follower of Bob.
Ed is the oldest follower of Charlie.
Bob is the oldest follower of David.
Ed does not have any followers.
Charlie is the oldest follower of Fran.
Use Case: Flight Data Analysis using Spark GraphX
Now that we have understood the core concepts of Spark GraphX, let us solve a real-life problem using GraphX. This will help give us the confidence to work on any Spark projects in the future.
Problem Statement:
To analyze Real-Time Flight data using Spark GraphX, provide near real-time computation results, and visualize the results using Google Data Studio.
Use Case — Computations to be done:
1. Compute the total number of flight routes
2. Compute and sort the longest flight routes
3. Display the airport with the highest degree vertex
4. List the most important airports according to PageRank
5. List the routes with the lowest flight costs
We will use Spark GraphX for the above computations and visualize the results using Google Data Studio.
Use Case — Dataset:
Use Case — Flow Diagram:
The following illustration clearly explains all the steps involved in our Flight Data Analysis.
Use Case — Spark Implementation:
Moving ahead, now let us implement our project using Eclipse IDE for Spark.
Find the Pseudo Code below:
//Importing the necessary classes
import org.apache.spark._
...
import java.io.File
object airport {
def main(args: Array[String]){
//Creating a Case Class Flight
case class Flight(dofM:String, dofW:String, ... ,dist:Int)
//Defining a Parse String function to parse input into Flight class
def parseFlight(str: String): Flight = {
val line = str.split(",")
Flight(line(0), line(1), ... , line(16).toInt)
}
val conf = new SparkConf().setAppName("airport").setMaster("local[2]")
val sc = new SparkContext(conf)
//Load the data into a RDD
val textRDD = sc.textFile("/home/edureka/usecases/airport/airportdataset.csv")
//Parse the RDD of CSV lines into an RDD of flight classes
val flightsRDD = Map ParseFlight to Text RDD
//Create airports RDD with ID and Name
val airports = Map Flight OriginID and Origin
airports.take(1)
//Defining a default vertex called nowhere and mapping Airport ID for printlns
val nowhere = "nowhere"
val airportMap = Use Map Function .collect.toList.toMap
//Create routes RDD with sourceID, destinationID and distance
val routes = flightsRDD. Use Map Function .distinct
routes.take(2)
//Create edges RDD with sourceID, destinationID and distance
val edges = routes.map{( Map OriginID and DestinationID ) => Edge(org_id.toLong, dest_id.toLong, distance)}
edges.take(1)
//Define the graph and display some vertices and edges
val graph = Graph( Airports, Edges and Nowhere )
graph.vertices.take(2)
graph.edges.take(2)
//Query 1 - Find the total number of airports
val numairports = Vertices Number
//Query 2 - Calculate the total number of routes?
val numroutes = Number Of Edges
//Query 3 - Calculate those routes with distances more than 1000 miles
graph.edges.filter { Get the edge distance )=> distance > 1000}.take(3)
//Similarly write Scala code for the below queries
//Query 4 - Sort and print the longest routes
//Query 5 - Display highest degree vertices for incoming and outgoing flights of airports
//Query 6 - Get the airport name with IDs 10397 and 12478
//Query 7 - Find the airport with the highest incoming flights
//Query 8 - Find the airport with the highest outgoing flights
//Query 9 - Find the most important airports according to PageRank
//Query 10 - Sort the airports by ranking
//Query 11 - Display the most important airports
//Query 12 - Find the Routes with the lowest flight costs
//Query 13 - Find airports and their lowest flight costs
//Query 14 - Display airport codes along with sorted lowest flight costs
Use Case — Visualizing Results:
We will be using Google Data Studio to visualize our analysis. Google Data Studio is a product under Google Analytics 360 Suite. We will use Geo Map service to map the Airports on their respective locations on the USA map and display the metrics quantity.
Display the total number of flights per Airport
Display the metric sum of Destination routes from every Airport
Display the total delay of all flights per Airport
Now, this concludes the Spark GraphX blog. I hope you enjoyed reading it and found it informative.
So this is it! I hope this blog was informative and added value to your knowledge. If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, DevOps, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of Spark. | https://medium.com/edureka/spark-graphx-f9bd805ac429 | ['Shubham Sinha'] | 2020-09-10 10:22:57.787000+00:00 | ['Apache Spark', 'Data Science', 'Big Data', 'Graphx', 'Spark'] |
10 Behaviors to Make Your Team Great | 10 Behaviors to Make Your Team Great
Is your team greater than the sum of its parts?
Photo by Michael Ankes on Unsplash
My team was in a bit of a rut. There was no trust and poor communication. People weren’t collaborating, and there was no transparency into anyone’s days. Updates in our daily standup meetings were vague and non-committal. Morale was low. Things just weren’t getting done.
I knew it wasn’t a people problem. I’d been with most of the team for years, and everybody was smart and talented. No — this was definitely a behaviors problem.
But, while I could feel the problems, I didn’t know how to articulate them. Before I could address the issues, I needed a better understanding of what they were, and I needed to establish a vocabulary with the team to facilitate a discussion. Only then, with awareness and buy-in, could we begin to implement change to improve our effectiveness.
Good Behaviors, Bad Behaviors
I was discussing the team’s underperformance and collaboration problems with a colleague, and they joking-not-jokingly proposed doing a Five Dysfunctions of a Team exercise.
It had been a while since the five dysfunctions had been front of mind, and I had to look them up for a refresher. “Let’s see what we’ve got here,” I thought as I clicked through some search results.
Absence of trust — check. Fear of conflict — yup. Lack of commitment — oh yea. Avoidance of accountability — definitely. Inattention to results — mhmm.
Wow. We had ’em all. People on the team didn’t trust each other to complete assignments. Rather than confront the lack of trust, they preferred to work alone on whatever they felt was most important. Updates in standups would be, “I worked on some things and will figure out what’s next,” and people would leave for a coffee and disappear for the rest of the afternoon. Meanwhile, user stories would drag on for days and weeks with no sense of urgency. Yikes!
The five dysfunctions also reminded me of Project Aristotle. This Google Research project attempts to answer the question, “What makes a team great?” One of their key findings was that effectiveness depended more on how the team worked together than who was on the team. In other words, team dynamics and behaviors matter more than people and individual performance.
…what really mattered was less about who is on the team, and more about how the team worked together
Google’s “five effectiveness pillars” go with the five dysfunctions like peanut butter goes with jelly, combining to create a gooey smattering of team efficiency — and they gave me exactly what I was missing most: a vocabulary for talking about the areas we needed to improve and ways to communicate the importance & impact.
The Actions in Action
Photo by Trym Nilsen on Unsplash
I had the concepts. Now I needed to deliver the message. I decided to put together two hypothetical situations based on our very real problems to illustrate the impact of these behavioral patterns & anti-patterns.
Example One. In standup, a dev says, “I’m going to work on implementing the Thingamabob. I’m going to try to complete tasks A, B, & C today, then we can test and close it out tomorrow.” In the afternoon, they say, “Something came up and I need to leave for a few hours, but I’ll be back to finish up. I completed task A and am almost done with B.” They come back later when everyone else is offline, complete task B, and leave a note before signing off: “Completing task B took longer than expected, but I got it done. I wasn’t able to get to task C. I’ll pick it up first thing in the morning.”
Example Two. In standup, a dev says, “Not sure what I’m doing today. I might start working on implementing the Thingamabob.” They start working on the story to implement the Thingamabob and complete task A plus part of task B. They need to leave for a few hours, but they don’t say anything. They come back later when everyone else is offline and complete task B.
In both examples, the person might’ve been equally productive, written brilliant code, and completed the same tasks. In both cases, the person had to leave for several hours, and in both cases they didn’t complete the (stated or unstated) goal of finishing task C.
However, the first example demonstrates all of Google’s dynamics of great teams.
Psychological safety: The dev wasn't afraid to share status or go away because of other responsibilities; they felt safe to let the team know they didn't complete their stated goal.
Dependability: The developer made commitments in standup and was transparent about progress and effort.
Structure and clarity: They communicated status so the team had awareness, which allows the team to adjust its actions and priorities. (For example, this could allow someone else to jump in on completing task B while the developer was away, and upon returning they could complete task C versus only completing task B.)
Meaning: The developer appreciates having a job that allows them the flexibility to take care of other responsibilities during the day.
Impact: Ensuring progress and helping the team achieve its goals feels good.
Conversely, the second example exhibits symptoms of all five dysfunctions.
Absence of trust: Low visibility and poor communication lead the team to wonder what the developer is working on.
Fear of conflict: Sporadic availability makes it hard to collaborate; team members become exasperated and prefer to work alone.
Lack of commitment: The developer was non-committal in standup, and the team has no expectations or ability to coordinate.
Avoidance of accountability: No commitments and poor visibility & availability; the dev does nothing to demonstrate their effort.
Inattention to results: Individual behavior prevents the team from achieving its goals.
All this is to say that, in order to be an effective team, individuals must focus on their behavior and interactions with teammates more than just being productive themselves.
Staging the Intervention
Photo by Todd Quackenbush on Unsplash
Okay, I had my ideas to share, and I had my plan of how I wanted to roll my message out to the team — it was time to put the wheels into motion.
The first thing I did was to send an email using the examples above. My messaging (paraphrasing) was, “Hey, team — I’ve been thinking that we haven’t been as productive lately as we’ve been in the past. I think we’re exhibiting some of the Five Dysfunctions of a Team, and we’ve lost some of Google’s pillars of effectiveness that we had previously. Consider these examples.” I also shared my analysis about how the examples were illustrative of the five dysfunctions and effectiveness pillars.
I didn’t really get feedback on the email, but there was a mention here & there in standups and retrospectives. I feel like the email did a fine job of planting the seed and helping to establish a vocabulary for the conversation. Mission accomplished there, I’d say.
Step two was to solicit feedback in one-on-ones. I’d ask people what they thought about the email and how they felt about the team in that context. These conversations were helpful because it confirmed my feelings and demonstrated that others were experiencing similar frustrations. This also helped to establish that we were all on the same page and had similar perceptions of our team strengths and weaknesses.
Finally, I decided to bring it up in the team’s sprint retrospective. I was blunt with them. I said, “I don’t think the team is doing enough to demonstrate commitment & accountability.” It took some courage, but I had to trust the team and not fear conflict — to practice what I was about to preach. The groundwork I’d laid proved valuable. People referenced the email I’d sent, and we’d all had miniature versions of the discussion in one-on-ones. It was a really productive conversation and a catalyst for positive change. | https://medium.com/the-innovation/10-behaviors-to-make-your-team-great-4969ca45774 | ['Adam Prescott'] | 2020-11-28 12:04:53.483000+00:00 | ['Growth', 'Leadership', 'Management', 'Personal Development', 'Productivity'] |
Data Governance in a Data Hungry World | Photo by Miguel Ángel Sanz on Unsplash
Europe continues to lead the UK and US on data regulations having voted last month to develop a new legal framework outlining “the ethical and legal obligations to be followed when developing, deploying and using artificial intelligence, robotics and related technologies… including software, algorithms and data”. The key guiding principles enshrined within a new regulatory framework include:
- Human-centric and human-made AI
- Safety
- Transparency and accountability
- Safeguards against bias and discrimination
- Right to redress
- Social and environmental responsibility and
- Respect for privacy and data
In addition to these principles, the EU Commission is also pushing for all high-risk AI technologies, such as “those with self-learning capabilities”, to be designed to allow for “human oversight at any time”. Legislation establishing a civil liability framework would make those operating high-risk AI strictly liable for any resulting damage. The EU Commission hopes that a “clear legal framework would stimulate innovation by providing businesses with legal certainty, whilst protecting citizens and promoting their trust in AI technologies”. The European Parliament’s work on AI is led by the Special Committee on Artificial Intelligence in a Digital Age, established in June of this year. Its mandate stresses its aims to develop a “holistic approach providing a common, long-term position that highlights the EU’s key values and objectives relating to artificial intelligence in the digital age”.
Earlier this year the Court of Justice of the European Union (CJEU) ruled that US tech companies could not move data from Europe to the US. Privacy Shield — a broad agreement and standard contractual clauses (SCCs) that are drawn up on an individual basis by each organisation — was ruled against and tech companies in the US must apply SCCs with data protection in mind and cannot store European data in the US if it can just as easily be retained and stored in the EU. What this means is that tech giants have lost the any legal basis for storing personal data in the US, where data protection laws are significantly less stringent than in Europe. For more information see here.
The EU are not alone in combatting data privacy breaches. The UK’s Information Commissioner’s Office has recently issued two significant fines to British Airways and Marriott Hotels. British Airways had initially faced a £183 million for breaching the privacy of over 400,000 customers personal data but was reduced to just £20 million in light of the impact of the Coronavirus pandemic on the airline industry. Marriott saw a data breach that may have affected up to 339 million of its guests and its initial fine was £100 million, reduced now to £18.4 million.
What is interesting about the Marriott case is that the company bought the data bases along with their takeover of Starwood, another hospitality company. Hackers had already infiltrated Starwood’s databases before Marriott acquired them. Marriott failed to check what it was purchasing, and their subsequent cyber-security improvements were too little too late — something the ICO pointed out in their review of the breach.
These are the first major fines issued by the ICO since GDPR first came into effect in May 2018.The fines demonstrate a substantial signpost in how the ICO seeks to ensure personal data is protected and managed properly by private companies — having previously pursued an ineffective policy of regulation and responsibility. The ICO, while lenient in their subsequent reductions of the initial fines, believe that the rulings will deter other companies from making the same mistakes.
While this maybe the case, other companies that have collected data from pubs and restaurants(as required by Coronavirus measures) have been selling it on to third parties, in breach of Government guidelines. Government documents state that information collected in relation to contact-tracing in pubs and restaurants should be kept by businesses for 21 days and must not be used for “any purpose other than for NHS test and trace”. This has occurred as companies contracted to provide systems such as QR code scanning when you enter a pub or restaurant have included in their privacy policy that the information they collect may be used for purposes other than NHS test-and-trace. As highlighted in last month’s policy blog, the Coronavirus pandemic has highlighted the need for Government to collect and use data better. This further demonstrates how valuable data is and the need for regulatory legislation to catch-up.
Similarly, the US is showing some signs of cracking down on tech giants — whether over personal data or antitrust issues. The U.S. Justice Department launched the most significant antitrust case to date against Google. The case argues that Alphabet Inc., Google’s parent company, is abusing its market power with its control of over 90% of the online search market in the US. Google is the “unchallenged gateway” to the internet and has perpetuated an environment of anticompetitive practices in their favour, locking our competition from rivals.US Attorney General William Barr argues that this monopoly will have a negative impact on future innovation. Texas Attorney General Ken Paxton is also preparing a complaint over Google’s conduct in the digital advertising market, where it controls technology used to buy and sell ads across the internet. Google has argued that the case against them is “deeply flawed” and told that investors that the case presents “limited risk”. For more information on the case, see here.
These events follow the Senate Commerce Committee issuing subpoenas to the heads of Twitter, Facebook, and Google to question them about their content policies. This may mark a strategic shift from US policy-makers, seeking to curb and manage tech giants through legal avenues and testimonies — much like unfolding trends in the EU and the UK. | https://medium.com/carre4/data-governance-in-a-data-hungry-world-cca9060de6ae | ['Lauren Toulson'] | 2020-11-10 22:21:06.767000+00:00 | ['Eu', 'AI', 'Bias', 'Data', 'Policy'] |
5 Answers to Kubernetes CKAD Practice Questions | Question 5.
The fifth question from Palmer’s practice exam [1] is as follows:
“All operations in this question should be performed in the ggckad-s5 namespace. Create a file called question-5.yaml that declares a deployment in the ggckad-s5 namespace, with six replicas running the nginx:1.7.9 image. Each pod should have the label app=revproxy . The deployment should have the label client=user . Configure the deployment so that when the deployment is updated, the existing pods are killed off before new pods are created to replace them.”
We need to check the labels for both the deployment and pods, and then we need to, as per [1] “configure the deployment so that when the deployment is updated, the existing pods are killed off before new pods are created to replace them.”
We have two deployment strategies available: RollingUpdate (the default), and Recreate [14]. Given the definitions for RollingUpdate and Recreate , in this case, we do not want to use the default and instead need to assign Recreate to the .spec.strategy.type .
Our deployment configuration can be seen here:
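The gist with the manifest isn't reproduced in this copy, but based on the requirements quoted above, question-5.yaml could look roughly like this (a sketch, not necessarily the author's exact file; the deployment name question-5 is taken from the rollout commands later in the walkthrough):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: question-5
  namespace: ggckad-s5
  labels:
    client: user
spec:
  replicas: 6
  strategy:
    type: Recreate        # existing pods are killed before new ones are created
  selector:
    matchLabels:
      app: revproxy
  template:
    metadata:
      labels:
        app: revproxy
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9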
We can apply the file the same as we did in the previous question:
k apply -f ./question-5.yaml
Finally, we need to verify our work.
Verifying our solution
We need to check the labels for both the deployment and pods, and we also need to ensure that the requirement is met that when the deployment is updated, existing pods are killed off before new pods are created to replace them.
We can use the following command to view the deployment labels:
k get deployments --show-labels --namespace ggckad-s5
And we can use the following command to view the various pod labels:
k get pods --show-labels --namespace ggckad-s5
Below we can see the output of applying the question-5.yaml file and then executing these two commands. The yellow arrows show that the labels have been set as per the specification:
Note that our deployment relies on the nginx:1.7.9 image. This image is old, so if we update this to nginx:latest and apply the question-5.yaml file again, then we see that new containers are deployed while the old containers are gradually terminated. The following image demonstrates this behavior:
Applying a deployment with a recreate strategy
Next, we repeat the same exercise, only here we check the rollout status using the following command [11]:
kubectl rollout status deployment.v1.apps/question-5 --namespace ggckad-s5
We can see from a different angle that the old replicas are being terminated while the new ones are starting.
Last, we can reproduce the same behavior by executing the following command:
kubectl rollout restart deployment question-5 --namespace ggckad-s5
In this case, we don’t even need to update the nginx version in the question-5.yaml file .
Let’s take a look at an update when the deployment strategy type is set to RollingUpdate . This is the default, and it is also the incorrect choice given the specification.
An incorrect solution
The same example but with a RollingUpdate deployment strategy would be incorrect. We include the output of this example below for comparison purposes.
Next, we repeat the same exercise, only here we check the rollout status using the following command [11]:
kubectl rollout status deployment.v1.apps/question-5 --namespace ggckad-s5
We can see from a different angle that the old replicas are being terminated while the new ones are starting.
The rollout status when updating Nginx from 1.7.9 to latest
And that’s it for this question, and for this article for that matter — on to the conclusion! | https://medium.com/better-programming/5-answers-to-kubernetes-ckad-practice-questions-3fa1c72a6b5d | ['Thomas P. Fuller'] | 2020-11-25 17:54:15.727000+00:00 | ['DevOps', 'Kubernetes', 'Programming', 'K8s', 'Containers'] |
4 important lessons learnt from ziplining | The wooden platform
Riding a zipline is optional, so is embracing life. I choose to do the latter.
This was on my bucket list for long. I finally mustered up the courage to zipline at the world’s longest zipline certified by Guinness world records located on top of the Jabal Jais mountains in the UAE.
I suited up and took the 30km drive towards the start point in a minivan, not knowing what to expect. Once there, I climbed up a wooden platform 1680 meters above sea level. I looked down into endlessness with my heart pounding a mile a minute.
I was then harnessed, learnt the safety rules, took my fear and my pounding heart and jumped off the mountains to sail through the air at a speed of 120 kph.
It was both terrifying and exhilarating. I loved it and loved it more and definitely would do it again! However, it took me an enormous amount of guts to do this for the first time and in the process I learnt some very valuable and important lessons.
To achieve or conquer a goal, you need to want it more than you fear it.
This activity was on my bucket list for long but I never came around to doing it until the fear of regret, the fear of listening to others talk about their amazing experience and not having a story of my own, the fear of not inspiring my kids to be the best version of themselves and most of all the fear of staying where I was, became too overwhelming to not take action. These fears were much more in magnitude than my ‘fear’ of doing the zipline and I wanted to do the zipline more than anything else.
Differentiate between excitement, anxiety and fear
Fear is an indispensable emotion that helps us keep ourselves safe and respond effectively to danger. However, when we push ourselves out of our comfort zones, what we experience more of is anxiety or apprehension of what is to come than actual fear. We imagine the various threats or dangers that we might encounter and create intense (often skewing towards the worse) stories in our minds.
On the other hand, very common evidences of fear as well as excitement are — increased heart rate, shallow breathing, dry mouth, butterflies in the stomach, sweating etc. Both emotions trigger the same ‘fight or flight’ mode in humans. These bodily sensations are enough to send us into a panic mode as they signal the brain of an impending danger while we are actually not in danger. There is very little physiological difference between excitement and fear.
During such situations, it is very hard to recognize and/or separate the feelings from each other and hence, it’s important to step back and feed our brain with some fact based reasoning.
As I stepped on to the platform to take off, I told myself — numerous people have ziplined here before me and all had shared great reviews, the establishment which I had chosen to do this activity with, had an excellent safety record, all precautions were very seriously being followed as I could see myself. If other people could do it, I could do it too and there is nothing to worry about. This calmed my nerves considerably. I now knew that I was excited and after the initial, very brief time, first few seconds of being in the air, my anxiety vanished completely, and I enjoyed the ride!
Show some self-love
It is common to push ourselves out of the comfort zone and do things. However, to truly enjoy the process it’s important to step back and really show some self-love rather than ignoring the overwhelming feelings and just powering forward.
I told myself that it was completely ok to back out if I did not feel like going through with this activity even if it was at the last minute. My body and mind now knew that there was an option and that it was ok no matter what the outcome was. I dealt with myself in a calm and caring manner. This clarified the mind and I was genuinely enjoyed the whole process of ziplining from the beginning to the end!.
Trust your instinct
I was at the mercy of other people and machines during this whole activity. It required me to place my trust on the team harnessing me, on the crew that suited me up, on the establishment to ensure that all safety aspects were taken care of and the equipment itself etc. But sometimes, we just have to trust our instincts.
The following quotes say it all!
‘You don’t always need a plan. Sometimes you just need to breathe, trust, let go, and see what happens’ — Mandy hale.
‘Sometimes you cannot believe what you see, you have to believe what you feel’ — Mitch Albom
This was a leap of faith that earned me an experience of a lifetime!
In conclusion
Life is a series of choices. We either choose to ‘play it safe’ or choose to explore the vastness of it. Holding on and being in the comfort zone causes more pain than letting go and the discomfort that letting go causes leads to some amazing experiences and opportunities.
‘Twenty years from now you will be more disappointed by the things you didn’t do than by the ones you did do. So, throw off the bowlines. Sail away from the safe harbor. Catch the trade winds in your sails. Explore. Dream. Discover’ — Mark Twain. | https://medium.com/the-kickstarter/4-important-lessons-learnt-from-ziplining-b7e0192eaea0 | ['Swati Shetty'] | 2020-11-15 14:34:13.238000+00:00 | ['Life Lessons', 'Trust', 'Life Experience', 'Ziplining', 'Fear'] |
Is Apple Losing It? 🍎 | Newsletter #1: 12/18/20
Welcome to the first-ever Night Shift newsletter! In case you’ve been living under a rock, Apple released some new headphones called the AirPods Max. And they’re $550. So yeah…
Maximum Fruitiness 🍇
Apple has had a big fall/winter season. And the last couple of weeks have been no different.
Most notably, Apple released their new headphones, the AirPods Max, for $550. The internet seems to have come to a consensus on the fact that the new headphones are overpriced and not worth the money. But others (including me) say Apple might not be as dumb as you think.
An Apple A Day 👩⚕️
Other Apple news is also streaming in fast. iOS 14.3 got released with new exciting features and bug fixes.
And some other Apple stuff as well:
Optimize And Kill 💀
If you want to learn more about what’s going on in the Facebook world, I highly recommend these two articles:
Meanwhile, Google had some troubles this week:
Silent but Deadly 🚙
Gas cars are looking older and older every day.
Living in Denial 😅
It seems that Qualcomm is shrugging off the new M1 arm chips from Apple. It will be interesting to see what the future of chips, especially x86 chips, turn out in the future.
Learning Everyday 🌞
This is an amazing guide to machine learning in Python:
Raspberry Cry 😢
It seems that everyone wants to make their own affordable prototype computer like the Raspberry Pi.
Just a Bit of Fun 🎤
Google made this. You’re welcome.
Well, that’s it for the very first Night Shift newsletter! Any feedback to improve would be greatly appreciated. | https://medium.com/drknode/is-apple-losing-it-289dadb7a393 | ['Henry Gruett'] | 2020-12-18 15:07:28.803000+00:00 | ['Apple', 'Night Shift Newsletter', 'Tech', 'Technews', 'Technology'] |
A streamlined UX Process To Redesign Team Analytics Platform — A Case Study | The story was originally published on the EL Passion Blog.
So, what is the product we redesigned?
Betterworks Engage (back then known as Hyphen) is a successful product, which had been developed over 3 years of focused work from distributed teams. As their competition in the space of employee analytics became more fierce, the team identified the user experience and the user interface of their dashboard software as big selling points in the industry.
The technology was there, it was simply not that easy to use.
The platform connects company management and HR with the employees through surveys, polls and sentiment analysis of online conversations. An example use case would be:
HR wants to analyse the onboarding experience of new employees. They create a survey and select the “Onboarding” category. Employees can answer the survey via a dedicated mobile app, a web panel or directly on Slack. The HR can review detailed results of the survey (both quantitative and qualitative results) in their Betterworks Engage web app, supported by extensive filtering capabilities.
The “before” version of Engage (Hyphen) included a dashboard and a navigation which did not entirely support the core user flows.
My job, as the lead UX Designer in the project was to look at the Insights panel and provide the HR teams around the world the best possible experience of collecting, analysing and using employee data in a meaningful way.
It wasn’t a simple task, as there were almost 50 multi-level report screens and a number of creator and settings screens, which were in a dire need of a redesign.
UX Strategy
EL Passion’s design team always does as much as possible to understand the context of the user experience and the core problems that need solving. The 360-degree research approach from our team over the course of 2 weeks involved:
Expert audit of the current user experience
of the current user experience Usability testing with HR managers to look at the UX from their perspective
with HR managers to look at the UX from their perspective Stakeholder interviews with team members previously involved in creating the product
with team members previously involved in creating the product Analysis of customer service logs
Review of behavioural analytics of the current users
of the current users Analysis of the competitive space
Why did we leave the competitive analysis as the last step?
By the time we started the work, the tool was already in use — that means that there is a lot to learn from. We could look inside the product, instead of tapping into inspiration from the outside.
A 360-degree perspective
The first step to analysing a user experience of an application, you need to check where the problems lie currently. Apart from Google Analytics stats, we hit a jackpot in customer support logs. What could be a better place to understand the customers’ pain points than the very place they go to for help?
Even though I did my master’s degree in HR, I have never worked in the field, so I couldn’t pretend to be an expert. There was a need of real HR managers to look at the product and give us honest feedback. Even just testing with 5 people reinforced some of the points my colleague Tomek and I found in the current user experience of Engage.
The problems we needed to solve to help Engage succeed
The core issues were the functional architecture of the application and the understandability of certain key features, which prevented the users from extracting maximum value from the application’s offering.
The architecture and the navigation was a patchwork
Like many other applications, Engage’s features were gradually developed. Sometimes it meant adding a new item in the main navigation, but sometimes it meant squeezing in a feature in the place that it doesn’t belong, in order to avoid the whole structure from falling apart.
There were several problems with navigation through the application, which was related to the way particular screens were grouped and to the fact, that they didn’t necessarily support particular user flows in the application.
In other words — the tasks HR people were trying to do were not accessible from a single screen — they needed to go in circles through many screens in the app while doing one thing.
Airtable was extremely helpful in creating a repository of all screens, features and UI elements.
UI inconsistencies made the product difficult to learn
When I explain the importance of UI consistency to my trainees, I always say this sentence:
The more consistent each of the elements is throughout an enterprise application, the less effort is needed to learn and remember the interface. The less effort is required, the more meaningful will be the work produced in the tool.
The same applied to Engage — the same functions were sometimes built by different teams and ended up working or looking a bit differently. The users were getting confused and often wrote to customer support to ask how to use a feature again.
If the same feature is looking differently across different screens, the users will have a hard time finding their way around the application.
5 small usability problems can turn into 10 big user experience problems
When analysing user experience, we often look out not only for high-level issues. The small problems matter a lot too, especially when they aggregate in tens or hundreds across 50 different application screens.
Some of the usability concepts we took a close look into were:
System feedback:
There were many instances in which there is an action happening in the system, however the user was not informed of what has happened. For instance deleting or resolving an item.
Elements should not disappear from the screen right after clicking — the user might miss it and be left puzzled with what actually happened in the app.
Choosing appropriate UI controls:
Some UI control types have not been chosen for maximum usability. There are a number of places where users need to use long dropdown lists or displaced radio buttons.
Sometimes a static list with a search filter can be more user-friendly than a huge system dropdown.
Lack of user help:
One of the biggest problem of certain features and microcopy was the lack of explanatory information. Due to the complexity of certain components, it makes the system difficult to learn. We couldn’t expect first time users to know terms such as “Driver Impact Analysis” or features such as heatmap reports.
This deep holistic approach gave as a very deep understanding of the problems that the users of the application are facing, as well as primed some thinking about potential solutions. With all the new ideas in our heads, we needed to hold our horses and start designing very slowly, in order to avoid making the same mistakes.
UX Design
Architecture
Based on the full breadth of findings from the UX Strategy phase we decided to focus on UX Architecture first. What was important for us was close collaboration with the client and their developers, which resulted in revised architecture schemes for the full application. We used them as our main communication tool in the first days of the UX Design phase.
Our aim here was to make the application easy to navigate and to provide clear ways to jump between the reports and not get lost in the hierarchy. | https://medium.com/elpassion/a-streamlined-ux-process-to-redesign-team-analytics-platform-a-case-study-fd8a0f7ae2b3 | ['Michał Mazur'] | 2020-09-15 07:41:07.043000+00:00 | ['Redesign', 'Design', 'UI Design', 'UX Design'] |
Exploring venues in Chandigarh, India using Foursquare and Zomato API | We see that some venues overlap while other venues are way off. Thus, using careful analysis we decided to drop all corresponding venues from the two datasets that had their latitude and longitude values different by more than 0.0004 . Once this was done, we observed that there were still some venues which were not aligning which could be categorised as follows:
1. There are venues that have specific restaurants/cafes inside them as provided by Zomato API (Pizza Hut in Elante Mall).
2. Two locations are so close by that they have practically the same latitude and longitude values (The Pizza Kitchen and Zara).
3. Some venues have been replaced with new venues (Underdoggs has now been replaced by The Brew Estate).
While it’s okay to keep the venues that belong to category 1 and 3, we shall drop venues in category 2. This left us with a dataset of 49 venues.
Methodology
As a first step, we retrieved the data from two APIs (Foursquare and Zomato). We extract venue information from the center of Chandigarh, upto a distance of 4 Km. The latitude and longitude values are then used to fetch venue rating and price from Zomato.
The data from the two sources is carefully combined based on the name, latitude and longitude values from the two sources. The final dataset would include the rating and price values for each venue.
Next, we analyse the data that we created based on the ratings and price of each venue. We identify the top category types. We identify places where many venues are located so that any visitor can go to one place and enjoy the option to choose amongst many venue options. We also explore areas that are high rated and those that are low rated while also plotting the map of high and low priced venues. Lastly, we cluster the venues based on the available information of each venue. This will allow us to clearly identify which venues can be recommended and with what characteristics.
Finally, we’ll discuss and conclude which venues to be explored based on visitor requirement of rating and cost.
Analysis
During the analysis phase, I explored the venue categories, the rating distribution of the venues and the price range across the map of Chandigarh.
Categories
As we extracted categories from the Foursquare API, identifying what type of venues are most popular in the city would really be helpful. We plot a bar chart for the same. | https://towardsdatascience.com/exploring-chandigarh-india-using-foursquare-and-zomato-api-1d4501291320 | ['Karan Bhanot'] | 2019-06-15 14:05:35.475000+00:00 | ['Machine Learning', 'Towards Data Science', 'Data Science', 'Productivity', 'Technology'] |
Improving Your Git Productivity with Aliases | One of the main concerns for developers who want to start using Git from the command line is that it’s slow. It may also feel slow if you compare it to a Git GUI that groups multiple Git commands into a single action. This is where Git aliases can come handy.
If you’ve never used aliases you basically need to create a file named gitconfig to store these aliases (if you’ve never done it before check setting up gitconfig). Although gitconfig allows you to do many things, we’ll be focusing on aliases (if you’ve never used Git aliases before check setting up Git aliases).
Let’s start by looking at a very basic but useful alias for git status. Add st = status to your gitconfig file. We can now use git st instead of the slightly longer git status .
Before we dive into some more useful aliases I would like to share my rules for what goes into my personal .gitconfig .
Only add aliases for commands you know — otherwise, there’s less chance you will actually use them. Furthermore, you don’t grow as a developer when you’re using things you don’t understand.
Only add commands after you’ve used them multiple times to avoid littering your Gitconfig — this also verifies you have these commands in your muscle memory.
Try to follow a convention & be consistent with your aliases — this makes remembering your aliases a breeze. It also speeds pulling these aliases out of your head.
With that out of the way, let’s start (note that I added some documentation to make the gists easier to read)
Now we can use these from the command line
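For example (an illustration, since the original gist isn't shown here):
git st
git aa
git cm "quick fix for the login redirect"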
It’s worth noticing the conventions I used. If, for example, I need to use some “add” command, know it will start with an a which makes it easier to remember.
At some point I started feeling like typing git is actually longer than at least some of my aliases so I decided to create a zsh alias in my zshrc that goes like this: alias g="git" .
This made all my commands even shorter (try to think how many times a day you’re typing git into your terminal).
Let’s look at some more useful aliases:
At this point, I hope you are starting to see how useful these aliases can be. Another awesome thing about .gitconfig is that it allows you to add bash commands:
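The alias definition gist is missing from this copy; one possible way to build such a command with a shell function (an illustration only — the author's actual cor may differ) is:
[alias]
  # checkout the first local branch whose name matches the given pattern
  cor = "!f() { git checkout $(git branch | grep -m 1 \"$1\" | tr -d ' *'); }; f"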
Our new cor command allows you to checkout branches by regex (well, sort of) like this:
Note: I know my “checkout by regex” implementation is not bulletproof but, I’ve been using it for more than a year now and it never failed me.
The “checkout by regex” is a very good example of trying to spot things that can be improved and speed them up using Git aliases.
As already mentioned the fact that gitconfig allows you to write aliases with any bash commands has a lot of interesting potentials. Let’s look at a final example.
The amazing FZF allows you to do a very fast fuzzy search which allows you to pimp your Git aliases:
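The FZF gist isn't reproduced here either; a sketch of what such an alias could look like (again an assumption rather than the author's exact code, and it requires fzf to be installed):
[alias]
  # fuzzy-pick any local or remote branch with fzf and check it out
  cof = "!git branch --all | grep -v HEAD | sed -e 's/^[* ]*//' -e 's#^remotes/origin/##' | sort -u | fzf | xargs git checkout"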
We basically combined the power of FZF with Git. Pretty cool ah? :)
And all the aliases combined:
Some of you may be wondering why not use some git plugin like oh-my-zsh git plugin instead of creating your own custom aliases. There are multiple reasons I could think of:
Why add a plugin when you can easily do all the stuff the plugin can without it? This is especially true if you alias git to g .
to . The plugin creates global aliases which feels a bit awkward to me. I would rather have all the aliases available only in a Git context.
I think the space after the g makes my commands slightly easier to read.
makes my commands slightly easier to read. When using the plugin you get a lot of aliases that you are either not familiar with or are not part of your workflow which feels like an overkill to me.
Let’s summarize:
Utilizing Git aliases as part of your workflow can dramatically improve your productivity — imagine how many times a day you’re using all these basic commands like status , commit , checkout , commit --amend and many more.
, , , and many more. Use aliases to group multiple commands together. Don’t forget you can even use bash commands.
To speed your Git workflow it’s important to work deliberately, always look for actions you’re performing multiple times a day and try to create useful aliases for them.
If you’re already familiar with Gitconfig and you feel like I’ve missed something or just want to share more cool aliases — let me know :) | https://medium.com/analytics-vidhya/improving-your-git-productivity-with-aliases-c64b94517c14 | ['Gideon Caller'] | 2019-10-19 15:07:16.310000+00:00 | ['Gitconfig', 'Self Improvement', 'Git', 'Technology', 'Productivity'] |
The UI & UX tips collection — volume one | Originally published at marcandrew.me on November 19th, 2020.
Creating beautiful, but also practical UIs takes time, with a lengthy amount of design revisions along the way. I know. I’ve been there before.
But what I’ve discovered over the years is that by making just a few simple, and quick adjustments to your designs you can improve the end-result massively.
In this guide I’ve put together a collection of my popular UI & UX tips from the past 12 months that can, with little effort, help improve both your designs, and the overall user experience.
Let’s dive on in… | https://uxdesign.cc/the-ui-ux-tips-collection-volume-one-f69f0969ed17 | ['Marc Andrew'] | 2020-11-20 08:30:28.125000+00:00 | ['Product Design', 'Visual Design', 'UX', 'UI', 'Design'] |
Venture Capital Funding Hasn’t Just Recovered — It’s Booming | Venture Capital Funding Hasn’t Just Recovered — It’s Booming
Is this the new Dotcom bubble?
Photo by Markus Winkler on Unsplash
Containment measures for COVID-19 plunged the world into a deep recession, with tens of millions of people losing their jobs.
However, Venture Capital funding survived much of the crash intact, down just 4% year-over-year in the first quarter, and down just 2% YoY in Q2. The second quarter already represented significant growth from March’s bottom, up 17 percent quarter over quarter.
The numbers for Q3 show that there’s indeed been a swift turnaround, as funding was up 9% YoY. Q3 was a “seven quarter high” in venture funding for US companies, and Q4 is on pace for significant YoY growth.
Figures by CrunchBase. Q1, Q2, Q3. Visualized by author.
Q4 — The Grand Finale
Venture capital runs the gamut from angel investments all the way to IPO, and there are a lot of exciting IPOs rounding out 2020, including Affirm, Airbnb, DoorDash, QuantumScape, Wish, Roblox, ThoughtSpot, UIPath, and C3.ai.
There’s a bunch more, and several will run into 2021, but in any case, there’s a lot of recent VC funding and deals in the pipeline.
Affirm is a $10 billion installment loans company that filed for IPO on October 8th, and is expected to list once the SEC completes its review.
QuantumScape is an Electric Vehicle battery company with exciting tech that supposedly surpasses Tesla — a rival whose stock is up over 10,000% all-time.
C3.ai is an AI solutions company that IPOd under the ticker $AI. Their products include AutoML, a niche within AI that’s growing even faster than the industry as a whole. While I haven’t worked with C3.ai in particular, I work with AutoML tools like Obviously.AI, which makes implementing AI orders of magnitude easier, so there’s a lot of potential.
Not Just a Recovery, an IPO Boom
The long line of IPOs may even rival the frenzy of the dot-com era, and become the biggest year ever for IPOs. A lot can change in the next month, but it’ll be a big year no matter what.
Globally, U.S. exchanges have made up 87% of IPO proceeds in Q3, according to the Global IPO trends: Q3 2020 report. In the first three quarters of 2020, well over 100 companies IPOd, including long-awaited giants like Palantir and Snowflake.
Summary
If you’re a startup looking for funding, quarterly financing data shows that now is an incredible time. You need look no further than Crunchbase or even Twitter to find countless startups raising capital. | https://medium.com/datadriveninvestor/venture-capital-funding-hasnt-just-recovered-it-s-booming-6ffd1b5b2c2c | ['Frederik Bussler'] | 2020-12-13 11:43:23.811000+00:00 | ['Investing', 'Venture Capital', 'VC', 'Startup', 'Funding'] |
Pyro, a Probabilistic Programming Language with a PyTorch Backend, and Pixyz, a Toolkit for Generative Models
hei4/medium-pyro-pixyz | https://medium.com/pytorch/pytorch%E3%83%90%E3%83%83%E3%82%AF%E3%82%A8%E3%83%B3%E3%83%89%E3%81%AE%E7%A2%BA%E7%8E%87%E7%9A%84%E3%83%97%E3%83%AD%E3%82%B0%E3%83%A9%E3%83%9F%E3%83%B3%E3%82%B0%E8%A8%80%E8%AA%9Epyro%E3%81%A8%E7%94%9F%E6%88%90%E3%83%A2%E3%83%87%E3%83%AB%E3%81%AE%E3%83%84%E3%83%BC%E3%83%ABpixyz-ac4e6c4d3963 | ['大川 洋平'] | 2020-12-18 22:19:17.145000+00:00 | ['Probabilistic Programming', 'Pytorch', 'Deep Learning', '日本語', 'Python'] |
Perfect Parenting: A Guide | Photo by Benjamin Manley on Unsplash
If you want to be a perfect parent, there’s really only one hard and fast rule to follow:
Don’t have kids.
Seriously. If you want to be perfect at parenting, it’s better not to have kids because otherwise, you are just setting yourself up for failure. It’s the hard truth we’re busy ignoring when we’re plotting and planning how we’ll do parenting better than anyone else — better than our own parents, better than our friends, and sure as hell better than the parents we judge out in the public sector.
I was a perfect parent — back before I had children.
I was going to have a peaceful household, my children would pick up after themselves practically from birth, and we would have a learning environment devoid of the standard electronic babysitters.
All parents, let’s take a moment to laugh at ourselves at our pre-parenting ideas of parenting.
I live in semi-organized chaos. It’s a madhouse of fighting siblings, toys spread from one room to the next, and the dynamic duo of tablet time and television entertainment keeping us all from losing it. I’m teaching them mindfulness, and they’re learning to be responsible for their own toys, but our environment is far from perfect. We have our good days and our challenging ones, but the parent I thought I was going to be retreated quickly when faced with actual children in a real-life parenting scenario. She was not equipped for this unfamiliar territory.
In fact, pre-parenting me would be shocked at the number of times I’ve ended up screaming like a banshee about finding the other shoe after repeated quiet and respectful requests to put them on. Pre-parenting me would assume that all the house rules would automatically be followed with little fuss, and I wouldn’t have taken into account the fact that children are learning and growing, which means they aren’t exactly emotionally stable or rational.
My children are 4 and 6 now, and we’re all learning. I’m learning every day how to be the kind of parent I want to be, and they’re learning how to be kids who will one day be adults. We’re in it together, and when I mess up, I set the example by talking to them about it rather than pretending that I do no wrong. I own up to it and use it as a teaching moment.
As much as I get wrong, I think there are probably a few things I get right.
If our parents’ criticism can become our inner critic later, I’m hoping a few of my more encouraging parenting phrases can also make the cut of their future inner dialogue. In my house, we say:
Even when we get mad at each other, we still love each other.
Your best is always good enough.
And, oddly enough,
When it’s time to go, it’s time to go.
I want them to learn that anger doesn’t eclipse love. When we have a disagreement, it doesn’t mean that they aren’t loved. They are always loved. Nothing they do can cancel that out, and it serves as a good reminder that we are worthy of love even when we screw up. And screw-ups are an inevitable part of life.
I also want them to know that their best on any given day is good enough. I want them to know that they are good enough. Just as I am. It can be easy to feel defeated and to struggle on our challenging days, but as long as we’re doing our best, that’s enough.
Okay, the last one is something I say when we go to any special event or activity. It means that when it’s time to leave, they need to go without giving me a hard time about it. But maybe even that one can apply to life in general. When we know that it’s time to let go, we should do it rather than dragging it out. This one probably gets lost in translation, but it’s still a good reminder for me to recognize when something has run its course and make peace with it.
I’m hoping the good things sink in because a perfect parent I am not!
I make mistakes. I overreact. I let my own emotions run away with me. But I do a couple of things to address this. First, I initiate fresh starts. When we have a tough morning, I remind them that we have the opportunity to make the best of the rest of the day. When we have a rough night, I remind them that we can try again in the morning. We get another chance to do better, and it can start at any time.
I also make sure that at the end of each day they know that they are loved without condition. We end our day with a bedtime routine that includes stories I read and stories we all make up and tell to each other. It’s a favorite part of our day, and it helps remind us that are a family who loves each other, even on the days when one or more of us has struggled.
Perfect parents don’t exist.
I don’t care how many “perfect” Pinterest moms we see out there who somehow manage to give the impression that their kids are perfect and they never screw up. They don’t really exist. They’re a figment of their own imagination. They aren’t showing us the hard side of parenting or the ugly side. It’s just an illusion.
Parenting is messy. It’s imperfect by its very nature because we’re all new at this. We weren’t born knowing how to parent correctly. We take nature and nurture and our own life experiences, and if we care at all, we try our best not to screw it up.
We will definitely screw it up.
Let’s face it: all our kids will probably need therapy for something. But we can make sure that we’re giving them unconditional love, instilling them with self-love and self-respect, and teaching them that they are good enough even though they aren’t perfect. After all, neither are we.
We can also make sure that we’re showing as much if not more than telling. We need to practice self-love and self-care like we mean it. We need to be body positive about ourselves so we can teach our kids how to do it. We need to be mindful about how we talk about other people because our kids are listening. We can show them that we are imperfect by acknowledging our own mistakes and teaching them how to deal with screw-ups in a healthy way.
If we love our children and keep trying to do our best, it’s enough. We might feel like we’re failing, but that’s just a part of the process. We can let it make us better by continuing to look for ways to improve and by modeling that growth mindset for our children.
If we want to be perfect parents, we don’t need to go to Pinterest. We don’t need to poll all the best parents with their perfectly behaved children. We just need to accept that we will be imperfectly perfect, and then love our kids with the whole of our hearts, learn who they are and support them, and figure out a way to embrace the mess that is parenting with as much courage and humor as we can manage. | https://medium.com/swlh/perfect-parenting-a-guide-c9cbde3ddf5b | ['Crystal Jackson'] | 2019-10-31 19:55:29.336000+00:00 | ['Self', 'Psychology', 'Mothers Day', 'Advice', 'Parenting'] |
Amazon’s First Non-Employee Customer and What He Bought | Amazon’s First Non-Employee Customer and What He Bought
The story of John Wainwright and his journey to the first purchase at Amazon.
Author purchased rights via adobe stock photos
Vincent Van Gogh once wrote that, “Great things are not done by impulse, but by a series of small things brought together.”
In many ways, the beginning of a business is a canvas, an idea that's initiated with a scary, committal brushstroke, evolving into a beautiful final product, or a botched and forgotten disaster.
It’s particularly interesting looking at the humble origins of ubiquitous, global companies. Their beginnings are almost always inspirationally small, the first half of their founder’s proof of perseverance.
Jeff founded Amazon in 1994, walking away from a well-paying finance job in New York City. His supervisor implored him not to leave, namely because Bezos was a good employee and worth keeping, but also because he thought Jeff was on a fool’s errand. He’d seen so many great minds fall into the abyss of failed startups.
Most of you know the rest of this story. But let’s go back, and see the part most don’t know: the story of their first sale. | https://medium.com/publishous/amazons-first-non-employee-customer-and-what-he-bought-fb1a07d42ced | ['Sean Kernan'] | 2020-12-08 00:19:19.547000+00:00 | ['Life', 'History', 'Technology', 'Sean Kernan', 'Artificial Intelligence'] |
What’s the Ceiling for the Quickly Emerging GEVO? | Image via Unsplash- Jonathan Petersson
What’s the Ceiling for the Quickly Emerging GEVO?
The growing biofuel and renewable chemical company is starting to make some major noise on the stock market
New favorites appear on the stock market stage frequently. Some sputter and die away while others plug away and find financial glory. GEVO has recently thrown their hat in the ring for darling of the moment, seeing impressive gains in recent days. However, are they for real and if so, what might be their ceiling?
Headquartered in Colorado, according to Robinhood, “Gevo, Inc. Common Stock, also called Gevo, is a renewable chemicals and next generation biofuels company, which focuses on the development and commercialization of renewable alternatives to petroleum-based products.”
They are a smaller growth company of about 60 employees, who have been operating since 2005. With the election of the 46th President, Joe Biden, a reinvigorated emphasis has been placed renewable energy and resources in the United States and perhaps abroad.
GEVO offers an impressive and diverse array of products, which include but aren’t limited to renewable gasoline and biodiesel, sustainable aviation fuel, ethanol and high-protein animal feed. Their line focuses on decarbonization to give their creations the lowest carbon life-cycle assessment (LCA) possible. This LCA is seen as the most effective method of measuring true carbon intensity in biofuels and chemical with the lower levels benefitting the environment.
As more and more businesses and communities search for ways to reduce their carbon footprints, companies like GEVO are there to help transition into a new age. Contracts are continuing to be accumulated by the company, including a deal for their renewable gasoline in Seattle, where they have a long-term pact.
A look at the balance sheet shows unprofitable numbers over the past years. However, a deeper dive makes it easy to identify a number of reasons for optimism. The company projects to grow revenue by 24.05% per year (however, they may not reach profitable status until 2023), which a steady and healthy trek upwards.
Earlier this year, GEVO also inked a $1.5 billion long-term renewable hydrocarbons purchase and sale agreement with Trafigura Trading, LLC, who will start to annually market and sell 25 million gallons of renewable hydrocarbons beginning in 2023. This provides a solid base of income in the coming years, as they work to add many more similar deals.
GEVO has also made admirable progress regarding its assets and debt. During its 2020 second quarter earnings, they reported improving their cash and cash equivalents from $6.3 million just the year before to $80.6 million this year. Most importantly, they also expect to pay off the remainder of their approximately $12.7 million debt balance by the end of 2020. A company with no debt, a steadily increasing stockpile of cash, and a war chest of lucrative contracts is a recipe most enterprises can only dream of achieving.
Investors are just starting to recognize the potential and power of GEVO. With share prices sitting at around $1.90 on December 1st earlier this month, they have risen almost 275% since then with the price at $5.20 as of the end of after-hours trading on December 28th. This has allowed them to exceed analyst predictions, which projected them to reach $5.00 by the end of this year.
President-elect Biden has spoken extensively on his intent to pursue a more environmentally friendly agenda than the current Trump administration. While GEVO certainly hasn't cornered the market, their rosy financial outlook, variety of products, and emerging place in the market seemingly have them on an enviable upward trend as we head into 2021.
GEVO has gone from almost all projection to making tangible strides. The proof will be in the pudding once their contracts continue to accumulate and they begin to get into production and delivery in earnest. However, it’s hard not to like what they have going on. As more bricks fall (or don’t fall) into place, the picture will become clearer as to where the company’s stock price is headed. The current excessive bullishness could be simply a taste for things to come. Only time and continued careful monitoring of this emerging company will tell, but as things stand today, it’s starting to command your attention and consideration as a potential long-term investment.
DISCLAIMER: The author is not a financial advisor or expert. The opinions expressed in this article are for general educational purposes and entertainment only and are not intended in any way to provide specific advice or recommendations for any individual or on any specific security or investment product. Individual investors are responsible for their own money and investment decisions. The author holds a small position in the product discussed in the article. | https://medium.com/datadriveninvestor/whats-the-ceiling-for-the-quickly-emerging-gevo-87f55688b5bf | ['Andrew Martin'] | 2020-12-29 09:14:09.623000+00:00 | ['Investing', 'Renewable Energy', 'Money', 'Stock Market', 'Finance'] |
How I Designed a Map | Ch. 4: The Typography
My map was coming along well, but I also wanted to change the font. In Jules Verne’s novel, it all starts with a cryptic note written in the Runic script. I wanted something similar — hard edges, geometrical lines harkening back to the Nordic style.
A cryptic runic note that starts the adventure in Jules Verne’s novel
After a long time looking for it, I found TT Firs, designed by Ivan Gladkikh and the TypeType team. It was perfect. It came in a huge range of variants, was neutral, and most importantly, legible at smaller sizes. It gave a beautiful character to my map.
It was now time to style POIs. I grouped them by type and assigned colours. Green for nature, purple for transport, teal for water-based transport, red for medical, and orange for education. Everything else would be brown. Groups would have similar colours but different icons. Fortunately, Mapbox had a whole set of POI icons ready to be used and customised.
I thought my map was coming along beautifully. There were a few niggles here and there, but V1 was complete! The niggles were mainly at higher zoom levels, but since my final aim was to map my whole road trip, I figured it was okay if I left it at that.
The map was designed only for Iceland so the colours and information density don’t work on all countries. V1 of my map can be interacted with on my website, and you’re free to explore and play around with it. It was now May 2016 and I couldn’t plot the road trip until I was back.
Ch. 5: Dusting it Off
Four months later I went on my trip. I spent 16 days in Iceland driving 3300 km taking it all in. I hiked random trails, saw stunning waterfalls, and experienced standing under the Aurora until my fingers went numb with the cold. It was a surreal experience.
However, I forgot about the map I’d made. Flash forward 4 years. Lockdown happened and I remembered my map. I came back to it and saw that there was a new version of Mapbox which let me do in 30 minutes what took me weeks. (insert laugh crying GIF).
While I was writing this post, Mapbox released another updated version! Do they even sleep?
I estimated that it would take me even more time to learn the new version and transfer my styles there, so I stuck with the old one. All I had to do now was chart my trip onto the map. To do that, I had to map out my whole trip and every single place I went to and then learn how to use Mapbox APIs to overlay it on top of my map. Thankfully I remembered my route almost completely.
Once I had my trip details, I learnt how GeoJSON works and used that to make a tileset in Mapbox which I could then style. Tilesets are essentially geographical representations of your data. It’s actually pretty simple once you know what to do, but it took me almost a week to figure all of it out. I tried it on another map once and then ported it onto Iceland. All of it finally clicked into place.
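For readers who have not met the format before, a single leg of a route in GeoJSON boils down to a small structure like the one below (written here as a Python dictionary); the coordinates and properties are placeholders, not my actual trip data:
route_leg = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [[-21.93, 64.14], [-19.99, 63.61]],  # [longitude, latitude] pairs
    },
    "properties": {"day": 1, "name": "Reykjavik to Seljalandsfoss"},
}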
This took a lot more effort than I originally thought it would, but my map was better for it. I learned so much about cartography and technology by making just one map. I’ve always had huge respect for cartographers but it grew tenfold after this process; imagine creating maps 400 years back! It also gave me that warm fuzzy feeling you get when you make something with your hands entirely from the ground up.
Almost 1600 days after I first began, my map was finally complete. I exported the map and made some adjustments in Photoshop to create a final static version of my road trip to Iceland — which, by the way, included going to Snæfellsjökull where it all started. | https://medium.com/nightingale/how-i-designed-a-map-7fa404023990 | ['Nimit Shah'] | 2020-12-28 14:02:36.865000+00:00 | ['Mapping', 'How To', 'Mapbox', 'Cartography', 'Built With Mapbox'] |
Reporting On The Middle East: Three Things Journalists Need To Know | First published by the European Journalism Observatory (EJO) and co-written with Payton Bruni, a student at the University of Oregon, School of Journalism and Communication, majoring in journalism with a minor in Arabic studies.
Being a journalist in — or reporting on — the Middle East brings with it many challenges.
One key issue, for journalists and news consumers alike, is the relative lack of media freedom, and freedom of expression, in the region.
This can drive conversation into more controlled environments, like closed WhatsApp groups and encrypted apps like Telegram. It also results in self-censorship, due to privacy concerns and concern over what it is safe to say in the public domain. The fact that social networks are often banned or blocked in times of upheaval adds to this online wariness.
Yet at the same time, as our new report “State of Social Media, Middle East: 2018” demonstrates, the growth of social media use in the Middle East and North Africa (MENA) continues unabated.
What should journalists make of this complicated landscape? Here are three considerations:
1. Be aware of the environment
As recently noted by the Brookings Institution’s Center based in Doha, Qatar: “The question of media freedom takes on a particular character in the Middle East. Worldwide, the Middle East is the most dangerous region for journalists. Not only journalists, but also media outlets themselves are now under existential threat.”
Reporters Without Borders highlighted this in their 2018 World Press Freedom Index, ranking a number of Middle Eastern countries near the bottom of their list.
The deaths and detainment of journalists in countries such as Syria and Yemen is a cause for concern, as is increased hostility towards journalists in Egypt.
In 2018, the Egyptian government passed legislation restricting where journalists can operate and making it obligatory for new media websites to apply for licenses. It also categorised social media accounts with more than 5,000 followers as media outlets, exposing them to monitoring by the authorities.
Even veteran journalists have been feeling the heat in Egypt. The New York Times correspondent and former Cairo bureau chief, David D. Kirkpatrick, was recently detained by security officials and expelled from the country with no explanation.
Image: 2018 World Press Freedom Index. Source: Reporters Without Borders. Countries with the lowest rankings for press freedom (many of them in the Middle East) can be seen in black.
2. Opportunities for social media as a source
Despite this repressive backdrop and the self-censorship often resorted to by internet and social media users, social networks remain an avenue for stories and sources.
The potential for this was famously demonstrated by NPR’s Andy Carvin during the Arab Spring. More recently, regional outlets such as Al Jazeera have harnessed social media as a news source by reporting on the #BringDevBack movement started by Yemeni people looking to rebuild their country.
As with any source, content found on social networks needs to be treated with caution. This is particularly important given the increasing weaponisation of cyberspace (a trait not unique to the MENA region) to push political agendas.
Following the disappearance and murder of Saudi journalist Jamal Khashoggi, an investigation by NBC showed how Twitter accounts — belonging to both real people and bots — were promoting the denials of the Saudi Arabian government.
On the other hand, analysis by Reuters revealed a network of at least 53 websites which, “posing as authentic Arabic-language news outlets, have spread false information about the Saudi government and Khashoggi’s murder.” These stories were also amplified by automated Twitter bots.
This trend can also be seen in the online conversation about neighbouring Qatar. In May 2018, 29% of tweets in Arabic about Qatar — a nation at odds with several of its Gulf neighbours — were tweeted by bots. This is up from 17% a year before.
3. Social media as a platform for distribution and engagement
Although journalists need to be aware of this wider context, it shouldn’t deter them from using social media. Social networks are not just important channels for sources, they are also essential platforms for the distribution of content and engagement with audiences.
The growth of Facebook, for example, is one such opportunity. By early 2018 there were 164 million active monthly Facebook users in the Arab world, up from 56 million Facebook users just five years earlier.
Nearly half of young Arabs (49%) say they get their news on Facebook daily, up from 35% in 2017, and almost two thirds (63%) of Arab youth now say they look first to Facebook and Twitter for news.
Meanwhile, Saudi Arabia not only has the highest annual growth rate of social media users anywhere in the world (up 32% vs. a worldwide average of 13% between Jan 2017-Jan 2018), but a third of the country’s population uses Snapchat every day.
No wonder the ephemeral network is partnering with a range of local content providers, who are re-imagining their material for the platform.
Image: Annual growth of social media users. Source: Hootsuite and We Are Social
Social media in the Middle East is a complex and fast-moving space. Keeping abreast of how this landscape is evolving is therefore essential for journalists if they are to understand, and harness, the full potential of social media in the region.
To find out more, download the full study, “State of Social Media, Middle East: 2018” by Damian Radcliffe and Payton Bruni, from the University of Oregon Scholars’ Bank, or view it online via Scribd, SlideShare, ResearchGate and Academia.Edu. | https://medium.com/damian-radcliffe/reporting-on-the-middle-east-three-things-journalists-need-to-know-4a4dabb397ee | ['Damian Radcliffe'] | 2019-03-17 15:01:02.333000+00:00 | ['Media', 'Freedom Of Speech', 'Journalism', 'Middle East', 'Social Media'] |
Numpy Cheat Sheet | This is a long note, make yourself a cup of tea, and let’s get started!
As always, we need to import NumPy library:
import numpy as np
1. N-Dimensional Array (Ndarray)
What are Arrays?
Arrays are a data structure for storing elements of the same type. Each item stored in an array is called an element. Each location of an element in an array has a numerical index, which is used to identify the element.
1D vs 2D Array
1D array (i.e., single dimensional array) stores a list of variables of the same data type. It is possible to access each variable using the index.
1D array
2D array (i.e, multi-dimensional array) stores data in a format consisting of rows and columns.
2D array
Arrays vs Lists
Arrays use less memory than lists
Arrays have significantly more functionality
Arrays require data to be homogeneous; lists do not
Arithmetic on arrays is element-wise and vectorized; plain lists do not support arithmetic directly (see the short example below)
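As a quick illustration of that last point (the values here are arbitrary):

import numpy as np

prices = [10, 20, 30]
np.array(prices) * 2 # element-wise arithmetic: array([20, 40, 60])
prices * 2 # plain list repetition: [10, 20, 30, 10, 20, 30]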
NumPy is used to work with arrays. The array object in NumPy is called ndarray .
Create a Vector
To create a vector, we simply create a one-dimensional array. Just like vectors, these arrays can be represented horizontally (i.e., rows) or vertically (i.e., columns).
# Create 1 dimensional array (vector)
vector_row = np.array([1,2,3]) # Create vector as a row
>>> array([1, 2, 3])

vector_column = np.array([[1],[2],[3]]) # Create vector as a column
>>> array([[1],
[2],
[3]])
Create a Matrix
To create a matrix, we can use a NumPy two-dimensional array. In our solution, the matrix contains three rows and two columns.
matrix = np.array([(1,2,3),(4,5,6)]) # Two dimensional array
>>> array([[1, 2, 3],
[4, 5, 6]])
Creating a Sparse Matrix
A sparse matrix is a matrix in which most of the elements are zero. Sparse matrices only store nonzero elements and assume all other values will be zero, leading to significant computational savings.
Imagine a matrix where the columns are every article on Medium, the rows are every Medium reader, and the values are how long (minutes) a person has read that particular article. This matrix would have tens of thousands of columns and millions of rows! However, since most readers do not read all articles, the vast majority of elements would be zero.
Let’ say, we want to create a NumPy array with two nonzero values, then converted it into a sparse matrix. If we view the sparse matrix, we can see that only the nonzero values are stored:
from scipy import sparse
matrix_large = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[3, 0, 0, 0, 0, 0, 0, 0, 0, 0]])

# Create compressed sparse row (CSR) matrix
matrix_large_sparse = sparse.csr_matrix(matrix_large)
print(matrix_large_sparse)
>>> (1, 1) 1
(2, 0) 3
In the example above, (1, 1) and (2, 0) represent the indices of the non-zero values 1 and 3 , respectively. For example, the element 1 is in the second row and second column.
Create Special Ndarray
np.zeros() function returns a new array of given shape and type, filled with zero.
# Create 1d array of zeros, length 3
np.zeros(3)
>>> array([0., 0., 0.])

# 2x3 array of zeros
np.zeros((2,3))
>>> array([[0., 0., 0.],
[0., 0., 0.]])
np.ones() function returns a new array of given shape and type, filled with one.
# Create 3x4 array of ones
np.ones((3,4))
>>> array([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]])
np.eye() function returns a matrix having 1’s on the diagonal and 0’s elsewhere.
# Create 5x5 array of 0 with 1 on diagonal (Identity matrix)
np.eye(5)
>>> array([[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 0., 0., 0., 1.]])
np.linspace() function returns an evenly spaced sequence in a specified interval.
# Create an array of 6 evenly divided values from 0 to 100
np.linspace(0,100,6)
>>> array([ 0., 20., 40., 60., 80., 100.])
np.arange(start, stop, step) function returns the ndarray object containing evenly spaced values within the given range.
The parameters determine the range of values:
start defines the first value in the array. stop defines the end of the array and isn’t included in the array. step is the number that defines the spacing (difference) between every two consecutive values in the array and defaults to 1 .
Note: step can’t be zero. Otherwise, we will get a ZeroDivisionError . We can’t move away anywhere from start if the increment or decrement is 0 .
# Array of values from 0 to less than 10 with step 3
np.arange(0,10,3)
>>> array([0, 3, 6, 9])
np.full(shape, fill_value) function returns a new array of a specified shape, fills with fill_value .
# 2x3 array with all values 5
np.full((2,3),5)
>>> array([[5, 5, 5],
[5, 5, 5]])
np.random.rand() function returns an array of specified shape and fills it with random values.
# 2x3 array of random floats between 0–1
np.random.rand(2,3)
>>> array([[0.37174775, 0.59954596, 0.50488967],
[0.22703386, 0.59914441, 0.68547572]])

# 2x3 array of random floats between 0–100
np.random.rand(2,3)*100
>>> array([[23.17345972, 98.62743214, 21.40831291],
[87.08603104, 84.23376262, 63.90231179]])

# 2x3 array with random ints between 0–4
np.random.randint(5,size=(2,3))
>>> array([[2, 3, 4],
[4, 4, 0]])
2. Array shape manipulations
Shape
It will be valuable to check the shape and size of an array both for further calculations and simply as a gut check after some operation.
NumPy arrays have an attribute called shape that returns a tuple with each index having the number of corresponding elements.
arr = np.array([(1,2,3),(4,5,6)])
arr.shape # Returns dimensions of arr (rows,columns)
>>> (2, 3)
In the example above, (2, 3) means that the array has 2 rows and 3 columns: the first axis has length 2 and the second axis has length 3.
Reshape
It is important to know how to reshape NumPy arrays so that our data meets the expectations of specific Python libraries. For example, Scikit-learn requires a one-dimensional array of output variables y to be shaped like a two-dimensional array with one column and outcomes for each row.
Some algorithms, like the Long Short-Term Memory recurrent neural network in Keras, require input to be specified as a three-dimensional array comprised of samples, timesteps, and features.
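As a rough sketch of that three-dimensional layout (the numbers are arbitrary, and numpy is assumed to be imported as np as above):

data = np.arange(12) # 12 observations of a single feature
lstm_input = data.reshape(3, 4, 1) # 3 samples, 4 timesteps, 1 feature
lstm_input.shape
>>> (3, 4, 1)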
reshape() allows us to restructure an array so that we maintain the same data but it is organized as a different number of rows and columns.
Note: The shape of the original and new matrix contains the same number of elements (i.e, same size)
arr1 = np.arange(1, 11) # numbers 1 to 10
>>> array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

print(arr1.shape) # Prints a tuple for the one dimension.
>>> (10,)
We can use reshape() method to reshape our array to a 2 by 5 dimensional array.
arr1_2d = arr1.reshape(2, 5)
print(arr1_2d)
>>> array([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10]])
If we want NumPy to automatically determine what size/length a particular dimension should be, specify the dimension as -1 which effectively means “as many as needed.” For example, reshape(2, -1) means two rows and as many columns as needed.
arr1.reshape(2, 5)
arr1.reshape(-1, 5) # same as above: arr1.reshape(2, 5)
arr1.reshape(2, -1) # same as above: arr1.reshape(2, 5)
Transpose
Transposing is a common operation in linear algebra where the column and row indices of each element are swapped.
From the last example, arr1_2d is a 2 by 5 dimensional array, we want to switch its rows with its columns.
arr1_2d.T
>>> array([[ 1, 6],
[ 2, 7],
[ 3, 8],
[ 4, 9],
[ 5, 10]])
Flatten a Matrix
flatten() is a simple method to transform a matrix into a one-dimensional array.
arr1_2d.flatten()
>>> array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
Resize a Matrix
resize(arr, new_shape) function returns a new array with the specified shape.
If the new array is larger than the original array, then the new array is filled with repeated copies of arr .
# Resize arr1_2d to 3 rows, 4 columns
resize_arr = np.resize(arr1_2d, (3,4))
resize_arr
>>> array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 1, 2]])
Inverting a Matrix
The inverse of a matrix A is a matrix that, when multiplied by A results in the identity. A good example is in finding the vector of coefficient values in linear regression.
Use NumPy’s linear algebra inv method:
matrix = np.array([[1, 2],
[3, 4]])

# Calculate inverse of matrix
np.linalg.inv(matrix)
>>> array([[-2. , 1. ],
[ 1.5, -0.5]])
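As a hedged illustration of the linear regression use case mentioned above (toy numbers; in practice np.linalg.lstsq or a dedicated solver is usually preferable to an explicit inverse):

X = np.array([[1, 1], [1, 2], [1, 3]]) # design matrix with an intercept column
y = np.array([6, 8, 10])
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y) # normal equation
beta # approximately array([4., 2.]), i.e. intercept 4 and slope 2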
Convert Array to List and vice versa
When I was first learning Python, one of the errors that I ran into quite often — and sometimes still run into now — looked like this:
Arrays need to be declared whereas lists do not need declaration because they are a part of Python's syntax. This is the reason lists are more often used than arrays. But if we need to perform arithmetic on our data, we should go with arrays instead.
In case we want to store a large amount of data, we should consider arrays because they can store data very compactly and efficiently.
arr = np.array([(1,2,3),(4,5,6)])
>>> array([[1, 2, 3],
[4, 5, 6]])

arr_to_list = arr.tolist() # Convert arr to a Python list
>>> [[1, 2, 3], [4, 5, 6]]

np.array(arr_to_list) # Convert list to array
>>> array([[1, 2, 3],
[4, 5, 6]])
Other useful functions to describe the array:
arr.size # Return number of elements in arr
len(arr) # Length of array
arr.ndim # Number of array dimensions
arr.dtype # Return type of elements in arr
arr.dtype.name # Name of data type
arr.astype(int) # Convert an array to a different type
arr.astype(dtype) # Convert arr elements to type dtype
np.info(np.eye) # View documentation for np.eye
3. Numerical Operations on Array
Trace (linear algebra)
The trace is the sum of all the diagonal elements of a square matrix.
arr = np.array([[2, 0, 0], [0, 2, 0], [0, 0, 2]])
np.trace(arr)
>>> 6
Determinant
The determinant of a matrix is a special number that can be calculated from a square matrix, and it is sometimes useful to compute it. NumPy makes this easy with det() .
matrix = np.array([[1, 2, 3],
[2, 4, 6],
[3, 8, 9]])

# Return determinant of matrix
np.linalg.det(matrix)
>>> 0.0
Find the Rank of a Matrix
The rank of a matrix is the estimate of the number of linearly independent rows or columns in a matrix. Knowing the rank of a matrix is important. While solving systems of linear equations, the rank can tell us whether Ax = 0 has a single solution or multiple solutions.
matrix = np.array([[1, 1, 3],
[1, 2, 4],
[1, 3, 0]])

# Return matrix rank
np.linalg.matrix_rank(matrix)
>>> 3
Find Eigenvalues and Eigenvectors
Many machine learning problems can be modeled with linear algebra with solutions derived from eigenvalues and eigenvectors.
eigenvalues and eigenvectors
In NumPy’s linear algebra toolset, eig lets us calculate the eigenvalues, and eigenvectors of any square matrix.
matrix = np.array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])

# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(matrix)
eigenvalues
>>> array([ 1.33484692e+01, -1.34846923e+00, -2.48477279e-16])
eigenvectors
>>> array([[ 0.16476382, 0.79969966, 0.40824829],
[ 0.50577448, 0.10420579, -0.81649658],
[ 0.84678513, -0.59128809, 0.40824829]])
Scalar Operations
When we add, subtract, multiply or divide a matrix by a number, this is called the scalar operation. During scalar operations, the scalar value is applied to each element in the array, therefore, the function returns a new matrix with the same number of rows and columns.
new_arr = np.arange(1,10)
>>> array([1, 2, 3, 4, 5, 6, 7, 8, 9])

# Add 1 to each array element
np.add(new_arr,1)
>>> array([ 2, 3, 4, 5, 6, 7, 8, 9, 10])
Similarly, we can subtract, multiply, or divide a matrix by a number using functions below:
np.subtract(arr,2) # Subtract 2 from each array element
np.multiply(arr,3) # Multiply each array element by 3
np.divide(arr,4) # Divide each array element by 4 (returns np.nan for division by zero)
np.power(arr,5) # Raise each array element to the 5th power
Matrics Operations
A matrix can only be added to (or subtracted from) another matrix if the two matrices have the same dimensions, that is, they must have the same number of rows and columns.
When multiplying matrices, we take rows of the first matrix and multiply them by the corresponding columns of the second matrix.
Note: Remember “rows first, columns second.”
multiply matrices
It is important to know the shape of matrics. Then the matrics operations are simple using the NumPy library.
np.add(arr1,arr2) # Elementwise add arr2 to arr1
np.subtract(arr1,arr2) # Elementwise subtract arr2 from arr1
np.multiply(arr1,arr2) # Elementwise multiply arr1 by arr2
np.divide(arr1,arr2) # Elementwise divide arr1 by arr2
np.power(arr1,arr2) # Elementwise raise arr1 raised to the power of arr2
np.array_equal(arr1,arr2) # Returns True if the arrays have the same elements and shape
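One caveat worth spelling out: np.multiply is element-wise, while the row-by-column product described above corresponds to np.dot (or np.matmul). A small example of the difference:

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
np.multiply(a, b) # element-wise: array([[ 5, 12], [21, 32]])
np.dot(a, b) # matrix product: array([[19, 22], [43, 50]])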
Other math operations:
np.sqrt(arr) # Square root of each element in the array
np.sin(arr) # Sine of each element in the array
np.log(arr) # Natural log of each element in the array
np.abs(arr) # Absolute value of each element in the array
np.ceil(arr) # Rounds up to the nearest int
np.floor(arr) # Rounds down to the nearest int
np.round(arr) # Rounds to the nearest int
4. Array Manipulation Routines
Adding/removing Elements
append() function is used to append values to the end of a given array.
np.append([0, 1, 2], [[3, 4, 5], [6, 7, 8]])
>>> array([0, 1, 2, 3, 4, 5, 6, 7, 8])

np.append([[0, 1, 2], [3, 4, 5]], [[6, 7, 8]], axis=0)
>>> array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
The axis along which values are appended. If the axis is not given, both array and values are flattened before use.
insert() is used to insert an element before the given index of the array.
arr = np.arange(1,6)
np.insert(arr,2,10) # Inserts 10 into arr before index 2
>>>array([ 1, 2, 10, 3, 4, 5])
With delete() we can delete any row or column from the ndarray.
arr = np.arange(12).reshape(3, 4)
>>> [[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]

np.delete(arr,2,axis=0) # Deletes row on index 2 of arr
>>> array([[0, 1, 2, 3],
[4, 5, 6, 7]])

np.delete(arr,3,axis=1) # Deletes column on index 3 of arr
>>> array([[ 0, 1, 2],
[ 4, 5, 6],
[ 8, 9, 10]])
sort() function can be used to sort the list in both ascending and descending order.
oned_arr = np.array([3,8,5,1])
np.sort(oned_arr)
>>> array([1, 3, 5, 8])

arr = np.array([[5, 4, 6, 8],
[1, 2, 4, 8],
[1, 5, 2, 4]])

# sort each column of arr
np.sort(arr, axis=0)
>>> array([[1, 2, 2, 4],
[1, 4, 4, 8],
[5, 5, 6, 8]])

# sort each row of arr
np.sort(arr, axis=1)
>>> array([[4, 5, 6, 8],
[1, 2, 4, 8],
[1, 2, 4, 5]])
Join NumPy Arrays
Joining means putting contents of two or more arrays in a single array. In NumPy, we join arrays by axes. We pass a sequence of arrays that we want to join to the concatenate() function, along with the axis. If the axis is not explicitly passed, it is taken as 0.
# Adds arr2 as rows to the end of arr1
arr1 = np.array([1, 2, 3])
arr2 = np.array([4, 5, 6])
arr = np.concatenate((arr1, arr2), axis=0)
>>> array([1, 2, 3, 4, 5, 6]) # Adds arr2 as columns to end of arr1
arr1 = np.array([[1, 2, 3],[4, 5, 6]])
arr2 = np.array([[7, 8, 9],[10, 11, 12]])
arr = np.concatenate((arr1,arr2),axis=1)
>>> array([[ 1, 2, 3, 7, 8, 9],
[ 4, 5, 6, 10, 11, 12]])
Split NumPy Arrays
Cool, now we know how to merges multiple arrays into one. How to break one array into multiple? We use array_split() for splitting arrays, we pass it the array we want to split and the number of splits.
Note: If the array has fewer elements than required, it will adjust from the end accordingly.
# Splits arr into 4 sub-arrays
arr = np.array([1, 2, 3, 4, 5, 6])
new_arr = np.array_split(arr, 4)
>>> [array([1, 2]), array([3, 4]), array([5]), array([6])]

# Splits arr horizontally into 2 sub-arrays
arr = np.array([1, 2, 3, 4, 5, 6])
new_arr = np.hsplit(arr, 2)
>>> [array([1, 2, 3]), array([4, 5, 6])]
Select element(s)
NumPy offers a wide variety of methods for indexing and slicing elements or groups of elements in arrays.
Note: NumPy arrays are zero-indexed, meaning that the index of the first element is 0, not 1.
Suppose we have two arrays, one contains user_name, and the other stores the number of articles that the person has read.
user_name = np.array(['Katie','Bob','Scott','Liz','Sam'])
articles = np.array([100, 38, 91, 7, 25])

user_name[4] # Return the element at index 4
>>> 'Sam'

articles[3] = 17 # Assign the array element at index 3 the value 17
articles
>>> array([100, 38, 91, 17, 25])

user_name[0:3] # Return the elements at indices 0,1,2
>>> array(['Katie', 'Bob', 'Scott'], dtype='<U5')

user_name[:2] # Return the elements at indices 0,1
>>> array(['Katie', 'Bob'], dtype='<U5')

articles < 50 # Return an array with boolean values
>>> array([False, True, False, True, True])

articles[articles < 50] # Return the element values
>>> array([38, 17, 25])

# Return the user_name of readers with more than 50 but fewer than 100 articles
user_name[(articles < 100) & (articles > 50)]
>>> array(['Scott'], dtype='<U5')
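A related helper worth knowing is np.where, shown here on the same made-up reader data:

np.where(articles < 50) # Indices of the light readers
>>> (array([1, 3, 4]),)

np.where(articles < 50, 'light', 'heavy') # Label every reader
>>> array(['heavy', 'light', 'heavy', 'light', 'light'], dtype='<U5')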
We use similar methods for selecting elements in multi-dimensional arrays:
arr[2,5] # Returns the 2D array element on index [2][5]
arr[1,3]=10 # Assigns array element on index [1][3] the value 10
arr[0:3] # Returns rows 0,1,2
arr[0:3,4] # Returns the elements on rows 0,1,2 at column 4
arr[:2] # Returns rows 0,1
arr[:,1] # Returns the elements at index 1 on all rows
5. Statistical Operations
Find the Maximum and Minimum Values
Often we want to know the maximum and minimum value in an array or subset of an array. This can be accomplished with the max and min methods. Using the axis parameter we can also apply the operation along a certain axis:
Suppose we store the number of articles each person reads per month in an array.
articles = np.array([[10, 23, 17],
[41, 54, 65],
[71, 18, 89]])

# Return maximum element
np.max(articles)
>>> 89
np.max(articles, axis=0) # Find maximum element in each column
>>> array([71, 54, 89])
np.max(articles, axis=1) # Find maximum element in each row
>>> array([23, 65, 89])
We can use similar methods to find the minimum elements:
np.min(arr) # Return minimum element
np.min(arr,axis=0) # Find minimum element in each column
np.min(arr,axis=1) # Find minimum element in each row
Calculate the Average, Variance, and Standard Deviation
Just like with max() and min() , we can easily get descriptive statistics about the whole matrix or do calculations along a single axis.
np.mean(arr,axis=0) # Return mean along specific axis
arr.sum() # Return sum of arr
arr.cumsum(axis=1) # Cumulative sum of the elements along each row
np.var(arr) # Return the variance of array
np.std(arr,axis=1) # Return the standard deviation along specific axis
np.corrcoef(arr) # Return correlation coefficients of array
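For instance, applied to the articles matrix from the max/min example above:

np.mean(articles, axis=0) # Mean of each column
>>> array([40.66666667, 31.66666667, 57.])

articles.sum() # Sum of all elements
>>> 388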
The code in this note is available on Github.
That’s it!
I consider this note as the basics of NumPy. You probably come across these functions repeatedly when reading existing code at work or doing tutorials online. I will try to continuously update this as I find more useful Numpy functions.
All learning activities are undertaken throughout time and experience. It is impossible to learn Python in a couple of hours. Remember that the hardest part of any endeavor is the beginning, and you have passed that, keep on, keeping on!!!
Resources
Numpy is a very important library on which almost every data science or machine learning Python packages such as SciPy, Matplotlib, Scikit-learn depends on to a reasonable extent. It is important to have a strong understanding of the fundamentals. Conveniently, there are some great resources to help with this task. I have listed some of my favorites below, some of which get deeper into aspects of linear algebra; check them out if you are eager to learn more! | https://towardsdatascience.com/numpy-cheat-sheet-4e3858d0ff0e | ['Xuankhanh Nguyen'] | 2020-07-23 20:42:42.778000+00:00 | ['Machine Learning', 'Deep Learning', 'Programming', 'Data Science', 'Python'] |
Keeping a beginner’s mind | Keeping a beginner’s mind
“In the beginner’s mind there are many possibilities, but in the expert’s there are few” ― Shunryu Suzuki
illustration by Malia Eugenio
The eleventh time we design a sign-up page, the interface solution comes out almost automatically. We already have an image in our head of what the needs, the hierarchy, flows, and scenarios are. This somewhat automated process makes us more efficient in our craft. The more design solutions we can automate like that, the easier it becomes for us to move on to tackling new problems and new projects. However, while practice makes perfect, repeating the exact same steps won’t always lead to the best outcomes. We can’t let this lazy confidence — often wrapped and sold as efficiency — get in our way to becoming better designers.
That’s where curiosity comes into play. We need to stay curious about our craft, our industry, our users, and the problems we have to solve. As an article on HBR on curiosity put it: “we are less likely to fall prey to confirmation bias (looking for information that supports our beliefs rather than for evidence suggesting we are wrong) and to stereotyping people (making broad judgments, such as that women or minorities don’t make good leaders). Curiosity has these positive effects because it leads us to generate alternatives.”
Generating alternatives is a key step in our design process and one that we can only do effectively if we can put ourselves in a position of learning. And we can always learn from people around us. But we don’t. As we grow in our careers, we often avoid asking questions or putting ourselves in a position that makes us feel vulnerable. Seniority is not about knowing everything, but knowing where to learn. In many ways, curiosity is about keeping a beginner’s mind and using it as a tool of our craft.
Curiosity is a design tool to open new possibilities and avoid canned solutions.
While we are all born with creativity and we all have this tool, there are a few things we need to do to keep it sharp:
Automate the tedious tasks, not the thinking
Sure, we do need to optimize our work. But it’s a matter of automating the tedious stuff so you can free up more of your time for important problem-solving. Don’t automate the solution for a problem, automate the emails in your inbox (and even then, as a good designer, take some time every now and then to see how you can improve your flows). Instead of automating to be able to do more work, automate so you can spend time on work that matters.
In our day-to-day, there is a constant back-and-forth between the many tools we use: from sketching solutions to prioritizing problems with product managers. We shouldn’t then pick one to focus and automate the other. Both are part of our craft and thinking. Both matter and both need time. They can’t be automated the same way we automate how we export assets or create templates for meetings. Unlock your curiosity from the tyranny of efficiency.
Ask questions
Designers and their whys… It goes without saying that it’s important to ask the right questions. Questions open spaces for design to grow, and design will grow and expand to the space it’s given. If you don’t know exactly what to ask or where to start, take advantage of some frameworks and guides that exist out there, such as these IA Lenses we’ve been recently experimenting with here at SurveyMonkey. Find more ways to listen to your user, and you will see how this will spark new questions about them.
IA lenses in action, by Jasmine Rosen
Asking questions also doesn’t mean only asking the big questions. Sure, those are important too, but everyone you work with is already asking them. The insight often happens when we ask simple questions to people that we often don’t collaborate with much. If we’re trying to find new answers, what we ask is as important as who we ask.
Set learning goals and expand your skills
Setting a goal might sound counterintuitive after everything else I’ve said so far. We are already so fatigued from the goals and metrics we pay attention to in our day-to-day, adding another one seems nonsensical. Since our brains are at this point so deeply wired to think about goals, you can see this just as a hack to open space in your day to see beyond them. Goals are, after all, a way to make space in our lives for something we want.
Set some time aside and create a goal to learn something new beyond our designiverse.
With the world going through so many challenges on almost every front, from climate change to the rise of authoritarianism in politics, it’s even more important for us to learn and understand (as citizens and designers) what is happening around us and how we can use our skills to help. Being exposed to different realities other than our work environment can make us better designers, but most importantly, better people. The first step is wanting to learn and keep our mind open to it.
Be vulnerable
If we avoid showing our vulnerability to avoid fear or shame, we miss out on the opportunity to create more authentic connections with our team (and in our lives) and to be more courageous and impactful with our work. When you make yourself vulnerable, people tend to relate and connect to you more. It unleashes a cascade effect of true connections, candor, and collaboration. As Elliot Hedmans puts it, if we reject vulnerability, we kill curiosity: we cannot reflect, and we cannot pivot to a better solution.
Brené Brown dedicated her career to studying vulnerability and has amazing talks on this topic:
There are many fans of Brené Brown and her work on vulnerability on our team here at SurveyMonkey
Being vulnerable is even more important if you’re in a leadership position. As a leader, you can share your frustrations, your mistakes, and seek advice from your team. Your team looks up to you, and by showing vulnerability, you’ll make them feel more comfortable doing the same. They’ll be empowered to offer ideas outside of their comfort zone. Paraphrasing Shunryu Suzuki again: the best way to lead people is to encourage them to be mischievous.
The next time you have to design a sign-up form or any other piece of work that feels automatic, try keeping a beginner’s mind to be able to see all the possibilities. Use your curiosity as a tool to learn more about your users and see beyond the problem that was given to you. There are many exciting opportunities awaiting you beyond your expert mind. | https://medium.com/curiosity-by-design/keeping-a-beginners-mind-8913d38af934 | ['Caio Braga'] | 2019-08-02 19:49:05.327000+00:00 | ['Product Design', 'Careers', 'Surveymonkey', 'UX', 'Design'] |
10 Books You Can Binge In A Day. Read a lot of books in a few hours. | Genre: Crime Thriller
Goodreads
On the face of it, this is a gritty cold-blooded thriller, but this 97-page novella packs in much more than that — exploring clinical depression, child abuse and a twisted father-son relationship that goes a long way in shaping the protagonist, the ex-Marine Joe, and giving the audience an insight into his actions. Not a single paragraph in the book is a drag, and every dialogue and every character has a role to play in moving this dark, disturbing story forward. Full points to the author for holding my attention for a gripping two hours.
Reading this book is a rare treat because Ames creates a story that is very atmospheric — letting the reader imagine scents and sights masterfully with his words. Apart from that, the discussion of mental health is very subtle and delicate. Kudos to the author for giving us a real and relatable protagonist who is clinically depressed, and yet carries on through life without it being a debilitating influence. | https://medium.com/publishous/10-books-you-can-finish-in-a-day-4f70b15c8cda | ['Anangsha Alammyan'] | 2020-06-01 19:44:14.492000+00:00 | ['Books', 'Reading', 'Self Improvement', 'Book Recommendations', 'Book Review'] |
Topic Modeling in Python: Latent Dirichlet Allocation (LDA) | Topic Modeling in Python: Latent Dirichlet Allocation (LDA)
How to get started with topic modeling using LDA in Python
Introduction
Topic models, in a nutshell, are a type of statistical language model used for uncovering hidden structure in a collection of texts. More practically and intuitively, you can think of topic modeling as a task of:
Dimensionality Reduction, where rather than representing a text T in its feature space as {Word_i: count(Word_i, T) for Word_i in Vocabulary}, you can represent it in a topic space as {Topic_i: Weight(Topic_i, T) for Topic_i in Topics}
Unsupervised Learning, where it can be compared to clustering, as in the case of clustering, the number of topics, like the number of clusters, is an output parameter. By doing topic modeling, we build clusters of words rather than clusters of texts. A text is thus a mixture of all the topics, each having a specific weight
Tagging, where abstract "topics" that occur in a collection of documents are used to best represent the information in them.
There are several existing algorithms you can use to perform the topic modeling. The most common of it are, Latent Semantic Analysis (LSA/LSI), Probabilistic Latent Semantic Analysis (pLSA), and Latent Dirichlet Allocation (LDA)
In this article, we’ll take a closer look at LDA, and implement our first topic model using the sklearn implementation in python 2.7
Theoretical Overview
LDA is a generative probabilistic model that assumes each topic is a mixture over an underlying set of words, and each document is a mixture over a set of topic probabilities.
We can describe the generative process of LDA as, given the M number of documents, N number of words, and prior K number of topics, the model trains to output:
psi, the distribution of words for each topic K
phi, the distribution of topics for each document i
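Put together, the generative story above can be sketched in a few lines of Python; the function and parameter names below are my own, chosen only to mirror the psi/phi notation:

import numpy as np

def generate_corpus(n_docs, doc_len, n_topics, vocab_size, alpha=0.1, beta=0.01):
    psi = np.random.dirichlet([beta] * vocab_size, size=n_topics) # word distribution per topic
    corpus = []
    for _ in range(n_docs):
        phi = np.random.dirichlet([alpha] * n_topics) # topic distribution for this document
        doc = []
        for _ in range(doc_len):
            z = np.random.choice(n_topics, p=phi) # draw a topic
            w = np.random.choice(vocab_size, p=psi[z]) # draw a word from that topic
            doc.append(w)
        corpus.append(doc)
    return corpus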
Parameters of LDA
The alpha parameter is the Dirichlet prior concentration parameter that represents document-topic density: with a higher alpha, documents are assumed to be made up of more topics, resulting in a more specific topic distribution per document. The beta parameter is the analogous prior concentration parameter that represents topic-word density: with a higher beta, topics are assumed to be made up of most of the words, resulting in a more specific word distribution per topic.
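In scikit-learn these two priors are exposed as doc_topic_prior (alpha) and topic_word_prior (beta); a small example of setting them explicitly, with arbitrary values:

from sklearn.decomposition import LatentDirichletAllocation as LDA

lda = LDA(n_components=10, doc_topic_prior=0.1, topic_word_prior=0.01, random_state=0)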
LDA Implementation
The complete code is available as a Jupyter Notebook on GitHub
1. Loading data
2. Data cleaning
3. Exploratory analysis
4. Preparing data for LDA analysis
5. LDA model training
6. Analyzing LDA model results
Loading data
For this tutorial, we’ll use the dataset of papers published in NIPS conference. The NIPS conference (Neural Information Processing Systems) is one of the most prestigious yearly events in the machine learning community. The CSV data file contains information on the different NIPS papers that were published from 1987 until 2016 (29 years!). These papers discuss a wide variety of topics in machine learning, from neural networks to optimization methods, and many more.
Let’s start by looking at the content of the file
# Importing modules
import pandas as pd
import os

os.chdir('..')

# Read data into papers
papers = pd.read_csv('./data/NIPS Papers/papers.csv')

# Print head
papers.head()
Sample of raw data
Data Cleaning
Since the goal of this analysis is to perform topic modeling, we will solely focus on the text data from each paper, and drop other metadata columns
# Remove the columns
papers = papers.drop(columns=['id', 'event_type', 'pdf_name'], axis=1)

# Print out the first rows of papers
papers.head()
Remove punctuation/lower casing
Next, let’s perform a simple preprocessing on the content of paper_text column to make them more amenable for analysis, and reliable results. To do that, we’ll use a regular expression to remove any punctuation, and then lowercase the text
# Load the regular expression library
import re

# Remove punctuation
papers['paper_text_processed'] = papers['paper_text'].map(lambda x: re.sub('[,\.!?]', '', x))

# Convert the titles to lowercase
papers['paper_text_processed'] = papers['paper_text_processed'].map(lambda x: x.lower())

# Print out the first rows of papers
papers['paper_text_processed'].head()
Exploratory Analysis
To verify whether the preprocessing happened correctly, we'll make a word cloud using the wordcloud package to get a visual representation of the most common words. This is key to understanding the data, ensuring we are on the right track, and deciding whether any more preprocessing is necessary before training the model.
# Import the wordcloud library
from wordcloud import WordCloud

# Join the different processed titles together.
long_string = ','.join(list(papers['paper_text_processed'].values))

# Create a WordCloud object
wordcloud = WordCloud(background_color="white", max_words=5000, contour_width=3, contour_color='steelblue')

# Generate a word cloud
wordcloud.generate(long_string)

# Visualize the word cloud
wordcloud.to_image()
Prepare text for LDA Analysis
Next, let’s work to transform the textual data in a format that will serve as an input for training LDA model. We start by converting the documents into a simple vector representation (Bag of Words BOW). Next, we will convert a list of titles into lists of vectors, all with length equal to the vocabulary.
We’ll then plot the ten most frequent words based on the outcome of this operation (the list of document vectors). As a check, these words should also occur in the word cloud.
# Load the library with the CountVectorizer method
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style('whitegrid')
%matplotlib inline

# Helper function
def plot_10_most_common_words(count_data, count_vectorizer):
    words = count_vectorizer.get_feature_names()
    total_counts = np.zeros(len(words))
    for t in count_data:
        total_counts += t.toarray()[0]
    count_dict = (zip(words, total_counts))
    count_dict = sorted(count_dict, key=lambda x: x[1], reverse=True)[0:10]
    words = [w[0] for w in count_dict]
    counts = [w[1] for w in count_dict]
    x_pos = np.arange(len(words))
    plt.figure(2, figsize=(15, 15/1.6180))
    plt.subplot(title='10 most common words')
    sns.set_context("notebook", font_scale=1.25, rc={"lines.linewidth": 2.5})
    sns.barplot(x_pos, counts, palette='husl')
    plt.xticks(x_pos, words, rotation=90)
    plt.xlabel('words')
    plt.ylabel('counts')
    plt.show()

# Initialise the count vectorizer with the English stop words
count_vectorizer = CountVectorizer(stop_words='english')

# Fit and transform the processed titles
count_data = count_vectorizer.fit_transform(papers['paper_text_processed'])

# Visualise the 10 most common words
plot_10_most_common_words(count_data, count_vectorizer)
Top 10 most common words
LDA model training and results visualization
To keep things simple, we will only tweak the number of topic parameters.
import warnings
warnings.simplefilter("ignore", DeprecationWarning)

# Load the LDA model from sk-learn
from sklearn.decomposition import LatentDirichletAllocation as LDA

# Helper function
def print_topics(model, count_vectorizer, n_top_words):
    words = count_vectorizer.get_feature_names()
    for topic_idx, topic in enumerate(model.components_):
        print("\nTopic #%d:" % topic_idx)
        print(" ".join([words[i]
                        for i in topic.argsort()[:-n_top_words - 1:-1]]))

# Tweak the two parameters below
number_topics = 5
number_words = 10

# Create and fit the LDA model
lda = LDA(n_components=number_topics, n_jobs=-1)
lda.fit(count_data)

# Print the topics found by the LDA model
print("Topics found via LDA:")
print_topics(lda, count_vectorizer, number_words)
Final Topics found vis LDA
Analyzing LDA model results
Now that we have a trained model let’s visualize the topics for interpretability. To do so, we’ll use a popular visualization package, pyLDAvis which is designed to help interactively with:
1. Better understanding and interpreting individual topics, and
2. Better understanding the relationships between the topics.
For (1), you can manually select each topic to view its top most frequent and/or “relevant” terms, using different values of the λ parameter. This can help when you’re trying to assign a human interpretable name or “meaning” to each topic.
For (2), exploring the Intertopic Distance Plot can help you learn about how topics relate to each other, including potential higher-level structure between groups of topics.
%%time
from pyLDAvis import sklearn as sklearn_lda
import pickle
import pyLDAvis

LDAvis_data_filepath = os.path.join('./ldavis_prepared_'+str(number_topics))

# # this is a bit time consuming - make the if statement True
# # if you want to execute visualization prep yourself
if 1 == 1:
    LDAvis_prepared = sklearn_lda.prepare(lda, count_data, count_vectorizer)
    with open(LDAvis_data_filepath, 'w') as f:
        pickle.dump(LDAvis_prepared, f)

# load the pre-prepared pyLDAvis data from disk
with open(LDAvis_data_filepath) as f:
    LDAvis_prepared = pickle.load(f)

pyLDAvis.save_html(LDAvis_prepared, './ldavis_prepared_'+ str(number_topics) +'.html')
Closing Notes
Machine learning has become increasingly popular over the past decade, and recent advances in computational availability have led to exponential growth in people looking for ways to incorporate new methods to advance the field of Natural Language Processing.
Often, we treat topic models as black-box algorithms, but hopefully this post helped shed light on the underlying math and the intuitions behind it, along with high-level code to get you started with any textual data.
In the next article, we’ll go one step deeper into understanding how you can evaluate the performance of topic models, tune its hyper-parameters to get more intuitive and reliable results.
Sources:
[1] Topic model — Wikipedia. https://en.wikipedia.org/wiki/Topic_model
[2] Distributed Strategies for Topic Modeling. https://www.ideals.illinois.edu/bitstream/handle/2142/46405/ParallelTopicModels.pdf?sequence=2&isAllowed=y
[3] Topic Mapping — Software — Resources — Amaral Lab. https://amaral.northwestern.edu/resources/software/topic-mapping
[4] A Survey of Topic Modeling in Text Mining. https://thesai.org/Downloads/Volume6No1/Paper_21-A_Survey_of_Topic_Modeling_in_Text_Mining.pdf | https://towardsdatascience.com/end-to-end-topic-modeling-in-python-latent-dirichlet-allocation-lda-35ce4ed6b3e0 | ['Shashank Kapadia'] | 2019-09-05 22:20:58.102000+00:00 | ['Topic Modeling', 'Data Science', 'Python', 'In Depth Analysis', 'Towards Data Science'] |
Advanced Kubernetes Operators Development | Advanced Kubernetes Operators Development
How to build a production-standard Operator based on Kubebuilder. Tips and pitfalls
In the previous article ㉿ (Kubernetes Operator for Beginners — What, Why, How), I have described Kubernetes Operators’ concepts and how to implement one with a simple example of auto-generating ServiceAccount and ClusterRoleBinding through Kubebuilder.
But that example is rough and does not meet the production standards, just for illustration. 😞
poor operator, by author
CRD Condition is not set . The Condition status is generally a monitoring field used by various kubectl tools to observe the resources’ status.
. The is generally a monitoring field used by various tools to observe the resources’ status. Without a health check, we can not add a liveness probe and readiness probe.
Operators that run in production obviously need more.
This article lists multiple aspects of building a stable and functional Operator by improving the previous one. And we will discuss some pitfalls encountered during the development.
Before starting, let’s review what Operator Controller is and what functions we want to achieve with it in Kubebuilder.
Controller implements and manages the reconciliation loop
Controller reads the desired state from the resources’ YAML and makes sure they reach the expected state
I recommend everyone to read Operator Best Practice before implementing an operator. I briefly conclude the points that are meaningful to me.
Do one thing and do it well , which is consistent with SRP. It is necessary to follow this principle for better stability, performance, and less difficulty in development and expansion.
, which is consistent with SRP. It is necessary to follow this principle for better stability, performance, and less difficulty in development and expansion. One controller controls or owns one CRD , which aligns with the above one.
, which aligns with the above one. Namespace is configurable , which is also important and considered to be following Kubernetes Namespace Best practice. In our code case, it is a good strategy to put different ServiceAccounts in different namespaces.
, which is also important and considered to be following Kubernetes Namespace Best practice. In our code case, it is a good strategy to put different in different namespaces. Expose metrics . It should go without saying that we need Prometheus to monitor our Operators.
. It should go without saying that we need Prometheus to monitor our Operators. An Operator should not relate to another Operator. Always keep in mind to keep it simple since a too complicated Operator is of no help.
Always keep in mind to keep it simple since a too complicated Operator is of no help. Use webhooks to validate CRD input. When your CRD has different versions, webhooks are very important.
There are more points, and I won’t expand here. Our need for a “fancy” Operator is like the pursuit of a nice house.
Fancy Operator, by author
Improve Our Operator
My goal is to make the Operator more stable and reliable. I will keep the UserIdentity code for comparison, and develop it on a new Kind and add new content.
So the first step is using the Kubebuilder tool to create the next version of the CRD type and generate the controller.
kubebuilder create api --group identity --version v2 --kind UserIdentityv2
If I don't use a new Kind, Kubebuilder will not generate a new controller but will require writing the reconcile logic of both versions in the same controller.
The Create command will generate Go type files such as useridentityv2_types.go in the api/v2 directory.
First, I will simply copy the fields of v1.UserIdentity .
The command will also create a new useridentityv2_controller.go file in the controller directory, into which I copy the similar logic from the v1 controller.
Add Logs
The first step of optimization is more logs.
Kubebuilder embeds logr in its own framework to record logs, which we can use as well.
First, assign the default log object (which the scaffold ignores) to a named variable.
log := r.Log.WithValues("useridentity", req.NamespacedName)
Generally, we will add logs in err processing.
if err != nil {
    log.Error(err, fmt.Sprintf("Error create ServiceAccount for user: %s, project: %s", user, project))
    return ctrl.Result{}, nil
}
We also add business logs at key points.
log.V(10).Info(fmt.Sprintf("Create Resources for User:%s, Project:%s", user, project))
log.V(10).Info(fmt.Sprintf("Create ServiceAccount for User:%s, Project:%s finished", user, project))
Tips on verbosity logs:
Verbosity-levels on info logs. This gives developers a chance to indicate arbitrary grades of importance for info logs, without assigning names with semantic meaning such as “warning”, “trace”, and “debug”. from — https://github.com/go-logr/logr
Because of its unique flexibility, verbosity logging has appeared in more and more open-source Go frameworks and has gradually become a prevalent standard. Better isolation greatly improves the convenience of debugging.
When viewing the log with kubectl , we use the following command to view verbosity logs. ({{}} is the variable value that needs to be replaced and the same for the following content in the article.)
kubectl get po -n {{ns}} -L {{label}}={{value}} --sort-by='{{field}}' -v10
Be prudent about logging; don't log too much. No one wants to be buried under millions of log lines when debugging. If you are curious about the Kubernetes logging mechanism 👉 here
Set Conditions
In Kubernetes resource management, conditions are crucial concepts associated with the Pod lifecycle. Setting resource conditions reasonably in the sync loop is necessary when using probe functions such as readiness probes. The blog post What the heck are Conditions in Kubernetes controllers? gives a more detailed explanation.
To set Conditions for CRD, we need to add the conditions field to the status definition of UserIdentity .
type UserIdentityV2Status struct {
    // Conditions is the list of error conditions for this resource
    Conditions status.Conditions `json:"conditions,omitempty"`
}
Then you can modify the CRD conditions in the key position of the controller.
Add the condition of UpdateFailed where err appears.
Update Condition Status
Set the condition to UpToDate after the sync succeeds.
condition := status.Condition{
Type: Ready,
Status: v1.ConditionTrue,
Reason: UpToDate,
// on success err is nil, so describe the healthy state instead of calling err.Error()
Message: "user identity resources are up to date",
}
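Once built, the condition still has to be written back to the resource's status and pushed to the API server. A short sketch, assuming the status helper package used above exposes SetCondition and that ctx and userIdentity are already in scope:
userIdentity.Status.Conditions.SetCondition(condition)
if err := r.Status().Update(ctx, &userIdentity); err != nil {
    log.Error(err, "unable to update UserIdentityV2 status")
    return ctrl.Result{}, err
}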
After our controller successfully sets the condition, we can simply find the problematic CRD via kubectl .
kubectl get po -n {{ns}} -L {{label}}={{value}}
Add Health Checks
One of the most important functions of Kubernetes as an orchestration tool is shutting down unhealthy Pods and restarting them automatically.
Simply put, this feature relies on the readiness probe and the liveness probe. For our Operator, we need to add code that supports the readiness and liveness probes.
Here is an example of a liveness probe and the health check it needs.
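The wiring itself goes into main.go . The article embeds the full example, so this is only a rough sketch, assuming the manager serves its probes on the same port the probe below targets ( healthz.Ping comes from sigs.k8s.io/controller-runtime/pkg/healthz ):
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:                 scheme,
    HealthProbeBindAddress: ":6789", // port the liveness probe below points at
})
// ...
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
    setupLog.Error(err, "unable to set up health check")
    os.Exit(1)
}
if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
    setupLog.Error(err, "unable to set up ready check")
    os.Exit(1)
}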
After adding the relevant logic, we can enable the liveness probe in config/manager/manager.yaml .
livenessProbe:
httpGet:
path: /healthz
port: 6789
initialDelaySeconds: 20
periodSeconds: 10
Note that this goes into the manager's Deployment YAML, not the CRD YAML.
Add resource deletion logic
Our sync loop only contains code for adding users and configuring their related resources. However, when a user is deleted, the related resources also need to be deleted.
How to implement this depends on how we receive user update information.
FindAll interface. Get all users, then add and delete related resources by comparison.
Event notification. Adding or deleting users can be handled by subscribing to upstream events, which is currently the most prevalent event-driven pattern.
Event-driven, by author
Kubernetes APIServer naturally supports event-driven with the watch function.
So there are only two things that need to be done in the Controller.
Create the events we need to watch. Here we watch user update events from a Pub/Sub topic.
Add a watcher to the Controller's SetupWithManager function.
// define userevent and run
ch := make(chan event.GenericEvent)
subscription := r.PubsubClient.Subscription("userevent")
userEvent := CreateUserEvents(mgr.GetClient(), subscription, ch)
go userEvent.Run()
return ctrl.NewControllerManagedBy(mgr).
For(&identityv2.UserIdentityV2{}).
Watches(&source.Channel{Source: ch, DestBufferSize: 1024}, &handler.EnqueueRequestForObject{}).
Complete(r)
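The CreateUserEvents helper and its Run method live in the article's repo. A rough sketch of the idea — receive user-update messages from the Pub/Sub subscription and push a GenericEvent for the affected resource into the channel the controller watches — could look like this (the struct shape and message format are assumptions, and older controller-runtime versions also require the event's Meta field to be set):
type UserEvents struct {
    client       client.Client
    subscription *pubsub.Subscription
    ch           chan<- event.GenericEvent
}
func (u *UserEvents) Run() {
    ctx := context.Background()
    // Receive blocks and invokes the callback once per user-update message
    err := u.subscription.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
        defer msg.Ack()
        // assumption: the message payload carries the affected user's name
        u.ch <- event.GenericEvent{
            Object: &identityv2.UserIdentityV2{
                ObjectMeta: metav1.ObjectMeta{Name: string(msg.Data)},
            },
        }
    })
    if err != nil {
        ctrl.Log.Error(err, "pubsub receive stopped")
    }
}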
And here comes the last step: configuring the Pub/Sub topic and RBAC permissions.
Add More Functionalities
Even though we have made up for the shortcomings in UserIdentity , much remains to be improved, such as support for kubectl and the use of more Kubernetes features.
In my view, only Operators with these functions are relatively reliable.
reliable operator, by author
Let’s take a look at how to implement some of these functions.
Use Kubebuilder features
Kubebuilder's scaffolding provides many special functions that help our development and practice. Among them, the marker comments are some of the most convenient and useful.
Let's look at an example of the advantages of adding marker comments to the types' fields.
Add more detailed info to CRD. For example, if you add the following comment to the field, the field information can be displayed in the output of kubectl get .
// +kubebuilder:printcolumn
Add a default value or verification to the CRD field. For example, adding the following comment to the field enables us to limit the field value to 1, 2, 3; otherwise, an error will be reported.
// +kubebuilder:validation:Enum=1;2;3
Stop the ApiServer from pruning fields that are not specified.
// +kubebuilder:pruning:PreserveUnknownFields
For more comments on Kubebuilder, please refer to the Kubebuilder book.
With our code, we can print the RoleRef column used by the UserIdentityV2 . Note that the full printcolumn marker takes a name, a type, and a JSONPath, and sits above the Kind's type definition, pointing at the field:
// +kubebuilder:printcolumn:name="RoleRef",type=string,JSONPath=".spec.roleRef.name"
while the field itself stays in the spec:
RoleRef rbacv1.RoleRef `json:"roleRef,omitempty"`
Support unstructured data
In CRD design, sometimes we have to think outside the box. We can't limit ourselves to the current Kubernetes API or resource types, and we may need to handle some special operations with unstructured data.
Take UserIdentity as an example. Here we hardcode the creation of ServiceAccount and ClusterRoleBinding , which results in the following problems.
We need to use the core/v1 and rbac/v1 libraries corresponding to ServiceAccount and ClusterRoleBinding . The more resources we create, the more associated APIs we pull in, and not all required types are available as Go types.
We cannot change which resources are created by modifying our CRD dynamically. Once a modification is required, we must change the controller logic.
So supporting unstructured data is of great importance here. We can switch our CRD design to the following YAML:
We also need code to support template parsing, so let's make the change in useridentityv3_types.go . You may have noticed that I created a UserIdentityV3 to implement the unstructured function, to make it easier to compare with the v1 and v2 code.
// Template is a list of resources to instantiate per repository in Governator
Template []unstructured.Unstructured `json:"template,omitempty"`
The template here essentially uses the Go template function, allowing us to parse the template, inject parameters in the controller, and create objects by unstructured API. Look at the code.
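The real parsing code is in the article's repo; as a minimal sketch of the idea, each template item can be marshalled, run through Go's text/template with the user and project injected, and then created through the unstructured API (the helper name and parameter names here are assumptions):
func renderTemplate(item unstructured.Unstructured, params map[string]string) (*unstructured.Unstructured, error) {
    raw, err := yaml.Marshal(item.Object) // sigs.k8s.io/yaml
    if err != nil {
        return nil, err
    }
    tmpl, err := template.New("resource").Parse(string(raw)) // text/template
    if err != nil {
        return nil, err
    }
    var buf bytes.Buffer
    if err := tmpl.Execute(&buf, params); err != nil {
        return nil, err
    }
    out := &unstructured.Unstructured{}
    if err := yaml.Unmarshal(buf.Bytes(), out); err != nil {
        return nil, err
    }
    return out, nil // the controller then creates the object, e.g. r.Client.Create(ctx, out)
}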
Support Events
The v2 version already supports conditions, but there is still a missing feature: events.
In Kubernetes, resources indicate status changes and other noteworthy information by emitting events, so that users can obtain this information through the kubectl get events command instead of drowning in massive logs.
Here we need Kubernetes client-go/tools/record package.
import "k8s.io/client-go/tools/record"
And we need to define a Recorder in our reconciler.
Recorder record.EventRecorder
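The recorder is typically wired up in main.go when the reconciler is constructed, using the manager's built-in recorder provider (the controller name string here is just an example):
if err = (&controllers.UserIdentityV2Reconciler{
    Client:   mgr.GetClient(),
    Log:      ctrl.Log.WithName("controllers").WithName("UserIdentityV2"),
    Scheme:   mgr.GetScheme(),
    Recorder: mgr.GetEventRecorderFor("useridentityv2-controller"),
}).SetupWithManager(mgr); err != nil {
    setupLog.Error(err, "unable to create controller", "controller", "UserIdentityV2")
    os.Exit(1)
}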
Then we can start to emit events.
r.Recorder.Event(&userIdentity, corev1.EventTypeNormal, string(condition.Reason), condition.Message)
// or
r.Recorder.Event(&userIdentity, corev1.EventTypeWarning, string(UpdateFailed), "Failed to update resource status")
Support Webhooks
A WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST — from kubernetes.io
Kubebuilder naturally supports webhooks, but only admission webhooks.
You can add a verification webhook to UserIdentityV3 by executing the following command.
kubebuilder create webhook --group identity --version v3 --kind UserIdentityV3 --defaulting --programmatic-validation
Then a useridentityv3_webhook.go will be generated in the api/v3/ directory. And the webhook is set into the manager in main.go by default.
if err = (&identityv3.UserIdentityV3{}).SetupWebhookWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create webhook", "webhook", "UserIdentityV3")
os.Exit(1)
}
As mentioned above, adding marker comments ( // +kubebuilder:validation ) to the types' fields also provides validation, but a webhook can do more.
For UserIdentity , though, I haven't thought of a good use case yet. If you are interested, you can add the logic you want on top of my code.
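For reference, the scaffolded useridentityv3_webhook.go exposes hooks like the following, which is where any defaulting or validation logic would go; the checks suggested in the comments are purely illustrative, not the article's code:
// Default implements webhook.Defaulter
func (r *UserIdentityV3) Default() {
    // e.g. fall back to a read-only role when none is set
}
// ValidateCreate implements webhook.Validator
func (r *UserIdentityV3) ValidateCreate() error {
    // e.g. reject an object whose template list is empty
    return nil
}
// ValidateUpdate implements webhook.Validator
func (r *UserIdentityV3) ValidateUpdate(old runtime.Object) error {
    return nil
}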
Reconcile Periodically
Adding the code below at the end of the reconcile function triggers Operator reconcile every 10 minutes.
return ctrl.Result{RequeueAfter: 10 * time.Minute}, nil
Set OwnerReference
Resource ownership is basic knowledge in Kubernetes; it lets Kubernetes delete the sub-resources when the owner resource is deleted.
If we run kubectl delete useridentityv3 , it deletes the sub-resources by default. Of course, we can set cascade=false to disable that.
The goal here is to facilitate Kubernetes garbage collection, especially when multiple resources are deeply coupled — consider, for example, the tight coupling between Deployment → ReplicaSet → Pod.
Back in our code, we can set the owner of all the unstructured resources to our CRD. Then they are cleaned up when the custom resource itself is deleted.
return ctrl.SetControllerReference(userIdentity, &existing, r.Scheme)
Avoid Pitfalls
pitfalls, by author
Although my overall experience with Kubebuilder is good, problems inevitably arise, such as problems with Kubebuilder itself, issues with Kubernetes controller-runtime library, and even some with Golang, including the use of Google’s Pubsub.
The three most representative ones are
Crazy Ginkgo Tests
I don’t like Kubebuilder’s suite_test and ginkgo framework, or maybe I’m not using it correctly.🤷🏻♀️
We cannot run test cases separately in the IDE. Unlike JUnit, where you can run or debug a single test case, suite_test does not support this, at least in GoLand. Imagine how painful that gets with 20 or 100 test cases.🥶
The race condition is annoying. When we design and write unit tests, we generally follow the F.I.R.S.T principles. However, the isolated principle has been repeatedly violated in the ginkgo test, for which I still don't understand the root cause. Even if I create different CRDs with different names and use different reconcilers in the same suite_test , races will often occur (with the race detector enabled) and cause super flaky unit tests.
CreateOrUpdate
It is a real pitfall. If you only read the method comments, you might simply assume that you pass in one object and, if its field values have not changed, the update becomes a harmless no-op.
// CreateOrUpdate creates or updates the given object obj in the Kubernetes
// cluster. The object’s desired state should be reconciled with the existing
// state using the passed in ReconcileFn. obj must be a struct pointer so that
// obj can be updated with the content returned by the Server.
In practice, that is not the case. We discovered that the resources created by our Operator were all deleted and recreated during the CRD's upgrade process, even though nothing in these resources had changed at all.
When we take a closer look at the CreateOrUpdate code, we find that the object copy relies on the DeepCopy function in zz_generated.deepcopy.go , which is automatically generated by the Kubebuilder tool after we define the CRD.
If your object is unstructured or dynamically generated and does not implement the DeepCopy function, then it gets recreated every time!
To solve this problem, you need the mutate func defined by this method.
// The MutateFn is called regardless of creating or updating an object.
//
// It returns the executed operation and an error.
func CreateOrUpdate(ctx context.Context, c client.Client, obj runtime.Object, f MutateFn) (OperationResult, error) {}
Initializing the object inside the MutateFn avoids rebuilding it every time.
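A rough sketch of the pattern, using one of the resources from this article ( controllerutil is sigs.k8s.io/controller-runtime/pkg/controller/controllerutil ; the label and variable names are assumptions):
sa := &corev1.ServiceAccount{
    ObjectMeta: metav1.ObjectMeta{Name: user, Namespace: project},
}
op, err := controllerutil.CreateOrUpdate(ctx, r.Client, sa, func() error {
    // mutate only the fields we own; everything else keeps its live value,
    // so an unchanged object results in a no-op instead of a delete/recreate
    if sa.Labels == nil {
        sa.Labels = map[string]string{}
    }
    sa.Labels["useridentity"] = user
    return controllerutil.SetControllerReference(&userIdentity, sa, r.Scheme)
})
if err != nil {
    log.Error(err, fmt.Sprintf("Error reconciling ServiceAccount for user: %s", user))
    return ctrl.Result{}, err
}
log.V(10).Info(fmt.Sprintf("ServiceAccount for user %s: %s", user, op))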
Set Context Timeout
We hit an error when we launched our operator. It got stuck at some point, and there were no logs, no events, nothing to help us debug.
We reviewed the code again and again, and we literally commented out each part of the code to test it.
Finally, we found out that the upstream gRPC endpoint we connected to had been relocated, but somehow the connection didn't time out or break, leaving our operator stuck forever. We changed the gRPC endpoint to an SRV-based connection to solve the issue.
We still want our operator to be reliable enough to exit instead of blocking forever. That just needs a small change to the context we use everywhere.
ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
defer cancel()
Always add a timeout to the context!
Besides the three issues above, others will arise too. But as developers, locating problems and working them out is part of our daily life. 😆 And don't forget — we create plenty of the bugs ourselves.
To Sum up
After adding all the new code, UserIdentityV3 is now a more decent Operator and ready for production 💯.
Putting aside the shortcomings mentioned above, Kubebuilder is indeed a handy Operator scaffolding tool, which spares developers time so that we can focus on developing controller logic.
And by implementing an Operator, we better understand the internal workings of webhooks, the control plane, etc., get to know the open-source code in the Kubernetes SIGs, and can even contribute code to some SIGs if interested.
Kubebuilder 3.0 is on its way. Join the Kubernetes Slack, the Google group, or follow this document to learn more.
All the code related to this article is on Github. I will keep writing when I have more to share.
Thanks for reading! | https://medium.com/swlh/advanced-kubernetes-operators-development-988edad5f58a | ['Stefanie Lai'] | 2020-12-07 14:29:14.069000+00:00 | ['Kubebuilder', 'Golang', 'Kubernetes', 'Kubernetes Operator'] |
How to Create ‘Moving’ Presentations | In my role as a Design Advocate at Google, I regularly give presentations on designing and building awesome Android apps. Lately I’ve been talking a lot about motion design so my presentations include a lot of videos captured from a device or fancy slide builds/transitions. After presenting, I like to post my deck online, but most of the standard sharing sites don’t support motion; it’s pretty hard to talk about motion in static images!
I’ve found sharing the presentation as a Google Photos album to be a great workaround; it seamlessly switches between static (photo) and animated (video) slides. If viewers have the mobile app then they get a nice swipey fullscreen experience with looping videos.
Here’s an example:
I tend to create presentations in Keynote and there are a few gotchas to the process, so I wanted to document it in case it’s useful to anyone else (and for the next time that I need it and have forgotten all the details!).
Export
Open your keynote presentation and:
1. File > Export To > Images… I create an image per build and export PNGs.
2. I then go through the resulting folder, deleting any intermediary slides I don't want, or any slides with video. Notice how Keynote adds a three digit suffix for each slide/build number; we'll need this later.
3. For each motion slide we want to export, select all slides in the navigator except the one we want, right click and hit Skip slide .
4. Remove any slide-level transition (otherwise the video fades at the end) then File > Export To > QuickTime… I set the next slide/build delay to 0 and the format to 1080p .
Protip: we’ll be doing this a few times so I recommend setting up a shortcut under Settings > Keyboard > Shortcuts > App Shortcuts . I map this to ⌘ + e ; while you’re there I also highly recommend setting up standard shortcuts for zooming and grouping.
5. Hit next and add the appropriate suffix number so that the video appears in the right place amongst the images.
6. Hit undo twice to restore any slide transition and un-skip all other slides.
7. Repeat 3–6 for all other motion slides.
You should now have a folder of images and videos in the correct order. Before uploading, there’re a couple of other things I like to do:
Preparation
1. Google Photos 'helpfully' orders your uploads by creation date (which makes sense for photos). Unfortunately our files aren't in this order as we made the videos after the images. I run a script to alter the file timestamps to match the name order (i.e. this relies on those numbered suffixes being correct); you can find my script here. I tend to set the creation date to when I gave the presentation.
2. The PNGs are pretty large so I run them through ImageOptim before uploading.
Upload
Our files are now ready to go. I upload them all to photos.google.com and create a new album. If our preparation has gone well then they should all be in the expected order. If not then you can manually drag them around. Then:
1. Select the first image, hit the menu and select Use as album cover .
2. For each slide I open up the ℹ️ panel and add speaker notes, links or any text I want to be copy-able.
3. Go back to the album level and select Sharing options and enable sharing and turn off collaboration. I tend to disable comments, but that's up to you.
4. Copy the share link.
5. [Optional] I create a bit.ly short-link as this lets me customize the url to make it more memorable and gives me some stats about clicks. Protip: all bit.ly/foo links can be written j.mp/foo to make them even shorter, which helps on services which truncate urls (like twitter).
6. [Optional] You can then archive all of the images/videos so that they don't pollute your photo library; they'll remain in the album.
You can now share this link. Note that Twitter will unfurl the url and show a nice preview image of your album cover, even with the link shortener 😎:
Things I’ve Learned
I’ve tried exporting the entire presentation as a video and then chopping out the ‘slides’ I want. This was super time consuming and also produced lower quality results; I guess because it was encoded twice.
It’s awesome to be able to easily update an individual slide if you find a typo etc.
Exported images have a different numeric suffix to their slide number (I guess because of builds), so don't just use the slide number as the numeric suffix when exporting each video.
Viewers can’t copy/paste text from the slide. This might not be ideal but you can work around it by adding text to the description or even links to gists etc.
People ‘join’ the album. I’m not sure why? Maybe to bookmark it?
Google Photos seems to de-duplicate images, so If you repeat a slide, it might not appear. Not sure how to ‘fix’ this so I work around it by slightly altering one copy of a repeated slide.
Google Slides does support videos but they show player controls etc which isn’t ideal.
Conclusion
Hopefully this is helpful to someone! I’ve been really happy with the results and will keep using this approach until presentation services up their game. If you know of a service which supports motion i’d love to hear about it!
Here are some more examples of decks i’ve created with this technique: | https://medium.com/google-design/moving-presentations-d4f895e78de3 | ['Nick Butcher'] | 2017-10-06 20:08:15.511000+00:00 | ['Motion Design', 'Design', 'Presentations', 'Animation', 'Android'] |
Your Guide to Kubernetes Operators | This blog post was originally published here on CloudOps’ blog.
Most applications will require resources from the environment they are running on. Memory, CPU, storage, networking, etc. Most of those resources may be consumed easily and transparently, some may not depending on the application. Most applications will require some previous configuration steps before being deployed and will require a few, or maybe a lot, of special maintenance tasks that may be related to backups, restores, file compression, high availability checks, log maintenance, database growth, and sanity routines, etc. They may need to be put into some special state while upgrading to make sure they won’t drop the users for example.
All those things we just described are the applied human technical knowledge on top of an application. All that operational toil is repeated multiple times during the lifecycle of a living and serving software. Of course many times they have a few scripts to automate those tasks. But what if that application lives and grows inside a container, in a Pod, orchestrated by Kubernetes or OpenShift? Is there a better way to automate all of that? Something that could “enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil”? (from the Cloud Native Definition)
And the answer to this question is the operator pattern. A.k.a Kubernetes Operators. So what are they? How to develop one? What can they add to our applications? And how do they add up to our software as a service experience by publishing them to an operator hub?
The best definition I personally like to give is an operator is an extension to the Kubernetes API, in the form of a Custom Resource, reconciled/managed by a standard controller running in a Pod out of a Deployment. Seems complicated, right? Let’s check those parts:
Extending the Kubernetes API
First, let’s step back just a little bit and try to understand it piece by piece. The first question I would ask is how do we interact with Kubernetes? We use kubectl to deploy and maintain our application from a stand-alone admin perspective, we use client-go and other libraries to automate the communication with Kubernetes API. Ok cool. What does the API give to us?
Let’s take a look at what the Kubernetes API gives to us:
All those features are shared between native Kubernetes objects. Many well-designed operations such as create, read, update and delete, the capability of watching endpoints, authentication and authorization, and much more.
We know that Kubernetes resources are built on top of definitions that come from the canonical Kubernetes API that lives in this repository: https://github.com/kubernetes/api
And there we can find the groups, the versions and kind for those resources, right? That is the information that goes straight in the field called TypeMeta. Let’s take a look at that!
If we get a resource such as a DaemonSet and run:
$ kubectl get DaemonSet myDS -o yaml
In the very beginning we’ll see something like below:
apiVersion: apps/v1
kind: DaemonSet
This is telling us that DaemonSets are under the group apps, has the version v1, and is a kind of DaemonSet. And where can we find the corresponding golang type for that object? We just need to navigate into that repository and find the types.go file. Like below:
$ tree -L 2
...
├── apps
│   ├── OWNERS
│   ├── v1
│   ├── v1beta1
│   └── v1beta2
...
Inside the folder v1 we have types.go and we can look for Type DaemonSet like below:
type DaemonSet struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// The desired behavior of this daemon set.
// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec DaemonSetSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// The current status of this daemon set. This data may be
// out of date by some window of time.
// Populated by the system.
// Read-only.
// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status DaemonSetStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
What if we can develop our application as being a native part of Kubernetes this way or at least leveraging all those features in such a way that we just type in kubectl get myapplication and receive back information based on my specific needs? And going further what if we can actually create our own update routines and functions? What if we could leverage the embedded metrics and build deep insights from Kubernetes the same way we do with native resources?
The cool features that share all that good stuff that Kubernetes provides are the Custom Resources and Custom Resource Definitions. They will behave pretty much like the native Daemonsets we saw before. They are extensions of the Kubernetes API that allow us to create our own fields crafting the perfect data structure to represent our application needs. They allow us to have our own api group, versions, and kind.
Here you can check more about CRDs and API extensions. But we’re half done here. What else do we need to put those Custom Resources to life? The controller. Let’s check it!
Controllers: Making it Kubernetes Native
Controllers are nothing more than a loop. The idea is a control loop that, on each iteration, checks the state of some resource. After reading the live state of the desired resource, the control loop runs what we call the reconcile function, which compares that live state with the expected state for that given object. That's the standard way Kubernetes works.
So, if we've defined our own custom object representing our application, with all the fields and data structures it requires, the piece that comes after is this controller with its reconcile function. It gives us control of the state of our application by running custom logic that embeds the human operational knowledge we've talked about before.
If you want to know more about them check here.
Operator SDK: Bootstrapping and Building
Understanding the inner workings of the Kubernetes API, which complies with the OpenAPI standard, is not an easy task. The same goes for creating controllers that run exactly like the native ones using the tools provided by the API Machinery SIG and the controller-runtime libraries. The operator framework was created to facilitate exactly this, and among the tools it provides is the operator-sdk command line tool. Let's check how it helps us quickly scaffold all the necessary tooling so we can concentrate only on the operator logic.
Initializing a new operator project:
$ mkdir myproject
$ cd myproject
operator-sdk init --domain mydomain.com
After running it, a Go project folder will be scaffolded with the minimum elements needed to develop and build the operator.
.
├── Dockerfile
├── Makefile
├── PROJECT
├── bin
├── config
├── go.mod
├── go.sum
├── hack
└── main.go
We have our basic Dockerfile to build the operator, a Makefile with all the automation necessary to test and build, the config folder where all YAML artifacts live, powered by Kustomize, and main.go , where everything begins with the manager that runs our controllers. To add a new API/CRD endpoint with a controller for our custom application, we run the following, for example:
$ operator-sdk create api \
--group=myapp \
--version=v1alpha1 \
--kind=MyApp \
--resource \
--controller
Now we have 2 new folders:
.
├── Dockerfile
├── Makefile
├── PROJECT
├── api
├── bin
├── config
├── controllers
├── go.mod
├── go.sum
├── hack
└── main.go
The folders api and controllers. And there we can find all the code automatically generated to begin the development process.
In the api we find:
$ tree -L 2 api
api
└── v1alpha1
├── groupversion_info.go
├── myapp_types.go
└── zz_generated.deepcopy.go
myapp_types.go will hold all the fields and elements for the application.
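For example, the generated file starts with placeholder spec and status structs roughly like the ones below (the Foo field is just the scaffold's example), which we then replace with the fields our application actually needs:
// MyAppSpec defines the desired state of MyApp
type MyAppSpec struct {
    // Foo is an example field of MyApp. Edit myapp_types.go to remove/update
    Foo string `json:"foo,omitempty"`
}
// MyAppStatus defines the observed state of MyApp
type MyAppStatus struct {
    // Insert additional status fields — the observed state of the cluster
}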
And finally on the controller side we have:
$ tree -L 2 controllers
controllers
├── myapp_controller.go
└── suite_test.go
And myapp_controller.go will hold all the controller logic for us.
The reconcile function will be ready for you to insert your code:
func (r *MyAppReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
_ = context.Background()
_ = r.Log.WithValues("myapp", req.NamespacedName)
// your logic here
return ctrl.Result{}, nil
}
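Inside that function, the first thing the logic typically does is fetch the instance of our custom resource that triggered the request. A quick sketch, assuming the reconciler embeds client.Client as the scaffold does and that the api/v1alpha1 package is imported as myappv1alpha1 :
func (r *MyAppReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    ctx := context.Background()
    log := r.Log.WithValues("myapp", req.NamespacedName)
    var myApp myappv1alpha1.MyApp
    if err := r.Get(ctx, req.NamespacedName, &myApp); err != nil {
        // the object may have been deleted after the request was queued
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    log.Info("reconciling", "spec", myApp.Spec)
    // compare the live state with the desired state and act on the difference
    return ctrl.Result{}, nil
}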
To better understand this process if you really want to go in I strongly recommend two tutorials:
The kubebuilder book. Kubebuilder has just merged into operator-sdk, and a good part of the logic inside operator-sdk comes from the kubebuilder project. So, to understand the Kubernetes API and controller logic more deeply, this is probably the best place to start.
https://book.kubebuilder.io
Finally, I totally recommend taking a look on the operator-sdk website where you can also find a lot of resources and examples. https://sdk.operatorframework.io
Operator Lifecycle Manager: Publishing Operators
Another key project in the operator framework is the Operator Lifecycle Manager, which acts as a software catalog for Kubernetes — a software-as-a-service-like experience from which all publicly published operators can be installed. Check the project here and find more information on https://operatorhub.io.
Conclusion
We talked about what Kubernetes operators are and how they are made of two basic but powerful pieces: Kubernetes Custom Resources and Controllers. We touched a little bit on the operator-sdk, which helps us scaffold all the code needed to begin developing Kubernetes-native applications that talk to the API and control the custom resources representing our application inside the cluster. We suggested checking the Kubebuilder book and the operator-sdk docs on the website. And finally, we pointed out that the Operator Lifecycle Manager is the official catalog where all the public operators can be found.
Alexandre Menezes
Alexandre Menezes works at Red Hat as a Service Reliability Engineer helping partners and customers to develop their operators and make their applications shine through all the container ecosystems.
This blog post was originally published here on CloudOps’ blog.
Sign up for CloudOps’ monthly newsletter to stay up to date with the latest DevOps and cloud native developments. | https://medium.com/cloudops/your-guide-to-kubernetes-operators-f4243c2b1b4f | [] | 2020-10-16 19:03:49.003000+00:00 | ['Kubernetes', 'Kubernetes Operator', 'Kubernetes Operations', 'Kubernetes Cluster'] |
Quirky | Quirky
Adapting for Asperger’s at the Expense of Sincerity
No, really, I’m like this all the time.
Coming to terms with being a 38-year-old man with Asperger’s, having only been diagnosed a few weeks ago, has naturally lead to reexaminations of my behavior. The first things I’ve focused on have been those aspects of my personality that put me blatantly at odds with the rest of the species, such as my extreme introversion, my inability to read others’ signals or intentions, and my aversion to overstimulation.
But as some of this has begun to settle, I also find myself going a few layers deeper, and I realize just how much of my identity is wrapped up in how I’ve compensated for the hindrances of Asperger’s. Some of the more interesting exploration is not about my differences, but my adaptations — the behaviors I’ve adopted to mitigate those differences. Successful adaptations, even.
As I’ve noted before, some people have trouble accepting my Asperger’s diagnosis as a valid one, because all they see are the adaptations. They see me as someone who’s generally smart and funny and well spoken, someone who is obviously not “the average guy,” but someone a little different, just a little odd, and harmlessly so. A bit nerdy, a little geeky, and humorously self-effacing about all of it. Maybe a little too self-effacing, but oh, that’s just Paul. One of his many quirks.
That’s me. I’m quirky.
Paul says some weird things sometimes, or Paul gets oddly quiet and distant, or Paul seems to find everything funny, but also every once in a while he takes something too seriously, and talks a little too much, too fast, and too loud. But that’s just his quirkiness.
If anyone comes away with that impression of me, as “quirky,” then I have successfully adapted as best I could. Once it became clear to me, probably around my mid-teens, that I was never going to be considered “normal,” and not even in the same universe as “cool,” I decided (partly consciously, partly unconsciously) that I would adopt a quirky identity. I’d be the funny sidekick, the sarcastic friend, vaguely-artsy oddball, just minimally different enough to cover up just how utterly alien I actually felt. My quirkiness was like a white noise machine to help muffle and distract from the sound of the train line running right next to the house.
Decades of this practice led me to believe that the act was who I really was. In a new social setting, I’m harmless-quirky, making little jokes when it seems safe to do so. With bosses, I’m grinning-idiot-quirky, engaged and overly eager to agree. With closer friends, I’m wry-quirky, able to vent a little of my misanthropic steam, but in a safe and humorous way. And so on.
It even extends into my online persona, where the facepalming-Paul avatar has become my unofficial insignia. I have a quirky logo.
Some of it is natural, some of it is very much forced. But over the years I think I may have gotten so good at it that I don’t know when I’m “working” and when I’m just “being.”
But without this adaptive behavior, I don’t know how I would have navigated the real world. Maybe if I had known I had Asperger’s, and accepted the things that made me different, I wouldn’t have bothered to try so hard to please and to pass. What would I have been like? What happens if I decide to drop the quirk now? What will I be?
I think the scary answer to that is: sincere. I’d be sincere.
I am not an insincere person, per se, not in the way we usually think of that term. I’m not two-faced or deceptive or phony. What I mean by sincerity is a dropping of unnecessary pretenses and performances, allowing whatever person was behind those masks to come out and breathe.
That’s terrifying!
I can’t say with any exactness, but I suspect this hypothetical sincere version of me would be less expressive when in the company of others. Even in conversation, I might look distant or even severe, even if my actual feelings were entirely benign. I would interject less often, and save my words for when they might contribute to something. That might make me appear disinterested or “shy,” even if I felt neither. A more sincere version of me might excuse himself entirely earlier and more often in order to recover from the stresses of stimuli.
A sincere version of me would be less concerned with a projected persona, online and off. He would not think so much about cultivating a “brand” for himself, and simply let his work and his words speak for themselves. It would likely have no impact on the number Twitter followers I could boast, and this version of me (again, hypothetical) wouldn’t concern himself with that anyway, because why bother.
This sincere-me would relieve himself of the stress engendered by worrying over what people thought of his various interests and obsessions. Contemporary geek culture has made the world a safe place for folks to proudly parade their allegiance to various fiction franchises, but that’s not quite what I mean, because what that really adds up to is a new in-group that happens to be made up of people who once languished in out-groups. That’s good and fine, but not what I mean.
I mean that when I have a driving obsession with something that holds no obvious value to anyone but the satisfaction of my own brain, that this is not a failing. It’s not something to be embarrassed about or ashamed of. I can just pursue that interest (within reason and feasibility) without regard to the opinions of others.
And I…I mean this hypothetically sincere version of me…wouldn’t have to make excuses for any of it. He wouldn’t have to apologize, and qualify himself with “I know this is weird” or “this probably seems silly, but…” He…I…would just follow the string of curiosity where it leads, and allow my brain its squirts of dopamine whenever they can be safely had.
The last bit of this is the hope that sincere-me would not indulge his autism and oddness at the expense of his responsibilities to those he loves. I don’t see that as a problem, because one thing that even quirky-me can be sincere about is my love and devotion to my kids. I don’t need to “act” that, no“passing” required. Come to think of it, I’m very lucky for that.
The adaptations of Asperger’s have been enormously expensive in countless ways. They have eaten up time, energy, and my valuation of myself. Maybe over time, as I truly come to terms with this condition and its implications, I can begin to turn down the dials, divert power away from the quirk-generators, and recoup some of what I’ve lost. I would sincerely like that. | https://paulfidalgo.medium.com/quirky-adapting-for-aspergers-at-the-expense-of-sincerity-bc9150021a30 | ['Paul Fidalgo'] | 2018-07-15 22:28:47.673000+00:00 | ['Mental Health', 'Asperger', 'Autism'] |
How my team uses Agile(scrum) in development | How my team uses Agile(scrum) in development
My experiences using Scrum and Jira for the first time
Photo by Daria Nepriakhina on Unsplash
In my previous article, I talked about who is all on my dev team, their roles, and my daily workflow. As our project continues to get bigger, we decided we needed a better way to manage tasks, other than just using Trello. A team member, who has previous experience in agile development, suggests we include scrum and Jira within our workflow.
Before getting into our typical workflow when using these tools, I want to give you an overview of what they’re.
What is Agile and Scrum?
In short, Agile development is an approach you can take when managing complex projects. It describes best practices and techniques for managing a project. Scrum is a framework that defines the specific tools and ceremonies that help the team manage their projects and work.
What are Sprints?
In short, a sprint is a timed interval the team works through to get a certain amount of work done. Sprints run in cycles, and here is an image to show it.
Key points in a Sprint
Sprint planning (discuss with your team what tasks need to be completed, and how will it get done)
Product backlog (list of all the tasks the team created that needs to be worked on. Most of the time, these are just user stories)
Sprint backlog (list of tasks the team chosen from the backlog to do during the sprint)
Daily scrum/standups (a meeting to check-in to see how the team is doing with their task and if they have any blockers)
Sprint review (your team will discuss what tasks they completed during the sprint)
Sprint Retrospective (another meeting discussing things about the sprint that could be improved for next time if any)
What is Jira?
In short, Jira is a project managing software that is used to manage Scrum projects easier.
My workflow with Scrum
*sidenote: product backlog(User stories) items are already completed by the product owner.
In Jira, our board is organized into columns. Our structure looks similar to this → [Product backlog] [Sprint backlog] [Doing] [Blockers] [Done]
Our structure looks similar to this → [Product backlog] [Sprint backlog] [Doing] [Blockers] [Done] Our sprint meetings are set to a specific day, my PM is the one who decided this (Wednesdays).
We have our sprint meeting on Wednesday, and during that meeting, we all decided (frontend team, backend team, designer) which tasks (user stories from the product backlog) we prefer to work on during the sprint. Since I'm on the frontend, I choose tasks that are related to the UI. Backend developers and designers choose theirs.
Each developer/designer chose 2–3 tasks to do. Once we all have the tasks to work on, our PM put these tasks in the sprint backlog category. These are the tasks we work on for the next 2 weeks.
Somewhere in the middle, we had a meeting update to check and see how everyone is doing on their tasks.
When our two weeks were up, we had another meeting(Wednesday) to say what we all completed, what we learned, explain our blockers if any, and discuss what we can improve for next time.
This cycle gets repeated starting with picking tasks.
These were my experiences working in an Agile development environment, and I got to say its much smoother and easier than I expected it to be. I learn someone new every day, which is always great. I hope this was helpful to someone out there. Stay safe, and keep learning.
Enjoy! 👍
~ Love to live, live to code | https://medium.com/dev-genius/how-my-team-uses-agile-scrum-in-development-82503b4d0e9d | ['Ajea Smith'] | 2020-06-30 07:33:05.602000+00:00 | ['Software Development', 'Web Development', 'Coding', 'Startup', 'Agile Development'] |
Glen Weyl on Fighting COVID-19 and the Role of the Academic Expert (Ep. 94 — BONUS) | Glen Weyl on Fighting COVID-19 and the Role of the Academic Expert (Ep. 94 — BONUS)
Glen Weyl is an economist, researcher, and founder of RadicalXChange. He recently co-authored a paper that sets forth an ambitious strategy to respond to the crisis and mitigate long-term damage to the economy through a regime of testing, tracing, and supported isolation. In his estimation the benefit-cost ratio is ten to one, with costs equal to about one month of continued freeze in place.
Tyler invited Glen to discuss the plan, including how it’d overcome obstacles to scaling up testing and tracing, what other countries got right and wrong in their responses, the unusual reason why he’s bothered by price gouging on PPE supplies, where his plan differs with Paul Romer’s, and more. They also discuss academia’s responsibility to inform public discourse, how he’d apply his ideas on mechanism design to reform tenure and admissions, his unique intellectual journey from socialism to libertarianism and beyond, the common element that attracts him to both the movie Memento and Don McLean’s “American Pie,” what talent he looks for in young economists, the struggle to straddle the divide between academia and politics, the benefits and drawbacks of rollerblading to class, and more.
Listen to the full conversation
You can also watch a video of the conversation here.
Read the full transcript
TYLER COWEN: Hello. Today, I am chatting with Glen Weyl, who is one of the smartest and sharpest of all the economists, and Glen is, among other things, the founder and leader of RadicalxChange Foundation. Most recently, he is coauthor of a significant study on how we should fight back against COVID-19. He and his coauthors have come up with a plan — a rather ambitious plan — for a pandemic testing board and hoping to test as many as two million Americans each day. Glen, welcome.
GLEN WEYL: Thanks so much for having me on, Tyler, especially at such short notice. It’s really great to be able to talk about these issues with you.
COWEN: We will have our usual wide-ranging chat, but also a lot of focus on COVID-19. Let me start with a simple question. Why is testing in America, right now, so hard to scale up?
WEYL: I think we’ve got two basic problems. One is a coordination failure along the supply chain, and the other is a lot of small innovations that require a lot of regulatory engagement to get rapidly deployed that need to be really accelerated and coordinated.
If you look deep into the supply chains where they’re producing the reagents, where they’re producing the test kits, there has not been a clear demand signal to those parts of the supply chain that we’re going to aim for a really high level of testing like we’re describing. Therefore, there’s a real unwillingness to make the fixed-cost investments to repurpose manufacturing, to supply tests at that level. We can go into why that’s the case in a minute.
If you think of closer to the consumer, the issues are actually quite different. They’re not really about money. They’re much more about the fact that the current testing technology is extremely intrusive and very volatile. So —
COWEN: That’s the swab up your nose, right? It hurts.
WEYL: Exactly. Exactly.
COWEN: It sounds scary. So you want to spit into a cup.
WEYL: Exactly, or a tube.
COWEN: Say I’m an individual American, and we’re in a world where tests are easy to get. Indeed, we’re testing two million Americans a day. Why, in fact, want to be tested if I’m afraid that information can be used against me, keep me away from my job, or remove me from my family?
WEYL: That’s a great point, Tyler. That’s the reason why the three pillars of our strategy are testing, tracing, and supported isolation. We need to ensure that isolation is accompanied by supports from the public that are sufficient to give people a strong reason to want to engage in isolation. People have a lot of concern for their neighbors. They don’t want to get people sick. They don’t want to get their families sick.
So there’s already an inducement isolation there. But especially for Americans who have more limited economic means, it can be a huge hardship to be away from your job for that long, which is why we need public support for people who need to be isolated so that they can receive the treatment that they need, so that they can receive the food and income support that they need, and so that they don’t get detached from their jobs.
COWEN: Where physically will we put these people? Say I test as having COVID-19. Where does the truck bring me, so to speak?
WEYL: I don’t think a truck brings you anywhere. The vast majority of people in the Asian countries that have been most successful in containing the disease have isolated at home, sometimes being isolated even from their families, but overwhelmingly at home. Note, for visitors from abroad who have no clear residence in the country, there may be some dedicated facilities, particular types of hotels associated with isolation. But that’s going to be a very small minority of all cases.
COWEN: Is there enough trust in America to pull this off? Even if you write down the rules of the game and they sound fair, the people don’t trust the federal government. They don’t trust Donald Trump. They may not trust the Democrats and Nancy Pelosi. Won’t people really still run away from the test like a plague? We don’t know how long immunity lasts, if there’s immunity, how long contagion lasts. I just don’t want to know or behave carefully enough that I don’t feel guilty. Otherwise I’m like, “Keep that test away from me.” Or not?
WEYL: Yeah, I couldn’t agree more that there is a systemic lack of trust — especially in federal government — in this country, which is why we believe that the most effective way to make this work is by drawing on institutions that have a lot more trust. A lot of the leading businesses in this country have a very high degree of trust. The state governments, local governments have a very high degree of trust.
So we need a strategy that has a role for the federal government in funding and coordinating the parts of this that absolutely need the federal government in terms of the supply chain. But beyond that, we want to empower those localities and trusted businesses to be the ones who both execute on and lead the public communication around the strategy.
There’s a well-worn tradition of that in something called the interstate compact, where the federal government can provide funding, but it’s actually administered by state governments and often staffed by the private sector.
COWEN: At what rate of false negatives are these tests not worth doing?
WEYL: Probably around 50 to 60 percent. Now, it depends a huge amount on whether those false negatives are what we call permanent false negatives or whether they’re from poor administration of the test. If they’re from poor administration of the test, you can just give a test multiple times. And it appears that most false negatives are currently from poor administration of the nasal swabs, which, by the way, is another reason why moving towards a spit test is so desirable, because it’s much harder to screw up.
COWEN: But the spit test doesn’t do better on false negatives, right? It probably does the same.
WEYL: Swabs are more sensitive if they’re correctly administered, but they’re very easy to incorrectly administer because they’re a very invasive and complicated procedure. The spit test is much less prone to that sort of human error and so may actually, in practice, perform better even though the nasal swab has the potential to do better.
COWEN: What do you think is the rate of false negatives right now?
WEYL: Probably about 20 percent, 20 to 25 percent.
COWEN: At what rate of false positives are these tests not worth doing?
WEYL: I think that even a relatively low rate of false positives could create a huge problem here because the disease prevalence is not that high. So, if you start getting a lot of false positives, pretty much everything that’s going to come up is going to be a false positive. Luckily, the PCR tests have shown a very low rate of false positives so far.
COWEN: But is that data on false positives very reliable? Because we don’t have another test to test the test, right? There’s never been a control group when we subject people to all the different tests and then find out if they really had it. We have a lot of uncertainty about test quality?
WEYL: I think that that’s true. I also think that we’re not getting a huge dragnet coming out of these PCR tests in countries where prevalence rates are low, and you would expect to see that if the false positive rate were nontrivial.
In Korea, they’re administering a lot of these tests, and they’re getting about 2 percent of tests coming back positive. If you look at countries where we know that there is very low prevalence, you would expect that, even if there’s zero prevalence, you would be getting a significant false positive rate. The fact that some countries are really getting close to zero tests coming back positive suggest that there’s a very low false positive rate.
COWEN: If we look at Singapore, which has done a lot with testing and track and trace, it seems, at least superficially, they did many things right. Now they’re back to having over 900 cases a day [subsequently more], and they’re about the size of Fairfax County and have incredible governance. What did Singapore do wrong, and how will we avoid that same mistake?
WEYL: The truth is I haven’t followed the Singaporean case recently closely enough to figure out what went wrong recently. My impression is that they put too much confidence into a particular digital tracing system, which turned out to get very low take-up, and they pulled back on their manual tracing efforts before there was reason to be confident that they had the ability to pull back on them.
There’s also a big problem, which is that manual tracing efforts do a poor job of covering public spaces, and I think that the Singaporeans believed that these Bluetooth-based tracing technologies would cover those public spaces well, and they failed to do so. And they therefore allowed redensification of their public spaces too quickly, and I think that’s something we need to be very careful about.
COWEN: But whether or not we make those exact same mistakes, the fact that such a high-quality government made mistakes, didn’t we really truly fear the United States — with 50 different state governments, a barely competent federal government, if that — will make a lot more, possibly quite different mistakes? How confident are you about how this is going to run?
WEYL: Well, look, I think that there is likely going to need to be some capacity for states or localities — probably through some sort of identity certificate or something like that — to potentially limit travel across jurisdictions. In Canada, they’ve done that across provinces.
COWEN: So this could limit travel across some US states?
WEYL: Yeah, I think so. I don’t know if it will come to that. There’s a possibility that we get very successful here in a very uniform way, but for the reasons that you’re saying — because of the federal structure — I think we’re going to face a choice between centralizing power more than I think we should want to and in a way that would reduce the scope for desirable experimentation, and allowing some restrictions on travel across localities.
COWEN: Actually, I look right now at New Zealand, Hawaii, and the Faroe Islands, in fact, also Taiwan. They’re all doing a great job. They’re all like islands in some way, or they’re literally islands. Isn’t so much of the gain just from reducing the travel? And if we reduce travel, not worry so much about the tests. We get most of the gain, or no?
WEYL: Well, there’s plenty of islands that have restricted travel, at least at some point, and have failed. The UK has had a terrible experience and is also an island. So I don’t think being an island is enough.
COWEN: It’s a big island, right? If you’re in Hawaii, it’s pretty carved up, even within Hawaii, and there’s not much mobility.
WEYL: I think you can certainly achieve a lot if you don’t yet have the disease in country that way. There are not that many places in the US that have low enough prevalence that I think that that would really succeed. But it would probably succeed for some localities. I just think it’s not a comprehensive strategy for most of the population centers of the US, where prevalence is already high enough that trying to treat yourself as an island is not really going to accomplish a lot.
COWEN: Give me a sense of the timeline of what you’re proposing. What do we get the rate of transmission down to? How quickly do we get the tests available? And then when do we reopen the economy? What’s the ticking of the clock?
WEYL: There’s one really critical element of this plan that I don’t think has been widely discussed, which is that there are 40 percent of people in the essential sector who are still out there doing their jobs. There may have been some improvements in sanitation. There probably have been, though there have been a lot of issues with getting the PPE required to do that.
But those people are basically transmitting the diseases they always have been. And so, by far, our first priority has to be not “reopening the economy,” but rather stabilizing that sector of the economy so that transmission is not taking place within that sector.
Once we’ve accomplished that goal, it will actually be relatively easy to reopen the rest of the economy, given that that’s 40 percent. It’s just a doubling to get to everybody being in a disease-stabilized situation. So I really think the focus has to be on stabilizing the essential sector by building up this regimen. I think we can do that by the end of June.
Once that’s accomplished, I think we can, over the course of July, reintroduce most of the rest of the economy and have the confidence that, because we haven’t seen reemergence of diseases within the essential sector, that reintroducing everybody else will proceed in a similar fashion.
COWEN: I think if people not paying their rents, and maybe more importantly, not paying their mortgages — they worry, say, within four to six weeks, the whole banking system will be insolvent. I don’t mean illiquid, where the Fed can prop it up. I just mean flat-out, permanently insolvent. Isn’t there some very rapid, irreversible, nonlinear deterioration going on, and we’ll need to reopen more than we would like to pretty soon, no matter what our level of testing is? What do you think of that claim? Obviously, you’re an economist.
WEYL: I think it’s a little bit extreme, but I’m certainly inclined in that direction. The problem, Tyler, is that if we reopen under the current conditions, we’re going to see — and this is expected by all the epidemiological models — a resurgence of the disease, probably sooner rather than later, and we’re going to have to lock things down again.
As problematic as it is to keep things closed for another month plus, it’s going to be much more problematic to suddenly and unexpectedly every so often have to shut everything back down again. It will completely destroy the capacities of businesses to plan if that is looming out there.
Whereas, if we can plan for some period of bridge loans, some period of the Fed bailouts, et cetera, then at least we can get that into a bill and get ahead of it, rather than relying on people to just have to deal constantly with new crises emerging.
COWEN: Let’s say we never soon figure out the puzzle of immunity, how immune you are, and for how long, and we’re not sure how long contagiousness lasts. You get the test, and we learn that you’ve had COVID-19. We’re not sure if you’re immune or you’re contagious for two months. What do we do with you? What box do you get put in?
WEYL: I think serological tests — if we get them working, and they're not really working very reliably yet — can be quite helpful for that because there's one of the antibodies — I always forget which one it is, IgM or IgG — but one of them is an indicator of convalescence and at least temporary immunity. So serology is very useful in that case.
It’s also widely believed that if you’ve had a period of symptoms and no longer have symptoms — though this is not known for sure because we have seen some returns of it in South Korea — but it’s believed that during that period when you don’t have symptoms, the amount of the virus that you’re shedding is low.
So I don’t think we can quite say that those people are immune until they get a serology test, but at least they can go back to being in the same condition as the rest of the population unless we see a resurgence of symptoms.
COWEN: What are the labor requirements for following up on people who test positive? You track them down, you call them up, you text them reminders — whatever’s going to be done. How many people do we need to hire and train to do that work?
WEYL: Somewhere on the order of a few hundred thousand. Precisely how many depends, really, on how quickly you want to follow up on the cases because you can have one person on each case or you can have multiple, but somewhere in that range.
And by the way, the Australian government managed to train 20,000 people in a week who had been laid off from Qantas. So we definitely have examples around the world of this being done, and I’m hopeful that we can replicate this in the US. Even a county in rural East Texas has managed to do this quite rapidly, as well as the state of Massachusetts. So we already have some success stories on that.
COWEN: And the party ultimately making this work — is it the federal government or the state governments? Or if there’s a disagreement, who or what is the final adjudicator?
WEYL: I think it’s going to be many different things with many different roles. But if you’re talking about the pandemic testing board that would be the coordinating body, it could be a national forum or it could be an interstate compact.
I’d prefer it to be an interstate compact, in which case it would be a consortium of governors who would be the final authority, but they would appoint the pandemic testing board, which I think would be mostly staffed by retired generals and business leaders, as well as probably someone representing labor and so forth.
COWEN: Say my employer tests me, maybe it’s George Mason, and the test is wrong — false positive, false negative. Can I sue them? Or is there a complete liability waiver here?
WEYL: I think employers should have a responsibility to use the best tests.
COWEN: There’s still a pretty high false rate, right?
WEYL: High false negative rate. At maximum, 1 percent false positive rate because —
COWEN: Total rate could be over 30 percent. So if the result is wrong and you can sue your boss, bosses won't want to test you.
WEYL: I think that they should have a negligence requirement to use the best tests available, but I don’t think that they should have a strict liability requirement that if anything goes wrong, it’s their fault.
COWEN: But we’d have to get, through all the different court systems of the country, some kind of agreement on liability, right? And just for going back to work. You’re in the workplace, there’s a testing regime.
WEYL: Yeah. I think that’s fair. I think the pandemic testing board should have some authority to set guidance about that, and my guess is, under standard common-law approaches, that there would be a fair bit of deference to that by most reasonable courts. I can’t say that that would happen everywhere, but that would be my guess.
COWEN: Let’s say I love taking the test. I take the test every week. It clears me every week. Do I get a certificate?
WEYL: We believe that taking the test frequently enough — and I don't think once a week is enough; it should probably be twice a week — should give you an equal status to someone who's been shown to be immune.
COWEN: Can I get a certificate proving that, and it’s like a passport?
WEYL: Yeah, for both of those cases, both for immunity and for if someone takes frequent tests. We don’t think people should have the right to do that until we have enough tests to do the more basic regime for the whole population. But eventually, we would like to make them available through a more standard price mechanism like you’re describing.
And then, especially in essential sectors, I expect, yes, there would be a certification process like you’re describing for people who are either known to be immune or for people who get frequent enough negative tests.
COWEN: We end up with a segregated nation.
WEYL: I don’t think so because first of all, we will not make that available — the immunity certificates or these ones that you’re talking about, the frequent negative tests — until we’ve already managed to really control the disease enough that we feel comfortable for people going back into most public amenities just based on the fact that we’re tracing down most of the disease.
So really, the only reasonable purpose of those types of things — either immunity or frequent negative tests — would be for jobs in extremely sensitive professions, where you’re close to people who are in a very vulnerable part of the population.
COWEN: But if I can’t get a certificate for a long time, doesn’t that mean I just don’t want to take the test? There’s no benefit for me.
WEYL: Until that time, all the tests are being used in a test-and-trace regime. And if you test positive in the test-and-trace regime, you will go into supported isolation. So, both you’ll end up having your health protected, but also you’ll get the support so that you can actually be just as well off as if you didn’t get the negative test.
COWEN: It seems to me trust there will be very weak. I wouldn't believe they're going to send me enough money. If they tell me they're going to send me a nurse, I worry about the rate of contagion amongst healthcare professionals. What's the support I get that's so valuable?
WEYL: I think that getting the precise parameters of that right is really critical, Tyler, and I can't say that I've gotten down to the level of precision necessary. But there's obviously a real tradeoff there between not inducing people to voluntarily get the disease in order to obtain the support, and not setting the support so low that people don't want to go into that regime. I think there is an incentive-compatible place in between. I'm not sure precisely how to set it.
We do know that in the East Asian countries with a wide range of government structures, it seems to have worked out reasonably well, and they’ve managed to induce most people to isolate. I also think there’s a fair bit of altruism and desire to protect your family, which doesn’t go all the way, but it helps broaden the range of incentive compatibility there.
COWEN: What do you think of the Robin Hanson point — this is not a question unique to your system at all — that many young people will want to expose themselves to limited doses in order to get immunity at some point, the certificate, reenter normal life? And can that be a feature of a system rather than a bug?
WEYL: I think that that is not desirable because during the period . . . We don’t know how asymptomatic and pre-symptomatic transmission precisely works here. And I think a lot of young people, if they do that, would be putting their more vulnerable and elderly relatives into a lot of risk. So that’s something we would like to discourage, but I don’t think it’s something that we should have more than social sanctions against.
COWEN: You are estimating a benefit-cost ratio for your plan. What would that number be?
WEYL: Well, it depends on what the alternative plan is, but I think the most natural alternative —
COWEN: Continuation of the mess we’re in. I don’t even know how to describe it.
WEYL: I would say 10 to 1. The costs we estimate of our plan are on the order of a bit less than a month of continued freeze in place.
COWEN: What percentage of Americans do you think will download the tracing app?
WEYL: It’ll depend a huge amount on what part of the country you’re in. In suburban and rural areas, I don’t think many people will download it, and I don’t think there’s any reason for them to. I think in areas with a lot of high-density public amenities, a lot of people will download it. And some of them, especially some of the private amenities, may choose to require you to show that you have the app before you enter that amenity.
COWEN: We all know, when doing policy, proposals go through the Washington, DC, meat grinder and the state- and local-government meat grinder. What do you think about the tradeoff between getting this done quickly and getting it done the way you want it to be done? Let’s say your version is the best version. You want speed more importantly or getting it right more importantly?
WEYL: There’s a clear tradeoff between the two of them, and there are minimum requirements that are needed to get this working at all. I would say those things have to be met, but the fastest possible, subject to those being met, is probably going to be much more important than getting it all precisely right.
COWEN: How do your views on testing differ from those of Paul Romer, if at all?
WEYL: Overall, I think there’s a lot of similarity between us and Paul. Paul believes in mass-scale testing, and we do as well. Paul thinks that tracing is so problematic that he would rather see universal, very frequent testing, rather than tracing being used to reduce the number of tests necessary. His plans, correctly calculated, would require something like 10 to 20 times the number of tests that ours would and, therefore, has costs much closer to something like $500 billion rather than $100 billion.
It would also be much more intrusive because there’d be a much greater reliance on these negative test certificates that you were talking about earlier, Tyler. Therefore, from both a civil liberties and a cost perspective, I strongly prefer a regime that also involves tracing. The other thing I would say is, because it’s so ambitious, Paul’s plan, in terms of the number of tests required, will take much longer to ramp up to that point, so we’ll end up with one to two more months of freeze in place.
Overall, I think that there’s a strong case to be made for test, trace, and supported isolation instead of just testing. But on the other hand, I think it’s great that he’s advocating an ambitious target just as we are.
COWEN: What do you think of the plans that say we should try to predict who is a super spreader and then test them incredibly often? Maybe we won’t get that far in universal testing, but we’ll get most of the gains. Testing nurses, testing people who shake hands a lot, testing the extroverts, whoever people are at these nodes. Maybe they work in nursing homes — wherever we find, say, from analyzing big data. How effective would that be?
WEYL: I certainly support some forms of that. I think testing essential workers, especially in long-term care facilities where there is a possibility not just for a lot of spread but for a very dangerous spread, makes a lot of sense.
I think having a very top-down regime of someone analyzing a bunch of data and, on the basis of some probably pretty tenuous statistical correlation, claiming that such and such a person needs to be tested, and then coercively going in and testing them on a government basis — it does not seem to me like a very robust regime.
So I think that there’s some robust elements of this that I would love to see implemented, probably largely through private demand. “I want my person taking care of me to be tested.” And then there’s other things that seem to me problematic and potentially authoritarian.
COWEN: Here’s a question from a reader, and I quote, “What are the best ideas for applying radical markets to the COVID-19 crisis?”
WEYL: Okay. I think some of my favorite ideas actually aren’t necessarily on radical markets but on RadicalxChange ideas more broadly. But the ones that are related…
COWEN: That’s fine. Absolutely.
WEYL: For example, I think a huge problem we have right now is that cultural industries are struggling to survive or thrive in the internet world. And we’re now, suddenly, completely in the internet world. All the possibilities of doing in-person gigs that were really supporting a lot of the music sector are gone.
I’d really love to see a pool of funding be put into the matching mechanisms that we’ve been emphasizing to improve the environment for things like Patreon and Kickstarter to fund cultural innovation that will help sustain morale during the times when people are separated from each other. So that’s —
COWEN: They could write better incentive-compatible contracts by drawing on your other insights, and that would help these people raise money and support themselves.
WEYL: That’s the idea.
COWEN: More generally about the crisis, should we allow price gouging, say, for masks or reagents? They don’t like calling it price gouging, by the way.
WEYL: Yeah, the problem I have with the price mechanism here is not the usual “price gouging” or variable pricing, but the fact that there’s so many externalities in the allocation of some of these critical inputs here. In principle, we could try to price those externalities, but in practice, trying to get such a pricing mechanism and the information required for it in place quickly is going to be very hard.
Therefore, I think we need to have a lot of nonprice allocation, not of the whole economy but just of the really critical elements, like testing and certain types of PPE, because I think otherwise we’re all going to be harmed by that not being allocated to those nodes, as you were talking about, where they have the largest costs associated with it.
COWEN: Say we need to do two million tests a day, or whatever the number is, and you're a little skeptical about targeting the super spreaders. You just want high prices to mobilize elastic supply as quickly as possible and to sort out who should get the stuff.
WEYL: Yeah, you definitely want high prices to the suppliers, for sure. Absolutely. I just don’t think the best way to do that is by, on the demand side, allocating according to the price mechanism. I absolutely agree with Alex Tabarrok that things like advanced market commitments that throw a lot of money at the supply chain make a ton of sense.
But whatever comes out of the supply chain I don’t just want allocated by the price mechanism to a bunch of rich people who want to go out and have dinner somewhere. I want to allocate it to the people who are going to spread the disease the most because that will let everybody go out and go back to normal life at a much lower cost.
COWEN: But it seems a lot of the rich people have been big spreaders, right? Prince Charles, Boris Johnson, Tom Hanks.
WEYL: [laughs] Yeah, I’m not saying that there wouldn’t be some of that, but I wouldn’t say that, on average, that’s going to be the case. A lot of the long-term care facility workers, who are some of the most dangerous spreaders, are not people who have a lot of means.
COWEN: Now, in economics, why has price theory so fallen out of favor?
WEYL: I think price theory is actually making quite a bit of resurgence in the last couple of decades. Raj Chetty, Amy Finkelstein, Jon Levin — people like this, who’ve won the John Bates Clark medal, recently have really drawn on it a lot. I think it fell out of favor in the ’80s and ’90s largely because of a lot of the rise of the mathematization of economics, the rise of technocracy within the profession, the increasing focus on refinement of methods as opposed to engagement with the public. I think those were some of the underlying reasons.
There’s also an association with the University of Chicago and a particular ideological view there, which sort of mixed it all up with politics. And that’s something that I think has become less and less true with this new wave that I was describing.
COWEN: Now, you’re a reformer. How would you reform the economics profession, which you’ve seen from a number of different vantage points, right?
WEYL: Yeah, one of the most important failings of the economics profession right now — and I think this is something you’re doing a great job of trying to rectify with the engagement work you do — has to do with the lack of accountability to public discourse. This is something that’s really systematic across American society, not just in economics.
There’s a very unhealthy relationship to expertise, where either there’s a total disregard of and distrust of expertise or a deference to it, rather than the notion . . . If you look at someone like Milton Friedman — the way you judge an expert is by their ability to distill things down and convey a message that becomes part of the public discourse. That’s hurting us in the COVID situation, and it’s been a disaster in the economics profession.
COWEN: What’s the mechanism design you would implement to get us there? We might all agree with the outcome, but what do you change? Tenure procedures? Peer review?
WEYL: One thing we need to change is the way that universities evaluate professors for tenure and the way that we evaluate people for prizes. There needs to be a much, much greater emphasis on your ability to bring things into public discourse in evaluating people rather than just the esteem of your colleagues.
Getting the right metrics on that is a really tricky thing. I bet it’s something you’ve thought about, actually, Tyler. But I think we need to be bringing that public engagement and delivery of things directly to the public much more into how we evaluate people.
You look at someone like Henry George. Henry George was one of the great economists. He ran for mayor of New York, and he actually beat Theodore Roosevelt. I’d like to see more economists living that sort of life. Milton Friedman obviously had a bit in that direction. John Kenneth Galbraith. We need more of that.
COWEN: Here’s another reader question. “How have the events around COVID-19 changed Glen’s views on RadicalXchange and related issues?”
WEYL: It’s actually interesting because the first thing I wrote about COVID was not the stuff that I’m doing now. It was about Taiwan’s experience and how much better Taiwan had done — and it appears this is still going — than even places like Singapore and China. I think one critical reason for that is that Taiwan has this really rich democratic technology tradition in which citizens are engaged in making technical tools that then help scale up and govern the country.
In China and in the US, for different reasons, the technical leads are quite divorced from those who their technology is meant to serve, and therefore they’ve been very poorly responsive to the emerging issues on the ground. The signals have not been reaching them from the local knowledge very effectively. That actually makes me believe that RadicalxChange ideas may be a very powerful mechanism for warning us about future crises.
It’s very hard to innovate in those fundamental ways in the midst of a crisis, which is why, at some level, the proposals I’ve been pushing for this have been conservative in nature. They’re drawing on things we really know have worked in the past rather than experimenting with new things. But as an early warning system for this type of thing, I believe all the more in that type of democratic technology.
COWEN: Now, you may wish to challenge the premise here. Why do I see so little talk about the blockchain during this pandemic? Just doesn’t seem that salient.
WEYL: Well, first of all, I don’t think blockchain is very salient, period. If you think about the conversations around technology and society, AI is way up there. Internet of things is way up there. Blockchain is pretty far down in terms of the broad public imagination. Within the blockchain community, obviously that’s a bit different.
And within that community, I think there has been quite a bit of focus on what are the best ways to do things like contact tracing. Now, if you call that blockchain or not is a bit of a question, but certainly privacy-preserving cryptographic technologies, if anything, I think are getting more attention now than they were getting before because of the emphasis on trying to do contact tracing in a privacy-preserving way.
COWEN: Other than possibly the adoption of your plan, what do you think will be the most enduring economic or social change from this pandemic?
WEYL: My guess is that there will be a lot of large corporations that take on important social responsibilities because of the trust environment that you were talking about and that it becomes increasingly illegitimate for them to be run under a pure shareholder-maximization perspective once they’re taking on that role. I think we’re going to see fundamental shifts in some of the corporate governance parameters as a result of the social role that a bunch of companies end up taking on.
On things under- and overrated
COWEN: In the middle of these dialogues, we have a section, overrated versus underrated. I have some easy ones for you. Are you game?
WEYL: Yeah, sure.
COWEN: Rio de Janeiro — overrated or underrated?
WEYL: About correctly rated, I would say.
COWEN: What do you like most about it?
WEYL: Best place in the world to be as a tourist, but a very challenging place to live and be productive.
COWEN: Song by Don McLean, “American Pie.” Overrated or underrated?
WEYL: Oh, that’s one of my favorites. Underrated.
COWEN: Underrated. What’s so good about it?
WEYL: It manages in a very accessible and catchy way to be just allusive enough about historical events that you can make sense of it and yet still appreciate the poetry and complexity of how it’s speaking to things.
COWEN: Why didn’t Don McLean have a better career? There’s “Starry Night,” and then it seems to end, or am I missing something?
WEYL: I actually don’t know much about the dynamics of his career. And I like a couple of his other songs, but I agree, it is kind of remarkable that he’s such a one-hit wonder.
COWEN: How much people respect law in Latin America — does the typical educated outsider underrate or overrate that, law-abidingness in the Latin countries?
WEYL: I think that they think people respect law more than they actually do because they don’t really see the favelas and the informal settlements very much on most standard trips, and they don’t realize how pervasive the fact that people are living outside the law is to the way that everyday life works in Latin America.
COWEN: Julius Krein — overrated or underrated?
WEYL: Underrated. I’m a big fan.
COWEN: He’s your coauthor, right?
WEYL: Yeah, I met —
COWEN: Tell us the story there.
WEYL: First of all, Julius and I disagree on a great many things, but I have a huge amount of respect for his intellect. He’s one of the people who really challenges a lot of the ways that people have fallen into thinking. And he did it, really, at a time when I think that was incredibly necessary, so I’m a big fan of his. I really like collaborating with him, even though in some ways we’re sort of polar opposites. He’s a nationalist. I’m very much an anti-nationalist in my basic outlook.
COWEN: What was Milton Friedman most wrong about?
WEYL: Monopoly power.
COWEN: Say just a little more.
WEYL: Milton Friedman — if you read Capitalism and Freedom, it’s beautiful. It’s one of my favorite books. I actually think it’s very similar to Rawls. It’s funny because a lot of people on the left love Rawls, but they hate Milton Friedman. I actually think their visions are very similar.
I think both of them dramatically underestimated the importance of increasing returns phenomenon. Friedman says, “Well, there may occasionally be a temporary monopoly, but it’ll go away because of competition anyway, and we need to try to just avoid it becoming too permanent by the government getting involved in it,” and so forth.
I don’t think he perceived that increasing returns phenomena that tend to create monopolies are really the foundation of what creates the possibility of civilization. He had in the back of his mind this sort of decreasing returns model that’s dominant in economics, and I think that that colors his whole worldview in a way that leads him to miss a lot of the key questions, even though he was right on a lot of the things that he spoke about, actually.
I’m actually largely sympathetic to a lot of Milton Friedman’s ideas on the things he focused on. But the problem is, the things he focused on weren’t the key problems, I don’t think.
COWEN: Speaking of increasing returns, what’s your favorite movie?
WEYL: Memento.
COWEN: Why?
WEYL: Because it captures a really critical philosophical issue in an extremely engrossing thriller fashion. It’s sort of like Don McLean. It’s getting at something deep and rich, but in a way that’s broadly accessible.
COWEN: What makes for a good movie critic? You were a movie critic once, right? For The Daily Princetonian.
WEYL: [laughs] I once tried to be one. I don’t think I was all that successful. I don’t read nearly as much movie criticism as I used to in the past. What I like in a movie critic is when they’re able to capture the emotional feeling of a film and what it would be like to experience it without talking too much about what actually happens.
COWEN: Galapagos Islands aside, what’s the best place in Latin America to go see turtles?
WEYL: I love turtles, and I love Latin America. But I don’t feel I have a definitive answer to give to that. I do have the place that I’ve enjoyed seeing turtles most, which was Puerto Escondido, which is a relatively small beach town where we saw some nice turtles. I’m sure there are better places.
I’ve heard that some of the islands off of Venezuela are some of the best. But my wife got banned from going to Venezuela because she wrote a critical report on the government. And so, we’ve never been able to go to Isla Magdalena, I believe, which is supposed to be one of the best places.
COWEN: One of the ideas you pushed earlier in your career — not that long ago — was quadratic voting, which would place greater weight on more intense preferences. Let’s say we take the current pandemic, and right now we had some form of quadratic voting. How would that change the nature of our response?
WEYL: I’m a big fan of quadratic voting still. I think the question is, quadratic voting for precisely what? The things I’d most like to see quadratic voting be used for in the pandemic response is eliciting from people informed and rich feedback about what things they value or what elements of the response they value most.
I think it could be quite powerful there in allowing basically large-scale deliberation in a remote fashion. I think we would learn a lot more about what elements, for example, of the social distancing are hurting people the most and what elements people are most willing to accept. And we might get a much richer picture of the cost-benefit tradeoffs that we’re facing, which I don’t think have been very well factored into public policies.
COWEN: Do you think we, as a collectivity, would value human lives more or less with quadratic voting?
WEYL: I think probably quite similar, but a lot of the more rich and nuanced things — for example, restrictions on parks versus restrictions on theaters — I think we’d learn a lot about what’s most important to people there.
COWEN: Let’s say you’re applying your ideas on mechanism design to higher education. In general, what would you change?
WEYL: One thing I’ve thought about quite a bit has been the evaluation of people for tenure and some of the publications stuff. I don’t know if that’s higher education, really, though, because it’s a little more research.
COWEN: Part of it. I’ll ask about students next, but what’s your idea for that?
WEYL: We’ve been working for a while at RadicalxChange on trying to create a new system of peer review in journals, where rather than having a set of authors and then referees and editors, instead there’s just an ordered list of people who sign on to the article, so that authors would have the first chance to sign, and maybe editor next, and the referees next.
I think this would be a much more incentive-compatible way to get good-quality referee reports and to actually allocate credit in proportion to what people have contributed to making an article work, as compared to the current system, where there’s a very binary division between the authors who get credit and everybody else who gets very little.
You could add into that some really rich stuff around having some quadratic voting in there, and then maybe having individualized views of how many citations or how much respect someone gets from a journal based on who you respect, and who they respect, and how that filters through. I’d have to go into more of it, but I think that those are some ways you can put these elements together to get a much better approach to understanding how you evaluate a scholar.
COWEN: How would you apply mechanism design to improving admissions? It’s been very controversial. It seems unfair. Some people would say intensity of preference being counted is the problem. Do you agree?
WEYL: I think it’s really critical in admissions that we — and this is a really different element than intensity of preference — but that as the American system does, at the point when people need to make a lot of costly investments in figuring out what places they like, that they have a sense of who might let them in.
If you think about the medical match system that Al Roth is very famous for being involved in, and Gale and Shapley — they have a system where you rank all the institutions before you know which ones are going to admit you. That requires you either to do a huge amount of due diligence about all the different institutions or to make guesses about where you’re going to get into. That’s not a very effective process, even though it has some other properties that people have highlighted.
Something more like the way that we admit students to universities — undergraduate admissions — is more sensible. And I think there are ways to further improve on that: add more stages of letting in the top matches first — the people who most want to go somewhere and whom the schools most want — and allowing those parts of the market to clear, and then doing the other things later. That's a little bit like early admission, but actually making it much more finely graded.
COWEN: As I’m sure you know, at a school like Harvard or Princeton, you can’t just buy your way into getting a graduate admission. It’s run by the faculty, correct?
WEYL: Yeah.
COWEN: Could we do undergraduate admissions the same way? It would be a lot of work for the faculty, of course.
WEYL: It’s interesting. I think you’d probably have to filter a little bit. Maybe it should be graduate students who should be helping admit undergrads or something like that. I think it’s a very interesting idea.
COWEN: You wrote a paper in 2009 called “Whose Rights? A Critique of Individual Agency as the Basis of Rights.” Do you think now, standing in 2020, are individual rights ever an appropriate concept to invoke, to argue for or against a policy?
WEYL: I think almost any moral concept is a useful concept in certain contexts because all our ideas are proxies for some deeper truth that we don’t fully understand. I often make arguments about individual rights, individual liberties, even though I ultimately think we need to get past our standard conception of individuals as atomized and understand individuals more as being an intersection of different social circles that they’re a part of.
But of course, the more sophisticated these ideas are, the more true to reality, the more complicated and foreign they are. And we always need to strike a balance between clearly communicating and verisimilitude to the reality we’re trying to describe.
COWEN: You still think, in principle, that either group rights can be meaningful or even a component of an individual could have rights, and that there's no particular reason to necessarily stop at the level pinpointed by methodological individualism. Would that be a fair description of your view now?
WEYL: Yeah. I would enrich that story a little bit. I alluded to this in that original piece, but now I have a clearer sense of it. I think often the parts of individuals that we’re talking about actually are associated with various groups, so we should think of individuals as being made up of group identities to a large extent, and group identities as being made up of individuals to a large extent. So we should be moving towards a dual perspective on these things rather than a grounding that sees one as the endpoint that composes the others.
COWEN: But this would be one foundational reason why you’re less libertarian than maybe you might have been before you wrote the paper.
WEYL: Yeah.
COWEN: The practical reason would be increasing returns, correct?
WEYL: Yeah, and those are, I think, actually just two different ways of expressing the same thing. I view the fundamental role of groups as just a different way of expressing the notion of increasing returns.
COWEN: Do you have a unified theory of you and what you believe?
WEYL: I don’t often have time enough for the meta-rationality that that requires. David Foster Wallace was one of the most remarkable people at doing that sort of thing. I aspire to it, but I haven’t had quite enough time to figure it out.
COWEN: Here’s the unified theory of you, which I’m not endorsing, just playing with.
WEYL: Yeah. Yeah.
COWEN: At heart, coming out of the Jewish socialist tradition, through a matter of biographical accident, you first became a libertarian. Needed time to find your way back to the tradition you belonged to. Along the way, did economics, so you believe in some notion of markets, albeit directly adjusted by regulation and mechanism design. And you’ve moved away from methodological individualism.
But you’re this weird person of a Jewish socialist, believes in markets, and had this path leading away from libertarianism. No other person in the world probably is that, but you are. Is that a unified theory of you?
WEYL: Well, the thing that throws a little bit of a wrench into that is that I was actually a Jewish socialist before I became a libertarian.
COWEN: Does that strengthen or weaken the theory?
[laughter]
WEYL: Well, the thing that’s funny is that it’s certainly the case that I came back to identifying with my Judaism at around the same time that I was starting to move away from libertarianism. I don’t know if that’s because of the entanglement between the collective element of religion and the ideological element of this other stuff.
But my unified theory of me on those lines has always been that I’ve been someone who’s hugely about Hegelian synthesis and trying to find things that seem persuasive and to find a way to simultaneously fully embrace them both in my mind by finding some syncretic fusion of them. Intellectually, that’s something that is quite important to me.
I actually saw, from my senior year of high school, I had a capstone project, which was about conservative liberalism. And actually, if you read it, it reads a lot like what I’m writing recently. [laughs] So the reality is I think I have these themes of trying to find syntheses of different things, and those keep recurring and getting nuanced by the more I learn about different fields.
COWEN: As you well know, there’s a long-standing historical connection between Judaism and socialism: Karl Marx, Moses Hess, Eduard Bernstein. One could go on with this. What do you think, ultimately, is the foundational reason for that historical connection, and yourself as well, right?
WEYL: Well, I also think there’s a deep historical, maybe even stronger historical association between Jews and capitalism. I think it really has to do much more with just abstraction and the ways in which Jews have engaged with the economic world, coming from the ways in which they’ve been able to express their political voice, the fact that there was literacy much earlier in the Jewish community than there was in many other communities in a broad scale. I’ve actually written about that issue and why Jews have been so engaged with economics.
But I don’t think it’s really socialism in particular. It’s both socialism and capitalism. If you look at the Nazis, they often depicted one Jew of socialism and one Jew of international capitalism, both eating the German nation, so Jews have always been put in that position of representing these abstracted economic systems, rather than one or the other in particular.
On the Glen Weyl production function
COWEN: Our final segment is about what I call the Glen Weyl production function. This is about you. Simple question: at Princeton, as an undergraduate, why did you rollerblade to class?
WEYL: I had always been into rollerblading since I was very young, and I thought it would be a good way to get around Princeton, though the hills ended up posing a big challenge for getting around on rollerblades. I pretty quickly abandoned it for that reason. I didn't really like biking like other kids did. And luckily, it caught the eye of my future wife, so that was great.
COWEN: What’s your own account of why you were so successful before the age of 27?
WEYL: I think I developed intellectually much more quickly than a lot of my peers, and I developed physically and emotionally a lot more slowly. Eventually I had to balance those things out.
But as it turned out, that made me very unsuccessful until I got into high school, very successful from high school through the very beginning of my career. And then, I faced a number of challenges because of it after that. There are just different times in life where different forms of development are more important than others.
COWEN: If you’re looking for talent in young economists, other than the obvious, like people who work hard, what is it you look for?
WEYL: I look for people who have an ability to see beyond the ways in which the field shapes them to see, while at the same time internalizing it, who can sort of live within the world of economics and then also see it from the outside.
COWEN: What do you view yourself as rebelling against? The foundational level.
WEYL: Oh, many things.
COWEN: Look at Robin Hanson. Robin, to me, is rebelling against hypocrisy. I think he even might agree with that. What are you rebelling against?
WEYL: I think I’m most deeply rebelling against the separation between the role of the expert and the role of the politically engaged person. I grew up wanting to be a politician for long periods, and also wanting to be a physicist for long periods, and I’m deeply frustrated by the ways in which these things are these separate and contradictory roles in our society. I’m struggling to straddle the divide.
COWEN: Well, that’s a good answer. But if you had to boil it down to something more foundational, what institutional failure or what personal quality lies behind that? What would that be? Why do we screw that thing up?
WEYL: Singular identity is one way of putting it. Many people who are economists think they’re an economist. Many people who think that they’re libertarian think they’re libertarian. Every identity that I’ve been part of, that I thought I believed in, ended up having so much corruption entwined in it, and ultimately, it’s the plurality and intersection of those things where I find meaning. It’s that sort of singular definition of what I am, who I am that I find most constraining.
COWEN: So people aren’t Hegelian enough, and there’s a lot of corruption out there. And that’s a big part of what you’re rebelling against.
WEYL: Yeah.
COWEN: Let’s say I’m a young person. Maybe I want to do economics, or maybe I want to be a politician, or I’m conflicted. And I go to you, and I say, “Glen, what can or should I do to become more Hegelian?” What’s your advice?
WEYL: Travel in different circles. Take them all really seriously, and don’t let yourself totally compartmentalize them. Ask why there are contradictions and what it means. And don’t get intellectually lazy about just writing it off to people being different.
COWEN: Glen Weyl, thank you very much. Again, for our listeners and readers, I recommend you all read Glen’s new paper, coauthored, on how to fight the pandemic. Thank you, Glen.
WEYL: Thank you so much, Tyler.
Pangeo and Kubernetes, Part 2: Cluster Design | In my last post, we looked into Pangeo’s cloud costs and discussed what it would like to budget for a typical Pangeo research cluster. In this post, I’ll present a technical and opinionated design for a typical Pangeo Kubernetes cluster. I’ll focus on the design features that impact scaling, cost, and scheduling and discuss some recent improvements to JupyterHub, BinderHub and Dask-Kubernetes that were implemented to improve behavior in these areas.
To review, we’re interested in deploying a Kubernetes cluster with this basic node-pool configuration:
1. Core-pool: This is where we run things like the JupyterHub and other persistent system services (web proxies, etc.). We keep this as small as possible, just big enough to run core services.
2. Jupyter-pool(s): This is an auto-scaling node pool where we put single-user Jupyter sessions. By autoscaling, we mean that the size of the node pool (number of virtual machines) increases/decreases dynamically based on cluster load.
3. Dask-pool(s): This is a second auto-scaling node pool designed to run dask-kubernetes workers. The node pool is set up to use preemptible (aka spot) instances to save on cost.
Motivation
Before we get started, let me list a few of the reasons the enhanced cluster design discussed here is needed:
Optimize for rapid scaling — users want clusters to scale up quickly and admins want clusters to scale down quickly when load decreases. Early versions of Pangeo on Kubernetes suffered from long scale down times (see this GitHub issue for one such example). We determined there were two issues contributing to this problem. First, Kubernetes system pods were drifting onto nodes intended for Jupyter and Dask pods, preventing seamless scale down. Second, pods were often packed into nodes inefficiently, leading to low utilization.
Explicit node pools — We previously used a Kubernetes feature called node selectors to control where pods were scheduled. This worked well for most pods intended to be scheduled in the core and jupyter pools, but it didn't provide any way to keep system or dask pods where they belong. Additionally, it required users of dask-kubernetes on Pangeo deployments to implement a specific dask worker selector in their configuration. In all, this just ended up being really brittle and failing too often.
Cost control — related to the two points above is the issue of cost controls. The main point here is that we want to make sure we're always using preemptible instances for dask-workers, and we need ways to ensure this happens.
So, given these failures, we set off on a quest to improve the infrastructure around scheduling Kubernetes pods for Pangeo deployments.
Keeping pods where they belong
Because we have (at least) three node pools with varying characteristics, we want to introduce some tools for herding pods into the right places. Ultimately, we want to make sure the JupyterHub pods stay in the core-pool, the user pods stay in the Jupyter-pool, and the Dask pods stay in the dask-pool. Kubernetes has two concepts that allow us to control when, where, and why pods are scheduled to specific nodes. The first concept is pod affinity and the second is taints and tolerations.
Node Affinity — the concept of node affinity gives us the power to attract pods to specific nodes. Pods can have affinity for being scheduled on certain nodes. There are three different kinds of affinity, and each can be either preferred or required, which is also sometimes described as soft or hard affinity. The three kinds of affinity a pod can have are node affinity, pod affinity, and pod anti-affinity.
If a pod has a node affinity, it will want to be scheduled on a node with a certain label.
If a pod has a pod affinity, it will want to be scheduled on a node that already has a certain pod identified by a provided label scheduled on it.
If a pod has a pod anti-affinity, it will want to be scheduled on a node that does not already have a certain pod, identified by a provided label, scheduled on it.
As we discussed above, node affinities can be based on a variety of markers, including node labels. In our case, we want all single-user Jupyter sessions to be attracted to the jupyter-pool, and we can accomplish this by adding the hub.jupyter.org/node-purpose: user label to the jupyter-pool nodes. We then rely on built-in node affinity settings in Zero-to-JupyterHub to do the rest.
# Jupyter-notebook-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: hub.jupyter.org/node-purpose
                operator: In
                values:
                  - user

# Jupyter-pool-nodes
metadata:
  labels:
    hub.jupyter.org/node-purpose: user
In the deployment example below, we’ll repeat this approach for the other node pools. More details on the affinity concepts can be found in the Kubernetes documentation.
Taints and Tolerations — the concepts of taints and tolerations give us the power to repel pods that aren’t intended for specific nodes. These controls are particularly useful in a few common scenarios that we run into:
We want to keep Kubernetes system pods in the core-pool. These pods are often long-running and can get in the way of autoscaling. By making sure all of our node pools (except the core-pool) have taints on them, we can effectively constrain system pods to the core-pool.
We want to limit certain pools to specific types of pods. For example, in our clusters, we want to only run dask-workers in node pools with the more cost-effective preemptible instances. By adding a taint to the dask-pool, we can repel all pods that are not dask-workers.
We will add taints to both the Jupyter- and Dask-pools. In the case of dask-workers, we can add a single taint (k8s.dask.org_dedicated=worker:NoSchedule) that dask-worker pods will tolerate by default. Again, the Kubernetes documentation provides a good bit of detail on this subject.
# dask-kubernetes-pods
spec:
  tolerations:
    - effect: NoSchedule
      key: k8s.dask.org_dedicated
      operator: Equal
      value: worker

# dask-pool-nodes
spec:
  taints:
    - effect: NoSchedule
      key: k8s.dask.org_dedicated
      value: worker
Scheduler
The Zero-to-JupyterHub project recently (v0.8) added a series of optimizations that give administrators options for managing the scheduling of pods controlled by JupyterHub. Two of these features are particularly impactful for Pangeo workloads. First, the userScheduler option packs user pods onto the most-utilized nodes when new sessions are spawned, helping the cluster scale down more efficiently. Second, the nodeAffinity option allows us to require that both user and core pods be scheduled on nodes that match the node-purpose label.
# JupyterHub values.yaml
jupyterhub:
  scheduling:
    userScheduler:
      enabled: true
    userPods:
      nodeAffinity:
        matchNodePurpose: require
    corePods:
      nodeAffinity:
        matchNodePurpose: require
Deploying our node-pools
The Pangeo documentation provides a step-by-step guide for setting up a Kubernetes cluster. Here, I’m simply going to extend that tutorial by adding the labels and taints described above. The rest of the setup of Kubernetes and Helm remains the same. I’ll provide an example of how to do this using GCP’s SDK but the basic pattern should be easily replicable on any Kubernetes deployment. Eventually, we’ll push most of this to Pangeo’s setup guide as well.
# core-pool
core_machine_type="n1-standard-2"
core_labels="hub.jupyter.org/node-purpose=core"
gcloud container node-pools create core-pool \
  --cluster=${cluster_name} \
  --machine-type=${core_machine_type} \
  --zone=${zone} \
  --num-nodes=2 \
  --node-labels ${core_labels}

# jupyter-pool
jupyter_machine_type="n1-highmem-16"
jupyter_taints="hub.jupyter.org_dedicated=user:NoSchedule"
jupyter_labels="hub.jupyter.org/node-purpose=user"
gcloud container node-pools create jupyter-pool \
  --cluster=${cluster_name} \
  --machine-type=${jupyter_machine_type} \
  --disk-type=pd-ssd \
  --zone=${zone} \
  --num-nodes=0 \
  --enable-autoscaling --min-nodes=0 --max-nodes=10 \
  --node-taints ${jupyter_taints} \
  --node-labels ${jupyter_labels}

# dask-pool
dask_machine_type="n1-highmem-4"
dask_taints="k8s.dask.org_dedicated=worker:NoSchedule"
dask_labels="k8s.dask.org/node-purpose=worker"
gcloud container node-pools create dask-pool \
  --cluster=${cluster_name} \
  --preemptible \
  --machine-type=${dask_machine_type} \
  --disk-type=pd-ssd \
  --zone=${zone} \
  --num-nodes=0 \
  --enable-autoscaling --min-nodes=0 --max-nodes=10 \
  --node-taints ${dask_taints} \
  --node-labels ${dask_labels}
What had to change to make this all work
Apart from the JupyterHub user scheduler that came out in v0.8 of Zero-to-JupyterHub, not much needed to change. We did make a few small, backward-compatible changes to Dask-Kubernetes (see here) in its version 0.8, but other than that, everything was already possible. There were also some similar fixes to BinderHub (see here) to help control the scheduling of build pods on binder.pangeo.io.
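To make the dask-worker side of this concrete, here is a minimal sketch of how a user launches workers with dask-kubernetes 0.8+ against a cluster set up this way. The worker-spec.yaml file name, the worker count, and the pod template contents are illustrative assumptions rather than part of our deployments:

from dask.distributed import Client
from dask_kubernetes import KubeCluster

# worker-spec.yaml is an ordinary pod template (image, CPU/memory requests).
# dask-kubernetes >= 0.8 adds the k8s.dask.org_dedicated=worker:NoSchedule
# toleration to worker pods by default, so they are allowed onto the tainted,
# preemptible dask-pool without any per-deployment selector configuration.
cluster = KubeCluster.from_yaml('worker-spec.yaml')
cluster.adapt(minimum=0, maximum=20)  # let the worker pool scale with load

client = Client(cluster)  # connect a Dask client and compute as usual

The point is simply that, with the taints and default tolerations in place, the only pods that can land on the preemptible nodes are the ones designed to survive preemption.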
Wrapping up
Now we get to sit back and watch our clusters behave. We can already see higher use of preemptible node pools on our clusters:
Time series of costs on our main Kubernetes cluster. We transitioned to the new cluster design around May 6th, at which point the usage of preemptible instances (red and green) increased.
Going forward, there are some interesting experiments we’d like to mix into these concepts. Two that come immediately to mind are:
including using GKE’s node-auto-provisioning for automatic management of multiple node pools.
using the userScheduler to manage node assignment for dask-workers.
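For the node-auto-provisioning idea above, the rough shape of the experiment (an assumption on my part, not something we have deployed) would be to let GKE create and delete node pools on our behalf within resource ceilings, something like:

# ceilings below are placeholders; tune to your budget
gcloud container clusters update ${cluster_name} \
  --zone=${zone} \
  --enable-autoprovisioning \
  --min-cpu=1 --max-cpu=400 \
  --min-memory=1 --max-memory=2600

Whether auto-provisioned pools can be made to respect the taint and label scheme described above is exactly what we would want to evaluate.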
Thanks to Erik Sundell, Yuvi Panda, Tim Head, Jacob Tomlinson for their help in the various development efforts that contributed to this work. | https://medium.com/pangeo/pangeo-cloud-cluster-design-9d58a1bf1ad3 | ['Joe Hamman'] | 2019-05-31 01:44:48.021000+00:00 | ['Pangeo', 'Dask', 'Kubernetes', 'Jupyterhub'] |
File Management for Designers | File Management for Designers
A simple yet comprehensive organizational system
Client projects have lots of moving pieces. I'm a designer on Creative Engineering at Google, which works with Google PMMs (Product Marketing Managers) to create websites and digital experiences. Our team itself comprises several creative and development agencies, and we work with both internal product teams and other external agencies.
As such, our projects can become extremely complex, and require strong organizational infrastructure on every level. This is my organizational system.
Benefits
I use this organizational system to accomplish the following: | https://medium.com/google-design/file-management-for-designers-3bc04216a9ec | ['Neil Shankar'] | 2019-06-03 23:38:34.785000+00:00 | ['Design', 'Management', 'UX', 'Organization', 'UI'] |
An Introduction to Asch Chain Interoperate Protocol | Last month Asch released the chain interoperate protocol, with which Bitcoin can transfer to Asch chain and be used in different DApps. So how is this protocol complemented?
Asch’s chain interoperate protocol is a two-way peg protocol based on multisig federation. There are two kinds of accounts on the Asch chain: normal user accounts and gateway accounts. It is the gateway accounts who handles different kinds of transactions between Asch chain and Bitcoin Network. Every transaction can be checked by the user on the blockchain explorer. | https://medium.com/aschplatform/an-introduction-to-asch-chain-interoperate-protocol-768fb49754ef | [] | 2018-06-26 03:45:10.798000+00:00 | ['Blockchain Technology', 'Asch', 'Development', 'Interoperability', 'Bitcoin'] |
Pros and Cons of Kotlin for Android App Development | Thanks to the solid support from Google and Jetbrains, both developers as well as businesses, Kotlin’s adoption is on a quick rise.
In fact, according to Kotlin’s official website, many tech giants including Uber, Evernote, Pinterest have adopted Kotlin for their Android version of mobile apps.
If you’re also considering to use Kotlin for your next Android app project, be sure to first understand what exactly Kotlin is, its benefits, and disadvantages of this programming language.
What is Kotlin?
Kotlin is an open-source, statistically-typed programming language created by JetBrains.
It was named after Kotlin Island which is located near St. Petersburg, Russia and it basically allows creating code that can seamlessly run on the Java Virtual Machine.
Although, Kotlin’s syntax is not compatible with Java, the common-used language for developing Android apps, however, it can easily interoperate with the Java code.
In laymen’s terms, Kotlin code can easily work with Java exactly the way it does natively.
Apart from this, JetBrains has also enabled Kotlin to use aggressive inference to decide the type of expressions and values in case they weren’t defined by Android app developers.
This, in turn, makes Kotlin even more distinctive than Java.
However, the aim of this article is not to compare Kotlin with Java. Rather, the main focus is to discuss the pros and cons of using Kotlin for Android app development and determine whether it’s the right technology for your project.
Let’s start with the pros.
Pros of Kotlin for Android App Development
Kotlin is clearly loved by companies as well as Android app developers. Here’s the list of Kotlin advantages that makes it the prime option to build Android apps.
1 — Interoperability with Java Code
Kotlin is 100% Java-Interoperable programming language, which means switching from Java to Kotlin is a piece of cake for experienced Android app developers.
This is possible because Kotlin is consistent not only with Java but also with its frameworks and tools. And in case you have an existing Android app developed using Java language, you can still write new features or simply update your Android app in Kotlin language.
2 — Easy Maintainability
Most of the IDEs available in the market today provide support for Kotlin, which ultimately helps in maximizing productivity as it eliminates the need for learning new IDE for the developers.
3 — Boosts Team Efficiency
Thanks to its intuitive and succinct syntax, Kotlin is a compact and clean programming language that boosts team efficiency to a great extent.
In laymen’s terms, developers can get more work done using Kotlin compared to Java, as Kotlin takes fewer lines of code to build and deploy Android applications.
4 — Kotlin is Much More Reliable
Compared to other emerging programming languages such as Flutter, Kotlin is a much more mature programming language. It first came into existence in 2011.
As far as reliability is concerned, ever since Kotlin’s inception, it has gone under numerous Alpha and Beta tests before JetBrains finally released its final public version.
In fact, JetBrains also made Kotlin’s latest version reversely compatible with many of its previous language versions, giving much more reliability for the existing Kotlin-based Android apps.
Cons of Kotlin for Android App Development
Just like anything in life, Kotlin is not a perfect programming language. And if you’re going to use Kotlin to build Android app for your startup, business, or Enterprise, it’s imperative to take its cons into account before making the final decision.
1 — Fluctuation in Compilation
In many cases such as performing incremental builds, Kotlin is faster than Java, there is no doubt about it. However, Java remains a clear winner when it comes to creating clean builds for Android apps.
2 — Less Talent for Hire
Although, Kotlin’s increased popularity, especially after Google announced it as the first-class Android app development programming language, has certainly increased the number of Kotlin app developers in the market.
There are, however, still less number of Kotlin developers available in the market compared with Java developers.
3 — Limited Learning Resources
Though the number of Android app developers switching to Kotlin is increasing almost every day, there is still limited number of resources available in the market to learn and master Kotlin.
This basically leads to extra time to try and figure out how to do or build certain things in an Android app using Kotlin programming language.
The good news is, a lot of Android app development companies have mastered the language even with limited resources, so building a seamless Android app for your business using Kotlin is no longer an issue.
4 — Kotlin is Still Not Java
While Kotlin and Java have a lot of similarities, making the switch from Java to Kotlin will take some time for developers to get familiar with how everything works in the Kotlin programming language.
This means if you’re an existing Java-based Android app and want to switch it to Kotlin, additional expenses for training the team will be definitely required.
Final Thoughts…
Having official support from tech giant Google is definitely a clear sign that the future of Kotlin is bright.
However, this doesn't mean that you need to hurry to replace your existing technology stack with Kotlin. So, even if you have an existing Android app, try baby steps instead.
What I mean is, instead of rewriting the entire application in Kotlin, you can build the next feature in Kotlin. This will help you get better familiarity with Kotlin while still maintaining an efficient Java-based Android app. | https://medium.com/quick-code/pros-and-cons-of-kotlin-for-android-app-development-c4b0f95c1324 | ['Sandeep Agarwal'] | 2019-10-22 17:38:35.899000+00:00 | ['Android App Development', 'AndroidDev', 'Kotlin', 'Android', 'Java'] |
Jeff Bezos’s Lazy Saturday Morning Routine | 5:15 A.M.: Internal performance optimization alarm activated. Eyes fly open. Strangled war cry escapes mouth. Fist reflexively punches air.
5:15–5:20 A.M.: Somersault out of bed. Land in plank position on bedroom floor. Command: “Alexa, read me unopened Jeff Bezos fan mail.” Do 200 push-ups to the sound of Alexa mechanically reporting a detailed sex dream sent in by Trisha Wagner, ‘your #1 fan in Haysville, Kansas.’
5:20–5:30 A.M.: Burn effigy of self. Whisper, “Always be better than yesterday.” Watch hungrily as effigy burns. Observe, pleased, as tiny robot sweeps ashes into tiny dustbin, opens drawer full of Jeff Bezos effigies, and replaces effigy in preparation for tomorrow’s ceremony.
5:30–5:35 A.M.: Tweeze one nose hair. Place carefully in vial and stopper. Add vial to Clone Library.
5:35–5:40 A.M.: Open hatch in bathroom floor and drop down into kitchen, landing in power lunge before fridge. Open fridge. Remove tray labelled “Optimized Protein Cubes.” Eat four.
5:40–6:00 A.M.: Open hatch in kitchen floor, landing in power lunge in the archery room. Command: “Alexa, play ‘Jeff Bezos’ Princeton Commencement Speech: Dubstep Remix.’” Acquire target, which is the words ‘Customer Dissatisfaction’ printed in the middle of a speech bubble emerging from Satan’s mouth. Archery practice.
6:00–6:15 A.M.: Take high-speed elevator from archery room back up to master bathroom. Fully submerge self in ice cold bath. Keep eyes open underwater. Require no breath. Feel no pain.
6:15–6:30 A.M.: While air-drying naked body, exfoliate, moisturize, and polish bald head.
6:30–6:45 A.M.: Remain naked. Open second hatch in bathroom floor, landing perfectly centered in the Warehouse Control Room, atop the Amazon Hover-Board (patent pending). Fire up 360-degree warehouse security monitor. Crack knuckles.
6:45–6:47 A.M.: Alert! Worker #886 in Warehouse #53 (Santa Fe) has stopped working to massage back of neck. Hover-board violently over to Screen #53 and press “ZAP.” Watch as biohacked neuron sends signal to worker, who jumps and frantically grabs next parcel. Feel wave of intense pleasure course through veins.
6:47–7:30 A.M.: Ride the Hover-Board methodically between warehouse screens, zapping inefficient workers.
7:30–7:40 A.M.: Keeping eyes on screens, record a few lines of stark, powerful prose for chapter eight of autobiography, ‘Extremely Clever and Incredibly Rich; Also, Very Hot: Jeff Bezos on Jeff Bezos’ (“Phase Six: Acquisitions & Mergers of ALL OF IT”).
7:40–7:45 A.M.: Take high-speed elevator from Warehouse Control Room to master bedroom. Open closet, which is empty save one black polo shirt, one pair of dark navy jeans, one pair of black aviator sunglasses, and “Jeff Bezos” name-tag. Apply all to body.
7:45–7:45.02 A.M.: Double finger guns in mirror.
7:45–9:45 A.M.: Open hatch in bedroom floor, landing in power lunge in the Clone Room. Have riveting conversation about warehouse worker performance optimization with Jeff Bezos clone prototype #8.
9:45–10:30 A.M.: Open hatch in Clone Room floor, landing in ‘Thinker’ pose in the Idea Room. Brainstorm ideas for Amazon corporate retreat. Decide on South American portage trip ending at life-threatening Class 5 white-water rapids in Chile. Survivors will be fired. Successful completion of challenge suggests winners have spent time improving physical fitness and are therefore insufficiently committed to The Company.
10:30–10:31 A.M.: Think about divorce by accident. Is she happy? Is she happier? Mutter: “No, Jeff.”
10:31–10:31.003 A.M.: Scan brain for topic other than ex-wife. Land immediately on customer satisfaction optimization.
10:31.003–11:15 A.M.: Analyze Amazon customer data to determine best-selling product. Discover that it is a pair of artificially intelligent jazz shoes that rewires your central nervous system to make you a ‘real snazzy cyborg dancer,’ called the JiggyBoots™.
11:15–11:45 A.M.: Develop blueprints for near-identical alternative to the JiggyBoots™. Call them the Amazon JiggyBootz™. Make them voice commandable, and sell for half the price.
11:45–11:50 A.M.: Delete unread e-mails from charities asking for donations.
11:50–11:59 A.M: Generate list of junior executives who will be the lucky recipients of Saturday afternoon surprise performance reviews.
12:00 P.M.: Internal performance optimization alarm activated (PHASE 2: AFTERNOON). Open hatch in Idea Room floor, landing in driver’s seat of giant Amazon drone. Punch in address of first lucky junior executive. Double finger guns in rear-view mirror. Game on. | https://medium.com/slackjaw/jeff-bezoss-lazy-saturday-morning-routine-4f729987052e | ['Molly Henderson'] | 2020-09-17 00:34:07.499000+00:00 | ['Morning Routines', 'Jeff Bezos', 'Amazon', 'Technology', 'Humor'] |
The problems of 2018 and the solutions of 2019 | The development of the Futourist platform started in Q2 of last year and really got to speed in Q3. Trying not to scale too fast, but still, deliver the MVP product to the market fast, proved to be a real challenge. The development team was only 3-man strong at the beginning and 5 at the end of 2018. The complex nature of our platform (video streaming, multiple front-ends, mobile apps, web, blockchain,…) forced us to pick our priorities. To optimize the development costs and speed, we decided to go completely “serverless” with Firebase and Cloudinary on the back-end side. On the front-end, we decided to go with ReactJS for the web and React Native for our mobile apps. The programming language of choice was JavaScript, across all those technologies. And that was a great idea since the team at the time were JavaScript web developers. Except for one thing. React Native really isn’t just JS.
We delivered on the backend and a limited version of most of our web front-ends. The backend was set up, the admin dashboard running, the web app released in beta, and the business dashboard in testing. But things got really messy with our mobile app, arguably the most important one. React Native proved to be the wrong choice for our team. With no native developers in the team and not many resources to spend, we were in a constant battle with buggy 3rd-party or open-source code. In the end, our conclusion is that React Native demands native developers for any medium- to high-complexity app. This one we had to scratch off and throw away entirely.
The New Year
The situation for the New Year was more than comfortable for Futourist. With ETH reaching absurdly low levels, it became obvious that our runway got so short that we might not even be able to take off.
There is one more thing to know here. Due to the nature of the subject, this is going to be a short paragraph, but to understand the situation further, you need to know that a terrible family tragedy struck in our team before the end of last year. Our CTO was away for months after that, spending most of his time in the hospital. It was up to the rest of the team to handle things, restructure and move on, which took extra time. Something major had to be done at that point if we were to fulfill our promises.
The Pivot
The decision, in the end, was not hard. In a matter of days, we pivoted from a product-oriented company to a full-scale digital agency. We set the remaining ICO funds on the side and started earning our own money.
In Q1 and Q2 of 2019, we generated a six-figure revenue, which was enough to sustain the team over this period without digging into investments. We also decided to part ways with certain team members who were not the right fit for the company. We optimized our expenses and reduced our burn rate significantly. The development team itself has made significant know-how progress. We are actively looking for other means of financing, and have made progress there as well (another blog about this coming soon).
We feel the team is stronger than ever. It had some serious ups and downs, but right now we are working as hard as possible and the determination to deliver is at its peak. It hurts us deeply when Futourist is marked with words like “scam”. We are far more than that and we’ll do everything to prove that.
The restart of Futourist
The process of restarting the development of Futourist has now begun! Plans are being made on how to tackle the mobile app problem, as well as on how to progress with web front-ends. Please understand that things take time. We are still involved in the digital agency space, which is good, we have an additional stream of revenue. There is still some cleanup to do after this 6-month fallout and the developers who are not here anymore. But as said before, the team is strong and we will do everything to deliver. If no more unpleasant surprises come along the way, and crypto markets hold on, you should be seeing more deliveries from us in the autumn.
The strong supporters | https://medium.com/futourist/the-problems-of-2018-and-the-solutions-of-2019-8bec18ae6128 | ['Ziga Luksa'] | 2018-12-29 00:00:00 | ['React'] |
New Educational Branch! | Fellow Cryptolords!
We’re continuing to unveil the new Worldopo Concept we are working on.
Today we’re presenting highbrow Tech/Educational branch.
The Purpose of the Educational Branch
Even highly automated industries need skilled professionals to work in plants, electric stations, financial establishments, leisure sites, and all the other branches of Worldopo's business empires. It means that units need to train somewhere. That's the first role of Tech/Edu buildings.
The Human Resource distribution model of Educational Branch
Let’s take a closer look at how we designed a new Educational Branch.
Entry-level Human Resources
First, players need low-level workers to manage entry-level buildings. They can be trained in Headquarter’s HR Department.
Advanced Human Resources
As the base development level improves, you must hire more skilled workers and unlock new professions for new buildings. That kind of staff can be unlocked in a separate HR Agency building.
Hi-End Human Resources
Meanwhile, hi-end professionals for premium limited buildings can be trained in Universities, which are also premium and limited.
Another key feature of the branch is the Research Lab building. Players need it to unlock certain hi-end technologies on the Technologies diagram.
Have a nice day, Cryptolord! | https://medium.com/worldopo/new-educational-branch-590c5179b3e | [] | 2020-07-20 15:15:52.153000+00:00 | ['Gamedev', 'Concerts', 'Games', 'Development', 'Worldopo'] |
The Need To Control Might Be Ruining Your Relationships | Undoubtedly, the biggest perspective shift in my life was surrendering to the idea of controlling things that are beyond my control.
I am going to do everything possible within my power to achieve this result, but not base my happiness on the outcome of the result itself.
I will give my all to fixing this relationship without any expectations from them to do the same.
The pandemic stopped me from exploring this new city, so I will focus on building up my skills. This will enable me to offer some value to society when normalcy is restored.
We can’t change things like the pandemic & the behavior of others, but what we do have control over is how we approach and contextualize the problems.
Overcoming this was, and still continues to be, one of the toughest mental challenges I face every day. I use the word overcoming very intentionally in the previous sentence because it truthfully is a daily effort. It requires a great deal of intentionality to stop myself from slipping back into conditioned habits.
A bit of History
Stress, anxiety, worry, and the desire for control have all evolved over time. Mark Leary, professor at Duke University states, “A deer may be startled by a loud noise and take off through the forest, but as soon as the threat is gone, the deer immediately calms down and starts grazing. And it doesn’t appear to be tied in knots the way that many people are.”
Similar to the behavior of deer, our ancestors were equipped to react to dangers and threats in the environment by utilizing stress and anxiety to their advantage. However, once the threat was gone, their stress and worries subsided with them.
This is not the case anymore. We live in an environment that is drastically different from that of our ancestors. This mismatch between our old brain and a new environment is what leads to long-term stress, overthinking, and other such negative emotions that we must overcome.
Control in Relationships
The difficulty of giving up control is realized when we form a relationship with another being. It is a bit more difficult to control the actions and emotions of other people. I will go as far as stating that most relationships fail due to the perpetual battle for the need to control and dominate one another.
When two people start a life together, they enter the relationship with many expectations. They project these expectations onto their new partners with the assumption that their partner should understand exactly what their needs and wants are.
Hypothetically, let us assume that one person in the relationship becomes unhappy with the level of attention they receive from their partner. They confront their partner but get no apology in return. Instead, their significant other argues that they are being dramatic and making a big deal out of nothing.
“Me, wrong? Never.”
They get frustrated because they are not able to convince their partner to accept his or her fault. In other words, they are angry at not being able to control their partner's emotions. | https://medium.com/modernmeraki/the-need-to-control-might-be-ruining-your-relationships-7cb77ca8adf2 | [] | 2020-09-03 15:47:56.048000+00:00 | ['Self-awareness', 'Adulting', 'Relationships', 'Letting Go', 'Self'] |
Editing After Publishing | I occasionally make small edits to my stories after publishing. Typically due to a lack of proofreading diligence on my part. But also as the result of rushing to publish. Sometimes I realized I could have phrased something a little differently to clarify an idea or thought. Or to make it funnier. Which improves the story. So I have tweaked a sentence or two after publishing.
Obviously, it is better to edit and proofread stories more thoroughly before publishing, but I am not always as disciplined as I should be. Once I had a minor character appearing in a story after he had died a few paragraphs earlier. Oops. I loved the sentence he appeared in, but it had to go. The character was already dead. Luckily, I caught my mistake quickly and only one person had read the story. The other person who read the story got the corrected version.
I don’t think you should make drastic edits that change the meaning of a story. Especially when there are responses to the story. But if I can improve a story with a small tweak, I don’t think it is a crime.
What is your opinion? | https://medium.com/mark-starlin-writes/editing-after-publishing-c8f635bad887 | ['Mark Starlin'] | 2019-08-19 17:44:35.891000+00:00 | ['Essay', 'Publishing', 'Writing', 'Editing', 'Stories'] |
Forget Work-Life Balance: It’s All About the Blend | Forget Work-Life Balance: It’s All About the Blend
See why and get 5 tips to make it happen
No matter what we try, work-life balance always seems like a destination that we have yet to reach. It’s around the corner, out of our grasp.
Work-life balance.
It sounds nice, doesn’t it? We all say we want it, and why wouldn’t you? You envision that perfect 50/50 balance point, where you magically finish everything you need to do at work and still have time left over for going to hot yoga, making homemade bone broth, getting 8 hours of sleep, and everything else Instagram tells you to do to be a well-rounded human.
Reality looks a little more like this: You’re working on that report but you have to leave the office early because you haven’t been to the dentist in an embarrassingly long time. Or you’re trying to meal prep at home when an important email comes in, and next thing you know you’ve burned everything and you’re stuck eating instant ramen for lunch tomorrow. Or any one of about a thousand other scenarios that have happened to all of us, pretty much every single day.
Simply put, when you’re at work, your personal life seeps in, and when you’re at home, your brain’s often still at work. More frequently, it’s a combination of all those things, happening all at once. And when you have that paragon of balanced perfection in mind, the constant spillover effect can make you feel as though you’re failing on both fronts.
No matter what we try, work-life balance always seems like a destination that we have yet to reach. It’s around the corner, out of our grasp. Maybe, we think, we could get there if we rearranged a little, woke up earlier, or just tried harder.
But maybe the problem isn’t what we’re doing, but rather the concept of work-life balance itself. Perhaps it’s time for a new standard: work-life blend.
A healthy balance
An American Sociological Review study found that seven out of ten US workers struggle with this issue, so you’re not alone. But figuring it out is really important. Not just for your own sanity, but for your health, your productivity, and your company’s bottom line.
One study found that work-family conflict can increase poor physical health by 90 percent, while another found that work-induced stress can increase your risk of mortality by almost 20 percent. But reducing work-life stress brings numerous benefits, such as lowered hypertension, better sleep, less alcohol and tobacco use, decreased marital tension, and improved parent-child relationships. So it turns out how you work affects how well (and how long) you live.
Given how important it seems to be, why is ‘work-life balance’ so hard to actually achieve?
Finding the right words
Meetings and presentations, errands and appointments, conference calls and research, laundry and takeout, pets and sippy cups — they’re all threads in the fabric of this little thing called life.
In some ways, the very idea of work and life as two things to be balanced sets us up for failure.
For one thing, ‘balance’ implies that one of those components is a negative that needs to be counteracted, like the dark side of the force. But there’s nothing negative about having a job and a life.
More importantly, work really isn’t this ‘other’ thing overshadowing your life. It’s a huge part of your life. Even if you’re not incredibly passionate about your day job, it’s still where you probably spend the bulk of your time.
Meetings and presentations, errands and appointments, conference calls and research, laundry and takeout, pets and sippy cups — they’re all threads in the fabric of this little thing called life. In pursuit of work-life balance, we treat them as different entities, trying to separate the individual strands. It’s a stressful, unrealistic, and unnecessary exercise to put ourselves through.
So ‘work-life balance’ just isn’t working anymore. We need something different. Something more fluid. Something that captures the way we actually work, live, and do all the things we do in between when our eyes first flutter open and when our heads hit the pillow again at night. We need to be focusing more on work-life blend.
How to actually build work-life blend
Work-life blend doesn’t mean that everything is happening at the same time, all the time. It’s about finding a way to fit together the important pieces.
The truth is that it’s going to take some effort to pivot from the ideal of work-life balance to being content with the reality of work-life blend. It will be messy, and it will be hard, but it’ll be worth it.
Here are some tips for cultivating and practicing work-life blend:
1. Acknowledge the blend.
As with almost anything, the first step is acknowledgment. We need to come to terms with the fact that work-life blend is how our life actually is, instead of striving to create perfection. We can’t let the amorphous pressure to ‘have it all’ pour in through the seams, making us feel like failures.
This can be hard, especially when you’re scrolling through a feed of perfectly crafted photos from people who appear to have it all figured out. “A lot of people try or claim that they have perfected balance. But in reality they’ve just drastically deprioritized, so they really are just working on fewer things,” says Joshua Zerkel, a certified professional organizer, productivity expert, and former head of community at Evernote. “The key is to accept reality and then come up with some strategies to prioritize within your blended lifestyle, knowing that’s the playing field,” he continues.
2. Be clear on your priorities.
Part of the reason why work-life balance often doesn’t work out is that it’s pretty tough to do it all. “The biggest challenge people run into with trying to have a balanced or even blended life is that they want to fit all of it in,” Joshua observes.
And doing all of the things is not really a plan (nor is it balance). Work-life blend doesn’t mean that everything is happening at the same time, all the time. It’s about finding a way to fit together the important pieces.
“To me, work-life blend is like Tetris,” Joshua says. “You have to fit the pieces of your life in in a way that makes sense to you. The difference is that you’re choosing which blocks to fit, instead of just having this big pile of blocks in the corner giving you anxiety.”
Figure out the key components that you want to get to in your days, whether it’s fitness, self-care, meals with the family, and schedule them on your calendar at a regular cadence. Treat them with the seriousness you bring to meetings and deadlines at work.
3. Set boundaries.
Once you’ve determined the pieces that matter most to you, you need to carve out time to make them happen. “I’m a big fan of time-boxing things,” Joshua says. “Give yourself time and space for personal things and then for work things. If you have a loose framework laying out where you intend to spend your time, it won’t feel like this big overwhelming mess.”
Of course, the other piece to this is knowing that sometimes your boundaries will change and bleed over, and you have to be okay with that. “Your time boxes will definitely break,” Joshua observes. “It’s okay if you run over working on your project or miss family dinner this week.” Acknowledging that things are imperfect and will naturally overlap is key to making it work. Your boundaries can’t be so rigid that they won’t bend to give way to the irregularities of real life.
Even if you can’t eliminate overlap, you can minimize it. Try out small tactics, such as using a different computer to get personal tasks done so you’re not tempted to check those Slack messages.
4. Check in on how you’re doing.
After you’ve identified your priorities and set up rough guidelines for how you want to allocate your time, you need to check in with yourself and see how your new approach is making you feel.
Ryan Smith, co-founder of Qualtrics, developed a weekly system to evaluate his progress. “Each week, I examine the categories of my life — father, husband, CEO, self — and identify the specific actions that help me feel successful and fulfilled in these capacities,” he says. “This weekly ritual helps me feel like I’m doing everything in my power to address my needs and the needs of those around me.”
Whether it’s in a journal or with a template in Evernote, track how you’re feeling in regards to work-life blend on a daily, weekly, or monthly basis. If it doesn’t feel like the right mix, come up with some tactics to adjust.
5. Understand it’s a process.
As with any kind of new habit or change, this is not something that’s one and done. You can’t just check work-life blend off your to-do list. “It’s tempting to think, okay, tomorrow I’m going to have work-life blend,” Joshua says, “but of course, it doesn’t work that way.”
It’s important to be okay with adapting and evolving; after all, work-life blend means that there aren’t specific ratios or quotas you have to hit. “These are steps in an ongoing process that doesn’t end until you die — or get lots of assistants to help you manage it all,” Joshua wryly observes.
You’ll always be tweaking and adjusting, and you’ll probably constantly feel like you’re not getting the ratios right, but as with any good recipe, it tends to work out when it all comes together. | https://medium.com/taking-note/forget-work-life-balance-its-all-about-the-blend-ad3115ed1fa4 | [] | 2018-02-06 14:56:01.549000+00:00 | ['Life', 'Work Life Balance', 'Personal Development', 'Work', 'Productivity'] |
What was wrong—and right—about Elon Musk’s infamous coronavirus tweet | Earlier today, Elon Musk tweeted “The coronavirus panic is dumb”. Here’s the evidence:
The Twitterverse didn’t waste time piling on top of him with insults, threats, and demands that we woke folks ought to “cancel” him. Yikes.
Why were so many Earthers upset? Perhaps it’s because:
1. they didn’t really grok what he was saying
2. tragedy makes us very sensitive to anything that seems to downplay it
3. we humans love lurking on social media waiting to pounce on an offensive celebrity
4. they’re, uh, PANICKING?!?
Maybe it’s more of a combination of all four theories? Either way, he’s attracted quite the boatload of righteous fury.
After the Twitterstorm, you know he’ll need some of the sticky-icky-ICKY this weekend
Let’s break down his sentence, which, c’mon, isn’t exactly a complicated quintet of words. The word “coronavirus” is just a modifier on the word “panic”, which is the subject. He was calling panic dumb, not the coronavirus. If you struggle to understand that, please find the nearest middle school English teacher and ask them to tutor you this weekend.
And Elon’s right. Panic makes people (and mobs) do bad things. It makes people loot and shoot and drives them apart at exactly the time they need to come together with cool heads and compassion. Panic makes bad situations worse. (bad situations, like, you know, global pandemics)
So what did he do wrong? For one, he lacked grace and charisma. Whenever death is involved, try to be compassionate by seeking to NOT trigger people. Just be cool, man.
Also, his brevity came off as a cocky superiority. If he really cared to quell the rising panic in the population, he should have used the other 200+ characters in that tweet to explain it a little further. Or maybe used a cutesy emoji. Or, OHOHOH, he should have used an awesome gif from some cult classic 90s movie!
Or, he could’ve just laid out the truth like this randomly-selected, incredibly handsome, and EXTRAORDINARILY woke (and humble) twitterbro: | https://medium.com/hifi-press/what-was-wrong-and-right-about-elon-musks-infamous-coronavirus-tweet-b8e48d5e6938 | ['Tom Sadira'] | 2020-03-07 00:54:26.202000+00:00 | ['Coronavirus', 'Elon Musk', 'Global Health', 'Twitter', 'News'] |
Twitter Sentiment Analysis Using Naive Bayes and N-Gram | Twitter Sentiment Analysis Using Naive Bayes and N-Gram
Analyzing the level of positivity of tweets
Photo by MORAN on Unsplash
In this article, we’ll show you how to classify a tweet into either positive or negative, using two famous machine learning algorithms: Naive Bayes and N-Gram.
First, what is sentiment analysis?
Sentiment analysis is the automated process of analyzing text data and sorting it into positive, negative, or neutral sentiment. Using sentiment analysis tools to analyze opinions in Twitter data can help companies understand how people are talking about their brand.
Now that you know what sentiment analysis is, let’s start coding.
We have divided the whole program into three parts:
Importing the datasets
Preprocessing of datasets
Applying machine learning algorithms
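To make these three parts concrete, here is a rough Python sketch of how they can fit together using scikit-learn. This is only an illustrative outline, not the exact code used in this article; the file name tweets.csv and the column names text and label are assumptions.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# 1. Importing the dataset (hypothetical CSV with 'text' and 'label' columns)
tweets = pd.read_csv('tweets.csv')
X_train, X_test, y_train, y_test = train_test_split(
    tweets['text'], tweets['label'], test_size=0.2, random_state=42)

# 2. Preprocessing: turn each tweet into unigram and bigram (n-gram) counts
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words='english')
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# 3. Applying the machine learning algorithm: multinomial Naive Bayes
model = MultinomialNB()
model.fit(X_train_vec, y_train)
print(accuracy_score(y_test, model.predict(X_test_vec)))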
Note: We have used Jupyter Notebook but you can use the editor of your choice. | https://medium.com/better-programming/twitter-sentiment-analysis-using-naive-bayes-and-n-gram-5df42ae4bfc6 | ['Siddharth Singh'] | 2020-08-18 19:56:22.359000+00:00 | ['Machine Learning', 'Python', 'Data Science', 'Programming', 'Sentiment Analysis'] |
New Book Releases: October 27, 2020 | MEMORIAL, Bryan Washington. A touching novel about a couple whose relationship is falling apart, as one goes to Japan to care for his ailing father and the other who stays in Houston with his boyfriend’s prickly mother. “A subtle and moving exploration of love, family, race, and the long, frustrating search for home,” writes Kirkus in a starred review. Like one of my favorite books of the year, The Margot Affair, this is highly recommended for people who love food. Aggregated critical reviews, Bookshop.
GONE, Linda K. Olson. In this memoir, a radiologist details her recovery from a train accident that resulted in the loss of both her legs and an arm. Bookshop.
WARRIORS OF WING AND FLAME, Sara B. Larson. Two sisters must cross between worlds to save their family from an evil sorcerer in this YA fantasy sequel to Sisters of Shadow and Light. Bookshop.
INSIDE STORY, Martin Amis. An autobiographical novel about the friendship between Amis and the late Christopher Hitchens. Aggregated critical reviews, Bookshop.
THE COLD MILLIONS, Jess Walter. From the author of Beautiful Ruins, a novel about two brothers fighting for survival, justice, and financial stability in the early 20th century. “I haven’t encountered a more satisfying and moving novel about the struggle for workers’ rights in America,” writes the San Francisco Chronicle. Aggregated critical reviews, Bookshop.
LOVE YOUR LIFE, Sophie Kinsella. After a painful breakup, a woman goes on a writer retreat that requires its attendees to adopt fake names, where she falls in love with a participant of a nearby martial arts retreat. Can they make their relationship work in real life, too? Bookshop.
A note: I use affiliate links, so when you buy a book through my link, I get a small commission! This doesn’t affect the books I choose, obviously! | https://angelalashbrook.medium.com/new-book-releases-october-27-2020-7e98e455050e | ['Angela Lashbrook'] | 2020-10-26 08:12:12.718000+00:00 | ['Books', 'Literature', 'Reading', 'Fiction', 'Culture'] |
Where #ImFrom: Both sides of the hyphen | By Ngozi Onike
I spent most of my childhood navigating two cultures. I grew up in South Bronx with Nigerian parents who never taught me to speak their Igbo dialect, but expected me to understand it. I was taught to greet my elders with my right hand and never pass anyone an item with my left. I spent my Saturday mornings listening to Peter Nwokocha’s music as I did my chores, and every Sunday afternoon, I quietly sucked my teeth as I ate white rice and peppery tomato stew. I wanted McDonald’s so bad.
Ngozi and her mom at home in the Bronx, New York. (Photo courtesy of the author)
I grew up dreading roll call on the first day of school, listening to the teachers pronounce my name “Nah-go-zee” as I wondered why my parents did not choose the name Nicole or Naomi instead. I laughed awkwardly at jokes about being an African booty scratcher. I even once had a classmate ask me if Africans chew on the extra string that sometimes hung from my shirt. And when asked if I spoke “African,” I would give a blank stare and reply, “no.” I did not correct or inform; I just said no.
Though I was born in the United States, I never really felt “from” here. I lived in the same neighborhood for 25 years and itched to leave for most of it. It didn’t matter that I had only visited Nigeria three times. No one cared how many pop song lyrics I had memorized, and I did not get cool points for knowing the best places to get pizza or Chinese food. My name and dark skin always made me different, and I was uncomfortable. I was undeniably African.
My perspective began to change in college when I made the decision to embrace and identify with my “Nigerianness.” Freshman year, I joined the Nigerian Students Association. I stayed on campus after classes rehearsing with the club’s dance group. If you passed by my dorm room, you would likely hear Wizkid or D’Banj blasting from my speakers. My friends and I bonded over stories of Sunday rice and stew. Most importantly, my name was pronounced “in-guh-zee” not “nah-go-zee.” It was correct, and I liked it.
Ngozi, her mom and a family friend at a wedding party. (Photo courtesy of the author)
I was proudly Nigerian, and though many responded to my pride with interest and acceptance, I was occasionally reminded I still didn’t quite belong. I had family members who called me “the American.” I would sit quietly, like an outsider, as I listened to my cousin tell stories about gatherings at my grandmother’s house in Osina village. I still felt guilty for avoiding phone conversations with her before her death in 2011, embarrassed that I could not respond to her in Igbo and English was not easy for her to understand.
So where do I belong? Where am I really from?
It took some time, but recently, I found peace in my place between Nigerian and American. I have settled here, occasionally pulling from both sides of the hyphen freely and without self-placed pressure. I am comfortable here. I met my Nigerian-Ivorian husband on this border line, and I gave birth to my daughter here. It won’t be long before she too will wonder where she is from, and I hope, like me, she will eventually find her home.
This is part of a series called #ImFrom, where members of the AJ+ community share personal stories about the question, “Where are you from?” | https://medium.com/aj-story-behind-the-story/where-imfrom-both-sides-of-the-hyphen-a0679e9fb47b | ['Aj'] | 2017-03-24 23:09:18.763000+00:00 | ['Nigeria', 'Family', 'Culture', 'Im From', 'New York'] |
Learning D3 — How to Build a Tree Chart w/ Line-by-Line Code Explanations | Line 38: Use d3.tree() to create a tree layout with a size of 600px*500px
Line 39: Pass our previously formatted treeData through treeLayout() to assign the x and y positions for the nodes, which can be accessed through d.x and d.y afterward.
Line 41: Declare a variable parentsNumber to store the number of nodes that have children, we will style them differently than non-parents.
Line 45: Select the SVG canvas and append g elements in the class of nodes — assign g and nodes class to move and style all nodes together.
Line 47–53: We want to create nodes with children as circles. This is achieved by (i) selecting all the circles, (ii) passing in the array of nodes with children, accessed through treeData.descendants().slice(0, parentsNumber), (iii) appending the circles and assigning them a class of circles so we can style them together later, (iv) positioning the circles: by default the tree chart is laid out top-down, but because we want to draw it from left to right, we assign the positions using the translate() function with d.y and d.x instead of d.x and d.y, and (v) finally, assigning the circles a radius of 8.
Line 56–64: We want to create nodes without children as rectangles. This is done in a similar way to the step before. The only thing to note here is when drawing the rectangles, we assign a width based on the number of characters of the text, and the y position of the rect (top left corner) is adjusted to be centered in relation to the text and link we will build later. | https://medium.com/javascript-in-plain-english/learning-d3-how-to-build-a-tree-chart-w-line-by-line-code-explanations-958e04153dba | [] | 2020-12-21 16:53:29.976000+00:00 | ['Data Visualization', 'JavaScript', 'Web Development', 'Design', 'UX'] |
How to Mindfully Tune Yourself to the Frequency of the Woods | How to Mindfully Tune Yourself to the Frequency of the Woods
Listening to nature, talking to trees, and connecting with the Earth can enhance your mental health
Photo by Lukasz Szmigiel on Unsplash
We walk among the trees. They speak to us. My daughter and I listen to them with our hearts open. Towering above us and sometimes ruffling their leaves, they observe us with the wisdom of the ages. “They know so much more than we ever will,” I reflect. We put our heads up against the trunks to receive some of that wisdom. Their roots make a woven network of life below the dirt we walk on. They thrive while staying still.
The trees are our friends.
Yet, some humans climb on them, breaking weak branches. Some carve their names in them, causing scars for life. And some cut them down.
I want to show you the mental and healthful benefits of walking mindfully and connecting to the forest, hoping that it will allow you to increase your awareness of yourself and of life all around us. | https://medium.com/mindfully-speaking/how-to-mindfully-tune-yourself-to-the-frequency-of-the-woods-88ceff066964 | ['Emily Jennings'] | 2020-12-29 02:10:38.687000+00:00 | ['Mindfulness', 'Outdoors', 'Mental Health', 'Nature', 'Self'] |
Why Apple Products cost so much and why it’s good for Customers | Photo by Igor Son on Unsplash
Why Apple Products cost so much and why it’s good for Customers
Privacy doesn’t come cheap.
“Apple Tax” is the term used to describe the premium Apple charges for its products when compared to similarly spec’d devices from other manufacturers. Over time, it’s become both the butt of many jokes in the tech community as well as one of the biggest criticisms of Apple as a company.
What people can’t laugh or argue about though are the numbers that Apple brings in.
Apple is the first US company to be valued at 1.5 Trillion USD, dominating 66% of 2019 smartphone profits, despite only releasing three to four phones each year.
So how does a company that routinely charges an arm and a leg for its products continue to rack up sales, and how do these exorbitant margins benefit you, the customer, if you’re footing the bill?
It all comes down to privacy
What happens when an operating system that costs hundreds of millions of dollars in development hours is given away for free? How does Google, a trillion dollar company, intend to profit off of its free, open-source operating system?
Data. Google is a data company, and your data is their product. The paying customers are advertisers. In 2020, there’s nothing revolutionary about the concept, but it’s important to reiterate to juxtapose against the strikingly different business model Apple uses.
Instead of putting a budget-friendly phone in your hands then collecting (and selling) your data to advertisers, Apple makes their money upfront by charging you Apple Tax.
Doing so makes them a lot of money without having to monetize your data.
This is why, despite selling far fewer phones than the competition, Apple still takes home most of the profits.
I would even argue that Apple has more to lose if they're caught monetizing user data. Doing so would betray their values and tarnish their brand equity, which ultimately hurts stock prices, and I'm confident that's something no one at Apple wants.
Photo by Arnel Hasanovic on Unsplash
Only Apple can pull it off
Imagine if your neighborhood supermarket, in an effort to improve public health, suddenly decided that all food with High Fructose Corn Syrup (HFCS) had to have a gigantic label across the front of its packaging to declare it.
I am certain that not a single manufacturer would even bother, and it won't take long for that supermarket to close down.
Now imagine Walmart, one of the largest retailers in the US, boycotts any product that refuses to disclose its use of HFCS. All of a sudden, the big-name brands will either be quick to comply or, at the very least, negotiate with Walmart.
Apple is Walmart.
Many apps and websites have a vested interest in tracking and collecting user data.
Why don't developers just threaten to pull out of the App Store and drop support for iOS if Apple is imposing such stringent privacy policies?
Because iOS accounts for 25% of the world's mobile operating system market and nearly 60% of the US mobile market.
Apple has such a hold on the market that it can bully developers into adhering to its stringent (but arguably necessary) rules.
Google will never care about your privacy
Google is a data company. Asking them to protect our privacy is like asking Coca-Cola to safeguard children from diabetes.
It’s never going to happen.
Keep in mind that Android is not only provided but is continually being developed absolutely free — no licensing fees, no royalties. Samsung, OnePlus, Huawei, and every other phone manufacturer pays zero dollars to use Android and is making all the profits for themselves.
Someone has to pay for all that development, and it’s you with your data and advertisers with their cash.
Photo by Austin Distel on Unsplash
Apple isn’t benevolent; it’s a business
Apple doesn’t protect your privacy because “they care.” They do it because it makes good business sense.
By leveraging their unique market position, they can charge a gigantic premium (the Apple Tax) to consumers who actually care about privacy, since there's essentially no other option.
If you don’t care about privacy (as you have every right not to and I certainly know people who don’t) you are very free to buy any other smartphone. No one is stopping you.
iPhones aren’t EpiPens.
No one needs an iPhone to survive. If consumers are charged too large an Apple Tax, consumers can revolt and just boycott the company.
Expect Apple products to get cheaper
As the smartphone arms race gradually cools off, Apple now has to find new ways to generate profit.
They can choose to start collecting user data, but as we’ve covered, doing so is neither in the consumer or Apple’s best interest.
What they have chosen however, is to offer services to ensure that you, the consumer, keep paying Apple for media streaming, cloud storage, and financial services (Apple Card) among others.
And what is the best way to get people paying for subscriptions? Get them using devices that support it.
By offering cheaper hardware, Apple essentially lowers the barrier for people to use their services. It’s already happening.
We’ve seen a downward trend in Apple’s hardware pricing driven by cheaper product offerings in at least the past two years.
In the iPhone category, this meant the introduction of entry-level flagships (iPhone XR, iPhone 11) and mid-tier smartphones (iPhone SE 2020).
In the Mac category, the entry-level MacBook Air got not only the usual spec bump but also double the storage, at a $100 discount from the previous year.
Photo by Daria Nepriakhina on Unsplash
In conclusion
In no way am I saying that people buy Apple products just because of their obsession with data privacy, but it is a big benefit.
Charging the Apple Tax essentially does two things — protects consumer privacy and Apple’s stock prices. It protects the consumer by giving Apple zero reliance on advertisers to remain immensely profitable, and it protects Apple’s stock prices by not compromising on one of its unique competitive advantages that literally none of its competitors can compete on.
The Apple Tax is slowly being lifted in favor of revenues driven by Apple’s services business. Only time will tell if this business model will work. Several companies are centralized on services which Apple now directly competes with. Apple Music to Spotify, Apple TV+ to Netflix, and iCloud to Google Drive to name a few.
At the end of the day, if this fails, Apple can go back to the tried, tested, and true — charging huge premiums in the form of Apple Tax, but the one thing Apple will never do is sell your data. | https://medium.com/illumination/why-apple-products-cost-so-much-and-why-its-good-for-customers-1d57fad4e33b | ['Calvin L.C.'] | 2020-07-14 14:55:03.447000+00:00 | ['Privacy', 'Business', 'Data', 'Technology', 'Apple'] |
About Me — Giulia Penni. Lifelong Learner and Word-Lover. | About Me — Giulia Penni
Lifelong Learner and Word-Lover.
Me (Giulia Penni)
When I was a little girl, I used to write stories. I used to write a lot of stories, filling up notebook after notebook with novels and fairy tales. I was an avid reader too — I remember borrowing a new book almost every week from the school library. I loved fiction, especially books about magic and witches. I had a fervid imagination, and I was tirelessly writing new stories (although I must admit I was leaving some unfinished).
Writing provided an outlet for my creativity, and it was fun. I enjoyed reading my stories to my mother and playing make-believe games with my sister, pretending to be one of the imaginary characters from my tales.
Growing up, it became harder and harder for me to find the time to dedicate to my writing, but I never lost my passion for language and words, so I decided to study translations and communication at the university.
My passion for language led me abroad, first to Austria, where I lived for a few years working as a communications specialist, then to Greece, where I currently live and work as a copywriter, writing and proofreading in my native language (Italian).
I recently joined Medium because, honestly, my boyfriend suggested I do so (he's an experienced data scientist turned writer, sharing his passion for and knowledge of data science and AI).
I followed his advice for two reasons. First, I want to get feedback from my readers. Although I am passionate about writing, I never thought of myself as a writer, so nobody (except for my family and friends) has ever read anything I have written. Medium is a chance for me to get my content out there and see what happens. Plus, it’s a good side hustle to make a little extra money.
The second reason I joined Medium, and I am here now writing my About Me is because I felt I needed a challenge — when I was little I used to love writing in my native language, will I enjoy writing in a different language now? Will I enjoy writing about different topics? Will my readers (real readers) enjoy my writing? Most importantly, will I still enjoy writing at all?
I am glad I have undertaken this challenge, and I look forward to what is to come. My goal is to grow as a writer, improve my style and my writing skills, and make reading an enjoyable experience for whoever comes across my stories.
Thanks for reading 😊
Giulia | https://medium.com/about-me-stories/about-me-giulia-penni-88f594606b48 | ['Giulia Penni'] | 2020-11-22 18:14:10.976000+00:00 | ['About Me', 'Writers On Medium', 'Writing', 'Introduction', 'Autobiography'] |
WAVES Weekly No. 6 | New Website and ICO Site Withdrawals
Firstly, we strongly encourage those investors who have not yet withdrawn their balances from the ICO site to do so as soon as possible. The new website has been launched and the original crowdfunding site will be disabled next week. If you have not done so already, please log in, save your Waves address on the ICO site, confirm through email and wait for your tokens to be transferred to your local wallet (or directly to an exchange, if you prefer).
The Waves lite client can be downloaded from our website, and you can find instructions on how to create a new account here.
The ICO website will not be maintained beyond next week. After this point, withdrawals will only be processed manually and at less frequent intervals.
LPoS White Paper
The white paper for Waves’ Leased Proof of Stake consensus system is now in its review stage, pending release to the wider community. LPoS is the lynchpin of the Waves network, allowing for fast (ten seconds) block times, whilst remaining far more energy efficient than proof-of-work mining systems.
LPoS builds on the standard PoS implementation by allowing users to lease their balances to other nodes, fine-tuning the Delegated Proof-of-Stake (DPoS) approach used by BitShares and others. Ensuring a clearly-defined set of active nodes reduces latency and increases block capacity. The essence of this approach is the limited pool of full-nodes taking care of transaction processing in the network. Having unlimited numbers of stakers changes the system dynamics drastically, so in a production-ready system it makes sense to maintain a balance between decentralization and usability. The Waves network seeks to establish a relatively large number of active nodes without unnecessarily sacrificing performance.
This approach will be complemented by the addition of a centralised order-matching service. This will match buyers and sellers via a central server on a near-instant basis, whilst the trades themselves will be cleared on the blockchain for security and transparency. The Matcher is obliged to execute the order submitted to it if it can be matched by an order from another node — it cannot prevent the trade being executed. This scheme mimics the way centralized exchanges work, the only exception being that the Matcher does not control users’ funds.
Asset Specification Update
The asset specification has been updated on Github. A new scheme for asset exchanges has been created. You can see the changes from the previous version here.
Full nodes
After several weeks of testing and a number of bugs addressed, full node code is now almost ready and will be released as soon as possible, allowing all users to run their own nodes.
As most users appreciate, this is a critical step, since it forms the foundation for everything that follows — including our custom tokens implementation. With a secure and stable public network, development can proceed more rapidly. Full decentralisation of the Waves network is also a necessary and desirable step for proper security and confidence in the platform. The experiences of numerous other cryptocurrencies demonstrate that this step cannot be rushed, and if done wrong can have serious consequences. We therefore look forward to the successful launch of a large number of staking nodes, and thank you for your participation in the Waves project!
You can always find the latest version of the client at https://wavesplatform.com and https://wavestalk.org. | https://medium.com/wavesprotocol/waves-weekly-no-6-8dfe53e588db | ['Waves Tech'] | 2016-08-10 11:04:39.152000+00:00 | ['Bitcoin', 'Decentralization', 'Blockchain', 'Fintech', 'Startup'] |
A Discussion on Singly Linked List | What are Linked Lists?
In most programming languages, the data structure for grouping and storing similar items together is the array, also known as a list. An array is a linear data structure that stores items together contiguously in memory. A linked list is similar to an array in that it is also a linear data structure; however, a linked list stores information non-contiguously in memory. Since arrays store information in a contiguous block of memory, creating and manipulating an array often results in too much or too little memory being allocated to it. A linked list is much better at memory utilization in that it stores information non-contiguously in memory and only uses memory when needed.
Linked List Versus Array
Although linked lists are great at memory utilization, they do have some drawbacks compared to regular arrays. Linked lists are great at inserting and removing items at the beginning and the end; however, they lag when accessing an item, because the nodes must be traversed starting from the beginning to look for the item of interest.
Linked List vs Dynamic Array
Basics of Linked List
A linked list is a chain of nodes. A node is a unit of information; in a linked list it typically consists of the data and the next pointer. Nodes are chained together by each node having a reference property that points to the following node; in this case it's called the “next pointer” property. The first node is the starting node, and it is set as the head node. The last node in the linked list has a next pointer property of Null or None, because it is the last node in the list and there aren't any nodes after it. Linked lists are often compared to a freight train, where each connecting freight car is like a node in a linked list: the caboose is the tail of the list and the engine car is the head.
A typical linked list node consists of data and the next pointer
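In Python, such a node can be sketched as a small class. This is only a minimal illustration; the full implementation linked at the end of this article may differ.

class Node:
    def __init__(self, data):
        self.data = data  # the value stored in this node
        self.next = None  # reference to the following node; None if this is the tail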
Diagrams of Array and Linked List
To get a better understanding of linked lists, below is a diagram implementing data with values of 23, 4, 65, and 7 in a typical dynamic array.
Dynamic Array
Notice the memory addresses when implementing the data as an array — they are contiguous and sequential, storing one right after another.
Now storing the same data as a linked list would look something like this:
Singly Linked List
Note how the linked list uses memory as needed, or dynamically: new nodes are created only when required, and the memory address for each node is assigned wherever space happens to be available. Conversely, when using a dynamic array, the memory addresses are sequential, and extra memory at the end is allocated upon data creation to handle the possibility of inserting new values.
Linked List Implementation
A basic linked list has head and tail properties. Typically there is also a size property that increments or decrements when nodes are added and deleted, respectively, to keep track of the size of the linked list.
To add/append an item/value to the list, a new node is created, the current tail node's next property is set to the new node, and the tail property is updated to be that new node. Since all steps to add a node to the linked list require constant time, the time complexity of this method is O(1), which is similar to a dynamic array. However, when a dynamic array is created, only a select amount of extra memory is allocated to it, and when it is all filled up, the array needs to be copied, or reallocated, to a new slot in memory, giving a time complexity of O(2n), which reduces down to O(n). Because adding an item to a dynamic array takes O(1) time most of the time and sometimes O(n), its O(1) time complexity is considered amortized, whereas a linked list will always be O(1).
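As a rough Python sketch of the append operation described above (reusing the illustrative Node class from earlier; the names are assumptions, not the exact implementation from the linked repository):

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None
        self.size = 0

    def append(self, value):
        node = Node(value)
        if self.head is None:      # empty list: the new node is both head and tail
            self.head = node
        else:
            self.tail.next = node  # link the old tail to the new node
        self.tail = node           # update the tail pointer
        self.size += 1             # constant-time bookkeeping, so append is O(1)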
Accessing data in an array is simple and fast because each element in the array is assigned an index. For example, in our example above, calling sample_list with index 2 will instantly give us the value of 65. As such, accessing an item in an array takes constant time. Accessing data from a linked list requires a bit more work. Linked lists have no indexes, and to get to the data you are requesting, you must traverse through the nodes and check whether each one holds the data you are looking for. Traversing a linked list always starts at the head, because each node only has information about its own data and its next node, and nothing more, which is why it is not possible to start traversing in the middle of the linked list. Since nodes must be traversed in order to access data, the time complexity is O(n) in the worst case, for example when accessing an item near the tail end of the linked list.
To find an item/value in the linked list, you must traverse through the nodes and compare each node's data along the way to see if it is what you are looking for.
Look up value 65
Since the only way to traverse a linked list is by starting at the head, it is possible that all nodes must be traversed in order to find the value you are looking for; thus its time complexity is O(n).
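A possible Python sketch of this find operation, written as a method that would live inside the illustrative LinkedList class above:

    # (inside the LinkedList class sketched above)
    def find(self, value):
        current = self.head       # traversal always starts at the head
        while current is not None:
            if current.data == value:
                return current    # found the node holding the value
            current = current.next
        return None               # reached the end without a match: O(n) worst case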
The process to insert/delete items in the middle of the linked list expands on the steps of the find method, with additional steps to reset the next pointers of the appropriate nodes in order to update the linked list. The time complexity of insert/delete is O(n) because, while it is an expansion of the find method, all additional steps take constant time, so its time complexity is the same as the find method's. A visual representation from VisuAlgo is shown below.
Insert value 59 at index 3
Delete value 59
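A sketch of deletion in the same illustrative class (insertion in the middle follows the same pattern of walking the list and rewiring next pointers):

    # (inside the LinkedList class sketched above)
    def delete(self, value):
        previous, current = None, self.head
        while current is not None and current.data != value:
            previous, current = current, current.next  # walk the list: O(n) worst case
        if current is None:
            return False                  # value not found, nothing to delete
        if previous is None:
            self.head = current.next      # the head itself is being removed
        else:
            previous.next = current.next  # unlink the node from the chain
        if current is self.tail:
            self.tail = previous          # keep the tail pointer correct
        self.size -= 1
        return True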
As with any data structure, there are advantages and disadvantages to using it, and linked lists are no different. They have a time complexity of O(1) for inserting and deleting at the beginning and the end, which makes those operations extremely fast. However, they suffer when trying to find a node by index, and when deleting a node in the middle of the linked list. A linked list is just another tool a developer can use to make better programming choices and optimally balance runtime and memory usage for their projects.
The complete implementation of a linked list can be found on GitHub.
Sources: | https://medium.com/swlh/a-discussion-on-singly-linked-list-fa478ccfe67b | ['Cao Mai'] | 2020-06-22 06:06:42.133000+00:00 | ['Education', 'Python', 'Singly Linked List', 'Data Structures', 'Arrays'] |
How to create a newsletter with Mailchimp, Gatsby.js & Netlify | How to create a newsletter with Mailchimp, Gatsby.js & Netlify Mariequittelier Apr 30 · 6 min read
After building a contact form and connecting Google Analytics, I wanted to continue my series of articles with one on how to create a newsletter sign-up form. Today’s article will be about setting up Mailchimp.
One way of building up your website audience is by providing regular content. And to do so, you’ll need to collect email addresses and send your subscribers an email.
I wish we could go back to that time when people sent letters … Photo by Kate Macate on Unsplash
Sure, you could do that by creating an API yourself, but then you’ll either have to put in a lot of effort or lose the following:
the analytics behind each email: whether it was opened, read, or caught by spam filters.
the unsubscribe system.
That’s why we are going to use Mailchimp. As usual, let’s start with a Gatsby.js starter.
If you get lost along the way, here is the repo access. | https://medium.com/javascript-in-plain-english/how-to-create-a-newsletter-with-mailchimp-gatsby-js-netlify-d48778d5c774 | [] | 2020-06-27 22:36:40.998000+00:00 | ['Coding', 'Programming', 'Gatsbyjs', 'JavaScript', 'Development'] |
Named Entity Recognition with NLTK and SpaCy | Named Entity Recognition with NLTK and SpaCy
NER is used in many fields in Natural Language Processing (NLP)
Named entity recognition (NER) is probably the first step towards information extraction: it seeks to locate and classify named entities in text into pre-defined categories such as the names of persons, organizations, and locations, expressions of times, quantities, monetary values, percentages, etc. NER is used in many fields in Natural Language Processing (NLP), and it can help answer many real-world questions, such as:
Which companies were mentioned in the news article?
Were specified products mentioned in complaints or reviews?
Does the tweet contain the name of a person? Does the tweet contain this person’s location?
This article describes how to build named entity recognizer with NLTK and SpaCy, to identify the names of things, such as persons, organizations, or locations in the raw text. Let’s get started!
NLTK
import nltk
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag
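If you are running these imports for the first time, the tokenizer and tagger models may need to be downloaded once; this one-time setup step is not part of the original snippet:

nltk.download('punkt')                       # used by word_tokenize
nltk.download('averaged_perceptron_tagger')  # used by pos_tag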
Information Extraction
I took a sentence from The New York Times, “European authorities fined Google a record $5.1 billion on Wednesday for abusing its power in the mobile phone market and ordered the company to alter its practices.”
ex = 'European authorities fined Google a record $5.1 billion on Wednesday for abusing its power in the mobile phone market and ordered the company to alter its practices'
Then we apply word tokenization and part-of-speech tagging to the sentence.
def preprocess(sent):
    sent = nltk.word_tokenize(sent)
    sent = nltk.pos_tag(sent)
    return sent
Let’s see what we get:
sent = preprocess(ex)
sent
Figure 1
We get a list of tuples containing the individual words in the sentence and their associated part-of-speech.
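For reference, the first few tuples look roughly like this (the exact tags can vary slightly between NLTK versions):

[('European', 'JJ'), ('authorities', 'NNS'), ('fined', 'VBD'), ('Google', 'NNP'),
 ('a', 'DT'), ('record', 'NN'), ('$', '$'), ('5.1', 'CD'), ('billion', 'CD'), ...]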
Now we’ll implement noun phrase chunking to identify named entities using a regular expression consisting of rules that indicate how sentences should be chunked.
Our chunk pattern consists of one rule, that a noun phrase, NP, should be formed whenever the chunker finds an optional determiner, DT, followed by any number of adjectives, JJ, and then a noun, NN.
pattern = 'NP: {<DT>?<JJ>*<NN>}'
Chunking
Using this pattern, we create a chunk parser and test it on our sentence.
cp = nltk.RegexpParser(pattern)
cs = cp.parse(sent)
print(cs)
Figure 2
The output can be read as a tree or a hierarchy with S as the first level, denoting sentence. we can also display it graphically. | https://towardsdatascience.com/named-entity-recognition-with-nltk-and-spacy-8c4a7d88e7da | ['Susan Li'] | 2018-12-06 02:39:35.867000+00:00 | ['Machine Learning', 'Named Entity Recognition', 'Towards Data Science', 'Python', 'NLP'] |
Grieving the Life You Left Behind? Me Too. | Grieving the Life You Left Behind? Me Too.
Maybe we have to properly mourn our old world before we can move forward
Image by Science Giant on Pixabay
In the past two weeks, I’ve buried a lot of friends. Security. Relaxation. Optimism. And it’s a bit crushing. To lose so many loved ones in so short a time.
And I’ve seen you attending the same funerals I have. We’ve all been saying our farewells. To Stability. To Calm. To Excitement about the future. And to our sense of power, the small modicum of control that allowed us to sleep peacefully at night.
Quite simply, you and I are grieving the loss of the life we knew before, and perhaps there’s some small comfort in knowing that we’re going through the four stages of grief together. Denial, anger, bargaining, and depression.
Yes, yes, I know, there are actually five steps, but that fifth stage? Acceptance? I am miles away from that holy grail. And I’m guessing you’re the same.
And speaking of these steps, I’m learning that what they say about each stage and its logical progression is wrong. Because there is no rhythm or rhyme in this funeral dance. One step forward. Two steps back.
For just like sinners guilty of lust in Dante’s Inferno, my passionate longing for the days before this terrible pandemic is a wind which blows me back and forth in a never-ending cycle of grief.
So, let me share my journey. Perhaps it’s one you are walking with me.
Stage One: Denial and Shock (I’m currently in this stage)
Sometimes I feel like I’m hallucinating. I’m seeing things, doing things, feeling things that can’t be real. It’s like I’m in the middle of a Walking Dead episode, where zombies lurk behind every corner waiting to rip me apart. And no matter how stealthy I am, each doorknob I grab, each physical touch alerts them to my presence.
In the middle of the heavy fog that is now my life, I have moments of complete bewilderment. I sit here, looking around, wondering if I’m having one of those crazy dreams you have when you’re pregnant or you drank too much the night before. A dream in which you wake a bit exhausted, have your cup of coffee and continue on with your normal life.
But it’s not a dream.
I’m fully awake and yet somehow still bleary-eyed and confused. Because I can’t believe that this is really happening.
Image by kalhh on Pixabay
Stage Two: Anger (I’m also currently in this stage)
I’m so mad I could scream. I’m mad I had to cancel the girls’ trip to the beach my thirteen-year-old daughter begged me to plan for months. I’m mad it’s so beautiful outside and the sunshine doesn’t brighten my spirits the way it used to. I’m mad I have only been able to see my mom once in three weeks, and I’m mad that the one time I did see her I was more fearful than excited, worried that every surface I’d touched, every place I’d gone might make her sick.
So, I now perpetually live in a state of unleashed fury at the universe. And no matter what I do, what obscenities I spew, this international “bully” stands over me, laughing and taunting.
Stage Three: Bargaining (Epic fail at this stage)
So this stage is where I attempt to gain control over my behaviors and emotions. At times I try, but my emotional state doesn’t want to negotiate.
Build a routine, a new normal, they say. Get outside and walk around, they say. Play games and watch comedies to get your mind off of your grief.
But does that really help? When you’ve lost a friend, does an episode of The Office really take the grief away?
When you’ve lost your center, does a daily to-do list really help you get it back?
Not for me.
And I’ll admit, there are times where these efforts do work, for a half-hour, for an afternoon even, but all of these diversions are extraordinarily temporary. And sooner rather than later, I realize I’m back to stage two, or more times than not, I head straight to stage four.
Image by Ulrike Mai on Pixabay
Stage Four: Depression/Sadness (I’m also currently in this stage)
Depression is something I have battled most of my life. And it was hard enough to manage before all “this” happened. And my anxiety disorder fuels this despair. I feel hopeless. Because most of the things that lifted my spirits have been taken away.
I am heartbroken when I see my anxious teenage daughter stay up until the late hours of the night, panicked, unable to sleep because of the fears and worries running through her head.
But this is nothing, nothing compared to the abyss of sorrow I feel for my teenage son.
It’s March and his senior prom is now only a dream. And he’s losing his last days with the lifelong friends to whom he will soon say goodbye: the people he has laughed with and cried with, the people he studied with and talked with about all the things that will never happen. Senior weekend at the beach. Signing each other’s yearbooks at the pre-graduation picnic. Picking up his cap and gown in May in anticipation of the “big day.”
And don’t even get me started on graduation. It’s supposed to be in mid-June and right now, the very thought of the possibility that it will be canceled creates a swirling tornado that encompasses all the stages of grief mentioned above.
I imagine my son’s heartbreak mixed, quite selfishly, with my own. I want to see him throw his graduation cap in the air, hug his friends and pose for silly photos in his cap and gown. I want his 66-year-old grandmother and his 73-year-old grandfather to see this crowning moment in his life. And though I still hold on to this remote possibility (his graduation is mid-June) in my heart, I’m already mourning its unlikelihood.
And adding power to this depression is a complete sense of powerlessness.
A sorrow that stems from desperately wanting this new world to go away, coupled with a rage that this cruel virus reacts like a stubborn child, sitting in front of me and sneering, saying, “Go ahead. Try and make me.”
Stage Five: Acceptance (I fear I will never get to this stage)
I’m a fighter. I “fix” things when they’re not right. I don’t give up. I find a solution. And this acceptance I am supposed to arrive at seems impossible. As with the death of a loved one, I cannot come to grips with the reality that this is my new life for a long while to come. So, once again, I am victim to storms of anger and torrential rains of sadness. And this “promised land” of acceptance seems galaxies away.
The bottom line:
I’m grieving. And I’m fairly certain you are too. For it seems quite impossible to not mourn the lives we lived before. And so we must sing the hymns of grief and lay flowers on the graves of the lives we knew.
And maybe we will come to accept our new reality.
I wish you that fifth step, my friend. Wish me the same if you will. | https://medium.com/the-partnered-pen/grieving-the-life-you-left-behind-me-too-efa23aa7d31a | ['Dawn Bevier'] | 2020-03-30 11:43:17.461000+00:00 | ['Grief And Loss', 'Mental Health', 'Self', 'Mindfulness', 'Covid 19'] |