title stringlengths 1–200 ⌀ | text stringlengths 10–100k | url stringlengths 32–885 | authors stringlengths 2–392 | timestamp stringlengths 19–32 ⌀ | tags stringlengths 6–263
---|---|---|---|---|---
Visualize Programming Language Popularity using tiobeindexpy | Even though most of us code in one primary programming language, it is always good to keep an eye on the shifts that happen in the world of programming. TIOBE is an organization that has created an index for programming languages and tracks changes in that index every month.
In this post, we will use the tiobeindexpy Python package to visualize programming language popularity, so it can serve as a small tracker for us to revisit every month.
About the package:
tiobeindexpy is a Python package (available on PyPI) that gives us the TIOBE Index extracted from the official website.
Installation:
The stable version of tiobeindexpy can be installed using pip.
pip install tiobeindexpy
If you are on Python 3, please make sure to use pip3 install tiobeindexpy to avoid any conflicts.
Objective
There are three things we’d like to have achieved by the end of this exercise:
What are the top 20 most popular programming languages (as of Feb 2019)?
Who are the top 5 gainers (Feb 2018 vs Feb 2019 — out of the ones present in the current top 20)?
Who are the top 5 losers (Feb 2018 vs Feb 2019 — out of the ones present in the current top 20)?
Loading Libraries
As is typical of decent coding style, let us start by loading the required libraries.
from tiobeindexpy import tiobeindexpy as tbpy
import seaborn as sns
It should be noted that once the library is loaded, tiobeindexpy downloads the required data from the TIOBE Index website. Hence, all subsequent function calls just use that data and do not make an actual server call.
Plot Size and Theme
I’d also prefer to set up the plot size and theme at the start — which makes it easier for anyone who wants to play with these parameters without digging deep into the code.
sns.set(style = "whitegrid")
sns.set(rc={'figure.figsize':(11.7,8.27)})
The Top 20
To start with, we’ll use the top_20() function from tiobeindexpy to extract the top 20 programming languages based on TIOBE Index ratings.
top_20 = tbpy.top_20()
Before we move on to visualizing the data, let us verify that the data is actually available by printing it.
top_20
As you can see from the above output, it’s evident that the Python object top_20 is a pandas dataframe, and that it is actually for the month of Mar 2019. You might also notice the % symbol next to the numeric values in the Ratings and Change.1 columns — which means these columns must be strings in the extracted dataframe, and hence need to be preprocessed.
Data Preprocessing
In this step, we’ll strip (remove) the % symbol from the two above-mentioned columns and type cast them to floating point fields.
top_20['Ratings'] = top_20.loc[:,'Ratings'].apply(lambda x: float(x.strip("%")))
top_20['Change.1'] = top_20.loc[:,'Change.1'].apply(lambda x: float(x.strip("%")))
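The same strip-and-cast step can be sanity-checked offline on a toy dataframe — the column names below mirror the extracted dataframe, but the values are made up purely for illustration:

```python
import pandas as pd

# Toy frame shaped like the tiobeindexpy output -- the ratings here are
# made-up illustrative values, not real TIOBE numbers.
demo = pd.DataFrame({
    'Programming Language': ['Java', 'C', 'Python'],
    'Ratings': ['14.88%', '13.30%', '8.26%'],
    'Change.1': ['+0.89%', '-1.87%', '+2.36%'],
})

# Same strip-and-cast step as above, applied to both columns in a loop
for col in ['Ratings', 'Change.1']:
    demo[col] = demo[col].apply(lambda x: float(x.strip('%')))

print(demo.dtypes)  # Ratings and Change.1 are now float64
```

An equivalent vectorized one-liner would be pd.to_numeric(demo[col].str.rstrip('%')), which avoids the Python-level lambda.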
Beginning Data Visualization
Now that we’ve got our data preprocessed, we’re ready to start visualizing the programming language rankings.
Top 20 Programming Languages
We can start with a very simple bar plot of the ratings of the top 20 languages (based on their TIOBE Index).
sns.barplot('Ratings', 'Programming Language', data = top_20).set_title('Mar 2019 - Programming Popularity')
Java, C, and Python are the top 3 languages based on the TIOBE Index. The ratings gap between Java and Python seems humongous, leaving me wanting to look into the TIOBE Index methodology for clarity.
Between 2018 and 2019
A lot has changed in the world — especially in technology — between 2018 and 2019, so let’s try to see what the difference of a year looks like:
Top 5 Gainers
sns.barplot('Programming Language', 'Change.1',
data = top_20.sort_values("Change.1",ascending = False)[0:5]).set_title('Mar 2018 vs 2019 - Language Popularity - Biggest Gainers from Top 20')
Python has been the leader, taking the biggest leap forward, followed by VB.NET and C++.
Top 5 Losers
sns.barplot('Change.1', 'Programming Language',
data = top_20.sort_values("Change.1",ascending = True)[0:5]).set_title('Mar 2018 vs 2019 - Language Popularity - Biggest Losers from Top 20')
C#, PHP, and Ruby have been the leaderboard toppers in terms of (negative) percentage change.
So far, this has given us a good monthly picture of how the popularity of programming languages is changing.
Hall of Fame — last 15 years
Let’s zoom out and try to see which programming languages are the Hall of Fame winners each year.
We can simply use tiobeindexpy’s hall_of_fame() function to extract the Hall of Fame data.
hof = tbpy.hall_of_fame()
A slightly formatted table of the above output.
hof.style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
This data shows how, time and time again, Python picks up when a new trend comes in — and how, this way, it has become one of the most frequently appearing entries in the Hall of Fame.
Summary
In this post, we’ve seen how to use the Python packages tiobeindexpy and seaborn to visualize programming language popularity (rankings) based on the TIOBE Index.
References | https://towardsdatascience.com/visualize-programming-language-popularity-using-tiobeindexpy-f82c5a96400d | ['Abdulmajedraja Rs'] | 2019-03-06 12:52:30.706000+00:00 | ['Python', 'Coding', 'Data Visualization', 'Data Science', 'Programming'] |
Loneliness Interrupted | Loneliness Interrupted
Poetry
LONELINESS INTERRUPTED by Ema Dumitru
Perhaps it was chance.
The gut knows
But it won’t talk.
The unknown man
and the unknown woman
Two lives in use
entered and divided.
The present and the past
above all science.
The trees, the roads, the stones
A small print in a room
The rain that fell at the beginning
of autumn — There was no trace of it
No physical sensation.
All sense of what was real
discharged from the mind.
And as it came, so it went.
The days, the nights, the music
Only marginal notes.
Rain still falling, now with less force
A great performance of the mind. | https://medium.com/chance-encounters/loneliness-interrupted-e4706a50200f | ['Ema Dumitru'] | 2020-11-30 17:07:08.615000+00:00 | ['Poetry', 'Photography', 'Writing', 'Poem', 'Chance Encounters'] |
31Events is not just for invites, it can used for reminders too | The calendar is awesome marketing tool that is completely overlooked by most marketing professionals. We use email, we use landing pages, we use drip campaigns — but no calendar invitations or reminders.I got an email today from Amazon AWS about their re:Invent 2017 conference. Once I saw it, I thought … “why don’t more companies send calendar invitations and reminders?”.
Announcements like this are good — but how about product launches, or maybe even special or flash sales? Anything time-based could have a calendar invitation attached. So, as an example, I created this landing page; put your email in the box and it will send you a real calendar invitation/reminder.
check out the landing page here
Just think what you could do with that type of marketing tactic. | https://medium.com/calendar-marketing/31events-is-not-just-for-invites-it-can-used-for-reminders-too-90fea2169c46 | ['Arnie Mckinnis'] | 2017-06-15 02:17:16.929000+00:00 | ['Events', 'Digital Marketing', 'Marketing', 'Landing Pages'] |
Don’t Quit Just Because It’s Hard | More often than not, you’ll regret it.
Photo by Andrew McElroy on Unsplash
It’s human nature to stay away from any pain. We don’t like to be uncomfortable. It’s so much easier to coast and go through the motions, but there’s nothing admirable about not pushing yourself.
Life isn’t supposed to be so easy. Anything worth having is worth putting in the work. Anytime you’re making a change in your life, there will be an adjustment period.
There will be some time when you’re out of your comfort zone. You have to focus on the goal and realize that the hurdles you encounter are still on the right path to reaching it.
I just finished reading The Dip by Seth Godin and it’s all about knowing when you should or shouldn’t quit. Quit when you’re making absolutely no progress. Quit when you’re positive it’s hurting you more than helping you. Quit when you’re sure that you’re headed down a dead end road and there’s no more possibility of moving upward or forward.
All other times, stick. If you’re resolute in your decision that it is something you want, don’t quit. If you’re seeing results, no matter how small at first, don’t quit. Most definitely don’t quit just because it gets hard.
“The people who skip the hard questions are in the majority, but they are not in demand.” — Seth Godin
The people who stand out are the ones who pushed through the tough part. There’s always a learning curve that you need to get through.
When do most people quit their new workout routine? As soon as they get sore, which is pretty early on because their muscles are adapting to the new stress of exercise.
As soon as there’s even just a little bit of pain, we pull back. This is natural, but not a smart decision if you want to reach almost any goal. Hard work takes discipline and sacrifice, but it should be worth it if you truly want what you’re going after.
Most people quit too early in a new endeavor because they’re expecting an immediate result with little to no struggle involved. That’s not how it works. Remember why you’re doing it.
Another piece of solid advice from Godin is that whatever you’re trying to do, try to be the best. There are far too many people in the middle.
What’s the point of striving for average? Why pick surviving over thriving? All the work you have to do is not worth ending up in the middle of the crowd.
Push through the hard part because average people don’t. Push through the dip because you are that much closer to coming out on top.
If you truly want to change your life or become somebody who keeps your word when you set out to do something, keep going. If you want to have a chance at being the best at what you do, don’t quit. You’ll only end up stronger and better because you didn’t. | https://maclinlyons.medium.com/dont-quit-just-because-it-s-hard-71c485277e63 | ['Maclin Lyons'] | 2019-07-10 01:16:31.059000+00:00 | ['Self', 'Productivity', 'Life Lessons', 'Life', 'Self Improvement'] |
The Data Behind the Ur-Physical Graffiti | Despite listening to Led Zeppelin for over twenty years, I just learned this week that their sixth studio album, Physical Graffiti, was actually two-thirds original material and one-third outtakes from previous albums. Apparently the band recorded three sides’ worth of material for the album but didn’t want to trim anything; so instead they made it a double album by sprinkling outtakes across all four sides.
There were eight original tracks recorded for the album and seven outtakes added. Three outtakes came from each of the previous two albums, Houses of the Holy (including its would-have-been–eponymous title track) and Untitled (hereafter Led Zeppelin IV), and one outtake came from Led Zeppelin III.
I wanted to explore the data by looking at the amount of source material that I had just discovered was recorded for each of their eight studio albums. This is what the original album durations looked like, with Physical Graffiti standing out as the sole double LP:
When we instead distribute the seven outtakes to their original source albums, we see this:
An LP could hold around 46 minutes of music, which was the limit the band stuck to for the majority of their albums. But for Led Zeppelin IV, Houses of the Holy, and Physical Graffiti, the band apparently recorded nearly an hour of material for each. In the case of first two, the band trimmed songs down to those that would best fit on a single LP (2015 interview). But, when faced with the same old dilemma for Physical Graffiti, they had fortunately accumulated enough backlogged material to trim nothing and ship a double album instead.
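The redistribution shown in the chart above is just bookkeeping: subtract each outtake’s length from Physical Graffiti’s total and add it back to its source album. Here is a minimal sketch of that arithmetic — the minute counts are hypothetical round numbers chosen only to mirror the shape of the story, not the real track lengths:

```python
# Hypothetical per-album running times in minutes -- illustrative only;
# the real track lengths differ.
album_minutes = {
    'Led Zeppelin III': 43,
    'Led Zeppelin IV': 42,
    'Houses of the Holy': 41,
    'Physical Graffiti': 82,   # double LP: 8 originals + 7 outtakes
}

# The seven outtakes, mapped back to the album they were recorded for
# (lengths again hypothetical)
outtake_minutes = {
    'Led Zeppelin III': [2],
    'Led Zeppelin IV': [5, 4, 4],
    'Houses of the Holy': [6, 4, 4],
}

# Move each outtake's minutes from Physical Graffiti back to its source
redistributed = dict(album_minutes)
for source, lengths in outtake_minutes.items():
    moved = sum(lengths)
    redistributed[source] += moved
    redistributed['Physical Graffiti'] -= moved

for album, minutes in redistributed.items():
    print(f'{album}: {minutes} min')
```

With these toy numbers, Led Zeppelin IV and Houses of the Holy each land in the mid-50s of minutes — roughly the “nearly an hour of material” pattern described above — while total minutes are conserved across the move.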
Led Zeppelin IV, Houses of the Holy, and Physical Graffiti are generally regarded as the band’s best albums, and this data also shows that they coincided with the band’s creative maximum. And so Physical Graffiti is no longer the sole anomalous double LP in their oeuvre but really the capstone of this trifecta.
Now, an interesting follow-up question is: if we were to listen to these outtakes inside the albums they were originally composed for, what order would we put them in? But I tend to agree with the peanut gallery that the albums are perfect as-is, and you can’t just swap in or reorder the tracks. It is interesting, though, to hear the outtakes alongside their contemporaneous material, so I created YouTube playlists that append each album’s outtakes to it as a coda (in-joke intended):
And I also decanted the eight “belters” from Physical Graffiti into an Ur-Physical Graffiti, which turns out to be my new favorite Led Zeppelin album: | https://pgbaumstarck.medium.com/the-data-behind-the-ur-physical-graffiti-9924f5426868 | ['P.G. Baumstarck'] | 2020-02-22 03:30:32.464000+00:00 | ['Rock', 'Led Zeppelin', 'Music', 'Data Visualization'] |
Why Writers Should Never Have Sex With Their Readers | Since becoming a blogger, I’ve gotten sexually involved with two people who have read me — and I had to face the consequences that come with it.
I get a lot of different types of messages from my readers.
The majority of emails and DMs I get are from people who want to say something nice about my work.
Sometimes, a business or an entrepreneur will reach out to me to see if I’m interested in collaborating with them or promoting their product.
Because I’m a woman with an online presence, I also get to use my block button to deal with the men who send me dick pics.
And every once in a while, a reader will proposition me.
They’ll message me something dirty or flirty. They’ll nudge me to exchange nudes. They’ll offer themselves up for some dirty talk or phone sex.
In one memorable instance, a guy messaged me for advice on making his girlfriend squirt and then asked me if I’d be interested in flying to Lebanon to eat her pussy. (No thanks, I’m good.)
I know that kind of thing is bound to happen. I’m an openly polyamorous woman who writes and podcasts about sex. I bare my sexual side online and anyone who wants to give me a click can have a peek at it. Some people assume that means I’m down to fuck.
But I’m not — not with them anyway. Not with any of my readers.
Getting sexually involved with your readers can be surprisingly tempting.
When you create content full time, your audience becomes a big part of your world. Outside of my immediate family, most of my interactions are with the people who read me.
So, whenever I consider dating or fooling around a little, they’re the obvious choice. I’ve already put myself out there. It would be a short skip and hop to start flirting and courting the people who are already interacting with me.
But I won’t do it because I know it’s bad news. And sadly, I know that from personal experience.
In the interest of full disclosure, I’ve done this kind of thing twice. Each time, I was on a different side of it.
The first time it happened, I was the writer. Soon after I started publishing my work, a reader reached out to me privately and we got to talking. Since I wrote about sex, that was the topic we discussed.
Those conversations started getting more personal, more intimate. The more emails we exchanged, the more they turned me on.
For a while, he became a part of my life. We would flirt, exchange nudes, and talk dirty on a regular basis.
And then it ended and in a way I wished it had never started. Not just for the usual reasons you feel that way after a breakup, but also because he was someone who got to know me through my writing.
I should have learned my lesson, but there was another guy who came along months later. This time, though, I was the reader.
Yeah, I was a writer, too. But a much smaller one by every measure.
He found one of my articles and commented on it. I read some of his work and developed a crush on him at a distance. I did a bit of fangirling in his comments. That’s when he sent me a private message.
This time, we weren’t just talking about sex. He skipped right over that and went straight to hitting on me. There wasn’t much in the way of formalities — just filthy and horny messages.
It was happening way too quickly for me. I wasn’t comfortable with the way he jumped right to sex. But I didn’t want to disappoint him — he was kind of a big deal.
I went along with it. I tried to focus on the fact that I had a crush on him and figured I’d have time to assert my boundaries as things got going.
Our conversations soon led to cybersex, which then led to phone sex. And even more phone sex. Then it became clear to me that he had lied about his feelings to get some fast action out of me. That left me feeling miserable for a few months. It also helped me realize that someone in his position should never have messed around with someone in mine.
That sealed it for me. I closed off the option of ever dating or fucking one of my readers. Both of my experiences showed me why that kind of thing is a huge mistake.
Based on what I’ve been through and what I’ve seen happen to others, here are some of the biggest reasons writers shouldn’t get romantically or sexually involved with their readers.
Parasocial Relationships Aren’t the Basis for Real Relationships
A parasocial relationship is a one-sided relationship between two people, usually involving a public person and a member of their audience.
It happens a lot with digital content creators. After watching a YouTuber for years, you might feel like you know them really well even though they’ve never even heard of you. You might feel a personal connection with a musician who has no clue you exist.
And you can feel like you’re getting to know a writer without them being aware of you at all.
Parasocial relationships are perfectly normal and they’re usually fine. But they’re not the right kind of foundation for a healthy romantic or sexual relationship.
For one thing, you’re on uneven ground from the very beginning. Dating someone who has read you for months or years when you’ve just recently learned their name is going to feel kind of like dating a stalker. It will become clear very quickly that they know way too much about you and that will, at best, make you feel uncomfortable.
But it’s worse than that because your readers are going to be wrong about you.
Even if you tell nothing but the truth in your writing and in interviews, it’s only ever going to give them a small glimpse into your life.
I write primarily about sex and based on that a lot of the readers who have hit on me seemed to feel like they really got me, understood who I was, and knew what I needed.
But my sexual side is far from being the only important part of my personality. And it’s not in the driver’s seat nearly as often as some people seem to think. For the most part, I’m a painfully normal person with a very mundane day to day life. It’s just that no one gets to see that when they’re reading my blowjob tips or about the awkward foursome I had once.
It’s normal to fill in the blanks, too, and come up with all sorts of assumptions about the person you’re reading.
Some guys try to talk dirty to me, bragging about having a big cock after reading my article on dick size — even though I make it clear in that article that I have a preference for more modest ones. Or they’ll send me a dick pic after reading something dirty I wrote, because they think I’m the kind of lady who’d appreciate a cyberflash.
Actually starting a relationship with a writer you admire is bound to be disappointing. The guys who hit me up might assume I’m always on and horny. They might think I’m the queen of dirty talk. They might think I’d fulfill whatever sexual or romantic fantasy they have. But chances are I wouldn’t, because I’m just a regular person with needs, boundaries, and preferences of my own.
That kind of disappointment happened to me when I got mixed up with a writer I admired. When the guy he was in real life didn’t match up to the man he was in his articles, it left me feeling confused. I was let down by that harsh reality check. I felt stupid because I had fallen for a persona instead of a person. And I felt torn between wanting to hold on to his image while realizing I had to let go of him.
The other major problem with parasocial relationships is that the reader has already invested a lot in you. They’ve followed your journey, heard your stories, felt things for you, identified with you, empathized with your struggles, and celebrated your victories. They might even have fallen a little in love with you.
Meanwhile, you’re just starting to feel things out.
It will always feel like they’ve put in so much and you’ve put in so little. There’s a really good chance that leads to a whole lot of hurt.
Pedestals Are Easy to Build
When meeting a writer doesn’t disappoint the reader, that can be a problem too. Because sometimes readers will put writers they love on a pedestal.
They’ll assume nothing but the best about them. They’ll feel intense, unshakeable admiration. They might even want to worship them.
That’s an unhealthy way to relate to someone you know personally.
When someone puts you on a pedestal, it’s way too easy to take advantage of them. They’ll be too quick to set their own needs aside and even quicker to make excuses for your behavior.
And the fact that they look up to you will make it harder for them to voice their concerns. Or they’ll be too afraid of disappointing you to assert their needs.
Being put on a pedestal also tends to bring out the worst in people because it makes you feel like you can get away with anything.
That doesn’t have to result in exploitation, but I’m willing to bet most writers wouldn’t be able to resist taking advantage of it, at least a little.
There’s a Power Imbalance
It’s weird to think of myself as having any kind of power. But I do have a platform. I have an audience. I have a name that some people recognize.
That’s enough to give me, and anyone in a similar situation, a little bit of sway. And it’s the kind of sway a reader you decide to get involved with probably doesn’t have.
If the reader you’re fucking happens to be a writer themselves, that can be a bigger problem because their professional lives get mixed into it.
If you’re the one with the bigger platform, the one with more clout, or the one who already has a small army of defenders, it’s easy for them to feel like they need to please you or else they might jeopardize their chances of making it.
They might fear the consequences of displeasing you. They could worry that you’ll bad mouth them to people you might pitch to one day. Or that you could sour things in their writing community.
Because of that, they might stick around longer than they want to because they’re worried about what would happen to their writing career if they broke up with you — or just said no to you.
I know that’s a factor because I’ve been in that position. I put on a brave face, tried to play nice, and agreed to things I wasn’t comfortable agreeing to because I didn’t want to get on the wrong person’s bad side.
That’s a problem. And it can be avoided by keeping it in your pants when you interact with your admirers.
Reputations Get Ruined
Treating your readers like sexual prospects isn’t a good look.
When you create a pattern of hitting on your readers and sending them flirty private messages, it starts to get pretty sleazy after a while.
Do that enough and people will start analyzing your writing for all the wrong reasons. Is your new article just casting a net and hoping someone cute responds? Is that passage where you brag about your accomplishments just a flex or is it you using your blog post as a dating ad?
It’s going to be hard for other writers to keep taking you seriously. Building meaningful connections isn’t easy when you’ve got a sketchy reputation. And publications and platforms might be reluctant to have you on or host your work if they know you’re on the prowl.
It Creates a False Sense of Security
A reader who dates a writer is likely to enter into that arrangement already having a high level of trust. They feel safe because they’re sure they know you based on your writing.
That can create a false sense of security, which can lead to them sticking around longer than they should. In some cases, it can even cause them to suffer through toxic behavior, emotional abuse, or worse.
When I was getting dicked around by a writer, I kept telling myself I must be wrong, that I must be misreading the situation.
I believed the things he had written about himself. So when his conduct didn’t match that image, I made all sorts of excuses for him instead of trusting my gut.
I gave him so much unearned trust, all because the narrative he created about himself made me feel safer than I would have otherwise.
You’re Always on Display
I used to be into the idea of dating someone who read me because it would’ve been like a shortcut to intimacy. They could learn so much about me through my writing that I could skip some of the awkward conversations.
I might not have to bring up my emotional triggers if they read about them already. Learning about my chronic illnesses is just a link or two away. I wouldn’t even have to risk getting involved with someone who wouldn’t be into my kinks.
Now I realize how naive that is. Not only would skipping the getting-to-know-you stage mean I’d be missing out on some serious intimacy building, but knowing they’re kind of lurking around can get awkward.
Both of the men I got involved with can still read anything I write. I mean, yes, anyone can read what I write because it’s public — but they were already on the platform I use before I met them and they’re still on them now.
Feeling like they’re still lurking can make you second-guess certain things you write. It can make you worry that they’ll get the wrong message from something you post. You always have to worry you’ll get a notification telling you they commented on your latest article.
Even if you don’t air your relationship’s dirty laundry, it can still put you in a weird place.
It’s easy for them to stay connected to you and what’s going on in your life. They just have to go back to reading you — just like they did before you got involved with them.
Keep It Professional
I learned some of these things the hard way, but I’m glad I learned them early. It’s an important thing for writers to know because it can keep you from tarnishing your career and getting mixed up in way more drama than anyone needs.
Your audience isn’t your dating pool. Your readers aren’t your prospects. That can be hard to see when you’re getting started, but it’s an important thing to recognize.
Dating or fucking your readers is bad for you as a writer. It’s bad for your fans, too.
Your readers are your lifeblood. They’re the reason you get to keep writing. If you make a living doing this, it’s thanks to them. The very least you owe them in return is to not blur those lines.
So, keep things professional and never use your influence or your platform to get laid. That’s what Tinder is for. | https://medium.com/love-emma/why-writers-should-never-have-sex-with-their-readers-9071bd1ea5d8 | ['Emma Austin'] | 2020-08-05 11:02:17.397000+00:00 | ['Self', 'Relationships', 'Life Lessons', 'Sex', 'Writing'] |
The Dark Side of the Spiritual Quest | Many narcissistic con artists market themselves as New Age coaches and charitable leaders in professional development forums. Their humanitarian do-gooder guise coupled with overtures of enlightenment procures worshipping devotees who are eager to please and grovel. A friend of mine is still reeling in the aftermath of having been a primary mark by a lauded teacher in a ‘consciousness-raising’ cult.
Photo by Дмитрий Хрусталев-Григорьев on Unsplash
Spiritual materialism, a term coined by Tibetan meditation teacher Chögyam Trungpa, describes the belief that suffering can be magically assuaged by hedonistic pursuit disguised as sanctifying thought systems, rituals and ideologies.
Trungpa wrote that we are often, “deceiving ourselves into thinking we are developing spiritually when instead we are strengthening our egocentricity through spiritual techniques.”
Likewise, religious scholar Andrew Harvey has written about how we’ve become neurotically driven and addicted to ‘the light’. Harvey expounds on this premise, conveying that the ego clutches at enlightenment in an effort to fulfill the ego’s need to be ‘special’. He further conveys that the New Age Spiritual Movement capitalizes on this misguided self-absorbed search, by polarizing itself on the spiritual spectrum.
By denouncing ‘darkness’ New Agers maniacally align with an illusory sense of God-like power. They proclaim evil and sin are false constructs and assert that the unrestrained pursuit of ‘abundance’, bliss, ‘enlightenment, light, are the apex of spirituality. If others are harmed by these sacred pursuits it’s simply chalked up to karma, ‘bad energy’, and a low vibration.
Vulnerable to the trap of promised enlightenment we succumb to the seductive lure of Plastic Shamans.
Reaping from a smorgasbord of astrology, psychics, channeling, angels, crystals, and aliens, magical thinking and narcissistic grandiosity are exalted. Hungry for power, these self-aggrandized gurus, cults, workshops, televangelists, mystical healers, and spiritual centers capitalize on the mass insatiable yearning to escape the human condition through idolatry and materialism. These charlatan Godheads strategically use mind control techniques to foster the sort of spiritual fetishism necessary to ensure lucrative tithes.
The need to naively worship so as to glean a sense of belonging, safety, and magical salvation is also evidenced in blind allegiance to the church. In spite of the church’s heinous history of aligning with Hitler and Mussolini, implementing the Inquisition and Crusades, the Magdalene laundries, the witch-hunts, pedophilia and the supported democide and slavery in the Americas, Africa, and Australia, the persistence with upholding illusory ideas of spiritual infallibility and idealized notions of virtue trump accountability and objective reality.
Here we see that in spite of concrete evidence that contradicts expressed spiritual and moral views, the reliance on primitive ego defenses such as confirmation bias collectively bolsters the deification of those in power. Confirmation bias only considers that which supports what people want to believe.
Spiritual teacher Krishnamurti wrote, “The evil of our time is the loss of consciousness of evil.”
This adage was exemplified in the 1978 Jonestown massacre, in which hundreds of devotees of the People’s Temple run by the psychopathic Reverend Jim Jones, were victims of a mass murder-suicide.
It is hubris to deny or attempt to transcend one’s basic humanity and it is dangerous to imbue a chosen spiritual teacher with Godlike properties. When we ascribe to the belief that we are magically capable of Divine feats, that we are not subject to human fallibility and foibles, we give our psychological shadow-free reign to act out. We become prostrating sheep that stigmatize, blame and scapegoat those who deviate from ‘The Path’.
This alienation from one’s body, emotions and inner depths proliferates the very emptiness and desperation that ironically made one susceptible to blindly deferring to false promises of infinite Cosmic bliss.
An authentic spiritual life is balanced, conscious, and leans towards wholeness. The collaborative relationship between the sensorial world of the body, the ego-self, and the metaphysical world of the spiritual self come together for the essential purpose of actualizing the capacity to love. This requires psychological maturity and the courage and perspective to humbly embrace the truth of our intrinsic nature.
As Gandhi imparted, “The seeker after truth should be humbler than the dust. The world crushes the dust under its feet, but the seeker after truth should so humble himself that even the dust could crush him. Only then, and not till then, will he have a glimpse of truth.”
Humility respects our innate humanity and is therefore the trajectory to selfhood and grounded spiritual discovery and truth. We must keep our heart open to fallibility if we are to “admire the illimitable superior spirit who reveals himself in the slight details we are able to perceive with our frail and feeble mind.” (Einstein). Only through embodying our paradoxical nature, the light and the dark, can we relinquish fantasies of spiritual rescue and safeguard ourselves from the many duplicitous faces of spirituality that maneuver to deceive and exploit. | https://medium.com/publishous/the-dark-side-of-the-spiritual-quest-7a718b5a7c75 | ['Rev. Sheri Heller'] | 2020-09-15 23:09:52.394000+00:00 | ['Religion', 'Spirituality', 'Materialism', 'Psychology', 'Narcissism'] |
Instinct | Haiku is a form of poetry usually inspired by nature, which embraces simplicity. We invite all poetry lovers to have a go at composing Haiku. Be warned. You could become addicted.
| https://medium.com/house-of-haiku/instinct-de6fbd3dd4c0 | ['Sean Zhai'] | 2020-12-19 03:36:07.657000+00:00 | ['Poetry', 'Environment', 'Flying', 'Animals', 'Hunting']
The only lamb cuts guide you’ll ever need: how to choose and cook your cuts | With Easter Sunday fast approaching, it’s time to think about the big roast, the spring equivalent to Christmas lunch. But instead of sticking to the same old fail-safe lamb joint as last year, why not explore the world of nose-to-tail cooking and discover your new favourite cut.
Discover more handy guides to cooking at farmdrop.com.
This simple guide will help you understand more about the different cuts of lamb available and which cooking method best suits each cut.
When it comes to choosing the perfect cut, a higher price doesn’t necessarily mean better meat. In fact, cuts that include a large amount of bone can be the most flavoursome and tender, as the collagen and marrow from the bone are released during cooking, tenderising and flavouring the meat. Unfortunately, nose-to-tail cooking has fallen out of fashion, with many of us opting for the same old lamb leg joint for our Sunday roast. So let us break it down for you, so you can find the perfect cut of meat for every occasion (you might even save a buck or two).
Shank
Lamb Shank from Story Organic
Best for: Slow cooking
The shank is a meaty cut from the lower end of the lamb leg. Excellent for slow cooking, it’s great value and the bone running through the centre provides a lot of the flavour, releasing collagen as the joint cooks and tenderising the flesh.
How to cook
Shanks need low and slow cooking to achieve meltingly tender meat that falls off the bone. The rich meat can handle a good amount of flavour, so be bold.
For delicious red wine braised shanks, dust the shanks in flour then brown in a hot pan before roasting in a low oven with carrots, celery, onions, herbs and plenty of red wine.
For a bold take on a Moroccan tagine, marinade the shanks in a ground spice rub of cumin, coriander, ginger, paprika, before stewing in plenty of passata, preserved lemons, apricots and saffron. Serve with flaked almonds, fresh coriander and fluffy couscous.
How much to get
1 lamb shank will serve 1–2 people.
Leg
Native Breed Leg of Lamb from Park Farm
Best for: roasting
Everyone’s favourite Easter Sunday roasting joint, lamb leg is popular due to its dark, melt-in-the-mouth meat and generous meat-to-bone ratio, making this one easy to carve at the dinner table.
How to cook
Try it whole
You can keep things simple by slowly roasting the leg whole, studded with garlic and rosemary, for dark, tender meat.
For a centrepiece with a difference, try a sweet, nutty stuffing and drizzle over homemade Romesco sauce as in our super simple recipe that’ll show you how to easily stuff your lamb leg to perfection.
Steak it up
If you’re stuck for time and looking for a quicker supper, lamb leg steaks are a wonderful lean cut, each with a portion of bone in to keep the meat wonderfully juicy when cooked. Griddle or pan fry for 3–5 minutes on each side for medium-rare meat, or longer if you like it well done.
How much to get
1kg leg of lamb will serve 4–6 people
Allow 1 x 225g lamb leg steak per person
Rump
Lamb Rump from Story Organic
Best for: quick roast
Also referred to as chump, rump comes from the back side of the lamb where the top of the leg meets the loin. It’s a plump yet lean cut, with a generous layer of fat to keep the meat juicy. Unlike beef rump, lamb rump isn’t quite as popular, but it definitely should be.
How to cook
Boneless rump/chump steak
Herb crusting is a great way to retain moisture in a leaner cut of meat and works like a dream with lamb rump/chump steak. Blitz woody herbs such as rosemary and thyme with garlic and homemade breadcrumbs until coarse. Brown the steaks in a hot pan then brush with mustard and roll in your herb crust. Roast in the oven until slightly pink in the middle (about 15–20 minutes) and rest for 10 minutes before slicing and serving.
Bone-in chump chops
Unlike the boneless steaks, chump chops contain bone so need slightly longer cooking. Rub with oil and fresh chopped herbs like mint and parsley then oven bake until crisp and golden brown for about 30–45 minutes, depending on the size of your chops.
Both kinds of lamb chops and steaks are ideal for barbecuing and need no more than a drizzle of oil and seasoning before hitting the coals for perfect, smokey meat.
How much to get
1 x 250g lamb rump/chump steak per person
1 x 350g lamb chump chop (with bone) per person
Loin
Lamb Loin Chops from Story Organic
Loin best for: roasting, chops best for: quick frying or grilling
Taken from the top of the back, the loin is a prized cut of lamb due to the super tender meat. It’s an ideal cut for roasting, however, as it doesn’t have a layer of fat for protection, care must be taken not to overcook. The loin comes in different cuts:
Loin chops (chunky and boneless)
Barnsley chops (effectively two loin chops in one or a double-sided chop cut across the whole loin with the bone)
Noisettes (smaller medallions of lamb loin wrapped in a thin layer of fat with no bone)
How to cook
Lamb loin
A rolled lamb loin makes for a great family roast. You can stuff your own or get your hands on a ready stuffed joint like ours from Park Farm. It’s filled with a lemon and herb stuffing, which soaks up the roasting juices from the lamb whilst cooking. Put seasoned lamb on a rack in a roasting pan and roast in middle of oven 30 to 40 minutes, or until a meat thermometer reads 50°C for medium-rare/ 55°C for medium. Let stand 10 minutes before slicing.
Loin chop and Barnsley chop
As with rump, lamb loin chops are wonderful cooked on the bbq, smothered in a herby, garlic marinade. The Barnsley chop (named as it’s believed to have originated in a hotel in Barnsley) needs slightly longer cooking than a regular chop, so try roasting in the oven with a bottom layer of onions, celery and carrot for 10–15 minutes before finishing off on the BBQ for that smokey flavour.
Noisettes
Noisettes make an elegant dinner party option. Wrap each noisette in Parma ham, then fry in a pan to crisp up and finish in the oven for 15–20 minutes. Serve with gratin Dauphinoise and wilted seasonal greens.
How much to get
1 x 220g loin chop or 1 x 250g Barnsley chop per hungry person
1–2 noisettes per person
Rack
Rack of Lamb from Story Organic
Best for: quick roasting or grilling
Taken from the lamb ribs, the rack is very popular as a great, impressive all rounder, that’s super quick to cook and easy to achieve perfectly crisp skin and tender, melt-in-the-mouth flesh. The cutlets are individual rib steaks taken from the rack and look beautiful on the plate.
How to cook
Rack of lamb
Unlike some fattier cuts, the rack is light and delicate, so needs a light dressing to avoid overpowering the flavours of the meat. Lightly score the fat then sear on each side in a hot pan until golden brown before finishing in the oven. Serve drizzled with mint sauce or atop a lightly dressed Spring salad. You can also crust the rack with a herby mixture as with the rump (see above)
Lamb cutlets
Cutlets are a perfect, quick cook cut and benefit from light cooking such as on the BBQ, grill or griddle. Dress with lemon and olive oil and eat like lollipops.
How much to get
1 rack of lamb containing 4 cutlets (around 560g) will serve 2 people
1–2 individual cutlets per person
Breast
Lamb Breast from Story Organic
Best for: slow roasting
Lamb breast is a value cut that is often underused as it has quite a lot of fat and can be tough if cooked incorrectly. Treat it as you would pork belly and you’re good to go — the layer of fat brings oodles of flavour and helps to tenderise the meat as it cooks.
How to cook
Rolled lamb breast
For perfect rolled lamb breast, brown on each side in a hot pan then roast low and slow on a bed of shallots. This cut can handle a good dose of flavour, so whip up your own wild garlic and lemon oil and drizzle over juicy rings of lamb breast.
How much to get
700g lamb breast will serve 4 people
Shoulder
Whole Lamb Shoulder from Park Farm
Best for: slow roasting
This large cut from the top front leg of the lamb has lots of lean, juicy meat, while the bone and generous marbling keep it moist and the flavour intense.
How to cook
Create your own pulled lamb by marinating a whole shoulder of lamb with garlic, chilli, paprika and cumin. Wrap in foil and cook slowly until the meat pulls away from the bone with a fork and serve stuffed into bread buns, flatbreads or use as a stuffing for filo pastry pasties.
How much to get
2kg shoulder will serve 6–8 people
Neck
Lamb Neck Fillet from Story Organic
Best for: slow cooking
The neck fillet is often underrated and inexpensive as it takes a little longer to cook than other popular cuts — but it’s the marbling through the cut that gives it all its flavour.
How to cook
Lamb neck can be cooked whole, long and slow to ensure tender meat.
You can also chop the neck into chunks and brown off for use in stews and curries. Marinate the cubes of neck in a rub of ground coriander, cumin, sumac and chilli before pushing onto skewers and flaming over the BBQ for wonderful homemade kebabs. Serve with hummus and warm wood-fired pita bread.
How much to get
1 neck fillet (350–400g) will serve 2 people
What to look out for when buying lamb
The most important thing to watch out for is the quality of the animal — always buy outdoor reared, grass fed meat from a reputable farm for the best meat. The bones should be slightly pink in colour and the fat quite dry and crumbly. As a rule of thumb, the darker the colour of meat, the older the animal — young lamb will be pale pink and older lamb pinkish-red.
Cooking tips for lamb
Always bring meat to room temperature before cooking so it cooks evenly throughout. Lamb can be served a little pink; when cooked, the meat should always look moist and juicy, a little rare if you like, but never bloody.
What’s your favourite lamb cut? And what will you be cooking up for Easter lunch? Post a picture and let us know on Instagram and Twitter. | https://medium.com/farmdrop/the-only-lamb-cuts-guide-youll-ever-need-how-to-choose-and-cook-your-cuts-29419edf875b | ['Beth Thomas'] | 2018-03-19 12:06:18.498000+00:00 | ['Easter', 'Sustainability', 'Cooking', 'Food', 'Recipe'] |
How to use Facebook without losing your soul | 1. “TYPE, DON’T CLICK”: STOP POSTING OTHER PEOPLE’S STUFF
Facebook makes its money selling your information to advertisers. For Facebook, traffic is everything: the more you use it, the more money it makes. So the most obvious way to use Facebook without being evil is to simply use it less often by going back to its best use: mostly just post personal stuff. Facebook is a place for you to connect with people you care about, not fill their news feed with quotes that Aristotle/Martin Luther King/Audrey Hepburn never actually said. (I got that from John Lennon, by the way.)
So, “type don’t click”: when you post on Facebook, always add something of your own to the conversation rather than just clicking share. We want to hear from you, see your family, hear your news. And if you do feel an urgent need to share something reputable, add a comment as to why you are sharing it and why you agree.
This takes time, so you’ll post less, which is the point. Remember, traffic = cash in Zuckerberg’s pocket. How about a 5/1 rule — 5 original posts to 1 share?
Please, please stop
2. “DON’T SELL ME BRO!” MAKE YOURSELF LESS INTERESTING TO ADVERTISERS
Here are some good ways to make it harder for Facebook to track you. This will make you less profitable to advertisers, which is what the non-soul-losing Facebook user is looking to achieve. If we all were to do this, then for basic economic reasons, Facebook would feel obliged to be less evil. My message to Facebook is simple: you can track me once you get your act together. In principle, I don’t mind targeted advertising, but they ain’t getting that from me until Zuckerberg gets a grip.
On the web (not via the app), go to “ ad preferences ” > “Your interests” and remove stuff you don’t want them to know. (I periodically remove everything.)
” > “Your interests” and remove stuff you don’t want them to know. (I periodically remove everything.) From there, also go to “ Your information ” and switch all the options to off.
” and switch all the options to off. Then go to “ Ad settings ” and make them all “Not allowed.” You’re still going to see ads, they just won’t be targeted to you, which puts a dent in Facebook’s business model.
” and make them all “Not allowed.” You’re still going to see ads, they just won’t be targeted to you, which puts a dent in Facebook’s business model. Sadly, you can’t stop here as Facebook even tracks you when you’re not on Facebook. Manage this by going to “ off-Facebook activity ” (you’ll need to re-enter your password) and choosing “clear history”; then go to “Manage Future Activity” and switch everything the hell off.
” (you’ll need to re-enter your password) and choosing “clear history”; then go to “Manage Future Activity” and switch everything the hell off. Periodically run cookie checks on your browser to opt out of ads by using this tool.
Finally, you are going to want Facebook to stop tracking your phone. Go to your phone settings then “Apps and notifications” > “Facebook” (Android) or just “Facebook” (on iOS) and switch off anything you don’t want them to track.
Stick it to the Zuckerberg
3. “TEXAS = RUSSIA”: CHECK YOUR SOURCES!
With great power comes great responsibility, Peter. The fact is, Facebook and social media are where most people get their news, so for your friends, you have more power than the BBC or the New York Times. Think about that for a moment … and check your sources!
Let’s say you’re Vladimir Putin and you want to sow chaos in the United States. Your goal is to whip up enough fear so that people flock to the kind of leader they think will protect them, who also happens to be the kind of leader Russia wants. This is how you use Facebook from your internet cave in St. Petersburg:
Start a page called “Heart of Texas.” Spend six months posting nice photos of Texas and happy memes about the general awesomeness of the Lone Star State. Get loads of shares and likes. Then start the fake news avalanche with your captive audience of millions. Sit back and smile as your garbage goes viral.
So, if you’re going to share, check who and what you are sharing, even if it looks benign.
Texas = Russia
4. “THANKS FOR GIVING ME YOUR FACE”: BE WARY OF SURVEYS AND GIMMICKY-APPS
Have you seen the posts that send you off to a survey that tells you what your political views are? Well done, you’ve just given away your political profile to some unknown data farm. Expect to be targeted with political ads, most of them manipulative. In 2016, doing this meant that you also gave away all of your friends’ details without them having a clue.
Done one of those “Faceapp” face-aging things? Your photo, linked with all of your Facebook details, is now on a server in Russia.
These men want your faces
5. “PRIVATE. DO NOT ENTER!”: LOCK DOWN YOUR PROFILE
As a general rule, the more private we are on Facebook, the less power Facebook has. So sort out your privacy settings! Consumer Reports has a good guide.
6. “BILBO’S RULE”: YOU DON’T ACTUALLY LIKE THE SACKVILLE-BAGGINSES, SO DON’T INVITE THEM TO YOUR PARTY
Old Bilbo had it about right:
I don’t know half of you half as well as I should like; and I like less than half of you half as well as you deserve.
Learn from Bilbo’s mistake and, in the nicest possible way, cull your friends. Part of the reason you spend so much time on Facebook is that you have to scroll through 80% of the stuff you don’t care about to arrive at the 20% you do. This is because you have too many “friends.” They need to go. No-one needs more than 100 friends on Facebook.
Think of it this way: if you were organizing a party you really wanted to enjoy, who would you invite? Keep those people and quietly dis-invite the Sackville-Bagginses. If that makes you feel guilty, you can just stop following them rather than unfriending altogether. Facebook is not a place to appease your social obligations. Make it leaner and you’ll waste less time. If you’re sick of political rants and/or photos of my lock-down gardening, please unfollow me. I will not be offended. I won’t even know!
Do it this way:
Open Facebook and scroll through 10 posts.
If you see a post and the person, group, or page does not pass Bilbo’s Rule, choose “ Unfollow ” from the options. (If the person is unknown or abusive, proceed to unfriend.)
” from the options. (If the person is unknown or abusive, proceed to unfriend.) If you still want to follow the person but they are prone to posting stuff that raises your blood pressure, “Hide post” and the FB algorithm should help filter out more of Uncle Bob’s rubbish. | https://medium.com/along-the-road/how-to-use-facebook-without-losing-your-soul-e360d9432ae6 | ['Ronan Mclaverty-Head'] | 2020-06-06 22:17:06.153000+00:00 | ['Ethics', 'Facebook', 'Privacy', 'Social Media', 'Fake News'] |
Dask: Parallelize Everything | Dask is a one-stop-shop for general big data processing. Whether you are a Python developer looking to speed up existing codebases, a data scientist aiming to extract insight from complex data, or a microbiologist who wants to analyze terabytes of images, Dask got you covered.
Built from the ground up in Python, Dask is truly the only one of its kind. Since it is co-developed with the Pandas, scikit-learn and Jupyter teams, it offers many things its competitor PySpark does not. With Dask, Python developers no longer need to read complicated Java error messages, constantly switch between different syntaxes, or rewrite the entire codebase to benefit from distributed computing.
Dask also simplifies the big data workflow. Its excellent single-machine performance speeds up the prototyping stage, and leads to faster model deployment. For anyone with experience in Pandas, NumPy or SciPy, parallelizing existing workflow using Dask is painless and only requires small changes. Dask provides the easiest way to deploy not only statistic data analysis, but also machine learning and imaging processing pipeline on clusters.
Dask: Parallelized Python Programming
What is Dask?
A flexible library for parallel computing in Python.
Dask is a Python library leveraging task scheduling for computational problems. It provides the most widely used data structures, inherited from Pandas and NumPy, as well as basic parallel computing interfaces built on its own task scheduling system, in order to make large-scale data computing possible.
Why choose Dask?
Code in Python, compute in parallel.
As a data scientist or machine learning engineer, you might face several challenges during your project:
1. The dataset is extremely large, causing your computer to run out of memory.
2. You expect to switch between a cluster and your home workstation.
3. Multi-processing or multi-threaded calculation is what you always dream of.
4. Other frameworks are available, but their APIs differ from what you regularly use.
5. You need to understand the order in which tasks are computed.
6. You want Python for everything in your project.
Dask solves ALL OF THESE for you!
When using Dask, you can use all your familiar Python libraries and toolkits, and make the computation running in parallel. Dask customizes some of your most-used data structures to fit large datasets requirements, and supports computing and processing everything in a scheduled order. Whether you are using a laptop, or you own a cluster of 1,000 CPUs, Dask works with the exact same strategy, offering high accuracy and robustness.
Lazy evaluation, a method for minimizing the work done for computing, is used to schedule and optimize all the tasks in the program before getting the final result.
The entire task graph can be easily visualized with one line of code, and it helps you to figure out where to optimize your process from the start to the end.
How to use Dask?
Do whatever you do with Python.
Dask can be installed with either conda or pip.
# Install with conda
conda install -c conda-forge dask

# Install with pip
pip install dask
The two most important functions in Dask
To understand and run Dask code, the first two functions you need to know are .visualize() and .compute() .
.visualize() provides a visualization of the task graph, a graph of Python functions and the relationships between them. Based on these dependencies, the task scheduler in Dask determines how to run functions in parallel. The parameter rankdir="LR" is helpful if the graph is expected to be viewed from left to right.
Dask uses a lazy evaluation strategy, so the program only computes results after .compute() is called. To avoid calling .compute() multiple times to get results from several collections, there is also a compute() function that takes multiple collections and returns multiple results.
y = (x + 1).sum()
z = (x + 1).mean()
# compute results of y and z at once
da.compute(y, z)
High-Level Collections
Array
dask.array splits a large array into small blocks of ndarray .
A quick example of visualizing this is to create a 2D array of 100,000 * 100,000 numbers with 10,000 chunks of size 1000 * 1000.
import dask.array as da

x = da.random.random((100000, 100000), chunks=(1000, 1000))
The huge array is split into small chunks to minimize RAM usage
The original array is about 80GB! That could be extremely difficult for a plain NumPy ndarray to handle on most personal computers. But as we can see, each chunk is only 8MB, which is much smaller and easier to process.
Most functions you would like to call on nparray are supported here, for example:
y = x + x.T
z = y[::2, 5000:].mean(axis=1)
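To see why chunking matters, here is a deliberately simplified, stdlib-only sketch of the "blocked algorithm" idea behind dask.array — this is not Dask code, and the function names are made up for illustration. Each chunk produces a small partial result, and a final step combines them; Dask builds the same kind of graph, but runs the chunk tasks in parallel and keeps only a few chunks in memory at a time.

```python
def chunked(seq, size):
    """Yield successive chunks of `seq` with at most `size` elements."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

def blocked_sum(seq, chunk_size):
    # One small partial result per chunk; Dask would run these as parallel tasks.
    partials = [sum(chunk) for chunk in chunked(seq, chunk_size)]
    # A final aggregation task combines the per-chunk partials.
    return sum(partials)

data = list(range(1_000_000))
print(blocked_sum(data, chunk_size=1000))  # same answer as sum(data)
```

No chunk-sized piece of work ever needs the whole sequence at once, which is exactly what lets dask.array handle an 80GB array in 8MB bites.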
2. Dataframe
dask.dataframe is implemented based on pandas.dataframe , combining a number of Pandas dataframes by index into a huge dataframe.
A Dask Dataframe containing several Pandas Dataframes ordered by date
The functions of dask.dataframe are a subset copied from pandas :
import dask.dataframe as dd

df = dask.datasets.timeseries()
df2 = df[df.y > 0]
df3 = df2.groupby('name').x.std()
df3.head(20)
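Under the hood, dask.dataframe runs such a groupby with the same split-apply-combine trick across its partitions. Here is a rough stdlib-only sketch of that idea — illustrative only, not Dask's actual implementation, and computing a mean rather than the std above to keep it short:

```python
from collections import defaultdict

# Each "partition" plays the role of one underlying Pandas dataframe.
partitions = [
    [("Alice", 1.0), ("Bob", 2.0)],
    [("Alice", 3.0), ("Bob", 4.0), ("Alice", 5.0)],
]

def partial_agg(partition):
    # Per-partition task: sum and count per group.
    acc = defaultdict(lambda: [0.0, 0])
    for name, x in partition:
        acc[name][0] += x
        acc[name][1] += 1
    return acc

def combine(partials):
    # Final task: merge partial results and finish the mean.
    total = defaultdict(lambda: [0.0, 0])
    for acc in partials:
        for name, (s, n) in acc.items():
            total[name][0] += s
            total[name][1] += n
    return {name: s / n for name, (s, n) in total.items()}

means = combine([partial_agg(p) for p in partitions])
print(means)  # {'Alice': 3.0, 'Bob': 3.0}
```

Each partition is summarized independently (and so can be processed in parallel), and only the tiny partial results are combined at the end.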
3. Bag
dask.bag is implemented based on the Python list , and is designed for simple parallel computing on unstructured or semi-structured datasets, like text files and JSON objects.
You can make function calls like what you do with pyspark.rdd or pytoolz :
import json
import dask.bag as db

b = db.read_text('data/*.json').map(json.loads)

b.map(lambda record: record['occupation']).take(2)
b.filter(lambda record: record['age'] > 30).take(2)
b.count().compute()
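If you want to check what each step of that pipeline does before scaling it up, here is an eager, stdlib-only equivalent on a small hand-made sample (the records and file contents are invented for illustration; a real bag would read them from data/*.json ):

```python
import json

# Three JSON lines standing in for the contents of data/*.json.
lines = [
    '{"name": "Ann", "occupation": "engineer", "age": 34}',
    '{"name": "Ben", "occupation": "chef", "age": 28}',
    '{"name": "Cleo", "occupation": "pilot", "age": 41}',
]

records = [json.loads(line) for line in lines]        # like db.read_text(...).map(json.loads)
occupations = [r["occupation"] for r in records][:2]  # like b.map(...).take(2)
over_30 = [r for r in records if r["age"] > 30]       # like b.filter(lambda r: r["age"] > 30)
print(occupations, len(over_30))
```

The bag versions of these steps behave the same way, except that Dask streams the records lazily and in parallel instead of materializing them all in a list.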
Low-Level Interface
Sometimes you may want to parallelize your algorithm over some small tasks, but Array , DataFrame or Bag are not sufficient, or you are aiming to construct some functions by yourself. Then it is time to use dask.delayed() . It is much simpler to use .delayed() for parallel programming: it is only a matter of calling dask.delayed(func)(parameters) .
dask.delayed() works pretty well with loops, for example:
import dask

def inc(x):
    return x + 1

def mul(x, y):
    return x * y

def add(x, y):
    return x + y

results = []
for x in [1, 2, 3, 4]:
    a = dask.delayed(inc, pure=True)(1)
    b = dask.delayed(mul, pure=True)(2, x)
    c = dask.delayed(add, pure=True)(a, b)
    results.append(c)

total = dask.delayed(sum, pure=True)(results)

total.visualize(rankdir="LR")
Task Graph of a scheduled loop | https://medium.com/sfu-cspmp/dask-parallelize-everything-eb60e0662ce6 | ['Haoran Chen'] | 2020-02-04 07:06:25.545000+00:00 | ['Machine Learning', 'Data Science', 'Dask', 'Image Processing', 'Blog Post'] |
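Because everything above is lazy, nothing actually runs until total.compute() is called. As a sanity check, here is an eager, plain-Python version of the same loop (note that in the snippet above a is always inc(1) , so each c equals inc(1) + mul(2, x) ):

```python
def inc(x):
    return x + 1

def mul(x, y):
    return x * y

def add(x, y):
    return x + y

# Eager version of the delayed loop: c = inc(1) + mul(2, x) for x in 1..4.
results = [add(inc(1), mul(2, x)) for x in [1, 2, 3, 4]]
total = sum(results)
print(results, total)  # [4, 6, 8, 10] 28
```

Dask's task scheduler would arrive at the same total, but it is free to evaluate the independent inc , mul and add tasks in parallel before the final sum .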
Dynamic Sitemap Generation in Next.js 🗺 | Dynamic Sitemap Generation in Next.js 🗺
Create your own dynamic sitemap with next.js!
To get your page indexed and all of its pages discovered by Google, you should provide a compass to help — also known as a sitemap. A sitemap is a list of all routes on your page that you want Google to index. Let’s make a dynamic sitemap for your Next.js app without any plugins or libraries!
Photo by Aron Visuals on Unsplash
🗺 What is a Sitemap?
A sitemap is a set of URLs written as an XML file ( sitemap.xml ). The image below is a screenshot of a sitemap from one of my pages written with Next.js called InkTemplate:
Example of Sitemap.xml from InkTemplate
The most basic sitemaps are only a set of links ( <url> ), but you can add more information than the location ( <loc> ), like the priority ( <priority> ), frequency of change ( <changefreq> ), and last modification date ( <lastmod> ). If you want to read more about the sitemap XML standard, you can read more in the documentation. You can point to the sitemap in your robots.txt so crawlers and search engines can find it — usually, you can find both sitemap.xml and robots.txt at the root ( yoursite.com/sitemap.xml ).
Example of robots.txt from InkTemplate
🗺 Create your own sitemap with next.js
By nature, Next.js has a file-based routing system. If you want your sitemap placed at the root ( yoursite.com/sitemap.xml ), you can simply make a file in the pages folder named sitemap.xml.jsx . Let's make a simple sitemap listing all the routes for your page:
Sitemap with routes — Image by Author
If you have multiple environments, the getInitialProps will give you the headers so you can fetch the host or other relevant information you might need.
💃 Dynamic routes
Most pages are likely to have a dynamic route — like different products. Let's say you have a route named product/[id].jsx , giving each product its own distinct route. For a product with id 123, the URL would be yoursite.com/product/123 . You would want each of the products to get their own URL in the sitemap, so you need to provide a list of all products by hardcoding a list or fetching one from an API in getInitialProps .
Sitemap with dynamic routes — Image by Author
💬 (i18n) Language routes
To extend your audience, it would be smart to provide your content in multiple languages. For Google to be able to index all pages in different languages, you should have the language code in the slug somewhere, like yoursite.com/<LANGUAGE>/product/123 . For every product and every language there would then be a distinct URL. If you have your page in English (default) and Norwegian (NO), you would have two separate links to the product 123, like this: yoursite.com/product/123 and yoursite.com/no/product/123 .
Sitemap with language codes — Image by Author
⏩ Fast forward
Ok, now what? You have a sitemap listing all distinct routes for your page. To fast-forward the process, you can tell Google at Google Search Console: “Hi, I have a map for you to explore.” At Google Search Console you can upload your sitemap and Google will look at the map right away to start indexing your pages. I also recommend using Google Search Console to track keywords — it will give you an idea of what people searched for before clicking on your page and can track your search rating.
Treasure Chest of Real Queer Tales | Treasure Chest of Real Queer Tales
LGBTQ storytelling from Prism & Pen — December 6, 2020
by James Finn
Prism & Pen storytelling strikes it rich
New P&P writer Aaron W. Marrs gives us two pieces of gorgeous poetic prose. Danny Jackson H. and theoaknotes ponder unaccepting family members, a painful subject for many queer folks during the holidays. And theoaknotes dares to provide frank (and scarce) info to transmasculine folks who don’t know what to expect “down there” when they go on T.
Add in fiction, poetry, stories about biphobia, queer parents, and being trans late in life, and this week’s edition is packed with queer treasure.
If you’re not a Medium member, please click on the underlined links to read for free. If you can afford a membership, you help support your favorite writers.
Editor’s Picks —
Creative Nonfiction
How Cool Kids Spread Biphobic Toxins
James Finn
I’m not biphobic, really! My first crush was bisexual, and I’ve always been cool with bisexuality. So why have I been spreading biphobic toxins most of my life? My deliciously flaming friend Howie and our gender-rebel pal Carla help explain. | https://medium.com/prismnpen/treasure-chest-of-real-queer-tales-1555077706f5 | ['James Finn'] | 2020-12-06 19:50:41.457000+00:00 | ['Storytelling', 'Creative Non Fiction', 'LGBTQ', 'Fiction', 'Poetry'] |
Analyzing Gender Proportions Using Python and Web Scraping | Analyzing Gender Proportions Using Python and Web Scraping
Support your argument with data!
The subject of gender, particularly gender inequality, has generated a lot of debate recently. This post aims to provide helpful insights for anyone who’d like to study gender proportions in specific fields. I will provide some tips for data collection using web scraping as well as an automated way of finding probable gender of a person based on first names.
Data collection
If you are lucky, you may have your data in a handy format, like excel or .csv from some source. Nevertheless, this is rarely the case. In most analyses, you have to collect your data — generally from a website. This methodology is called web scraping.
It’s important to note that not all accessible data is collectable. Just because you can see something in your browser does not necessarily mean that you are legally allowed to scrape it. Some websites protect themselves against web scrapers. Always make sure that what you do is legal! For instance, scraping Wikipedia is perfectly fine, while scraping social media websites is illegal in most cases if not done through the public APIs of these websites.
It may sound intimidating, but basically scraping is just mimicking what your favorite browser does:
1. Sends an HTTP request to a site.
2. Parses the response it gets.
In some cases, websites protect their data from scrapers, but a quite common source of information is Wikipedia, where no such protection is present, and information there is free to use. Therefore, you can scrape anything you want from Wikipedia. Python even has a package for that. For didactic reasons let’s not use the package, but scrape the information the old fashioned way!
Let’s say we want to assess the gender of composers and lyricists of anthems from around the world. We go to this site and press Ctrl+Shift+I (in Google Chrome) or right click on virtually any place of the website and click inspect. This is what you will see (you may have to switch to Elements in the upper panel on the right):
On the right-hand side, you can inspect the structure of the website, which will be important for how you parse your response. The purple text refers to the tag of this element, through which you will be able to find it when you parse the response of the page.
In this code snippet, you can see what I did in this case: send a request and parse the response into a searchable BeautifulSoup object. In this object, you can easily find the specific piece of information you are looking for. In this case, a row corresponding to an anthem is stored in a <tr> tag, in which <td> tags contain the specific information I need. Do not hesitate to check my GitHub for the full code!
To further clarify, here is what you need to do to collect the data you want:
1. Navigate to the page where the information is to be found.
2. Inspect the structure of the website and find the tags where the information is stored.
3. Using Python, send an HTTP request to the site.
4. Using the BeautifulSoup object created from the response, and the structure learned in step 2, create the algorithm to extract and store the information you need.
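A minimal sketch of steps 3 and 4, using a small inline HTML table as a stand-in for the live response (the real anthems table has the same <tr>/<td> structure, just with more columns; a real script would first fetch the page, e.g. with requests.get(url).text):

```python
from bs4 import BeautifulSoup

# Stand-in for the HTML a real request would return.
html = """
<table>
  <tr><th>Country</th><th>Composer</th></tr>
  <tr><td>France</td><td>Rouget de Lisle</td></tr>
  <tr><td>Germany</td><td>Joseph Haydn</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")

rows = []
for tr in soup.find_all("tr"):
    # The header row has <th> cells, so it yields an empty list and is skipped.
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if cells:
        rows.append({"country": cells[0], "composer": cells[1]})

print(rows)
```

From here, storing `rows` in a pandas DataFrame makes the later gender-guessing step straightforward.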
Gender guessing
In order to make Python guess genders for us, the only thing we need to do is supply it with a first name. gender-guesser is a Python package written for this purpose. It can return 6 different values: unknown (name not found), Andy (androgynous), female, male, mostly_male, or mostly_female. The difference between Andy and unknown is that the former is a name found to be male or female with equal probability, while the latter means that the name wasn’t found in the database.
In this snippet, you can see what I did. After instantiating the detector, I created a function which takes a pandas DataFrame column, extracts the first name then performs gender guessing on it. Finally, it creates a column with the “_gender” (or any arbitrary) suffix and fills it with the guessed genders.
Concluding remarks
Do not forget to check the results manually at the end! In some cases, you must do a google search to clarify unknown or Andy cases, and it is always good to double check your work. These are great tools to speed up collecting gender proportion data. | https://medium.com/starschema-blog/analysing-gender-proportions-using-python-df99d3d41b43 | ['Mor Kapronczay'] | 2020-01-07 15:03:43.436000+00:00 | ['Data Collection', 'Python', 'Research', 'Web Scraping', 'Gender Equality'] |
How to Plot Data on a World Map in Python? | Photo by Martin Sanchez on Unsplash
I learned a new thing this week. In Machine Learning class, I clustered weather stations using DBSCAN (maybe a topic for another article) and plotted those results on a world map. For this, I used the Basemap package in Python.
What is Basemap?
Basemap is a tool to create beautiful maps in Python. It is an extension of Matplotlib. Using Basemap, we can plot coastlines and countries directly.
Prerequisites
Basics of Matplotlib and Pandas in Python
Experience with Jupyter Notebooks or Google Colab Notebooks (or any other equivalent)
Let’s do this!
I’m using Google Colab for this tutorial. First, we need to install the required packages to use Basemap in Python.
Install The Required Packages
Here, I’m importing the packages which we will be using later.
Import the Packages
We can use any dataset which has Latitude and Longitude as features. Here, I’m using the cities.csv dataset provided by my instructor (You can find a similar dataset on kaggle). Create a Pandas data frame using that dataset file.
Create a Data Frame using dataset.csv
The dataset which I have used has 245 entries and 8 features (including latitude and longitude).
Data Frame Shape and Head
We need only the latitude and longitude for plotting the map. So the following piece of code creates a data frame having only those 2 features.
Extract the required features from the data frame
Set the upper bound and lower bound of latitude and longitude
Upper and Lower Bound of Latitude and Longitude
But how to get the values to set the upper bound and lower bound? I used OpenStreetMap export feature to find those values for South America region.
OpenStreetMap Export Feature
Extract only those data points that lie in the given region. There are 47 data points to plot in the South America region.
Retrieve data points in the specified area
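In code, the extraction amounts to a boolean mask over the two coordinate columns. Here is a sketch with a hypothetical miniature data frame standing in for cities.csv (the bound variable names mirror Basemap's corner arguments, llcrnr/urcrnr, and the bounds are rough South America values of the kind you can read off the OpenStreetMap export tool):

```python
import pandas as pd

# Hypothetical stand-in for cities.csv, which has Latitude/Longitude columns.
df = pd.DataFrame({
    "City": ["Lima", "Bogota", "Oslo", "Sydney"],
    "Latitude": [-12.05, 4.71, 59.91, -33.87],
    "Longitude": [-77.04, -74.07, 10.75, 151.21],
})

# Rough bounding box for South America.
llcrnrlat, urcrnrlat = -56.0, 14.0   # lower/upper latitude
llcrnrlon, urcrnrlon = -93.0, -32.0  # left/right longitude

in_region = df[
    df["Latitude"].between(llcrnrlat, urcrnrlat)
    & df["Longitude"].between(llcrnrlon, urcrnrlon)
]
print(in_region["City"].tolist())
```

The filtered frame is then what gets iterated over when plotting points on the Basemap.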
Create the Basemap with necessary parameters like upper bound and lower bound of longitude and latitude.
Create Basemap
Draw coastlines and countries. xs and ys are relative coordinates on the map. Iterate through the dataset and plot each point on the map. The last line in the code snippet below downloads the image to the computer (or to /content in Google Colab).
Draw the map
Output of the above code snippet
The Final Result!! :)
The below code snippet can be used to display an image in python notebook. Here, it displays the map that was downloaded before.
Code to display an image in python
Output
ExampleMap.png
The above image is an example of how powerful the Basemap package is. Using the Basemap package we could create many more such maps with ease.
Photo by Kelly Sikkema on Unsplash
Hope this helps. Let me know your views/suggestions/questions in the comments section below. :)
GitHub Link: https://github.com/athisha-rk/mediumArticlesRelatedFiles/blob/master/Basemap_Example.ipynb | https://medium.com/analytics-vidhya/how-to-plot-data-on-a-world-map-in-python-25cf9733c3dd | ['Athisha R K'] | 2020-09-22 13:45:48.131000+00:00 | ['Python', 'Data Science', 'Python3', 'Computer Science', 'Basemap'] |
Top AI/ML Blogs I follow | This is an education blog by MIT (Massachusetts Institute of Technology), a blog dedicated to updating the public about the news and achievement of its students, staff, and organization. This blog covers news on the latest trend in AI as well as different areas of AI application. So, if you are looking for a way to follow the latest news in the AI industry, then follow this blog and never miss out on what’s going on. | https://medium.com/datadriveninvestor/top-ai-ml-blogs-i-follow-fb40e4cf1c24 | ['Naina Chaturvedi'] | 2020-12-02 09:50:46.858000+00:00 | ['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Education', 'Tech'] |
Ultimate Designer’s Gift Guide 2020: Buy the Best Gifts for Designers for This Holiday — DesignXplorer | Akbar Shah · Nov 8 · 9 min read
Guys, the holiday season is around the corner, so are you ready for the gifts parade? We know it’s super easy to find gifts for anyone but tech geeks in your life. Yes, we’re talking about the hard-to-please graphic designers, UX designers, Web Designers, and all the other digital artists.
Now that we are here with a list of Best gifts for designers, you can stop squeezing your brain for ideas. We’ve covered some amazing holiday gift ideas that could turn your hard-to-please tech geeks dumbfounded.
So now that we’ve got your back, you can stop pulling your hair, relax, and read this post to find some cool Gifts For Designers. These are some thoughtful gifts for your creative partner, brother, friend, or colleague that you are planning to show your love for. We have categorized the gifts into different sections. Thus, you can easily scroll down and find a suitable category for your loved one.
Regardless of the digital artist you are shopping for, these unique ideas will boost your mood and make the gift parade more interesting!
Let’s go!
Nudge the Geek to Have Fun
This is your chance to pull your workaholic designer to the real world and make him/her enjoy these Best Gifts for Graphic Designers. Let them know they deserve much more than this, and there’s more to life. Shower them with these gifts and nudge them to have fun. Your recipients need reminders that the holiday season is around the corner!
Some of these gift ideas are not only fun but also productive. For example, the Polaroid Instant Film Camera is great to snap all the memories instantly. But that’s not all. This can be a cool add-on for the designer’s professional life. Some more gift ideas are as follows.
Fun Gift Ideas to Gift Designers: | https://medium.com/nyc-design/ultimate-designers-gift-guide-2020-buy-the-best-gifts-for-designers-for-this-holiday-674528991a78 | ['Akbar Shah'] | 2020-11-25 19:11:41.218000+00:00 | ['New York', 'UI', 'Gifts For Designers', 'Gift Ideas', 'Web Design'] |
Writing a custom data augmentation layer in Keras | Writing a custom data augmentation layer in Keras
Subclass Layer, and implement call() with TensorFlow functions
Data augmentation can help an image ML model learn to handle variations of the image that are not in the training dataset. For example, it is likely that photographs provided to an ML model (especially if these are photographs by amateur photographers) will vary quite considerably in terms of lighting. We can therefore increase the effective size of the training dataset and make the ML model more resilient if we augment the training dataset by randomly changing the brightness, contrast, saturation, etc. of the training images.
While Keras has several built-in data augmentation layers (like RandomFlip), it doesn’t currently support changing the contrast and brightness. So, let’s implement one.
Writing the Data Augmentation Layer
The class will inherit from a Keras Layer and take two arguments: the range within which to adjust the contrast and the brightness (full code is in GitHub):
class RandomColorDistortion(tf.keras.layers.Layer):
    def __init__(self, contrast_range=[0.5, 1.5],
                 brightness_delta=[-0.2, 0.2], **kwargs):
        super(RandomColorDistortion, self).__init__(**kwargs)
        self.contrast_range = contrast_range
        self.brightness_delta = brightness_delta
When invoked, this layer will need to behave differently depending on whether it is in training mode or not. If not in training mode, the layer will simply return the original images. If it is in training mode, it will generate two random numbers, one to adjust the contrast within the image and the other to adjust the brightness. The actual adjust is carried out using methods available in the tf.image module:
    def call(self, images, training=None):
        if not training:
            return images
        contrast = np.random.uniform(
            self.contrast_range[0], self.contrast_range[1])
        brightness = np.random.uniform(
            self.brightness_delta[0], self.brightness_delta[1])
        images = tf.image.adjust_contrast(images, contrast)
        images = tf.image.adjust_brightness(images, brightness)
        images = tf.clip_by_value(images, 0, 1)
        return images
Note: For efficiency, it is important that the implementation of the layer consist of TensorFlow functions so that they can be implemented efficiently on a GPU.
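For intuition, here is roughly what the two tf.image calls compute, sketched in plain NumPy following their documented formulas (illustrative only, not the TensorFlow implementation):

```python
import numpy as np

def adjust_contrast(images, factor):
    # tf.image.adjust_contrast computes (x - mean) * factor + mean,
    # where mean is the per-channel mean over the spatial dimensions.
    mean = images.mean(axis=(-3, -2), keepdims=True)
    return (images - mean) * factor + mean

def adjust_brightness(images, delta):
    # tf.image.adjust_brightness simply shifts every pixel value by delta.
    return images + delta

img = np.full((2, 2, 3), 0.5)
img[0, 0, :] = 1.0  # one bright pixel
out = np.clip(adjust_brightness(adjust_contrast(img, 1.5), 0.1), 0, 1)
print(out[0, 0, 0], out[0, 1, 0])  # 1.0 0.5375
```

Contrast stretches pixel values away from the per-channel mean, brightness shifts them uniformly, and the final clip keeps everything in the valid [0, 1] range.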
Testing that the layer works
To test that the layer works, simply create the layer and call it on some images:
layer = RandomColorDistortion()
trainds = create_preproc_dataset('gs://practical-ml-vision-book/flowers_tfr/train-*')
for (img, label) in trainds.take(3):
    ...
    for idx in range(1, 5):
        aug = layer(img, training=True)
        ax[rowno, idx].imshow(aug.numpy());
The result is shown below:
Random contrast and brightness adjustment on three of the training images. The original images are shown in the first panel of each row, and four generated images shown in the other panels.
Incorporating the layer into a model
To use the layer, simply insert it into the Keras model layers. The layer will be applied during training and be a no-op during evaluation or prediction:
layers = [
    ...
    tf.keras.layers.experimental.preprocessing.RandomFlip(
        mode='horizontal',
        name='random_lr_flip/none'
    ),
    RandomColorDistortion(name='random_contrast_brightness/none'),
    hub.KerasLayer …
]
Does it work?
The purpose of data augmentation is to improve model accuracy and to reduce overfitting. On that count, this layer works quite well on the flowers dataset.
Compare the training plot without data augmentation:
with the training plot after data augmentation:
We get better accuracy (0.88 instead of 0.85) and the training and validation curves remain totally in-sync indicating that overfitting is under control.
Enjoy!
Next Steps: | https://towardsdatascience.com/writing-a-custom-data-augmentation-layer-in-keras-2b53e048a98 | ['Lak Lakshmanan'] | 2020-12-29 04:10:48.919000+00:00 | ['Keras', 'TensorFlow', 'Computer Vision', 'Google Cloud Platform', 'Machine Learning'] |
How to Become a Better Developer Every Single day | How to Become a Better Developer Every Single day
4 ideas to make you a top-class performer without long study nights
Photo by Valeriy Khan on Unsplash
True winners build themselves during their practice time.
That’s something you might have recognized yourself when looking at masterclass performers. You see them doing something and you think to yourself:
“I wonder how much this person practised for achieving this”.
Coding is no exception to this rule. And if you want to be a top performer too, you have to include daily practice of your skills in your life.
Let’s see how you can easily do that with the following list.
You Should Always Have A New Goal And Work Toward It
This is a very personal belief I have, and it’s something that guides me through my life every day. I feel like people always need to set a new goal in their mind. Something that they want to achieve, and they have to work hard to get it.
That’s true for me both on a personal level and in my career. I suggest you set one target at a time for yourself and build your way toward it.
For example, you could:
Build an app you always wanted to create.
Finally, finish all those Udemy coding courses you have in your library.
Learn a new language you were curious about.
Learn new patterns, techniques, for improving the code you write daily.
Find a way to achieve your goals. Write down the necessary steps if you feel like it.
This behaviour has immense value. It will make you grow as a professional because you will learn new stuff and practice. It will give you new occasions because you never know what some knowledge can bring you to in the future. | https://medium.com/javascript-in-plain-english/how-to-become-a-better-developer-every-single-day-22f771de5897 | ['Piero Borrelli'] | 2020-11-19 08:50:05.775000+00:00 | ['Web Development', 'Technology', 'Software Engineering', 'Work', 'Programming'] |
Learning from Audio: Fourier Transformations | Learning from Audio: Fourier Transformations
Breaking down a fundamental equation in signal processing
Related article:
Introduction:
In Wave Forms, we looked at what waves are, how to visualize them, and how to deal with null data.
In this article, I aim to develop an intuition on what the Fourier Transformation is, why it is useful when studying audio, show mathematical proofs to make it computationally efficient, and visualize the results. The data we are working with in this (and related) articles can be found on my GitHub repository for this series.
With this in mind, let’s begin. We will first initialize our necessary variables and packages below:
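Since the original notebook cell is not reproduced here, a minimal stand-in setup (NumPy only, with a synthetic 440 Hz tone in place of a loaded audio file) looks like this:

```python
import numpy as np

# Hypothetical stand-in for the article's audio clip: one second of a
# 440 Hz sine wave sampled at 8 kHz (the real notebooks load .wav files).
sr = 8000                       # sample rate in Hz
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t)

# The discrete Fourier transform of the real-valued signal,
# together with its frequency axis.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)

peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)  # 440.0
```

The transform turns a time-domain waveform into its frequency content, which is exactly why it is so useful for studying audio: the dominant pitch falls out as the peak of the magnitude spectrum.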
Recall from Wave Forms: | https://towardsdatascience.com/learning-from-audio-fourier-transformations-f000124675ee | ['Adam Sabra'] | 2020-10-08 21:11:50.034000+00:00 | ['Signal Processing', 'Engineering', 'Mathematics', 'Data Science', 'Machine Learning'] |
Think Differently to Flourish and Grow Your Career | Think Differently to Flourish and Grow Your Career
What 5 famous entrepreneurs can teach us about how to think
Photo by Rod Long on Unsplash
Thinking differently will help your career. How you think about things is as powerful as how hard you work, how successful you are as a mentor or even the skills you gain.
Very successful people think differently from everyone else. The good news is that thinking habits can be learned. Here are wise words of wisdom from 5 successful business people that you can put into action to get ahead.
Jeff Bezos
Founder of Amazon
“If you’re not flexible, you’ll pound your head against the wall, and you won’t see a different solution to a problem you’re trying to solve.”
Key takeaways:
Define the problem and explore the causes of the pain. Find as many solutions as you can. Draft a plan of action to crack the code.
Mark Zuckerberg
Co-founder of Facebook, Inc.
“People think innovation is just having a good idea, but a lot of it is just moving quickly and trying a lot of things.”
Key takeaways:
Try things out, even if the outcome is not apparent. Escape the limitations of traditional thinking. Test, test, and test.
Steve Jobs
Co-founder of Apple
“I think if you do something and it turns out pretty good, then you should go do something else wonderful, not dwell on it too long. Just figure out what’s next.”
Key takeaways:
Focus on your next goal. Judge success by your momentum. Stay on top of your goals and know where you are.
Warren Buffet
Founder of Buffet Partnership, Ltd
“It takes 20 years to build a reputation and 5 minutes to ruin it. If you think about that, you’ll do things differently.”
Key takeaways:
A poor reputation makes you less trustworthy with people you want to work alongside. When facing a challenging situation, take a couple of deep breaths and then react. After you compose yourself, determine how to best respond.
Elon Musk
Founder of SpaceX and The Boring Company
“It’s important to view knowledge as sort of a semantic tree: make sure you understand the fundamental principles — the trunk and big branches — before you get to the leaves, the details.”
Key takeaways: | https://medium.com/2-minute-madness/think-differently-to-flourish-and-grow-your-career-dbea348d4f58 | ['Matthew Royse'] | 2020-12-26 18:46:39.857000+00:00 | ['Inspiration', 'Thinking', 'Mindset', 'Motivation', 'Self Improvement'] |
A Curated List of 57 Amazing GitHub Repositories for Every Python Developer | Learn
All of the computer science algorithms implemented in Python — great for tech interviews.
A curated list of awesome Python frameworks, libraries, software, and resources — with code covering almost everything you might use Python for.
A book for self-learners — this book aims to teach the Python programming language using a practical approach.
Python sample codes for robotics algorithms.
Jupyter notebooks for teaching/learning Python 3.
Playground and cheatsheet for learning Python. Collection of Python scripts that are split by topics and contain code examples with explanations.
Useful functions, tutorials, and other Python-related things.
An animation engine for explanatory math videos. It’s basically used to create animations programmatically.
An open-source collection of libraries and tools for natural language processing.
Freely available programming books. There’s a Python section with a ton of free e-books to read through.
A mix of worksheets that walk users through the basics of getting started with machine learning. Includes links to code samples, data sets, and useful videos explaining key math concepts.
Interactive deep learning book with code, math, and discussions. Available in multiple frameworks. Adopted at 140 universities from 35 countries.
Models by TensorFlow (67.1k stars)
An open-source repository to find many libraries and models related to deep learning.
A reference for anyone getting started with Google’s TensorFlow machine learning software framework. Includes a long list of code examples demonstrating everything from basic TensorFlow operations to building neural networks.
A list of programming tutorials that are project-oriented, including building web scrapers, applications, bots, etc.
Solutions for various coding/algorithmic problems and many useful resources for learning algorithms and data structures. | https://medium.com/better-programming/a-curated-list-of-57-amazing-github-repositories-for-every-python-developer-67dc2cd8d0bc | ['Angelica Dietzel'] | 2020-11-16 18:09:49.361000+00:00 | ['Machine Learning', 'Data Science', 'Python', 'Computer Science', 'Programming'] |
Meditative Walking | Spiritual bliss, walking meditation,
Observing only what is present now
Absorbing unique scenes passing by,
Calming hurried extraneous thoughts
Concentration on nature’s presence.
Listening to creek’s rhythmic flows
Melodious sounds of rushing currents
Watching forceful power of its shoals,
Verdant trees nestled along its banks
Occasional squirrels scamper around.
See bounding deer buck on far bank
Sharp pointed antlers exude his pride
Behind him comes doe with two fawns,
Continuing my walk across wooden bridge
Rising high above rapid flowing creek.
Embracing nature’s own unique glory
Gratitude for its diverse primal facets
Soothing balm for weary human souls,
Spiritual bliss felt, meditative walking. | https://medium.com/flicker-and-flight/meditative-walking-6f14c28b0b79 | ['Randy Shingler'] | 2020-08-29 14:14:29.240000+00:00 | ['Self-awareness', 'Mindfulness', 'Meditation', 'Walking', 'Poetry'] |
Accessible UX design in crypto products: review of BRD wallet | Good User Experience is rare in the crypto world. Most projects are very technology driven and lack attention on user experience. Something which is, unfortunately, holding back mainstream adoption of crypto applications beyond crypto as an investment.
Luckily there are some exceptions of projects that are more focused on ‘normal’ users instead of crypto experts, and invest proper attention to the UX design of their products.
One of my favorite crypto wallets is BRD. Their app is both technically strong as decentralized wallet where you hold your own keys, and has accessible UX design for users that are new to using crypto.
BRD wallet UX design review
Let’s have a look at some of the core flows in the BRD app and what are some of the good and bad UX design decisions.
Wallet setup and seed key backup
The setup flow of decentralized wallets requires the user to write down a backup seed key, with which users can access their wallet if they change devices. The first time a user sets up a wallet (and it doesn’t have any funds yet), the urgency to store this seed key in a secure place is barely felt, making this a very challenging part of the user experience to get right.
● Fair explanation of why users need to back up a ‘paper key’. But no explanation of what a ‘safe place’ is. Most users might use an insecure text file on their desktop computer to store this. Examples and graphics would help here.
✔︎ At the end of the flow the app checks if you really wrote down the seed key by asking to input two words you wrote down.
✔︎ Easy to login on your phone with a 6 digit password, instead of a very long private key. | https://medium.com/coinmonks/accessible-ux-design-in-crypto-products-review-of-brd-wallet-3e81da021193 | ['Antonio Van Der Weel'] | 2020-09-21 15:16:50.617000+00:00 | ['User Experience', 'Cryptocurrency', 'Design', 'Blockchain', 'Bitcoin'] |
Integrating Standard Bounties | Integrating Standard Bounties
Bringing security, interoperability, and a richer feature-set to Gitcoin’s smart contracts via Bounties Network
Moving money is easier than ever before. This trend is likely to continue with the rise of virtual currencies like Bitcoin and Ether. It is becoming increasingly clear that we’ll be living in a future where “moving money is as easy as sending an e-mail is today.”
“Live in the future, then build what’s missing.” — Paul Graham
In the future of fast transactions, we’ll want to govern the movement of money well. This is especially true for complex transactions, where human judgement is still required. Figuring out when money should move is still hard. Think about the below examples:
UX Design: Did the final interface meet the requirements the issuer specified?
Software Development: Was the feature built, tested, and successfully merged based on the requirements outlined?
Content Marketing: Did the article discuss the appropriate topics, was it descriptive enough, and did it follow through on the intended scope?
To see the rest of this article visit: https://gitcoin.co/blog/integrating-standard-bounties/ | https://medium.com/gitcoin/integrating-standard-bounties-dc4cf62bf814 | ['Vivek Singh'] | 2019-06-12 20:32:37.307000+00:00 | ['Open Source', 'Blockchain', 'Ethereum', 'Bitcoin', 'Writing'] |
Finding the Shape of the Surface of Water Rotating in a Bucket | In this article, I will consider the problem of finding the shape of the free surface of the water inside a half-full bucket rotating with constant angular speed. Since I will work with systems moving with speeds much smaller than the speed of light, relativistic effects can be safely neglected. The derivation will be based on the Newtonian laws of motion, first enunciated in Newton’s magnum opus, the Principia.
The Principia was first published in 1687, followed by two expanded editions in 1713 and 1726. An American edition appeared in 1846. Page 83 contains the statement of “Newton’s Laws of Motion,” as shown below.
This article is based on Knudsen and Hjorth, which I will henceforth call KH.
Figure 1: The portrait of Isaac Newton facing the title page of the Principia and, on page 83, “Newton’s Laws of Motion.”
Kinematics in Accelerated Reference Frames
What are the equations of motion of a body, expressed in terms of coordinates in the reference frame S, moving arbitrarily with respect to the inertial reference frame I?
Our first goal will be to find Newton’s second law expressed in the moving frame S. Note that any motion of S can be obtained by combining a translation of the origin of S relative to I with a rotation of S around an axis through its origin.
Figure 2: The system of coordinates S, moving arbitrarily with respect to the inertial system I (figure based on KH).
We first call R the position of a body of mass m with respect to I and r the position of the body with respect to S. To find Newton’s second law of motion in S coordinates we first need to obtain the relation between the acceleration in I and S.
The velocity and acceleration vectors of m measured in the inertial coordinate frame are:
Equation 1: The velocity and acceleration vectors of m measured in the inertial coordinate frame I.
The velocity and acceleration vectors of m measured in the moving coordinate frame are:
Equation 2: The velocity and acceleration vectors measured in the moving coordinate frame S.
Note that, since we are using Newton’s mechanics, the measure of time does not change from I to S.
As shown in Fig. 2, the position of the body in I can be written as:
Equation 3: The position of the body in the reference frame I.
Differentiating Eq. 3 twice we obtain, after some algebra, the following expression:
Equation 4: The decomposition of the total acceleration relative to S.
In this equation, a is the acceleration of the body with respect to S, the three terms on the right-hand side are, respectively:
The relative acceleration between I and S.
The acceleration of the body relative to the inertial frame I if the body is at rest at an instantaneous position (x, y, z) in S (this is the so-called comoving acceleration).
The Coriolis acceleration.
Figure 3: In the inertial reference frame (top), the black ball moves in a straight line. However, due to the Coriolis and centrifugal forces present in the rotating reference frame (bottom) the observer (red dot) who sees the object as following a curved path (source).
The three components of a have the following form:
Equation 5: The expression for the three terms in Eq. 4.
We can re-write the second and third lines of Eq. 5 using ω and dω/dt where ω is the angular velocity of the frame S:
Equation 6: The comoving and Coriolis accelerations written in terms of the angular velocity and its time derivative.
Figure 4: Directions of the centrifugal and Coriolis forces in the S frame.
The second term (with an extra minus sign) in the comoving acceleration (the first line of Eq. 6) can be written as:
Equation 7: Centrifugal acceleration.
Dynamics in Accelerated Reference Frames
We will now express Newton’s second law as seen by an observer in the reference frame S. Following KH, we assume that the force acting on the body is the same in both reference frames. Substituting Eq. 4 into F = ma (Newton’s second law in the inertial frame I) then gives us Newton’s second law in S:
Equation 8: Equation of motion (Newton’s second law) of the body as observed from S.
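Written out in standard notation (a reconstruction, since the equation images are not included here; v is the velocity measured in S, F the real force, following KH's sign conventions), Eq. 8 reads:

```latex
m\,\mathbf{a} \;=\; \mathbf{F}
\;-\; m\,\ddot{\mathbf{R}}_0
\;-\; m\,\boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r})
\;-\; m\,\dot{\boldsymbol{\omega}}\times\mathbf{r}
\;-\; 2m\,\boldsymbol{\omega}\times\mathbf{v}
```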
For Newton’s second law to be valid in S, we interpret the last four terms on the right-hand side of Eq. 8 as fictitious forces.
The Rotating Bucket
We now consider a bucket half full of water rotating with angular velocity ω about its symmetry axis. To find the shape of the surface of the water we proceed as follows.
Figure 5: In the inertial reference frame I, the bucket is rotating. Eventually, the water comes to rest relative to the bucket. In the non-inertial reference frame S that also rotates with angular velocity ω, the bucket and the water are at rest (based on this source).
In the non-inertial reference frame S that also rotates with angular velocity ω, the bucket is at rest. After some time the water will come to rest with respect to the bucket.
Consider a water volume element, with mass m. The forces acting on it are:
The gravitational force
The centrifugal force
The force originating from the pressure gradient
Since the mass is at rest in the rotating coordinate system S, the sum of these forces must be zero in S:
Equation 9: Equilibrium equation for the water volume element in the non-inertial reference frame.
The double product in the second term can be rewritten as:
The components of the equilibrium equation are, therefore:
Equation 10: Components of the equilibrium equation in the non-inertial reference frame S.
After integration, we obtain:
Equation 11: The pressure P obtained after integrating Eq. 10.
For a surface with constant pressure, Eq. 11 gives us:
Equation 12: The surface of the water is a paraboloid.
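Reconstructed in standard notation, with ρ the water density, Eqs. 10 and 11 read:

```latex
\frac{\partial P}{\partial x} = \rho\,\omega^2 x,\qquad
\frac{\partial P}{\partial y} = \rho\,\omega^2 y,\qquad
\frac{\partial P}{\partial z} = -\rho g
\;\;\Longrightarrow\;\;
P = \tfrac{1}{2}\rho\,\omega^2\left(x^2+y^2\right) - \rho g z + C
```

and setting P constant on the free surface gives the paraboloid of Eq. 12:

```latex
z = \frac{\omega^2}{2g}\left(x^2 + y^2\right) + \mathrm{const}.
```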
We conclude that the shape of the water in the bucket is a paraboloid. | https://medium.com/cantors-paradise/finding-the-shape-of-the-surface-of-water-rotating-in-a-bucket-564b5217d363 | ['Marco Tavora Ph.D.'] | 2020-12-03 18:43:33.920000+00:00 | ['Science', 'Math'] |
Angular — How To Proxy To Backend Server | Angular — How To Proxy To Backend Server
Explaining how to configure a proxy for backend API calls with an example.
Photo by Jens Herrndorff on Unsplash
In an Angular app, we often talk to backend servers during the development phase; we will explore all the scenarios in this article. Here are the topics we cover.
What is proxying
Example Project
proxy.config.json options
Proxy Setup with Angular CLI
Different Ways to configure
Rewrite the Path URL
Multiple app entries to one API endpoint
Multiple app entries with multiple endpoints
Summary
What is proxying
In general, a proxy or proxy server serves as a gateway between your app and the internet. It’s an intermediate server between clients and servers, forwarding client requests to resources.
In Angular, we often use this proxying in the development environment. Angular uses the webpack dev server to serve the app in development mode. If we look at the following diagram, the app UI is running on port 4200 and the backend server is running on port 3070. All the calls that start with /api will be redirected to the backend server and the rest all fall back to the same port.
In subsequent sections, we will see how we can accomplish this and other options as well.
proxying all URLs start with /api
Example Project
Let’s follow these commands for the example project and you are ready for angular CLI proxy setup.
// clone the project
git clone https://github.com/bbachi/angular-proxy-example

// install dependencies for node server
npm install

// cd to ui
cd appui

// install app ui dependencies
npm install
Once you install all the dependencies, you can start both the Angular app and the Node.js server on ports 4200 and 3070, respectively.
You can start the Angular app with these commands npm start or ng serve and here is the Angular app running on 4200.
The angular app runs on 4200
Let’s start the server with this command npm start and test this API on port 3070.
API running on port 3070
proxy.config.json options
target: This is where we need to define the backend URL.
pathRewrite: We use this option to edit or rewrite the path.
changeOrigin: If your backend API is not running on localhost, we need to set this flag to true.
logLevel: If you want to check whether the proxy configuration is working properly, set this flag to debug.
bypass: Sometimes we have to bypass the proxy; for that, we can define a function. But it should be defined in proxy.config.js instead of proxy.config.json.
Proxy Setup with Angular CLI
Now the app and the server are running on different ports. Let's set up a proxy for communication between them.
The first thing you need is the proxy.config.json file. Here we define the target for all the URLs that start with /api.
proxy.config.json
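The file itself was shown as an image in the original; a minimal proxy.config.json for this setup could look like the sketch below (the secure and logLevel keys are optional additions here, and the target assumes the Node server from the example on port 3070):

```json
{
  "/api/*": {
    "target": "http://localhost:3070",
    "secure": false,
    "logLevel": "debug"
  }
}
```

With this in place, a request from the app to /api/settings is forwarded to http://localhost:3070/api/settings.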
The second thing is to let Angular know we have this proxy.config.json in place. We can do that by adding the proxy-config flag while starting the app, as below. Once started, we can see a message indicating that all the URLs starting with /api will be redirected to the Node.js server running on port 3070.
npm start script
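The start script shown in the image boils down to a one-line change in package.json; a sketch (assuming the config file is named proxy.config.json):

```json
{
  "scripts": {
    "start": "ng serve --proxy-config proxy.config.json"
  }
}
```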
Now we can test the app and see the settings from the server
settings coming from the server
Different Ways to configure
Another way to configure the proxy in an Angular project is to define it in angular.json.
proxyConfig in angular.json
Let's summarize the ways here:
Add ng serve --proxy-config proxy.config.json to the start script in package.json.
Define it in angular.json under the serve section, as shown above.
Rewrite the Path URL
Whenever the URLs change, we often need to rewrite the paths of the backend server's endpoints. We can do that with pathRewrite.
Let's understand the pathRewrite option. For instance, say our backend URL /api/settings is changed to /api/app/settings and we want to test it in development before it goes to production. We can achieve this with the pathRewrite option, as below.
path rewriting
So, we are rewriting /api/settings to /api/app/settings and /api/users to /users.
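The rewrite rules from the image can be sketched as follows; pathRewrite maps a regex on the request path to its replacement (the target port follows the earlier example):

```json
{
  "/api/*": {
    "target": "http://localhost:3070",
    "pathRewrite": {
      "^/api/settings": "/api/app/settings",
      "^/api/users": "/users"
    },
    "logLevel": "debug"
  }
}
```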
Here is the console output while starting the app.
angular proxy rewriting URL paths
Multiple app entries to one API endpoint
Sometimes we have multiple modules with services in our app. We might have a scenario where multiple entries or services call the same API endpoint.
In that case, we need to define proxy.config.js instead of proxy.config.json. Don't forget to add it to angular.json.
proxy.config.js
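The proxy.config.js shown as an image can be sketched like this; a JavaScript config exports an array, so several contexts can share a single entry (the context paths here are illustrative):

```javascript
// proxy.config.js
const PROXY_CONFIG = [
  {
    // multiple app entries...
    context: ["/api", "/auth", "/users"],
    // ...all proxied to one API endpoint
    target: "http://localhost:3070",
    secure: false,
    logLevel: "debug"
  }
];

module.exports = PROXY_CONFIG;
```

Remember to point angular.json (or the start script) at proxy.config.js instead of the JSON file.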
angular.json
Multiple app entries with multiple endpoints
We have seen how to define multiple entries to the same endpoint. Let's look at the scenario with multiple entries and multiple endpoints.
proxy for multiple APIs
For instance, we have three APIs running on ports 3700, 3800 and 3900, and your app should talk to all of them.
All we need to do is add multiple entries to proxy.config.json. Here is the configuration for that setup; we have to make sure all the APIs are running on these ports for successful communication.
proxy.config.json
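Assuming the three APIs from the example run on ports 3700, 3800 and 3900, the multi-endpoint proxy.config.json could look like this sketch (the path names are made up for illustration):

```json
{
  "/api/orders": { "target": "http://localhost:3700", "secure": false },
  "/api/products": { "target": "http://localhost:3800", "secure": false },
  "/api/users": { "target": "http://localhost:3900", "secure": false }
}
```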
Summary | https://medium.com/bb-tutorials-and-thoughts/angular-how-to-proxy-to-backend-server-6fb37ef0d025 | ['Bhargav Bachina'] | 2020-01-12 02:26:59.153000+00:00 | ['JavaScript', 'Software Development', 'Angular', 'Typescript', 'Web Development'] |
Building a Spring Boot REST API — Part 3: Integrating MySQL Database and JPA | Converting Blog to Entity (@Entity)
In the previous tutorial, we created a blog class, Blog.java, with the fields of our table. As you know, each instance of Blog.java is supposed to be an entry in our table (i.e., a row).
To tell Spring that Blog.java is an entity, we need to add the @Entity annotation to the class.
From our table structure above, the id column is the primary key and auto-generated field. To tell Spring that id is a primary key, we put the annotation @Id to the field.
@GeneratedValue(strategy = GenerationType.AUTO) tells Spring that the field is auto-generated and will not be provided by the user, rather, it will be generated by the database.
We also add another constructor to the class with “title” and “content” only. This constructor will be used when we supply form data to the controller.
As the id will be auto-generated, we don’t need to supply it. Hence, the exclusion from the constructor.
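Putting the pieces together, a sketch of the annotated entity could look like this (the field names follow the article; the javax.persistence imports assume the JPA dependency set up earlier in the series):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Blog {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private int id; // auto-generated primary key, not supplied by the user

    private String title;
    private String content;

    // default constructor required by JPA
    public Blog() {
    }

    // used when supplying form data; the id is generated by the database
    public Blog(String title, String content) {
        this.title = title;
        this.content = content;
    }
}
```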
Another annotation we can add to the class is:
@Table(name = "Blog")
This is required if your table name is different from the class name.
We can also add the @Column annotation to our fields if a field's name differs from the table column's name. | https://medium.com/better-programming/building-a-spring-boot-rest-api-part-iii-integrating-mysql-database-and-jpa-81391404046a | ['Salisu Wada'] | 2019-08-22 23:10:09.247000+00:00 | ['Spring Boot', 'Programming', 'Java', 'Rest Api', 'Jpa']
Designing for mass adoption. | Designing for radical simplicity.
While the biggest leaps in terms of usability are unarguably achieved through technological advancements and translating complicated technicalities into relatable concepts, we needed new interfaces that are capable of actually leveraging that for users.
Additionally, the new interface design had to live up to our high goal of radical simplicity while keeping the heart of our brand.
We stripped away everything that wasn’t absolutely necessary for the interface to function. By reducing the number of elements and info on each screen we achieved a focus on the important elements. Additionally, we dedicated every step in a user-flow to exactly one action, ensuring additional focus. We went from an intensely colored design language to a simple and plain one that works with a lot of whitespace and shades of gray, which gave us the possibility to make our sparingly used brand colors really stand out to guide users through our flows.
We aim for interfaces that are so simple and basic that they can be understood as a boilerplate for the community to build their own apps upon, but still maintain a different look and feel — and are nonetheless actually fun to use.
Easy money.
Many of the crypto-specific concepts and terms are unknown to most and especially hard to get for non-technical people. So for Nimiq, we avoid crypto slang and use established concepts where possible, reducing the cognitive load to a minimum.
Accounts
While the word ‘account’ has a common meaning when it comes to banking and finance, our accounts are rather to be compared with the representations of identity in web-services (e.g. a Google account). Accounts manage and aggregate addresses but cannot send or receive funds on their own.
They are visually represented by a ring of hexagons which hints at which and how many addresses it contains.
Addresses
A Nimiq Address is a simplified public key that looks like a regular IBAN address. Addresses hold, send and receive NIM.
Accounts manage addresses — addresses can send and receive funds.
The Nimiq Addresses are represented by fun looking avatars, so-called Identicons.
The visual appearance of those avatars is directly derived from the address they represent, thus making the avatars a human-readable way to display and verify a cryptocurrency address.
To give the most relevant example: Sending crypto requires diligence to avoid making a potentially dramatic mistake while typing or pasting the receiver’s address. An avatar, in contrast, can be checked and verified at a glance.
On top of this, the avatars add a fun and relatable touch to what would otherwise be a rather dry and technocratic matter, read more on the Identicons here.
Auto-naming for new Accounts and Addresses
To support the visuals, an automated naming concept was implemented that provides relatable names for newly created accounts and addresses.
Of course, these names can be changed at all times.
An account is named after the background color of its initial address. It is rather a basic concept that can eventually result in a user having two ‘Yellow Accounts’. Still, we are happy to avoid generic ‘New Accounts’ and provide some guidance here.
The address names, however, are directly derived from and correspond to the avatar’s appearance (and thus, from the actual address).
If your Avatar has roller skates, glasses and a cowboy hat, it might be called the ‘Reading Outdoor Skater’ or the ‘Inline Cattle-Driving Geek’ or the ‘Rolling Nerdy Cowboy’
The visuals and names result in about 4 billion unique combinations.
Super easy entry
The gateway into the Nimiq ecosystem is the most crucial point of a user’s journey. We took our time to create an entry to the Nimiq blockchain that has one clear cut goal: Be easy.
The radically simplified account creation consists of only two steps: Choosing an avatar and setting a password. And just like that, the user becomes a first-class citizen of the Nimiq Blockchain.
We believe it to be the easiest and fastest onboarding for any payment system, crypto or conventional. Our estimated account creation time is way below 30 seconds — give it a try and let us know how you liked it.
However, Nimiq is a decentralized payment system and as such, it can’t be secure enough. As security and convenience often oppose each other when it comes to user-experience we had to find a solid middle ground.
The basic idea here is:
Security should correspond to the user’s situation. A new user should be enabled to try Nimiq before having to commit time and effort to it.
Therefore, we chopped the onboarding into three easily digestible concepts with the most inconvenient one being a somewhat optional choice that we encourage but don’t enforce.
Step 1. Account and password creation
The password is set at account creation and is mandatory. It is required to transact NIM, to download the Login File, to back up the Recovery Words and to add more addresses to an account.
Password: To authorize important actions
Step 2. Login File download
After a new user has created an account, she/he is presented with a prominent call-to-action to download the ‘Login File’, right at the top of the dashboard.
The Login File is Nimiq’s version of the ImageWallet standard (co-developed by Nimiq). It is the default way of logging in to Nimiq. As an image file, it can be easily moved and stored. In the near future, a device’s camera can be used to conveniently log in to the account via the featured QR code.
The Login File is encrypted with the account’s password.
Login File + Password: The way to log in to your account
Step 3. Recovery Words backup
After the Login File was successfully downloaded, a new call-to-action appears. The user is now considerably involved in Nimiq as she/he has already successfully completed the onboarding steps and can now be presented with the most inconvenient step: writing down the 24 Recovery Words. Again, the name is intended to be self-explanatory.
Recovery Words: The backup of your account
Both Login File and Recovery Words come with short but informative advice on the security implications and on how to best handle these presumably unknown concepts of crypto-security.
Super fast payments: The Nimiq Shop-Checkout
Paying with NIM is now as easy as paying with PayPal. The all-new online-checkout flow allows for super quick and convenient payments.
When shopping online with NIM, simply click pay, choose an address and confirm the transaction with your password.
The interface is reduced to the minimum and aims to deliver a light and enjoyable experience that provides high reliability and control nonetheless.
Logout
With this release of our new apps, you are now able to log out of your account(s) with a simple click of a button. Logging out removes all settings associated with that account from your device, so be sure to make use of the backup options Nimiq presents before you hit the final logout-confirm button.
What’s next?
The list of features and ideas, from implementation-ready designs to vague concepts, is long.
The concept of a browser-based blockchain opens up a whole world of potential use cases and apps that hold the power to outperform existing solutions — fiat based services as well as more conventional blockchain solutions.
The fast and easy user experience of Nimiq is not due to some magically well-done design work, it’s the accessibility and the easiness of the browser-first Nimiq blockchain that empowers it.
Stay tuned for an article about the next features to come (yes, Cashlinks is one of them). | https://medium.com/nimiq-network/new-ui-ux-ea5283dd2e0d | ['Julian Bauer'] | 2019-06-06 10:23:17.464000+00:00 | ['UI', 'Cryptocurrency', 'Design', 'Blockchain', 'UX'] |
Importance of SVGs in Web Development | Benefits
Scalability
SVGs are resolution-independent, so we can use them at any size. Increasing and decreasing their size is very easy. Unlike JPG or PNG files, SVGs retain their quality no matter what screen size or resolution is used to show them.
Rendering the same file at different scales is very helpful on responsive web pages. As a developer, you can use the same file source everywhere with just a quick adjustment of the scale.
File size
Unless they're sized for big screens, SVG files are very small compared to PNG or JPEG files:
As you can see, the PNG is nearly ten times larger. When you need to use a lot of images on your website the difference in size is huge.
Note: The size of an SVG can be increased without any file-size increase.
Performance
As you can see above, the sizes of SVGs and PNGs vary widely. When you're developing a website with many graphics and images, there will be a huge difference if you use SVGs rather than JPEGs or PNGs.
If we have 30 images and icons on our landing page, we can easily reach four to five MB using JPEGs or PNGs. With SVGs it would be just 100–200 KB. The difference is huge!
Believe me, no one wants to wait an extra 15–20 seconds when they first land on your webpage!
Style control
There’s another great benefit of using SVGs. While coding you can change their properties just as you want, with a single file source.
Change fill and stroke colors easily
Change sizes with their properties
Easily change the hover color for different purposes. (You no longer need different images or sprites for hover effects.)
Example SVG code
<svg height="100" width="100">
<circle cx="50" cy="50" r="40" stroke="black" stroke-width="3" fill="red" />
</svg>
This SVG includes the height , width , cx , cy , r , stroke , stroke-width and fill properties. These values can be easily changed using the single file source.
As you can see below, using the same file you can add a hover effect for the icon:
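As a sketch of that idea: if the circle from the earlier snippet is given a class (the class name below is made up), a hover effect needs nothing more than plain CSS:

```css
/* recolor the same inline SVG on hover; no second image or sprite needed */
.icon-circle {
  fill: red;
  transition: fill 0.2s ease;
}

.icon-circle:hover {
  fill: orange;
}
```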
Fast websites
SVG will definitely increase the speed of your website — a factor that has a very powerful impact on your audience.
How many of you have waited more than 30 seconds to load a single page? These days, most people will assume they’ve lost their connection after just five seconds. | https://medium.com/better-programming/do-svgs-really-matter-154240f5435c | ['Melih Yumak'] | 2020-08-11 15:50:42.127000+00:00 | ['Design', 'Software Development', 'JavaScript', 'Programming', 'CSS'] |
Mirror, Mirror… | Witch — Not A Vampire…BAHAHAHA!- Author’s photo
October 2017:
A while back my therapist gave me some homework. I was to go shopping with a friend and find a mirror. A beautiful mirror and hang it in my house in a prominent place. I was to look at myself in that mirror every day and see Real Ann — the person my friends see and love, not the person I imagine myself to be when my inner critic is turned on at full volume.
This assignment came about when I realized that other people actually decorated their homes with mirrors. On purpose. I have three mirrors in my home — they are over my bathroom sinks and exist mostly to make sure I don’t have spinach stuck between my teeth.
As the discussion with my therapist ensued, he was truly perplexed at my lack of decorative mirrors. And my long history of never having mirrors in my home. I joked about my past life as a vampire. He was not amused at my attempt to derail his observations. Therapists are like dogs with bones, once they get their teeth into a thing, well, there’s no letting go of it.
I can’t really tell you why mirrors have never appealed to my inner interior decorator. My home looks more like a warm and welcoming Ye Old English Pub than anything out of a design magazine. Warm wood and leather set against earth tones is the theme. Somehow mirrors seem out of sync with that Earth Mother vibe.
At least that’s what I tell myself.
Or maybe my therapist had a valid point. Perhaps I just wasn’t interested in looking too hard at myself. Maybe I just didn’t want to see what other people saw when they looked at me. He might have been right when he postulated that I was content to let the voices in my head run the show without taking stock of reality — my actual external self. I was too hard on myself he stated — being very hard on me — I might add.
Irony runs The Universe, have you ever noticed that?
This was all before Said Universe sat me down and made me look at my life. Not just my external self. All of it. Every last bit of it. And encouraged me to cut myself some slack.
In the last six months, I’ve had a long look at my inner workings. Its time to do my homework and take a look at my outer self now as well, through the eyes of the people who love me. And see myself with compassion and love, not judgment.
You don’t need a magic mirror — tomorrow morning look yourself in the eye, acknowledge the strength it took you to get this far. Give yourself credit for all the battles you’ve survived. Find the soft edges where you can still offer your heart to those worthy of its gifts. And breathe.
In the movie “The Help” the nanny tells her charges every day — “You is smart, you is beautiful — you is important — you is loved.” If no one ever told you that, look in the mirror and tell yourself that, because you are. Say it every day. Because your peeps believe that about you and they aren’t fools. The Humans in our lives who love us are here to remind us what we really look like, who we really are. Inside and out.
Namaste.
Addendum: I eventually found The Mirror. It’s a perfect circle — because Life — the earth — seasons — everything is a circle. It has a frame made of beautiful bits and pieces of glass chips — not quite stained glass but a similar aesthetic. It sits eye-level on the wall at the bottom of where my staircase makes a small turn. I see myself descend those stairs every day. And I remember. I am smart. I am beautiful. I am important. And most of all — I am loved. | https://medium.com/recycled/mirror-mirror-e14de220a2ad | ['Ann Litts'] | 2020-10-02 21:00:08.848000+00:00 | ['Life', 'Mental Health', 'Self Love', 'Healing', 'Women'] |
Principles of design: The basis of good design | Photo by Aleks Dorohovich on Unsplash
As designers, it's our goal to pass information in the most pleasing way possible. Starting out, there's a wealth of literature to read and videos to watch, which can get quite overwhelming to take in at a glance. People take different routes to learn all that needs to be learnt, but there are basic principles of design which, if mastered now, will make all the difference and set your designs apart. This is a sneak peek into what they are about.
Fundamentals
The first thing we should know is that every design, in any visual medium, is made up of the following elements: Line, Shape, Form, Texture and Balance. These fundamental elements are little pieces that make up the bigger picture. No matter your need, these little elements make all the difference and can either make or break a design. They are present in every form in every place — in the texture of your clothes, the layout of your home and even little things like the shape of your cup. You can access more on the basic elements of design here.
Photo RhondaK Native Florida Folk Artist on Unsplash
Colour
Colours can draw your eyes to an image, evoke a certain emotion or communicate important things without using words at all. Most people have a favourite colour that they are drawn to. They like the feelings such colours create in them but they aren’t actively conscious of their reactions to it. People wear black when they are sad, stop signs are red and the walls of hospitals are predominantly white. These colour choices were not by chance but to create or enhance feelings to fit the venue or location.
“DESIGN SPEAKS LOUDER THAN WORDS”
Colours can be described in HEX, RGB, CMYK, HSV or HSB code. Although HEX, RGB and CMYK can all describe colour, CMYK is used mainly in print media, and the other two don't do a good job of describing shades of a colour. HEX code in particular is difficult to understand. At first glance, do you understand the value #13AC7B? Confusing, right? HSV or HSL has been prescribed as the best choice for specifying colour on digital media. The colour wheel we were taught in school is a guide to selecting colours and a magic wand in the arsenal of any designer. Using different formulae for selecting colour, creating a colour palette to use in a project is, to an extent, simplified. Some formulae include:
Monochromatic: This uses one colour from the colour wheel and uses saturation and value to create variations.
Analogous: This uses colours that are next to each other on the colour wheel like reds and oranges or blues and greens.
Complementary: This uses colours opposite each other on the wheel like Blue and Orange or Red and Green.
Split Complementary: This uses two colours on either side of the complement. This gives the same level of contrast as complementary but more colours to work with.
Triadic: This uses three colours that are evenly spaced. These combinations tend to be striking so be mindful when using them.
Tetradic: This uses four colours forming a rectangle on the wheel. The formula works when you let one colour dominate and use the remaining three as accents.
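Since hues on the colour wheel are degrees, each formula above is just a rotation; a small sketch (the base hue is an arbitrary example):

```python
def harmony(hue, offsets):
    """Return the hues produced by rotating `hue` by each offset in degrees."""
    return [(hue + o) % 360 for o in offsets]

base = 200  # a blue-ish hue
print(harmony(base, [0, 180]))       # complementary -> [200, 20]
print(harmony(base, [0, 150, 210]))  # split complementary -> [200, 350, 50]
print(harmony(base, [0, 120, 240]))  # triadic -> [200, 320, 80]
```

Analogous palettes use small offsets (e.g. [0, 30, 60]), while monochromatic keeps one hue and varies saturation and value instead.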
To learn more on colours, follow the link here.
Photo by Francesco Ungaro on Unsplash
Typography
This is the style or appearance of text. Since text is present in almost every form of media, how we style it plays a vital role in design. Fonts communicate more than the words expressed through them. They can be casual, neutral, exotic or graphic. For example, the fonts used in children's literature and adverts show the playful nature of kids and invite them in. Serious information is normally passed with text that looks as serious (sometimes grim and scary) as the message. There are different kinds of fonts: Serif, Sans-serif and Display fonts. There are some things to consider when selecting fonts, such as Hierarchy, Leading, Tracking and Kerning. When selecting fonts, note that less is more. Limit yourself to one or two font choices per project. Follow the link here to understand more on the types of fonts and things to consider when selecting fonts.
Photo by Halacious on Unsplash
Layout & Composition
Layout and Composition give your work structure and make it easy to navigate (think of your kitchen or office floor layout). They also show the relationships between elements. There are five principles of Layout and Composition that can help sharpen your work: Proximity, White Space (Negative Space), Alignment, Contrast and Repetition.
Proximity: This is using visual space to show relationship in your content.
White Space: This is also a major principle of design but is tightly related to Layout and Composition hence mentioning it here. It helps in defining and separating different sections.
Alignment: This is arranging elements so they line up along common edges or lines. It creates order and makes a layout feel deliberate rather than scattered.
Contrast: This is difference between two items. It can help you catch the reader’s eye, create emphasis or call attention to something important. Contrast can be created using colour, size or visual weight and different styles of text.
Repetition: This is a reminder that every project should have a consistent look and feel. Being consistent makes your work easier to read. When users know what to expect, they can relax and focus on the content.
You can see more here.
Photo by sarandy westfall on Unsplash
Images
Images are more than just decoration. In design, they are the hook that draws the viewer to what you have to offer. Compelling visuals help you connect and make a strong impression on viewers without them reading a single word. People are drawn to images that look authentic and tell a story. There are two kinds of images, raster and vector images, and each is good under certain circumstances. When selecting images for your design, look for images that are sharp, clear and free of distortion. More on images here.
Photo by Markus Spiske on Unsplash
Negative Space
Negative space or white space is the area of the design that is empty. Negative space might be thought of as a by-product of designing, but it is actually as important as the other principles discussed above. It gives our design breathing room and conveys emotion in and of itself. Look at the picture above: there's just a bunch of frames on a wall. It is the proper use of negative space that gives it beauty as design or art. Negative space can be manipulated through padding, margin and line height. There are two types of negative space: micro and macro negative space. Micro negative space is the small space between elements, while macro negative space is the larger space between layout elements. Watch the full video on negative space to see more.
And that’s all the basic principles of design. Every good design out there whether print media, digital, artwork or architecture make use of these principles. Grasping these principles now will change how you see everything permanently. | https://uxdesign.cc/principles-of-design-the-basis-of-good-design-6df8ab5aeb83 | ['Linda Okorie'] | 2020-07-30 21:51:04.113000+00:00 | ['UI Design', 'User Experience', 'Design', 'Graphic Design', 'Product Design'] |
The First Few Years of My Marriage Were a Failure | Eventually, my brother got a job offer and left Gallup for greener pastures, and Linda and I had no place to live. We found a tiny apartment we could afford, but it had problems. Cold air leaked in the windows, and snow blew under the door. There was one gas heater in the bedroom that didn’t come close to heating the whole house.
It was so awful, we named it, “Ye Olde Shithole!” Linda left the theater because we were officially “together” and she couldn’t work at the same place as me. She got a job as a waitress where the tips were okay, and the food she brought home from work was excellent.
We were broke but had some good times. My mom and dad wanted to meet Linda for a long time, so they showed up in Gallup one day. Of course, we made it seem as though Linda was just visiting my apartment because they didn’t know we lived together.
Since Gallup is a boring and hideous place, we loaded in the car and took the drive to Santa Fe for the day.
Linda must have passed the test because as we were walking up to a restaurant for lunch, my dad lifted his leg and farted loudly, and he would only do that in front of people he likes. He does have manners. They also bought me some groceries because the cupboard was bare.
It was a good sign.
It was around that time that Linda peed on a stick and found out she was pregnant. I had no idea how I was going to announce this to my family because up until then, I had done a good job of hiding my sins from them and the Witnesses.
I didn’t want the Witnesses to know because I would have been disfellowshipped and my mom and dad would have been forced to shun me, and I didn’t want to lose them. We vowed to keep the pregnancy a secret for as long as we could.
But, time went on, and working as a manager for the theaters was getting old. Imagine if the former manager you took over for had a history of stealing money. Then imagine the big boss treated him like a son and enabled him. Also, imagine that the big boss let the former manager still have keys to the building and access to the safe.
Finally, imagine overhearing the former manager talking to the big boss and referring to me as a “scapegoat.”
The next time money went missing from the safe, I documented everything, turned in my evidence to the corporate office, and quit my job.
It was the only thing I could do.
One would think with the kind of experience I had, someone would hire me. But, I was young and not a high school graduate, and in a depressed economy where there were more educated workers than jobs, I got left out in the cold.
But I lucked out and someone offered me a free apartment in exchange for being the rental manager/maintenance man, so we didn’t have to worry about rent, and the apartment was much more pleasant and suitable. But I still needed to find something that paid actual money, because Linda and I had to eat.
So what did I do? I sold vacuum cleaners door-to-door. If you know anything about the companies that sell these overpriced hunks of metal, you know that very few people want to spend $1200 on a vacuum, even if they can make small monthly payments.
I sold a total of one my first week, and that was it, but kept cold-calling because that is what the senior salesmen said to do if I wanted to make money.
Then Linda left her job at the cafe, and our food and money quickly ran out.
With the last bit of money, we went and bought a substantial bag of potatoes.
I was severely depressed and anxious about having no money, and it was putting massive stress on our relationship. It also didn’t help that my duties as a rental manager were clashing with my ability to go out and sell vacuums.
And you can imagine what eating only potatoes for two weeks will do to a person.
The depression got so bad that all I did was sleep. I ignored Linda, I didn’t answer the phone when tenants called, and I didn’t sell any more vacuum cleaners.
During the time when I was in a funk but had managed to go out of town for work to cold-call a new territory, Linda started to bleed, and her friend Tina took her to the Indian hospital.
She lost the baby, and even if we still didn’t know how we were going to tell people, it was a huge blow to our relationship and the last straw that broke the camel’s back.
I was a failure again. We had no money, no food, and I was getting fired as the rental manager because all I did was sleep, so we had no place to live either.
I finally called my parents and asked them to send me money. All they had was $100, but it was enough for us to buy more food than we had seen for a long time.
When I was talking to my dad, he suggested I move back to Tucson, where he could help me. He could get me a temporary place to live until I got a job and got back on my feet, and it sounded like the perfect solution.
Things were not going great with Linda, and when I said I was fickle, I meant it. I listened to my friend Paul when he tried to convince me not to stay with Linda. Again, he never liked her. For a while, I listened, and I told Linda I was leaving and would not be taking her with me.
What an asshole, right? But, in my defense, I was young and having second thoughts about getting married when I had my whole life ahead of me. My life was also in turmoil and I was only thinking of myself.
As for Linda, she had no place to go but back with her brother to the reservation. She couldn’t move back in with her estranged mother. I felt terrible that she would be starting from zero, but, I was firm and told her I couldn’t take her. At least for a while.
As time got closer to leave, the more I didn’t want to go without her, so we started talking about what it would be like to go together. We even talked to my parents about it and they thought it was a good idea, but mentioned that if we wanted to stay together in the same house with them, we would have to be married.
In all the excitement, we forgot that part.
In the end, I changed my mind, apologized, and we started making serious plans for both of us to head to Tucson. And when my friend said I was making a mistake, I ignored him. | https://jasonjamesweiland.medium.com/the-first-few-years-of-my-marriage-were-a-failure-e6892ce0c396 | ['Jason Weiland'] | 2020-08-11 17:06:05.500000+00:00 | ['Mental Health', 'Family', 'Relationships', 'Ninjabyob', 'Lifestyle'] |
When you don’t know what to write, just write | I’m not sure how many times this year I’ve had to say this to myself. I also don’t know how many times this year I’ve written similar pieces that reflect this on going struggle, always turning out in the same way. Just sit and write. Write about writing.
Turns out it works.
It works, because no matter how many grand ideas to write about I might have, ultimately, all I want to do is write, and when the first alternative is not available or does not seem to come out naturally, the only thing left to do is the one that can get you closer to it.
Practice. Repetition. Discipline. Time. Focus.
I know that the more I write, the easier it becomes. I know because I’ve done it so much, so often, that I was able to see my mind and skills changing for the better. They get sharper; ideas flow regularly. Everything is an opportunity for a new post.
It’s that repetition that I’m trying to achieve and struggling to get back to. It’s making the time and regaining the discipline to do it every day. To make it a daily practice again.
Creativity is a skill, one that thrives on repetition, discipline, and giving yourself the time to think and create. That’s why, the more you do it, the more it will flourish. It becomes natural. The default way of thinking and acting.
But it also requires focus. The focus of knowing, at least roughly, what you want to do. Where you want to go. Otherwise you might find yourself in the same spiral as me. You know you want to do it, just not the direction.
Yes, any writing is better than no writing at all, but in the end, we all want to perceive a sense of progress: not only the improvement of the craft, but the notion of movement toward something. That’s why focus, or at least a rough notion of it, is so important.
Because creativity is not different than sports, work or life. One thing (the craft) is not enough and will only take you so far. It is a matter of all of the above, in similar or different measure, depending who you are and what you want out of it, but ultimately, where you want to be. Why are you doing it?
Start with why?

Source: https://medium.com/thoughts-on-the-go-journal/when-you-dont-know-what-to-write-just-write-2a8ef874b5df (Joseph Emmi, 2019-10-31; tags: Personal, Discipline, Habits, Writing, Journal)
Index Your App Content With Core Spotlight | Define the Item Attributes — CSSearchableItemAttributeSet
This is the first and most important step to index your content: define its attributes.
We create a CSSearchableItemAttributeSet instance and then start filling its properties.
When we initialize the instance, we need to pass its content type, which is a UTI (uniform type identifier) supplied as a string.
After instantiating the attributes set, you can start filling its properties, such as subject, title, description, creator kind, and many, many more.
There are several extensions to this class, such as:
CSSearchableItemAttributeSet_Places
CSSearchableItemAttributeSet_Events
CSSearchableItemAttributeSet_Documents
and more.
They hold different sets of attributes for the object.
Try to fill in as many attributes as you can: rich attribute sets translate to better search results and a better user experience. Also, the search algorithm Apple uses prioritizes richer results.
Some of the attributes are related to phone numbers, addresses, and locations.
This means that, whenever the search results contain items with those attributes, a call/navigate action button will appear next to the search item result and will let your user get a quick response to the search result.

Source: https://medium.com/better-programming/index-your-app-content-with-core-spotlight-def31cbb7736 (Avi Tsadok, 2019-08-18; tags: Development, Search, Spotlight, Programming, iOS)
The Data Painter and the Data Poet | In the digital society, flows of data have to be transformed continuously into flows of information along the way from the sensors to our eyes. Thus the filed or data visualization grew as fast as the need for optimal understanding of that data.
As a reaction to the ubiquity of these data visualizations and their pervasive rationality, a new form of art called data art is appearing. Its base material is not the pigment or the word but numbers, lists or files.
Yet data, like words, have a special status. They can have a significance, and this significance is a bridge to reality. But unlike the word, the datum disappears in the sea of data. While there are about 505,000 different words in the English language, every human being creates on average 1.5 MB of data a second. As meaningful as it can be, each piece of data’s meaning is erased by billions of billions of other pieces of data. Just like the poets who eventually invented symbolism, data artists realize that their base material is significant and that this significance fades away.
Data look more like pigments than words in the way they are represented. Each piece of it is rarely discernible from the whole. The bars of a bar chart are drawn just like a painter makes brush strokes on a canvas.
Therefore data artists face a duality in their material. They have to choose how much weight to give to the significance of data. If they decide to ignore it, they get closer to the painter. And if they embrace it and question it, they get closer to the poet. Just like the symbolists, data artists place themselves in-between their freedom of creation and the natural fading of the material.
At one end of the spectrum, data painters create sensorial impressions disconnected from the meaning of the data they use. They emancipate themselves from the reality that lies in the material and make room for the expression of their artistic freedom. Nevertheless, the data and its meaning still impregnate their work. Their intention is elsewhere, but their creation is still the result of the abstraction of a real phenomenon. Refik Anadol and his Data Paintings, or Kirell Benzi, are good examples of this way of creating. They are still sometimes unjustly accused of using data as a trendy marketing strategy.
At the other end of the spectrum are the data poets. Their art subject is our relationship to data and their meshing with reality. They give up a bit of freedom in the creative process, and accept limitations imposed by the meaningfulness of their material. They “show” the data and invite to a discussion upon their shapes. For example, data poets can address topics such as the nature of data, privacy in the digital communities, manipulation of representations, data literacy or the gap between data and reality.
Regardless of the position the data artists choose on the spectrum, they all seem to share the same doubt of the scientific codes established by the data visualization and data science communities.

Source: https://medium.com/nightingale/the-data-painter-and-the-data-poet-e43a7404ca55 (Guillaume Meigniez, 2020-07-09; tags: Data Humanism, Data Visualization, Data Art, Design, Art)
If You Want to Use Short Form, This is How You Can Go Viral | If You Want to Use Short Form, This is How You Can Go Viral
This one additional step could see your article go from good to great.
Photo by Andrea Piacquadio from Pexels
Medium decided to introduce short form recently and writers are going crazy over its use.
There are pros and cons to short form as well as adding it to your writing arsenal. The new change is trying to mimic Twitter. On Twitter, the limit is 280 characters. This is equivalent to what a person can read in a minute.
These days people have short attention spans as they flip from one item to another. Look at users of Instagram, TikTok, and other social media platforms where you flip through images you want to see.
The length of your short-form article
Numbers will vary on the length of your short-form article. If you add images, then you need to use fewer words.
Then you have to consider some people read faster than others. In general, people read approximately 250 words per minute according to Wikipedia.
Use images
Personally, I have used short form but not like everyone else before short form even came out. I like my version of the short form. In articles I have written, I always include an image. Many people are visual learners.
From my experience in the Army, I used PowerPoint. Many people in the Army are visual learners. The images you use for your audience say a lot. Take the time to pick out the best image for your short-form article.
Inc. published an article identifying that 65 percent of people are visual learners.
If your reader is a visual learner, you could be missing out on a lot of reads. The reader could decide to pass your article if you do not include an image.
Then make sure the image fits and attracts people to your short-form article. Spending a few more minutes will pay off for you and your reader.
Actions that will help you publish viral short-form articles

Source: https://medium.com/2-minute-madness/if-you-want-to-use-short-form-this-is-how-you-can-go-viral-21ff0766b4ed (Tom Handy, 2020-12-28; tags: Writing Tips, Self Improvement, Life Lessons, Writer, Writing)
A hospital tracker app using Node.js and Python | What we will be using
A server to serve the pages, specifically Node.js. We won’t use express.js.
Python’s BeautifulSoup library for parsing HTML.
Regex to extract meaningful information from a huge jumble of HTML.
Custom code to serve as a simplistic templating engine.
Basic HTML and CSS to create a frontend.
An overview of the different components is as shown
Part 1. Obtaining the HTML pages
Photo from Unsplash by Florian Olivo
We first create a JSON that’ll hold the number of current active cases per state. We could get the data directly using a suitable API, or we could pick a website that lists this data and fetch its HTML, so as to practice HTML parsing. To fetch the HTML code, we could use Python’s urllib.request or requests modules. But what if the page content gets injected by JavaScript (a website built with React, for instance) after the page loads? In such cases you’ll see the page fine in a browser, but you won’t get the injected content using either of those two modules.
Here’s where Selenium comes in, specifically WebDriver.
Browser vendors provides some endpoints that can be used to automate browser testing. These are used by WebDriver to get the page via the browser itself and returns it for our usage. You can read more about it here. To set up the web driver, we just use the following imports and statements
from selenium import webdriver
from configs import config as cfg

driver = webdriver.Chrome(executable_path=cfg.secret_info["executable_path_chrome_driver"])
driver.get("URL TO REQUIRED SITE")
The webdriver for Chrome needs to be downloaded and the location of the file is specified as a value of the key executable_path_chrome_driver in our configs file. Next, we issue a GET request to whatever is the URL of the site we are trying to access. The required HTML, inline CSS and scripts are obtained as
dummyContent=driver.page_source
However, things aren’t always so simple or goes exactly as they say in the docs or the tutorials. Many times the above code returned nothing from the site. If you’re an experienced developer, you’ve most likely worked out the bug by now but I was naive and took longer. Remember the page we are obtaining is injected by JavaScript. The problem was Selenium returned as soon as the root div was added. But that div doesn’t have the active cases information, that information is in a table which hasn’t been added yet! We need to make Selenium wait till the required element is present.
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

try:
    element_present = EC.presence_of_element_located((By.CSS_SELECTOR, '.state-name'))
    WebDriverWait(driver, 10).until(element_present)
    # As the page is injected by JS, we have to wait for the whole thing to load. Otherwise
    # Selenium returns once div id=root is loaded but before the rest of the elements exist.
    # The state-name class on the table contents is safe to wait on and guarantees the page
    # has indeed loaded.
    print("APP CLASS ELEMENT FOUND")
except TimeoutException:
    print("Timed out waiting for page to load")
Check out the examples for making Selenium wait here. The above code introduces a wait time of 10 seconds.
We have a horrid looking mess now on our hands that we are going to parse using the BeautifulSoup library of Python.
from bs4 import BeautifulSoup
import re

soup = BeautifulSoup(dummyContent, 'lxml')
step1 = soup.find_all("div", class_="state-name")
all_rows = soup.find_all("div", class_="row")
We use the library’s inbuilt methods to get all divs with a class of state-name and all divs with class of row. These values will change when the website structure is updated. Careful examination of the structure will guide us as to what regular expressions to follow to get each piece of information. For example, state names such as:
example = '<div class="state-name">STATENAME</div>'
The regex to extract that is:
stateName=re.findall(r'>[a-zA-Z\s]+<', example)
This gives us >STATENAME< (as the only element of the list that findall returns); now just remove the first and last characters of that match: stateName[0][1:-1]. Similarly, we can obtain the active cases from the HTML code. Finally, we create an array of Python dictionaries where each dictionary has the schema:
{
    "state": ...,
    "activeCases": ...,
    "totalBeds": ...,
    "bedsLeft": ...
}
The code for this:
for i in range(0, len(activeCases) - 1):
    data_dict_term = {}
    data_dict_term["state"] = states[i][0][1:-1]
    data_dict_term["activeCases"] = activeCases[i]
    if beds[data_dict_term["state"]] == "-1":
        data_dict_term["totalBeds"] = "Not known"
        data_dict_term["bedsLeft"] = "Not known"
    else:
        data_dict_term["totalBeds"] = beds[data_dict_term["state"]]
        data_dict_term["bedsLeft"] = int(data_dict_term["totalBeds"]) - int(data_dict_term["activeCases"])
    data_dict_array.append(data_dict_term)
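As a self-contained sanity check of the same pattern (the sample HTML and the numbers below are made up for illustration; the real page is fetched via Selenium), the regex extraction can be exercised without a browser at all:

```python
import re

# Made-up sample of the structure the scraper parses (illustrative only)
sample = ('<div class="state-name">Kerala</div><div class="state-name">Goa</div>'
          '<div class="total">101</div><div class="total">7</div>')

# Same idea as above: grab the text between > and < for each kind of div,
# then strip the surrounding angle brackets from each match
states = [m[1:-1] for m in re.findall(r'>[a-zA-Z\s]+<', sample)]
cases = [int(m[1:-1]) for m in re.findall(r'>\d+<', sample)]

# Zip the two lists into the dictionary schema used by the scraper
records = [{"state": s, "activeCases": c} for s, c in zip(states, cases)]
print(records)  # [{'state': 'Kerala', 'activeCases': 101}, {'state': 'Goa', 'activeCases': 7}]
```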
The number of hospital beds per state was manually obtained from https://www.kaggle.com/sudalairajkumar/covid19-in-india which in turn have got the data from https://pib.gov.in/PressReleasePage.aspx?PRID=1539877. Values are subject to change and should be treated as an approximation. Now the only thing we need to do is to save the results for later inspection.
import json

# print("Length of datadict:", len(data_dict_array))
json_string = json.dumps(data_dict_array)
print(json_string)  # Printing the data is needed so that JavaScript can access it via Newprocess.stdout.on

with open('./data/dataFromScraping.json', 'w+') as f:
    json.dump(data_dict_array, f)
Why we print it will become apparent later. When we execute the script we just assembled, the required JSON with the above schema is created and stored.
Part 2. Setting up the server
Photo from Unsplash by Alfred
What I wanted is a website that performs the parsing and shows the output within the webpage itself. So let’s create a server for this in Node.js. Of course, I could have just used Django or some other pure Python framework, but since I had already done that in a similar side project, I wanted to get Node.js and Python to work together. Setting up a Node.js server using express.js is super easy. Without express you need to write a lot more verbose code, but it’s good practice for beginners like me.
const http = require('http');
const url = require('url');
const path = require('path');
const fs = require('fs');

const mimeTypes = {
    "html": "text/html",
    "jpeg": "image/jpeg",
    "jpg": "image/jpg",
    "png": "image/png",
    "js": "text/javascript",
    "css": "text/css",
    "json": "text/json",
    "webp": "image/webp"
};
The above is all the imports we need along with the MIME types. These are the types of files that would be requested to the server so we need to specify the extensions appropriately. When the site asks the server to load image files for instance, we need to set Content-type field to image/png or image/jpg or image/webp.
http.createServer(function(req, res) {
    try {
        var uri = url.parse(req.url).pathname;
        // url.parse takes a URL as argument and returns an object; each part of the URL
        // is a property of the returned object, like uri.host, uri.pathname etc.
        var fileName = path.join(process.cwd(), decodeURI(uri));
        console.log("File name ", fileName);
        // Use decodeURI as unescape is deprecated (source: Mozilla docs); it converts the
        // string taking escape sequences into account, e.g. unescape('%u0107') becomes "ć"
        console.log("path.extname", path.extname(fileName).split("."));  // Array like ['', 'html']
        var mimeType = mimeTypes[path.extname(fileName).split(".")[1]];
        console.log(mimeType);

        if (uri == '/main.html') {
            // do main stuff here
        } else {
            // Execute this for requests for all other files, such as image files.
            res.writeHead(200, {'Content-type': mimeType});
            var fileStream = fs.createReadStream(fileName);
            fileStream.pipe(res);
        }
    }
    catch (Exception) {}
}).listen(1337);
The above code is self-explanatory. For any requests for files other than main.html, the MIME type is obtained from the file extension and the file stream is piped to the response. When main.html is requested, the server uses the child_process module to spawn a new process that runs the Python script creating the JSON. The script’s print statement transfers the output back to Node.js via the stdout.on('data', callback) function, where it arrives as a Buffer and is converted to a String.
Newprocess.stdout.on('data', function(data_from_python) {
    // Fires on receiving output from Python; data_from_python is whatever Python prints to
    // the console during processing, so if there are prints due to exceptions, they'll show
    // up here and cause a JSON parse error later on
    insert_data_in_template = data_from_python.toString();  // Data from Python: an array of JS objects
    fs.readFile('main.html', 'utf-8', (err, data) => {
        final_output = utils.augment_template(data, insert_data_in_template);  // Custom code to inject the data into main.html
        res.writeHead(200, {'Content-type': 'text/html'});
        res.write(final_output);
        return res.end();
        // res.end() signals to the server that all headers and body have been sent; the
        // server must consider the message complete. res.end() MUST BE CALLED at the end
        // of each response.
    });
});
We create an HTML file that will be injected with the data from Python to create the required output. Since this is not a frontend project and my UI skills are pretty basic, the following is all I could come up with
<!DOCTYPE html>
<html>
<head>
    <link rel="stylesheet" href="./styles/stylesForMain.css">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
    <h1 style="color: white;">COVID 19 HOSPITAL BED TRACKER</h1>
    <div style="position: relative; top: 20vh;">
        <div class="flex-container">
            @<div class="flex-item card-layout">
                <div id="statename"><h2>State: { </h2></div>
                <div id="text-container">
                    <p>Active COVID19 cases: { </p>
                    <p>Total hospital beds: { </p>
                    <p>Remaining beds: { </p>
                </div>
            </div>@
        </div>
    </div>
</body>
</html>
A simple card layout, one card for each state. Generally, injecting data dynamically into HTML (what is called templating) is easily done using ejs and express. However, it’s a no-express zone here, so we write our own custom templating code in JavaScript. The HTML is read like any other file using fs.readFile. The entire file content is placed in memory and the augment_template function is called. It first extracts the template format, which I define as “everything between the first @ and the next @”. Next, { is defined as “the place where data is to be put”. Note that all of these are my own definitions. The data we got from Python is converted to an array of JSON objects. Each term of the array creates one HTML card: each value of each object is injected at one of the 4 { slots to make 1 card. This is achieved by simple substring manipulations, as shown below
augment_template = (data, insert_data_in_template) => {
    /*
    Input: data, the HTML code as one large string, and insert_data_in_template, a string
    containing the data to be inserted.
    Output: the HTML code with each { in the template replaced by the data that belongs
    there for dynamic display.
    */
    // Step 1: extract the template, which is everything between @ and @ in main.html.
    // The template is just the div with card-layout that is going to be repeated.
    let step1 = get_start_and_end_indices(data, "@");
    let extracted_template = data.substring(step1[0], step1[1]);
    let extracted_template_copy = extracted_template;
    // Keep a copy of the template format, as the template is overwritten with new data
    // in each iteration of the for loop.
    let data_to_be_output = JSON.parse(insert_data_in_template);  // The data was passed as a string
    let part_one = data.substring(0, step1[0] - 1);  // All HTML code before the template enclosed in @...@
    let part_two = data.substring(step1[1] + 1);     // All HTML code after the closing @
    let final_output = "";

    for (item in data_to_be_output) {
        for (key in data_to_be_output[item]) {
            step2 = get_start_and_end_indices(extracted_template, "{");
            // The opening curly brace marks the spot where data is to be inserted.
            // The opening { must have a space before and after it.
            replace_brackets_with_this = data_to_be_output[item][key];
            extracted_template = extracted_template.substring(0, step2[0] - 1)
                + replace_brackets_with_this
                + extracted_template.substring(step2[0] + 1);
        }
        final_output = final_output + extracted_template;
        extracted_template = extracted_template_copy;
    }

    final_output = part_one + final_output + part_two;
    return final_output;
}

get_start_and_end_indices = (data, symbol) => {
    /*
    Input: data, a string, and symbol, whose index is to be found.
    Output: a list of 2 items; item[0] is the first index at which symbol is present + 1,
    item[1] is the last index at which symbol is present. symbol is usually @ or {.
    */
    return [data.indexOf(symbol) + 1, data.lastIndexOf(symbol)];
}

module.exports = {
    "augment_template": augment_template,
}
The rules: { must have one space before and after it, and there can’t be any @ in the HTML code itself (as in keyframes or media queries). Hence I had to put all of that in external CSS files.
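The core substitution can be tried in isolation. This toy one-slot version (the template and value are made up) mirrors the substring arithmetic of augment_template, splicing the value in place of the { and the space that follows it:

```javascript
const template = '<p>State: { </p>';  // one { slot, with a space before and after it
const value = 'Kerala';

const idx = template.indexOf('{');
// Keep everything before the {, splice in the value, then skip the { and its trailing space
const filled = template.slice(0, idx) + value + template.slice(idx + 2);
console.log(filled); // <p>State: Kerala</p>
```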
Putting it all together
We have reached a point now where we got our Python script for parsing, a server without express.js and our own custom templating (sort of, at least). Here’s what my final application looked like
Here’s how the final application looks
We hence know that there are N active cases and M total hospital beds in a state, so at worst M-N beds would be left for others. Note that some of those M-N beds are also occupied by other patients, so the actual number of free hospital beds has M-N as an upper bound only.
Conclusion
If you’ve read this far, you’ve experienced the journey of creating a simple hospital bed tracker app using Node.js and Python. Thank you so much for sparing a few minutes of your busy day to read this. I hope you enjoyed the article; feel free to point out any bugs which may have crept in despite my best efforts, share feedback, best practices or anything else that you feel is appropriate for the occasion. Take care, stay safe and happy learning!

Source: https://medium.com/python-in-plain-english/a-hospital-tracker-app-using-nodejs-and-python-3321bca89b9e (Suchandra Datta, 2020-12-28; tags: JavaScript, Html5, Python, Web Development, Nodejs)
What Warhol’s cruel lens tells us about ourselves | The Screen Test of Edie Sedgwick, 1965
I’ll Be Your Mirror
What can art tell us about ourselves? I don’t mean “society” or “culture” or “civilisation”, I mean us: you and I. Artworks can provoke emotional responses, they can make us think too, but can they say much about how we behave towards each other?
Andy Warhol is perhaps unique among artists in the way he took as his subject the very arena of our shared longings and loathings: stardom. You and I probably have very little in common, except that we know and think about — and sometimes get emotional about — famous people (that’s what makes them famous).
While Warhol’s screen print paintings gave us a representation of stardom — the Elvises, Marilyns and even the Mona Lisas — it’s his films that examine the mechanics of stardom, turning it inside out for his patient and non-squeamish audience to gaze into. In doing so, he confronted us with some uncomfortable truths about how we treat those who have — wittingly or unwittingly — found a place on a public pedestal.
Of the many Warholisms, the one that rings most true in his film output is “the idea of waiting for something makes it more exciting”, since there’s never really any action to come. Warhol’s films simply… happen. But they happen in a provocative, often menacing, way.
Warhol’s filmography is longer than most people imagine, beginning in 1963 when the artist purchased a Swiss Bolex camera. Sleep, the first of the typically Warholian mono-action epics, screens his lover, John Giorno, slumbering naked in Warhol’s apartment. Sleep was followed by a number of films capturing the mundane activities, such as haircuts and eating, as well as a number of three-minute long close-ups of couples kissing in the series Kiss from 1963–65.
Sex was explored too, notoriously and obliquely, with Hand Job (1964) and Blow Job (1964). In these two movies the camera is trained on the face of the ‘star’ of the movie (respectively John Giorno and DeVeren Bookwalter) as he is taken to climax.
Screen Tests
Perhaps Warhol’s most famous films are the series of “Screen Tests”, of which there are 472 films (over 500 were produced, but many were lost). Inspired by criminal mug shots (that Warhol used in his Thirteen Most Wanted Men mural for the 1964 World’s Fair), these films depicted celebrities, Warhol’s “superstars” and other hangers-on that visited his Factory studio.
The films were movie mugshots, in which the subject was told to hold as still as possible for the three minutes it took to exhaust the 100-feet of film in the Bolex camera’s magazine. The backdrops tended to be stark, and the subject sometimes harshly lit.
The first screen tests were compiled into two series: The Thirteen Most Beautiful Women and The Thirteen Most Beautiful Men (alluding, of course, to the criminal mugshots) but watching the movies you are immediately struck by the idea that the evocation of “beauty” is a ruse.
Much of the literature written around the screen tests focuses on the uncanny way the videos seem to convey so much character from such a simple set up. The paradox of photographic portraiture is conveying the depth of personality from the veneer of outward appearance.
While most photography employs allegory — signs and symbolism to tell the “inner story” of outward appearance — Warhol’s brilliance was to use movement; however minuscule that movement may be, it gives non-verbal expression to thought.
There’s more to it than that. These are, after all, called screen tests. Until 1965 Warhol referred to them as “stillies”, but later called them screen tests as the body of the series grew ever larger. The rename suggests that there’s a trial at stake, a challenge. These are not screen tests in the traditional sense, but certainly a kind of trial.
Warhol’s camera is a little cruel, a little invasive. The artist used different lighting effects for the tests: some are frontally lit, others harshly from one side, making the other side of their head almost invisible like a half-moon. Sometimes harsh lights from either side of the head cast a shadow like a cleave down the middle of the face.
In her screen test Ann Buchanan holds perfectly still (probably instructed to do so) and does not blink. Tears eventually well and fall from her eyes. Remarkably she does not blink for the full duration of the film.
The brief to the subject was to stay still and in shot while the camera whirs away. In the earliest “stillies”, subjects were asked not to blink, to stay perfectly still and very often tears well in their dry eyes. In one extraordinary screen test, Ann Buchanan holds perfectly still while a tear over-brims, rolls down her face and drips from her jaw.
These are ordeals, not constructed – and sitter approved – portraits. What we are witnessing is the differing ways that subjects deal with the situation they have been put in. Bob Dylan is defiant; Salvador Dali mugs to the camera (Warhol humours him by inverting the film); Lou Reed is evasive behind sunglasses and drinks a Coke; Billy Name gives nothing away; Nico reads — and then plays with — a magazine. Edie Sedgwick’s screen test is painfully affecting: she is as vulnerable to the camera as she is smitten by it.
Featureless Films
By 1965 Warhol’s confidence had grown and he had begun to make experimental ‘feature films’ co-directed with Paul Morrissey, two of the most notorious being Vinyl and Chelsea Girls.
Vinyl, 1965. Gerard Melanga is at the front. Edie Sedgwick, making her Warhol debut, is on the right.
Vinyl is a barely-watchable pre-Kubrick interpretation of Burgess’s A Clockwork Orange. In a starkly minimalist mise-en-scène, Factory superstars including Sedgwick and Gerard Melanga shambolically play out the tale of ultra-violence and state control in the 70 minute film peppered with numbers by The Kinks, The Rolling Stones and Martha and the Vandellas.
Vinyl was the first of 12 Warhol films that Sedgwick appeared in. She had met Warhol at a party in 1964 and was snapped up quickly as a muse. Sedgwick was everything Warhol wasn’t: a young (21 at the time) American “aristocrat” who could trace her lineage back to the pilgrims (William Ellery, signatory to the Declaration of Independence, was also an ancestor); she was beautiful, extroverted and immensely privileged in wealth and social cachet.
Sedgwick was also deeply troubled. Coming from a notoriously dysfunctional family, she had been tormented by mental health issues and was interned at the Silver Hill psychiatric institution in Connecticut some years before she met Warhol. Sedgwick perhaps thought Warhol was her ticket to fame and success. That was partly right, but Warhol had other ideas and the clues are already apparent in Vinyl.
The film shares a similar aesthetic as the fixed-lens Factory screen tests; the ‘actors’ bodies are cooped into a tight frame and shallow space (presumably in some corner of the Factory) and the camera stares down on them cruelly at a steep angle.
There’s something sadistic about the film above and beyond the erotically-charged “torture” of Malenga: the cold Warholian objectivity is palpable throughout, and the crude acting is almost abject. With no dialogue or real part to play, Sedgwick mostly looks on smoking, she occasionally dances on her seat. She appears in the film simply to appear.
You get the feeling watching this film among the screen tests that the Factory was more a kind of zoo, that the superstars were not just there to be looked at or admired, but to be examined — studied, even.
Chelsea Girls, 1966
Chelsea Girls (1966) takes place in the centre of New York City’s creative universe — the Chelsea Hotel. The film is a kind of drama-documentary that follows the lives of a number of the young ‘superstars’ that clustered around the Factory scene. Over six hours of footage is divided by split screen, giving us a diptych binary of the ‘white’ (innocent) and ‘black’ (dark) aspects of the squalid lives of the hotel’s residents.
The action takes place in front of a rapidly panning and zooming lens, giving the film a cool and detached aesthetic. Clocking in at over three hours, the movie is a challenge of patience that at moments (with emphasis on ‘moments’) explodes with vainglorious brilliance and sordid shock-tactic. The notable omission from the film is Sedgwick, who had demanded that her scenes were cut when she fell out of favour with Warhol.
Unlike the screen tests, both feature films have divided the critics since their release, and it’s unlikely they will ever be universally appreciated. The films are challenging to watch in their entirety for all but the most fanatical Factory scene devotees. The late Roger Ebert observed that Chelsea Girls employed “perversion and sensation like chili sauce to disguise the flavor of the meal […] Warhol has nothing to say, and no technique to say it.”
Whatever you may think of each film, they’ve made their mark more as a cultural statement, an epochal happening. The screen tests, which have fared well with the critics, particularly epitomise the 60s ‘cool’: a detached and laconic indifference that is still the template of stylish self-possessiveness. But there’s also a cruelty in these studies that we can still see reflected in self-destructive celebrities and the intrusive lenses through which we view them. Warhol reveals “cool” to be a veneer, a brittle shell that could be cracked.
Outer and Inner Space
The Screen Tests perhaps culminate in Outer and Inner Space, released in 1966. This film also utilises Warhol’s split screen technique (employed for Chelsea Girls) and is probably the apogee of his film output, a synthesis of the two strands of “stillies” and drama-documentary.
Sedgwick is filmed solo beside a previous film of herself, and that pairing is doubled again by the split screen. The compounded effect is like an echo chamber and a hall of mirrors at the same time, with Sedgwick seemingly talking to her “self”. Her integrated and unified sense of self — her ego — is stretched to the limit. Where, this movie seems to ask, does Edie Sedgwick begin or end?
Edie Sedgwick in Warhol’s Outer and Inner Space (1966)
What is perhaps one of the most notable legacies of the 1960s is the way the internal and ostensibly private “self” has become externalised, studied and contested. The sexual, spiritual, psychological and social revolutions of the 60s allowed us to be “our-selves” but exposed those selves to admonishment and invasive scrutiny.
In Beauty №2 from 1965, Sedgwick cavorts half naked with a young handsome man in bed while an unseen voyeur (Chuck Wein), off-screen in the shadows, asks her uncomfortable questions and taunts her. His questions and comments are hurtful and deeply personal, alluding to her mental health and family problems. She eventually loses her temper with Wein and throws a glass ashtray at him.
Warhol famously said, “I never fall apart because I never fall together.” He was notoriously elusive and private, often to the frustration of those who believed they were close to him. In many interviews he is monosyllabic. Appearing on the Merv Griffin Show with Sedgwick in 1965, Warhol, looking painfully shy, does not directly talk to the interviewer but whispers his answers into Sedgwick’s ear. The more extroverted Sedgwick revels in her role as interpreter.
Warhol fiercely guarded his inner self from scrutiny. Little was known of his private life until after his death. Instead he mirrored society’s perverse voyeurism and invasive scrutiny into the lives of the famous and infamous with his treatment of his superstar subjects. His allusion to never “falling together” perhaps betrays a reluctance to be a star who can reveal a coherent self to the world, especially when so many stars were falling apart around him.
None of the “superstars” actually became famous outside of the constructed microcosm of the Factory scene. Only Sedgwick — who was described by Vogue as “girl of the year” and a “Youthquaker” in 1965 — came close to real stardom, though not through merit so much as through a public fascination with her troubled personal life.
Warhol was of course genuinely famous, and remained the gatekeeper who simultaneously conferred a niche fame on the superstars while holding mainstream fame back. It was Sedgwick’s frustration with this state of affairs that led to her break with Warhol and her life spiraling out of control until her death at the age of 28 in 1971.
Celebrity-Cruelty Complex
It’s easy to draw parallels between the tragic case of Sedgwick and other celebrities who found themselves unwillingly subject to perverse levels of public scrutiny that will often lead to their breakdown or deaths. Warhol’s filmography is a microcosm of society’s celebrity-cruelty complex in which famous people are submitted to extreme tests of their self-possession.
This kind of cruelty took the form of invasive publicity – most notoriously the paparazzi – but has since been accepted, made overt and gamified for public consumption.
Princess Diana lived most of her adult life being photographed in the street. The public “appetite” for images of her gave rise to a breed of ruthless commercial street photographers known as the “paparazzi”. In 1997 she died in a car crash while being pursued by “paparazzi” photographers.
Reality TV shows like Big Brother and I’m a Celebrity Get Me Out of Here make a game of pushing celebrities and wanna-be celebrities to their mental and sometimes physical limits. In the latter, Room 101-like trials are devised to terrify and repulse celebrities (tasks include eating insects or having snakes placed upon them) for public enjoyment.
What is most extraordinary about this phenomenon is that people wittingly play along with it, and not just for money. People seem to understand that fame is a test, a leap of faith into the pit of your own self: you may fall together, you may fall apart.
Warhol essentially reflected the two great themes of his time: the mass consumption of goods and the industrialisation of celebrity. From the 1950s, industrial serial production was put to the service not of grand public works but of the enjoyment of individuals. The “worth” of celebrities is measured by the quantity of their images in the media.
Warhol replicated that seriality, that repetition: Here are 100 soup cans; here are 100 Marilyns. But he also replicated the indifference of the machine, of the bottom line. ‘Good business is the best art,’ he said. Warhol’s superstars were commodities for public consumption just like celebrities are. We swallow them up, sometimes we spit them out.
Thank you for reading. | https://medium.com/the-sophist/warhols-featureless-films-89557923bbc | ['Steven Gambardella'] | 2018-09-27 19:34:02.823000+00:00 | ['Movies', 'Creativity', 'Art', 'Culture', 'History'] |
For the Love of Libraries | The social and political climate in this country frightens me. It does not feel like people remember how to converse or dialogue. The abomination of Nazi book-burning haunts the edges of my mind and repugnance festers deep in my guts when I read Fahrenheit 451. I want to blame the current climate on a lack of reading that keeps minds small.
Books have always been a lifeline for me and I cannot imagine why they would not be for anyone else.
When my stepson came to live with us at 6 years old, he was not much of a reader. But he saw me reading every chance I got. His curiosity grew. We read together, talked about books, and eventually had mom-son library dates. The first time I took him to the library, I remember his eyes getting big, followed by a little “Woah!” It turned into a weekly excursion, with him reminding me on occasion about library day.
Realms of discovery
Stacks of adventure, shelves of things unknown, pages of curiosities just waiting for discovery. Rifling through books and new topics is like treasure hunting without a map: you never know what you’ll find. Browsing a library can introduce kids to wonders they never knew to ask about and widen their horizons.
Reading inspires exploration, curiosity, and critical thinking. When I was a kid, I spent my summers racking up points with the summer reading club, exploring the stacks for the next adventure. I would read about one thing, then wonder about something else and go back to find out more about that thing. As I grew older, I felt confident there was a book for whatever it was I wanted to know.
Reading takes you to places you might never experience in real life. I grew up in a small Midwest town with no diversity whatsoever. Part of the excitement about reading was that I could explore other parts of the country or exotic places to get a feel for what life would be like somewhere else. I met characters who thought differently than I did, who experienced life in ways that contrasted with my own and followed divergent paths.
These are the experiences kids need to develop inquiring, curious minds that seek understanding. The exploration of ideas gets kids asking questions and thinking about the why of things. Diverse thinking creates a world that looks for solutions instead of adversaries. Narrow- or small-mindedness leads to the kind of behavior that supports the horrors that squash ideas and leads to the burning of books out of fear of those ideas.
Free library resources
The best part about libraries is they are FREE! Not only are printed books in abundance, but most libraries now have e-books available with your library card. The libraries in my area use Overdrive and Libby, with most of the catalog available electronically as print or audiobook.
For a real trip, check out the Internet Archive, which not only offers free book borrowing but also contains millions of audio recordings, videos, and films. All free. Which means ANYONE can access it!
Of course, there are e-book and audiobook resources through Kindle, iBooks, and Scribd, often with free books available.
With all this knowledge available, why aren’t we all reading like crazy and becoming super-geniuses?
Intentional reading
When I read Natalia Forrest’s tips for reading more, the first thing I thought of was beating her 2019 total of books read. I tapped into my inner summer-reading-club child because I only read 107. Now I have a goal of 130 for 2020.
The second thing that flooded my brain was a battle cry urging parents to engage their kids with their local library!
Send them to the library after school.
Set a regular library excursion date complete with treats afterward.
Participate in the programs offered all year long.
Start a reading contest at home if they don’t have one at the library.
Don’t allow screen time until they read a chapter or two.
Listen to audiobooks on family trips.
Invite your kids to read to you or have a family reading night where you take turns reading to each other.
Engage in a family drama where you each take parts of a play to read and act out.
Find books about the movies, television, or characters your kids like. Just as one example, it’s unreal how wide and complex the Star Wars universe is — so much happens in print! Or Halo? I don’t play the game, but the books are great!
Come up with a topic the whole family wants to learn more about, research it individually and share it.
Look for books about your next vacation spot to learn more about its history, geography, and culture.
Hope for the future
It’s up to us to inspire the next generation of readers and the best way to do that is by teaching how to engage with and love reading.
My greatest joy was seeing my son choose a book over a screen. Given an example, time, and encouragement, yours will too. | https://medium.com/raise-a-lifelong-reader/for-the-love-of-libraries-c1d0233df3ac | ['Trudi Griffin', 'Ms'] | 2020-01-29 17:19:34.778000+00:00 | ['Literary', 'Books', 'Reading', 'Literacy', 'Libraries'] |
A Liberating Productivity Insight | The Principle
Here is a principle I want you to seriously consider. It can set you free. You may push back. So, read this carefully and be honest.
Although I may be able to do several things at once, I can only focus on one thing at a time.
Cooking dinner. Talking on the phone. Driving to the store. Listening to an audiobook. Following the stock market. Walking and chewing gum. :)
We can do many things at the same time. But we, as finite humans, cannot truly multitask. That is a capacity reserved for God alone.
This is not a downer principle. Trust me. Facing our limitations is really good news. | https://medium.com/the-mustard-seed/a-liberating-productivity-insight-5d0ee8db94f2 | ['Dr. Mckay Caston'] | 2020-11-11 16:54:11.294000+00:00 | ['Spirituality', 'Christianity', 'Productivity', 'Religion', 'Grace'] |
Medically Proven Cure For Writer’s Block | Summers in Santa Barbara are or have been blissfully mild, allowing every sort of diversion or recreation. I spent two as a Teaching Fellow at the South Coast Writing Project working with teachers who hoped to teach writing. The Director, Sheridan Blau, brought in a host of writers, some nationally celebrated, some academic, some amateur, all of whom described the processes by which they got words on the page. He’d build on their comments, set the class a set of assignments and wait for the inevitable throat clearing and, yes, dare I say, whining. “Why is this like opening a vein?” Sheridan inevitably barked. The bark was delivered frequently enough that the class pitched in for a t shirt emblazoning the phrase surrounded by blood spatter.
Fetching.
So, Writer’s Block. It’s a real thing. I’ve seen it for years and have had an occasional bout of blockage myself. Over many, many years of reading the work that came from blocked imaginations, I devised and borrowed assignments that were intended to liberate the writer. My colleagues may have insisted on quality; I wanted fluency. Revision is a separate and exceedingly helpful skill, but one that can only follow fluency. There has to be something to revise. I won’t trot out every stratagem in one tip sheet, but I will explain one of the most effective and provide live footage (on paper) of a writer galloping down the path I suggest.
Borrowing from those who compose music, I’ve called this “Writing in the Key of And…”, by which I mean asking the writer to summon up a memory of any sort, any subject, as long as there is some immediacy to it, then describe the event without using any punctuation — no capitalization, no pauses, ellipses, periods, question marks. Nada. However, every separate thought or description has to be introduced by the word “and”. There’s some resistance at the outset, and many questions, but then the magic often happens.
There is something about a headlong rush through the describing of a moment that can bring breathless urgency to a piece. Part of the process is that it shoves together the essential and the transitory, the observed and the imagined, self-reflection and the emotion of the moment. I say, “Throw attentive self-editing to the winds. No room for doubt. Keep the pace. Go wherever your mind takes you. Then let’s see what you’ve got.”
My car’s battery died last week. Not a particularly notable event. As it must to all batteries, last week death came to mine. Here’s my Key of And:
“What the hell the car is just not working and lights are flashing and then not flashing and then nothing at all and I am supposed to be going to volunteer at the Hospice Boutique and I like volunteering and I hate being late and I hate ditching even more and I have ditched many many too many things in the course of my life and what the hell is the course of my life and is it a course as in path or is a course as in academic course and that’s exactly the sort of question that serves absolutely no purpose and the expression that comes to mind is tits on a bull and that is a vile expression and how do I get things out of my head that I do not want in my head and songs are among those things and I have a song in my head literally around the clock and I don’t know actually if I do when I sleep and I do know it’s there when I wake up and many of them are from God knows where or when and that put the song who knows where or when in my head and that’s not one I want in my head and I have to do a quick recasting of songs and I’m trying as hard as I can and come with que sera sera and that’s not much better in terms of existential panic and I do like the tune though and I can keep that on in the background and think about other things and one of those things is death and dying and that’s two things and maybe one after all and I won’t know until I know and that’s if I know and what am I doing spilling my guts here and where am I supposed to spill my guts and I wish I had a guru or master who welcomed gut spill and I actually really don’t want a guru or master foraging through my guts and I have to get out of this assignment and this will be the end.”
Cautionary note. I am remembering how hard it is to return to a crafted sentence after having enjoyed the literary wind in my hair, and, of course, I am tempted to go off on a riff about my hair or lack thereof. So there are some issues with fluency that will need attention. On the other hand, I am reading Anna Burns’ Milkman, a book that won the Man Booker Prize and one that I find fascinating. It’s not quite written in the key advertised above, but there are moments that come close.
From page 200, randomly selected:
“ ‘It’s creepy, perverse, obstinately determined’ went on longest friend, she said, ‘It’s not as if, friend,’ she said, ‘this were a case of a person glancing at some newspaper, as they’re walking along to get the latest headlines or something. It’s the way you do it — reading books, whole books, taking notes, checking footnotes, underlining passages as if you’re at some desk or something, in a little private study or something, the curtains closed, your lamp on, a cup of tea beside you, essays being penned — your discourses, your lubrications. It’s disturbing. It’s deviant. It’s optical illusion. Not public spirited. Not self-preservation. Calls attention to itself and why — with enemies at the door, with the community under siege, with us all having to pull together — would anyone want to call attention to themselves here’.
Why indeed?
Some readers will have arrived at this point having read both passages, and some may come away with an appreciation of fluency as an end itself. Others, not so much. All I can offer at the end of this exercise is that no veins were opened in the making of this essay.
I’m done, and it is probably not necessary to say that the last bit in the last sentence sent me to the exculpatory note at the end of movies in which no animals, they say, have been harmed, and from there … | https://parango46.medium.com/medically-proven-cure-for-writers-block-5ebec042299 | ['Peter Arango'] | 2019-11-15 18:14:57.948000+00:00 | ['Writers Block', 'Blocked Writer', 'How To Beat Writers Block', 'Writing'] |
Guide to Multimodal Machine Learning | Guide to Multimodal Machine Learning
Analysing Text and Image at the same time!
Meme with the same text but different meaning. Source: Author of this post
Multimodal learning first got my attention through Facebook’s recent Hateful Memes Challenge 2020 on DrivenData. The challenge is about how to make an effective tool for detecting hate speech, and how it must be able to understand content the way people do. It seems like a pretty cool challenge, as it makes use of both text and image for analysing content, which is similar to what humans do. Let’s dive deep into multimodal machine learning to see what it actually is.
Multimodal Learning
By definition, multimodal means that we communicate through combinations of two or more modes. Modes include written language, spoken language, and patterns of meaning that are visual, audio, gestural, tactile and spatial.
In order to create an Artificial Intelligence ( even A.G.I 🤩 ) that is on par with humans, we need AI to understand, interpret and reason with multimodal messages. Multimodal machine learning aims to build models that can process and relate information from multiple modalities.
To understand how to approach this problem, we first need to understand the challenges that need to be addressed in multimodal machine learning.
The challenge of Multimodal AI
Representation: The first and foremost difficulty is finding a way to represent and summarize multiple modalities so that we can exploit their complementary and redundant nature. We need to understand that all the modes of information we take into account usually point towards the same thing: lip movements and the sound we hear from a person represent the same speech. But using both together gives us the robustness that helps us understand what the other person wants to convey. So the first challenge is how we can combine multimodal data. For example, language is often symbolic while audio and visual modalities are represented as signals. How can we combine them?
Alignment: Secondly, we need to identify the direct relations between sub-elements from different modalities. Let’s make this concrete with a real-life example. We have a video on how to complete a cooking recipe, and we also have its transcript. To make complete sense of what’s going on, we need to match the steps shown in the video with the corresponding parts of the transcript. This is known as alignment. How do we align different modalities and deal with possible long-range dependencies and ambiguities?
Translation: The process of changing data from one modality to another, where the translation relationship can often be open-ended or subjective. At some point, we might need to convert one form of information to another. Image captioning is one prime example of this. But there exist a number of correct ways to describe an image, and one perfect translation may not exist. So how do we map data from one modality to another?
Fusion: The fourth challenge is to join information from two or more modalities to perform a prediction. The Facebook AI Hateful Memes Challenge discussed above is one example of it. Usually, we divide model-agnostic fusion approaches into two kinds: early fusion and late fusion.
Early Fusion And Late Fusion. Source: Author of this post
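To make the two strategies concrete, here is a minimal, hedged sketch (the feature sizes, the toy softmax classifier, and the simple averaging rule are illustrative assumptions, not from the article): early fusion concatenates the modality features before a single prediction, while late fusion predicts per modality and then combines the decisions.

```python
import numpy as np

def toy_classifier(x, n_classes=2, seed=1):
    """Stand-in for any trained classifier head: linear map + softmax."""
    w = np.random.default_rng(seed).normal(size=(x.shape[0], n_classes))
    logits = x @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
text_feat = rng.normal(size=16)   # e.g. features from a text encoder
image_feat = rng.normal(size=32)  # e.g. features from an image encoder

# Early fusion: combine the modality features, then classify once.
early_probs = toy_classifier(np.concatenate([text_feat, image_feat]))

# Late fusion: classify each modality separately, then combine the decisions.
text_probs = toy_classifier(text_feat, seed=2)
image_probs = toy_classifier(image_feat, seed=3)
late_probs = (text_probs + image_probs) / 2  # simple decision averaging

print(early_probs, late_probs)
```

In practice the combination step for late fusion can be anything from averaging to a learned meta-classifier, and early fusion usually happens on intermediate features rather than raw inputs.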
Co-Learning: Transferring knowledge between modalities, including their representations and predictive models. This is an interesting one because sometimes we have a unimodal problem, and what we want from the other modalities is some extra information at training time so that our system can perform at its best at testing time.
If, after reading this, multimodal machine learning has got you hooked, I would suggest going through the CMU Multimodal Machine Learning course. Link in the references.
Reference: | https://towardsdatascience.com/guide-to-multimodal-machine-learning-b9b4f8e43cf7 | ['Parth Chokhra'] | 2020-11-05 02:57:12.230000+00:00 | ['Deep Learning', 'AI', 'Data', 'Data Science', 'Machine Learning'] |
Is there Loss in Leaving Traces of Yourself Behind? | Part of my AML collection
On the flight from Phoenix to Chicago, I began reading one of my favorite books.
This was before that thing called “airplane mode” existed on our electronics, so I traveled with an actual paperback for those limbo electronic times — waiting to take off, waiting to land. Who am I kidding? I still travel with a paperback, just in case my Kindle fails.
The book was a thin 1965 reprint of a 1955 Anne Morrow Lindbergh (AML) classic, “A Gift from the Sea.” Over the years, I’ve kept track of the number of times I’ve given this book to women, finally ordering a dozen copies of the edition issued in 1991 to have available. So far sixteen copies are in the hands of friends who otherwise may not have read it. Only one person said they didn’t like it, the others shared similar words of, Ah … thanks, I needed that.
My forty-six year old copy was tattered, bore my name inside the front cover and the year I found it: 1995. Throughout the pages are jotted notes and highlights in my favorite neon green. I read this book at least once a year, sipping it slowly like the coffee I may drink while turning the pages. It refreshes, reawakens, renews my spirit, my energy, my ability to live outward.
Ms. Lindbergh’s eloquence always touches my heart, not the least when she is humbly seeking and finding enlightenment from whatever world she’s currently inhabiting. In this book it’s a casual piece of coastline with an unassuming beach house near the water. She is mere steps from the froth of the ocean, writing during a week when she manages to leave her wifely and maternal responsibilities behind and find time — enjoy time — for herself.
We women do not do that well: explore and indulge in extended hours for ourselves.
When I was single and lived alone, it was certainly easier. I carved out Sunday afternoons as sacred, spilling multiple projects around my living room, a favorite movie in the background on a quiet TV, or soothing music playing at a subdued level.
But life changes and gets more hectic, and it’s been years since I was that person in those circumstances. Finding time now is harder. There are many things vying for my attention — so many of them the same as, or different from, those of other women in my life: a husband and marriage, a house to clean and manage, writing projects to work on, agents to query, bible study, friendships to maintain, family to see, groceries to buy and turn into meals, a lawn to mow, gardens to weed, or a driveway and sidewalk to shovel snow from, social media to keep up with, books, magazines, newsletters to be read.
I have difficulties arranging everything and in a past life, I taught a class on time management. How do women with children manage any hours for themselves? Even for a simple bubble bath?
When I reach the level of being overwhelmed by “things” that “need” done, I reach for Anne and her gentle reminders of taking gifts from the sea, or the places we are right now, and pulling them into ourselves for female-rejuvenation.
During that flight, I could feel my heart relaxing as I finished chapter one, barely getting to know her once again and yet remembering what she would be sharing with me later on.
Then we were given permission to turn on laptops. I tucked Anne away in the seat pocket, withdrew my Mac, and got busy working.
Do you see the irony already?
Leaving vacation-me behind until the next non-electronic moment, I worked for a while, spent some time educating myself about my first Apple product, read a novel on my Kindle, learned about SEO…. When you have the electronics available, you multitask like never before — burning through different projects at speeds not possible with paper.
Coming into Chicago, I put everything away, shut my eyes for the duration of the flight, and pondered Anne. Mrs. Charles Lindbergh was so much more than what that name infers. She was a petite, quiet, reserved mother of five and a prolific and profound writer, yet she stayed in the shadows of her husband’s adventures. In North to the Orient, 1931 finds her sitting on a crate in Charles’ plane, helping him map the air travel route from New York to Tokyo. She was a participant in many of his explorative flights, and yet I was alive until 1995 before I knew anything about her beyond the horrific kidnapping of their son.
What an inspiration AML would be to young women if only we knew about her! She was as aware as Virginia Woolf about the need for women to have rooms of their own. Given the recent revelation of Charles’ multiple lives, the one thing I admire him for is ensuring that no matter where they lived, he made a special place for Anne to write. He encouraged her talent and urged her to pursue publication.
It was as we were preparing to board for the connecting flight home to Pittsburgh that I reached into my backpack for Anne and realized she was gone. I’d left her in the airplane seat pocket. In thirty years of flying, I’ve never left anything on a plane and of the inconsequential things I could have left, instead it was a treasured book.
Dashing to the ticket counter, I asked the agent for her help; she tried, but without success. However, the lost book led to a conversation and a chance for me to share Anne’s book with another rushed and harried woman. She jotted down the name and the author and told me it sounded like a book she needed to read, right now.
Sad as I was to board the plane, I started to think about what we leave behind us, what resounds in the wakes of our passing. When we move on, apparent in our absence are both the intangible and the tangible — like a book and like a conversation. Perhaps that gate attendant went to her local bookstore and found Anne, read it, and felt some hope for future moments of quiet and peace in her hectic life.
I filed a report and hoped for months that the book would turn up in the lost and found. That didn’t happen. So the next thing I hoped for was that the woman who sat in that seat after me found the book and first thought to herself, I should turn this in. Then she flipped a page and read, flipped another page, and read, and became so engrossed, enthralled with what had been left behind that she was compelled to keep Anne so she could take joy in the fluid, magical words of Gift From the Sea.
This book is a tangible trace of myself that I left behind. It bears my name and has my notes scattered throughout the pages. It makes me wonder what traces of myself I leave in the world as I walk through it and touch some places, some people, and then others.
What do you leave in the lingering traces of your wake? | https://rosemarygriffith.medium.com/is-there-loss-in-leaving-traces-of-yourself-behind-7607b1c5f9a2 | ['Rose Mary Griffith'] | 2019-03-12 12:30:31.813000+00:00 | ['Travel', 'Life Lessons', 'Reading', 'Women', 'Writing'] |
React styling made easy with Styled Components | I understand the notion of “cascading” style sheets and how they ought to work. But as my projects get bigger, I find it difficult to cleanly organize my style files. Most of the time I find myself tangled in a CSS web, hesitant to change things and inadvertently break throw off styles elsewhere in my project.
Cue: React Styled Components. This library allows you to compartmentalize your CSS by creating modular style components. The CSS for one component is completely isolated from that of another, in its own styling sandbox. This makes it really easy to play with different components or sections of your page without worrying about unintended side effects. Getting started is a breeze:
npm install --save styled-components
I like to organize my components/containers in folders along with a style file for each. The style file will (obviously) hold the styled components.
You can certainly add your styles in the same file as the component’s JSX, but I find it helpful to separate concerns.
I’ll start with some extraordinarily simple JSX to show you how it’s done. Instead of using traditional JSX tags to wrap your text/image/input, you replace them with pre-styled components assigned to variable names.
Say I wanted to create a component that simply returned an h1 with blue text. Instead of wrapping my text in an <h1>, I define a variable called “Header”, which I assign to be a styled <h1>. I declare CSS properties as normal within that Header component variable, then I wrap my text in that component like so:
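The original code screenshot didn’t survive this text extraction; here is a minimal sketch of the pattern just described (the file names and the heading text are my own illustrative assumptions):

```jsx
// Header.styles.js — the separate style file
import styled from 'styled-components';

export const Header = styled.h1`
  color: blue;
`;

// App.js — wrap your text in the styled component like any other React component
import React from 'react';
import { Header } from './Header.styles';

const App = () => <Header>Hello from a styled h1!</Header>;
export default App;
```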
Take care to import and export your files appropriately, the same way you would with any other React component. The output from the above code is a blue <h1> tag, just as expected. The beauty is that we’ve now compartmentalized the styling for that particular tag.
Here I’ve added a bit more code. You can see that I had two sentences wrapped in <p> tags that I wanted to have different styles. I created two different components and styled them entirely differently.
One important rule (that cost me a lot of time) — as with regular React components, variable names for styled components must be capitalized.
You can nest additional styles inside of your styled component, just as you would with vanilla CSS.
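Again, the screenshot is missing from this extraction; a sketch of what nested styling can look like (the `Coding` component name matches the article’s own example, while the specific rules are illustrative guesses):

```jsx
import styled from 'styled-components';

const Coding = styled.div`
  /* nested rules target elements inside <Coding>, just like vanilla CSS */
  h2 {
    text-transform: uppercase;
  }
  p {
    font-style: italic;
  }

  /* these apply throughout the component unless overridden above */
  color: rebeccapurple;
  font-size: 18px;
`;

export default Coding;
```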
Note how I was able to target different elements within the <Coding> component. The styles listed at the bottom (color and font-size) will be applied throughout the entire component, unless an element has been more specifically styled. I fully recognize that the code and styles here are silly, but I’m sure you can imagine the utility of being able to isolate styles this way.
These are the basics — it’s really that easy! Styled components won’t make you a CSS wizard (unfortunately), however the modularity of them should make your next project a little bit more manageable.
Check out https://styled-components.com/docs for more info! (But tbh it couldn’t be simpler — just do what I told you 🤪). | https://jessicasalbert.medium.com/react-styling-made-easy-with-styled-components-7ecaa4b15c71 | ['Jessica Salbert'] | 2020-11-12 15:40:20.592000+00:00 | ['Styled Components', 'React', 'Programming', 'Web Development'] |
6 Types of Neural Networks Every Data Scientist Must Know | DEEP LEARNING, NEURAL NETWORKS
6 Types of Neural Networks Every Data Scientist Must Know
The most common types of Neural Networks and their applications
Neural networks are robust deep learning models capable of synthesizing large amounts of data in seconds. There are many different types of neural networks, and they help us in a variety of everyday tasks from recommending movies or music to helping us buy groceries online.
Similar to the way airplanes were inspired by birds, neural networks (NNs) are inspired by biological neural networks. Though the principles are the same, the process and the structures can be very different. This is as true for birds and planes as it is for biological neural networks and deep learning neural networks.
To help put it into perspective, let’s look briefly at the biological neuron structure. Figure 1 shows the anatomy of a single neuron. The central part is called the cell body, where the nucleus resides. Various connections called dendrites pass the stimulus to the cell body, and a few connections called axons send the output to other neurons. The thickness of the dendrites and axons implies the power of the stimulus. Many neurons with various cell bodies are stacked up and form a biological neural network.
Figure 1: Anatomy of Single Neuron (Image by author)
This same structure is visible in deep learning neural networks. The input is passed through an activation function (similar to the nucleus) with weighted edges (similar to dendrites). The generated output can be passed to another activation function. Many of these activation functions can be stacked up, and each of these is called a layer. Apart from the input layer and the output layer, there are many layers in the interiors of a neural network, and these are called hidden layers. | https://towardsdatascience.com/6-types-of-neural-networks-every-data-scientist-must-know-9c0d920e7fce | ['Ramya Vidiyala'] | 2020-12-18 14:02:07.149000+00:00 | ['Machine Learning', 'Data Science', 'Technology', 'Education', 'Artificial Intelligence'] |
Blue | Image Source: Unsplash
Tonight I feel
Really fucking low
I feel down, I feel blue
I’ve used every word and phrase
For sadness, for depression
I’m down in the dumps
I’m not myself
I feel stressed out
Blue, miserable, fed up –
Tonight I feel blue
Tonight I feel blue
Tonight
It is just one night
I have to remember
When I think I can’t keep going through
This shit show time and again
That it passes
It always passes
Tonight
It is just one night
Everything could change by the next morning
And I won’t know
Until I wake
I feel really low
Really down, really blue
But I can make it until morning
Tonight, just one night
I can make it
I won’t know until
Blue night turns into blue morning | https://medium.com/the-partnered-pen/blue-faa238596998 | ['Kat Morris'] | 2019-11-10 19:16:53.874000+00:00 | ['Mental Health', 'Sadness', 'Depression', 'Poetry', 'Poem'] |
My Plant Friend: Wood Sorrel | My Plant Friend: Wood Sorrel
How a common weed led me to veganism…
Plant Camp 2006: Earthaven Ecovillage, Black Mountain, NC
In 2006, I attended a one-week herbal intensive at Earthaven Ecovillage. One of our assignments was to find a plant to work with up close and personal. In herbal circles this is known as the “plant ally.” To choose a plant ally, we were told, we could simply look around and see which plants called to us. It could be something that might treat a physical condition, something that simply looks pretty, or something you have always wanted to learn more about. Part of the idea, though, is to slow down, observe, listen and allow the plants to speak…
“Let my voice move into your awareness, as I spin my sister story, as I tell you of my ability to feed you fully: I can nourish your being, your sense of self-worth and every cell in your body.” by Susun Weed
The first plant ally I considered was Stinging Nettle. It was the first herb my acupuncturist recommended. I don’t remember if she said it was for moving blood, increasing yin, or toning the liver, or what. She was rather vague and I was rather skeptical of Chinese explanations for the way things work. I do know that nettle is recommended for allergy, menstrual issues, calcium, and arthritis. Since I had all those issues, I thought nettle would be a perfect ally.
Later in the week, my teacher shared her personal journey from veganism to once again consuming animal products. At the time, I had been strongly considering veganism, but while at Earthaven I had justified my food choices by telling myself that the eggs and cheese come from happy chickens and happy goats. As she spoke about the lamb that offered itself to her, I realized that the animals are only alive and happy as long as they are useful, but as soon as someone has a personal excuse to kill and eat them, that’s it. No more happy, peaceful lamb. As this awareness settled into my being, my heart joined with that lamb’s heart and we cried out for a deeper path.
Meanwhile, all week long I had been noticing the Wood Sorrel. It is a common plant and easily recognized with three tiny heart-shaped leaves. Even while I noticed this plant over and over, I never considered it as an ally. It is an ordinary salad green with no medicinal value. As my teacher was sharing her story, I began doodling the Wood Sorrel all over my notebook. It suddenly dawned on me that the Wood Sorrel was calling to be my plant ally. This common little plant was calling me to follow my heart and stand strong in my spiritual beliefs.
UPDATE 2019
For the past few years, I have been studying divination practices such as animal totems, astrology, and tarot cards. In reading back over this story it occurred to me that I was wrong to judge my teacher for her choice. Just as I saw meaning in the wood sorrel, she saw meaning in the lamb. We can’t find meaning in everything; not everything has a message for us. But if we do read a message, then the message is meant to be heard. Perhaps she needed to eat meat and eggs; perhaps I did not.
If you would like to learn more about working with a plant ally, check out “Worts and Cunning Apothecary’s Plant Ally Project”
Quote from Healing Wise by Susun Weed. | https://medium.com/weeds-wildflowers/my-plant-friend-wood-sorrel-f943735bf7f2 | ['Laura Manipura'] | 2019-10-20 13:14:28.825000+00:00 | ['Nature', 'Plants', 'Vegan', 'Nature Writing', 'Vegetarian'] |
Initial Impressions Of The iPad | My (long awaited) iPad arrived last Friday, and I’ve had a few days to start my relationship with it. I have the wireless plus 3G version. Here are my initial impressions, from a user experience point of view:
Fingerprints, fingerprints, fingerprints — I am beginning to understand the use of fingerprints in forensic science… certainly they are all over my iPad! Luckily they are easy to clean off with a little windex and a soft cloth.
It doesn’t replace anything — I’ve read some critiques of the iPad saying it can’t replace your iphone (doesn’t have a phone) and it can’t replace your laptop (not enough storage space etc.) My take is that it isn’t meant to replace anything… it is its own device in its own right. I don’t think that is a bad thing. The iPad is different.
The iPad apps are what it’s all about — As soon as I got the iPad I started downloading apps. Most of them are free, a few cost from .99 to 9.99. The apps are great, and I find myself scanning my 3 news sources more than I do on my laptop. I’ve started reading books. I know, it’s not a Kindle, but I like reading books on the iPad. Apps for the iPhone “work” but essentially are useless… they look bad and show up tiny on the screen.
Small differences have huge results — The user interface for calendar, and email (both ical and gmail) is subtly different than on a laptop, but the difference in the interface makes a huge difference in the experience. Although a keyboard is important for composing an email, perusing emails, reading them, deleting them, looking at your calendar is all much more intuitive on an iPad than on any other device I’ve used. Having said that, you need to add the keyboard (I got the wireless one) if you are really going to type anything. I find I use my laptop when I really need to type, and the iPad when I don’t. I am going to experiment with using the iPad and the keyboard while travelling in place of my laptop.
The iPad is my new pet — Maybe I’ll be able to articulate this better as time goes on, but in the few days I’ve had the iPad I’ve become attached to it. It’s something about the size, the shape, the speed of response, and the user experience of using your fingers to navigate rather than a mouse and keyboard… all of these things make me feel attached to the iPad. It’s like a pet. I want it near me, I reach for it first thing in the morning and often during the day.
It’s not perfect, and I’m sure the whole concept will evolve over time, but there’s a new device in town that I believe is here to stay. Maybe I’ve just got the glow of a new relationship. I’ll let you know if it lasts!
Do you have an iPad? Want one? Don’t want one? Write a comment with your opinion.
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — -
Did you find this post interesting? If you did, please consider doing one or more of the following:
add your comment
subscribe to the blog via RSS or email
sign up for the Brain Lady newsletter
share this post | https://medium.com/theteamw/initial-impressions-of-the-ipad-1066642c00bc | ['The Team W'] | 2016-09-21 22:12:06.395000+00:00 | ['User Interface', 'iPad', 'Review', 'Design'] |
Fantasy Football Data Analysis | Fantasy Football Data Analysis with Python
Analyzing the relationship between players’ fantasy production and defenses
Photo by Eternal Seconds on Unsplash
With the 2020–2021 NFL fantasy football season about to come to a close, I was inspired to analyze data from the past few years:
Before I start jumping into the data analysis, here’s a summary of fantasy football so we are all on the same page:
To play fantasy football, you need to create or join a league on one of many websites (ESPN, Yahoo, Sleeper, CBS, NFL, etc). Each member of the league is the owner/general manager of their team. Each league has two parts to a roster: the starting lineup and the bench. The starting lineup includes a combination of quarterbacks (QB), running backs (RB), wide receivers (WR), tight ends (TE), flexes (FLEX), kickers (K), and defenses (D/ST) based on league settings. The bench can be made up of any players the owner chooses.
There are 3 types of leagues:
Redraft Leagues: each season, rosters completely reset and all players are available to draft
Keeper Leagues: each season, owners can keep a certain number of players for the following season, and then draft the rest of their roster
Dynasty Leagues: each season, owners keep their entire roster for the next season and draft rookies in the draft
There are 2 types of drafts:
Snake (Traditional) Drafts: each owner gets a chance to draft and the draft order reverses each round of the draft
Auction Draft: each owner gets a set amount of money, and each owner can bid for each player as long as they have a sufficient amount of money
There are 2 types of scoring systems:
Standard: 1 point per 25 passing yards, 4 points per passing touchdown, 1 point per 10 rushing or receiving yards, 6 points per rushing or receiving touchdown, -2 points per fumble lost or interception.
Point Per Reception (PPR): Scoring is the same as Standard, except players get 1 additional point per reception
After the draft, you can add unrostered players to your team or make trades with other owners to improve your team.
The first 13 weeks of the NFL season is known as the fantasy regular season, each week you play in a head to head matchup with another owner in your league. Whoever has the most points scored by their players that week receives a win. After the first 13 weeks, the owners with the most wins make the playoffs and are placed into a bracket. After the fantasy playoffs (weeks 14–16), the champion is crowned.
A vital part of fantasy football is deciding which players you are starting based on their matchups. Some players’ output is heavily reliant on the strength of the defense they play while others are “matchup-proof”, meaning that regardless of the strength of the opposing team, they will perform well. I wanted to figure out which players were “matchup-proof” and which players were matchup reliant.
This led me to ask the question: what is the effect of a defense’s strength on the fantasy output of a player?
Step #1: Data Collection
The first step to any data analysis project is collecting the data. The data necessary is the yearly stats for every player, the weekly stats for every week for every player, the rankings of every defense against QBs, RBs, WRs, and TEs, and the schedules for each team from 2017–2019.
Step #2: Calculating Fantasy Points
The next step is to iterate through all the data files and transform all of the stats into PPR fantasy points:
fantasypoints = 0

# negative stats
fantasypoints -= (stats["FL"][i] * 2)
fantasypoints -= (stats["Int"][i] * 2)

# positive stats
fantasypoints += (stats["PassingYds"][i] * 0.04)
fantasypoints += (stats["PassingTD"][i] * 4)
fantasypoints += (stats["RushingYds"][i] * 0.1)
fantasypoints += (stats["RushingTD"][i] * 6)
fantasypoints += (stats["ReceivingYds"][i] * 0.1)
fantasypoints += (stats["ReceivingTD"][i] * 6)
fantasypoints += (stats["Rec"][i])
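As a quick sanity check, the same PPR rules can be wrapped in a small standalone function. The stat names mirror the columns above; the sample stat line is made up for illustration:

```python
def ppr_points(row):
    """Compute PPR fantasy points from a dict of raw stats."""
    points = 0.0
    # negative stats
    points -= row.get("FL", 0) * 2
    points -= row.get("Int", 0) * 2
    # positive stats
    points += row.get("PassingYds", 0) * 0.04
    points += row.get("PassingTD", 0) * 4
    points += row.get("RushingYds", 0) * 0.1
    points += row.get("RushingTD", 0) * 6
    points += row.get("ReceivingYds", 0) * 0.1
    points += row.get("ReceivingTD", 0) * 6
    points += row.get("Rec", 0)
    return points

# hypothetical stat line: 300 passing yards, 2 passing TDs, 1 interception
sample = {"PassingYds": 300, "PassingTD": 2, "Int": 1}
print(ppr_points(sample))  # 300*0.04 + 2*4 - 1*2 = 18.0 (up to float rounding)
```

This makes the scoring logic easy to unit-test before running it over three seasons of data.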
After converting player stats into fantasy points, the data files looked like:
Player Name, Position, Team, Games Played, Total Fantasy Points, Average Fantasy Points
Todd Gurley,RB,LAR,15.0,383.3,25.55
Le'Veon Bell,RB,PIT,15.0,341.6,22.77
Kareem Hunt,RB,KAN,16.0,295.2,18.45
Alvin Kamara,RB,NOR,16.0,312.4,19.52
Step #3: Pulling Defense Rankings for Each Week
Once, all the fantasy points were listed for each player in the data files, I needed to pull the defense ranking of the team they played when they scored those points.
The rankings are determined by the average number of fantasy points a defense gives to each position (QB, RB, WR, TE) throughout the year. This means that each defense has 4 different rankings:
Team, QB Rank, RB Rank, WR Rank, TE Rank
ARI,18,4,18,14
ATL,23,7,14,13
BAL,2,22,2,21
BUF,5,32,5,22
To add the ranking of the defense to every weekly stat file, I had to iterate through all the weeks and all the schedule files to find the opposing team and then add the ranking of the defense to the weekly stat file.
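The join itself is simple lookup logic; a minimal sketch with made-up rows (team abbreviations and rank values here are illustrative, not the real 2017 rankings):

```python
# defense rank against QBs, keyed by team (illustrative values)
qb_rank = {"NWE": 30, "LAR": 31, "MIN": 19}

# weekly stat rows: (player, team, position, fantasy points, opponent)
weekly = [
    ("Tom Brady", "NWE", "QB", 33.72, "LAR"),
    ("Jared Goff", "LAR", "QB", 23.58, "MIN"),
]

# append the opposing defense's rank to each row
with_rank = [row + (qb_rank[row[-1]],) for row in weekly]
for row in with_rank:
    print(row)
```

The real pipeline does the same thing across every week, position, and season, using the schedule files to find each player's opponent first.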
Once added, the weekly files looked like:
Player Name, Team, Position, Total Fantasy Points, Opposing Team Rank
Kirk Cousins,WAS,QB,26.8,17
Tom Brady,NWE,QB,33.72,30
Jared Goff,LAR,QB,23.58,31
Case Keenum,MIN,QB,28.56,19
Step #4: Creating Correlation Coefficients and Graphs
After the defensive rankings were added to the weekly files, all that was left to do was to iterate through all the weekly stat files and plot every player’s fantasy points against the ranking of the defense they played:
import numpy as np
import matplotlib.pyplot as plt

plt.scatter(xdata, ydata)
plt.title("Effect of Defense Strength on " + str(playername) + " in 2017")
plt.xlabel("Defense Ranking (1-32) | Correlation = " + str(correlation))
plt.ylabel("Fantasy Production Above/Below Yearly Mean")

x = np.array(xdata)
y = np.array(ydata)
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x + b)
plt.plot(flatlinex, 0*flatlinex, linestyle="--", dashes=(5, 5), color="black")

plt.show()
Now all that was left to do is it interpret the data that I compiled. But, before I share my findings, let me explain the significance of a correlation coefficient.
A correlation coefficient (r) quantifies the strength and direction of a linear relationship. A positive r indicates a positive linear relationship, and a negative r indicates a negative linear relationship. When r is greater than 0.6 or less than -0.6, it means that there is a strong correlation between the two variables.
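For concreteness, the correlation coefficient the plots report can be computed directly. A pure-Python Pearson r, exercised on hypothetical data points:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# a perfectly linear relationship gives r = 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
# a strong but imperfect positive relationship gives r close to +1
print(pearson_r([1, 2, 3, 4], [2, 3, 9, 10]))
```

In practice the analysis used numpy, but the formula is the same: covariance of the two variables divided by the product of their standard deviations.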
Step #5: The Results
There were a few players each year that had a strong correlation between their fantasy output and defense strength:
2017:
Todd Gurley: 0.02 (#1 Overall Player)
Dak Prescott: 0.62
Ezekiel Elliot: 0.64
Alex Collins: 0.65
Drew Brees: 0.65
Charles Clay: 0.65
Marlon Mack: 0.67
OJ Howard: 0.68
Rex Burkhead: 0.71
Jared Goff: 0.85
Python Generated Graph | Image by Author
Python Generated Graph | Image by Author
Todd Gurley was the highest fantasy scorer in 2017 and had virtually 0 correlation between his production and the defense he played. On the other hand, Jared Goff had a correlation of 0.85 and his fantasy production was incredibly reliant on his matchup.
2018:
Todd Gurley: 0.28 (#1 Overall Player)
Josh Reynolds: 0.61
Davante Adams: 0.62
Duke Johnson: 0.62
Marcus Mariota: 0.68
Corey Davis: 0.69
Dalvin Cook: 0.69
Mitchell Trubisky: 0.72
Carson Wentz: -0.73
Russell Wilson: 0.74
Gus Edwards: 0.83
Python Generated Graph | Image by Author
Python Generated Graph | Image by Author
Todd Gurley was the highest fantasy scorer in 2018 and had little to no correlation between his fantasy output and the defense he played against. Surprisingly, Carson Wentz had a strong negative correlation between his fantasy output and the defense he played against. This means that he played better against better defenses. Although there is an outlier in his data, there is still a somewhat clear negative trend.
2019:
Christian McCaffrey: 0.42 (#1 Overall Player)
Tony Pollard: 0.62
Eric Ebron: 0.63
Andy Dalton: 0.64
Tevin Coleman: 0.65
Marquise Brown: 0.65
Chris Carson: 0.67
Jimmy Garoppolo: 0.68
Alshon Jeffery 0.68
Odell Beckham: 0.68
Devonta Freeman: 0.71
Melvin Gordon: 0.73
Adam Thielen: 0.74
Jared Goff: 0.79
Python Generated Graph | Image by Author
Python Generated Graph | Image by Author
Christian McCaffrey scored the most fantasy points in 2019 and had a very slight correlation between his performance and the defense he played against. On the other hand, just like in 2017, Jared Goff had a very strong correlation between his performance and the strength of the defense he played.
However, in football, many factors go into how a player performs other than the defense they’re playing. Just to name a few: game script, injuries, coaching, etc. Although some of these numbers may seem convincing, many other factors are playing a role.
This was my first time using pandas, numpy, and matplotlib in Python. You can check out my code on Github.
Let me know if you have any feedback. Thanks! | https://anishkasam.medium.com/fantasy-football-data-analysis-with-python-b3c017d0d3b5 | ['Anish Kasam'] | 2020-12-29 22:03:31.047000+00:00 | ['Python', 'Sports', 'Towards Data Science', 'Programming', 'Fantasy Football'] |
3 to read: Saying No to Trump | Censorship factories | Tips to deal with disinformation | By Matt Carroll <@MattCData>
Jan. 12, 2018: Cool stuff about journalism, once a week. Get notified via email? Subscribe: 3toread (at) gmail. Originally published on 3toread.co
Why networks should say No to Trump: The president addressed the nation about immigration last week. Margaret Sullivan at the WaPo argues convincingly that the networks should turn away from Trump, next time he wants to use free air time to spread “propaganda.” In his talk, the president offered up no news, but did repeat again and again exaggerated and false information. So what’s the point?, she asks. A good read.
A peek inside China’s ‘censorship factories’: China is big on censoring news, whether it’s about certain political issues or an ominous empty chair. No news there. But the NYT provides a glimpse of what’s like to work inside one of the “censorship factories,” where low-paid people work to scrub the words of 800 million daily users. It’s a fascinating take.
5 lessons for reporting in an age of disinformation: Good tips from Claire Wardle at First Draft News about how to train reporters from being manipulated. Some ideas: Train your newsroom in disinformation tactics and techniques; do more reporting that helps explain the issues that are often the subjects of disinformation campaigns. | https://medium.com/3-to-read/3-to-read-saying-no-to-trump-censorship-factories-tips-to-deal-with-disinformation-c78f13db007a | ['Matt Carroll'] | 2019-01-12 13:21:00.698000+00:00 | ['Journalism'] |
My 5 Questions for the Medium 1% | |Medium|Blogging Tips|Marketing|
Success on Medium is determined by which ecosystem you write in. Here’s are my questions for those who make over $1,000/month on the platform
Photo by Analia Baggiano on Unsplash
The genius of Medium is that it offers elegant publishing tools to everyone, free of charge, irrespective of mother tongue, geography, experience or skill. Still, writers on Medium have vastly different experiences and vastly different reach, based on the Medium ecosystem in which they find themselves. To succeed on the platform, you have to calibrate your strategy with the ecosystem in which you find yourself, rethinking each time you hit your benchmarks for success.
Subscribe to the Suggestion Box newsletter below to receive your free eBook on how to maximize your success on Medium today.
What’s Behind the Questions
Having hit my own benchmarks ahead of schedule, here’s what I need to know to get crystal clear about my next set of goals:
Can you cross $1K without any viral stories in the month?
The joy of earning on Medium is that good, distributed stories pay out small amounts long after they’re published. Still, in each of my most successful months, I’ve had at least one viral story, which represented about half of that month’s earnings. Do Medium all-stars, with more than 5K personal followers hit big earning numbers even without a viral story in a given month?
How many impressions do your top-earning stories get?
Each writer on Medium defines success differently, and when you’re just building your following, a story receiving 100 impressions is an important milestone. On the other hand, I have to imagine that umair haque, with his 188k follower count, receives 100 impressions within 10 minutes of publication. How many impressions can you count on, when you write a good piece within your area of expertise?
Does the publication matter?
Medium all-stars often have more than 5K followers on their personal publications. With hefty personal followings, does publication with wide-distribution labels matter, or are the 1% -ers beyond all of that? Asked differently, I want to know if the major Medium writers ever hold an article because they know that its a perfect fit for a major Medium publication.
What is the ideal length for a profitable story?
Let’s be honest. Some of my most beloved Medium writers can ramble on from time to time. Many come from the world of mainstream media, and as such, I have to imagine that they’ve worked with editors who could easily slash their 3,000 words in half. On the other hand, as Medium pays based on read time, these all-stars know that word count, when compelling, equals increased revenue. How do top writers balance the incentive for quality and the incentive for high word counts?
When is the right time to return to a viral story and share its main ideas again in a new post?
It is incredibly satisfying to watch a story you love go viral, read and respond to comments, etc. While distributed pieces do have long shelf lives, once a story has run its initial publishing course, its reads drop substantially. Medium plagiarism rules do not allow writers to delete and republish stories, but successful writers on the platform return again and again to the topics that resonated with their readers, sharing them in new ways. So, I’m wondering: When is it too soon to return to a hit and how many stories can you get out of an idea that clearly interests readers?
Medium is disrupting the world of publishing. Never before has it been easier for writers to make a living doing what we love. But, success on the platform takes strategy. Building your successful strategy starts by knowing your Medium Ecosystem. | https://medium.com/suggestion-box/my-5-questions-for-the-medium-1-9ba3b5686ef3 | ['Sarene B. Arias'] | 2020-10-15 22:45:30.620000+00:00 | ['Strategy', 'Tips', 'Medium', 'Blogging', 'Writing'] |
CV | Clevan Johnson
Clevan Harvey
Hi, I am a Freelance Social Media Manager and Digital Marketing Consultant, Creator & Founder of You Are Not Alone.
I am a hard working, easy going individual, with a passion for social media, marketing, tech, current affairs, music, art, politics, sport. travel and photography (oh yeah and all things Jamaica!).
Creator -(You Are Not Alone), February 2017 — Present
You Are Not Alone -an online platform where people who suffer/ed with mental health illnesses can share their stories; with a similar approach as Humans of New York but extending it further than just a picture and a quote but having a space (a blog) where they can share in depth about their battle with their mental health illness.
Freelance Social Media Manager / Digital Marketing Consultant -(FWRD), November 2016 — Present
- Managing social media accounts — Twitter, Instagram, Facebook and Snapchat
- Growing an organic following
- Engagement amongst followers
- Strategising Social Media Campaigns and posts
- Creating Content
- Brand Consulting
- Launch campaigns
- Managing blog content from contributors
- Managing online community/Admin Slack Group
Freelance Social Media Manager / Digital Marketing Consultant / Startup Community Lead - (YSYS — formerly SWS), October 2016— Present
Social Media Manager / Digital Marketing Consultant:
- Managing social media accounts — Twitter, Instagram, Facebook and Snapchat
- Growing an organic following
- Engagement amongst followers
- Strategising Social Media Campaigns and posts
- Creating Content
- Brand Consulting
Startup Community Lead
Responsibilities:
Community Growth:
- Scout for individuals/information regarding bloggers and articles
- Interviewees (startup founders / brands etc)
- Finding relevant startup / tech information/news content for the social media outlets
- Establish partnerships with ecosystems / brands. Keep a database of potential leads
- Cultivate deep relationships with colleagues as well as partners, organisations, etc.
- Attend events on behalf of SSW and provide feedback
- Create strategic community engagement / marketing plans
Events:
Help with events planning, logistics and execution
Freelance Social Media Manager — (Surprise Them With Progress), February 2017 — Present
Managing social media accounts — Twitter, Instagram, Facebook and Snapchat
- Growing an organic following
- Engagement amongst followers
- Strategising Social Media Campaigns and posts
- Creating Content
- Brand Consulting
Freelance PR & Social Media Manager — (Mr Outspoken), June 2016 — Present
- Managing social media accounts — Twitter, Instagram, Facebook and Snapchat
- Growing an organic following
- Engagement amongst followers
- Strategising Social Media Campaigns and posts
- Creating Content
- Brand Consulting
- Launch campaigns and Music management (organising connections with music organisations such as SBTV, etc)
Freelance Social Media Manager — (Goodwood Pictures), June 2016 — Present
I now manage all of Goodwood Pictures social media platforms. Where I use my expertise to gain followers for his online presence but also gain awareness to his films and photography and his personal brand.
By using my social media expertise I turn followers into engaging fans. Using several social media mechanisms, whether it be engaging with followers and others on social media or delivering interactive topical discussions on these platforms with excellent story telling techniques.
Social Media marketing and branding / PR.
Freelance PR & Social Media Manager & Digital Marketing Consultant — (Guzzi’s World — Vlogger), July 2016 — January 2017
I have gained 5 thousand plus organic followers for Guzzi’s World twitter account since taking it over from early July 2016
I managed all of Guzzi’s World social media platforms. where I used my expertise to gain followers for his online presence but also gain awareness to his vlogs and his personal brand.
By using my social media expertise I turn followers into engaging fans. Using several social media mechanisms, whether it be engaging with followers and others on social media or delivering interactive topical discussions on these platforms with excellent story telling techniques.
Social Media marketing and branding / PR.
Work Experience
(Hope to put Social Chain here soon)
Qualifications
Politics Diploma — University of Westminster, London (September 2013 — July 2016)
A Levels — Bromley College (Orpington Campus), London (September 2011 — June 2013)
Economics
Government & Politics
Sociology
St Francis Xavier Sixth Form College, London (September 2010–June 2011)(Yes Tinie Tempah went here too)
GCSEs — St. Mary’s Catholic High School, London (September 2005 — June 2010) | https://medium.com/clevan/cv-400a1d1f32f8 | [] | 2020-11-24 00:47:23.793000+00:00 | ['Cv', 'Social Media', 'Marketing', 'Jobs', 'Politics'] |
What convolutional neural network architecture works best for classifying malware images | What convolutional neural network architecture works best for classifying malware images
Apparently, we can now detect viruses by converting them into images
As the world goes digital, cybersecurity becomes exceedingly important. Manually programming rules to catch the viruses is slow and will never keep up with ever-changing malware. This is why the idea of representing malware binaries as an image was groundbreaking. The procedure is relatively simple and is done by the following procedure:
How to catch malware
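The conversion step itself is straightforward: each byte of the binary is read as an 8-bit grayscale pixel intensity, and the byte stream is cut into rows of a fixed width. A minimal pure-Python sketch (the width and sample bytes here are chosen purely for illustration):

```python
def bytes_to_image(data, width=8):
    """Interpret raw bytes as rows of 8-bit grayscale pixels."""
    pixels = list(data)  # each byte becomes a 0-255 intensity
    # pad the final row with zeros so the array is rectangular
    remainder = len(pixels) % width
    if remainder:
        pixels += [0] * (width - remainder)
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

# hypothetical 20-byte "binary"
image = bytes_to_image(bytes(range(20)), width=8)
print(len(image), len(image[0]))  # 3 rows of 8 pixels
```

Real pipelines typically pick the row width from the file size and save the result as a PNG, but the core idea is just this reshape: no execution of the malware is ever required.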
This approach comes with several benefits. Accurate conversion to images allows us to use the various deep-learning-based CNNs built for image classification. Visualizing the malware lets us spot patterns. And all this can be done without having to run the malware, which is a big plus for security. Classifying Malware Images with Convolutional Neural Network Models by Bensaoud et al. dives into this. It compares the performance of several image-classification architectures when it comes to classifying malware images. In this article, I will explain the different models compared, as well as elaborate on the results. As always, I will talk of any implications and extensions. At the end of this article will be the annotated version of the paper, so that you can understand some of the nuances better.
The need for ML-based detection
Some might be wondering why ML-based detection would be useful. It would be much more expensive than traditional ways. Would the overhead justify itself? In a chart:
This rapidly increasing number is more than enough justification. This pandemic has accelerated the online shift. As automation changes many areas of life, this number will keep increasing. Building robust and generalized models will not only work well for all types of malware present but also for future malware (more in the extension section).
Results and Understanding the different protocols
Above is a summary of the results predicting the classifying different types of viruses. We see some variance with 2 models getting sub-30% accuracy. The other 4 are all in their 90s. Inception V3 is the winner. If you don’t know these models, no worries, I will now be explaining them all. All the architecture images are taken from the images unless specified otherwise.
VGG 16
VGG 16 is a very interesting model. It prioritizes depth over width by applying small 3x3 convolution filters. This allows the number of weight layers to be pushed from 16 to 19. It was first proposed in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition" by a team from the University of Oxford. It performed extremely well on the ImageNet dataset, beating more complex models. However, in this case, it wasn't able to classify the malware effectively.
Inception V3
Inception V3 is an improvement on the GoogLeNet Inception V1. It is an impressive model that is able to keep up with models like VGGNet despite using over 20 times fewer parameters. This model is 42 layers deep, reducing the error. It combines 5 Inception Module As, 4 Inception Module Bs, and 2 Inception Module Cs, together with 2 Grid Size Reductions and a final auxiliary classifier. This makes it quite powerful, despite a low number of parameters. More information can be found by reading the paper Rethinking the Inception Architecture for Computer Vision. Its results in this task speak for themselves.
ResNet 50
Another heavy hitter in the image-recognition space. This model provides "extra connections between non-contiguous convolutional layers, using shortcut connections." These shortcuts let the signal skip layers, which counteracts vanishing gradients and allows much deeper networks to achieve lower loss and better results. The deepest network in the original paper had 152 layers (8 times deeper than a comparable VGG network). However, it also performed poorly in this task. Read more here: Deep Residual Learning for Image Recognition
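The shortcut idea is simple enough to sketch in a few lines of numpy (a toy illustration of a single residual computation, not the full 50-layer network):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # F(x) = W2 · relu(W1 · x); the shortcut adds x back before the final ReLU,
    # so the signal (and its gradient) can flow through the identity path
    # even when F(x) is close to zero
    fx = w2 @ relu(w1 @ x)
    return relu(fx + x)

x = np.array([1.0, -1.0, 2.0])
w = np.zeros((3, 3))            # degenerate weights: F(x) = 0
print(residual_block(x, w, w))  # the block reduces to relu(x): [1. 0. 2.]
```

The point of the degenerate-weights example is that a residual block can fall back to (nearly) the identity, which is exactly what makes very deep stacks trainable.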
CNN-SVM
CNN-SVM adds a novel twist to a familiar model. "For classification, deep learning models usually use the softmax activation function as the top layer for prediction and minimization of cross-entropy loss. Tang[42] replaced the softmax layer with a linear SVM." Replacing the softmax at the final layer of a CNN with a linear SVM allows the features the network extracts from input images to be classified by a maximum-margin objective. This worked very well for facial expression recognition. However, it seems to underperform compared to CNN-Softmax here, which makes me curious why it was used. Paper: An Architecture Combining Convolutional Neural Network (CNN) and Support Vector Machine (SVM) for Image Classification. I will read more on this and update anything I learn. If you know about this, be sure to share in the comments.
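Concretely, "replacing softmax with a linear SVM" means training the last layer with a hinge loss instead of cross-entropy. A minimal numpy sketch of the multiclass hinge loss (my illustration, not the paper's code):

```python
import numpy as np

def multiclass_hinge_loss(scores, y, margin=1.0):
    """scores: (n, k) raw outputs of the final linear layer; y: true class ids.
    Penalizes any class whose score comes within `margin` of the true class."""
    n = scores.shape[0]
    correct = scores[np.arange(n), y][:, None]
    margins = np.maximum(0.0, scores - correct + margin)
    margins[np.arange(n), y] = 0.0          # the true class incurs no loss
    return margins.sum(axis=1).mean()

scores = np.array([[2.0, 1.0, 0.0]])        # true class 0 wins by >= margin
print(multiclass_hinge_loss(scores, np.array([0])))  # 0.0
```

Unlike cross-entropy, the loss is exactly zero once the true class beats every other class by the margin, which is what gives the SVM-style layer its max-margin behavior.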
GRU-SVM
“Agarap and Pepito[17] modified the architecture of a Gated Recurrent Unit (GRU) RNN by using SVM as its final output layer for use in a binary, non-probabilistic classification task (see Fig 8). They used GRU-SVM on the Malimg dataset and achieved 84.92% accuracy.” This was all the paper said about this. So I did some more digging into this. A GRU is a recently-developed variation of the long short-term memory (LSTM) unit. This is a type of Recurrent Neural Net. It excels in areas like translation, NLP, and speech recognition. GRU-SVM model actually outperforms the conventional networks using softmax as the last layer. The paper I found about it is: A Neural Network Architecture Combining Gated Recurrent Unit (GRU) and Support Vector Machine (SVM) for Intrusion Detection in Network Traffic Data.
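For reference, a single GRU step is compact enough to write out; a minimal numpy sketch with placeholder (untrained) weight matrices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # blend old state and candidate

# With all-zero weights both gates are 0.5 and the candidate is 0,
# so the state simply decays by half each step
h = gru_cell(np.array([1.0]), np.array([2.0]), *[np.zeros((1, 1))] * 6)
print(h)  # [1.]
```

In the GRU-SVM setup, the final hidden state produced by steps like this is fed to a linear SVM layer instead of a softmax.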
MLP-SVM
Another SVM addition. This time we don't replace the final layer of a multi-layer perceptron with an SVM. Instead, we run the SVM and MLP in parallel and combine their results to get our classification. This technique is essentially an ensemble of 2 models. Done right, ensembles can be very powerful, since models can be picked that cover each other's weaknesses. This shows in the performance of the MLP-SVM, which has the second-best performance of the architectures. It even beat the Inception model at classifying certain families of malware.
A more detailed look at the performance
If you’d prefer a graph:
Closing
This paper was super exciting for various reasons. It combines my love for ML, outside-the-box thinking, and malware detection into a perfect paper. I will definitely be going over the papers mentioned in the references to learn more about the actual process.
I'm curious why certain CNN architectures were picked over others. Also, the approach could be extended by injecting noise into the malware images. In one of my early articles: How Did Google Researchers Beat ImageNet While Using Fewer Resources? we saw how noise makes models more robust, more generalizable, and far, far cheaper to train. By adding noise here, we might create models that perform well not only on the malware that exists now but on future malware as well.
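A sketch of what that extension might look like: Gaussian noise injected into the 8-bit malware images before training (my hypothetical extension, not something the paper does):

```python
import numpy as np

def add_gaussian_noise(images, std=10.0, seed=0):
    """Perturb 8-bit grayscale images with Gaussian noise, clipping back to [0, 255]."""
    rng = np.random.default_rng(seed)
    noisy = images.astype(np.float64) + rng.normal(0.0, std, images.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

batch = np.full((2, 8, 8), 128, dtype=np.uint8)  # toy batch of 8x8 "malware images"
noisy = add_gaussian_noise(batch)
print(noisy.shape, noisy.dtype)  # (2, 8, 8) uint8
```

Applied as a data-augmentation step, this would force the classifier to rely on coarse byte-pattern structure rather than exact pixel values, which is one plausible route to the robustness discussed above.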
Reach out to me
Thank you for reading this. I am dropping all my relevant social media below. Follow any (or all) to see my content across different platforms. I like to use the strengths of different platforms. Leave any feedback you might have, as it really helps a growing content creator like myself. If you found this useful, please share the article.
I’ve shortened the URLs using this great service. They do great work, so show them some love. This is not sponsored, but it’s always good to promote useful work.
Check out my other articles on Medium. : https://rb.gy/zn1aiu
My YouTube. It’s a work in progress haha: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y
My Twitter: https://twitter.com/Machine01776819
My Substack: https://devanshacc.substack.com/
If you would like to work with me email me: [email protected]
Live conversations at twitch here: https://rb.gy/zlhk9y
To get updates on my content- Instagram: https://rb.gy/gmvuy9
Get a free stock on Robinhood: https://join.robinhood.com/fnud75
Paper
As promised, the annotated paper is below. Be sure to read it yourself to get a better understanding of the research.
The problem of identifying design with problem solving | Many designers of all disciplines, some of them renowned names, keep in their speech the statement “design is problem solving” with the good intention of giving design a supposedly lost image of functionality and rigor.
I understand the use of "… is problem solving" as a metaphor, as an attempt to distance design from the ornamental and whimsical. I can also understand it as an attempt to elucidate what we can contribute to a project: not only decorating the cake, but also dedicating our efforts to noble causes, which could be as important as "problem solving". The statement resembles a softer version of the typical "improving people's lives".
This simplification might be useful in some situations, as the definition of problem can be roughly applied to almost anything. However, I believe this definition lacks precision, which leads me to reflect on it.
This expression is repeated as a slogan here and there, but very rarely the concepts of problem and solution are fully explained. At best, the term is accompanied by an explanation of its intended meaning: design detached from art, fully functional or everything has a purpose. That’s fine, it’s just explaining what design is, but there is a mismatch between expression and explanation.
If a client or stakeholder is so inexperienced as to not understand that design is not the same as styling, alluding to "it's problem-solving" without further explanation may only increase confusion.
It seems that this speech implies an inferiority complex, a mediocre solution to an alleged problem as old and tiresome as design itself: being seen as stylists.
Leaving styling behind
Some designers, maybe as a reaction, seem to want to transcend the aesthetic design aspect, even avoiding bringing it to the table. Among them we could include some who produce beautiful work. To speak about beauty has become uncomfortable. It could become a time-consuming argument and deflect attention from core design tasks.
I admit that sometimes I have avoided this issue when dealing with a customer.
Others place design work focused purely on function, efficiency and measurable results at a higher level. White background and blue links on digital products, flat aluminium sheets and visible screws in physical products. A seasoned designer knows that sometimes it doesn't make sense to spend resources on a delightful look and feel, and would deliver valuable products with a naked design approach, very appropriate in some cases, but not all.
It is also true that sometimes I've felt very proud of having met a client's expectations with this kind of design.
Maybe we are taking the idea that "beauty is the result of right" to the extreme (a Japanese proverb quoted by Bruno Munari in "How Objects Are Born"). It could be interpreted that when everything makes sense and works, it is automatically beautiful.
Obviously, to solve problems and achieve beauty are not incompatible activities, so, why is problem solving not the best way to explain our work?
First, we should know what we are calling a problem: a perceived gap between what we have and what we want, a situation we want to change.
Common local problems
When working on the design of a product, what I usually consider problems are issues or barriers to overcome so the product will reach its goals, rather than the client's briefing. It's the aforementioned distance between the state we have and the state we want, where the state we want is the design brought to reality and working perfectly. They could be called local problems, in contrast to the global problem stated in the briefing.
Those local problems are something to overcome, but solving them is not enough to complete a great job; they are at the base of the pyramid. I'm talking about a non-obvious joint between parts, a shape that increases manufacturing cost, slow queries to a database, too much information on one screen, a brand without enough contrast on a particular background.
They are well-defined problems, if we find one solution, it can be enough. In this case, designers and engineers can use a process of analysis and subsequent synthesis.
The approach to dealing with such problems is engineer-like, assuming that the problem can be well defined and broken down to address each cause, and that a solution that can be true or false can be built.
With this approach I don’t want to undervalue or detach engineering and design, quite the contrary, but just highlight differences between both approaches and how they complement each other.
Engineering vs. design (Design paradigms. Peter Ljungstrand)
Engineering approach
Define problem
Look for best solution
Assumes the problem can be well-defined
Divide-and-conquer
Based on analytical and mathematical skills
Solution is true or false
Design approach
Python Web Scraping Tutorial | <Response [200]>
This will be the output we get. Great, a 200 response means that the page was fetched successfully. Let's now use our Beautiful Soup module to create an object. Add the below into the file.
# Create a BeautifulSoup object
soup = BeautifulSoup(page.text, 'html.parser')
print(soup)
Output when running this new file
When we run the file, we get the entire HTML of the GitHub trending page! Let's now explore how we can extract the useful data.
Extracting data
Highlighted shows ‘repo-list’
Head over to your browser (Chrome in this case) and open up the GitHub Trending Page. Click inspect anywhere, and you can see that the entire body of our wanted data is in the tag <div class="repo-list"> so the class repo-list should be our initial focus.
Each individual repository information
Next, we can see that each of the repositories is defined in the <li class='col-12 d-block width-full py-4 border-bottom'> tag. This is what we will retrieve next.
Your code should now look like this. If you run this script now, the output should show 25
Next we will iterate through each item of the list to retrieve the desired information.
Repository Name
Highlighted shows the tag that displays full repository name
The above snip shows that the full repository name occurs under the very first <a> tag. We can extract the text from it. Since it returns a string with / in between, we can split the string using / to get an array of strings. The first index will have the developer name and the next index will have the repository name.
Number of Stars
Stars are defined using an <svg> tag with class octicon
Since not all repositories contain the number of stars as the first element, we cannot use the position to retrieve the number of stars. However, we can see that the <svg> that defines the star and the number of stars itself are under the same parent. So if we get the <svg> by using the class octicon octicon-star, we can get the parent and then extract the text (which will be the number of stars).
For loop
I have already implemented the loop as shown above. For each item in our repo_list (which contains 25 items), let’s find the developer, repo name and the stars.
Run the above code and the output should be something like this:
Output showing the 3 fields of information requested
Great! We can print what we have set out to achieve. Printing is good on its own, but it would be even better if we could store it somewhere, such as in a CSV file. So let's save this information there.
Saving it as CSV
First we need to import the built-in csv module as such:
import csv
Then we need to open a file and write the headers into our csv file:
# Open writer with name
file_name = "github_trending_today.csv"
# set newline to be '' so that new rows are appended without skipping any
f = csv.writer(open(file_name, 'w', newline=''))
# write a new row as a header
f.writerow(['Developer', 'Repo Name', 'Number of Stars'])
Next, in the for loop, we need to write a new row into our csv file
f.writerow([developer, repo_name, stars])
That is all you need to save the trending information onto our csv file!
This is what our script looks like finally. Once you run it, you will see a new file github_trending_today.csv appear in our folder. If you open it, it will look like this:
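Since the final script was shown only as a screenshot, here is a reconstruction of it from the steps in this tutorial (a sketch, not the author's exact code; the GitHub CSS class names are the ones referenced above and may change over time):

```python
import csv
from bs4 import BeautifulSoup

def parse_trending(html):
    """Extract [developer, repo name, stars] triples from the trending page HTML."""
    soup = BeautifulSoup(html, 'html.parser')
    rows = []
    for repo in soup.find_all('li', class_='col-12 d-block width-full py-4 border-bottom'):
        # The first <a> holds "developer / repo-name"
        full_name = repo.find('a').text.strip().split('/')
        developer, repo_name = full_name[0].strip(), full_name[1].strip()
        # The star <svg> and the star count share the same parent
        star_icon = repo.find('svg', class_='octicon-star')
        stars = star_icon.parent.text.strip()
        rows.append([developer, repo_name, stars])
    return rows

def main():
    import requests  # imported here so parse_trending stays usable offline
    page = requests.get('https://github.com/trending')
    with open('github_trending_today.csv', 'w', newline='') as fh:
        writer = csv.writer(fh)
        writer.writerow(['Developer', 'Repo Name', 'Number of Stars'])
        writer.writerows(parse_trending(page.text))

# main()  # uncomment to fetch today's trending page and write the CSV
```

Splitting the parsing out into its own function also makes it easy to test against saved HTML without hitting the network.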
Scraped Information
Great! You have completed a simple tutorial to extract website information using Python!
Make room for curiosity along with space to fail and invite everyone in | Make time to be curious and space to fail
In thinking about empathy, an oft mentioned concept in human-centered design these days, it is paramount to step into someone else’s shoes. For yourself and your relationship with others too. Observe, watch, listen, you will learn so much. This is true in user research, design critique, for our team members, for home, for peers, for strangers.
Last year, I attended a training called So Now You’re a Manager by Plucky and one of the things that we learned was the three levels of listening and how they might be used. This is an approximation of my notes from that training augmented by some findings from the Internet:
1. Internal listening — this level of listening is focused inward instead of externally, and it is very personal. It might include an inner dialogue that involves our own thoughts, opinions, judgments, feelings and conclusions.
2. Focused listening — this level is at once intuitive and yet directed. It is a more active type of listening that requires concentration and interpretation to hear the context and "unsaid" themes and messages.
3. Whole body listening — this level is the most active of the three. When in this space, the listener is aware of more than just what they are hearing, but also the entire environment.
It’s true, the current state of the world is one of distraction — 24 hour news cycles, smart phone notifications, voice interfaces all around us — but I think actively listening while sitting in the tension of silence will create fundamentally better team results as well as design outcomes which makes for stronger products.
Speaking of distraction, I want to say something about time management. I don’t think it’s possible to manage time. Unless you’ve got a time machine stashed somewhere. True, you can find more efficient ways to do things but the key revolves around focus and attention. See also my conversation with Jake Kahana on the second episode of How This Works.
You know, when I first became a manager, I wanted to “prove myself” to my reports and colleagues. I wanted them to know that I knew what I was doing and that I was on top of everything. So because of that and other reasons, I often spoke too much and did not listen near enough. But over time and by watching other managers, I realized that I was tripping myself up.
As a way to address this, I now resolve to listen at least 3x more than I speak. Instead of waiting for my chance to say something, I dig into what someone else is trying to say, I mean really listen, and make sure that person is able to fully speak their mind.
But listening is only part of the equation. Without room to fail or margin for error, the tolerance for risk within an organization drops to zero, and the organization can't keep up with (let alone race ahead of) the competition or grow properly by learning from its mistakes. There are many procedures and processes within organizations that point to this general idea of learning from failures to improve future performance — end-of-project reviews, postmortems, and retros. But these often don't lead to the desired change. Why? We have such a complicated relationship with the notion of failure. In 2011, the Harvard Business Review wrote an article where they talked about this idea:
First, failure is not always bad. In organizational life it is sometimes bad, sometimes inevitable, and sometimes even good. Second, learning from organizational failures is anything but straightforward. The attitudes and activities required to effectively detect and analyze failures are in short supply in most companies, and the need for context-specific learning strategies is underappreciated. Organizations need new and better ways to go beyond lessons that are superficial (“Procedures weren’t followed”) or self-serving (“The market just wasn’t ready for our great new product”). That means jettisoning old cultural beliefs and stereotypical notions of success and embracing failure’s lessons.
Without making time to listen and making room to fail, you might as well be trying a 5.15 rock climb (read: really hard) with one hand tied behind your back.
Interview with Rajat Shroff | Interview with Rajat Shroff
VP of Product at DoorDash
Hi Rajat, tell me about what you do at DoorDash.
I run Product and Design. The teams are responsible for developing product strategy and executing visions to help DoorDash achieve its mission of helping local communities and businesses reach their full potential.
What does the DoorDash Product team work on?
There are two aspects of the product team’s work. One is based on the audience group we support: Consumer, Merchant, and Dashers. Consumer team is focused on helping consumers get their food faster at a more affordable price. Merchant team is all about helping the merchants grow their business and operate more efficiently. Dasher team enables our Dashers to earn more on the DoorDash platform with flexibility.
Another set of teams is based on the new vertical products that touch on all three audiences in the DoorDash ecosystem. Some examples are Catering, Convenience, and Grocery.
What made you decide to join DoorDash 3 years ago?
I worked on several products that have helped small businesses find ways to grow over 10 years. It’s often harder for small businesses to capitalize on the ROI of online systems like SEO and ads. Most existing solutions require merchants to heavily discount or advertise their products, and merchants don’t have the flexibility they’d like to attain specific goals.
I’ve been looking for a company that lives by a merchant-first approach and found that DoorDash was exactly that. DoorDash provides the flexibility to merchants to solve their unique challenges, whether it’s about bringing new customers, creating loyalty programs, introducing new products, expanding to catering, building their own marketing channels, etc.
You’ve been leading the products in many other tech companies in the past. What makes our product org stand out?
First of all, we made an intentional decision to keep the product and design team small, so that each of us has a tremendous amount of leverage. The impact each of us makes should be visible from space.
Secondly, we don't think about what we build as a "product." Rather, we think about what we build as a "service." We know that our customers come to DoorDash not because of the app itself, but because they need a service. For consumers, it means getting their food on time from their favorite merchants. For merchants, it means getting more orders from more customers and growing their businesses. For Dashers, it means earning more money as efficiently as possible.
To deliver the service-driven product, you have to collaborate with Sales, Support, and Operation teams closely. Every PM and Designer needs to understand operational challenges and then use technology to scale the solution. And a lot of times, this doesn’t happen while sitting behind the desk. A lot of our teams are experiencing each audience in the field. For example, our merchant team would work behind the counter or spend days in the kitchen to observe the merchant’s operational challenges. Our Dasher team would ride along with Dashers to understand the challenges around parking, pickup, and dropoff processes.
Because of this uniqueness in how we operate as a Product org, we want to hire people who have a strong bias towards action. They need to have ambition but should be able to break up problems into small parts to test their hypotheses faster. Curiosity is another important one as they need to know the ins and outs of their customers. As we are tightly knitted with cross-functional teams on all problems we solve, collaboration ability is also critical.
How would you describe the culture of DoorDash Product and Design?
Foremost importantly, a strong sense of ownership. The leadership team provides goals and then from there, how each team achieves their goals truly comes from bottom up. PMs and Designers obsess about their audience, define the strategy, and drive the solution.
Another unique culture about us is speed, a breakneck speed. We spend a lot of time prioritizing so that the execution itself can go fast. We create projects fast, launch them fast, and also kill them fast if they don’t work. Agility and resilience have been in our DNA from day one.
How do you work with the Design team day to day?
I’m quite closely plugged into the design processes end to end, from the product briefs to vet the problem statements as a team, the design reviews to iterate the design strategy and execution, to the ship review to ensure the quality of the product we’re shipping. I also closely work with the research team to understand customer insights.
Besides those recurring reviews, I also enjoy strolling around the design team's area in the office. This is when I hear a lot of interesting new ideas from designers in more informal settings. Often I'm impressed by the team thinking beyond what's on the roadmap to bring more delight to our customers, and I feel excited to help bring those visions to life.
What do you think the Design’s role is in the product organization?
The design team is the keepers of customer delight and customer love. They are the ones who are the closest to what the customer touches at the end of the day. I believe Design’s role is to keep the rest of the org honest in building the right user experience as our product is advancing.
At DoorDash, designers are highly encouraged to bring a strong opinion and challenge the cross-functional partners. I value some healthy friction in debates as it uncovers new areas and broadens our perspectives. Also, I look to the design team to the rest of the company about how to apply the customer love to drive a delightful experience end to end.
Looking back a year, what excites you about the Design team at DoorDash?
For the past year, the team has literally tripled its size and we have brought in many new talents into DoorDash. Now there’s a robust process that enables us to move faster while keeping the design standard high. The design team has also found a strong voice within the company, and they feel more empowered to bring perspectives in decision-making processes.
All of these make me very excited about where the Design team is now as well as what it’ll become in the future.
Do you have any advice to give to Design/Content/Research candidates who are considering to join DoorDash?
The problems we’re solving at DoorDash are difficult and complex, involving multi-sided audiences, online and offline. And we provide lots of high leverage opportunities for folks to come and invent new things and make an impact that’s visible from space. Adding to that, our mission of uplifting our local communities is very relevant for the current climate we are in. If you’re ready to learn a lot from amazing design leaders and make a massively outsized impact, this will be the best team to join!
Thanks, Rajat for your time! :-)
=======
Please learn more about other leaders at DoorDash:
Christopher Payne—Chief Operating Officer
Kathryn Gonzalez — Manager for Design Infrastructure
Radhika Bhalla — Head of UX Research
Sam Lind — Sr Manager for Core Consumer Design
Tae Kim — UX Content Strategist Lead
Tony Xu — Chief Executive Officer
Will Dimondi — Manager for Merchant Design
How Effective is The Coronavirus Vaccine? | How Effective is The Coronavirus Vaccine?
Preliminary reports have shown that the vaccine is about 94% effective in preventing SARS-CoV-2.
How Effective is The Coronavirus Vaccine? Source: Pexels
To understand the effectiveness of the coronavirus (SARS-CoV-2) vaccine, we need to understand exactly what effect a vaccine has on the human body. For this, first of all, we need to understand how the human body prevents any infection.
When a bacterium or virus enters our body, its main goal is to increase its own numbers. Germs accomplish this purpose by using different components of our body and as a result, we get sick. The human body’s immune system is also designed to prevent infection with various weapons.
Weapons of the immune system
There are two stages of immunity in our body. The first is innate immunity, which can be called the 'first line of defense'. This response starts as soon as you come in contact with a germ (within minutes or hours). Germ cells carry certain molecular patterns that are found only on germs. In innate immunity, our body identifies enemies by looking for these patterns. After identifying the enemy, a special type of cell called a macrophage starts acting. It is a special type of white blood cell that can, in a word, swallow a variety of germs or dead cells. However, this innate immunity is not long-term. But the macrophages activated by the awakening of innate immunity are the ones that trigger the next level of the immune response.
The second step is adaptive or acquired immunity, which is what gives us long-term protection from any germ. Some special types of white blood cells, B-cells (B-lymphocytes) and T-cells (T-lymphocytes), help to develop adaptive immunity. B-cells are mainly made in the bone marrow, hence the name 'B' cells. They basically make antibodies that later attack a special element (antigen) that comes from the germ. Since these antibodies can be detected in our body fluids, or humors, this type of response is called the humoral immune response. The bone marrow is also the source of T-cells, but in their immature state they move to the thymus, where mature T-lymphocytes are formed. And that's why they are called 'T' cells. Their main job is to attack and destroy infected cells.
Learning to recognize the enemy
Just like it takes some time for you or me to learn to do something for the first time, it takes our body a lot of time to identify the enemy and prevent disease when the first infection occurs. But in the future, if the same bacterium or virus attacks again, our immune system does not delay a moment in recognizing the enemy. This is made possible by special types of B- and T-lymphocytes, the so-called memory cells of the immune system. These memory cells instruct B-lymphocytes to make antibodies whenever they see a known germ.
The first time an infection occurs, it takes a long time for our body to identify the enemy and prevent the disease.
This time let's talk about vaccines. Speaking of the first attack by an unknown enemy for so long, the vaccine mimics exactly that. It has two main advantages. First of all, we don't have to get as sick as we would if we were attacked by an unknown germ. This is because vaccines are made with disabled germs/viruses or special parts of germs/viruses (such as proteins, DNA or RNA, etc.). Second, our immune system also undergoes training, which creates memory T-cells and B-cells, so that in the event of a future attack by this germ, it can respond immediately. But keep in mind that the vaccine does not make us immune instantly. After being successfully vaccinated, our body takes quite some time (at least 10-15 days) to produce immune B-cells and T-cells. Therefore, if someone is infected just before or shortly after getting the vaccine, they will still get sick.
But is it enough to get vaccinated once? This, of course, can differ from vaccine to vaccine. There are some vaccines that can alert our immune system with a single dose, but some vaccines require multiple doses, called booster doses. Booster doses may be required for different reasons. Some vaccines do not produce an adequate immune response with the initial dose, so a booster dose is needed (e.g., the meningitis vaccine).
The various steps involved in creating a vaccine
But creating a vaccine is no easy task. There are many steps that can be taken to make a vaccine for a disease.
There are six steps to creating a vaccine:
Preliminary studies have shown that this vaccine is able to prevent SARS-CoV-2 infection in non-human primates (such as monkeys, orangutans, gorillas, etc.). In the phase-1/2 clinical trial, the vaccine was given to 543 volunteers in initial and booster doses (26 days apart). Tests have shown that the vaccine produced antibodies in most volunteers. It is undoubtedly a beacon of hope. The vaccine is currently undergoing a phase-3 clinical trial with 30,000 volunteers in different countries. However, the trial was paused for a few days after a volunteer fell unexpectedly ill. The trial resumed in Britain on October 5. Recent results have shown that this vaccine works well in older people (between the ages of 60 and 70). However, we will have to wait until the end of the phase-3 trial for the final result. The vaccine is a joint venture between the Indian pharmaceutical company Serum Institute and AstraZeneca.
Another vaccine, developed by Johnson & Johnson, had its phase-3 trial with 60,000 volunteers in the United States and Belgium temporarily suspended by the company. The reason was, again, an unexpected illness. According to experts, this is not uncommon in vaccine trials. But the good news is that the trial restarted in the last week of October.
The vaccine of a joint venture between the National Institutes of Health and Moderna has also yielded the expected results and is also in phase-3 clinical trials. Preliminary reports have shown that the vaccine is about 94% effective in preventing SARS-CoV-2. However, it is not possible to say anything definitively until the end of the phase-3 clinical trial.
Pharmaceutical companies Pfizer and BioNTech have jointly developed a number of mRNA vaccines, and two of these vaccines (BNT162b1 and BNT162b2) have yielded the expected results in phase 1/2 clinical trials. Preliminary data from a recent phase-3 clinical trial in the United States showed that the BNT162b2 vaccine is 90% effective in preventing SARS-CoV-2 infection. However, the full results of this phase-3 clinical trial may be available later this year.
Most of the vaccine phase-3 clinical trials so far have been able to produce significant amounts of antibodies in the human body. However, almost every vaccine requires a booster dose after the initial dose, and in each case some initial symptoms have been observed after vaccination, such as fever, body aches, headache, and dizziness. Such symptoms are common after any vaccination, and researchers at Oxford have found that common paracetamol reduces them.
All the vaccines I have talked about so far have to be injected intramuscularly. Recently, a team of scientists from the Washington University School of Medicine has been experimenting with an intra-nasal vaccine. They found that the vaccine was much more effective as a nasal spray than as an intramuscular injection. Studies have shown that this vaccine is able to prevent the transmission of SARS-CoV-2 in the upper and lower respiratory tract in rats. The Indian vaccine maker Bharat Biotech will launch a phase-3 clinical trial of the vaccine in India in a joint venture with the Washington University School of Medicine.
Other barriers to effective vaccine development
In the light of so much hope, scientists still have some questions. One reason is second corona infections, reports of which have come in from several places in the world. But why is this happening? Is our body not able to build long-term immunity? Or is the second infection caused by a different strain or type of the same virus? Scientists still have no clear idea about this. Patients infected with coronaviruses related to SARS-CoV-2, such as SARS-CoV-1 or MERS (Middle East Respiratory Syndrome), have been shown to retain immunity for 2–3 years; after 5–6 years, it decreases. Since the novel coronavirus, SARS-CoV-2, is a completely new virus, nothing is known about immunity after infection with it. Scientists are constantly trying to understand the whole picture from new information. And that’s why, no matter how promising a vaccine may be, we still don’t know how long the antibodies it produces will last in our body. Or whether the same vaccine will work equally well for people of all ages or geographical locations. Time will tell.
There is another problem: the storage of so many vaccines. Most of the vaccines that are still being worked on have to be kept at -20 or -60 degrees Celsius, which can be very costly for a poor country.
Another problem with vaccine development is antibody-dependent enhancement of disease, or ADE for short. Paradoxically, antibodies that are meant to protect us from a disease can sometimes increase its spread in an unwanted way. Experts believe this could happen if the vaccine does not produce enough antibodies. Viruses that attack macrophages are thought to carry a higher risk of ADE. Although ADE has been observed in pre-clinical trials in some (non-human) animals vaccinated against SARS-CoV-2, it is not yet clear whether a SARS-CoV-2 vaccine will cause ADE in humans. This requires many more experiments.
After all, the way scientists around the world are working on vaccine research may one day find a solution to this disease. But once an effective vaccine is made, it will take at least 2–3 years for it to reach every human being. So now it is our duty to wear masks, wash our hands frequently with soap and avoid unwanted crowds. That way, we can protect ourselves and the people around us.
Note: Thymus: This is a type of gland located between the two lungs of the human body, at exactly the same height as the heart. During the first year after a child’s birth, the thymus grows rapidly in size; then it grows very slowly until puberty. This gland secretes a hormone called thymosin, which stimulates the formation of mature T-cells. | https://medium.com/illumination/how-effective-is-the-coronavirus-vaccine-3a6d1dc891a1 | ['Samrat Dutta'] | 2020-12-17 15:10:37.083000+00:00 | ['Covid Vaccine', 'Science', 'Virus', 'Vaccines', 'Covid 19']
Gmail’s Latest Consent Box Is a Privacy Mousetrap | When I logged into my Gmail today, I saw a consent box asking me a bizarre set of permissions:
Courtesy: Gmail
Due to my inherent skepticism for anything that offers a smartcut (smart shortcut), I clicked Turn Off + Next to proceed.
Here is what I saw after clicking Next:
Courtesy: Gmail
I was baffled.
My first instinct was: Was this designed by a professional UX designer? How can Google expect me to flip a switch to turn-on/turn-off 12 features of Gmail?
Or was this designed by a graduate of Google’s $49 courses, which include User Experience design?
On an optimistic note (with sincere apologies to the aspiring $49 course students 😔) — maybe this is the reason Google started taking UX seriously, and we can expect better UX from Gmail in near future.
An age-old business tactic:
Smart businessmen employ this technique every time they want a big slice of the customer’s assets. It works great when the following two conditions are true:
You must offer something tangible, of infinitesimal value with utmost urgency to act from the customer
Customers are too lazy + too greedy to even think about what they are giving up
Gmail caught me in the middle of something very urgent, and I was in no mood for second thoughts. But I wasn’t greedy enough: something stopped me from consenting too fast.
I remembered the Europeans’ colonial past.
When Britishers and other European traders went to the rich colonies of Asia, they offered precious gifts to colonial kings. In return for those gifts, the kings allowed them to sell merchandise on their land. Slowly, the Europeans began to offer their armies to the kings. What was the selling point? Security for the kings from neighboring states. Kings deserved royal lives, not the dust of savage warzones.
Lazy and foolish kings gladly accepted. When they realized the whole game, it was too late. Armed with the industrial revolution, Europeans (mostly Britishers) captured the colonies and began to rule the world.
Tech megaliths are no different from European colonists. Gmail is a prime example of how an advertising company leverages its freebie mousetrap.
Due to its email categorization + spam separation + the speed of the Chrome browser, it became the choice of many Yahoo users at the beginning of the 2010s. People loved it for its great categorization of promotions and social notifications.
At the onset of AI, Gmail began to monetize its dominance by analyzing the content being communicated. Gmail’s targeted ads were no secret to anyone. Since its inception, when you click a link from within the Gmail inbox, the first domain in your browser’s URL is a Google domain, followed by a redirect to the actual link.
Gmail’s text parsing of the content is being offered as a smart capability to aid grammar checks:
That’s Google’s requirement. It needs your data because it is vital to its core business. There is nothing wrong with this, as far as it is consented by you.
And the consent is well-communicated.
What does the 12-Settings Box (Possibly) Do?
Credit: Gmail (Edited by author)
Let us revisit the 12-feature box.
All the items with a red arrow pointing to them require deep analysis of the email content. (Yes, the content and the intent of the emails, not just disparate words to target tag-centered ads.) You can totally do away without them if you aren’t forgetful about the events of your life.
All items with the purple arrows are similar in their data-gathering + analysis. The only difference is that they provide significant productivity to you — the user, in your daily chores of dealing with 150+ emails every day.
All items with blue arrows are items that (possibly) do not rely on your private/personal data and are helpful to you in boosting your communication productivity. I have kept the last item (Google Pay) under the same belt because your Google Pay merchants are already known to Google. Allowing them to convert better isn’t wrong, as long as it doesn’t invade your privacy without your consent.
The problem isn’t that all this data is being collected from you. Gmail is free as of today, and free offerings have to be ad-supported.
The problem is that Google has combined them under single consent. Clubbing 12 consents together is no consent at all.
Do you want productivity? Accept our data-collection features that power our AI. I say this because I clearly remember smart suggestions being a separate Gmail setting very recently.
So what did my turn-off bring to me?
I wanted to see what my dreadful future held if I shielded myself from the red + purple arrows, even if it meant giving up the productivity advantages of the blue arrows as well.
As soon as I clicked Turn-off, my inbox turned into a nightmare.
The #1 blue arrow in the image above, and Gmail’s most attractive feature — those segregated tabs that separated Promotions, Social, and Inbox — was gone!
I knew it would be gone, but I hadn’t imagined the impact.
My Inbox turned into 35000+ unread emails, something that I had never allowed to happen priorly.
In the last 10 years, we have become so accustomed to the Promotions + Social tabs that we rarely even check them, let alone delete unnecessary emails from them. (Sometimes even necessary emails landed there, but never mind; it’s better than getting spammed.) Their advantages outweighed their pitfalls.
They need your productivity more than you do.
I was still skeptical: This wasn’t something they needed. It’s me who needed smart features more.
But I quickly knew. Seconds after turning them off, I was offered another opportunity. I saw ugly Gmail prompts littered throughout my inbox, luring me to turn on the smart features.
My inconvenience, coupled with their re-offering it so soon, made my mind dwell heavily on those 17th-century European colonial tactics.
If it’s too good to be true, it probably is.
Parting Words (What Shall I Do)?
I do not know if Yahoo is aggressive on features as of today. But I am pretty sure it isn’t aggressive on data-collection.
My older yahoo account has 10000+ unread emails to date. I have been switching all my accounts access to Gmail since 2010, the year when Yahoo was still ahead of Gmail.
Despite my crucial emails residing in Yahoo, and despite their switching to Gmail-like conversation view, I have been reluctant to switch back.
The only reason I am not switching back to Yahoo is that I dread sifting through (and deleting) those dreadful 10000+ unreads.
As I looked into Gmail Promotions and Social emails, I was reminded that they were simply segregated by labels. If I filter them, I could delete them in bulk. I could even create a rule to move it to chosen folders, but as of now, I do not see rule creation in my Gmail settings.
Given that Gmail allows me to view only 100 emails at max per page, filtering + deletion would take me 350 iterations to delete 35000 emails.
A layman is less likely to go that deep and would simply turn on the smart features. And if I choose not to be a layman, I must do the cleanup manually every week, now that the Promotions + Social tabs are gone.
And when it comes to deleting unread emails, 35000+ is 3.5 times more than 10000. | https://medium.com/swlh/gmails-latest-consent-box-could-be-a-privacy-mousetrap-a663c8e390f | ['Pen Magnet'] | 2020-12-08 11:54:22.009000+00:00 | ['Privacy', 'Gmail', 'Productivity', 'Work', 'Email'] |
Knowledge is an Obstacle to Knowledge! | There are oysters that live at the bottom of the ocean. A little bit of the light we enjoy up here is able to reach down there somehow. But the oysters have no chance to see the blue ocean; for them the blue ocean doesn’t exist. We human beings are walking on the planet. When we look up we see the constellations, the stars, the moon, the blue sky, and when we look down we see the blue ocean. We consider ourselves to be much superior to the oysters, and we have the impression that we see everything and hear everything. But in fact, we are a kind of oyster. We have access only to a very limited zone of suchness.
Our perception of something tends to be based on the ground of our previous experiences. We have experienced something in the past, we compare it with what we encounter in the present moment, and we feel that we recognize it. We paint the new information with the colors we already have inside us. That’s why, most of the time, we don’t have direct access to reality.
Often it is our own knowledge that is the biggest obstacle to us touching suchness. That is why it’s very important to learn how to release our own views. Knowledge is the obstacle to knowledge. If you are dogmatic in your way of thinking, it is very difficult to receive new insights, to conceive of new theories and understanding about the world. The Buddha said, “Please consider my teaching to be a raft helping you to the other shore.” What you need is a raft to cross the river in order to reach the other shore. You don’t need a raft to worship, to carry on your shoulders, or to be proud that you possess the truth.
The Buddha said, “Even the Dharma has to be thrown away, not to mention the non-Dharma”. Sometimes he went further. He said that, “My teaching is like a snake. It is dangerous. If you don’t know how to handle it, you will get bitten by it.”
One day in a meeting, a Zen master said, “Dear friends, I am allergic to the word ‘Buddha.’” You know, he is a Zen master, and he talks about the Buddha like that. “Every time I am forced to utter the word ‘Buddha’ I have to go to the river and rinse my mouth three times.” Many people were confused, because he was a Buddhist teacher; he was supposed to praise the Buddha. Fortunately, there was one person in the crowd who understood. She stood up and said, “Dear teacher, every time I hear you pronounce the word ‘Buddha’, I have to go to the river and wash my ears three times.” This is a Buddhist example of a good teacher and a good student!
References:
Buddha Mind, Buddha Body — Thich Nhat Hanh (Book)
See Also: | https://medium.com/devansh-mittal/knowledge-is-an-obstacle-to-knowledge-5bd490387217 | ['Devansh Mittal'] | 2019-10-08 14:32:01.957000+00:00 | ['Philosophy', 'Spirituality', 'Buddhism', 'Psychology', 'Religion'] |
15 Free Courses to Learn Python in 2021 | 15 Free Courses to Learn Python in 2021
A curated list of some of the free online courses to learn Python.
Hello guys! If you are a beginner looking for some free Python resources to start your programming journey in 2021, then you have come to the right place.
Earlier, I shared a couple of free Python programming eBooks, and today I’ll share a couple of good Python programming courses that are absolutely FREE!! You can take these best online courses to learn Python at your own pace, in your own time, and at your own place.
This is a great advantage of online learning: the flexibility it provides is just awesome. You just need a laptop or a smartphone with an internet connection, and you can learn anything.
Btw, before starting with the list of courses to learn Python programming I want to congratulate you on making the right decision to start your programming journey with Python.
Many beginners, students and people starting with programming ask this question to me every day. Should I start with Python or Java? Even though I am a Java developer, I ask people to start with Python because of its awesome and multi-purpose features.
Python is easy to learn, easier even than Java. You can also write small Python scripts to quickly automate things you normally do manually, and that provides great value to beginners.
Python is also powerful, feature-rich, and multi-purpose. For example, you can use Python for web development, you can use it to create scripts, and you can even use it in the space of Data Science and Machine learning.
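As a taste of that scripting convenience, here is a tiny, self-contained example of the kind of automation Python makes easy — sorting a messy downloads folder by file type. The file names are made up purely for illustration:

```python
from pathlib import Path

def group_by_extension(filenames):
    """Group a list of file names by their (case-insensitive) extension."""
    groups = {}
    for name in filenames:
        ext = Path(name).suffix.lower() or "no_extension"
        groups.setdefault(ext, []).append(name)
    return groups

# A hypothetical downloads-folder listing
files = ["report.pdf", "photo.JPG", "notes.txt", "archive.zip", "selfie.jpg"]
print(group_by_extension(files))
```

A dozen readable lines like these are often all it takes to replace a tedious manual chore, which is exactly why Python is such a friendly first language.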
This seriously makes learning Python an important skill that will pay you throughout your career.
I have always advised my readers and students that, along with SQL and UNIX, they should also learn Python. It is one of the great programming skills every programmer should have, and that’s the reason I have listed it as one of the essential tools for programmers.
Btw, if you don’t mind paying a small amount for learning something valuable as Python then you can also check out The Complete Python 3 Bootcamp. It’s not free but it’s completely worth your time and money.
15 Free Courses to Learn Python Programming
Now that you know that learning Python is great for your programming career, it’s time to actually learn Python. Whenever I start with a new technology, I usually follow my 3-point model: join an online course, buy a book, and do a project.
This way I have mastered several new technologies all by myself without going into expensive coaching classes or boot camps and nothing beats free resources to start with.
In the past, I have shared top books and courses to learn Python, and today I will share some of the best online courses you take to learn Python Programming for free.
The project part is something that you can do yourself once you learn Python by going through these courses and books.
1. Introduction To Python Programming
If you need a quick brush-up or learning Python for the first time then this is the perfect course for you.
It is quite amazing that the instructor himself is a 17-year-old student, and this Python course has more than 130K students enrolled on Udemy, which speaks volumes about the course.
Here is the link to join the course: Introduction To Python Programming
This course is a one-stop-shop for everything you’ll need to know to get started with Python, along with a few incentives.
You will start with the basics of Python, learning about strings, variables, and getting to know the data types. You will then learn other essential programming constructs e.g. loops and conditions in Python.
The course also teaches you file manipulation and functions. In short, a Quick and Easy Intro to Python Programming. | https://medium.com/swlh/5-free-python-courses-for-beginners-to-learn-online-e1ca90687caf | [] | 2020-12-08 08:59:23.835000+00:00 | ['Programming', 'Coding', 'Python', 'Software Development', 'Web Development'] |
Deep Learning Using Raw Audio Files | Feed raw audio files directly into the deep neural network without any feature extraction.
If you have observed, conventional audio and speech analysis systems are typically built using a pipeline structure, where the first step is to extract various low-dimensional hand-crafted acoustic features (e.g., MFCC, pitch, RMSE, chroma, and so on).
Although hand-crafted acoustic features are typically well designed, it is still not possible to retain all useful information, due to human knowledge bias and the high compression ratio. And of course, the feature engineering you will have to perform depends on the type of audio problem you are working on.
But, how about learning directly from raw waveforms (i.e., raw audio files are directly fed into the deep neural network)?
In this post, let's take the learnings from this paper and try to apply them to the following Kaggle dataset.
Go ahead and download the Heartbeat Sounds dataset. Here is what one of the sample audio files from the dataset sounds like:
Each file in the downloaded dataset is labelled either “normal”, “unlabelled”, or one of the various categories of abnormal heartbeats.
Our objective here is to solve the heartbeat classification problem by directly feeding raw audio files to a deep neural network without doing any hand-crafted feature extraction.
## Prepare Data
Let’s prepare the data to make it easily accessible to the model.
extract_class_id() : Each audio file name contains its label, so let’s separate the files based on their names and assign each a class id. For this experiment, let’s consider “unlabelled” as a separate class. So, as shown above, in total we’ll have 5 classes.
convert_data() : We’ll normalize the raw audio data and make all audio files the same length: files longer than 10s are cut down to 10s, and shorter files are padded with zeros. For each audio file, we finally put the class id, sampling rate, and audio data together and dump them into a .pkl file, making sure along the way to keep a proper split between the train and test datasets.
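The two helpers described above might look something like the following sketch. The function names come from the article, but the label-prefix naming convention, the class list, and the pickle layout are my assumptions, not the article's exact code:

```python
import pickle
import numpy as np

# Assumed class list for the Heartbeat Sounds dataset (5 classes total)
CLASSES = ["artifact", "extrahls", "murmur", "normal", "unlabelled"]

def extract_class_id(filename):
    """Derive the class id from a file name such as 'murmur__201108222231.wav'.
    Anything whose prefix is not a known class falls back to 'unlabelled'."""
    label = filename.split("__")[0]
    return CLASSES.index(label) if label in CLASSES else CLASSES.index("unlabelled")

def convert_data(audio, sample_rate, class_id, out_path, duration=10):
    """Normalize raw samples to [-1, 1], pad/trim to a fixed 10 s length,
    and dump (class_id, sample_rate, audio) to a .pkl file."""
    audio = np.asarray(audio, dtype=np.float32)
    peak = np.max(np.abs(audio)) if len(audio) else 0.0
    if peak > 0:
        audio = audio / peak                              # normalize amplitude
    target = duration * sample_rate
    if len(audio) >= target:
        audio = audio[:target]                            # trim long files to 10 s
    else:
        audio = np.pad(audio, (0, target - len(audio)))   # zero-pad short files
    with open(out_path, "wb") as f:
        pickle.dump((class_id, sample_rate, audio), f)
    return audio
```

Loading the raw samples from .wav files (e.g. with librosa or scipy.io.wavfile) happens before this step; these helpers only standardize what goes into each pickle.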
## Create and compile the model
As written in the research paper, this architecture takes input time-series waveforms, represented as a long 1D vector, instead of hand-tuned features or specially designed spectrograms.
There are many models with different complexities explained in the paper. For our experiment, we will use the m5 model.
m5 has 4 convolutional layers followed by Batch Normalization and Pooling.
A Keras callback is also assigned to the model to reduce the learning rate if the accuracy does not improve over 10 epochs.
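To get a feel for what those four conv + pool stages do to a raw waveform, here is a quick shape walk-through. The kernel and stride sizes follow the m5 configuration described in the paper (first conv: kernel 80, stride 4; the rest: kernel 3, stride 1; each followed by a max-pool of 4), and the 10 s / 8 kHz input length is an assumption for illustration:

```python
def conv_out(n, kernel, stride=1):
    """Output length of a 1-D convolution with 'valid' padding."""
    return (n - kernel) // stride + 1

def pool_out(n, size):
    """Output length of a non-overlapping max pool."""
    return n // size

n = 8000 * 10                                        # 10 s at 8 kHz = 80,000 samples
for kernel, stride in [(80, 4), (3, 1), (3, 1), (3, 1)]:
    n = pool_out(conv_out(n, kernel, stride), 4)     # conv layer, then max-pool(4)
    print(n)
# 80,000 raw samples shrink to 4995 -> 1248 -> 311 -> 77 time steps
# before global average pooling and the softmax classifier.
```

In the real model each of those time steps carries 128–512 learned channels; the point is that the strided first layer plus aggressive pooling are what make a raw 1-D input of this size tractable.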
## Start training and see the results
Let’s start training our model and see how it performs on the heartbeat sound dataset. As per the above code, the model will be trained for 400 epochs; however, the loss gradient flattened out at 42 epochs for me, and these were the results. How did yours do?
Epoch 42/400
128/832 [===>..........................] - ETA: 14s - loss: 0.0995 - acc: 0.9766
256/832 [========>.....................] - ETA: 11s - loss: 0.0915 - acc: 0.9844
384/832 [============>.................] - ETA: 9s - loss: 0.0896 - acc: 0.9844
512/832 [=================>............] - ETA: 6s - loss: 0.0911 - acc: 0.9824
640/832 [======================>.......] - ETA: 4s - loss: 0.0899 - acc: 0.9844
768/832 [==========================>...] - ETA: 1s - loss: 0.0910 - acc: 0.9844
832/832 [==============================] - 18s 22ms/step - loss: 0.0908 - acc: 0.9844 - val_loss: 0.3131 - val_acc: 0.9200
Congratulations! You’ve saved a lot of time and effort extracting features from audio files. Moreover, by directly feeding the raw audio files the model is doing pretty well.
With this, we learned how to feed raw audio files to a deep neural network. Now you can take this knowledge and apply it to the audio problem that you want to solve. You just need to collect audio data, normalize it, and feed it to your model.
The above code is available at following GitHub repository
That’s it for this post, my name is Vivek Amilkanthwar. See you soon with one of such next time; until then, Happy Learning :)
References: | https://medium.com/in-pursuit-of-artificial-intelligence/deep-learning-using-raw-audio-files-66d5e7bf4cca | ['Vivek Amilkanthawar'] | 2019-04-11 09:51:34.607000+00:00 | ['Machine Learning', 'Deep Learning', 'Audio Classification', 'Kaggle', 'Deep Neural Networks'] |
7 Fantastic Resources for Tech Interview Prep | 7 Fantastic Resources for Tech Interview Prep
Prepare well and nail your next interview
Photo by Christin Hume on Unsplash
The software interview is quite honestly one of the most challenging aspects of getting a job.
Even after wading through years of college or months of boot camp, you still have to triumph over the interview process before you can start earning that sweet money.
I gathered a list of my favorite resources that have helped me immensely in the past with interviewing for jobs. I hope this helps you! | https://medium.com/better-programming/7-fantastic-resources-for-tech-interview-prep-607df806584e | ['Michael Vinh Xuan Thanh'] | 2020-07-02 20:27:24.094000+00:00 | ['Coding', 'Programming', 'Interview', 'Software Development', 'Startup'] |
How the enneagram made me a better product manager | Product Management
How the enneagram made me a better product manager
Dispelling the pressure to become a fantastic beast
I tend to sense in myself an abyss of emptiness and play dead like an animal when danger approaches. At least, this is how Richard Rohr describes type FIVEs in his exposition on the enneagram.¹ To my amusement, this is actually quite true. Unfortunately, it is exactly the opposite of what I am called to do as a b2b product manager — brave the storm and drive solutions to complex business problems.
The polarity between running away from problems and being called to solve them has been one of the most challenging aspects of reconciling my own personality tendencies and being a product manager at the same time. In other words, learning to be a product leader has not been rainbows and butterflies.
Thankfully, I do not think I am alone here. Product managers are often described as “unicorns” (whether rightfully or mistakenly),³ who are called to a high-pressure, make-or-break type of role that requires multi-disciplinary expertise. This can easily be applied to many other tech roles that require a marriage of disciplines — designers, business analysts, etc.
I do not think that anyone comes across all the skills needed to be an effective product manager naturally. In fact, many product managers cycle through bouts of unproductive imposter syndrome (although there are ways we can rethink what is means to be an imposter in product). Instead, these product skillsets need to be cultivated and refined over time.
Searching for some answers, perhaps the best resource I have encountered on reconciling personality traits with overwhelming expectations is the enneagram. If you’re not familiar with the enneagram, think of it as an ancient typology that takes a more holistic approach to personality than other popular assessments out there like Myers Briggs or StrengthFinder. It uses expositions on character types (associated with numbers 1 through 9) to confront us with the compulsions and laws under which we live.² While personality assessments do share common elements, I’ve found that the enneagram takes a stronger emphasis on the “nurture” side of the nature vs. nurture spectrum and comes with a depth of analysis above and beyond the others.
While we tend to think of personality in strictly the personal realm, I’ve found the enneagram’s typology be quite an excellent window into the professional one as well. We might think of the enneagram in this context as a tool to confront the compulsions and laws under which we work, too. How?
The enneagram normalizes the centripetal draw of personality and promotes a healthy, more objective picture of success
The enneagram skillfully transforms “strengths” and “weaknesses” into opportunities
Applying these mindsets has allowed me to craft better results and stronger product outcomes from myself and my team members. Yes, these lessons are wholeheartedly my own, however I expect that I am not alone in these experiences. Whether you are looking for tools to improve your effectiveness as a product manager or how you manage a product team, you can use some of these lessons to evaluate if the enneagram can help you, too.
Not So Rare Unicorns
Unequivocally, the most positive effect the enneagram has had on me is that it has instilled in me the notion that I am not crazy. YES, there are other people that think the same way I do, have the same irrational fears, and have the same impulse to run. There’s really no need to make a fuss about being special or unique.
This is equally true of product managers! While being a product manger can (at times) seem lonely and pioneering, there are other product managers out there like you who struggle with the same things — who succeed in the same ways you do. You are not a magical unicorn that can mysteriously lead a team towards innovation — instead you are a normal person like everyone else.
This seems painfully obvious, however, for me, it was a major barrier to overcome. There is an incredible amount of pressure placed on product managers to succeed and produce — this often engenders an impaired vision of reality, over-saturated with an eye for failure and problems.
(It’s very likely that there are some of you who do not share this sentiment and are natural super stars, succeeding at (nearly) every step without much effort, however, I doubt you are in the majority.)
So what? For me, this has opened up a world of hope. It has released me from the inward impulse to meet some external set of expectations of what it means to be product manger and instead has freed me to focus on getting work done.
Strengths and Weaknesses as Opportunities
Rohr & Ebert use the language of “gifts” and “dilemmas” to help describe each type in their exposition on the enneagram. This is another, more refined, way of talking about strengths and weaknesses, of which we are so enamored with in the business world. In my personal case (identifying as a FIVE), some of my gifts, according to Rohr, are:
The gift of objectivity and detachment
The ability to soak up massive amounts of data
The ability to invent grand intellectual systems and understand connections
I love the way Rohr ties all of these gifts together:
“The gift of FIVEs is of great value to every community… They can follow the monologues of others for hours at a time. You can talk and talk — and the FIVE seems to have an unlimited capacity to listen and absorb everything. Their ability to withdraw themselves emotionally in the process can help those seeking advice to appraise their situation more clearly, soberly, and realistically. Because of their particular talent, FIVEs can look at a very tense emotional situation objectively and say ‘Now I think the issue can be viewed from this side and from that.’”⁴
Some weaknesses of a FIVE are:
Emotional and intellectual greediness
Detachment and compartmentalization to a fault
Withdrawal from conflict
This is all nice and well, but how does it actually help? In my specific case, this exposition of gifts and obsessions creates opportunities instead of fleeting evocations of pride and shame. The shift is subtle yet strong: if others thrive in similar ways, so too can I thrive by cultivating what I am naturally inclined to do. The enneagram has convinced me of this. The laws under which we live can be harnessed for good, instead of remaining a useless platitude.
Some specific examples — in the case of objectivity — appraising situations clearly and methodically is a key skill in product management. Analyzing and synthesizing broad swaths of information from multiple parties who all have different end goals requires a certain level of detachment. When product management is not done from a place of objectivity, it might look like this:
PMs get caught up in trying to appease a certain executive’s desires and lose track of the fundamental problem the product is trying to solve
Product decisions rely on intuition and preference rather than data and direct client or user feedback
PMs prioritize exciting or sleek projects over and against less sexy non-functional requirements
PMs choose to solve easy or even logical problems vs. the problems that solve a business objective
Through study of the enneagram, I’ve learned to distinguish successful strategies from unsuccessful ones, particularly as these strategies play to my strengths and weaknesses. I used to think everyone went through the same well-reasoned, unbiased study of the facts before making decisions, rather than reacting to emotion or events (not so!). Feedback I have independently received in the product realm from others happens to correlate with the strengths associated with FIVEs.
This is not because all FIVEs happen to make good product managers — no! Instead, the enneagram has been able to reinforce and emphasize my strengths allowing me to be more confident in my decisions, exactly because these strengths and weaknesses are corroborated by others.
By learning this, I’ve been able to reinforce some product practices for both me and my team. Some examples:
I am more emboldened to press hard for a business need when one isn’t apparent. No, a feeling, intuition, or directive from a client is not enough to make sound product decisions. Instead, a business need must be present. This has made its way into requirement templates and success criteria for product managers in performance views.
I am able to lead cross-functional teams towards resolution even when tensions run high. By recognizing this, I have felt more empowered to offer a solution after hearing each team’s perspective. In the past, I was perfectly content standing on the sidelines and absorbing as much as I could — now it is imperative to reach a conclusion and clearly create a plan even when knowledge isn’t complete.
I am better equipped to make priority decisions through rubrics. Identifying or creating a rubric is a great way to draw on a desire for objectivity in making priority calls. In one example, our team shifted from delivering a net new UI to a completely invisible back-end API because the value the client ascribed to it was objectively higher than what end users said they wanted.
Marchitecture — drawing diagrams to reduce subjective understandings of a path forward has proven immensely helpful in aligning teams towards crafting a product vision. This plays well to the gift of creating intellectual systems.
Again, this is all specific to my experience, however, the same insights can apply to you, too, through study of what makes you thrive.
Roadmap for the Soul
We might consider the enneagram in the context of product management like a roadmap for the soul. For me, the enneagram has given me the gift of security in my career ambition and affirmation in my skill sets. It’s transformed the all too common “product manager gloom” into opportunity. Product unicorns do not exist — only hard-working people who can recognize what makes them thrive and therefore take advantage of it in every discipline required to lead and craft. The product world needs more people who can play to their strengths, not more magical roadkill.
What have you learned about yourself as a product manager through the enneagram? | https://uxdesign.cc/how-the-enneagram-made-me-a-better-product-manager-504bedb26d3c | ['Kevin Capel'] | 2020-12-27 12:51:50.373000+00:00 | ['Enneagram', 'Product Management', 'Psychology', 'Personal Development', 'Product Design'] |
The Complete Guide to SCSS/SASS | Here’s a list of my best web development tutorials.
Complete CSS flex tutorial on Hashnode.
Ultimate CSS grid tutorial on Hashnode.
Higher-order functions .map, .filter & .reduce on Hashnode.
You can follow me on Twitter to get tutorials, JavaScript tips, etc.
In this tutorial Sassy, Sass and SCSS will refer to roughly the same thing. Conceptually, there isn’t much difference. You will learn the difference as you learn more, but basically SCSS is the one most people use now. It’s just a more recent (and according to some, superior) version of the original Sass syntax.
To start taking advantage of Sass, all you need to know are the key concepts. I’ll try to cover these in this tutorial.
Note: I tried to be as complete as possible. But I’m sure there might be a few things missing. If you have any feedback, post a comment and I’ll update the article.
All Sass/SCSS code compiles back to standard CSS so the browser can actually understand and render the results. Browsers currently don’t have direct support for Sass/SCSS or any other CSS pre-processor, nor does the standard CSS specification provide alternatives for similar features (yet.)
Let’s Begin!
You can’t really appreciate the power of Sassy CSS until you create your first for-loop for generating property values and see its advantages. But we’ll start from basic SCSS principles and build upon them toward the end.
What can Sass/SCSS do that Vanilla CSS can’t?
1. Nested Rules: Nest your CSS properties within multiple sets of {} brackets. This makes your CSS code a bit more clean-looking and more intuitive.
2. Variables: Standard CSS has variable definitions. So what’s the deal? You can do a lot more with Sass variables: iterate them via a for-loop and generate property values dynamically. You can embed them into CSS property names themselves. It’s useful for property-name-N { … } definitions.
3. Better Operators: You can add, subtract, multiply and divide CSS values. Sure, the original CSS implements this via calc(), but in Sass you don’t have to use calc() and the implementation is slightly more intuitive.
4. Functions: Sass lets you create CSS definitions as reusable functions. Speaking of which…
5. Trigonometry: Among many of its basic features (+, -, *, /), SCSS allows you to write your own functions. You can write your own sine and cosine (trigonometry) functions entirely using just the Sass/SCSS syntax, just like you would in other languages such as JavaScript. Some trigonometry knowledge will be required. But basically, think of sine and cosine as mathematical values that help us calculate the motion of circular progress bars or create animated wave effects, for example.
6. Code Flow and Control Statements: You can write CSS using familiar code-flow and control statements such as for-loops, while-loops, and if-else statements, similar to other languages. But don’t be fooled — Sass still results in standard CSS in the end. It only controls how properties and values are generated. It’s not a real-time language. Only a pre-processor.
7. Mixins: Create a set of CSS properties once and reuse them or “mix” them together with any new definitions. In practice, you can use mixins to create separate themes for the same layout, for example.
Sass Pre-Processor
Sass is not dynamic. You won’t be able to generate or animate CSS properties and values in real-time. But you can generate them in a more efficient way and let standard properties (CSS animation for example) pick up from there.
New Syntax
SCSS doesn’t really add any new features to the CSS language. Just new syntax that can in many cases shorten the amount of time spent writing CSS code.
Prerequisites
CSS pre-processors add new features to the syntax of CSS language.
There are 5 CSS pre-processors: Sass, SCSS, Less, Stylus and PostCSS.
This tutorial covers mostly SCSS which is similar to Sass. You can learn more about Sass here: https://www.sass-lang.com/ .
SASS (.sass): Syntactically Awesome Style Sheets.
SCSS (.scss): Sassy Cascading Style Sheets.
Extensions .sass and .scss are similar but not the same. For command line enthusiasts out there, you can convert from .sass to .scss and back:
Convert files between .scss and .sass formats using Sass pre-processor command sass-convert.
Sass was the first specification for Sassy CSS with file extension .sass. The development started in 2006. But later an alternative syntax was developed with extension .scss which some developers believe to be a better one.
There is currently no out-of-the-box support for Sassy CSS in any browser, regardless of which Sass syntax or extension you use. But you can openly experiment with any of the 5 pre-processors on codepen.io. Aside from that, you have to install your favorite CSS pre-processor on your web server.
This article was created to help you become familiar with SCSS. Other pre-processors share similar features, but the syntax may be different.
Superset
Sassy CSS in any of its manifestations is a superset of the CSS language. This means, everything that works in CSS will still work in Sass or SCSS.
Variables
Sass / SCSS allows you to work with variables. They are different from CSS variables that start with double dash you’ve probably seen before (for example, --color: #9c27b0 ). Instead they start with a dollar sign (for example, $color: #9c27b0 )
Basic $variable definitions
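The snippet behind this caption was an embedded image in the original post; a minimal reconstruction might look like this:

```scss
// Sass variables start with a dollar sign
$color: #9c27b0;
$text: "Piece of string.";

p {
  color: $color; // compiles to: color: #9c27b0;
}
```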
You can try to overwrite a variable name. If !default is appended to the variable re-definition, and the variable already exists, it is not re-assigned again.
In other words, this means that the final value of variable $text from this example will still be “Piece of string.”
The second assignment “Another string.” is ignored, because a default value already exists.
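Rebuilding that example in code form:

```scss
$text: "Piece of string.";
$text: "Another string." !default; // ignored: $text already has a value

p:before {
  content: $text; // compiles to: content: "Piece of string.";
}
```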
Sass $variables can be assigned to any CSS property
Nested Rules
With standard CSS, nested elements are accessed via space character:
Nesting with standard CSS
The above code can be expressed with Sassy’s Nested Rules as follows:
Nested Rules - Sassy scope nesting looks less repetitious.
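The image this caption refers to likely showed something along these lines (selector names are illustrative):

```scss
// SCSS nested rules: the child selector lives inside the parent's braces
.container {
  .child {
    color: gray;
  }
}

/* compiles to standard CSS:
.container .child { color: gray; }
*/
```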
Of course, in the end, it all compiles to normal CSS. It’s just another syntax.
As you can see this syntax appears cleaner and less repetitive.
This is in particular helpful for managing complex layouts. This way the alignment in which nested CSS properties are written in code closely matches the actual structure of the application layout.
Behind the veil the pre-processor still compiles this to the standard CSS code (shown above), so it can actually be rendered in the browser. We simply change the way CSS is written.
The & character
Sassy CSS adds the & (and) character directive.
Let’s take a look at how it works!
Usage of & character directive
On line 5, the & character was used to specify &:hover; after compilation, it is converted to the name of the parent element, a.
So what was the result of above SCSS code when it was converted to CSS?
Result - SCSS converted to CSS
The & character is simply converted to the name of the parent element and becomes a:hover in this case.
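A reconstruction of the & example described above:

```scss
a {
  color: purple;
  &:hover {        // & stands in for the parent selector, a
    color: magenta;
  }
}

/* compiles to:
a { color: purple; }
a:hover { color: magenta; }
*/
```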
Mixins
A mixin is defined by the @mixin directive (or also known as mixin rule)
Let’s create our first @mixin that defines default Flex behavior:
Mixins
Now every time you apply .centered-elements class to an HTML element it will turn into Flexbox. One of the key benefits of mixins is that you can use them together with other CSS properties.
Here, I also added border:1px solid gray; to .centered-elements in addition to the mixin.
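Putting that together, the mixin and the class that includes it might read:

```scss
@mixin centered-elements {
  display: flex;
  justify-content: center;
  align-items: center;
}

.centered-elements {
  @include centered-elements;   // pull in the Flexbox defaults
  border: 1px solid gray;       // regular properties mix freely with the mixin
}
```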
You can even pass arguments to a @mixin as if it were a function and then assign them to CSS properties. We’ll take a look at that in the next section.
Multiple Browsers Example
Some experimental features only work in the browsers that implement them: -webkit- prefixed properties target Webkit-based browsers, while -moz- prefixed ones target Firefox.
Mixins are helpful in defining browser-agnostic CSS properties in one class.
For example, if you need to rotate an element in Webkit-based browsers, as well as the other ones, you can create this mixin that takes a $degree argument:
Browser-agnostic @mixin for specifying angle of rotation.
Now all we have to do is @include this mixin in our CSS class definition:
Rotate in compliance with all browsers.
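A plausible version of the rotation mixin and its usage (the class name is illustrative):

```scss
@mixin rotate($degree) {
  -webkit-transform: rotate($degree); // Webkit-based browsers
  -moz-transform: rotate($degree);    // Firefox
  -ms-transform: rotate($degree);     // older IE
  transform: rotate($degree);         // standard property
}

.tilted {
  @include rotate(45deg);
}
```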
Arithmetic Operators
Sass lets you add, subtract, multiply and divide values directly, without having to use the calc() function required by classic CSS syntax.
But there are a few non-obvious cases that might produce errors.
Addition
Adding values without using calc() function
Just make sure that both values are provided in a matching format.
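For example:

```scss
.element {
  width: 100px + 50px;    // 150px — the units match
  // width: 100px + 5em;  // error: incompatible units
}
```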
Subtraction
Subtraction operator works in the same exact way as addition.
Subtracting different type of values
Multiplication
The star is used for multiplication. Just like with calc(a * b) in standard CSS.
Multiplication and Division
Division
Division is a bit tricky, because in standard CSS the division symbol is reserved for use in some shorthand properties. For example, font: 24px/32px defines a font with a size of 24px and a line-height of 32px. But SCSS claims to be compatible with standard CSS.
In standard CSS, the division symbol appears in short-hand font property. But it isn’t used to actually divide values. So, how does Sass handle division?
If you want to divide two values, simply add parenthesis around the division operation. Otherwise, division will work only in combination with some of the other operators or functions.
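The three cases side by side:

```scss
.item {
  font: 24px/32px sans-serif; // plain CSS shorthand — no division happens
  width: (600px / 3);         // parentheses force division: 200px
  height: 100px / 2 + 50px;   // division combined with another operator: 100px
}
```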
Remainder
The remainder operator (%) returns what is left over after dividing one number by another. In this example, let’s see how it can be used to create a zebra-stripe pattern for an arbitrary set of HTML elements.
Creating Zebra stripes.
Let’s start with creating a zebra mixin.
Note: the @for and @if rules are discussed in a following section.
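The mixin from the original image probably resembled the following sketch (class names are illustrative):

```scss
@mixin zebra {
  @for $i from 1 through 7 {
    @if $i % 2 == 1 {          // odd rows only
      .stripe-#{$i} {
        background-color: black;
        color: white;
      }
    }
  }
}

@include zebra; // emit the striped classes at the root
```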
This demo requires at least a few HTML elements:
HTML source code for this mixin experiment.
And here is the browser outcome:
Zebra stripe generated by the zebra mixin.
Comparison Operators
Comparison Operators
How can comparison operators be used in practice? We can try to write a @mixin that will choose padding sizing if it’s greater than the margin:
Comparison operators in action.
After compiling we will arrive at this CSS:
Result of the conditional spacing mixin
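A reconstruction of that conditional spacing mixin:

```scss
@mixin spacing($padding, $margin) {
  @if $padding > $margin {
    padding: $padding;
  } @else {
    padding: $margin;
  }
}

.container {
  @include spacing(10px, 20px); // compiles to: padding: 20px;
}
```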
Logical Operators
Logical Operators.
Using Sass Logical Operators
Creates a button color class that changes its background color based on its width.
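A sketch of such a mixin, with illustrative breakpoints:

```scss
@mixin button-color($width) {
  @if $width >= 200px and $width < 400px {
    background-color: teal;   // mid-sized buttons
  } @else {
    background-color: gray;   // everything else
  }
  width: $width;
}

.button {
  @include button-color(250px); // teal background
}
```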
Strings
In some cases, it is possible to add strings to valid non-quoted CSS values, as long as the added string is trailing:
Combining regular CSS property values with Sass/SCSS strings.
The following example, on the other hand, will produce a compilation error:
This example will not work.
You can add strings together without double quotes, as long as the string doesn’t contain spaces. The following, for instance, will not compile:
This example will not work, either. Solution?
Strings containing spaces must be wrapped in quotes.
Adding multiple strings.
Adding numbers and strings.
Note: content property works only with pseudo selectors :before and :after . It is recommended to avoid using content property in your CSS definitions and instead always specify content between HTML tags. Here, it is explained only in the context of working with strings in Sass/SCSS.
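The string examples above, reconstructed in one place:

```scss
p:before {
  content: "Piece of " + "string.";  // quoted strings with spaces concatenate fine
}

p:after {
  content: "Count: " + 5;            // a number concatenates onto a string
}
```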
Control-Flow Statements
SCSS has functions() and @directives (also known as rules). We’ve already created a type of function when we looked at mixins. You could pass arguments to it.
A function usually has a parenthesis appended to the end of the function’s name. A directive / rule starts with an @ character.
Just like in JavaScript or other languages, SCSS lets you work with the standard set of control-flow statements.
if()
if() is a function.
The usage is rather simple. The function returns one of the two specified values, based on a condition:
@if
@if is a directive used to branch out based on a condition.
This Sassy if-statement compiles to:
Example of using a single if-statement and an if-else combo.
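Both forms side by side, reconstructed from the captions:

```scss
$condition: true;

p {
  color: if($condition, green, red); // function form: evaluates to green
}

@mixin text-style($size) {
  @if $size > 20px {
    font-weight: bold;
  } @else if $size > 14px {
    font-weight: normal;
  } @else {
    font-weight: lighter;
  }
}
```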
Checking If Parent Exists
The AND symbol & will select the parent element, if it exists. Or return null otherwise. Therefore, it can be used in combination with an @if directive.
In the following examples, let’s take a look at how we can create conditional CSS styles based on whether the parent element exists or not.
If parent doesn’t exist, & evaluates to null and an alternative style will be used.
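A sketch of that pattern (selector and colors are illustrative):

```scss
@mixin context-aware {
  @if & {
    color: blue;              // & is non-null: the mixin was included inside a selector
  } @else {
    .fallback {
      color: gray;            // & is null at the root, so emit a standalone rule
    }
  }
}
```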
@for
The @for rule is used for repeating CSS definitions multiple times in a row.
for-loop iterating over 5 items.
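That loop likely looked something like this:

```scss
@for $i from 1 through 5 {
  .margin-#{$i} {
    margin: $i * 5px; // .margin-1 { margin: 5px; } … .margin-5 { margin: 25px; }
  }
}
```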
Conclusion
I hope this article has given you an understanding of SCSS/SASS. If you have any questions, post them in the comments. | https://jstutorial.medium.com/the-complete-guide-to-scss-sass-30053c266b23 | ['Javascript Teacher'] | 2020-10-24 02:15:07.553000+00:00 | ['CSS', 'Programming', 'Tech', 'Design', 'UX'] |
A Peek into the World of Literary Agents Part One: Stacey Kondla | Stacey Kondla, photo by Stacey Kondla/The Rights Factory
Writing Tips From a Writer in the Trenches.
A Peek into the World of Literary Agents Part One: Stacey Kondla
Recently, I had the opportunity to chat with a bright new force in the agenting world.
Pull up a chair, get comfortable, and listen in on my conversation with literary agent, Stacey Kondla.
LAW: Thank you so much for agreeing to let me interview you, Stacey. I’m truly grateful to have a chance to chat with you.
SK: Thank you for thinking of me!! It’s so exciting!
LAW: This has been quite a journey for you. When we first met, you were a fairly new associate agent and now, in a very short period of time, you are a full agent and a rising star in the agenting world with a very impressive sales record.
SK: {{I’m blushing}} But I’m also very happy with 12 deals in 14 months. And they are all wonderful books written by extremely talented writers that I am eternally grateful for and honoured to represent ❤
*12 deals in 14 months is an extraordinary accomplishment for any agent, let alone someone so new to the business.
LAW: I believe you are an editor by trade and were a bookseller prior to becoming an agent. Do you feel these things have contributed to your success as an agent? Having a working knowledge of the publishing world, both in sales and on the editorial end must be so helpful.
SK: So, I started out in the publishing world as a Field Consultant for Scholastic Book Fairs and managed the kids’ departments in a couple of Chapters-Indigo stores — so primarily a bookseller and passionate book lover. Once I joined the organizing committee of When Words Collide, I started beta reading for some writer friends. It wasn’t until I met Sam (*Sam Hiyate is the owner of the Rights Factory, a literary agency based in Toronto, Canada) in August 2017 that I pursued editing and took three editing courses through Ryerson University. Those courses were both validating and very educational — I discovered that my years of reading taught me a lot about story, and the courses enhanced my editorial skills. I joined the agency as an associate agent in March 2018.
*Stacey is an editorial agent who works with her authors to ensure their work is the best it can be before sending it out on submission. Not all agents have the time or desire to do the same. If this is something you are looking for in an agent, it’s best to do your research before querying.
Having a bookseller background has been invaluable to me as an agent and it allows me to consider how a project will be embraced by both booksellers and readers. If I think I would have a hard time hand selling a book as a bookseller, it isn’t a project I can take on as an agent.
*Stacey’s knowledge of books sales gives her a keen edge when it comes to knowing which books will sell and where to pitch them.
LAW: Can you tell me a little about what first drew you to agenting?
SK: Two things drew me to agenting. The first thing is that I love a challenge and agenting would essentially be levelling up my bookselling skills.
The second thing is that I truly love writers, I have a lot of writer friends, I enjoy the diversity of personalities and the quirks that come along with creative minds. Spending time with writers is never boring and is always inspiring. So, having the chance to work closely with writers and to help them pursue their publishing journeys is super satisfying and fulfilling.
LAW: If there was one thing you could change about your job, what would that be?
SK: Overall, I absolutely love what I am doing. I guess I would like to make more money, as starting out as an agent is not a get-rich-quick sort of thing. It can take years of super hard work to develop your career to the point where it pays your bills every month, and even that is not guaranteed.
*Literary agents can spend months and multiple rounds of editing before a book is even sent out on submission to editors. And they won’t get paid for that time until the book sells.
LAW: What is your favourite thing about being an agent?
SK: The people and the books!!
LAW: What is the hardest thing about being an agent?
SK: Saying no/declining queries — I fully recognize that every project I receive in my queries is the hopes and dreams and hard work of a dedicated writer. I am not in this business to be mean or break hearts. Saying no really sucks.
*As I stated earlier, agents are not here to destroy your dreams, they really do want to say yes.
LAW: Do you have a favourite book? One that you’ve read countless times and plan on reading again?
SK: It is absolutely impossible to choose a favourite book — I might be able to narrow it down to my top one hundred in no particular order. And I’m not a re-reader. As a child and teenager I did re-read some books several times, but as an adult, there are just way too many books I want to read for the first time and no time to go back and re-read a book I’ve already read.
LAW: Do you have a favourite genre to read, on a personal level and as an agent?
SK: As a teen and in my twenties, I almost exclusively read science fiction and fantasy. Once I joined Scholastic and was encouraged to read outside my comfort zone, I discovered that genre shouldn’t limit me and that I could get so much from reading broadly. Today, I read across genres and age levels, fiction and nonfiction — really if a story has a compelling hook or a nonfiction book is about a topic that sounds interesting to learn about, I will probably read it. For pure rest and relaxation and enjoyment, I love reading across the genres in the YA category the most.
LAW: Is there a favourite type of music, or a particular song that never fails to lift you up? In these troubled times, I think music can be a blessing. I have a special playlist for days when I’m feeling the weight of the world.
SK: This is a weird one for me — as a teen and up to December 2016, I loved listening to music, also rather broadly across musical genre’s — I could listen to Great Big Sea, Rage Against the Machine, Justin Timberlake, Pearl Jam, John Denver, and Josh Groban all in one day. Since surviving a stroke in December 2016, I rarely listen to music of my own accord, and when music is on, I can only handle an hour or so of it before I need to get away from it. It is overstimulating and makes me agitated. Same with TV and it is because of the noise. My perfect environment is a quiet one. Sometimes, I will put a movie soundtrack that is purely instrumental or classical music, but at a low volume and not for very long. I also have been preferring to listen to audiobooks, but only when I am doing chores and housework and never for long stretches of time. When I am working on editing silence is best.
LAW: I understand you have had many different pets. I love animals and over the years my girls have had pet rats, dogs, and fish. Growing up I had three horses, countless dogs, cats, a few rabbits, and even a pet duck. A question to you about animals in stories. Is it a deal-breaker for you if an author kills off an animal in a story? I’ve heard that some agents will stop reading if a dog dies.
SK: It’s not a dealbreaker for me, if it is important to the story, but I draw the line at gratuitous violence and abuse.
LAW: Is there any one thing you would like querying authors to keep in mind before querying you in particular, and any agent in general?
SK: I think there are two things; first is to do your homework and make sure you are sending queries to agents that represent the kind of work you do, and second, is to remember that agents are also people with individual circumstances just like you, and as such, can be slow to respond, or forget to respond, or be too overwhelmed to respond, or they just might not connect with what you write — my point is to not treat agents any more harshly than you would like to be treated as an author. And that goes both ways.
LAW: Is there anything in specific you are dying to see in your inbox right now?
SK: I’m closed to children’s and YA fiction right now, but would love to see nonfiction proposals for children’s and adult nonfiction.
In the kids market, I am ideally looking for projects that will speak equally to children, the parents who buy books, and teachers and librarians in the education market. That said, if a truly dynamite nonfiction project comes along that is super kid-centric, I am happy to go with that too. I tend to avoid anything too message-driven, and look for projects that are fun, entertaining, beautiful, or educational in a fun and curiosity-driven way, not in a preachy way. For kids nonfiction, I’m looking for picture book, middle grade and YA nonfiction, preferably in science, nature, or social justice.
I am open to adult nonfiction in the areas of cultural/social issues, history, medical, nature & ecology, pets, science, and technology, as these are the types of nonfiction I gravitate towards and enjoy on a personal level. I am not open to memoir or naturopathic or spiritual works.
LAW: Considering that we appear to be living in a dystopian novel these days, do you foresee a glut of stories in a similar vein in the future? Do you think that now is a good, or bad time to be writing these? Impossible questions, I know, because personally I think this will be a very subjective situation and vary from agent to agent, and editor to editor.
SK: Truly an impossible question to answer. Leading up to now, I have been gravitating more toward contemporary fiction, but will take on dystopian if it rocks my world. I sold one dystopian last year that rocked my world called The Hill by Ali Bryan, and it will be published this fall 2020 by an independent New York press called Dottir Press. It is a super cool feminist dystopian about a group of feral girls living on a reclaimed garbage dump and it hooked me immediately. Right now, with all the gravity in the world, I have to say that funny and light is hitting the spot for me.
LAW: With all the libraries closed, have you seen an increase in book sales? Or has the uncertainty of our situation pushed sales down?
SK: I don’t think we will have accurate numbers on this for a couple of months. I think some communities are supporting local independent bookstores who are offering free in-town delivery, and I am sure a lot of people continue to leverage the library via ebooks and audiobooks, and continue to support big box via online shopping and ebook and audiobook purchases. It is realistic to expect a downturn across all of these retail channels if personal income is reduced for a prolonged period of time and to see a greater upswing in library ebook and audiobook usage.
LAW: Do you see university presses and smaller presses closing down in the wake of this situation? Is there anything we as authors can do to help?
SK: I think a number of small businesses across the publishing industry are probably in jeopardy. All we can do is hope for government and community support and see where things land when life starts approaching normal again — whatever that looks like and whenever that happens. I think university presses may have support from the larger educational institutions they are attached to, but I’m not entirely sure how their funding works. Losing some small presses and independent bookstores is likely inevitable as sad as that is. Authors and readers alike can help simply by supporting the industry — shop(if you can) at the businesses you want to stay in business. If you can’t shop, then try to check out books from the library apps that belong to the publishers you want to support. If you love audiobooks, consider switching to Libro.fm because every purchase is credited to the bookstore of your choice and they get a commission on that audiobook purchase. The other audiobook retailers don’t support your community brick-and-mortar bookstores in this way.
LAW: Do you think historical stories set in previously troubled times might make a resurgence? I wonder if they will, simply because these stories had a resolution, and that can provide a sense of comfort to people. I think it is the unknown that is driving people’s fears. Perhaps if we look to the past and see how humanity rose to the occasion and prevailed, it could bring a sense of calm.
SK: I am unsure of this and really don’t have an answer. I think there has been a small uptick in interest in books about contagious disease, fiction and nonfiction — I’m not sure that will translate to future acquisitions.
LAW: How soon do you get a sense that a manuscript you’re reading is going to be something special? I’ve heard that some agents can tell within the first page.
SK: I would agree with that — when you have a super solid query and a manuscript really grabs you on page one, it feels magical.
LAW: If a query is poorly written but the pages are exemplary, will you consider requesting the manuscript, and what about the opposite scenario. A fabulous query, but disappointing first pages.
SK: When I read queries, I always read the sample, so even if a query isn’t as great as it could be the sample has a chance to hook me. No matter how fabulous the query, if the sample disappoints, then I have to pass.
LAW: What would make you willing to take a chance on an author if their submission material isn’t quite there if indeed there is anything?
SK: Ultimately the sample must hook me and show me the writer can really write. So, a bad query with me isn’t endgame, but a bad sample is.
LAW: How long typically does it take you to decide to offer representation to an author, from reading their query, to finishing their MS, and to that all-important phone call? Would you be willing to share what your process looks like?
SK: There are a lot of factors, like how busy I am with other clients, but if I love a manuscript or proposal, I generally reach out to the author quite quickly via email to request a phone call. I can be quite slow in actually reading a requested manuscript — I don’t have the luxury of lying around reading all day, and I have submissions I am sending out, emails and phone calls with editors and clients, I do a lot of editorial work for my signed clients on projects we are working on, and I work 25–30 hours a week at a book store to help pay my bills — so reading manuscripts and proposals has to fit in there somewhere and that is why I am not fast.
LAW: When you sign an author, what is the first step? Do you dig into revision immediately, or do you have several conversations with the client first, to see where you both sit on the plan for the manuscript?
SK: This depends on where I am at in the process with my other clients and projects that are in the queue and where the new project is in the editorial process. Some projects need more work and time than others, and I don’t rush editorial work. Rushed work is sloppy and I’d rather be slow and do it well. Once a new client is signed, I do have a phone call with them and we work out first steps together and it turns into an ongoing dialogue and back and forth until the project is ready to submit.
LAW: Do you already have markets in mind for an author’s work when you sign them? Or is that something that you develop later on?
SK: I almost always have a few editors that leap to mind for each project, but a lot can change between when a client is signed and when the project is ready to be submitted. So once the project is ready to go, I tailor-make a submission list for it using our agency database, Publisher’s Marketplace, and Google to make sure I am sending the project to the editors that will be most interested.
LAW: Is there a great deal of variability in how many rounds of revision a book might go through before you send it out on submission? Have you ever offered on a book that was ready to go as is? (I can’t imagine that happening, but one never knows. Lol)
SK: Some projects are certainly more ready than others. The longest I’ve worked back and forth with an author before submission has been almost ten months. I haven’t submitted any projects that didn’t go through some amount of editorial first.
LAW: Can you give us a list of the most important factors in establishing a satisfactory working relationship between an author and an agent?
SK: I think this really depends on the author and the agent because different people need different things from their work relationships. For me, the most important thing is a feeling that you can and will work well together — trust your intuition and have an open and candid conversation about what each of you expects and needs from each other. Open communication is critical for both parties.
LAW: Would you consider taking on a client who parted ways with their former agent? I understand this is not that uncommon, and would likely involve a bit of research into why that relationship did not work out. But are there any red flags that would make you say no?
SK: Absolutely yes and I already have a few clients that parted amicably from their previous agents before approaching me. Like any relationship, it is possible and common for agents and clients to grow apart. This is another reason why open communication is so important. Just because an author and agent choose to part ways doesn’t necessarily mean that either of them have done anything wrong — it can be as simple as an author wanting to try new things, or an agent developing their list in a different direction.
The only red flags that would make me say no are if an author bad-mouths their previous agent, blames their agent for failure, or seems at all resistant to editorial work prior to submission. Other than that, it really hinges on how much I love the project and how easy the author seems to be to work with. Nobody wants to be in a professional relationship with someone who is hard to work with.
LAW: Any parting words of wisdom for all the anxious authors out there in the depths of the query trenches?
SK: Only two things — make sure your manuscript is the best it can be prior to looking for an agent, and once you’ve started querying, be patient — half the process is learning how to wait because nothing happens super fast in publishing. It’s a long game.
You can find Stacey on Twitter and Instagram. If you have a project that matches her wish list, do consider querying her.
I hope you enjoyed this little peek into the world of a literary agent. Stay tuned for Part Two: Naomi Davis of Bookends Literary.
Now, go write.
3.5 Billion-Year-Old Fossils Challenge Ideas About Earth’s Start | 3.5 Billion-Year-Old Fossils Challenge Ideas About Earth’s Start
The oldest trace of earthly life confirmed
Artist's impression of early Earth (Wikipedia)
Fossils almost 3.5 billion years old discovered in Australia are the imprints of the oldest known microorganisms to have lived on Earth, scientists have confirmed, suggesting that life probably appeared much earlier than previously thought.
For these researchers, the work, published in the Proceedings of the National Academy of Sciences (PNAS), also suggests that life could be common in the Universe, at least in the form of microorganisms.
Fossilized Bacteria Discovered
Researchers from the Universities of California and Wisconsin identified, thanks to a new mass spectrometry technique, chemical signatures of eleven microbial specimens belonging to five species, including some similar to those existing today.
“It is the first and oldest place on the planet where we have both the morphological and chemical imprint of life, ” explains John Valley, professor of geochemistry and petrology at the University of Wisconsin, the principal co-author of this study.
“We also discovered that there were several types of metabolisms and different species with different biological functions: some produced methane, others consumed it or used solar energy for photosynthesis,” he told AFP.
Methane formed an important part of the atmosphere of the very young Earth, which was frequently bombarded by comets and where oxygen was scarce or absent.
Some of these bacteria, now extinct, belonged to the archaea, a group of prokaryotes — single-celled microorganisms without a nucleus.
Others were similar to microbial species still found today.
This study thus suggests that some of the microorganisms, first described in 1993 in the journal Science based on their cylindrical, filamentous morphology, could have lived at a time when there was still no oxygen on Earth.
“These organisms — 0.01 millimeters in width — formed a community of very well developed microorganisms which probably did not constitute the dawn of life, ” sums up Professor Valley.
The fact that different types of microbes were already present 3.5 billion years ago “tells us that life had to start much earlier on Earth, and also confirms that it is not very difficult for a primitive life form to evolve into more advanced microorganisms,” points out William Schopf, professor of paleobiology at the University of California, another principal co-author of this work.
For him, this study, along with others, indicates that life could be frequent in the cosmos.
Studies published in 2001 by Professor Valley’s team suggested that oceans of liquid water could have existed as far back as 4.3 billion years ago, over 800 million years before the fossils described in this latest work and just 250 million years after the formation of the Earth.
“We have no direct evidence that life existed 4.3 billion years ago, but it very well could have … and it’s something we all want to know,” notes Professor Valley.
Studies published in the British journal Nature reported the discovery of potential signs of life dating back 3.95 billion years, the oldest to date, though these remain to be confirmed.
These fossils were found in grains of graphite, a form of carbon. They were trapped in ancient sedimentary rocks in Canada. | https://medium.com/history-of-yesterday/3-5-billion-year-old-fossils-challenge-ideas-about-earths-start-eb5e87c623ff | ['Max Ngamla'] | 2020-12-26 09:02:34.808000+00:00 | ['History', 'Science', 'Discovery', 'Life', 'Earth'] |
How to Speed Up Software Delivery | How to Speed Up Software Delivery
Deploying fast by optimizing each step of the pipeline
The whole theme of the last decade or so (maybe more) has been agility and the techniques that enable it. CI-CD, DevOps, etc. have become critical features of this, and yet I often see a lot of friction when it comes to deploying. There is some sort of magical aura around deployment which makes it something special, and this shapes how many of our teams work at each stage of delivering software.
Why deploy fast
No matter how beautiful our system architecture, how elegant our code, and how solid our test suite — the only way we get to make an impact on our business and our customer’s lives is when we actually deploy code to production. Before deployment, code is just an intellectual exercise, like an interview question. Deployment turns this intellectual property into an economic proposition. So it seems like a no-brainer that we should be deploying code as fast as possible.
And yet engineering orgs struggle with this activity. It is a documented fact that the overwhelming majority of outages are a result of new changes deployed. This makes sense — if nothing changes, then things are far less likely to break down. So deploying code has obvious risks.
Most of the currently popular engineering practices are aimed at increasing the rate of software deployment while mitigating the risk of failure arising from this increased rate. Pretty much anything works at small scale and in small companies, but as the scale of the enterprise grows in terms of software or team size, context becomes increasingly important, and working backwards from development to design/implementation can expose a lot of inefficiencies.
If we agree that deploying software fast with high quality is a worthy goal, then I want to apply a pipeline like perspective to the process of delivering software so that we can identify bottlenecks and broaden them to make the process more efficient. Applying Theory of Constraints means that we model the software delivery flow in reverse, identify the bottlenecks, and optimize them one after another.
Deployment causes outages
This is the most widespread argument against deploying changes to production and as I mentioned above, industry data supports this. It has therefore become the flagship argument for treating deployment as some type of sacred activity. However, there is a lot of subtext to this top level problem that merits looking into.
While this is a whole universe of topics in its own right and includes multiple disciplines, I want to call out that if fast and safe deployments are the desired objectives, then we cannot do without efficient means of :
1. Discovering when things go wrong — To reduce outages, we need to be able to detect them. Monitoring and alerting tools are indispensable for this, and automated tests running in production are very effective too. We need these tools regardless of whether we deploy frequently or not, so we might as well go ahead full steam and reap the benefits! Additionally, the organization should have dependable on-call support protocols (preferably manned by developers, but at least by some sort of central ops team) to respond to alerts.
2. Mitigating the problem — Since we are talking about outages caused by deployments, the most powerful tools we have for mitigating the problem are canary deployments and automated rollbacks. Coupled with monitoring tools, they give a pretty solid safety net for when things go wrong. Feature gates are another extremely powerful tool to manage deployment risk. They allow us to deploy code which can later be engaged under close supervision, instead of having every change go into effect as soon as it is deployed.
3. Debugging the problem — Whether you like the logs-metrics-traces approach to observability or the event-based approach recently gaining ground, you need tools which allow your team to rapidly nail down the cause of the problem. Without these, you will be helpless even in the face of known problems because you won’t know where they stem from and why.
4. Fixing the problem — Once the problem has been identified, product and development teams need processes that let them determine the priority for the fix and get the fix out of the door as quickly as possible. Note that the ability to deploy a bug fix fast is contingent on our ability to deploy ANYTHING fast.
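To make the mitigation step concrete, the heart of a canary with automated rollback is just a health check against an error budget. Here is a minimal Python sketch; the 1% threshold and the boolean request-outcome samples are assumptions for illustration, not a prescription:

```python
# Minimal sketch of a canary gate: promote the new version only if the
# canary's observed error rate stays under a threshold; otherwise roll back.
ERROR_RATE_THRESHOLD = 0.01  # assumed threshold; tune per service


def canary_decision(error_samples):
    """Given request outcomes from the canary (True = failed request),
    decide whether to 'promote' or 'rollback'."""
    if not error_samples:
        return "promote"  # no traffic observed; treated as healthy here
    error_rate = sum(error_samples) / len(error_samples)
    return "rollback" if error_rate > ERROR_RATE_THRESHOLD else "promote"
```

In a real pipeline this decision would be fed from monitoring metrics and would gate the rolling deployment automatically.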
Deployment takes time
Before we dig into the specifics, I would like to point out that thanks to feature toggles, deployment is not release. Code for a feature may get deployed without coming into effect because it is toggled off — this will become relevant below.
Let’s consider the actual process of deployment. Some teams I have worked with in the past have argued against deploying often because deployment takes up a lot of time from the team. Here again we can look at the various steps typically involved and identify the ways in which they can be made efficient.
1. Merge all the code to the deployment branch — This is often, but not always, done by the engineer who happens to be on the hook for deploying the application (for whatever reason) — and it shouldn’t be. Merging code to the deployment branch is part of the development cycle but often becomes part of the deployment cycle because any code sitting in the deployment branch becomes “active” when deployed. This dependency can be broken by using feature toggles as mentioned above. Developers should merge code with appropriate implementation of feature toggles.
2. Trigger and monitor deployment — While the gold standard for all of this is a CI-CD process, a good enough compromise (IMO) is an automated deployment process which includes building the deployment branch, running automated sanity/integration tests, and canary deployment with auto rollback followed by rolling/incremental deployment. While this might take time depending on the infrastructure being used, it should not be a hands-on activity for the dev team. They should get actively involved only if they see failures.
3. Validate that the system is stable post deployment — This is essentially the activity of detecting and debugging outages in production when caused by deployment, and we have already discussed techniques to make this efficient. Another aspect here is the actual release of the code that was shipped (remember — deployment is not release). Release should be the responsibility of individual feature developers, and they should judge when to use feature toggles to expose their code to its users. As such, this is not part of the deployment cycle.
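Since several of the steps above lean on feature toggles, here is a minimal sketch of one. The flag name and in-memory flag store are invented for illustration; real systems use a dedicated flag service:

```python
# Minimal feature-toggle sketch: code ships dark and is released later by
# flipping a flag, independently of the deployment that carried it.
FLAGS = {"new-checkout-flow": False}  # hypothetical flag store


def is_enabled(flag):
    return FLAGS.get(flag, False)


def checkout(cart):
    if is_enabled("new-checkout-flow"):
        return f"new checkout for {len(cart)} items"
    return f"legacy checkout for {len(cart)} items"
```

Flipping the flag is the release; the deployment that carried the code happened earlier and independently.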
Testing takes time
When I was working in the supply chain team at Myntra, we had this practice of testing end-to-end. This meant that a feature (any feature) could be considered signed off by QA only if we could make a whole set of orders of different types all the way from the customer cart to logistics. With testing environments being broken often due to untested code, needless to say sign offs were a bitch, both for devs (too slow) and for testers (too painful). Widening the testing bottleneck typically means working on two fronts:
1. Testing individual changes — Since we are trying to deploy fast, changes will be coming in fast, and each complete feature might involve changes to multiple teams and systems. Having a QA handoff on this path is extremely inefficient. I believe testing of individual changes is part of the development process. Whether by unit testing or automated/manual integration tests, developers should verify their changes are safe to deploy and functioning as intended (including feature toggles). Consumer Driven Contracts are a powerful tool for testing small changes and individual components that I am surprised are not more popular.
2. Testing complete features — This is the testing of the complete customer experience, and this is something that QA teams can own and drive using rigorous automation. While this is an important component of testing, it should be understood that this is a process parallel to the mainline software delivery path and will always run slightly (and only slightly) behind the latest production system (we had put it inside the delivery path at Myntra because, at the time, we were not doing a good job with testing individual changes). It is, however, very powerful in detecting regressions and also acts as a repository of information about how the system is expected to function.
Splitting these two tracks and making separate teams responsible for them significantly broadens the testing bottleneck.
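The Consumer Driven Contracts mentioned above can be approximated with no framework at all: the consumer publishes the response shape it relies on, and the provider asserts its responses still satisfy it before every deploy. A minimal sketch, with made-up contract fields:

```python
# Minimal consumer-driven-contract sketch: the consumer records the fields
# (and types) it depends on; the provider's test suite replays that contract
# against its own responses before every deploy.
CONSUMER_CONTRACT = {"order_id": str, "status": str, "total_cents": int}  # assumed


def satisfies_contract(response, contract=CONSUMER_CONTRACT):
    """True if every field the consumer relies on is present with the
    expected type; extra fields in the response are allowed."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )
```

If the provider runs this check in CI, a change that breaks a consumer fails before it ever reaches the deployment branch.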
We deploy complete features
Feature development often works as a batch process. Developers design for the complete feature, and then implement the complete design in one go. In a way, the complete feature become the unit of work for developers. This is okay for features of trivial size, but when we work on large features which touch many parts and layers of an application, this style of working creates a huge risk because multiple developers are doing the same thing. If all the changes come together just before deployment, there will be tons of conflicts and it is difficult to predict if all features are still working correctly.
And I’m not even talking about the agile process of identifying sprint stories — this is even lower level than that. If a code change makes sense on its own (e.g. schema changes to create a new table in a database, core business logic which is not yet exposed via an API), what stops us from deploying it? By holding it back, we are causing two problems:
1. Our teammates working in the same areas of code haven’t seen our changes — we may be stepping on each other’s toes. Getting small changes out there quickly avoids confusion later.
2. If the change is logical and deployable, then why NOT deploy it? After all, the larger the deployment, the larger the risk of things breaking on deployment.
However, there are certain prerequisites to be able to implement features in small bites.
1. Design large to implement small — We spoke earlier about how treating the entire feature as the unit of work can be troublesome. However, to be able to break down work into smaller units, we need to design (at least roughly) for the entire feature. This is necessary to make all the small pieces fit correctly. The output can look something like this — “we need a table to store these data points, REST APIs to insert single and bulk records into this table, and integration with external service X to check for Y before we insert”. We identify coarse-grained system boundaries that the feature implementation will impact and identify the changes that will be made. Coarse-grained is a relative term here, and this is a recursive process — if the feature is very large and spans multiple systems, then we have to keep applying this breakdown until we reach the actual code that will be written in each of those systems.
2. Test small changes deeply — Since each change is small, developers should be able to establish easily that each of them works exactly as intended. The fastest mechanism for doing this is via unit tests that mock external dependencies. However, any mechanism is fine as long as it establishes the deployment safety and intended behaviour of the change being published. Code reviews should look out for these tests.
3. Small, fast-moving code reviews — One feature need not travel as one pull request. As a result of our design exercise, we can now implement the feature in a series of very small pull/code review requests in line with the design boundaries defined above. Large pull requests are never going to get reviewed as thoroughly as small ones because they take a much larger amount of effort on the part of the reviewer. The flip side of this argument is that we should have team processes in place to get code reviewed very quickly. I have published some guidelines for reviewing distributed systems code — check them out for some pointers on things to look out for.
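“Test small changes deeply” can look like this in practice: a unit test that mocks the external dependency so one small change is verified in isolation. The service and field names here are illustrative:

```python
# Minimal sketch of testing a small change deeply: the external inventory
# service is mocked, so the new business-logic change is verified alone.
from unittest.mock import Mock


def can_fulfill(order, inventory_client):
    """The small change under test: an order is fulfillable only if the
    inventory service reports enough stock for its SKU."""
    stock = inventory_client.get_stock(order["sku"])
    return stock >= order["quantity"]


def test_can_fulfill_checks_stock():
    inventory = Mock()
    inventory.get_stock.return_value = 3
    assert can_fulfill({"sku": "ABC", "quantity": 2}, inventory)
    assert not can_fulfill({"sku": "ABC", "quantity": 5}, inventory)
    inventory.get_stock.assert_called_with("ABC")
```

Because the dependency is mocked, the test is fast enough to run on every merge to the deployment branch.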
After these changes, we are pushing very small, verified-to-be-safe changes down the deployment pipeline, much of which is already automated and needs minimal manual intervention. But no amount of automation or tooling will make our deployments fast or safe if we insist on pushing large changes in batches. Large change sets lead to poor testing and reviews, which lead to unstable systems.
Software Delivery as Change Stream
To me, this is a mental shift: thinking of software delivery as a stream of small changes that compose into a feature on deployment, instead of thinking in terms of moving features from the developer’s laptop to production servers. There is no such thing as a feature which is “done” — everything is always evolving and changing. So instead of viewing features as statically bound things that we ship, we should think of them as a set of changes that we need to make within certain system boundaries. Some people call this ”Flow” of work through software organizations. This is enabled as much by the adoption of agile techniques in the development phase as by evolving out-of-the-box infrastructure capabilities in the deployment and operations phase.
TL;DR
Deploy as frequently as you can. No matter what you think the cost of deploying is now, it will only be greater later on. Unless you are building some life-or-death related software or something which is highly regulated, my vote goes to deploying as fast as you can. Like the broken-window theory, making rapid deployment an engineering objective will directly give rise to a robust engineering culture and practices which will help your organization in the long run.
Read Next — Moving faster (not just fast) as an organization imperative (aka Build momentum not velocity) | https://medium.com/swlh/how-to-speed-up-software-delivery-13be384c340d | ['Kislay Verma'] | 2020-11-04 19:32:10.814000+00:00 | ['Continuous Delivery', 'Software Deployment', 'Software Engineering', 'Agile Development', 'DevOps'] |
Watson Speech-to-Text Services — tl;dr need not apply | Photo by Ben White on Unsplash
In consideration of today’s too long; didn’t read (tl;dr) mentality, my desire to tell you EVERYTHING about Watson’s Speech-to-Text Service is directly confronted.
Being a writer, I personally enjoy writing in full description and meaningful explanation. Long prose can be satisfying in a way — a somewhat cathartic baring of the subject at hand, exposed for all to read and know.
Oh, the stories I could tell about the various uses of our services, like:
A call center transcribing audio conversations between customers and agents to analyze common call patterns and issues.
A medical service provider creating an application for doctors to dictate patient diagnoses and treatments directly into their files.
A retailer servicing its customers through an online conversational application with transcription for real-time logging.
But today, first things first.
Today, I simply want to remind you of what our speech-to-text services can do, in a way that is exactly the opposite of tl;dr.
How? With a CHEAT SHEET!
Dive into our documentation and you will find so many golden nuggets of detail in there, conveying the many features of our speech-to-text service and the flexibility at which you can work with it.
But tl;dr, right?
Hence the cheat sheet below.
This cheat sheet is meant to be a quick reference to our feature set, so you can know in just a matter of seconds what our services can do.
And, for the readers among you — I conveniently hyperlinked my cheat sheet to our correlating technical documentation in case you want to dive in for more details. I mean, being the writer that I am, I cannot not* refer to our thorough worth-reading-all-the-way-through documentation. It’s just that good!
(*double-negative noted, for emphasis and creative flair!) | https://medium.com/ibm-data-ai/watson-speech-to-text-services-tl-dr-need-not-apply-ce4a27a56adb | ['Kati Venturato'] | 2017-11-10 15:20:26.782000+00:00 | ['API', 'Speech Recognition', 'Artificial Intelligence', 'Ibm Watson', 'Tutorial'] |
The Conversation We Refuse to Have About War and Our Veterans | I went to the market
Where all the families shop
I pulled out my Ka-bar
And started to chop
Your left right left right left right kill
Your left right left right you know I will
-Military cadence
“You can shoot her…” the First Sergeant tells me. “Technically.”
We’re standing on a rooftop watching black smoke pillars rise from a section of the city where two of my teammates are taking machine gun fire. Below, the small cluster of homes we’ve taken over is taking sporadic fire as well. He hands me his rifle with a high powered scope and says, “See for yourself.”
It’s the six year old girl who gives me flowers.
We call her the Flower Girl. She hangs around our combat outpost because we give her candy and hugs. She gives us flowers in return. What everyone else at the outpost knew (except for me, until that day) was that she also carried weapons for insurgents. Sometimes, in the midst of a firefight, she would carry ammunition across the street to unknown assailants.
According to the rules of engagement, we could shoot her. No one ever did. Not even when the First Sergeant morbidly reassured them on a rooftop in the middle of Iraq.
Other soldiers didn’t end up as lucky.
Sometimes they would find themselves paired off against a woman or teenager intent on killing them. So they’d pull the trigger. One of the sniper teams I worked with recounted an evening where he laid up a pile of people trying to plant an IED. It was a “turkey shoot,” he told me laughing. But then he got quiet and said, “Eventually they sent out a woman and this dumb kid.” I didn’t need to ask what happened. His voice said it all.
I often wonder what would have happened if the Flower Girl pointed a rifle at me, but I’m afraid I already know. The thought didn’t matter anyway. There was enough baggage from tours in Afghanistan and Iraq that coming home was full of uncertainty, anger, and confusion — and not, as I had been led to believe, warmth and safety. | https://humanparts.medium.com/the-conversation-about-war-and-our-veterans-we-refuse-to-have-a95c26972aee | ['Benjamin Sledge'] | 2019-06-05 15:11:50.391000+00:00 | ['PTSD', 'Mental Health', 'Veterans', 'War', 'Military'] |
Mailchimp Is Dead (It Just Doesn’t Know It Yet) | Mailchimp Is Dead (It Just Doesn’t Know It Yet)
There’s a new 800-pound gorilla in the email marketing jungle, and its name is Amazon
Photo by Pixabay from Pexels
Until now, Mailchimp may have been the 800-pound gorilla in the newsletter-delivery ecosystem, but there’s a new silverback Alpha that’s about to chop down the whole rainforest.
When email marketers discover the true potential of this new challenger, the established providers can kiss their current business models goodbye. Mailchimp, Aweber, Constant Contact, ConvertKit, MailerLite, CampaignMonitor… if they don’t respond to what’s coming, they’re all toast. | https://medium.com/better-marketing/mailchimp-is-dead-it-just-doesnt-know-it-yet-6e404c3e4b7b | ['Jared A. Brock'] | 2020-12-21 22:02:39.292000+00:00 | ['MailChimp', 'Newsletter', 'Email Marketing', 'Marketing', 'Email'] |
Zerobank Update: The first version of MVP is launched on schedule | During the last few weeks, ZeroBank team has been gathering up speed and working with full capacity. Now, we proudly inform you that the very first version of our MVP (Minimum Viable Product) will be launched on schedule, within the next 7 days — on August 15th, to be exact.
After the launch next week, this MVP version will be released for internal testing purposes. With this release, we hope to show our team’s determination to build a genuine product according to the previously agreed roadmap and to fulfill our promise of building a revolutionary money exchange and remittance ecosystem.
For our community, our CTO, Dr. Ly Van Bao, will share some exclusive features of ZeroBank application on the MVP release day.
Save the date and stay tuned for more updates on the project development!
3 Steps to Get Your Stories Curated | 3 Steps to Get Your Stories Curated
I promise, no clickbait
Photo by Attentie Attentie on Unsplash
So if you still decided to click on this story, you want to know how to get your story curated.
Now, I am not an expert but I do have some steps that will increase your curation chances quite significantly.
I will use my own articles that have been curated to show you some key features.
I spent over 2 and a half weeks working on this article.
I kid you not, I thought this was going to be my holy grail article, the one that would net me hundreds of dollars.
Well, I was wrong, by far.
The article wasn’t curated. | https://medium.com/never-fear/3-steps-to-get-your-stories-curated-849ba490f714 | ['Aryan Gandhi'] | 2020-10-02 02:39:06.162000+00:00 | ['Finance', 'Curation', 'Growth', 'Success', 'Writing'] |
4 Intersecting Domains That You Can Easily Confuse with Artificial Intelligence | ← PART 1 | ARTIFICIAL INTELLIGENCE ESSENTIALS
4 Intersecting Domains That You Can Easily Confuse with Artificial Intelligence
Learn the Differences Between Artificial Intelligence, Machine Learning, Deep Learning, Data Science, and Big Data | AI Essentials
Figure 1. Photo by Michael Dziedzic on Unsplash
Once you start consuming machine learning content such as books, articles, video courses, and blog posts, you will often see terms like artificial intelligence, machine learning, deep learning, big data, and data science being used interchangeably. These terms represent several closely related areas within the field of artificial intelligence. They are usually used interchangeably without adequate attention being paid to their scopes. It’s not entirely the authors’ fault, since there is some genuine ambiguity about the differences between these terms. With this post, we will put an end to this ambiguity and clarify their scopes.
We will cover five different adjacent fields:
Artificial Intelligence
Machine Learning
Deep Learning
Data Science
Big Data
and bonus part, I will share a visual in the end to clarify them even further.
Let’s start!
Artificial Intelligence
Figure 2. Photo by Markus Winkler on Unsplash
Artificial Intelligence (AI) is a broad umbrella term, and its definition varies across different textbooks. The term AI is often used to describe computers that simulate human intelligence and mimic the “cognitive” abilities humans associate with the human mind. Problem-solving and learning are examples of these cognitive abilities. The field of AI contains machine learning (and, therefore, deep learning) studies, since the capability of learning from experience is a sign of intelligence. Generally speaking, machines with artificial intelligence are capable of:
Understanding and interpreting data;
Learning from data; and
Making ‘intelligent’ decisions based on insights and patterns extracted from data.
These capabilities are closely associated with machine learning. Thanks to machine learning, AI systems can learn and improve over time. Machine learning is used to train AI systems and make them smarter. I do not want to get into how the field of AI has developed over the years, but to give a quick understanding, here is a summary timeline of the development of artificial intelligence:
Figure 3. A Brief Timeline of the Field of Artificial Intelligence (Figure by Author)
This timeline shows the fundamental studies that were done in artificial intelligence. While researchers tried to mimic neurons from the 1950s through the 1970s, attention then shifted to machine learning with expert systems. Since the 2000s, the focus has been on deep learning studies. As mentioned, all of these domains are part of artificial intelligence.
Let’s talk more about these relevant domains…
Machine Learning
Figure 4. Photo by Pietro Jeng on Unsplash
Machine learning is considered a sub-discipline of the field of artificial intelligence. Machine Learning (ML) studies aim to automatically improve, through experience, the performance of computer algorithms designed for particular tasks. In a machine learning study, the experience is derived from the training data, which may be defined as sample data collected from previously recorded observations. Through this experience, machine learning algorithms can learn and build mathematical models to make predictions and decisions. The learning process starts with feeding training data (e.g., examples, direct experience, basic instructions), which contains implicit patterns, into the model. Since computers have more processing power than humans, they can find these valuable patterns in the data within a short amount of time. These patterns are then used to make predictions and decisions on relevant events. The learning may continue even after deployment if the developer builds a suitable machine learning system that allows continuous training.
Previously, we were able to use machine learning in a few sub-components of a system only. Now we actually use machine learning to replace entire sets of systems, rather than trying to make a better machine learning model for each of the pieces.
“There is an ever-increasing use of machine learning applications in different fields. These real-life applications vary to a great extent.” — Jeff Dean
Some machine learning use cases may be listed as follows:
Healthcare: Medical diagnosis, given the patient’s symptoms;
E-commerce: Predicting the expected demand;
Law: Reviewing legal documents and alerting lawyers about problematic provisions;
Social Network: Finding a good match given the user’s preferences on a dating app;
Finance: Predicting the future price of a stock given the historical data.
This is obviously a non-exhaustive list, and there are hundreds, if not thousands, of potential machine learning use cases. Depending on what your goal is, there are many different methods to create a machine learning model. These methods are usually grouped under four main approaches: (i) Supervised Learning, (ii) Semi-supervised Learning, (iii) Unsupervised Learning, and (iv) Reinforcement Learning. Each method contains distinct differences in its design, but they all follow the same underlying principles and conform to the same theoretical background:
Figure 5. Basic Comparison of Four Different ML Approaches (Figure by Author)
After training a machine learning model, you can embed it in an artificial intelligence system. Now, let’s talk about deep learning.
Deep Learning
Figure 6. Photo by Josh Riemer on Unsplash
Deep learning (DL) is a sub-field of machine learning that exclusively uses multiple layers of neurons to extract patterns and features from raw data. These multiple layers of interconnected neurons create artificial neural networks (ANNs). An ANN is a special machine learning algorithm designed to simulate the working mechanism of the human brain. There are many different types of artificial neural networks intended for several purposes. In summary, deep learning algorithms are a subset of machine learning algorithms.
Figure 7. An Example of an Artificial Neural Network (Figure by Author)
Just as in machine learning, all four approaches (supervised, semi-supervised, unsupervised, and reinforcement learning) can be utilized in deep learning. When data and computing power are abundant, deep learning almost always outperforms the other machine learning algorithms. Deep learning algorithms are instrumental in image processing, voice recognition, and machine translation. Convolutional Neural Networks, Recurrent Neural Networks, Autoencoders, Generative Adversarial Networks, and Transformer Networks are some examples of the artificial neural networks that make deep learning possible.
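To make "multiple layers of neurons" concrete, here is a minimal sketch (not from the article) of a two-layer network in plain Python. The weights are hand-picked for illustration so that the network computes XOR; in real deep learning they would be learned from training data.

```python
import math

def sigmoid(z):
    """Squash a neuron's weighted sum into the range (0, 1)."""
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: each neuron weights its inputs and adds a bias."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    hidden = layer(x, [[20, 20], [-20, -20]], [-10, 30])  # 2 hidden neurons
    output = layer(hidden, [[20, 20]], [-30])             # 1 output neuron
    return output[0]

# With these hand-picked weights the stacked layers compute XOR:
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward([a, b])))
```

XOR is a classic example because no single neuron can compute it — it takes the composition of layers, which is exactly what the "deep" in deep learning refers to.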
Data Science
Figure 8. Photo by Chris Liverani on Unsplash
Data science is an interdisciplinary field that sits at the intersection of artificial intelligence, domain-specific knowledge, information science, and statistics. Data scientists use various scientific methods, processes, and algorithms to obtain knowledge and draw insights from observed data.
In contrast with machine learning, a data science study's goal does not have to be model training. Data science studies often aim to extract knowledge and insight to support the human decision-making process without creating an AI system. Therefore, although there is an intersection between data science and the other adjacent fields, data science differs from them since it does not have to deliver an intelligent system or a trained model.
Big Data
Figure 9. Photo by imgix on Unsplash
Big data is a field that aims to efficiently analyze a large amount of data that cannot be processed with traditional data-processing methods and applications. Data with more observations usually brings more accuracy, while high complexity may increase false discovery rates. The field of big data studies how to efficiently capture, store, analyze, search, share, visualize, and update data when the size of a dataset is very large. Big data methods can be used in artificial intelligence (and its sub-domains) and in data science. Big data sits at the intersection of all the other fields mentioned above since its methods are crucial for all of them.
The Taxonomy Diagram
Now that we briefly covered all these fields, let’s see the relationship between these domains in a taxonomy diagram.
Figure 10. The Taxonomy of Artificial Intelligence and Data Science (Figure by Author)
This taxonomy is almost clear evidence for the reasons behind the ambiguity. Whenever we are talking about deep learning, we are also talking about machine learning and artificial intelligence. Some might call it a data science project or a big data project when working on a deep learning project. These naming practices are not necessarily incorrect, but they are confusing. Therefore, it is vital to know the intersections and subtractions of these fields.
Final Notes
Now you know the similarities and differences of these adjacent domains, as well as their intersections. In this post, I tried to clarify the differences, and I hope you can now easily differentiate them.
Subscribe to the Mailing List for the Full Code
If you would like to have access to full codes of my tutorial posts on Google Colab, and have access to my latest content, consider subscribing to the mailing list:✉️
If you are interested in deep learning, also check out the guide to my content on artificial intelligence: | https://towardsdatascience.com/4-intersecting-domains-that-you-can-easily-confuse-with-artificial-intelligence-2233cb6ad7d1 | ['Orhan G. Yalçın'] | 2020-12-19 15:52:33.213000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Technology', 'Data Science'] |
How To Host an Angular Static Website on Azure | How To Host an Angular Static Website on Azure
A step by step guide with an example project
Photo by William Iven on Unsplash
There are a number of ways you can build a website with Angular, such as Java with Angular, NodeJS with Angular, NGINX serving Angular, etc. For single-page applications, all you need to do is load the initial index.html. Once you load the index.html, the Angular framework kicks in and does the rest of the job, like loading components, making API calls, etc. What if there are no backend calls and you want to build a static website with Angular?
Azure CDN with blob storage is one of the options, and it provides a low-cost and highly reliable static website hosting solution. These static sites have only CSS, HTML, JS files, fonts, etc. In this post, we can see how we can build a static website with Angular and host it on Azure.
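The deployment itself happens through Azure tooling, but one practical detail worth seeing is that every file in the Angular build output must be uploaded with the correct Content-Type (a .js file served as application/octet-stream won't execute in the browser). As an illustrative aside — not from the article, and `plan_uploads` is a made-up helper name — here is how one might compute that upload plan with the Python standard library; the actual upload step (e.g., via the azure-storage-blob package) is omitted.

```python
import mimetypes
import tempfile
from pathlib import Path

def plan_uploads(dist_dir):
    """List (blob_name, content_type) pairs for every file in the build output."""
    plan = []
    for path in sorted(Path(dist_dir).rglob("*")):
        if path.is_file():
            blob_name = path.relative_to(dist_dir).as_posix()
            content_type, _ = mimetypes.guess_type(path.name)
            plan.append((blob_name, content_type or "application/octet-stream"))
    return plan

# Demo with a fake build folder (a real one would be Angular's dist/ output)
dist = Path(tempfile.mkdtemp())
(dist / "index.html").write_text("<html></html>")
(dist / "styles.css").write_text("body {}")
print(plan_uploads(dist))
# Each (name, type) pair would then be uploaded to the blob container
# that backs the static website.
```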
Example Project
Prerequisites
Host Static Website With Azure Blob Storage
Deliver With Azure CDN
Summary
Conclusion
Example Project
Here is an example project which we can put in the Azure blob storage for static website hosting. This is a simple profile page with a header and some sections.
// clone the project
git clone https://github.com/bbachi/my-profile.git

// install dependencies and start the project
npm install
npm start
You can clone the project and run it on your machine. Here is the demonstration when you run it on your localhost on port 4200. | https://medium.com/bb-tutorials-and-thoughts/how-to-host-an-angular-static-website-on-azure-1257eed9d47e | ['Bhargav Bachina'] | 2020-05-27 05:01:00.888000+00:00 | ['Angular', 'Programming', 'Web Development', 'Azure', 'Cloud Computing']
Java Generics as Assortment Box | What is Generics?
Java is a statically typed language, which means, you must declare a variable with its type before using it. Generics allow types to be parameters.
I will try to explain this saying in terms of the spices assortment box. I’m not a good cook and I don’t particularly understand spices. However, this one image immediately associates Generics with spices to my non-cooking mind.
Let’s try to create a “non generic” assortment box list and then let’s add paprika to the list.
Actually, IntelliJ is too smart to let you fall into that. It will warn you about the raw use of a parameterized class (a call to a type without any type argument) and will suggest that you "generify" Generics.java. I will suppress "rawtypes" warnings, just so we can see what happens.
So, you have a list of spices and now, you want to retrieve the paprika spice from the list:
Bang! Error! Tastes like cumin! 😖
We have two ways to fix it:
Specifically cast to Paprika:
2. Change spice type to Object (because most Java collection classes receive and return arguments of type Object under the hood):
Or, just listen to IntelliJ’s suggestion and generify Generics. I told you, it’s smart! (As you probably already noticed, I use (and love) IntelliJ; that’s why all the praise belongs to it.)
By adding the diamond operator <> containing the type, you narrow the specialization of this list to the Paprika type only. If you ever try to add some other spice to the defined list, you will get a compilation error.
Error! The compiler will suggest that you change the assortment box type to List<Cinnamon> .
As you can see, similar to the formal parameters used in method declarations, type parameters provide a way to reuse the same type (List, in our example) with different parameters (Paprika / Cinnamon).
The difference between type parameters and formal parameters is:
inputs to formal parameters are values;
inputs to type parameters are types.
In small programs, this might seem like unnecessary sugar (a condiment, not a spice); after all, it’s your code and you are aware at any given moment of the entities you are handling. However, in larger programs, this can add significant robustness (and flavor) and make the program easier to read.
Much the way Java objects have generic types, you can create your own generic objects (classes, interfaces and methods). | https://medium.com/swlh/java-generics-as-assortment-box-228988fa40d3 | ['Gene Zeiniss'] | 2020-08-05 09:28:44.054000+00:00 | ['Backend', 'Generics', 'Code', 'Java', 'Programming']
These are the Top 3 Missing Features in Google Data Studio in 2019 | Before we get into the list, I want to start off by giving the Google Data Studio project team kudos on the progress they have made have made since 2016. They were able to transform Data Studio from a beta product into a viable contender in the data visualization space. I am huge fan of the product and use it nearly every day of my life. That being said, there are some glaring holes in Data Studio’s feature set that keep me from recommending it over the likes of Microsoft Power BI and Tableau. If the Data Studio project team was able to implement the following features in 2019, I believe they would finally become the de facto data tool for the majority of organizations.
Feature 1: Detailed Visual Customization
One of the best things about Data Studio is how simple (and quick) it is to go from raw data to a beautiful visualization. However, if you are accustomed to the viz customization options available in Power BI and Tableau, you will quickly understand the limitations of Data Studio. This is an area the project team has been slowly iterating on, but there are some pretty basic features still missing (e.g. the ability to edit the size, format, and density of data labels).
“there are some pretty basic features still missing”
I have high hopes for the Community Visualizations that should start rolling out to more users in 2019, but I would prefer to see a concerted effort from the Data Studio team to really nail down the basics (data labels, color by metric, size by metric, axis padding, individual axis toggles, etc.) before offloading the work onto the community.
Feature 2: Extract and Blended Data Features
I wrote that Google Data Studio’s Extract feature was a game changer when it launched in 2018, but since I’ve been using it, I am constantly running into limitations that I hope are addressed in 2019. If you don’t understand the importance of extracts, I’ll explain it with one word, “speed”. An extract allows data to be stored in memory, which allows the queries to run locally. In my own tests, I was able to drop dashboard refresh times from 1.5 minutes to just under 5 seconds using an extract instead of a direct connection. It’s powerful stuff, but there are still some issues using an extract, which I will get into now.
For starters, there is currently no way to schedule a refresh of an extracted dataset, which renders extracts pretty useless for any dashboard that needs to be constantly up-to-date (aka most dashboards).
“…there is currently no way to schedule a refresh of an extracted dataset”
I’ve also run into issues with certain functions not working on extracted files like REGEXP_EXTRACT. Another limitation is the size of extracts. As of this posting, an extract can only be 100MB. I understand that a lot of the features I’m requesting require more storage and compute power, and Google might not be willing to take on more cost while keeping the product free. That said, I think the community would be fine with paying a small fee to unlock more power and storage if it meant delivering a better user experience for their dashboard viewers.
Blended Data was another feature released in 2018 to a lot of fanfare. Having the ability to blend data from multiple sources is checkbox #1 for any serious BI tool. While I’m happy to see a data blending option available in Data Studio, it’s not intuitive to use and, keeping with our theme, it’s extremely limited.
“[data blending is] not intuitive… and… extremely limited.”
I would love to see data blending move to a dedicated space in the dataset interface so that 1) I can do all of my data blending before I even start messing around with visualizations, and 2) I can save/reuse/share/manage my blended datasets like any other dataset in Data Studio. This is how most tools are setup and it’s more logical than trying to do everything in the dashboarding interface. While we’re on the topic, blended data calculations need to be updated. As of today, you can technically create a blended data calc, but it’s a mess and it doesn’t scale.
Feature 3: Functions, Parameters, and Grouping
As of today, Data Studio has the same function list as when it launched in 2016. It was super limited then, and it is still a frustrating aspect of the product. While Tableau and Power BI both offer 150+ custom functions to apply to your data, Data Studio has a paltry 56. I’ve heard through the grapevine that IF functions are in the roadmap, but they have a long way to go to catch up to some of the hard-to-live-without functions available in Tableau (e.g. FIXED) and Power BI (e.g. SUMX).
“[Data Studio] has a long way to go to catch up to some of the hard-to-live-without functions available in Tableau and Power BI”
Related to functions, Parameters and Grouping are two other features that have been noticeably absent since 2016. Tableau has the best implementation of Parameters and Grouping I’ve ever used, so hopefully the Data Studio team can copy what Tableau has already done.
Here’s Hoping
As I mentioned in the beginning of the post, I’m a huge fan of Data Studio, an active member in the community, and a daily user. I reiterate this point because I want readers to know that this list comes from a place of love.
“this list comes from a place of love”
I want Data Studio to succeed because it is a delightful product to use despite all of its shortcomings — here’s hoping that this reaches the right person and some of these features are considered.
What do you think? Are there any features that you’ve been waiting to see implemented in Data Studio? Feel free to write them down in the comments. | https://medium.com/compassred-data-blog/these-are-the-top-3-missing-features-in-google-data-studio-in-2019-7db175a99d64 | ['Patrick Strickler'] | 2019-01-07 21:40:36.300000+00:00 | ['Data Visualization', 'Google', 'Google Analytics', 'Google Data Studio', 'Data'] |
Anti-rivalrous conversation | Wholesomeness is the new punk rock—Akira The Don
Cain Slaying Abel, by Jacopo Palma, 1590
Anti-rivalry may be one of the more vital ideas circulating these days—a term I learned from Jordan Greenhall and Daniel Schmachtenberger, and which was invented by the economist Steve Weber. Its meaning is more or less self-explanatory but its implications are vast. According to Laurence Lessig, speaking about computer code and language in general: “It’s not just that code is non-rival; it’s that code in particular, and (at least some) knowledge in general, is, as Weber calls it, ‘anti-rival’. I am not only not harmed when you share an anti-rival good: I benefit.”
To be anti-rivalrous is, in essence, to reward good faith and excellence rather than scarcity and dog-eat-dog competition. To consume an anti-rivalrous production is to increase its value for yourself and others. To illustrate: a rivalrous product like Coca-Cola produces little sustainable value but a lot of addiction, thirst, and scarcity; whereas an anti-rivalrous production, such as a good story, a brilliant computer code, or a conversation, will increase value and knowledge rather than exhausting material resources.
The original inspiration for this article
The more you think about it, the more it makes sense that—for survival needs alone—we need to learn to act, think, and behave in an anti-rivalrous fashion. This term anti-rivalry is beautiful, for one thing because it allows us to bypass both hyper-capitalist and Marxist collectivist logic — the former being about rivalry for resources, the latter a rivalry of classes.
Anti-rivalry has been used mostly in an economic context but it may be helpful to think of it in psychological or even spiritual terms. Deep spirituality is anti-rivalrous by nature, as it is about the development of the soul rather than the ego. To be anti-rivalrous is to be generous, it is to decry bad faith, to be soulful rather than shallow. Only by renouncing rivalry can one occupy that in-between space which Martin Buber calls ‘thou’ instead of ‘it’. Only in the intimate space of anti-rivalry can you create real spiritual value rather than resource depletion.
Jordan Peterson’s often-used example of marriage illustrates the psychological benefits of anti-rivalry pretty well. If you want your marriage to succeed, he tells us, it’s better to lose one battle against your wife or husband than to crush him or her with your superior reasoning. This doesn’t mean, however, that conflict and competition—a fight now and then—aren’t also a good thing for the marriage. However, the basis of an anti-rivalrous relationship must be trust and generosity—anti-rivalry should be the higher principle that guides a long-term relationship. A good marriage is not about winning or losing, obviously, but flourishing over time.
“Dox, harass, troll, lie, smear, mock, distort, harangue, and preferably ruin” — Andrew Sullivan
The anti-rivalrous mentality is born in the death throes of the rivalrous mode, when toxic communication reaches its maximum pitch. As Andrew Sullivan has described in his recent article America, Land of Brutal Binaries: ‘Dox, harass, troll, lie, smear, mock, distort, harangue, and preferably ruin: those are the tools of the alt-right just as much as they are the tools of the woke left.” Internet communication brings out the worst in people and the entire toxic ‘alt’ community—but it also creates the possibility of long form conversations and deep learning. If the old media is dying, the dead matter of this caterpillar can become the butterfly of a new form of anti-rivalrous media. Or it can become the ‘alt’ monster Sullivan describes.
Anti-rivalry is the antidote to the toxicity of a polarised media. Its mode is a long form conversation rather than a revolutionary program for change. The old self, the old system, the old mechanical way of thinking, will die of its own accord; you don’t have to ‘overthrow the system’ to be anti-rivalrous particularly, you simply have to allow the behemoth of the collective or personal ego to collapse. Anti-rivalry could be a form of active non-doing like Zen meditation: the space for the answer to your Koan—or the seemingly impossible riddle of existence—to emerge.
An anti-rivalrous conversation can be awkward and may be less satisfying in the short term — but it will be alive rather than mechanical and formulaic. For the right ‘code’ of communication to emerge, a living space has to be first established. It is like a good conversation between friends—which is generative by nature. Such a conversation is more like a path through a forest than a highway; it is slow and meandering but rich, meaningful, and never expedient—again, it is a marriage rather than a one-night-stand.
Anti-rivalry could be a mode of listening and becoming, as opposed to rivalry that makes us either tone deaf or ideologically possessed. The point is: we cannot grow by trumpeting our ideologies and certainties, by erecting our ‘empire of dirt’. If you put two people in a rivalrous space, the result is conflict and polarisation and war. On the other hand, if you put two people together in an anti-rivalrous space, the result is friendship.
An anti-rivalrous conversation is different than a battle of wills: it creates an upward trajectory rather than a field of corpses. It is about lifting the other up, finding the beauty and meaning in the stranger other. It is also about finding the gravity and presence to avoid an ideological war—or actual war, for that matter.
The reason that long form podcasts and videos are popular these days is that they are, or at least can be, anti-rivalrous in spirit. When the culture becomes transparently toxic and superficial then people will look for meaning, depth, and long form anti-rivalrous conversation. As Akira the Don says, ‘wholesomeness is the new punk rock’.
Cain and Abel
The original cautionary tale of rivalry, the story of Cain and Abel, tells us why we need to adopt spiritual anti-rivalry. Cain kills his brother out of envy and resentment, and the cycle of violence and war begins. The logic of Cain is one of poverty and resentment. Rivalry and envy may have helped us build the shiny office towers of the modern world, but they are also what will bring them down in the end. Rivalry is the nightmare of history, the story of empires which rise and fall—the story of Cain. Rivalry is binary — it is brother against brother.
An anti-rivalrous conversation, on the other hand, requires time, space, and good faith. It is slower and more organic in its development; however, it can also simplify your life and be highly efficient. When people take the time to unfold their feelings and ideas, their souls appear, rather than their masks and personas—and the soul always knows what to do. The soul, meaning the greater aspect of the person, has no room for hubris and goes straight to the point.
Another thing to point out: anti-rivalry doesn’t mean that we have to throw away competition or even some free market mechanisms. Healthy competition can still remain as long as the anti-rivalrous spirit is the overarching principle. The difference is that an anti-rivalrous system rewards good faith rather than rent seeking, virtue rather than greed. Anti-rivalry is the bridge to a world of plenty rather than poverty, in the material, but also the deeper spiritual world.
This strategy is anti identity politics or cutthroat competition as well. It supports neither social justice kangaroo courts nor a libertarian dystopia based on Ayn Rand novels. Anti-rivalry creates value rather than waste — it is both altruistic and self serving. Perhaps it can only happen in a mixed economy that values intelligence and complexity over ideology.
The extraction economy (stealing a term from Jordan Greenhall again), with its rivalry dynamics, is not only unstable, but eats itself in the end — unless anti-rivalrous structures are instituted. The sharing economy is an example. However, the foundation for anti-rivalry lies in what is most intimate and near. To clean your own room first, in Petersonian lingo, is to create an anti-rivalrous relationship with your world. There is no harm and only good generated from an ordered and creative space.
To conclude: if we are too rivalrous with our intimate partner, we will end up destroying the relationship. So, too, with our relationship with the larger world. On the other hand, if we listen, appreciate, and lift each other up, especially when we are in conflict, there is no end to the learning and depth that can occur. The key point is that anti-rivalrous systems are not only sustainable and inexhaustible but they generate a richer and more complex environment. In a world of toxic communication and disappearing resources, anti-rivalrous conversations and economics may be our only chance to save ourselves.
Links:
Podcasts:
Sweeny vs Bard
Sweeny Verses
Rebel Wisdom Articles by Andrew Sweeny
Support or contact Andrew Sweeny:
Patreon
Twitter
Facebook
YouTube
Music and Poetry
Thanks Stephen Lewis for the edits | https://andrewpgsweeny.medium.com/anti-rivalrous-conversation-1a61c8db7cf | ['Andrew Sweeny'] | 2019-07-14 18:37:44.029000+00:00 | ['Christianity', 'Psychology', 'Religion', 'Economics', 'Jordan Peterson'] |
Episode 14: Factory-based Construction | Karim Khalifa: Change is hard, right? Everybody in the ecosystem has to take a step in the same direction to make it work.
Eric Jaffe: That’s Karim Khalifa, the Director of Product Design for Buildings at Sidewalk Labs.
Vanessa Quirk: You may remember Karim from Season 1 of City of the Future, when he helped us realize the potential of mass timber, which has a much lower carbon footprint than concrete or steel.
Karim Khalifa: Mass timber is the material of the future. The reason you go to mass timber is really for sustainability, but when you combine it with the factory you get speed and quality.
Eric Jaffe: According to Karim, these three benefits — quality, speed, and sustainability — could really convince the building industry as a whole to buy into factory-based construction.
Vanessa Quirk: And getting to these benefits starts with changing the design process itself.
Karim Khalifa: If you think of how architects work today, they will buy a window from a catalog, or a door from a catalog. So why does everybody have to design a floor plate from scratch? Why can’t they pick a floor plate from a catalog?
Eric Jaffe: To help standardize the design process, Karim and his team are developing what they call an architectural kit of parts. It’s just four building parts — but like different Lego pieces, they can be combined in countless ways.
Lily Huang: The idea is that these four kit of parts could create an infinite number of different mass timber buildings.
Vanessa Quirk: That’s Lily Huang, an architect who works on the Buildings team. We asked her to break down the kit of parts for us.
Lily Huang: There are four basic elements that form our kit, and they are facade panels, so that’s everything that wraps around the exterior of the building, including windows, doors and everything that’s part of that envelope. And then the next part is the floor cassettes, which encompasses the floor of one unit, but the ceiling above so it’s really that sandwich that is in between a unit. It includes fire safety such as sprinklers. It’ll include mechanical, so the HVAC and ducts, and then it’ll include acoustic layers to protect the sound from traveling.
Eric Jaffe: So we have facade panels and floor cassettes. What’s the third element?
Lily Huang: The third element is the structural component so that’s columns and beams, and these are the elements that hold up the building. And the last element is our interior partition modular walls.
Sidewalk Labs is developing an architectural kit of parts for factory-based construction that can be combined in different ways. (Image: Sidewalk Labs)
Vanessa Quirk: Okay, so keeping in mind that the benefits here are quality, speed, and sustainability, let’s start with quality. Lily, how does the kit of parts allow you as an architect to make a really high quality building, you know, one that doesn’t feel cookie-cutter?
Lily Huang: Yeah, you’d think that only having four parts would result in boring buildings, but it’s actually the opposite. Having the kit takes away the boring aspect of the design and lets the architect focus on the more creative parts.
Karim Khalifa: We want to leave some creativity for a sense of place, and how an architect might want a space to feel to a community, or to a neighborhood.
Karim Khalifa: And look, just to be clear, our kit of parts is not going to make a Frank Gehry building, right? He will have stretched and broken our parts. But there aren’t that many Frank Gehry buildings out there either. And so most of us are trying to build really nice buildings that really perform well, and that have some character. And we think we can fit that mold. We can actually offer really good building materials and building components that are assembled to people who normally wouldn’t be able to afford them.
Lily Huang: We actually studied which real-world buildings we’d be able to make with our kit of parts, and we found we could make 90 percent of the buildings in New York City. So, with four parts, you can really create a huge variety of building types.
Eric Jaffe: And what about speed? How does the kit of parts help with that?
Karim Khalifa: Yeah, so by using the kit of parts we can develop factory process lines that, with their precision machinery, are able to shape pieces of mass timber at a very high production rate.
Parts such as floor cassettes can be produced in advance then transported to construction sites. (Image: Sidewalk Labs)
Lily Huang: Yeah, mass timber is significantly lighter than concrete and steel and therefore simpler and more cost-effective to transport, as well as assemble with one or two people per part.
Karim Khalifa: Our kit of parts is similar to Legos. Each of the Lego blocks has little dots on top and it actually has a recess in the bottom, so when they come to your construction site you can actually fit them together because they’ve been designed that way.
Eric Jaffe: So the pieces are designed for machines and factory workers to produce super fast, and they’re shipped to construction workers who are trained to put them together almost as quickly as Legos.
Lily Huang: And it’s also important to note that because the pieces have been cut so precisely with these machines, they come together like perfect puzzle pieces. Which isn’t just important for speed of assembly, but actually for sustainability, because airtight buildings require less energy to heat and cool.
Vanessa Quirk: That’s a really good point, Lily. So let’s get deeper into that sustainability piece. What are other ways that factory-based construction offers sustainability gains? Off the bat, I could imagine that this process must be less wasteful, since we know exactly how much material we need from the start and can make exactly that amount, nothing more.
Lily Huang: Yeah, and we would also have a bill of materials. So, each part would have X amount of wood, glass, fasteners, insulation, and with that, we would know the carbon footprint of each of these components, and of the whole building, before we’ve assembled it.
Karim Khalifa: So when you assemble the whole building, you get the list of kit of parts, you get the price of all of your parts, and you get the sustainability factor readout as well. So in today’s world, that’s really not — you’re not able to do that. You keep selecting materials and asking for data about its sustainability, and some materials have it and some don’t. But because we’re going to do this over and over again, we can now ask for the people that are supplying those materials to provide us the sustainability of that material.
Vanessa Quirk: And, with mass timber as the building material, Karim and Lily think we could take this idea even further. You could track not only the carbon emissions of a piece of timber, but even how sustainably it was harvested. And you could press the industry to keep improving.
Lily Huang: Recognizing that buildings are about 40 percent of carbon emissions, I think the way that we build now has to radically change if we’re going to address that. If we really want to address the climate change problem in the way that we need to in the next 5 or 10 years, we really need to change up the design process. And the answer to that, I think, is mass timber.
Eric Jaffe: And it’s this piece of the puzzle — the fact that mass timber is not just light and strong, but better for our planet — that might just be the most important piece of all.
Karim Khalifa: Manufactured buildings are a great opportunity to get efficient buildings built, but when you combine it with mass timber — which is a super-sustainable material — you’re ending up with a better building and doing something that’s better for the environment all at once.
Vanessa Quirk: And Karim isn’t the only one who’s bought into this idea. In fact, in the Pacific Northwest — a movement of factory-constructed mass timber buildings is already gaining momentum. | https://medium.com/sidewalk-talk/episode-14-factory-based-construction-364098346d33 | ['City Of The Future'] | 2020-11-20 13:28:39.402000+00:00 | ['City Of The Future', 'Cities', 'Construction', 'Sustainability', 'Building'] |
Python *args and **kwargs — Data Science Edition

*args
Let’s say you want to declare a function for summing numbers. There’s one problem with this function by default — it only accepts a fixed number of arguments. Sure, you can get around this by using only a single argument of type list — and that’s a viable alternative. Let’s explore it for a bit.
Down below we have your regular function for summing numbers, expecting a single argument of type list:
def sum_numbers(numbers):
    the_sum = 0
    for number in numbers:
        the_sum += number
    return the_sum
We can use it to find the sum:
numbers = [1, 2, 3, 4, 5]

sum_numbers(numbers)
>>> 15
But what if you don’t want to use a list? *args to the rescue:
def sum_numbers(*args):
    the_sum = 0
    for number in args:
        the_sum += number
    return the_sum

sum_numbers(1, 2, 3)
>>> 6

sum_numbers(1, 2, 3, 4, 5)
>>> 15
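One detail worth adding (not in the original): the two approaches interoperate. If your numbers already live in a list, a single asterisk unpacks it into the *args version of the function:

```python
def sum_numbers(*args):
    the_sum = 0
    for number in args:
        the_sum += number
    return the_sum

# Unpack an existing list into separate positional arguments
numbers = [1, 2, 3, 4, 5]
print(sum_numbers(*numbers))  # 15
```

So switching a function to *args doesn't lock you out of list inputs, it just moves the asterisk to the call site.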
Yes, I hear you — this isn’t the best use case since we can use lists as a replacement. But we have a couple more examples under our belt — the first being unpacking.
List unpacking
The idea of unpacking is to, well, unpack any iterable object. The single asterisk * is used to unpack any iterable, and the double-asterisk ** is used only for dictionaries. You’ll quickly get the gist of it.
Let’s say we have the following list:
num_arr = [1, 2, 3, 4, 5]
The process of unpacking it is straightforward — and already covered by our nifty sum_numbers() function:
print(*num_arr)
>>> 1 2 3 4 5
In a minute or so we’ll talk about dictionary unpacking — for now, let’s wrap this section with list concatenation.
List concatenation
Another useable aspect of *args is list concatenation. Let’s say we have two lists:
nums1 = [1, 2, 3]
nums2 = [4, 5, 6]
How would we concatenate them into a single list? If your answer is somewhere along the lines of iterating through both and storing values to the third list, then you’re not wrong (per se), but there’s an easier and more elegant option. Take a look at the following code:
nums = [*nums1, *nums2]
nums
>>> [1, 2, 3, 4, 5, 6]
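Unpacking also mixes freely with literal elements, so you can pad or extend while concatenating (a small extension of the example above):

```python
nums1 = [1, 2, 3]
nums2 = [4, 5, 6]

# Literals and unpacked lists combine in a single expression
nums = [0, *nums1, *nums2, 7]
print(nums)  # [0, 1, 2, 3, 4, 5, 6, 7]
```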
And that’s how easy it is. Let’s move along to **kwargs, something a bit more applicable to your everyday data science tasks. | https://towardsdatascience.com/python-args-and-kwargs-data-science-edition-978e16c7c2fc | ['Dario Radečić'] | 2020-06-14 18:12:49.430000+00:00 | ['Machine Learning', 'Data Science', 'Python', 'Towards Data Science', 'Programming'] |
Visualizing NFL Free Agency as a Node Network

The first wave of NFL games is in the books. Le’Veon Bell is a Jet. Odell Beckham Jr. had to turn over his blue uniform for a brown (Browns) one. How many players did your favorite team lose this offseason? Who did they pick up? Where’s everybody going?
Every year, hundreds of players switch teams and sign big contracts in a mass exodus called NFL Free Agency. If you’re just tuning in for the beginning of the NFL season, your team might look very different from when you last saw them.
How do you make sense of player movement across teams and see the whole picture? I wondered the same thing, so I created a node network to visualize all of NFL free agency on one page.
Demo: https://nfl-fa-2019.surge.sh/
Zoomed out node network of 2019 NFL free agency
The NFL offseason is an integral retrospective of a team’s performance when they determine how to rebuild or retool their rosters within the constraints of the salary cap. During this phase, general managers seek talent in the NFL draft and in free agency. Free agents are players whose contracts have expired and, as such, are free to sign with any team.
Free Agency 2019 OverTheCap
Currently, to understand NFL free agency you must sift monotonously through tabulated records of players. For example, the 2019 free agency table from OverTheCap lists each player along with their old and new teams. Finding information for a particular player is easy, but understanding why a team might have made certain decisions is lost in the rows and columns.
I wanted to know how many players the Carolina Panthers signed this offseason, how many they lost, and how much money they spent on their newly signed players. I wanted to see which teams are attracting top talent and which teams are losing it. With each player acquisition, I wanted to evaluate whether or not a team improved upon their weaknesses from last season. If a team struggled defensively, I wanted to know whether or not they looked to improve their defense by signing defensive linemen, safeties, or linebackers. Most importantly, I wanted to discover new insights and relationships hidden amongst the rigidity of the table structure above.
Essentially it boils down to one question: What is the best way to view all types of player movement and easily drill down on specific player information?
Simple Node Network
My answer is to visualize free agency as a node network. A node network is a direct representation of a graph, which is made up of edges (links) and vertices (nodes). Links are connections between nodes to show their respective relationships. This visualization is very tempting since it shows the flow of all players between teams and encodes various information into the size of the nodes and colors of the links.
But a node network comes with its own drawbacks. Searching for a particular set of nodes becomes difficult with large amounts of data. The network is often cluttered and requires a great deal of tedious tweaking and optimization, and encoding that much information makes a lot of assumptions about the audience, especially one that is not familiar with NFL lingo or semantics. However, it was still worth a shot. | https://medium.com/nightingale/visualizing-nfl-free-agency-as-a-node-network-d0b00e5ad4f2 | ['Advaith Venkatakrishnan'] | 2019-09-18 16:05:47.372000+00:00 | ['Sportsviz', 'NFL', 'D3js', 'Dataviz', 'Sports'] |
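To make the graph framing concrete, here is a minimal plain-Python sketch (illustrative only, not the article's actual visualization code) that models the two signings from the article's intro as edges between player and team nodes:

```python
from collections import defaultdict

# Each edge links a free agent (node) to the team that signed him (node).
# In the real visualization, node size and link color encode extra
# information such as contract value and position.
edges = [
    ("Le'Veon Bell", "Jets"),
    ("Odell Beckham Jr.", "Browns"),
]

signings_per_team = defaultdict(list)
for player, team in edges:
    signings_per_team[team].append(player)

for team, players in signings_per_team.items():
    print(f"{team}: {len(players)} signing(s) ({', '.join(players)})")
```

Counting edges per team node like this answers exactly the kind of question (how many players did a team pick up?) that a table makes tedious and a graph makes immediate.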
Multi-Class Text Classification with PySpark

Apache Spark is quickly gaining steam both in the headlines and real-world adoption, mainly because of its ability to process streaming data. With so much data being processed on a daily basis, it has become essential for us to be able to stream and analyze it in real time. In addition, Apache Spark is fast enough to perform exploratory queries without sampling. Many industry experts have explained why you should use Spark for machine learning.
So, here we are now, using Spark Machine Learning Library to solve a multi-class text classification problem, in particular, PySpark.
If you would like to see an implementation with Scikit-Learn, read the previous article.
The Data
Our task is to classify San Francisco Crime Description into 33 pre-defined categories. The data can be downloaded from Kaggle.
When a new crime description comes in, we want to assign it to one of 33 categories. The classifier makes the assumption that each new crime description is assigned to one and only one category. This is a multi-class text classification problem.
Input: Descript
Example: “STOLEN AUTOMOBILE”
Output: Category
Example: VEHICLE THEFT
To solve this problem, we will use a variety of feature extraction techniques along with different supervised machine learning algorithms in Spark. Let’s get started!
Data Ingestion and Extraction
Loading a CSV file is straightforward with Spark csv packages.
from pyspark.sql import SQLContext
from pyspark import SparkContext

sc = SparkContext()
sqlContext = SQLContext(sc)

data = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('train.csv')
That’s it! We have loaded the dataset. Let’s start exploring.
Remove the columns we do not need and have a look at the first five rows:
drop_list = ['Dates', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']

data = data.select([column for column in data.columns if column not in drop_list])
data.show(5)
Figure 1
Apply printSchema() on the data which will print the schema in a tree format:
data.printSchema()
Figure 2
Top 20 crime categories:
from pyspark.sql.functions import col

data.groupBy("Category") \
    .count() \
    .orderBy(col("count").desc()) \
    .show()
Figure 3
Top 20 crime descriptions:
data.groupBy("Descript") \
    .count() \
    .orderBy(col("count").desc()) \
    .show()
Figure 4
Model Pipeline
Spark Machine Learning Pipelines API is similar to Scikit-Learn. Our pipeline includes three steps:
1. regexTokenizer: Tokenization (with Regular Expression)
2. stopwordsRemover: Remove Stop Words
3. countVectors: Count vectors (“document-term vectors”)
from pyspark.ml.feature import RegexTokenizer, StopWordsRemover, CountVectorizer
from pyspark.ml.classification import LogisticRegression

# regular expression tokenizer
regexTokenizer = RegexTokenizer(inputCol="Descript", outputCol="words", pattern="\\W")

# stop words
add_stopwords = ["http","https","amp","rt","t","c","the"]
stopwordsRemover = StopWordsRemover(inputCol="words", outputCol="filtered").setStopWords(add_stopwords)

# bag of words count
countVectors = CountVectorizer(inputCol="filtered", outputCol="features", vocabSize=10000, minDF=5)
StringIndexer
StringIndexer encodes a string column of labels to a column of label indices. The indices are in [0, numLabels), ordered by label frequencies, so the most frequent label gets index 0.
In our case, the label column (Category) will be encoded to label indices, from 0 to 32; the most frequent label (LARCENY/THEFT) will be indexed as 0.
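To build intuition for that frequency ordering before the Spark code, here is the equivalent logic in plain Python (an illustrative sketch, not part of the original pipeline):

```python
from collections import Counter

labels = ["LARCENY/THEFT", "ASSAULT", "LARCENY/THEFT",
          "VEHICLE THEFT", "LARCENY/THEFT", "ASSAULT"]

# Labels ranked by frequency; the most frequent gets index 0.0
ordered = [label for label, _ in Counter(labels).most_common()]
label_to_index = {label: float(i) for i, label in enumerate(ordered)}

print(label_to_index)
# {'LARCENY/THEFT': 0.0, 'ASSAULT': 1.0, 'VEHICLE THEFT': 2.0}
```

Spark's StringIndexer fits this mapping on the data and stores it in the model, so prediction indices can later be mapped back to category names with IndexToString.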
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler

label_stringIdx = StringIndexer(inputCol = "Category", outputCol = "label")

pipeline = Pipeline(stages=[regexTokenizer, stopwordsRemover, countVectors, label_stringIdx])

# Fit the pipeline to training documents.
pipelineFit = pipeline.fit(data)
dataset = pipelineFit.transform(data)
dataset.show(5)
Figure 5
Partition Training & Test sets
# set seed for reproducibility
(trainingData, testData) = dataset.randomSplit([0.7, 0.3], seed = 100)
print("Training Dataset Count: " + str(trainingData.count()))
print("Test Dataset Count: " + str(testData.count()))
Training Dataset Count: 5185
Test Dataset Count: 2104
Model Training and Evaluation
Logistic Regression using Count Vector Features
Our model will make predictions and score on the test set; we then look at the top 10 predictions with the highest probability.
lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0)
lrModel = lr.fit(trainingData)

predictions = lrModel.transform(testData)

predictions.filter(predictions['prediction'] == 0) \
    .select("Descript","Category","probability","label","prediction") \
    .orderBy("probability", ascending=False) \
    .show(n = 10, truncate = 30)
Figure 6
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
0.9610787444388802
The accuracy is excellent!
Logistic Regression using TF-IDF Features
from pyspark.ml.feature import HashingTF, IDF

hashingTF = HashingTF(inputCol="filtered", outputCol="rawFeatures", numFeatures=10000)
idf = IDF(inputCol="rawFeatures", outputCol="features", minDocFreq=5)  # minDocFreq: remove sparse terms

pipeline = Pipeline(stages=[regexTokenizer, stopwordsRemover, hashingTF, idf, label_stringIdx])

pipelineFit = pipeline.fit(data)
dataset = pipelineFit.transform(data)

(trainingData, testData) = dataset.randomSplit([0.7, 0.3], seed = 100)

lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0)
lrModel = lr.fit(trainingData)

predictions = lrModel.transform(testData)

predictions.filter(predictions['prediction'] == 0) \
    .select("Descript","Category","probability","label","prediction") \
    .orderBy("probability", ascending=False) \
    .show(n = 10, truncate = 30)
Figure 7
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
0.9616202660247297
The result is the same.
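For intuition about what the TF-IDF features represent, here is a simplified plain-Python sketch (illustrative only; Spark's HashingTF additionally hashes terms into a fixed number of buckets):

```python
import math

docs = [["stolen", "automobile"],
        ["stolen", "property"],
        ["grand", "theft", "automobile"]]

def tf_idf(term, doc, all_docs):
    tf = doc.count(term) / len(doc)                 # term frequency in this doc
    df = sum(1 for d in all_docs if term in d)      # document frequency
    idf = math.log((len(all_docs) + 1) / (df + 1))  # Spark's IDF formula
    return tf * idf

# "stolen" appears in 2 of 3 documents, so it is down-weighted
# relative to the rarer "grand"
print(round(tf_idf("stolen", docs[0], docs), 3))  # 0.144
print(round(tf_idf("grand", docs[2], docs), 3))   # 0.231
```

Terms that show up in most crime descriptions contribute little, while distinctive terms dominate the feature vector, which is why the two feature sets give similar results here.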
Cross-Validation
Let’s now try cross-validation to tune our hyperparameters; we will only tune the count-vector Logistic Regression model.
pipeline = Pipeline(stages=[regexTokenizer, stopwordsRemover, countVectors, label_stringIdx])

pipelineFit = pipeline.fit(data)
dataset = pipelineFit.transform(data)
(trainingData, testData) = dataset.randomSplit([0.7, 0.3], seed = 100)

lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0)

from pyspark.ml.tuning import ParamGridBuilder, CrossValidator

# Create ParamGrid for Cross Validation
paramGrid = (ParamGridBuilder()
             .addGrid(lr.regParam, [0.1, 0.3, 0.5])  # regularization parameter
             .addGrid(lr.elasticNetParam, [0.0, 0.1, 0.2])  # Elastic Net Parameter (Ridge = 0)
             # .addGrid(model.maxIter, [10, 20, 50])  # Number of iterations
             # .addGrid(idf.numFeatures, [10, 100, 1000])  # Number of features
             .build())

# Create 5-fold CrossValidator
cv = CrossValidator(estimator=lr, \
                    estimatorParamMaps=paramGrid, \
                    evaluator=evaluator, \
                    numFolds=5)

cvModel = cv.fit(trainingData)
predictions = cvModel.transform(testData)
# Evaluate best model
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
0.9851796929217101
The performance improved.
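Under the hood, a 5-fold CrossValidator trains on four folds and evaluates on the held-out fifth, for every parameter combination in the grid. The fold bookkeeping itself is simple; here is a plain-Python sketch of the idea (illustrative, not Spark's implementation):

```python
def k_fold_indices(n_rows, k):
    """Yield (train_indices, validation_indices) for each of k folds."""
    fold_size = n_rows // k
    indices = list(range(n_rows))
    for fold in range(k):
        start, stop = fold * fold_size, (fold + 1) * fold_size
        validation = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, validation

for train, validation in k_fold_indices(10, 5):
    print(len(train), len(validation))  # 8 2 on every fold
```

With the 3 x 3 grid above (regParam x elasticNetParam) and 5 folds, CrossValidator fits 45 models before refitting the best parameter combination on the full training set.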
Naive Bayes
from pyspark.ml.classification import NaiveBayes

nb = NaiveBayes(smoothing=1)
model = nb.fit(trainingData)

predictions = model.transform(testData)

predictions.filter(predictions['prediction'] == 0) \
    .select("Descript","Category","probability","label","prediction") \
    .orderBy("probability", ascending=False) \
    .show(n = 10, truncate = 30)
Figure 8
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
0.9625414629888848
Random Forest
from pyspark.ml.classification import RandomForestClassifier

rf = RandomForestClassifier(labelCol="label", \
                            featuresCol="features", \
                            numTrees = 100, \
                            maxDepth = 4, \
                            maxBins = 32)

# Train model with Training Data
rfModel = rf.fit(trainingData)

predictions = rfModel.transform(testData)

predictions.filter(predictions['prediction'] == 0) \
    .select("Descript","Category","probability","label","prediction") \
    .orderBy("probability", ascending=False) \
    .show(n = 10, truncate = 30)
Figure 9
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
0.6600326922344301
Random forest is a very good, robust, and versatile method; however, it’s no mystery that it’s not the best choice for high-dimensional sparse data.
Logistic Regression with cross-validation is clearly our model of choice for this experiment.
This brings us to the end of the article. The source code that created this post can be found on Github. I look forward to hearing any feedback or questions.

Source: https://towardsdatascience.com/multi-class-text-classification-with-pyspark-7d78d022ed35 (Susan Li, 2018-02-20)
Fish Camp and Family

The family was poor in the sense that there was little available cash, says Cora, “but the land provided for us, just as it had provided for our ancestors over thousands of years. We didn’t have toys — for Christmas, our presents were moccasins and mukluks my mother sewed from moose skins she had tanned. But we had everything we needed, and I was so happy there.”
But though the fish camp formed the physical parameters of Cora’s life, her mother knew the family — indeed, all Upper Tanana Athabascan people — straddled two worlds. And the outside world was encroaching on the traditional world Cora knew. From mid-autumn through early spring — when she wasn’t at fish camp — Cora attended a Bureau of Indian Affairs school in Northway. And when she was around ten, her mother sent her to a boarding school in Wrangell.
“My mother couldn’t read or write, and she felt that we needed to get formal educations to negotiate the modern world,” says Cora. “But I didn’t want to leave her. They put me on a little plane at Northway kicking and screaming, and I kicked and screamed when they transferred me to other planes at Fairbanks and Juneau. I was incredibly homesick — traumatized, really. Even today, more than 50 years later, I sometimes get a little twinge when I’m on a plane because that memory comes back.”
The following years were tough for Cora. In the 1960s, Alaska Native children were punished for speaking their primary languages in school, and tribal cultures were denigrated.
“I wasn’t allowed to speak Upper Tanana Athabascan, and they cut my long hair,” recalls Cora. “My mother was terribly upset when she found out. In general, Native kids were just treated poorly. I wanted to come home, but mom wouldn’t let me, insisting that I needed to be educated. She’d send me dried fish and meat to make me feel better.”
Ultimately, says Cora, “I just went within myself for protection, and I stayed there.”
Things started to turn around after Cora graduated high school. A school counselor, Dorothy Johnson, took the young girl under her wing and helped her find work. The genuine concern and care expressed by her mentor was transformational, Cora says — she describes herself as a butterfly emerging from its chrysalis.
“Dorothy made me feel worthy again,” Cora says, “and when I returned to Northway I got a job at the airport lodge, doing everything from waitressing to cleaning rooms. And I did the best job that I could possibly do. My mother taught me that. There’s a saying in our language, which essentially translates as ‘Do the best you know how,’ no matter what it is. That ethic kept us alive in our family fish camp, and that’s still how I live my life today.”
Cora married her husband, Glenn Demit, in 1966, and they eventually had four children. She became an alcoholism counselor and still maintains her certification “because sometimes people just want to talk. Sometimes they need to call and check in, and I always want to be available to help.”
Later, she became a special education aide for the Northway school district.
“I had that position for 10 years and I loved every minute of it,” says Cora. “Working with those kids was one of the most rewarding things I’ve ever done.”
Cora’s life changed again in 1991, when the U.S. Fish and Wildlife Service posted an opening for a Park Ranger at the Tetlin National Wildlife Refuge. She applied for the job and got it, and later became a Refuge Information Technician, charged with overseeing Tetlin’s visitor center in Tok. | https://alaskausfws.medium.com/fish-camp-and-family-ab696df39981 | ['U.S.Fish Wildlife Alaska'] | 2020-11-18 18:43:54.519000+00:00 | ['Environment', 'Native Americans', 'Education', 'Tradition', 'Culture'] |
What I Fear Most About Freelance Writing

This Job is Horrifying
Photo by Daniel Jensen on Unsplash
When I was a teenager, I had a clear vision of my adult self. She was free-spirited, with flowing blonde hair, a fresh baguette in one hand, a notebook in the other. She was outdoorsy and effortless. She had a large, bounding retriever at her heels. She worked for herself and took no shit.
Life had other ideas.
Now, I’m in my mid-30s, having spent the better part of two decades taking shit. I’ve worked countless dead-end jobs I’ve hated for salaries I’ve hated even more. I’m more goth than boho, my appearance requires more effort than I’d care to admit, and I don’t buy baguettes because I can’t comfortably eat that much bread. I have a pomeranian sleeping by my feet.
There came a time when I realized that I couldn’t do it any more. I couldn’t pretend to care about a thankless job. I couldn’t stomach the politics and the hypocrisy. I couldn’t force a smile to countless other drones over lunch. I could be a freelance writer and be the one thing that teenage me envisioned. I could be the independent woman that I wanted to be.
The thought was thrilling. It was empowering. It was downright horror-film-level terrifying.
Photo by Melanie Wasser on Unsplash
I’m afraid of writing.
Writing is hard. There are days when it feels like you’re waist-deep in mud, trying to slog through to the bottom of the page. There’s no glamour here. I’m not lounging by a pool, sipping a cocktail and typing out well-crafted paragraphs at great speed. More often than not, I’ve got a lukewarm cup of tea, and I’m struggling not to throw my laptop across the room in frustration. I write some absolute junk sometimes. I have awful ideas and even worse sentence structure. I feel like I’ve forgotten the only language I know.
And it’s all there in front of me, on screen. It’s staring back at me, glaring with its beady, haunting eyes, judging me with every keystroke.
I’m afraid of being honest.
Forget about me speaking my truth. I don’t want to do it. The truth can be ugly and unpleasant. It makes me uncomfortable. Who knows how my authentic self might cloud the opinions that people have of me. I might publish something that inadvertently upsets someone. I could say something that my friends disagree with. Maybe my family will be disappointed and disown me.
What if no one ever looks me in the eye again?
There’s a dagger hanging tenuously above me, reminding me that I’m always on then brink of typing the wrong thing, leaving myself vulnerable, and garnering disgust and hatred from those closest to me. No thanks.
I’m afraid that I will fail horribly.
I’ve put all of my eggs in this thread-bare basket. I can’t afford to fail at this. But, that’s a huge, petrifying possibility. I don’t have it in me to return to an office job. It’s soul-sucking. It’s draining. It’s a slow, living nightmare that plays out for 40 hours a week until you’re finally consumed by it and you slump over in your uncomfortable chair.
It’s the thing that frightens me the most.
Failing at writing is a life sentence with no chance at parole and every time I sit down to write, I’m sitting awaiting trial. My lawyer is a hack and my anxiety is taking over.
I’m not entirely convinced I’ll ever get over any of this. There’s always something waiting around the corner to get me, but for now, I guess I just keep writing. | https://medium.com/swlh/what-i-fear-most-about-freelance-writing-1917ba099d0 | ['Rachee Ross'] | 2019-09-25 17:52:23.215000+00:00 | ['Freelancing', 'Life Lessons', 'Fear Of Failure', 'Writing'] |
A World of Fairy Tales

After this first meeting with the lakes of the region, I go to Gérardmer to start my second loop. It takes about 1h45 to go around this second lake, so I decide to have my lunch break right after to rest my legs and regain strength before continuing my walk.
As I walk along the trail that borders the west side of the water body, turning my back to the city, I hear the clink of my metal canteen in my backpack. That’s all it takes to throw me back twenty-five years or so as I climb the stony paths of the Vercors mountains, with my father in the lead and my mother and sister closing the march.
When we would go hiking for the day, or even for the week with two donkeys — my best memories — we would take turns carrying the water bottles, which we would fill when drinking water sources were available. In addition to a large blue plastic water bottle, we also had two smaller ones made of metal, including a dented orange one that must have seen more countries than any experienced hiker.
I can still hear the sound of that canteen hanging on my belt, the characteristic click of its closing system. I remember us taking breaks to drink and catch our breath before resuming our assent. I remember my dad studying his IGN map to make sure his family doesn’t get lost en route and gets safely to their destination.
I hear him again telling me to be careful where I set foot and to look ahead while walking. I can see again his big brown leather hiking boots with big red laces. My dad, solid as a rock, helping me through difficult passages by supporting me with his steady hand.
All these images suddenly come back to me, without me expecting it.
Halfway around the lake of Gérardmer, I stop to nibble on a cereal bar on a small rock by the water’s edge. Two ducks approach me and deliver a charming tirade that makes me happy. I regret not speaking their language. | https://medium.com/scribe/a-world-of-fairy-tales-bbcb90426fd5 | ['Thomas Gaudex'] | 2020-12-17 17:00:06.848000+00:00 | ['Life', 'Nature', 'Stories', 'Travel', 'Writing'] |
How Smart Telescopes Aim to Bring Stargazing and Astronomy to All

Bojan Stojkovski · Dec 17
Credit: Vaonis
With the current health crisis hitting most aspects of people’s lives, many have turned to new hobbies that suit social distancing and staying at home.
Stargazing and amateur astronomy are two activities to benefit from people’s enforced isolation, with lockdowns across the globe leading to an increase in people observing the night skies and a rise in sales of astronomical instruments.
But despite the attraction of these activities, some people have found the use of traditional telescopes hard work, especially those who are new to amateur astronomy. It’s a problem that a young French company is trying to tackle.
Vaonis, a Montpellier-based startup founded in 2016 and specializing in the production of astronomical instruments, recently launched its newest device, Vespera. It’s a cross between a smart telescope and a camera and is expected to cost about $1,500.
Credit: Vaonis
In October, the company launched a 30-day pre-order campaign and managed to attract $2.5m in worldwide billing, making Vespera the most-funded project in the category of space exploration and the most-funded tech or hardware project in France.
Vespera’s app lets an astronomer control the telescope from their smartphone to select and home in on the celestial object they want to observe. The telescope will then point at the object and track it.
Vaonis says apart from the device’s automatic pointing and tracking system, it also employs intelligent and powerful image processing with autofocus. Vespera calibrates itself using its owner’s phone GPS and the company’s star-recognition technology.
During the quarantine, the French company saw the number of orders, uses, and shared photos created using its devices more than double. Aiming to shake up the field, Vaonis has also teamed up with former NASA astronauts Scott Kelly and Terry Virts, utilizing their experience and expertise.
According to Virts, smart telescopes like Vespera are there to bring astronomy closer to almost everybody.
“I’ve owned multiple telescopes over my life, including reflectors and refractors. I’ve always enjoyed them, and the thrill of seeing objects in the night sky” Virts tells ZDNet.
“But frankly it is too much work to drag a typical six-inch or eight-inch or even 12-inch telescope outside at night, align it, and attach astrophotography devices. Most telescopes are never used.”
Now, by having this technology at hand, observers can spot deep sky objects in just minutes, Virts explains.
Credit: Vaonis
“It makes it possible to see amazing deep-sky objects that are thousands or sometimes millions of light years away. Observations like that are simply not possible with conventional telescopes without a significant amount of expertise, time, and heavy and expensive equipment.”
It took Vaonis nine months to complete the project, which is the company’s second product. The technology behind the device combines optics, electronics and high-precision mechanics.
For the new product, the company was able to exploit much of the work that had gone into its first project, an observation device called Stellina.
Credit: Vaonis
“A lot of the work required to get Stellina out the door wasn’t feature specific, but involved things like camera integration, mobile app communication and deployment, firmware management, user interface design, bug tracking,” the company said.
“Having that foundation means we can spend more time optimizing performance and testing features.”
Now, it reckons its products can also lead to a new generation of astronomical technology for observing deep space.
“It can inspire a whole new generation of instruments to observe the universe. It’s not only to observe — it’s to capture, share, and learn, all in one product.” Cyril Dupuy, founder and CEO of Vaonis, tells ZDNet.
“Regarding the rest of the industry, it’s too early to say but it will certainly encourage other companies to create products with a better user experience than the traditional telescope.”
In addition to being the smallest smart telescope in the world, the device is also the only instrument to offer sky observers a shared and interactive experience around the stars, while respecting the precautions of use imposed by COVID-19, thanks to the way it allows remote observation on screen, the company adds.
According to James Sweitzer, a Kickstarter supporter of the project, the use of such devices will help more people understand the fascination of deep space.
“As a lifelong astronomer and planetarium developer, I’m interested in their use for education,” he says.
Credit: Vaonis
“But I firmly believe their most important impact will be to lift the veil on the deep universe and bring joy and wonder to many people. They are like admission to the infinite planetarium.” | https://medium.com/an-idea/how-smart-telescopes-aim-to-bring-stargazing-and-astronomy-to-all-ebc64739c745 | ['Bojan Stojkovski'] | 2020-12-22 16:21:44.965000+00:00 | ['Astronomy', 'Telescope', 'Stargazing', 'Space', 'Startup'] |
Critical Point of Traditional Marketing and Digital Marketing

The internet is quickly changing the world as we know it. For the first time in history, a single businessperson can compete on equal footing with large, multinational firms. Anyone can sell something, collect the money for the item, then go purchase it and send it to the buyer. Today, armed with nothing more than information and a little bit of know-how, people are becoming millionaires.
There are many facets of traditional marketing and examples might include tangible items such as business cards, print ads in newspapers or magazines. It can also include posters, commercials on TV and radio, billboards and brochures.
The world of digital marketing continues to evolve and as long as technology continues to advance, digital marketing will as well. Examples of digital marketing include things like websites, social media mentions, YouTube videos, and banner ads. Specifically, digital marketing is similar to traditional advertising, but using digital devices.
Nearly half of all smartphone users have used their phones while shopping in brick-and-mortar stores — 40% of them to compare competitors’ prices. Statistics for who’s scanning QR codes, and with what device, appear to be mixed, although most data places iPhone users at the top, with the user age range being 25–34. Japan and the U.S. are currently leaps and bounds ahead of other countries in QR code scans, at around 60%, with Canada and the U.K. trailing dozens of percentage points behind.
Now creative digital advertisers use them for innovative promotions and to give new life to boring places. The famous Angry Birds and Instagram created display adverts for their offline promotions with QR-code-shaped designs. Both QR codes take you directly to the app download, and are a great showcase for both ad creativity and self-explanatory promotion of the apps themselves.
The Sukiennice Museum in Poland has added a whole new effect to its paintings, turning each one into a story with a QR code. Visitors can scan the QR code for a particular painting and get the inside scoop directly from a YouTube video.
Still, QR code creation jumped a whopping 1,253% in 2011, with two million of them created in less than three months. By far, most were used to lead users to a web address, but they can also store vCard details, Google Maps info and even YouTube video links.
If you haven’t seen them in your favorite newspaper or magazine, it’s called a Quick Response Code, or a QR code. In short, let’s just say it’s like a bar code, but better. The difference is that a QR code will take you instantly to a web page when scanned by a smartphone. It’s one of the latest cool technologies that helps bridge the virtual world and the physical world.
My conclusion: the QR code is the critical point of digital and traditional marketing. | https://medium.com/enfection/critical-point-of-traditional-marketing-and-digital-marketing-bede0ee7d91f | ['Madhawa Chandrasiri'] | 2017-03-16 09:27:25.522000+00:00 | ['Advertising', 'Marketing', 'Digital Marketing', 'UX', 'Qr Code'] |
Ruby vs. Python: What’s the Difference? | Which is better Ruby or Python?
I’ve used both Ruby and Python in my work — and while they’re similar, they’re also different in some critical ways. It’s a popular question, but an important one, so let me explain the difference between Ruby and Python.
Difference Between Ruby and Python
To set the stage, I first learned web development through Python (and the Python framework called Django). After spending four years building Django apps, I got a job doing Ruby on Rails and expected the transition to be straightforward. That’s when it became clear to me that the two languages and frameworks are very different and it’s not so easy to jump from one to the other.
Now Observe…How are they different?
The Language:
The Ruby on Rails web framework is built using the Ruby programming language while the Django web framework is built using the Python programming language.
This is where many of the differences lie. The two languages are visually similar but are worlds apart in their approaches to solving problems.
Ruby is designed to be infinitely flexible and empowering for programmers. It allows Ruby on Rails to do lots of little tricks to make an elegant web framework. This can even feel magical at times, but the flexibility can also cause problems. For example, the same magic that makes Ruby work when you don’t expect it to can also make it very hard to track down bugs, resulting in hours of combing through code.
Python takes a more direct approach to programming. Its primary goal is to make everything visible to the programmer. This sacrifices some of the elegance that Ruby has but gives Python a big advantage when it comes to learning to code and debugging problems efficiently.
A great example that shows the difference is working with time in your application. Imagine you want to get the time one month from this very second. Here is how you would do that in both languages:
Ruby
require 'active_support/all'
new_time = 1.month.from_now
Python
from datetime import datetime
from dateutil.relativedelta import relativedelta
new_time = datetime.now() + relativedelta(months=1)
Notice how Python requires you to import specific functionality from DateTime and dateutil libraries. It’s explicit, but that’s great because you can easily tell where everything is coming from.
With the Ruby version, a lot more is hidden behind a curtain. We import some active_support library and now all of a sudden all integers in Ruby have these “.days” and “.from_now” methods. It reads well, but it’s not clear where this functionality came from within active_support. Plus, the idea of patching all integers in the language with new functionality is cool, but it can also cause problems.
Neither approach is right or wrong; they emphasize different things. Ruby showcases the flexibility of the language while Python showcases directness and readability.
Web Frameworks
Django and Rails are both frameworks that help you to build web applications. They have similar performance because both Ruby and Python are scripting languages. Each framework provides you all the concepts from traditional MVC frameworks like models, views, controllers, and database migrations.
Each framework has differences in how you implement these features, but at the core, they are very similar. Python and Ruby also have many libraries you can use to add features to your web applications as well. Ruby has a repository called RubyGems, and Python has a repository called the Python Package Index (PyPI).
Community
Python and Ruby have substantial communities behind them. Each community influences the direction of the language, updates, and the way software is built. However, Python has a much broader community than Ruby does. There are a ton of academic use cases in both math and science where Python has thrived, and it continues to grow because of that momentum. Python is also pre-installed on almost every Linux computer, making it the perfect language for use on Linux servers (a.k.a. the most popular servers in the world).
Ruby’s popularity kicked off when Rails came out in 2005. The community proliferated around Rails and has since been incredibly focused on web development. It has also become more diverse, but not near the level of diversity that Python has reached.
Usage
Who is using these programming languages? Quite a lot of companies. Both Ruby and Python are widespread in the tech world.
There are many famous websites built with Python including Google, Pinterest, Instagram, National Geographic, Mozilla Firefox, and the Washington Post. Similarly, there are just as many Ruby on Rails website examples. Notable companies using Ruby on Rails including Apple, Twitter, Airbnb, Shopify, Github, and Groupon.
Should I learn Python or Ruby first?
Ruby saw a spike in popularity between 2010–2016, but it seems like the industry is trending towards Python. Here’s one way to help you make a decision: If you already have a specific client, job, or project lined up that requires you to know Ruby, learn Ruby. If not, learn Python first. Keep in mind there is a difference between Python 2 and Python 3. If you’re new to coding then I’d recommend you start with the latest version — Python 3
Conclusion: Ruby vs. Python?
Anything you can do in Ruby on Rails you could also do in Python and Django. Which framework is better isn’t a question of capability. The better question might be: which language is better suited for your or your team?
If you plan on sticking with building web applications, then consider prioritizing Ruby on Rails. The community is good and they are always on the bleeding edge. If you are interested in building web applications, then I’d recommend learning Ruby on Rails; Python is also good to learn.
RUBY Vs PYTHON
Ruby Overview:-
LANGUAGE
More magical
Created in 1995 by Yukihiro Matsumoto
PROS
Tons of features out of the box for web development
Quick to embrace new things
CONS
Can be very hard to debug at times
WEB FRAMEWORKS
Ruby on Rails: started in 2005 by David Heinemeier Hansson
COMMUNITY
Innovates quicker but causes more things to break
Very web-focused
USAGE
Apple
Twitter
Github
Airbnb
Groupon
Shopify
Python Overview:-
LANGUAGE
More Direct
Created in 1991 by Guido van Rossum
PROS
Very easy to learn
A diverse community with big ties to Linux and academia
CONS
Often very explicit and inelegant to read
WEB FRAMEWORKS
Django: started in 2003 by Adrian Holovaty and Simon Willison
COMMUNITY
Very stable and diverse but innovates slower
Used widely in academia and Linux
USAGE | https://medium.com/quick-code/ruby-vs-python-whats-the-difference-bc67538445d3 | ['Sandhya Reddy'] | 2019-12-23 19:35:53.806000+00:00 | ['Framework', 'Python', 'Ruby on Rails', 'Python3', 'Ruby'] |
Codable in Swift and iOS | The implementation: codable
A typical application takes data from an endpoint and decodes it into a Swift model. Since structs typically represent data and are value types, they are suitable for models (over class, although you need to make your own decision about this).
This user model conforms to the Codable protocol. This means that we can choose to either encode to, or decode from, the Swift UserModel.
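The model definition itself isn’t included in this extract; a minimal sketch of such a Codable struct might look like this (the property names here are assumptions for illustration, not the original article’s fields):

```swift
import Foundation

// A hypothetical user model; conforming to Codable gives us
// both Encodable and Decodable for free.
struct UserModel: Codable {
    let id: Int
    let name: String
}
```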
We can encode this through the following function:
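The function itself is missing from this extract; a minimal sketch using Foundation’s JSONEncoder (assuming a UserModel struct that conforms to Codable) might look like:

```swift
import Foundation

// Encode a UserModel instance into JSON Data.
func encode(user: UserModel) throws -> Data {
    let encoder = JSONEncoder()
    return try encoder.encode(user)
}
```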
We then can decode user model:
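Again, the original function isn’t in this extract; a minimal sketch using Foundation’s JSONDecoder might look like:

```swift
import Foundation

// Decode JSON Data back into a UserModel.
func decode(from data: Data) throws -> UserModel {
    let decoder = JSONDecoder()
    return try decoder.decode(UserModel.self, from: data)
}
```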
The issue with the above is that this is the simple case. The JSON format exactly matches the model, which is a rather lovely synergy of back end and front end. | https://stevenpcurtis.medium.com/codable-in-swift-and-ios-12a1415b9aa6 | ['Steven Curtis'] | 2020-07-26 12:54:29.252000+00:00 | ['Swift Programming', 'Swift', 'Programming', 'Software Engineering'] |
Why does Amazon Pay Zero Taxes | Yes, individuals can deduct a few expenses, such as mortgage interest, medical expenses, and so forth. But usually this doesn’t exceed, say, 10 or 15% max. Corporations, on the other hand, can deduct all of their expenses. In fact, for individuals, if they somehow find too many deductions and the total gets too high, there is something called the alternative minimum tax (AMT) which they will end up having to pay. This rule keeps individuals from deducting too much.
So, in the case of Amazon, a notoriously low margin business, where frequently their income statement looks something like this:
$100 - $100 = $0
They end up paying very little tax just due to their low margin.
The government encourages risk
If there is no profit, there is no federal income tax.
One reason that Silicon Valley and the explosion of venture capital and startups exists here in the United States is due to the tax code. This is a good thing in that we now have companies like Amazon, as well as Apple, MicroSoft, Google, Facebook, SalesForce, Tesla and I could go on and on.
All of these companies have one thing in common. At the very beginning they made no money. Sometimes for many, many years.
Amazon, by design, was one of these companies for longer than just about any other. Jeff Bezos’ entire philosophy for Amazon was to sell every item for as low a price as possible, where the margin was as close to zero as possible, so that he could undercut competition and capture as much market share as he could.
Or simply put, he decided Amazon would grow their customer base instead of trying to make a profit.
Net Operating Losses (NOL)
The tax code also allows these startup companies to roll forward those losses they have in the early years.
So, for example, if the first five years of Amazon profit/(loss) look like this: $(3M), $(2M), $(1M), $(0M), $1M. Then even in year five, when Amazon finally made a profit of $1M, they would pay no federal income tax, because they can use past losses to offset current profits for up to twenty years. In fact, in this example, in years 1–4 Amazon had a $(6M) loss, and used up $(1M) of that loss in year five. But they still have a $(5M) NOL to use up in future years.
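That five-year example can be checked with a small calculation. This is just an illustrative sketch; the 21% rate is an assumption for the example, not Amazon’s actual effective rate:

```python
# Illustrative NOL carryforward: losses from earlier years offset later profits.
def tax_with_nol(profits, rate=0.21):
    """Return (tax owed each year, remaining NOL), applying loss carryforward."""
    nol = 0.0  # accumulated losses available to carry forward
    taxes = []
    for profit in profits:
        if profit <= 0:
            nol += -profit             # bank the loss for future years
            taxes.append(0.0)
        else:
            offset = min(nol, profit)  # use up banked losses first
            nol -= offset
            taxes.append((profit - offset) * rate)
    return taxes, nol

# Years 1-5 from the example, in millions: three loss years, a break-even
# year, then a $1M profit fully absorbed by the $6M of banked losses.
taxes, remaining_nol = tax_with_nol([-3, -2, -1, 0, 1])
print(taxes)          # year 5 owes 0.0 despite the profit
print(remaining_nol)  # 5.0 -> $5M of NOL still available
```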
This is another reason Amazon pays so little in taxes. They had losses for so long and built up such big NOL’s that they were able to use those to offset their taxes even once they started showing a profit.
Interesting aside: a lot of those NOL’s were from their early years. They were founded in 1995. Only in the last couple of years has Amazon started to show a significant profit, having made over $11B in each of 2018 and 2019 and already having net income of $14.1B through September 30, 2020. So, they will likely have close to $20B in profit in 2020.
But the tax rule is you can only carryforward NOL’s for twenty years.
Analysts have been talking about this. Their theory, as to why Amazon has been showing a profit lately, is that Amazon’s success has taken off at a trajectory where they cannot spend fast enough to keep their margin close to zero. I wonder if, these past two years, they weren’t using up all of their NOL’s as they got close to expiring. To be honest, the analysts probably have it right, or Amazon wouldn’t have made $20B in profit in 2020. But I still bet a conversation happened back in 2016 or 2017 where someone said we should let up a little on the pressure to spend every penny, to take advantage of these expiring NOL’s over the next couple of years.
Depreciation, Share Based Compensation and R&D Expenses
Ok, those were the major reasons that Amazon pays so little but I just want to cover a couple of other reasons that are pretty significant to Amazon as well.
To understand depreciation, first you need to understand that there are actually two sets of books for accounting: one for taxes and one for financial reporting. The one for taxes is kept and used for filing tax returns. The one for financial reporting is kept for filing financial reports such as the quarterly and annual 10-Q’s and 10-K’s filed with the Securities and Exchange Commission. These are the reports used by analysts to tell you the earnings, revenue, PE ratios, and all of the other financial figures you hear being thrown around, including income and loss.
So, when you hear some idiot say something like this.
“Which means they paid about 1.4% of their total net income in federal income taxes.” ~ quote by this guy right here
They are actually comparing apples to oranges in a way. Since, the tax amount comes from the tax books and the income amount comes from the financial reporting books.
Depreciation
Ok, so I told you all of this to explain depreciation book-tax differences to you. (In common accounting speak, book refers to the financial reporting books, and tax refers to the tax books.)
As we all know Amazon spends a lot of money on infrastructure. After all, how do they get that package to you so fast. Lots and lots of infrastructure, massive well equipped and automated warehouses, with tons of high tech, bleeding edge equipment. Not to mention, planes, trucks, vans and the tons of office spaces for their hundreds of thousands of employees.
All of this infrastructure they can expense through depreciation.
But David, didn’t you say that they deduct all their expenses above. And that’s why their margin is so low? So, isn’t this factored into your explanation above?
Well no, because another way Congress has juiced the tax code is by allowing companies to depreciate their buildings and equipment faster under the tax set of books than under the financial reporting set of books.
So, for example, if they buy a piece of equipment for $1M, under the financial reporting books they might depreciate it over a ten-year period and record $100K of expense each year. But the tax code might allow them to depreciate it over five years, so they can record $200K of expense on their tax returns and lower their taxes.
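The $1M example works out like this (straight-line depreciation is assumed here purely for illustration):

```python
# Book vs. tax depreciation of the same $1M asset (straight-line for simplicity).
cost = 1_000_000
book_years, tax_years = 10, 5

book_expense = cost / book_years  # financial-reporting books
tax_expense = cost / tax_years    # tax books

print(book_expense)  # 100000.0 per year on the book side
print(tax_expense)   # 200000.0 per year on the tax side, cutting taxable income faster
```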
I point this out. Because this is a major way that Amazon is able to manage their taxes. If it looks like they might be close to making some money in the next year or two, they might go on an infrastructure spending spree. We see them announce these all the time, where they are about to spend a few billion more building out their warehouses and shipping capabilities.
This is an easy way for them to manage their taxes and the only result is a much more massive company at the end of it.
R&D Expenses
Amazon also spends a lot of money developing new technologies. Think of their website, Amazon Prime, AWS and all of the technologies surrounding those.
Again you say, David, if all of their expenses are included in that equation you showed me above, aren’t these R&D expenses already captured in your explanation above?
To understand this one you’ll have to understand the difference between a tax deduction and a tax credit. Above all of those expenses end up being deductions and reduce their income. A tax credit allows them to subtract these expenses from the actual amount of tax paid.
So, two examples:
A deduction. A company makes $100 and has $80 in deductions. They end up with $20 on which they pay taxes. At a 40% tax rate they pay $8 in taxes.
A credit. A company makes $100 and has $60 in deductions. They end up with $40 on which they pay taxes. At a 40% tax rate they pay $16. But let’s say that missing $20 is able to be applied in this example as a credit, which gets directly applied to the tax owed. So, they end up with negative $4 in taxes. That’s right: they get $4 back from the government.
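Those two examples can be checked with a quick calculation (the 40% rate comes from the examples themselves):

```python
RATE = 0.40

# Example 1: a deduction reduces taxable income before the rate is applied.
income, deductions = 100, 80
tax_with_deduction = (income - deductions) * RATE   # -> 8.0

# Example 2: a credit is subtracted from the tax bill itself.
income, deductions, credit = 100, 60, 20
tax_before_credit = (income - deductions) * RATE    # -> 16.0
tax_with_credit = tax_before_credit - credit        # -> -4.0, i.e. a refund

print(tax_with_deduction, tax_with_credit)
```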
So, yeah, you can see tax credits are much more valuable than deductions. As the government again is trying to incentivize growth. In this case they want to encourage companies to go out and develop new technologies.
America wants Silicon Valley on that wall, they need Silicon Valley on that wall… so, congress puts things like R&D tax credits in the tax code.
Stock Compensation
If you’ve read articles similar to this about why Amazon doesn’t pay taxes you’ve probably seen this explanation. This is a favorite one to explain Amazon’s lack of taxes.
And while I agree. It’s not the biggest reason at all. So, we’ll cover this quickly.
Employees receive stock options. Say an employee receives an option worth $20K. The company doesn’t end up giving out any actual cash, since the employee is receiving a part of the company. But the company still gets to record that $20K as an expense (albeit over a few years).
This is one major way that Amazon can have a loss yet still have positive cash flow. In case, over the past couple thousand words, you were wondering how Amazon can keep losing money and still exist as a company: well, it’s things like this (along with, in the early stages of the company, simply receiving cash directly from investors to fund the losses).
So, is this all a problem? If it is a problem, what’s the solution?
So, first of all, I actually agree with a lot of the things that the tax code has. I don’t mind that the tax code tries to incentivize risk and allows for things like the net operating losses. I don’t mind that they want to incentivize research and development that, hopefully, leads to new technologies.
I think a lot of these types of incentives are what leads the United States to be one of the better places in the world to do business, and is what has made us so prosperous.
But I do think there is a limit. Amazon has reached it. And so for that matter have Google, Facebook and all of the other tech behemoths.
Put an AMT on companies of a certain size
My simple solution is to do what Congress did to individuals with the alternative minimum tax (AMT) that I discussed above.
We would apply the AMT to revenue, i.e. what we tax individuals on. Just being fair here.
I don’t want to discourage startups or the risk-taking by those involved in funding startups. So, we could make a cap of say $1B in revenue. To be honest, we could make it $10B. We could start at $10B, see what that does, and if needed lower it in the future. Some studies would have to be done here to figure out the cap as well as the AMT percentage.
Then simply apply a percentage of, say, 1% to every company that has over $10B in sales; if they had calculated less than, in this example, $100M in federal income taxes, then they would have to pay the difference. | https://medium.com/datadriveninvestor/why-does-amazon-pay-zero-taxes-7fc42cf59a39 | ['David Ferrara'] | 2020-12-08 17:55:04.422000+00:00 | ['Capitalism', 'Taxes', 'Amazon', 'Economics', 'Politics'] |
My journey in the world of startups: intro | I became interested in the world of startups as a student at the University of Oulu. Sometime in the early days of 2018, I participated in an event called Pitching in the Kitchen — Master Public Speaking workshop with Mats Kyyrö, hosted by the Business Kitchen Oulu and organised by the Oulu Entrepreneurship Society. The event is similar to Polar Bear Pitching, but without the cold, the wet, the stress, the pressure… You get the point. I loved it. Mats was great, everybody was friendly and supportive. I learnt a lot and was ready for more. So, although I was starting to be a bit swamped and stressed as the end of my master’s studies was approaching and I had not been able to find a supervisor for my thesis, I registered to participate in the Startup Weekend Oulu.
Future entrepreneurs gather on a Friday evening to pitch an idea and find a team to work with. After the teams are formed, they are presented with a set of guidelines that they have to follow and some rules they must abide by. For instance, they cannot get in touch with friends or family to sell them their ideas, products, services. They work more or less night and day on their projects, and then on Sunday evening they present them to a jury, who then decides the winner.
Sounds pretty easy, but unfortunately, things did not go well. For my team, at least. We took the wrong path early on in the process and did not get the support we needed to bounce back. Sadly, the team split up after almost two days of struggle. Since too many teams broke up during the second day, there were not enough places on the other teams, so some of us chose to leave the event.
This experience did slow me down but did not manage to make me lose interest. On the contrary, I moved closer to finding my path. In my country, Romania, I worked for 7 years for a language and business training center, where I collaborated with my fellow trainers on various projects, managed teams and projects myself, and wrote training and marketing content. The environment was very agile, as was the management. Our clients were multinationals, embassies, and banks, such as British American Tobacco, the American Embassy, and the Romanian Commercial Bank, to name just a few. In other words, I did have the training and experience to survive in the startup world. But how could a 41-year-old language trainer and program manager fit the profile of the up-and-coming young and hip players of the startup environment?
As you may have guessed already, the story is not over. I did not lose heart, and this spring, back in my dear, lovely Espoo, I was asked by a friend to volunteer as a Team Lead for the Helsinki Startup Crawl 2019. Of course, I said Yes! I also volunteered to call startups to follow up on emails or contact some startups for the first time. Some didn’t answer my calls, some sent text messages that they are busy, some called back. Nevertheless, every person I talked to was polite, friendly, and honest. I even managed to find a VC company, a supporter of startups, who were happy to join the Crawl at the last moment. This is a great event for students, and I am sorry to have missed it in Oulu, where it was organised this year for the 11th time, but well, let’s not stumble over details. During this event, students join a track that they are interested in, such as tech, business, sustainability, or games, depending on the startups represented there, and go from startup to startup. Each startup organises a presentation, a game, a competition, or even a case study, so that the students can gain more in-depth knowledge about the startup, and of course, enjoy themselves.
While all this was happening, another friend forwarded me a link to The Shortcut’s Catalyst Program info session, which was happening the next day. I went there to check it out with the friend from the Helsinki Startup Crawl and we both ended up joining The Shortcut as volunteers first, and then as part of the Talent and HR team respectively.
And the journey has just begun… | https://medium.com/the-shortcut/my-journey-in-the-world-of-startups-chapter-1-e2fb1d8b84f4 | ['Alina-Diana Välinen'] | 2019-07-28 07:47:47.168000+00:00 | ['Make New Friends', 'Journey Of Life', 'Startup', 'Volunteer', 'New Path'] |
Don’t Do Analytics Engineering in Snowflake Until You Read This (Hint: dbt) | Don’t Do Analytics Engineering in Snowflake Until You Read This (Hint: dbt)
Using dbt To Create Tables Using Custom Materialization
by Venkatesh Sekar
Imagine you had an Analytics Engineering solution (think CI/CD for database objects) that worked with Snowflake Cloud Data Warehouse and is…
Open-source
Easy to understand and learn if you are SQL savvy ~ 3 days
Git versionable
Designed with visual lineage in mind
A great way for your analytics teams to get better visibility into data pipelines
Well…it’s here and it’s called dbt!
As I have continued to work with numerous customers across our Snowflake client base, the need for a SQL-centric data transformation and data ops solution has come up time and again and dbt has really stepped up to take on the challenge.
I’m going to take you through a great use case for dbt and show you how to create tables using custom materialization with Snowflake’s Cloud Data Warehouse.
Notably, if you haven’t tried out Snowflake yet and are looking to leverage the cloud for your data warehouse or moving an existing on-premise data warehouse to the cloud (Netezza, Teradata, Exadata, etc.), you should hands-down try out Snowflake — it just works.
So let’s first dive into dbt and then focus on our use case.
What is dbt?
dbt is a command line tool based on SQL and is primarily used by analysts to do data transformations. In other words, it does the ‘T’ in ELT.
It facilitates writing modular SELECT SQLs and takes care of dependencies, compilation, and materialization in run time.
If you are interested in learning more about dbt here are some quick links on the getdbt site:
Read the dbt viewpoint
Overview on dbt
Source tables in dbt
In dbt, a “Source Table” holds data on which data transformations are done. The transformations are SELECT SQL statements which are joined together and then materialized into tables.
When developing scripts or models with dbt, the main statements are in the SELECT SQL dialect. There are no create-or-replace statements written in model statements. This means that dbt does not offer methods for issuing CREATE TABLE statements that can be used for source tables. It’s up to the user to define these outside of dbt.
Macros in dbt
But if you look closely at how dbt offers customization or enhancement to be developed using Macros, you will realize that these are pretty much Jinja templates.
And if you search around the code base on GitHub, you will come across common macros such as:
create_view_as
get_columns_in_relation
drop_relation_if_exists
alter_column_type
truncate_relation
Materialization in dbt
Materializations are strategies for persisting dbt models in a warehouse such as Snowflake. There are four types of materializations built into dbt. They are:
table
view
incremental
ephemeral
dbt also offers the capability to develop Custom Materializations as well. Knowing this, it led me to investigate if I could use this functionality to develop a custom materialization that would do the following:
Process a model file which contains a “CREATE TABLE” statement
Identify if a column has been added/updated/dropped in the definition and issue an alter statement accordingly
A complete, full refresh of the data
Backup the table before doing any modifications
Migrate the data after the table has been modified
As mentioned, I’m using Snowflake as the database of choice for this example, but dbt does work with other databases as well — so let’s get going on our example.
Defined Macros
I have defined the following macros in snowflake_helper_macros
Persistent Table Materialization
I have defined the custom materialization persistent_table_materialization to handle the above-defined needs. In short, the implementation has the following logic:
Our dbt Model
Below is an example of the model file which now could be materialized by dbt. The example is here CONTACT
{{ config(materialized='persistent_table'
         ,retain_previous_version_flg=false
         ,migrate_data_over_flg=true
)}}

CREATE OR REPLACE TABLE "{{ database }}"."{{ schema }}"."CONTACT" (
    FIRST_NAME VARCHAR(100),
    LAST_NAME VARCHAR(100),
    EMAIL VARCHAR(100),
    STREETADDRESS VARCHAR(100),
    CITY VARCHAR(100)
);
Walking Through dbt Execution Examples
To see this materialization in action, here is a walkthrough with screenshots on the various facilities.
full-refresh
Let’s start off with no tables defined in Snowflake. A “full-refresh” flag would mean to create the table as if nothing existed. Should the table exist, it will recreate the table (due to ‘CREATE OR REPLACE’ in the model).
dbt -d run -m CONTACT --full-refresh
The table is now created and I have inserted some sample records manually. Here is the screenshot:
full-refresh with Migrate Data Enabled
Let’s do a full-refresh with the ‘migrate_data_over_flg’ set to true
config(materialized='persistent_table'
      ,retain_previous_version_flg=false
      ,migrate_data_over_flg=true
)
Here is the command you want to issue:
dbt -d run -m CONTACT --full-refresh
Again, the table is recreated and I have inserted some sample records manually.
Backup Previous Version
Let’s go through an example of how to retain the previous copy and see what happens after migration.
The screenshot below reflects the CONTACT table as in INFORMATION_SCHEMA.TABLES before refresh:
For this we set the flag ‘retain_previous_version_flg’
config(materialized='persistent_table'
      ,retain_previous_version_flg=true
      ,migrate_data_over_flg=true
)
We issue the command to do a full-refresh as usual:
dbt -d run -m CONTACT --full-refresh
The screenshot below reflects the various CONTACT tables in Snowflake as in INFORMATION_SCHEMA.TABLES after refresh:
dbt backed up the table ‘CONTACT_DBT_BACKUP_20191006125145387106’ and also retained the rows (look at row count). Due to the ‘migrate_data_over_flg’ it has also migrated over the previous set of data.
Add column
Now I want to add a column ‘LAST_UPDATED’ to the definition.
CREATE OR REPLACE TABLE "{{ database }}"."{{ schema }}"."CONTACT" (
    FIRST_NAME VARCHAR(100),
    LAST_NAME VARCHAR(100),
    EMAIL VARCHAR(100),
    STREETADDRESS VARCHAR(100),
    CITY VARCHAR(100),
    LAST_UPDATED DATE
)
Notice that I’m not writing an ‘ALTER TABLE’ statement; through dbt you will see this happen. Issue the command, and do not set the full-refresh flag.
dbt -d run -m CONTACT
This results in dbt issuing an ‘ALTER TABLE’, as in the log below:
The screenshot reflects the Snowflake table structure after this update:
Also, note that the existing records are not deleted as this was an alter statement.
Drop column
Now let’s remove the ‘LAST_UPDATED’ column.
CREATE OR REPLACE TABLE "{{ database }}"."{{ schema }}"."CONTACT" (
    FIRST_NAME VARCHAR(100),
    LAST_NAME VARCHAR(100),
    EMAIL VARCHAR(100),
    STREETADDRESS VARCHAR(100),
    CITY VARCHAR(100)
)
Same thing — I am not doing an ‘ALTER TABLE’ statement.
dbt -d run -m CONTACT
As before, dbt issues an ‘ALTER TABLE’, as you can see in the log below:
The database structure in Snowflake in the screenshot below:
Are There Any Limitations To Be Aware Of?
If you are using this approach with dbt, keep the following points in mind:
You cannot use this model in a ‘source’ or a ‘ref’ call
Do not ask dbt to do a ‘run’ across the entire set of models, as this could result in recreating the tables accidentally (although you’d have a backup if the flags were set)
Where Should You Go From Here
With the capability that dbt brings to the table for creating tables using custom materialization, I feel very good recommending dbt for the database objects in your CI/CD pipeline. Also, you can access everything in my git repo here.
You should also check out John Aven’s recent blog post (a fellow Hashmapper) on Using DBT to Execute ELT Pipelines in Snowflake.
If you use Snowflake today, it would be great to hear about the approaches that you have taken for Data Transformation and DataOps along with the challenges that you are addressing.
Need Snowflake Cloud Data Warehousing and Migration Assistance?
If you’d like additional assistance in this area, Hashmap offers a range of enablement workshops and consulting service packages as part of our consulting service offerings, and would be glad to work through your specifics in this area.
How does Snowflake compare to other data warehouses? Our technical experts have implemented over 250 cloud/data projects in the last 3 years and conducted unbiased, detailed analyses across 34 business and technical dimensions, ranking each cloud data warehouse.
To listen in on a casual conversation about all things data engineering and the cloud, check out Hashmap’s podcast Hashmap on Tap as well on Spotify, Apple, Google, and other popular streaming apps.
Other Tools and Content You Might Like | https://medium.com/hashmapinc/dont-do-analytics-engineering-in-snowflake-until-you-read-this-hint-dbt-bdd527fa1795 | [] | 2020-09-22 20:43:04.319000+00:00 | ['DevOps', 'Dbt', 'Snowflake', 'Cloud Computing', 'Open Source'] |
Reconnecting with the Art of Coding | I empathize with my friend’s feelings because I also went through something similar, like other developers I know. It’s true that, as time goes by, it’s easy to come to a point where we lose the spark of why we do the thing we do. This can open the door to a whole bunch of issues, like bitterness and boredom.
That day, he and I talked for a while about the importance of stimulating our creativity and the fact that it should be more encouraged in schools.
But this conversation also brought me some kind of breakthrough. It reconnected me to something I discovered naturally in my first years of programming: the concept of code as an art form.
The parallel between code and poetry is not new.
I started doing some research when it first hit me that reading beautiful code has the same effect on me as reading poetry. I was happily surprised to see that I wasn’t alone and that other people felt something similar.
Code has purpose and meaning. It requires structure. It should be lightweight and elegant, not bogged down with lines and lines of garbage. Writing great code isn’t something that just happens. It takes discipline and work! It’s an art unto itself. - Matt Ward, The Poetics Of Coding
These kinds of findings got me digging deeper into what other programmers thought about code as a form of art and what I found is beautiful.
Some people took the concept of semantics and syntax in programming languages to another dimension.
See, for example, the goals of the International Obfuscated C Code Contest:
to write the most Obscure/Obfuscated C program within the rules
to show the importance of programming style, in an ironic way
to stress C compilers with unusual code
to illustrate some of the subtleties of the C language
to provide a safe forum for poor C code
which gives results such as this completely valid program.
In this kind of approach, the idea is not to make something useful, it’s to play with the language in a beautiful and skillful way. It’s to write a program whose objective is purely to demonstrate a creative approach to the language itself.
Others took the concept of code as poetry and injected their unique personality in some very entertaining ways.
Dylan Beattie is a programmer, guitarist and old 80s rock lover. He created a programming language called Rockstar that allows you to write code in the manner of classic 80s song lyrics.
Yeah. For real.
There’s also a whole movement of creative programmers that use their skills as a musical performance tool.
Take the example of Sonic Pi, which leverages the simplicity of Ruby’s syntax, or TidalCycles which utilizes the power of functional programming with Haskell.
Code is now something you can not only play with but also share with an audience who can appreciate your skills and enjoy your creative process with you. This kind of performance is not limited to music alone and can also be applied to visuals through frameworks like Hydra.
Others are fascinated by the minimal.
There’s a creative universe to be found in the purity of mathematics and byte operations. The demoscene has been focused for years in creating the most beautiful and interesting results with programs that use a minimum of computing resources and memory.
Viznut, an active member of the demoscene, discovered bytebeat: one-line C programs that output raw byte data which can then be fed to a computer’s sound interface. The results are interesting music pieces, sometimes with unsuspected complexity. | https://medium.com/earth-to-abigail/reconnecting-with-the-art-of-coding-4710a89d1c34 | ['Mynah Marie'] | 2020-06-24 15:26:00.857000+00:00 | ['Creativity', 'Art', 'Software Development', 'Technology', 'Programming'] |
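The flavor of bytebeat is easy to reproduce. Below is a minimal Python sketch that evaluates one such formula (a representative bytebeat-style expression, not necessarily one of viznut's own) and writes the raw 8-bit samples to a file; the originals are one-line C programs that pipe this stream straight to the sound device:

```python
def bytebeat(t):
    # one bitwise-arithmetic formula in the classic bytebeat style
    return (t * (t >> 5 | t >> 8)) & 0xFF  # mask keeps it in the 0-255 byte range

# five seconds of raw 8-bit mono audio at an 8 kHz sample rate
samples = bytes(bytebeat(t) for t in range(8000 * 5))

with open("beat.raw", "wb") as f:
    f.write(samples)
```

Played back as unsigned 8-bit PCM at 8 kHz (with any command-line audio player that accepts raw data), such a stream already sounds surprisingly musical for a single line of arithmetic.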
How I Built My Blog Using Gatsby and Netlify | Photo by Stanley Dai on Unsplash
Can you name a more iconic duo? 🤔
Years ago, whenever I built a static website, I didn’t use any fancy frameworks or build tools. The only thing I brought into my projects was jQuery, or if I was feeling extra fancy, I used Sass.
Nowadays, we have tools like Gatsby and Netlify, which greatly improve the experience of building static websites. Rather than thinking about boilerplate and configuration (looking at you Webpack), you can just focus on your application.
I wouldn’t hesitate to say that the Gatsby and Netlify flow is the best programming experience I’ve ever had. Let me explain why.
Gatsby
Gatsby is a static site generator that uses React. Everything is configured out of the box including React, Webpack, Prettier, and more.
Since Gatsby builds on top of React, you get all the benefits of React, such as its performance, components, JSX, React library ecosystem, and a large community (React is nearing 100,000 stars on GitHub 😱).
If you haven’t used React before, there is a learning curve. But there are plenty of well-written tutorials that make React very accessible. The official React documentation is also very well written.
For many static websites like my blog, I need to use external data sources (my actual blog posts) during the build process. Gatsby provides support for many forms of data, including Markdown, APIs, Databases, and CMSs like WordPress. To access this data, Gatsby uses GraphQL.
Taken straight from the Gatsby website
All my blog posts are in Markdown, so I’m using a Gatsby plugin (gatsby-transformer-remark) that lets me query my Markdown files using GraphQL. It also converts a Markdown file to HTML straight out of the box like magic. I simply need to use the following GraphQL query to access a specific post:
query BlogPostByPath($path: String!) {
markdownRemark(frontmatter: { path: { eq: $path } }) {
frontmatter {
title
date(formatString: "Do MMMM YYYY")
}
html
}
}
Using this query, I access the data through my props like so:
const BlogPost = ({ data: { markdownRemark } }) => (
  <div>
    <h1>{markdownRemark.frontmatter.title}</h1>
    <p>{markdownRemark.frontmatter.date}</p>
    <div dangerouslySetInnerHTML={{ __html: markdownRemark.html }} />
  </div>
)
If you understand GraphQL, accessing data from Markdown using Gatsby feels right at home. If GraphQL is new to you, it does add yet another thing to learn. But the documentation on using GraphQL with Gatsby has plenty of information and code snippets that you can use.
If you are building a simple blog with only one or two queries, there are Gatsby starter kits that set up gatsby-transformer-remark and all the querying for you. To speed up development, I used one called gatsby-starter-blog-no-styles.
I am a huge fan of styled-components, so I tried to use it when building this blog. I did encounter an issue, since there was no way for me to specify to gatsby-transformer-remark how to style my components. Instead, I had to use plain CSS for styling. I would love to see something like the following in gatsby-config.js :
import styled from 'styled-components'

const Header = styled.h1`
  font-size: 24px;
  color: #333333;
`

module.exports = {
  plugins: [
    {
      resolve: 'gatsby-transformer-remark',
      options: {
        h1: Header
      }
    }
  ]
}
In addition to the ease of actually using Gatsby, the official documentation is very well written and up to date. Each guide in the docs explain concepts of Gatsby so well, it’s likely that in most cases you won’t need to check any third party source of information.
The only difficulty I had with Gatsby was when I deployed my website. I had a FOUC (flash of unstyled content). I found that upgrading Gatsby from 1.8.12 to 1.9.250 fixed the issue. I'm not too sure why this fixed it, and I assume it must have been an internal issue with Gatsby.
I mean who really wants to see my forehead?
Netlify
Usually, when building a static website, I’ll use GitHub pages because it’s free and fairly easy to set up. Although I still think GitHub pages is a great tool, Netlify takes the process one step further to make the developer experience even more efficient.
Once you’ve hooked up Netlify to your repo, each push to your GitHub repository automatically builds your website, according to the static site generator you’re using, and deploys it to production.
I currently only use Netlify for static site hosting. But it also supports cloud functions, domain management (with SSL), form submissions, a/b testing, and more.
Netlify’s web interface is also clean and easy to use. The difference from AWS is night and day. While AWS is highly configurable, many developers don’t use this functionality. When I first used S3 or Lambda (Amazon’s static file and cloud function services), I spent a considerable amount of time looking up Amazon’s difficult and sometimes out-of-date documentation. There is a whole lot of unneeded complexity and Amazon jargon when using AWS. In comparison, Netlify is a breath of fresh air. It’s one of those services that just works.
The best part about Netlify is that it’s free. If you’re in a large team or need more resources for cloud functions, form submissions, and more, they do have paid options. If you plan on building a small blog like I am, it’s unlikely you’ll need to pay for anything.
TL;DR
Gatsby and Netlify are the easiest way to build and publish a static website. Period.
If you would like an example of how to build a blog using Gatsby, the code for my blog is available on GitHub. | https://medium.com/free-code-camp/how-i-built-my-blog-using-gatsby-and-netlify-f921f1a9f33c | ['Pav Sidhu'] | 2018-06-11 16:48:42.229000+00:00 | ['Technology', 'Gatsbyjs', 'Programming', 'React', 'Netlify'] |
Artificial Intelligence, our best friend in a stressed, if not devastated, power grid | Artificial Intelligence, our best friend in a stressed, if not devastated, power grid
AI and other transformational technologies, a must with ever-growing distributed energy resources (DERs), especially in a context of more frequent and bigger storms
Photo by Nathalie SPEHNER on Unsplash
In today's multifaceted energy world, a growing number of prosumer assets are increasing the complexity of power grids. This is even more important in an ever-changing climate that generates more and bigger storms, such as Typhoon Lekima, which caused $9.3 billion in damage (the 5th costliest known Pacific typhoon) and more than 90 deaths in the Philippines, Taiwan and China earlier this year, or the recent monstrous Category 5 Hurricane Dorian in the Atlantic Ocean. The director-general of the Bahamas Ministry of Tourism and Aviation, Joy Jibrilu, details the damage left in the aftermath of Hurricane Dorian and what the Bahamas will need to move forward, especially on infrastructure. (source MSNBC: https://www.youtube.com/watch?v=c_8sLpTQq_E)
This looks too similar to what we saw in Puerto Rico two years ago, which suffered severe damage from the Category 5 Hurricane Maria. Damages totaled roughly $92 billion USD, making it the third most costly tropical cyclone in US history. The blackout that resulted from Maria has been identified as the largest in US history and the second-largest in world history.
CosmiQ Works developed an interesting data-fusion mapping approach and the first independent remote sensing assessment of the recovery of electricity and infrastructure in Puerto Rico.
Analyzing the decline in observed brightness across the island, this IQT lab identified that 13.9% of people still lacked power and that 13.2% of infrastructure had been lost as of May 31, 2018.
Photo by NASA on Unsplash
Distributed Vs. Centralized
Decentralized systems with solar generation, wind turbines, and electric vehicles hold promise for a decarbonized future, but they also bring challenges for both utilities and prosumers: they need to be properly planned and operated to survive, or at least to be restarted in a timely manner after such a catastrophic disaster (towardsdatascience.com/no-fast-enough-energy-transition-without-intelligent-energy-storage).
The transformation of energy grids, the emergence of new services and new players (prosumers, consum'actors) and of new models such as self-consumption alter the operating requirements and constraints of the grids themselves and impose the management of increasingly massive data that would be unworkable without recourse to AI.
The energy market is moving away from a model with centralized power plants only and entering the era of distributed grids and peer-to-peer markets. Multiple elements of the energy ecosystem are evolving at a dizzying speed. We are seeing a very complex market emerging, where the distribution company needs to allow more and more renewables and flexible energy assets to be installed behind the meter while maintaining a stable local grid. At the same time, prosumers who have installed such flexible assets want to optimize their energy flow to maximize the value of their investment.
A steadily growing challenge is the emergence and accelerated growth of decentralized generation, where private users, big or small, generate and use their own electricity from renewable sources, such as wind and solar power. This complicates supply and demand and obliges utilities to buy surplus energy from private users who produce more electricity than they consume and send it back to the grid. Since 2010, the use of solar energy has increased substantially, and this exponential trend is expected to continue as photovoltaic cells, devices generating electricity from sunlight, reduce costs and increase efficiency.
An extending decentralized production
The current systems have generally not been designed to take into account this diversification of energy sources, particularly the increase in renewable resources. For example, in many American jurisdictions, when demand outstrips supply, utilities activate fossil fuel-based power plants, known as "peaking" power plants, just a couple of minutes in advance to avoid a cascading disaster. This procedure is the most expensive, but also the most profitable, part of the business for these companies. It results in higher electricity bills for consumers and an increase in greenhouse gas emissions into the atmosphere. These problems will be exacerbated as energy demand is expected to increase substantially in the coming years. To avoid these suboptimal (to say the least) operating modes, AI with intelligent energy storage (IES) can enable machine learning algorithms, combined with data on these complex networks and real-time meteorological data (from satellites, ground observations and climate models), to be exploited to their full potential to predict the electricity generated by renewable energy sources (RES), such as wind, sun and the oceans.
Combined with other technologies such as Big Data, the Cloud and the Internet of Things (IoT), energy storage with AI can play an important role in power grid management by improving the accessibility of renewable energy sources.
Source: Smart Phases Power Generation with Artificial Intelligence.
The (Deep) Learning curve
AI can greatly help to manage electricity consumption so that big utilities or an even smaller grid with DER can sell when it’s expensive and buy when it’s cheap. Machine learning, and especially deep learning, algorithms can be applied in the energy sector in a very interesting way in this context. As the end-users are becoming “prosumers”, smart devices are proliferating, big data is available for analysis, renewable energy sources are growing, and business models and regulations are adapting.
Combining it all together can help get to the point where energy flows and/or is stored with the optimal timing, direction and volume, with artificial intelligence algorithms determining when to produce, consume, store and trade energy, to the benefit of the end-user, the service provider and the grid operator. With thousands of emerging energy communities, this vision might become clearer and perhaps even the main reality in the coming 5 to 10 years.
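To make the "sell when it's expensive, buy when it's cheap" idea concrete, here is a deliberately tiny Python sketch of such a decision rule; the thresholds, names and logic are illustrative assumptions, not taken from any real trading or control system:

```python
def storage_action(price, average_price, state_of_charge):
    """Toy dispatch rule for a battery in a peer-to-peer market:
    charge when power is cheap, discharge when it is expensive."""
    if price < 0.8 * average_price and state_of_charge < 1.0:
        return "charge"      # buy low, store the energy
    if price > 1.2 * average_price and state_of_charge > 0.0:
        return "discharge"   # sell high, release the energy
    return "hold"

# example: prices in $/kWh, battery at half capacity
print(storage_action(0.05, 0.10, 0.5))  # → charge
print(storage_action(0.15, 0.10, 0.5))  # → discharge
```

A real system would fold in forecasts, battery degradation, tariffs and grid constraints; machine learning enters mainly by forecasting price and production, rather than by replacing this simple arbitrage logic.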
More and more sustainable communities, utilities and operators are currently running simulations or are in the first phases of pilot projects. With the Internet of Things (IoT) demanding more than 10 billion smart devices, with over 100 million electric vehicles (buses, trucks and passenger cars), and with more than 1 billion prosumers (private and industrial) having their own "production" of kWh (solar or otherwise), all predicted by the year 2025, it will be a huge challenge to maintain reliability, secure supply and grid stability.
Expectations of how DERs will evolve in the coming years vary. But these changes require a completely new operating paradigm, and there is no better test for technology than real life. New models involving artificial intelligence, energy storage and renewables are already being applied at various levels in many places on all continents, including Australia, California, Germany, China, Costa Rica, Israel and many other countries around the world.
This is especially true when we're dealing with a climate that is reacting to human intervention. AI, properly used with renewables and energy storage, can help us not only reduce the impact of our energy consumption on CO2 emissions, but also adapt to the growing impacts of disasters related to man-made climate change.
You don't need to be psychic to envisage that AI and DERs will be the transformational technologies that will soon be our best friends in building the new grid model.
Photo by James Peacock on Unsplash
This article is an extension of a series on Artificial Intelligence and Energy Storage by Stephane Bilodeau, ing., P.Eng, PhD, FEC. Founder & Chief Technology Officer, Smart Phases (Novacab), Fellow of Engineers Canada and expert contributor to Energy Central and Medium. | https://towardsdatascience.com/artificial-intelligence-our-best-friend-in-a-stressed-if-not-devasted-power-grid-3e9303d6d9ae | ['Stephane Bilodeau'] | 2019-09-03 19:16:10.802000+00:00 | ['Climate', 'Towards Data Science', 'Artificial Intelligence', 'Energy', 'Hurricane'] |
Voyager 2 Finally Hears from NASA | Voyager 2 received commands from NASA for the first time in months — so how is our intrepid little explorer doing?
NASA sent commands to Voyager 2 for the first time in eight months, utilizing a newly-upgraded radio telescope in Australia. Image credit: NASA/CSIRO
NASA recently made contact with the Voyager 2 spacecraft for the first time in months. This ultra-long distance call was made using the only telescope in the world capable of communicating with the distant interplanetary explorer.
Voyager 2, launched August 20, 1977, is now racing through space just beyond the edge of the Solar System.
“On Oct. 29, mission operators sent a series of commands to NASA’s Voyager 2 spacecraft for the first time since mid-March. The spacecraft has been flying solo while the 70-meter-wide (230-foot-wide) radio antenna used to talk to it has been offline for repairs and upgrades. Voyager 2 returned a signal confirming it had received the “call” and executed the commands without issue,” NASA reports.
If You Listen Closely…
The DSS43 radio telescope. Image credit: NASA/CSIRO
Just outside Canberra, Australia, NASA’s Deep Space Station 43 (DSS43) radio telescope is the only instrument in the world with a transmitter capable of sending commands to the distant spacecraft. However, scientific and health data was still received from Voyager 2 by other radio telescopes during this time.
Built in 1972, this 70-meter wide radio telescope is the only such instrument of its size in the southern hemisphere.
“Deep Space Station 43 (DSS-43) was constructed in 1969 to 1973 as a 64-metre diameter antenna. The 64-metre antenna was more than six times as sensitive as DSS-42, the original 26-metre at the Complex. Therefore DSS-43 could communicate with spacecraft at greater distances from Earth as the signal became weaker,” the Deep Space Network describes.
Call When You Get Work!
Because of the path Voyager 2 took through the Solar System, the vehicle is unable to communicate with radio telescopes in the Northern Hemisphere.
“The spacecraft’s flyby of Neptune in 1989 set it on a course below the elliptic plane that eventually took it to interstellar space on November 5, 2018. In 1998, engineers switched off the spacecraft’s nonessential instruments to conserve power. Data from at least some of the six instruments still in operation should be received until at least 2025,” NASA reports.
A look at the time when Voyager 2 passed out of our Solar System. Video credit: NASA/JPL
In March, NASA began upgrading the DSS43 telescope, including the installation of a new, three-ton X-band frequency cone that needed to be hoisted 20 stories in the air. Installation of the cone was completed in May, providing increased sensitivity for mission engineers using the telescope.
For eight months, maintenance work on the telescope shut off communications with Voyager 2. With those upgrades still underway, engineers were able to re-establish communications with the far-flung robotic observatory.
Even traveling at the speed of light, it took radio signals 17 hours to reach the distant vehicle, and the same amount of time to return.
Engineers hope upgrades will be completed on the DSS43 telescope by February 2021.
These 70’s Spacecraft are FAR OUT!
The 1970’s saw the launch of four spacecraft destined to travel outside the Solar System. The Pioneer 10 and 11 missions were followed by Voyager 1 and 2. Three of these are headed in the same direction as the Sun orbits around the galaxy, while Pioneer 10 is headed in the opposite direction.
“The untold want, by life and land ne’er granted,
Now, Voyager, sail thou forth, to seek and find.” ― Walt Whitman, Leaves of Grass
Voyager 2 is the only spacecraft to have ever visited all four outer planets of our Solar System — Jupiter, Saturn, Uranus, and Neptune.
“A gravity assist at Neptune shot Voyager 2 below the plane in which the planets orbit the Sun, on a course out of the solar system,” NASA describes.
The intrepid explorer officially reached interplanetary space — where particle pressure from the Sun is overwhelmed by the interstellar medium — on November 5, 2018.
Voyager 2 is currently more than 18.7 billion kilometers (11.6 billion miles) from Earth, racing away from the center of the Solar System at more than 60,000 kilometers per hour (37,300 MPH).
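The 17-hour signal delay quoted earlier follows directly from this distance; a quick back-of-the-envelope check in Python:

```python
distance_km = 18.7e9            # Voyager 2's distance from Earth, per NASA
speed_of_light_km_s = 299_792.458
one_way_delay_hours = distance_km / speed_of_light_km_s / 3600
print(round(one_way_delay_hours, 1))  # → 17.3
```

So a single command-and-confirm round trip takes about 35 hours.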
NASA will continue listening in on Voyager 2 and its companion until batteries finally fall dead, silencing the spacecraft, leaving the hardy explorer forever adrift in the silent void. | https://medium.com/the-cosmic-companion/voyager-2-finally-hears-from-nasa-a4502176f9cc | ['James Maynard'] | 2020-11-03 22:53:08.233000+00:00 | ['NASA', 'Technology', 'Science', 'Robotics', 'Space'] |
Listening to Women’s Stories | Photo by Charles Rondeau, CC-BY-SA 4.
Yesterday, I was sitting at a picnic table with a woman I had just met and the sunlight was absorbing into her dark red hair making the tips look like they had ignited. Her eyes were sparkling, amplified by the wrinkles that radiated out from them, like starbursts.
I thought to myself, This woman is beautiful.
I had only learned her name an hour before, and confessing that she would forget mine, she said, I’ll just call you honey-darling.
We talked there in the sunlight a long time. As she talked, I thought about sitting in the kitchen as a child listening to my mother and my aunt intensely talk while slicing cucumbers and about my uncle leaning over to my brother man-to-man, saying, Women can just go on for hours, can’t they? And I thought about all the times I’ve been chided by partners because I am most certainly what you would call “a talker,” late nights with friends out on the porch and long forty-minute goodbyes.
The woman at the picnic table was going a mile-a-minute, her words spilling out from her so quickly that sometimes they fell on top of each other in a puddle and she scooped them back up in her sun-spotted hands and tried again.
And I thought to myself, Maybe this is what happens, all this babbling and carrying-on, when you live in a world that sanctions you no air time.
She is writing a book, she said, could I maybe edit it? She doesn’t have much education, she said, so she’s dictating it onto a computer but it doesn’t put in the punctuation. Sure, I said, I can help with that.
When it’s done, she wanted to know if I would take it back to Greensboro, the city where I am from. She can’t let people here read it, the town is too small, and it’s about stuff that no one would believe anyway. Her father, the neighbor, husbands, boyfriends, a teacher, a stranger, an uncle. She said, I dyed my hair black one time and got myself fat for a while hoping they might stop touching me.
One cousin apologized and said he hoped it hadn’t messed her up too bad. She really appreciated that.
And I thought about Christmas Eve just a few months ago when I sat with my friend in a Mexican restaurant, Feliz Navidad written in fake snow frosting on a Dos Equis mirror above her head, and she told me about her cousin who killed herself at eleven and over beers we recounted all the men who had damaged us. I thought about the hundreds of conversations that we all have had, women to women, sometimes strangers, words pouring out untrained and undisciplined as if we had just been released from a cage and our wings were beating hard, unsteady, like we are just trying to stay up.
And I thought about another restaurant, a fancier one, that I was sitting in about a year ago with the man I was dating. And I thought about how, when I began to share with him some of the things I knew, things that I had seen as a woman, he stopped me and said, Why do you want to talk about this? We are out to dinner after all.
I asked, Does it make you uncomfortable? And he said, No, it’s just that no one I have ever dated has wanted to talk about what had happened to them before. And I thought, Well, that’s not true.
But then he went on to tell me about the upcoming tour with his band while I wondered what it would feel like to swallow my own throat.
So the woman with red hair is going to write her book, and I will fill in the punctuation and take it home with me to Greensboro, which she fancies to be a big city, but I know is not. I will copy it a thousand times and bring it to you, and ask you to read it carefully, committing it to memory, the type of memory that isn’t a word-for-word recitation, but something deeper, something closer to the marrow of your bones.
Come talk to me, friends. Use all the words in the dictionary and use them right and use them wrong and do what you will with the punctuation. Tell me long stories, the ones you worry I don’t want to hear.
Tell me your troubles over dinner. Sit on my porch and stay too long. | https://gwenfrisbiefulton.medium.com/listening-to-womens-stories-298b806ab74e | ['Gwen Frisbie-Fulton'] | 2019-04-12 23:14:38.832000+00:00 | ['Storytelling', 'Gender', 'Short Story', 'Equality', 'Women'] |
The Importance of Trust in Relationships | The Importance of Trust in Relationships
Most experts regard it as one of the greatest antidotes to unfaithfulness.
Photo credit: iStock
By Raymond Michael
No one can overemphasize the importance of trust in developing a successful long-term relationship. It’s the superglue for forming the strongest of friendships and the deepest of love. Without trust, no relationship stands a chance to thrive long-term.
In fact, a well-functioning relationship is one in which you trust your partner, are open to them, and are a source of support when they need comfort or help. The importance of trust in a loving relationship is probably why most experts regard it as one of the greatest antidotes to unfaithfulness.
The ability to trust in yourself and in others during times of need is a basic emotional and spiritual survival need. To live life fully, you need to be able to trust your perspective of reality and to let important people matter to you.
What is Trust?
Trust is simply your belief that someone is reliable. This makes you place confidence in them as you grow to feel safe with them emotionally and physically. Basically, trust is the act of an individual believing that what someone is saying is true because they believe in that person.
It's something two people can develop in a relationship when they decide to be honest and rely on each other. Thus, individuals come to trust each other when they become committed to each other. This is also dependent on them perceiving that their partners are acting in a positive manner.
Trust is the glue of life. It’s the most essential ingredient in effective communication. It’s the foundational principle that holds all relationships. Stephen Covey
Building trust in an intimate relationship is thus contingent on the honesty and openness expressed between both partners. However, trust is something that you earn; it is never automatic. Trust develops slowly over time through all the thousands of mundane interactions you engage in every day. At the same time, trust is not something you can demand or request proof for. Trusting someone is simply a choice that you make.
Importance of Trust in a Relationship
Most couples in happy relationships have one thing in common. Going by several research studies, it is the fact that they have mutual trust for each other. This helps them to feel safe with each other and deepens their love. It also allows them to grow in their marital friendship while enjoying increased sexual intimacy.
Conversely, unhappy partners complain of their relationships lacking this fundamental element. Also, unhappy partners generally have lower levels of trust and exhibit more rigid and defensive patterns when handling conflicts.
The trust imperative is so woven into our being that there is actually a trust hormone, oxytocin, whose main function appears to be giving us the ability to trust. Mira Kirshenbaum, author of "I Love You But I Don't Trust You"
Thus, trust plays an integral role in the sustenance of any loving relationship. The quality of a relationship’s functioning further underscores the importance of trust in a relationship. This is because the quality of a relationship accounts for a large percentage of the partners’ daily well-being.
Just imagine the unnecessary amount of stress a lack of trust can easily bring into a relationship. Without trust, a relationship is basically dysfunctional as it is very unpredictable, chaotic, full of drama, and toxic.
In sharp contrast, trust helps to take away a huge source of stress. The reason is that it allows you to act with "incomplete information". Thus, the complexity of your decision-making process is greatly reduced, because you no longer subject your mind and body to constant worry.
So, your overall well-being has a close tie to the people with whom you spend most of your time. Thus, sharing common values such as trust and having harmony in your relationship greatly affects your well-being.
Statistical Support of Trust
The Relationships Indicators Survey 2011[1] also underscores the importance of trust in a relationship. The survey listed financial stress, communication difficulties, different values, and lack of trust as the four major reasons why relationships fail. From the survey, lack of trust was the most common reason.
The Love and Trust Dilemma
On the surface and to most people, it seems that love is the most important thing that sustains a relationship. However, that would depend on the definition and type of love in question. And as we all know, that could mean a lot of things to different people.
Notwithstanding, whether it is a romantic or any other type of relationship, without the fundamental element of trust existing between those concerned, such love cannot last long-term.
Happy couples know the importance of trust in their relationship and how it has helped in strengthening their love. Love does not build trust; it is trust that builds love.
Concerning “unconditional love” which most people will want to hang onto, in adult relationships, it is at best akin to being in a relationship without boundaries. And as experience has shown over the years, such relationships don’t subsist for long.
On a scientific level, Mario Beauregard and his colleagues carried out fMRI procedures on participants who were shown sets of images either referring to “maternal love” (unconditional love) or “romantic love”. The researchers reached a conclusion that “the feeling of love for someone without the need of being rewarded is different from the feeling of romantic love.”[2]
I trust you is a better compliment than I love you because you may not always trust the person you love but you can always love the person you trust. Unknown
Without reciprocity, no relationship can thrive. And we all know that the reciprocity between partners is a key component of any thriving long-term relationship.
Achieving reciprocity is based on each partner trusting that the other will return in kind, the love or gesture that they have shown. A one-sided love or “unrequited love”, is nothing but poison to the soul.
It’s the trust you have in your partner, that opens the floodgate of love into your relationship. Trust lets you feel free with your partner. It also allows you to be able to reveal the deepest and darkest part of your being to them. It’s trust that takes your love to its summit.
The Fragility of Trust
Most of us describe the trust in a relationship as the superglue that holds it together. Yet, it is ironic that trust is also very fragile. Despite the importance of trust, a lot of people struggle with trust, and for a lot of different reasons.
It’s so fragile that the impact of a single negative action requires as much as twenty positive actions to offset. Once a partner breaks the trust in a relationship, rebuilding it can sometimes be a very daunting task.
Trust is something that is difficult to establish. It is very fragile and needs to be taken care of. Once trust breaks or shatters into pieces, it is very difficult to rebuild it. K. Cunningham
It’s like putting the broken pieces of a glass together. Though it may be painstakingly restored, it may never fully be as it once was. Trust is not something you can fake or quick-fix.
And much like our physical heart, we can view our loving heart as a muscle, a trust muscle. To strengthen it, we need to use and exercise our trust muscle. If we injure it in the process, it will weaken or slow down.
Trust is like blood pressure. It’s silent, vital to good health, and if abused it can be deadly. Frank Sonnenberg
The Transference of Trust
For a lot of couples, being in a trusting relationship remains an elusive dream. And many a shattered faith in love has been very devastating.
Many people simply don’t realize the importance of trust even when starting a new relationship. Romantic love often takes over, and they forget the basics such as trust.
In general, the way an individual has been treated in the past by people they considered important — especially by romantic partners — reflects in how they subsequently view and think about relationships.
Transferring Hurt Feelings
For instance, people who have been hurt early in life might have developed a lack of trust in others. They consciously become self-protective, with a reluctance to trust others and fear being vulnerable and open to being emotionally hurt again.
Thus, such individuals find it very difficult to completely let go of their doubts and confidently relinquish control to a new intimate partner. The fear of being hurt again makes them unwilling to take the chance of being closely involved with someone else emotionally and sexually.
Not being able to let go and trust those around you can be incredibly stressful. You will be constantly questioning the actions of those around you, never feel in control and generally unhappy. David Cannell
If care is not taken, they may become particularly intolerant of combining love, affection, and satisfying sex in an intimate relationship. However, such wariness can leave such an individual vulnerable to lifelong and profound loneliness.
Thus, it’s quite common for people to transfer their lack of trust into a new relationship because of fear that history might repeat itself. However, it’s important to realize that no two people are the same in whatever context.
Your previous spouse or partner made a choice. But no matter how painful the consequences of that choice might have been, it shouldn’t stop you from moving forward and taking responsibility for your future happiness.
Also, you need to come to terms with the true importance of trust in healthy relationships and find genuine ways to start learning to trust again.
Reliving Caring and Supportive Experiences
On the other hand, individuals who had caring and supportive experiences in prior relationships often have positive views of their current partners and relationships. Ultimately, such individuals experience better relationship functioning.
When an individual has such positive views, they are more likely to trust their current partner, disclose important information to them, and also be a good source of support when they need assistance.
Trust Developing Components
Trust in a relationship cannot be built if one person is willing and the other person is not. Building trust is a two-way street and requires mutual commitment from both partners.
In general, strong relationships depend on trust and effective communication. Then again, shared values between partners are what help to foster trust and communication. Overall, trust is a fundamental function of character and personal trustworthiness.
It takes two to do the trust tango — the one who risks (the trustor) and the one who is trustworthy (the trustee); each must play their role. Charles H. Green The Trusted Advisor
Shared Values
When you share similar beliefs with your partner, you feel a lot safer and also find it more rewarding sharing your thoughts and feelings.
We cannot actually overemphasize the importance of shared values when talking about trust. This is because your core values were formed a long time ago. And they are very likely to be yours for the rest of your life. This also applies to your partner.
Thus, considering the fact that you and your partner are not likely to change your core beliefs, it helps a lot if you’re compatible to a certain degree.
Trustworthiness
Being trustworthy means that you’re capable of demonstrating consideration and care for other people. To trust someone is for you to be responsible to them and care for them. It also means sharing your resources with them and loving them.
For others to trust you, you must be deserving of their trust. You must prove yourself to be someone people can rely on to do what is needed or right at any given time.
Essentially, for people to trust you, you must show a certain degree of trustworthiness. Being trustworthy is a function of your character and competence.
Your character is what you are and is closely related to your values. More specifically, your character is about you having integrity (ability to walk your talk), maturity (balancing of courage and consideration), and an abundance mentality (a paradigm that life is ever expanding).
In order to establish trust, it is first important that you be trustworthy. This means you should be forthright with all your dealings. Paul Melendez
Regarding your competences, these include your technical (knowledge and skill to achieve results), conceptual (seeing the big picture), and interdependent (interacting effectively with others) capabilities.
However, competence without character doesn’t inspire trust either. Thus, both character and competence are necessary to create trustworthiness and thereby inspire trust.
Without having this foundation of essential trustworthiness, trust is tentative at best. Discussions between you and your partner will have a lot of posturing and positioning since both of you will be guarding your statements.
Conversely, trustworthiness helps to create flexibility and emotional reserve in your relationship. So, even when you screw up at times, it doesn’t necessarily ruin the relationship. The emotional reserves you’ve created will make your partner readily trust your basic intent. This exemplifies the importance of trust in a relationship as your partner already has an understanding of what you are inside.
Building Trust Through Vulnerability
When you reveal yourself to your partner, and they in turn treat you with respect, love, and dignity, your trust in that person grows. By revealing more and more of yourself to your partner, you unconsciously invite them to be vulnerable as well.
Allowing your vulnerability to show gives your partner the courage to show the hidden or shameful parts of themselves. This environment allows both of you to experience a high degree of security and peace. This way, you both know that you have each other’s back.
With true emotional vulnerability, interactions become more trusting. It also allows for reciprocal disclosure and enhances mutual attraction. Being vulnerable makes both of you feel loved and respected and makes you truly value the importance of trust in your relationship.
We’re never so vulnerable than when we trust someone — but paradoxically, if we cannot trust, neither can we find love or joy. Walter Anderson
Yet, a lot of individuals are too quick to trust others in the name of forgiveness and vulnerability. They quickly trust people without ensuring that the person is making trustworthy improvements.
Truthfulness is everything in a relationship. If there’s any kind of deception, it’s best to stop everything and resolve that issue first. When your partner is always lying to you, then there’s no relationship.
If this is in a non-committed relationship, the whole thing is a farce. You cannot downplay the importance of trust if you really want a thriving and happy relationship, especially at its early stage. In fact, you would be better off on your own if there is no significant level of trust during initial dating.
It is foolish to continue opening yourself up to emotional abuse when you’ve not seen any true change. True vulnerability requires setting proper boundaries and having a degree of connection. So, learn to forgive, but at the same time learn to guard your heart until you see sustained change.
As a quick recap…
Whichever way you look at it; life will always be full of uncertainties. And we don’t own crystal balls that might have made things a bit easier by revealing compatible and trustworthy partners.
Despite the hurt from a betrayal of trust, you do not need to give up. It doesn’t matter if this was in your previous relationship or a current one. What you need to do instead is to continue trusting in yourself and in your instincts.
Get to grips with the act of true emotional vulnerability and setting proper boundaries. Always remember the importance of trust and try to establish the right environment for trust to grow. Being in a state of true emotional vulnerability with your partner is where the key to the strongest trust you can experience in your relationship lies. | https://medium.com/hello-love/the-importance-of-trust-in-relationships-d5fe25fe3eae | ['The Good Men Project'] | 2020-12-09 17:22:17.181000+00:00 | ['Trust', 'Relationships', 'Cheating', 'Love', 'Marriage']
Medium Burnout Is A Real Thing | I’ve seen this happen to writers time and again. It was only just a matter of time, and now I’m sitting here, completely blank. Burnout. Medium-burnout. …and that’s where this article was born.
I see many people publish amazing articles every single day, and I’ve done my best to try and keep up the pace — I should have known it couldn't last.
I know that in order to create a steady stream of income on Medium, quantity seems to be just as important as quality, but I’m beginning to wonder if the process can be challenged. I’m also starting to question if I’m going about it the wrong way; putting too much emphasis on the earnings and losing sight of the actual writing.
Over the past few weeks — since the MPP change — I’ve found myself obsessing over the daily earnings, refreshing my screen repeatedly after the clock hits a certain time every night until the earnings update. This obsessive state I’m in detracts from why I’m here in the first place, which is to write.
I found this platform when I did a Google search about “how to make money writing online.” I wanted a place where I could let my creativity flow, and this seemed like the ideal place to do that. My goal when joining Medium was never to write about Medium. And here I am, hitting the publish button on two of them in a row. Why? Well, because 1) I’ve realized, as many others have, that articles about Medium do well, and 2) I write about what I feel, and right now I’m feeling some pretty heavy Medium-burnout.
I’m a fiction writer. Other than dabbling (and failing) in personal blogging a time or two, I only began writing non-fiction pieces once I joined this platform back at the end of June. I’ve had to step out of my safety zone, and it has really helped me push my own boundaries and expand and grow my skills as a writer. But if I’m being completely honest, I miss fiction. I miss it dearly. And I hate this state of mind where I’m finding myself these days.
So I’ve come to the conclusion that I need to step back a moment and reevaluate things. I’m not leaving Medium — I love it here! If it wasn’t for this community, I would have never met some of the people I now consider my dearest friends. But my goal isn’t to lock myself in, it’s to continue to grow.
In order to do that, I know I need to widen my horizons beyond the walls of Medium. I need to add streams of income so that I’m not at the mercy of a single website. The goal is to turn writing into a full-time career, and although some writers make incredible amounts of money here, in my heart I know this is a stepping stone. One single aspect of what my career and my life is shaping into.
So I’ve decided that I need to stop putting so much pressure on myself. Publishing a story every single day (or trying to) has brought me to the point where my brain feels empty. Devoid of anything remotely inspiring. I need to reboot. I also need to break this crazy addiction to statistics.
Quality or Quantity
Instead of pushing myself to write things that just aren’t there every day, I’m going to work on putting out two or three pieces a week — with some poetry sprinkled in here and there. I’m going to focus on the quality of my work because I came here to add value first and make money second.
I’m not giving up my goals of writing for a living, so I’m going to hustle in other areas to try and create that career for myself. I’m not leaving Medium unless they kick me out (please don’t kick me out!) but I will be publishing less frequently.
There’s definitely a bit of controversy on this topic, and I’ve found myself agreeing with both sides of the coin in the past. I also know that different methods — quality over quantity, or vice-versa — work best for different people, and I’m hoping that this will be the right move for me. It may be for you, or it may not be…but I’m going to give it a shot! | https://medium.com/the-partnered-pen/medium-burnout-is-a-real-thing-1cb2bf95f414 | ['Edie Tuck'] | 2019-11-19 19:50:14.936000+00:00 | ['Priorities', 'Burnout', 'Medium Writers', 'Goals', 'Writing'] |