title | text | url | authors | timestamp | tags
---|---|---|---|---|---|
A Hell/Heaven of Our Own — Alma and the Tibetan Book of the Dead | A centuries-old Zhi-Khro mandala, part of the Bardo Thodol collection, a text known in the West as The Tibetan Book of the Dead, which comprises part of a group of bardo teachings held in the Nyingma (a Tibetan tradition) that originated with the guru Padmasambhava in the 8th century. Borrowed from Wikipedia.
“In the Tibetan Book of the Dead [or The Bardos], when the instructions are given as to what happens when someone leaves their body after death, it says something like, ‘When the clear light of the void comes, it is followed by the vision of the blissful Bodhisattvas; then comes the vision of the wrathful Bodhisattvas,’ and so on. And then it says, ‘Realize, oh nobly born, that all this is but the outpouring of your own mind.” — Alan Watts, What is Zen? (New World Library, 2000), 23 “I say unto you, can you imagine to yourselves that ye hear the voice of the Lord, saying unto you, in that day, ‘Come unto me ye blessed, for behold, your works have been the works of righteousness upon the face of the earth?’ Or do ye imagine to yourselves that ye can lie unto the Lord in that day, and say, ‘Lord, our works have been righteous works upon the face of the earth,’ and that he will save you? Or otherwise, can ye imagine yourselves brought before the tribunal of God with your souls filled with guilt and remorse, having a remembrance of all your guilt, yea, a perfect remembrance of all your wickedness, yea, a remembrance that ye have set at defiance the commandments of God? “I say unto you, can ye look up to God at that day with a pure heart and clean hands? I say unto you, can you look up, having the image of God engraven upon your countenances? I say unto you, can ye think of being saved when you have yielded yourselves to become subjects to the devil? “I say unto you, ye will know at that day that ye cannot be saved; for there can no man be saved except his garments are washed white; yea, his garments must be purified until they are cleansed from all stain, through the blood of him of whom it has been spoken by our fathers, who should come to redeem his people from their sins. … “These are they that are redeemed of the Lord; yea, these are they that are taken out, that are delivered from that endless night of darkness. And thus they stand or fall; for behold, they are their own judges, whether to do good or do evil.” — Alma 5:16–21, 41:7; borrowed from Grant Hardy, The Book of Mormon: A Reader’s Edition (University of Illinois Press, 2005), 261, 367
I believe these two ideas, those of the Tibetan Book of the Dead (as described by Alan Watts) and those of Alma the Younger in the Book of Mormon, are quite similar. Something about the approach of death, and even passing through it, brings something like judgment, we could say.
Both the Book of Alma and the Tibetan Book of the Dead agree, however, that this is not the judgment of external deities, though we surely experience it that way initially. Instead, we come to find that, whether punishment or reward, these do not come flying at us from some externality, but erupt from within us. Even in the afterlife, as in this life, our lives are not the result of an external authority’s judgment, but of our own inner state of being. This should entirely change how we read Alma’s teaching, that one must have their “garments purified” of that which condemns us from within, not without. The Book of Mormon has a long lineage of malefactors, like Nehor, who taught that God will save all without judgment. But this view misses the point: it’s not that God is unwilling to save all, but that God does not save anymore than condemn. God instead opens our eyes to those portions of ourselves which we seek to hide from others, including ourselves; those truths about ourselves which we desperately wish were not true. Nehor missed the point, but so do those who glory in self-condemnation, the unfortunate people who meet themselves with disgust, contempt, shame.
God is not the judge in any conventional sense, but the Christ, who looks upon even the darkest corners of our hearts, not with a runaway or renegade justice of retribution (that was our gig), but with unconditional mercy. We are the ones who cannot accept ourselves or others, not God, not Christ. Christ is the one who looks at the darkest reaches of the soul, that which one would wish wholeheartedly to hide away, and says, “I love you.” It’s in this vein that the Book of Mormon calls its reader to take upon themselves the name of Christ, to awaken the same pure love of Christ within, for themselves and others. Our problem is not that God is mad and we have to win him over; it’s that the root of our suffering is that we can’t accept ourselves or others as is. Our problem is not that we have to win over or woo someone else, or even ourselves, but that we keep desperately telling ourselves we must because we and others are not enough in ourselves. Always trying to be someone else or get somewhere else, we’re never who or where we are.
We all die, and, in a manner of speaking, we’re all going to the same place. In this life, we can be in the same room doing the same thing, but one of us is experiencing joy and the other their own private hell — a subtlety which follows us beyond death. A Zen koan says this:
“A soldier named Nobushige came to Hakuin, and asked: “Is there really a paradise and a hell?” “Who are you?” inquired Hakuin. “I am a samurai,” the warrior replied. “You, a soldier!” exclaimed Hakuin. “What kind of ruler would have you as his guard? Your face looks like that of a beggar.” Nobushige became so angry that he began to draw his sword, but Hakuin continued: “So you have a sword! Your weapon is probably much too dull to cut off my head.” As Nobushige drew his sword Hakuin remarked: “Here open the gates of hell!” At these words the samurai, perceiving the master’s discipline, sheathed his sword and bowed. “Here open the gates of paradise,” said Hakuin. — 122 Zen Koans: Find Enlightenment, ed. Taka Washi (2013), section 57
We are not punished for sin or rewarded for virtue, but punished by sin and rewarded by virtue. God cannot give that any more than life will — we create our own hells and heavens. | https://medium.com/interfaith-now/a-hell-heaven-of-our-own-alma-and-the-tibetan-book-of-the-dead-5e09c619758 | ['Nathan Smith'] | 2020-01-06 14:28:16.551000+00:00 | ['Mormon', 'Tibetan Book Of The Dead', 'Book Of Mormon', 'Buddhism', 'Afterlife'] |
The COVID-19 Boogaloo Opus | All decisions here, in either direction, could kill you.
As the sensemaking crisis goes into overload on COVID-19, many people are shutting down, many people are hyperventilating, and many people don’t know what’s real and what’s not. The signal to noise ratio on this is very badly weighted towards noise, with our mainstream media sources and politicians failing to be particularly helpful. And while COVID-19 could kill you, other things could kill you too, such as unemployment, starvation, or national civil war, and we need to look at all of those things honestly to pull the signal out of the noise. There are a lot of things to consider. Let’s look at the noise first.
The Noise
I saw this in my social feed today. I think it’s a totally accurate depiction of the noise we’re all facing.
Well good news! A friend has broken down all the facts and everything we need to know about COVID-19!
1. Basically, you can’t leave the house for any reason, but if you have to, then you can.
2. Masks are useless, but maybe you have to wear one, it can save you, it is useless, but maybe it is mandatory as well.
3. Stores are closed, except those that are open.
4. You should not go to hospitals unless you have to go there. Same applies to doctors, you should only go there in case of emergency, provided you are not too sick.
5. This virus is deadly but still not too scary, except that sometimes it actually leads to a global disaster.
6. Gloves won’t help, but they can still help.
7. Everyone needs to stay HOME, but it’s important to GO OUT.
8. There is no shortage of groceries in the supermarket, but there are many things missing when you go there in the evening, but not in the morning. Sometimes.
9. The virus has no effect on children except those it affects.
10. Animals are not affected, but there is still a cat that tested positive in Belgium in February when no one had been tested, plus a few tigers here and there…
11. You will have many symptoms when you are sick, but you can also get sick without symptoms, have symptoms without being sick, or be contagious without having symptoms.
12. In order not to get sick, you have to eat well and exercise, but eat whatever you have on hand and it’s better not to go out, well, but no…
13. It’s better to get some fresh air, but you get looked at very wrong when you get some fresh air, and most importantly, you don’t go to parks or walk. But don’t sit down, except that you can do that now if you are old, but not for too long or if you are pregnant (but not too old).
14. You can’t go to retirement homes, but you have to take care of the elderly and bring food and medication.
15. If you are sick, you can’t go out, but you can go to the pharmacy.
16. You can get restaurant food delivered to the house, which may have been prepared by people who didn’t wear masks or gloves. But you have to have your groceries decontaminated outside for 3 hours. Pizza too?
17. Every disturbing article or disturbing interview starts with “I don’t want to trigger panic, but…”
18. You can’t see your older mother or grandmother, but you can take a taxi and meet an older taxi driver.
19. You can walk around with a friend but not with your family if they don’t live under the same roof.
20. You are safe if you maintain the appropriate social distance, but you can’t go out with friends or strangers at the safe social distance.
21. The virus remains active on different surfaces for two hours, no, four, no, six, no, we didn’t say hours, maybe days? But it takes a damp environment. Oh no, not necessarily.
22. The virus stays in the air — well no, or yes, maybe, especially in a closed room, in one hour a sick person can infect ten, so if it falls, all our children were already infected at school before it was closed. But remember, if you stay at the recommended social distance, however in certain circumstances you should maintain a greater distance, which, studies show, the virus can travel further, maybe.
23. We count the number of deaths but we don’t know how many people are infected as we have only tested so far those who were “almost dead” to find out if that’s what they will die of…
24. We have no treatment, except that there may be one that apparently is not dangerous unless you take too much (which is the case with all medications). Orange man bad.
25. We should stay locked up until the virus disappears, but it will only disappear if we achieve collective immunity, so when it circulates… but we must no longer be locked up for that?
This depiction of the noise about COVID-19 is dead on. And people clamoring for the general population to restrict their sensemaking to only official channels: (A) Don’t seem to be aware of the tremendous fuckups that the official channels have already made on this, and, (B) Seem to be vehemently opposed to the most “official” channel in the country anyway.
This is a morass of sensemaking failure that could lead to things far worse than the viral infection that caused it. Let’s move forward by extracting the signal, the actual facts, that we can hang our hat on at this time.
The Tale of Two National Fuckups
This thing came from a Chinese laboratory in Wuhan, probably the Wuhan Institute of Virology. We don’t need evidence gift wrapped by the Chinese to make this case. We just need simple mathematics, and the case is rock solid.
The “official channels” have maintained for four months that this virus originated in a wet market in Wuhan, not at the Wuhan Institute of Virology, which is the world’s Mecca of studying emergent SARS coronaviruses that originate in bats. A lot of speculation by the media has gone into supporting this case, as well as the solid support of the Chinese government, but the case is obviously garbage. I grant that wet markets for exotic harvested wild meats are a great vector for something like this, but set that aside for a moment.
There are between a hundred and a thousand wet markets in China. There are well over a thousand wet markets in Vietnam. There are well over a thousand wet markets in Thailand. There are hundreds or thousands of wet markets in Laos, hundreds or thousands more in Cambodia, and hundreds or thousands more in Myanmar (Burma) and Malaysia. Nobody knows for sure, but it’s completely reasonable to estimate the total number of wet markets in East Asia at ten thousand or more.
But only one of these ten thousand or more wet markets is two blocks from the Wuhan Institute of Virology.
The chance that a brand new never before seen SARS coronavirus variant would emerge at the only wet market two blocks from a laboratory whose primary function is to study never before seen SARS coronavirus variants, specifically from bats, is simply too astronomical to believe. If a brand-new world epidemic virus were to emerge every day from a wet market in east Asia, it would be three years or more on average before one emerged from Wuhan. No honest scientist would believe that coincidence given what we know.
I’ve followed a lot of traffic from geneticists and epidemiologists saying this virus doesn’t seem to have the earmarks of being created artificially. They may be right. But that doesn’t mean that a diseased bat wasn’t transported to Wuhan and the virus escaped via an infected technician, or via an improperly disposed of specimen. Nor does it rule out the disease being a product of “gain of function” research on bats with lesser uncatalogued diseases.
The Chinese reaction was archetypically communist and cannot be trusted. In order, they imprisoned whistle blowers, denied the virus, admitted the virus but said it wasn’t transmissible, admitted it was transmissible and invited foreign journalists in to watch them build a giant hospital, turned everyone in Wuhan into The Bubble Boy, snuffed it out (officially), then kicked all the journalists out and reopened the city. Then after the journalists were gone, they beat up people trying to go to the hospitals with COVID-19 to keep their new cases number down, cremated a lot more people than the official death count, denied any reinfection after lockdown was ended, and then blamed the origins of the infection on the US Army. Which is obviously not true, because if the USA had developed the virus we’d have tests for it way sooner than we did.
Now granted, that could just be communists acting like communists, but the entire timeline tells of a cover up.
The USA’s fuckup was a fuckup of mid level bureaucracy that has been widely reported, but doesn’t seem to be widely understood despite the reporting. This article is a fabulous primer, but I’ll summarize.
The first case in the US was identified the same day as the first case in South Korea, January 21st. South Korea gave out regulatory approval to every company in the country that wanted to make a test within one week, by the end of January, and as a result created the best testing apparatus in the world. The FDA and CDC collaborated to prevent US companies and universities from developing tests until the middle of March, and only eventually stopped obstructing test development by administrative (Trump/Pence) fiat. One of the most egregious examples of this behavior, which was promulgated by bureaucrats at the FDA and CDC, is that of the University of Washington’s Helen Y. Chu, who, after testing someone in her ongoing flu study for COVID-19 and discovering she had a sample pool that might contain many infections, was told, basically:
1) You just violated that test subject’s HIPAA privacy rights, and 2) You don’t have a permit to do COVID-19 tests, therefore 3) Stop testing.
If they had said the exact opposite, Seattle would have been controlled. Chu had everything in her hands to isolate the Seattle cases and possibly the lion’s share of the cases on the West Coast.
When universities and companies tried to develop their own tests, they were told to apply for a permit, and then only one permit was issued — to the CDC. The CDC then screwed up the test, and had to release a new one several weeks later. The backlash from the screw-ups came to a head the last day of February, where the FDA begrudgingly allowed some 5,000 labs (of the 260,000 labs in the country) to start working on tests.
The doors were finally thrown open to academic and private entities in full on March 15th, when HIPAA was waived for anyone working on COVID-19, and March 16th, when Vice President Mike Pence announced that all the rest of the labs could work on this without FDA interference.
Wojtek Kopczuk, a professor of economics at Columbia University, quipped that the “FDA sped up the process by removing itself from the process.”
The USA lost 45 days as compared to South Korea, at the same starting gun, entirely due to pencil pushers at the FDA and CDC. The important thing to take away from the Tale of The Second National Fuckup, is that no politician could have prevented this, unless they were willing to unilaterally step in, deplatform the FDA, burn HIPAA sooner, and bust the CDC down into an “advisory only” role. Not Trump, not Hillary, not Biden, not Bernie. The one politician who might have been able to do it, is the hypothetical caricature of Trump for which many Trump voters voted. And knowing how government works in the USA, it is unthinkable that this will get fixed, or that this won’t happen again the next time, because our universal bipartisan answer to government failure is more government.
And the government’s final response to needlessly wasting 45 days reacting to this, is to issue a 2 trillion-dollar bailout to pause the national economy for 56 days, so we can catch up, while everyone loses their jobs.
Boogaloo Soup
Case fatality rates (CFRs) for this thing vary tremendously by country, because the numbers don’t exist to properly calculate it. You’re not supposed to calculate a CFR until a confirmed case has either cleared or died, but everyone is calculating them in real time for COVID-19 by looking at deaths of confirmed cases. The math is all wrong. For one, we have some dead people who probably had COVID-19 and didn’t get counted because they didn’t get tested. For another, we have a lot more people who caught it and survived, but never got confirmed, because of the testing SNAFU. For a third, we have some currently alive cases who haven’t resolved in our numbers. This means the numerator in the fraction is wrong, and the denominator is very wrong. It is likely, on speculative analysis, that the final CFR for this thing will turn out to be very similar to the flu, as we see in South Korea and Iceland which have good testing. It’s “just” a flu that everyone gets all at once because nobody started with any immunity to it, which leads to more dead people. The spike of COVID-19 deaths we will see in the coming months are going to be several times higher than the flu deaths, because basically they’re several years-worth of flu victims squeezed into one year.
What would happen if this were as deadly as the measles? What if a “true” epidemic, of the style we’ve seen in the past, hit the highly connected, highly vectored, Marketplace Of Disease we call the “global economy?” Same infection rate, much higher CFR. This is the sort of thing the “preppers” have been thinking about for years.
The preppers didn’t need to run out and buy toilet paper. Or meat. Or rice. Or guns. But everyone else did — especially guns. Check out the March 2020 statistics for the FBI’s NICS Background Check System:
Firearm sales vary seasonally, owing to things like hunting season, Black Friday, Christmas, and such, but a good indicator of gun sales trends can be drawn by comparing a month’s sales to the same month from the prior year. In the past full year, every month except one has set a new monthly record. Look closely at March 2020. 3.7 million background checks were issued in March, in a simply unprecedented wave of sales. It’s a million-gun spike, ten times previous spikes.
Gun store owners told an interesting tale as well — these are almost all new owners. Most estimate around 75% of gun sales in March 2020 were first time buyers. That would constitute 2.8 million people, almost 1% of the total US population buying their first gun. Many of them liberal. Some of them prior supporters of gun control. Many foreign nations only have 1% gun ownership rate nationwide. We may have that many who became first time owners a month ago. And now the ranges are closed, and they can’t practice or train with their new purchase, and they’re sitting at home losing their jobs reading a stream of social media anxiety.
These numbers don’t even count peer to peer sales or gifts of prior owned firearms. And the other things people are buying? Nonperishable food, medicines, seeds, things to use at home. Prepper stuff.
The makeup of COVID-19 America now constitutes the following classes:
1) Previously armed previous preppers
2) Previously armed new preppers
3) Newly armed new preppers
4) Unarmed new preppers, lovingly referred to as “targets.” Bless their heart.
Nobody’s not a prepper anymore. Certain corners of the internet might call this a “Boogaloo Soup.”
But shooting people is a really hard, really terrible thing to do, so people don’t generally start shooting each other unless they have three things, not just one. First, they need the tools. Second, they need dire motivation. And third, they need psychological reinforcement that dehumanizes “the other,” some frame of reference or point of view that posits the person at the other end of the barrel as less human than themselves. We have the tools. Do we have these other elements?
Tribal Dehumanization
It’s widely known that our modern cross-tribal trust, when we speak of the Red Tribe and the Blue Tribe, is historically low. We have described this in many ways on HWFO, and some good indicators came from studies in the run up to the 2018 midterms, which were already marred with political violence.
All that happened before the impeachment trial.
A graph from that article highlights alarming polling numbers from an APSA 2018 study.
see other article, link above
Be very clear, at the peak of the Syrian Civil War, the total number of combatants on all sides only numbered 2% or less of the population on a per capita basis. We have 1% new gun owners alone, a national gun ownership rate around 30%, and a projected number of Red or Blue tribal goons who support terrorism to be up around the 15% range before COVID-19 entered the picture.
But the statistics supporting cross tribal terrorism aside, one of the best indicators of literal dehumanization might be to look at marriage polls.
People who identified with a party had even more intense feelings. In 1958, 33 percent of Democrats wanted their daughters to marry a Democrat, and 25 percent of Republicans wanted their daughters to marry a Republican. But by 2016, 60 percent of Democrats and 63 percent of Republicans felt that way.
Compare that to gauges of classic racism, the most historically significant American tradition of dehumanizing “the other.”
Opinions about interracial dating and marriage on a personal level have also evolved significantly. In 1971, 48% nationally said they would not approve of their own children dating someone of another race, while 28% said they would approve.
Put simply, the Red Tribe / Blue Tribe cultural divide in the United States is thicker than mid-20th century racism. We have all the dehumanization we need for a civil war, and all the gear. We’re just not motivated yet.
Lockdown Calculus
A lot of people I speak to don’t seem to understand that the economy is not just something we do to manipulate a stock market. It is the fundamental way that humans provide for our needs, including food, and has been since we came down out of the trees and settled on this idea of “labor specialization.”
I spoke to a lady in the Philippines a few weeks ago. She’s in her 20s, poor, and lives in Manila in a two-bedroom apartment with five other women. Not uncommon there. She teaches English as a Second Language over the internet to Japanese people. Or at least she did until their lockdown.
She told me stories from the ground floor in Manila. When their lockdown went into effect, tens of thousands of people hit the roads and walked home to their family villages in the rest of the country, in a mass pedestrian migration that took many days. They just walked. Slept on the side of the road. She stayed, and explained how their lockdown was being managed by the local matron in charge of a block of apartment buildings, who was acquiring food and delivering it to them, while the army patrolled the streets. She said the Army was very nice, and that everyone was in good spirits, because Filipino people are generally good spirited people. But the topic among everyone was when the food would run out, and whether more people were going to die from the lockdown than the virus.
Her Facebook account was deactivated last week.
I don’t know why.
People in Africa are revolting against the lockdowns now, and with very good reason. The median life expectancy in many areas of Africa isn’t much over 65, and their rate of the sorts of comorbidities that are leading indicators for COVID-19 fatalities is extremely low. Few obese people, few diabetics, few old people. In a morbid sort of way, COVID-19 deaths will be very small throughout Africa because a combination of other factors, malaria and malnutrition among them, have already cleared out the people most likely to die from it. One of their leading sources of death is malnutrition. Africans should objectively abandon lockdown now, if not yesterday.
Lesson: The calculus of when to come off lockdown is different everywhere, and the damage the lockdown does must be accounted for in this calculus.
There aren’t a lot of great studies on the fatality rate of recessions in the United States, but the best I’ve read was by Daniel Sullivan and Till von Wachter. Their sample pool was unfortunately limited to high seniority male workers in Pennsylvania from the 1970s on, but we might make some assumptions and apply it to the general pool. They found that overall mortality rates of their sample set increased by between 50% and 100% in the year following the year they got laid off. And while that effect declines sharply in further years, it remains at 10% to 15% higher twenty years later.
Let’s presume the mortality rate among the poor who get laid off is near the top of this band, and let’s further presume that the net chance of death over the time scale generally adds up, to a 100% increase in overall mortality due to an unemployment event.
The mortality rate for the working age population is on average, per CDC data, around 200 per 100,000, so a 100% increase would be an additional 200 per 100,000. The St. Louis Fed projects unemployment to top 47 million people in the wake of our COVID-19 response. If we presume that only 41 million of these people are directly due to the response, which seems reasonable given prior unemployment numbers, we can calculate 82,300 people killed by the lockdown. A recent article by the admittedly partisan National Review calculated a similar number by different means.
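As a rough check of that arithmetic, using the article’s own assumptions rather than independently verified figures, the calculation works out like this:
# Assumptions from the text: baseline working-age mortality of ~200 per 100,000 per year,
# doubled (a 100% increase) by a layoff, applied to ~41 million lockdown-driven layoffs.
baseline_rate = 200 / 100_000        # annual deaths per person
excess_rate = baseline_rate * 1.0    # the additional deaths implied by a 100% increase
layoffs = 41_000_000
print(round(layoffs * excess_rate))  # ~82,000, close to the ~82,300 figure quoted above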
If the lockdown saves half a million people, perhaps this is worth it. If we value the lives of working age people higher than we do aging retirees or people in nursing homes, perhaps it’s not. If the lockdown doesn’t actually save that many people anyway, because our treatments in the hospital don’t help that much, then this entire calculation gets a lot muddier. And all this ignores the hunger element, which for the USA is tied in with both the Potential Boogaloo and the food service industry.
Hunger Is the Indicator
I routinely post on my Facebook wall how many people I know who (Have contracted COVID-19) / (Have Recovered) / (Have Died) / (Are Newly Unemployed).
My friends all respond with their counts. Most know fewer than I who’ve contracted it, and only a few know someone who’s died. But the greatest variation in the numbers is in the “unemployed” category. My friends who are tied up in the entertainment, food service, or bar industries have “unemployed” numbers in the hundreds, or say they are simply uncountable. And food service is a huge link in the supply chain of hunger, a chain that’s been broken, and the breaks in the chain have now officially spilled back into agriculture itself.
According to the New York Times, tens of millions of pounds of fresh food are being destroyed by the nations farmers because we closed restaurants, hotels, and schools. 3.7 million gallons of milk per day are dumped out on the ground. Farmers are currently plowing under fields of fresh produce, because they have no choice. It seems absurd on its face, but it’s entirely predictable. The banana you buy in the grocery store looks different than the bananas they use in restaurants. Nobody makes onion rings at home. Everybody bought potatoes and rice for three weeks, and now have to figure out what to do with all the storable starch they bought instead of buying lettuce. The Times indicated that 5% of the total nation’s milk supply is being dumped out every day, and that will grow to 10% if we stay on lockdown too much longer.
The Times narrative speaks about tragedy in the industry, and with good reason, but the terrifying thing lies below the surface. It’s conceivable that bailout money could keep those industries alive, but there is no amount of bailout money that will dig that onion out of the ground. And the onion served a more important purpose than farming revenue. It was food. It prevented hunger. That’s what the economy is for, remember?
On the whole, US farmers export over 20% of what they produce, according to the USDA. But 18% of the food Americans ate, before the lockdown, was eaten away from home. In a perfectly elastic economy, nobody would starve in the US from closing all the restaurants because a 20% reduction in food production will simply lead to 20% less food exported, and we would hoard the remaining food for ourselves. But that’s not how things work. If a food factory is built to put food into export boxes instead of grocery store boxes, it’s going to continue to do so. Especially now, when foreign countries are probably already struggling with their own food shortages and we are their bread basket. In a very real way, buying up all those potatoes during the Great Grocery Rush of 2020 took a potato out of some kid’s mouth in another country, so now they might starve. And in a very real way, if we don’t open the restaurants up soon, and get the prior supply chains working again, we are very likely to end up with long term food shortages here.
And that’s the last element we need to start shooting each other.
Although I’m a prepper, and I’ve got plenty of food in my garage, you may not be. And if I was you, and my children were starving, I might try to shoot someone and take their food. And if you are you, and you try to do that to me, you might get shot. Expand to the national case.
If that happens, we will have the Tools for the Boogaloo, which are guns. We will have the Dehumanization for the Boogaloo, which is our political and cultural tribalism. And we will finally have the Motivation for the Boogaloo, which is our kids need to eat.
The Boogaloo Soup will be complete.
And should that happen, it will kill far more people than COVID-19, and will kill far more people than the unemployment from our response to COVID-19. It will be the greatest tragedy in the history of our nation, because we will have brought it all upon ourselves, from our own Freakoutery.
The soup timer is ticking. Beginning of May would be a great time to get our asses in gear. | https://medium.com/handwaving-freakoutery/the-covid-19-boogaloo-opus-51b1c1b860cd | ['Bj Campbell'] | 2020-04-17 16:56:20.075000+00:00 | ['Politics', 'Guns', 'Covid 19', 'Coronavirus', 'Random'] |
The New Submission Guidelines For Making of a Millionaire | The New Submission Guidelines For Making of a Millionaire
What we are looking for in 2021
Photo by Aaron Burden on Unsplash
We all have a “money story.” Money impacts our lives every day in both positive and negative ways. Whether it be the anxiety caused by lack of money, the thrill of making money, the fear of losing money, or the responsibility of managing money, we all have a story to tell about how money impacts our lives.
We want to give you the opportunity to share your money story! We are now accepting submission requests to publish your money stories in “Making of a Millionaire.”
If you would like to submit a story to Making of a Millionaire, follow these easy steps.
Leave a comment below, and we can add you as a writer. Once we have added you as a writer, go to “Edit” on any story you wish to submit, click “Add to Publication,” and select “Making of a Millionaire.” We will either accept the story, make some edits to the story, or let you know if the story does not fit with this publication.
We’d also appreciate it if you told us a bit about yourself and why you want to write about personal finance.
We will be publishing fewer stories in 2021
In 2020, we wanted to give as many writers as possible a chance to have their stories published in Making of a Millionaire.
And we published a LOT of stories. Too many, if I’m being honest. We ended up publishing many stories on subjects that we had already covered many times before.
For example, we probably have over 50 stories published on how to build an emergency fund.
Does this serve the reader best?
No.
All that is required is one or two great articles on how to build an emergency fund, and making sure they are visible to our readers.
What we are looking for in 2021: Stories that challenge our readers
Here are a few examples of the exact type of stories we want to publish moving forward.
Notice, all of the stories we are about to highlight are in-depth, insanely helpful, and go beyond “how to” and generic articles about personal finance.
Adam Parsons’ story on “When $100,000 isn’t enough.”
This is a real story about money and finances. It breaks down in detail how quickly $100k can go out the door.
Rocco Pendola’s story on why “Mark Cuban’s Most Recent Money Advice is Ridiculously Simple and Super Important.”
Takes a simple piece of financial advice, breaks it down, and explains how it can apply to the reader’s life.
My story on why “Passive Income Is A Lie, But Scalable Income Is Real.”
In this story, I push back against one of the trendiest topics in personal finance “passive income.” But I do it in a genuine way because it is something I actually believe. Then, I explain what “Scalable income” is and how it has changed my life and could potentially do the same for the reader.
To put it simply.
We will be publishing fewer stories, ones that are more focused on unique points of view and provide value to our readers
If you are currently a writer for MOAM and we are currently publishing most of your drafts, then keep doing what you're doing.
If you find we are not publishing any of your drafts, ask yourself whether the story you submitted is personal to your experience, provides a unique viewpoint, focuses on providing a huge amount of value to readers, and meets a high standard for quality and formatting.
Stories we will almost certainly turn down
Stories on “how to become a trader”
Stories advocating that readers pick and choose individual stocks
Anything overly self-promotional or that has the slightest leaning towards some type of multi-level marketing.
Stories containing undisclosed affiliate links.
We prefer stories that are 1,000 words or more. If your story is under 500 words we are unlikely to accept it. Our readers expect in-depth posts from Making of a Millionaire.
Poorly written stories. We often receive submissions that are simply not up to the standard of writing quality we expect from our writers.
Stories that are in our opinion, clickbait. See Medium’s guidelines on clickbait here.
Please do not take it personally if we reject your story. It is not a personal attack or comment on you. If your story is rejected, it simply means it’s not a fit with what we are after at the moment. Keep at your writing, look at the three stories I highlighted above and send us back your next draft.
I do want to thank all of our current writers and anyone who would take the time to send their hard-earned work to Making of a Millionaire. I have a huge amount of respect for anyone who has the courage to put their thoughts in writing and publish it.
Cheers,
Ben | https://medium.com/makingofamillionaire/the-new-submission-guidelines-for-making-of-a-millionaire-301bdba5d305 | ['Ben Le Fort'] | 2020-12-21 17:12:22.326000+00:00 | ['Money', 'Publication', 'Personal Finance', 'Writing', 'Writer'] |
Find Your Best Customers with Customer Segmentation in Python | Overview
When it comes to finding out who your best customers are, the old RFM matrix principle is the best. RFM stands for Recency, Frequency and Monetary. It is a customer segmentation technique that uses past purchase behavior to divide customers into groups.
RFM Score Calculations
RECENCY (R): Days since last purchase
FREQUENCY (F): Total number of purchases
MONETARY VALUE (M): Total money this customer spent
Step 1: Calculate the RFM metrics for each customer.
Source: Slideshare
Step 2: Add segment numbers to RFM table.
Source: Slideshare
Step 3: Sort according to the RFM scores from the best customers (score 111).
Source: Blast Analytics Marketing
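To make Steps 2 and 3 concrete, here is a minimal pandas sketch of the quartile scoring. It is illustrative only: it assumes an RFM table with the columns recency, frequency and monetary_value (the names used for the table built later in this post), and the 1-to-4 labels and the RFMScore column name are my own illustrative choices rather than part of the original tutorial.
import pandas as pd

def rfm_scores(rfm):
    # Quartile scores: 1 = best, 4 = worst.
    # Recency: smaller (more recent) is better; frequency and monetary: larger is better.
    rfm = rfm.copy()
    # rank() breaks ties so qcut always finds four distinct bin edges
    rfm['r_quartile'] = pd.qcut(rfm['recency'].rank(method='first'), 4, labels=[1, 2, 3, 4])
    rfm['f_quartile'] = pd.qcut(rfm['frequency'].rank(method='first'), 4, labels=[4, 3, 2, 1])
    rfm['m_quartile'] = pd.qcut(rfm['monetary_value'].rank(method='first'), 4, labels=[4, 3, 2, 1])
    # Concatenate the three digits into a segment code such as '111' (best customers)
    rfm['RFMScore'] = (rfm['r_quartile'].astype(str)
                       + rfm['f_quartile'].astype(str)
                       + rfm['m_quartile'].astype(str))
    return rfm

# Example usage once the RFM table exists:
# best_customers = rfm_scores(rfmTable).query("RFMScore == '111'")
Sorting by RFMScore then surfaces the best customers with score 111, exactly as in Step 3.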
Since RFM is based on user activity data, the first thing we need is data.
Data
The dataset we will use is the same as when we did Market Basket Analysis — Online retail dataset that can be downloaded from UCI Machine Learning Repository.
import pandas as pd
import warnings
warnings.filterwarnings('ignore')

df = pd.read_excel("Online_Retail.xlsx")
df.head()
df1 = df.copy()  # keep an untouched copy of the raw data as a backup
The dataset contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered online retailer.
It took a few minutes to load the data, so I kept a copy as a backup.
Explore the data — validation and new variables
Missing values in important columns; Customers’ distribution in each country; Unit price and Quantity should > 0; Invoice date should < today.
df1.Country.nunique()
38
There were 38 unique countries as follows:
df1.Country.unique()
array(['United Kingdom', 'France', 'Australia', 'Netherlands', 'Germany',
'Norway', 'EIRE', 'Switzerland', 'Spain', 'Poland', 'Portugal',
'Italy', 'Belgium', 'Lithuania', 'Japan', 'Iceland',
'Channel Islands', 'Denmark', 'Cyprus', 'Sweden', 'Austria',
'Israel', 'Finland', 'Bahrain', 'Greece', 'Hong Kong', 'Singapore',
'Lebanon', 'United Arab Emirates', 'Saudi Arabia', 'Czech Republic',
'Canada', 'Unspecified', 'Brazil', 'USA', 'European Community',
'Malta', 'RSA'], dtype=object)
customer_country = df1[['Country','CustomerID']].drop_duplicates()
customer_country.groupby(['Country'])['CustomerID'].aggregate('count').reset_index().sort_values('CustomerID', ascending=False)
More than 90% of the customers in the data are from the United Kingdom. There’s some research indicating that customer clusters vary by geography, so here I’ll restrict the data to the United Kingdom only.
df1 = df1.loc[df1['Country'] == 'United Kingdom']
Check whether there are missing values in each column.
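(The original post doesn’t show the check itself; a standard pandas one-liner such as the following would produce the per-column counts.)
# Count missing values in each column
df1.isnull().sum(axis=0)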
There are 133,600 missing values in the CustomerID column, and since our analysis is based on customers, we will remove these missing values.
df1 = df1[pd.notnull(df1['CustomerID'])]
Check the minimum values in UnitPrice and Quantity columns.
df1.UnitPrice.min()
0.0
df1.Quantity.min()
-80995
Remove the negative values in Quantity column.
df1 = df1[(df1['Quantity']>0)]
df1.shape
(354345, 8)

df1.info()
After cleaning up the data, we are now dealing with 354,345 rows and 8 columns.
Check unique value for each column.
def unique_counts(df1):
    for i in df1.columns:
        count = df1[i].nunique()
        print(i, ": ", count)

unique_counts(df1)
InvoiceNo : 16649
StockCode : 3645
Description : 3844
Quantity : 294
InvoiceDate : 15615
UnitPrice : 403
CustomerID : 3921
Country : 1
Add a column for total price.
df1['TotalPrice'] = df1['Quantity'] * df1['UnitPrice']
Find out the first and last order dates in the data.
df1['InvoiceDate'].min()
Timestamp('2010-12-01 08:26:00')
df1['InvoiceDate'].max()
Timestamp('2011-12-09 12:49:00')
Since recency is calculated for a point in time, and the last invoice date is 2011–12–09, we will use 2011–12–10 to calculate recency.
import datetime as dt
NOW = dt.datetime(2011,12,10)

df1['InvoiceDate'] = pd.to_datetime(df1['InvoiceDate'])
RFM Customer Segmentation
RFM segmentation starts from here.
Create a RFM table
rfmTable = df1.groupby('CustomerID').agg({'InvoiceDate': lambda x: (NOW - x.max()).days,
                                          'InvoiceNo': lambda x: len(x),
                                          'TotalPrice': lambda x: x.sum()})

rfmTable['InvoiceDate'] = rfmTable['InvoiceDate'].astype(int)
rfmTable.rename(columns={'InvoiceDate': 'recency',
                         'InvoiceNo': 'frequency',
                         'TotalPrice': 'monetary_value'}, inplace=True)
Calculate RFM metrics for each customer
Interpretation:
CustomerID 12346 has frequency: 1, monetary value: $77,183.60 and recency: 325 days.
CustomerID 12747 has frequency: 103, monetary value: $4,196.01 and recency: 2 days
Let’s check the details of the first customer. | https://towardsdatascience.com/find-your-best-customers-with-customer-segmentation-in-python-61d602f9eee6 | ['Susan Li'] | 2017-10-25 04:21:02.720000+00:00 | ['Machine Learning', 'Python', 'Customer Success', 'Data Science', 'Towards Data Science'] |
Call-To-Action Buttons Usage Guide | Call-to-action buttons on websites are often neglected. Designers sometimes don’t understand exactly what makes a good call to action button beyond being attractive and fitting into the overall design. But the call to action buttons is too important to be designed without some kind of understanding of what makes them effective. After all, the main point of a call to action button is to get visitors to do something.
Call-To-Action Advantage Buttons
Your CTA has to really provide some sort of benefit to the user to make him/her click. Just imagine the last time you bought something on the internet… what prompted you to take action? I’m sure you took action not because you were looking for what to buy, but because you saw a good benefit attached to the ‘Buy’ button.
In the same vein, a user cannot take action if your CTA is not convincing enough — they want to know exactly what they’re getting, and what they’ll achieve with it to avoid wasting money. Therefore, your call to action has to provide a solid benefit to your customers. If people are not so sure about the value they’ll get from your CTA button, they won’t click. It’s as simple as that.
Furthermore, apart from the text in your CTA button, the button color and placement are equally as important as the message. For example, lots of marketers have discovered that placing a subscription box on the bottom of the landing page performs best, while other people saw an increase in conversions when they placed the button on the left side of the page.
It’s your duty to find out which placement works best for you. You don’t have to do what others are doing, just test, test, and test some more before choosing a winner.
Also, figure out which button color works well for you. Green buttons may imply money and prosperity, but the best choice is to always test. Test every element of your CTA (including button color).
Looks Like Buttons
The subject of “signifiers” is critical when it comes to conversions and user-experience (UX). When we mention “signifiers” in the web design space, we’re mostly talking about making every element on a web page to look exactly like what it’s supposed to be used for. It means that a button should look like a button… and nothing else.
This will make it easy for users to immediately identify it as an element that they should click on to initiate an action. So let me ask you… when a first-time visitor lands on your landing page, will he/she absolutely identify which elements are clickable? Or will he/she get confused and start guessing what to do? If you agreed with the second question, then you have to change something immediately. In a nutshell, buttons are generally easier to click when we’re sure they’re clickable.
It’s no wonder why gray buttons often convert poorly — they look deactivated, so lots of visitors won’t even know they’re expected to click them. Can your visitors easily identify the CTA on your site and landing pages? Is the call-to-action visible enough? Does it have signs implying clickability? Finally, another good idea to make your call-to-action stand out is to have lots of space around it, like the PayPal ones.
Curiosity
When a user sees this level of openness, they know exactly what they’re supposed to do. Make Your Visitor Curious. Use curiosity effectively, and you’ll see a massive boost in conversions. According to Andrew Sobel, one of the 6 rules for evoking curiosity is: “Tell people what you do and the results you get, not every detail about how you do it. The former is interesting; the latter can become tedious.”
Curiosity brings out the burning desire to know something you didn’t know before. If you design your call-to-action message in a way that could create a burning desire for your prospects to find out what’s on the other side of the CTA, they’ll be more willing and eager to click, thereby giving you the lead generation you want. And, remember: the higher your click-through rate, the more sales you’ll generate.
In other words, emotional triggers like surprise, trust, fun, delight, and, most importantly, satisfaction arouse curiosity in your users: For example, when people trust you, they’ll be more willing to click. In the same way, when people are delighted with your PPC ads or landing page copy, they’ll immediately click, because they envision a benefit.
You should always remember that your target audiences are human beings who continually make emotional and rational choices depending on the information presented before them.
For Free
We all love free stuff, especially when it’s useful free stuff. Although there may be no such thing as a free lunch, even in free town, as humans, we can’t resist the attraction of a bonus, including a free eBook that sounds interesting. Offering your customers a helpful freebie is one super-effective way to attracting and retaining more of them. Therefore, you have to start offering a bonus in your CTA message, too.
For example, when a company offers you a great opportunity to save a little money while making a purchase, that’s a reward because they’ll bear all the risk and you’ll gain more. In fact, the majority of telecommunications service providers out there offer some kind of “bonus”, such as free shipping, extra savings, rebates, and “buy-one-get-one-free” offers.
Attractive Call-To-Action
Your sales copy and PPC ad campaigns, promotional banners, and landing pages can only drive quality leads and customers to your business when they click on your call-to-action button.
To a significant extent, a high click-through rate (CTR) equals a higher conversion rate. If all the other important elements like your sales funnel and offer are properly optimized for your target users, and you’re not seeing conversions. The problem is likely with your CTA. | https://medium.com/visualmodo/call-to-action-buttons-usage-guide-be78c8755e7 | [] | 2019-01-21 19:05:07.478000+00:00 | ['Marketing', 'Call To Action', 'Buttons', 'Inspiration', 'Cta'] |
Thinking Outside the Books | Thinking Outside the Books
A library without librarians, a bookshelf without books, and other wonders of the modern word.
It’s late in the night, when most people are getting ready for bed — if they aren’t asleep already. Daytime shops have closed long ago; they wait with their shutters down until the next morning. But in this silent street, one door remains open.
It leads to a library.
But what an extraordinary library this is! There are no staff working inside; no librarian. You walk in through the door, after having your library-card scanned: walk in, pick the books you want to borrow, and walk out again. As you move about, the library’s sensors follow your motions with their electronic eyes.
And when you walk out, the library detects which books you’ve borrowed, automatically entering them into your account.
This is an ‘Intelligent Library’, one of several set up around Taiwan to help people access books more easily. Intelligent Libraries are not meant to replace ordinary ones. Instead, they act as a supplement. They extend a library’s reach to places where a full-fledged, human-handled library would be too impractical.
Intelligent Libraries can also open earlier and stay open later into the night. That’s important, because with ever-lengthening work hours, that’s the only time people have for reading. Libraries in China and Singapore regularly remain open until midnight.
But some libraries never close at all.
The Bath University, UK, was the first in the country to try out a 24-hour library. That was in 1996. But it’s only in the past decade or so that other universities have started to follow suit.
24-hour libraries are open round the clock. They aren’t always in use, but they’re still useful for students who work late at night to catch up on assignments, or wake up early morning to prepare for an upcoming exam. Foreign students use the space to conduct Skype calls with relatives in different timezones.
Some people are concerned that 24-hour libraries may lead to bad habits. Students may get the impression that they’re expected to work late, rather than catch up on much-needed sleep. They might do that anyway, but an all-night library would only encourage the habit.
Students don’t buy that argument, though. If they’re so pressed for time, it’s useful to have library access whenever they need it instead of worrying about opening and closing times.
Either way, it seems the 24-hour library is valued mainly for the space it provides. The question of books hardly ever turns up.
Maybe that’s why some libraries have decided to do away with them altogether.
The Vision IAS Library, Delhi, is one of many such spaces that have sprung up in India’s capital. This ‘library’ has soothing air-conditioned rooms to keep out the worst of Delhi’s heat. It has WiFi access, open discussion spaces, the usual silent atmosphere, and everything else you’d expect from a library.
Well, almost.
It doesn’t have any books.
Instead, what it has are rows of desks, at which you can sit and study without disturbance. You can book desks in one of three ‘shifts’ — morning, evening, and night — or pay extra for round-the-clock access.
The Vision IAS Library was started in 2011, after Shalini and Sanjeev Rathod had failed several attempts to crack the government-job IAS exam. They decided to use their experience to conduct coaching classes, but then they realised students were missing one more thing: a comfortable, distraction-free space to study.
Libraries like the Vision IAS don’t store books, because exam textbooks change every year. Instead, some provide lockers for people to store their own belongings. Though summer is peak season, these ‘libraries’ are in demand all through the year — some so much that they even let you pre-register online.
While Vision IAS is a library without books, you could say Safari Books Online is books without a library.
Started in 2001, Safari Books Online is the Netflix of digital textbooks. You pay a monthly or yearly fee, and get unlimited access to the whole collection. And you can read the book in whichever ebook format you prefer, on a variety of devices.
Safari Books Online started with a focus on computer science and programming, but it has expanded to other areas as well. All this is still about textbooks, though. What about proper books — books that you want to read, for fun, and for pleasure?
Ignoring all the illegal pirate sites out there, Project Gutenberg and WorldCat are the places to go.
The idea behind Project Gutenberg is simple. Take all the books whose copyrights have expired. Scan them, digitise them, and put them up for people to read.
But copyrights take time to expire. Nowadays, companies renew them even after the original authors are long gone. So Project Gutenberg is good if you want to dip into Alice in Wonderland or Sherlock Holmes, but not if you want to check out the latest New York Times bestseller.
That’s when WorldCat comes in. It’s a catalogue of real, physical libraries from all round the world. Search for a book, and WorldCat will tell you which libraries have a copy (of the libraries who’ve registered with WorldCat, that is). Then you can go to the library and pick up the book, or, in some cases, even have it home-delivered.
Of course, if the library’s too far from you to access, you’re out of luck. Or are you? Maybe not: libraries can also lend you books through the Internet. And the Internet is seldom “too far away”.
How does that work? Well, libraries don’t just lend out physical books. They can also lend out ebooks. Lending works the same way: you get the book for a while, and then, when you’re done, you “return” it by deleting it from your device. Libraries keep track of how many copies of an ebook are ‘lent’, and make sure not to lend more than they’re allowed.
Usually, ebook lending is only available for people who visit the library. But through WorldCat, they can lend ebooks to anybody in the world (but you’ll still have to pay).
If you’re like me, you wouldn’t consider an ebook a ‘proper’ book. It would have the same text, of course, but everything else is different.
Ebooks are getting very popular because they’re cheaper, lighter to carry, and don’t cut trees. But if you don’t like ebooks, don’t worry: there are initiatives to promote ‘proper’ book reading, too.
The Delhi Metro is a busy place. Over two-million commuters use it every day to get around the city.
There are harried businessmen for whom the ride is the only routine they have during the day. There are students squeezing in some last minute revision, brick-like textbooks propped open in their arms. There are suburban housewives in their bright clothes and gaudy lipstick, heading into the city to meet their friends for lunch. And they’re all trying to get to the places they want to go as fast as possible.
But somewhere in a corner, if you look very carefully, you might find a book.
‘Books on the Delhi Metro’ is a project by writer Shruti Sharma and her husband Tarun Chauhan. With a tagline of “Take it, Read it, Return it”, the idea is to leave books in random places for people to find. If you find a book, you can take it home and read it. When you’re done, leave it somewhere — anywhere — in the metro system, for the next person to find. People are also asked to leave a tweet, to keep track of where the books are going.
Books on the Delhi Metro is currently only in Delhi, but there are plans to expand it. And it’s not the only such initiative. The Delhi project was itself inspired by Emma Watson, who played Hermione in the Harry Potter movies.
Emma Watson left books on the London Underground. She later went on to start Book Fairies, which lets anyone around the world become a ‘book fairy’ by leaving books in random places for people to pick up. Other projects, like BookCrossing, let you do the same thing too.
With reading habits going down and bookless libraries going up, you may be worried about the future of libraries. But if Books on the Delhi Metro and similar initiatives catch on, we won’t have to worry so much about them vanishing.
The whole world will be a library.
Sources and references for this article can be found here. | https://medium.com/snipette/thinking-outside-the-books-7e484fa1aa41 | ['Badri Sunderarajan'] | 2018-10-28 02:31:02.458000+00:00 | ['Books', 'Reading', 'Libraries', 'Culture', 'Books On The Delhi Metro'] |
Neural Networks, Demystified | You’ve no doubt heard about neural networks — the mysterious and sci-fi-like technology that makes for a great buzzword. But if you aren’t technical, you may have written them off as an enigma left only for computer science nerds (like me). That changes today with this primer on how they work, designed for people who know nothing about computer science, coding, or mathematics.
What is a neural network?
A neural network can be thought of as an artificial information processor. It takes input, processes it in some way, and then produces some output. The structure of the network defines how it does the processing, with different structures producing different output. The result is networks that can classify images, translate languages and much, much more.
As we’ll soon see, some parts of the network are fixed while others, known as parameters, can change. Our goal is to adjust these parameters in such a way that our network learns to solve a problem. Initially, our network will be very bad at its task, like a child doing calculus, because these parameters are set randomly. But as we iterate through many cycles of testing the network, and updating the parameters based on its response, it will get better over time. Unsurprisingly, this repeated process of testing and updating means that training data is a big part of neural networks.
Let's take a look at what a neural network looks like.
Network Architecture
The original motivation for neural networks is neurons in the human brain, which have a few important features:
1. Neurons in our brain are connected to each other by a massive network where the output of some neurons can serve as input to others.
2. The strength of connections between neurons can change based on the frequency of use, leading to the popular phrase by Donald Hebb, “neurons that fire together, wire together”.
3. The electrochemical potential in a neuron can build up, but the neuron won’t ‘fire’ until the potential passes some threshold.
Let’s see if we can artificially replicate some of this functionality by looking at the building blocks of neural networks, perceptrons.
In the above diagram, we’ve represented two connected neurons, A and C, where the output of neuron A, x, is equal to the input of neuron C. We’ll represent neurons with nodes (the circles) and the connections between neurons with edges (the lines). You can think of a neuron like this: it takes some input, it holds a value (some combination of its input) and then passes that value on as output. So far this model satisfies the first feature listed above. Let’s introduce connection strengths.
We can vary the strength of connections by introducing a connection weight, w. The input to neuron C will now be the output of neuron A, x, multiplied by the weight w. Intuitively, the greater (smaller) the value of w, the stronger (weaker) the connection between the two neurons. This satisfies the second feature. Finally, let’s introduce potential thresholds.
We’ve now introduced another neuron B which has a value of b and a connection weight of -1. B is known as a bias, and we’ll see why soon. The input to C becomes the weighted sum of A and B, that is, w*x + (-1)*b. Next, we apply the step function to the input at C which is defined as f(x) = 1 if x > 0, 0 otherwise.
The step function
To summarise, the value at C becomes 1 if w*x - b > 0, and 0 otherwise. Why on earth would we do that? Well, the value at C will equal 0 if w*x ≤ b. In other words, the bias b acts as a threshold that we need to pass in order for the value at C to not be 0. This is exactly like the third feature of neurons discussed earlier! Because of this, we call the step function an ‘activation function’.
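To make that concrete, here is a tiny sketch of this perceptron in Python (the numbers are made up purely for illustration; x, w and b play the same roles as above):
def step(z):
    # the step activation: 'fires' (returns 1) only when the input is positive
    return 1 if z > 0 else 0
def perceptron(x, w, b):
    # weighted input from neuron A, minus the bias threshold b
    return step(w * x - b)
print(perceptron(x=2.0, w=0.5, b=0.8))  # 1, since 0.5*2.0 = 1.0 passes the threshold 0.8
print(perceptron(x=1.0, w=0.5, b=0.8))  # 0, since 0.5*1.0 = 0.5 does not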
There’s only one problem. The vertical part of the step graph at x = 0 means that it's not differentiable. If you don’t know what that means, don’t worry (see the conclusion if you’re not satisfied). All you need to know is that we can approximate the step function with the sigmoid function.
The Sigmoid function
You can think of the sigmoid function as a ‘squishification’ of all possible inputs to fit between 0 and 1. The larger (smaller) x is, the closer sigmoid(x) is to 1 (0).
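If you’re curious, the sigmoid takes only a couple of lines of Python to write down (nothing here is specific to our network; it’s just the function itself):
import math
def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1 / (1 + math.exp(-z))
print(sigmoid(-5))  # close to 0
print(sigmoid(0))   # exactly 0.5
print(sigmoid(5))   # close to 1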
We can extend our current model to have many neurons feeding input, each of which will have its own weight. Note that only one of these will be a bias. Again, the input becomes the weighted sum (the product of the output of each node by its connection weight) of the neurons before it.
And while we’re at it, why not add a couple more nodes to each layer, and a couple more layers of connectivity? We call the layers between the first layer and the last ‘hidden layers’. Here, each layer will have only one bias.
We generally start by populating the leftmost layer of neurons and move ‘forward’ through the network by calculating the value of each neuron in the next layer, and so on. Finally, we can compute the value of the neurons in the output layer.
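As a rough sketch of that forward movement (assuming the sigmoid function from the snippet above, plain Python lists for the weights, and placeholder variables like input_values and hidden_weights that you would fill with real numbers):
def forward_layer(inputs, layer_weights, layer_biases):
    # inputs: the values of the previous layer
    # layer_weights[j][i]: connection weight from input i to neuron j
    # layer_biases[j]: the bias threshold for neuron j
    outputs = []
    for j in range(len(layer_weights)):
        weighted_sum = sum(w * x for w, x in zip(layer_weights[j], inputs)) - layer_biases[j]
        outputs.append(sigmoid(weighted_sum))
    return outputs
# a full forward pass is just this step repeated, layer by layer
hidden_values = forward_layer(input_values, hidden_weights, hidden_biases)
output_values = forward_layer(hidden_values, output_weights, output_biases)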
As we said before, there are some fixed features and some parameters in our network. The general structure, that is, the number of layers, the number of nodes in each layer and the activation function, is fixed. Based on how we move forward through the network, the value of each neuron is deterministic given the neurons and the weights that precede it. Therefore the only thing we can change, our parameters, is the set of weights of the connections between neurons.
Now that we understand what a network is, let’s look at how we can use it to solve a problem.
How it ‘learns’
We’ll take a look at one of the most famous machine learning tasks, identifying handwritten images.
The general process for learning is this:
1. Define a network
2. Pass an image into the network (input)
3. The network will predict the label of the image (output)
4. Use the prediction to update the network in such a way that it ‘learns’
5. Return to step two and repeat
Let’s assume the images are each 28x28 (784) pixels, and since they are grey-scaled, the value of each pixel ranges from 0 (black) to 1 (white). In order to train the network, we need training data in the form of images and their associated label.
The first layer of our network will represent the data; it’s how we feed a data point (an image) into our network. There will be 784 neurons in the first layer (plus a bias) and the value of each neuron will be the value of a pixel from the training image. The last layer in the network will represent the output; the model’s prediction for the label of the image. There will be 10 neurons in this layer, and the closer the value in neuron i is to 1, the more the model thinks the image has label i.
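To make the output layer idea concrete, here is a hedged sketch of how a prediction would be read off (network_output stands for the 10 values produced by a forward pass like the one sketched earlier):
# network_output: a list of 10 numbers, one per possible digit label
predicted_label = max(range(10), key=lambda i: network_output[i])
print(f"The network thinks this image is a {predicted_label}")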
Initially, we set the weights of the graph to random values, which is why the initial predictions won’t be very good. The choice for the number of hidden layers and the number of neurons in each is a challenging problem to solve, which we’ll skip over. For educational purposes let’s just assume there is 1 hidden layer with 10 nodes, and look at an example. | https://towardsdatascience.com/neural-networks-demystified-49f3426d4478 | ['Ben Rohald'] | 2019-07-15 14:52:10.702000+00:00 | ['Machine Learning', 'Education', 'Neural Networks', 'Artificial Intelligence'] |
Starting your career during a pandemic: 40 young journalists enter a new journalism landscape | The response to this year’s GNI Fellowship call for applications was exciting and a great sign that many young people are looking to land their dream job in journalism. More than 1,400 students and graduates looking to kickstart their journalism careers applied for a summer placement. Leading European news organisations across 14 countries will now host 40 Fellows for eight weeks.
The GNI Fellowship seeks to bring young talent into newsrooms and help them become more diverse and interdisciplinary while staying at the forefront of the use of technology in journalism. For many aspiring journalists, the Fellowship is their first paid job and a step that can kick-start their career in journalism.
The Fellows were carefully selected by each host organisation for the skills they offer in digital and data journalism, audience and product development, verification and fact-checking, all of which are in high demand to help move the industry forward. Amid a pandemic, which has pushed many newsrooms to find new ways of producing journalism, these skills are needed more than ever.
Meet the Fellows
The cohort is a young group of aspiring media professionals with backgrounds that range from journalism to UX/UI design, and from computer science to philosophy. Here are the GNI Fellows for 2020 and the main areas they will focus on:
Area: Data Journalism and Visualisation
‘The GNI Fellowship gives me an opportunity to learn a new inspiring way of reaching and engaging audiences by using data and new storytelling formats. I am really looking forward to working at the intersection between journalism and design and connecting with the community of fellows from across Europe.’- Pilar Tomás Franco ‘I look forward to using new technologies and narrative formats to complement my previous journalistic experience. Because we need strong critical journalism and we need to make it ready for the future.’- Gabriel Rinaldi
Area: Audience engagement and digital storytelling
‘I’m most looking forward to have the opportunity to make meaningful change in the environmental sphere, learn from industry professionals and experiment with social content.’- Katie Anderson
Area: Fact-checking, verification and investigative journalism
‘The GNI fellowship is an exceptional opportunity to work with the best fact-checking journalists of a leading news organisation. I will get to learn about the state-of-the-art technological tools and latest innovative formats used in this domain. It is really exciting!’- Juliette Mansour
Area: Design and product development | https://medium.com/we-are-the-european-journalism-centre/starting-your-career-during-a-pandemic-40-young-journalists-enter-a-new-journalism-landscape-5ce92d0fa33f | ['Charlene Arsasemita'] | 2020-06-30 11:48:11.162000+00:00 | ['Journalism', 'Fellowship', 'Innovation', 'Media', 'Updates'] |
How Denying My Sexuality Destroyed My Ability to Love Those Most Like Me | I was once asked to protect an archbishop from gay Catholic protestors during a church service.
It is a rather unusual thing to admit, especially considering I am gay myself, but life is nothing if not ironic.
I was in seminary at the time, training to be a Catholic priest. One day the rector in charge of the seminary asked to see several of us who he had identified as leaders. We had quickly bought into his vision of a more masculine and orthodox church reform led by the men of Catholicism and he had made us his lieutenants.
Tomorrow, he explained, the Archbishop will be celebrating Mass at the university chapel. A group of LGBT Catholics are planning on protesting the service, probably inside the chapel and possibly during communion.
For Catholics, communion is the high point of our worship, where God becomes physically present under the auspices of simple bread and wine. A small number of protestors around the world had taken to desecrating the communion host to make their point, which, when you believe God is physically present, is a truly offensive protest. Thus, any protests at a Mass were taken seriously, whether a threat was actually made or not.
The four of us, the rector said, would protect the Archbishop and protect Jesus at communion.
I was thrilled.
me in my seminary days
The next day we walked over to the chapel together, in our khakis and blue polos with the name of the seminary embroidered on the left chest. Standard issue seminarian uniform. Honestly, we looked more like an early 2000’s dance group in a retail clothing commercial than bouncers — a fact that was once called out to us on the street by an actual bouncer in New Orleans while fifty of us walked by: Five dollar wells! Four dollar domestics! Ladies get in free! And what is that a motherfucking Gap commercial?!
As we processed down the aisle at the beginning of the Mass we could see a group of four or five men and women sitting towards the back. Some wore a rainbow flag draped over their shoulders like the suffragettes in Mary Poppins, others a more modest pin. They all seemed to be at least fifty.
The scripture readings and sermon moved along without any disruptions. The protesters sat quietly, standing only when the whole congregation did, saying the usual prayers along with us. I was both relieved and disappointed. Part of me was grateful they were being so respectful, but I also was curious to see how any conflict would play out. Church is stereotypically boring for a reason — what would it look like with a little bit of rebellion thrown in the mix? Just being asked to be there, my ego was already sky high. My mind raced at what I would do if I had to step in.
The priest had asked us to stand alongside everyone handing out communion in case anything happened. The big question — and the heart of the protest, really — was whether the Archbishop would give the LGBT members communion. They were protesting the Catholic Church’s treatment of its queer members and symbolically, you couldn’t get much better than being prohibited to even share a meal with the rest of the congregation.
Devout Catholics would object to that set up. There’s no discrimination in barring LGBT Catholics from communion, they say. There are certain beliefs required of all members and various actions prohibited — confession awaiting all those who fall short — in order to be ready to participate in communion. There are no separate rules for gay and straight Catholics, just one set of beliefs that binds us all together.
Add onto that an understanding that the solemnity of communion is such that protesting or in any way politicizing the act would be wildly inappropriate and it is no large surprise that the threat of rejection floated in the air.
The small group queued up in the Archbishop’s line as I stood tall and still right beside him. It is not normal in Catholic churches to have anyone accompanying a person distributing communion, and given the potential for disruption we believed was present, I did my best to look intimidating. By this point I was nineteen and into it. I felt like a knight protecting royalty, or maybe a Secret Service agent, accompanying the president as he shook hands or kissed babies.
A woman in her sixties approached the Archbishop and gently put up her wrinkled hands like the rest. Her grey hair was cut short and an “LGBT Catholic” button was placed prominently on her chest. I glanced out of the corner of my eye and caught the Archbishop’s face. He smiled and had a genuine tenderness in his eyes. He reached down and put his hand on her shoulder like he did for people not ready to receive communion and said a small prayer asking for God’s blessing.
The woman seemed to crumple like crepe paper under his hand. Tears gushed out and she quickly walked past the line for the wine and returned to her pew, muffled sobs echoing off the plaster walls.
I betrayed no emotion, but my eyes followed her all the way back to her pew until her head fell down into her hands. This was the great threat I was here to thwart, and honestly, I was proud to have played my part. Her tears did not move me. The dignity of the Church had to be guarded and the truth she bore upheld. If her stunt left her feeling rejected, good.
I was a knight. I was nineteen.
When Mass was over we accompanied the Archbishop back to the sacristy where he could hang up his priestly vestments. Us seminarians were in formation around the Archbishop as the protestors approached outside. They stayed just outside our perimeter, signs that were tucked away during the service suddenly hoisted high as they chanted about justice for gay Catholics. It was all over within several seconds. By the time the Archbishop had stowed away his vestments everyone had dispersed. Within fifteen minutes I was back in my room, my feet propped up and a Philosophy 101 textbook in my lap.
a different, bigger protest
What worries me most, looking back on that episode, is not the dramatic overreaction in itself. The desire to protect the church, its leaders, and yes even God, is a sincere one. It was silly in its intensity, but not rooted in malice. Those protestors were incredibly peaceful, and honestly not even that good at being distracting. The part I struggle with is how easily I was able to disassociate myself from the LGBTQ Catholics that were there.
In 2005 I was still closeted to all but my family and a few old friends. In the seminary, I was trying desperately to increase the Catholic side of my identity, hoping it might somehow consume the gay one. Hoping that with enough fervor and devotion, the parts of me the Church found lacking would fade away and be forgotten.
The thrill I got at standing tall and being the Catholic knight, the defender of what is right and true and good, eclipsed any reality of how close I was to those who were approaching the altar, and why. I think my subconscious did some kind of primitive, communal calculus, and decided the queers wanting a place at the table were a threat to my own inclusion as a real Catholic.
Watching that woman with her wrinkled hands held out to the Archbishop, my mind wouldn’t allow me to see how close I was to her. How it had been only a couple years since I was laying in bed with a boy and then mumbling to my mother the next morning that maybe the Church could change.
Instead, I put my shoulders back and my eyes forward and decided that I had been the one who changed. I wasn’t gay — not really. I struggled with same-sex attraction, but no one needed to know that. It wasn’t a real part of me. Being a good son of the Church was my identity now.
I wonder what might have gone different if I had learned to see a part of me in those protestors. I can’t imagine I would have been willing to play the Secret Service agent. My discomfort with turning a church service into a protest would probably have remained. There are certain places and times that need to remain inherently reverent and free from antics, righteous though the cause may be.
But my willingness to engage, to listen, to search out a more proactive solution would have been there. If I had just been able to say, I disagree with how they are going about this, but I at least understand how I could have ended up there myself. That changes the dispositions of the dialogue entirely.
As should be plenty obvious by now, I have switched sides on the gay/Christian debate, and think it is the Church that needs to move more towards the gays, not just the other way around. But I hope I can remember the path I took to get there. How I was able to stand next to the Archbishop and, in my heart like Peter, utter my own, Woman, I do not know the man.
I need more compassion for those who disagree with me, even while I insist on the need to recognize the full humanity of LGBTQ individuals and their rights. The empathy piece is key though. Both in humility to recognize how easily I could have ended up in their shoes. And in practicality, to adopt a more inviting and, dare I say, Christian approach to conflict.
But I also believe trying to shut down my gayness, to compartmentalize and lock away this part of me, is what made me able to so harshly view those protestors as only their sexuality. The smugness with which I smiled inside at that woman’s pain was a direct consequence of me being taught to hate and push away being queer. The coded language of “you are more than just your sexuality” that I was constantly told meant, in practice, that any acknowledgment of it was to claim queerness as my sole identity. And when I saw it in another, I suppose I wanted it destroyed.
A refusal to see myself clearly meant an inability to recognize anyone like me. That is the terrible hole so many of us gay Christians are still trying to climb out of. It is how some of the most vocal opponents of gay rights end up coming out of the closet when the weight of it becomes too much. They look at someone with all the same life experiences and instead of seeing themselves, only see what they think could shatter them.
There is a lot about my youthful zeal I wish I could go back and do again. But few pieces of my history gnaw at my conscience like my role in seeking to intimidate that woman. To intimidate myself, really. So far from fear not. I stood tall and puffed out my chest, but inside I was cowering.
What a grace to know I have nothing to fear. I hate my role I played in sending away others from the communion table. But I cherish my part in being able to invite them back. | https://medium.com/reaching-out/how-denying-my-sexuality-destroyed-my-ability-to-love-those-most-like-me-7f4c00c49eaf | ['Patrick Flores'] | 2018-01-14 16:53:05.732000+00:00 | ['Reaching Out', 'Christianity', 'Life', 'Storytelling', 'LGBTQ'] |
React to React Native: Tips and tricks for your journey | React Native is gaining popularity these days and many people try to create React Native components out of their existing React ones. This process is not always straightforward so we have created some quick tips to make the migration smooth and error-free.
1. Replace HTML elements with React Native components
The first and most important step in the migration is to replace all the HTML elements — as React Native doesn’t support HTML — with React Native components. For instance, div/section should be replaced with the View component, and h1, h2, … h6, p and similar text-based elements should be replaced with the Text component. For example:
// Web / HTML Component:
const TextComponent = ({content}) => <h1>{content}</h1>

// React Native version of the above Component:
import { Text } from 'react-native';

const TextComponent = ({content}) => <Text>{content}</Text>
Such React Native components compile into Native code based on the platform and, hence, constitute the fundamental building blocks of the app.
2. Conditional rendering of components can be tricky
Conditional rendering is one of the most commonly used patterns in React. Say we are conditionally rendering TestComponent as follows:
<View> //React Native
{ifTheConditionIsTrue && <TestComponent />}
</View>
The above code works fine in React Native until the variable `ifTheConditionIsTrue` is an empty string. If ifTheConditionIsTrue becomes an empty string, React Native tries to render that raw string, complains that text strings must be wrapped in a <Text> component, and the app breaks.
The solution is type coercion. Adding a !! before ifTheConditionIsTrue will convince React Native that the variable is a boolean. The solution looks like this:
<View> //React Native
{!!ifTheConditionIsTrue && <TestComponent />}
</View>
If you prefer being explicit (for example, when using TypeScript), a ternary that falls back to null keeps the readability and sidesteps the falsy-string pitfall altogether:
<View> //React Native
{ifTheConditionIsTrue ? <TestComponent /> : null}
</View>
Returning null renders nothing at all, so even an empty string in ifTheConditionIsTrue never reaches the renderer, and React Native won’t expect a <Text> component around it.
3. You always click, so learn to press.
In React, we use onClick synthetic events in components. As we don't use a mouse on the mobile phone (yet), we don't have an onClick event. Instead, we have a <TouchableOpacity onPress={}> component, which handles press events in mobile phones. Hence, all the onClick events should be changed to onPress in order to execute the callback when interacting with the components.
4. Platform-agnostic components pose a challenge
When we build an app using React Native, the code gets compiled into native code depending on the platform (iOS or Android). Maintaining consistency across platforms is quite difficult, especially with the Picker/Select/DropDown. For instance, the native picker component resembles a dropdown on Android and opens a modal with options on iOS. If you want to maintain a consistent design, either build a custom component or use libraries such as react-native-picker-select.
5. SVGs
Handling SVGs in React Native is one of the most difficult things. Check out this article to find out how to deal with icons in React Native, which are mono-color SVGs. If you want to render “complex” SVGs with more than one color or attribute, you should use npm packages such as react-native-svg.
6. Ditch CSS
React Native doesn’t support CSS (SASS/LESS) and hence we will have to either use a CSS-in-JS solution or StyleSheet from React Native. Of course, inline CSS is always an option, though it should be the last resort.
A personal suggestion is to use styled-components, as they enable reuse of existing CSS through the css helper, where you can pass CSS as a string without changing much. And if you are already using styled-components for your React components, it’s quite convenient because it provides the same API to style in both React and React Native. This increases the development speed substantially.
A detailed explanation of the effective usage of styled components can be found in this article, which also covers how to handle inline styles with styled components.
7. Inline styles
Inline styles — which are best avoided — are written as objects and accept string values in px for React components. But in React Native such values should be passed as numbers, without px. Values in pixels (inline styles) tend to break React Native.
Also, React Native doesn't support shorthand CSS properties in styles. For instance, padding : 10px will break React Native. The individual properties should be listed as follows:
style = {{
  paddingTop: 10,
  paddingBottom: 10,
  paddingLeft: 10,
  paddingRight: 10,
}}
To avoid all these issues, it's better to use a CSS-in-JS solution.
8. FlexBox is key
Flexbox can be used with React Native, and it makes it a lot easier for developers to maintain layouts between React and React Native. However, on the web the default value for flex-direction is row , while in React Native the default value is column .
Hence, to maintain a uniform layout, we need to specify the value of flex-direction explicitly.
If you are new to Flexbox or just lazy like I am, Facebook has an interactive tool called Yoga, which lets you build complex layouts and generates the corresponding React Native code.
9. Use styled components with animated components
In the case of React components, we use styled-components as follows:
import ReactComponent from './ReactComponent';
import styled from 'styled-components';

const StyledReactComponent = styled(ReactComponent)`
  /* Styles goes here */
`
In React Native, we have a set of Animated components — used for Animations — which can’t be used with styled-components in a similar way. styled(Animated.Text) will issue an error. How does one solve this issue?
Instead of directly using the Animated.Text component, we should leverage the createAnimatedComponent function of the Animated class and create our own component. We can use this custom component with styled-components as below.
const CustomAnimatedText = Animated.createAnimatedComponent(Text);

const StyledCustomAnimatedText = styled(CustomAnimatedText)`
  /* Styles goes here */
`
10. Invalid styling properties
React Native doesn't support all CSS properties supported by the browser. For instance, properties such as position: fixed are not supported but position: absolute is supported. Another such example is cursor: pointer .
This React Native cheatsheet might come in handy when you are searching for styles supported in React Native.
11. Background images
Are you a CSS fanboy who uses background-image extensively? I have bad news for you: React Native doesn’t support background-image and you will have to use the ImageBackground component of React Native to set an image as background. It goes something like this: | https://medium.com/omio-engineering/react-to-react-native-tips-and-tricks-for-your-journey-c5f5ddfc09b5 | ['Vilva Athiban P B'] | 2020-01-10 13:48:03.030000+00:00 | ['React Native', 'JavaScript', 'Software Development', 'React', 'Styled Components'] |
About Me — Aria Dailee. Professional Geographer, budding… | About Me — Aria Dailee
Professional geographer, budding developer, and writer
Picture of me voting this year using an at-home voting booth (haha)!
Hello, everyone! I started writing on Medium in October 2020. Funny enough, I thought about starting in 2018 but decided against it. At the time, I didn’t think people would like my writing or even care what I had to say. However, I’ve had a change of heart and decided to give it a try. I’m really excited to start on this journey!
I’m a first-generation American. My parents immigrated to the United States from the Caribbean and met in New York. My father, unfortunately, passed away this year.
I’m a former online tutor with an educational background in geology (BS), geography (BA), and cartography (MS). Now, I’m a mineral commodity analyst by day and a writer by night! I love making maps, so I hope to incorporate a few in future articles.
I enjoy researching and writing about a variety of topics, but a few favorites are introspective personal essays, climate activism and sustainability, history, and tips about whatever skills I recently learned.
I’m one of those strange people that finds learning and working fun. I’m always learning a new skill. Right now I’m working on refining my graphic design and web and app development skills.
My Current Personal Projects:
As of December 8, 2020
My blogging website (The first website I’ve ever made from scratch!)
Updating a Mars weather app I made using Flutter (will link to it here once finished)
Teaching myself how to use Blender (I want to create some art and maybe a few animations)
I’m currently participating in #100DaysOfCode by learning something new in Python for the next 100 days.
Feel free to say hi and connect with me! Don’t forget to check out a few of my stories listed below. I’ll update this story periodically, so be sure to check back occasionally for future updates! | https://medium.com/about-me-stories/about-me-aria-dailee-f32fc1a30be3 | ['Aria Dailee'] | 2020-12-08 11:46:52.012000+00:00 | ['Nonfiction', 'Autobiography', 'Introduction', 'About Me', 'Writers On Medium'] |
The Ice Giant Planet that Put Jupiter and Saturn in their Place | The Ice Giant Planet that Put Jupiter and Saturn in their Place James Maynard Follow Oct 29 · 4 min read
How did Jupiter and Saturn get where they are today? A massive ice giant planet which once orbited between Saturn and Uranus may have played a role in shaping our Solar System.
A massive planet may have once orbited between Saturn and Uranus, forever altering the orbits of Jupiter and Saturn, before heading out to space. Image credit: The Cosmic Companion / Created in Universe Sandbox
Did a massive ice giant planet once orbit in the outer solar system? And what could evidence for such a world teach us about the original positions of Jupiter and Saturn?
The ancient Solar System was formed from a disk of gas and dust spiraling around the nascent Sun. At first, most astronomers believe, the earliest planets formed in regular, closely-packed orbits. Soon, however, gravitational tugs from the most massive of these worlds played havoc with the regular orbits of their neighbors.
It was once thought that solar systems like our own — with small, rocky planets placed close to their parent star and larger gas giants in the outskirts of the system — would be common. But, following the discovery of 4,500 exoplanets, the makeup of our solar system was found to be rare.
“We now know that there are thousands of planetary systems in our Milky Way galaxy alone. But it turns out that the arrangement of planets in our own Solar System is highly unusual, so we are using models to reverse engineer and replicate its formative processes. This is a bit like trying to figure out what happened in a car crash after the fact — how fast were the cars going, in what directions, and so on,” said Matt Clement of Carnegie Institution.
You Should be a Model
The research team ran more than 6,000 simulations of the evolution of the Solar System, revealing an unexpected finding about Jupiter and Saturn.
The orbits of Jupiter and Saturn may have been shaped, in part, by influences from a massive world, long gone from our solar system. Image credit: NASA
Astrophysicists typically thought the two planets orbited in a 3:2 ratio — for every three orbits around the Sun made by Jupiter, Saturn was thought to trace out two trips around our parent star.
Instead, the simulations showed that the two planets were, more likely, in a 2:1 resonance, where Jupiter raced around the Sun twice for every trip completed by Saturn.
Such resonances produce systems much like the one we see in the present day — with small terrestrial planets in the inner solar system, surrounded by larger worlds.
The models also showed that the orbits of Uranus and Neptune were shaped, in part, by gravitational pulls from the multitude of bodies in the Kuiper Belt, sitting at the edge of our family of planets.
Ice Planets Leave me Cold
Voyager 2 became the first spacecraft to visit an ice giant when it arrived at Uranus in January 1986. Video credit: NASA
Another surprise was evidence for an ancient ice giant world that once existed in our Solar System, which left our family of planets long ago.
Ice giant planets are worlds far larger than Earth, mostly consisting of elements heavier than hydrogen and helium, including sulfur, nitrogen, carbon, and oxygen. Two ice planets orbit in the outer reaches of our own solar system — Uranus and Neptune.
“In the strictest definition, ice is the solid form of water. However, planetary astronomers often use ‘ice’ to refer to the solid form of any condensable molecule. These tend to be highly reflective, form clouds, and (unlike minerals) can readily change between liquid, solid, and gas states at relatively low temperatures. Frozen water and carbon dioxide (‘dry ice’) are the most familiar ices on Earth, but methane, ammonia, hydrogen sulfide, and phosphine (PH3) can all freeze in the atmospheres of Uranus and Neptune,” Amy Simon, planetary scientist at NASA’s Goddard Space Flight Center, writes for the Planetary Society.
The first known ice giant planet in another solar system was confirmed in October 2014, sitting 25,000 light years from Earth. This world, four times more massive than Uranus, orbits at a similar distance as its more familiar cousin.
Tools and techniques developed in this study might also assist researchers looking at exoplanets orbiting distant stars. | https://medium.com/the-cosmic-companion/the-ice-giant-planet-that-put-jupiter-and-saturn-in-their-place-451b9145687b | ['James Maynard'] | 2020-10-29 23:15:51.676000+00:00 | ['Astrophysics', 'Space', 'Solar System', 'Physics', 'Science'] |
Wireframes, flows, personas and beautifully crafted UX deliverables for your inspiration | When people think of UX Design documentation, the first image that comes to mind is dense, long, and heavily annotated wireframes, full of boxes and arrows that indicate how a system is going to function and behave.
But it doesn’t have to be like that.
Here are a few examples of UX deliverables that are well polished, legible and simple to understand.
Sketches
Wireframes
User flows
Personas | https://uxdesign.cc/wireframes-flows-personas-and-beautifully-crafted-ux-deliverables-for-your-inspiration-bb7a8d99af62 | ['Fabricio Teixeira'] | 2017-10-26 13:13:51.607000+00:00 | ['Interaction Design', 'Design', 'User Experience', 'Personas', 'UX'] |
Junior designers, stop designing for the happy path | Junior designers, stop designing for the happy path
In design, the happy path is what happens when the user does everything exactly the way you expect them to. Although this can happen, it won’t always happen.
This is a happy path. Alice Donovan Rouse Unsplash
One of the most common mistakes I see in portfolios from junior designers is that they often show too much — and too little — simultaneously.
One unfortunate reality of the many design boot camps out there is that they seem to encourage students to design entire apps or entire web pages for fake businesses.
Now, before we continue, I am a big supporter of design boot camps. I believe that there is a lot of work to be done to improve many of them, and I also feel like General Assembly is a borderline criminal enterprise. The most unprepared students tend to come from there and I have met more than a few that haven’t been able to find jobs even years later. Regardless, I think that boot camps, in general, are filling a very real gap for a career that lacks college programs. I personally attended DESIGNLAB and a few courses at BrainStation, both of which taught me a lot, although neither was perfect.
The problem with many student projects is that they take on too much. You don’t start your first job and get presented with the task of designing an entire app. You also don’t start right off redoing an entire website (in most cases). Projects are broken down much more granularly, and it’s more likely than not that you’ll be collaborating with other people throughout the project.
Many junior designer’s portfolios (including my own, when I started) will show the concept for an app, along with a chronological checklist of research methodologies, and then final designs, that often don’t adhere to either Material Design or Human Interface Guidelines, and would be unjustifiably complicated and expensive to build, for no good reason.
When I talk about the “happy path” issue, I am referring to these final designs (or the wireframes). I often see designs that show a user effortlessly signing up, making an account, accomplishing a single important task with no hiccups, and then going on their way. When designing in the real world, you don’t get to just account for that happy path. You also have to account for a plethora of other paths. And I can bet you that interviewers are going to ask you about some of those paths when you show these designs.
This is more realistic, but like x10. Caleb Jones Unsplash
Some questions to ask yourself — AND DESIGN FOR:
1. What happens if the user exits the app and then reopens it?
2. What happens if the user has airplane mode on?
3. In the case of apps with populated lists, what happens if the user doesn’t have any “items”?
4. In the case of apps with the ability to buy things, what happens if the user doesn’t have a payment method on file?
5. In the case of apps with registration flows, what happens if the user skips a section and then wants to go back?
6. In the case of apps that load things (pretty much all apps), what does that loading look like? And what happens if it fails?
This list represents some of the more lightweight questions you may be asked by a product manager. The reality is that complex APIs and limits on front-end logic can require you to solve even more complex situations.
A few that I’ve come across.
1. What if it’s possible to save some rewards for later, but others will be automatically applied? How do we show the user which ones they can save, and what UI component makes the most sense for saving and activating rewards?
2. What if the user runs out of pre-loaded funds? The way it’s built, we will charge their default payment method, but how do we communicate that with them so they know?
3. It takes a long time to load this screen; what can we show the user first to allow them to make corrections before the rest of the screen finishes loading?
These difficult questions, in addition to fleshing out realistic designs, also challenge you to make compromises. Fake projects don’t have to make compromises, but it would be so much more impressive if they did. Very often I am faced with decisions between including some piece of information that I think the user would enjoy having or increasing the loading speed.
I’ve looked at a lot of portfolios. I guarantee if I saw some empty states, error states, or other interesting cases accounted for, I’d look again. | https://medium.com/design-bootcamp/junior-designers-stop-designing-for-the-happy-path-44cdae1aa69c | ['Aaron Cecchini-Butler'] | 2020-11-25 22:36:00.774000+00:00 | ['Product Design', 'Design', 'UI', 'Careers', 'UX'] |
Let’s remember 2020 as the year we learned how to do things better | Let’s remember 2020 as the year we learned how to do things better Enrique Dans Follow Dec 22 · 2 min read
Every year at this time, I borrow some of the greetings created by my employers, IE University, to celebrate the holidays with my readers, those of you who use the site regularly, those of you who find yourselves here by chance, and those of you who read my content elsewhere.
It would be an understatement to say that this has been quite a year. Quite simply, it has been unprecedented. What’s more, wishing you all a better 2021 makes little sense: it would be hard for it to be worse than this one.
On a personal level, over the course of the year I have had to develop skills in an area, video, I was never comfortable with, and make it the cornerstone for my professional activities. I have had to give hours and hours of teaching with a mask on, and I have been able to reflect on the extreme difficulty of delivering content and conversing when half my face and that of the people I was talking to is hidden. Another of my main activities, conferences (and all the activity around them) has come to an end, bringing home to me the extent to which they inspired so much of my thinking and what I write about every day. My timeline on Google Maps has pretty much shrunk to a dot on the map, while the days are all very similar to each other, with the resulting negative impact on my creativity, which has certainly robbed me of my sparkle. To those of you who are still around, thank you for continuing to put up with me in spite of everything.
A very different year, this has been a time to think about many things, about our limits as individuals and as a society, and about the many problems that the pandemic has exposed, mainly that we need to do things differently now, and not in 20 or 30 years. Last year, my fears were based on years of research and reflection, but in 2020 we have experienced the evidence first hand.
Vaccines are now being rolled out, but we’re not out of the woods yet, and we should remain vigilant. Please continue to take all precautions. Let’s enjoy the festive season, but remember that we’ve lowered our guard before, and it only made things worse.
That said, my very best wishes to everybody. Let’s hope that 2020 will soon be a not-so pleasant memory. But above all, let’s not remember it as a lost year: even if it came at a high price… it taught us many things and that we can do things better. | https://medium.com/enrique-dans/lets-remember-2020-as-the-year-we-learned-how-to-do-things-better-108bc0ec06dd | ['Enrique Dans'] | 2020-12-22 11:59:58.082000+00:00 | ['Personal', 'Greetings', '2020', 'Ie University', 'Coronavirus'] |
30 Mission-Driven Startups You Should Know | Many in Silicon Valley aspire to build products and companies that have the potential to create positive change on a large scale. Often, however, these companies can be difficult to find because they have smaller recruiting budgets, and their mission statements can get lost in the noise of startups claiming to be “changing the world.”
To help, we’ve decided to put together a list of mission-driven companies that are attempting to make a big impact on the world.
We tried to pick companies that:
have solutions that are technology-driven or at least tech-enabled
have similar hiring needs to a typical silicon valley startup
are focused on fulfilling basic needs of people who are typically underserved
Based on that loose criteria, we’ve put together a list of companies that we think are awesome and fit the bill.
Of course, we may have left some great ones off our list and would love to know who we missed! If after reading this list you want to learn more about these companies, here are two things you can do: | https://medium.com/tradecraft-traction/30-mission-driven-startups-you-should-know-35195cf45c77 | [] | 2018-03-01 15:51:38.911000+00:00 | ['Nonprofit', 'Tech For Good', 'Entrepreneurship', 'You Should Know', 'Startups'] |
Oracle Big Data Cloud, Event Hub and Analytics Cloud Data Lake Edition pt.1 : Creating the Real-Time Data Pipeline | Some time ago I posted a blog on what analytics and big data development looked like on Google Cloud Platform using Google BigQuery as my data store and Looker as the BI tool, with data sourced from social media, wearable and IoT data sources routed through a Fluentd server running on Google Compute Engine.
Read more at the MJR Analytics blog. | https://medium.com/mark-rittman/oracle-big-data-cloud-event-hub-cloud-and-analytics-cloud-data-lake-edition-pt-1-84961cd4274f | ['Mark Rittman'] | 2018-10-16 19:52:51.795000+00:00 | ['Oracle Cloud', 'Obiee', 'Analytics', 'Big Data', 'Apache Kafka'] |
How a Daily 60-Minute Break From My Smartphone Fixed My Anxiety | Consume Great Fiction Before Bed.
This is a complete gamechanger.
I majored in Communications in college, and right before I dropped out I learned that there was an interesting phenomenon called “narrative transportation.” The Wikipedia definition of it is as follows: “Narrative transportation theory proposes that when people lose themselves in a story, their attitudes and intentions change to reflect that story.”
Think back to the last great movie you watched, one that made you feel like you were right there as the bombs went off or the dragons breathed fire or when the characters stepped out of a dusty closet into Narnia. Narrative transportation explains why when I read The Lord of the Rings I feel like I’m in The Shire, and why when I watch Stranger Things I taste the bittersweet tinge of 80’s nostalgia.
Narrative transportation takes you away from your reality to the world of whatever you’re reading or watching. Most of the time this is a wonderful thing, but in my case, it led to a lot of anxiety. You see, I used to read a lot of self-help and business books before going to bed.
Yes, these books helped me grow, but reading The Four Hour Work Week before bed would hype me up and bombard my brain with ideas. And this heightened state of arousal would follow me into dreamland and manifest as unrestful sleep, nightmares and the slow but constant grinding of teeth.
I didn’t even realize how bad I had it until I switched to fiction upon my friend's advice. Reading the works of Neil Gaiman, G.R.R Martin and even the horrorscapes of Stephen King took me out of my head. Suddenly, I wasn’t a struggling Singaporean entrepreneur anymore. I was Ser Jaime Lannister of Westeros, who upon having his sword hand cut off, has to forge a new identity for himself as a crippled commander.
Consuming fiction, particularly speculative fiction such as high fantasy and sci-fi, serves to transport me to a realm that is infinitely bigger, brighter and fundamentally different from the one I inhabit. There, for a few short hours, I can forget my weals and woes. I can rest. I can heal.
So don’t consume non-fiction before bed. Self-help articles and the latest tragedy unfolding on Fox News can be perused during the day. Opt instead to swap out your nighttime entertainment for something more relaxing, something more magical. A good-old-fashioned novel comes to mind, or a classic fantasy flick like Harry Potter. Please, give it a try before you roll your eyes.
You’d be surprised by how much tranquillity this simple tip introduces into your life.
For God’s Sake — Put Your Phone on Airplane Mode Before You Sleep.
Or better yet, shut it off. If you can’t turn off your device for 8 measly hours a day — when you’re dead to the world, no less — it’s a sign that something is truly, terribly wrong.
I’ve been rudely awakened more times than I can count by a stranger calling the wrong number or by a text from the bank advertising their latest loan scheme. And believe me, these rude awakenings add up.
There’s a reason why many self-help gurus emphasise morning routines, and that’s because the way you start your morning colours your entire day. Isn’t it stupid to let your day be dictated by something as inconsequential as a mistaken phone call?
A simple way to avoid this problem is to switch your phone off and stick it in the drawer an hour before bedtime, or at least put it on airplane mode. This simple act will also prevent you from checking your phone the instant you wake, which brings us to our next point….
Don’t Check Your Phone Immediately After Waking. Get One Mindfulness Practice in Instead.
I have to confess, this last point is the one I have the most trouble with.
It's so easy, so convenient, so tempting, to wake up and check your notifications. To check what new notifications popped up over the night, or if you’re work-oriented, to refresh your emails.
The problem is this habit catapults you into work mode. It doesn’t give you time to relish that delicious moment where you’re semi-conscious, your Self slow-emerging from your soupy subconscious as sunlight streams soft-yellow into your room. It doesn’t allow you time to reflect and plan for your day ahead.
Nowadays, instead of checking my notifications first thing in the morning, I practice mindfulness. For 30 minutes, I do some light stretching, then proceed to journal three pages for the day. And this is no hyperbole: I truly consider my morning practice the cornerstone of my happiness.
Writing first thing in the morning makes me feel productive. If nothing else happens, I already have three pages of freehand material done and dusted. Filling in my journal over a cup of coffee gives me rare time to introspect, to digest my past and plan for the future. And lastly, my journalling habit helps me warm up my writer’s fingers — and more crucially, my writer’s mind.
If, for some reason, you can only implement one of these three tips, let it be this one. Swap out your morning phone time for some mindful time with yourself. It doesn’t have to be journaling. Here are some great options:
Yoga
Meditation
A morning run in nature
Playing the piano over some tea
A light workout in the sun
Take your pick. It doesn’t have to be anything hardcore — your morning mindfulness practice is nothing more than a medium that allows you to spend some quality time with yourself before the busyness of the day. Done right, this practice will not only help you dispel anxiety. It will help you know yourself better.
And to know is to love. | https://medium.com/the-ascent/how-a-daily-60-minute-break-from-my-smartphone-fixed-my-anxiety-65c821428e6 | ['Alvin Ang'] | 2020-11-17 15:03:13.339000+00:00 | ['Self Improvement', 'Lifestyle', 'Life Lessons', 'Mental Health', 'Social Media'] |
Azure Functions Express: Running Azure Functions locally using Docker Compose | Every time I join a new project, I try not to rely too much on external environments when building and running the software that I’m working on. Most of the time, a DEV or CI environment is overrated and unstable. My approach is to run all the components locally and remove the external dependencies.
I already wrote about running a local SQL Express Docker instance, you will find this article to have the same merit:
If it were up to me, I’d write everything in Azure Functions; not everything fits the model, though. Maarten Balliauw explains this in more detail here: https://blog.maartenballiauw.be/post/2019/10/02/dont-use-azure-functions-as-a-web-application.html
However, when I do develop an Azure Function, I like to run it locally first, without the interference of anything hosted in Azure or elsewhere. The fastest way of having such an experience, in my opinion, is using Docker and Docker Compose.
The term ‘Docker’ is glorified by many and dreaded by some; I consider Docker to be just another tool in my toolbelt, and a great one at that.
I think the reason I like Docker so much is the versatility of the tool:
Do you have an ASP.NET Core Web Application? Docker.
How about an SQL Database? Docker.
Angular Frontend Application? Docker.
An executable that you’d likely run via a Windows Service? Docker.
Database Migrations? Docker.
Azure Storage Emulator? Docker.
Your entire CI/CD Jenkins Pipeline? Docker.
Local Azure functions with Docker and Docker-Compose
Everything I’ll mention here is contained in the accompanying git repo below:
I’ve created a template C# Azure Functions project for you to scaffold that holds the following features:
An HTTP Triggered function
A Blob Triggered function with Blob output binding connected to a local storage account
A Queue Triggered function connected to a local queue
An easy “start and stop” way of hosting this function locally
What you will need to do first in order to get started:
Install Docker
Install Docker-Compose
Install Azure Storage Explorer
VS Code with the REST CLIENT extension
Clone the repo
Run the following command:
docker-compose up
First, you will see the Azurite image being pulled from Docker.
Next, you will see the Local.Functions project being built and containerized.
Finally, when both containers are ready, they are spun up inside of one network using their respective names:
Now, you can open up the Storage Explorer and browse the local.storage.emulator’s Blob storage and Queues:
Preparing the environment
Create two containers named input-container and output-container.
Create a queue named queue.
Working with the Queue
Add a new message to the queue via the Azure Storage Explorer
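If you would rather script that step than click through Storage Explorer, a small Python sketch using the azure-storage-queue package could look like the snippet below. This is purely an illustrative alternative; the connection string is a placeholder you would swap for Azurite's development credentials (the default devstoreaccount1 values are listed in the Azurite documentation), and the queue name matches the one created above.
from azure.storage.queue import QueueClient

# placeholder: paste your local storage emulator connection string here
connection_string = "<your Azurite connection string>"

queue_client = QueueClient.from_connection_string(connection_string, "queue")
queue_client.send_message("hello from python")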
In a few moments, you’ll see the message getting picked up by the local.functions container. | https://maartenmerken.medium.com/azure-functions-express-running-azure-functions-locally-using-docker-compose-bf6be03250fc | ['Maarten Merken'] | 2020-09-11 13:28:41.482000+00:00 | ['Docker Compose', 'Docker', 'Software Engineering', 'Azure Functions'] |
Data Ingestion from 5 Major Data Sources using Python | Did you know that in 2020 around 147 GB of data is generated per day? And, we have already stored around 40 trillion GB of data until now. All these stored data are not even the same. Data types like text or numbers have different formats. That explains why we have different types of data sources.
When you are working with data, you should know how to ingest the data from different sources. In this article, we are going to ingest data from various sources with the help of python libraries.
We will go through the below Data sources.
1. RDBMS Database
2. XML file format
3. CSV file format
4. Apache Parquet file format
5. Microsoft Excel
Do we have one python library which fetches data from all the sources?
Nope, because every data source has its own protocol for data transfer. We have multiple python libraries that do this job. Consider this article a one-stop place to learn about these python libraries.
In this article, we explain why we save data in different sources and how we retrieve that data using python libraries.
Let’s start with our data fetching story.
1. Relational database management system (RDBMS) Database
The data in an RDBMS is saved in a rows-and-columns format. Tables present in the database have a fixed schema. We can directly use Structured Query Language (SQL) in the database to update the tables. Examples of RDBMS are Oracle, Microsoft SQL Server, etc.
Why we use an RDBMS database?
· Easy to use by users due to tabular format.
· A standard language SQL is available for RDBMS to manipulate data.
· The processing speed increases if we optimize RDBMS properly.
· Maintenance is easy.
· More people can access the database at the same time.
Now we can access different RDBMS databases using python libraries.
import pyodbc
server_name = "SQL instance of your database"
username = "username of your database"
password = "password of your database"
database_name = "name of your database"
port = "connection port for your database"

conn = pyodbc.connect('DRIVER={PostgreSQL ODBC Driver(UNICODE)};'
                      'SERVER=' + server_name +
                      ';UID=' + username +
                      ';PWD=' + password +
                      ';DATABASE=' + database_name +
                      ';PORT=' + port + ';')
cursor = conn.cursor()
cursor.execute(query)
query_data = cursor.fetchall()
We can use this code for MySQL and Postgres database connections. The MySQL database connection does not need a port variable.
The contents of the Driver variable are different for different databases; the Postgres connection above, for instance, uses PostgreSQL ODBC Driver(UNICODE), and MySQL has its own ODBC driver. Also, check which database drivers are available on your computer with the pyodbc.drivers() function.
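As a quick, hedged example of that check (the output simply lists whatever ODBC drivers happen to be installed on your machine):
import pyodbc

# print every ODBC driver installed locally
for driver_name in pyodbc.drivers():
    print(driver_name)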
Use the below code for the Oracle database.
import cx_Oracle

dsn_tns = cx_Oracle.makedsn(server_name,
                            port,
                            service_name=service_name)
conn = cx_Oracle.connect(user=username,
                         password=password,
                         dsn=dsn_tns)
cursor = conn.cursor()
cursor.execute(query)
query_data = cursor.fetchall()
Reach out to your database admins to get the values of username, password, server_name, port, service_name, and database_name variables.
2. XML file format
XML is the file extension for an Extensible Markup Language (XML) file. It stores textual data that is both human-readable and machine-readable. XML is designed in such a way that its format does not change across the internet.
Why do we use the XML file format?
· XML is a plain-text file format which can be understood by both humans and machines.
· XML has a simple and common syntax rule to exchange information between applications.
· We can use a programming language to manipulate the information inside the XML file.
· We can combine multiple XML documents to form one large XML file without adding extra information. You can also divide XML into various parts and use them separately.
· The XML file format is preferable in web applications.
Now, we can access the XML file using the xml library.
import pandas as pd
import xml.etree.ElementTree as etree

xml_tree = etree.parse("sample.xml")
xml_root = xml_tree.getroot()
columns = ["A", "B"]
dataframe = pd.DataFrame(columns=columns)

for node in xml_root:
    name = node.attrib.get("A")
    mail = node.find("B").text if node.find("B") is not None else None
    dataframe = dataframe.append(pd.Series([name, mail], index=columns),
                                 ignore_index=True)
You can use the requests library to post the XML file to a SOAP API.
3. CSV file format.
Comma Separated Values (CSV) is a file format that stores tabular data as plain text. The first line of a CSV file generally contains the column names, and a comma separates each column. The second row onwards holds the contents of the columns, which could be text, numbers, or dates. A Tab Separated Values file also has the .csv file extension; it solves column-separation issues related to the CSV file format.
Why do we use the CSV file format?
· Easy to create and manipulate data.
· Easy to read and understand data.
· We can organize a large amount of data.
· We can easily import and export CSV files.
Now we can access CSV files using pandas and the CSV library.
With the help of the pandas library, you can directly import the CSV file into the dataframe.
# importing the pandas library
import pandas as pd

csv_dataframe = pd.read_csv("hr_data.csv", sep=",")
print(csv_dataframe)

             Name Hire Date   Salary  Sick Days remaining
0  Graham Chapman  03/15/14  50000.0                   10
1     John Cleese  06/01/15  65000.0                    8
2       Eric Idle  05/12/14  45000.0                   10
3     Terry Jones  11/01/13  70000.0                    3
4   Terry Gilliam  08/12/14  48000.0                    7
5   Michael Palin  05/23/13  66000.0                    8
If the CSV file has a '\t' separator, then use sep="\t". In the case of spaces, use sep=" ". See the pandas documentation for more information about the read_csv function.
import csv

with open("hr_data.csv") as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    line_count = 0
    for row in csv_reader:
        if line_count == 0:
            print(f'Column names are {", ".join(row)}')
            line_count += 1
        else:
            print(f'\t{row[0]} first column content, {row[1]} second '
                  f'column content, {row[2]} third column content.')
            line_count += 1
    print(f'Processed {line_count} lines.')
4. Apache Parquet file format
Apache Parquet is a column-oriented data storage file format. The data stored in Parquet files is compressed efficiently. Shredding and assembly algorithms are used in Parquet to store the data. It is built to handle complex data in bulk. Generally, the Parquet format is useful in big data technologies.
Why do we use the Apache Parquet file format?
· Parquet files are created using an efficient compression algorithm that saves much more storage space than other file formats.
· Queries that fetch column data need not scan the whole row, which improves performance.
· Each column has its own encoding techniques.
· Parquet files are optimized for queries that process a large amount of data.
Now we can access parquet files using pandas and pyarrow libraries.
import pyarrow.parquet as pq

example_table = pq.read_pandas('example.parquet',
                               columns=['one', 'two']).to_pandas()
print(example_table)

  one  two
a foo  bar
b bar  baz
c baz  foo

import pandas as pd

pandas_dataframe = pd.read_parquet('example.parquet',
                                   engine='pyarrow')
print(pandas_dataframe)

  one  two
a foo  bar
b bar  baz
c baz  foo
5. Microsoft Excel
Excel is a spreadsheet application developed by Microsoft. It stores data in a tabular format. It has a grid of cells, which form rows and columns when combined. It has a lot of built-in features such as calculations, graphing tools, pivot tables, etc.
Why do we use the Microsoft Excel file format?
· You can analyze the data in Excel using charts and graphs.
· Excel is good at sorting, filtering, and searching data.
· You can build a mathematical formula and apply it to the data.
· Excel comes with a password-protection feature.
· You can use Excel as a calendar.
· You can also use Excel to automate data-related jobs.
Now, we can access Microsoft Excel using the openpyxl library.

from openpyxl import load_workbook

workbook = load_workbook(filename="sample.xlsx")
workbook_sheets = workbook.sheetnames
sheet = workbook.active

print(sheet["A1"].value)
"hello"

print(sheet.cell(row=10, column=6).value)
"this is hello world stored in row 10 and column 6."

import pandas as pd

df = pd.read_excel('File.xlsx', sheet_name='Sheet1')
print("Column headings:")
print(df.columns)
['A', 'B', 'C']
Conclusion
This article helps you understand why we need different sources to store data and how to retrieve data from these sources. We have used multiple Python libraries to ingest data. In this article, I have covered 5 data sources.
Hopefully, this article will help you in data processing activities.
Other Articles by Author | https://medium.com/towards-artificial-intelligence/data-ingestion-from-5-major-data-sources-using-python-936144b30fa6 | ['Manmohan Singh'] | 2020-10-24 13:45:02.980000+00:00 | ['Python Programming', 'Parquet', 'Rdbms', 'Data Ingestion', 'Big Data'] |
Unleash the Power of Your Teams | Unleash the Power of Your Teams
Want to stop leaking value? Start gathering team data.
Do you know how your organization’s teams are getting work done? Do you know how much value they’re producing each day? What about the factors that are driving their success — or failure?
If you can't answer these questions, and provide numbers to back up your answers, chances are you're leaving productivity and profit on the table every single day. In today's rapidly evolving business environment, where teams are formed and reformed fluidly in order to get work done, most leaders have very little idea of what their company's team structure looks like, how work is getting done, or how value is being generated (or depleted) — even as they remain responsible for creating that value. If that's you, you're not alone. And it's not your fault.
When researching what makes for team success, hard numbers and concrete advice are hard to find. This topic has been well researched, but the results are conceptual and difficult to implement, requiring significant cultural change without evidence of results. Most discouragingly, though research agrees on some common qualities of successful teams, what actually makes for consistent team success and failure is different for each company, and changes based on factors like industry, product and business model.
The damaging result of this is that when you want to know what makes the difference between your teams succeeding or failing (creating or draining value), you only have a few options: guess (really), rely on tribal knowledge, or slog through historical data to come up with stale numbers. This leads to decision lag and an inefficient and costly trial-and-error approach, leaving leaders vulnerable and resulting, ultimately, in missed value.
If this is sounding all too familiar, don’t panic. You can get a true grip on what’s happening in your teams, and where you can drive productivity and value across your organization.
Team Insights helps you visualize the way your teams get work done, behind a single pane of glass.
“One of the hardest things for technology leaders and managers today, is that there’s plenty of data out there, but it’s hard to get a clear idea of what’s driving team success,” explains Kevin Tuskey, SingleStone’s Director of Design. “Even if you’re not starting from scratch, the products and tools available just aren’t designed to show you what’s important.”
Suzanne Hawthorne, Account Director and Client Advisor at SingleStone agrees: “Even our most technologically advanced clients are struggling in this space. Measuring team success across an entire enterprise is tough, but it’s crucial to today’s leadership if they want to make the quick, data-driven decisions that will help their company get ahead. It’s the void in this critical area that drove us to create Team Insights.”
Team Insights, SingleStone’s team intelligence product, takes the guesswork out of what makes a company’s Agile teams successful. Instead of merely showing the HR structure of an organization, it shows how a company’s cross-functional teams are organized in a simple, easy-to-use system. Team Insights provides data-based decisions to drive how to get work done and create value. In short, it’s built to capture the data that helps a company and its leadership pinpoint, measure and unleash untapped productivity in their Agile organization.
“Before Team Insights, there was nothing like this available in the marketplace for our clients, and it was a huge gap,” says Hawthorne. “This product is designed to help our clients do three things: Gain a clear line of sight to the way work gets done in their company, capture straightforward data on the factors that contribute to team success, and uncover trends and insights that drive confident action and proactive decision-making.”
If you are struggling to gain line of sight to your Agile organization, or to make the data-driven decisions that will take your company to the next level, SingleStone can help. Our consultants and technology teams bring more than 20 years of experience in the areas of data, software, team success and Agile transformation. Our custom dashboards power decision making at many Fortune 500 companies in financial services and other industries. Team Insights is a direct result of our collective experience and track record.
Make 2019 your year to get on track and unleash the hidden potential of your teams with Team Insights. Reach out to learn more about our team intelligence product and schedule a free demo. | https://medium.com/singlestone/unleash-the-power-of-your-teams-8c71685b3a46 | [] | 2019-08-30 21:04:47.452000+00:00 | ['Dashboard', 'Data Visualization', 'Software Development', 'Agile', 'Teamwork'] |
Easy Python Speedup Wins With Numba | If you have functions that do a lot of mathematical operations, use NumPy or rely heavily on loops, then there is a way to speed them up significantly with one line of code. Ok, two lines if you count the import.
Numba and the @jit decorator
Meet Numba and its @jit decorator. It changes how your code is compiled, often improving its performance. You don’t have to install any special tools (just the numba pip package), you don't have to tweak any parameters. All you have to do is:
Add the @jit decorator to a function
Check if it's faster
Let’s see an example of code before and after applying Numba 's optimization.
Before
The only purpose of this code is to do some calculations and to “be slow.” Let’s see how slow (benchmarks are done with Python 3.8 — I describe the whole setup in the Introduction article on my blog):
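The benchmarked function itself isn't important; any pure-Python number-crunching loop will do. A stand-in along these lines (the function name, the math and the loop bound are illustrative placeholders, not the original benchmark code) captures the idea:

import math

def slow_sum(n):
    # a pointless but CPU-heavy loop
    total = 0.0
    for i in range(n):
        total += math.sqrt(i) * 2.0
    return total

slow_sum(10_000_000)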
Now, we add @jit to our code. The body of the function stays the same, and the only difference is the decorator. Don't forget to install the Numba package with pip (pip install numba).
… and after. Can you spot 2 lines that have changed?
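Sticking with the illustrative stand-in above, the decorated version differs only in the import and the decorator line:

from numba import jit
import math

@jit
def slow_sum(n):
    # same body as before; Numba compiles it on the first call
    total = 0.0
    for i in range(n):
        total += math.sqrt(i) * 2.0
    return total

slow_sum(10_000_000)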
Let’s measure the execution time once more:
Using @jit decorator gave us a 120x speedup (217 / 1.76 = 123.295)! That’s a huge improvement for such a simple change!
Other features of Numba
@jit is the most common decorator from the Numba library, but there are others that you can use:
@njit — alias for @jit(nopython=True). In nopython mode, Numba tries to run your code without using the Python interpreter at all. It can lead to even bigger speed improvements, but it's also possible that the compilation will fail in this mode.
@vectorize and @guvectorize — produce ufuncs and generalized ufuncs used in NumPy.
@jitclass — can be used to decorate the whole class.
@cfunc — declares a function to be used as a native callback (from C or C++ code).
There are also advanced features that let you, for example, run your code on GPU with @cuda.jit. This doesn’t work out of the box, but it might be worth the effort for some very computational-heavy operations.
Numba has plenty of configuration options that will further improve your code’s execution time if you know what you are doing. You can:
Disable GIL (Global Interpreter Lock) with nogil
Cache results with cache
Automatically parallelize functions with parallel.
Check out the documentation to see what you can do. And to see more real-life examples (like computing the Black-Scholes model or the Lennard-Jones potential), visit the Numba Examples page.
Conclusions
Numba is a great library that can significantly speed up your programs with minimal effort. Given that it takes less than a minute to install and decorate some slow functions, it's one of the first solutions that you can check when you want to quickly improve your code (without rewriting it).
It works best if your code: | https://medium.com/python-in-plain-english/easy-speedup-wins-with-numba-b3dad3f0c207 | ['Sebastian Witowski'] | 2020-10-01 12:45:57.714000+00:00 | ['Coding', 'Software Development', 'Performance', 'Programming', 'Python'] |
A modern-day washing machine… | I Confess. I'm In Love With An Appliance | News Break
I resisted for a long time and now I wonder why. Once I made the decision, I couldn't believe it took me so long to… | https://medium.com/technology-hits/the-wonders-of-technology-fef10aa6df39 | ['Tree Langdon'] | 2020-12-22 03:34:23.520000+00:00 | ['Relationships', 'Philosophy', 'Technology', 'Self Improvement', 'Science'] |
Zoe | Evening fell hard that night, its blackness descending almost in an instant while I struggled to balance my overdrawn checking account.
I didn’t write that sentence in one shot. I backed up and embellished it several times before it was done. First sentences are important. I hope it grabs you.
The night-chatter of frogs and hum of insects rose in the dark. I didn’t feel their presence until, about an hour after sunset with an orange gibbous moon rising in the east, I pushed back from my computer, ran my hand through my hair, and gave up. I was grateful for their unseen companionship. I had moved out here to the middle of nowhere, Montana to get away from people, but sometimes when the night closed in I could wish for someone to talk to.
I’m ready to set aside the word list. I’ve only used two of its entries, but I don’t need it anymore. I have a character now, not well-developed but reasonably intriguing, and I think I can run with him. I still need the theme, though, to make something happen to our protagonist. A couple of influences come into play here. One is my interest in astronomy. I like to incorporate astronomical notes in my works. The description of the moon is accurate. A moon rising in the east shortly after sunset will be just past full — a waning gibbous — and can appear yellow or even orange depending on the state of the atmosphere. A second influence is a work I’m currently reading in which, near the beginning, a group of friends in a remote cabin are menaced by some strange intruders.
When I first came here, it seemed an idyllic existence. I had no family left, no real friends, not even any colleagues I particularly liked. A failed law student, for twelve years I’d shuffled papers and appointment calendars for a modest law firm in Denver. It paid well enough. If only I hadn’t ended each day feeling like I’d been run over several times by a sixteen-wheeler. Finally I realized the insanity of contenting myself with discontent. I chucked it all — job, condo, everything — and came here to make a living off the internet.
No, I didn’t go off half-cocked. I planned it. I knew what I’d sell, how I’d reap the rewards of placing ads on my websites, even wrote up a business plan. I just hadn’t realized how hard it was to translate plans into reality. And now? Insolvency loomed as large as the rising moon. What I needed, I thought with some desperation, was for some guardian angel to show up at the door.
Yes, I actually thought that. And immediately, the hammering on the door began, followed by a woman’s voice, filled with desperation, calling, “Is anybody home?”
All of this pretty much flowed out. In revision I might change or move some of it, as it strikes me it might be a bit slow. We’ll see what happens. The newcomer was, of course, planned based on the theme, “Stranger at the Door.” I hadn’t originally figured the stranger to be a woman, though. I made her female only when she showed up. One might expect a menacing figure to appear out of the night, but I like to contradict my own expectations from time to time, just to see what happens.
The coincidence was too much. After my initial startle reaction, I stared at the door during a lull in the pounding and only rose when it resumed. I flipped on the porch light and, without removing the chain, opened the door a crack. I’m not sure what I expected to see. A tall, ethereal beauty affixed with white billowing wings, maybe. In fact she was rather small, only five foot five including the thick brown hair she wore in a pony tail. Her dark eyes blinked at me in astonishment or fear. Dressed in a short green skirt and white blouse, she clutched a little white purse before her with both hands.
“I’m sorry,” she said. “I’m really sorry.”
Feeling like an idiot scared of his own reflection, I undid the chain and eased the door open. “For what?”
“Do you have a land line? I can’t get any reception out here.”
I peered into the night but couldn’t see a vehicle. She must have had car trouble out on the road, I supposed. My cabin was set a quarter mile up a gravel drive. Nobody would walk up here in the dark by choice.
I had to stop there to take my wife for cataract surgery. Seriously, I did. Life doesn’t stop just because you’re writing a story. Later that evening, I continued…
“Yeah,” I told her. “Come on in.” I held the door for her and closed it behind her. While she looked around my spartan one-room, I redid the chain. “Car trouble?”
“Ran out of gas. Stupid of me, but I really thought there would be a gas station somewhere.”
I gave the room a once-over, too, fearing it might be too messy for visitors. The kitchen in the back corner wasn't exactly choked with dirty dishes, but I hadn't cleaned it up today. My bed in the opposite corner — a queen because I liked having the space to flop around — was unmade, but didn't look too disreputable. As it was a warm summer night, I hadn't lit a fire in the tiny stone fireplace, but uncleaned soot and ash had accumulated there, its smell suffusing the air.
“You live here alone?” she asked.
It should have been obvious. “Yeah. Here’s the phone.” I led her back to the kitchen and snatched the device from the table.
She took it and studied it as though she’d never seen one before. “I don’t even know who to call.”
“Family?” I suggested. “A friend?”
She shrugged. “Don’t have any.”
“Me, either.” As soon as I said it, I wished I hadn’t. I had no interest in forming bonds, however tenuous. “I don’t suppose you’re a triple-A member?”
There was that shrug again. “I’m pretty hopeless, I guess. I don’t suppose you’d have any gas, like for a lawn mower?”
“Afraid not.” I didn’t bother telling her lawn mowers were of little use in a forest.
She pulled out a chair and sat, sighing heavily. “I guess I’m just stuck.”
Time to quit for the night. Can you tell where this is going yet? Neither can I!
“Don’t worry. I can find emergency service for you online.” My laptop was at the table, too, along with the uncleared dishes from dinner. And lunch. And breakfast. I pushed as much of it aside as I could and sat around the corner from her. “Sorry about the mess.”
Again that little shrug. “I don’t suppose it seems so important when it’s just you.”
“Depends on the day.” I entered the search terms into the computer and got a list of one, a place over thirty miles away. “Here we go. It’ll take them a little while to get here.” I turned the computer so she could see the number.
“Thanks.” She punched the number into my phone and waited. “Hi, I ran out of gas. Could you send someone out?” She listened, then gave our location, then listened some more before signing off with a resigned, “Okay, thanks.” She handed me the phone back. “Three hours.” She glanced at the door, then at the darkness beyond the kitchen window.
I didn’t want company. I came out here to get away from people. Why, I wondered, did she have to run out of gas in front of my place? But I couldn’t send her out into the dark on her own. “You can stay here,” I offered. “I’ll walk you to your car when the time comes.”
She smiled gratefully while objecting, “You don’t have to do that.”
“It’s okay.” Which it wasn’t, not exactly, but strangely I found her presence less of a burden than I would have expected. If nothing else, I’d have an excuse to ignore my financial mess for a few hours. “You want something? Coffee? Tea?”
“Coffee would be nice.” Again the smile, which I found myself returning. “But really, I don’t want to be a bother.”
“It’s no bother.” I got up, flipped on the coffee maker, and took a pair of mugs from a cupboard.
“But you don’t like people, do you? I’m an unwanted intrusion.”
I felt myself flush. Was it that obvious? “I’m a bit of a loner, I guess.”
“Where are your parents?”
“Dead. Their house burned down. Faulty wiring, the fire inspector said.” As the coffee dripped through the machine, I turned to face her. Why was I telling this to a complete stranger? “What’s your name?”
She gave me a coy smile. “What’s yours?”
“I asked first.”
“So you did. No brothers or sisters, I guess?”
I shook my head and turned away. I should have been irritated, but I couldn’t manage it. All I could feel was a deep hollow in the pit of my stomach, an emptiness that had probably been there for more years than I cared to admit.
“No woman in your life?”
“Thankfully not.”
She laughed. “Can’t live with ’em, can’t live without ’em, right?”
The coffee finished brewing. Forcing a few breaths to steady myself, I poured and brought the steaming mugs to the table. “They can’t seem to live with me. Best you don’t even think about it.”
“Hmm.” She took the mug in both hands and seemed to melt in its warmth. “I’m not thinking anything. You’re the one who called me.”
I watched her drink the whole mug in one long swallow, steam curling about her face, her eyes never leaving mine. “What’s your name?” I asked again.
“Take your pick. I have so many. ”
The encounter had to turn strange at some point, otherwise it would have no interest. Now that it has, I need to take a small break. When I come back, I expect the end will materialize.
She held out her mug as if to ask for more coffee. I hadn’t touched mine yet, so I pushed it to her. “Your real one will do.”
With a nod of thanks, she wrapped her hands around the mug and lifted it to her lips. “Zoe.”
I laughed. “Like the second Doctor Who’s companion? You don’t look a thing like her.”
“She was named after me.” Zoe drank down her second mug in one long gulp, then delicately wiped her mouth with the back of her hand. “Who do I look like?”
I didn’t know, but something about the shape of her face and the turn of her mouth reminded me a little of my mother. A fragment of anguish rose in my throat and tried to choke me. I forced it back down. “I have a strange feeling you know.”
She held out the mug. “I don’t want to impose, but . . .”
I got up and gave her a refill, which she downed as quickly as the first two. Then, setting the mug gently on the table, she rose. “I should go. If I stay, I’ll run you out of coffee.”
“Go where? You’re out of gas.”
“Nah, I just said that to get you to open the door.”
I watched her cross the cabin and undo the chain on the front door, then rushed after her. “Wait! I don’t know who you are. I don’t even know what you are!” Reaching her, I put my hand on hers to keep her from turning the knob.
“Oh, so now you want me to stay.”
Her words jarred me as though she’d slapped my face. “Well . . .”
She smiled and enfolded my hand in hers. “For now, that’s enough.”
A moment later, I was standing alone, not remembering when or how she had slipped through the door, or even if she had. She might have dissolved into mist and floated up the chimney for all I knew.
A lot of this was written while feeling my way toward the end. If you think I planned it all, you’re wrong. And I’m still looking for the finish. I know about what it is now, but not the form.
I thought about her a lot in the coming days. I even looked for her online, but with only a first name it was a fool’s quest. Thinking she might live in a town nearby — relatively speaking — I forced myself to go out looking, but to no avail. I talked with servers in mom and pop restaurants, gas station attendants, even a couple of librarians. Nobody had ever heard of Zoe or anyone quite fitting her description. Not with her thirst for coffee, anyway.
Strangely, the further I ventured, the less I craved my solitude. Not that I wanted to abandon it, but I began to discover — or rediscover —that connections had some value after all. I began to wonder if maybe I didn’t need the world at least a little, and if just possibly the world needed me in return.
In short, thanks to Zoe, I’m rediscovering life. It would be hard not to. She isn’t just any woman. I’m convinced of that. She’s . . .
Oh, look it up for yourself.
The end? Yes and no. The end of the first draft. Along the way I’ve done a bit of rewriting, but not so much as I usually do. Normally, I’d rework the story several times before showing it around. What you’re seeing is therefore rough around the edges. I’ll post the final version later, so you can see what changes.
Addendum: The final draft of “Zoe” is now available on Lit Up. You might like to compare it to this first draft. | https://lehket.medium.com/zoe-51989ed3fe63 | ['Dale E. Lehman'] | 2018-08-08 14:28:03.836000+00:00 | ['Writing Prompts', 'Writing', 'Short Fiction', 'Fiction', 'Short Story'] |
Why you should bring prototyping into your design process? | Why you should bring prototyping into your design process?
Learn the value of the prototype, make a compelling design, and boost your product development process.
Photo by UX Store on Unsplash
I’ve been with UI/UX design for nearly 5 years so far, every time when I present my design solutions to the stakeholders, I got different levels of feedback depending on how I demonstrate my works.
And I found an interesting fact is that if I only show my static design mockup with some basic user flows, people can roughly understand how I want to approach the problem, but it’s not easy for them to imagine how this design would work in the reality. So I have to be very articulated about every interaction detail of my design in order to better visualize the whole design concept for my audiences.
However, sometimes when I’m designing for a large scope project or a complex feature, it’s a bit difficult to clearly describe my idea by just showing static design mockups. So I’ve been looking for a better way to help me showcase my work without explaining too much? Then I finally landed on prototyping.
I started translating my ideas into an animated or playable prototype from different levels of tasks, it helps me largely reduce the communication cost and give people a simpler way to absorb the idea I provided.
In the past few years, I've tried a lot of prototyping tools for different use cases. I learned a lot from building those interactive experiences; it makes my designs more vibrant and compelling, and it's also an important skill for me as a UX designer. | https://uxdesign.cc/why-you-should-bring-prototyping-into-your-design-process-fb25b679accb | ['Lin Simon'] | 2020-04-20 11:36:15.752000+00:00 | ['Marketing', 'Prototyping', 'UI', 'User Experience', 'UX Design']
How to rewrite your SQL queries in Pandas, and more | Fifteen years ago, there were only a few skills a software developer would need to know well, and he or she would have a decent shot at 95% of the listed job positions. Those skills were:
Object-oriented programming.
Scripting languages.
JavaScript, and…
SQL.
SQL was a go-to tool when you needed to get a quick-and-dirty look at some data, and draw preliminary conclusions that might, eventually, lead to a report or an application being written. This is called exploratory analysis.
These days, data comes in many shapes and forms, and it’s not synonymous with “relational database” anymore. You may end up with CSV files, plain text, Parquet, HDF5, and who knows what else. This is where Pandas library shines.
What is Pandas?
Python Data Analysis Library, called Pandas, is a Python library built for data analysis and manipulation. It’s open-source and supported by Anaconda. It is particularly well suited for structured (tabular) data. For more information, see http://pandas.pydata.org/pandas-docs/stable/index.html.
What can I do with it?
All the queries that you were putting to the data before in SQL, and so many more things!
Great! Where do I start?
This is the part that can be intimidating for someone used to expressing data questions in SQL terms.
SQL is a declarative programming language: https://en.wikipedia.org/wiki/List_of_programming_languages_by_type#Declarative_languages.
With SQL, you declare what you want in a sentence that almost reads like English.
Pandas’ syntax is quite different from SQL. In Pandas, you apply operations on the dataset, and chain them, in order to transform and reshape the data the way you want it.
We’re going to need a phrasebook!
The anatomy of a SQL query
A SQL query consists of a few important keywords. Between those keywords, you add the specifics of what data, exactly, you want to see. Here is a skeleton query without the specifics:
SELECT… FROM… WHERE…
GROUP BY… HAVING…
ORDER BY…
LIMIT… OFFSET…
There are other terms, but these are the most important ones. So how do we translate these terms into Pandas?
First we need to load some data into Pandas, since it's not already in a database. Here is how:
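Assuming the data is sitting in a few local CSV files, the load is one call per file (the file names below are illustrative):

import pandas as pd

airports = pd.read_csv('airports.csv')
airport_frequencies = pd.read_csv('airport-frequencies.csv')
runways = pd.read_csv('runways.csv')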
I got this data at http://ourairports.com/data/.
SELECT, WHERE, DISTINCT, LIMIT
Here are some SELECT statements. We truncate results with LIMIT, and filter them with WHERE. We use DISTINCT to remove duplicated results.
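For example (the column names here follow the ourairports CSV layout and are meant as illustration):

# SELECT * FROM airports LIMIT 3
airports.head(3)

# SELECT id FROM airports WHERE ident = 'KLAX'
airports[airports.ident == 'KLAX'].id

# SELECT DISTINCT type FROM airports
airports.type.unique()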
SELECT with multiple conditions
We join multiple conditions with an &. If we only want a subset of columns from the table, that subset is applied in another pair of square brackets.
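A sketch of both cases:

# SELECT * FROM airports
# WHERE iso_region = 'US-CA' AND type = 'seaplane_base'
airports[(airports.iso_region == 'US-CA') & (airports.type == 'seaplane_base')]

# SELECT ident, name, municipality FROM airports
# WHERE iso_region = 'US-CA' AND type = 'large_airport'
airports[(airports.iso_region == 'US-CA')
         & (airports.type == 'large_airport')][['ident', 'name', 'municipality']]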
ORDER BY
By default, Pandas will sort things in ascending order. To reverse that, provide ascending=False.
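For instance, sorting one airport's radio frequencies by type, both ways (column names as in the ourairports frequencies file):

# SELECT * FROM airport_frequencies
# WHERE airport_ident = 'KLAX' ORDER BY type
airport_frequencies[airport_frequencies.airport_ident == 'KLAX'].sort_values('type')

# ... ORDER BY type DESC
airport_frequencies[airport_frequencies.airport_ident == 'KLAX'].sort_values(
    'type', ascending=False)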
IN… NOT IN
We know how to filter on a value, but what about a list of values — IN condition? In pandas, .isin() operator works the same way. To negate any condition, use ~.
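Something like this:

# SELECT * FROM airports WHERE type IN ('heliport', 'balloonport')
airports[airports.type.isin(['heliport', 'balloonport'])]

# SELECT * FROM airports WHERE type NOT IN ('heliport', 'balloonport')
airports[~airports.type.isin(['heliport', 'balloonport'])]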
GROUP BY, COUNT, ORDER BY
Grouping is straightforward: use the .groupby() operator. There’s a subtle difference between semantics of a COUNT in SQL and Pandas. In Pandas, .count() will return the number of non-null/NaN values. To get the same result as the SQL COUNT, use .size().
Below, we group on more than one field. Pandas will sort things on the same list of fields by default, so there’s no need for a .sort_values() in the first example. If we want to use different fields for sorting, or DESC instead of ASC, like in the second example, we have to be explicit:
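A sketch of the two variants (first relying on the default sort over the grouped fields, then sorting on the computed size):

# SELECT iso_country, type, COUNT(*) FROM airports
# GROUP BY iso_country, type ORDER BY iso_country, type
airports.groupby(['iso_country', 'type']).size()

# SELECT iso_country, type, COUNT(*) FROM airports
# GROUP BY iso_country, type ORDER BY iso_country, COUNT(*) DESC
airports.groupby(['iso_country', 'type']).size() \
    .to_frame('size').reset_index() \
    .sort_values(['iso_country', 'size'], ascending=[True, False])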
What is this trickery with .to_frame() and .reset_index()? Because we want to sort by our calculated field (size), this field needs to become part of the DataFrame. After grouping in Pandas, we get back a different type, called a GroupByObject. So we need to convert it back to a DataFrame. With .reset_index(), we restart row numbering for our data frame.
HAVING
In SQL, you can additionally filter grouped data using a HAVING condition. In Pandas, you can use .filter() and provide a Python function (or a lambda) that will return True if the group should be included into the result.
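For example, keeping only the airport types with more than 1000 airports in one country (the threshold is made up):

# SELECT type, COUNT(*) FROM airports WHERE iso_country = 'US'
# GROUP BY type HAVING COUNT(*) > 1000 ORDER BY COUNT(*) DESC
airports[airports.iso_country == 'US'].groupby('type') \
    .filter(lambda g: len(g) > 1000) \
    .groupby('type').size().sort_values(ascending=False)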
Top N records
Let’s say we did some preliminary querying, and now have a dataframe called by_country, that contains the number of airports per country:
In the next example, we order things by airport_count and only select the top 10 countries with the largest count. Second example is the more complicated case, in which we want “the next 10 after the top 10”:
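With .nlargest(), both look roughly like this:

# SELECT iso_country FROM by_country ORDER BY airport_count DESC LIMIT 10
by_country.nlargest(10, columns='airport_count')

# SELECT iso_country FROM by_country
# ORDER BY airport_count DESC LIMIT 10 OFFSET 10
by_country.nlargest(20, columns='airport_count').tail(10)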
Aggregate functions (MIN, MAX, MEAN)
Now, given this dataframe of runway data:
Calculate min, max, mean, and median length of a runway:
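In Pandas this is a single .agg() call over the length_ft column (the column name comes from the runways file); median is included here because Pandas has it built in:

runways.agg({'length_ft': ['min', 'max', 'mean', 'median']})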
A reader pointed out that SQL does not have median function. Let’s pretend you wrote a user-defined function to calculate this statistic (since the important part here is syntactic differences between SQL and Pandas).
You will notice that with this SQL query, every statistic is a column. But with this Pandas aggregation, every statistic is a row:
Nothing to worry about —simply transpose the dataframe with .T to get columns:
JOIN
Use .merge() to join Pandas dataframes. You need to provide which columns to join on (left_on and right_on), and join type: inner (default), left (corresponds to LEFT OUTER in SQL), right (RIGHT OUTER), or outer (FULL OUTER).
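For example, pulling one airport's radio frequencies by joining the two tables (illustrative columns again):

# SELECT airport_ident, type, description, frequency_mhz
# FROM airport_frequencies
# JOIN airports ON airport_frequencies.airport_ref = airports.id
# WHERE airports.ident = 'KLAX'
lax_freq = airport_frequencies.merge(airports[airports.ident == 'KLAX'][['id']],
                                     left_on='airport_ref', right_on='id',
                                     how='inner')
lax_freq[['airport_ident', 'type', 'description', 'frequency_mhz']]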
UNION ALL and UNION
Use pd.concat() to UNION ALL two dataframes:
To deduplicate things (equivalent of UNION), you’d also have to add .drop_duplicates().
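A minimal sketch, unioning two single-airport lookups:

# SELECT name, municipality FROM airports WHERE ident = 'KLAX'
# UNION ALL
# SELECT name, municipality FROM airports WHERE ident = 'KLGB'
pd.concat([airports[airports.ident == 'KLAX'][['name', 'municipality']],
           airports[airports.ident == 'KLGB'][['name', 'municipality']]])

# the same thing deduplicated, i.e. a plain UNION
pd.concat([airports[airports.ident == 'KLAX'][['name', 'municipality']],
           airports[airports.ident == 'KLGB'][['name', 'municipality']]]).drop_duplicates()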
INSERT
So far, we’ve been selecting things, but you may need to modify things as well, in the process of your exploratory analysis. What if you wanted to add some missing records?
There’s no such thing as an INSERT in Pandas. Instead, you would create a new dataframe containing new records, and then concat the two:
UPDATE
Now we need to fix some bad data in the original dataframe:
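With .loc you locate the rows and assign the corrected value in place (the URL is just a placeholder):

# UPDATE airports SET home_link = 'http://www.example.com/'
# WHERE ident = 'KLAX'
airports.loc[airports['ident'] == 'KLAX', 'home_link'] = 'http://www.example.com/'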
DELETE
The easiest (and the most readable) way to “delete” things from a Pandas dataframe is to subset the dataframe to rows you want to keep. Alternatively, you can get the indices of rows to delete, and .drop() rows using those indices:
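Using the lax_freq dataframe from the JOIN sketch above, both styles look like this:

# DELETE FROM lax_freq WHERE type = 'MISC'
lax_freq = lax_freq[lax_freq.type != 'MISC']

# or: find the indices of the offending rows and drop them
lax_freq.drop(lax_freq[lax_freq.type == 'MISC'].index)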
Immutability
I need to mention one important thing — immutability. By default, most operators applied to a Pandas dataframe return a new object. Some operators accept a parameter inplace=True, so you can work with the original dataframe instead. For example, here is how you would reset an index in-place:
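For instance:

airports.reset_index(drop=True, inplace=True)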
However, the .loc operator in the UPDATE example above simply locates the indices of the records to update, and the values are changed in-place. Also, if you updated all values in a column:
or added a new calculated column:
these things would happen in-place.
And more!
The nice thing about Pandas is that it’s more than just a query engine. You can do other things with your data, such as:
Export to a multitude of formats:
Plot it:
to see some really nice charts!
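One line covers each of the two items above (the output format and the plot settings are just examples):

by_country.to_csv('by_country.csv')
by_country.plot(x='iso_country', y='airport_count', kind='bar', figsize=(10, 7))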
Share it.
The best medium to share Pandas query results, plots and things like this is Jupyter notebooks (http://jupyter.org/). In fact, some people (like Jake Vanderplas, who is amazing) publish whole books in Jupyter notebooks: https://github.com/jakevdp/PythonDataScienceHandbook.
It’s that easy to create a new notebook:
After that:
- navigate to localhost:8888
- click “New” and give your notebook a name
- query and display the data
- create a GitHub repository and add your notebook (the file with .ipynb extension).
GitHub has a great built-in viewer to display Jupyter notebooks with Markdown formatting.
And now, your Pandas journey begins!
I hope you are now convinced that the Pandas library can serve you as well as your old friend SQL for the purposes of exploratory data analysis — and in some cases, even better. It's time to get your hands on some data to query! | https://medium.com/jbennetcodes/how-to-rewrite-your-sql-queries-in-pandas-and-more-149d341fc53e | ['Irina Truong'] | 2019-10-30 16:08:53.862000+00:00 | ['Sql', 'Coding', 'Software Development', 'Python', 'Data Science']
How to Reverse Engineering an Android App | Video Tutorial
Here are the steps we will follow:
Download an APK file from Google Play Store (it could be any APK file)
Using some free tools, we will reverse engineer the APK file to see the code.
If the APK file is protected using premium software, then we cannot actually reverse the code, and if the code is obfuscated then it will also be difficult to read after reverse engineering.
APK stands for the Android application package, a file format used by the Android operating system for distribution and installation.
Disclaimer: This tutorial is for educational purposes. During this demonstration, we selected an application APK file from the Google Play Store. If this chosen APK file is not working, try a different one or use your own. There is no intention to harm or tamper with the APK file we chose from the Google Play Store.
Step 1:
We will use the dex2jar software.
Let’s download dex2jar software from the following link: https://sourceforge.net/projects/dex2jar/.
If you want to see the code, there is a GitHub link: https://github.com/pxb1988/dex2jar
This is a zip file and in my desktop->demo3 directory I put the unzipped directory of this zip file.
Step 2:
We need to download JD-GUI software. You can visit the following link http://java-decompiler.github.io/ and in the download section based on your operating system, you can download the software.
I store the software in my desktop->demo3 directory.
dex2jar-2.0 and jd-gui directories within demo3 directory
Step 3:
We need to target any android app. So in this case I am targeting EnglishScore: Free British Council English Test app from the following google play store link. https://play.google.com/store/apps/details?id=com.englishscore
Step 4:
We need to download the APK file.
Visit the following site https://apkpure.com/region-free-apk-download
On the website, at the top text field, paste the google play store app link and click download .
It will take some time, and will show you another download link. Use that link to download the APK file.
Download link generated
After downloading store the app in demo3->dex2jar-2.0 directory
The APK file is placed within dex2jar-2.0 directory
Step 5:
Open your terminal if you use mac or open your Windows PowerShell if you use windows.
In the terminal, go to the target directory. In my case it will be: cd /Users/mahmud/Desktop/demo3/dex2jar-2.0
Now fix the permission of all files by pasting the following commands:
chmod 0777 *
Now type ./d2j-dex2jar.sh and then type Eng and click tab to get the full file name. EnglishScore\ Free\ British\ Council\ English\ Test_v1.00.32_apkpure.com.apk .
Now click enter. It will take some time, and you will see a new file named
EnglishScore Free British Council English Test_v1.00.32_apkpure.com-dex2jar.jar is created.
File Structures and Commands in Terminal
Step 6:
Now go to your jd-gui->build->libs directory. And in my case, if I double click jd-gui-1.6.1.jar you will see the following interface of the app.
Now drag and paste the EnglishScore Free British Council English Test_v1.00.32_apkpure.com-dex2jar.jar file within the app and you will see the following things.
Reversed Engineering Code
If you click the com section, you will see which 3rd-party libraries this app uses.
Also in this app, if you click englishscore section, you will see the source code of the app.
Now click any file, for example the BritishCouncil.class file, and you will see the actual code of the app. If this app were protected by a premium tool, or if the code had been obfuscated before releasing the app, we couldn't easily understand the code after reverse engineering.
But unfortunately, many Android app developers don't know about reverse engineering, and prying eyes can easily reverse engineer their apps.
So it is important for all Android developers to know about it.
Conclusion
If you click through the distinct classes of this application, you can see all the code. Now think: if you are an Android developer developing a financial or banking app, and you didn't use any obfuscation techniques or, especially for a financial-type app, any protection at all, then how easy would it be for hackers to hack your application or breach its security?
So I hope you understand how vulnerable our app is. If you want to know how to protect an Android app from reverse engineering, check the following article. | https://medium.com/level-up-programming/how-to-reverse-engineering-an-android-app-be5835f6fa1e | ['Mahmud Ahsan'] | 2020-10-28 08:20:02.427000+00:00 | ['Software Engineering', 'Android', 'Hacking', 'Security', 'Android App Development'] |
New call for applications opens for European Journalism COVID-19 Support Fund | Emergency Fund
For news organisations
Amount : €5,000, €10,000 or €25,000
: €5,000, €10,000 or €25,000 Eligible : news organisations (that employ the equivalent of at least one full-time journalist)
: news organisations (that employ the equivalent of at least one full-time journalist) Focus: providing specific financial support to address immediate and critical business needs. These grants may be used, for example, to replace lost sales revenue including from printed and digital products/ services, to fund alternative print distribution methods, to cover key organisational costs, to hire freelancers to replace staff during illness, to maintain essential coverage and services unrelated to COVID-19 and to fund IT, software services and infrastructure support.
“Without the help of this fund, we would have had to close Star & Crescent, at least for the foreseeable future. This funding will also support our current work towards launching a Members’ Scheme, which will help us to develop a sustainable income stream, and to deepen our relationship with the local community. I cannot adequately put into words the difference this funding makes to us.” Star & Crescent, U.K. (Wave 1 grantee)
For freelance journalists
Amount : €5,000
: €5,000 Eligible : freelance journalists; groups of freelance journalists
: freelance journalists; groups of freelance journalists Focus: helping community-based, community-driven local media to engage communities and their conversations within short-term or one-off COVID-19-related initiatives. These grants may be used, for example, to launch a dedicated newsletter, create a community group, cover travel costs, cover costs of audio/visual/recording equipment to aid remote working, undertake local fact-checking, engage in community data reporting, produce short-run print material, or set-up online events.
“We want to guarantee local reporting about migrant communities. This project will help a broader reporting of realities often unnoticed. Now I feel encouraged and motivated to share these stories.” María Clara Montoya, freelance journalist, Spain (Wave 1 grantee)
Endurance Fund
For news organisations only
Amount : €10,000 or €25,000
: €10,000 or €25,000 Eligible : news organisations (that employ the equivalent of at least one full-time journalist)
: news organisations (that employ the equivalent of at least one full-time journalist) Focus: providing specific financial support to news organisations that have pivoted / are pivoting their business model during the COVID-19 crisis. These grants may be used, for example, to invest in resources (including technology, toolkits, people, and experts) to: build resilience within teams and leadership, facilitate effective cross-team collaboration and sharing of knowledge, create more/better pathways for community participation in the work of the news organisation, execute user-focused product development, or develop or launch reader-revenue models.
“This support is vital to our survival as we have lost a significant portion of our advertising revenue. We can continue to serve our audience through the hard times ahead, as we have done for the last 20 years.” Klubrádió, Hungary (Wave 1 grantee)
Grantee from Wave 1, Koncentrat
Eligibility and selection criteria
The fund is open to freelance journalists or news organisations with their principal place of business located in a country in the Council of Europe. The applicant must be serving communities on a hyperlocal, local or regional scale and/or communities of interest.
Independent experts and the EJC team will shortlist and select grantees according to the criteria laid out in the Call for Applications (Wave 2).
Previous, unsuccessful applicants from Wave 1 (April 2020) may re-apply, but must create a new application that reflects changes in circumstances since the original application. Successful news organisations and freelancers who received grants from Wave 1 are not eligible to re-apply.
Please check the new Call for Applications (Wave 2) and the updated FAQs for details.
The deadline for applications is 11:59 pm CEST, Friday 25 September 2020.
Important links
“We want to share the joy of this award with our members: without them, Slow News wouldn’t even exist. So, we are committed more than ever to provide them journalism they support every day.” Slow News, Italy (Wave 1 grantee)
About the Fund Partners
Since 1992, the EJC has been building a sustainable, ethical and innovative future for journalism through grants, events, training and media development. It is an international non-profit, headquartered in the Netherlands, that connects journalists with new ideas, skills and people. Our focus in 2020 is building resilience into journalism.
The Facebook Journalism Project works with publishers around the world to strengthen the connection between journalists and the communities they serve. It also helps address the news industry’s core business challenges. Its trainings, programs, and partnerships work in three ways: build community through news, train newsrooms globally, and quality through partnerships. | https://medium.com/we-are-the-european-journalism-centre/new-call-for-applications-opens-for-european-journalism-covid-19-support-fund-98a33e8372f4 | ['Adam Thomas'] | 2020-09-17 12:01:18.531000+00:00 | ['Journalism', 'Funding', 'Covid 19', 'Media', 'Updates'] |
What 2020 Taught Us About Ourselves | What 2020 Taught Us About Ourselves
The year made us appreciate things we once hated.
Photo by Charl Folscher on Unsplash
This morning I went to YouTube’s recommended section, originally planning to watch the latest late night show episodes and get my update on what you guys across the big pond are doing to your country.
Instead I ended up watching a soccer goal compilation, which has to be the first time in my life that I did that. I never liked soccer; whenever our newscasters went on to talk about sports I switched off the TV.
In the same ways I learned to like the simple pleasure of comfortable pants, not quite to the point where I would wear them in public but I do wear them while working from home most days.
I learned to enjoy meeting and talking to people after two decades of doing whatever I possibly could to prevent human interactions.
And while that is just me I believe that we can all agree on seeing our priorities, beliefs, work and pastimes shift in all kind of weird, unexpected ways this year.
So what then, does that teach us about ourselves?
Priorities do shift.
This makes me afraid; it may mean that I end up as one of those guys with a house, a family and too big a car that is financed, where I can barely keep up with the monthly payments.
I can only hope that I will be able to maintain my disgruntled loner state who likes work more than living and fills every waking hour with productivity of some sort, that is certainly the easier life to live.
We need some form of continuity in our lives.
If I look back at my own life I have usually made larger changes while retaining continuities and routines in other areas — even if I ended up changing those soon after sometimes.
So maybe I would move places, but stay in contact with people of the old worlds, then once we inevitably grew apart I had my new world offering me continuity.
2020 has tossed a lot of those around, shuffled the cards and the dices fell in weird, unrelated ways sometimes. I think that a lot of us have suffered from this uncertainty on a larger scale more than the uncertainty in our personal lives. Unemployment security used to safeguard us all from going hungry, but I think this year was the first when many started wondering how long the state would be still in a state (hah) to cover those social securities.
Staying adaptable may just be the most powerful skill we can hone.
Anyone remember the gay, Jewish ex-Nazi who realized one day that he should probably reevaluate his beliefs? If a guy so deep into any one rabbit hole can recover and readjust, then so can we, right?
My life in retrospect has been a long string of swift and drastic lifestyle changes and I’m incredibly glad for that. I lived in eight different cities, towns and villages now, worked a farm job, did construction work, assembled e-cigarettes in a warehouse — and then somehow ended up in a hectic programming job that got so bad now towards the end that I’m delivering pizza to clear my head.
So I guess I have the advantage of change being my routine, this time next year I might be at a completely different place living a completely different life. Who knows, who cares.
We can live well on much less than we do right now.
I am nowhere near rich, but I’m entering that frightening stage in life where I begin to have actual savings, haven’t needed to touch my emergency fund in months and money comes in slightly faster than I can reasonably spend it.
But there was also a time when I lived on the 650€ a month that we were paid as apprentices (obviously with help of my mom but still) and somewhere in between those two I was working hard at substandard wage and lived a life worth living regardless.
I have good memories of those days, I even own a now-dysfunctional pilot watch that I bought from a rare, unexpected year-end-bonus and treasured ever since. I should probably have it repaired, but it was cheap enough to make the repair more expensive than buying a completely new one so I’m on the fence with that.
The point here is that I live more on less than other people spend to live miserable lives.
Happiness is not in things or people but rather in the moment.
I live a weird fringe life where I all-but-know that the life I’m currently living is due for a change in little time — that includes people I like, people I dislike, the things I own and the plans and dreams I have at any given moment.
Right now what I enjoy most is the weirdly stressful, weirdly relaxing way I meet with a friend who is a nurse working at odd hours, varying shifts and only a week’s advance knowledge of when and where we might be able to meet. The other night she needed moss for Christmas crafts so we met in the darkness of the early morning hours to hike around a lake and hunt for patches of moss to the shine of our flashlights — as you usually do.
That was fun by the sheer weirdness of the idea, even before adding the interesting conversations that arise from living two vastly different lifestyles.
I treasure those moments, much like the nights I spent working late into the nights constructing event locations, the hours on the farm cutting firewood, riding a tractor for hours across the interstate after a one-minute crash course (that’s the gas pedal, that’s the brake, if you think you need turn signals those are there) — they are insignificant on the greater scale but I still enjoy thinking back to them.
Having too much time to think can’t be good in the long run.
This year has seen me change my ways in more than just one regard, the main one being that I started to live, think, read and write again after what feels like five years of absence from life. | https://medium.com/the-ascent/what-2020-taught-us-about-ourselves-50aea26e8c14 | [] | 2020-12-26 22:02:31.269000+00:00 | ['Life', 'Self-awareness', 'Self', 'Self Improvement', 'Life Lessons'] |
36 Alien Civilizations Have Colonized the Milky Way | Re-thinking the Drake Equation; The Astrobiological Copernican Limit.
A new study published in the Astrophysical Journal has shifted the paradigm on the question of alien existence.
The study, conducted by researchers at the University of Nottingham, takes a fresh look at the Drake Equation. The researchers developed a new calculation called the Astrobiological Copernican Limit which is a more specific alien probability analysis than the Drake Equation.
First, some background.
The ACL draws inspiration from the famous Copernican Principle; the idea that earth does not sit at the center of the universe or is special in some way.
In the 16th century, the astronomer Nicolaus Copernicus proposed that the Sun is centrally located and stationary, in contrast to the then-prevailing belief that the Earth was central. Austrian-born cosmologist Sir Hermann Bondi named the principle after Copernicus in the mid-20th century.
The new study centers around the Copernican principle. The study guesstimates that the number of Communicating Extraterrestrial Intelligent Life in our galaxy could be somewhere between 4 and 211, but most likely 36!
Professor of Astrophysics at the University of Nottingham, Christopher Conselice, who led the research, explains: “There should be at least a few dozen active civilizations in our Galaxy under the assumption that it takes 5 billion years for intelligent life to form on other planets, as on Earth.”
The research is looking at evolution, but on a cosmic scale, hence the “Astrobiological Copernican Limit.”
Of course, the civilizations proposed by the team would be able to send radio signals out into space, which is what qualifies them as communicating.
The key difference between the ACL and the Drake equation is that it makes very simple assumptions about how life developed.
One major assumption is that life forms scientifically; that is, if the right conditions are met, then life will form.
This approach bypasses the two impossible-to-answer questions that have plagued previous calculations; “what fraction of planets in the habitable zone of a star will form life?” and “what fraction of life will evolve into intelligent life?”
These two questions are not answerable until we detect life, which still seems like a far-fetched reality.
Again, much like the Drake Equation, the ACL isn’t without its fair share of issues. We can’t accurately know the correct figures to compute, with estimates of the number of anything in the Milky Way tending to differ from source to source.
We don’t know the number of stars and exoplanets in the galaxy. So while this new method is surgical in its approach, it’s by no means definitive. We are still at the stage of trying to figure out the most precise values to compute.
There’s also the fact that we’re still not certain what caused intelligent life to evolve even on earth. The assumption that it could happen anywhere else in the universe “under the right circumstances” is incredibly far-fetched.
Additionally, the study suggested that the average distance to the nearest civilization might be about 17000 light-years, making detection and communication extremely difficult with our current technology. | https://medium.com/predict/36-alien-civilizations-have-colonized-the-milky-way-b23d67518cba | ['Leon Okwatch'] | 2020-12-28 01:38:45.466000+00:00 | ['Science Fiction', 'Astronomy', 'Space', 'Physics', 'Science'] |
Was a ‘Secret’ Version of the Gospel of Mark Found in 1958? | Was a ‘Secret’ Version of the Gospel of Mark Found in 1958?
Morton Smith made an announcement
Mar Saba monastery, Palestine (2011; Creative Commons license)
On December 29, 1960, at a meeting in New York of the Society of Biblical Literature and Exegesis, an assistant professor of history at Columbia University named Morton Smith announced an exciting manuscript discovery. Two years prior, he said, he’d been looking over some old Latin books in the top room of the tower library at the Mar Saba monastery, which is outside of Jerusalem.
At the end of a book printed in 1646, he’d noticed two and a half pages of handwriting. It was a text, in Greek, identified as a letter by Clement of Alexandria, the second-century Christian, addressed to a person named Theodore. The letter discusses, and quotes from, a “secret” version of the gospel of Mark.
One would have to appreciate, at least, a good story? A scholar, well-regarded in his field, had found in an old book a copy of a secret teaching of a sacred text, kept hidden since the origins of the faith. He wasn’t allowed to remove the book, so it remained locked up in the monastery.
But he had taken photos.
Morton Smith, “Secret Mark” (1958; public domain; colorized)
There were ‘new’ scenes with Jesus, his mother and Mary Magdalene, and a resurrected Lazarus
The most detailed scene was a passage to be inserted between Mark 10:34 and 35. It was, Clement prefaces, a “more spiritual” version of the gospel.
“And after six days Jesus told him what to do and in the evening the youth comes to him, wearing a linen cloth over his naked body. And he remained with him that night, for Jesus taught him the mystery of the kingdom of God.”
Clement adds, in reference to a question from Theodore, that the phrase “naked man with naked man” wasn’t in the secret text.
The day following Smith’s presentation, newspapers around the country report on the matter, framing it as a curiosity.
The Hackensack Record, December 30, 1960.
Was it an eerie moment in Christian history?
Since 1945, ‘new’ biblical texts had been shocking the established religious traditions: the Dead Sea Scrolls, the Nag Hammadi codices. Christians were being pressed to judge the religious acceptability of manuscripts they had never even known existed!
However, Christians knew that Jesus was not a “magician,” or something on the level of a hypnotist, and had not had an erotic scene with a young man—which was the case that Smith set out to prove.
Along the way, he kept up a correspondence with Gershom Scholem, the great Jewish scholar. No matter how reluctant, Christian scholars will have to deal with the matter, Smith writes to him, as “the text is there and has to be explained, and the problems are there and have to be answered.”
Is the text saying that Jesus and Lazarus were sexual? As Smith writes in his resulting 1973 book, “there is no telling how far symbolism went,” though the key moment in the ritual, he thinks, would be when “the disciple was possessed by Jesus’ spirit.”
This “mystery” language is given a spousal context in Ephesians 5:32, and Paul and Peter do issue those warnings against ‘immorality’. To Smith, it seems the early Christians had gotten a little licentious, and the scene with Jesus and the young man might provide the reason.
His book got media play and professional blowback. “I’m reconciled to the attacks,” he tells the New York Times. “Thank God I have tenure!”
However, from the perspective of the Biblical scholarship establishment, the feeling might have been that he was, in effect, fired.
A Jesuit scholar named Quentin Quesnell was suspicious
In two papers, in 1975 and 1976, he broached the matter. Why had Smith not taken more efforts to secure this supposed manuscript, and make it available for public scrutiny?
Why would it be, he mused, that Smith had—all his professional career—been interested in the very subjects his discovery seemed to verify? Even in his 1951 dissertation, Smith had written about “secret doctrine” in early Christianity and “forbidden sexual relationships.”
The hazy suggestion is that Smith might have forged it, but Quesnell allows that Smith might have found a text forged by someone else. “I do not find the style typical of Clement,” he notes.
In 1983, Quesnell traveled to Mar Saba to examine the manuscript, expecting to find an obvious forgery. His knowledge of this field, as he writes in his notes, included “what I read about forgeries in detective stories.”
When the book was finally before him, just as Smith had described it, Quesnell realized he wasn’t sure. He saw the librarians were good at guarding their property, and it would be “impossible” to remove the book. Quesnell went home, never to discuss the experience publicly.
About four other scholars had also gone to see the letter. They’d taken more photographs, and tried to arrange tests on the paper by Israeli scientists. As this would involve Jews, the monks wouldn’t allow it.
The monastery had begun to see Smith as publicity-hungry. He tried to get a BBC camera crew into the library. This disturbance was refused. Somewhere along the way, the letter went missing.
For years it seemed Smith had been the only person to see the ‘Letter to Theodore’
A theory formed among the skeptics. Quesnell’s notes, found after he died in 2012, record talk that “psychological explanations” account for Smith’s forgery. He’d never married and had been an Anglican priest in early life.
Morton Smith in 1989 by Allan J. Pantuck (public domain)
In 2005, Stephen C. Carlson published The Gospel Hoax, a dismissal of “Secret Mark” as a forgery motivated by Smith’s homosexuality. In 2007, Peter Jeffery published a similar critique, The Secret Gospel of Mark Unveiled.
Jeffery writes:
“My impression is that Morton Smith was a man in great personal pain, even if (which I don’t know) he was usually able to hide this fact from the people who knew him.”
The posthumous ‘outing’ by Bible scholarship was done without biographical investigation. In 2010, Biblical Archaeology Review did a feature on the controversy. Friends of Smith wrote in, disputing he’d been homosexual. He’d dated at least two women. A friend reports: “I suspect that he was just an Anglican clergyman who had had an unsuccessful love affair and afterward condemned himself to bachelorhood.”
Scholars continued to dismiss “Secret Mark”
Academics as illustrious as Larry Hurtado, Bart Ehrman, and Craig Evans saw it as more or less a ruse on Smith’s part. Among Bible scholars this might even have been a required view.
A South African feminist scholar named Winsome Munro had a 1992 paper, “Women Disciples: Light from Secret Mark.” And other non-American scholars, like Richard Bauckham and Scott G. Brown, examined how the ‘new’ text could illuminate some famous problems in the gospel narratives — as if a piece, removed, had been placed back in.
Timo Paananen, from the University of Helsinki, analyzed the handwriting in the photos of the letter, finding the case for forgery rather weak.
In 2008, the correspondence between Smith and Scholem was published. One watches Smith thinking, and re-thinking, the letter over time. The idea of his being a forger, the editor thinks, had emerged “from quite unscholarly grounds,” and in retrospect the evidence:
“…strongly points to the total trustworthiness of Smith’s account of his important discovery (though not necessarily of his interpretation of the document).”
An independent scholar named Stephan Huller did a series of blog posts on the matter, noting possibilities. Following textual clues, this ‘Theodore’ seemed to be an early Christian about whom more was known—like that he’d had a “same-sex union rite” and been “united to another man in this city with Origen of Alexandria presiding over the ceremony.”
That connection was noticed by an independent scholar named Michael Zeddies. In two papers, in 2017 and 2019, he traces the possibility that the ‘Letter to Theodore’ wasn’t written by Clement at all. It sounded more like Origen, the key third-century Christian scholar.
Origen had been declared a heretic in 553, some three hundred years after his death. A misattribution of the letter could have been the key to its survival. A story, that is, of Christians saving texts from Christians.
The idea of a ‘secret’ teaching was hardly new
Zeddies writes:
“Origen would have been quite comfortable with suggesting that some parts of Scripture were to be literally withheld from the spiritually unprepared.”
Smith’s ideas of erotic scenes might have been overdone. Religions often have their own specialized vocabulary. For Origen, the word “carnal,” for example, would point to the material world.
The “Secret Mark” scene might just describe a baptism, which in early Christianity had been done at night, all night, and fully naked. The early motto, Zeddies points out, was: “naked to follow the naked Christ.” The follower is shedding humanity along with clothes, dreaming of a new spirit form that is yet to be.
Whatever this “mystery” had been, it didn’t seem too lurid, at least, in 1 Corinthians 2:7, which notes the “mystery that has been hidden…”
What might not be clear is that it was ever revealed. | https://medium.com/history-of-yesterday/was-a-secret-version-of-the-gospel-of-mark-found-in-1958-9bde330fa1f9 | ['Jonathan Poletti'] | 2020-12-18 17:14:56.369000+00:00 | ['Religion', 'Christianity', 'Books', 'Bible', 'History'] |
Case study: how I would design Twitter’s Edit Button | So I hope everyone has a mask on because that might encourage Twitter to add an edit button. That being said, here’s a case study (case essay? redesign?) that shows how I’d implement the infamous ‘edit’ button on Twitter.
Target Use Case:
Giving a chance for users to fix their tweets while retaining accountability.
Giving people the chance to fix a spelling error, correct a misquoted tweet, or rectify false information would be greatly beneficial to the platform as a whole.
Just think about the number of times you’ve been wrong or you’ve seen tweets that were wrong. A couple of outcomes may occur, such as:
People begin to correct and pile onto the original poster to let them know that they’re wrong
A tweet becomes a thread — with the correct response (kinda like social proofing)
It gets deleted which holds 0 accountability to the poster.
Design Analysis:
Mockups for how the edit button would work
The design process for me was to be as authentic and realistic as possible to what Twitter would do themselves.
Hence, I decided that instead of adding another action button to a tweet, the edit function would be implemented as another menu option, similar to ‘delete tweet’ and ‘pin to profile’.
Afterward, the edited tweet will be treated just like a regular retweet, except at the bottom of the tweet there will be a tag stating that the tweet is edited & it is essentially threaded to the original tweet.
This tag serves a similar function to Twitter’s already existing tag regarding violations of their rules.
An example of what kind of tag when you violate twitter’s rules.
The language I chose for the tag does a few things;
Holds the user accountable
Has notified any viewer that the tweet is NOT the original tweet & has been edited
Anytime someone comments or retweets the edited or original tweet, the tag will appear and notify users that it has been edited.
The ability to gain optional knowledge on twitter’s guidelines
Language used for the tag
The feature would also hypothetically not allow the author to delete the original tweet unless they delete the edited version as well — effectively deleting the thread of tweets altogether. The edit function would only be able to be used once on any one tweet. This way, it helps ‘foolproof’ the platform against the kind of misleading tweets that might otherwise be allowed.
A terrible recording of the mockup I made for the interaction
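To pin down the rules described above — one edit per tweet, the edited version threaded to the original, and no deleting the original while its edited counterpart still exists — here is a tiny Python sketch. It is purely illustrative: the class and method names are mine, not anything from Twitter’s actual codebase or API.

```python
class Tweet:
    """Toy model of the proposed edit rules (illustrative only)."""

    def __init__(self, text):
        self.text = text
        self.edited_version = None   # the threaded edited tweet, if any
        self.original = None         # back-reference set on an edited tweet
        self.deleted = False

    def edit(self, new_text):
        if self.original is not None:
            raise ValueError("An edited tweet can't be edited again.")
        if self.edited_version is not None:
            raise ValueError("The edit function can only be used once per tweet.")
        edited = Tweet(new_text)
        edited.original = self        # thread the edit back to the original
        self.edited_version = edited
        return edited

    def delete(self):
        # The original can't be deleted while its edited version still exists.
        if self.edited_version is not None and not self.edited_version.deleted:
            raise ValueError("Delete the edited version first — or delete the whole thread.")
        self.deleted = True

    def tag(self):
        # The accountability tag shown under an edited tweet.
        return "" if self.original is None else "This Tweet has been edited; the original is in the thread."
```

Keeping the original reachable through the thread is what makes the tag more than decoration — anyone who sees the edit can check what it replaced.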
Final Thoughts
I believe having an edit button would be beneficial in twitter’s interactions as long as there is some form of accountability between users and a way for the original tweet to still exist alongside the edited version. This would help deter any misleading or offensive styled tweets because of the role the interaction would play. The edit button action would also help educate people on what is an error and what is a mistake — effectively creating a way to redeem oneself if need be.
We’ve seen the power social media has on the ‘real’ world. From affecting our mental health to taking over entire elections, it’s not a surprise it’s gotten to a point where we question whether the pros outweigh the harm that platforms like Reddit, Twitter, Facebook, and Instagram do to us. Is it worth it? And if it is, how can we make it better for everyone to interact with? What steps can we take to prevent the spread of misinformation, propaganda, and inappropriate opinions?
It’s a question that we have to continually ask ourselves, because with the advent of ‘deep fakes’, and people claiming your source is wrong, my source is right (what are we? children?), it seems like figuring out what is true and what is not is becoming more difficult than we’d like to admit.
I believe the edit button on Twitter will help aid in the fight against all of these factors. As long as there is accountability and education, Twitter can make their platform a little less toxic by implementing this.
TL;DR: | https://medium.com/design-bootcamp/twitters-edit-button-how-i-would-do-it-e39458168a2b | ['Anik Ahmed'] | 2020-10-16 03:08:19.994000+00:00 | ['User Interface', 'Case Study', 'Design', 'Twitter', 'UX'] |
Dr. Carmen Köhler: “We need more females in space” | Dr. Carmen Köhler is a true role model for men and women alike. As an “analogue astronaut” she explores Mars-like environments on earth and she is also a founder of the primary school competition Code4Space.
In this episode of REWRITE TECH, we talk to Dr. Carmen Köhler about Mars simulation missions, women in space, and how her passion has led to a very unique career.
How to become an analogue astronaut
The first thing people stumble upon when getting to know Dr. Köhler is her unusual career path. Born in Berlin, Dr. Köhler always loved maths, but she didn’t think she was capable of studying it. In consequence, she followed her second passion and made an apprenticeship to become a hairdresser. But then serendipity came into her life in the form of a client.
The client was a professor and they started to talk about books. When Carmen mentioned that she was currently reading Fermat’s Last Theorem, which is a book about mathematical proof, the professor was astonished. From that day on, he started to bring her mathematical programs and encouraged her to finally study the subject.
“I gave myself half a year after the hairdressing to become a make-up artist and then I studied maths. The first time I sat in the university and the professor started writing equations over equations, I was totally in love,” Dr. Köhler recalls.
The next turning point was the Austrian Space Forum, which was looking for an analogue astronaut. Carmen took the chance and eventually got the job. She explains: “Analogue astronauts are actually people who do science on Mars-like environments on earth.”
Dr. Carmen Köhler, why should we explore Mars?
The goal of these missions is to find out how the human body reacts to certain circumstances, both physically and mentally. Analogue astronauts are commissioned by universities or private sponsors to find answers to their questions. “As an astronaut, you’re the eyes and hands of the scientists,” describes Dr. Köhler.
“I think as humans, we are explorers. We are curious and we want to know things”
But the learning generated on Mars is not only relevant if we want to populate other planets, as Dr. Köhler clarifies: “What we learn in space, we can use for the earth and I think that is really important.” For example, spaceflights induce bone loss, which makes those flights an accelerated model for drug testing. Thanks to research on the International Space Station, a new therapy for osteoporosis could be found.
Is space made for women?
Since men and women produce different hormones, they also react differently to the environment of space. That’s why it’s so important to diversify the team and collect data. “We need data to know how our body and psychology react and what it has to do with our chemistry. For example, women in space have fewer problems with their eyes and ears than men.”
As the first female analogue astronaut, Dr. Köhler experienced some of these problems first-hand. The spacesuit and the shoes are made for men and are therefore quite heavy and large.
“We need more women, but we also need to make things better for women,” concludes Dr. Köhler. In her view, it’s important to have pioneers to guide the way and make space more inclusive for women. With her work, Dr. Köhler is one of these necessary forerunners.
Listen to REWRITE TECH with Dr. Carmen Köhler
Listen to the full conversation with Dr. Carmen Köhler on our REWRITE TECH Podcast, which is available on all common audio streaming platforms including Spotify and Apple Podcasts.
Don’t miss out on our other episodes including Janina Mütze from Civey or André Christ from LeanIX.
______________________________________________________________
Learn more about REWRITE TECH. | https://medium.com/rewrite-tech/dr-carmen-k%C3%B6hler-we-need-more-females-in-space-e8a52496caa1 | ['Sarah Schulze Darup'] | 2020-11-24 11:05:45.774000+00:00 | ['Aviation', 'Podcast', 'Women In Tech', 'Space', 'Science'] |
Why I Don’t Follow My Passion | Why I Don’t Follow My Passion
Motivations can’t bring food to your table
Photo by LinkedIn Sales Navigator on Unsplash
I’m a Digital Marketer by profession & a photographer by passion. You may wonder why I didn’t make my career in photography. I actually tried and learned a life-changing lesson that drove me to separate my career and passion.
I was brought up hearing the words “Follow Your Passion”. Later I found out that it’s the biggest misconception motivational speakers give us.
The quote should be, “Follow your passion but not Blindly”.
I was confused at the age of 25: “from where should I start?”. The most common thought for all of us.
I didn’t have a shitload of money or an ovarian-lottery win, and passion-related jobs pushed me to fail horribly.
But I didn't take it as a failure, I took it as a life-changing lesson and started my new beginning. I would like to share what are the lessons that changed my perspective. | https://medium.com/illumination/why-i-dont-follow-my-passion-78177176d9cc | ['Intisar Mahee'] | 2020-12-12 21:11:32.627000+00:00 | ['Passion', 'Motivation', 'Jobs', 'Careers', 'Career Advice'] |
“Usability is Accessibility” | Disability access symbols; image credit (https://oae.stanford.edu/resources-faqs/disability-access-symbols)
How and why designers should think about accessibility.
Design for “Everyone”
Who do you design for? As designers, we aspire to create designs that have the potential to impact the world and people of all kinds of shapes, sizes, and backgrounds. Moreover, designers fixate on the importance of creating products that anyone can use; after all, usability is important, and designers must critically consider the efficacy of their products. Thus, inherent to design is the notion of “design for everyone” — that is, to create simple designs that “even your grandma can pickup and use!”
Junior designers (myself included) easily acknowledge and accept this utopian adage with eagerness, strictly follow the long and winding road of usability checklists, and gently fall into the lion’s den of non-inclusive design. What happened here? Usability checklists aren’t wrong; in contrast, they espouse great principles and I encourage you to continue following these checklists when creating your designs. The problem lies in what the designer failed to do — they failed to think as a designer. Design is more than just creating aesthetic details or manufacturing textbook usable motifs; design is about thinking about your audience. Unfortunately, attempting to design for everyone may accidentally cause designers to create overly-general designs that fail to accommodate the unique populations of users who may actually use your designs, but are not specifically mentioned in your general guidelines. That doesn’t mean it’s bad to create designs that benefit the most types of users, but you should be aware of how your perceptions of “everyone” might exclude particular audiences. The saccharine idea of universal design unfortunately poses an ironic dilemma — if you attempt to design for everyone, you can leave out anyone.
If you attempt to design for everyone, you can leave out anyone.
What happens when you attempt to design something that is usable by anyone? Can you design something that is usable by everyone? Who does your “everyone” exclude? Do you consider a working and pregnant mother in your designs? Do you consider the grandmother with Asperger's? A four-year old who recently scraped his hands after falling from his bike? A fifty-eight year old electrician with moderate eye strain? Or how about the teenager with thyroid cancer?
Whether implicitly or explicitly, your design will leave out particular audiences. This truth is unfortunate, but inherent to the politics of the artifact that you design. Now, as a designer, you may wonder, “How can I make sure that I’m designing for my user then?” The answer lies in the question: “design for the user.”
“How can I make sure that I’m designing for my user then?” The answer lies in the question: “design for the user.”
You may bat an eye at this statement. Isn’t designing for everyone designing for your user? The two ideas sound almost identical, but the results of religiously following one idea over the other create vastly different results. “Everyone” is an ambiguous term. Although it appears to stand for all people, it stands for not. It serves as a “catch-all” term that doesn’t cater towards anyone’s needs. And, unfortunately, the needs of “everyone” can leave out the unique traits that may characterize your user. Instead, critically identifying a discrete number of probable users (and noting their characteristics and skills) can enable you to vicariously avoid the pit of generalizing your user and specifically create great features that best suit the needs for the individuals who will actually utilize your app. After all, if your target audience never includes a particular type of user, why divest resources that could otherwise be used to craft a solution that accommodates your actual users?
Accessibility is for “Everyone” Usability Forgets
61 million adults in the United States live with some type of disability — Centers for Disease Control and Prevention
A real consequence of “designing for everyone” is forsaking unexpectedly large populations of common users; these common users are usually tech-savvy individuals who rely on accessible technologies to live productively. Although you may counter that your app most likely will never be used by someone with “those demographics”, the numbers beg to differ. According to the CDC, 61 million adults in the United States have some type of disability. That’s 1 in 4 Americans. Again, you may counter that people with disabilities do not have the skills or needs to use technology, but again, you would be wrong. Pew Research reports that at least half of individuals with disability use the Internet on a daily basis. Although this number suggests that this population uses the Internet at a lower rate than the general population, it’s important to recognize that disabled populations still make up a significant percentage of possible users. Statistically speaking, it would be unwise to ignore the consideration of accessible designs and instead build for the idealized “everyone”.
…And Other Dangerous UX Myths
Accessible Design is Ugly
Even if designers agree that accessible design is important, it’s likely that they also think that accessible design is ugly. Although “ugly” accessible designs exist, accessible design is not inherently unattractive. (Moreover, accessible design is more than just creating aesthetic user interfaces but also includes creating accessible user flows and experiences.) Although few companies espouse accessibility in their design philosophies, beautiful accessible designs are not as uncommon as you may think.
Have you ever used an iPhone? How about a Mac? Or an Apple Watch? All of these products have several things in common but notably one: globally accessible features and designs. Perhaps surprisingly, all Apple products offer a suite of accessible options and tools that reflect its mindset towards accessible design. Throughout the years, Apple has remained dedicated towards creating beautiful products that remain accessible for individuals of a wide range of skills and abilities. Both disability advocates and designers praise Apple for its beautiful and accessible products.
People like Sadie (featured in the linked Apple advertisement) use accessible products to enjoy creative and productive lives like anyone and everyone else.
As much as designers hate ugly or bulky accessible designs, those with disabilities despise them just as much. Of course, those with disability embrace their identities, but that doesn’t mean that a stigma against those with disabilities doesn’t exist. Consequently, perhaps for the benefit and delight of both designers and those with disability, design shouldn’t be ugly.
Accessible Design is for Edge Cases
Both usability and accessibility checklists exist for designers to check and optimize their designs, but by no means are these checklists the end-all-be-all. Although heuristics exist to help promote accessible and usable designs, they don’t represent the nuanced designs necessary to create a unique, accessible product. Consequently, making a product accessible means more than just checking a box for using the right colors or the right fonts — it’s about creating user flows that reduce cognitive load or physical strain, and this type of attention to detail demands more consideration than just a PDF checklist from Nielsen Norman Group, even if it provides great heuristic guidelines. (Good) user experience designers don’t just create one-off designs, they create experiences, and accessible experiences should be created no differently. That said, making a product design does take time, but most likely not as much as you anticipated.
Accessible Design is Too Time Consuming
For individual developers or designers, creating accessible designs can feel overwhelming. To be fair, there exist many types of people with all kinds of disabilities. This can be intimidating to a junior designer who isn’t familiar with creating designs in the first place and certainly not accessible designs. And, it is likely that if you are a busy designer on a tight deadline, that making something accessible isn’t your first (or maybe even your last) thought — you just really, really need to ship this design by tonight so the devs can have it ready by the weekend.
First of all, accessibility shouldn’t be an afterthought; if you want to create quality designs that are usable for those populations, then you should treat disabled populations as real people and real users who deserve your attention. Additionally, because many resources for creating accessible technologies exist, nowadays, it’s especially easy to create accessible designs. Just as many common practices and motifs exist for creating usable designs, they also exist for accessible designs. Not only that, but usability and accessibility go hand in hand. After all, accessibility is usability for a significant, but marginalized and forgotten population.
Additionally, and luckily for our developer friends, many developer guides and tools exist to support accessible design. Developers can easily harness these tools to create websites that are both functional and accessible.
The Case for Designing for Accessibility
Expand Your Audience to Millions of People Worldwide
As I mentioned above, disability isn’t as uncommon as you think it is. Hundreds of millions of people worldwide have a disability. Thus, not accommodating to these individuals not only leaves out a sizable population of those who can experience and enjoy your website, but it can also reduce your own audience and outreach. Accommodating the virtual workflow of millions of individuals is not only amicable, but it’s also economical.
Accommodate the Needs of Others
Disability is hidden. It’s likely that you take classes with people of a wide range or abilities with all kinds of necessary day-to-day accommodations. That said, although some disabilities may not be as blatantly obvious as absolute blindness or paraplegia, they still can impact how millions of people use or visit websites. Moreover, many individuals who don’t have disabilities use accessibility tools to also improve their workflows, so accommodating to general accessible needs helps a lot more people than you would imagine.
Set a Good Example
For better or for worse, a lot of companies and organizations claim to be welcoming to all people regardless of race, sexuality, gender, and more! I’ve seen it. You’ve seen it. We’ve all seen it. So if you claim to welcome people of all different shapes and sizes, races and ethnicities, genders and sexualities, then welcoming people of different abilities and skills is no exception. Although few companies (and a nonexistent number of student organizations) make it a priority to create products to accommodate a variety of audiences, setting a standard for being welcoming of a diversity of abilities makes a big difference.
Abide by WCAG 2.0
In the United States, WCAG 2.0 is the set of guidelines that defines the standards for web accessibility. It offers a comprehensive list of what supports accessible design. Although individuals have more freedom to create whatever website they want, in the United States there are penalties for large corporations that fail to meet accessibility standards. Luckily, the United States government takes accessibility seriously, and this mentality should be reflected in all US-hosted websites.
How Do I Make My Designs Accessible?
WCAG 2.0 offers a list of guidelines for designers and developers on how to create more accessible, but I wanted to share a few easy but important ways to make your digital products more accessible.
Strong Color Contrast
Do you wear glasses? Do you wear contact lenses? Can you not see unsaturated reds or greens? Many Americans experience some type of visual impairment and many more wear glasses or contacts to correct their vision. One of the most well-known ways to enforce accessibility in your designs is to use colors with strong contrast. Strong color contrast helps make certain elements more visible and distinguishable. Certain web browsers also offer options that let website users increase color contrast.
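For the curious, WCAG 2.0 defines contrast as a ratio between the relative luminances of two colors, with 4.5:1 as the usual bar for normal body text. The little Python helper below checks a foreground/background pair against that bar — a simplified sanity check, not a substitute for a real accessibility audit.

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as (r, g, b) in 0-255, per the WCAG 2.0 formula."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between a foreground and a background color."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Example: dark gray text on a white background.
ratio = contrast_ratio((51, 51, 51), (255, 255, 255))
verdict = "passes" if ratio >= 4.5 else "fails"
print(f"Contrast ratio: {ratio:.2f}:1 -> {verdict} WCAG AA for body text")
```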
Large or Variable Font Size
These days, using extra large, bold fonts are especially trendy, but that hasn’t always been the case. Using large fonts or offering large font options are important to a growing number of audiences with low vision. Consider offering these options to create an accessible solution to those people.
Descriptive Alt-Text
Did you know that Facebook and Instagram offer alt-text options when posting pictures? However, fewer than 0.1% of pictures on Twitter have captions, and most of these descriptions offer poor explanations of the depicted imagery. While including alt-text and captions is particularly important in creating an accessible website, maintaining descriptive, elaborated, and helpful descriptions is even more important to help those with low vision or blindness to understand the displayed content.
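A low-effort way to catch missing descriptions is to scan your markup for img elements that have no alt attribute at all. The sketch below does exactly that with Python’s built-in html.parser; note that purely decorative images legitimately carry an empty alt, and no script can judge whether the text you did write is actually descriptive — that part is still on you.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute entirely."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.flagged.append(dict(attrs).get("src", "<unknown source>"))

page = """
<img src="team-photo.jpg" alt="Five volunteers smiling at a fundraiser table">
<img src="divider.png" alt="">   <!-- decorative: empty alt is intentional -->
<img src="logo.png">             <!-- this one gets flagged -->
"""

checker = MissingAltChecker()
checker.feed(page)
for src in checker.flagged:
    print(f"Missing alt text: {src}")
```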
Keyboard Input Options
Many Americans with visual or motor disabilities rely on their keyboard to provide input or navigate websites. However, many websites fail to support this alternate method of navigation. Thus, to support these audiences, incorporating descriptive navigational menus into websites or apps is a necessary imperative.
Reduced Cognitive Load
This is a bit hard to explain, but if you are familiar with user experience principles, then it’s likely that you’ve heard of “cognitive load”. Simply put, reducing cognitive load is all about not making things more complicated than they need to be — keep it simple! This means that user flows should be straightforward, interactions should be logical, and descriptions should be succinct and informational.
And More!
WCAG 2.0 elaborates all of its standards for accessibility on its website. It’s a long list, but it helps to capture the needs of hundreds of millions of people throughout the world! I encourage you to take a look at the WCAG 2.0 website to better understand what steps you can take to make your website more accessible.
What Does an Accessible Design Look Like?
The Brain Exercise Initiative has the cutest mascot!
This summer of 2020 the Brain Exercise Initiative (“a 501(c)(3) nonprofit organization that uses simple math, writing and reading aloud exercises as an intervention to improve cognitive function in those with Alzheimer’s Disease”) reached out to Bits of Good. In the midst of the shelter at home orders, the Brain Exercise Initiative was having trouble reaching out to seniors to practice their reading, writing, and math skills; this meant that these seniors were unable to gain access to vital resources aimed to help improve their cognitive function. Thus, the Brain Exercise Initiative requested Bits of Good to create a mobile app that could serve as an intervention to improve cognitive function in those with Alzheimer’s Disease particularly during the COVID-19 pandemic. Given this urgent task, Bits of Good organized a team of product managers, engineering managers, developers, and designers to create an app that would do just that, and I was one of three designers to help create this “brain exercise” app.
Reflecting on our design process, I am proud that my team of designers and I thought about accessibility long before we began to design. We knowingly understood that designing for seniors (and possibly seniors with moderate to mild cognitive impairment) meant that we had to think critically about how we designed our app and for whom we designed our app. Nonetheless, since we were mostly unfamiliar with accessible technology, we dedicated a few weeks to read articles upon articles about best practices for accessible design. Thus, identifying our users and target audience was especially crucial for creating the Brain Exercise Initiative app.
Our original wireframes for the Brain Exercise Initiative app were messy and unlike the final product, but it helped us think critically about what our next steps would be.
When we moved on from ideation to design, we not only opted to use vivid, high-contrast colors and bold, obvious fonts, but we also deliberately included several accessibility settings including ways to narrate text, change font size, increase contrast, reduce motion, and more. Although we understood that our main audience was likely seniors familiar with technology with moderate cognitive impairment, we also recognized that our aging population would likely have a wide range of abilities and skills, so we included features that would accommodate those users.
Later, as we iterated through our designs, our most difficult decisions were with choosing designs that would be most enjoyable and usable for our seniors. Reflecting on the effects of the world pandemic, we realized that these seniors, although in need of exercising their brains, were likely without visitors for months. That said, we included fun pictures, warm copy, and encouraging notifications to design a more delightful experience. Additionally, we organized several meetings (and consulted our point of contact from the Brain Exercise Initiative) to fiddle with layouts, formats, colors, and fonts as we tried to optimize the app for our senior audience. Altogether, designing for the specific audience was a new and challenging task, but our previous research and outstanding grit helped us design something fantastic.
Look for the official Brain Exercise Initiative app in the iOS App Store and Google Play Store!
I’m excited and proud of the work that my team of designers put into this app. We spent countless hours squabbling over which designs would be easier for our senior audience to use and enjoy, and I think that the elegant flows and designs of the app reflects this dedication for accessibility. Now, I’ll be the first one to admit that our app isn’t perfect; our app hasn’t been evaluated by an accessibility expert or consultant and it still needs a lot of user testing; but I look forward to potentially working with the Brain Exercise Initiative again this coming semester to continue improving the app.
Conclusion
When we design for “everyone”, it can be easy for us to forget who our actual users are. As designers, we should determine who our users are and make critical decisions that inform our designs. It’s important to recognize the users that would benefit from accessible designs. Although designers may have reservations about creating accessible options or accessible designs, they should remember that accessibility can be incorporated easily and beautifully while benefitting many. So, I’ll ask you again.
Who do you design for?
— Thank you to the Brain Exercise Initiative for working with us!
Additional Resources You’ll Love
For Anyone
For Designers
For Developers | https://medium.com/bits-of-good/usability-is-accessibility-7bb6cc5996ee | ['Kimberly Do'] | 2020-10-07 00:12:14.043000+00:00 | ['Product Design', 'Design', 'Accessible Design', 'UX Design', 'Accessibility'] |
What I Learned by Hiring an Editor to Critique My Novel | What I Learned by Hiring an Editor to Critique My Novel
It may have been less about my writing and more about me.
Photo by Chivalry Creative on Unsplash
I’ve been chasing my dream of becoming a published author for almost six years. Three manuscripts later — all of them collecting dust on a shelf — and I still feel like I’m making forward progress. That’s the good part.
Deciding to finally “retire” an unpublished piece and move on to something new is difficult. It feels like such failure to admit no one’s interested in the story you’ve agonized over for countless hours. But I’ve grown some thick skin over the years. Learned how to embrace each rejection as an opportunity to learn and improve.
Part of the learning process includes making the most of available resources to develop your craft. For writers, there are plenty of them out there. Conferences, online workshops and writing groups. Immerse yourself in them all if you can. Critique partners and beta readers are critical too. If you’re the only one to read your novel before thinking it’s ready for publication, it’s not.
While I had taken advantage of plenty of these resources along my journey, I still wasn’t gaining enough traction in terms of getting published. I’d been told my writing is solid, but knew I was missing something. Something important. It wasn’t until I sought an opinion from one of my author friends that I realized I didn’t know the first thing about the business of publishing. Things like pacing, character arc and voice are important, but so are knowing what genres are popular at any given moment, how to hook an agent or editor with the first page, and which literary tropes have been overdone. Therefore, I set out to learn everything I could about what I now believe is my next (and hopefully last) step in getting published.
For me, I decided to do this by hiring an editor. Up until that point, I had never paid anyone for a critique, relying solely on my friends and a few random readers I’d connected with whom I believed would give me professional and objective feedback. There’s nothing wrong with that, and I’m grateful for all the time these people have taken to help me polish my work. But there’s a lot to be said for engaging a paid professional, someone only concerned in performing the work they were hired to do and not afraid to hurt your feelings if necessary.
My primary goal in identifying this person was to find someone with industry experience. I used an online service called Reedsy that gives authors access to a variety of editorial and publishing professionals. All you have to do is search for the type of service you’re interested in (I was looking for a developmental edit of a romantic suspense novel) and search through the profiles of people willing and able to do that work. You can sort by different criteria including price, genre, experience level and timeframe, put a quote together and submit to five professionals at a time. I was able to hire an editor who had experience with a major publishing house and who specialized in romance and mystery. And she was willing to work within my budget. The perfect fit.
Now that I’ve received her feedback, let me just say — or shout — it was money well spent! Not only did she help me polish my content, but the tips she gave me were invaluable. So much so, that I felt compelled to share them with anyone else out there struggling to launch their writing into the world. Here’s what I learned.
1. The rules of proper English may not always apply.
One of the “random readers” I referred to earlier happens to be a retired English professor. She’s read through all three of my manuscripts and has taught me a lot about how to employ the rules so perfectly summarized in Strunk and White’s Elements of Style. She’s a stickler for proper punctuation, has helped me clarify when to use past perfect tense, and loves to sprinkle adverbs into my writing for greater detail. She’s been a godsend and I can’t imagine writing anything of length without her.
At the same time, when I received feedback from my developmental editor, I was surprised to see that she often reversed the changes my English professor had suggested. Sentence fragments seem to be encouraged in modern-day novels. Adverbs should be used sparingly. And the only dialogue tag apparently needed is “said.” Using phrases like “she retorted” or “he growled” is believed to violate the rule of “show don’t tell” and can slow down the reader’s pace. One acceptable exception seems to be when denoting volume as in “he shouted” or “she whispered.” I questioned these principles at first but have since found plenty of confirmation that these truly are industry standards. Hm. Who knew? Again, money well spent.
2. Important details may only live in your own head. Set them free!
One of the biggest criticisms I received is that too many relevant facts were being kept from the reader. My editor at times even found this insulting, as if she couldn’t trust my main character. Whoa! I had no idea. But after I went back and read the passages she had been referring to, I realized she was right. And it was not intentional. As a writer, you know your characters and plot lines inside and out. The challenge is making sure everything you want people to know is communicated properly in writing. I hadn’t planned on leaving the reader guessing about certain details but failed to put all my thoughts down on the page. Luckily, this was an easy fix and now I’m more intentional about making sure the reader is “in the know.” No more mind reading required.
Sometimes it takes a second and third time to effectively communicate a particular point, and it’s okay to repeat yourself for emphasis. I know I appreciate this myself when I’m reading a book, especially one with a complex plot. That’s something my editor confirmed and counseled me about — how to weave reminders throughout a manuscript to ensure clarity of understanding. Without that piece of advice, many of my ideas may still be stuck inside my head, leaving my readers confused. Nobody wants that.
3. Stereotypes can dissuade agents from representing your work, not to mention offending your readers.
This is a biggie, and probably the most eye-opening piece of feedback I received. I didn’t realize that having my Spanish-speaking character use broken English throughout the manuscript could be offensive, or that portraying an Indian American character as an Ivy leaguer with a genius IQ could be a turn-off as overly stereotypical, but my editor strongly discouraged me from employing these techniques. She also cautioned against describing shoppers at a popular big box retailer (you can fill in the blank here) in a negative fashion, even if that’s how many people across the country may characterize them. “You automatically isolate yourself from a large number of potential readers — those who shop at that particular store.” Ironically, the use of some of those stereotypes was an attempt at humor, not contempt in any way. But if that’s the way it was perceived, I’ll absolutely heed the warning. I may even seek out a sensitivity editor in the future to make sure I’m not making any further “no-nos.” If there’s a market for that type of editor, which apparently there is, it’s obviously something that needs to be on my radar.
An unlikable protagonist could be the kiss of death.
Here’s where I needed to make the most drastic change. My protagonist’s love interest was originally portrayed as somewhat crotchety, kind of a curmudgeon with a strong bias against young people. (Think Mr. Magoo but not quite so old, and not that crotchety). Most of his opinions were based on his experiences and observations about Millennials, and the plan was to have his character evolve throughout the story and learn to appreciate the younger generation for their strengths instead of penalizing them for their perceived faults.
Unfortunately, this strategy backfired with my editor. She was quick to point out that she herself was a Millennial, as are many of the agents and editors working in the industry, and that she personally would not have made it to the end of the book. My curmudgeon struck a chord with her. She even identified the point in the manuscript where she would have stopped reading, unwilling to wait and see whether he had a change of heart by the end.
Good to know, right? And not anything that would ever had occurred to me.
4. Pay attention to the current culture when making decisions about your characters.
There’s a scene in my novel where the protagonist gets drunk and ends up inviting the other main character back to her hotel room. It’s actually one of my favorite scenes as the chemistry between the two characters is pretty intense. But I was advised to make sure the woman wasn’t too drunk to consent to whatever eventually happens in that hotel room. Before the #MeToo movement, such a scene may not have thrown up any red flags, but now it does (and rightfully so). I didn’t have to alter the scene too much to fit within safe parameters, and that exercise definitely opened my eyes to being more conscious about similar issues moving forward.
In summary, I sure learned a lot through the process of a developmental edit. I have to admit that I didn’t agree with all the feedback at first. There were moments when I wondered whether I was being encouraged to kowtow to the gatekeepers of the industry and whether this is how passive-aggressive censorship works. (Please don’t judge. They were only fleeting thoughts).
But then I took a step back and really reflected on the observations being offered. I realized that some of my own cynicism and/or bias may have been creeping into my characters. What a lesson in self-awareness that is! I’ve always worked hard to be objective, fair and diplomatic in everything I do, but especially in my writing. And even though one editor’s subjective opinion is not the be-all, end-all, I really respect and appreciate her candor. If I want to get this book published, (or maybe the next one after that), it’s important to know how to best my best foot forward. The truth can be harsh sometimes, but better to face it now than when I’m staring down a negative book review! | https://medium.com/swlh/what-i-learned-by-hiring-an-editor-to-critique-my-novel-2e29e5a42d10 | ['Susan Poole'] | 2020-09-30 13:25:42.338000+00:00 | ['Writing Life', 'Novel Writing', 'Writing', 'Writing Tips', 'Publishing'] |
expertlead, a global community of highly qualified tech freelancers | expertlead has raised €7M in total. We talk with Arne Hosemann, its CEO.
PetaCrunch: How would you describe expertlead in a single tweet?
Arne Hosemann: expertlead is a global community of highly qualified tech freelancers. We support our community in all stages of self-employment: from project acquisition, providing relevant services and opportunities for further training and peer-to-peer learning, to administrative tasks.
PC: How did it all start and why?
AH: Earlier in our respective careers we constantly heard businesses complain about how hard finding great tech talent is. This, in itself, is not too surprising given top tech talent shortage is a widely discussed topic.
But when we talked to developers, they would point out how poor their experience with recruiters and staffing agencies had been. Very often even tech focused recruiters would not understand tech skills specifics or the professional preferences of these developers. This got us thinking about what it would take to make the experience on both sides significantly better.
As there is a growing trend towards self-employment in tech and freelancers still often work alone or in remote teams, we wanted to become their go-to partner that supports them in all stages of self-employment.
This was what drove our whole idea: building a community that is really different from a recruitment agency or a pure self-matching talent marketplace by becoming a true partner in our self-employed tech experts’ professional lives — from project acquisition, opportunities for continued professional development and peer-to-peer learning to taking care of administrative tasks.
Similarly, for our clients, we did not just want to focus on matching demand — our mission is to go a lot further and help companies identify the best talent. This is where many companies struggle, especially those that are not digital native. Assessing the various skills level of tech applicants is quite challenging and can be very time-consuming and hence expensive. Therefore, we started digging deeper into how we can test various tech stacks, databases and frameworks while still keeping it enjoyable for the applicants as well. Very quickly we got to the point where we realized that no single company can cover the entire tech field when it comes to testing — it is way too broad and complex. That is when the idea was born to involve our tech community in assessing other tech experts — which is one of our core USPs today.
With that in mind, we both left our previous job in 2017 and started expertlead in 2018.
PC: What have you achieved so far?
AH: Since Alex and I started in 2018, our team has grown rapidly: we are now an international startup headquartered in Berlin that employs around 45 people. By the end of the year we expect to be around 60 employees.
We have invested most of our seed capital in building out our tech products. We aim to use tech solutions across our entire value chain: from identifying suitable tech freelancers, testing their skill level and matching them to client projects to providing relevant services to our freelance community. We have made significant progress to automate these different steps already, especially when it comes to assessing our community’s tech skills, automatically matching client projects with the best freelancers and identifying leading tech talent.
Our strong focus on tech allows us to help our clients faster and more effectively than others. With that approach we have already convinced leading European multinationals and tech companies including Daimler, Babbel and Delivery Hero.
Just recently we have announced one of our greatest achievements since our launch: having three global investors — Acton, Rocket Internet and SEEK — jointly invest €7M in our company for our Series A round.
PC: How will you use your recent funding round?
AH: The newly raised capital will be used to support our international growth ambitions as well as to further drive the automation of our products. We also wish to broaden our technical know-how so our platform can service new areas such as cybersecurity. Last but not least our team will also be focusing on expanding our community offering and peer-to-peer engagement.
PC: What do you plan to achieve in the next 2–3 years?
AH: Closing our Series A was a great success but only the beginning of an exciting journey! In the next 2–3 years we will fully focus on expanding our tech community and on building a leading tech company in a space that is still mainly dominated by quite “manual” agencies.
We want to be known in the tech ecosystem for being a truly valuable partner in highly skilled freelancers’ professional careers and for offering the most enjoyable and solid technical assessment experience through our platform. That is the way we intend to expand our community globally in the years to come.
On the client side, we want to continue in our path to becoming the go-to trusted partner for both multinational corporates and tech companies when it comes to identifying and hiring the leading tech talent for their most innovative and complex tech projects. This will also, of course, continue to benefit our community greatly. | https://medium.com/petacrunch/expertlead-a-global-community-of-highly-qualified-tech-freelancers-1761ca632b46 | ['Kevin Hart'] | 2019-09-04 07:21:01.205000+00:00 | ['Freelance', 'Startup', 'Freelancers', 'Community', 'Freelancing'] |
Plug-in for Jira is live! | Vizydrop plug-in is available in Atlassian Marketplace.
We have integrated Vizydrop into the Atlassian ecosystem and happy to announce the availability of our plugin. Go and get visual answers from your Jira data.
Predefined templates popular among users will help you to get visual insights about your team progress.
Charts, pivot tables, facets.
Utilize all issue fields, projects, custom fields, changelog, status transitions, and work log.
Easy to use drag-n-drop editor with a user-friendly and comprehensive guide.
The report calculations, powered by autocomplete, help you modify visualizations like a pro.
All your reports can be organized into dashboards.
Filter data with your saved JIRA filters, JQL or control data using built-in filters.
Data browser with data reveal allows you to drill down into concrete issues with just a few clicks.
Share and export reports. Export, print or share by link with colleagues, friends, your mom, and the whole world.
Use popular apps as data sources and add custom sources. Create charts from files, links, Trello, Google Sheets, GitHub, etc.
Thank you for giving us a try. https://reports.vizydrop.com | https://medium.com/vizydrop/plug-in-for-jira-is-live-58e718f6bd8d | ['Oleg Seriaga'] | 2019-10-09 12:14:37.611000+00:00 | ['Dashboard', 'Jira Plugins', 'Atlassian', 'Project Management', 'Jira Reports'] |
Trump Obsessive Syndrome Appears to be Widespread | Satire/Humor
Trump Obsessive Syndrome Appears to be Widespread
Psychiatrists debate cutting edge therapies
Photo by Tim Mossholder on Unsplash
When I woke up this morning, I expected my social media feed to be refreshingly free of Trump stories, memes, and one-liners.
I mean, the election is over, right?
So I was surprised to see no fewer than 56 articles with Trump in their headline.
Here are just a few of them:
Left suggests Rounding up Trump supporters and sending them to Siberia
Trump supporters vow to form separate state
Democrats express amazement that Trump supporters aren’t rioting and burning cities
Writers vow to continue writing about Trump as long as they can find funny memes of him on Unsplash
These articles are just the tip of the iceberg. The sheer number of Trump stories one week post-election left me with two options. I could pull my comforter over my head and go back to sleep, hoping the whole thing would be over, or I could investigate this strange phenomenon.
Being the crack reporter that I am, I decided to investigate.
And this is what I discovered.
There is a new psychiatric disorder which is the polar opposite of Trump Derangement Syndrome. TDS, in case you didn’t know, is defined by Wikipedia as a term for criticism or negative reactions to Donald Trump that are perceived to be irrational.
The new disorder, according to the latest and most cutting-edge psychiatric journals, has been dubbed Trump Obsessive Syndrome. It is defined as an obsession, especially among writers, to focus constantly on Trump.
Writers who are victims of this disorder claim they can’t get Trump out of their minds. “It’s like something going round and round in my head,” one writer explained.
When they attempt to write about anything else, they aren’t passionate about it.
“I tried all day to write self-improvement listicles, and I kept seeing the orange man in my dreams. I finally had to give in to the urge and write about him.”
Another writer said, “Trump has provided me with my most successful material. I’ve been accepted to write Trump stories for Gen and Level and Forge, which is a degree of success I never thought to attain. If I stop writing about him, I’m back to self-publishing and getting three views on my stories.”
There is a certain degree of distress that appears to be synonymous with the syndrome. Writers worry that when Trump leaves the White House, they will have to become salespeople or form a failed startup instead of fulfilling their lifelong dream of being writers.
Psychologists are recommending extensive therapy.
“We start out by getting patients to focus on any color except orange,” one doctor explained. “Purple and blue are preferable.”
Another doctor said his therapeutic approach involves focus groups. “We have entire sessions where no one is allowed to mention Trump. Every time someone slips up and says his name, they are required to sit in the center of the circle wearing a MAGA hat.
“We expect this syndrome to fade away eventually,” the doctor continued. “But we’re growing concerned. Trump has elicited such an unprecedented level of emotion that it is hard for people to give up those feelings.”
This might be the reason several writers’ groups have gone underground to form a Recount the Ballots initiative.
Members of this underground group insist on remaining anonymous, but their reasoning goes something like this: If we can reverse the results of this election, we are guaranteed to always have something to write about.
But there is hope on the horizon if therapy doesn’t work. Pharmaceutical companies are already racing to come up with a vaccine. | https://medium.com/muddyum/trump-obsessive-syndrome-appears-to-be-widespread-36dceed03e39 | ['Bebe Nicholson'] | 2020-11-10 16:39:37.952000+00:00 | ['Elections', 'Humor', 'Writing', 'Politics', 'Satire'] |
How we built an easy-to-use image segmentation tool with transfer learning | How we built an easy-to-use image segmentation tool with transfer learning
Label images, predict new images, and visualize the neural network, all in a single Jupyter notebook (and share it all using Docker Hub!)
Authors: Jenny Huang, Ian Hunt-Isaak, William Palmer
GitHub Repo
Introduction
Training an image segmentation model on new images can be daunting, especially when you need to label your own data. To make this task easier and faster, we built a user-friendly tool that lets you build this entire process in a single Jupyter notebook. In the sections below, we will show you how our tool lets you:
1. Manually label your own images
2. Build an effective segmentation model through transfer learning
3. Visualize the model and its results
4. Share your project as a Docker image
The main benefits of this tool are that it is easy-to-use, all in one platform, and well-integrated with existing data science workflows. Through interactive widgets and command prompts, we built a user-friendly way to label images and train the model. On top of that, everything can run in a single Jupyter notebook, making it quick and easy to spin up a model, without much overhead. Lastly, by working in a Python environment and using standard libraries like Tensorflow and Matplotlib, this tool can be well-integrated into existing data science workflows, making it ideal for uses like scientific research.
For instance, in microbiology, it can be very useful to segment microscopy images of cells. However, tracking cells over time can easily result in the need to segment hundreds of images, which can be very difficult to do manually. In this article, we will use microscopy images of yeast cells as our dataset and show how we built our tool to differentiate between the background, mother cells, and daughter cells.
1. Labelling
There are many existing tools to create labelled masks for images, including Labelme, ImageJ, and even the graphics editor GIMP. While these are all great tools, they can’t be integrated within a Jupyter notebook, making them harder to use with many existing workflows. Fortunately, Jupyter Widgets make it easy for us to make interactive components and connect them with the rest of our Python code.
To create training masks in the notebook, we have two problems to solve:
1. Select parts of an image with a mouse
2. Easily switch between images and select the class to label
To solve the first problem, we used the Matplotlib widget backend and the built-in LassoSelector. The LassoSelector handles drawing a line to show what you are selecting, but we need a little bit of custom code to draw the masks as an overlay:
Class to manage a Lasso Selector for Matplotlib in a Jupyter notebook
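The full labelling class in our repo adds zooming, panning, and multi-class buttons; the minimal sketch below only shows the core mechanism, and the class name and defaults are our own illustration rather than the project's exact code.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.widgets import LassoSelector
class LassoMasker:
    # Draw a lasso on an image and mark the enclosed pixels in a mask
    def __init__(self, ax, image, class_value=1):
        self.mask = np.zeros(image.shape[:2], dtype=np.uint8)
        self.class_value = class_value
        ax.imshow(image)
        # Pixel coordinates, used to test which pixels fall inside the lasso
        yy, xx = np.mgrid[:image.shape[0], :image.shape[1]]
        self.pixel_coords = np.vstack([xx.ravel(), yy.ravel()]).T
        self.overlay = ax.imshow(self.mask, alpha=0.4, vmin=0, vmax=3)
        self.lasso = LassoSelector(ax, onselect=self.on_select)
    def on_select(self, verts):
        # verts is the list of (x, y) points traced by the mouse
        inside = Path(verts).contains_points(self.pixel_coords)
        self.mask.ravel()[inside] = self.class_value
        self.overlay.set_data(self.mask)
        plt.draw()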
For the second problem, we added nice looking buttons and other controls using ipywidgets:
We combined these elements (along with improvements like scroll to zoom) to make a single labelling controller object. Now we can take microscopy images of yeast and segment the mother cells and daughter cells:
Demo of lasso selection image labeler
You can check out the full object, which lets you scroll to zoom, right click to pan, and select multiple classes here.
Now we can label a small number of images in the notebook, save them into the correct folder structure, and start to train the CNN!
2. Model Training
The Model
U-net is a convolutional neural network that was initially designed to segment biomedical images but has been successful for many other types of images. It builds upon existing convolutional networks to work better with very few training images and make more precise segmentations. It is a state-of-the-art model that is also easy to implement using the segmentation_models library.
Image from https://arxiv.org/pdf/1505.04597.pdf
U-net is unique because it combines an encoder and a decoder using cross-connections (the gray arrows in the figure above). These skip connections cross from the same sized part in the downsampling path to the upsampling path. This creates awareness of the original pixels inputted into the model when you upsample, which has been shown to improve performance on segmentation tasks.
As great as U-net is, it won’t work well if we don’t give it enough training examples. And given how tedious it is to manually segment images, we only manually labelled 13 images. With so few training examples, it seems impossible to train a neural network with millions of parameters. To overcome this, we need both Data Augmentation and Transfer Learning.
Data Augmentation
Naturally, if your model has a lot of parameters, you would need a proportional amount of training examples to get good performance. Using our small dataset of images and masks, we can create new images that will be as insightful and useful to our model as our original images.
How do we do that? We can flip the image, rotate it at an angle, scale it inward or outward, crop it, translate it, or even blur the image by adding noise, but most importantly, we can do a combination of those operations to create many new training examples.
Examples of augmented images
Image data augmentation has one more complication in segmentation compared to classification. For classification, you just need to augment the image as the label will remain the same (0 or 1 or 2…). However, for segmentation, the label (which is a mask) needs to also be transformed in sync with the image. To do this, we used the albumentations library with a custom data generator since, to our knowledge, the Keras ImageDataGenerator does not currently support the combination “Image + mask”.
Custom data generator for image segmentation using albumentations
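As a rough sketch of the idea (the specific transforms and their parameters are illustrative choices, not necessarily the exact ones we used), a Keras Sequence that augments image and mask together looks like this:
import numpy as np
import albumentations as A
from tensorflow import keras
class AugmentedSegmentationGenerator(keras.utils.Sequence):
    # Yields batches of (image, mask) pairs, augmenting both in sync
    def __init__(self, images, masks, batch_size=8):
        self.images, self.masks, self.batch_size = images, masks, batch_size
        # The same transform is applied to image and mask together, so
        # spatial operations (flips, shifts, rotations) stay in sync
        self.transform = A.Compose([
            A.HorizontalFlip(p=0.5),
            A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=20, p=0.5),
            A.GaussNoise(p=0.2),
        ])
    def __len__(self):
        return int(np.ceil(len(self.images) / self.batch_size))
    def __getitem__(self, idx):
        imgs = self.images[idx * self.batch_size:(idx + 1) * self.batch_size]
        masks = self.masks[idx * self.batch_size:(idx + 1) * self.batch_size]
        xs, ys = [], []
        for img, mask in zip(imgs, masks):
            augmented = self.transform(image=img, mask=mask)
            xs.append(augmented["image"])
            ys.append(augmented["mask"])
        return np.array(xs), np.array(ys)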
Transfer Learning
Even though we have now created 100 or more images, this still isn’t enough as the U-net model has more than 6 million parameters. This is where transfer learning comes into play.
Transfer Learning lets you take a model trained on one task and reuse it for another similar task. It reduces your training time drastically and more importantly, it can lead to effective models even with a small training set like ours. For example, neural networks like MobileNet, Inception, and DeepNet, learn a feature space, shapes, colors, texture, and more, by training on a great number of images. We can then transfer what was learned by taking these model weights and modifying them slightly to activate for patterns in our own training images.
Now how do we use transfer learning with U-net? We used the segmentation_models library to do this. We take the layers of a deep neural network of your choosing (MobileNet, Inception, ResNet), along with the weights learned from training on image classification (ImageNet), and use them as the first half (encoder) of the U-net. Then, you train the decoder layers with your own augmented dataset.
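In code, the setup looks roughly like the sketch below; the backbone, class count, and training settings are illustrative rather than the exact configuration exposed by our command prompt.
import segmentation_models as sm
BACKBONE = "mobilenetv2"  # encoder pre-trained on ImageNet
preprocess_input = sm.get_preprocessing(BACKBONE)
# 3 classes: background, mother cell, daughter cell
model = sm.Unet(BACKBONE, encoder_weights="imagenet", classes=3, activation="softmax")
# Dice loss handles the class imbalance better than plain cross-entropy (see below)
model.compile(optimizer="adam", loss=sm.losses.DiceLoss(), metrics=[sm.metrics.IOUScore()])
# train_gen / val_gen would be instances of the augmented generator shown earlier
# model.fit(train_gen, validation_data=val_gen, epochs=30)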
Putting it Together
We put this all together in a Segmentation model class that you can find here. When creating your model object, you get an interactive command prompt where you can customize aspects of your U-net like the loss function, backbone, and more:
Segmentation model customization demo
After 30 epochs of training, we achieved 95% accuracy. Note that it is important to choose a good loss function. We first tried cross-entropy loss, but the model was unable to distinguish between the similar looking mother and daughter cells and had poor performance due to the class imbalance of seeing many more non-yeast pixels than yeast pixels. We found that using dice loss gave us much better results. The dice loss is linked to the Intersection over Union Score (IOU) and is usually better adapted to segmentation tasks as it gives incentive to maximize the overlap between the predicted and ground truth masks.
Example predictions by our model compared to true masks
3. Visualization
Now that our model is trained, let’s use some visualization techniques to see how it works. We follow Ankit Paliwal’s tutorial to do so. You can find the implementation in his corresponding GitHub repository. In this section, we will visualize two of his techniques, Intermediate Layer Activations and Heatmaps of Class Activations, on our yeast cell segmentation model.
Intermediate Layer Activations
This first technique shows the output of intermediate layers in a forward pass of the network on a test image. This lets us see what features of the input image are highlighted at each layer. After inputting a test image, we visualized the first few outputs for some convolutional layers in our network:
Outputs for some encoder layers
Outputs for some decoder layers
In the encoder layers, filters close to the input detect more detail and those close to the output of the model detect more general features, which is to be expected. In the decoder layers, we see the opposite pattern, of going from abstract to more specific details, which is also to be expected.
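For reference, pulling these intermediate outputs out of a Keras model takes only a few lines; in this sketch, model is assumed to be the trained U-net and test_image a single preprocessed input from earlier steps.
import matplotlib.pyplot as plt
from tensorflow import keras
# Build a helper model that returns the outputs of a few convolutional layers
layer_outputs = [layer.output for layer in model.layers if "conv" in layer.name.lower()][:4]
activation_model = keras.Model(inputs=model.input, outputs=layer_outputs)
activations = activation_model.predict(test_image[None, ...])  # add a batch dimension
# Show the first few channels of the first selected layer
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for i, ax in enumerate(axes):
    ax.imshow(activations[0][0, :, :, i], cmap="viridis")
    ax.axis("off")
plt.show()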
Heatmaps of Class Activations
Next, we look at class activation maps. These heat maps let you see how important each location of the image is for predicting an output class. Here, we visualize the final layer of our yeast cell model, since the class prediction label will largely depend on it.
Heatmaps of class activations on a few sample images
We see from the heat maps that the cell locations are correctly activated, along with parts of the image border, which is somewhat surprising.
We also looked at the last technique in the tutorial, which shows what images each convolutional filter maximally responds to, but the visualizations were not very informative for our specific yeast cell model.
4. Making and Sharing a Docker Image
Finding an awesome model and trying to run it, only to find that it doesn’t work in your environment due to mysterious dependency issues, is very frustrating. We addressed this by creating a Docker image for our tool. This allows us to completely define the environment that the code is run in, all the way down to the operating system. For this project, we based our Docker image off of the jupyter/tensorflow-notebook image from Jupyter Docker Stacks. Then we just added a few lines to install the libraries we needed and to copy the contents of our GitHub repository into the Docker image. If you’re curious, you can see our final Dockerfile here. Finally, we pushed this image to Docker Hub for easy distribution. You can try it out by running:
sudo docker run -p 8888:8888 ianhuntisaak/ac295-final-project:v3 \
-e JUPYTER_LAB_ENABLE=yes
Conclusion and Future Work
This tool lets you easily train a segmentation model on new images in a user-friendly way. While it works, there is still room for improvement in usability, customization, and model performance. In the future, we hope to:
Improve the lasso tool by building a custom Jupyter Widget using the html5 canvas to reduce lag when manually segmenting
Explore new loss functions and models (like this U-net pre-trained on broad nucleus dataset) as a basis for transfer learning
Make it easier to interpret visualizations and suggest methods of improving the results to the user
Acknowledgements
We would like to thank our professor Pavlos Protopapas and the Harvard Applied Computation 295 course teaching staff for their guidance and support. | https://towardsdatascience.com/how-we-built-an-easy-to-use-image-segmentation-tool-with-transfer-learning-546efb6ae98 | ['Jenny Huang'] | 2020-08-06 00:11:03.936000+00:00 | ['Transfer Learning', 'Visualization', 'Image Segmentation', 'Unet', 'Editors Pick'] |
How to Be Productive Without Being a Jerk | How to Be Productive Without Being a Jerk
Efficiency with people is ineffective.
Photo by Şahin Yeşilyaprak on Unsplash
Our 7-year-old daughter called her older sister a jerk the other day. It wasn’t the nicest thing to say, but the label was accurate at the time and the moment was actually kinda funny (I laughed on the inside because I don’t want to encourage our children to say mean things to each other lol).
I’ve been writing a lot of content for various publications, projects, and clients lately. If you’re a writer, then you understand that writing takes a lot of focus. I’ve also had more virtual meetings with my team, clients, and prospects these days. With three children at home, distractions can come quickly and often. Once in a while, I have to gently pry our 5-year-old son from my arm when I’m trying to write or participate in a Zoom meeting.
All of this got me thinking, “Have I been a productive jerk?”
The honest answer is “Yes, sometimes.” But for the most part, I think I’ve been good at giving the people in my life their needed attention while remaining productive. Our family is full of creative people and I want to be a good example of how to be creative, productive, attentive, and loving.
Allow me to share my tips on how to be productive without being a jerk. | https://medium.com/inspirefirst/how-to-be-productive-without-being-a-jerk-7fb114d61a85 | ['Chris Craft'] | 2020-08-20 17:49:23.304000+00:00 | ['Self Improvement', 'Self', 'Advice', 'Productivity', 'Life'] |
Reconciling the Differences in Our Data: A Mixed-Methods Research Story | We all love it when the quant and qual align, but what about those other times, when they seem at odds? For example: the surveys are in, the clickstream data has been analyzed, and you’re feeling confident. Then as you compare notes with your teammates, you realize that the recommended next steps based on UX research and data science are poised to send the business in two very different directions.
This was the challenge we found ourselves working to resolve as a user researcher and data scientist with Customer Insights Research supporting the same product team at Microsoft. What seemed like a conflict ended up leading us to deeper insights, a closer working relationship, and a better outcome for our customers—but getting there was a multi-step process.
Step 1. Confront your data discrepancy
Our product team was sunsetting the older version of an app in favor of one that provides accessibility for all users. To help our stakeholders understand what our customers needed in the new version, researchers had conducted user studies, interviews, and surveys as well as analyzing in-app feedback. Caryn, a researcher, was listening to what our customers were saying: they told us too many of the features they enjoyed in the older app were missing from the new app.
The user research recommendation, based on this analysis? Fill the feature gaps from the older app or customers will not transition over.
Meanwhile, Sera, a data scientist, conducted a cohort analysis with clickstream data to understand what our customers were doing in the older version of the app and how that impacted their transition to the new version. Based on the qualitative feedback, she expected to see customers who used features only available in the older app abandoning the new app. But the analysis showed that they weren’t.
The data science recommendation at this stage? Since customer retention in the new app doesn’t correlate with feature use in the older app, focus on other vital parts of the user journey to help people transition.
Research and data science had arrived at opposing suggestions. Now what?
Step 2. Resist the urge to champion your own data
At this stage, it could have been easy to each double down on our opposing viewpoints. If we’d presented the results, asking our general program manager to choose between recommendations, at least one of us would have the satisfaction of knowing we influenced the product. But how could our stakeholders and leaders be confident they were making the best data-driven decision, if we forced them to choose between quant and qual?
In a way, mixed-methods research is an exercise in getting comfortable with conflict and finding reconciliation instead of a “winner.” Happily, we each realized this and resisted the urge to champion our own perspective. We asked for the time we needed to investigate further, and our product team accommodated. | https://medium.com/microsoft-design/reconciling-the-differences-in-our-data-a-mixed-methods-research-story-6c1a2fe2f9f4 | ['Caryn Kieszling'] | 2019-12-31 19:14:29.344000+00:00 | ['Research And Insight', 'Design', 'UX', 'Data Science', 'Microsoft'] |
Scraping A to Z of Amazon using Scrapy | Scrapy is a fast, open-source web crawling framework written in Python, used to extract data from web pages with the help of selectors based on XPath.
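To give a flavour of what a Scrapy spider looks like, here is a minimal sketch; the start URL and the XPath expressions are placeholders (Amazon's markup changes frequently), so they will need to be adapted to the actual review page being scraped.
import scrapy
class ReviewsSpider(scrapy.Spider):
    name = "amazon_reviews"
    # Placeholder URL: replace EXAMPLE_ASIN with a real product's ASIN
    start_urls = ["https://www.amazon.com/product-reviews/EXAMPLE_ASIN/"]
    def parse(self, response):
        # Each review block is selected with XPath, then its fields are extracted
        for review in response.xpath('//div[@data-hook="review"]'):
            yield {
                "rating": review.xpath('.//i[@data-hook="review-star-rating"]//text()').get(),
                "title": review.xpath('.//a[@data-hook="review-title"]//text()').get(),
                "body": review.xpath('.//span[@data-hook="review-body"]//text()').get(),
            }
Running it with scrapy runspider reviews_spider.py -o reviews.json writes every yielded item straight into a JSON file.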
In this article, we will look at how to use Scrapy to scrape Amazon product reviews using just a product URL and automatically store all the scraped data in a JSON file within seconds. | https://medium.com/analytics-vidhya/web-scraping-a-to-z-using-scrapy-6ece8b303793 | ['Rohan Goel'] | 2020-07-21 11:43:55.775000+00:00 | ['NLP', 'Scrapy', 'Data Science', 'Amazon', 'Web Scraping']
Journalism In Dark Times | Fifteen years ago, I published my first news piece in a print magazine. After that, I went on a long journey discovering and working in diverse fields including blogging, citizen journalism, campaigning, translating, producing and managing. Some roads were bumpy while I found myself in others, and these became a launchpad to some successful media initiatives.
However, working in independent media in the Arab world has become increasingly more difficult, especially since the counter-revolutions began to gain strength in 2013.
Counter-revolutions have had a profound effect on the media industry, both in the countries of the Arab Spring and across the wider Arab world.
Security authorities have come to realize the power of the media and its impact on public opinion, as illustrated notably in 2011, when social media platforms were successfully used to mobilize people in protests, resulting in a political transition in a number of Arab countries. At the time, many television networks were prompted to change their policies and give more airtime to young voices. By 2013, things had completely changed.
Pre-2011, social media platform users in the Arab world were mostly young people who belonged to what can be classified as a rising middle class. However, following the 2011 uprisings, the general Arab public increasingly signed up to those platforms and started following them closely. This led to a significant change in the nature of discussions on those platforms. The new users came from different age groups and backgrounds, and Facebook, among other platforms, ceased to be a safe space to hold political discussions or start human rights campaigns.
Facebook debates turned into social confrontations that could land people in jail — something that has happened to many Egyptians who were simply expressing their views about current events in their country. Furthermore, many Egyptian journalists were arrested for doing their jobs, bringing Egypt up to the shameful third place in the world ranking of countries with journalists behind bars, after China and Turkey.
Mahmoud Abou Zeid, known as Shawkan, was finally released after spending more than five years in prison on trumped-up charges — Amnesty International
At the same time, the Egyptian State launched a crackdown on independent media outlets. Hundreds of websites have been blocked in Egypt and journalists have been demonized, portrayed as working for foreign entities and betraying their country. These actions have affected the personal security of all journalists.
The Egyptian government also moved to establish a number of companies with ties to state security agencies and the intelligence service. These new companies then acquired many television networks and news websites, which led to identical news coverage on all of the outlets.
The 2016 U.S. presidential election laid bare the role played by social media in making fake news go viral, which in turn prompted social media platforms to work on adapting their algorithms such that less news would be posted in news feeds, instead favoring a higher proportion of posts from friends and family. This greatly affected the independent media industry in Egypt, most of which had already fled from traditional news websites to social media networks in an attempt to reach the public.
The Egyptian State does not allow for an independent media, and constantly seeks to hinder any funding for institutions supporting independent media by drafting legislation aimed at paralyzing civil society. Alternative methods like social media outlets are also facing a crisis, not to mention the numerous risks faced by everyone involved in media.
How to solve this dilemma?
This is what I am trying to answer in my journey as a fellow in the Tow-Knight Entrepreneurial Journalism program at the Craig Newmark Graduate School of Journalism at CUNY.
The 2019 Tow-Knight Entrepreneurial Journalism Cohort. Not pictured: Emiliana Garcia. Photo by Skyler Reid
Independent journalism in the Arab world has generally kept to traditional means of publishing its content, such as text-based news and multimedia. Independent initiatives have not sufficiently explored innovative ways of doing their critical jobs in these tough years.
Al Jazeera’s AJ+ has greatly impacted the news industry both globally and in the Arab world. The digital consumer has become more interested in video than text-based content. However, a wide-scale investment in digital news has not happened yet.
Chatbots, Telegram groups and Instagram accounts have provided new tools for publishing content. For example, Iran is a country where Telegram and Instagram are widely used, and Telegram was employed during the 2017–18 protest against the regime to circumvent governmental obstruction, enabling protesters to coordinate and to inform the world about events in the country. Similar ways of using new tools will give greater chances for independent media to reach wider audiences.
The dependence of such initiatives on a small number of donor NGOs has, however, contributed to limiting the chances for discovering new tools in the media industry and in seeking out funding.
I’m looking for solutions to this complex dilemma by aiming to create a new model of non-profit journalism based on grants and individual donations. This model would ideally be able to reach an audience of millions using new tools that can bypass governmental obstruction. These restrictions may have succeeded so far in disrupting journalism in the Arab world, but cannot obstruct journalism forever. | https://medium.com/journalism-innovation/journalism-in-dark-times-dacf8a0e3bd9 | ['Abdelrahman Mansour'] | 2019-03-20 16:48:50.893000+00:00 | ['Journalism', 'Innovation', 'Technology', 'Human Rights', 'Media'] |
Implementing full-text search in Apache Pinot | Apache Pinot is a real-time distributed OLAP datastore, built to deliver scalable real time analytics with low latency.
Pinot supports super fast query processing through its indexes on non-BLOB like columns. Queries with exact match filters are run efficiently through a combination of dictionary encoding, inverted index and sorted index. However, arbitrary text search queries cannot leverage indexes and require a full table scan.
In this post, we will discuss newly added support for text indexes in Pinot and how they can be used for efficient full-text search queries.
Let’s take a few examples to understand this better.
Exact match with scan
SELECT COUNT(*) FROM MyTable WHERE firstName = “John”
In the above query, we are doing an exact match on the firstName column, which doesn’t have an index. The execution engine will find the matching docIds (aka rowIds) as follows: it first looks up the dictionaryId for “John” in the column’s dictionary, then scans the entire forward index, comparing each row’s dictionaryId against it to collect the matching docIds.
Exact match with inverted index
If there is an inverted index on the firstName column, the dictionaryId will be used to look up the inverted index instead of scanning the forward index.
Exact match with sorted index
If the table is sorted on column firstName, we use the dictionaryId to look up the sorted index and get the start and end docIds of all rows that have the value “John”.
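As a toy illustration of the difference (plain Python dictionaries and lists, not Pinot's actual index structures): the forward index stores one dictionaryId per row, while the inverted index maps a dictionaryId directly to the rows containing it.
dictionary = {"Alice": 0, "Bob": 1, "John": 2}
forward_index = [2, 0, 2, 1, 2]                   # dictionaryId per docId (row)
inverted_index = {0: [1], 1: [3], 2: [0, 2, 4]}   # dictionaryId -> docIds
target = dictionary["John"]
# Scan: compare every row's dictionaryId, O(number of rows)
scan_result = [doc_id for doc_id, dict_id in enumerate(forward_index) if dict_id == target]
# Inverted index: a single lookup returns the matching docIds directly
lookup_result = inverted_index[target]
assert scan_result == lookup_result == [0, 2, 4]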
The following graph shows latencies for exact match queries with and without index on a dataset of 500 million rows and selectivity (number of rows that passed the filter) of 150 million.
Text search with scan
What if the user is interested in doing arbitrary text search? Pinot currently supports this through the in-built function REGEXP_LIKE.
SELECT COUNT(*) FROM MyTable WHERE REGEXP_LIKE(firstName, ‘John*’)
The predicate is a regular expression (prefix query). Unlike exact matches, indexes can’t be used to evaluate the regex filter and we resort to full table scan. For every raw value, pattern matching is done with regex “John*” to find the matching docIds.
Text search with index
For arbitrary text data which falls into the BLOB/CLOB territory, we need more than exact matches. Users are interested in doing regex, phrase and fuzzy queries on BLOB like data. As we just saw, REGEXP_LIKE is very inefficient since it uses a full-table scan. Secondly, it doesn’t support fuzzy searches.
In Pinot 0.3.0, we added support for text indexes to efficiently do arbitrary text search on STRING columns where each column value is a BLOB of heterogeneous text. Doing standard filter operations (equality, range, between, in) doesn’t fit the bill on textual data.
Text search can be done in Pinot using the new in-built function TEXT_MATCH.
SELECT COUNT(*) FROM Foo
WHERE TEXT_MATCH (<column_name>, <search_expression>)
With support for text indexes, let’s compare the performance of text search query with and without index on a dataset of 500 million rows and filter selectivity of 150 million.
Text Indexing Problem
Like other database indexes, the goal of a text index is efficient filter processing to improve query performance.
To support regular expression, phrase and fuzzy searches efficiently, text index data structure should have few fundamental building blocks to store key pieces of information.
Dictionary
Dictionary maps each indexed term (word) to the corresponding dictionaryId to allow efficient comparison using fixed-width integer codes as compared to raw values.
Inverted Index
Inverted index maps dictionaryId of each indexed term to the corresponding docId. Exact match term queries (single or multiple terms) can be answered efficiently through dictionary and inverted index.
Position Information
Phrase queries (e.g., find documents matching the phrase “machine learning”) are an extension of exact term queries where the terms should appear in the exact same order in the matching documents. These queries need position information along with the dictionary and inverted index.
Automata for regex and fuzzy queries
Regex queries (including prefix, wildcard) and fuzzy queries will require a comparison with each and every term in the dictionary unless the prefix is fixed. There has to be a way to prune the comparisons.
If we can represent the input regular expression as a finite state machine such that the state machine is deterministic and accepts a set of terms then we can use the state machine in conjunction with the dictionary to get the dictionaryIds of all the matching terms that are accepted by the state machine.
Fuzzy edit distance search can also be done efficiently by representing the query as a state machine based on Levenshtein automata and intersecting the automata with the dictionary.
As discussed earlier, Pinot’s dictionary and inverted index can help answer exact match term queries efficiently. However, phrase, regex, wildcard, prefix and fuzzy queries require the position information and finite state automata which is currently not maintained in Pinot.
We learned that Apache Lucene has the necessary missing pieces and decided to use it for supporting full-text search in Pinot until we enhance our index structures.
Creating Text Indexes in Pinot
Let’s discuss creation of Lucene text indexes in Pinot as a series of key design decisions and challenges.
Text index per column
Pinot’s table storage format is columnar. Index structures (forward, inverted, sorted, dictionary) for the table are also created on a per column, per segment (shard) basis. For text index, we decided to stick with this fundamental design for the following reasons:
Evolution and maintenance is easier. A user has the freedom to enable or disable text indexing on each column of the table.
Our performance experiments revealed that creating a single global Lucene index across all text-index-enabled columns of the table hurts performance. A global index is larger than a per-column index, which increases search time.
Text Index Format
Like other indexes, a text index is created as part of Pinot segment creation. For each row in the table, we take the value of the column that has text indexing enabled and encapsulate it in a document.
The document comprises two fields:
Text field — contains the actual column value (docValue) representing the body of text that should be indexed.
— contains the actual column value (docValue) representing the body of text that should be indexed. Stored field — contains a monotonically increasing docId counter to reverse map each document indexed in Lucene back to its docId (rowId) in Pinot. This field is not tokenized and indexed. It is simply stored inside Lucene.
Storing Pinot DocId in Lucene Document
The stored field is critical. For each document added to the text index, Lucene assigns a monotonically increasing docId to the document. Later the search operation on the index returns a list of matching docIds.
A Lucene index is composed of multiple independent sub-indexes called segments (not to be confused with Pinot segments). Each Lucene sub-index is an independent, self-contained index. Depending on the size of the data indexed and how often the in-memory documents are flushed to the on-disk representation, a single text index can consist of multiple sub-indexes.
The key thing to note here is that Lucene’s internal docIds are relative to each sub-index. This can lead to situations where a document added to the text index for a given row in the Pinot table does not have the same Lucene docId as Pinot docId.
For a query that has a text search filter, this will lead to incorrect results since our execution engine (filter processing, index lookup etc) is based around the docIds. So we need to uniquely associate each document added to the Lucene index with the corresponding Pinot docId. This is why StoredField is used as the second field in the document.
Text Analyzer
Plain text is used as input for index generation. An analyzer performs pre-processing steps on the provided input text.
Lower casing
Breaks text into indexable and searchable tokens/terms.
Prunes stop words (a, an, the, or etc)
We currently use the StandardAnalyzer, which is good enough for standard English alphanumeric text and uses the Unicode text segmentation algorithm to break text into tokens. The analyzer is also used during query execution when searching the text index.
Text Index Creation for both Offline and Real-time
Pinot supports ingesting and querying data in real-time. Text indexes are supported for offline, real-time and hybrid Pinot tables.
IndexWriter is used to create text indexes. It buffers the documents in memory and periodically flushes them to the on-disk Lucene index directory. However, the data is not visible to IndexReader (used on the search query path) until the writer commits and closes the index which fsync’s the index directory and makes the index data available to the reader.
IndexReader always looks at a point-in-time snapshot (of committed data) of the index as of the time reader was opened from the index directory. This works well for offline tables since offline Pinot segments don’t serve data for queries until fully created and are immutable once created. The text index is created during pinot segment generation and is ready to serve data for queries after the segment is fully built and loaded (memory mapped) on Pinot servers. Thus the text index reader on the query path always looks at the entire data of a segment for offline tables.
However, the above approach will not work for real-time or hybrid Pinot tables since these tables can be queried while data is being consumed. This requires the ability to search the text index on the query path as the IndexWriter is in progress with uncommitted changes. Further sections will discuss the query execution for both offline and real-time in detail.
Querying Text Indexes in Pinot
We enhanced our query parser and execution engine with a new in-built function text_match() to be used in WHERE clause of the queries. The syntax is:
TEXT_MATCH(<columnName>, <searchExpression>)
columnName: Name of the column to do text search on.
Name of the column to do text search on. searchExpression: search query in accordance with Lucene query syntax.
Let’s take an example of a query log file and resume file:
Store the query log and resume text in two STRING columns in a Pinot table.
Create text indexes on both columns.
We can now do different kinds of text analysis on the query log and resume data:
Count the number of group by queries that have a between filter on timecol
SELECT count(*) FROM MyTable
WHERE text_match(logCol, ‘\”timecol between\” AND \”group by\”’)
Count the number of candidates that have “machine learning” and “gpu processing”
SELECT count(*) FROM MyTable
WHERE text_match(resume, ‘\”machine learning\” AND \”gpu processing\”’)
Please see the user docs for an extensive guide on different kinds of text search queries and how to write search expressions.
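For completeness, here is a minimal sketch of issuing a TEXT_MATCH query from Python over the broker's SQL REST endpoint; the host and port assume a local quickstart-style deployment and will differ in a real cluster.
import requests
BROKER_URL = "http://localhost:8099/query/sql"
sql = '''SELECT count(*) FROM MyTable
WHERE text_match(resume, '"machine learning" AND "gpu processing"')'''
response = requests.post(BROKER_URL, json={"sql": sql})
response.raise_for_status()
# The broker returns the matched rows under the resultTable key
print(response.json()["resultTable"]["rows"])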
Creating Text Index Reader for Offline Pinot Segments
Text index is created in a directory by IndexWriter as part of pinot segment generation. When the pinot servers load (memory map) the offline segment, we create an IndexReader which memory-maps the text index directory. An instance of IndexReader and IndexSearcher is created once per table segment per column with text index.
We chose to go with MMapDirectory instead of RAMDirectory since the former uses efficient memory mapped I/O and generates less garbage. RAMDirectory can be very efficient for small memory-resident indexes but increases the heap overhead significantly.
Text Filter Execution
Following diagram depicts segment level execution for the following text search query
SELECT count(*) from Table
WHERE text_match(textCol1, expression1)
AND text_match(textCol2, expression2)
Creating Text Index Reader for Realtime Pinot Segments
Text indexes in realtime Pinot segments can be queried while the data is being consumed. Lucene supports NRT (near real-time) search by allowing to open a reader from a live writer thereby letting the reader to look at all the uncommitted index data from the writer. However, just like any other index reader in Lucene, the NRT reader is also a snapshot reader. So, the NRT reader will have to be reopened periodically to see the incremental changes made by the live index writer.
Our real-time text index reader also acts as a writer since it is both adding documents to the index as part of real-time segment consumption and being used by the Pinot query threads.
During Pinot server startup, we create a single background thread. The thread maintains a global circular queue of real-time segments across all tables.
The thread wakes up after a configurable threshold, polls the queue to get a realtime segment and refreshes the index searcher of the real-time reader for each column that has a text index.
How often should the refresh happen?
Deciding the configurable threshold between successive refreshes by the background thread is something that should be tuned based on the requirements.
If the threshold is low, we refresh often and queries with text_match filter(s) on consuming segments will get to see the new rows quickly. The downside is lots of small I/Os since refreshing the text index reader requires a flush from the live writer.
If the threshold is high, we flush less often which increases the lag between the time a row was added to the consuming segment’s text index and appears in search results of the query with text_match filter.
It is a trade-off between consistency and performance.
Key Optimizations
So far, we discussed how text index is created and queried in Pinot. We also talked about a few design decisions and challenges. Now, let’s discuss details on optimizations we implemented to get the desired functionality and performance.
Using Collector
For a search query, Lucene’s default behavior is to do scoring and ranking. The result of the call to indexSearcher.search() is TopDocs which represents top N hits of the query sorted by score descending. In Pinot we currently don’t need any of the scoring and ranking features. We are simply interested in retrieving all the matched docIds for a given text search query.
Our initial experiments revealed that the default search code path in Lucene results in significant heap overhead since it uses a PriorityQueue in TopScoreDocCollector. Secondly, the heap overhead increases with the increase in the number of matching documents.
We implemented the Collector interface to provide a simple callback to indexSearcher.search(query, collector) operation. For every matching Lucene docId, Lucene calls our collector callback which stores the docId in a bitmap.
Pruning Stop Words
Text documents are very likely to have common english words like a, an, the, or etc. These are known as stop-words. Stop words are typically never used in text analysis but due to their high occurrence frequency, index size can explode which consequently hurts query performance. We can customize the Analyzer to create custom token filters for the input text. The filtering process in the analyzer prunes all the stop words while building the index.
Using a pre-built mapping of Lucene docId to Pinot docId
As discussed above, there is a strong need to store Pinot docId in every document added to the Lucene index. This results in a two-pass query execution:
The search operation returns a bitmap of matching Lucene docIds.
. Iterate over each docId to get the corresponding document. Retrieve the pinot docId from the document.
Retrieving the entire document from Lucene was a CPU hog and became a major bottleneck during throughput testing. To avoid this, we iterate the text index once to fetch all <Lucene docId, Pinot docId> mappings and write them to a memory-mapped file.
Since the text index for offline segments is immutable, this works well as we pay the cost of retrieving the entire document just once when the server loads the text index. The mapping file is later used during query execution by the collector callback to short-circuit the search path and directly construct a result bitmap of pinot docIds.
This optimization along with pruning the stop-words gave us 40–50x improvement in query performance by allowing the latency to scale with increase in QPS. The following graph compares the latency before and after this optimization.
Disable Lucene Query Result Cache
Lucene has a cache to boost performance for queries with repeatable text-search expressions. While the performance improvement is noticeable, cache increases the heap overhead. We decided to disable it by default and let the user enable (if need be) on a per text index basis.
Use compound file format
Lucene’s on-disk index structures are stored in multiple files. Consider the case of 2000 table segments on a Pinot server, each Pinot table segment having text index on 3 columns with 10 files per text index. We are looking at 60k open file handles. It is very likely for the system to run into “too many open files” problem.
So, the IndexWriter uses the compound file format. Secondly, when the text index is fully built for a column, we force-merge the multiple Lucene sub-indexes (which are also referred to as segments in Lucene terminology) into a single index.
Configure in-memory buffer threshold
As documents are added to the text index during Pinot segment generation, they are buffered in-memory and periodically flushed to the on-disk structures in the index directory. The default Lucene behavior is to flush after memory usage has reached 16MB. We experimented with this value and made some observations:
A flush results in a Lucene segment. As more of these are created, Lucene can decide to merge few/all of them in the background. Having multiple such segments increases the number of files.
Having a default threshold as 16MB doesn’t strictly mean the index writer will consume 16MB of heap before flushing. The actual consumption is much higher (around 100MB) presumably because in Java there is no good way to programmatically keep track of the amount of heap memory used.
Smaller thresholds result in a large number of small I/Os as opposed to fewer big I/Os. We decided to keep this value configurable and chose 256MB as the default to keep a good balance between memory overhead and number of I/Os.
Additional Performance Numbers
We also ran micro-benchmarks to compare the execution time of text_match and regexp_like on a Pinot table with a single segment containing 1 million rows. Two different kinds of test data were used:
Log data: A STRING column in Pinot table where each value is a log line from apache access log.
A STRING column in Pinot table where each value is a log line from apache access log. Non log data: A STRING column in Pinot table where each value is resume text.
The following graph shows that search queries using the text index are significantly faster than scan-based pattern matching.
Another evaluation was done with Pinot’s native inverted index to understand when using text index may not be the right solution.
White-space separated text can be stored as a multi-value STRING column in Pinot.
Pinot will create a dictionary and inverted index on this column.
If only exact term matches (using =, IN operators) are required, then text index is not the right solution. Pinot’s inverted index can do the exact term matches 5x faster than Lucene.
However, if a phrase, regex (including prefix and wildcard) or fuzzy search is needed, then text index is the right choice both functionality and performance wise.
Upcoming Work
Pre-built mapping of lucene docId to pinot docId works for offline segments since the text index is immutable. For real-time consuming segments, this optimization is not applicable since the index is changing while it is serving queries. Optimizing the Lucene docId to Pinot docId translation is work in progress.
Fine-tuning the background refresh thread to work on a per-table or per-index basis. The current implementation has a single background thread to manage all real-time segments and their text indexes.
Conclusion
In this blog post, we discussed how we leveraged Lucene to engineer the text search solution in Pinot to meet our functional and performance (QPS and latency) requirements. Please visit the user documentation of text search to learn more about using the feature.
If you’re interested in learning more about Apache Pinot, these resources are great places to get started.
Docs: http://docs.pinot.apache.org
Getting Started: https://docs.pinot.apache.org/getting-started
Special thanks
I would like to thank our Pinot OSS team for their relentless efforts to make Pinot better: Mayank Shrivastava, Jackie Jiang, Jialiang Li, Kishore Gopalakrishna, Neha Pawar, Seunghyun Lee, Subbu Subramaniam, Sajjad Moradi, Dino Occhialini, Anurag Shendge, Walter Huf, John Gutmann, our engineering manager Shraddha Sahay and SRE manager Prasanna Ravi. We would also like to thank the LinkedIn leadership Eric Baldeschwieler, Kapil Surlaker, and Igor Perisic for their guidance and continued support as well as Tim Santos for technical review of this article. | https://medium.com/apache-pinot-developer-blog/text-analytics-on-apache-pinot-cbf5c45d282c | ['Siddharth Teotia'] | 2020-06-16 05:03:16.441000+00:00 | ['Software Engineering', 'Apache Pinot', 'Open Source', 'Programming', 'Analytics'] |
Here’s What I Learned From 30 Days of Creative Coding (a Codevember Retrospective) | Lessons Learned in Codevember
We all stand on the shoulders of code giants
I watched more code tutorials in one month than I have in three years. I browsed countless GitHub repositories of open source Javascript packages. I trawled through Twitter and Instagram, searching for other creative coders to draw inspiration from.
Here’s the thing: The internet would not be what it is today without open source creators.
Time and time again, I was blown away by people building creative open source packages and giving them away for free. At first, I felt guilty for taking other code and tweaking it to start my sketches. But then I learned that this is how we build things now: We find projects that inspire us, learn from what others built, and then build our own new thing.
I learned from so many people and package maintainers, but I feel the need to give shout-outs for a few specifically:
If you want to learn something new, start a work-adjacent project
I really struggle with the pervasive side-hustle culture in tech and programming. It feels like everyone is building an app on the side or, in my field, creating tons of cool data visualizations when they get home from work or on the weekends. But I spend six or more hours every day doing data visualization. Most of the time I love it, but it feels exhausting to come home and do the same thing. I want to be more than the work that I do at my job.
Day 3: Deep Waves. I used a Perlin noise generator to create pseudo-random line paths. (The site has an animated version!).
So for me, it was really important to do something that was not specifically data visualization. This is why I like the term work-adjacent — learning how to draw with code required a few skills I use in my day job (JavaScript, design, debugging, etc.) but with a whole new level of freedom to let my mind wander. There were no data sets to tie down my designs. I could explore abstract things like randomness and generating pseudo-random algorithms. I could create things just for the fun of it.
I am 90% sure that if I did data viz for every day of November, I wouldn’t have finished. I would have burnt out too soon. So if you have a project in mind, maybe ask a few questions first: Will this feel too much like work? If the answer is yes, take a deep breath and take off some of the pressure you put on yourself. Maybe it’s time to do something for the fun of it, or the creativity, rather than to further your career with a side hustle.
Doing something every day that is terrible and cathartic
When you try to come up with a new idea every day, something strange happens. At first, you obsess over details, trying to make every sketch better than the last. Then you fall a few days behind, having resisted publishing the last sketch because it “still needed something.” And then you’re days behind, wondering how you’ll catch up, feeling like a failure.
But eventually, you give yourself a break. You stop caring so much. You put work out there before it’s perfect. And the new ideas start flowing through you more quickly now, escaping from your hands to breathe life before you have the chance to squash them.
Day 30: Devided Bliss. Not the most elegant of sketches, but I liked the end result. Random polygons are generated based on a function and then placed on the page with a masking effect.
You do good work even if it’s not perfect. You do more, and you learn more.
Programming should be fun, especially creative programming. So be easy on yourself. Make mistakes. Put something out into the world and then go back and fix it once you’ve given the thing a chance to breathe. You’ll feel better after doing it a few times.
It’s OK to feel like a fraud. Everybody does
Most days, I felt like a fraud for adapting someone else’s code. But then I would open Twitter and see loads of other devs expressing how they feel, like they never know enough about JavaScript, or Python, or whatever they are using.
Day 10: Old Pyramids. Nothing complicated on the code side. Just a nice picture composed purely of polygons
If you’re creating anything with code, you will probably have to continue learning new skills. It never stops. Technology evolves, packages change, new tools emerge. Even the most experienced developers and artists have to spend time learning. And they use the same resources as everybody else.
Math is beautiful. Math is for everybody
Sometimes you need to visualize something to understand it. I never considered myself a math person. Historically, I have leaned more to the art / design side of creativity. Math was for data scientists, engineers, astronomers, etc. I just helped bring their work to life with pictures.
Day 16: Math Meditations. This sketch visualizes the concept of recursion. Each function calls itself until a predefined value is reached.
False. If #Codevember did anything for me, it turned me into a huge math nerd. Even though I don’t understand large swaths of the field still, visualizing equations and algorithms in p5.js revealed the intricacy and beauty behind numbers. I’m used to creating images with static data sets, but connecting shapes and colors to evolving, random data opened up a whole new world for me.
Don’t be intimidated by a field you don’t understand. Math is not just for the mathematicians. Art is not just for the artists.
Have a look around, dabble in what excites you, and go make something beautiful. | https://medium.com/better-programming/heres-what-i-learned-from-30-days-of-creative-coding-a-codevember-retrospective-8c05a8497d24 | ['Benjamin Cooley'] | 2020-01-10 20:00:51.663000+00:00 | ['Data Visualization', 'JavaScript', 'Development', 'Programming', 'Creative Coding'] |
@RequestParam vs @QueryParam vs @PathVariable vs @PathParam | The annotations @RequestParam, @QueryParam, @PathVariable and @PathParam are all used to read values from the request. But which one is used for what?
The annotations are deliberately grouped this way: they perform the same task but come from different frameworks, and they often appear in combination.
comparison
As shown in the table, the difference lies in where a value is read from. @PathParam reads the value from a path segment of the called URI. @QueryParam reads values from the query parameters of a URI call, which are listed after the ? in the URI.
PathParams are position-dependent (their meaning comes from where they appear in the path), while QueryParams are passed as key-value pairs, so their order does not matter when more than one QueryParam is used.
example
As an example, here are both calls in a URI: with a path parameter the value is part of the path itself (e.g. /orders/1), while with a query parameter it is passed after the ? as a key-value pair (e.g. /orders?id=1).
@PathVariable
This annotation is used on the method parameter we want to populate:
@RequestMapping(value = "/orders/{id}", method = RequestMethod.GET)
@ResponseBody
public String getOrder(@PathVariable final String id) {
return "Order ID: " + id;
}
Even though @PathVariable and @RequestParam are both used to extract values from the URL, their usage is largely determined by how a site is designed.
The @PathVariable annotation is used for data passed in the URI (e.g. RESTful web services) while @RequestParam is used to extract the data found in query parameters.
Reference
http://www.nullpointer.at/2017/11/22/requestparam-queryparam-pathvariable-pathparam/
Related Links
https://docs.oracle.com/javaee/7/api/javax/ws/rs/PathParam.html
https://docs.oracle.com/javaee/7/api/javax/ws/rs/QueryParam.html
https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/bind/annotation/PathVariable.html
https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/bind/annotation/RequestParam.html | https://medium.com/1developer/spring-requestparam-vs-queryparam-vs-pathvariable-vs-pathparam-7c5655e541ad | ['Mahdi Razavi'] | 2019-06-22 05:31:22.056000+00:00 | ['Spring', 'Java'] |
Chalkboard Thoughts — Dec 15, 2020 — Episode 4 —Fear & Doubt | I’ve written often about ‘Fear & Doubt’. Those two words came to me, when I was listening to an audible book during 2019, titled, ‘The Art of Happiness’ by The Dalai Lama and Howard C. Cutler. For me it’s in this place, the place of ‘Fear & Doubt’ where all suffering in our conditioned human minds starts. If we were only able to master this mental feat then we would be saved from those demons (see picture above) that roam around in our minds. | https://medium.com/stayingaliveuk/chalkboard-thoughts-dec-15-2020-episode-4-fear-doubt-156a70003ec1 | ['Michael De Groot'] | 2020-12-15 16:37:32.389000+00:00 | ['Storytelling', 'Shareyourstory', 'Blackboardthoughts', 'Stayingaliveuk', 'Whiteboardanimation'] |
The 5 Clustering Algorithms Data Scientists Need to Know | Clustering is a Machine Learning technique that involves the grouping of data points. Given a set of data points, we can use a clustering algorithm to classify each data point into a specific group. In theory, data points that are in the same group should have similar properties and/or features, while data points in different groups should have highly dissimilar properties and/or features. Clustering is a method of unsupervised learning and is a common technique for statistical data analysis used in many fields.
In Data Science, we can use clustering analysis to gain some valuable insights from our data by seeing what groups the data points fall into when we apply a clustering algorithm. Today, we’re going to look at 5 popular clustering algorithms that data scientists need to know and their pros and cons!
K-Means Clustering
K-Means is probably the most well-known clustering algorithm. It’s taught in a lot of introductory data science and machine learning classes. It’s easy to understand and implement in code! Check out the graphic below for an illustration.
K-Means Clustering
1. To begin, we first select a number of classes/groups to use and randomly initialize their respective center points. To figure out the number of classes to use, it’s good to take a quick look at the data and try to identify any distinct groupings. The center points are vectors of the same length as each data point vector and are the “X’s” in the graphic above.
2. Each data point is classified by computing the distance between that point and each group center, and then classifying the point to be in the group whose center is closest to it.
3. Based on these classified points, we recompute the group center by taking the mean of all the vectors in the group.
4. Repeat these steps for a set number of iterations or until the group centers don’t change much between iterations. You can also opt to randomly initialize the group centers a few times, and then select the run that looks like it provided the best results.
K-Means has the advantage that it’s pretty fast, as all we’re really doing is computing the distances between points and group centers; very few computations! It thus has a linear complexity O(n).
On the other hand, K-Means has a couple of disadvantages. Firstly, you have to select how many groups/classes there are. This isn’t always trivial and ideally with a clustering algorithm we’d want it to figure those out for us because the point of it is to gain some insight from the data. K-means also starts with a random choice of cluster centers and therefore it may yield different clustering results on different runs of the algorithm. Thus, the results may not be repeatable and lack consistency. Other cluster methods are more consistent.
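A quick sketch with scikit-learn (synthetic data, illustrative parameters): n_clusters has to be chosen up front, and n_init re-runs the random initialization several times and keeps the best result, which addresses the consistency issue mentioned above.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
# Toy dataset with 4 natural groupings
X, _ = make_blobs(n_samples=1000, centers=4, cluster_std=0.8, random_state=42)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)
print(kmeans.cluster_centers_)  # the learned group centers
print(labels[:10])              # cluster assignment of the first 10 points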
K-Medians is another clustering algorithm related to K-Means, except instead of recomputing the group center points using the mean we use the median vector of the group. This method is less sensitive to outliers (because of using the Median) but is much slower for larger datasets as sorting is required on each iteration when computing the Median vector.
Mean-Shift Clustering
Mean shift clustering is a sliding-window-based algorithm that attempts to find dense areas of data points. It is a centroid-based algorithm meaning that the goal is to locate the center points of each group/class, which works by updating candidates for center points to be the mean of the points within the sliding-window. These candidate windows are then filtered in a post-processing stage to eliminate near-duplicates, forming the final set of center points and their corresponding groups. Check out the graphic below for an illustration.
Mean-Shift Clustering for a single sliding window
To explain mean-shift we will consider a set of points in two-dimensional space like the above illustration.
1. We begin with a circular sliding window centered at a point C (randomly selected) and having radius r as the kernel. Mean shift is a hill-climbing algorithm that involves shifting this kernel iteratively to a higher density region on each step until convergence.
2. At every iteration, the sliding window is shifted towards regions of higher density by shifting the center point to the mean of the points within the window (hence the name). The density within the sliding window is proportional to the number of points inside it. Naturally, by shifting to the mean of the points in the window it will gradually move towards areas of higher point density.
3. We continue shifting the sliding window according to the mean until there is no direction in which a shift can accommodate more points inside the kernel. Check out the graphic above; we keep moving the circle until we are no longer increasing the density (i.e. the number of points in the window).
4. This process of steps 1 to 3 is done with many sliding windows until all points lie within a window. When multiple sliding windows overlap, the window containing the most points is preserved. The data points are then clustered according to the sliding window in which they reside.
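For readers who want to try this without hand-rolling the sliding windows, scikit-learn ships a MeanShift estimator; the bandwidth parameter below plays the role of the window radius r, and the quantile value is only an illustrative guess for the toy data:

```python
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)  # toy 2D data
# estimate_bandwidth derives a window radius from the data; quantile is a tuning knob.
bandwidth = estimate_bandwidth(X, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth).fit(X)
print(len(set(ms.labels_)))  # number of clusters, discovered automatically
```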
An illustration of the entire process from end-to-end with all of the sliding windows is shown below. Each black dot represents the centroid of a sliding window and each gray dot is a data point.
The entire process of Mean-Shift Clustering
In contrast to K-means clustering, there is no need to select the number of clusters as mean-shift automatically discovers this. That’s a massive advantage. The fact that the cluster centers converge towards the points of maximum density is also quite desirable as it is quite intuitive to understand and fits well in a naturally data-driven sense. The drawback is that the selection of the window size/radius “r” can be non-trivial.
Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
DBSCAN is a density-based clustered algorithm similar to mean-shift, but with a couple of notable advantages. Check out another fancy graphic below and let’s get started!
DBSCAN Smiley Face Clustering
1. DBSCAN begins with an arbitrary starting data point that has not been visited. The neighborhood of this point is extracted using a distance epsilon ε (all points which are within the ε distance are neighborhood points).
2. If there is a sufficient number of points (according to minPoints) within this neighborhood, then the clustering process starts and the current data point becomes the first point in the new cluster. Otherwise, the point will be labeled as noise (later this noisy point might become part of a cluster). In both cases that point is marked as “visited”.
3. For this first point in the new cluster, the points within its ε distance neighborhood also become part of the same cluster. This procedure of making all points in the ε neighborhood belong to the same cluster is then repeated for all of the new points that have just been added to the cluster group.
4. This process of steps 2 and 3 is repeated until all points in the cluster are determined, i.e. all points within the ε neighborhood of the cluster have been visited and labeled.
5. Once we’re done with the current cluster, a new unvisited point is retrieved and processed, leading to the discovery of a further cluster or noise. This process repeats until all points are marked as visited. Since at the end of this all points have been visited, each point will have been marked as either belonging to a cluster or being noise.
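A quick way to see these steps in action is scikit-learn’s DBSCAN, where eps corresponds to ε and min_samples to minPoints. The dataset and the parameter values below are illustrative assumptions, not values from the article:

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)  # two crescent-shaped clusters
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)
print(set(labels))  # cluster ids; a label of -1 marks the points treated as noise
```

The two-crescent dataset also illustrates the “arbitrarily shaped clusters” point: a centroid-based method like K-Means typically cannot separate the crescents cleanly, while DBSCAN can.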
DBSCAN poses some great advantages over other clustering algorithms. Firstly, it does not require a pre-set number of clusters at all. It also identifies outliers as noise, unlike mean-shift, which simply throws them into a cluster even if the data point is very different. Additionally, it can find arbitrarily sized and arbitrarily shaped clusters quite well.
The main drawback of DBSCAN is that it doesn’t perform as well as others when the clusters are of varying density. This is because the setting of the distance threshold ε and minPoints for identifying the neighborhood points will vary from cluster to cluster when the density varies. This drawback also occurs with very high-dimensional data since again the distance threshold ε becomes challenging to estimate.
Expectation–Maximization (EM) Clustering using Gaussian Mixture Models (GMM)
One of the major drawbacks of K-Means is its naive use of the mean value for the cluster center. We can see why this isn’t the best way of doing things by looking at the image below. On the left-hand side, it looks quite obvious to the human eye that there are two circular clusters with different radii centered at the same mean. K-Means can’t handle this because the mean values of the clusters are very close together. K-Means also fails in cases where the clusters are not circular, again as a result of using the mean as the cluster center.
Two failure cases for K-Means
Gaussian Mixture Models (GMMs) give us more flexibility than K-Means. With GMMs we assume that the data points are Gaussian distributed; this is a less restrictive assumption than saying they are circular by using the mean. That way, we have two parameters to describe the shape of the clusters: the mean and the standard deviation! Taking an example in two dimensions, this means that the clusters can take any kind of elliptical shape (since we have a standard deviation in both the x and y directions). Thus, each Gaussian distribution is assigned to a single cluster.
To find the parameters of the Gaussian for each cluster (e.g the mean and standard deviation), we will use an optimization algorithm called Expectation–Maximization (EM). Take a look at the graphic below as an illustration of the Gaussians being fitted to the clusters. Then we can proceed with the process of Expectation–Maximization clustering using GMMs.
EM Clustering using GMMs
1. We begin by selecting the number of clusters (like K-Means does) and randomly initializing the Gaussian distribution parameters for each cluster. One can try to provide a good guesstimate for the initial parameters by taking a quick look at the data too. Though note, as can be seen in the graphic above, this isn’t 100% necessary, as the Gaussians start out as very poor but are quickly optimized.
2. Given these Gaussian distributions for each cluster, compute the probability that each data point belongs to a particular cluster. The closer a point is to the Gaussian’s center, the more likely it belongs to that cluster. This should make intuitive sense, since with a Gaussian distribution we are assuming that most of the data lies closer to the center of the cluster.
3. Based on these probabilities, we compute a new set of parameters for the Gaussian distributions such that we maximize the probabilities of data points within the clusters. We compute these new parameters using a weighted sum of the data point positions, where the weights are the probabilities of the data point belonging in that particular cluster. To explain this visually we can take a look at the graphic above, in particular the yellow cluster as an example. The distribution starts off randomly on the first iteration, but we can see that most of the yellow points are to the right of that distribution. When we compute a sum weighted by the probabilities, even though there are some points near the center, most of them are on the right. Thus naturally the distribution’s mean is shifted closer to that set of points. We can also see that most of the points are oriented “top-right to bottom-left”. Therefore the standard deviation changes to create an ellipse that is more fitted to these points, in order to maximize the sum weighted by the probabilities.
4. Steps 2 and 3 are repeated iteratively until convergence, where the distributions don’t change much from iteration to iteration.
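scikit-learn’s GaussianMixture fits a mixture of Gaussians with this same expectation-maximization procedure; the snippet below is a minimal usage sketch, where the toy data and the choice of three components are my own assumptions:

```python
from sklearn.mixture import GaussianMixture
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # toy 2D data
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
hard_labels = gmm.predict(X)        # most likely cluster for each point
soft_labels = gmm.predict_proba(X)  # per-cluster membership probabilities (mixed membership)
```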
There are 2 key advantages to using GMMs. Firstly, GMMs are a lot more flexible in terms of cluster covariance than K-Means; due to the standard deviation parameter, the clusters can take on any ellipse shape, rather than being restricted to circles. K-Means is actually a special case of GMM in which each cluster’s covariance along all dimensions approaches 0. Secondly, since GMMs use probabilities, a data point can belong to more than one cluster. So if a data point is in the middle of two overlapping clusters, we can simply define its class by saying it belongs X-percent to class 1 and Y-percent to class 2, i.e. GMMs support mixed membership.
Agglomerative Hierarchical Clustering
Hierarchical clustering algorithms fall into 2 categories: top-down or bottom-up. Bottom-up algorithms treat each data point as a single cluster at the outset and then successively merge (or agglomerate) pairs of clusters until all clusters have been merged into a single cluster that contains all data points. Bottom-up hierarchical clustering is therefore called hierarchical agglomerative clustering or HAC. This hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample. Check out the graphic below for an illustration before moving on to the algorithm steps
Agglomerative Hierarchical Clustering
1. We begin by treating each data point as a single cluster, i.e. if there are X data points in our dataset then we have X clusters. We then select a distance metric that measures the distance between two clusters. As an example, we will use average linkage, which defines the distance between two clusters to be the average distance between data points in the first cluster and data points in the second cluster.
2. On each iteration, we combine two clusters into one. The two clusters to be combined are selected as those with the smallest average linkage, i.e. according to our selected distance metric, these two clusters have the smallest distance between each other and are therefore the most similar and should be combined.
3. Step 2 is repeated until we reach the root of the tree, i.e. we only have one cluster which contains all data points. In this way we can select how many clusters we want in the end, simply by choosing when to stop combining the clusters, i.e. when we stop building the tree!
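Here is a minimal sketch using scikit-learn’s AgglomerativeClustering with the average-linkage metric described above; choosing four clusters is just an example of deciding where to cut the tree, and the toy data is my own:

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)  # toy 2D data
labels = AgglomerativeClustering(n_clusters=4, linkage="average").fit_predict(X)
```

If you want to inspect the full hierarchy before picking a cut, SciPy’s scipy.cluster.hierarchy module can build the linkage matrix and draw the dendrogram.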
Hierarchical clustering does not require us to specify the number of clusters and we can even select which number of clusters looks best since we are building a tree. Additionally, the algorithm is not sensitive to the choice of distance metric; all of them tend to work equally well whereas with other clustering algorithms, the choice of distance metric is critical. A particularly good use case of hierarchical clustering methods is when the underlying data has a hierarchical structure and you want to recover the hierarchy; other clustering algorithms can’t do this. These advantages of hierarchical clustering come at the cost of lower efficiency, as it has a time complexity of O(n³), unlike the linear complexity of K-Means and GMM. | https://towardsdatascience.com/the-5-clustering-algorithms-data-scientists-need-to-know-a36d136ef68 | ['George Seif'] | 2020-12-13 19:23:17.239000+00:00 | ['Machine Learning', 'Clustering', 'Data Science', 'Algorithms', 'Towards Data Science'] |
The Secret to Building Performance Libraries | The Secret to Building Performance Libraries
Two key reasons you need to understand delegate prototypes in JavaScript
Photo by Fabian Irsara @ Unsplash
I was reading a section in a book about JavaScript and I came across an issue (but also the power of the concept that the issue stems from) that I want to write about. I think it will be especially helpful for newcomers to JavaScript — even if you’re experienced you might learn something new!
This article will go over a known anti-pattern with delegate prototypes. If you’re a React user the concept of this anti-pattern may be familiar to you. We will also look at how you can use that concept to greatly improve the performance of your apps — just as the majority of JavaScript libraries today do.
So, if you want to create a library in JavaScript, I highly recommend you learn how you can optimize your app by delegating prototypes. This is called the Flyweight Pattern and will be explained in this piece.
If you don’t know what a prototype is, they’re objects that JavaScript uses to model other objects after. You could say they’re similar to classes in that they can construct multiple instances of objects, but they’re also objects themselves.
In JavaScript, all objects have some internal reference to a delegate prototype. When an object is queried by a property or method lookup, JavaScript first checks the object itself. If the property isn’t found there, it then proceeds to check the object’s prototype, which is the delegate prototype, and then that prototype’s prototype, and so on. When lookup reaches the end of the prototype chain, the last stop is the root Object prototype; every newly created object has that root Object prototype attached at the root of its chain. You can branch off objects with different immediate prototypes set with Object.create().
Let’s take a look at the code snippet below:
We have two factory functions here. One of them is makeSorceress which takes a type of sorceress as an argument and returns an object of the sorceress's abilities. The other is makeWarrior which takes a type of warrior as an argument and returns an object of the warrior's abilities.
We instantiate a new instance of the warrior class with type knight along with a sorceress with type fire .
We then used Object.create to create new objects for bob, joe, and lucy, passing in the prototype object that each one should delegate to.
Bob, joe, and lucy each had their name set on the instance, so we expect them to have their own properties. Finally, bob attacks lucy using bash, decreasing her HP by 10 points.
At first glance, there doesn’t seem to be anything wrong with this example. But there is actually a problem. We expect bob and joe to have their own copy of properties and methods, which is why we used Object.create. When bob bashes lucy and inserts the last targeted name into the this.lastTargets.names array, the array will include the new target's name.
We can log that out and see it for ourselves:
The behavior is expected. However, when we also log the last targeted names for joe, we see this:
That doesn’t make sense, does it? The person attacking lucy was bob, as is clearly seen above. But why was joe involved in the act? The one line of code explicitly writes bob.bash(lucy), and that's it.
So the problem is that bob and joe are actually sharing the same state!
But wait, that doesn’t make any sense because we should have created their own separate copies when we used Object.create …or so we assumed.
Even the docs at MDN explicitly say that the Object.create() method creates a new object, and it does. The problem here is that if you mutate object or array properties that live on the prototype, the mutation will leak and affect other instances that have some link to that prototype on the prototype chain. If you instead replace the entire property by assigning a new value to it, the change only occurs on the instance doing the assignment.
For example:
If you mutate the this.lastTargets.names property, the change will be reflected in the other objects that are linked to the same prototype. However, when you replace the property itself (this.lastTargets) with a new value, it will override that property only for that instance. To a new developer this can be a little difficult to grasp.
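The article’s own examples are JavaScript gists that aren’t reproduced in this extract, so here is a Python analogue of the same pitfall: a mutable value stored on the class plays the role of prototype state shared by every instance, mutating it leaks everywhere, and reassigning the attribute shadows it on one instance only. The Warrior and bash names mirror the example above, but the code itself is my own sketch, not the author’s:

```python
class Warrior:
    last_targets = {"names": []}      # shared, prototype-like state on the class

    def __init__(self, name):
        self.name = name

    def bash(self, other):
        # Mutating the shared dict leaks: every Warrior sees the change.
        self.last_targets["names"].append(other.name)

bob, joe, lucy = Warrior("bob"), Warrior("joe"), Warrior("lucy")
bob.bash(lucy)
print(joe.last_targets["names"])      # ['lucy'] even though joe never attacked anyone

joe.last_targets = {"names": []}      # reassignment shadows the shared value...
print(joe.last_targets["names"])      # [] ...for joe only
print(bob.last_targets["names"])      # ['lucy'] bob still sees the shared dict
```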
Some of us who regularly develop apps using React have commonly dealt with this issue when managing state throughout our apps. What we probably never paid attention to is how that concept stems from the JavaScript language itself. So, to state this more clearly, it is the JavaScript language itself that makes this an anti-pattern.
But why is this even an anti pattern? Can’t it be a good thing?
In certain ways it can be a good thing because you can optimize your apps by delegating methods to preserve memory resources. After all, every object just needs one copy of a method, and methods can be shared throughout all the instances, unless that instance needs to override it for additional functionality.
For example, let’s look back at the makeWarrior function:
The battleCry function is probably safe to share across all instances, since it doesn't depend on any conditions to function correctly, besides an hp property which is already set upon instantiation. Newly created warrior objects do not necessarily need their own copy of battleCry and can instead delegate to the prototype object that originally defined this method.
Sharing data between instances of the same prototype is an anti-pattern because it can become very easy to accidentally mutate shared properties or data that shouldn’t be mutated. This has long been a common source of bugs for JavaScript applications.
This practice is in use for a good reason, in fact. Take a look at how the popular request package instantiates the Har function in this source code:
So why doesn’t Har.prototype.reducer just get defined like this?
As explained previously, if the reducer were defined on each instance, it would actually degrade the performance of your apps, since every newly instantiated instance would recreate its own copy of the reducer function.
When we have separate instances of Har:
We’re actually creating 5 separate copies of this.reducer in memory because the method is defined at the instance level. If the reducer were defined directly on the prototype, multiple instances of Har would delegate the reducer function to the single method defined on the prototype!
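The memory argument carries over to Python as well: a function created inside the constructor is a brand-new object for every instance, whereas a method defined on the class is shared by all of them, which is what request gains by putting reducer on Har.prototype. The reducer body below is a stand-in of my own for illustration, not the library’s actual code:

```python
class Har:
    def __init__(self):
        # Defined per instance: every Har() call builds a fresh function object.
        self.reducer_per_instance = lambda acc, pair: {**acc, pair["name"]: pair["value"]}

    # Defined once on the class: all instances delegate to this single function.
    def reducer(self, acc, pair):
        acc[pair["name"]] = pair["value"]
        return acc

a, b = Har(), Har()
print(a.reducer_per_instance is b.reducer_per_instance)  # False: two copies in memory
print(type(a).reducer is type(b).reducer)                # True: one shared copy
```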
This is an example of how to take advantage of delegate prototypes and improve the performance of your apps. | https://medium.com/better-programming/2-reasons-why-you-must-understand-delegate-prototypes-right-now-6dac719d31f4 | [] | 2019-07-24 16:50:03.876000+00:00 | ['Frontend Development', 'Nodejs', 'JavaScript', 'Web Development', 'React'] |
The Woman Who Was Kidnapped and Kept in A Coffin-Sized Box for 7 Years | At times, tightly enclosed spaces can send shivers down the spine. Especially if one is claustrophobic, the situation can feel like a nightmare. Claustrophobia, the fear of confined spaces, is one of the most common phobias in the world.
One study indicated that approximately 5 to 10% of the global population suffer from severe claustrophobia, but only a few receive treatment. And the majority of those experiencing it can relate it to the scary ordeal of Colleen Stan, popularly known as “The Girl In The Box.”
It was the year 1977. Twenty-year-old Colleen Stan was hitchhiking for days from her hometown in Oregon to attend a friend's birthday party in Northern California.
Unfortunately, Colleen never made it to that party. She considered herself an experienced hitchhiker. On the beautiful day of May 19th, she had already turned down two rides before accepting the ride that turned her life upside down.
The Misfortune Begins —
After refusing the previous two rides, Colleen Stan finally accepted a ride from a family in a blue van. The lively vibe of the people inside the van made her opt for the unfamiliar ride.
The van was driven by a friendly-looking young man and his wife who sat in the passenger seat. The couple had a baby tied safely in its baby chair in the backseat of the van. To Colleen Stan, the family looked like the perfect blend of merry people. This couple was 23-year-old Cameron Hooker and his 19-year-old wife Janice Hooker, a pair of honest working-class people from Red Bluff, California.
The couple seemed innocent. But as the saying goes, looks are deceptive until the reality is unearthed.
For years, Cameron, a lumber mill worker, had been torturing his wife with beatings, whippings, electrical shocks. The man is a sadistic psychopath with some genuinely horrible bondage fantasies.
Janice, who deeply loved her husband and for the sake of her child's safety, silently suffered his violence. Surprisingly she also helped him fulfill his evil fantasies of holding innocent women captive and subjecting them to immoral torture.
Shortly after sitting inside the van, Colleen felt the air inside it awry. The initial friendliness of the young couple later gave her suspicious vibes. As the ride continued, the young man was continuously staring at Colleen, making her feel slightly uncomfortable. Her instincts uttered a voice of simply escaping the ride, but the excitement wrapped in attending her friend's party is what made her instincts rest and continue with the ride.
The peaceful drive didn't last long.
The Girl In The Box —
The draped personality of Cameron Hooker swapped with his original identity— the sadistic psychopath.
As time passed, Cameron soon veered off the road and drove to a remote location. He then pulled out a knife, held it up to Colleen’s neck, and threatened her not to utter a sound or else she will be killed. She was then chained inside a homemade soundproof wooden “headbox” that weighed twenty pounds.
The box caricatured a life of hell. By the odd twist of fate, Colleen Stan had to experience a difficult phase of her life. The box only confined her head, blocked the radiating sunlight, the pleasant sounds of the surroundings, and prevented fresh airflow.
The couple eventually drove the van to their house in California, where Colleen was held captive. She was then victimized through brutal forms of torture. Her wrists were tied to the ceilings; she was beaten, electrocuted, raped, whipped, and subjected to near-death experiences. The man’s wife, Janice, remained as a spectator to the torture the girl was subjected to. The man’s fantasies rested on imprisoning women as slaves, confining them to tortures with the victim’s voice silenced.
Unlike her husband's sadistic personality, it's unclear how much pleasure Janice derived from the merciless behavior. In many ways, it's quite possible that she remained the victim of her husband's violent behavior, which made her unsympathetic. Still, it does not excuse her for the crime she committed by supporting her evil husband.
The pain Colleen Stan endured incremented over time. She was trapped inside the coffin-like wooden box underneath the Hookers bed for entirely 23 hours of the day. Cameron selectively starved Colleen over the years by giving her occasional meals. She was then stretched on a medieval-style rack, was kept there for hours, and then punished severely. For an hour or two, she was made to do the household chores and babysit the children. The rest of her day was subjected to darkness — both physically mentally.
The cyclical routine of abuse that Cameron established: Isolation, fear, starvation, and torture.
In reality, Cameron and his wife had no interest in killing Colleen Stan. Instead, they just wanted her to dehumanize, manipulate, objectify, and torture her for the years to come.
The Satanic Organization —
The coffin-shaped box was a living ordeal of misery. But to Colleen Stan, the experience of immoral tortures and abuse didn’t feel much worse until one diabolical disclosure. It was the revelation that ran her impulses into fear.
The disclosure was of an evil organization — “The Company” and Cameron claimed that he was a group member. He warned Colleen that she was being eyed on by the organization, and they had already been harassing her families. Whether his claims were valid or not, the threat made Colleen believe his words.
Individual pain can be bearable, but when it involves family, we become petrified. And Colleen feared the same too. More than anything else, she started experiencing concerns about her family. She felt that her attempt to escape from the confinement might lead to the satanic organization harming her family even more. For her family's sake, she decided to stay in captivity and even signed a contract that asked her to remain the slave of the couple forever.
Earlier, Colleen Stan's family had filed a missing person report. Efforts were made to trace her, but they always met with failure. The investigators later reported that she had either been kidnapped or killed.
The Change Of Feelings —
The signing of the contract brought about life changes. Adhering to the wishes of Cameron, Colleen Stan got to leverage few freedoms.
She was allowed to breathe fresh air, jog around, and work in the garden. Surprisingly in March 1981, she was allowed to visit her family for a day. Cameron accompanied her, and she addressed him as her boyfriend in front of her family. However, the family suspected something odd in her behavior. But they shifted their concern, fearing that it might lead to their daughter disappearing again.
Collectively, Cameron's fear and the scare of the satanic organization made her step back from escaping, rebelling against the confinement, or disclosing any information to her family.
Colleen Stan was held in captivity for seven years. Towards the end of those seven years, Cameron expressed his desire to take Colleen as his second wife. The husband's aspirations left Janice agitated.
Justice Served —
Janice Hooker’s resentment grew. Her husband's desire to marry Colleen Stan fuelled her anger. Later, her guilt-ridden conscience made her realize the immoral acts she had been part of and the merciless deeds they had led her to commit.
After seven years of Colleen Stan’s captivity, Janice told her the truth: the contract she signed was bogus, and an organization called “The Company” never existed. After disclosing the facts, Janice helped her escape and pleaded for mercy for her husband, thinking that he might become humane after rehabilitation. However, when she realized that her husband would not change, she eventually reported him to the police.
Cameron Hooker was found guilty of his horrific crimes. He was charged with kidnapping and sexual assault and was sentenced to 104 years in prison. Surprisingly, Colleen Stan and Janice Hooker both live in California with names changed — but— the duo doesn't communicate.
Colleen Stan experienced a fate that, in the eyes of many, would be worse than death. Ever since then, her life has not been easy. She experienced chronic back and shoulder pain due to her confinement — yet made a positive impact by working as a mental health professional and social worker. In 2016, a movie named “Girl In The Box” featured the real-life abduction story of Colleen Stan.
Her willpower, faith, and optimism made her survive the tough times.
In the words of Colleen Stan, the psychological mindset she adopted during the tragic time was: | https://medium.com/memoirsfromhistory/the-woman-who-was-kidnapped-and-kept-in-a-coffin-sized-box-for-7-years-2ffc8267ffb5 | ['Swati Suman'] | 2020-12-30 05:59:02.781000+00:00 | ['Justice', 'History', 'True Crime', 'Crime', 'Psychology']
My Vagina Is Not Too Tight and Dry | My Vagina Is Not Too Tight and Dry
Arousal after sex abuse.
Photo by Nine Köpfer on Unsplash
CW: This piece discusses child sex abuse, rape, and their aftermath and effects on the human body.
Every time I have sex with someone new, I get asked a few questions. “Are you a virgin?” haha, I wish. Next, followed by “Are you sure you are turned on. You don’t feel right.” I don’t know how to explain to anyone how offensive it is to be told by seemingly well-intentioned men that my pussy doesn’t feel right, but I am going to try.
Growing up, I was sexually abused from the age of 5 to about 12 years old. I have been sexually assaulted more times than I can count on my hands. Being gang-raped leaves you with a certain kind of mental scar. There was no real education after all my sexual abuse that my body would render sex differently.
I get compared a lot to other women. I am frequently told how my pussy is wrong, or I am just not in tune with my body enough. I can get myself to orgasm in 3 minutes max with my fingers alone. I may not be very wet, but I can assure you I am entirely in tune with my body.
I think sex positivity is excellent. I support people who flow like waterfalls. I want people to understand that you should not use sex-positivity for othering. Per RAINN, 1 out of every 6 American women has been the victim of an attempted or completed rape in her lifetime (14.8% completed, 2.8% attempted). Statistics show there are a lot of sexual assault survivors out there. We need to talk about the aftermath of sexual assault.
For survivors of sexual assault, it is common to experience genital pain, tightness, and apprehensiveness when it comes to sex after the trauma. For me, sex is excruciating. It is not uncommon for me to bleed, have cuts, or to be dry. There are things I can do to lessen the effect of my body’s response to sex. However, anytime I consent to sex, I know I agree to pain.
If I tell my sex partner I am horny and I want them inside me, I don’t want someone who isn’t my therapist or OB-GYN to explain to me that my pussy is inept. Instead of shaming me for not being wet like your ex, pull out some lube. Listen to the person in front of you. They have been with their body since it was created; they certainly know themselves better than you do.
While it may be triggering for survivors to communicate their past abuse, it is crucial for medical professionals to know. I have gone to an OB-GYN for a pap smear. During the pap smear, the swab got stuck in my cervix. The doctor spent 20 minutes panicking, telling me I needed to relax because she couldn’t get it out.
At that moment, I didn’t have control over my body. It was traumatizing to have a doctor freak out while a swab was stuck inside me. I felt like I was being raped all over again.
Since that day, I found a new OB-GYN at the advice of my therapist. My therapist explained to me that people who are sexually assaulted often have trouble with arousal and pap smears. They can be very triggering for survivors like myself.
I found a new doctor and explained my history. My doctor understands how my sexual trauma manifests, and I’m delighted to say she has never got a swab stuck in my cervix. They offered full anesthesia for pap smears and IUD insertions.
Nothing is physiologically wrong with my body. I have had a lot of testing done and conversations with my doctors. I have buckets full of trauma that don’t make great first impressions. Sex is complicated. Sometimes I do get wet, but it’s rare.
To enjoy sex, I have to push through the pain. After penetration starts, I generally get wet, but it takes sex to happen first before my body can let go and be free. I love sex; it is genuinely my favorite thing. I am not ashamed of how my body performs. I am glad I can take back the power that was robbed of me.
I support people who are exploring themselves for the first time. However, I don’t enjoy men thinking that they’re my teacher, and they’re going to awaken my pussy for the first time. I don’t like the implications that because my body preforms differently, I need instruction.
I would much rather have a conversation clothed regarding sex. Sometimes I forget not everyone comes from a BDSM background where we all take half an hour questionnaires before even touching each other. There is something to be learned from that, though. Asking your partner five things they love about sex and five things they can’t stand in sex goes a long way. What does your ideal night in bed look like? Is there anything I can do more of?
If your partner can’t articulate what they like in words, ask them to show you their favorite porn. Maybe it is written erotica that gets them going. Even if you are vanilla, I think researching BDSM negotiation helps so much.
BDSM goes into a highly technical level of sexual negotiation. Questions about where you can touch someone are common, and any triggers and medical conditions are laid out. I know for some this may be overkill. However, it is a valuable tool for me to explain sexual boundaries. I use it for vanilla hookups all the time. So far, no one has gone running away; if anything, BDSM checklists and questionnaires have created more sexual satisfaction in my life.
Is your partner having a hard time but can’t say why? Make a safe word so things can stop, no questions asked. Have a partner who goes nonverbal regularly during sex? Give them a small ball to hold, let them know dropping it means sex ends no questions asked.
Life is too short for having lousy sex after trauma. I would rather have an awkward conversation a thousand times over than be triggered or, worse, insulted because of my past. Being patient and keeping an open mind is critical. Usually, you never know what someone has been through until it's too late. | https://medium.com/sexography/my-vagina-is-not-too-tight-and-dry-82502e6f232e | ['Beth Daily'] | 2020-08-14 19:20:19.033000+00:00 | ['Relationships', 'Sexual Assault', 'Self', 'Mental Health', 'Sex']
What Are Your Options When You’re No Longer Attractive For The Job Market? | Do you want me?
I’ve been avoiding the unavoidable task of looking for a job mainly because I’m already aware of what a complete waste of time it would be — sifting through lackluster job posts that provide just as much excitement as the obituary section of the local newspaper.
I’ve been out of work for about six months now and while I’ve been able to sustain myself with freelance work and the blessing of not having to fork out thousands of dollars for rent and utilities — as 2018 progresses — there is the nagging reminder that my timetable is patiently waiting for me to honor outstanding commitments.
The last full-time job didn’t end well. The last couple of years have alerted me to the fact that it’s almost impossible to find editorial jobs that live up to the promises of maximizing your worth with appropriate compensation, the security of steady hours and a robust benefits package.
After the temp job at ABC Digital ended abruptly after just two weeks — I returned to hustle mode (not that it ever stops) for a few more weeks before being referred to the digital content arm of another media giant. The duties were simple enough — and the best part was the ability to work from home.
Interestingly enough — even that quickly took its toll — as it became clear that I actually liked people a lot more than I realized and truly missed the daily interactions.
But back to the job. I was assigned a vertical that required sifting through large stacks of recycled content and choosing the ones that were pitch-worthy in order to keep the homepage well-stocked. This also meant periodic conference calls with partners from notable publications — who were desperate to retain their positions as the “go-to” outlets.
It took about a month for me to start buckling under the uninspiring regimen of navigating the strains of CMS — in search of content that all looked and sounded the same. As a writer it was intolerable to expect me to contribute to the symptoms of an ailing industry. I steadfastly bitch about how challenging it is to find original content that feeds the soul — and yet here I was earning an unremarkable paycheck as the reward for encouraging the extinction of something I was supposedly championing.
As the second month came to an end — I began to consider that all the pluses about my current gig were fading away. And apparently I wasn’t the only one dying a slow death — because the high turnover was another indication that molding us into bots wasn’t going to be as easy as our employer envisioned.
Still — I was more than happy with the steady paycheck and was able to muddle through my guilt and intense fear that my writing and reading comprehension skills were going to suffer from the debilitating exercise of sourcing for badly written material — for hours and hours.
By the time I got the early evening call from the recruiter who apologetically confirmed that the next day would be my last — I was already mentally prepped for my imminent exit. I was no longer able to stomach the clickbait headlines and badly-constructed sentences — not to mention the endless sessions of providing captions for generic images.
The kind lady who broke the unexpected good news seemed a lot more upset than the person who just lost her job and all her health benefits. I did my best to assure her that I was used to the erratic job market — due to the experiences I have accrued working for big name corporations who rely on their reputations to hide how they will end up screwing you over — in the end.
I wasn’t that blatant of course — but even if I had been — based on her job description — it’s hard to imagine that she would disagree.
Years ago — when I was stuck in a corporate job at a top financial institution — all I wanted to do was wait for the economic crisis to blow over — so I could venture out and land a real editorial job. By the fall of 2013 — I was able to write full-time even though I wasn’t getting paid for my services. Before then — I was supplementing my steady paycheck with freelance jobs in order to build up my cred.
By 2015 — I was getting jobs in digital from fancy start-ups and media companies that all paid shit money. The other thing they had in common was the reluctance to make you a permanent staff member in order to reduce the costs of such an investment.
It began to dawn on me that the job market had shifted into something I never anticipated. If I had known back in the spring of 2013 — that the digital world would meekly surrender to the content-churning machine that it has become — I wouldn’t have walked away from the option of holding down a full time corporate job.
The gamble wasn’t worth it when you consider the toll it took on my stability — as I risked it all to prove that I was capable of realizing the dream of calling The New York Times — home — or any of the other notable outlets that I have since discovered aren’t as illustrious with words as I had assumed.
So — now I’m back where I started. Reality has hit hard and I’m reminded of how relentlessly deceiving and unforgiving the editorial world can be. Aside from the intense competition that has only grown more violent with the consistent help of Twitter — there’s also the notification that nothing lasts for that long — so don’t get too comfortable.
Established portals with the best of intentions will coerce you into pouring your heart and soul and end up stumping all over it when the jig is up. Everything has a time limit and the only way to stay ahead of the curve is to carve out a space with your name on it.
At this point in my life — finding a job that not only suits my skill set — but also positions me for an enviable trajectory is something that I can’t fathom ever transpiring. I’m not young or old — which makes it harder for employers to know what the hell to do with me — and that’s if they’re interested enough to ponder.
The immense appeal I had almost five years ago has evaporated and now I have no clue how to convince anyone that I’m the best person for a job that I don’t even want.
So — what are your options when you’re no longer attractive for the job market — but have to work to sustain yourself and dignity?
Maybe — it lies in your priorities. At this point — it’s highly unlikely that I will ever hold an editorial position in a corporate setting that will allow me to blossom into a managerial position. So — I will have to take what I can get — while laboring on my own shit.
I will have to map out goals for future projects that will keep me motivated and excited about a craft that I still love — even if the climate is giving me plenty of reasons to hate it. I have to manifest my destiny without the false security of outside forces that only conspire to use you up — before tossing you out without a respectful exit.
I still find myself attractive among the ugliness of what the industry is constantly releasing — and instead of trying to convince suitors who aren’t interested — it’s time to accept the journey of re-discovery through self-empowerment that can only lead to the life of my dreams.
And that’s not work — it’s passion. The greatest love of all. | https://nilegirl.medium.com/what-are-your-options-when-youre-no-longer-attractive-for-the-job-market-da2c3ad87394 | ['Ezinne Ukoha'] | 2018-03-09 20:41:12.019000+00:00 | ['Life Lessons', 'Work', 'Media', 'Careers', 'Journalism'] |
The Mayans’ Lost Guide To Doing Data Science In Fast Paced Startups | The question is: what are those key practices that are left unsaid while working in a fast-paced and highly chaotic environment? Are AI foundations and computer science degrees enough to build scalable models in real life? It’s not as easy as eating cotton candy.
Driven by intellectual progress, I decided to do a review of my past experience in working with Machine Learning and Deep Learning models. In this article, I’ll be using my work experience on one of my recent projects where the aim was to predict a user’s purchasing probability for our various offerings. The purpose of this project was to help the Performance Marketing and Growth Team at HealthifyMe to evaluate their efficacy and efficiency of our sales engines respectively. I am sharing below the insights I’ve had from the same on what one needs to strive towards while solving data problems: | https://medium.com/healthify-tech/the-mayans-lost-guide-to-doing-data-science-in-fast-paced-startups-2531128aecd0 | ['Saurav Agarwal'] | 2020-09-12 16:18:33.279000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Startups', 'Data Science', 'Product Management'] |
We Need Trained People to Deal with Mental Health Breaks | Recently I was researching a post on Black Americans killed by police. Some of the individual stories that resonated with me, were the victims who suffered from mental health issues.
This is not a problem associated with just the American police; British police are no different. Five years ago I worked with a young lady with mental health issues. As with many of these people, she was self-medicating with alcohol. During a mental health break, she had been drinking and was hiding in a bush, keeping away from people. The police approached her and, because it was dark, they shone their lights in her face. She states she couldn’t see it was the police because the light was in her eyes. They asked her to get out of the bush and she refused. They then pepper-sprayed her in the face and dragged her out. She was scared, still not realising it was the police who had her.
They arrested her and put her in a cell to sober up before they interviewed her. When I went to meet her the next morning to be her appropriate adult, what I saw shook me. She was black with bruises down both arms and legs. There were no gaps between her bruises; she had been brutalised. This was not only my opinion but that of the nurse I took her to afterwards. When the case was investigated by the police complaints commission, it will not surprise anyone that it was thrown out. No case to answer.
Mental health services are at breaking point.
In Britain, mental health services are at breaking point. Too few people are diagnosed and offered the support they need to function. With these services cut, the pressure is placed on the police and ambulance service.
If services were available, we wouldn’t need untrained personnel dealing with these individuals. A trained person would know what to do. Instead of arresting my young lady, they would have spoken to her, calmed her down and tried to move her to a safe location. They could have kept her safe, without arresting her.
What we need is a fourth service dedicated to mental health. If a person is experiencing a break, these could be the first call. I understand some may be dangerous and require the police. A service to assess this initially would help. A group to respond to non-violent cases.
All countries could use this service.
America is no different; it could also use this service. When we look at the case of Daniel Prude this is evident. Prude was a 41-year-old who, during a mental health break, ran into the road naked. He became agitated when the officer put a spit hood on his head. To calm him he was placed on the floor. The officer's full weight was forced onto his head, and he died after being asphyxiated.
Anna Rosser would have also benefitted from this intervention. She was shot dead when police entered her house while she was holding a knife. Her boyfriend, unable to cope with her mental health break, had dialled 911.
All citizens deserve equal treatment regardless of colour, gender or mental health. If as a species we cannot employ this service, the least we can do is admit there is a problem. Train the existing staff in mental health. Having worked with mental health for twenty years, I would volunteer an evening a week to work this service.
Mental health problems are on the rise across the world. The state of life at present does nothing to help this. A quarter of the United Kingdom suffers from mental health problems. Other countries have similar statistics. It is time we represented this percentage fairly and gave them access to trained help. | https://medium.com/mental-health-and-addictions-community/we-need-trained-people-to-deal-with-mental-health-breaks-de94684e6954 | ['Sam H Arnold'] | 2020-10-07 13:42:45.086000+00:00 | ['Ideas', 'Equality', 'Services', 'Mental Health', 'Diversity']
How to Become a Creative Writer | How to Become a Creative Writer
3 strategies for engaging a reader’s imagination with your writing
Photo by Clever Visuals on Unsplash
When we think of creative writing, it can get a bit confusing because it seems like a redundancy. Isn’t the very act of writing creative? So why is there writing and creative writing?
Unless you’re copying the words of someone else, all writing is creation. Even writers of owner’s manuals, which don’t feel particularly creative, have to create clear instructions on how to properly use a product.
So why the differentiation in terms?
A working definition of creative writing
Creative writing refers to writing that utilizes and engages the imagination, and that’s where the distinction is found. Creative writing is approached differently than writing that is more traditionally non-fiction, such as straightforward reporting or informational essays. In other words, any other type of writing that doesn’t tend to engage people on an emotional level.
Creative writing is about making people feel and imagine
Creative writing is called creative writing simply because you tend to use your imagination more in writing it than you do with more informational non-fiction writing. Creative writing features more figurative and sensory language than you would encounter in a news report or an informative article.
Creative writers approach the blank page with an incredible weight of responsibility to create something that will resonate with their readers on an emotional level and stick with them long after they’ve read it.
3 Ways to Put Creative Writing Into Practice
Creative writing can be broken up into three main categories: fiction, poetry, and creative nonfiction. We’ll look at each one a little more closely.
1. Writing Fiction
Fiction takes many forms, such as novels, short stories, and screenplays. What makes fiction unique is that it is the crafting of a story that doesn’t actually exist. We create the details of the story out of our own imaginations.
The key to writing compelling fiction is introducing your readers to a complex character they can invest in over the course of the character’s journey to try to get something she wants.
Stories are essentially about a character who wants something, faces opposition to get it, and undergoes a significant character change along the journey of trying to attain the goal.
Stories take people on a journey and engage them on a personal level, and they’re a fun way to exercise your creativity and tap into your imagination.
2. Writing Poetry
When you woke up this morning, how did you feel? What about on your way to work? How about when the weekend finally arrived? When you had a disagreement with your significant other? When someone told you that you were doing a great job?
We have a wide range of emotions that we feel throughout any given day. This is the human condition. And the worse thing you can do is bottle up your emotions.
Poetry is all about emotional expression. Poetry uses a variety of literary and poetic elements, such as figurative and sensory language, rhyme, rhythm, repetition, and other poetic devices.
There are many different forms of poetry you can learn about, but the point is to capture emotions and communicate them in a concrete way through poetry.
For example, “His words cut like a knife” is a more creative and poetic way of saying he said something to cause her emotional pain.
3. Writing Creative Non-Fiction
Creative non-fiction is telling stories that are true in a creative way. A creative writer might learn about a story of a man who rescued a small child from a burning building. A journalistic approach would just report the facts: who the man was, who the child was, where the fire was located, how the man got the child out, etc.
A creative writer would recognize the immersive value of the story and want to write it in a way that engages people’s senses and emotions.
Ron Chernow’s outstanding biography Alexander Hamilton, the basis for the hit Broadway musical Hamilton, is an example of creative non-fiction that engages the reader in the story of the nation’s first treasury secretary using concrete and image-rich language, crafting a compelling narrative that tells a true story.
Creative non-fiction is about finding important true stories that will impact people and tell them in a way that is accurate, creative, and memorable.
For some excellent advice on writing creative non-fiction, I highly recommend Jack Hart’s book Story Craft. | https://medium.com/inspired-writer/how-to-become-a-creative-writer-9657a36f351f | ['Tom Farr'] | 2020-07-30 21:02:49.026000+00:00 | ['Poetry', 'Writing', 'Fiction', 'Creative Writing', 'Creative Non Fiction'] |
Due Diligence | Bobby Chiu
We limn the darkness as best we can,
trying to shine light into corners
where the scariest monsters await.
Such shining demands deft courage.
Shaky hands hold the illuminating torch.
Fear is a wise and constant companion.
The darkest corners are in the mind.
Some monsters despise the light.
Look hard, but be wary what you loose. | https://medium.com/geezer-speaks/due-diligence-9679f902a3a0 | ['Mike Essig'] | 2017-09-16 07:34:22.633000+00:00 | ['Poetry', 'Consciousness', 'Dreams', 'Fear', 'Psychology'] |
Diana Heinrichs, Lindera: “AI is built for the masses” | Diana Heinrichs’ mission is nothing less than to maintain the mobility and autonomy of elderly citizens. With her company Lindera, she founded an app that can analyze body movements and detect fall risks.
Lindera
In our REWRITE TECH podcast, we talked to Diana about her personal motivation to found Lindera and what challenges she had to overcome while founding a tech startup in Germany.
Diana Heinrichs:
As life expectancy rises, the average age of society increases. That not only brings structural problems but also personal difficulties when family members get older and need an increased level of care. The risk of falling rises sharply with age, and the prevention of falls is a common goal in elderly care.
This is exactly where Diana Heinrichs’ idea starts. Lindera translates geriatric assessments into a software-driven product that can be used on smartphones. What at first looked like a relatively simple task was soon confronted with pushback, as Diana remembers: “I talked to Fraunhofer ITWM and they said it’s simple, but it won’t work. You can’t analyze three-dimensional pictures through a smartphone camera.”
But Diana wasn’t thrown off by the scientific evaluation. She formulated the mathematical problem and reached out to every single PhD candidate in northern Germany. “This is how we set up our data science team and solved the problem. Now we are the only company in the world that can do 3D geriatric assessments.”
Lindera didn’t just find a solution; they are meeting the scientific gold standard, and their solution is evidence-based. And just as important: it is easy to use. People only need to record a 30-second video of someone with their own smartphone and provide some additional information regarding the subject’s medication. The 2D recording then gets split into many slices, so neural networks can transform it into a 3D model.
“It’s a really nice self-service that puts the senior in the centre of digital attention.”
Tech for the elderly
Initially, the idea of founding Lindera was triggered by a personal experience. Diana’s own grandmother received care provided by Diana’s mother and, somehow, it just worked. “How can it be that in my family it just works?”, Diana wondered.
To get to the core of this, Diana paused her job at Microsoft Germany, took an internship in an outpatient care service, and talked to many specialists in the field. “When we look at our ageing society,” she found, “it is absolutely clear that we need new solutions.” Diana decided to focus on fall prevention, one of the most important areas of elderly care and geriatric assessment that hadn’t been digitalized yet.
Doing the impossible
The scientists at Fraunhofer weren’t the only ones that didn’t believe in her idea as she recalls: ”Something I know very well is: ‘yes but’”. Unlike in the United States, it’s uncommon in Germany to combine science and business, limiting the innovative possibilities. “We can’t just outsource innovation to Fraunhofer”, Diana said.
From her experience in selling her solution to care homes, Diana stumbled upon many people, who don’t understand how artificial intelligence works and demand exclusivity. “AI is built for the masses. It’s about to build a database, which we can learn from”, she states. In order for Germany to keep pace with innovations, the understanding of technology must improve, Diana says: “I hope we can get beyond this narrow mindset.”
Listen to REWRITE TECH with Diana Heinrichs from Lindera
Listen to the full conversation with Diana Heinrichs on our REWRITE TECH Podcast, which is available on all common audio streaming platforms including Spotify and Apple Podcasts.
Don’t miss out on our other episodes, including:
Find out more about REWRITE TECH. | https://medium.com/rewrite-tech/diana-heinrichs-lindera-ai-is-built-for-the-masses-d417c5d2f581 | ['Sarah Schulze Darup'] | 2020-12-17 14:49:57.705000+00:00 | ['Artificial Intelligence', 'Podcast', 'Women In Tech', 'Healthcare', 'Female Founders'] |
Amsterdam: A History of the World’s Most Liberal City | Russell Shorto cherry-picks the most interesting characters and events from his research into the city’s history.
Russell Shorto’s Amsterdam: A History of the World’s Most Liberal City is such an enjoyable book in part because Shorto cherry-picks the most interesting characters and events from his research into the city’s history. Shorto relates these stories in his clear, easy-to-read style creating a successful popular history as well as making a light foray into intellectual history. Although it covers the city’s history in at least cursory fashion from its foundation in 1200 to the present, it is far from comprehensive; there are large gaps especially during the city’s decline in the 18th and 19th centuries.
Shorto also has a thesis to prove: Amsterdam is the most liberal city in the world. He admits that this thesis is difficult to establish. For one, there are vagaries in defining the word “liberal” (liber in Latin means “free”) and it means different, even contradictory, things in different eras (a “Liberal” in the Netherlands is actually an economic liberal, and thus more of a conservative). Any claim to Amsterdam’s being the most “liberal” city in the world relies as much on its role in history as its current status as a medium size city with world-class culture, diversity, and, famously, official tolerance for soft drugs and legal prostitution (the latter actually isn’t so unusual in Western Europe). Shorto confronts the uncertain nature of historical influences (how much “credit” can we really give to Amsterdam for the development of Western freedoms?).
Shorto describes how Amsterdam’s liberal mindset was shaped by its water-logged origins. The harsh situation of the first settlers of Amsterdam, their constant struggle against the sea and the river delta called for collective action in water management: building dikes, windmills, bridges, and importantly, committing to this infrastructure for the long term. Such collective action, so typical of the Dutch mindset, proved in the end to be beneficial to the individual as well. Humble individuals ended up owning real estate that they had wrought from the sea. The early Amsterdam settlers were remote from kingly power. So after they reclaimed the land from the sea, it didn’t belong to a church or king. “It was theirs.” (253) (The statement about the land not belonging to church or king and the “It was theirs” line is repeated nearly verbatim on page 279).
Shorto of course ascribes economic factors as essential conditions for this rise of “individualism.” For example, in 1500 peasants owned 45% of the land in Holland as opposed to 5% in the rest of Europe (44). They were vested owners in their land, and with it came individual freedom of action that was hard to find in other parts of Europe, where peasants were generally bound to the manorial system.
So later on, when Philip II of Spain tried to roll back the Protestant Reformation in the Netherlands, one reason that a critical mass of Dutch people supported the revolt was their own vested interest in their country. (Another was that Philip’s plans were so draconian–execution even if you recanted and returned to Catholicism–that many Dutch Protestants were literally fighting for their lives.)
***
There was nothing inevitable about Amsterdam’s rise to becoming a global economic powerhouse. It got some lucky breaks along the way. The first was the “Miracle of Amsterdam” (1345) when a dying old man took his last communion, vomited it, and the communion host remained whole. The women who were caring for him threw it on a fire, but the wafer did not burn! A miracle was declared and Amsterdam became a place of pilgrimage. Many churches were built and religious tourists came and bought trinkets, not so unlike tourists shopping in the countless souvenir shops one finds in central Amsterdam today. (Shorto omits the fact that another key founding myth of Amsterdam features vomit: reputedly a seasick dog vomited on the spot where early settlers decided to build the first dike, near Nieuwendijk.)
Shorto relates how early technological breakthroughs helped Amsterdam get off the ground. Discoveries in herring preservation (the fish’s liver is a natural preservative) led to the herring bus, a floating herring factory where the herring was packed and preserved on board while the ship stayed at sea. This and other advances eventually enabled the Dutch to dominate the north European market. The herring bus led in turn to a rise in shipbuilding, which became the key infrastructure for Dutch trade and the later colonial empire. Windmills were adapted to become sawmills, so that the Dutch imported German wood and sold finished lumber for export to the English.
The Dutch were also early adapters of the printing press which led to the dissemination of literacy, unorthodox ideas, and the development of the greatest publishing center of early modern Europe. (Shorto avoids the issue of whether a Dutchman really invented the printing press before Gutenberg, as some Dutch scholars have claimed).
The printing press led to an explosion in the dissemination of knowledge. Erasmus of Rotterdam emerged as a key figure in challenging abuses of the church and the asserting the primacy of the individual in interpreting the Bible according to his or her own lights. Erasmus in turn inspired Martin Luther and the Protestant Reformation. If the essence of Protestantism is that the individual read scripture for oneself (rather than rely solely on Church authority), then Erasmus and Luther each played titanic roles in changing history in favor of the individual (instead of the Church). These ideas found ready reception in the Netherlands, including Amsterdam, especially after Calvin’s important refinements.
Shorto’s chapters on the Dutch golden age are an enjoyable retelling of what he calls “one of history’s classics.” Shorto describes the rise of the Dutch East India Company (VOC is its Dutch acronym) as the first modern corporation (a permanent company with shares for sale to anyone) and its role in reshaping parts of Asia and Africa, even as it enriched Amsterdam. The building of the Amsterdam’s canal belt (grachtengordel) was the largest planned urban expansion in Europe since Roman times (according to Geert Mak). Artists such as Rembrandt flourished, and portraiture for the first time was within the reach of the middle-class (not just aristocrats). Readers familiar with Simon Schama’s The Embarrassment of Riches, Jonathan Israel’s The Dutch Republic or Geert Mak’s Amsterdam: A brief life of a city may not find much new here, but Shorto is an exuberant story-teller and his enthusiasm for the period is infectious.
The fact that three of the seminal philosophers of the early modern era–Descartes, Spinoza, and Locke–all wrote and published their important works in Amsterdam is strong evidence that the light of liberty may never have burned as brightly in the world at large without the freedom of expression that Amsterdam allowed in this period. Descartes and Locke both came here and published works that they could not have published in France or England. Spinoza was born in Amsterdam, briefly ran his father’s business, and so breathed in its spirit of freedom his whole (short) life. The fact that Spinoza was excommunicated from the Jewish community makes Shorto see him as distinctly modern, the first prominent European to not belong any religious community and thus an individual par excellance. Spinoza’s Tractatus was perceived as such an incendiary challenge to organized church and state that it was banned even in the Netherlands; but it was published, read, and discussed anyway and its influence on the French Enlightenment in the following century was enormous.
In 1672 (the rampjaar or “year of disaster” in Dutch history), England, France and even the Bishop of Münster invaded the Netherlands. Although Amsterdam itself wasn’t invaded, the nation as a whole was weakened at land and sea, and soon after had to face a series of major wars against Louis XIV of France. Shorto winds up his coverage of the Golden Age there, even though other historians point out that while Amsterdam weakened in many ways, it was a slow decline, and the city remained a key financial center well into the 18th century. (And the Netherlands even played a key role in helping to fund the early days of the American revolution. See Barbara Tuchman’s The First Salute.) Shorto does offer an entertaining account of William III”s invasion of England, which English historians white-washed into the “Glorious Revolution” partly in order to preserve the notion that England hasn’t been invaded by a foreign power since 1066.
Shorto’s coverage of the eighteenth and nineteenth century is achieved mainly through sketches of some emblematic characters, such as Aletta Jacobs, the first Dutch woman to become a doctor and an early proponent of birth control and Eduard Dekker, the author of the classic anti-colonial novel Max Havelaar. So we see that there is still a tradition of enlightenment in the Netherlands, even during its period of economic senescence.
The twentieth century showed signs of Amsterdam’s resurgence with another massive enlargement of the city and rise of modern infrastructure and social services. But World War II was a dark chapter in the city’s liberal history. For various reasons, such as their concentration in Amsterdam and the usual Dutch efficiency at record-keeping, the survival of Dutch Jews was the lowest percentage of any country in Europe. Amsterdam did hold a general strike as a protest against the Nazi deportment of Jews, which historian Loe De Jong calls the “the first and only antipogrom strike in human history.” (268) But the Dutch carried a heavy conscience against their country’s relative lack of action to resist the deportments (compared to Denmark, for example). Shorto sees World War II as a low-point of Dutch liberalism. The Dutch, or the Amsterdammers, failed to stop the Nazis from shipping off their Jews to the death camps. So liberalism seems to some extent a position of expediency more than pure idealism.
The post-war period brought important changes to Amsterdam and a resurgence in liberalism, facilitated by a rise in affluence as Europe rebuilt. Outspoken champions of what would later by called gay rights, such as designer Benno Premsela, helped “normalize” homosexuality in The Netherlands. The Provo movement was hugely influential in challenging the status quo and created a climate where the first marijuana coffeeshops were legally tolerated in the early 1970s. Provo also highlighted the importance of cycling and helped call for building what would become the greatest cycling infrastructure of any city in the word.
***
Shorto doesn’t seriously challenge his thesis that liberalism was born in Amsterdam. He writes that if one were to award geographic medals for places that contributed to liberalism, then London, Paris and Monticello (Thomas Jefferson’s home) would all be candidates (18), but he doesn’t really pursue the claims of these other cities.
He also gives Amsterdam perhaps too much exclusive credit for the growth of capitalism, even while other Dutch towns such as Haarlem, and towns from the southern Netherlands (Brugge, Antwerp, etc.) also played important roles. Nor does he mention the important contributions to early capitalism of Venice or the towns of the Hanseatic League (Hamburg, Lübeck, etc.)
Professional historians will find much of this familiar ground and may find little new in it. Unlike in his The Island at the Center of the World, Shorto here doesn’t do much original archival research; this is too large a subject and so he relies mainly on secondary sources. The source notes also don’t cover quite the origins of all his anecdotes. When Shorto shares the story about a seventeenth century French naval commander who was surprised that a Dutch sea captain swept out his own quarters (while the French commander had a servant to do it), he doesn’t cite his source for this story. (It’s either from The Embarrassment of Riches or Israel’s The Dutch Republic.)
Shorto’s interviews with Frieda Menco, a Holocaust survivor who knew Anne Frank as a girl, and Roel van Duijn, founder of the Provo movement, which he incorporates into the narrative as original research, add both an anecdotal quality and fresh material to the narrative.
Passages about Shorto’s own Amsterdam experiences add a personal dimension to the book. Shorto interjects himself into the text perhaps more than usual in a popular history. He has until just recently lived in Amsterdam, and thus has had the opportunity to meet interesting personages from its twentieth century history. He mentions several places where he has lived in Amsterdam (and where he works, at the West Indian Company house), and the people connected to these places, and sees them as emblematic of the city’s history.
***
This is not a perfect book. The thesis is intriguing, but is hard to prove. In the end, the claim that Amsterdam is the most liberal city in the world doesn’t matter that much. It supplies a theme for the work; it makes for a good (if hyperbolic) meme. Shorto focuses on the times and events that support his thesis or add color to the narrative. While some people or eras are covered in depth, there is relatively little about the long, exhausting wars against Louis XIV, the 18th century in general, the Napoleonic wars, the 19th century in general (though “Multatuli” is covered) or World War I and its effects.
Shorto makes a few strange claims such as: “There is even a case to be made that our modern idea of “home” as an intimate personal space goes back to the Dutch canal houses of this period.” (19) Well, maybe. But farm houses have long been intimate personal spaces devoted to family; and while the Dutch canal houses (for the merchant class) didn’t have the multi-family scale of a manor or a castle, calling these canal houses the origin of the modern concept of “home” seems like an overreaching claim. What is true is that the Dutch middle-class enjoyed an unparalleled rise in their standard of living during the golden age, giving them the ability to afford coffee, tea, sugar, and even portraits in which their own (modest) lives were deemed important enough to be depicted. But surely the Dutch notion of gezelligheid (coziness) has contributed to urban connotations of home.
Shorto also sees the individualism within Dutch society as “seemingly contradictory” (253) to the strong collective tradition in Dutch history. But collectivism and individualism are not really “contradictory”; these are abstractions, and within nearly any society they are both present, but in different measure. They are competing principles, but any enlightened society or philosophical system will find its own balance between the extremes of individuals running amok without collective bonds (as in a libertarian’s fantasy of the U.S., or in an Ayn Rand novel) and larger organizations reducing individuals to utter insignificance (the medieval Church, the absolutist state, the Borg in Star Trek).
When Shorto points out that Amsterdam is a remote place because it lies at the same latitude as Saskatoon, Saskatchewan (16), it is totally unconvincing comparison. Amsterdam is close to major river deltas (transportation networks) in Western Europe, so despite its more northerly latitude (compared to, say, Paris), this is a useless statement. London lies at a similar latitude to Amsterdam. Is London remote? From what? Amsterdam was originally remote because it was scarcely habitable. But once the water problem was managed, the city’s location eventually became an asset, helping it to dominate the Baltic trade for example.
When Shorto describes the video of Anne Frank appearing in a home movie from 22 June 1941, he mentions other contemporary events from the war (e.g., the Germans had just conquered Crete), but fails to remind us that this was also the very day that Hitler launched operation Barbarossa, his invasion of Russia (partly in order to capture Jews in the Pale of Settlement). Since it’s the very same day, this might have been a useful (and dramatic) fact to mention.
And not that it matters, but the Mellow Yellow marijuana coffeeshop (which claims to be the first one) is not on Weesperzijde as Shorto writes (301), but on Vijzelstraat.
Dutch readers are likely to have varied reactions to the book, depending on their sensibility. Some might be more skeptical of Shorto’s claims about the unique contribution of Amsterdam. The book tends to treat some of features of Dutch national policy as if Amsterdam, a mere city, were largely (solely?) responsible for them. “But then too, Amsterdam is not the Netherlands. I have been guilty in this book of sometimes seeming to equate the two. Every Dutch person who is from outside the city will be ready to counter the notion.” (281) Indeed.
But these are all quibbles. Amsterdam: A History of the World’s Most Liberal City succeeds as a popular introduction to a glorious history. Much as Shorto justly receives credit for drawing more attention to the role of the Dutch in early New York history in his The Island at the Center of the World, now he will draw accolades for emphasizing the role of Amsterdam in the explosion of new ideas in the 17th century that inspired the 18th century philosophes, the Founding Fathers of the U.S., and other philosophers of freedom ever since.
[ See other reviews of Dutch history:
The Island at the Center of the World (also by Russell Shorto);
The Dutch Republic by Jonathan I. Israel;
The First Salute by Barbara Tuchman] | https://dan-geddes.medium.com/amsterdam-a-history-of-the-worlds-most-liberal-city-b6581a5e0fb9 | ['Dan Geddes'] | 2019-11-19 18:46:26.665000+00:00 | ['Books', 'Dutch', 'Netherlands', 'History', 'Amsterdam'] |
The Enchanting Lakes of Pakistan | The Enchanting Lakes of Pakistan
Enchanting Lakes of Pakistan
Pakistan is a very beautiful country to visit. It has a lot to offer to tourists from all around the world. It has beautiful lakes, rivers, greenish meadows, beautiful valleys, and the world’s most beautiful peaks.
Today in my story I’ll take you on a tour of Pakistan’s enchanting lakes. I hope you will fall in love with these beautiful lakes of Pakistan.
Saif-ul-Malook Lake Kaghan Valley
Saif-ul-Malook is the most beautiful and one of the most famous tourist destinations in Kaghan valley. This lake is known as the lake of Fairies.
Lulusar Lake Kaghan Valley
Lulusar Lake is another beautiful lake of Kaghan valley which is almost 30/40 mints far from Saif-ul-Malook lake.
Every year a large number of birds from Russia come to this lake.
Lulusar is actually the name of a combination of high hills and lakes, tourists who come to Naran must come to see Lulusar Lake and Lulusar Lake is the main source of water for the river Kunhar. The water of the lake is as clear as glass and the reflection of the snow-capped mountains around Lulusar captivates the viewers.
Ansoo Lake
This lake is only accessible between June to October. It takes almost 8 hours of trekking from Saif-ul-Malook lake to reach Ansoo lake. Adventure lovers do camping at Ansoo lake.
Ansoo Lake — Image by Autour
Dudipatsar Lake
Dudi Patsar Lake is located at an altitude of 4175 meters in the extreme north of the Kagan Valley and can be reached in a four-hour drive from Jalkhad. In the local language, Dudi Patsar means milky white water lake.
The reflection of adjacent snow-capped mountains in the crystal clear water of the lake makes it look like a milk canal from afar. This the main reason it is called Dudipatsar lake.
The magical scenery of the area compels tourists to pitch their tents here to enjoy the natural beauty. Getting there is a very difficult and arduous task. Access to the lake is possible by walking for at least seven to twelve hours through extremely difficult paths. After 12 hours of difficult trekking when tourists see this lake their tiredness disappears magically.
Satpara Lake
Satpara Lake is located at an altitude of 8,500 feet above sea level. This lake is full of fresh water and beautiful snow-capped mountains around this lake make it more beautiful.
Rush Lake
Lake Rush is the highest lake in Pakistan and is located at an altitude of 5098 meters near a peak called Rush Pari. It is the 25th highest lake in the world, accessible via the Nagar and Hopper Glacier routes, and the scenery is breathtaking.
Karombar Lake
Karomber lake is the second Pakistani and 31st height lake in the world. This lake is located between KPK and GB. At an altitude of 14,121 feet, the lake is a biologically active lake. The lake is about 55 meters deep, about 4 kilometers long, and 2 kilometers wide.
Located in the Broghil Valley, the beautiful Karombar lake is located at a distance of more than two hundred and fifty kilometers from the city of Chitral. The Broghil Valley is famous for its beautiful scenery, snow-capped peaks, magnificent Karomber Lake, and more than twenty-five small lakes, as well as three major passages.
Haleji Lake
Haleji Lake is located 80 km from Karachi on the National Highway, which was built by British authorities during World War II as a safe water reservoir. The lake, which is about 22,000 acres, has a diameter of 18 km.
Millions of birds migrated to Lake Haleji once in the winter to make a temporary home.
Apart from birds, about 200 species were recorded here, but now only exotic seasonal birds are seen here. Due to the lack of clean water, the lake is gradually drying up on the one hand, and on the other hand, the bushes themselves are rapidly engulfing it.
Shangrila Lake or Kachura Lake
Located in Skardu valley it is the most beautiful lake in Pakistan. Basically, Kachura lake are two lakes, one called ‘Upper Kachura Lake’ and the other ‘Lower Kachura Lake’ or Shangri-La Lake.
Upper Kachura Lake
Upper Kachura Lake is a clear water lake with a depth of about 70 meters. The Indus River flows a little deeper near it. In summer, the temperature here is 10 to 15 degrees Celsius, while in winter, the temperature drops far below freezing point, due to which the lake water freezes completely.
Similarly, Lower Kachura or Lake Shangri-La is also called the second most beautiful lake in Pakistan because its view can enchant anyone.
Lower Kachura Lake
Lower Kachura Lake or Shangri-La Lake is actually part of Shangri-La Rest House. It is a popular tourist resort located about 25 minutes by car from Skardu city.
The highlight of the Shangri-La Rest House is the restaurant, which is built in an aircraft structure. The Shangri-La Rest House is a model of Chinese architecture, attracting a large number of tourists.
Shandur Lake
Shandur Lake with Polo Ground looks like a great masterpiece of nature. It is three miles long and one mile wide. Rare birds live in this lake and the interesting thing is that there is no apparent discharge of water from this lake. In other words, the water appears to be stagnant in the lake.
Hanna Lake
In the rocky cliffs, about ten kilometers north of Quetta, in 1894, during the reign of the British Crown, the supply of cheap groundwater to the people and to irrigate the surrounding lands, Hanna Lake was formed.
The water level in the lake remained the same from 1894 to 1997, but due to lack of proper maintenance, the lake remained completely dry from 2000 to 2004. Despite receiving millions of rupees in tickets annually, no attention was paid to the improvement of the lake. The water level in Hanna Lake has now started falling gradually. At present, the water level has come down to eight feet. The falling water level has affected Siberian bird sanctuaries and tourism, as well as surrounding gardens.
Ratti Gali Lake
Located at an altitude of 12,000 feet above sea level, Ratti Gali Lake is the crest of the Neelum Valley. | https://medium.com/world-travelers-blog/the-enchanting-lakes-of-pakistan-a87ffa614541 | ['Muhammad Sakhawat'] | 2020-12-29 09:44:34.938000+00:00 | ['Traveling', 'Travel', 'Pakistan', 'Beautiful Lakes', 'Asia'] |
It’s Not Easy to Parent When Your Soul Is Leaving Your Body | It’s Not Easy to Parent When Your Soul Is Leaving Your Body
My recovery from postpartum depression was long and filled with brain zaps
Photo: Ade Santora/Getty Images
My sweet baby was almost two-and-a-half years old, and still I was fighting postpartum depression (PPD).
Fighting, yes. It wasn’t a quiet depression — it couldn’t be — because I had a child to raise. She needed me to sing and dance, so her mind could grow healthy and strong. So she wouldn’t be crazy like me.
All the sanity I had, I gave to her.
For myself? I needed therapy, but who has time when you’re nursing a baby? When you’re not even sleeping?
I attended one therapy session, with my sleeping newborn on my lap, as I bawled and scribbled down advice about communicating with my husband, and how to explain to him what PPD feels like.
She taught me how to lead with appreciation instead of accusation: “I know we’re both giving 110% of ourselves all the time.”
If I could make him care, maybe he could care for me, so I could care for her.
Or if that failed, he could care for her, and I could cease to be. That felt more likely.
It was an impossible time. When my doctor suggested meds at my daughter’s six-week weigh-in, I answered, “I’ll do anything.”
I started Sertraline (Zoloft). Yeah, it helped with the postpartum anxiety. I started to sleep a little. Now my baby kept me up instead of my mind.
But at a year old, she arched her back, away from my breasts and their dwindling supply of milk. When she weaned, everything changed. I experienced another puberty — why does no one tell you that? My chemistry changed, and Sertraline quit working for me.
So I started Fluoxetine (Prozac) and felt great. This was life! Except for the constant nausea. I couldn’t get out of bed, couldn’t keep food down. My even, happy mind was trapped in this nauseous body.
My doctor suggested Venlafaxine (Effexor). No. No. No. I wish I could go back. Wish I could tell myself back then to quit the meds, to finally start therapy. Maybe I didn’t have time for it when she was born, but I did now.
“With Venlafaxine, you won’t have nausea,” my doctor said, “but people say the withdrawal is just like with heroin.”
I stayed, of course, and together we created wounds that may never heal.
I brushed her words away. I needed to feel better. For my daughter. Always for her. Not for me. I didn’t even know who I was. And didn’t care.
I had ceased to be.
My doctor was right about the withdrawal.
One month lost. A month of pain and tears and exhaustion, of brain zaps, of depersonalization (my soul leaving my body). The month when my husband told me I needed to take this healing somewhere else. I stayed, of course, and together we created wounds that may never heal.
This was the month my two-year-old daughter developed her first fears. In a moment where I felt well enough to stand, I cleaned her wipeable placemat, erased a doodle I’d drawn of myself: a smiley face with curly hair.
When she saw the placemat, she started screaming, “Mama erase! Mama erase!”
She wouldn’t leave her bedroom for a day and a night. I drew a new doodle, but this upset her more. She was afraid her real mother was disappearing too. And she was right. I don’t believe in souls, and yet it felt like my soul had definitely left my body.
I spent my days sleeping, crying, battling fevers, wincing from constant brain zaps, and cuddling my daughter under a blanket. Yes, every day we cuddled, and I read stacks of books to her, even though I knew my voice sounded flat.
“Mama’s sick, but I’ll be better soon,” I told her. “I’m sorry. I love you so much.”
My soul was gone, but I didn’t want her to know. I didn’t want my husband to be right, that me being there — but also not there— was worse than me just being all the way gone.
He didn’t mean dead, but that’s where my mind went. Not for the first time.
“Go stay in a motel.”
“I just want to heal in my home,” I pleaded. “And it’s not like we could afford that anyway.”
“It’s not fair to us. It’s not fair to her. She shouldn’t have to see you like this.”
“I’m giving her love every day. I’m doing much more than I feel capable of. You don’t know how hard this is.”
“We don’t need your help.”
“She’s my daughter. She needs me and I need her.”
“This isn’t about you.” | https://humanparts.medium.com/its-not-easy-to-parent-when-your-soul-is-leaving-your-body-1011d9f8573d | ['Darcy Reeder'] | 2019-06-28 12:47:52.014000+00:00 | ['Human Prompt', 'Mental Health', 'Mind', 'Soul', 'Parenting'] |
I never understood JavaScript closures | I never understood JavaScript closures
Until someone explained it to me like this …
As the title states, JavaScript closures have always been a bit of a mystery to me. I have read multiple articles, I have used closures in my work, sometimes I even used a closure without realizing I was using a closure.
Recently I went to a talk where someone really explained it in a way it finally clicked for me. I’ll try to take this approach to explain closures in this article. Let me give credit to the great folks at CodeSmith and their JavaScript The Hard Parts series.
Before we start
Some concepts are important to grok before you can grok closures. One of them is the execution context.
This article has a very good primer on Execution Context. To quote the article:
When code is run in JavaScript, the environment in which it is executed is very important, and is evaluated as 1 of the following: Global code — The default environment where your code is executed for the first time. Function code — Whenever the flow of execution enters a function body. (…) (…), let’s think of the term execution context as the environment / scope the current code is being evaluated in.
In other words, as we start the program, we start in the global execution context. Some variables are declared within the global execution context. We call these global variables. When the program calls a function, what happens? A few steps:
JavaScript creates a new execution context, a local execution context That local execution context will have its own set of variables, these variables will be local to that execution context. The new execution context is thrown onto the execution stack. Think of the execution stack as a mechanism to keep track of where the program is in its execution
When does the function end? When it encounters a return statement or it encounters a closing bracket } . When a function ends, the following happens:
The local execution contexts pops off the execution stack The functions sends the return value back to the calling context. The calling context is the execution context that called this function, it could be the global execution context or another local execution context. It is up to the calling execution context to deal with the return value at that point. The returned value could be an object, an array, a function, a boolean, anything really. If the function has no return statement, undefined is returned. The local execution context is destroyed. This is important. Destroyed. All the variables that were declared within the local execution context are erased. They are no longer available. That’s why they’re called local variables.
A very basic example
Before we get to closures, let’s take a look at the following piece of code. It seems very straightforward, anybody reading this article probably knows exactly what it does.
1: let a = 3
2: function addTwo(x) {
3: let ret = x + 2
4: return ret
5: }
6: let b = addTwo(a)
7: console.log(b)
In order to understand how the JavaScript engine really works, let’s break this down in great detail.
On line 1 we declare a new variable a in the global execution context and assign it the number 3 . Next it gets tricky. Lines 2 through 5 are really together. What happens here? We declare a new variable named addTwo in the global execution context. And what do we assign to it? A function definition. Whatever is between the two brackets { } is assigned to addTwo . The code inside the function is not evaluated, not executed, just stored into a variable for future use. So now we’re at line 6. It looks simple, but there is much to unpack here. First we declare a new variable in the global execution context and label it b . As soon as a variable is declared it has the value of undefined . Next, still on line 6, we see an assignment operator. We are getting ready to assign a new value to the variable b . Next we see a function being called. When you see a variable followed by round brackets (…) , that’s the signal that a function is being called. Flash forward, every function returns something (either a value, an object or undefined ). Whatever is returned from the function will be assigned to variable b . But first we need to call the function labeled addTwo . JavaScript will go and look in its global execution context memory for a variable named addTwo . Oh, it found one, it was defined in step 2 (or lines 2–5). And lo and behold variable addTwo contains a function definition. Note that the variable a is passed as an argument to the function. JavaScript searches for a variable a in its global execution context memory, finds it, finds that its value is 3 and passes the number 3 as an argument to the function. Ready to execute the function. Now the execution context will switch. A new local execution context is created, let’s name it the ‘addTwo execution context’. The execution context is pushed onto the call stack. What is the first thing we do in the local execution context? You may be tempted to say, “A new variable ret is declared in the local execution context”. That is not the answer. The correct answer is, we need to look at the parameters of the function first. A new variable x is declared in the local execution context. And since the value 3 was passed as an argument, the variable x is assigned the number 3 . The next step is: A new variable ret is declared in the local execution context. Its value is set to undefined. (line 3) Still line 3, an addition needs to be performed. First we need the value of x . JavaScript will look for a variable x . It will look in the local execution context first. And it found one, the value is 3 . And the second operand is the number 2 . The result of the addition ( 5 ) is assigned to the variable ret . Line 4. We return the content of the variable ret . Another lookup in the local execution context. ret contains the value 5 . The function returns the number 5 . And the function ends. Lines 4–5. The function ends. The local execution context is destroyed. The variables x and ret are wiped out. They no longer exist. The context is popped of the call stack and the return value is returned to the calling context. In this case the calling context is the global execution context, because the function addTwo was called from the global execution context. Now we pick up where we left off in step 4. The returned value (number 5 ) gets assigned to the variable b . We are still at line 6 of the little program. I am not going into detail, but in line 7, the content of variable b gets printed in the console. In our example the number 5 .
That was a very long winded explanation for a very simple program, and we haven’t even touched upon closures yet. We will get there I promise. But first we need to take another detour or two.
Lexical scope.
We need to understand some aspects of lexical scope. Take a look at the following example.
1: let val1 = 2
2: function multiplyThis(n) {
3: let ret = n * val1
4: return ret
5: }
6: let multiplied = multiplyThis(6)
7: console.log('example of scope:', multiplied)
The idea here is that we have variables in the local execution context and variables in the global execution context. One intricacy of JavaScript is how it looks for variables. If it can’t find a variable in its local execution context, it will look for it in its calling context. And if not found there in its calling context. Repeatedly, until it is looking in the global execution context. (And if it does not find it there, it’s undefined ). Follow along with the example above, it will clarify it. If you understand how scope works, you can skip this.
Declare a new variable val1 in the global execution context and assign it the number 2 . Lines 2–5. Declare a new variable multiplyThis and assign it a function definition. Line 6. Declare a new variable multiplied in the global execution context. Retrieve the variable multiplyThis from the global execution context memory and execute it as a function. Pass the number 6 as argument. New function call = new execution context. Create a new local execution context. In the local execution context, declare a variable n and assign it the number 6. Line 3. In the local execution context, declare a variable ret . Line 3 (continued). Perform an multiplication with two operands; the content of the variables n and val1 . Look up the variable n in the local execution context. We declared it in step 6. Its content is the number 6 . Look up the variable val1 in the local execution context. The local execution context does not have a variable labeled val1 . Let’s check the calling context. The calling context is the global execution context. Let’s look for val1 in the global execution context. Oh yes, it’s there. It was defined in step 1. The value is the number 2 . Line 3 (continued). Multiply the two operands and assign it to the ret variable. 6 * 2 = 12. ret is now 12 . Return the ret variable. The local execution context is destroyed, along with its variables ret and n . The variable val1 is not destroyed, as it was part of the global execution context. Back to line 6. In the calling context, the number 12 is assigned to the multiplied variable. Finally on line 7, we show the value of the multiplied variable in the console.
So in this example, we need to remember that a function has access to variables that are defined in its calling context. The formal name of this phenomenon is the lexical scope.
A function that returns a function
In the first example the function addTwo returns a number. Remember from earlier that a function can return anything. Let’s look at an example of a function that returns a function, as this is essential to understand closures. Here is the example that we are going to analyze.
1: let val = 7
2: function createAdder() {
3: function addNumbers(a, b) {
4: let ret = a + b
5: return ret
6: }
7: return addNumbers
8: }
9: let adder = createAdder()
10: let sum = adder(val, 8)
11: console.log('example of function returning a function: ', sum)
Let’s go back to the step-by-step breakdown.
Line 1. We declare a variable val in the global execution context and assign the number 7 to that variable. Lines 2–8. We declare a variable named createAdder in the global execution context and we assign a function definition to it. Lines 3 to 7 describe said function definition. As before, at this point, we are not jumping into that function. We just store the function definition into that variable ( createAdder ). Line 9. We declare a new variable, named adder , in the global execution context. Temporarily, undefined is assigned to adder . Still line 9. We see the brackets () ; we need to execute or call a function. Let’s query the global execution context’s memory and look for a variable named createAdder . It was created in step 2. Ok, let’s call it. Calling a function. Now we’re at line 2. A new local execution context is created. We can create local variables in the new execution context. The engine adds the new context to the call stack. The function has no arguments, let’s jump right into the body of it. Still lines 3–6. We have a new function declaration. We create a variable addNumbers in the local execution context. This important. addNumbers exists only in the local execution context. We store a function definition in the local variable named addNumbers . Now we’re at line 7. We return the content of the variable addNumbers . The engine looks for a variable named addNumbers and finds it. It’s a function definition. Fine, a function can return anything, including a function definition. So we return the definition of addNumbers . Anything between the brackets on lines 4 and 5 makes up the function definition. We also remove the local execution context from the call stack. Upon return , the local execution context is destroyed. The addNumbers variable is no more. The function definition still exists though, it is returned from the function and it is assigned to the variable adder ; that is the variable we created in step 3. Now we’re at line 10. We define a new variable sum in the global execution context. Temporary assignment is undefined . We need to execute a function next. Which function? The function that is defined in the variable named adder . We look it up in the global execution context, and sure enough we find it. It’s a function that takes two parameters. Let’s retrieve the two parameters, so we can call the function and pass the correct arguments. The first one is the variable val , which we defined in step 1, it represents the number 7 , and the second one is the number 8 . Now we have to execute that function. The function definition is outlined lines 3–5. A new local execution context is created. Within the local context two new variables are created: a and b . They are respectively assigned the values 7 and 8 , as those were the arguments we passed to the function in the previous step. Line 4. A new variable is declared, named ret . It is declared in the local execution context. Line 4. An addition is performed, where we add the content of variable a and the content of variable b . The result of the addition ( 15 ) is assigned to the ret variable. The ret variable is returned from that function. The local execution context is destroyed, it is removed from the call stack, the variables a , b and ret no longer exist. The returned value is assigned to the sum variable we defined in step 9. We print out the value of sum to the console.
As expected the console will print 15. We really go through a bunch of hoops here. I am trying to illustrate a few points here. First, a function definition can be stored in a variable, the function definition is invisible to the program until it gets called. Second, every time a function gets called, a local execution context is (temporarily) created. That execution context vanishes when the function is done. A function is done when it encounters return or the closing bracket } .
Finally, a closure
Take a look a the next code and try to figure out what will happen.
1: function createCounter () {
2: let counter = 0
3: const myFunction = function() {
4: counter = counter + 1
5: return counter
6: }
7: return myFunction
8: }
9: const increment = createCounter()
10: const c1 = increment()
11: const c2 = increment()
12: const c3 = increment()
13: console.log('example increment', c1, c2, c3)
Now that we got the hang of it from the previous two examples, let’s zip through the execution of this, as we expect it to run.
Lines 1–8. We create a new variable createCounter in the global execution context and it get’s assigned function definition. Line 9. We declare a new variable named increment in the global execution context.. Line 9 again. We need call the createCounter function and assign its returned value to the increment variable. Lines 1–8 . Calling the function. Creating new local execution context. Line 2. Within the local execution context, declare a new variable named counter . Number 0 is assigned to counter . Line 3–6. Declaring new variable named myFunction . The variable is declared in the local execution context. The content of the variable is yet another function definition. As defined in lines 4 and 5. Line 7. Returning the content of the myFunction variable. Local execution context is deleted. myFunction and counter no longer exist. Control is returned to the calling context. Line 9. In the calling context, the global execution context, the value returned by createCounter is assigned to increment . The variable increment now contains a function definition. The function definition that was returned by createCounter . It is no longer labeled myFunction , but it is the same definition. Within the global context, it is labeled increment . Line 10. Declare a new variable ( c1 ). Line 10 (continued). Look up the variable increment , it’s a function, call it. It contains the function definition returned from earlier, as defined in lines 4–5. Create a new execution context. There are no parameters. Start execution the function. Line 4. counter = counter + 1 . Look up the value counter in the local execution context. We just created that context and never declare any local variables. Let’s look in the global execution context. No variable labeled counter here. Javascript will evaluate this as counter = undefined + 1 , declare a new local variable labeled counter and assign it the number 1 , as undefined is sort of 0 . Line 5. We return the content of counter , or the number 1 . We destroy the local execution context, and the counter variable. Back to line 10. The returned value ( 1 ) gets assigned to c1 . Line 11. We repeat steps 10–14, c2 gets assigned 1 also. Line 12. We repeat steps 10–14, c3 gets assigned 1 also. Line 13. We log the content of variables c1 , c2 and c3 .
Try this out for yourself and see what happens. You’ll notice that it is not logging 1 , 1 , and 1 as you may expect from my explanation above. Instead it is logging 1 , 2 and 3 . So what gives?
Somehow, the increment function remembers that counter value. How is that working?
Is counter part of the global execution context? Try console.log(counter) and you’ll get undefined . So that’s not it.
Maybe, when you call increment , somehow it goes back to the the function where it was created ( createCounter )? How would that even work? The variable increment contains the function definition, not where it came from. So that’s not it.
So there must be another mechanism. The Closure. We finally got to it, the missing piece.
Here is how it works. Whenever you declare a new function and assign it to a variable, you store the function definition, as well as a closure. The closure contains all the variables that are in scope at the time of creation of the function. It is analogous to a backpack. A function definition comes with a little backpack. And in its pack it stores all the variables that were in scope at the time that the function definition was created.
So our explanation above was all wrong, let’s try it again, but correctly this time.
1: function createCounter () {
2: let counter = 0
3: const myFunction = function() {
4: counter = counter + 1
5: return counter
6: }
7: return myFunction
8: }
9: const increment = createCounter()
10: const c1 = increment()
11: const c2 = increment()
12: const c3 = increment()
13: console.log('example increment', c1, c2, c3)
Lines 1–8. We create a new variable createCounter in the global execution context and it get’s assigned function definition. Same as above. Line 9. We declare a new variable named increment in the global execution context. Same as above. Line 9 again. We need call the createCounter function and assign its returned value to the increment variable. Same as above. Lines 1–8 . Calling the function. Creating new local execution context. Same as above. Line 2. Within the local execution context, declare a new variable named counter . Number 0 is assigned to counter . Same as above. Line 3–6. Declaring new variable named myFunction . The variable is declared in the local execution context. The content of the variable is yet another function definition. As defined in lines 4 and 5. Now we also create a closure and include it as part of the function definition. The closure contains the variables that are in scope, in this case the variable counter (with the value of 0 ). Line 7. Returning the content of the myFunction variable. Local execution context is deleted. myFunction and counter no longer exist. Control is returned to the calling context. So we are returning the function definition and its closure, the backpack with the variables that were in scope when it was created. Line 9. In the calling context, the global execution context, the value returned by createCounter is assigned to increment . The variable increment now contains a function definition (and closure). The function definition that was returned by createCounter . It is no longer labeled myFunction , but it is the same definition. Within the global context, it is called increment . Line 10. Declare a new variable ( c1 ). Line 10 (continued). Look up the variable increment , it’s a function, call it. It contains the function definition returned from earlier, as defined in lines 4–5. (and it also has a backpack with variables) Create a new execution context. There are no parameters. Start execution the function. Line 4. counter = counter + 1 . We need to look for the variable counter . Before we look in the local or global execution context, let’s look in our backpack. Let’s check the closure. Lo and behold, the closure contains a variable named counter , its value is 0 . After the expression on line 4, its value is set to 1 . And it is stored in the backpack again. The closure now contains the variable counter with a value of 1 . Line 5. We return the content of counter , or the number 1 . We destroy the local execution context. Back to line 10. The returned value ( 1 ) gets assigned to c1 . Line 11. We repeat steps 10–14. This time, when we look at our closure, we see that the counter variable has a value of 1. It was set in step 12 or line 4 of the program. Its value gets incremented and stored as 2 in the closure of the increment function. And c2 gets assigned 2 . Line 12. We repeat steps 10–14, c3 gets assigned 3 . Line 13. We log the content of variables c1 , c2 and c3 .
So now we understand how this works. The key to remember is that when a function gets declared, it contains a function definition and a closure. The closure is a collection of all the variables in scope at the time of creation of the function.
You may ask, does any function has a closure, even functions created in the global scope? The answer is yes. Functions created in the global scope create a closure. But since these functions were created in the global scope, they have access to all the variables in the global scope. And the closure concept is not really relevant.
When a function returns a function, that is when the concept of closures becomes more relevant. The returned function has access to variables that are not in the global scope, but they solely exist in its closure.
Not so trivial closures
Sometimes closures show up when you don’t even notice it. You may have seen an example of what we call partial application. Like in the following code.
let c = 4
const addX = x => n => n + x
const addThree = addX(3)
let d = addThree(c)
console.log('example partial application', d)
In case the arrow function throws you off, here is the equivalent.
let c = 4
function addX(x) {
return function(n) {
return n + x
}
}
const addThree = addX(3)
let d = addThree(c)
console.log('example partial application', d)
We declare a generic adder function addX that takes one parameter ( x ) and returns another function.
The returned function also takes one parameter and adds it to the variable x .
The variable x is part of the closure. When the variable addThree gets declared in the local context, it is assigned a function definition and a closure. The closure contains the variable x .
So now when addThree is called and executed, it has access to the variable x from its closure and the variable n which was passed as an argument and is able to return the sum.
In this example the console will print the number 7 .
Conclusion
The way I will always remember closures is through the backpack analogy. When a function gets created and passed around or returned from another function, it carries a backpack with it. And in the backpack are all the variables that were in scope when the function was declared. | https://medium.com/dailyjs/i-never-understood-javascript-closures-9663703368e8 | ['Olivier De Meulder'] | 2017-10-31 02:30:06.242000+00:00 | ['JavaScript', 'Software Development', 'Software Engineering', 'Closure', 'Programming'] |
5 Considerations for Building a 5-Star FireTV App | The Android operating system (OS) is being used across multiple devices and platforms and is currently the most popular mobile operating system. At the moment, Android powers more than 2 billion devices, and many of those devices operate on variations of the Android software development kit (SDK), such as Amazon’s FireTV OS, Nokia’s X platform, and Alibaba’s Aliyun OS, to name a few. As a result, although the applications built for these platforms can vary widely in architecture, they all share the same set of application programming interfaces (APIs) from the Android SDK.
After years of experience developing exclusively for Android mobile devices, I’ve come up with certain development patterns that can be replicated to help bring a quality mobile app to market. FireTV development, however, requires some additional review and slight adjustments to those patterns based on the specifics of the platform. In this article, we will look at some aspects of FireTV development and some of the lessons learned from our experience developing a 5-star app for one of the biggest media providers.
Performance
At the time of writing this article, FireTV devices have less powerful hardware than the most up-to-date Android phones. This means a developer needs to take a more diligent approach to memory allocation, data processing, and algorithms when developing a FireTV app. These technical limitations often cause image stuttering and general slowness. To avoid these issues, the best approach is to do as much data processing as possible on the server side and send only the necessary data through the RESTful API. This avoids unnecessary sorting and filtering on the client side, which is expensive in both memory and processing power.
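For illustration, here is a minimal Kotlin sketch of that idea using the Retrofit HTTP client; the endpoint path, query parameters, and TitleSummary model are hypothetical stand-ins rather than the actual API of the app described here. The point is simply that filtering, sorting, and paging happen on the server, so the device only receives what one screen needs:

import retrofit2.http.GET
import retrofit2.http.Query

// Hypothetical response model; a real app would mirror its own API contract.
data class TitleSummary(val id: String, val name: String, val imageUrl: String)

interface CatalogApi {
    // The server applies the filter, sort, and paging; the client only renders the result.
    @GET("catalog/titles")
    suspend fun getTitles(
        @Query("genre") genre: String,
        @Query("sort") sort: String,
        @Query("page") page: Int,
        @Query("pageSize") pageSize: Int
    ): List<TitleSummary>
}

// Example call: api.getTitles(genre = "drama", sort = "popularity", page = 0, pageSize = 20)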
Power Supply
Unlike mobile devices, FireTVs have uninterrupted access to a power supply and, at first sight, may not seem to require battery-saving optimizations. However, most battery-expensive work is related to central processing unit (CPU) usage and network connectivity, which are the same factors that impact performance. This means that even though FireTVs do not have batteries and have permanent access to a power supply, implementing the necessary optimizations will considerably help with performance and avoid lags in UI rendering.
Network Connection
Another out-of-the-box advantage of FireTV devices over mobile phones is a reliable, fast, relatively inexpensive, high-bandwidth network connection, which gives developers a bit more freedom in architecting the app. Depending on the case, engineers can reduce cache sizes and rely more often on data updates from the network, without needing to worry about network costs, bandwidth, or reliability. However, consider making non-urgent network updates while the app is closed using Android’s WorkManager; this helps refresh, process, and prepare the data before the user opens the app, and avoids additional resource allocation when the app is re-opened.
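As a rough sketch of that WorkManager approach (the worker name, the CatalogRepository call, and the six-hour interval are assumptions for illustration, not the actual implementation), a periodic background refresh in Kotlin could look like this:

import android.content.Context
import androidx.work.*
import java.util.concurrent.TimeUnit

class CatalogRefreshWorker(context: Context, params: WorkerParameters) :
    CoroutineWorker(context, params) {
    override suspend fun doWork(): Result = try {
        CatalogRepository.refreshFromNetwork() // hypothetical call that refreshes the local cache
        Result.success()
    } catch (e: Exception) {
        Result.retry()
    }
}

fun scheduleCatalogRefresh(context: Context) {
    val request = PeriodicWorkRequestBuilder<CatalogRefreshWorker>(6, TimeUnit.HOURS)
        .setConstraints(
            Constraints.Builder().setRequiredNetworkType(NetworkType.CONNECTED).build()
        )
        .build()
    // KEEP leaves an already-scheduled refresh in place instead of re-creating it on every launch.
    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "catalog-refresh", ExistingPeriodicWorkPolicy.KEEP, request
    )
}

Because the work runs while the app is closed, the data is already fresh when the user launches the app, which is exactly the resource saving described above.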
Overall App Architecture
FireTV applications have both similarities to and differences from “traditional” mobile application architectures. Networking and caching layers can be lifted and shifted; these components can be used as is and do not require any adjustments or modifications. The major adjustments and differences revolve around the user experience (UX) and user interface (UI). FireTV does not have touch screen functionality and works exclusively with the remote, which requires engineers to follow guidelines for cursor movement and for the UI of selected states. The fastest way to build the UI is with the Leanback library, which has built-in navigation, although it may be a bit limited in terms of customization.
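As a minimal illustration of the Leanback approach (the fragment name, the row title, and TitleCardPresenter are placeholders, not real classes from the app), a browse screen that gets remote-control focus handling for free can be set up roughly like this:

import android.os.Bundle
import android.view.View
import androidx.leanback.app.BrowseSupportFragment
import androidx.leanback.widget.ArrayObjectAdapter
import androidx.leanback.widget.HeaderItem
import androidx.leanback.widget.ListRow
import androidx.leanback.widget.ListRowPresenter

class MainBrowseFragment : BrowseSupportFragment() {
    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        title = "Catalog"                                    // shown in the browse screen's title area
        headersState = BrowseSupportFragment.HEADERS_ENABLED // left-hand category rail with D-pad navigation

        val rowsAdapter = ArrayObjectAdapter(ListRowPresenter())
        val trendingAdapter = ArrayObjectAdapter(TitleCardPresenter()) // hypothetical custom Presenter for cards
        // Add the items for this row here, e.g. trendingAdapter.add(someTitle)
        rowsAdapter.add(ListRow(HeaderItem(0, "Trending"), trendingAdapter))
        adapter = rowsAdapter // Leanback renders the rows and handles focus and selection states
    }
}

Anything beyond what ListRowPresenter and the built-in templates offer is where the limited customization mentioned above starts to show.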
User Interface
FireTVs have a so-called “10-foot” user interface because the screen is roughly 10 feet from the user’s eyes, versus the 2-foot distance of a computer screen. This means some additional considerations must be taken into account to accommodate the distance and provide the right user experience. Developers should use appropriately larger sizes for UI elements and fonts so they can easily be seen from farther away. Also, make sure that every remote input is clearly reflected on the screen and visible from a 10-foot distance. However, do not use larger assets than needed, as this may negatively impact performance and slow down UI rendering. | https://medium.com/tribalscale/5-considerations-for-building-a-5-star-firetv-app-8d2456a81513 | ['Tribalscale Inc.'] | 2019-06-20 14:37:50.729000+00:00 | ['Mobile App Development', 'Technology', 'OTT', 'Fire Tv', 'Android'] |
Automating Resiliency: How To Remain Calm In The Midst Of Chaos | By Shan Anwar and Balaji Arunachalam
The Case for Change at Intuit
When any company decides to migrate to the public cloud in order to better scale its product offerings, there will be challenges, including those involving manual testing. For Intuit, the proud maker of TurboTax, QuickBooks, and Mint, this meant breaking down the monolith, moving to hundreds of micro-services, and requiring everything to be automated and available via pipelines. A proof of concept to automate manual resiliency testing needed to be created in order to scale exponentially and support dozens of micro-services across multiple regions. During this proof of concept, the Intuit team created several homegrown tools to embed resiliency culture and thinking among developers ahead of the formal Software Development Life Cycle (SDLC) approach.
In this blog, we will look at how one such resiliency tool, CloudRaider, helped accelerate Intuit’s goal of becoming highly resilient and highly available during this journey to the cloud.
Resiliency Testing at Intuit
As Intuit moved from a single data center to dual data centers, HA DR (High Availability and Disaster Recovery) testing became incredibly important. The team started with a well-structured process. This involved a variety of engineers (developers, QE, App Ops, DBAs, network engineers, etc.) conducting long sessions, identifying various failures for the system, and documenting expected system behaviors, including alerts and monitoring. After appropriate prioritization (based on severity, occurrence frequency and ease of detection), the team then executed these failures in a pre-production environment to prepare the system for better resiliency.
This approach generally helped to identify system resiliency defects, although it still had a lot of restrictions and gaps. It was a time-consuming, manual process requiring multiple engineers' time, and it could get very expensive, only to be repeated as regression tests for future system changes. The FMEA (Failure Mode and Effects Analysis) testing was conducted after the system implementation, so it worked counter to the shift-left model used to uncover system resiliency issues early in the SDLC process.
In moving to the cloud, the teams started adopting chaos testing in production; this, however, did not solve these gaps either, given that the testing occurred post-production and could not be run as continuous regression testing. It was discovered that chaos testing was a nice complement to FMEA testing, but not necessarily a replacement. Chaos testing, being an ad-hoc methodology, still required a structured approach to testing and meant preparing systems prior to injecting chaos into production.
The requirements are listed below:
1. Resiliency testing had to become part of the system design, not an afterthought.
2. Resiliency testing would shift left, enabling developers to practice test-driven design and development for system resiliency.
3. Tests (including pre- and post-validation) would need to be fully automated and available as part of the release pipeline as regression tests.
4. Reverting failures would also need to be automated as part of testing.
5. The ability to write the test code in natural language was needed, so that the same tests could serve as a system resiliency design requirement document.
6. A 100% pass on the automated resiliency test suite would be a prerequisite for chaos testing in production.
This led to creating an in-house resiliency testing tool called “CloudRaider”.
How Intuit Innovated with CloudRaider: D4D (Design 4 Delight)
During Intuit’s migration to the public cloud, the challenges of manual FMEA testing continued and a proof of concept to automate FMEA tests was created by applying Intuit’s Design for Delight principles.
Intuit Design Principles
Principle #1: Deep Customer Empathy
Our systems needed to be resilient; in case of failures we could not impact customers.
Principle #2: Go Broad to Go Narrow
The ideal state was fully resilient systems, with automated regression tests to validate them.
Principle #3: Rapid Experiments with Customers
During our experimentation, we involved teams in using our automation. At first, we tried to automate a few specific scenarios to confirm the value of automation. We were unable to scale and had to go back and try out new ideas for making it easier to write and execute scenarios.
After these experiments, we solved the problem by applying a behavior-driven development process, which involved writing a scenario first. This process helped us identify common scenarios and led us to develop a domain-specific language (DSL). The DSL provided a way to dynamically construct new scenarios and utilize more general code definitions to invoke any failure.
The automation of failures reduced execution time significantly but the question about effectiveness remained. This opened up ideas about automating the process to verify the impact of the failures and to measure the effectiveness of system recovery (see end-to-end design diagram).
End-to-End System
CloudRaider in Action
Example: Simple login service
Simple Example Scenario
Let's look at an example of a very simple login micro-service that consists of a frontend (web server) and a backend service running in AWS. Even in this simple architecture there are multiple possible failures (see table):
FMEA Template
All of the above scenarios are very general and can be applied to any service or system design. In our example, we could have the same failures executed for either frontend or backend. We created these scenarios via CloudRaider (see sample code).
Cloud Raider DSL
In the scenario above, the implementation details were all abstracted away and the test was written in a natural-language construct. Furthermore, it was all data-driven, so the same scenario could be executed under different criteria, making it reusable.
A slightly modified scenario was highlighted where a login service was unavailable due to very high CPU consumption (see code).
This high-CPU-consumption scenario varied only slightly from the first one: only the failure condition was different, and it was easy to construct.
In reality the login service architecture would have many more complexities and critical dependencies. Let’s expand to include authorization of OAuth2 tokens and a risk screening service. Both are external (see diagram).
These new dependencies introduced resiliency implications such as slow response times or the unavailability of a critical dependency. In CloudRaider, we could include scenarios to mimic such behaviors by injecting network latency or blocking domains (see code).
We discussed simple failure scenarios, but in reality, modern applications are more complex and run in multiple data centers and/or regions. Our previous example could be extended to a multi-region scenario (see diagram). Applications could be highly available by running in multiple regions and still maintain an auto-recovery process if one of the regions went down.
Multi-Region Failover Example
In CloudRaider, we could write code to terminate a region as shown previously, but we could also assert our region failover strategy with the help of the AWS Route 53 service (see code).
Implementation Details
CloudRaider is an open-source library written in Java that leverages Behavior-Driven Development (BDD) via the Cucumber/Gherkin framework. The library is integrated with AWS to inject failures.
Github link: https://github.com/intuit/CloudRaider/
Benefits of an Automated and Structured Resiliency Testing Process
What used to take more than a week of heavy coordination and manual test execution with many engineers became a three-hour automated execution with no manual effort. This process enabled us to test system resiliency on a regular basis to catch any regression issues. Having these automated tests in the release pipeline also gave us very high confidence in our product releases and caught resiliency issues before they turned into production incidents. It also gave us more confidence to execute ad-hoc chaos testing in production. This tool enabled developers to think about resiliency as part of design and implementation and to own the testing of their systems' resiliency.
Conclusion
Product adoption suffers if the product is not highly available for customers to use. With increasing complexity and dependencies in the micro-service architecture world, it is impossible to avoid failures in a system's ecosystem. We learned that our systems needed to be built to proactively recover from failures, with appropriate monitoring and alerts. Testing the systems' resiliency in an automated, regular way was a must; the sooner the test happened in the SDLC, the less expensive it was to fix the problem. With well-structured, fully automated and regularly executed resiliency tests, our team gained more confidence to inject ad-hoc chaos into production.
Resources
Authors | https://medium.com/intuit-engineering/automating-resiliency-how-to-remain-calm-in-the-midst-of-chaos-d0d3929243ca | ['Shan Anwar'] | 2019-12-09 18:17:10.386000+00:00 | ['AI', 'Open Source', 'Data', 'Data Science', 'Ai And Data Science'] |
A Guy With A Bed Frame Is 2018’s Good On Paper | You own hand towels, too? I may faint.
Photo by Mark Solarski on Unsplash
About a week ago, a guy on Tinder asked me if his vegetarianism was a deal breaker. I laughed to myself. Bless your heart, how quaint. I am a single woman living in Brooklyn, New York in the year of our Beyoncé 2018 and you think I’m going to be turned off by a guy I’ll never have to share the charcuterie with? I’m swatting away 34-year-olds with three roommates and meeting men whose closets and laundry bins are the same vessel. Eat your chickpeas, I really don’t give a shit.
Next came a guy at my coffee shop, completely average looking (which in New York, if you’re a woman, makes you a 2, if you’re a man, makes you hot), wearing well-fitting jeans, a crisp tee shirt, and what I’d consider pretty cool shoes. He was clean and put-together. I was confused. Then I saw his wedding ring and it all made sense. Silly Shani, single men don’t come well-packaged! They come wrapped in greasy paper and you have to bring your own bag.
The illumination of just how much I’ve compromised and been willing to accept or at least deal with in the last 5 or so years was furthered by a profile I came across a few days later. One that harkened me back to the days of having standards and expectations. Ah, memories.
My stars, what have we become? What does it say about single humanity when this profile right here reads like Shakespeare to me? Speak again, bright angel, tell me of the more than five shirts you own!
You mean you don’t sleep on a mattress on the floor of an unswept, linoleum-lined basement? I won’t have to wait in line to pee at 3am? I won’t have to dry my face before bed with paper towels? This is an embarrassment of riches!
Such is my woe, and perhaps the woe of any female, single, 30-something shit-together, that the men we date (in our age bracket) seem to exist two or three lifesteps behind us. And if we aren’t “okay” with this, our dating pool shrinks to the depth of a bottle cap. So this man, this just normal human, is a gilded gift to dating.
Good on paper used to mean that you had a great job, were motivated and driven, perhaps owned a home, vehicle, or pet. You were neatly dressed, groomed, polite. The kind of guy you wouldn’t mind running into your boss with on a Sunday afternoon. It still means all of these things for women, but for men it basically means that you brush your teeth.
Reader, I’m tired. The double standards that exist among the sexes never cease to replicate and evolve. They are the termites of my very existence. I am appalled not just by my own reaction to the truth of this man’s statements, but by the fact that he knew it would benefit him to say them.
“Hey ladies, I’m keenly aware that you’re one bartender/DJ away from starting that women-only tiny house community in rural Maine. It is I, the dating scene chupacabra, and I’ve come to supply you with entirely normal things. The line forms to the right, no shoving.”
I've never fancied the notion that if I want company, I'm going to have to clean it up first. While I don't mind blowing the dust off a fixer-upper, so to speak, I do mind being financially and functionally responsible for a full renovation from floor to ceiling. I don't require a general contractor and six to nine months for habitability, and neither should you.
But they do. They all do. They (meaning single men populating the online dating apps of the greater New York area) all require me to live without something I used to consider table stakes. Something that is table stakes in my own life. Privacy, a reasonable linen supply, adequate cutlery. Once, just once in my life I’d like to see a man at Ikea or Target who isn’t there on a leash. One man who thinks to himself, “you know what, this place could use an end table.”
You don’t have to be a normal, average, basic insurance plan human being anymore if you want to meet someone. You don’t have to pack on any responsibility at all, from a solo lease to whether or not the cordless vacuum is charged, until you meet your female partner, because heaven knows everything will fall into place after that–she’ll handle it. Everything has fallen into place for me, and I’m starting to think I’m delusional for wanting to meet someone whose life is a little bit together, too.
Regardless, I will still be on my knees in the garden, day after year, weeding through bad idea after red flag after fuckboi, hoping to come across someone who is average, and therefore the cream of the crop.
Also friend, if you’re out there, holler. | https://shanisilver.medium.com/a-guy-with-a-bed-frame-is-2018s-good-on-paper-7d93d4270dd1 | ['Shani Silver'] | 2018-05-15 13:10:37.527000+00:00 | ['Humor', 'Culture', 'Dating', 'Singles', 'New York'] |
Q&A with Emily Ingram, Director of Product @ Chartbeat | Q&A with Emily Ingram, Director of Product @ Chartbeat
This week, The Idea spoke with Emily about recent Chartbeat initiatives on paywall optimization, Multi-Site View and image testing, tracking the supply and demand of climate change coverage, and why she thinks we should be paying more attention to mobile aggregators. Subscribe to our newsletter on the business of media for more interviews and weekly news and analysis.
What was your path to Chartbeat and what is your current role there?
I started my career as a journalist, working for The Washington Post for about six years. The first two of those were within the newsroom as an editor and a producer. Through that, I fell into product management because I knew how the CMS worked: when we were relaunching the mobile site, they needed someone to take CMS outputs and make it into technical requirements for engineers. For about four additional years, I wrote their iOS apps and also launched their digital partner program.
I then went to HuffPost for about a year and a half, working on a tool for storytelling. Throughout all of that, I had a passion for both product management and telling stories, and Chartbeat was the right next step for being able to work with lots of publishers. Now, I’m Director of Product at Chartbeat. I work with our team of product managers to build tools for digital publishers, like tools for paywall optimization.
What are your goals for the paywall optimization project?
I think folks often associate Chartbeat with our real time dashboard and our Big Board. Obviously, those are some of our most used tools and we continue to invest in them; but as media business models have evolved, the needs of digital publishers have too.
We think about optimization overall, like our headline-testing product, which is being used for tens of thousands of tests all across the world. One of the areas that we recognize a need for optimization is around paywall strategy. We’re in early stages of how we can help with choices about what kinds of stories make sense to market subscriber-only and working closely with a small number of publishers to help make those decisions better. Like everything with Chartbeat, the goal is to inform editorial choices but also work alongside editors to make their choices more effective.
What is a project you have worked on recently that had a lot of impact?
A feature we released last year, Multi-Site View, is doing well. We were looking to solve for organizations who have lots of sites and are often trying to coordinate coverage across maybe an entire region’s worth of daily newspaper sites. They need to be able to understand performance on multiple sites at once instead of having lots different tabs open and having to switch between them. Multi-Site View aims to consolidate things into a single dashboard with flexible roll-ups, so you could look at whichever combination of sites necessary for your role and understand what’s doing well and where the opportunities to improve are.
Conversely, what was a feature that wasn’t adopted as widely as you thought that it would be?
Folks often associate Chartbeat with their web traffic, but we also can track native apps as well as amp content. Occasionally when you look at dashboards, you recognize there’s a missing opportunity for them to see a full 360-degree view of their content. That’s something where we’re still working to improve on making sure folks are taking advantage of those capabilities.
What are the limits to using data to inform journalistic choices?
That’s something we’ve been thoughtful about from our earliest days. Chartbeat invented the concept of engaged time as a way to get the focus off clicks and to be more about genuine reader behavior.
It’s something that we try to do within the product — build functionality that guides people to use the tools in healthy ways that actually reach their end goals and not get strayed down the wrong path. For instance, one of the key metrics for our headline-testing tool is quality clicks. So, it’s not just a raw view of whatever headline gets the most wins; we’re also considering if people who clicked on that story actually stick around and read it.
We’ve always been a company that views ourselves as one of many tools for editors. We want to make sure that we have unique insights to offer them, but there are also some things that you can’t do with data and to which humans bring their unique insight to bear.
What is something surprising you’ve learned working with a diverse array of publishers?
We aim to make our tools flexible enough that they can suit different approaches. Right now, we're working on an alpha for image testing. We already allow folks to test their headlines on a home page to understand which ones are performing best at leading folks to engage. The natural extension of creating an inviting experience on the homepage is the images associated with those headlines. The range of publishers involved in the test is quite large: you have everyone from traditional publishers to non-traditional sites that happen to use Chartbeat, and they actually have similar needs.
Sometimes we're also building specific things for a segment of clients. For instance, our Multi-Site View was something that's particularly relevant given the consolidation of media and the fact that folks are often working in centralized teams. Something I've learned from my time here is that there are actually more commonalities than you might expect.
Are there trends in the media space that you wish more people were paying more attention to?
Something that’s always changing but that I think is critical to keep tabs on is constantly monitoring who you’re serving and what they’re coming to you for. For instance, we did a deep dive into climate coverage data, and one of the interesting things we found is that it’s up significantly from the supply side, but even more from the demand side. Those sorts of changes in terms of the topics that folks are most interested in are fascinating.
The other thing is mobile aggregators. Occasionally, we will see top referrers like Top Buzz and Smart News, which are companies that aren’t top of mind. Traffic from Google Chrome suggestions are also something that’s really surged in the past couple of years.
Finally, something that I’ve seen from when I was product manager at a publisher and continued to see here is how development in the phone ecosystem really changes user behavior and thus publisher experiences. The first memory I have of this is when iOS rolled out swipe left of home screen, which included a module that had news articles in it. After that, we noticed a sudden uptick in traffic that was unexplained because it was all of this dark social traffic, but it often was concentrated on these certain articles. That was something that you don’t know is coming. We’ve seen that with Google Chrome suggestions as well, where a feature built into Chrome can have a meaningful impact on what publishers are seeing on their end.
There can often be benefits to these features: Hopefully you can use some of these mobile aggregators as new entry points to maybe expose people to content they wouldn’t see otherwise. But if they change, you don’t really have control over that. So, at the same time as you try to keep tabs on that, you also have to focus on building your loyal audience and diversify your risk.
What is the most interesting thing that you’ve seen recently in media from an organization other than your own?
I think something that’s really exciting to me is the burgeoning news outlets that are starting up to serve particular niche audiences or niche purposes. For instance, I know some of the folks from Texas Tribune are starting up a news organization, The 19th, aimed at women. There’s Dejan Kovacevic in Pittsburgh, who used to be a columnist for their local paper and started a sports site [DK Pittsburgh Sports] that covers just Pittsburgh sports.
You also increasingly see niche e-mail newsletters. I’m a big theater fan, so I follow various theater-related newsletters and podcasts like Broadway Briefing, which is a subscription-based daily roundup of Broadway news. There’s a challenge anytime you’re starting something from scratch in terms of its longevity, but it’s certainly interesting to see a lot of energy around these very particular niches and what people are doing in terms of innovating on the business model for them.
Rapid Fire Questions
What’s the last podcast you listened to?
A Pop Culture Happy Hour episode from NPR. I have a 40-minute commute and some of those are perfectly timed for that.
What’s the last theater production you went to?
Hamlet at St Ann’s Warehouse.
What would you be doing if you weren’t in this role, whether it within media or outside of media?
I would probably want to work somehow in the arts.
This Q&A was originally published in the February 24th edition of The Idea, and has been edited for length and clarity. For more Q&As with media movers and shakers, subscribe to The Idea, Atlantic Media’s weekly newsletter covering the latest trends and innovations in media. | https://medium.com/the-idea/q-a-with-emily-ingram-director-of-product-chartbeat-8b18352c4425 | ['Saanya Jain'] | 2020-02-24 23:39:07.224000+00:00 | ['Product Management', 'Subscriber Spotlight', 'Journalism', 'Media'] |
Illustrated Guide to LSTM’s and GRU’s: A step by step explanation | Hi and welcome to an Illustrated Guide to Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). I’m Michael, and I’m a Machine Learning Engineer in the AI voice assistant space.
In this post, we'll start with the intuition behind LSTM's and GRU's. Then I'll explain the internal mechanisms that allow LSTM's and GRU's to perform so well. If you want to understand what's happening under the hood for these two networks, then this post is for you.
You can also watch the video version of this post on youtube if you prefer.
The Problem, Short-term Memory
Recurrent Neural Networks suffer from short-term memory. If a sequence is long enough, they’ll have a hard time carrying information from earlier time steps to later ones. So if you are trying to process a paragraph of text to do predictions, RNN’s may leave out important information from the beginning.
During back propagation, recurrent neural networks suffer from the vanishing gradient problem. Gradients are values used to update a neural network's weights. The vanishing gradient problem is when the gradient shrinks as it back propagates through time. If a gradient value becomes extremely small, it doesn't contribute much to learning.
Gradient Update Rule
So in recurrent neural networks, layers that get a small gradient update stop learning. Those are usually the earlier layers. Because these layers don't learn, RNN's can forget what they have seen in longer sequences, thus having a short-term memory. If you want to know more about the mechanics of recurrent neural networks in general, you can read my previous post here.
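To build some intuition for why this happens, here is a tiny Python sketch. The per-step scaling factor is just an assumed, illustrative number, not anything measured from a real network: the point is only that a value repeatedly multiplied by a factor smaller than 1 shrinks toward zero very quickly.

```python
# Toy illustration of the vanishing gradient, not a real training loop.
gradient = 1.0
scale_per_step = 0.4  # assumed per-time-step factor from the chain rule

for step in range(1, 21):
    gradient *= scale_per_step
    if step in (1, 5, 10, 20):
        print(f"after {step:2d} steps back in time, gradient is about {gradient:.2e}")

# By 20 steps the gradient is on the order of 1e-08, so the earliest
# time steps receive almost no learning signal.
```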
LSTM’s and GRU’s as a solution
LSTM's and GRU's were created as the solution to short-term memory. They have internal mechanisms called gates that can regulate the flow of information.
These gates can learn which data in a sequence is important to keep or throw away. By doing that, it can pass relevant information down the long chain of sequences to make predictions. Almost all state of the art results based on recurrent neural networks are achieved with these two networks. LSTM’s and GRU’s can be found in speech recognition, speech synthesis, and text generation. You can even use them to generate captions for videos.
Ok, so by the end of this post you should have a solid understanding of why LSTM’s and GRU’s are good at processing long sequences. I am going to approach this with intuitive explanations and illustrations and avoid as much math as possible.
Intuition
Ok, Let’s start with a thought experiment. Let’s say you’re looking at reviews online to determine if you want to buy Life cereal (don’t ask me why). You’ll first read the review then determine if someone thought it was good or if it was bad.
When you read the review, your brain subconsciously only remembers important keywords. You pick up words like “amazing” and “perfectly balanced breakfast”. You don’t care much for words like “this”, “gave“, “all”, “should”, etc. If a friend asks you the next day what the review said, you probably wouldn’t remember it word for word. You might remember the main points though like “will definitely be buying again”. If you’re a lot like me, the other words will fade away from memory.
And that is essentially what an LSTM or GRU does. It can learn to keep only relevant information to make predictions, and forget non relevant data. In this case, the words you remembered made you judge that it was good.
Review of Recurrent Neural Networks
To understand how LSTM's or GRU's achieve this, let's review the recurrent neural network. An RNN works like this: first, words get transformed into machine-readable vectors. Then the RNN processes the sequence of vectors one by one.
Processing sequence one by one
While processing, it passes the previous hidden state to the next step of the sequence. The hidden state acts as the neural networks memory. It holds information on previous data the network has seen before.
Passing hidden state to next time step
Let’s look at a cell of the RNN to see how you would calculate the hidden state. First, the input and previous hidden state are combined to form a vector. That vector now has information on the current input and previous inputs. The vector goes through the tanh activation, and the output is the new hidden state, or the memory of the network.
RNN Cell
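As a rough illustration, here is a minimal numpy sketch of the cell described above. The weight matrix, bias and dimensions are arbitrary placeholders chosen for the example, not values from any trained model.

```python
import numpy as np

def rnn_cell(x_t, h_prev, W, b):
    """One step of a vanilla RNN: combine the input with the previous
    hidden state, then squash with tanh to get the new hidden state."""
    combined = np.concatenate([h_prev, x_t])  # previous memory + current input
    return np.tanh(W @ combined + b)          # tanh keeps values in (-1, 1)

# Toy sizes: 3-dimensional inputs, 4-dimensional hidden state.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 7)) * 0.1
b = np.zeros(4)

h = np.zeros(4)
for x_t in rng.normal(size=(5, 3)):  # a sequence of 5 input vectors
    h = rnn_cell(x_t, h, W, b)
print(h)  # the final hidden state summarizes the whole sequence
```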
Tanh activation
The tanh activation is used to help regulate the values flowing through the network. The tanh function squishes values to always be between -1 and 1.
Tanh squishes values to be between -1 and 1
When vectors are flowing through a neural network, it undergoes many transformations due to various math operations. So imagine a value that continues to be multiplied by let’s say 3. You can see how some values can explode and become astronomical, causing other values to seem insignificant.
vector transformations without tanh
A tanh function ensures that the values stay between -1 and 1, thus regulating the output of the neural network. You can see how the same values from above remain between the boundaries allowed by the tanh function.
vector transformations with tanh
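Here is a small numpy sketch of that idea; the multiplier of 3 and the starting values are simply the ones from the example above.

```python
import numpy as np

v = np.array([0.5, 1.0, 2.0])

# Without tanh: repeated multiplication lets the values explode.
no_tanh = v.copy()
for _ in range(5):
    no_tanh = no_tanh * 3
print(no_tanh)       # [121.5 243.  486. ] -- growing without bound

# With tanh applied after each transformation: values stay in (-1, 1).
with_tanh = v.copy()
for _ in range(5):
    with_tanh = np.tanh(with_tanh * 3)
print(with_tanh)     # every entry remains between -1 and 1
```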
So that's an RNN. It has very few operations internally but works pretty well given the right circumstances (like short sequences). RNN's use a lot fewer computational resources than their evolved variants, LSTM's and GRU's.
LSTM
An LSTM has a similar control flow to a recurrent neural network. It processes data sequentially, passing on information as it propagates forward. The differences are the operations within the LSTM's cells.
LSTM Cell and Its Operations
These operations are used to allow the LSTM to keep or forget information. Now looking at these operations can get a little overwhelming so we’ll go over this step by step.
Core Concept
The core concepts of LSTM's are the cell state and its various gates. The cell state acts as a transport highway that transfers relevant information all the way down the sequence chain. You can think of it as the "memory" of the network. The cell state, in theory, can carry relevant information throughout the processing of the sequence, so even information from the earlier time steps can make its way to later time steps, reducing the effects of short-term memory. As the cell state goes on its journey, information gets added to or removed from the cell state via gates. The gates are different neural networks that decide which information is allowed on the cell state. The gates can learn what information is relevant to keep or forget during training.
Sigmoid
Gates contain sigmoid activations. A sigmoid activation is similar to the tanh activation. Instead of squishing values between -1 and 1, it squishes values between 0 and 1. That is helpful for updating or forgetting data, because any number multiplied by 0 is 0, causing values to disappear or be "forgotten." Any number multiplied by 1 keeps the same value, so that value stays the same or is "kept." The network can learn which data is not important and can be forgotten, and which data is important to keep.
Sigmoid squishes values to be between 0 and 1
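A tiny numpy sketch makes the gating idea concrete; the candidate values and gate pre-activations below are made up purely for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

candidate_info = np.array([0.9, -0.4, 0.7])   # values the network could keep
gate = sigmoid(np.array([6.0, -6.0, 0.0]))    # roughly [1.0, 0.0, 0.5]

# Multiplying by the gate keeps the first value, "forgets" the second,
# and lets only half of the third one through.
print(gate * candidate_info)                  # about [0.898, -0.001, 0.35]
```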
Let's dig a little deeper into what the various gates are doing, shall we? We have three different gates that regulate information flow in an LSTM cell: a forget gate, an input gate, and an output gate.
Forget gate
First, we have the forget gate. This gate decides what information should be thrown away or kept. Information from the previous hidden state and information from the current input is passed through the sigmoid function. Values come out between 0 and 1. The closer to 0 means to forget, and the closer to 1 means to keep.
Forget gate operations
Input Gate
To update the cell state, we have the input gate. First, we pass the previous hidden state and current input into a sigmoid function. That decides which values will be updated by transforming the values to be between 0 and 1. 0 means not important, and 1 means important. You also pass the hidden state and current input into the tanh function to squish values between -1 and 1 to help regulate the network. Then you multiply the tanh output with the sigmoid output. The sigmoid output will decide which information is important to keep from the tanh output.
Input gate operations
Cell State
Now we should have enough information to calculate the cell state. First, the cell state gets pointwise multiplied by the forget vector. This has a possibility of dropping values in the cell state if it gets multiplied by values near 0. Then we take the output from the input gate and do a pointwise addition which updates the cell state to new values that the neural network finds relevant. That gives us our new cell state.
Calculating cell state
Output Gate
Last we have the output gate. The output gate decides what the next hidden state should be. Remember that the hidden state contains information on previous inputs. The hidden state is also used for predictions. First, we pass the previous hidden state and the current input into a sigmoid function. Then we pass the newly modified cell state to the tanh function. We multiply the tanh output with the sigmoid output to decide what information the hidden state should carry. The output is the hidden state. The new cell state and the new hidden is then carried over to the next time step.
output gate operations
To review, the forget gate decides what is relevant to keep from prior steps. The input gate decides what information is relevant to add from the current step. The output gate determines what the next hidden state should be.
Code Demo
For those of you who understand better through seeing the code, here is an example using python pseudo code.
python pseudo code
1. First, the previous hidden state and the current input get concatenated. We'll call it combine.
2. Combine gets fed into the forget layer. This layer removes non-relevant data.
3. A candidate layer is created using combine. The candidate holds possible values to add to the cell state.
4. Combine also gets fed into the input layer. This layer decides what data from the candidate should be added to the new cell state.
5. After computing the forget layer, candidate layer, and the input layer, the cell state is calculated using those vectors and the previous cell state.
6. The output is then computed.
7. Pointwise multiplying the output and the new cell state gives us the new hidden state.
That's it! The control flow of an LSTM network is just a few tensor operations and a for loop. You can use the hidden states for predictions. Combining all those mechanisms, an LSTM can choose which information is relevant to remember or forget during sequence processing. A minimal numpy sketch of the seven steps above follows below.
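This is only an illustrative sketch of the steps described above, assuming small toy dimensions; the weight matrices and biases are random placeholders rather than parameters of any real, trained LSTM.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
    # 1. Concatenate the previous hidden state and the current input.
    combine = np.concatenate([h_prev, x_t])
    # 2. Forget layer: decide what to drop from the old cell state.
    f_t = sigmoid(W_f @ combine + b_f)
    # 3. Candidate layer: possible new values for the cell state.
    candidate = np.tanh(W_c @ combine + b_c)
    # 4. Input layer: decide which candidate values to actually add.
    i_t = sigmoid(W_i @ combine + b_i)
    # 5. New cell state: keep part of the old state, add the selected candidate info.
    c_new = f_t * c_prev + i_t * candidate
    # 6. Output gate: decide what the next hidden state should carry.
    o_t = sigmoid(W_o @ combine + b_o)
    # 7. New hidden state: the output gate applied to the squashed new cell state.
    h_new = o_t * np.tanh(c_new)
    return h_new, c_new

# Toy usage: 3-dimensional inputs, 4-dimensional hidden and cell states.
rng = np.random.default_rng(1)
W_f, W_i, W_c, W_o = (rng.normal(size=(4, 7)) * 0.1 for _ in range(4))
b_f = b_i = b_c = b_o = np.zeros(4)

h, c = np.zeros(4), np.zeros(4)
for x_t in rng.normal(size=(6, 3)):
    h, c = lstm_cell(x_t, h, c, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o)
print(h)  # hidden state after processing the sequence
```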
GRU
So now that we know how an LSTM works, let's briefly look at the GRU. The GRU is the newer generation of recurrent neural networks and is pretty similar to an LSTM. GRU's got rid of the cell state and use the hidden state to transfer information. A GRU also has only two gates: a reset gate and an update gate.
GRU cell and its gates
Update Gate
The update gate acts similar to the forget and input gate of an LSTM. It decides what information to throw away and what new information to add.
Reset Gate
The reset gate is used to decide how much past information to forget.
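For completeness, here is a compact numpy sketch of one common GRU formulation, written in the same illustrative style as the LSTM sketch above; the weights are again just placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def gru_cell(x_t, h_prev, W_z, W_r, W_h, b_z, b_r, b_h):
    combine = np.concatenate([h_prev, x_t])
    z_t = sigmoid(W_z @ combine + b_z)   # update gate: blend old state with new candidate
    r_t = sigmoid(W_r @ combine + b_r)   # reset gate: how much past information to forget
    candidate = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]) + b_h)
    # No separate cell state: the hidden state itself carries the memory.
    return (1 - z_t) * h_prev + z_t * candidate
```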
And that's a GRU. GRU's have fewer tensor operations; therefore, they are a little speedier to train than LSTM's. There isn't a clear winner as to which one is better. Researchers and engineers usually try both to determine which one works better for their use case.
So That’s it
To sum this up, RNN's are good for processing sequence data for predictions but suffer from short-term memory. LSTM's and GRU's were created as a method to mitigate short-term memory using mechanisms called gates. Gates are just neural networks that regulate the flow of information through the sequence chain. LSTM's and GRU's are used in state-of-the-art deep learning applications like speech recognition, speech synthesis, natural language understanding, etc.
If you’re interested in going deeper, here are links of some fantastic resources that can give you a different perspective in understanding LSTM’s and GRU’s. This post was heavily inspired by them.
http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
https://www.youtube.com/watch?v=WCUNPb-5EYI
I had a lot of fun making this post so let me know in the comments if this was helpful or what you would like to see in the next one. And as always, thanks for reading!
Check out michaelphi.com for more content like this. | https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21 | ['Michael Phi'] | 2020-06-28 17:27:57.821000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Lstm', 'Neural Networks', 'Deep Learning'] |
Design & the military: a love story | Collage by Vittoria Casanova.
By Vittoria Casanova
We usually don’t ask ourselves many questions about the objects surrounding our lives. Aside from the simple function and aesthetics, we don’t think about the object’s history or why products and services, which we use every day, have been designed in the way that we know them. When you think about design, you wouldn’t initially associate it with war. But, looking back at the history of design and invention, it seems that war is the main and most important catalyst for the research, discovery, and implementation of many new solutions and technologies.
The reason might be found in the large amount of funding that governments allocate to military and defense departments. Just to give you an idea, the DARPA (Defense Advanced Research Projects Agency), responsible for the development of emerging technologies for military use, has an average annual budget of three billion USD. Yes, three billion per year!
Here are a few intriguing stories about common products and services that have been catalysed by war.
The grandmother of the Internet was called ARPA, short for Advanced Research Projects Agency. Its initial purpose was to enable researchers to communicate and share knowledge and resources between university computers over telephone lines.
ARPA was born during the Cold War, when the US was worried about the Soviet Union destroying its long-distance communications network. The US urgently needed a computer communications system without a central core, one that could be used wirelessly and remotely and would therefore be much more difficult for enemies to attack and destroy.
ARPA then started to design a computer network called ARPANET, which would be accessible anywhere in the world using computing power and data. “Internetworking”, as scientists called it, presented enormous challenges as getting networks to ‘talk to each other’ and move data was like speaking Chinese to someone who can only understand Turkish. The Internet’s designers needed to develop a common digital language to enable data sharing but, it had to be a language flexible enough to accommodate all kinds of data, even for the types that hadn’t been invented yet.
The Internet seemed like an extremely far-fetched idea, near impossible to design. But, in the spring of 1976, they found a way. The Internet went from being an obscure research idea to a technology that’s now used by over 4.2 billion people. And, it took less than forty years.
The Global Positioning System, commonly known as the GPS, also has its origins in the Sputnik era.
The idea for the GPS emerged in 1957, when American scientists were tracking the launch of the first satellite to orbit Earth, a Russian spacecraft called Sputnik. They noticed the frequency of the radio signal from Sputnik got gradually higher as the satellite got closer, and lower as the satellite moved away. This was caused by the Doppler Effect, the same effect that makes an ambulance siren's pitch rise as it approaches an observer and fall as it moves away. This provided great inspiration: satellites could be tracked from the ground by measuring the frequency of the radio signals they emitted, and, in turn, the locations of receivers on the ground could be determined by their distance from the satellites.
Drones, also known as unmanned aerial vehicles, are another great example. These are aircraft with no onboard crew or passengers, which can be either automated or remotely piloted. The initial idea first came to light in 1849, when Austria attacked Venice with balloons that were loaded with explosives. While few balloons reached their intended targets, most were caught in changing winds and were blown back over Austrian lines.
Last, but not least, a simple item that we use very often: tape. Duct tape was originally invented by Johnson & Johnson’s pharmaceutical division during WWII for the military. The soldiers specifically needed a waterproof tape that could be used to keep moisture and humidity out of ammunition cases. This is why the original duct tape only came in army green.
Many more examples can be found in various other mundane products: microwaves, digital cameras, superglue, canned food, and penicillin, just to name a few.
It’s also interesting to see that these military-born technologies can even be found in three of our INDEX: Award 2017 winners: Ethereum — a decentralised digital network, commonly referred to as Internet 2.0; what3words — a new GPS system using three-word addresses; and Zipline — a medical supply delivery chain using drones. But, let’s hope that in the future we won’t need to rely on war for more great solutions to emerge. | https://designtoimprovelife.medium.com/design-the-military-a-love-story-99dd58b8b40f | ['The Index Project'] | 2018-11-28 08:56:35.429000+00:00 | ['War', 'Technology', 'Design'] |
Financial Analysts think Traderium would become the bank of the future . | “On the back of solid advancements in Artificial Intelligence prediction and powered by blockchain technology, Traderium is transforming into the bank of the future, the bank that you own.” — CNN
Early this week, financial analysts released a statement asserting that Traderium and blockchain technology will eventually replace banks and existing financial systems by eliminating the need for intermediaries and third-party service providers.
In a research paper shared by the South African Bitcoin Group (SABG), Milwalke wrote:
“For now, virtual currencies such as Bitcoin pose little or no challenge to the existing order of fiat currencies and central banks. Why? Because they are too volatile, too risky, too energy intensive, and because the underlying technologies are not yet scalable. Many are too opaque for regulators; and some have been hacked. But many of these are technological challenges that could be addressed over time. Not so long ago, some experts argued that personal computers would never be adopted, and that tablets would only be used as expensive coffee trays. So I think it may not be wise to dismiss virtual currencies.”
As Milwalke emphasized, the vast majority of cryptocurrencies such as bitcoin and Ethereum are still struggling to solve their underlying scalability issues. Previously, in an interview with major South Korean financial news publication JoongAng, Ethereum co-founder Vitalik Buterin stated that it could take two to five years for public blockchain networks to scale with two-layer and on-chain scalability solutions.
But once companies like Traderium start making offerings that form a bridge between traditional banking systems and blockchains, cryptocurrencies would achieve mass adoption and quickly replace traditional, fiat-based financial systems.
Traderium's novel business model involves receiving cryptocurrency deposits just as traditional banks receive fiat deposits, using its A-rated Artificial Intelligence system to trade these deposits, making other investments in bonds and the like, much as traditional banks do, and sharing the profits with customers.
This gives Traderium an edge over traditional banks, because blockchain guarantees cheaper transaction costs, which improves its margins. Interest rates on deposits can go as high as 40% per month depending on the account type, and smart contracts make them more trustworthy than traditional banks.
Milwalke explained that the decentralized nature of bitcoin could provide general consumers with a more efficient, robust, secure, and cost-efficient financial network as an alternative to the global banking infrastructure.
Furthermore, Milwalke noted that the mainstream adoption of bitcoin and cryptocurrencies would result in a decrease in the power of central banks and leading financial institutions. Fiat currencies would no longer be of any value, as central banks and local financial authorities would not be able to manipulate the value of assets.
“Today’s central banks typically affect asset prices through primary dealers, or big banks, to which they provide liquidity at fixed prices — so-called open-market operations. But if these banks were to become less relevant in the new financial world, and demand for central bank balances were to diminish, could monetary policy transmission remain as effective?,” added Milwalke.
Already, through the Bitcoin Core development team's scaling and transaction malleability solution Segregated Witness (SegWit), bitcoin has been able to scale to a certain extent by decreasing the size of transactions and blocks. Additionally, demand for bitcoin and the cryptocurrency market has increased to a point where multi-billion-dollar financial institutions such as Goldman Sachs and Fidelity have started to address the rising popularity of cryptocurrencies by launching ventures including cryptocurrency trading operations and mining.
In the upcoming years, through appropriate scaling solutions and the integration of two-layer scaling platforms, bitcoin will be able to position itself at the forefront of financial disruption, challenging banks and financial institutions to evolve within the global financial system. | https://medium.com/bitcoinsouthafrica/financial-analysts-think-traderium-would-become-the-bank-of-the-future-7180919cec30 | ['Sandra Mathews'] | 2018-08-19 16:23:42.354000+00:00 | ['Pakistan', 'South Africa', 'Kuwait', 'Bitcoin'] |
7 Awesome Android Apps You’ve Never Heard Of | Ready, set, download!
It’s no secret that apps are changing the way we live. I have a friend that has over 100 on her phone. She swears she uses them all, but I doubt it. That’s the problem with apps. There are so many choices, it’s hard to find something new that you’ll want to use and keep.
Beyond Facebook
Everyone uses Facebook, Amazon, Piggy, and Pocket. You’ve never heard of Piggy or Pocket have you? That’s my point. There are thousands of apps that innovative developers have built that make your Android device more useful and fun.
To save you time, I created a list of clever new apps. It’s not scientific, it’s based on new apps my friends have tried, personal use, user reviews and pure awesomeness.
These 7 Android apps will help you get more out of your phone or tablet and do things you didn’t even know were possible. Read on and become an Android expert, and feel free to add your own suggestions below.
1) Become a coupon pro with Piggy
Get coupons automatically with Piggy
Shop your favorite stores in your phone or tablet’s browser, and Piggy will automatically search for coupon codes and cashback whenever you’re checking out. Just click the Piggy button and it will scour the internet for legitimate coupon codes and apply them to your shopping cart. No matter what, you’re always earning cashback. It’s free, it’s easy and it saves you money!
Download Piggy here
2) Always have something to read with Pocket
Pocket is an easy way to save any article and read it later, and works on both Chrome and Android. If you come across any article or website and don’t have time to read it then, save it to Pocket and you can pull it from the queue and read it at any time from any device.
Download Pocket here
3) Never get left out in the rain with 1Weather
1Weather is arguably the best weather app out there. It has a very simple, paginated design that shows you the current weather and forecasts up to 12 weeks out. 1Weather offers two full versions: one that is free and ad-supported, or one you can purchase for $1.99 with no advertising. The fun fact about 1Weather is that it offers fun facts about weather that are sure to keep you entertained indefinitely.
Download 1Weather here
4) Google Drive Suite — the complete storage solution
Google Drive is an essential cloud storage solution available on Android. All new users get 15GB of storage, 100% free, indefinitely. The best part is that, through G Suite, you also get entirely free what Microsoft charges a premium for. This includes Google Docs, Sheets, Slides, Photos, Gmail, Calendar, and Keep. Between the office and photo apps, which by the way allow unlimited photo and video backup, you have an app to serve practically any need.
Download Drive Suite here
5) Get a personal assistant with Google Now
And an intelligent one, at that. Just say the magic words “Okay Google” to get answers to your questions, make recommendations, and do just about anything and everything by making requests to various web services. Sync it to your Google Account to be able to pull up your schedule and notes in an instant, among many other actions; it also largely works hand in hand with Google Search so the repeated actions you perform are utilized to your advantage.
Download Google Now here
6) Don’t lose track of passwords with LastPass
Even if you have photographic memory or a systematic way of safekeeping your passwords, LastPass will change your life. It’s an awesome digital vault that takes its job of safeguarding all your online accounts seriously. Create a free account and secure it with a strong master password — your last password ever! Fill your vault with all your fave sites, save new sites automatically, and never be bothered with taking note of new passwords ever again.
Download Last Pass here
7) The best app for getting things done Wunderlist
This app surely lives up to the promise of its name, with its very user-friendly interface that packs in heroic features — from the digital notepad, alarms, and reminders, to the folders section and messaging function. You’ll be so excited to get your schedule, plans, goals, and lists in order because Wunderlist is so handy, you can access it anytime, anywhere on your mobile device or computer, and allows you to share your lists with anyone and work collaboratively with them.
Download Wunderlist here
We bet you won’t be able to put your Android device down after getting a hold of these apps… and we really can’t blame you. Enjoy! | https://medium.com/easysimplemore/7-awesome-android-apps-youve-never-heard-of-cb7a0d87fd8c | ['Katrina Angco'] | 2017-08-30 19:57:46.124000+00:00 | ['Android Apps', 'Lifestyle', 'Mobile Device', 'Productivity', 'Digital Marketing'] |
5 Potential Activities for Long-Distance Relationships | I say these encouraging things as someone who is highly empathetic and has worked closely with individuals and couples to make sense of their overarching problems, particularly as a crisis supporter and an active listener.
If you can get through this drama, then you have the capacity to keep going. If anything, wear this circumstance as a badge of honour. Your relationship hit a roadblock, and you came out swinging, using your creativity to keep yourself and your partner satiated enough to keep going.
Either way, here are some suggestions on how to celebrate 2021 with your long-distance partner.
1. Cook a Meal “Together”
While it’s not the same as being together physically, you can set up your monitor or device in the kitchen, or bring parts of the kitchen to your device. If you haven’t already, install one of those remote conferencing or video chatting apps, like Zoom, Skype, Messenger, or even FaceTime.
You can lay out the basic ingredients for something that you want to make. Perhaps you can keep the recipe or meal simple to avoid any kitchen-related mishaps. In real time, you can critique one another and crack jokes at one another.
Photo by Dan Counsell on Unsplash — Keep it simple to avoid mishaps.
Maybe you can make a contest out of it, and see who has the better-looking meal. Maybe you can pretend that you’re on MasterChef or Hell’s Kitchen, running around while pretending that Gordon Ramsay is screaming at you to get it together.
Maybe one of you can play music associated with the show.
2. Agree to Watch Something Online
Sure, you’re in two different places, but you can set up the time to watch a movie “together”. Even far apart, sentimentality and love are still there. Maybe you can sport coordinating outfits and have similar snacks.
Plus, if you’re really hard-pressed against making such elaborate arrangements, maybe you can sit down and watch a live stream together, such as a remote concert or comedy show on websites like Youtube and Twitch.
Photo by NordWood Themes on Unsplash — You can still watch things together, even remotely.
Even if you settle on an old video or an agreed-upon movie classic, you can film one another's reactions during certain parts of the show, video, or movie, especially if something super funny or dramatic happened.
Maybe you can make a meme reaction out of it, and have a silly inside joke between the two of you. Maybe one of you will go viral because of it. Either way, you now have some moments, like your partner exclaiming surprise during your favourite scene from a movie.
3. Have a Spontaneous Fashion Show
It sounds silly, but with the power of technology, you can try something fun and spontaneous, like trying out the various outfits in your room. Maybe you can help one another find the perfect outfit.
This outfit could be for a hypothetical future date, or something you plan to wear for tomorrow morning. You get to have fun with what you already have, and you didn’t even have to spend major bucks for that to happen.
Plus, you can boost your confidence by doing a silly little dance, a semi-serious catwalk strut, or even tease your partner with something that they personally enjoy.
Photo by BBH Singapore on Unsplash — Maybe they’re having fun with their wardrobes.
You might get bonus points if you are able to play funny music during the whole situation. Maybe when your partner does their fake catwalk, you can play a video with some appropriate but funky music.
Make those little moments count, even if they’re fleeting, because it’s easy to take them for granted.
4. Roleplay a Mock Vacation
While we can’t travel, don’t let that stop you from pretending to have one.
Maybe one partner can pretend to be the budding tourist guide, and you can pretend to be the naive tourist. Maybe both of you can pretend to be two strangers meeting at the bar for the first time.
Think of it as a role-play or as a chance to show off your acting chops. Maybe in this adventure, one of you is a secret agent, and the other is also a secret agent, and you’re trying to outwit one another to get to the bottom of your mystery.
Photo by Free To Use Sounds on Unsplash — I honestly don’t have the context for this.
If you’re on a video conferencing app, you can change the background behind you to resemble a remote tropical paradise. You can even dress up as you please, perhaps opting to wear matching Hawaiian shirts.
Even if it’s not the same as the real thing, the ambience and mood can still be closely emulated. Maybe you can play certain songs or instruments to heighten the mood if you wanted.
5. Do Remote Acts of Kindness
Even if we're not physically there, we can still help out our partners in other ways. If one partner is financially struggling where they are, maybe you can send some funds to them online. If you're struggling, they can reciprocate.
Perhaps you can call their roommate (if they have one) and ask them to decorate your partner’s place for their birthday. Sure, this requires a fair bit of extra work due to the remote planning, but relationships are worth the effort you put into it.
Photo by Chase Chappell on Unsplash — It’s okay to make virtual plans, I promise.
Plus, even if they don’t have a roommate, maybe you can remotely order their favourite food to be delivered to their house and request a fancy little love note to be left behind.
You can pay for the order, alleviating a little bit of the sadness that your partner may feel. | https://synthiasatkuna.medium.com/5-potential-activities-for-long-distance-relationships-dfc814de878c | ['Synthia Satkuna', 'Ma Candidate'] | 2020-12-29 11:58:13.846000+00:00 | ['Self Improvement', 'Relationships', 'Long Distance', 'Mental Health', 'Dating'] |
A Ping That Saved Me From Madness | Follow my journey into troubleshooting internet disruption issues after switching service providers.
Photo by Joshua Sukoff on Unsplash
Now that we are in an active-pandemic world, those lucky enough to continue working remotely rely on fast and reliable internet service. I'm a software developer by trade and run multiple video meetings a day to keep in touch with our engineering team. All of our systems and tools are in the cloud these days, and we expect internet service to be like electricity and running water.
Working at home now requires me to be the “I.T.” guy. When the service goes down, I will hear it from one of my kids before I even realize it.
“Dad, the internet is down!”
Of course if “the internet” was down, we’d have bigger problems, but I know what they mean. The connection to our service provider has been severed, so they have to pause their online lives until I can solve it.
Troubleshooting Basics
The first thing I do is to turn off the wifi on my phone and switch to cellular LTE to check if there is a localized outage. Having cleared that, I check the lights on the modem to confirm they are all green. If not, a reboot gets us back online.
The same goes for the mesh pucks that we have around the house, if it shows red, a reboot usually restores the connection.
“Ok, it’s back up!”
I feel like the hero and go back to what I was doing, and they return to their TikTok, Instagram, Fortnite, and Netflix sessions.
Other issues expose the kids’ lack of understanding of wireless coverage areas, channel interference, and range. To remediate their complaints, I have tuned the wireless network for decent coverage up to our front street and back into the alley.
Slowness in the system is now attributed to multiple streams of traffic on the network, splitting the bandwidth across dozens of devices, including our security cameras and electronic home assistants.
More Speed == More Bandwidth?
We were choking on our bandwidth with the increased traffic of video meetings, especially during school days with simultaneous remote learning and remote work conference calls. I decided to change from cable internet service to fiber service now that fiber was available in my area. Full-duplex 1 Gbps over the 100 Mbps, with a 22% cost saving, seemed like a no-brainer.
The ordering of the service was straightforward and smooth over their web site, including scheduling when a technician would install the equipment. I couldn’t wait to see how fast I could pull down source code and navigate our virtual instances.
The install took a couple of hours with no issues. I was so elated as I plugged my laptop into the ethernet and saw 950Mbps download and 930Mbps upload. I connected to the wifi network and saturated the 5Ghz band at 430Mbps.
A quick test navigating to reddit.com, medium.com, and cnn.com was good. I streamed a Netflix show and Apple music and then tested our digital electronic assistants and security cameras. Everything checked out and seemed to be working well.
Speed !== Reliability
After a few hours, the fiber nirvana came to a screeching halt when I heard a yell from upstairs.
“Dad, the internet is down!”
This was the start of my downward spiral into madness. The Netflix client on our TV would abruptly unload itself at random times. Sites would take an unusual amount of time to load, including Google searches. Zoom meetings would freeze with no audio. Our digital assistants would not answer due to no internet connection. Our security camera videos would blank out, go offline, and come back at random intervals.
There were no reported localized or wider outages, and sustained speed tests returned typical results. Internal diagnostics tests on the modem all passed. The lights on both the fiber ONT box and the modem were indicating normal operation.
Resetting the modem box seemed to stabilize the connection for a short period, and then the dropped and delayed connections returned. I was not going back to cable internet and paying more for less.
I was determined to solve this!
Try All the Things!
A game of elimination is the first step to troubleshoot the issue. I unplugged the wifi network and plugged ethernet directly into my laptop. Maybe a device on the wifi network was flooding it with invalid packets, causing a faux DDoS attack.
With my laptop as the only device connected and a web browser as the single application running, the test should give weight to that theory if it panned out. That experiment didn't work; the problem persisted.
Maybe the DHCP server was an issue and not renewing IP addresses correctly? I changed to using a static IP address on my laptop, within the defined range. No dice, same issue.
I disabled the DHCP server altogether on the modem and kept the static IP address. Nope, no change.
Could it be the default DNS server causing long lag times? Pinging it returned decent responses under 15 ms. I changed it to use Google's DNS server at 8.8.8.8 and even Cloudflare's DNS at 1.1.1.1. All were returning good responses on pings, but the delays and disconnections continued.
I decided to place the modem into an IP bypass mode to use my router. After some considerable research, I was able to expose the WAN IP address to my router and grant LAN IP addresses through my router’s DHCP server. It was a failed exercise; no change.
How about downgrading the firmware on the modem? I went through the process of flashing the firmware through a few minor revisions and down through a major revision. It failed again.
It was time to call the service provider and convince them to send me a new modem. They claimed all the tests on their end passed, and they did not see any problems. I sent them a log of the errors, and they decided to send a new one.
I was convinced that the first modem was a lemon and that a replacement would solve all our issues. We dealt with the current problems by restarting the modem a few times throughout the day and waited for the new one to arrive.
A few days later, the new self-install modem arrived and I swapped it in. My frustration continued. Did I get two lemons in a row, or was that model just flawed?
Never Give Up
I was annoyed but not defeated. There had to be some combination of settings and configuration I had not tried. While diving deep into forums discussing the model (BGW210–700) and all of the issues related to it, I came across a one-sentence comment that would have been easy to overlook, but it struck me as weird.
“When the issue happens, even visiting the modem’s internal administration pages has a delay.”
That was odd. I tested it, and it proved correct. Why would visiting a page served by a local web server on the modem be slow? It should be almost instant. After the initial load, the rest of the pages were fast. I had noticed this during the many configuration changes I made while troubleshooting. Does the server go into an idle or sleep mode, and does that cause the problem?
Let’s test it. I sat on the administration page and continuously hit refresh for over 5 minutes, and the delays and disconnections STOPPED! WTF?
The Ping
If refreshing the page keeps the admin web server awake, then would a constant ping to the server work? I opened up a terminal window on my MacBook and typed in:
ping 192.168.1.254
And I let it run continuously. Lo and behold, it worked; it kept the server from idling or sleeping. There was rarely a disconnection or delay. The ultimate test was to run the Netflix client on our TV and stream a movie. I randomly chose a show and let it run. Two hours in, and it was still playing without crapping out. The security cameras stayed online as additional proof that this hack worked.
Coming from the vantage point of a software developer, why would there be such a mechanism? Were they thinking about saving energy, or optimizing heat dissipation by not keeping a process continually running for a seldom-used feature? Regardless of the reason, why is it so tightly coupled with the modem's main routing function to the internet?
This is probably a design flaw in the firmware that needs to be addressed in the next upgrade. I hope it’s not a hardware issue and that new firmware will fix it. In the meantime, I will continue to run this temporary hack.
What’s Next?
I have an old spare Android tablet connected through wifi to our network, constantly pinging the modem's administration page to keep it from idling. It seems like overkill to dedicate a tablet to running a simple ping. It could be a perfect fit for a Raspberry Pi project, though.
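If I go the Raspberry Pi route, the whole project is essentially a loop around that same ping. A minimal Python sketch could look like the following; the gateway address 192.168.1.254 and the 30-second interval are my assumptions, not vendor-recommended values:

import subprocess
import time

GATEWAY = "192.168.1.254"  # the modem's administration address used above

while True:
    # a single ping is enough to keep the admin web server from idling
    result = subprocess.run(["ping", "-c", "1", GATEWAY], capture_output=True)
    if result.returncode != 0:
        print(time.strftime("%H:%M:%S"), "ping to the modem failed")
    time.sleep(30)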
Stay tuned for any updates if I decide to tackle that project. I hope this article helps other users who have experienced the same issues. Or is this a unique problem surfaced by my network architecture?
Troubleshooting an intermittent issue is rarely trivial, and it will take you through unexpected paths. My experience hunting down defects in software engineering prepared me to investigate all clues, regardless of how insignificant or unrelated it may seem. This ping has saved my sanity, and I was able to play a hero for my kids again. | https://medium.com/literally-literary/a-ping-that-saved-me-from-madness-a4da181c22c7 | ['Tuan Pham-Barnes'] | 2020-08-11 06:25:47.081000+00:00 | ['Debugging', 'Nonfiction', 'Technology', 'Wifi', 'Internet'] |
An effective lead nurturing strategy should not be based on email and calls only | Tricky way from TOFU lead to MQL
Nowadays it takes less effort for marketers to create a solid lead flow. A use-case or client pain-point centric approach on search engines, as well as social media, can deliver as many leads as your business needs, with a CPA of less than $40. In addition to that, demand generation professionals can take advantage of the supersaturated lead generation market to acquire any quantity of legitimate leads with the right criteria in bulk. But how do you convert those TOFU leads to sales-qualified?
The times of “handraisers” are gone, and not too many hot leads are available in the market. The majority of potential clients need to be properly nurtured and guided through their journey before they meet salespeople. What can you do to warm up those cold leads? A lead nurturing strategy, based on the customer journey plus marketing automation technologies, could be the right answer.
What is lead nurturing?
Marketo describes lead nurturing as the specific process of developing relationships with buyers at every stage of their journey through the sales funnel. The goal of that process is to convert cold / TOFU leads into acceptable leads, marketing qualified leads, sales qualified leads, opportunities, won deals and advocates respectively. An organization has to determine the fit of the prospect based on what they do, where they go or what they say. In the simplest case, if a website visitor downloads an asset, there can be a business rule that places that person on a nurture path at a specific phase or stage.
According to the Annuitas Group, in 2018 businesses that used marketing automation and a multi-channel approach to nurture prospects experienced a 451% increase in qualified leads, and nurtured leads made 47% larger purchases than non-nurtured leads.
However, not too many businesses in the US, or globally, understand their customers' journey and have an effective strategy in their hands. The common practice is to engage leads via a very limited set of channels: email and phone calls.
Lead nurturing is not email and calls only
I cannot deny that email still remains the most effective channel to move leads through the funnel, but it is used far too frequently. It annoys. Marketers cannot communicate with clients at the right time. All of that leads to unsubscribes. My experience tells me that it is much easier to lose a lead than to convert it with an email campaign.
Phone calls allow you to get direct feedback from the client and react to it immediately. However, this channel has the same cons as email.
How to nurture your leads effectively?
I believe that the best practice is to use the power of all owned, paid and earned marketing channels to engage and delight all the potential clients in your database. 20% of a marketer's job lies on the demand generation side, and 80% lies in lead nurturing. Budgets should be allocated accordingly.
If you would like to warm up your leads, be ready to touch them at least 10 times via multiple channels and multiple offers. Your brand needs to earn their trust first.
As soon as you receive a lead you get enough information to target it and guide it via its journey. Modern marketing technologies are here to help. Social Media like Facebook and LinkedIn allow showing specific posts to a list of specific individuals based on their emails. Google Ads allows showing ads (display and search) to users based on their email address too.
Do not forget that if your brand is B2B, you also need to engage all members of the buying center. ABM is the best channel to keep your leads engaged at the account level. Let the whole company be aware of your brand and your offer. It has done the job great for me.
To use all marketing tools effectively, you need to segment users based on their stage in the sales funnel and then come up with a message that highlights the major need of that segment. Challenging? Who said it would be easy?
Here is an example of the light customer journey in B2B SaaS industry
0. A lead is captured by lead generation via a content syndication program.
Let’s assume, it downloaded a white paper that highlights its main need on some 3-d party website. A lead has read it and forget about it successfully.
The nurturing is activated.
1. A few days after the first touch, the lead saw a post on Facebook about a topic relevant to its need (which was previously identified). The lead ignored it by not reacting;
2. The next day, that lead saw another banner on the web, clicked it, landed on the website but did not convert since it did not like the CTA. However, it became familiar with the brand;
3. That same day, its colleague saw the same banner and told the lead about the offer;
4. After a while, the lead searched the web for content relevant to its need and clicked on an ad on the SERP. It liked the landing page and converted by downloading another white paper;
5. Later, it received an email with an offer to download another asset, and that worked;
6. A sales rep contacted the lead and told it about a webinar that demonstrates how to solve its problem (need) completely;
7. The lead attended the webinar and asked a couple of questions, as did some of its colleagues;
8. Finally, the lead visited the website directly and scheduled an appointment with the sales team.
… and another nurturing campaign has begun to generate an opportunity.
In this example, it took 9 touches or 3 pieces of content to get an MQL, and we need more touches to generate an opportunity.
Welcome to the content marketing era
As it is written, marketing is rapidly shifting from a product orientation to a customer-centric one, and we need to base all strategies on clients' needs. Marketers no longer simply advertise; to engage customers they run content marketing activities — a part of inbound marketing. The inbound methodology leads the majority of marketing efforts now. It requires us to attract, engage and delight our customers via multiple channels.
For all businesses, the real job begins when they capture the lead. Any lead has to be properly nurtured to become a customer and then an advocate of the brand.
An effective lead nurturing strategy consists of three things that work for a specific segment:
The right mix of channels;
A relevant message;
An effective cadence.
It is hard to tell you everything I can about lead nurturing in one article, so if you have any questions, feel free to visit andreypalagin.com and reach out to me for more information…
… and do not abandon your leads! | https://andreypalagin.medium.com/effective-lead-nurturing-strategy-should-not-be-based-on-email-and-calls-only-8d662f959646 | ['Andrey Palagin'] | 2019-11-11 06:21:16.204000+00:00 | ['Marketing', 'Lead Nurturing', 'Abm', 'Marketing Startegies', 'Digital Marketing'] |
Coding Your Way to Wall Street | The Trading Algorithm
Let’s start by importing the libraries we will need:
A major difference with Quantopian’s IDE is how some functions are run. You are still able to call functions like normal but others need to be scheduled to run at specific times during the market.
Initialize Function
The Initialize function
Here we have the initialize() function. Within this function, we will use algo.schedule_function() to schedule three functions:
trading()
exiting_trades()
record_vars()
The other arguments:
algo.date_rules.every_day()
algo.time_rules.market_close()
specify when the functions will run. The reason these functions are scheduled for these times is to fix any potential leverage issues. Leverage plays an important role in our algorithm, which will be explained later on.
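The code itself appears as a screenshot in the original post, so here is a rough sketch of what that initialize() function might look like. Treat it as an approximation based on the description above rather than a copy of the exact code:

import quantopian.algorithm as algo


def initialize(context):
    # run all three functions every trading day, around the market close
    algo.schedule_function(trading,
                           algo.date_rules.every_day(),
                           algo.time_rules.market_close())
    algo.schedule_function(exiting_trades,
                           algo.date_rules.every_day(),
                           algo.time_rules.market_close())
    algo.schedule_function(record_vars,
                           algo.date_rules.every_day(),
                           algo.time_rules.market_close())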
Trading Function
So now that we have an initialize() function that will run our algorithm, we can move on to the trading() function which will automatically trade the stocks we want.
But first we must explain how we are retrieving the stocks we want. In Quantopian, stocks are assigned unique id values. To access a stock, we call the sid() function, type in the stock ticker, and a drop-down menu will appear. See below to see which id number is assigned to TSLA.
TSLA and its sid #
For our trading function, we will be making a list of stocks that we would like to pick from. Those stocks are: TSLA, DIS, AAPL, and SPY, each with its own sid #. There are ways to grab more stocks and filter out selected ones, but that requires another function.
With our list of stocks, we will iterate through them with a ‘for’ loop. The steps taken in the ‘for’ loop are:
Fetching each stock’s closing price history. Calculating their 50 day and 200 day moving averages. Creating a True/False statement to determine crossing points. Calculating leverage allowance and setting it as another True/False statement. Creating conditional statements using open_lev , and bullish_cross or bearish_cross . Once all conditions are satisfied, we place an order for a stock using order_target_percent(stock, 0.25) to fill 25% of our portfolio with that stock. (We set the percentage as negative or positive to buy or short the stock).
The Trading function
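In case the screenshot does not render for you, the trading() function plausibly looks something like the sketch below. The sid numbers, the leverage threshold, and the exact moving-average arithmetic are my assumptions, so verify the id values with the drop-down menu in the IDE:

def trading(context, data):
    # TSLA, DIS, AAPL, SPY - confirm each sid # with the drop-down menu
    stocks = [sid(39840), sid(2190), sid(24), sid(8554)]

    for stock in stocks:
        # 1. fetch closing price history (201 days so we can look one day back)
        hist = data.history(stock, 'price', 201, '1d')

        # 2. current and previous 50 day / 200 day moving averages
        ma_50 = hist[-50:].mean()
        ma_200 = hist[-200:].mean()
        prev_ma_50 = hist[-51:-1].mean()
        prev_ma_200 = hist[-201:-1].mean()

        # 3. True/False crossing signals
        bullish_cross = (prev_ma_50 < prev_ma_200) and (ma_50 > ma_200)
        bearish_cross = (prev_ma_50 > prev_ma_200) and (ma_50 < ma_200)

        # 4. leverage allowance as another True/False statement
        open_lev = context.account.leverage < 1.0

        # 5./6. long 25% on a bullish cross, short 25% on a bearish one
        if open_lev and bullish_cross:
            order_target_percent(stock, 0.25)
        elif open_lev and bearish_cross:
            order_target_percent(stock, -0.25)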
Now each step requires specific Quantopian methods and functions. So please refer to the help page once again for a detailed explanation. Next, we’ll be explaining the conceptual reasoning behind the crossing signals and open leverage.
To determine if the 50 day MA crosses the 200 day MA in a bullish fashion, we set the 50 day MA to be less than the 200 day MA, both calculated from one day before. Then, we set the current 50 day MA to be greater than the current 200 day MA. By setting this as a conditional statement, we can capture the crossing point of the moving averages.
Now regarding open leverage, its importance is taken into account because leverage determines whether we are using our own money or borrowing money. The results from borrowing money can drastically alter our trading outcomes. The formula and schedule coded before allow us to make trades without exceeding our own leverage or cash limit.
Exiting Trades Function
Now the next function, exiting_trades() , is very similar to our trading function. The only difference is checking if we have any positions open in the first place and what kind of position that is (long or short).
The exiting trade function
As you can see, there is not much of a difference between exiting_trades() and trading(). This exiting trades function repeats most of the calculations and conditions from before in order to close positions whenever the moving averages cross against us.
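Here too the original code is an image, so the following is an approximate reconstruction of the idea rather than the exact function:

def exiting_trades(context, data):
    for stock, position in context.portfolio.positions.items():
        hist = data.history(stock, 'price', 200, '1d')
        ma_50 = hist[-50:].mean()
        ma_200 = hist.mean()

        # close longs when the 50 day MA drops back below the 200 day MA,
        # and close shorts when it climbs back above it
        if position.amount > 0 and ma_50 < ma_200:
            order_target_percent(stock, 0)
        elif position.amount < 0 and ma_50 > ma_200:
            order_target_percent(stock, 0)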
Next, let’s finish the code up with a record_vars() function to keep track of a few variables that we believe are necessary.
Here we will record() our leverage and how many positions we have open. We want leverage not to drastically exceed 1. Open positions should not exceed 4 because we devoted only 25% of our portfolio to each stock we trade.
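A minimal version of record_vars() could be as simple as this sketch:

def record_vars(context, data):
    # plot leverage and the number of open positions under the backtest chart
    record(leverage=context.account.leverage,
           open_positions=len(context.portfolio.positions))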
Running a Backtest
Finally, set the dates and starting capital to match the ones below. We won't be running a full backtest here; it would provide more information about the coded strategy, but it is not needed for now.
Set to 7 years of backtesting with a starting amount of $10,000
Click the button Build Algorithm, which will run the code, and then you will end up with a page like so:
Results of our coded strategy
Great! We have successfully backtested our first trading algorithm. We were successful in limiting the leverage to 1 (albeit with a little leeway, which is fine) and the trading positions never exceeded 4. Feel free to alter some of the code to see if it would significantly affect the results. | https://medium.com/swlh/coding-your-way-to-wall-street-bf21a500376f | ['Marco Santos'] | 2020-05-23 18:40:05.923000+00:00 | ['Trading', 'Algorithms', 'Coding', 'Python', 'Stock Market'] |
Data Transformation | In the previous article, I briefly introduced Volume Spread Analysis (VSA). After we did feature engineering and feature selection, there were two things I noticed immediately: the first was that there were outliers in the dataset, and the second was that the distributions were nowhere close to normal. By using the methods described here, here and here, I removed most of the outliers. Now is the time to face the bigger problem, the normality.
There are many ways to transform the data. One of the well-known examples is one-hot encoding, and an even better one is word embedding in natural language processing (NLP). Consider that one of the advantages of using deep learning is that it completely automates what used to be the most crucial step in a machine-learning workflow: feature engineering. Before we get into deep learning in later articles, let's have a look at some simple ways to transform data to see if we can make it closer to a normal distribution.
In this article, I would like to try a few things. The first one is to transform all the features into a simple percentage change. The second one is to do a percentile ranking. In the end, I will show you what happens if I only pick the sign of all the data. Methods like the Z-score, which are standard pre-processing in deep learning, I would rather leave for now.
1. Data preparation
For consistency, in all the 📈Python for finance series, I will try to reuse the same data as much as I can. More details about data preparation can be found here, here and here or you can refer back to my previous article. Or if you like, you can ignore all the code below and use whatever clean data you have at hand, it won’t affect the things we are going to do together.
#import all the libraries
import pandas as pd
import numpy as np
import seaborn as sns
import yfinance as yf #the stock data from Yahoo Finance
import matplotlib.pyplot as plt

#set the parameters for plotting
plt.style.use('seaborn')
plt.rcParams['figure.dpi'] = 300

#define a function to get data
def get_data(symbols, begin_date=None,end_date=None):
    #use the function arguments rather than hardcoded values
    df = yf.download(symbols, start=begin_date,
                     end=end_date,
                     auto_adjust=True)#only download adjusted data
    #my convention: always lowercase
    df.columns = ['open','high','low',
                  'close','volume']
    return df

prices = get_data('AAPL', '2000-01-01', '2010-12-31')

#create some features
def create_HLCV(i):
#as we don't care open that much, that leaves volume,
#high,low and close
df = pd.DataFrame(index=prices.index)
df[f'high_{i}D'] = prices.high.rolling(i).max()
df[f'low_{i}D'] = prices.low.rolling(i).min()
df[f'close_{i}D'] = prices.close.rolling(i).\
apply(lambda x:x[-1])
# close_2D = close as rolling backwards means today is
    # literally the last day of the rolling window.
df[f'volume_{i}D'] = prices.volume.rolling(i).sum()
    return df

# create features at different rolling windows
def create_features_and_outcomes(i):
df = create_HLCV(i)
high = df[f'high_{i}D']
low = df[f'low_{i}D']
close = df[f'close_{i}D']
volume = df[f'volume_{i}D']
features = pd.DataFrame(index=prices.index)
outcomes = pd.DataFrame(index=prices.index)
#as we already considered the different time span,
#here only day of simple percentage change used.
features[f'volume_{i}D'] = volume.pct_change()
features[f'price_spread_{i}D'] = (high - low).pct_change()
    #align the close location with the stock price change
    features[f'close_loc_{i}D'] = ((close - low) / \
                                   (high - low)).pct_change()

    #the future outcome is what we are going to predict
outcomes[f'close_change_{i}D'] = close.pct_change(-i)
    return features, outcomes

def create_bunch_of_features_and_outcomes():
'''
the timespan that i would like to explore
are 1, 2, 3 days and 1 week, 1 month, 2 month, 3 month
which roughly are [1,2,3,5,20,40,60]
'''
days = [1,2,3,5,20,40,60]
bunch_of_features = pd.DataFrame(index=prices.index)
bunch_of_outcomes = pd.DataFrame(index=prices.index)
for day in days:
f,o = create_features_and_outcomes(day)
bunch_of_features = bunch_of_features.join(f)
        bunch_of_outcomes = bunch_of_outcomes.join(o)
    return bunch_of_features, bunch_of_outcomes

bunch_of_features, bunch_of_outcomes = create_bunch_of_features_and_outcomes()

#define the method to identify outliers
def get_outliers(df, i=4):
    #i is the number of sigmas, which defines the boundary around the mean
outliers = pd.DataFrame()
stats = df.describe()
for col in df.columns:
mu = stats.loc['mean', col]
sigma = stats.loc['std', col]
condition = (df[col] > mu + sigma * i) | \
(df[col] < mu - sigma * i)
outliers[f'{col}_outliers'] = df[col][condition]
    return outliers

#remove all the outliers
features_outcomes = bunch_of_features.join(bunch_of_outcomes)
outliers = get_outliers(features_outcomes, i=1)

features_outcomes_rmv_outliers = features_outcomes.drop(index=outliers.index).dropna()

features = features_outcomes_rmv_outliers[bunch_of_features.columns]
outcomes = features_outcomes_rmv_outliers[bunch_of_outcomes.columns]
features.info(), outcomes.info()
Information of features dataset
Information of outcomes dataset
In the end, we will have the four basic features based on Volume Spread Analysis (VSA) at the different time scales listed below, namely 1 day, 2 days, 3 days, a week, a month, 2 months and 3 months.
Volume: pretty straight forward
Range/Spread: Difference between high and low
Closing Price Relative to Range: Is the closing price near the top or the bottom of the price bar?
The change of stock price: pretty straight forward
2. Percentage Returns
I know that’s a whole lot of codes above. We have all the features transformed into a simple percentage change through the function below.
def create_features_and_outcomes(i):
df = create_HLCV(i)
high = df[f'high_{i}D']
low = df[f'low_{i}D']
close = df[f'close_{i}D']
volume = df[f'volume_{i}D']
features = pd.DataFrame(index=prices.index)
outcomes = pd.DataFrame(index=prices.index)
#as we already considered the different time span,
#here only 1 day of simple percentage change used.
features[f'volume_{i}D'] = volume.pct_change()
features[f'price_spread_{i}D'] = (high - low).pct_change()
    #align the close location with the stock price change
    features[f'close_loc_{i}D'] = ((close - low) / \
                                   (high - low)).pct_change()

    #the future outcome is what we are going to predict
outcomes[f'close_change_{i}D'] = close.pct_change(-i)
return features, outcomes
Now, let’s have a look at their correlations using cluster map. Seaborn’s clustermap() hierarchical clustering algorithm shows a nice way to group the most closely related features.
corr_features = features.corr().sort_index()
sns.clustermap(corr_features, cmap='coolwarm', linewidth=1);
Based on this cluster map, to minimize the amount of feature overlap in the selected features, I will remove those features that are closely paired with other features and have less correlation with the outcome targets. From the cluster map above, it is easy to spot that the [40D, 60D] and [2D, 3D] features are paired together. To see how those features are related to the outcomes, let's have a look at how the outcomes are correlated first.
corr_outcomes = outcomes.corr()
sns.clustermap(corr_outcomes, cmap='coolwarm', linewidth=2);
From top to bottom, the 20-day, 40-day and 60-day price percentage changes are grouped together, as are the 2-day, 3-day and 5-day ones. In contrast, the 1-day stock price percentage change is relatively independent of those two groups. If we pick the next day's price percentage change as the outcome target, let's see how those features are related to it.
corr_features_outcomes = features.corrwith(outcomes. \
close_change_1D).sort_values()
corr_features_outcomes.dropna(inplace=True)
corr_features_outcomes.plot(kind='barh',title = 'Strength of Correlation');
The correlation coefficients are way too small to draw a solid conclusion from. I would expect the most recent data to have a stronger correlation, but that is not the case here.
How about the pair plot? We only pick the features based on a 1-day time scale as a demonstration. In the meantime, I transformed close_change_1D into its sign, based on whether it is a negative or positive number, to add extra dimensionality to the plots.
selected_features_1D_list = ['volume_1D', 'price_spread_1D', 'close_loc_1D', 'close_change_1D']
features_outcomes_rmv_outliers['sign_of_close'] = features_outcomes_rmv_outliers['close_change_1D']. \
apply(np.sign)

sns.pairplot(features_outcomes_rmv_outliers,
vars=selected_features_1D_list,
diag_kind='kde',
palette='husl', hue='sign_of_close',
markers = ['*', '<', '+'],
plot_kws={'alpha':0.3});
The pair plot builds on two basic figures, the histogram and the scatter plot. The histograms on the diagonal allow us to see the distribution of a single variable, while the scatter plots on the upper and lower triangles show the relationship (or lack thereof) between two variables. From the plots above, we can see that price spreads get wider with high volume. Most of the price changes are located within a narrow price spread; in other words, a wider spread doesn't always come with a bigger price fluctuation. Either low volume or high volume can cause price changes at almost any scale. And we can apply all of those conclusions to both up days and down days.
You can also use the close location of the bars to add more dimensionality; simply apply
features['sign_of_close_loc'] = np.where( \
features['close_loc_1D'] > 0.5, \
1, -1)
to see how many bars have their close location above or below 0.5.
One thing that I don't really like in the pair plot is that all the plots involving close_loc_1D are condensed. It looks like the outliers are still there, even though I know I used one standard deviation as the boundary, which is a very low threshold, and 338 outliers were removed. I realized that because the location of the close is already a percentage, adding another percentage change on top doesn't make much sense. Let's change it.
def create_features_and_outcomes(i):
df = create_HLCV(i)
high = df[f'high_{i}D']
low = df[f'low_{i}D']
close = df[f'close_{i}D']
volume = df[f'volume_{i}D']
features = pd.DataFrame(index=prices.index)
outcomes = pd.DataFrame(index=prices.index)
#as we already considered the different time span,
#simple percentage change of 1 day used here.
features[f'volume_{i}D'] = volume.pct_change()
features[f'price_spread_{i}D'] = (high - low).pct_change()
#remove pct_change() here
features[f'close_loc_{i}D'] = ((close - low) / (high - low))
#predict the future with -i
outcomes[f'close_change_{i}D'] = close.pct_change(-i)
return features, outcomes
With pct_change() removed, let’s see how the cluster map looks like now.
corr_features = features.corr().sort_index()
sns.clustermap(corr_features, cmap='coolwarm', linewidth=1);
The cluster map makes more sense now. All four basic features have pretty much the same pattern. [40D, 60D], [2D, 3D] are paired together.
And in terms of the features' correlations with the outcome:
corr_features_outcomes.plot(kind='barh',title = 'Strength of Correlation');
The longer time scale features have weak correlations with the stock price return, while more recent events have more effect on the price returns.
With pct_change() removed from close_loc_1D, the biggest difference shows up in the pairplot().
Finally, the close_loc_1D variable plots in the right range. This illustrates that we should be careful with over-engineering; it may lead somewhere totally unexpected.
3. Percentile Ranking
According to Wikipedia, the percentile rank is
“The percentile rank of a score is the percentage of scores in its frequency distribution that are equal to or lower than it. For example, a test score that is greater than 75% of the scores of people taking the test is said to be at the 75th percentile, where 75 is the percentile rank.”
The below example returns the percentile rank (from 0.00 to 1.00) of traded volume for each value as compared to a trailing 60-day period. | https://towardsdatascience.com/data-transformation-e7b3b4268151 | ['Ke Gui'] | 2020-10-13 11:19:14.340000+00:00 | ['Machine Learning', 'Trading', 'Artificial Intelligence', 'Data Science', 'Pandas'] |
How to Impress With Your First Impression | How to Impress With Your First Impression
Be “you-centric”
Photo by Hồ Ngọc Hải on Unsplash
“You don’t get a second chance to make a first impression.” My soccer coach first presented this phrase to our team when we were 16-years old. He delivered an inspiring speech about how college coaches were going to start coming to our games, and we had to put on our best performance each time they came, because we never knew who would be watching. We may have opportunities to impress these coaches again, but we would not have a second chance at making a first impression.
As my search for playing college soccer heightened, after playing in front of coaches, the next step was to meet them in person. Coaches would invite you to their university for the day, and you would have what is considered an unofficial visit to their campus. Before my first visit, I remembered having a meeting with my own coach, in which he further explained the importance of this first impression. But this time, he went into detail about how to make this first impression flawless.
First impressions in relationships are quite important. People make snap judgments about your appearance, your body language, your posture, your tone, and of course your words. Whether it’s meeting new colleagues for the first time, going on a first date, meeting your significant others’ friends and family or meeting a college coach, it’s crucial to understand what goes into making a positive first impression. The following are some tips my coach shared with me that I always remember when I meet people. | https://medium.com/real-1-0/how-to-impress-with-your-first-impression-2fe5e8360cb6 | ['Jordan Gross'] | 2020-11-14 15:01:57.715000+00:00 | ['Motivation', 'Communication', 'Life Lessons', 'Relationships', 'Inspiration'] |
Chatbots- Connecting Artificial Intelligence and Customer Service | Every business revolves around customers and interactions carried out with customers. It is said that a customer is the boss of a business and every interaction with him counts! An infallible way of dealing with this pressing subject is the use of Chatbots. Chatbots are used to conduct an online chat conversation via text or text-to-speech and provide direct contact with a live human agent. With rising demands all over the world, they have gained immense popularity and are widely used in a myriad of industries to render a pleasant and uniform customer experience. They can be used to answer FAQs, handle customer queries and grievances, manage bookings, make recommendations, CRM and provide 24*7 customer support. They can be rule-based or have Natural Language Understanding.
Let’s look at some applications of Chatbots:
Accessible anytime
Handling Capacity
Flexible attribute
Customer Satisfaction
Cost Effective
Faster Onboarding
Work Automation
Alternate sales channel
Personal Assistant
Chatbots can be integrated with various platforms such as Google Dialogflow, Microsoft Bot Builder, Amazon Lex, RASA and Wit.ai.
Aim and Scope
Whether it is for placing orders or recommending products, most businesses today use Chatbots to provide an efficacious customer experience and obtain a competitive advantage.
Here we will look at the steps involved in building a chatbot to help customers streamline their orders for a pizzeria. Perusing menus and placing orders can often be a cumbersome and time consuming task. This Chatbot aims to rule out tedious steps of flipping through menus and offers a personalized and customized experience. It will recommend a particular dish to the users based on their choice of ingredients and help provide a smooth user experience.
What type of pizza should you order if you enjoy basil on your pizza? What are you most likely to relish if you love pineapple? Which pizza would be best suited for olive lovers?
The bot will instantaneously answer all these questions are more! It will welcome customers to a pizzeria and recommend them a particular type of pizza they are most likely to devour based on their preferred ingredients and toppings. It will also simply take orders from the user and be a promising expression of efficiency, availability, interactivity and customer loyalty.
Steps
We create the bot using Python and RiveScript. In order to train the bot, a dataset needs to be created. For the purpose of demonstration, I will be using a small dataset consisting of 50 records and 2 columns, "Pizza name" and "ingredients."
The first step is to import essential libraries we will need.
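The import cell is shown as an image in the original post; based on the tools used in the rest of the walkthrough, it plausibly looks like this:

import pandas as pd
from rivescript import RiveScript
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from flask import Flask, render_template, request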
RiveScript is a simple and user-friendly scripting language for Chatbots. It is a rule-based engine, where the rules can be created by us. These use a scripting metalanguage (simply called a “script”) as their source code.
The next step is to set up the bot dictionary.
Next, we will use two concepts:
Count Vectorizer
Cosine Similarity
Count Vectorizer is used to transform a given collection of texts into a matrix of token counts, based on the frequency of each word that occurs in the text.
For instance,
Data= [‘The’, ‘Bot’, ‘will’, ‘recommend’, ‘pizzas’, ‘for’, ‘the’, ‘customer’]
Cosine Similarity is a concept commonly used in recommendation systems. It is a measure of the similarity between two non-zero vectors of an inner product space. To understand it better, consider two points, P1 and P2, in a multi dimensional space. Mathematically, the lesser the distance between the two points, the more similar they are and as distance increases, the similarity between them decreases. Cosine Similarity depicts how similar the two points are by taking cosine of the angle between them. It ranges from -1 to +1. It compares how similar documents are by considering the arrays containing the word counts of the documents.
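To make both concepts concrete, here is a tiny self-contained example (my own, not from the original article) that counts the words in two sentences and measures how similar they are:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["The Bot will recommend pizzas for the customer",
        "the customer loves pizzas with basil and olives"]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)        # document-term count matrix

# cosine of the angle between the two count vectors
print(cosine_similarity(counts[0], counts[1])) # closer to 1 means more similar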
The next step is to create a function to get replies from the bot. If the value returned by the previous function is not ‘0’, the bot will recommend a particular type of pizza the user is most likely to enjoy.
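The original function is not reproduced here, so the sketch below is my own reconstruction of the described logic. The column names follow the 50-record dataset mentioned earlier ('Pizza name' and 'ingredients'); the file name and everything else are assumptions:

data = pd.read_csv("pizzas.csv")               # columns: 'Pizza name', 'ingredients'

vectorizer = CountVectorizer()
ingredient_matrix = vectorizer.fit_transform(data["ingredients"])

def recommend_pizza(user_text):
    # vectorize the user's ingredients with the same vocabulary as the menu
    user_vector = vectorizer.transform([user_text])
    scores = cosine_similarity(user_vector, ingredient_matrix)[0]
    if scores.max() == 0:
        return 0                               # no match; let the bot reply from its script
    return data["Pizza name"].iloc[scores.argmax()]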
The last step is to write the code for the Flask app.
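A minimal Flask wrapper around the bot, reusing the imports shown earlier, could look like the sketch below; the folder name, route names and template are assumptions rather than the exact original setup:

app = Flask(__name__)

bot = RiveScript()
bot.load_directory("rive")                     # folder containing the .rive files
bot.sort_replies()

@app.route("/")
def home():
    return render_template("index.html")

@app.route("/get")
def get_bot_response():
    user_text = request.args.get("msg")
    return str(bot.reply("localuser", user_text))

if __name__ == "__main__":
    app.run()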
This is what your RiveScript file will look like:
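The author's actual .rive file is shown as an image, so here is an illustrative brain in the same spirit, embedded as a Python string for demonstration (you would normally keep it in a .rive file and load it with bot.load_directory()):

# an example RiveScript brain, loaded from a string; all triggers here are my own
pizza_brain = """
! version = 2.0

+ hello
- Hello! Welcome to our pizzeria. Tell me an ingredient you love and I will suggest a pizza.

+ i like *
- Great choice! Let me find a pizza loaded with <star> for you.

+ recommend a pizza
- Sure! Tell me your favourite ingredient and I will recommend something.
"""

bot.stream(pizza_brain)
bot.sort_replies()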
You can customize it based on preferences or business requirements.
Lastly we must build a User Interface for our bot in order to create a personalized branded experience and enable efficient communication with customers to serve them better. | https://medium.com/analytics-vidhya/chatbots-connecting-artificial-intelligence-and-customer-service-d8efbc604e02 | ['Heena Rijhwani'] | 2020-12-06 10:38:11.357000+00:00 | ['Artificial Intelligence', 'Chatbots', 'Cosine Similarity', 'Customer Service', 'Count Vectorizer'] |
A Novel in Thirty Days: Drawing a Blueprint | A Novel in Thirty Days: Drawing a Blueprint
Let the preparations begin
This July, I’m taking another stab at fast-drafting a novel by participating in the controlled chaos that is Camp NaNo. In this piece, I discussed the mostly tangible preparations I’ve made, as recommended in Chris Baty’s book No Plot? No Problem. These include establishing a writing nest, planning when to write, and gathering the appropriate tools.
Today’s focus is on the intangibles critical to the success of such a mad dash to the noveling finish line, and most likely these will continue until approximately 11:59pm on May 30th.
First things first.
I already had an idea bubbling merrily on a back burner, so the most difficult step was finished. I just needed to put it front and center for the next month.
Then came the most critical factor of all: creating a cover for the novel I haven’t yet written.
Crickets chirping…
Wait, that’s not the first thing you do? Oh. Hmm. Well, I did that. Because reasons.
Anyways…
A writing exercise to get us moving
After briefly laying out the reasoning behind his one-week limit on prep time, chapter four of No Plot? No Problem! offers a warm-up writing exercise. This consists of answering the question, “What, to you, makes a good novel?”
“What, to you, makes a good novel?”
For the first half, Baty suggests making a quick list of anything you’ve noticed that consistently appears in books that you like, and saving it to refer to throughout the month. Why? “Because the things that you appreciate as a reader are also the things you’ll likely excel at as a writer.”
He calls it The Magna Carta, and he encourages incorporating as many of these elements as possible while developing your story.
This is what mine looks like:
Characters who are reasonably mature (or become so very quickly)
Characters who are quirky and irreverent
Complex and nuanced antagonists
Humor, or books that don’t take themselves too seriously
Romance that builds over time
Romantic partners that balance each other
Commitment and loyalty between partners
Situations that pull the rug out from under the MCs (Main Characters)
An honest portrayal of mental illness, as a facet of character
Unique settings and worldbuilding
Subverted tropes
Close third person POV (Point Of View)
Happy endings
For the second half, Baty tells us to “write down those things that bore or depress you in novels.” These are important to recognize, because: “If you won’t enjoy reading it, you won’t enjoy writing it.” Listing them clearly and referring to the list frequently will help keep you from accidentally including them in your novel.
“If you won’t enjoy reading it, you won’t enjoy writing it.”
He calls this list the Magna Carta II, the Evil Twin of Magna Carta I. Here’s mine:
Characters with no redeeming qualities, or who are insufferably immature
Miscommunication as a plot device
Mental illness as a plot device
Mental illness that is inaccurately or insensitively portrayed
Books that try too hard to be serious, such as most literary fiction
Insta-love, or love at first sight
Hate-to-love relationships, especially with characters who supposedly hate each other but still find each other sexually irresistible
Narration that distances the reader from the characters
Predictability, or over-reliance on cliches
Unsatisfying endings, especially involving the death of one or more characters or the end of a significant relationship
Getting to know the cast
Keeping in mind what we’ve learned from The Magna Carta and the Evil Twin, Baty gently leads us through fleshing out our characters, plot, setting, and POV.
Characters. My story will feature two main characters, Avery and Echo (they don’t have last names yet). The eight questions Baty suggests posing to your characters taught me a lot that I didn’t know about them.
How old are they?
What is their gender?
What do they do for work?
Who are their friends, family, and love interests?
What is their living space like?
What are their hobbies?
What were they doing a year ago? Five years ago?
What are their values and politics?
I’ve chosen not to include my answers to these questions. Otherwise, it would be so long I might as well just write the novel right here.
Plot, Setting, and POV
POV was an easy choice: I enjoy reading close third, so that’s what my novel will be. I have some bare-bones ideas for plot and setting. Those are next on my to-do list, so stay tuned!
A final note
Keep the concept of exuberant imperfection in mind as you make your own preparations. Remember not to get caught up in making every detail right, and just have fun with the process. That’s what NaNoWriMo is all about! | https://medium.com/write-well-be-well/a-novel-in-thirty-days-drawing-a-blueprint-2ca7f3986b49 | ['Rianne Grace'] | 2019-06-25 15:13:40.690000+00:00 | ['Advice', 'NaNoWriMo', 'Inspiration', 'Writing', 'Creative Writing'] |
Why Americans Want Polarization | “What we need in the United States is not division; what we need in the United States is not hatred; what we need in the United States is not violence or lawlessness; but love and wisdom, and compassion toward one another, and a feeling of justice toward those who still suffer within our country, whether they be white or they be black.”
These words were spoken to an audience in Indianapolis by Senator Robert F. Kennedy upon the assassination of Martin Luther King, Jr. The brief speech has been called one of the great public addresses of the modern world. Two months later, on June 6, 1968, Kennedy himself was assassinated while campaigning for the Democratic nomination for President of the United States.
The loss of these two extraordinary seekers of peace sent us reeling as a country. Most of us. There were Americans, though, who celebrated their deaths. At the time, and I remember it well, this felt shocking when it was revealed. If we own and honor our humanity, surely we cannot rejoice at the death of anyone.
But on recalling this reaction, I am also reminded of the behavior of people in the Middle East who danced upon learning that the Twin Towers had fallen in New York City. It shocked us to see them do this on news reports, and that shock held an element of terror within it. Was this truly what we are made of? Is this what those who delighted in the death of King and Kennedy were made of? And those who had shown gladness, for there were some, at the terrible assassination of President John F. Kennedy on November 22, 1963?
We all share the same origins, no matter what our ethnic, religious, or national background. There really is only one race, genetically speaking, according to research presented by National Geographic in their recent message that “There’s No Scientific Basis for Race — It’s a Made-Up Label.” So why do we persist in seeking division, in praising it, in coveting it, in living it? Why do we allow and accept polarization as a state of mind and heart?
Polarization is expressed through prejudiced behavior —anger, revenge, bitterness, division, and absolutism. All are a product of our ignorance and refusal to acknowledge our fundamental connection with each other. This polarization of viewpoints and allegiances arises out of one primary need — our individual desire to feel safe. We so often experience deep feelings of inadequacy for so many reasons, and these feelings increase when we encounter anything that threatens to change our world or life in any way — for immediately, we feel unsafe. We lose our bearings. Fear is the driving force in this. We believe we can bury this fear, keep it out of sight of mind and heart, if we lash out at or shun or denigrate other human beings, basing our actions on the illusion that there are differences between “us” and “them.”
“Oh, no, those differences exist!” — so says our current mantra. We WANT to believe this is true. We may, sometimes, choose to deal with this information in a civil way, but we still opt for believing in division. We still believe we all do not share common ground as human beings.
The result is a polarized country that appears to thrive at so many levels on the public revelation of those differences — in political speeches, in news media, in movies and television, or in the simple exchange of points of view over the dinner table.
Someone told me yesterday — a person who comes from a family that is politically divided — that no one dares to bring up politics when they gather together. The fierce feelings that sweep in are instantaneous. The dinner table becomes a battleground.
This scenario is repeated everywhere in our country now whenever the opportunity arises — in debates about community zoning, in the workforce, in churches, at a football game, in a child’s playground, or at a world summit. No one in these situations suggests having a lively conversation to talk things over. No one is listening to what anyone who disagrees with them has to say, nor cares what that person thinks.
So what is going on? Why do Americans want polarization? Because they do want it, no question, or we wouldn’t have it.
Again, it has to do with feeling safe. And people are driven by what is called the fight-or-flight response to conflict. Our amygdala controls this, a small almond-shaped set of neurons located deep in each temporal lobe of the brain — it has a key role in how we process our emotions. It is one of the oldest, primordial regions of the brain, and the amygdala is activated when we are confronted by actual physical or perceived danger. Hundreds of thousands of years ago, this could mean escaping the attack of a woolly mammoth or a saber-toothed tiger. Today, this small set of neurons is activated when we feel attacked because someone is talking to us about a difference of opinion or belief. Their words are felt as an attack on our safety just as strongly as if we were in actual physical danger. And when those words are spoken, we revert back to the hundreds of thousands of years of collective response — seek cover, or gear up for battle — flee, or fight. There is no in-between. And by refusing to talk to people who do not share our exact views, we protect ourselves — we evade the danger.
What we believe deep down is that we are evading the danger of dissent because it has the frightening power to change our minds.
Dissent is the central power given to us by our forefathers who wrote and signed the Constitution of the United States. They knew from their experience in Europe with monarchies and oligarchies that without freedom of speech and the right to dissent, we are doomed to an essentially totalitarian government. It is in the freedom and ability to change our minds, to allow differences of opinion, thought, outlook, and interest, that moves us forward, that gives us the chance to re-think what we are doing, and gives us the energy and will to find a better way.
Polarization is the easy path.
With it, we no longer have to re-think our own behavior — we stick with “our crowd” so we do not have to reconsider our outlook at all, ever. We are safe. NO DANGER.
Yes, we can refuse to allow change. Yes, we can refuse to seek and allow common ground with others who are different from us in some way. Yes, we can cling to this outlook because it is our safety net, again and again and again…
But if we remain in our safe world, it eventually becomes untenable. We stagnate, cease to grow. Such a state can never remain our ultimate path for long, because human beings are always in search of discovering who they are. We hunger to go beyond our limitations, even if that hunger is just a whisper in our hearts and minds.
Right now, you are the product of 4.543 billion years, the age of the Earth. It took that amount of time to create you as you are this second. Everything that happened over those eons has led to this moment in time — you.
In fact, you are the culmination of an even greater duration of time because everything, every element that composes our bodies, comes originally from the stars — we are indeed made of “star stuff,” as the astronomer Carl Sagan said.
Look closely at the image above taken by the Cassini Mission in 2017. There is our Earth, a pinpoint in the vast reaches of space. Are we meant to spend this exceedingly brief span of time we have here on this tiny planet— less than a moment in cosmic time — in stagnant, polarized safety, or in the willing exploration, with joy and wonder, of all there is for us to encounter?
It is a choice. | https://regina-clarke7.medium.com/why-americans-want-polarization-2b9fbc259224 | ['Regina Clarke'] | 2018-09-15 22:43:04.449000+00:00 | ['Politics', 'Humanity', 'Self-awareness', 'Choices', 'Safety'] |
What I learned writing for 1 hour every day for 60+ days | What I learned writing for 1 hour every day for 60+ days
Prioritizing process over inspiration
Photo by Glenn Carstens-Peters on Unsplash
On day number eight of sheltering-in-place, I was in a dark place. Over the previous week, I had lost all motivation to do just about anything but read articles about COVID-19. It didn’t seem like there was a reason to do anything. What’s the point? When the world can stop on a dime and change so drastically, so quickly, what hope do we have of making our plans come to fruition?
Everything felt meaningless. After a week of existential crisis, I decided it was time to make meaning out of the circumstances. Hey — wouldn’t it be a great story if I took this opportunity as the impetus to start something that changed my life? What if in 1, 5, or 10 years, I can point back to this period as the moment that something changed?
I also decided that I didn’t have to finish a book or complete some grand project during quarantine to make meaning out of isolation, and in fact putting the pressure on myself to do so would more likely result in crippling anxiety. Instead, I told myself that I wanted to complete three tasks, every day. If I completed these three tasks, no matter what else I did or did not do that day, I would consider it a success. To hold myself accountable, I got out a whiteboard and marker and wrote down my tasks vertically: 1) Meditate for 20 minutes; 2) Write for 1 hour; 3) Floss!
It has been 63 days since that first day, and I haven’t broken the streak for any of these three tasks yet. This is by far the most consistent I’ve ever meditated or flossed, but the biggest difference has been in my writing habits. Previously, I would only write when I “was in the mood,” or on days where the words were coming easily to me. If I wasn’t feeling it, I wouldn’t write. There was always an excuse.
I’ve written over 100 blog posts, a chapter in the book Finding Genius, and about 8,000 words of a sci-fi novel that I originally started in 2016. But all of this was written when I felt like it. I am thankful that outside of academic settings, I have never had a deadline for my writing, and have never had to rely on writing to make money. This is incredibly freeing, and I enjoy writing as a hobby and practice instead of for a living, which I fear would take much of my enjoyment out of it, and add a lot of anxiety and stress. But it also means that I’ve never needed a writing practice or discipline, and have had the luxury to simply write only when I felt like it. So, writing for an hour every day no matter how I felt was a new experiment for me.
There have been plenty of those 63 days where writing for an hour was the last thing I wanted to do (the day that I’m writing this sentence is one of those days). But, my task is not to write a book, or write something I’m going to publish — just to write anything and do nothing else for one hour. While I have made some progress on the novel, I also have a very, very long “scratch sheet,” where some days I just write non-sense that no one will ever see for an hour. I don’t enjoy those days, but they earn me a tally on my white board, and keep the streak alive.
Some days, I’ve been able to go from frustrated and blocked to finding my flow after 20 or 30 minutes of writing garbage just to get through. The biggest win has been allowing myself to write not good-like. Previously, if I didn’t feel like what I was writing was good, I would stop. But making the agreement with myself to write for an hour had nothing to do with quality, just with process. So I’ve written a whole bunch of garbage, and I’m okay — and even happy with myself for doing it. Because sometimes the momentum out of that garbage is the seed of an idea, or a sentence or concept that I really like. And if I didn’t wade through the garbage to get there, I probably would have never found it.
Now, I haven’t finished a novel, haven’t put that much more of my writing out into the world, and so even though I’m writing much more often than I ever had before, my public output has not increased at the same rate. Again, I’m okay with this. The exercise was not meant to increase my public output, but to increase my discipline and process. Knowing that I can make myself sit down and write at any time, and that I don’t have to “be in the mood,” to do so is empowering knowledge to have, and removes my most common excuse to avoid writing. It also means that the writing I do make public is (at least relatively) of higher quality.
Having to write for a full hour also motivates me to do it earlier in the day, so that I can feel as though my work is done. Some days I’ve put it off until nighttime, and have to slog through as I’m tired and miserable, regretting not finishing earlier. On these days, the writing is not very good, because I’ve been having anxiety leading up to it, believing that it won’t be good because I put it off until I was tired in some sort of evil-recursive-writing-logic loop. If I write in the early afternoon, I feel like I’m ahead of schedule and over-achieving. Once I finish, I feel relief, accomplishment, and freedom!
Interestingly, I haven’t really written for more than one hour since I’ve written for at least one hour every day. Before this discipline, if I was “in the mood” and working on something specific, I may write for 2–3 hours or more straight. That hasn’t been the case since I’ve been writing every day. Once my timer goes off, I breath a sigh of relief, finish my thought, and save the rest for the next day. I hope this changes, that some days I want to keep going past my minimum required writing time, but it hasn’t happened yet.
Extrapolated out, if I keep my 1 hour / day minimum of writing going, I would hit 10,000 hours (and consecutive days) when I’m 57. This may sound like a long way out (I’m 30, FYI), but with my previous cadence (writing only when I felt like it), I might never get to 10,000 hours. It’s the Chinese proverb — the best time to plant a tree was 20 years ago, the second best time is today.
I will continue to write for one hour a day as long as I can. Some of those days are going to be fucking magical, and I'm going to write the best dialogue or sentence or paragraph that I've ever written. Others are going to be me sitting in silence, frustrated that I can't force anything out, looking at the clock every few seconds. But those magical days can only happen on days that I write, so if I write every day, I'll have more magical days. What I knew before, but didn't fully appreciate, was that having a writing practice and discipline is a way of increasing the velocity of opportunity. It's like buying lottery tickets, but the tickets are free (costing only your time), and you get better lottery tickets every time you buy one.
If you’re currently of the “I only write when I’m in the mood” legion of writers like I used to be, I think it’s worth starting a daily habit to see how your writing and attitude about writing changes. Maybe it’s just 30 minutes for 30 days. Anything that starts a habit in which you’ll be forced to write when you really don’t want to. You’ll likely see that you still can, and that at times, the stuff you write when you don’t want to be writing is actually pretty good. I didn’t take time to specifically write this article. I took time to write, and wrote this article. It wouldn’t exist if I didn’t have to be writing anyway. For me, that’s a lesson that good things can come from processes even without inspiration or flow.
Thinking back to day one when I started this exercise, one of the motivators for me was the narrative it would create. What if in five years, I can say that I have written for one hour every day since March 23rd, 2020? That would be an impressive feat that would give some personal meaning to the otherwise meaninglessness of a global pandemic, and probably have a pretty large impact on my life. Well, today I’ve written for at least one hour for 63 days straight. I think that’s a pretty good start. | https://medium.com/the-raabithole/what-i-learned-writing-for-1-hour-every-day-for-60-days-6e81c9c0e29c | ['Mike Raab'] | 2020-05-27 14:04:09.800000+00:00 | ['Life Lessons', 'Media', 'Writing', 'Personal Development', 'Writing Tips'] |
Game Level Design with Reinforcement Learning | Game Level Design with Reinforcement Learning
Overview of the paper “PCGRL: Procedural Content Generation via Reinforcement Learning” by A. Khalifa et al.
Procedural Content Generation (or PCG) is a method of using a computer algorithm to generate large amounts of content within a game, like huge open-world environments, game levels, and many other assets that go into creating a game.
Today, I want to share with you a paper titled “PCGRL: Procedural Content Generation via Reinforcement Learning” which shows how we can use self-learning AI algorithms for procedural generation of 2D game environments.
Usually, we are familiar with the use of the AI technique called Reinforcement Learning to train AI agents to play games, but this paper trains an AI agent to design the levels of a game. According to the authors, this is the first time RL has been used for the task of PCG.
Sokoban Game Environment
Let’s look at the central idea of the paper. Consider a simple game environment like in the game called Sokoban.
Sokoban game level.
We can look at this map or game level as a 2D array of integers that represents the state of the game. This state is observed by the Reinforcement Learning agent that can edit the game environment. By taking actions like adding or removing certain elements of the game (like a solid box, crate, player, target, etc.), it can edit this environment to give us a new state.
The PCGRL Framework
Now, in order to ensure that the environment generated by this agent is of good quality, we need some sort of feedback mechanism. This mechanism is constructed in this paper by comparing the previous state and the updated state using a hand-crafted reward calculator for this particular game. By adding appropriate rewards for rules that make the level more fun to play, we can train the RL agent to generate certain types of maps or levels. The biggest advantage of this framework is that after training is complete, we can generate practically infinite unique game levels at the click of a button, without having to design anything manually.
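To make the feedback loop concrete, here is a tiny, hypothetical sketch of the idea in Python (the tile ids, the quality rules and the step function are illustrative inventions, not the paper's actual gym-pcgrl API):

import numpy as np

# Illustrative tile ids for a Sokoban-like map (not the paper's encoding).
EMPTY, WALL, PLAYER, CRATE, TARGET = range(5)

def level_quality(level: np.ndarray) -> float:
    # Hand-crafted reward rules, e.g. exactly one player and as many crates
    # as targets; a real reward calculator encodes playability rules.
    score = 0.0
    score -= abs(np.count_nonzero(level == PLAYER) - 1)
    score -= abs(np.count_nonzero(level == CRATE) - np.count_nonzero(level == TARGET))
    return score

def step(level: np.ndarray, action):
    # action = (row, col, new_tile): the agent edits one position of the map
    # and is rewarded by how much that edit improved the level.
    row, col, new_tile = action
    new_level = level.copy()
    new_level[row, col] = new_tile
    reward = level_quality(new_level) - level_quality(level)
    return new_level, reward

level = np.zeros((5, 5), dtype=int)          # an empty 5x5 map
level, reward = step(level, (2, 2, PLAYER))  # the agent places the player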
The three proposed methods for traversing and editing the game environment by the RL agent.
The paper also contains comparisons between different approaches that the RL agent can use to traverse and edit the environment. If you’d like to get more details on the performance comparison between these methods, here is the full text of the research results.
[Source] Different games tested for level design via the trained RL agent.
General Research Direction
While the games that were used in this paper's experiments are simple 2D games, this research direction excites me because we can build upon this work to create large open-world 3D game environments.
This has the potential of changing the online multiplayer gaming experience. Imagine if, at the start of every multiplayer open-world game, we could generate a new and unique tactical map every single time. This means we do not need to wait for the game developers to release new maps every few months or years; we can generate them right within the game with AI, which is really cool! | https://medium.com/deepgamingai/game-level-design-with-reinforcement-learning-52b02bb94954 | ['Chintan Trivedi'] | 2020-08-08 04:59:46.541000+00:00 | ['Game Futurology', 'Artificial Intelligence', 'Game Development', 'Reinforcement Learning', 'Machine Learning'] |
A Kind Manifesto | Photo by Sandrachile . on Unsplash
I believe in kindness. My faith in kindness is unlike the way people generally debate atheism. Is there a supreme being or not? Kindness does not float on an invisible realm like in some Platonic world of forms. Kindness exists only insofar as people are kind to one another.
My belief in kindness relates to its use. I trust in the ability of kind actions to change lives, organizations, and even the world. Maybe that sounds naive these days. Everything feels cruel and indifferent — wars, terrorism, environmental degradation, politics, economics, social media, refugee crises, racism, family life, and healthcare. Kindness seems to be in short supply.
I’m guessing, though, that you experience several acts of kindness every day. I do. These acts of kindness often go unnoticed, but they exist nonetheless. A hug from a loved one. Someone holding the elevator. A thank you from a coworker. Although kindness rarely shows itself on the same scale as cruelty, these kind deeds can add up to something substantial.
Kind acts can change the course of our day and our lives. Small acts such as genuine words of encouragement aren’t so small. I know more than one person who did not commit suicide because of the kindness of someone simply willing to listen for a few minutes.
I don’t know what would tools or methods we could use in order to measure kindness. How could anyone tally all of the kind acts done in a day? We could not calculate the total time given to kindness? There’s no practical way to create a Gross Kindness Quotient, and determining the magnitude of a single kind act would be impossible.
Kindness takes countess forms. In The Power of Kindness, Piero Ferrucci, describes 18 categories of kind acts. Some — like forgiveness, empathy, and generosity — are obvious. He suggests that we also show kindness in less apparent behaviors such as through honesty, respect, and flexibility.
You might compile your own classifications of kind deeds, and I would add that tone is essential to kindness. We all know a kind tone when we hear it, and tone can make a world of difference. Consider honesty. By itself, honesty is not necessarily kind. Someone can be truthful in a vicious and spiteful way. A kind tone makes the truth palatable, and a kind tenor begins with intention. If we want to practice kindness, we have to work at it. | https://kb719.medium.com/a-kind-manifesto-f2ad8e661d4d | ['Disabled Saints'] | 2019-01-31 17:44:07.505000+00:00 | ['Self', 'Education', 'Mental Health', 'Kindness'] |
4 Psychology Hacks Google Used on Employees to Change Bad Habits | 4 Psychology Hacks Google Used on Employees to Change Bad Habits
And how you can apply these
Photo by Abhinav Goswami on Pexels
We all have bad habits that interfere with our goals.
Maybe we feel like we just can’t stop eating junk food but want a ‘summer bod’. Perhaps we procrastinate instead of putting in the work to build up our side hustle. When we have a goal in mind, taking new action, in the beginning, can feel like a breeze.
But it’s sustaining a change in behavior long enough for it to become a good habit, where most of us fall off.
When Google set out to get employees to live healthier lives it was confronted with the challenge of changing their bad eating habits.
7 weeks later, the findings in the New York office alone were impressive. Employees consumed 3.1 million fewer calories. In 2018, employees were found to drink nearly 5 times more bottled water than sugary drinks. And today 2,300 breakfast salads are served daily.
You may not necessarily have the goal of living a healthier lifestyle. But the principles from human behavior used by Google, are lessons we can all use in making and sustaining good habits that support our goals.
Hack 1: Create extra steps for a bad decision & make the right decision easy
One thing Google did was experiment with the setup of its coffee stations.
At station A, snack bars were conveniently put next to the coffee stations. At station B, the snack bars were placed across the room, just 4 or 5 extra steps away.
In the end, it was found that 20% of employees grabbed a snack from station A. At station B, this dropped to 12% of employees grabbing a snack.
Behavioral lesson:
Humans are wired to go for the easier choice or the path of least resistance. It’s a survival instinct.
In the case above, it was too inconvenient for those in station B to walk a few extra feet, and too easy for those at station A to grab a snack next to them.
How to apply this yourself:
If you would like to create a good habit of say, actually finishing books from your forever growing pile of unread books, try placing one of them beside your bedside table. This will create easy access and convenience to read every day.
If you want to stop the habit of being on your device before sleeping because this messes up your sleeping schedule, place all devices in another room an hour before sleeping. For extra effectiveness, place the devices annoyingly in a closet for example, or somewhere inconveniently high or low to reach.
Even if you’re itching to check your devices, you probably won’t want to go through the effort of getting out of bed, walking into another room, reaching up, or getting on your knees just to get these items.
Hack 2: Make the right choice more attractive
When Google tried to get employees to drink more water they began to place transparent “spa water” style canisters everywhere. They also added colorful fruit like strawberries and slices of lemon.
Behavioral lesson:
When we’re attracted to something, we’re likely to gravitate towards it. We’re even more inclined if attention-grabbing details like bright colors catch our attention.
In the case of Google employees, the visual of flavored water was far more attractive than plain water.
How to apply this to yourself:
Will owning a cute outfit or new pair of shoes motivate you to go to the gym to 'show 'em off'? Tap into that vanity if it means you'll show up at the gym to work out.
Be honest, if your home office looks better, would it motivate you to get work done in such an awesome looking place? Then decorate that home office!
Consider your other senses too, like your sense of smell. If you know that you need to get in bed by a certain time, it can be attractive to see the scented candle on your bedside table. The thought of smelling your favorite fragrance may attract you towards your bed when the needed time to rest comes.
Hack 3: Hide temptation and have good decisions clearly visible
To prevent unnecessary snacking, junk foods like chocolate and chips weren’t banned from the workplace but were hidden. Employees knew they still existed, but they were at the back of kitchens or in opaque containers.
Behavioral lesson:
Out of sight, out of mind.
“Visibility is extremely important. Whatever you see first is what you’re likely to start thinking about.” — David Just, Ph.D., an associate professor of behavioral economics at Cornell University.
How to apply this to yourself:
If you find yourself slowly getting addicted to playing your Nintendo Switch every evening, try hiding it away so you’re not tempted to play it every time you see it.
If you have the goal of drinking more water, try having a water bottle next to you as a constant visual reminder to drink up. If you need to study for an important test, have your study materials readily within reach.
Hack 4: Make the right choice more enjoyable
It’s not only kids who struggle to eat their vegetables. Google created not only an abundance of healthy plant-based meals to choose from but made sure these options were tasty too.
Behavioral lesson:
Google realized:
What motivates people to engage and stick with virtuous patterns of behavior has less to do with all the logical reasons they should and more to do with how much the person enjoys doing that virtuous thing — whether that’s going to the gym or eating their vegetables. — Jane Black, How Google Got Its Employees to Eat Their Vegetables
How to apply this to yourself:
The next time you have something that you need to do, instead of forcing yourself to do it and ‘hating it throughout’, brainstorm and ask yourself, “How can I make this process more enjoyable?”
If you need to get cardio done but hate running with a passion, maybe you should try a fun dance class instead to burn those calories. If cleaning out your apartment regularly is a chore, try playing your favorite music in the background.
When I lead tours, a fellow guide of mine used to make a game out of seeing how many names she could memorize as a personal challenge. | https://medium.com/psychologically/4-psychological-hacks-google-used-on-employees-to-change-bad-habits-5334ce181e6c | ['Willda Atienza'] | 2020-11-23 17:11:42.217000+00:00 | ['Life Hacking', 'Personal Development', 'Self Improvement', 'Habits', 'Psychology'] |
Seek Out One of These to Find the Missing Answer to Your Goals | What Is a Proper Mastermind?
It’s all in the title. A mastermind is a way to join multiple minds together to create something greater than any individual part.
I often tell people my writing is a collaboration and it’s true. Behind the scenes are other writers, entrepreneurs, publication owners, editors, superfans and web developers who all contribute in some way to my work.
A mastermind helps you find the right people. Because there are people all over the internet and the screening process will take the rest of your life.
Here are the features of a proper mastermind.
Heavy curation
Anyone can join a Facebook Group, but not everyone can join a mastermind.
Filtering the noise in your life helps you reach mastery. The people who form a mastermind are normally curated down from a bigger list. A mastermind is typically made up of less than 100 people. (The best ones I’ve seen contain thirty or less.)
Every trait of the people who join is taken into consideration.
Application process
There is an application process via a Google Form that requires you to talk about yourself in detail. You note down your goals, your experience, what you do for work. Everything about you is put under a microscope and up for debate by the owner and members of the mastermind.
The point of the application process is not to discriminate but to ensure that the right mix of people with similar interests are put together.
There is a mastermind for everybody. You just have to find the right one for you.
It costs money
A mastermind is serious. It costs money. My mentor’s mastermind is thousands of dollars every year to join. I initially thought it was a huge waste of money until I attended. Only then did I appreciate the power of curation and the price tag that comes with it.
You don’t appreciate something that is free.
The more money you have to give up, the more you’ll take the mastermind seriously.
It’s run by someone who cares
The leader of the mastermind is important. If their goal is only to further their own interests, then the mastermind will typically die a slow death.
The point of a mastermind is for everybody to grow together. There is no supreme leader that everybody worships. My mentor who runs a mastermind is obsessed with the members.
He does the best he can every week to serve their interests. Most of all, he celebrates the wins of people in the mastermind, which shows others what is possible so they too can have their moment.
There is one thing everybody has in common
The mastermind I’m in is full of writers. We all have similar goals: to write better, to write more, to touch people’s hearts with emotion, to inspire others, to earn a living from writing.
A common theme is what makes a mastermind great. Find your tribe by thinking about your interests. | https://medium.com/the-ascent/seek-out-one-of-these-to-find-the-missing-answer-to-your-goals-25ada50ff6f6 | ['Tim Denning'] | 2020-08-03 19:01:01.075000+00:00 | ['Life Lessons', 'Self Improvement', 'Life', 'Learning', 'Productivity'] |
Ruta de Aprendizaje Machine Learning en Español — Parte 2 | Written by
Data Scientist | DataEngineer | Software Developer | Electronic Engineer. I’m Jesus’s Follower | Don’t hesitate to AMA. Let’s take a coffee ☕ and enjoy life. | https://medium.com/colombia-ai/ruta-de-aprendizaje-machine-learning-en-espa%C3%B1ol-parte-2-fbe789869129 | ['German Andres Jejen Cortes'] | 2019-06-18 04:07:44.341000+00:00 | ['Gradiente', 'Backpropagation', 'Python', 'Learning Path', 'Machine Learning Ai'] |
Algebraic Data Types in Python | An Algebraic Data Type (or ADT in short) is a composite type. It enables us to model structures in a comprehensive way that covers every possibility. This makes our systems less error-prone and removes the cognitive mental load that comes when dealing with impossible states.
Motivation
Programmers who work in statically typed languages that have pattern matching are most likely using ADTs in their day to day work. If you’re not one of them, why should you care? I’ve decided to write about ADTs because:
Applicability — I was curious about the applicability of ADTs in dynamic languages like Python.
— I was curious about the applicability of ADTs in dynamic languages like Python. System understanding — categorizing certain parts of a problem in terms of ADTs leads to a more structured (and deeper) understanding of how our systems behave.
— categorizing certain parts of a problem in terms of ADTs leads to a more structured (and deeper) understanding of how our systems behave. Explicit design — Once we model parts of our system as ADTs we can embed certain architectural decisions into our design.
Defining ADTs
To be honest, I think there are already a bunch of good resources out there that define ADTs way better than I can. The reason I’ve decided to add this section is to make this post a complete reference (and not require us to jump from one resource to another).
Informally, ADTs:
Are a way to declare concrete, recursive and abstract structures.
Define which values and what variations are possible for these structures.
Are a composition of other types (which we expand on next)
ADTs are a composition of these types: Product and Sum Types. Product types define which values exist in the type definition while Sum types define which variations are legal for an ADT.
The last paragraph may have been a bit theoretical so let’s imagine a system that supports two kinds of users: authenticated and anonymous users. An authenticated user has an ID, an email, and a password. An anonymous user only has a name. Let’s represent these definitions with an ADT (I’m also using Scala for the Scala developers among us).
A Scala implementation:
A Python implementation:
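As a rough sketch (not necessarily the exact snippet), the Python version can be expressed with dataclasses for the two Product types and a Union alias for the Sum type:

from dataclasses import dataclass
from typing import Union

@dataclass
class AuthenticatedUser:
    id: int
    email: str
    password: str

@dataclass
class AnonymousUser:
    name: str

# The Sum type: a User is either authenticated or anonymous, nothing else.
User = Union[AuthenticatedUser, AnonymousUser]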
The examples involve 3 types: AuthenticatedUser, AnonymousUser and User. AuthenticatedUser and AnonymousUser are the Product Types while User is the Sum Type (which is why User is explicitly mentioned in the Python example)
Product Types
Define the fields that the structure has. AuthenticatedUser is a Product type because it has an ID, an email, and a password. AnonymousUser is a Product type because it has a single name value. Although not mentioned in the previous example a Product type can have 0 values (we'll look into an example of this later on).
Product types define how many possible variations of AuthenticatedUser and AnonymousUser can exist in the system. We often refer to the number of possible variations as an Arity.
Sum types are the more interesting of the two and are meant to define all the valid variations of a type. In the previous example, User is a Sum type because it can either be an AuthenticatedUser or an AnonymousUser. In the User example, a User must be either anonymous or authenticated and nothing else.
We use the term Sum type to define how many possible variations of a type can exist in the system. In the User example, User has 2 possible variations: authenticated or anonymous (an Arity of 2).
ADTs Examples
Before we discuss the reasoning behind ADTs, it's probably better if we look at a few examples:
ADTs are everywhere; the following example shows that ADTs can even represent primitive types (I find it interesting even if the example is kinda useless)
Option allows us to write functions that either return a value or return nothing. What’s nice about Options is that we can make the optional return value explicit via type annotations (it has other advantages but they’re irrelevant for now).
It’s possible to also model operations as structures
Events are also possible candidates for ADTs
We can even represent a closed set of possible states. The following example shows how we can use an ADT to model the possible states of a Circuit Breaker (we can easily add a half-open state if we need to)
We can also use ADTs in Javascript and in React
The Reasoning Behind ADTs
One of the main ideas behind ADTs is that code that operates on an input ADT can be a total function. It knows in advance which variations are possible and can return a value for each of these variations. This has many advantages like code that is testable, easy to reason about, and deterministic. We can also use it as part of a functional core.
One implication of pushing towards total functions is that we have to know in advance all the possible variations of the ADT. If we had incorrectly defined an ADT to be inherently open for extension, it would be impossible to guarantee that functions that operate on it will indeed be total functions. When we define an ADT we are explicitly stating that our structure is inherently closed and we don't expect it to frequently change (and in some cases never change).
There’s a big difference between “never change” and “don’t often expect it to change”. ADTs that are fundamentally closed like Option and boolean will likely never change. However, some ADTs will likely need to change sometime in the future (hopefully not often though). When this happens we want to have a fail-fast mechanism that prevents errors from creeping throughout the system.
Data Types & Operations Placement
When coming from an object-oriented background it may be tempting to place operations (or behavior) on the ADT and not separate them. There are, as always, tradeoffs for both options. This is actually a well-known problem called The Expression Problem. Uncle Bob also discusses this topic in one of his blog posts.
Uncle Bob summarizes the problem quite well IMO:
Adding new functions to a set of classes is hard, you have to change each class. Adding new functions to a set of data structures is easy, you just add the function, nothing else changes. Adding new types to a set of classes is easy, you just add the new class. Adding new types to a set of data structures is hard, you have to change each function. Uncle Bob Objects & Data structures
We can apply Uncle Bob’s guidelines and conclude that since ADTs are inherently closed we should aim to separate the operations from the ADTs.
Operating on ADTs
Going back to our Expression example (not to be confused with the Expression Problem), when using Python (or a similar language) the simplest way to operate on Expression will be to use some form of type checking:
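A rough sketch of what that looks like (the variants shown here, Literal, Multiply and Divide, follow the ones mentioned in the text; the original example may contain more):

from dataclasses import dataclass
from typing import Union

@dataclass
class Literal:
    value: float

@dataclass
class Multiply:
    left: "Expression"
    right: "Expression"

@dataclass
class Divide:
    left: "Expression"
    right: "Expression"

Expression = Union[Literal, Multiply, Divide]

def evaluate(expression: Expression) -> float:
    # Every variation is checked explicitly; adding a new Expression type
    # silently requires updating every function written this way.
    if isinstance(expression, Literal):
        return expression.value
    if isinstance(expression, Multiply):
        return evaluate(expression.left) * evaluate(expression.right)
    if isinstance(expression, Divide):
        return evaluate(expression.left) / evaluate(expression.right)
    raise ValueError(f"unsupported expression: {expression!r}")

print(evaluate(Multiply(Literal(3), Divide(Literal(10), Literal(2)))))  # 15.0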
In my opinion, the real concern here is for our system’s safety. Although we’re not expecting new types of Expression , it's possible that requirements will change (as they always do) and we will need to add a new Expression type (or more). When this happens, we will need to fix every code section that operates on Expression . This is very error-prone (in fact, when I initially wrote this example I forgot Divide and when I added it, I forgot to update the isinstance checks to match the new variation 🤦♂️)
If isinstance is not a good strategy then what are our options? It turns out Python has an ADT library that takes an interesting approach.
The Python ADT Library
This library tries to somewhat mimic pattern matching in Python by generating a match method for each ADT. Let's start by using it to generate an Option ADT.
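The general shape looks roughly like this (a sketch based on the library's decorator-style API; the exact case declarations and the keyword names expected by match are my approximation, so check the library's documentation):

from adt import adt, Case

@adt
class Option:
    NONE: Case
    SOME: Case[str]

def parse_name(raw: str) -> Option:
    cleaned = raw.strip()
    return Option.SOME(cleaned) if cleaned else Option.NONE()

# match forces us to handle every case; leaving one out fails fast.
greeting = parse_name("  Alice  ").match(
    none=lambda: "Hello, stranger",
    some=lambda name: f"Hello, {name}",
)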
Let’s go back to our Expression problem. This is how we would represent Expression with the ADT library and process it:
match achieves several interesting things:
In order to use the wrapped value, we are forced to deal with all the possible outcomes (SOME and NONE, or the different Expression Sum types), which means that functions that operate on the ADT (parse_name or evaluate) are total functions.
The library automatically generates the match method. It verifies that we have a function to handle all the possible variations of the ADT. In our example, if we forget to write the lambda for SOME, NONE, LITERAL, MULTIPLY, etc., we will get an "Incomplete pattern match" error.
An ADT can be recursive (Expression). Processing a recursive ADT may also require recursive processing.
These qualities cover our initial requirements. They allow us to write total functions that operate on the ADTs, and they fail fast in case we do end up adding new variations of the type.
Summary
ADTs help us to create readable code that is predictable and is also easier to reason about. We use them to represent structures that are inherently closed but can seldom change. It's for this reason that we tend to separate the operations from the actual structure and use total functions to operate on them. In case a new variation emerges, we want to fail as soon as possible.
We’ve looked at how we can use Python’s ADT library to operate on ADTs. It’s possible to extend ADTs in such a way that they’re easily composable and even enforce certain constraints on the way we process them (constraints liberate, remember?). Maybe these are subjects for a future post?
In my opinion, the real benefit of thinking in terms of ADTs is that it truly improves the design of the system, it makes us think about which parts of our system are more likely to change than others and it encodes this understanding in the code (regardless of the language we’re using).
I’d love to get some feedback, improvement suggestions, and ideas for future posts. Feel free to reach out.
Further Reading | https://medium.com/swlh/algebraic-data-types-in-python-f24456d72f0 | ['Gideon Caller'] | 2020-12-21 20:53:13.003000+00:00 | ['Python', 'Adt', 'Quality Software', 'Functional Programming', 'Programming'] |
The Joy of Neural Painting | Our Implementation of a Neural Painter in Action.
The Code: Our implementation can be found at this Github repo: https://github.com/libreai/neural-painters-x
I am sure you know Bob Ross and his program The Joy of Painting where he taught thousands of viewers how to paint beautiful landscapes with a simple and fun way, combining colors and brushstrokes, to achieve great results very quickly. Do you remember him teaching how to paint a pixel at the time? of course not!
However, most of the current generative AI Art methods are still centered to teach machines how to ‘paint’ at the pixel-level in order to achieve or mimic some painting style, e.g., GANs-based approaches and style transfer. This might be effective, but not very intuitive, specially when explaining this process to artists, who are familiar with colors and brushstrokes.
At Libre AI, we have started a Creative AI initiative with the goal to make more accessible the advances of AI to groups of artists who do not necessarily have a tech background. We want to explore how the creative process is enriched by the interaction between creative people and creative machines.
As first step, we need to teach a machine how to paint. It should learn to paint as a human would do it: using brushstrokes and combining colors on a canvas. We researched the state-of-the-art and despite the great works, there was not really a single paper that satisfied our requirements, until we found Neural Painters: A Learned Differentiable Constraint for Generating Brushstroke Paintings by Reiichiro Nakano [1]. This finding was quite refreshing.
Neural Painters
Neural Painters [1] are a class of models that can be seen as a fully differentiable simulation of a particular non-differentiable painting program, in other words, the machine “paints” by successively generating brushstrokes (i.e., actions that define a brushstrokes) and applying them on a canvas, as an artist would do.
These actions characterize the brushstrokes and consist of 12-dimensional vectors defining the following variables:
Start and end pressure : pressure applied to the brush at the beginning and end of the stroke
: pressure applied to the brush at the beginning and end of the stroke Brush size : radius of the generated brushstroke
: radius of the generated brushstroke Color : the RGB color of the brushstroke
: the RGB color of the brushstroke Brush coordinates: three Cartesian coordinates on a 2D canvas, defining the brushstroke’s shape. The coordinates define a starting point, end point, and an intermediate control point, constituting a quadratic Bezier curve
A tensor with actions looks like this example:
tensor([
[0.7016, 0.3078, 0.9057, 0.3821, 0.0720, 0.7956, 0.8851, 0.9295, 0.3273, 0.8012, 0.1321, 0.7915],
[0.2864, 0.5651, 0.5099, 0.3430, 0.2887, 0.5044, 0.0394, 0.5709, 0.4634, 0.8273, 0.1056, 0.1702],
...
])
and these are a sample of some of the brushstrokes in the dataset:
The goal of the Neural Painter is to translate these vectors of actions into brushstrokes on a canvas. The paper explores two neural architectures to achieve such translation, one based on a variational autoencoder (VAE) and the second one based on a generative adversarial network (GAN), with the GAN-based Neural Painter (Figure 1) achieving better results in terms of quality of the generated brushstrokes. For more details please refer to the paper [1] .
Tinkering with Neural Painters
The code to reproduce the experiments is offered by the author in a series of Google Colaboratory notebooks available in this Github repo, and the dataset used is available on Kaggle.
Teaching machines is the best way to learn Machine Learning — E. D. A.
We played around with the notebooks provided. They were extremely useful to understand the paper and to generate nice sample paintings, but we decided that in order to really learn and master Neural Painters, we needed to experiment and reproduce the results of the paper with our own implementation. To this end, we decided to go with PyTorch and fast.ai as deep learning frameworks instead of TensorFlow, which the paper's reference implementation uses, to do some tinkering and, in the process, hopefully come up with a more accessible piece of code.
Learning Neural Painters Faster
GANs are great generative models but they are known to be notoriously difficult to train, especially because they require a large amount of data and, therefore, large computational power on GPUs. They require a lot of time to train and are sensitive to small hyperparameter variations.
We did first try pure adversarial training following the paper, and although we obtained some decent results with our implementation in terms of brushstroke quality, it took a day or two to get there with a single GPU using a Colaboratory notebook and the full dataset.
To overcome these known GAN limitations and to speed up the Neural Painter training process, we leveraged the power of Transfer Learning.
Transfer learning is a very useful technique in Machine Learning. For example, ImageNet models trained as classifiers are largely used as powerful image feature extractors; in NLP, word embeddings learned unsupervised or with minimal supervision (e.g., by trying to predict words in the same context) have been very useful as representations of words in more complex language models. In Recommender Systems, representations of items (e.g., book, movie, song) or users can be learned via Collaborative Filtering and then used not only for personalized ranking, but also for adaptive user interfaces. The fundamental idea is to learn a model or feature representation on one task, and then transfer that knowledge to another related task, without the need to start from scratch, doing only some fine-tuning to adapt the model or representation parameters to that task.
More precisely, since a GAN's main components are the Generator and the Critic, the idea is to pre-train them independently, that is, in a non-adversarial manner, do transfer learning by hooking them together after pre-training, and proceed with the adversarial training, i.e., GAN mode. This process has been shown to produce remarkable results [2] and is the one we follow here.
The main steps are as follows:
(1) Pre-train the Generator with a non-adversarial loss, e.g., using a feature loss (also known as perceptual loss)
(2) Freeze the pre-trained Generator weights
(3) Pre-train the Critic as a Binary Classifier (i.e., non-adversarially), using the pre-trained Generator (in evaluation mode with frozen model weights) to generate `fake` brushstrokes. That is, the Critic should learn to discriminate between real images and the generated ones. This step uses a standard binary classification loss, i.e., Binary Cross Entropy, not a GAN loss
(4) Transfer learning for adversarial training (GAN mode): continue the Generator and Critic training in a GAN setting. Faster!
More in detail:
(1) Pre-train the Generator with a Non-Adversarial Loss
Figure 1. Pre-train the Generator using a (Non-Adversarial) Feature Loss.
The training set consists of labeled examples where the input is an action vector and the target is the corresponding brushstroke image.
The input action vectors go through the Generator, which consists of a fully-connected layer (to increase the input dimensions) and a Deep Convolutional Neural Network connected to it.
The output of the Generator is an image of a brushstroke. The loss computed between the images is the feature loss introduced in [3] (also known as perceptual loss [4]). The process is depicted in Figure 1.
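A minimal PyTorch sketch of this step (the layer sizes are illustrative guesses, and a plain pixel-wise L1 loss stands in here for the feature loss of [3, 4]):

import torch
import torch.nn as nn

class NeuralPainterGenerator(nn.Module):
    # Maps a 12-dimensional action vector to a 64x64 RGB brushstroke image.
    def __init__(self, action_dim=12, base=64):
        super().__init__()
        self.fc = nn.Linear(action_dim, base * 8 * 4 * 4)  # expand to a 4x4 feature map
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(base, 3, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, actions):
        x = self.fc(actions).view(actions.size(0), -1, 4, 4)
        return self.deconv(x)

generator = NeuralPainterGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
reconstruction_loss = nn.L1Loss()  # placeholder for the VGG-based feature loss

def pretrain_step(actions, target_strokes):
    # actions: (batch, 12), target_strokes: (batch, 3, 64, 64) from the dataset.
    optimizer.zero_grad()
    loss = reconstruction_loss(generator(actions), target_strokes)
    loss.backward()
    optimizer.step()
    return loss.item()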
(2) Freeze the pre-trained Generator
After pre-training the Generator using the non-adversarial loss, the brushstrokes look like the ones depicted in Figure 2. A set of brushstroke images is generated that will help us pre-train the Critic in the next step.
Figure 2. Sample Brushstrokes from the Generator Pre-trained with a Non-Adversarial Loss.
(3) Pre-train the Critic as a Binary Classifier
Figure 3. Pre-train the Critic as a Binary Classifier.
We train the Critic as a binary classifier (Figure 3); that is, the Critic is pre-trained on the task of recognizing true vs. generated brushstroke images (Step (2)).
We use the Binary Cross Entropy as the binary loss for this step.
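Continuing the sketch above, the Critic can be pre-trained as an ordinary binary classifier (the architecture is again an illustrative guess, and generator refers to the pre-trained model from the previous sketch):

import torch
import torch.nn as nn

critic = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(128 * 16 * 16, 1),
)
critic_optimizer = torch.optim.Adam(critic.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def critic_pretrain_step(real_strokes, actions):
    # The frozen, pre-trained generator produces the fake batch.
    with torch.no_grad():
        fake_strokes = generator(actions)
    logits = critic(torch.cat([real_strokes, fake_strokes]))
    labels = torch.cat([torch.ones(len(real_strokes), 1),
                        torch.zeros(len(fake_strokes), 1)])
    loss = bce(logits, labels)
    critic_optimizer.zero_grad()
    loss.backward()
    critic_optimizer.step()
    return loss.item()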
(4) Transfer Learning for Adversarial Training (GAN mode)
Finally, we continue the Generator and Critic training in a GAN setting as shown in Figure 4. This final step is much faster than training the Generator and Critic from scratch as a GAN.
Figure 4. Transfer Learning: Continue the Generator and Critic training in a GAN setting. Faster.
One can observe from Figure 2 that the pre-trained Generator is doing a decent job learning brushstrokes. However, there are still certain imperfections when compared to the true strokes in the dataset.
Figure 5 shows the output of the Generator after completing a single epoch of GAN training, i.e., after transferring the knowledge acquired in the pre-training phase. We can observe how the brushstrokes are more refined and, although slightly different from the true brushstrokes, they have interesting textures, which makes them very appealing for brushstroke paintings.
Figure 5. Sample Brushstrokes from the Generator after Adversarial Training (GAN mode).
From Brushstrokes to Paintings
Once the Generator training process is completed, we have a machine that is able to translate vectors of actions into brushstrokes, but how do we teach the machine to paint like a Bob Ross apprentice?
To achieve this, the Neural Painters paper [1] introduces a process called Intrinsic Style Transfer, similar in spirit to Neural Style Transfer [6] but which does not require a style image. Intuitively, the features of the input content image and of the one produced by the Neural Painter should be similar.
To implement the process, we freeze the Generator model weights and learn a set of action vectors that, when input to the Generator, will produce brushstrokes that, once combined, will create a painting given an input content image. The image features are extracted using a VGG16 [7] network as a feature extractor, denoted as CNN in Figure 6, which depicts the whole process.
Figure 6. Painting with Neural Painters using Intrinsic Style Transfer.
Note that the optimization process is targeted at learning the tensor of actions, while the remaining model weights (those of the Neural Painter and CNN models) are not changed. We use the same Feature Loss as before [3].
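A simplified sketch of that optimization loop, reusing the generator sketch from above (the stroke blending here is a crude average rather than the linear blending of [5], and VGG preprocessing details are omitted):

import torch
from torchvision.models import vgg16

vgg_features = vgg16(pretrained=True).features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)
for p in generator.parameters():  # the pre-trained Neural Painter stays frozen
    p.requires_grad_(False)

def paint(actions):
    strokes = generator(actions)               # (num_strokes, 3, H, W)
    return strokes.mean(dim=0, keepdim=True)   # crude stand-in for stroke blending

def intrinsic_style_transfer(content_image, num_strokes=300, steps=500):
    # content_image: (1, 3, H, W), resized to the stroke resolution.
    # Only the tensor of brushstroke actions is optimized.
    actions = torch.rand(num_strokes, 12, requires_grad=True)
    optimizer = torch.optim.Adam([actions], lr=0.01)
    target = vgg_features(content_image)
    for _ in range(steps):
        optimizer.zero_grad()
        canvas = paint(actions.clamp(0, 1))
        loss = torch.nn.functional.l1_loss(vgg_features(canvas), target)
        loss.backward()
        optimizer.step()
    return actions.detach()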
Finally, given an input image for inspiration, e.g., a photo of a beautiful landscape, the machine is able to create a brushstroke painting for that image :) ∎
This Neural Painters implementation is the core technique used in our collaboration with the collective diavlex for their Art+AI collection Residual at Cueva Gallery.
Notes
For blending the brushstrokes, we follow a linear blending strategy to combine the generated strokes on a canvas; this process is described in detail in a very nice post titled Teaching Agents to Paint Inside Their Own Dreams, also by Reiichiro Nakano [5]. We are currently exploring an alternative process that uses the alpha channel for blending.
Acknowledgements
We would like to thank Reiichiro Nakano for helping us clarify doubts during the implementation of our Neural Painters and for his supportive and encouraging comments and feedback. Thanks a lot Reiichiro! [@reiinakano].
Pandas painted by our Neural Painter.
References
[1] Neural Painters: A Learned Differentiable Constraint for Generating Brushstroke Paintings. Reiichiro Nakano
arXiv preprint arXiv:1904.08410, 2019.
[2] Decrappification, DeOldification, and Super Resolution. Jason Antic (Deoldify), Jeremy Howard (fast.ai), and Uri Manor (Salk Institute) https://www.fast.ai/2019/05/03/decrappify/ , 2019.
[3] Fast.ai MOOC Lesson 7: Resnets from scratch; U-net; Generative (adversarial) networks. https://course.fast.ai/videos/?lesson=7 ; Notebook: https://nbviewer.jupyter.org/github/fastai/course-v3/blob/master/nbs/dl1/lesson7-superres.ipynb [Accessed on: 2019–08]
[4] Perceptual Losses for Real-Time Style Transfer and Super-Resolution
Justin Johnson, Alexandre Alahi, Li Fei-Fei https://arxiv.org/abs/1603.08155 , 2016
[5] Teaching Agents to Paint Inside Their Own Dreams. Reiichiro Nakano.
https://reiinakano.com/2019/01/27/world-painters.html , 2019
[6] A Neural Algorithm of Artistic Style. Leon A. Gatys, Alexander S. Ecker, Matthias Bethge. https://arxiv.org/abs/1508.06576, 2015
[7] Very Deep Convolutional Networks for Large-Scale Image Recognition. Karen Simonyan, Andrew Zisserman. https://arxiv.org/abs/1409.1556, 2014 | https://medium.com/the-ai-art-corner/the-joy-of-neural-painting-f00b4f3c4fd4 | ['Beth Jochim'] | 2020-10-13 10:28:32.354000+00:00 | ['Artificial Intelligence', 'Machine Art', 'Neural Paintings', 'Articles', 'Machine Learning'] |
What to Do When You Don’t Know What to Write | What to Do When You Don’t Know What to Write
You can’t always can’t on the muse to be there
Photo by Ryan Snaadt on Unsplash
As a writer, staring at a blank screen kinda sucks.
You rack your brain trying to come up with ideas, but there’s nothing there. Somehow, you’ve forgotten everything you know.
And the more you try, the harder it gets.
It’s funny. Some days, the material’s right at the forefront of your thoughts, begging to be written about.
But then there are other days when the muse just simply isn’t there.
What do you do then?
How do you write something when you have no idea what to write about?
I struggled with this question for a long time.
Often, I would sit in front of my computer and stare off into space, hoping something would come to me.
Alas, nothing ever did.
It was only when I started typing that the juices started flowing.
Take this article, for example.
Here I am, sitting in front of my computer with no idea what to write, thinking to myself “this sucks.” So what do I do, I write exactly that — “As a writer, staring at a blank screen kinda sucks.”
And with that one line, I was off to the races. The rest flowed like hot molten lava down a beautiful Hawaiian hill.
So when you’re struggling to find something to write about…
JUST START WRITING.
Write about how you feel. Write about hating the blank screen. Write about cursing the muse for never showing up.
It doesn’t matter.
What matters is that your fingers start moving.
Because once they do, the rest starts flowing pretty quick. | https://medium.com/the-innovation/what-to-do-when-you-dont-know-what-to-write-3b5f647157e1 | ['Daniel P. Donovan'] | 2020-12-04 15:02:12.169000+00:00 | ['Writers On Writing', 'Copywriting', 'Writing', 'Writer', 'Writing Tips'] |
Single-Binary Web Apps in Go and Vue — Part 1 | Photo by Gift Habeshaw on Unsplash
Often I find myself tasked with building web applications or APIs with web management portals. On the backend my language of choice is Go, while on the frontend my framework of choice is Vue. One of the big benefits of Go is that it compiles into a single binary. When you build an API in Go and a frontend in JavaScript, though, you have two different stacks, and that might mean deploying two different apps. And in some cases that may be desirable. But for my simple uses I'd like to bundle everything into a single binary for deployment, because it makes my life easier.
This is a 4 part series:
Part 1 — The Go and Vue apps
Part 2 — Starting the Vue app with your Go app
Part 3 — Bundling it all up
Part 4 — Automating using Make
In this series we’ll build an app that does nothing useful, but demonstrates bundling your Go and Vue apps into a single binary. This article is part 1, where we will setup the Go and Vue apps separately. There are a couple of prerequisites necessary to get started. Make sure you have the following tools installed.
The Go App
Let’s start with the Go app. In this example we’ll build a a really small app that simply fires up an HTTP server. I’m using the excellent Echo framework for simplifying boilerplate HTTP server stuff. The first step is to initialize a new Go app. In your terminal create a new directory and run go mod init . For these examples, I’m using a namespace of github.com/adampresley/example. Change this to meet your own needs.
$ mkdir example
$ cd example
$ go mod init github.com/adampresley/example
$ touch ./main.go
The above steps will initialize Go modules for our package, and create a blank main.go file. Open up that file and paste the following content. We’ll break down what everything does in a minute.
Let’s break this down.
Line 18 — A global version variable. This is mostly used to announce the version of the application. We’ll muck with this more in part 4
Starting at line 30 we set up our HTTP server using Echo. On line 33 we create a simple endpoint at /api/version which simply returns the Version variable
At line 37 we create a goroutine which starts the HTTP server listening on port 8080
Lines 56–62 create a channel that waits for an interrupt signal, such as CTRL+C. Execution will pause here, allowing the HTTP server to run indefinitely until it is interrupted
Lines 64–71 tell the HTTP server to shut down
If you run this now on a terminal using go run . you should see a message stating that the application has started.
The Vue App
The next part of the equation is the Vue JavaScript application. This part is way easier because Vue provides their CLI to set it all up for you. In this section I’ll walk through the choices I made for the example, then demonstrate calling our Go API to get the version and display it on our page.
The first step is to create the app using the CLI tool. We are going to create the Vue app inside our example folder where our Go app is.
$ cd example
$ vue create app
Running the above will start a wizard which asks several questions. Here are the options that I chose for this demo.
Once you’ve made your selections, watch and wait for the CLI to do it’s job. When it complete you should see a message that looks something like this.
Now, if you open two terminals, you can run the Go app in one, and the Vue app in another.
Terminal 1
$ cd example
$ go run .
Terminal 2
$ cd example/app
$ npm run serve
Now open a browser. In tab 1 navigate to http://localhost:8080/api/version and you will see the version string output. Open another tab and navigate to http://localhost:8081 and you will see the default Vue sample page.
Calling the API
Finally, let’s have our Vue app call the Go API version endpoint, just for kicks. In the /app/src/components/ directory there is a file called HelloWorld.vue. Open this file for editing. We want to display the API version string at the bottom of the page. Here are the steps.
Add a variable to hold the version. See lines 41–45
Add HTML to display the version. See line 30
Get the version from the server and assign it to our new variable. We’ll do this when the component is created. See lines 47–51
Here is the code in full.
Now when you refresh http://localhost:8081 you’ll see this.
That’s it for part 1! In part 2 we’ll add code in our Go application which will automatically start the Vue app for us when we run the Go app, adding a level of convienence. | https://medium.com/swlh/single-binary-web-apps-in-go-and-vue-part-1-ea7d4100eab7 | ['Adam Presley'] | 2020-12-28 15:15:41.502000+00:00 | ['Vue', 'JavaScript', 'Software Development', 'Go', 'Development'] |
What are encryption keys and how do they work? 🔐 | Diffie-Hellman-Merkle key exchange
This method allows two parties to remotely establish a shared secret (a key in our case) over an assumed insecure channel. This key can then be used in subsequent communications along with a symmetric-key algorithm.
Colours are generally used instead of numbers when explaining this, because the difficulty of undoing the mathematical operations involved is similar to the difficulty of knowing which two colours were mixed to create a new, third colour.
Diffie-Hellman-Merkle protocol to establish a shared secret key
Alice and Bob each start with their own, private, values R and G, as well as a public common value Y. Alice uses Y along with her private value to create RY, and Bob GY. These are publicly shared. This is safe, as it is extremely computationally difficult to determine the exact private values from these new values. Alice can then use Bob’s new public combination along with her private value to create RGY, and importantly Bob can use Alice’s new public combination to create the exact same RGY value. They now have a shared secret they can use to encrypt future messages and know the other can decrypt them when received.
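The same exchange can be written with (toy-sized) numbers instead of colours; real implementations use moduli of 2048 bits or more, so the values here are purely illustrative:

p = 23   # public prime modulus (part of the shared public value Y)
g = 5    # public base (the other part of Y)

alice_private = 6                            # Alice's secret R
bob_private = 15                             # Bob's secret G

alice_public = pow(g, alice_private, p)      # "RY", shared openly
bob_public = pow(g, bob_private, p)          # "GY", shared openly

alice_shared = pow(bob_public, alice_private, p)   # Alice computes RGY
bob_shared = pow(alice_public, bob_private, p)     # Bob computes the same RGY

assert alice_shared == bob_shared   # both parties now hold the same secret key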
A main security flaw in this protocol is the inability to verify the authenticity of the other party while setting up this shared secret. It is assumed you are talking to a trusted other party. This leaves the protocol open to a ‘man-in-the-middle’ attack by someone listening in from the start of the exchange.
Eve performing a man-in-the-middle attack
Above it can be seen how the Eve effectively eavesdrops and intercepts the exchange to set themselves up in the position to read any message shared between Alice and Bob. Eve can receive a message from Alice, decrypt it using their shared secret key, read it, then re-encrypt it using the key Eve shares with Bob. When Bob receives the message it can be decrypted using his secret key, which he incorrectly assumes he shares with Alice.
We will come back to Diffie-Hellman-Merkle later, but now will look at solving this vulnerability using public-private key pairs. | https://medium.com/codeclan/what-are-encryption-keys-and-how-do-they-work-cc48c3053bd6 | ['Dominic Fraser'] | 2018-07-10 22:06:52.710000+00:00 | ['Encryption', 'Software Engineering', 'Security', 'Cipher', 'End To End Encryption'] |
Believing in a She-God isn’t Feminist | Let’s take feminism and strip it down to as basic a definition as we can get — Wikipedia.
Wikipedia says:
Feminism is a range of political movements, ideologies, and social movements that share a common goal: to define, establish, and achieve political, economic, personal, and social equality of sexes. [emphasis mine]
And that is exactly what we’re fighting for. In a serious conversation with most feminists, you won’t find us talking about world domination and fighting for the matriarchy. All you’ll hear is equality. We want equal pay, an equal shot at being hired for that pay, and equal opportunity to qualify for the job.
That’s it.
We ultimately aren’t interested in replacing our male-dominated reality with a female-dominated one because we know that is equally unfair to men as it has been for us since humans first became a thing.
So why, on the surface are all of our rallying cries and feminist jokes about women achieving world domination and eradicating men because we only need their sperm to continue the human race anyways? Why is the surface language for feminism so damn unequal?
Why are all the signs we look for in other feminists so disappointingly meaningless to our actual goals? Somehow our initiation into feminism is resting in the very female arms of God and a goodie bag with “The future is female” pasted all over it.
God isn’t a she
We know this. Most religions don’t have an explicit gender for their God, but guess what? English doesn’t have a non-gendered pronoun, and most of the people writing holy works were surrounded by a much more patriarchal society in the first place. God was powerful, intelligent, and a provider and in their time those were exclusively male traits.
So, yeah, We can refer to god as a woman — it's our choice, just as it was the choice of the people way back then to refer to him more often as a man. But acting as if God is any more female than male is equally as wrong as acting as if God is more male than female. Either way, we are assigning gender to someone that doesn’t have one.
The female god doesn’t push our agenda forward in any way, instead, it just gives us one more thing to be angry about when someone refers to God as he. And heaven knows we don’t need one more thing to be angry about.
The Goal isn’t to Turn the Tables
Female feminists are already an angry group of people — and for good reason. We have been systematically repressed for nearly our entire existence on the earth, and we’re rightly sick of it.
We’re sick of being treated as less than, being told we can’t, and forced to leave our passions to be pursued by men. We’re pissed that as hard as we fight right now to get the qualifications for our dream job, we are less likely to be hired for it. We want to erupt when we finally get hired and find that despite the amount of effort, love, and dedication we show in our work, we’re paid less than the guy a door down doing the same damn thing.
So hell yeah, we’re angry. But, we have to be careful with anger. We can’t go throwing around our anger at things like a male-gendered God, because then that anger gets mixed up in the wash with the things that actually matter.
Even justified anger has to be carefully managed. Anger scares people, it makes them feel threatened and unwanted. And threatened is not the feeling that makes someone join a movement.
We feel threatened by lions. When is the last time you saw someone just up and decide to join a pride of lions for the hell of it?
Over-compensating doesn’t help
We have this idea that if you fight for more than you need you’ll end up with what you need. Or, in our context, if you fight for the matriarchy, you’ll get equality. And yeah, that may work some places but when dealing with moral issues that people stand so strongly on, by fighting for more than the real goal we turn ourselves into a pride of lions.
When we over-compensate and get angry about someone’s male-gendered god, or when they don’t laugh at our joke about eliminating men from the world, we distance them from our message. Instead of convincing them that equality is the right way, all they can see is that we want to dominate them. And our anger gives them the perfect justification for feeling this way — it proves we are as emotional and unfit for power as they thought we were.
Call it the patriarchy if you will, but we can’t afford to be the repressive people men were — and sometimes still are — throughout history. We don’t have the luxury of eliminating the vote of the opposing side.
And securing a bare majority of the votes doesn’t work either. Women and minorities know better than most, that legislation doesn’t change bias.
Everyone is already equal to US law. But just because you make a law that says don’t discriminate doesn’t mean you can enforce that law or even pinpoint exactly when it is being breached.
We need the votes of men from every age, race, and way of life because as many strong independent women we recruit, the world is still unequally tipped so the power lands with the men. Repulsing men in power with our angry rants about bringing the matriarchy to pass is probably one of the slowest ways we can achieve equality.
And the clincher is, that we’re not even interested in ruling the world. Once again, our goal was never to bring the matriarchy crashing down on the head of the patriarchy with a powerful vengeance. Our goal is equality.
Why do we keep acting as if it’s some obnoxious and dramatic Adele song where we finally get to win?
The Future isn’t female
The future is human. The future is free from gender-based discrimination — meaning if the future is okay, men, women, and those in the non-binary trans population will all hold jobs in proportion to their percentage of the population. But more importantly, it will be free from the systems that convince women their dreams aren’t viable. It will be free from all the little offenses that push us down and keep us out of the places we have a right to be.
And we all believe this basic truth. The truth that all we want is equality, but we've also gotten so caught up in the fight that the message is being obscured by the angry clutter we have taken on as an integral part of the movement. | https://desotaelianna.medium.com/believing-in-a-she-god-isnt-feminist-591b52f2ddd4 | ['Elianna Desota'] | 2019-03-30 23:21:27.753000+00:00 | ['Politics', 'Feminism', 'Equality', 'Psychology', 'Belief'] |
Going With the Flow. “It’s radical knowing that on a… | Mile 1
I got my flow the morning of the London Marathon and it was extremely painful. It would be my first marathon and I remember already feeling so nervous for it. I had spent a full year enthusiastically training hard, but I had never actually practiced running on my period.
I thought through my options. Running 26.2 miles with a wad of cotton material wedged between my legs just seemed so absurd. Plus they say chafing is a real thing. I honestly didn't know what to do. I knew that I was lucky to have access to tampons etc., to be part of a society that at least has a norm around periods. I could definitely choose to participate in this norm at the expense of my own comfort and just deal with it quietly.
But then I thought…
If there’s one person society can’t eff with, it’s a marathon runner. You can’t tell a marathoner to clean themselves up, or to prioritize the comfort of others. On the marathon course, I could choose whether or not I wanted to participate in this norm of shaming.
I decided to just take some midol, hope I wouldn’t cramp, bleed freely and just run.
A marathon is in itself a centuries-old symbolic act. Why not use it as a means to shine a light on my sisters who don't have access to tampons and who, despite cramping and pain, hide it away like it doesn't exist?
Mile 6 | https://medium.com/endless/going-with-the-flow-blood-sisterhood-at-the-london-marathon-f719b98713e7 | ['Kiran Gandhi'] | 2018-02-12 11:12:32.732000+00:00 | ['Storytelling', 'Marathon', 'Feminism'] |
The Science and Tech of Face Masks | The Science and Tech of Face Masks
I put my money (and my body) on the line to teach you everything about COVID-19 masks
Images courtesy the author
When the CDC starts publishing sewing tutorials, you know things have gotten weird. As COVID-19 has spread worldwide, though, that’s the strange new world we’re living in.
When the COVID-19 crisis first began, the Centers for Disease Control in the United States, and the World Health Organization internationally, came down strongly against healthy citizens wearing face masks. In some ways, that made sense at the time. Most masks don't protect the wearer from infection, and there's a risk that they'll provide a false sense of security.
Just as seatbelts and airbags make people drive faster, wearing a face cover can convince people that they’re fully protected from the coronavirus, prompting them to go out more, touch their face, and skip steps that really do protect them, like the simple step of washing their hands. Initially, there was also the concern that people would buy up medical grade masks, taking these out of the hands of the front-line medical professionals who actually need them.
As the coronavirus crisis wears on and social distancing measures continue to intensify, the CDC has made a dramatic about-face. Acknowledging that masks protect others from you, even if they don’t protect you from others carrying the virus, the CDC now recommends that Americans wear a face covering any time they go outside. Several jurisdictions — including Los Angeles — have quickly enshrined this advice in enforceable laws and ordinances.
Like so many things with COVID-19, we could likely have learned this lesson much earlier if we just looked a bit beyond our borders. Asian countries have made masks a cornerstone of their coronavirus response from day one. And in places like Japan, it’s customary to wear face masks throughout the cold and flu season even in normal times, as a social gesture if not a medical one.
In any event, with the CDC’s dramatic reversal, Americans are suddenly getting a crash course in face coverings. Minutiae like the difference between cloth masks and medical grade masks are now acceptable quarantine-unit dinner conversation. Extremely technical terms like “N95” are suddenly common knowledge — and fodder for political squabbling.
The American’s credit, our civic organizations and individuals have mobilized around masks in a way that’s likely possible nowhere else. Church groups and the like are sewing cloth masks en-mass, and mask drives have resulted in donations of medical grade masks — often sourced from shuttered businesses or peoples’ garages — to hospitals across the country.
How do face masks work, anyway? How are they made? What are the different kinds, and how do they differ? How can you make your own? All of these questions are suddenly on millions of peoples’ minds. So I decided to dive in and help you answer them.
Doing so required wiring money to China, spraying myself in the face with a former chemical warfare agent, and relearning some middle school Home Economics skills I thought I’d never need to revisit. Let’s dive in together and take a look at masks in all their various forms.
Let’s begin with first principles. There are essentially three broad categories of masks available today: respirator masks, medical grade surgical masks, and cloth masks.
Respirator masks are the hardest to come by, and the most politically fraught. These are the fabled N95s that you hear mentioned ad nauseum on the news, and that authorities are encouraging citizens and companies to donate — forcibly or otherwise — to front-line first responders.
Why are these masks so important? Unlike all the other options on the market, they provide the wearer with a measurable level of actual protection against the COVID-19 coronavirus, and other airborne pathogens. The N95 designation means that the masks filter out at least 95% of airborne particles down to around 0.3 microns in size. That includes many of the droplets that carry viruses and bacteria, including, many scientists believe, Covid-19. So if you're wearing an N95 mask, you're actually protected against the virus, even if it's floating in the air all around you.
N95 masks are believed to protect the wearer from the virus
The catch is that for N95 masks to work, they have to fit properly. The masks’ ability to block particles is worthless if there’s a big gap around your nose (or your beard), and air is flowing through it. In a medical setting, front-line workers are routinely fit-tested, to ensure that their N95 and other respirator masks are actually providing protection.
The process for a fit test is relatively straightforward. You have a provider put on their mask, and adjust it properly to fit their face. You then place them in an environment with detectable particles of around .3 microns. If they can detect the particles, then they’ve failed the test. If they can’t detect them, then the mask is a proper fit, and passes.
I wanted to see how this fit test process works firsthand. I have an N95 mask which was purchased before the crisis, and which I’ve already worn, making it ineligible for donation. So to see how it fits me, I went in search of the materials necessary to perform a fit test.
Most of the time, these tests are performed using pleasant particles. A doctor or nurse puts on their mask, and steps into an environment filled with either a pleasant smelling chemical, or a fine mist of an artificial sweetener, like saccharin. If they can smell the chemical or taste the sweet saccharin, they’ve failed the test.
All the materials required to perform this version of a fit test were sold out. So I resorted to the only option still available: stannic chloride. Stannic chloride is a chemical compound originally used as a chemical warfare agent. Even at low (non dangerous) concentrations, it causes immediate (but harmless, if used properly) irritation to the nose and throat, causing coughing and discomfort in users.
It’s often used for fit tests where there’s a concern that a user might fake their results — for example, when they want to get through the fit test as quickly as possible and return to work. It’s easy enough to step into a chamber filled with saccharin wearing an improper mask and say “Nope, I don’t taste anything!”. It’s harder to get blasted in the face with stannic chloride and avoid uncontrollable coughing.
Technically the chemical is usually used for fit tests on N100 or P100 masks, which block even more particles than the N95 ones. But I figured that with stannic chloride, I should at least see a noticeable difference in irritation whether I wore an N95 mask or left it off, as it should still block a significant amount of the chemical if I was wearing it properly.
And that’s how I found myself stepping onto my driveway and using a little turkey-baster apparatus to spray myself in the face with a former chemical warfare agent (please, don’t try this at home).
The fit test protocol for stannic chloride says that to begin, “the test subject shall be allowed to smell a weak concentration of the irritant smoke before the respirator is donned to become familiar with its irritating properties.”
Aspirator for a stannic chloride fit test.
So I wafted a bit of the smoke out of a tube until it hung in the air, closed my eyes, and stepped through it while inhaling deeply, like an old-school department store shopper trying out a new perfume.
I expected an acrid smell. But my experience of inhaling the smoke was less a smell, and more a sensation: the instant, burning need to cough.
It wasn’t like someone blowing cigarette smoke in your face, where you’re hacking away for minutes. Rather, it immediately induced that feeling that you might get if you’re sitting in a quiet lecture hall, a theater, or another place where coughing would annoy those around you, and you find yourself needing to cough repeatedly anyway. “Irritating” is actually an excellent adjective to describe the overall experience.
I then donned my N95 mask, again following the recommendations from the fit test protocol, and puffed out a bit more smoke. This time, when I walked through it and inhaled, I felt nothing — no need to cough, no irritation, almost no effects at all.
I could smell the vapor a bit now (likely because my mask was N95 and not N100), and it had a pungent, gunpowder-like sulfurous smell. But the irritation was totally absent. Emboldened, I tried blasting my mask several times with the smoke. Still, I felt nothing.
The results were striking. I had assumed that as a layperson, I was probably wearing my mask incorrectly, and some amount of air (and potentially virus-laden particles) were leaking through. But my fit test showed that actually, the mask was surprisingly effective, even with my limited knowledge of how to use it properly. It gave me a new faith in the mask’s ability to protect me from whatever was out there floating around in the environment — whether that’s irritant smoke or the virus-filled remnants of some infected person’s sneeze.
That protection is why N95 masks have become the gold standard during the age of coronavirus. By actively blocking the particles which carry the virus, they provide the kind of protection that front-line healthcare workers need when treating infected patients. It’s also why they’re in such short supply.
Before the crisis, healthcare workers would discard N95 masks like candy. Between each visit with each patient, they would don an N95, and then throw it away after leaving the patient’s room. Even in a construction or other non-medical setting, the recommendation was to wear N95 masks for at most 8 hours at a time.
Now, the recommendations have changed dramatically. Front-line healthcare workers are wearing the same N95 mask for up to 30 days. The FDA has rapidly cleared technologies for sterilizing the masks, making them truly reusable. Absent a high-tech solution, some healthcare workers have taken to dousing their masks in alcohol or boiling them to sterilize them and allow them to be used for days or weeks at a time.
Interestingly, N95 masks with a valve (which makes it easier to breathe while wearing them) are actually banned in several jurisdictions' mask orders. This is because the valve allows air that the wearer breathes out to pass unimpeded through the mask. Since the air is not filtered by the mask, N95 masks with valves provide no protection to others around you. If you still want to wear an N95 mask with a valve, go right ahead — just put some kind of cloth covering (on which more below) over the valve to protect others around you in addition to protecting yourself.
While N95s are the gold standard, there’s another option that’s proven nearly as good against other respiratory viruses: the medical-grade surgical mask. These are the little blue or yellow face shields you see doctors wearing in medical dramas, or may have put on yourself if you’ve ever visited a hospital during flu season. Surgical masks don’t create an airtight seal over the user’s mouth and nose like N95 masks do, so initially they were assumed to be much less effective in protecting against the coronavirus.
Recent studies have contradicted that assumption, though. Covid-19 is thought to spread mostly through droplets in the air, which are coughed or sneezed out by a sick patient. Surgical masks provide a barrier against these relatively large droplets, even if they don’t have a perfect seal. They also have the added bonus of preventing users from touching their faces, something that humans do an alarming amount.
As the crisis has unfolded, surgical masks are now considered nearly as good as N95 masks for medical workers. N95s are reserved for procedures where there’s likely to be a lot of coughing or sneezing (like intubating a patient), but otherwise healthcare workers are turning to the more basic surgical mask.
At first, I assumed surgical masks were made from a simple piece of fabric. It turns out they’re actually much more complex, and harder to manufacture. Medical-grade surgical masks have three layers, or plys. The outer layer is usually a waterproof synthetic fabric. It’s there to protect against watery sneezes and coughs, and to trap the big particles on which bacteria and viruses often travel. It also protects against splashes of blood or other bodily fluids during procedures.
A medical-grade surgical mask.
The middle layer of the mask is generally a filter, which traps bacteria and viruses down to 1 micron in size. It’s not as good as the .3 microns that N95 masks are rated for, but it’s still a lot more protection than a cloth mask (on which more below).
The final layer is generally another filter, and is designed to absorb the vapor from the wearer's own breath, since the masks are generally right against the mouth and nose. Making a medical-grade mask is a challenging task. It requires special equipment, with extruded plastic sprayed through nozzles to create the tiny, sub-micron-size fibers that block particles but still allow air to flow through.
Many surgical masks are made in China. When the country was dealing with its own disease outbreak, most mask protection stayed internal. As China tamped down the virus internally, though, production started to open up — and so did exports to hard hit areas like Europe and the United States.
For previous projects, my company has worked with several manufacturers in China. They’ve helped us make things like product labels, a custom pet product, and thousands of specialized plastic bags. As China flattened its own Covid-19 curve, I started receiving messages from our Chinese suppliers offering a new product: surgical masks. Recognizing an opportunity, the country has pivoted hundreds of factories towards making masks, and is now churning them out at a (claimed) rate of 110 million per day.
At first, I was skeptical about the masks on offer. But as I started to read about healthcare workers in Seattle and New York using garbage bags and bandannas as PPE, I realized that our connections in China could serve an important purpose in the Covid-19 fight. And so I found myself taking on the unexpected role of procuring hundreds of masks from China.
Even in normal times, doing business in China (at least at our small scale) is a fascinating mix of old-school hands-on service and very streamlined tech. We originally found our Chinese suppliers through the ubiquitous marketplace Alibaba. We then developed relationships over time, conducted through email, with both sides likely making liberal use of Google Translate.
Most of our purchases from Chinese factories — masks included — have started with us receiving an email (often out of the blue) from a supplier offering a new product. These generally list a Minimum Order Quantity, a description of the product, and some staged photos of the product in use. In normal times, this might be some new kind of bag or label.
In the age of coronaviruses, these emails feature smiling models wearing face masks, and details about 3-ply construction and factories’ manufacturing capabilities. Many factories go so far as to create an invoice with your company name and details inserted, before even discussing a purchase with you.
What follows is a certain amount of very cordial and responsive email haggling — generally not around the price, but around breaks for quantities, shipping speed, and the like.
I’ve found Chinese factories to be remarkably responsive. I can send an email in the middle of the night on Chinese time, and get a response within hours. The factories often seem to be embedded without local networks, too, which allows them to source products from peers. I could likely ask any supplier for some random product (say, car tires), and within a few hours they’d have a quote and invoice ready for me.
At some point in the process, the transaction moves to a leap of faith — wiring hundreds of dollars to people you've never met, in a place you couldn't locate on a map, and hoping they follow through on their end of the bargain. I say "wiring", but really no one uses wire transfers — most payments are made either through Alipay or Paypal, to an email address provided by the factory. The total cost includes the product, shipping, payment fees, and adjustments for any duties the factory must pay.
It sounds crazy to take this leap of faith, especially with bigger orders. But I've placed dozens of these orders over the years, and I've never had an issue with a factory failing to fulfill an order. If anything, most factories move faster than advertised — the old business adage to underpromise and overdeliver seems alive and well in China.
In this case, I worked with a supplier we’ve used for custom packaging and bags in the past. We agreed to purchase their MOQ of 800 surgical masks, for $310, including door to door air shipping. I sent the money via Paypal, with a promised delivery time of 12–15 days. In less than 7, a case of masks arrived at my door. The company even provided certificates from an independent quality assurance company, as well as an FDA certification, which made me more confident in the masks’ quality.
Even so — and recognizing that I had no idea what I was doing from a medical perspective — I decided to check them out for myself. There are several guides online for testing the quality of surgical masks. One test involves filling the front of the mask with water — if they're real, the outer layer should be waterproof. My masks passed the test. Another obvious step is to cut the mask in half and ensure that it has three layers — mine did.
I stopped short of conducting a “fire test”, which involves pulling out the inner layer of the mask and trying to light it with a match. I don’t want to add “third degree burns” to the list of ailments my local medical system needs to treat. And besides, these masks weren’t meant for me — we purchased them to donate to the University of California San Francisco.
Months ago, donations of life-saving medical supplies from the general public would have been unthinkable. But as Covid-19 has strained supplies and front-line workers have taken drastic measures, many hospital systems are accepting (or actively soliciting) public donations of PPE, masks included. We donated our masks (minus the ones I removed for testing) to UCSF, where they were accepted by the nurses of the Benioff Children’s Hospital’s Intensive Care Nursery.
ICN nurses at UCSF accept our donation of 700+ surgical masks
UCSF will likely perform their own inspections of the masks (unlike me, they’re qualified to do this), and perhaps perform their own sterilization process. The masks will then likely be deployed in the fight against Covid-19, either to protect patients or visitors to the hospital, or if things get really desperate, to protect front-line workers themselves.
If you have surgical masks — even just a few — you should donate them too. Often the best place to start is with your local hospital — many have donation programs in place. You can also contact your town authorities — my town of San Ramon, California accepts donations in the City Hall parking lot at least once per week. Or you can reach out to a national organization like Operation Masks, which matches people who have PPE with healthcare organizations in need.
Don’t have medical-grade masks to donate (or obscure connections in China)? Or looking for an option to protect yourself, the average citizen? Your best bet is likely to turn to the final type of mask: a cloth face mask.
These masks are increasingly becoming mandatory in cities and states around the country. New York City is adopting mandatory mask laws this week, and many counties in the Bay Area have already made the move. As the world moves to reopen, it’s likely that face masks will be mandatory equipment for more people, more of the time. And for most people, it’s likely that their mask will be a homemade cloth one, not a medical-grade mask like my N95 or surgical masks.
How do homemade cloth masks work? Again, as with surgical masks, the main goal is to protect others from the wearer. Cloth masks block your potentially infectious sneezes and coughs, keeping others around you a bit safer — especially in situations like walking through a grocery store or working in an office, where social distancing is often impossible. In some really desperate situations, cloth masks can even be used in a medical setting to protect patients, and the CDC formally authorized this use at the beginning of the Covid-19 crisis.
So how do you actually get a cloth mask? There are numerous options online, and some high-fashion brands have turned to producing their own. But the simplest solution — and often the fastest — is to make one yourself. That brings us back to the CDC’s pivot towards domesticity, and its templates for sewing your own cloth mask at home.
I went to a very progressive public middle school in a suburb of Philadelphia, which was suspicious of gender norms way before "woke" became a household word. My school required all the females to take shop, and all the males to take Home Economics. So I received several semesters of formal education in sewing, cooking, and other basic household skills. When I saw the CDC's mask tutorial, I felt fairly confident I could follow it.
Armed with a $9 sewing kit from Amazon and an old cloth dinner napkin, I got to work creating my own Covid-stopping PPE. The CDC’s mask pattern is very easy to follow. You essentially create two big rectangles of fabric, sew them together, and then create little pockets on each side for ear loops.
The CDC doesn’t directly suggest this, but many people choose to insert something between their fabric pieces to add a third layer of protection, much like a real surgical mask. Choices range from coffee filters to Swiffer cloths. Vacuum cleaner bags, it turns out, are the best option for improvised filters. The most important thing is to choose a material which can be easily laundered, or to construct your mask in a way that you can swap out the filter routinely.
For my own mask, I cut out my two pieces of fabric, transforming my fancy Pottery Barn dinner napkin into ugly rectangles with alarming, jagged edges. I then got out my needle and thread, and started sewing them together. Quickly, I realized that my sewing skills have atrophied — a lot.
My stitching work was less Martha Stewart, and more Frankenstein's monster. Still, I managed to hem the fabric and attach it at the top. Then, recognizing my own limitations, I finished the rest of the mask with tape (gaffer's tape is a good choice), and inserted kitchen twine as my ear loops.
My CDC pattern cloth face mask under construction.
The end result was ugly as sin, and made me look like some kind of a poorly thought-out, off-canon Star Wars character from a forgotten desert planet. But it stayed on my face and covered my nose and mouth — the basic requirements for a cloth mask in the age of Covid. If you have your own sewing machine or even a tiny modicum of hand sewing skill — which I clearly lack — you can probably create something much better and more durable.
And if you can’t, there are likely plenty of people in your neighborhood who can. Through the same motivation that has led to an explosion of quarantine cooking, many people are turning to domestic pursuits to pass the time. Check out the Nextdoor feed for your area, and it’s a good bet that some local good Samaritan will make you a really nice cloth mask for the cost of materials, or even for free.
As with all things Covid, masks are likely here to stay, at least for quite some time. If you have N95 masks available, they’re the gold standard of protection. Unless you’re an essential worker yourself, you’ve worn your mask already, or you’re in a very high risk group, you should donate these masks to those who genuinely are on the front lines.
If you have surgical masks — or have a supplier who can produce them — leverage that. The world will need many millions of these masks to make it through the pandemic, and your local hospital and first responders are likely to embrace a donation of surgical masks with open arms.
You can also consider donating money to a charity or organization with the supplier connections to buy more of these masks — imported or otherwise — for local or national donation.
And for you own protection — or just to have a project — consider creating a cloth mask of your own. You can experiment with different filters and materials. You can even use the mask as a unique expression of your own style — from the high end and posh to the ironic. For some, masks are even a way to wear a big middle finger to Covid on their face whenever they’re out in the world, showing their own breed of solidarity and perseverance through the crisis.
A few months ago, the idea that millions of people would become intimately familiar with a product as obscure as face masks would have seemed bizarre. But the world we're living in now is bizarre in so many more serious, alarming ways. Masks are one bright area, where we can each take a simple action that directly affects the common good — especially with cloth masks that anyone can make at home.
So get out your needle and thread, consult either your grandma or the CDC (never a line I thought I’d get to use), and start sewing! | https://tomsmith585.medium.com/the-science-and-tech-of-face-masks-36f6db045fb8 | ['Thomas Smith'] | 2020-04-27 14:07:47.505000+00:00 | ['Covid 19', 'Explainer', 'Face Mask', 'N95', 'Science'] |
Pandas, Plotting, Pizzaz, Brazil? | Pandas, Plotting, Pizzaz, Brazil?
A Brazillion Ways to Explore Data
To those newly initiated to the world of data science, the word pandas might still conjure up images of cute, fuzzy, friendly animals, seemingly built for nothing but being cute on camera and wasting otherwise productive hours browsing google images of bears eating leaves. But don't worry! That'll soon change. We'll take you from a passive panda perceiver to a full-on Pandas professional, in just a few short minutes.
This article will assume some basic knowledge of Python, and a general idea of what the Pandas library is will be helpful, but not necessary. I’ll go over some basic/introductory concepts to get an overview and general understanding, but the focus of this article will be on application of matplotlib, pyplot, seaborn, and pandas in exploratory data analysis of a messy(ish) dataset. As we go through, I’ll suggest exploration to do on your own for your own practice.
Some of the questions I’ll answer here:
What is Pandas?
When is Pandas used?
How do you clean a dataset using Pandas?
How can visualizations aid in exploratory data analysis (EDA)?
What does exploratory data analysis look like?
What is Pandas?
Pandas is one of the premier packages for managing and cleaning data in the Python data science space. It allows for the neat containerization of data into Pandas objects called dataframes and is compatible with the widely used Python computing and data manipulation package NumPy, and can be combined with a ton of other common data tools, like SQL databases. Pandas also contains many functions for cleaning, manipulating, indexing, and performing operations on data in a way which is optimized and significantly faster than standard Python operations, which especially comes in handy when working with the very large datasets which we can sometimes encounter.
At its most basic form, a Pandas dataframe is a two-dimensional organization of data with rows and columns. Though rows and columns are referred to in many way and many contexts throughout the data science world, I’ll try to stick to either rows and columns or datapoints and features. Dataframe columns can be individually selected as a Series, the other common Pandas datatype. The Pandas series and dataframe are the foundation of the Pandas library, and the documentation does a good job explaining these datatypes, so you can read more here if you’re unfamiliar.
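To make this concrete, here is a tiny sketch; the column names and numbers are toy placeholders, not part of the dataset used later in this article:

import pandas as pd

# build a small dataframe from a dictionary of columns
toy_df = pd.DataFrame({'city': ['A', 'B', 'C'], 'population': [100, 200, 300]})

# selecting a single column returns a pandas Series
pop = toy_df['population']
print(type(toy_df))  # <class 'pandas.core.frame.DataFrame'>
print(type(pop))     # <class 'pandas.core.series.Series'>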
When is Pandas Used?
This is an easy one. Are you working with data in Python? Use Pandas.
There are cases when a dataset is simply too large for a local runtime, where additional strategies must be employed, though after a certain point (limited either by your patience or your machine), a switch to a tool designed for larger datasets will be required regardless.
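As one illustration of such a strategy, pandas can read a CSV in chunks rather than all at once. This is only a sketch, and the file name and chunk size here are hypothetical:

import pandas as pd

# process a large CSV in 100,000-row chunks instead of loading it all into memory
total_rows = 0
for chunk in pd.read_csv("big_file.csv", chunksize=100_000):
    total_rows += len(chunk)  # replace this with whatever per-chunk work you need
print(total_rows)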
Pandas is used when working with CSVs, data scraped from the web, datasets from Kaggle or other sources, or pretty much any other time you have data which takes the form of datapoints with multiple features.
How do you clean a dataset using Pandas?
The short answer is it depends. There are a myriad of strategies that can be employed, and often you’ll have to look up examples specific to your situation, such as dealing with categorical variables, strings, etc.
That’s not a very useful answer though. In an attempt to write an actually helpful article, I’ll highly recommend visiting Kaggle and simply noting the strategies experts and master make use of, and the situations in which they’re used, making a data science cheatsheet of sorts where you can note basic tasks. I’ll link a rough one I made a while back here.
A helpful process can actually be writing down end goals you want, attempting to figure out a few substeps to get there, and then searching Stack Overflow or Pandas documentation for the implementation. Data cleaning is much more about understanding the mindset of how to manipulate data into useful forms than it is about memorizing an algorithm.
What Does EDA Look Like?
I’ll go through an example here, and post the Kaggle Kernel to take an even closer look, directly at the code. I highly recommend following along on your own, or in the notebook, looking up documentation as you go along.
This is a fairly large dataset with a TON of features. Let’s see if we can employ Pandas and some creative visualizations to clean this up.
As a disclaimer, Seaborn is based on PyPlot and Matplotlib, hence their mention in the intro, but I prefer Seaborn’s functionality and style, so you may not see PyPlot or Matplotlib explicitly here.
# import pandas and numpy, as well as seaborn for visualizations
import numpy as np
import pandas as pd
import seaborn as sns

# import the os package for Kaggle
import os
print(os.listdir("../input"))

# read in the data and create a dataframe
df = pd.read_csv("../input/brazilian-cities/BRAZIL_CITIES.csv", sep=";", decimal=",")

# view a random sample of the dataframe
df.sample(25)
Good stuff! Let’s also look at the shape to get a sense of the magnitude of the set we’re looking at.
# get dataframe shape
df.shape
Out:
(5576, 81)
Whenever we do any type of complicated work, it can often be helpful to have a deep copy of our dataframe on hand such that we can more easily revert back if and when we mess up our primary dataframe.
# create a deep copy of our dataframe
df_copy = df.copy(True)
Now that we’ve done this, let’s get to work!
This dataset has a fairly large number of features, which might make it hard to explore. Since we're not looking to create a model, we don't have to perform feature selection, so we can simply select a subset of columns we'd like to explore. Feel free to make a different list than my own!
columns = ['CITY', 'STATE', 'CAPITAL', 'IBGE_RES_POP', 'IBGE_RES_POP_BRAS','IBGE_RES_POP_ESTR','IBGE_DU','IBGE_DU_URBAN','IBGE_DU_RURAL', 'IBGE_POP','IBGE_1','IBGE_1-4','IBGE_5-9','IBGE_10-14','IBGE_15-59','IBGE_60+','IBGE_PLANTED_AREA','IDHM','LONG','LAT','ALT','ESTIMATED_POP','GDP_CAPITA','Cars','Motorcycles','UBER','MAC','WAL-MART','BEDS']

# create reduced dataframe and check shape
r_df = df[columns]
r_df.shape
Out:
(5576, 29)
Awesome! Much more manageable now. A really helpful tool for initial exploration is the Seaborn pairplot function. It graphs every variable against every other variable in one method! Let’s see if it can help us here.
# create a seaborn pairplot
pp = sns.pairplot(r_df)
Seaborn Pairplot of the Data
Wow. Umm, okay. That's huge, a little overwhelming, and not particularly helpful. Imagine if we had left all the features in! A good next step, especially if the pairplot is unhelpful, is trying a correlation matrix just to see if there's any linear relationship among the variables.
corr = r_df.corr()

# I prefer one-sided matrices, so create a mask
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# set up figure
f, ax = plt.subplots(figsize=(15, 15))
cmap = sns.diverging_palette(220, 20, as_cmap=True)
sns.heatmap(corr, mask=mask,cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
Correlation matrix of all features
The UBER column is completely blank, let’s look directly at the data to see what’s going on.
r_df.UBER
Out:
0 1.0
1 NaN
2 1.0
3 1.0
4 1.0
5 1.0
6 NaN
7 1.0
8 1.0
9 1.0
10 1.0
11 1.0
12 1.0
13 1.0
14 1.0
15 1.0
16 1.0
17 NaN
18 NaN
19 NaN
20 NaN
21 1.0
22 1.0
23 NaN
24 NaN
25 NaN
26 1.0
27 1.0
28 NaN
29 1.0
...
5546 NaN
5547 NaN
5548 NaN
5549 NaN
5550 NaN
5551 NaN
5552 NaN
5553 NaN
5554 NaN
5555 NaN
5556 NaN
5557 NaN
5558 NaN
5559 NaN
5560 NaN
5561 NaN
5562 NaN
5563 NaN
5564 NaN
5565 NaN
5566 NaN
5567 NaN
5568 NaN
5569 NaN
5570 NaN
5571 NaN
5572 NaN
5573 NaN
5574 NaN
5575 NaN
Lot’s of NaNs and no zeros, a possible error in the set.
# there are a lot of nans, possibly in place of zeros, let's check
df_copy.UBER.value_counts()
Out:
1.0 125
Yeah, no zeros in the whole column and only 125 values out of over 5,000. Let's replace the NaNs with zeros and try again.
r_df.UBER.replace({np.nan:0}, inplace=True)
r_df.UBER.value_counts()
Out:
0.0 5451
1.0 125
Success! Let’s try the correlation matrix again. Running the same code, we get:
Corrected correlation matrix
Now let’s start looking at the relationships!
# let's investigate the strongest correlation first
sns.set_style('dark')
sns.scatterplot(x=r_df.IDHM, y=r_df.LAT)
Relationship of Latitude and IDHM (Human Development Index)
This shows what we expected from the correlation matrix, but doesn’t really supply a lot of meaning to most people, as without an idea of the latitudes of Brazil or a better idea of what IDHM is, they’re kinda just meaningless points. Let’s contextualize within the geography of the country using latitude and longitude.
# map of lat and long with IDHM detemining size
f, ax = plt.subplots(figsize=(8, 8))
sns.scatterplot(x=r_df.LONG, y=r_df.LAT, size=r_df.IDHM)
Attempt #2 at contextualizing IDHM trends
Here we can see a rough outline of the nation of Brazil. Overlaying this on a map may be even more helpful, but we're going to skip that here. We can see a bit of a trend, but with so many points it's hard to distinguish the sizes. Let's try adding a color encoding.
# it's hard to see any trends here, let's add color to get a better idea
f, ax = plt.subplots(figsize=(8, 8))
sns.scatterplot(x=r_df.LONG, y=r_df.LAT, size=r_df.IDHM, hue=r_df.IDHM)
Attempt #3 at contextualizing IDHM trends, much better!
Fantastic! Here we can clearly see a trend of higher IDHM towards the center and south of the country. In your own exploration, maybe you can find characteristics about these parts of the country which may cause this.
Let’s see if we can identify the state capital cities in this plot.
# let's see if we can spot any capitals in there
f, ax = plt.subplots(figsize=(8, 8))
markers = {0:'o', 1:'s'}
sns.scatterplot(x=r_df.LONG, y=r_df.LAT, size=r_df.IDHM, hue=r_df.IDHM,style=r_df.CAPITAL, markers=markers)
Attempt #4 at contextualizing IDHM trends, with markers added for capital cities
Here we’re facing the same problem as in attempt #2. We can’t see the additionally encoded data! Let’s try an overlay and see if that’s more clear.
f, ax = plt.subplots(figsize=(8, 8))
sns.scatterplot(x=r_df.LONG, y=r_df.LAT, size=r_df.IDHM, hue=r_df.IDHM)
sns.scatterplot(x=r_df[r_df.CAPITAL==1].LONG, y=r_df[r_df.CAPITAL==1].LAT, s=100)
Attempt #5 at contextualizing IDHM trends, with an overlay added for capital cities
Great. Now we can see the capital cities as well, and we can see that they very strongly trend towards the east of the country, towards the coast. Not surprising, yet very interesting to see all the same. Let’s see if we can give GDP per capita a similar treatment, kind of reusing our code.
f, ax = plt.subplots(figsize=(8, 8))
sns.scatterplot(x=r_df.LONG, y=r_df.LAT, size=r_df.GDP_CAPITA, hue=r_df.GDP_CAPITA)
GDP per capita encoded with latitude and longitude
Hmm. It looks like there aren’t enough color bins to show all the trends in GDP here, likely meaning the data has a large spread and/or skew. Let’s investigate.
# let's take a look at the distribution, after taking care of nans
f, ax = plt.subplots(figsize=(12, 8))
gdp = r_df.GDP_CAPITA.dropna()
sns.distplot(gdp)
Distribution of GDP per Capita
# it looks like gdp is heavily right skewed with a massive tail.
# it seems likely that those massive outliers are errors, and could be removed in some cases
gdp.describe()
Out:
count 5573.000000
mean 21129.767244
std 20327.836119
min 3190.570000
25% 9061.720000
50% 15879.960000
75% 26156.990000
max 314637.690000
A huge tail, and a significant skew. This will make the data much more difficult to encode with color. If we wanted to, we could remove outliers or try a different scale on the data, potentially log. We'll skip that in the main walkthrough, though a quick sketch of the idea is below, and then we'll investigate one last variable. Uber! Let's see how they're doing.
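If you do want to try the log-scale idea, a minimal sketch might look like the following; it reuses the gdp Series created above and assumes numpy, seaborn, and pyplot are imported as np, sns, and plt, as in the rest of the walkthrough:

# log-transform the skewed GDP values to compress the long right tail
log_gdp = np.log(gdp)  # gdp is the NaN-free Series created earlier; all values are positive
f, ax = plt.subplots(figsize=(12, 8))
sns.distplot(log_gdp)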
f, ax = plt.subplots(figsize=(12, 8))
sns.countplot(r_df['UBER'])
Countplot of Uber in Brazillian Cities
We can see here that the vast majority of Brazillian cities don’t have Uber. Let’s see where they are.
f, ax = plt.subplots(figsize=(8, 8))
sns.scatterplot(x=r_df[r_df.UBER==0].LONG, y=r_df[r_df.UBER==0].LAT)
sns.scatterplot(x=r_df[r_df.UBER==1].LONG, y=r_df[r_df.UBER==1].LAT)
Distribution of cities with presence of UBER encoded in orange
We can see similar trends to IDHM here, with clustering by the coast and the south of the country. Try checking out IDHM and Uber on your own! Let's look at Uber's relationship with cars.
f, ax = plt.subplots(figsize=(16, 12))
sns.boxplot(y=r_df['Cars'], x=r_df['UBER'])
Box plot of the distribution of number of cars in Brazillian cities, separated by the presence of Uber.
Oof. That’s not pretty or useful. Let’s remove those giant outlier and get a better look.
ubers, car_vals = r_df[r_df.Cars <100000].UBER, r_df[r_df.Cars <100000].Cars
sns.boxplot(ubers, car_vals )
Box plot of the distribution of number of cars (#Cars <100,000) in Brazillian cities, separated by the presence of Uber.
This is a really interesting distribution. The minimum bound for cities with Uber is above the maximum bound for cities without. Try investigating other variables related to cars on your own to see why this is. My guess would be something to do with population or GDP. Also, the presence of outliers in the non-Uber cities suggests a large skew/tail. Let's take a look.
f, ax = plt.subplots(figsize=(16, 12))
sns.distplot(r_df[(r_df.Cars < 100000) & (r_df.UBER==0)].Cars)
sns.distplot(r_df[(r_df.Cars < 100000) & (r_df.UBER==1)].Cars, bins=20)
As expected! The majority of cities either have no cars or a very small amount. Good work!
What we learned:
What is Pandas? A very useful Python package for data cleaning and manipulation.
When is Pandas used? Pretty much anytime you need to work with data in Python!
How do you clean a dataset using Pandas? With a curious mind, and lots of searching pandas documentation, paired with helpful examples.
How can visualizations aid in exploratory data analysis (EDA)? Visualizations are KEY in exploratory data analysis. This is easier shown than explained; check out the example in the article.
What does exploratory data analysis look like? See above!
Want to explore more? Try these:
Explore population data
See how population varies with other supplied categorical values
Look at how gdp per capita varies with presence of other industries
Add back in all variables/subset with different variables and create more correlation matrices to explore additional trends
Thanks for reading! | https://medium.com/swlh/pandas-plotting-pizzaz-brazil-9a3aa4cf21b | ['Caleb Neale'] | 2019-06-14 16:56:11.943000+00:00 | ['Python', 'Data', 'Data Science', 'Data Visualization', 'Pandas'] |
Hiring Software Developers in a Competitive Market | Hiring Software Developers in a Competitive Market
Danté Nel at DisruptHR CPT 12.10.2017
There is a certain mentality one needs to have when hiring software developers in a competitive market. Most companies require technical skills, but demand has outstripped supply — software developers can afford to be picky when multiple companies approach them for a job. Danté speaks about the change in mindset required when you, as an individual hiring manager at your company, realise the fierceness of competition from other players in the market.
For instance, 60% of developers will have at least 2 companies interviewing them within 1 week on the job market. This jumps to 71% in the second week, while popular ‘devs’ easily get more than 10 companies asking them for interviews. Danté gives some pro-tips on how to “win” — making better technical assessments, applying empathy in the hiring process, etc.
Watch the video: | https://medium.com/getting-better-together/hiring-software-developers-in-a-competitive-market-62021aa2e997 | [] | 2018-03-29 10:07:24.405000+00:00 | ['Hiring', 'Disrupthrct', 'People', 'Talent', 'Software Development'] |
Mass Media Approaches in B2B Digital Selling | Nowadays sales processes are collecting instruments and tactics that were traditionally used by journalists and media-managers. It means that b2b sales representatives are moving from just good communicators to a digital trusted advisors. In this article you will find more about sales tools at the intersection of b2b business development and traditional media.
The 2020 pandemic has shown that customer communications in b2b can no longer be so traditional. Face-to-face meetings, huge conferences, long business trips and all other physical events are considered insufficient for closing new deals. Some businesses could say that landing pages, online webinars and other marketing tools have made up for the lack of physical communication, but that is a marketing job. How can a salesperson process demand in remote circumstances? What are the ways of being recognisable among competitors if you don't even see your customer?
In these circumstances, sales teams need to absorb mass media practices that can be easily repeated in content creation. As part of a digital selling team in the IT industry, and with a background in journalism, I have tested the most common media tools for sales, so here are the ones I find essential to work with.
Media storytelling
In newspapers or TV reports you will notice a similar story structure. It includes a bright headline, a subhead (a more detailed description of the headline), body text and a conclusion (a summary of the story). Some elements of this structure have migrated to social media posts.
First, look at the posts that have become popular among subscribers at the following link.
What are good forms of headlines that sales people can use in their social selling posts?
Questions (E.g. How has business changed its mindset about human resources due to COVID-19?)
Round numbers (E.g. 40% of employees will never go back to offline offices after the pandemic)
Exclusive insights (E.g. The most common business launches in 2020)
Secondly, social media posts usually have the same structures as journalistic materials. These are the most common body text structures I have seen in selling posts:
▫️ cause and effect
▫️ classification
▫️ compare and contrast
▫️ list
▫️ question and answer
The third element social selling posts share with media materials is a catchy conclusion. Journalists usually end their texts with a short summary of the facts above or with an announcement of an upcoming event in the story. Sales people mostly use a call-to-action phrase such as 'ask for detailed materials' or 'watch the video below'. I believe both types of ending can work in social selling posts: the journalistic style is more appropriate for personal stories about the business, while the selling style is good for invitations and lead generation campaigns.
However, this media storytelling approach fits not only social media texts, but also video selling scripts.
Video selling
For a long time, video production required a lot of specialized media expertise, professional equipment and TV studios. Nowadays, we can do it with far less preparation and expense. Moreover, video has become the most popular form of communication over the last 2–3 years. Recent studies of non-text communication show that video is also becoming a widespread format in customer interaction.
59% of senior executives prefer lighthearted work-related videos
So, when might video be used by sales in b2b?
Personal solution overview
Invitations to webinars
Announcements of solutions
Interviews with customers
Follow-up
Broadcasts
Tools for video selling
When it comes to the instruments, I like the ones that are simple and quick to use. These are super important factors, as b2b sales people usually have little time and a lot of customers to work with. These are the video tools I find most appropriate for sales.
BigVu is an app that helps you record video from a teleprompter. You can also add text, music and your company's branding to the recorded shot. The app has a trial mode.
Quik is GoPro's editing tool that lets you combine photos and video clips into dynamic content using standard templates. You can use it even without a GoPro camera and work with footage on your smartphone. Quik also has a free version.
iMovie is the simplest video editing app I've ever known. You can cut your videos, speed them up or slow them down, add text and put music in the background. It is free on all Apple devices.
Zoom has become super popular around the world in 2020 due to its quality and free 40-minute mode. However, there is one option that is not well known among the app's users: sharing a PowerPoint presentation as a virtual background. This way, your customer can see all your movements and gestures along with the content behind you.
So, I believe the 2020 pandemic will become a strong trigger for digital transformation in b2b sales, one that will bring media approaches and sales execution closer together. A tech-savvy mindset and content creativity are going to be more appreciated by decision makers. I believe these qualities make sales more personalized and targeted for customers.
Finding Hope through Mindfulness | Photo by Andrea Piacquadio from Pexels
You are not alone if you have been feeling caught in a cycle of constant negativity. A combination of social isolation, political chaos and social media messaging can make us feel overwhelmed and cause heightened anxiety and depressed mood. It is normal to feel that you’ve reached your capacity. Lately, I have heard from so many that even when they’re focused on self-care and wellness it doesn’t feel enough to kick them out of a slump of sadness and fear. How can we escape this negativity? What can we do to help ourselves when we feel this way?
It is easy to fall into a negative cycle in our fast paced lives when we are not taking the time to be mindful. When we take a step back and focus on the present moment, it is easier to see hope and understand that we have the strength to overcome what is in front of us. As a mental health therapist, I have seen the practice of mindfulness assist many in finding hope and minimizing stress during difficult times like these.
What is Mindfulness?
Mindfulness is the act of intentionally living with awareness in the present moment without judgment or attachment.
When we practice mindfulness it is most important to be open to the experiences each new moment brings, rather than being stuck in the past or looking towards the future. For example, if we are mindful of the food we are eating at this moment, then we want to only focus on that experience. We would want to avoid judging how the food tastes, past memories associated with the food or thoughts about what you will be doing after you eat. We would want to try to remain focused on the food we are eating and bring ourselves back to the present if we are distracted with other thoughts or judgments.
Mindfulness can be practiced with any event or moment by observing, describing or participating in an experience. Even as you are reading this article you can practice mindfulness. Are you focused on these black ink words and the meaning behind them? Are you drifting away by thinking about an argument you had earlier this week? Are you having judgments or being hard on yourself for not practicing mindfulness before? I encourage you to refocus on the words you are reading now and practice participating in mindfulness in this moment.
Benefits of Mindfulness
Imagine what it would be like to be driving and only be focused on the task at hand. Focused on the feeling of the steering wheel, the sounds of the turn signals and the cars around you, instead of stressing about running late, thinking about what you want for dinner and what is still left on your growing to-do list. Imagine how many fewer car accidents or close calls you would face. Imagine how much less stress you would feel. Mindfulness has the power to reduce your suffering, reduce tension and pain and increase control of your mind instead of letting your mind be in control of you. When we are mindful in each moment we are in control of our thoughts and are less likely to spiral with anxious worries. When we are mindful we give ourselves the opportunity to experience reality as it is and to live our lives connected to the universe around us. Mindfulness allows us to take in the entire experience, to be fully present and gives us the freedom to let go of attachments and demands that society has forced upon us. Mindfulness gives us the opportunity to see hope.
Sometimes mindfulness is difficult because the present moment may cause distress for us. It is so natural for us to distract ourselves from moments of distress. Mindfulness urges us to take in each moment as it is and sit in the discomfort that may cause. As you continue to practice mindfulness, oftentimes you will notice that challenging moments and experiences become less painful. This is because mindfulness allows us to regulate our emotions and feel a radical increase in love and compassion towards others. In the year 2020 especially, mindfulness can assist us in being less overwhelmed by taking in moments as they are instead of compacting them together.
Conveyor Belt Thoughts
One of the more difficult parts of mindfulness is allowing yourself and your thoughts to be focused on the present. It is so common for us to drift our thoughts away to the next thing we have to do or become distracted with worries. A key part of mindfulness is to be able to acknowledge those distracting thoughts as they come and then let them float away and bring your attention back to the present moment. It can be helpful to notice your thoughts and feelings coming down a conveyor belt. When practicing mindfulness, as you notice distracting thoughts coming up imagine placing them on a conveyor belt and watching them leave you in this moment, then bring your attention back to what is in front of you.
Loving Kindness
A mindfulness activity that can be particularly helpful in building hope within us during times when we feel chaos, anger, or sadness is practicing loving kindness. This activity can be focused on anyone, but can be especially impactful when utilized towards someone you are struggling with (maybe a partner, family member, colleague or even a political figure). When we feel in a cycle of negativity it is easy to feel dislike or even hatred towards those around you. If you feel like these feelings are not serving you, consider attempting to send loving kindness their way. Start by sitting, standing or lying down and practicing deep breathing for 1–2 minutes with your palms open and facing up. Begin by gently bringing the person you are thinking of to mind. Say their name out loud or in your head. Center yourself in the practice of mindfulness towards sending loving kindness to this person. Radiate loving kindness to this person by reciting warm wishes for the person. You can start with more neutral thoughts:
“I am sending loving kindness to _____”
“May ______ be safe”
Take a deep breath
“May _____ be healthy”
Take a deep breath
“May _____ feel at peace”
Take a deep breath
“May _____ feel happiness”
Take a deep breath
As you feel comfortable, you are welcome to add other well wishes for the person you are focusing on. Continue to repeat your phrases until you yourself feel immersed in loving kindness. Let me be clear: this can be very challenging. At the same time it can relieve distress and anger and can be a powerful tool in finding hope. During the activity utilize the conveyor belt method when thoughts of hatred or dislike come seeping in. There is no need to judge yourself if these thoughts come, instead acknowledge them and then imagine them leaving you. It may be easier to start this activity by practicing sending loving kindness to someone you love or to yourself and then working toward people in your life that you have more complicated feelings towards.
Reminder
A gentle reminder that mindfulness can most definitely be challenging. It is something that takes practice and patience. Don’t get discouraged if you are practicing mindfulness and have difficulty staying in the present. With time, this exercise will become easier. Practicing loving kindness is about challenging the narratives we have created regarding how we see people around us. Practicing mindfulness through loving kindness allows us to push back many of the assumptions we have made in our negative cycle and see someone in this moment as what they truly are without judgment. Starting this practice is a major step towards focusing on your mental health. Be proud of yourself for starting on this mindfulness journey and allow yourself to feel hope in this process. | https://medium.com/joincurio/finding-hope-through-mindfulness-41ea979f58ff | ['Sarah Belarde'] | 2020-11-02 15:37:01.496000+00:00 | ['Personal Growth', 'Mental Health', 'Anxiety', 'Mindset', 'Mindfulness'] |
Basic Data Cleaning — Removing NaNs | As a beginning data scientist, I’m learning that most of my time is spent preparing data for analysis. Much as writing is about clarifying and polishing ideas, before we can tell any compelling stories with data, it must be thoroughly cleaned and prepared for analysis.
This might not seem very interesting, but it is necessary if we want to extract any interesting stories from it.
Here are some example datasets for us to work with:
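The setup code itself isn't shown here, but a minimal sketch of the kind of dataframe described below (4 columns, 100 rows of random integers between 0 and 100) could be built like this; the column names are placeholders:

import numpy as np
import pandas as pd

# 100 rows x 4 columns of random integers between 0 and 100
example_df = pd.DataFrame(np.random.randint(0, 101, size=(100, 4)),
                          columns=['a', 'b', 'c', 'd'])
example_df.head()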
We’ve created a dataframe with 4 columns, and 100 rows (zero is counted as the first index marker so 99 will be our last row) that is populated with random integers between 0 and 100.
NaN (not a number) variables are gaps in data, when an observation has been missed or perhaps the data isn't available. They make it impossible to run most analysis tools on a dataset, but if you are recording data and simply throw out all NaN observations, you will end up losing a lot of potentially useful data. Therefore, as data scientists, we want to observe how many NaN values there are, where they are in the data, and the most appropriate way to "clean" them.
You can verify that there are no NaN variables by running:
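That check is presumably just:

example_df.isna().sum()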
The .isna() function searches for NaN values and returns a boolean True when it finds a NaN, then the .sum() function counts each True and returns the sum for each column. If we wanted to check the number of NaN values organized by line instead of column we would type:
example_df.isna().sum(axis=1)
But I won’t run it because it’s a series of 100 zeros.
We don’t actually have any NaN values yet, so let’s add some:
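One way to do this, sketched from the description that follows (the function name, the use of applymap, and the exact <= 15 threshold check are assumptions):

def add_random_nans(x):
    # Replace a cell with NaN roughly 15% of the time; otherwise keep its value.
    return np.nan if np.random.randint(0, 100) <= 15 else x

nan_df = example_df.applymap(add_random_nans)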
What we’ve done is create a new dataset in which roughly 15% of the values are NaN, by mapping a function over every element of the dataframe. The function replaces the existing cell with a NaN value if the random number it generates is <= 15.
We can verify that the NaNs exist and their distribution as follows:
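Something along these lines:

nan_df.isna().sum()         # NaN count per column
nan_df.isna().sum().mean()  # average NaN count across the four columns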
So if we take the mean of the NaN counts across our four columns, we get 15.25. This isn’t exactly 15%, which is expected: each cell is replaced independently at random, so the realized fraction will fluctuate around the target, and depending on the range the random integers are drawn from, a <= 15 check may not correspond to exactly 15% either.
Feel free to let me know if you understand how that function works and I can add a clarification.
Now that we have some NaN data points, a fairly standard cleaning algorithm is as follows:
1) run df.isna().sum() to confirm the presence of NaN values (which we’ve done)
2) determine what is the appropriate measure to take with your NaN values.
3) Execute.
For step 2, we should consider the characteristics of this dataframe. It’s an array of integers, so it will have a shape, and some statistical properties, which we can see below:
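That cell presumably looks something like:

print(nan_df.shape)
nan_df.describe()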
*I use the print() function here because when a Jupyter (or Colab) notebook code cell contains more than one expression, only the last one displays its output automatically when you run the cell.
You can see that our dataframe’s shape is 100 rows by 4 columns, and the 4 columns have a certain mean, standard deviation, and distribution listed by the describe function. You can also see that the count is the number of rows minus the NaN values listed in our columns.
Since data science is concerned with rapidly deploying predictive models and descriptive statistics, there is an “art” to this science (this may cause some readers to see red and I think I know what you’re going to say, but please hear me out).
If we were to delete all columns with NaN variables, we would lose the dataset, so that’s not an option. If we were to delete all rows with NaN values, we would lose more than 15% of our data, which is an unattractive option.
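For comparison, a rough sketch of what those discarded options would look like:

nan_df.dropna(axis=1)        # drop every column containing a NaN -- here that is likely all of them
len(nan_df.dropna(axis=0))   # how many rows survive after dropping every row with a NaN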
A better option might be to mask the NaN values without changing the shape of the dataset. Replacing each NaN with the mean of its column should preserve the general shape of the data.
Let’s see what happens:
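A sketch of that masking step, filling each column’s NaNs with that column’s mean (filled_df is a placeholder name):

filled_df = nan_df.fillna(nan_df.mean())
filled_df.describe()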
The mean is preserved, as are the min and max. However, our standard deviation and quartiles have shifted towards the mean. This is to be expected: all of the “un-counted” NaNs that were excluded from the description of nan_df have now been replaced with their column means, shifting the relative weight of observations toward the mean.
This is a tradeoff, but perhaps a small one since now we get to keep all our data.
I will complicate this situation a little more in my next post. | https://medium.com/writing-data/basic-data-cleaning-removing-nans-1787110fc11b | ['Ned H'] | 2019-03-26 00:39:26.970000+00:00 | ['Data Cleaning', 'Numpy', 'Python', 'Pandas', 'Data Science'] |
Five Strategies to Boost Your NaNoWriMo Progress | Five Strategies to Boost Your NaNoWriMo Progress
Embrace imperfection and finish your novel
Photo by Bill Jelen on Unsplash
Everyone has different life circumstances, is in a different place in their writing journey, and has different skills. Some people stay up late while others get up early. There are writers who have kids running around and puppies and school and life events.
These strategies are still for you.
It can be overwhelming to see other people with word counts higher than yours (hence my introduction). Everyone writes at their own pace, so focus less on their numbers and more on your progress. If you need a reminder of other things to celebrate during NaNo besides word count, check out this piece.
This is my third NaNo project. I write fiction full(ish)-time. I’m also a self-published author who will finish this year with 15 published titles. I’m telling you this so that when I share my word count, you understand the life circumstances that allowed it to happen.
As of writing this post (I’m scheduling it for tomorrow morning), I sit at 25K words on my novel. My breakdown was 10K+ on day 1, 6K+ on day 2, and 7K+ on day 3.
That didn’t only happen because of my life circumstances or the fact that I have the first half of this novel planned extensively (yeah, once I reach the middle I need to figure out where I’m going). I actually have a few strategies that I use during November to help keep me on track and flying through the month.
No matter where you are in your writing journey or how many times you’ve participated in NaNo, these strategies can boost your progress without sucking the life out of you.
DO NOT PRESS THE BACKSPACE KEY
Yeah, I’m putting that one in all caps.
It’s so tempting to go back and try to make your first sentence perfect on the first try. You might want to go back and fix that small little comma error you have. You might think of a new or better word than the one you just used.
Don’t press the backspace key. Don’t let that word go to waste. Just keep swimming, like this writer who wrote 1000 words before finding her first sentence. Those 1000 words still count.
November is NOT National Editing Month. Keep writing.
Keep typing
Keep typing, even if you have no idea what you’re writing.
Something I do only in November (to pad my word count at the very least) is to not let my fingers stop moving during writing sprints. I keep typing, even if that means I start jumbling my stream of consciousness onto the page. It might just be a quick paragraph of what you might want to happen in the future. Just don’t stop.
Keep your momentum and take advantage of it when you have it. Think of it like morning pages, where you can’t let your fingers stop typing. Even if it’s a garbled mess, there are still words on your screen. They still count toward your final goal. And the more you keep typing, the better those words will start flowing, getting you closer to the finish line.
Make notes, not changes
Have you stumbled onto a plot hole? Discovered that you made a character’s eyes blue when they started as green?
MAKE A NOTE. Sticky note by your computer, comment on your word document, leave a trail of breadcrumbs.
Don’t pause and go back to fix the previous chapter. That takes time away from writing and it messes with your flow. Leave yourself a note and worry later.
I have a minimum of ten notes already to go back and add certain elements or change descriptions. I’m not upset I ‘missed’ them at first, but I just discovered things in my story that needed to happen differently.
Move forward as if you’ve made that change
That plot hole you just found? Yeah, it never existed… at least for your future word count.
Pretend as if you solved it from the beginning of your novel and keep writing as if it has always been that way. Leave a note (bold, highlight, brackets) and fix it after November.
So much time is wasted scrolling your document to find the small little sentence that you need to change. And if you have a massive change, it can be tempting to try to fix everything and make it fit just right in the manuscript. Fight the urge and just pretend it already exists. You can change it after you reach your 50K words.
It doesn’t have to be perfect
Believe me. I spent last year’s NaNo trying to write the perfect conclusion to my trilogy. I slaved over words, trying to make each sentence absolutely beautiful. I researched my old books making sure I covered everything. Getting 50K was the hardest word count I’ve ever worked for. I HATED THAT BOOK FOR A MONTH.
But then I rewrote it during Camp the following April and tossed perfection out the window. It might be my favorite book written to date.
Please please remember that this month is not about making a perfect manuscript on the first try. It’s about getting something onto the page so you can edit it later (or just throw away if that’s what you want).
No one has to ever see the first draft. You can keep your little secretly misspelled words like ‘fish’ to yourself (true story, I wrote ‘firch’ at least five times before I noticed). The first draft is for you and you alone. Write the story for yourself. Share it later. | https://medium.com/the-innovation/five-strategies-to-boost-your-nanowrimo-progress-ceba67231c71 | ['Laura Winter'] | 2020-11-06 15:44:09.549000+00:00 | ['Novel', 'Novel Writing', 'NaNoWriMo', 'Writing', 'Writing Tips'] |
A First Look at React’s New Server Components | A First Look at React’s New Server Components
Explaining the new approach to fetching data in React.js.
Yesterday, the React team announced a new feature: Server Components.
The feature is still experimental; there is no real documentation yet.
Put simply, it’s about data and component fetching in React.js.
Server Components allow us to load components from the backend. These components have already been rendered on the backend and can be seamlessly integrated into the running app.
So it’s a bit like server-side rendering but works differently.
Similar to what you know from Next.js with getInitialProps, server components can fetch data and pass it to front-end components.
However, unlike classic SSR, Server Components are a bit more dynamic. We can fetch a server-rendered component tree while the app is running, and the client state is not lost.
They also work differently on a technical level. With SSR, our JavaScript code is rendered into HTML on the server. This produces the HTML markup, which is the visible part of our web page.
This is sent to the client, plus the JavaScript code used for interactivity. Thanks to SSR, we see something earlier, but the interactivity can be delayed.
Server Components are dynamically included in the app and passed to the client in a special serialized format rather than as HTML.
All JavaScript expressions have already been evaluated: 1 + 1 becomes 2, and the result is what gets passed in that format. The components are static and cannot be interactive. Compared to SSR, only the visible part is passed; the interactivity is missing.
So what is the big advantage of Server Components?
Zero-Bundle-Size Components
The JavaScript world is full of huge libraries. Just think of packages like Moment.js, which are many kilobytes in size, but of which we only use a few functions.
For the app's performance, and thus for the user, this is, of course, very bad — all the code is shipped to the front-end.
Tree-shaking can be used to strip out code that we don’t need. What remains is still a lot of code that is often executed only once, for example to format a date.
Thanks to Server Components, we can spare our front-end this code as well. | https://medium.com/javascript-in-plain-english/react-server-components-2cf9f8e82c1f | ['Louis Petrik'] | 2020-12-22 11:09:29.408000+00:00 | ['Reactjs', 'React', 'JavaScript', 'Web Development', 'Nodejs'] |