[Dataset schema: each record below has the fields title (1–200 chars), text (10–100k chars), url (32–885 chars), authors (2–392 chars), timestamp (19–32 chars), and tags (6–263 chars).]
Welcome to Hawaii
Welcome to Hawaii. Drop by for a spectacular view. The volcano season has returned to Hawaii and tourism is down by half. This development came as a surprise to Honolulu’s Chamber of Commerce Chairman MK Buck. “We haven’t had a volcanic display like this since I was a kid. And people are staying away? F**king idiots!” Buck’s solution? A nationwide ad campaign to remind tourists what we’re missing. “This is a lifetime opportunity,” Buck swears. “How often do you have a chance to step into the path of fast-running lava and capture video of the explosions on your iPhone?” Hawaiian tour guides attract adventurous customers during the volcano season. Hawaiian tourist bureaus now offer three packages with their “Love the Heat” campaign: By Land. Led by a robotic tour guide, visitors wade up the lava flow wearing asbestos gaiters. While the guide points out the sites of former landmarks (now buried under rubble), guests capture memorable moments on their cell phones or with disposable cameras.[1] By Sea. Guests approach the volcano in outrigger canoes steered by experienced native guides. While guides share native stories and folklore about Kilauea’s wrath, tourists can watch the toxic gasses rolling toward them with binoculars, or capture the memories on their phones. A few lucky guests who find themselves caught in the wind currents will experience the joy of paddling for their lives to escape surrounding fumes. By Air. Tourists fly over Kilauea in hot air balloons during explosions. The tour guide drops them into the crater while attached to a steel cable to video the emerging lava and fireballs. This first-rate collector’s edition t-shirt comes with your vacation package. Every package is advertised with the slogan “Dress for the heat,” and every guest receives an honorary t-shirt that reads “I survived Kilauea” on the front and “Drop by for a spectacular view” on the back. Basic packages begin at $12,000 per person to enjoy the luxury of pitching tents and camping on recent lava flows. Braver tourists will catch poisoned fish floating past to cook for dinner. Packages with hotel accommodations run as low as $35,000 per person. “Bookings are up five percent,” Buck admitted. “We hope to reach capacity before the volcano goes dormant and people no longer have anything to attract them.” All packages are fully insured.[2]
https://medium.com/emphasis/welcome-to-hawaii-ca7b134552d4
['Phillip T Stephens']
2018-06-02 18:37:21.985000+00:00
['Marketing', 'Hawaii', 'Humor', 'Travel', 'Satire']
Album(s) of the Day — October 7. Re-visiting the two Van Halen AOTD.
07.October.2020: Van Halen. RIP Eddie Van Halen. Thoughts and wishes for his family and friends. The beauty of Eddie Van Halen was that his music transcended both time and age. I began this by saying, “If you’re over a certain age…” but the truth is, it doesn’t matter. I recently watched a video of two kids watching Eddie play “Eruption” live, and you can see their amazement. In fact, the young kid asks: “How does he even do that?” I dunno, kid; it’s a question everyone has been asking for decades now. Now… imagine hearing that in 1978! It’s the second one down: There never was, and never will be, a parallel to Eddie Van Halen. You’re going to read a lot of tributes about him and his music and contributions, etc. Loads of celebrities will offer up their condolences. I am not going to say too much here because really, what can actually be said? One of my first Albums of the Day was the first Van Halen album: It’s incredible (the album, not so much my writing). And then, on my birthday, I selected my favorite Van Halen album. It was a toss-up between Women and Children First and Fair Warning, but Women and Children First won. If you’re looking for a way to honor Eddie Van Halen’s memory, play his music tonight. Then play it again tomorrow. And then play it for someone who may have never heard it.
https://medium.com/etc-magazine/album-s-of-the-day-october-7-dfea116d52f7
['Keith R. Higgons']
2020-10-06 22:53:29.918000+00:00
['Rock', 'Music', 'Rock And Roll', 'Culture', 'Art']
Young Marie Curie
First Principle: Never Let One’s Self Be Beaten Down By Persons Or Events Quick Intro When history judges a legacy it over-weighs the tail-end of one’s journey, often overlooking key starting circumstances — our tiny sample of polymaths is no different. As we’ve seen, the majority are born into wealth & opportunity, with the wind beneath their sails; a tiny handful of others, however, face one or many lifelong uphill struggles. Perhaps no one else in this series broke glass ceiling after glass ceiling, shattering the misconceptions of their times, like Madame Marie Curie. An eminent polymath scientist, she faced obstacle after obstacle, only to radiate through & rightly earn her legacy. Maintaining the same theme as previous entries in this series, we ask — what was she like in her twenties? Note-Worthy Accomplishments — First individual & only woman ever to win a Nobel Prize in two distinct branches of science: physics & chemistry — Known as the Mother of Radiation, credited with founding the field of radiology & discovering its applications — Nurse & inventor who built mobile X-ray units for use in World War I — Ground-breaking feminist who was the first-ever woman to attain a PhD & professorship at the University of Paris, & to win not one but two Nobel Prizes 20s To 30s (1887–1897) Maria Skłodowska was born in Warsaw, Poland on November 7th, 1867. The fifth & youngest child, she was born to two very well-known liberal activist educators (Bronisława & Władysław Skłodowski). Her mother, the former, operated a prestigious Warsaw boarding school for girls; her father, the latter, taught math & physics while holding the position of director of a boys’ school. As one can imagine, Curie was taught to value a life of self-education; this nurturing influence is seen multiple times throughout her childhood, like when her father (Wladyslaw) brought home laboratory equipment for his children to play with. Tragically, when Marie was seven, her oldest sibling, Zofia, died of typhus. Three years later, in 1878, Marie’s mother passed as well; this same year, when she was ten, Marie attended the boarding school of J. Sikorska. Unfortunately, while she was away, her father was fired from his position by his Russian supervisors (for pro-Polish sentiments). Now financially desolate, Wladyslaw offered boarding to travelers at their old home. Meanwhile, Marie graduated from the equivalent of high school in 1883. Not completely immune to her circumstances, Marie soon collapsed from depression & spent the next two years recovering in the countryside with relatives of her father & then back in her old home. Throughout this period she took up private tutoring (a role then known as a governess) whenever she found the opportunity. As her childhood came to a close, it became crystal-clear that Marie would need to fight two uphill battles to attain higher education: a cultural one & a financial one. Unable to enroll in a regular local institution because she was a woman, she & her sister Bronisława joined the underground, clandestine Flying University (sometimes translated as Floating University), a Polish patriotic institution of higher learning that admitted women students during the rule of the Russian Empire. Marie (left) & Her Sister Bronislawa (right) In 1887, the year she turned twenty, Marie Curie fought feverishly to attain further education. 
Well-versed in many subjects, she could no longer count on her father, so she sought out financial independence through the position of governess (home tutoring). She continued in various governess positions through the following two years, fitting in personal bouts of studying math, physics & chemistry; in addition to STEM subjects, she also worked on her French & Russian. Marie wrote (in her biography of Pierre) that she learned to appreciate the skill of working independently, as it would deeply help her out later in life when she worked in Paris: Life is not easy for any of us. But what of that? We must have perseverance & above all confidence in ourselves. We must believe that we are gifted for something & that this thing, at whatever cost, must be attained. In 1890, the year she turned twenty-three, Curie found herself back in Warsaw living with her father. Without a need to cover her rent, she doubled down on her self-education. First, she again attended classes at the secretive, underground Flying University (a place of educational refuge for peasants & women seeking education); in addition, she interned at the laboratory of the Museum of Agriculture & Industry in Warsaw, Poland. University Of Paris The next year, at twenty-four, she finally saved enough to travel to Paris to fulfill her dream of enrolling at the University of Paris. Per usual Curie fashion, she very much arrived in her own way: financially independent & romantically single (very rare for the social milieu of the time). In 1892, the following year, she notes in her autobiography that her conditions were quite spartan. In it, she describes how it was so cold that water would freeze in her apartment at night and that she had to carry everything up six flights of stairs: During a rigorous winter, it was not unusual for the water to freeze in the basin in the night; to be able to sleep I was obliged to pile all my clothes on the bedcovers. In the same room I prepared my meals… These meals were often reduced to bread with a cup of chocolate, eggs or fruit. I had no help in housekeeping & I myself carried the little coal I used up the six flights. At twenty-six, in 1893, Marie Curie finally earned a degree in physics. Scoring at the top of her class, she began to build a reputation as the first woman in the University’s records to earn a graduate degree in physics. Throughout this period, she maintained herself through private tutoring & soon through part-time employment at the industrial lab of Gabriel Lippmann. The following year, at twenty-seven, she meets her soulmate & future husband, Pierre Curie — an immediate match made in heaven, as he was equally intense & immersed in his work. Additionally, she attains her second degree (there are contrasting sources on the subject but it’s believed that this degree is in mathematics). Upon graduating, she applies for a professorship in Poland & is promptly rejected, which convinces her to stay in Paris & move in with Pierre. At twenty-eight, she returned to the University to earn her doctorate in physics (Pierre had recently earned his). By the end of 1895, they married simply & were already well-known in the Parisian community as an intellectual power couple — famously, they often biked through town together, engaging in deep scientific discussions: Marie & Pierre Curie At twenty-nine, in 1896, Curie published a dissertation on the magnetic properties of steel; additionally, this same year, Pierre’s father moved in with the newlywed couple. 
The following, final year in this mini-bio, Curie gave birth to her daughter Irene Curie. Quickly finding herself with a full household, she started teaching at École Normale Supérieure, which is a university that trains teachers. Of importance, since it’d lead to their groundbreaking discovery & Nobel Prize, this is the year that the Curies began to look into the uranium rays just discovered by Henri Becquerel. Quirks, Rumors & Controversies Alas, like those in the series before her, Curie possessed a few intricacies, quirks & demons; inarguably, Curie ruffled a few feathers throughout her journey. An obvious surface-level intricacy & quirk of Madame Curie is her ferocious sense of independence — this is exemplified many times just throughout this mini-series. From her insistence on joining the unofficial, secretive Flying University to stubbornly moving out to France without firm financial footing, Curie was aptly described as “hard-headed.” However, given the context of her environment & her boundless success in the face of it — it’s hard to argue that it wasn’t justified, if not flat out Herculean. Yet, most strengths dialed up to 11 can quickly turn into negatives; no doubt unfairly, Curie’s sexual independence & liberation eventually invited controversy. As the story goes, Curie’s greatest scandal stemmed from a salacious affair. As a prologue, however, it’s worth noting that, tragically, her beloved Pierre died in 1906. Five years later, on the cusp of winning her second Nobel Prize, Marie is feeling particularly lonely as she’s reminded of her & Pierre’s first prize. She eventually opens her arms & heart again to one of Pierre’s former University students: Paul Langevin. Langevin was (nearly) everything that Marie found in Pierre — brilliant, immersed, & celebrated. Paul Langevin Unfortunately, Paul was also married with kids. For what it’s worth, history notes that the Langevins’ marriage was exceedingly acrimonious, occasionally resulting in physical abuse (going both ways). With time, however, the affair only flourished & Marie & Paul found themselves rendezvousing in secret apartments to keep it up. Perhaps not surprisingly, Madame Langevin wasn’t as clueless as she let on — collecting multiple notes of their correspondence as the affair blossomed. As legend has it, Madame Langevin approached the enamored couple three days before Curie’s second Nobel acceptance speech; during the confrontation, she declared to the press that her husband & Marie Curie were having an affair & demanded both money & custody of the children. Socially, for Marie Curie, all hell broke loose as she was universally castigated by both social & academic circles. Curie was unfairly cast as the conniving tramp who had entrapped a married man; worse, xenophobic sentiments seeped in & she was accused of being a “dangerous foreigner — a Jew!” Tragically, even members of the Nobel Committee ruthlessly attacked Curie, as seen below in a quote from biochemist Olof Hammarsten: We must do everything that we can to avoid a scandal and try, in my opinion, to prevent Madame Curie from coming. It would be quite disagreeable…and I don’t know who could have her at their table. Cooler heads eventually prevailed & Curie was comforted & encouraged to continue her prolific career by none other than Albert Einstein. Albert Einstein & Marie Curie In Closing Who was Marie Curie in her twenties? 
A motivated, brilliant scientist with a deep reverence for self-education & a fierce desire for self-reliance. Was she accomplished in her twenties? A lukewarm yes. Curie had yet to create any groundbreaking, original work; this, however, doesn’t take away from her early ceiling-shattering (particularly at the University of Paris), achieved through her impressive perseverance in the face of financial & cultural pressures. At the close of her twenties, Curie’s journey was very much at an inflection point, as it would only be a matter of months until the Curies came across their eminent discovery. Undoubtedly, though perhaps subconsciously, Curie spent her formative years creating & sharpening foundational lifelong habits. Among those, perhaps none were more important than her fastidious perseverance, her ferocious sense of individuality, her passion for self-education & her commitment to an unwavering work ethic.
https://medium.com/young-polymaths/young-marie-curie-42c3d759f88d
['Jesus Najera']
2020-11-11 19:13:53.513000+00:00
['Life Lessons', 'History', 'Education', 'Physics', 'Science']
3 Ways to Increase Your Concentration at Workplace
3. Don’t Get Sucked Into Multitasking There is a myth that multitasking helps us complete many tasks in a short time. But the fact is most of us suck at it. Switching between tasks divides the brain’s attentional resources, and as a result we lose the ability to focus efficiently on a single task. So, whenever we are involved in a job where we need to use our problem-solving abilities, we must restrict multitasking. As the Turkish proverb says: “One arrow does not bring down two birds.” Likewise, we can’t simultaneously complete different tasks using one brain without losing efficiency and productivity. We can think of it as sunlight. What do you think is powerful enough to burn a piece of paper: scattered sun rays, or rays concentrated by a magnifying lens? We must try to avoid multitasking if we can. Distributing our energy across varied activities is likely to cause chaos and stress. Besides, the pressure to complete all the tasks kills our creative quotient, and our performance suffers as a result. Having more to handle than our mental capacity allows can also weaken our memory. Suppose you are cooking four recipes at one time. There are very high chances that you’ll forget an ingredient or two. Have you ever felt that while cooking, you can’t figure out if you’ve added the seasonings or not? This scenario is the result of scattered focus and a warning to stop multitasking. A research study conducted at Stanford University suggests that multitasking is less productive and fruitful than doing one thing at a time. The researchers also provided evidence that individuals regularly bombarded by different types of electronic media struggle to concentrate, recall important information, or switch between tasks. Though we believe that we can multitask with ease, we are subtly killing our attention to detail, creative thinking, and organizational skills. So, how can we complete all tasks in a shorter time without multitasking? How you can do it First of all, get enough rest and sleep well at night, so you don’t rush things the next morning. A calm mind is the secret to 1000x productivity. If your employer demands that you work like a maniac, learn the art of saying NO. Sometimes, an upfront no is better than doing a lousy job. If you are overloaded with work, and yet your manager wants you to work on a new project, you can explain that you already have plenty on your plate and would appreciate it if a colleague could take over the next project. It would be best to use scheduling and planner apps like Todoist, Microsoft To Do, Google Tasks, etc., to prioritize activities. Listing all the jobs priority-wise helps us devote our attention accordingly. Next, respond to high-priority emails first. Not all emails need our attention. Schedule a time for distractions like your phone, television, etc. This habit will help you to concentrate on your job better. Planning your day before it starts gives you room to accommodate any last-minute tasks, or else multitasking will suck you in like a black hole, and you’ll come out of it stressed and exhausted.
https://medium.com/live-your-life-on-purpose/3-ways-to-increase-your-concentration-10a0ddc756c7
['Darshak Rana']
2020-12-11 02:26:05.612000+00:00
['Personal Development', 'Mental Health', 'Self', 'Self Improvement', 'Life Lessons']
Machine Learning Basics with the K-Nearest Neighbors Algorithm
The k-nearest neighbors (KNN) algorithm is a simple, easy-to-implement supervised machine learning algorithm that can be used to solve both classification and regression problems. Pause! Let us unpack that. ABC. We are keeping it super simple! Breaking it down A supervised machine learning algorithm (as opposed to an unsupervised machine learning algorithm) is one that relies on labeled input data to learn a function that produces an appropriate output when given new unlabeled data. Imagine a computer is a child, we are its supervisor (e.g. parent, guardian, or teacher), and we want the child (computer) to learn what a pig looks like. We will show the child several different pictures, some of which are pigs and the rest could be pictures of anything (cats, dogs, etc). When we see a pig, we shout “pig!” When it’s not a pig, we shout “no, not pig!” After doing this several times with the child, we show them a picture and ask “pig?” and they will correctly (most of the time) say “pig!” or “no, not pig!” depending on what the picture is. That is supervised machine learning. “Pig!” Supervised machine learning algorithms are used to solve classification or regression problems. A classification problem has a discrete value as its output. For example, “likes pineapple on pizza” and “does not like pineapple on pizza” are discrete. There is no middle ground. The analogy above of teaching a child to identify a pig is another example of a classification problem. Image showing randomly generated data This image shows a basic example of what classification data might look like. We have a predictor (or set of predictors) and a label. In the image, we might be trying to predict whether someone likes pineapple (1) on their pizza or not (0) based on their age (the predictor). It is standard practice to represent the output (label) of a classification algorithm as an integer number such as 1, -1, or 0. In this instance, these numbers are purely representational. Mathematical operations should not be performed on them because doing so would be meaningless. Think for a moment. What is “likes pineapple” + “does not like pineapple”? Exactly. We cannot add them, so we should not add their numeric representations. A regression problem has a real number (a number with a decimal point) as its output. For example, we could use the data in the table below to estimate someone’s weight given their height. Image showing a portion of the SOCR height and weights data set Data used in a regression analysis will look similar to the data shown in the image above. We have an independent variable (or set of independent variables) and a dependent variable (the thing we are trying to guess given our independent variables). For instance, we could say height is the independent variable and weight is the dependent variable. Also, each row is typically called an example, observation, or data point, while each column (not including the label/dependent variable) is often called a predictor, dimension, independent variable, or feature. An unsupervised machine learning algorithm makes use of input data without any labels —in other words, no teacher (label) telling the child (computer) when it is right or when it has made a mistake so that it can self-correct. Unlike supervised learning that tries to learn a function that will allow us to make predictions given some new unlabeled data, unsupervised learning tries to learn the basic structure of the data to give us more insight into the data. 
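To make the classification/regression distinction concrete, here is a tiny Python sketch; the numbers are invented for illustration and are not from the article’s data sets:

# Classification: predict whether someone likes pineapple on pizza (1/0) from their age.
ages = [12, 25, 33, 41, 58]
likes_pineapple = [1, 1, 0, 0, 1]   # discrete labels; the integers are only symbols, not quantities

# Regression: predict weight (a real number) from height.
heights_cm = [152.4, 167.6, 175.3, 180.1, 190.5]
weights_kg = [48.9, 63.2, 71.8, 77.5, 90.1]   # continuous labels; arithmetic on them is meaningful

Same shape of data in both cases, but the label column decides which kind of problem we are solving.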
K-Nearest Neighbors The KNN algorithm assumes that similar things exist in close proximity. In other words, similar things are near to each other. “Birds of a feather flock together.” Notice in the image above that most of the time, similar data points are close to each other. The KNN algorithm hinges on this assumption being true enough for the algorithm to be useful. KNN captures the idea of similarity (sometimes called distance, proximity, or closeness) with some mathematics we might have learned in our childhood — calculating the distance between points on a graph. Note: An understanding of how we calculate the distance between points on a graph is necessary before moving on. If you are unfamiliar with or need a refresher on how this calculation is done, thoroughly read “Distance Between 2 Points” in its entirety, and come right back. There are other ways of calculating distance, and one way might be preferable depending on the problem we are solving. However, the straight-line distance (also called the Euclidean distance) is a popular and familiar choice. The KNN Algorithm
1. Load the data
2. Initialize K to your chosen number of neighbors
3. For each example in the data
3.1 Calculate the distance between the query example and the current example from the data.
3.2 Add the distance and the index of the example to an ordered collection
4. Sort the ordered collection of distances and indices from smallest to largest (in ascending order) by the distances
5. Pick the first K entries from the sorted collection
6. Get the labels of the selected K entries
7. If regression, return the mean of the K labels
8. If classification, return the mode of the K labels
The KNN implementation (from scratch) is sketched just after this section. Choosing the right value for K To select the K that’s right for your data, we run the KNN algorithm several times with different values of K and choose the K that reduces the number of errors we encounter while maintaining the algorithm’s ability to accurately make predictions when it’s given data it hasn’t seen before. Here are some things to keep in mind: As we decrease the value of K to 1, our predictions become less stable. Just think for a minute, imagine K=1 and we have a query point surrounded by several reds and one green (I’m thinking about the top left corner of the colored plot above), but the green is the single nearest neighbor. Reasonably, we would think the query point is most likely red, but because K=1, KNN incorrectly predicts that the query point is green. Conversely, as we increase the value of K, our predictions become more stable due to majority voting / averaging, and thus, more likely to make more accurate predictions (up to a certain point). Eventually, we begin to witness an increasing number of errors. It is at this point we know we have pushed the value of K too far. In cases where we are taking a majority vote (e.g. picking the mode in a classification problem) among labels, we usually make K an odd number to have a tiebreaker. Advantages The algorithm is simple and easy to implement. There’s no need to build a model, tune several parameters, or make additional assumptions. The algorithm is versatile. It can be used for classification, regression, and search (as we will see in the next section). Disadvantages The algorithm gets significantly slower as the number of examples and/or predictors/independent variables increases. 
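The original from-scratch gist isn’t reproduced in this text, so here is a minimal Python sketch of the eight numbered steps above; the function names (euclidean_distance, mode, knn) and the tiny training set are illustrative choices of mine, not the author’s code:

from collections import Counter
from math import sqrt
from statistics import mean

def euclidean_distance(a, b):
    # Straight-line distance between two points given as equal-length sequences of numbers.
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mode(labels):
    # Most frequent label; used for classification.
    return Counter(labels).most_common(1)[0][0]

def knn(data, query, k, choice_fn):
    # data: list of (features, label) pairs; query: a feature list.
    # choice_fn: mean for regression, mode for classification.
    # Steps 3-4: distance from the query to every example, paired with its index, sorted ascending.
    distances_and_indices = sorted(
        (euclidean_distance(features, query), index)
        for index, (features, _label) in enumerate(data)
    )
    # Steps 5-6: labels of the K closest examples.
    k_nearest_labels = [data[index][1] for _, index in distances_and_indices[:k]]
    # Steps 7-8: aggregate the K labels into a single prediction.
    return choice_fn(k_nearest_labels)

# Classification example with K = 3.
training = [([1.0, 1.1], "red"), ([1.2, 0.9], "red"),
            ([5.0, 5.2], "green"), ([5.1, 4.9], "green")]
print(knn(training, query=[1.1, 1.0], k=3, choice_fn=mode))   # -> red

# Regression example with K = 2.
print(knn([([1.0], 10.0), ([2.0], 20.0), ([3.0], 30.0)],
          query=[1.5], k=2, choice_fn=mean))                   # -> 15.0

Swapping choice_fn between mode and mean is all that separates the classification and regression cases, which mirrors steps 7 and 8 above.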
KNN in practice KNN’s main disadvantage of becoming significantly slower as the volume of data increases makes it an impractical choice in environments where predictions need to be made rapidly. Moreover, there are faster algorithms that can produce more accurate classification and regression results. However, provided you have sufficient computing resources to speedily handle the data you are using to make predictions, KNN can still be useful in solving problems that have solutions that depend on identifying similar objects. An example of this is using the KNN algorithm in recommender systems, an application of KNN-search. Recommender Systems At scale, this would look like recommending products on Amazon, articles on Medium, movies on Netflix, or videos on YouTube. Although, we can be certain they all use more efficient means of making recommendations due to the enormous volume of data they process. However, we could replicate one of these recommender systems on a smaller scale using what we have learned here in this article. Let us build the core of a movies recommender system. What question are we trying to answer? Given our movies data set, what are the 5 most similar movies to a movie query? Gather movies data If we worked at Netflix, Hulu, or IMDb, we could grab the data from their data warehouse. Since we don’t work at any of those companies, we have to get our data through some other means. We could use some movies data from the UCI Machine Learning Repository, IMDb’s data set, or painstakingly create our own. Explore, clean, and prepare the data Wherever we obtained our data, there may be some things wrong with it that we need to correct to prepare it for the KNN algorithm. For example, the data may not be in the format that the algorithm expects, or there may be missing values that we should fill or remove from the data before piping it into the algorithm. Our KNN implementation above relies on structured data. It needs to be in a table format. Additionally, the implementation assumes that all columns contain numerical data and that the last column of our data has labels that we can perform some function on. So, wherever we got our data from, we need to make it conform to these constraints. The data below is an example of what our cleaned data might resemble. The data contains thirty movies, including data for each movie across seven genres and their IMDB ratings. The labels column has all zeros because we aren’t using this data set for classification or regression. Self-made movies recommendation data set Additionally, there are relationships among the movies that will not be accounted for (e.g. actors, directors, and themes) when using the KNN algorithm simply because the data that captures those relationships are missing from the data set. Consequently, when we run the KNN algorithm on our data, similarity will be based solely on the included genres and the IMDB ratings of the movies. Use the algorithm Imagine for a moment. We are navigating the MoviesXb website, a fictional IMDb spin-off, and we encounter The Post. We aren’t sure we want to watch it, but its genres intrigue us; we are curious about other similar movies. We scroll down to the “More Like This” section to see what recommendations MoviesXb will make, and the algorithmic gears begin to turn. The MoviesXb website sends a request to its back-end for the 5 movies that are most similar to The Post. The back-end has a recommendation data set exactly like ours. 
It begins by creating the row representation (better known as a feature vector) for The Post, then it runs a program similar to the recommender sketch shown after this summary to search for the 5 movies that are most similar to The Post, and finally sends the results back to the MoviesXb website. When we run this program, we see that MoviesXb recommends 12 Years A Slave, Hacksaw Ridge, Queen of Katwe, The Wind Rises, and A Beautiful Mind. Now that we fully understand how the KNN algorithm works, we can explain exactly how the KNN algorithm came to make these recommendations. Congratulations! Summary The k-nearest neighbors (KNN) algorithm is a simple, supervised machine learning algorithm that can be used to solve both classification and regression problems. It’s easy to implement and understand, but has a major drawback of becoming significantly slower as the size of the data in use grows. KNN works by finding the distances between a query and all the examples in the data, selecting the specified number of examples (K) closest to the query, then voting for the most frequent label (in the case of classification) or averaging the labels (in the case of regression). For both classification and regression, we saw that choosing the right K for our data is done by trying several values of K and picking the one that works best. Finally, we looked at an example of how the KNN algorithm could be used in recommender systems, an application of KNN-search.
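The MoviesXb program referenced above isn’t included in this text either; the following is a hedged Python sketch of what such a nearest-neighbour lookup might look like. The handful of movies, genre flags, and ratings here are made up for illustration (the article’s real data set has thirty movies across seven genres plus an IMDb rating):

from math import sqrt

# Hypothetical feature vectors: [biography, drama, thriller, history, IMDb rating].
movies = {
    "The Post":         [1, 1, 1, 1, 7.2],
    "12 Years a Slave": [1, 1, 0, 1, 8.1],
    "Hacksaw Ridge":    [1, 1, 0, 1, 8.2],
    "A Beautiful Mind": [1, 1, 0, 0, 8.2],
    "The Avengers":     [0, 0, 0, 0, 8.0],
}

def recommend(query_title, catalogue, n=3):
    # Rank every other movie by Euclidean distance from the query movie's feature vector.
    query = catalogue[query_title]
    ranked = sorted(
        (sqrt(sum((a - b) ** 2 for a, b in zip(features, query))), title)
        for title, features in catalogue.items()
        if title != query_title
    )
    return [title for _, title in ranked[:n]]

print(recommend("The Post", movies))   # the n most similar movies by genre profile and rating

With the full thirty-movie data set and n=5, this kind of routine would produce the “More Like This” list described in the article.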
https://towardsdatascience.com/machine-learning-basics-with-the-k-nearest-neighbors-algorithm-6a6e71d01761
['Onel Harrison']
2019-07-14 01:10:25.321000+00:00
['Machine Learning', 'Algorithms', 'Artificial Intelligence', 'Data Science', 'Data']
We Need More Compassion When We Witness Adult Meltdowns
What Is a Meltdown? A coworker, friend, or spouse snaps over a trivial offense. The tantrum escalates to the point of hysterics. The response seems wildly disproportionate to the transgression. We call it a meltdown. What should you do in that situation? Comfort and console? Pretend not to notice? Shoot a video to win social media points? I witnessed a disturbing scene, not violent like the guy who destroyed his computer, but inappropriate in other ways. A normally quiet and composed woman let out an uncharacteristic grunt. She followed with a rowdy complaint about the settings on her chair — the seat and armrests were adjusted lower than her preferred settings. She had worked from home the previous day, and someone had used her desk. Desk-sharing was a regular occurrence. There weren’t enough seats to accommodate everyone. What set her off was that someone had neglected to return the seat settings to their previous positions. It was a minor inconvenience at most, but she launched into a mini tirade. “This is absolutely disgusting,” she said. “My seat height is way off. The armrests are too low. Who does that?” I would think most people adjust the chair to their liking, especially if they’re going to use it for eight hours. That logic escaped her. The tantrum concluded with this puzzling statement. “I’m terrified that my children will soon have to work in a world like this.” Was her outrage justified? Was the fear of her children’s future warranted? Why would an otherwise composed person blow up at such a trivial affront?
https://medium.com/better-marketing/we-need-more-compassion-when-we-witness-adult-meltdowns-a1f936cfd6c0
['Barry Davret']
2019-07-19 00:55:47.340000+00:00
['Self Improvement', 'Life Lessons', 'Relationships', 'Work', 'Mental Health']
LOL: Issue #24
1. Mediocre Man by Ayaan Sawant 2. How to discover great books to read? by Harsh Snehanshu 3. To You by Hana Vaid. Who is Hana Vaid? Her poetry book Meraki is out, check it out. 4. On the Epilogue of The Handmaid’s Tale by Anna Sheffer 5. The Ideal Y by Aditya Mankad. Feature: In case you missed it, we featured Spoken Word poets Kc Vlaine and Anirudh Eka earlier this week.
https://medium.com/lol-weekly-list-of-lit/lol-issue-24-e78f42db76a
['Arihant Verma']
2017-08-11 18:24:36.261000+00:00
['Poetry', 'Books', 'Literature', 'Stories', 'Lolissue']
“Anxiety isn’t good with statistics.”
Photo by Sean McAuliffe on Unsplash A stressed brain does not like nuance. One thing to learn about a stressed or anxious brain is that that particular brain (and person) is in a very “all or nothing” framework. At that moment, stress and anxiety tend to lead us to think in very “either/or” terms. There is not much gray between the “black and white” of an anxious brain. In practical terms, this means that decision-making involving multiple options gets more challenging. It may be hard to even see that there are options available. When asking “why” about a particular choice, it may be helpful to remember that anxiety may only see one or two: fight or flight. Photo by Alexander Krivitskiy on Unsplash Fear favors efficiency. Survival is key here. And it can be helpful to remember that this same “fight or flight” system that is robbing you of the ability to make complicated decisions is trying to help you live. The example I typically use is that if a tiger was running at you then you do not want to doddle while searching the internet to find out what a tiger would prefer to eat … besides you! You definitely don’t want to leisurely prepare a tasty meal for the tiger, perhaps with an appropriate wine pairing. By the time you do your research, decide on appetizers, and prepare the meal, YOU will have become the main course for the tiger. Fight or flight (after the pause of the freeze response) is helpful here. Our body and brain’s reaction to fear, stress, and anxiety, is there for a reason: our survival. Decisions about survival do not have time for committee meetings. Decisions about survival tend to go for the quickest, bluntest instrument to achieve the goal. Consider even the low-level threat of hunger and how our body craves the fattest, sweetest, most carbohydrate-laden treat. The custard-filled doughnut may not be healthy, but it satiates our hunger quickly and efficiently. Part of why an anxious brain is not good with probability is that it focuses on the “what if this happens” scenarios; that sort of thinking may help us survive the “what if”. But by imagining that “what if” event, we really feel that it is possible; we feel it is real in our brains and bodies … which is in large part why our bodies react the way that they do to both acute and chronic stressors. Photo by Robin Benzrihem on Unsplash So what do we do? Stop. Breathe deeply. Breathe again. In this present moment, the “x” or “what if” is not happening. While the stressed/anxious brain is reacting as if there is a crisis, we need to communicate to our body that we are not actually under threat. Instead of the short, shallow breathing that fleeing from danger requires, we communicate with our bodies using the same slow, deep breathing that our body would use at rest. Because our body is reacting to the stressor, we need to communicate with our body first. The areas of our brain that are about fear are reacting; once we help that part of us find a more calm space then the more thoughtful part of our brains can take over. After we feel calmer, then our prefrontal cortex has a chance to look at how many people actually get bitten by a shark, or struck by lightning, or what is our chance of getting hit by a meteor. Then we can actually weigh our chances … and our choices. Remember: “Anxiety isn’t good with statistics”. And for sure a stressed brain focused on survival is worrying more about the chance of a Sharknado than about winning the lottery.
https://medium.com/whenanxietystrikes/anxiety-isnt-good-with-statistics-a1efb06b2436
['Jason B. Hobbs Lcsw']
2020-03-28 19:03:36.116000+00:00
['Anxiety', 'Self', 'Mental Health', 'Mental Illness', 'Therapy']
Please do learn to code
This morning I woke up to dozens of messages from students who had read an article titled “Please Don’t Learn to Code.” At first, I assumed Jeff Atwood’s 2012 article had spontaneously reappeared on Reddit. But no — this was a brand new Tech Crunch article of the same name, which echoed Atwood’s assertion that encouraging everyone to learn programming is like encouraging everyone to learn plumbing. Here’s why programming — unlike plumbing — is an important skill that everyone should learn: programming is how humans talk to machines. John McCarthy, the computer scientist who invented the Lisp language and coined the term “Artificial Intelligence” “Everyone needs computer programming. It will be the way we speak to the servants.” — John McCarthy People have been managing other people for thousands of years. The ancient Romans built their empire on the backs of slave labor. The British built their empire by imposing their will on the residents of dozens of colonies. And America became the economic force it is today largely thanks to cheap immigrant labor during the industrial revolution. But here in the 21st century, we no longer get work done by managing people who tend grain fields, import spices from Asian colonies, or install railroads across the Rocky Mountains. Now we get work done by managing machines. The nature of work has fundamentally changed. Today, it is no longer humans who do most of the work — it’s machines. Think about it — every day, humans make 3.5 billion Google searches. It’s machines that carry out that work — not humans. Think about how many man-hours it would take for humans to conduct even a single Google search manually. Can you imagine a bunch of PhD’s phoning each other around the clock deliberating about which documents they should recommend to whom? This work is only even remotely practical if it’s done by machines. Trip Advisor helps you decide where to go for vacation. Expedia helps you book the right flight to get there. Google Maps directs you to the airport. All of these services are within the reach of average consumers thanks to the hard work of machines. But machines are only able to do all this work because humans tell them exactly what to do. And the only way for humans to do this is by writing software. That’s right — computers are not nearly as smart as humans. For computers to succeed at the jobs we’ve assigned them, they need us humans to give them extremely clear instructions. That means coding. Coding isn’t some niche skill. It really is “the new literacy.” It’s the essential 21st century skill that every ambitious person needs to learn if they want to succeed. Don’t believe me? Just look at the legal profession. Software is turning it inside out, and causing mass unemployment for the lawyers who can’t code. The same is increasingly true for managers, marketers, accountants, doctors, and pretty much every white-collar job in between. And that’s to say nothing of the 3 million Americans whose jobs primarily involve driving a car, and billions of people world-wide who do other repetitive tasks that will soon be handled more inexpensively and effectively by machines. I’m hopeful that these displaced workers will be able to retrain for new jobs through inexpensive education programs like Starbuck’s partnership with Arizona State University — where all of its employees get a free college education (hopefully picking up relevant new skills like software development) — or government-sponsored equivalents. 
At the very least, they’ll have access to a free math and computer science education through initiatives like EdX, and a free programming education through Free Code Camp. Program or be programmed. We have a concept in software development called “the technology steamroller”. Stewart Brand, founder of the Whole Earth Catalog and the Long Now Foundation “Once a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road.” — Stewart Brand You can’t stop technology. You can only adapt to it. Once a history-shaping new technology comes out of the genie bottle, you can’t put it back. This was true for airplanes, antibiotics, and nuclear warheads. And it’s true for microprocessors, the internet, and machine learning. Those who adapt to these permanent waves of changes flourish. Those who shrug them off — or fail to even realize they exist — asymptotically approach irrelevance. Coding is the new literacy. Like reading was in the 12th century, writing was in the 16th century, arithmetic was in the 18th century, and driving a car was in the 20th century. And just like how not everyone who learns to write will go on to become a professional writer — nor everyone who learns arithmetic will go on to become a professional mathematician — not everyone who learns to code will go on to become a software developer. But all people who learn these things will be immensely better off as a result of their efforts. Think of your ability to read the labels on your prescription drugs, or your ability to count the money that a banker hands you when you make a withdrawal. There’s something equally important that you can do if you can code: take tedious parts of your daily life and automate them. And some people take this basic skill much further, as a way to amass great personal wealth, or to make the world a better place. Ships are meant for sailing Rear Admiral Grace Hopper invented the first compiler and pioneered high-level programming languages. A ship in port is safe, but that is not what ships are for. Sail out to sea and do new things. — Grace Hopper Computers, at their core, are number crunching machines. Human brains, at their core, are learning machines. It may seem like you’ll never be able to code. It may seem like you’re just not wired for it. And there will probably be a parade of people behind you who’ve tried to learn to code, given up, and are eager to commiserate with you. And these people will read articles like the Tech Crunch article, and share them on Facebook — like 14,000 people did yesterday — further discouraging the millions of people around the world who are working hard to achieve this new literacy. But coding detractors are probably incorrect about their inability to learn coding. There’s a growing sentiment among educators and cognitive scientists that any able-minded person can learn to code — just like you can learn to read, write, do arithmetic, or drive a car. A Khan Academy video about the value of maintaining a growth mindset — essentially, believing in yourself. Sure, people with dyslexia have a harder time reading, people with dyscalculia have a harder time doing math, and both have a harder time programming. But even these are limitations that can be overcome, and programmers overcome limitations every day. So heed Grace Hopper’s advice. Sail out to sea and learn new things. Put that learning machine in your head to use. Learn to code. Learn to talk to machines. And flourish. I only write about programming and technology. 
If you follow me on Twitter I won’t waste your time. 👍
https://medium.com/free-code-camp/please-do-learn-to-code-233597dd141c
['Quincy Larson']
2017-12-27 22:30:13.945000+00:00
['Technology', 'Design', 'Education', 'Programming', 'Social Media']
The Art of Hansei — How The Japanese Philosophy of Self-Reflection Can Improve Your Life
The Art of Hansei — How The Japanese Philosophy of Self-Reflection Can Improve Your Life A Japanese philosophy that acknowledges how we fall short of our full potential Self-awareness is one of the best ways to improve or make progress — it’s a must for anyone interested in growing, personally and professionally. Hansei is a Japanese word meaning “self-reflection”, or “introspection”. It’s a fundamental part of Japanese culture. It is both an intellectual and emotional introspection. It’s similar to the German proverb Selbsterkenntnis ist der erste Schritt zur Besserung, where the closest translation to English is “Insight into oneself is the first step to improvement”. “Hansei also incorporates the concept of greeting success with modesty and humility. To stop Hansei means to stop learning. With hansei, one never becomes convinced of one’s own superiority, and feels that there is always more room, or need, for further improvement.” Most people don’t engage in self-reflection often enough — they don’t consider it a tool worthy enough to be used consistently to improve their lives. Generally, when people fall short of their expectations, they don’t make enough time to think deeply about what went wrong and what can be changed or done better next time. Whatever the outcome of your goals (success or failure), there’s always a gap that demands a better understanding of the processes that led to the outcome. “Hansei is used not only when things fail but also when they succeed. Anything can be made better and more efficient,” writes Kaki Okumura. “Hansei is meant to question our assumptions about the kind of control we have over our lives and grants us the power to be better,” Kaki says. Self-reflection in life and career can help you learn more about yourself, your processes, systems and practices — if you want to get better, that is. Hansei implies that nobody and nothing is perfect and it’s considered a valuable tool for growth in Japan. This philosophy is practiced at almost all levels of Japanese society — it’s a vital part of learning and improving. Hansei is taught in schools and has been traditionally considered a fundamental skill that promotes a child’s social and personal development. “In Japan, when someone makes a mistake, they will profusely apologize, take responsibility, and propose a solution for how they can prevent the same mistake from happening in the future,” writes Tim McMahon. Hansei is practiced regularly, as a discipline, irrespective of an outcome. Hansei in business is a rigorous review process. At Toyota, hansei is a prerequisite for learning: “Even if a task is completed successfully, Toyota recognises the need for a hansei-kai, or reflection meeting; a process that helps to identify failures experienced along the way and create clear plans for future efforts. An inability to identify issues is usually seen as an indication that you did not stretch to meet or exceed expectations, that you were not sufficiently critical or objective in your analysis, or that you lack modesty and humility. Within the process, no problem is itself a problem.” Hansei suggests that we all have flaws and weaknesses; denying this puts our ability to continuously improve at a disadvantage. Hansei is accepting and exploring our uncomfortable personal truths, admitting our mistakes with the intent to improve. It’s deeply reflective and deeply human. 
How to practice Hansei “All of humanity’s problems stem from man being unable to sit quietly in a room alone.” — Blaise Pascal To practice Hansei or make the most of it, schedule it on your calendar. Make time for it. You can set aside about 5 to 15 minutes every week if you can’t do it every day. Hansei can be a daily or weekly journaling experience. You can experiment with weekly weekend hansei in an effort to align your efforts and actions with your goals and values, or to correct course more frequently. Daily practice of hansei can help you assume responsibility for your actions and keep you open to improvement. It can also help you introspect about “what went right” along with “what went wrong / what could be improved”. The process of writing things down requires an honest acknowledgement of your uncomfortable truths — your personal struggles, the mistakes you keep making and the path you should be taking next time. Keep it short and from the heart. Hansei is meant to help you get your thoughts and actions better aligned by making them more visible to yourself. Reflecting on your actions, habits and emotional responses naturally leads to better self-control, more effective planning, and better use of your limited time and energy. Use the art of hansei to surprise yourself and create more opportunities for personal growth. Self-reflection & feedback are a hugely important part of personal progress & development.
https://medium.com/personal-growth/the-art-of-hansei-how-the-japanese-philosophy-of-self-reflection-can-improve-your-life-a886e11aeb96
['Thomas Oppong']
2020-12-08 14:51:51.070000+00:00
['Self Improvement', 'Self', 'Personal Growth', 'Psychology', 'Philosophy']
10 Things You Didn’t Know About Pandas
10 Things You Didn’t Know About Pandas Until now… Photo by Jon Tyson on Unsplash Pandas is the definitive library for performing data analysis with Python. It was originally developed by a company called AQR Capital Management but was open-sourced for general use in 2009. It rapidly became the go-to tool for data analysis for Python users and now has a huge array of features for data extraction, manipulation, visualisation and analysis. Pandas has many useful methods and functions. Here are ten things you might not know about the library.
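The ten items themselves aren’t reproduced in this excerpt; purely as a generic illustration of the kind of convenience methods pandas offers (my own made-up example, not the article’s), a quick sketch might look like this:

import pandas as pd

# A small, invented DataFrame just to show a few one-line conveniences.
df = pd.DataFrame({
    "city": ["Leeds", "York", "Leeds", "Hull"],
    "sales": [120, 98, 143, 77],
})

print(df.describe())                      # summary statistics for the numeric columns
print(df.groupby("city")["sales"].sum())  # total sales per city
print(df.sample(2, random_state=0))       # a reproducible random sample of rows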
https://towardsdatascience.com/10-things-you-didnt-know-about-pandas-d20f06d1bf6b
['Rebecca Vickery']
2020-09-06 20:44:46.852000+00:00
['Technology', 'Artificial Intelligence', 'Education', 'Data Science', 'Programming']
Login and Signup with Java and Spring Boot
Let’s create an authentication API, i.e. signup and login, with Java and Spring Boot. Introduction These days it’s easier than ever to make a proper application that can do what you need it to do. There are thousands of application development frameworks out there, in whatever programming language you are currently using. Spring is one of those frameworks that you can use to develop your dream application, website or server. Spring is a popular application development framework, developed for the enterprise edition of the Java programming language. According to the Spring website: “Spring makes programming Java quicker, easier, and safer for everybody. Spring’s focus on speed, simplicity, and productivity has made it the world’s most popular Java framework.” MySQL is an open-source relational database management system. It is the database that we will be using to store our registered users. Swagger is an open-source project used to describe and document our API. The side effect of such a big and popular framework is that it can sometimes be daunting to approach it and start learning how it works. Usually it helps if you have some idea of what you want to do with the framework, but following a step-by-step guide is just as useful. So, today we are gonna build a very basic User Authentication API, which will allow the user to register new users and also log in. This will by no means be a project from which you can build the back-end of your new business or personal project, but it will give you a foot in the door of the Spring framework. Requirements To start off you will need a few things installed on your computer: You will need to have a Java JDK installed on your system. You can find the instructions for installing the JDK here. You will need an Integrated Development Environment installed. Personally I would recommend IntelliJ IDEA Community Edition. You will need a terminal and the Curl CLI. If you are on MacOS or Linux, then you should be covered. If you are on Windows then you can either try installing Curl or you can use the Windows Subsystem for Linux; this will just allow you to use the Linux terminal on Windows. Lastly, we need to install MySQL. You can follow the steps that they give on their website. Setup Once you have your development environment up and running you can head over to the Spring Initializr page. This site is used to create the basic startup package for your application so that you have to do less to achieve more. If you wanna follow along precisely then you should set up your project as follows: Project: Maven Project. Language: Java. Spring Boot: 2.4.0. Group: (Left empty). Artifact: tutorial. Name: tutorial. Description: Basic user authentication application made using Spring and Spring Boot. Package name: .tutorial. Packaging: Jar. Java: 8. Dependencies: Spring Web, MySQL Driver, Spring Data JPA. The dependencies aren’t as important because I will give you a list of the dependencies that we will use, so as to ensure that we are both on the same page with the project. Once you have everything selected you can click on “Generate”. This will create a zip file that you should download and unzip into your development folder. Import the project into your IDE of choice. If you are using IntelliJ the steps for importing the project are as follows: Open IntelliJ. Click on “Open or Import”. Navigate to the file that you just unzipped. Click on it once, so as to highlight it, then click on “Okay”. 
The project will be imported into your IDE and it should start off by downloading the dependencies and indexing your files so that it will know how to set up your project. Feel free to explore around, look in all the folders and open up a few files, just be sure not to accidentally change any files, as that might lead to your project not being able to compile. Open up the file called “TutorialApplication”. This is your main file. When you run the program it will start here before running the other files. Take a moment to just look at the file and try to see if you can understand what is going on there. Don’t forget to also realize that you have technically created your first Spring application. Before we can run the program we have to edit some files first. So go into “src/main/resources”, and double click on “application.properties”. This is where your application properties are kept, as the name indicates. Type this into the file:

spring.datasource.url = jdbc:mysql://localhost:3306/users
spring.datasource.username = USERNAME
spring.datasource.password = PASSWORD
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.hibernate.ddl-auto = update
springdoc.api-docs.path=/api-docs

So this block of code sets the path to the database to “jdbc:mysql://localhost:3306/users” (the database is hosted on the computer at port 3306, and we are specifically interested in the “users” database). We set the username; you will have to use the username you set up, usually it is “root”. Next we set the password for our database; you will put your password here. In the next two lines we just say how the Spring framework should interact with the database and that we wanna be able to update the database when we work with it. The last line is a lot less important, it just points to the new path of our API documentation. See, that wasn’t so bad, although it wasn’t very interesting either. You can now close that file, we won’t be touching on it again in this tutorial. Next open the “pom.xml” file in your main folder. This is another file that handles your project setup. It is also where we state what dependencies we are gonna use. I am not gonna explain what each dependency does, since we’ll be here all day if we did that. Find the “<dependencies></dependencies>” block, it should be between line 21 and 41. Once you have found it you can replace it with this block of code:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <scope>runtime</scope>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>javax.validation</groupId>
        <artifactId>validation-api</artifactId>
        <version>2.0.1.Final</version>
    </dependency>
    <dependency>
        <groupId>org.springdoc</groupId>
        <artifactId>springdoc-openapi-ui</artifactId>
        <version>1.2.32</version>
    </dependency>
</dependencies>

Once you have saved your project, an “m” with a reload icon on it will pop up. Click on it. 
It will download the dependencies and ensure that everything is valid. The last thing we need to do is create the "users" database. Open up your terminal and type in this command: mysql -u root -p It will ask you for your password, enter the password. You should now see something like this: Next type this command to show all your active databases: SHOW DATABASES; Remember to add the semi-colon on the end, otherwise it will just go on to the next line; I have made that mistake so many times now. You should now see a table with all your databases. Next, let's create the users database. Type this command: CREATE DATABASE users; This will create the new users database. That should be all we have to do to set up the database. You can now type exit to exit out of the MySQL terminal. Go back to your IDE and open up "TutorialApplication"; it's under "src/main/java/tutorial". Right click in the file and click on the green triangle, with "Run TutorialApplic…main()" next to it. The bottom terminal will open up and you will see a lot of text being printed out. You did it! You created the foundation on which you can build a better application. Take this time to appreciate the work you have put in to get the foundation done. Next we need to create the rest of the application, so let's get cracking. Setting up the files In the project tab on the left, right click on the "tutorial" package and then click on "new" and then "package". Name this new package "user". Right click on the newly created package, "user", and then click "new", and "Java Class". The first class will be called "User": You should now have a new file, a Java class, under your user package, called User, with content similar to this: package tutorial.user; public class User { } We will also need another Java class, called "UserController", so go through the same steps that you went through before to create the Java class. Next we will need to create an Interface, called "UserRepository". The steps are similar to creating the class, but instead of clicking on Class you will click on Interface: The final piece of our puzzle is an Enum, called "Status". So, as before, create the Enum under the "user" package. We will start by editing the "User" class. I will put the code first and then go through it afterwards, that way you can copy and paste it in your IDE and make notes via comments.
Adding code to the project tutorial/src/main/java/tutorial/user/User.java: import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.Id; import javax.persistence.Table; import javax.validation.constraints.NotBlank; import java.util.Objects; @Entity @Table(name = "users") public class User { private @Id @GeneratedValue long id; private @NotBlank String username; private @NotBlank String password; private @NotBlank boolean loggedIn; public User() { } public User(@NotBlank String username, @NotBlank String password) { this.username = username; this.password = password; this.loggedIn = false; } public long getId() { return id; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public String getPassword() { return password; } public void setPassword(String password) { this.password = password; } public boolean isLoggedIn() { return loggedIn; } public void setLoggedIn(boolean loggedIn) { this.loggedIn = loggedIn; } @Override public boolean equals(Object o) { if (this == o) return true; if (!(o instanceof User)) return false; User user = (User) o; return Objects.equals(username, user.username) && Objects.equals(password, user.password); } @Override public int hashCode() { return Objects.hash(id, username, password, loggedIn); } @Override public String toString() { return "User{" + "id=" + id + ", username='" + username + '\'' + ", password='" + password + '\'' + ", loggedIn=" + loggedIn + '}'; } } So going from the top: @Entity: This annotation allows our class to be serialized and deserialized into and from JSON. It also allows us to create a table in the database we created earlier. @Table(name = “users”): This annotation tells the program to call the table “users”. The variables: Each variable is representative to a field in our database. So in our database table will contain records. Each record will have a field of id (long), a field of username (String), a field of password (String), and a field of loggedIn (boolean). @Id: This sets the id variable as the id field in the database. Databases records work with id’s. @GeneratedValue: This tells the program to generate the id value when a new record is added, that way we won’t have to worry about accidentally overriding records in our database. @NotBlank: This ensures that we won’t be able to add a record to the database that doesn’t have a name, password or value for loggedIn. User(@NotBlank String username, @NotBlank String password): This is a constructor. Its a function that will be ran when this class object is created. It takes a username and password, and then sets the username and the password to the ones that were given, it also sets the value for loggedIn as false so that the user isn’t automatically logged in when their profile has been added. Getters and Setters: These are used to set and return the various variables in our class. The reason for these have more to do with Java and the prefer method for data handling and less to do with the Spring framework. equals(Object o): This will be used later when we want to compare an object passed to the program with an object from our database. hashCode(): This function is used to generate a hash value of our object. toString(): This function, has the name might suggest, is used to return some information about our class object in the form of a String. This is especially useful during debugging. 
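Before moving on, it can help to see those last methods in action. This little throwaway snippet is not part of the project; it's just something you could drop next to User.java (note the package line) and run once to convince yourself what toString() and equals() return. The credentials are made up:

package tutorial.user;

public class UserScratch {
    public static void main(String[] args) {
        User a = new User("alex", "secret");
        User b = new User("alex", "secret");
        System.out.println(a);                  // println calls toString() for us
        System.out.println(a.equals(b));        // true: same username and password
        System.out.println(a.equals(new User("sam", "other"))); // false
    }
}

Delete the file again afterwards so it doesn't clutter the project.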
I used it a lot when I initially created this program, as I had issues comparing objects to one another. The next file we will work on is the UserRepository file.

tutorial/src/main/java/tutorial/user/UserRepository.java:

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface UserRepository extends JpaRepository<User, Long> {
}

Now I will admit this file looks a bit empty, especially in comparison to the User.java file, but don't let the simplicity fool you, it is a very powerful file. The framework does a lot of work under the hood so you don't necessarily see it, but this file will allow us to interface with our database. @Repository: This tells Spring that this is the interface to use for our database management functions. JpaRepository<User, Long>: This links the interface to our database table. We tell it to look at our User table, and we tell it that the value of our id field is Long. Next up is our Status enum:

tutorial/src/main/java/tutorial/user/Status.java:

public enum Status {
    SUCCESS,
    USER_ALREADY_EXISTS,
    FAILURE
}

There isn't really anything interesting about this enum; it will be used as a way to give feedback to the user as to whether their current action was successful or whether it failed. Lastly is the UserController class:

tutorial/src/main/java/tutorial/user/UserController.java:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import javax.validation.Valid;
import java.util.List;

@RestController
public class UserController {

    @Autowired
    UserRepository userRepository;

    @PostMapping("/users/register")
    public Status registerUser(@Valid @RequestBody User newUser) {
        List<User> users = userRepository.findAll();
        System.out.println("New user: " + newUser.toString());
        for (User user : users) {
            System.out.println("Registered user: " + user.toString());
            if (user.equals(newUser)) {
                System.out.println("User Already exists!");
                return Status.USER_ALREADY_EXISTS;
            }
        }
        userRepository.save(newUser);
        return Status.SUCCESS;
    }

    @PostMapping("/users/login")
    public Status loginUser(@Valid @RequestBody User user) {
        List<User> users = userRepository.findAll();
        for (User other : users) {
            if (other.equals(user)) {
                // update the stored record, not the incoming request body,
                // so we don't accidentally insert a duplicate row
                other.setLoggedIn(true);
                userRepository.save(other);
                return Status.SUCCESS;
            }
        }
        return Status.FAILURE;
    }

    @PostMapping("/users/logout")
    public Status logUserOut(@Valid @RequestBody User user) {
        List<User> users = userRepository.findAll();
        for (User other : users) {
            if (other.equals(user)) {
                other.setLoggedIn(false);
                userRepository.save(other);
                return Status.SUCCESS;
            }
        }
        return Status.FAILURE;
    }

    @DeleteMapping("/users/all")
    public Status deleteUsers() {
        userRepository.deleteAll();
        return Status.SUCCESS;
    }
}

This is the business class of your program. Here is where we handle the requests sent to our program. @RestController: This tells Spring that this class will be used to control the functionality of our API, and any requests sent to our program. @Autowired: This just handles the dependency injection for our UserRepository so that we won't have to set up a constructor. @PostMapping("/users/register"): This tells Spring that whenever our program receives a Post Request to /users/register, the registerUser function should be called, and it then passes the received data to the registerUser function.
registerUser(@Valid @RequestBody User newUser): This function requires a valid JSON object similar to our User class; that way we will be sure that the object we receive is usable in our program. The function starts by creating a list of users in our database, called users. Note the part "userRepository.findAll()": this queries our database and returns all the users we have currently saved. The function then loops over all the users in our database and compares each one to the user we just got; it does this to ensure that the user isn't already part of our database. If it finds that the user is already in our database, it returns Status.USER_ALREADY_EXISTS. Otherwise it will add the new user into our database and return a status of SUCCESS. loginUser(@Valid @RequestBody User user): Same as before, this function receives a user object and then compares it against the users in our database. If it finds that this user is in our database, it will set its loggedIn variable to true, to indicate that our user has just logged in. If the user was successfully logged in it returns a status of SUCCESS. If it fails it returns a status of FAILURE. logUserOut(@Valid @RequestBody User user): This function is similar to loginUser, except this time we set the user's loggedIn variable to false to indicate that the user isn't currently logged in. If the logout was successful we return a status of SUCCESS, otherwise we return a status of FAILURE. @DeleteMapping("/users/all"): This tells Spring to call the following function whenever a Delete Request has been issued to /users/all. deleteUsers(): This function simply deletes all the users in our database (by calling userRepository.deleteAll()). It is a useful function to have during testing. And that is it for our program; you should now be able to run the program and interact with it through your terminal. Testing our program Let's first test whether or not we can register a new user into our database. Open your terminal and paste this bit of code into your terminal (the curl commands from the original screenshots are reconstructed just after this walkthrough): This command sends a POST request to our program with a JSON payload. The JSON payload contains a username and a password; you might've noticed that in our User class constructor we specifically requested a username and password. We specifically send this Post Request to http://localhost:8080/users/register. Our program is running on localhost on port 8080, and we have a PostMapping listening for when data is sent to "/users/register". If we run this in our terminal (whilst our application is running in the background) we'll see something like this: Notice that we received back a String with the content of "SUCCESS", meaning our application received the new user and added them to our database. What happens when we run that same command again? This time we received a message of "USER_ALREADY_EXISTS", so our program made sure not to add multiples of the same user to our database. Next, let's see if we can log our new user in; you can run this command in your terminal: Same as before, it sends a JSON object with our user credentials to "/users/login", the part of our program that specifically handles user login. This is what it looks like when we run the command: We get back a message of "SUCCESS", so now our user is logged in. But what would've happened if we sent the wrong user credentials? Let's test that by running this command: We never registered the user, so that user shouldn't be in our database and thus shouldn't be able to log into our application.
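The screenshots of the exact commands don't come through in this text version, but based on the endpoints we defined above they would look something like the following. The username and password are just made-up values, so use whatever you like:

# 1. Register a new user
curl -X POST -H "Content-Type: application/json" \
  -d '{"username": "alex", "password": "secret"}' \
  http://localhost:8080/users/register

# 2. Log that user in
curl -X POST -H "Content-Type: application/json" \
  -d '{"username": "alex", "password": "secret"}' \
  http://localhost:8080/users/login

# 3. Try to log in with credentials we never registered
curl -X POST -H "Content-Type: application/json" \
  -d '{"username": "nobody", "password": "wrong"}' \
  http://localhost:8080/users/login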
This is what it looks like: As seen here, our program sends back a message of "FAILURE", indicating that it was able to stop an unregistered user from logging in. That's it, that's our entire program. We were able to create a user registration and authentication system using Java and Spring. If you want to clean up after yourself, you can clear the database with a single delete request (an example command is sketched at the very end of this post). Cleanup And if you also want to delete the users database from your system, then you can do this. Open your terminal and type mysql -u root -p and then enter your password. You should now be back in your MySQL terminal, and if you want to delete the "users" database you can run this command: DROP DATABASE users; One last thing before we end this tutorial: you can look at your API setup if you open a browser tab and go to http://localhost:8080/api-docs. This will open up some JSON about your API. The complete code for this project can be found at bitbucket.
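The cleanup command itself was also a screenshot in the original post; given the @DeleteMapping("/users/all") we wrote in UserController, it would be something like this:

curl -X DELETE http://localhost:8080/users/all

And if you'd rather check the API documentation from the terminal instead of the browser:

curl http://localhost:8080/api-docs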
https://medium.com/webtutsplus/a-simple-user-authentication-api-made-with-spring-boot-4a7135ff1eca
['Nil Madhab']
2020-12-10 21:18:08.581000+00:00
['Web Development', 'Spring Boot', 'Java', 'Backend']
Why I Removed All My Stories From Behind the Paywall
When I decided to write on Medium, I had two goals. To inspire people and to earn enough income to continue living a Freedom Lifestyle. For almost twelve years before I started writing on Medium, I twisted balloons and gave them away. I learned that when I did that, others gave away money. It worked for me. I focused on making people happy, and the Universe provided enough for me to live freely and happily. I define the Freedom lifestyle as doing what you need to do what you want. I have been a nomad, living in my Mobile Domicile since 2011. I have visited all of the Lower 48 United States and done some amazing things, such as climbing the tallest peak in Texas, being vulnerable on a stage with Brene’ Brown, walking from Chicago to Santa Monica in six months, and speaking in front of a thousand people. I was enjoying the Freedom Lifestyle. Yet, something was missing. Twisting balloons did not set my soul on fire. Telling stories of my travels and living a Freedom Lifestyle did. I had tried blogging a few times without much success. (Read: made no money). When I learned about Medium’s Partner Program, where you get paid based on people reading your stories, it made perfect sense. It was like twisting balloons and giving them away. Some people gave away money. Little did I know when I started publishing on Medium that my balloon twisting days will be over in a few months. However, money kept flowing into my life, regardless. I have always believed that money is never an issue; the Universe kept validating it. As I approached my first anniversary of writing on Medium, the Universe gifted me with a sizable reward that had nothing to do with writing, yet enough to continue living the Freedom Lifestyle for some time to come. When I analyzed my ROI for my efforts on Medium, it turned out to be 90 cents per story. Hardly worth the effort. I wrote about it the other day and received quite a response. And, when I laid my head down on my pillow that evening, I knew what I needed to do. Just as I gave away balloons and the Universe took care of my needs, I will give away my writing. The Universe is already taking care of me. God pays the highest interest on your goodly loan, says the Qur’an. Who am I to argue with that. What I learned from that one story. Honestly, I was surprised and delighted by the response I received to that one story. We all know that stories about writing on Medium do well on Medium, so that wasn’t a surprise. What surprised me was the overwhelmingly abundant love I received from my readers, validating once again that I was o the right path. As I said earlier, I decided to publish on Medium to inspire others and make money. It had become evident that I wasn’t going to make money on Medium. Writing about writing on Medium and churning out listicles is not what I want to do, and I am not capable of writing 4,000 stories in two years, as Julia E Hubbel is. Even she is struggling to make a decent buck any more. A quick analysis of my story. Stories with most fans according to my stats. Screenshot by the author. In just four days, that story was amongst the top five in the categories that matter to us as writers. It has the most fans, and it is the second top earner. It has as many reads as my top-earning story that was triple-curated back in January. (My last curated story). Of the top five stories with most fans, three stories are about myself that I’ve been rotating at the bottom of my stories for months. 
The previous story with the most fans has been pinned on my profile as the featured story since June. The one at the bottom of the screenshot above is my highest earning story, and it was curated by Medium. For me, more important than the number of fans is the number of people who engaged through comments. That is the true measure of effectiveness and impact. When people take time to engage with you to let you know what they liked or didn't in your story, it makes it all worth it. I have 26 comments on the story. The closest one is the one that's been pinned to my profile for six months, with 20. Member reading time determines your income. My top earning story has made a grand total of $9.93 and has a member reading time of six hours and 45 minutes. This story has four hours and 55 minutes as of this writing and has earned about $7.00 (the only story I am leaving behind the paywall to see how it does in one week. Then I'll remove it). What does it all mean? Most of the comments I received validated that my writing was inspirational and making an impact on people. Of the two goals that I had when I started publishing on Medium, one was unlikely to happen. I wasn't going to make the kind of money I wanted. The Universe told me not to worry about that in a clear statement when I unexpectedly received the substantial gift. The other was already happening. I was inspiring people as I intended to do. The Universe is telling me to stay on course while I establish myself on the path that is more in tune with my passions. I may not write another 400 stories in the next twelve months, but it doesn't matter since I won't be depending on them to earn me anything significant. My needs are being taken care of, and I will be posting all my stories so that everyone can access them, not just those who are paying members. One thing that I will continue to write about soon is my reflections on the Route 66 trip from four years ago. Doing that has enabled me to see many of the blessings in my life that I had missed or taken for granted since I finished the walk. I am about 10 days behind on it, and I will attempt to make up for it. Where do I go from here? It has been a long and tedious process of removing 400+ stories from behind the paywall and making them public. As I went through the process, doubt and concerns kept creeping in. I was concerned about trying to explain my unconventional approach to friends and family, until I realized that it was all just in my head. As long as I believed in myself, I could move forward with confidence. The Universe is guiding me all along. I am in the middle of a spiritual metamorphosis. Imagine what a caterpillar must go through while in the cocoon. That is the equivalent of what we call the dark night of the soul. The outside world has no idea of the changes going on within while the caterpillar is being torn apart and reassembled into a beautiful butterfly, trusting the Universe all the while to do its work. As I go through the internal changes of shedding the old ideas and beliefs and internalizing the new thoughts and paradigms, the Universe is keeping me assured by showing me what I need to see to move forward with confidence. Conversations with friends, stories and essays on Medium, comments by readers on my old stories prompting me to read what I had written, and my nocturnal dreams all seem to encourage and reassure me and show me the vision of the butterfly that is ready to emerge at the end of the process. Yesterday, I awakened from a dream with a quote from Rumi ringing in my mind.
“That which you seek is seeking you.” As I reflected on my dream, I was looking for Eddie, and suddenly Eddie found me and offered me a job to start on the spot. It was a job as a waiter and he helped me with it, plus the compensation was higher than the norm in the field. I know that when I dream of being a waiter — and it happens often — it means to be of service. I am invited to serve. It's a recurring theme. The Eddie I saw in my dream is a recognized leader amongst the Toastmasters. I know what the dream is telling me. In our society, we are told to go after what we want, to chase our dreams. My dream is about a paradigm shift. Instead of going after it, allow it to come to you, because the dream wants you as much as you want the dream. That which you seek is seeking you. ~ Rumi Several years ago, a lady who is a professional fingertip reader (something akin to palmistry) told me that my life purpose would lead me to be a High Profile Innovator. I have always been an innovator; that's how I labeled myself as Mister Weirdo. Being high-profile is going to take some getting used to. And, oh, your life lesson is to master self-love, she said. I can see how the two are interconnected. “There are only two ways to look at the universe. Either everything is a miracle, or nothing is.” ~ Albert Einstein An Update: I was inspired to return when I learned about the short-form content on Medium. Using that, I am highlighting some of my older stories that have a personal meaning to me. As I do that, I am putting them behind the paywall, but sharing the friend's link. This way, it is still available for everyone to read, but I also get paid when a paying member reads it. As we say back home, I am killing the snake without breaking the stick. As always, thank you for reading and responding. I hope it was time well-spent. If you're so inclined, you could buy me some chai. Graphic created using Canva Here are a couple of related stories:
https://medium.com/narrative/why-i-removed-all-my-stories-from-behind-the-paywall-fb9cbd5ac497
['Rasheed Hooda']
2020-12-13 03:36:54.656000+00:00
['Innovation', 'Pay It Forward', 'Dreams And Visions', 'Miracles', 'Writing']
Jupyter + Pycharm + Virtual Environments
👨🏻‍💻Hacking together a quick shell script to get them to work together I love Pycharm for many reasons — interactive debugging, linting, autocompletions, integrated Git tools and super-easy environment management are some of them. One of the things I don’t love about (the free Community Edition of) Pycharm is that it doesn’t come with support for Jupyter Notebooks, which is an indispensable tool for any Data Science project. So what’s a Data Scientist to do when they want to use Notebooks to do EDA on a dataset but also have access to Pycharm’s full-featured development environment? You write a shell script that lets you launch a Jupyter Lab session from within your Pycharm project terminal window! Here it is To do a quick demo, I’m going to create a new project with a new virtual environment, and we’ll do a quick demo starting from a screen that looks something like this
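The gist with the actual script doesn't come through in this text version, so here is a minimal sketch of the kind of launcher I'm describing. It assumes your virtual environment lives in a venv/ folder at the project root and that you're happy installing JupyterLab into it:

#!/usr/bin/env bash
# lab.sh: hypothetical helper, run from the Pycharm terminal
set -e
source venv/bin/activate              # activate the project's virtual environment
pip install --quiet jupyterlab        # no-op if it's already installed
jupyter lab --notebook-dir "$(pwd)"   # launch Jupyter Lab rooted at the project

Make it executable with chmod +x lab.sh and run ./lab.sh from the built-in terminal; the notebooks it creates will use the same interpreter and packages as the rest of your Pycharm project.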
https://medium.com/analytics-vidhya/jupyter-pycharm-virtual-environments-9d151db7395d
['Adam Cohn']
2020-11-04 18:12:53.532000+00:00
['Pycharm', 'Jupyter', 'Python', 'Virtual Environment']
Blue’s a Miracle
Image by Free Nature Stock from StockSnap Bereft, But I claim what I own. I raise my hand to The work of gold: A pittance of Vast blue, A reminder that I’ve won. A sky-fallen diamond In a picture frame… An everlasting victory. I, wide-eyed and Erubescent Against its shimmer. I embrace the blue— Its color risen At cockcrow, And down with Twilight. The color greets me; And I greet the remnants of the day. It lingers, and it shines; And it is a dream; And it is my dream. —And no word can Take from me the Precious embellishment That belongs in the Expression of my face, The photographs I take, The spartan life I choose to lead. Blue’s a miracle Misbegotten only by The greatest of us. In sorrow, we cry tears That meld with the ocean And submerge with the rest of the world.
https://medium.com/imperfect-words/blues-a-miracle-62d5adbdc844
['Virginia Roces']
2020-07-06 22:45:23.224000+00:00
['Free Verse', 'Poetry', 'Self-awareness', 'Sadness', 'Emotions']
How to Calculate Central Tendency and Asymmetry measures in Statistics and Python
Image by Author In this blog, I am going to talk about central tendency, asymmetry, and variability, hands-on with Python. If you missed my previous blog about Descriptive Statistics with Python, please go to the link below: https://medium.com/analytics-vidhya/descriptive-statistics-with-python-part-1-9f34e48abc05 What is Central Tendency?: In statistics, a central tendency is a central or typical value for a probability distribution. Purpose of Central Tendency: It is a single value that represents an entire distribution of data. There are three main measures of central tendency: mean, median, and mode. Now we are going to look at these measures in detail. Mean: The mean is the most commonly used measure of central tendency. It is the simple average of the whole data set. The formula for calculating the mean of a data set is (x1 + x2 + x3 + ⋯ + xN−1 + xN) / N, where x1, x2, x3, ⋯, xN−1, xN are the data values and N is the total number of data points in the sample. Image by Author For population data the mean is denoted as μ, and for sample data as x̄ (x bar, the symbol shown in the image above). Note: The mean is easily affected by outliers. Mean Example: Let's work through the mean with a small example. Median: The median is the midpoint of the ordered dataset. It is not affected by outliers. In an ordered dataset, the median is the number at position (n+1)/2, where n is the number of observations. If this position is not a whole number, then the median is the simple average of the two numbers at the positions closest to the calculated value. Median Example: If we consider the above data set, let's find the median. Mode: In a data set, the mode is the value which occurs most often. A dataset can have 0 modes, 1 mode, or multiple modes. Normally, the mode is calculated by finding the value with the highest frequency. Mode Example: Python Coding for Central Tendency Measures Now it is time to measure asymmetry for a data set. For this, we need to understand the skewness of a data set. Skewness: Skewness is a measure of asymmetry. It indicates on which side of the distribution the observations are concentrated. Image by Author The above graph has right (positive) skewness. It means that the outliers are to the right (a long tail to the right). Left (negative) skewness means that the outliers are to the left. Normally, skewness is calculated using software. The formula for skewness is: Image by Author Python Coding for Skewness Conclusion: In my next blog we will learn about variability with Python coding.
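The code in the original post is embedded as images, so here is a small reconstruction of what the two coding sections could look like. The numbers are a made-up sample, and pandas plus scipy are just one common way to do it, not necessarily the exact libraries used in the screenshots:

import pandas as pd
from scipy import stats

data = pd.Series([2, 3, 3, 4, 5, 6, 7, 8, 30])  # 30 is a deliberate outlier

# Central tendency
print("Mean:", data.mean())            # pulled upward by the outlier
print("Median:", data.median())        # middle value, unaffected by the outlier
print("Mode:", data.mode().tolist())   # most frequent value(s)

# Asymmetry
print("Skewness (pandas):", data.skew())      # > 0 here, i.e. right-skewed
print("Skewness (scipy):", stats.skew(data))  # slightly different default formula

A positive result from skew() tells you the long tail is on the right, exactly as described above.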
https://medium.com/analytics-vidhya/how-to-calculate-central-tendency-and-asymmetry-measures-in-statistics-and-python-28b2bc10407d
['Arpita Ghosh']
2020-12-21 13:17:36.723000+00:00
['Statistics', 'Central Tendency', 'Python', 'Data Science', 'Machine Learning']
Getting Started With Jupyter Notebooks
Jupyter Notebooks logo Throughout some of my classes and work environments, I have found Jupyter notebooks to be particularly helpful in laying out my code and ideas in Python. I figured writing an article on how to get set up might bring new techniques and ideas into the world for someone, so here it is! Jupyter notebooks are easy and fun to use, and they look pretty nice as well. Setup is easy and quick, but honing your setup to have specific qualities makes up most of the time after the initial installation. Jupyter Notebook and its flexible interface extend the notebook beyond code to visualization, multimedia, collaboration, and more. In addition to running your code, it stores code and output, together with markdown notes, in an editable document called a notebook. When you save it, this is sent from your browser to the notebook server, which saves it on disk as a JSON file with a .ipynb extension. This article will be broken down into just a few short sections: Installation, Setup, Use. Let's just jump right in! Installation First things first, we need to download and install Jupyter to run our notebooks. To begin, you more than likely have pip installed, and that is what we will use to install Jupyter. First, let's upgrade pip to get the latest dependencies/plugins: pip3 install --upgrade pip Upgrading pip Next, we will install Jupyter in a similar way: pip3 install jupyter Jupyter pip install Setup Once finished with the pip install, we can start the Jupyter server: jupyter notebook Command prompt for Jupyter As shown above, Jupyter is run locally on your host. We can then navigate to any of the URLs shown when you start the server. Jupyter login verification By default, my server started with the token code enabled, so I had to copy the token from the command prompt where the server was started in order to get access. Once you enter it and click login, you are presented with a page like this: Jupyter dashboard Use Now that we have it set up and installed, let's use it! There are numerous use cases for a tool like Jupyter Notebooks. You can use a notebook much like you would any Python script for conventional Python work. You can also use pip to install things mid-notebook if something you need is not installed. pip install mid-notebook You can even do fun stuff like plotting, similar to MatLab or R, if that's your thing. These are just a few uses for your notebooks. You can also use them to take notes in class, save examples from tutorials, and much more. I use them day to day in my job too! I may update this page continually as I find more novel uses for the notebooks. Thanks for reading!
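P.S. Since plotting came up above, here is one small example of the kind of thing you can run in a cell. It assumes you have pip-installed matplotlib and numpy, either beforehand or mid-notebook as described:

# run this in a notebook cell
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x), label="sin(x)")
plt.plot(x, np.cos(x), label="cos(x)")
plt.legend()
plt.title("Quick plot inside Jupyter")
plt.show()

The figure renders right below the cell, which is a big part of what makes notebooks feel like MatLab or R for exploratory work.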
https://medium.com/swlh/getting-started-with-jupyter-notebooks-6ac0593fb73d
['Jacob Latonis']
2020-08-18 01:19:07.940000+00:00
['Python', 'Programming', 'Technology', 'Information Technology', 'How To']
How to Work from Home Amid Back-to-School
How to Work from Home Amid Back-to-School Narbis Follow Oct 30 · 7 min read Most of your child’s virtual learning issues have been ironed out. You’ve figured out what school supplies will help streamline virtual learning beyond three-ring binders and gel pens. And for better or for worse, your kitchen table has become the default household classroom. You’ve got your kids to keep it all together amid virtual learning. You, on the other hand, are still reeling with the stress of it all: constant alerts on your smartwatch about the latest developments with the pandemic or politics; Slack alerts from your coworkers; and the general unease of being cooped up inside. And despite your best efforts, you can’t stomp out every issue with your fifth-grader’s curriculum. After all, you haven’t done long division by hand since, well, you were in fifth grade. You’re not alone. Tired, Stressed, and Distracted are three of the Seven Dwarfs of Quarantine. Yet, unlike Snow White and her crew, you don’t need them following you around in your workday pursuits. Here are some tips to help you charge forward in your own workday. 1. Recognize your stressors. A calm mind is a productive one. We’re often the most productive when we’re happy and relaxed. Think about how much middle school would have been better if it weren’t for the bullying. Or how about that old job? The work itself was exactly what you wanted, but that passive-aggressive co-worker made your life just so miserable you set sail elsewhere. Stress is inevitable. Whether that stressor is something that comes up during the course of a workday, such as a misfired cc on an email, or something about life and the world around us, stress is going to weigh on our productivity, despite our best mindfulness efforts. The first part of managing your stress is to learn how to recognize what’s triggering it. The Mayo Clinic suggests making a list of situations that make you feel stressed, as well as topics and issues that have been weighing on your mind particularly recently. Points out the Mayo Clinic, “you’ll notice that some of your stressors are events that happen to you, while others seem to originate from within.” Figuring out exactly what’s bothering you won’t just help your mental health and productivity: it also can sustain your physical health. A Polish study published in journal Medical Science Monitor in November 2013 showed that people who had been diagnosed with depression were more likely to encounter stressful events by avoiding or denying what was happening. This, in turn, can lead to prolonged stress and anxiety, which limits productivity even more and can lead to physical problems such as insomnia, headaches, and a weakened immune system. The take home lesson here: don’t fight stress. Identifying your stressors will help your brain process the world around it, and learn to adapt to its new normal — and help expand your intellectual capacity in the process. Learning to cope with whatever crosses your path soon after it happens can help the event be less of a stress, meaning your workday will go much more smoothly and you’ll feel more refreshed and ready to spear that next deadline. 2. Negotiate a different schedule. Millions of people across the country are finding themselves in a similar predicament of having to juggle their workplace commitments with their child’s education. If you’re finding that it’s just too hard to hit deadlines while ensuring your children hit theirs, consider asking your company for adjustments to your work schedule. 
Perhaps you could negotiate a workday starting in the mid-afternoon and ending in the late evening, giving your household internet the literal bandwidth to navigate your child’s lessons in the morning and your virtual meetings. If your company is on board, remember that you might need to set boundaries and expectations with your co-workers, just as with your recent office mates, e.g. children. Try to schedule meetings at hours that work with those of your colleagues and try to instill a mutual understanding about availability during your new non-working hours. Just as emails that come in at 11 pm at night likely aren’t going to get an immediate response from a 9-to-5er, for those working under a new schedule, an email popping up during your child’s 10:30 math class might not get fielded right away. 3. Use tools that let you concentrate. If your child is older and self-sufficient at remote learning, but still noisy and distracting, there are gadgets to help you blissfully ignore ambient noise. Noise-cancelling headphones, for one, can drown out external noise and create a peaceful, quiet zone wherever you are. That car alarm or squealing toddler outside will be silenced. Your spouse’s fidgety knuckle-cracking or sniffling will fade off into the ether. Fewer noises means fewer distractions. Plus, noise-cancelling headphones offer telecommuters an added plus: many models can connect via Bluetooth to your smartphone and computer, meaning your work colleagues won’t get snippets of that Cardi B hit that your roommate is blasting while you’re making a client presentation. While you’re at it, grab another set for your children, so they don’t have to hear each other’s lessons, helping them zero in on what’s going on in their virtual classroom. Another tool new to the market can actually train your brain to concentrate while you’re performing routine tasks like going through email, reading an article, or Slacking with a colleague. Sensors on Narbis’ neurofeedback smartglasses track and analyze brainwave activity, then send that data through an algorithm developed by NASA to train pilots. If the system detects that the wearer is distracted, the lenses change tint, alerting the wearer that it’s time to focus. When the system detects that the wearer is paying attention, the lenses lighten back. Eventually, regular wearing of Narbis smartglasses can train the brain how to and what it feels like to focus. The glasses have a setting for “ peak performance,” which helps your brain learn how to attain focus and mental clarity. Business guru Tony Robbins has credited neurofeedback for enhancing his ability to multitask, allowing him to visualize two separate tasks simultaneously; for example, typing an email to one person while having a conversation with another person. 4. Don’t fight against interruptions. Embrace them. There will be times when your student just can’t help making noise. It could be their virtual band rehearsal. It could be their virtual physical education class. Or maybe it’s just their fingerpainting session. Whatever the case may be, those times might not be the most opportune time to sit and do work that requires long-term focus. Rather than gritting your teeth and letting yourself get stressed, look at it as an opportunity to take a break. Is your child doing yoga or a HIIT workout? Join in off camera! Physical exercise can help refresh your mind, helping you to be more productive later in the day. 
The extra-long virtual meet that requires them to actively participate and answer questions might be a good time to take a walk or catch up on errands such as grocery shopping, knocking something off your at-home to-do list. There’s scientific evidence to back up this line of thinking. A July 2009 study in the journal Organizational Behavior and Human Decision Processes showed that in order to perform well on a task that comes up by way of interruptions, people need to disengage fully from the previous one. Then, people will be more likely to nail that subsequent task. 5. Designate office hours. Just like your college professors had set times to meet students, bounce off questions, and discuss course material, you can set up specific periods of the day when you’re open to field questions from your own in-house scholars. Granted, the youngest students in your household might need softer boundaries. But for older elementary students on up, established hours for when you’re available for homework help has two main benefits: You know when to be on work and on teacher mode; and second, this sort of schedule can help foster more routine with your children — something that can also help them concentrate on their school work. A half hour after lunch and another half hour at the end of your child’s school day can dovetail perfectly with your child’s school day and the ebbs and flows of your usual 9-to-5: this is typically when your child would be out of the classroom and when you might find yourself in need of a break from your screen and of a cup of coffee. In addition, adhering to a set schedule can help make for a seamless transition between virtual and in-person learning, if or when your child’s school adopts such a schedule. During this era of quarantine, work-life balance is paramount. Under usual circumstances, your commute home would have formed some sort of boundary between home and office. When your office is within the home, the boundaries need to be mental. Alas, mental boundaries are often harder to keep in place than physical ones. Nevertheless, while we are all struggling to survive this new normal, caring for your own needs as a knowledge worker should be considered a necessity, not a luxury.
https://medium.com/age-of-awareness/how-to-work-from-home-amid-back-to-school-ad245c0839e9
[]
2020-11-03 12:08:43.558000+00:00
['Work From Home', 'Work Life Balance', 'Virtual Learning', 'Education', 'Productivity']
Reflections
“I hope you’re having a good time here, Richard” “Yes, ma’am. Thank you so much for having me”, said Richard trying to make his unruly hair behave against the strong wind. “But, please call me Richie” Fiona smiled gently, the wrinkles around her eyes assuring genuineness. Her graying hair was tied in a bun so she didn’t have to keep holding them down. Her hands were instead around herself clutching her shawl. The walk around the hill was terrifying as it was beautiful. It was a never ending struggle for anyone wanting to describe this hill. The breathtaking scenery of the ocean with dangerous jagged rocks at the bottom kissing the high tides while the clouds disappeared into the horizon. The hill itself was covered with lush green grass. Fiona had grown up here and knew her way around with her eyes closed. She remembered the time when she walked with her granddaughter across the same path. Now, she walked with the boy her granddaughter had lost her heart to. “Do you remember the large mirror in our living room?” Fiona asked looking at her feet and then up at Richie. “Oh yes, It must be very expensive. It looked a hundred years old but I must say, I have never seen a more beautiful mirror” Richie wasn’t exaggerating. It was indeed a beautiful mirror with a brass frame that had intricate carvings which was definitely a craftsman’s masterpiece. It was undeniably worth a fortune now. “A hundred and twelve now; and yes, it’s priceless. It’s been in the family for generations. My grand mother was only a teenager when she came into possession of that wonderful object.” She hugged herself tighter and continued walking. “There was a time when Lisa admired herself everyday in front of it when she used to visit here” She chuckled “She loved it and asked me all sorts of stories and made some of her own with her vivid imagination” Richie was quiet and smiling, listening to the old woman reminisce her past. “She had all these wonderful questions and theories. She was fascinated about it all. Reflections mainly. She once asked me: ‘Grandma, If mirrors show our image reversed and if they always told the truth; then, aren’t we all liars?’ ” She chuckled again waving her hand like how little Lisa did. They walked closer to the end of the cliff and turned a bit away and continued. “She was such a brilliant girl, thinking things through. Looking at things the way others couldn’t. That question had me thinking so much. Wouldn’t you agree ?” “Of course, It’s definitely something to think about” A slightly cold Richie said. “Everyone is a liar at some point in their life, if not all. Some lie to get out of trouble. Some for the fun of it. And others just to wreak havoc.” She looked at him and asked “Why else do you think man lies?” “Er… So he could have it all?” She turned to look at the ocean “Man is such a strange creature, he will lose it all in his pursuit of insatiable happiness” Richie was getting a little lost now, he had no idea the old woman would get philosophical on him. He looked and found himself slightly closer to the edge of the hill. He shuffled slightly away and hoped Fiona didn’t see him stumble. “Is it true that you love Lisa?” “Oh yes! She is such an amazing woman” She sighed and turned to look at him with stern grey eyes “She was such a brilliant girl and yet as always love blinded the truth. If only she remembered the mirror, she could have seen where your lips had been on your lonely nights” A gentle push was all it was. It was all it was but sufficient. 
He was too stunned to scream though his eyes reacted better, widening proportionally to the inches he kept losing. She walked back ever so slowly. Not turning to look nor to see where she headed. She didn’t need to. Water threatened to flow from her eyes but she knew it wouldn’t.
https://medium.com/fictionhub/reflections-b74700850d80
['Jeshanth K S']
2016-10-25 07:23:51.793000+00:00
['Short Story', 'Fiction', 'Short Fiction', 'Storytelling']
That’s a Great Idea: How to Help Introverts Be Heard in Meetings
“Our culture made a virtue of living only as extroverts. We discouraged the inner journey, the quest for a center. So we lost our center and have to find it again.” — Anaïs Nin One of the most popular TED talks of all time is by Susan Cain, whose book The Power of Introverts in a World That Can’t Stop Talking struck a less-than-silent chord with a population whose value we often overlook: introverts. Even though between one-third and half of the population is introverted, Western culture exhibits a bias toward extroversion. Gregariousness is often falsely conflated with productivity, adults encourage quieter kids to “come out of their shells,” and extroverts rate as smarter, better looking, and more interesting. Of course, this isn’t all or nothing; ambiverts exist, and depending on the context, the degree to which you feel introverted or extroverted may change. It’s a spectrum, but one we don’t tend to honor equally. Yet there’s much to learn from studying introverted behavior, particularly when it comes to designing meeting experiences. In contrast to more social, assertive extroverts, introverts work best with solitude, space, time, and quiet. Introverts often prefer to communicate through writing rather than talking. Because meetings are vocal externalizations of thought, they tend to favor extroverts. So, when the Teams team started working on a project on AI-powered conversation transcription, they brainstormed several remedies, including measuring speaker interruptions and providing behavioral data during meetings to all attendees so that they can learn from their behaviors. However, when our Ethics and Society team was approached about this project, we did what we encourage all product makers to do: consider how people will use the technology, determine its benefits, and then focus solutions on a community at risk of being excluded from those benefits. With this project, we focused on people who skew toward introversion. With the world now working and learning remotely, more product makers are focused on improving digital meetings than ever before. To honor our ethical imperative of creating inclusive digital environments, here are three takeaway ideas based on our research and design explorations. Ultimately, by drawing from and honoring introversion, we can help create intelligent meetings that benefit everyone. Idea #1: Create room for deep thinking and reflection Since introverts particularly benefit from knowing an upcoming meeting’s agenda, consider adding a ‘Set Agenda’ entry field as a default setting for new meeting invites, as shown in the image above. This also opens opportunities to incorporate agendas into the meeting itself, with a banner popping up when someone is nearing the end of a topic. Using intelligent transcriptions in meetings presents incredibly exciting possibilities. The ability to extend what we can pay attention to, or search across a collective memory of discussions, could dramatically expand our capacity to listen and learn from others. AI-powered transcript analysis could surface action items and spur inclusive meeting behaviors. But recording meeting conversations and transcribing them for reference and analysis could also have the opposite effect. For certain groups of people, particularly introverts, such technology could dissuade them from participating at all. To counteract this risk, we designed more opportunities for people to interact textually in meetings, transforming these transcriptions from a pristine record into an interactive tool. 
For fast-paced meetings that allow little time for the type of reflection introverts prefer, we explored ways for people to offload their thoughts to the AI-powered conversation transcript with pins, or highlights, to refer back to at a later time, or to train the service to flag portions of a transcript that cover specific topics, themes or people. Another design possibility leans into the preference of introverts to reflect and comment in writing via threaded conversations within the real-time transcript. For example, a person could mark a point in the transcript and type a reply off to the side. This would allow a more introverted meeting participant to connect their comments to the original context of the discussion that sparked their ideas, and make their ideas known to the other participants at their own pace. These ideas help meeting participants stay engaged in the conversation by making their voices heard in a way that works for them and allowing for further reflection. While introverts would benefit most from these features, they also just lead to more useful, thoughtful meetings all around. Idea #2: Create space for shared understanding When designing transcript functionality, consider replies, pins, and highlights. Replies allow for contextual comments without interrupting a speaker, while pins and highlights support further reflection or asynchronous comments. Then, mitigate chilling effects by using explicit language to create awareness about mistakes AI can make. For all the potential benefits of AI-powered transcriptions, they can also make it clear that participants are being watched. Our research showed how this can create chilling effects — that is, a change in behavior when a person perceives they are being watched or judged by others. Transparency is key to keeping AI-transcribed meetings inclusive, especially for introverts. In a survey of 366 workplace employees, we found that introverts, more than extroverts, would be less comfortable, less productive, and more hesitant questioning others in a meeting using AI-powered conversation transcription. They would also be more likely to act differently, speak less, and have privacy concerns in these types of meetings. We also investigated people’s responses to 2 types of AI-powered conversation transcription scenarios — one that attributed speakers and used behavioral analytics, and one that didn’t. We found using attribution and analytics significantly increased chilling effects, especially among the introverts. Why? One, human memory is imperfect. We are forgetful, and our forgetfulness forges a social contract between people of plausible deniability about what did and did not happen. Conversation transcriptions strip away the benefit of plausible deniability. Two, while people might intellectually understand that the technology is fallible — that the transcriptions will contain inaccuracies and the attributions sometimes will be incorrect — people will likely treat AI-generated transcriptions as objective and therefore more accurate because of the natural tendency of people to over-rely on automation. Fortunately, designs that provide transparent information on the technology’s capabilities and limitations can thaw the chill, protecting authentic expression. One solution could be unifying all anonymous meeting participants as a single “guest” in the transcript, helping safeguard plausible deniability and reduce fear of being outed. 
Additionally, limiting access to digital meeting records to meeting attendees could also help maintain the social contract between meeting participants and alleviate concerns about privacy. Designs that enable sharing anonymized meeting transcripts, or transcript highlights, with people who were unable to attend the meeting provide a form of post-meeting hallway discussion that is less likely to spontaneously occur among introverts. However, transcript editing should be deprioritized to avoid burdening people with additional work. Idea #3: Leverage neurology to improve engagement It’s important to let customers self-design AI as much as possible. This mock UI shows a tiered approach where the customer decides how much an AI system can monitor interruptions during meetings. Introverts and extroverts also exhibit neurological differences. Understanding these can help design solutions that help a broader range of people participate during meetings. In our literature review, we found research showing that the blood flow pathways in introverts’ brains differ from the pathways in extroverts’ brains. For example, one study shows that in the brains of introverts, blood flows to parts that attend to planning and problem solving, whereas in extroverts it flows to parts of the brain focused on external, sensory processing. What’s more, introverts and extroverts differ in their response to dopamine, the neurotransmitter responsible for reward-seeking behavior and pleasure. Dopamine motivates extroverts more, but they are less sensitive to it, and thus need more stimulation. Meanwhile, introverts can become overwhelmed by elevated activity or rapidly shifting topics. The ability to focus and do deep work is especially important for introverts, and this design is an example of how we can remind people to consider their timing when messaging others. By having a distinct button label indicating that someone is in Focus Mode, the sender can reconsider when or how to send their note. These findings can inform small but impactful changes to how we plan meetings through designs that encourage planning and reflection. This includes embedding prompts that nudge meeting organizers to set an agenda and share meeting materials in advance, enabling all attendees to show up ready to participate and keep the meeting on track. Reimagining how we give and receive meeting feedback also takes cues from the introverted experience. At the end of a meeting, designs that solicit anonymous feedback from attendees can help the meeting organizer optimize and fine-tune agendas for future meetings. Time and space to provide feedback would enable introverts to comfortably communicate their experience and offer improvements and make their ideas heard. Introversion in Microsoft Teams Some of these features are on the horizon for Teams experiences. For example, integrating pre-read materials into the meetings experience, and increasing the visibility of chat and other non-verbal forms of communication (e.g., emoticon reactions and inline document collaboration). Designing Intelligent transcription thoughtfully and responsibly leads us to solutions that make everyone — from the most vocal to the more introspective — a key part of the conversation.
https://medium.com/microsoft-design/thats-a-great-idea-how-to-help-introverts-be-heard-in-meetings-556cf09fb487
['Microsoft Design']
2020-11-12 18:50:56.849000+00:00
['Microsoft', 'Design']
Crowdbotics Article Submissions & Editorial Guidelines
Crowdbotics accepts contributing articles from writers on Medium. Crowdbotics is looking for technical and how-to content for a variety of web applications categories and code frameworks including Blockchain, Browser Extensions, Dashboards, Voice-Enabled Applications, Online Payments, Django, Solidity, Swift, React, Node.js, Ruby, and more. We’re also interested in content individuals leading technical teams or working on product strategy. For example, topics like, Engineering project management Sprint planning Developer management and efficiency High-level product planning Remote work management Strategic technology and framework choices If you’re interested in writing for Crowdbotics, send your article idea (title and short summary) to editor [at] crowdbotics [dot] com. Editorial Guidelines Audience. Crowdbotics’s audience includes a variety of semi-technical and technical individuals. Identify your audience in the post by describing their characteristics and needs. Help readers complete tasks quickly and efficiently. Drafts & Revisions. An editor from Crowdbotics will proofread and make suggestions. Plan for 2 rounds of revisions. Length. A post should be a minimum of 300 words but will likely be much longer. Crowdbotics’s best performing posts are 2000+ words. Write long, then pare down with an editor. Grammar. Run your content through Grammarly and make edits before submitting a first draft to an editor and again before pushing “publish.” Write for clarity. Do not include unnecessary words or sentences. Remove bulk words that do not contribute valuable information. For further reading on grammar, check out The Elements of Style and Eats, Shoots, and Leaves. Media. Include code snippets, screenshots, and other media when possible. Crowdbotics will provide a featured image to lead off the post. Please include credits for images you provide yourself. Keywords. Crowbotics posts should be optimized for web search. An editor will identify a “focus keyword” for your post. Include the focus keyword in the title, headers, and throughout the body of the post. Active Voice. Use active voice. It is more clear and direct. In a sentence written in the active voice, the subject performs the action. In a sentence written in the passive voice, the subject receives the action. For example, “Connect to the API,” not, “The API is connected to.” Formatting. Web readers scan content to find information they are looking for. Use clear and simple headings and subheadings, bulleted lists, and highlighted keywords. Group short sentences into short paragraphs for easy scanning. Addressing the Reader. Where possible, address the reader as “you”. Refer to yourself, the author as “I”. It makes the content more personal. For example: “In this tutorial, I’ll show you how to build a custom Alexa skill.” Text should also be gender-neutral when possible. Authorship. Medium posts will be under your name unless otherwise specified. Crowbotics may re-purpose blog content on other platforms with attribution. Calls-To-Action. Posts should end with 2 calls to action: a CTA to engage users (For example, “Show us what you built in the comments.”) and a postscript Crowdbotics CTA. The latter will be added by an editor. Links. Include 2 ‘crosslinks’ to other Crowdbotics medium posts, and 2 ‘external links’ to other sites. ‘Backlinks’ to crowdbotics.com will be added to by an editor if relevant. Timeline. The timeline for outlining, writing, revising, and publishing a post is about 1 week. 
Posts should take approximately 6–14 hours to complete.
Duplicate Content & Plagiarism. Crowdbotics typically publishes new, previously unpublished work. However, we do sometimes syndicate previously published articles if we think they are valuable to our audience. When submitting your article, please specify whether it has been previously published elsewhere, and include a link. You should never submit an article that contains unattributed content published by another contributor elsewhere.
Syndication
If you are submitting an article for syndication (meaning it has already been published), Crowdbotics will make edits to the title, subtitle, and featured image, and add postscript text with a Crowdbotics call to action. A Crowdbotics editor may make additional edits for clarity or content as well. Most posts are published within 2 weeks of submission. All submissions are subject to editing, and publication is not guaranteed.
Additional Writing Tips
Inverted Pyramid. Use the inverted pyramid methodology of writing. Start with the conclusion, then include explanatory information. End with background information. This way, readers can quickly see what the content is about and decide whether to continue reading or continue their search.
Maximum Word and Sentence Count. A paragraph should generally contain fewer than six sentences, around 100 words. Keep sentences to about 20 words. Make the first sentence of a paragraph the topic sentence that describes the rest of the paragraph.
Examples of Posts We're Looking For
More Tips
https://medium.com/crowdbotics/crowdbotics-submissions-editorial-guidelines-e8a54c8cccf7
['William Wickey']
2019-06-12 11:25:41.379000+00:00
['Call For Submissions', 'Technical Writing', 'Writing', 'Submission Guidelines', 'Editorial Design']
The Python Data Model
Introduction
This chapter mainly introduces some of Python's magic methods. As the name suggests, they work like magic: there is no need to redefine or build up a method yourself — you simply use it. For example, if you have a = [1, 2, 3] and want to know its length, you can call len(a) and get 3. This works because type(a) is list, and the list class already defines the __len__ function, so calling len() simply invokes __len__. All of these magic methods are named in the form __x__.
To find out which magic methods a class has, you can check with the following code; for example, for the int class:
dir(int)
which returns the following magic methods:
['__abs__', '__add__', '__and__', '__bool__', '__ceil__', '__class__', '__delattr__', '__dir__', '__divmod__', '__doc__', '__eq__', '__float__', '__floor__', '__floordiv__', '__format__', '__ge__', '__getattribute__', '__getnewargs__', '__gt__', '__hash__', '__index__', '__init__', '__init_subclass__', '__int__', '__invert__', '__le__', '__lshift__', '__lt__', '__mod__', '__mul__', '__ne__', '__neg__', '__new__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__round__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', '__truediv__', '__trunc__', '__xor__', 'bit_length', 'conjugate', 'denominator', 'from_bytes', 'imag', 'numerator', 'real', 'to_bytes']
For an introduction to magic methods, see: https://www.tutorialsteacher.com/python/magic-methods-in-python
Building a Class: A Pythonic Card Deck
The following uses a deck of playing cards as an example (the Card class used inside __init__ is defined with collections.namedtuple just below):
class FrenchDeck:
    ranks = [str(n) for n in range(2, 11)] + list('JQKA')  # ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A']
    suits = 'spades diamonds clubs hearts'.split()  # ['spades', 'diamonds', 'clubs', 'hearts']
    def __init__(self):
        self._cards = [Card(rank, suit) for suit in self.suits for rank in self.ranks]
    def __len__(self):
        return len(self._cards)
    def __getitem__(self, position):
        return self._cards[position]
Next, collections.namedtuple is used to define the Card class. The advantage of collections.namedtuple is that it gives the class and its fields readable names, which improves readability:
import collections
Card = collections.namedtuple('Card', ['rank', 'suit'])
print(Card('7', 'diamonds'))
which prints:
Card(rank='7', suit='diamonds')
Now instantiate FrenchDeck(); len(deck) gives the number of cards across every suit and rank:
deck = FrenchDeck()
print(len(deck))
The length is 52. We can use len() here because class FrenchDeck defines __len__; if __len__ is removed, we get:
TypeError: object of type 'FrenchDeck' has no len()
Using an index to look at an element of deck:
print(deck[0])
gives Card(rank='2', suit='spades'), again because class FrenchDeck defines __getitem__:
Card(rank='2', suit='spades')
deck[:3]
[Card(rank='2', suit='spades'), Card(rank='3', suit='spades'), Card(rank='4', suit='spades')]
for card in deck:
    print(card)
Card(rank='A', suit='hearts')
Card(rank='K', suit='hearts')
Card(rank='Q', suit='hearts')
So the reason a string like '123' or a list like [1, 2, 3] works with len() and indexing is that class str and class list define __len__ and __getitem__ — you can confirm this with dir(str) and dir(list).
How Special Methods Are Used
These magic methods share one trait: they are invoked by the Python interpreter, not called directly by the user, which is why the example above only needs len() rather than deck.__len__(). In everyday code we rarely call these special methods directly; the one common exception is __init__(). Anyone with a basic grasp of classes knows that __init__() handles initialization — see class FrenchDeck above, whose __init__() is called whenever an object is instantiated. Avoid defining special methods casually; naming them well is a craft of its own and is not covered here. For details, see: https://aji.tw/python%E4%BD%A0%E5%88%B0%E5%BA%95%E6%98%AF%E5%9C%A8__%E5%BA%95%E7%B7%9A__%E4%BB%80%E9%BA%BC%E5%95%A6/
Emulating Numeric Types
First define a class Vector (hypot comes from the math module, so the import is needed):
from math import hypot
class Vector:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y
    def __repr__(self):
        return 'Vector(%r, %r)' % (self.x, self.y)
    def __abs__(self):
        return hypot(self.x, self.y)
    def __bool__(self):
        return bool(abs(self))
    def __add__(self, other):
        x = self.x + other.x
        y = self.y + other.y
        return Vector(x, y)
    def __mul__(self, scalar):
        return Vector(self.x * scalar, self.y * scalar)
Figure 1-1. Example of two-dimensional vector addition; Vector(2, 4) + Vector(2, 1) results in Vector(4, 5)
v = Vector(3, 4)
print(v)       # Vector(3, 4)
print(abs(v))  # 5.0
This echoes the earlier point: calling abs() is enough for the Python interpreter to invoke the special method __abs__(), and printing v invokes __repr__ (since no __str__ is defined).
v1 = Vector(2, 4)
v2 = Vector(2, 1)
v3 = v1 + v2
print(v1, v2, v3)  # Vector(2, 4) Vector(2, 1) Vector(4, 5)
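To tie the pieces together, here is a minimal, self-contained sketch you can run as a single script. The class definitions repeat the article's FrenchDeck, Card, and Vector; the demo calls at the end (including random.choice and the in operator, which work purely because the deck implements __len__ and __getitem__) are illustrative additions rather than part of the original post.
# Minimal, runnable sketch of the special methods discussed above.
import collections
import random
from math import hypot

Card = collections.namedtuple('Card', ['rank', 'suit'])

class FrenchDeck:
    ranks = [str(n) for n in range(2, 11)] + list('JQKA')
    suits = 'spades diamonds clubs hearts'.split()

    def __init__(self):
        self._cards = [Card(rank, suit) for suit in self.suits
                                        for rank in self.ranks]

    def __len__(self):
        return len(self._cards)          # invoked by len(deck)

    def __getitem__(self, position):
        return self._cards[position]     # invoked by deck[i], slicing, and iteration

class Vector:
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y

    def __repr__(self):
        return 'Vector(%r, %r)' % (self.x, self.y)            # invoked by repr() and print()

    def __abs__(self):
        return hypot(self.x, self.y)                           # invoked by abs(v)

    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y)      # invoked by v1 + v2

deck = FrenchDeck()
print(len(deck))                     # 52 -> FrenchDeck.__len__
print(deck[0], deck[-1])             # first and last cards -> FrenchDeck.__getitem__
print(random.choice(deck))           # works because the deck supports len() and indexing
print(Card('Q', 'hearts') in deck)   # True -- `in` falls back to iteration via __getitem__
v = Vector(3, 4)
print(abs(v), v + Vector(2, 1))      # 5.0 Vector(5, 5) -> __abs__, __add__, __repr__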
https://medium.com/chung-yi/the-python-data-model-74e0ccb1e033
[]
2020-04-04 15:46:12.088000+00:00
['Python']
Frigid Winter Air
Briskly striding in midwinter’s cold
Observations of what it has wrought
Dead leaves scattered on the ground
Light snow remnants along the path
Icy splotches lining creekside banks.
Invigorated walking in morning frigid air
Spiritual presence felt within nature’s wake
Few other people venturing in morning cold
Alone in nature’s scenery with my thoughts
Sacred pleasures always warmly embraced.
Rushing creek water after snow and rain
Overflowing its native rocky shoals
Hearing it playing melodious sounds
Nature’s language spoken in this moment
As it crests close to old bridge’s height.
Fascinations in watching nature’s changes
In current season of frigid winter air
Knowing changes will continue evolving
Into spring season of nature’s new growths.
https://medium.com/flicker-and-flight/frigid-winter-air-c710ee9ab09c
['Randy Shingler']
2020-12-26 20:49:15.659000+00:00
['Poetry', 'Winter', 'Change', 'Self-awareness', 'Spirituality']
Anecdotal Customer Feedback is Dangerous
Customer feedback is a gift. I believe that. I wrote a whole book about it called Hug Your Haters. Indeed, customers are doing you an enormous favor by taking the time to alert you to a problem when it occurs, or to their happiness, when that’s the outcome. The value of feedback is heightened today, as customer experience sways purchase decisions more than ever. Thus, listening to and analyzing customer feedback is crucial to make sure the company meets or exceeds ever-escalating expectations. But making customer experience changes based on customer feedback isn’t always wise, as you’ll see in this article. My friend Tom Webster has said many smart things, but one of my favorites is this: The plural of anecdote is not data. An anecdote is just a story. And too often we use stories, which began as customer feedback, to shape our company operations. This is dangerous. But it happens all the time. How does this occur? How do we end up in a place where listening to customer feedback may actually be harmful? When You Don’t Look Hard Enough for Customer Feedback As discussed in Hug Your Haters, your customers are talking about you in more places and in greater number than you probably know. For instance, the overwhelming majority of tweets about a business do NOT tag the business in question. Also, there is a lot of chatter about companies in discussion boards and forums, where many businesses do not actively listen. The result of not listening hard enough is that the volume of customer feedback is diminished. Consequently, when you haven’t collected all that much feedback from customers, the feedback you DO get is magnified in its importance. In this scenario, it’s easier to spin specific pieces of feedback into anecdotes: stories that you can use to shape the customer experience narrative how you prefer. When You Embrace the Outliers of Customer Feedback We remember really angry customers. And we remember customers who are incredibly happy and satisfied. This is human nature: we discuss different and ignore average. But, when you’re looking at your customer feedback and trying to decide what it all means, it’s disproportionately easy to remember the five stars and the one stars, turning that sliver of the whole into anecdotes and calling them data. Danger: Customer Feedback Ahead Why is this a problem? What’s the downside of using just a few customer viewpoints to help shape how and why you do things in your company? Because truth requires math. An anecdote — even a great and powerful one — is just a blip. One (or even a handful) of customers should never shape your customer experience decisions, regardless of how persuasive, powerful, or poignant their feedback. The opinions of one customer, in one circumstance, in one moment in time, based on their specific experience is just that: ONE experience. And that can create feedback that is VASTLY different and dangerously contradictory. I learned this lesson quite clearly over the past 30 days. When Customer Feedback Collides My newest book is called Talk Triggers: The Complete Guide to Creating Customers with Word of Mouth. Written with Daniel Lemin, Talk Triggers is comprehensively researched and includes the 4–5–6 system for creating word of mouth strategies that acquire customers (4 Requirements of a Talk Trigger; 5 Types of Talk Triggers; 6-step Process for creating Talk Triggers). Because the book is about word of mouth, Daniel and I decided the book should have a feature that stands out; something to create conversation among readers. 
We included three. First, the cover is hot pink and features alpacas. Second, the inside has tear-out pass-along cards readers can use to recommend the book. And third, the book has an iron-clad guarantee. The back of Talk Triggers reads: If you buy this book and do not love it, go to TalkTriggers.com and send the authors a note. They will buy you ANY other book of your choosing. So far, out of MANY thousands of readers, we’ve had just two redemptions of this very special guarantee. And they taught me just how dangerous anecdotal customer feedback can be. The First Negative Customer Feedback Book has been out five months, and we get our very first request for a different book, from Gary. Said he didn’t like Talk Triggers. Wants a copy of Mark Schaefer’s Marketing Rebellion instead. Good choice. We get his proof-of-purchase, and send him the book. Before we ship, we ask him what he found lacking in our book. Gary says: The number of examples were few compared with the number of companies in the country. Fair enough, Gary. Although it would be a long book indeed if we aimed to write case studies about all the companies in the country. The Second Negative Customer Feedback Fast forward a month. James emails us. Said he didn’t like the book. Requests a $200, out-of-print book about digital marketing instead. We weren’t too thrilled about that, but we bought it for him, and then asked what he didn’t like about Talk Triggers? James replied: Too many case studies. The book relied on them way too much. James, meet Gary. Gary, meet James. You guys should get along great. And that’s why anecdotal customer feedback can be dangerous. Same book. Exactly opposite feedback. Make your customer experience decisions based on math, not stories.
https://medium.com/convince-and-convert/anecdotal-customer-feedback-is-dangerous-8723aafbceca
['Jay Baer']
2019-04-11 16:59:06.271000+00:00
['Customer Success', 'Word Of Mouth', 'Customer Experience', 'Talk Triggers', 'Books']
5 Ways to Not Care What Other People Think
1. Speak Up
I was silent for a long time. I became a self-taught introvert for most of middle school and high school. I spoke when I was spoken to. I stayed quiet when the cool kids were around. I talked with my friends and said things to please them. I didn't want to be disliked for my opinions. Gaining my voice back was terrifying and freeing all at the same time. I was loud. Definitely loud. But I was happy. Not everyone is going to like you if you speak up. Think about the greatest people in history who used their voices to be heard. Rosa Parks, Martin Luther King Jr., Gandhi, Nelson Mandela, Winston Churchill; need I say more? They didn't care what others thought or who hated them for what they had to say. What I'm saying is not comparable to what these courageous leaders did, but the underlying idea is shared: they weren't afraid to speak up because of judgment from other people.
2. Accept Judgment
I feared hatred and being disliked for a long time. But it's human nature to judge or critique others. At least most people do. I'd like to say I don't, but when I see a post of someone on Instagram that looks a little off, I usually think to myself, "why did they post that?" I guess what I'm trying to say is judgment is going to be there whether you like it or not. I don't like the thought of being judged, but if I fear it, I am losing everything and gaining nothing. I'm letting other people's thinking control my life. Which is no way to live.
"Be curious, not judgemental." — Walt Whitman
If I'm going to live without fear of judgment, then I will live by not judging others. It is something I practice every day now. Let others live their lives as you live your own. I accept criticism; it will only make me stronger. And that criticism is usually coming from someone with their own insecurities, which they feel the need to push onto you.
3. Push Send
There are countless times I never hit send. The phrase "full send" is one that I try to implement in my life no matter how ridiculous it sounds. I cared what the response would be, whether it was sending a text, a Snapchat, or posting a picture on Instagram. The what-if mentality got the best of me, and I feared what the person on the receiving end would think. But if you never hit send, you will never know what could have happened. The fear that keeps you from knowing is worse than knowing. And trust me, I get it. Rejection or no likes or being left on read sucks. But regret, in the long run, sucks more. Remember that it's just a message. Hit the button and bite the bullet. Who really cares if it's a flop? Be so confident in yourself that it just doesn't matter.
4. Love Yourself
I didn't love myself for a long time, let alone like myself. When you love yourself, you care less about what people think. So you do what you want when you want. I can see why I was always so scared to be myself; it was because I didn't like the version of myself I was presenting. If I had loved myself more, it would have been easier to care less. Self-love is something that takes time. I don't think we are all naturally in love with ourselves, especially in the world we live in today. It's easy to see people who are more successful, better looking, and highly praised just by going on our phones. But if you learn to stop comparing yourself to others and start bettering your mind, body, and soul, then you will see just how amazing you are.
5. Be Honest
It's hard to know who you are and not let judgment from others scare you. But I always remind myself this: We have one life. One body. One spirit. One soul. One heart. It's all yours. There's no point in hiding it or pretending to be someone you're not. Be honest with yourself and everyone around you. It's easier said than done, but I'd rather live my life knowing I was one hundred percent authentically me.
https://medium.com/an-injustice/5-ways-to-not-care-what-other-people-think-4cc5792f2bcf
['Taylor Franklin']
2020-05-24 04:45:34.119000+00:00
['Personal Growth', 'Mental Health', 'Culture', 'Personal Development', 'Self']
Why do my beauticians like everything but hair?
By high school, I put my foot down and decided I would find my own beautician. I started off small, testing out a beauty salon three blocks away from my childhood home. No cigarettes. No cookies. And I’d mastered the art of beating my head to death instead of scratching it. I was all set. Then I found out that my new beautician was allergic to clocks. If there were six women who could all be there by 2 p.m., she booked us at 2:01 p.m., 2:02 p.m., 2:03 p.m., 2:04 p.m., 2:05 p.m. and — no, not 2:06 p.m. because that’s when she decided to bail on all of us and take a lunch break. Jeezus. I had homework to do and could not deal with this one. Time to go. I thought I was onto something with the next beautician, who I found in the Yellow Pages. She was almost always late for our appointments and had something smart to say if I was ever late, too, but she worked magic on my hair in approximately two hours, no matter what I asked for — relaxer, deep conditioner, wash, full haircut, whatever. I could deal with her being a few minutes late. I was loyal to her all through high school. I went to senior prom, and shortly after that, she decided to become a school bus driver. Just packed up all of her curling irons and shampoo, and went straight to busing kids to school. WTF? Photo credit: Austin Pacheco/Unsplash It’s usually considered the ultimate disrespect to go to another beautician in the same salon, but what else was I supposed to do? I liked that salon, and the whole crew of ladies (six or so) could all do hairstyles I liked. I spied the one I liked the most and asked her to take over where the Bus Driver drove off. To my surprise, I liked the way the replacement stylist did hair even more. Well, well, well, I thought I had a winner there. I went off to college and tried to stay in touch with her whenever I drove home from Michigan or Missouri to get my hair done again. Although I’d gotten a reputation as the college hairstylist who could perm, curl and style my peers’ hair, it was nice to sit back, relax and let someone else do mine. Photo credit: Create Her Stock But by the time I graduated, I strutted into the shop, all set to get my favorite new ‘do and the salon owner told me my beautician decided to quit and become a full-time Ford Mechanic. What is up with these beauticians leaving me for cars? I gave my hairstylist hunting one more go with a male beautician who wouldn’t stop tapping on my shoulders like I was a piano. I asked for one hairstyle, and he “um hmmed” me to death and did something so horrid to my hair that I still scowl at it when I see pictures. I pondered on how long it would take me to find the next auto-body shop. I’m sure some woman changing somebody’s oil could fix whatever he did to my hair. But I sighed, shook my head at him smiling at the nightmare haircut and never ever went back. From that point on, I decided I was going to be my own beautician. I knew the basics. I knew I wasn’t going to let Chips Ahoy distract me. No car mechanics with common sense would ever try to recruit me. I’d given up my smoking days my sophomore year of college. And best of all, I knew that there would be no double-bookings. I may not have the cosmetology license behind me, but there will be no schoolchildren to snatch me away from my own hair needs. I’m pretty proud of my healthy, happy hair. But the next time you’re getting your car fixed, check out what the mechanic’s hair looks like. If it’s way too neat for someone fixing an engine, tell that lady “Shamontiel misses you.”
https://medium.com/tickled/why-do-my-beauticians-like-everything-but-hair-f3ab00459afe
['Shamontiel L. Vaughn']
2020-06-16 12:54:40.320000+00:00
['Humor', 'Black Hair', 'Beautician', 'Hair Salon', 'Storytelling']
Feature Selection — Using Genetic Algorithm
Let’s combine the power of Prescriptive and Predictive Analytics
Source: analyticsvidhya.com
All machine learning models are trained on large volumes of data to predict future patterns, which means a model is only as good as the quality of the data it sees. Even minute errors can cause a model to yield infeasible or inferior results. Thus, the quality of the data used for training is a primary concern for any organization, and feature selection plays a crucial role here. Several techniques exist for selecting a feature set, such as forward selection, backward elimination, and stepwise selection. However, most of these approaches are performed manually and are computationally expensive and time-consuming. Therefore, this article uses a Genetic Algorithm to obtain an optimal feature set within a reasonable amount of time. The structure of this article is as follows.
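To make the approach concrete, here is a minimal sketch of feature selection with a simple genetic algorithm. It is an illustrative outline rather than this article's exact pipeline: it assumes scikit-learn's built-in breast cancer dataset as a stand-in, wraps a logistic regression as the scoring model, and the population size, mutation rate, and other hyperparameters are arbitrary choices.
# Illustrative sketch: each candidate feature subset is a 0/1 mask,
# scored by the cross-validated accuracy of a model trained on it.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    # Cross-validated accuracy of a model trained only on the selected columns.
    if mask.sum() == 0:
        return 0.0                                    # an empty feature set scores nothing
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(model, X[:, mask.astype(bool)], y, cv=3).mean()

def evolve(pop_size=20, generations=15, mutation_rate=0.05):
    # Each individual is a 0/1 mask with one bit per feature.
    population = rng.integers(0, 2, size=(pop_size, n_features))
    for gen in range(generations):
        scores = np.array([fitness(ind) for ind in population])
        order = np.argsort(scores)[::-1]              # best individuals first
        print(f"gen {gen:02d}  best CV accuracy {scores[order[0]]:.4f}  "
              f"features used {population[order[0]].sum()}")
        parents = population[order[: pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a = parents[rng.integers(len(parents))]
            b = parents[rng.integers(len(parents))]
            point = rng.integers(1, n_features)       # single-point crossover
            child = np.concatenate([a[:point], b[point:]])
            flip = rng.random(n_features) < mutation_rate   # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.vstack([parents] + children)
    scores = np.array([fitness(ind) for ind in population])
    return population[np.argmax(scores)]

best_mask = evolve()
print("selected feature indices:", np.flatnonzero(best_mask))
Wrapper-style selection like this retrains the model for every fitness evaluation, so the number of generations, the cross-validation folds, and any penalty on subset size are the main levers for trading accuracy against runtime.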
https://medium.com/analytics-vidhya/feature-selection-using-genetic-algorithm-20078be41d16
['Samiran Bera']
2020-11-26 07:03:25.344000+00:00
['Python', 'Logistic Regression', 'Genetic Algorithm', 'Feature Selection', 'Machine Learning']
On the Verge
On the Verge How I feel when everything is going right… Photo by Arjun Kapoor on Unsplash There’s a line in a Sam Shepherd play that’s stuck in my brain, when a parent is worrying about a missing child, who might be lying on the side of some freeway, “busted open like a road dog.” I felt like that parent for most of the last 15 years, when I was worrying about my middle child, who battles with mental health and drug use, and was frequently MIA. But now everything has changed. Three and a half months ago, he was released from the hospital, and nothing bad has happened… “I’m not going to get my hopes up,” his dad said, and that’s understandable. Three months isn’t much time when compared to the 180 months we’ve been living with suppressed dread — afraid to answer every call. But it’s a long time — a good, long stretch — when compared with other periods of ease in our son’s adulthood. I’ve heard it said that a mother can only ever be as happy as her unhappiest child, and I’m here to testify — that’s true. But this is one of those rare moments in my adult lifetime when everything is going right. My oldest child? Happy. Middle child? Housed. Youngest? Getting married in Israel this month! And I’m feeling busted open right now, but not like a road dog — busted open like a pomegranate, with juicy red seeds spilling out all over the floor. Everything I see makes me want to cry. That older Asian man making his way carefully down the sidewalk outside the window of our cafe? Heartbreaker. See how he gently places each foot? How his black puffy jacket is keeping him warm — his baseball cap shading his eyes from the sun? How happy he must be to be alive and outside on this gorgeous afternoon in San Francisco! That two-toned orange and white Volkswagon bus rounding the corner? Tearjerker. How lovingly the driver has maintained that jewel from another era! His devotion to the ’60s shines through the pristine paint job, the bright white upholstery, the black scrunchie holding back his long blonde hair. The two young women complaining about their overbearing mothers at the next table? Loveliest thing I’ve ever heard! They yearn to fly the nest, but it’s too comfy to leave. How lucky for them! How lucky for all of us that these young women have such mild concerns. You see what I mean? See what is happening to me? This kind of happiness is awesome in its power. Living under its sway has opened my heart to my husband. So now when he complains that he doesn’t want to fly to Israel for the wedding and leave the cafe unsupervised for 10 days, I don’t admonish him and tell him to get with the program, attempting to bully him into my point of view. I tell him I’m sorry it stresses him out. I can actually hear him, which isn’t often the case after cohabitation for 36 years. And it’s not only him. I can hear everyone, see everything, with these new, teary eyes. This kind of happiness is frightening. Because it could disappear at any moment, and I could go back to living in a cramped, fearful life. But as for now? Pomegranates are in season. A middle-aged couple laughs together on the corner. An older man in a brown jacket covered with random badges points out the Cable Car Museum with obvious pride. Two young men in shorts with a dog and a frisbee stride down to the water. Pomegranates hang plumply from the Tree of Life, and I’m going to pluck them, and suck down every drop of their red, delicious juice while I can.
https://medium.com/fourth-wave/on-the-verge-of-tears-ac9e3a683ecc
['Patsy Fergusson']
2019-11-11 15:41:31.959000+00:00
['Short Story', 'Life Lessons', 'Mental Health', 'Parenting', 'Happiness']
How To Make Money: Intro
By Richard Reis Hello dear, Today, I’m excited to begin our “Making Money” series! Simply put, we’ll talk about how to make more money. “What classes should I take in college?” “Should I even go to college?” “How do I get a job?” “How do I write a cover letter/ resume?” “What are others ways to make money?” And so on. These are questions I’ve either (a) asked myself or (b) heard before. Chances are, so have you. Finally, we’ll answer them! Onward. Why Talk About Making Money? Some people might scoff at the thought of increasing income. “This isn’t what you should value!” they say (I’ve seen them before on this very blog). And to be honest I agree! Making money just for the sake of making money doesn’t fit my idea of “time well spent.” Go back through my letters. You’ll find I talk a loooot about saving and investing money. I did this for a good reason! Those two are the most important skills you need to live a life free from financial stress. However, some people really live paycheck to paycheck (I put emphasis on the word really because most people use that as an excuse and still buy coffee at Starbucks, still commute to work, still get the latest gadgets, yada yada yada…). Remember, the vast majority of people should focus on saving money. But if you can’t save another cent? You should focus on making more money. That’s it! How Do You Make Money Anyways? First, we need to agree on the same “money-making philosophy.” This will be helpful for future letters. Also, it’ll assure people I won’t write about “get-rich-quick schemes.” My philosophy is different. Here it is: If you wish to make money, serve someone in a way they’d pay you for. That’s it! Sidenote: Emphasis on the words “pay you for.” Serving more people doesn’t necessarily mean you’ll make more money (I serve about 5,000+ people with this blog and don’t make any money from it). Find something people are willing to pay for, and deliver it to them. Once you do that, you can multiply the amount you make by increasing the amount of people. Ahh… The beauty of Capitalism. The more you want, the more you have to give. Here’s the catch: Serving more people also means more pressure. Don’t take it from me. Take it from Evernote’s founder, Phil Libin: “People have this vision of being the CEO of a company they started and being on top of the pyramid. Some people are motivated by that. But that’s not at all what it’s like. What it’s really like: everyone else is your boss — all of your employees, customers, partners, users, media, are your boss. I’ve never had more bosses and needed to account for more people today.” -Phil Libin Yes, Phil Libin is a very wealthy man. But he also had to serve lots and lots of people. Keep that in mind before you pick a way to make more money (we’ll talk more about this in a future letter). Sidenote: For more glimpses into the hectic life of a company founder, follow Spanx’s founder, Sara Blakely, on Instagram. She’s laugh-out-loud hilarious, but she’s also one of the hardest working people I’ve ever seen. Two Ways To Make Money I’ll divide the next few letters into two different categories. “Why?” Because these two categories pretty much cover all the different ways in which you can make money. Those two categories are: salaried work, and nonsalaried work. Sidenote: I won’t talk about making money from investing in stocks. Simply because I spent the last 14 weeks doing so. Salaried Work This is the most common way of making money. You work for someone, they pay you a predictable amount. 
The amount is predictable because productivity is hard to measure. So whether you’re more productive or less productive any given month does not affect your income. Sidenote: Of course, sometimes you are so productive you deserve a reward (like a raise or a bonus). But this rule varies a lot depending on the job. Nassim Nicholas Taleb would classify this way of making money as a “Mediocristan” way. I’ve talked about this before, but here’s a refresher: “You can think of Mediocristan as the land where things are mundane, obvious and predictable. e.g.: If you have a normal job and make a normal income, you know that no singular event will suddenly make you rich. It will take years of saving and investing, day by day.” Although the word sounds negative, there is nothing wrong with Mediocristan. Why? Because remember you are only serving one person. Therefore it is (typically) much much less stressful than the alternative. Nonsalaried Work This way of making money is far less common. Some people call this “side-hustle” if they have a job, and “entrepreneurship” if they don’t. Here, it’s harder to predict how much money you’ll make each month (if you make anything at all). Nassim Nicholas Taleb would classify this way of making money as an “Extremistan” way. Here’s a reminder of what that is: “You can think of Extremistan as the land where things are highly volatile, accidental, and unpredictable. This can be good or bad. e.g.: If you take all your savings and bet on Bitcoin, you can lose it all or become very rich in a single minute.” Of course, since you depend on yourself, stress levels are usually much higher (Google “founder depression” to find countless articles on the subject). “Why would anyone willingly subject themselves to this??” There are many reasons. Since we’re talking about finance, here’s the financial reason: In this situation, productivity is more directly correlated with reward. Let’s say you found something people want and started selling it to them. The more you sell, the more money you’ll make. Therefore, how much money you make is only limited by how much you can sell (hence why founders are known to sleep very little and work very hard). Sidenote: This does not mean you should start a business just to make money! That is a terrible idea. Remember it’s really hard work, and it kills your social life. We’ll talk about this more in detail in a later letter. For now, that’s it for today! Today, we kickstarted out “Making Money” series! Also, we learned: Why we’re talking about making money. How you make money (by serving others). And the two categories of making money (salaried work and nonsalaried work). See you next week (follow the series here to be notified). Be well. R P.S.: Has anyone here tried Simple? I really love everything about this company, but I want to make sure others have had a great experience with them (before I switch completely). P.P.S.: One of my first mentors was a man named Jerry Weintraub. Unfortunately, he passed away a few years ago. If he were alive, today would have marked his 80th birthday. If you’re looking for an incredibly inspiring book to read, check out Jerry’s autobiography “When I Stop Talking, You’ll Know I’m Dead.” If you’re more into documentaries, check out his documentary “His Way.” I guarantee you’ll be entertained. At the same time, you’ll learn a lot (which classifies as time well-spent!). Happy birthday Jerry.
https://medium.com/personal-finance-series-by-richard-reis/how-to-make-money-intro-2cd2cbe46866
['Richard Reis']
2020-11-11 06:57:37.666000+00:00
['Life Lessons', 'Entrepreneurship', 'Life', 'Money', 'Finance']
Infidelity
Infidelity A Poem in Five Acts I first took notice of his arms. His muscles under the short sleeve of his shirt. I tried to imagine him; a complete him with his muscles taut then softening as he touched me. I tried to imagine his words: a softness when he spoke to me as his lover, whispering my name against my flesh. — I do not want to be something for him to dip into. A temptation. I do not want to be standing water: calm and deep. I want to be the ocean. I want my passion free. I love him. I love him. I love him. — In the mirror this morning, again this afternoon. Looking at myself in a bathing suit, I wondered. What would he think of my body? Someone who is in love with you defines you. They shape and sculpt the contours of your body with short glances. I feel sexy right now in a short, whit cotton tank dress. My hair is held up loosely in a bun. — David says that the more pressing issue is my marriage. — I feel like I belong here on this part of the coast: Stinson, Bollinas, Marshall, Point Reyes. The windy roads, the dramatic cliffs, and the Pacific Ocean. The weathered, old wood homes. I belong here, I can feel myself expanding, being creative. — I find myself missing my husband today. “Are you lonely?” I asked him last night as we lay across our bed together. “Yes.” He answered. “Me too.” — It was quiet. I wanted to stay on the bus: keep moving. In the dim light of dusk, in the oily smells, and under the dull electric lights. — We went back to our apartment in San Francisco to feed our cats and I lay on the couch and stared up at the white, painted plaster walls and Neil played the piano. I thought to myself how much like an independent film that moment was: my eye fixed on a chipped piece of plaster, the yellowish light of afternoon, my husband’s music. — I saw the tears falling on words and I thought of Jackson Pollack. And I thought of David and how I had said “you are Michelangelo and I am Jackson Pollack.” — In Kate’s car today. I felt in the right place: riding in the back seat with Neil, my feet out the window, reclined against him. He had his hand on my leg and I noticed how his fingers seem slender, ready to hit a piano key. I listened to his voice, how he pronounced words. I thought of how much I like him, just who he is and how he listens and squints his eyes when he considers a question. I like his restraint and how he can gauge a conversation and hold it up and keep it organized. All of this while I tapped my feet on the ceiling, listened to Led Zeppelin and watched the pelicans and egrets as they passed by the car on our way to Pt. Reyes. — I thought of how much like a mother I felt: Neil laying on top of me, his head buried against my neck. I was rubbing his back and softly kissing his forehead. “People aren’t this close when they first meet,” he said to me. — I feel peeled open. Cut into. — I said to David, “I’ve felt like this for a year.” David said “incubation can last forever. Those feelings can last forever in incubation. It is when they are consummated that you know whether they are viable outside of the fantasy.” — His words are sewn together, hidden in the fabric of our work. “…in my eyes you are perfect, absolutely no flaws…I don’t want you to leave…I care about you.” — At first his eyes landed on me when I didn’t expect it. At the coffee stand. I looked up and smiled at him. There was a pause, then he smiled back. — Before I went to talk to him, to tell him I was leaving (I was wearing a loose sun dress with thin straps. 
It was casual and revealing), I had said a prayer to God and I had figured (and still believe) that I had to find a path to righteousness. “I care about you,” he said to me on that day. I traced the grain of the table with my finger. To my sister he has two sides; she has only my eyes through which to view him. She doesn’t understand why he kept his wife on speaker phone with me in the room waiting. To my sister this is a sign.” — “You are just like my wife.” “She sounds like an amazing woman,” I joked. — David said “you are so flippant about that phrase: in love.” The Pacific was breaking in silent waves outside the think, glass windows of the Cliff House.” — I miss him. I miss how his eyes twinkle, grow large when I speak. I miss how he watches me and I miss those rare moments when he touches me. The one time we embraced, my body fit into his. It was perfect. — I can feel it inside of me. It is a hand and it opens like a bird when it comes out. — (he was teasing me) You were right. I was wrong. You were right. I was wrong. You were right. I was wrong. You were right. I was wrong. You were right. I was wrong. You were right. I was wrong. Can you forgive me? — I know what his wife looks like because he has a framed picture of her on his desk. She is very pretty with short, dark hair. I’ve heard her voice. We were working alone in his office. He had a speaker phone on the desk in front of us. The phone rang and it was his personal line. “I’ve got to get this,” he said to me. He hit a button and her voice filled the room. “Sweetie?” she said into the air. I looked down at my hands, feeling out of place. “Ya?” he said into the box on his desk. “Why did you call earlier?” she asked. “To get the address (of someone) to send a sympathy card — I already found it though and sent the card,” he said. He looked at me and smiled. “You’re a sweet thing, aren’t you?” she teased. He laughed a little. “I’ve got to go. I’m in a meeting. I’ll see you later.” Then her voice disappeared. — I saw a little girl’s face in the ivy leaves. My eyes were playing tricks on me. They were dry from crying. — Neil played a Hank Williams song with his friend tonight and I sat watching him and thinking how much music he creates; how talented he is: what a beautiful man he is. — I’m scared and I wonder about not drinking: the dreams, the anxiety; holding my eyes open for such a long, extended period of time. “you have more to uncover,” my priest had told me. He also told me not to indulge in the darkness: drinking, cutting myself. “Be careful,” he had said. — “I’m honestly worried about you,” he wrote to me. — I was nervous talking to him, asking him for something. “It’s OK. You’re doing OK.” I looked at him; it felt like he was holding my hand. That is how I wrote my novel; with him holding my hand, protecting me. I trust him. There they are again, the three birds dancing gracefully in the sky, just above the waves. They are pelicans and now they are overhead: flying between me and the sun on their way back to the lagoon behind the beach house. — My sister thinks he’s in love with me. She thinks that he wouldn’t want to hurt me by having an affair. In my heart, I wonder about yesterday. Could something have happened? — I started crying because I’m going into his office to meet him. No one else was in the building. I wrote a prayer: God please bless me and protect me. Help me become righteous. Keep me on the path of virtue. Please show me signs to guide my will and behavior. 
Help me understand passion as temptation or love — whichever it is. Calm me, even with these fires. Keep me virtious. Help me make the right decision. Give me Grace and allow me to do the will of God. — They don’t have the problems that I do. The people in the cars. Moving a snail’s pace to the toll plaza. — I felt like I had sought him out. In his office. Alone with him at night. “Do you ever relax” he asked me. “No,” I said growing nervous. “Never?” It is the way that he’s interested in me that makes me feel self indulgent. I want him. “When I had my book, I could but now that it’s done…” I said. “What do you mean ‘your book?’ What kind of book?” “A novel,” I said. I felt my neck growing warm and my throat closing. My words shrinking. He was smiling at me. “That’s another side of me,” I said softly. Our eyes met and slowly his smiled faded. I looked at him awkwardly for a moment. Then, I made an excuse and left. — Driving over the bridge, I scribbled on a napkin: Is it possible that God used you to sculpt me, your eyes carving me in the parts of me that you have noticed over the years? Is it possible that your love has been my inspiration, and through you, God has made me your instrument? Because, all that I have become belongs to you, and now my heart, my mind, and my will know only the path to your love. — I am whispering now, but no one can hear my voice. Into the darkness, I am asking my life to become different. I am asking myself to become. Just become. — I rolled over and my eyes felt dry and sticky from sleep. I opened my eyes and the absence of him was noticeable. It almost startled me. I sat for a moment and listened, but all I heard was the rhythm of the ocean. Last night as I was falling asleep, I thought, what if there were a war, I was a prisoner? Who would I want with me? It was Neil, of course. Of course it was my darling, Neil but he is so far away from me. He always has been. I think, perhaps, it is something to do with a sadness, a sadness that I have detected in him since I stopped drinking. I can not have Neil. I think that is an unsolvable problem. I can’t have my husband. — It’s fuzzy: Neil’s voice on the cell phone. “Isn’t it amazing?” he asks,” that I can call from the camp site?” The phone goes dead and doesn’t ring back. — Am I causing the loneliness in our marriage? The waves are large and they are ominous, pushing towards the three figures — one of which is my husband. They all look small, juxtaposed against the vastness and the power of the ocean. — I can hear the ocean. To me it sounds like a storm, like a car passing in the dark, through slush; or, loud winds. — Neil is sleeping on the edge of the bed. He draws a straight line with his body to the window out towards the ocean. Every time I glance at him I can’t help but notice the Pacific and the thick foam of breaking waves as they inch closer to us. — I see the stream of light focused on Neil. I can smell the sweetness of the marijuana and see the smoke dance in the tube of light. His music pours into the living room; through the spotlight and into the darkness beyond. He doesn’t look up at me; his eyes fixed on the piano, his fingers methodically striking the keys. Not a glance. Not one. Not at my legs or the neckline of my lilac dress that buttons up the front and folds slightly, revealing my cleavage. Not a glance. Instead, he looks around my messy office. “It’s going to take you more than three weeks to clean out your office,” he says, not smiling. 
“This stuff isn’t even mine,” I say back, teasing, “see the conditions I’ve had to word under for three years?” “You’re a saint. Truly a saint.” I mention something about work. Work business. “Ya well…” he says. His eyes have forgotten me. — Is it true, what David said? That if I don’t bring it into the real realm that it will continue to torture me? Should I have told him at lunch today, with him looking back at me, seeing me? — He studied my hand, followed it when I touched my hair, pulling it back, adjusting my barrette. I smiled, “I’m sorry. I’m not hungry.” I said to him. “It’s OK,” he said gently. “You don’t have to be sorry.” — “I’m going to miss you,” I put my arms around him and hugged him. He held me for a moment. His neck turned flush and he walked back a step. “OK, well…” he said. — It is dawning on me. I won’t see him any more. The words linger, whisper back. Any more. — My face feels swollen and old. It’s not. It’s just leaving him; his glances on my flesh, defining me. It’s losing his touch. It is just leaving him.
https://medium.com/life-is-fiction/five-months-9bb085f02b78
['Donna Barrow-Green', 'Rose Gluck']
2017-09-15 17:28:12.624000+00:00
['Fiction', 'Marriage', 'Love', 'Mental Health', 'Poetry']
The Truth Is Always So Strange: A Conversation With Lynn Lurie & Terese Svoboda
Lynn Lurie: In Great American Desert you return home. From the story “Dutch Joe,” “We settlers have pushed all the way into the pockets of Lady America… we perceive the urgency of the land’s fecundity to be ours, it is so empty and waiting.” Your first novel Cannibal, is narrated by a nameless, mostly placeless and timeless woman, who makes one reference to her origin late in the novel, “ I am telling him my father has cows, I come from a place that is as flat as this. ” After Cannibal, home becomes more evident in your novels. But you don’t write about your children, not even obliquely. It would seem you have a fidelity to them that is so fierce, so maternal that despite all the risks you take as a writer this is a subject you have walled off. Can you talk about this? Terese Svoboda: My first child died at the age of four in an accident. There is no grief as wild as this. He is buried in all my novels, and in three of the stories in Great American Desert. But this collection really concerns a more primal coming-to-grips: my own father’s long dying. TS: This new novel of yours, Museum of Stones, is written in oblique prose touched with the concision of poetry, and marked by the fragmentary nature of memory that unite all three of your novels. Corner of the Dead evokes a young American woman encountering the Shining Light in Peru and the execution of incomprehensible evil, and understanding her place in it. Sections depicting the villagers in extremis are interspersed with her life as a volunteer. Quick Kills is the story of a young photographer who is lured into the art by a pedophile, unprotected by her family. A strange and inconsolable son is at the heart of Museum of Stones, a character who overwhelms the mother with his inchoate demands, yet the book echoes not only the form but some of the subject matter of the previous two. Peru is where the mother and son volunteer, and the difficult family continues its demands on the narrator. Many authors re-articulate their concerns from book to book, hoping to get them right or to set them right. What is your intent? LL: Yes, all three books overlap and share a number of similarities. If I were the sort of writer who plotted and planned I might have thought to write a trilogy. Each novel provides a new context for examining, what are loosely, three significant events in my life. Understanding how they have marked me remains an unfinished task. This also defines the process of writing: I am never done. In each novel I take new risks, not only with the subject matter but also with the form. So while all three dance around the same issues, with each subsequent book, I have tried to push further. I would love to be done and on to something else, but despite my efforts I keep circling back. Tin God gives a sense of vastness of the prairie and of what the land has witnessed over time and in Bohemian Girl we are also west in the 19th century. The girl is chattel the father bargains with to pay off a debt. You have stories of a father of a gun, a pick up truck, of men. Have you struggled with how your work is received by those who might see themselves in the work? TS: When All Aberration, my first book of poetry, was published, my mother complained that I made her sound fat, when really there were a number of poems that portrayed her as unloving. That’s when I realized “those who might see themselves in the work” would see what they wanted. I am, however, relieved that my father died last month. 
What is it about the negative space between your paragraphs that generates so much power in your work? I’ve often thought of your work as a hybrid prose poetry, but this last book is the one that most resembles that form in that it moves from section to section with a mysterious unity, like a much shorter Underworld. Would you appreciate such a label or do you prefer to be thought of solely as a novelist? And why. LL: I don’t think I have written novels or poems. My work doesn’t fit comfortably in either box. It may be because I am revising the same questions the hybrid form makes most sense. It allows me more opportunities to find the connections. My sense of the novel is that it has a distinct beginning, middle and end and is plot driven. Plot and a formal structure often hamper me, which is why I use both minimally. I prefer to let the stories unfold in a non-linear fashion, the momentum determined by memory and how images come to me. I don’t think my stories come to a full stop, but rather end with motion more akin to yielding. I prefer to hold back from excess description and context, which often detract from the emotional content and impede the reader’s imagination. So hybrid might be the most accurate moniker. As regards the spacing, there is a consistency as to when it occurs in all three works. The white space is employed when author and reader are in need of air. It provides reprieve. Museum of Stones relies most heavily on spacing: it was the most difficult story to tell. Its narrator is the most fragile of the three and the vulnerability of the son, at times, pulses as if it is its own character. I needed to work up to Museum of Stones and the prior two books gave me that courage. The white space also allows the reader to spend more time with what she has just read. I ask a lot of the reader with these spaces and am grateful for those who find something in the layering. I was trained as a black and white photographer and I still think in gradations of grey, the way shadow can ultimately create an image even when none is clearly visible. LL: Black Glasses like Clark Kent is a memoir of your uncle. How did you manage the weight of the material and the sensitivity of the subject? The effort here I would imagine is different than in a novel as you are working with facts. TS: My uncle told me to take the tapes he dictated and do what I wanted with them. In following his story, I had to explore issues of race, social justice, white supremacy, and the debasement of women to understand what made him operate as an 18-year-old MP in post-war Japan. Early on I decided that the power of the material was in the facts. The structure of the book is related to the discovery of the facts: I reveal them as I find them. Most of the stories in Great American Desert are fact-based and researched, except the mythological and sci-fi. Truth is always so strange. TS: The mother in Museum of Stones suffers from altitude sickness and must abandon her son in order to save her life in the midst of political chaos. His escape and their reunion is one of the most gripping parts of the book, where all the vectors of his personal terrors rise up. Earlier, the mother nearly comes apart while caring for him. What are the challenges of a writer when describing these kinds of mental stress? LL: You are the perfect reader.The writing of mental stress requires an unforgiving and unflattering investigation into motives, a reckoning of inadequacies, an exorcism of weaknesses. 
My writing often borders on violating confidences. Perhaps it crosses the line and when it does it causes pain, which is never my intent. I spend a lot of time rewriting just to minimize this. It is hurtful to close my eyes and feel what I, at other times in my life, worked so hard to forget. When it works I am rewarded, when it fails, I am despondent. LL: In Great American Desert you return home to the west with a vengeance. There is a love for this land, for these people, for the trauma of living here. The collection underscores you are a writer of The United States, the frontier. Was the need to put these stories together related to events of late that are undermining our America? TS: Since the stories were written over a period of 25 years, I can’t say that they were written in response to recent events, but the thread I found to hold them together, climate concern, is timely. I have, however, written about the environment before. My second novel, A Drink Called Paradise, is about the Pacific peoples who still suffer so much from postwar bomb tests. Radiation is almost as timeless as the degradation of the earth, at least from the human perspective. The narrator of Museum of Stones returns to Peru to work with poor and illiterate patients. How does the foreignness of the location illuminate what is essentially an intimate story of the heart, mother and son? LL: The foreignness is a screen and when it is gives way, she is whittled down to her rawest form. It allows her to feel her own circumstances from a slight distance and that makes her momentarily less afraid to assess them. She learns the most about herself if she can feel it first through someone else’s viewfinder. Peru bleeds through all three novels. It was in that part of the world, when I was young, that I lived with people who were stripped to the barest of living conditions, and who, with so little, made art, raised children and somehow remained gentle and generous. Acts of kindness are what sustained them when floods took their huts and provisions and when the military swooped in and took the rest. I wanted desperately to emulate the way they managed hardship, to be useful and to be humane. This meant a lot of failure on my part. Something happens in the story “Africa” that you carry forward in the remaining stories in Great American Desert. I wonder if the ordering of the stories was to reflect that the narrator is beginning to become more present? “Africa” is still told in the third person, but then there is another shift to a series of stories with a first person narrating. We now feel we are in the present tense, with the timelessness of some of the other stories, particularly the first in the collection, gone. We have been brought inside, closer. “Mugsy” is breathtaking for this reason, as are the stories after Mugsy, seemingly culminating in “Hot Rain,” which is wrenching, desperate and exquisite. Will you continue with this in your next work? Why are these stories at the end of the collection? TS: “Something bad happened” is easier to handle than “something bad happened to me.” There’s also a progression from the young protagonists in “Camp Clovis” to the awkward romance between the sister-in-law and the husband in “Dirty Thirties” to the elderly in a story like “Seconds.” The second-to-last story in the collection is a retelling of a fairy tale, and the very last is sci-fi, both otherworldly, stories told late, perhaps after death. Is that a movement toward self-reckoning? 
These days I’m using a ten-foot-pole to write about race and sex, cracking off little bits of me for examination. I don’t always want to go where “Hot Rain” came from. The extended family in Museum of Stones work at cross-purposes to the narrator, depleting her maternal energies, as well as refusing to acknowledge her own childhood struggles. The way you have interspersed and layered their ripostes gives us both texture and backstory. Did you use some kind of chart to position the sections or was this intuitive? LL: No chart. Raising children it is impossible to not remember growing up. Things long buried or never examined resurfaced. Images would come to me unbidden and linger and then branch into other memories without obvious connection. This, too, is how I write. I need to find the connective tissue. The narrator’s responses to her son are a function of the parenting she received and the best way to illuminate this was by dispersing, in zigzagging bits, past information. I couldn’t have done it any other way because the story isn’t about the narrator but about the-narrator-and-her son. They are not severable. A Guggenheim fellow, Terese Svoboda is the author of seven books of fiction, seven books of poetry, a prize-winning memoir, a book of translation from the Nuer, and a biography of the radical poet Lola Ridge. The Bloomsbury Review writes that “Terese Svoboda is one of those writers you would be tempted to read regardless of the setting or the period or the plot or even the genre.” Her short story collection, Great American Desert, was just launched by Ohio State University Press. Lynn Lurie is the author of two previous novels, Corner of the Dead (2008), winner of the Juniper Prize for Fiction, and Quick Kills (2014). An attorney with an MA in international affairs and an MFA in writing, she is a graduate of Barnard College and Columbia University. She served as a Peace Corps volunteer in Ecuador and currently teaches creative writing and literature to incarcerated men. She has served as a translator and administrator on medical trips to South America providing surgery free of charge to children, and has mentored at Girls Write Now in New York City. Her new book, Museum of Stones, was just launched by Etruscan Press.
https://medium.com/anomalyblog/the-truth-is-always-so-strange-a-conversation-with-lynn-lurie-terese-svoboda-8d07a599fddc
['Sarah Clark']
2019-04-04 15:16:29.118000+00:00
['Peru', 'Fiction', 'Interview', 'Writing', 'Memory']
Chef Edwin Anthony Rodriguez Has Seen It All, From Hospital Dietician to Contestant on ‘Chopped’
What happens when a seasoned gourmet chef discovers firsthand the many benefits of plant-based eating? Some of the most stunning vegan dishes you could imagine! Chef Edwin Anthony Rodriguez, who came up in New York City and currently resides in Charlotte, N.C., worked in Michelin-starred restaurants for over a decade before launching his own private vegan chef business. From appearing on Chopped to working as a consultant for hospital dietitians and nutritionists, Chef E has truly seen all sides of the food service industry. “I have a large amount of gratitude for all those years I spent in restaurants,” says Chef E (he speaks highly of the work environment that allowed him many opportunities to mentor others in the kitchen), “as I got older, though, I did start to yearn for being around just vegans and cooking just vegan food.” Chef E spoke with Tenderly over the phone about becoming vegan while working as a chef, his unusual favorite comfort food, and the high school teacher that encouraged his passion for cooking in the first place. Tenderly: How do you describe your cultural background, or cultural influences you grew up around? Chef Edwin Rodriguez: I grew up very Nuyorican. There are more Puerto Ricans in New York than in Puerto Rico — I’m not sure that number’s still accurate, but I’m pretty sure it is. But I grew up in a very, very Nuyorican neighborhood. The middle school I went to was very Nuyorican, the food I grew up eating was very Nuyorican all the way until high school. My grandparents raised me, and my first language is Spanish. And I grew up eating and I grew up eating a lot of rice and beans and chicken. And I had a very limited diet, actually, growing up. I grew up eating pretty much rice, beans, chicken, avocado, and I didn’t end up trying a lot of other foods until I became a chef. When did your interest in cooking begin? I went to John Dewey High School, which is named after a philosopher who did not believe in competition. So we did not have football, basketball, rugby, we didn’t have any of those programs. So they use the funding, actually, to have programs like cooking, pottery, photography, dance, we had top of the line funding for that. ‘I was suspended over 50 times in high school, but for some reason, I did really well in this cooking class, always. I was always in the cooking class.’ A couple of my friends are in the cooking class. And it wasn’t my first interest! First, I took pottery, and I was very good at it, surprisingly. And I was interested in photography and dance. But I ended up going to the cooking class, and it was a teacher who was Haitian. And I gravitated towards her, and she gravitated towards me. She took me under her wing. She was a bit of a disciplinarian! And she was like Eddie, you better come to class. And I remember almost dropping the class my first year. And she was on me. She was like, I need to see you next year. You’re starting this class again next year, right? So I ended up staying in the cooking class for four years. And as soon as I graduated high school — which I didn’t do too well in, I was suspended over 50 times in high school. But for some reason, I did really well in this cooking class, always. I was always in the cooking class. So when I graduated high school, I didn’t really have my eyes set on college, but I had these four years of culinary experience. So I got a line cook job in Times Square. My first real job was as a line cook in the Times Square Junior’s Cheesecake, which is a landmark in New York. 
So at 17, I was a full time 60 hours a week line cook. So I skipped being a prep cook or a dishwasher. I became a line cook, and then I became a sous chef, and executive chef, and now I’m where I am. Photo courtesy of Chef E When did you become vegan, and what led you to that decision? Veganism started when I was 23, and I was working at one of Gordon Ramsay’s restaurants. And by 23, I’m already five years into the culinary game. I’m working at these fancy places — Ramsay’s restaurant had two Michelin stars at the time, probably in the top 15 restaurants in New York City. But I couldn’t eat there. I didn’t feel comfortable eating there. I couldn’t invite my friends, I couldn’t invite my parents. So I thought, “I need to find something that fits me.” And I’m working all these hours as a chef, and I needed to adopt a healthier lifestyle. So I gave up caffeine, and then I gave up any type of drinking. And I adopted a very healthy lifestyle just to continue working 70, 80, 90 hours a week as a chef. ‘I made diets for people with cancer, people who had just gone through jaw reconstructive surgery, people who need low sodium, people whose medication conflicts with dark leafy greens. These dietitians and nutritionists taught me how to substitute.’ So I became vegan, and I started meditating, and fasting, and adopting a lot of spiritual practices. And that’s where the veganism came from, about when I was 27. I started to almost fully transition to an alkaline vegan diet, but now I’m just vegan. But when I started, I went as deep as alkalinity and fasting and adopting all those practices. Did your relationship to the restaurant industry change when you became vegan? No, not even a little bit. Because by that point, I’m already 10 years into the restaurant game as a chef. And for me, cooking was so much more than just cooking actual product. It was teaching dishwashers and cooks how to follow these recipes, and teaching them the skills to make more money in their career. You know, cooking is a beautiful thing where you can actually teach someone something tangible. I literally can teach you 50 recipes, and now you’re a chef! Cooking is teamwork, in these places. So for me, personally, when I went vegan, it didn’t really change my job per se. Ideally, I would have loved to work at just a vegan spot, of course, but at the end of the day, when I’m cooking that food, it’s not for me. It’s for the customers. When I go and do a wedding for someone, I am a part of their wedding. I want to make their dreams come true, regardless of what I personally feel. As I got older, though, I did start to yearn for being around just vegans and cooking just vegan food, because I felt it was a bit hypocritical to make food that I personally would not eat. Photo courtesy of Chef E Can you tell me about re-launching your private chef business? Does it allow you more freedom to cook food you’re more comfortable cooking? Starting my personal chef business was very liberating, because I have full creative control. However, I have a large amount of gratitude for all those years I spent in restaurants, and for my business degrees that I was able to get through the salaries at those jobs. I needed all of those experiences to be able to start my business, how to form an LLC, how to hire people, follow tax codes, get a business lawyer. How do you develop a menu with a new client, or create meal plans for someone transitioning to a vegan diet? 
Before I started my business, I worked with a nutritionist, and I worked with a dietitian at a hospital. I worked at several hospitals, making diets for lots of different patients: people with cancer, people who had just gone through jaw reconstructive surgery, people who need low sodium, people whose medication conflicts with dark leafy greens. These dietitians and nutritionists taught me how to substitute things in people’s diets. And me, myself, when I went vegan, it was a long transition. I had to eliminate sugar first, to eliminate all of those cravings we create when we’re eating whatever we want. So when I cook for people, the first thing I eliminate is vegetable oil. The second thing I eliminate is sugar, and the things they drink, a lot of times — almost 500 calories a day can come from just the things you drink. So I ask them about that. And then I ask them to go see a doctor to run a quick blood panel test to see if they need any nutrients before I start messing with their diets. I’m making it clear to them that I’m not a medical doctor, and that they should have a primary care physician and follow their health. Then I ask them what they crave, do they have any allergies, do they work out? For example, if someone runs a lot, I need to make sure I’m getting them enough carbs. Whereas if someone has an office job or is more sedentary, I know I can limit their carbs a bit. I try to give them all of the information that I can as well. Another huge part of this is trying to send them encouragement. That’s such a big part of helping someone change their diet. I text them, I call them, to ask them how they’re doing, and I try to encourage them if they’ve fallen off of their diet. I always say that it’s okay, and we can start again tomorrow. And my diet plans are also about getting someone in the habit of introducing new foods. Maybe a client hasn’t eaten bok choy before, or quinoa, or kamut, or beets. So my diet plan is big on introducing clients to things they’re maybe not used to. These are all things I didn’t have until I became a chef. I had a very limited diet before that. I try to give everyone that experience, to show them there are so many vegetables out there. ‘Being a New York City resident almost my entire life, smoothies are an amazing thing to me. That’s my comfort food. I think smoothies are such a powerful instrument.’ And even if they’re not vegan, I don’t shut them out. I cook food first with coconut oil, and I try to eliminate that vegetable oil first, and slowly introduce more vegetables into their diet. The American diet is so limited in the amount of vegetables. I know that cooking is your job and you spend a lot of time experimenting — what’s YOUR comfort dish, what you want to make for yourself/your loved ones when you have the time? Smoothies. I know it’s weird, it’s very weird. But being a New York City resident almost my entire life, smoothies are such an amazing thing to me. You’re running to work, you’re running to the gym, you’re running to a date. Being time efficient in New York is huge. My friends all have two jobs, or a job and a side hustle, or a job and a passion. I try to tell everyone to make smoothies! Throw something in the blender. You’ll be surprised by how much of an improvement this can make. It makes it so easy to incorporate raw foods in your diet, it makes it so easy to stay hydrated. So I push this, even though it’s weird. That’s my comfort food. I think smoothies are such a powerful instrument. What’s your go-to smoothie combo? 
Spinach, definitely with spinach. It used to be kale. I do spinach and I incorporate the Green Vibrance supplement. It includes probiotics, so if you have that in the morning before you eat, you’re already setting up your stomach lining for a good day. Also berries, mangoes, and ginger. I try to stay away from supplements in general, but I always try to add greens. There’s not a lot of farms in Brooklyn, there’s not a lot of farms in the five boroughs. So if you have five minutes to make a meal, you can get a lot of nutrients right there in a smoothie. Photo courtesy of Chef E What advice do you have for new vegans, or someone considering veganism? I would say to throw the word “vegan” away. And think more about your health and what you care about. I love animals. There’s one quote that stays with me, I remember reading it when I was younger, in college. It was from the lead singer of Rage Against the Machine, I think. He said something along the lines of, “If I can live without eating meat, without killing an animal, then there’s no reason for me to eat it,” something like, “then why would I kill an animal?” ‘Vegans are a nice supportive community, where we help each other out and keep each other accountable.’ And I would say stay informed on what you’re eating, try to stay healthy. Oftentimes, when you eat vegan, sometimes it’s not healthy. You can have a vegan mac and cheese with pasta and nuts, and it’s not always healthy. And even if you don’t choose to go vegan, that’s fine, that’s your choice, but try to stay informed on what you’re eating. A lot of times people are not informed on what they’re eating at all, like not at all. If one day you’re vegan and the next day you’re not, don’t beat yourself up. It’s a long process, but eventually you’re not going to crave all of the non-vegan items. And support each other! I like how the vegans are a nice supportive community, where we help each other out and keep each other accountable. What’s up next for you? Stay tuned on my site. I’m about to upload some new recipes. Most of them will be vegan, but some of them will be for transitioning diets. I love the pressure cooker right now — I would encourage everyone to get one. I’m doing a lot of pressure cooker recipes right now, because it’s very easy for anyone who doesn’t have a lot of time on their hands. You can just put the ingredients in and plug it in.
https://medium.com/tenderlymag/chef-edwin-anthony-rodriguez-has-seen-it-all-from-hospital-dietician-to-contestant-on-chopped-1c53697ab6dd
['Casey Walker']
2020-12-03 01:01:05.961000+00:00
['Vegans of Color', 'Vegans Of Color', 'Vegan', 'Food', 'Work']
Power To Our Women!
Power To Our Women! The secret to Silicon Savanna’s future More than most, my life has been disproportionately influenced by women. I was raised by a single mother who had me at 19. She had four sisters, whose presence loomed large over the first couple of decades of my life, shaping my worldview and cultivating a deep connection to the feminine. From my mother I learned what style, professionalism, and resilience when facing incredible odds look like. I had more than a front row seat to witness a woman curve her path through a male-dominated world, which at the time I couldn’t appreciate but today I stand in awe of. Today, I am dad to two amazing 12- and 13-year-old girls who are the most important humans in my life. Our relationship is a barometer for how I am doing on a minute-to-minute basis. I adore being their father and can’t imagine it any other way. Daughters were my destiny. I can’t remember suffering from a male superiority complex. How could I? To me, women were as formidable as anything. In school, some performed better than I did and I performed better than others. They were simply my peers. Life is much simpler when we are young. Twenty years in America, the majority of which was spent in white-male-dominated Silicon Valley, was my induction into adulthood, with all its complexities and contradictions. In 2018, I was compelled to return home to Africa to immerse myself in the economic development challenge. As I have set about identifying the right approach to unlocking tech-driven entrepreneurship in my home city of Nairobi, I have uncovered some interesting insights and secrets, one of which will be central to our plans for building great African companies. Insight #1 Women are much more effective at leading and nurturing complex people-centric systems. Duh! As entities populated by talented individuals with unique needs, startups are the quintessential people-centric systems of our day, requiring leaders with high levels of empathy, self-awareness, and people skills to nurture and drive them forward. Areas in which women are generally more gifted. I created Impact Africa Network as a non-profit startup studio to bridge the gap between the abundant young talent in Africa and the massive innovation opportunity on the continent. Our mission is to ensure young talented Africans can participate in the digital transformation of Africa as creators and owners. How We provide twelve-month Innovation Fellowships to talented college graduates, giving them the opportunity to work on well-vetted ideas with like-minded peers under the guidance of an experienced leadership and global mentor network. From this alchemic process emerge two compelling pipelines that are currently in short supply on the continent: i) World-class innovation leaders ii) Growth-ready early-stage startups Talent Advantage Growing startups require ‘adult’ leadership to move them forward. Early on Facebook famously brought on Sheryl Sandberg for this very reason. Nairobi is blessed with an amazing and abundant talent pool of highly capable younger women in the ‘Goldilocks zone’ of their careers. Just like our planet’s positioning in the habitable zone of the same name, the career Goldilocks Zone is that special stage between 28 and 33, when ambitious professionals have gained enough experience to start seeking opportunities that align with their personal interests, skills, and ambitions. 
By then, most will have been exposed to either grinding corporate culture, uninspired lethargic workplaces, work politics, toxic cultures or amazing workplaces with great cultures. Ergo, one knows enough to be discerning about where they should invest their time going forward. The goal for all our projects at IAN is to achieve market traction, at which point we spin them out as independent operating entities. This is where we see our intrepid, emerging female business superheroes come into the picture. We bring them on at this stage to do a Sheryl Sandberg, with no need for the misguided ‘lean in’ manifesto, providing the experience required to drive these young entities on to their next phase of growth. For a particular type of high-performing young professional who hungers for autonomy, responsibility, and opportunity, this is a match made in strategy heaven. I have had many conversations with women in this demographic in Nairobi and it is apparent that they would make ideal candidates to lead our portfolio of emerging startups. We recently hosted one of these superwomen, who had made the transition from the corporate world to the startup universe, on our podcast to share her experience. Here are some more amazing podcast guests if interested. I have always said it takes a village to develop startups into great businesses. I am firmly convinced that Nairobi has the ingredients necessary for manifesting startups into scale-ups, a belief that will be borne out in the next 10 years. At Impact Africa Network we are placing our bet on female talent to be instrumental in helping shape and manifest our 10.10.10 plan. Onwards and upwards! The above piece was originally posted in June 2019; below is a progress update — — — Operations & Growth Manager, Jenga School - July 26th 2020 In June of 2020 Impact Africa Network launched Jenga School, our STEM Professional Talent development project with a focus on Data Science and AI. We are thrilled to announce Esther Mumbi will be joining Jenga as Operations and Growth Manager. Esther has a BSc with a concentration in Human Resources from JKUAT. She brings 10 years’ experience with stints at ABSA Bank and also as an entrepreneur running her own business. She will be one of our first Intrepid Female Business Leaders proving our above thesis. Make An Impact Impact Africa Network is a non-profit entity. If activating young talent in Africa is a cause that resonates with you and you would like to make an impact, you can join our micro-donations support program. It works just like Netflix! For as little as $30 a month you can enable us to continue doing this important work. We are seeking 1000 champions willing to support us with $30 a month. Together we can create the change we want to see.
https://medium.com/impact-africa-network/power-to-our-women-a8b065b48a4d
['Mark Karake']
2020-08-06 03:49:14.542000+00:00
['Innovation', 'Women In Tech', 'Women', 'Startup', 'Africa']
Why You Should Binge-Watch Your Favorite Movies This Christmas
If there’s one thing we’ve learned in 2020, it’s that human beings can forge connections on screens to temporarily replace real face-to-face human interaction. And I’m not just talking about daily Zoom calls with your colleagues. Spending time with fictional characters on television or in movies can mimic the benefits of seeing real-world friends or loved ones, reports science writer Markham Heid in Elemental — especially if you’re rewatching your favorite shows or movies. Medium’s Sam Zabell also explains why binge-watching her favorite Christmas movies brings comfort: human brains are conditioned to feel good when we watch our favorite, cheesy, sometimes trashy Christmas movies: “A neutral stimulus (in this case, a holiday movie) is paired with an unconditioned stimulus (all of the good memories you associate with Christmas) and creates an emotional response (warm fuzzy feelings).” When Sam published her piece in December of 2019, she didn’t know what 2020 would bring. But recognizing that you can comfort yourself by watching your favorite movies or shows this holiday season might be more important than ever. Happy Holidays!
https://elemental.medium.com/why-you-should-binge-watch-your-favorite-movies-this-christmas-42c7b49ceb41
['Felix Gussone']
2020-12-24 06:33:33.864000+00:00
['Film', 'Holidays', 'Life', 'Mental Health']
The Threat Of Freedom And The Forgotten Value Of Being Powerless
“Here I stand, I can do no other, so help me God.” — Martin Luther Newsflash: sometimes, I can’t get what I want. Let’s envision getting what I want as a two-step process: 1) Discover what the cause-effect relationships that govern reality are. 2) Figure out how to set the right cause-effect relationships in motion. At step one, I can’t attain my objectives due to a lack of knowledge. If I don’t understand how reality works, I can’t identify what it is that I need to do to reach my goals. If I set out to build a house, then I need to comprehend something about gravity to make an adequate design. At step two, I can’t reach my goals because I’m incapable of executing the tasks that need to be done. I might know what is required, but lack the skills to successfully perform the necessary actions. Although I understand gravity, I am not able to build a spaceship (or a house, for that matter). The solution? Science. All hail science At step one, science fills knowledge gaps: it finds out how reality functions so that I can recognize how to manipulate it to get what I want. Thanks to such knowledge, for example, we now know that praying for health is not super effective and that vaccinating is a better way to avoid sickness. At step two, science frees us from technological obstacles: it designs appliances that allow me to manipulate reality in the way that fulfilling my intention requires. These days, if I would like to cross the ocean by boat, I am no longer at the mercy of weather gods but can always turn on the engine. Science brings the world under our control: we can do more than ever. Science, therefore, liberates us. Empowerment = freedom..? The inference that science increases freedom builds on a specific idea about human freedom. On this view, human beings are free when they can get what they want. Thanks to science, both our capability to discern the cause-effect relationships that underlie reality and our capability to influence these relationships are increasing. Our environment no longer consists of inscrutable developments — Getting sick? The Gods must be angry — but of processes that we can understand. Consequently, we have more power than ever to ensure that reality unfolds as we want it to. Science frees us, because it improves our ability to get what we want. Our bodies are just objects “We can construct a railway across the Sahara, we can build the Eiffel Tower and talk directly with New York, but we surely cannot improve man. No, we can! Man must look at himself and see himself as a raw material, or at best as a semi-manufactured product, and say: ‘At last, my dear homo sapiens, I will work on you’.” –Leon Trotsky, Sochineniia (1925) For science, the human body is just like other objects that the universe houses: simply something we can influence to get what we want — something that we can manipulate to reach our goals. In that spirit, science seeks to unravel how our machines work. Once that project is complete, we will be able to change our physical bodies in accordance with our heart’s desire. We will no longer be forced to take our corporeal characteristics for granted: nose size, voice sound, skin color — all those personal aspects will be something we can choose. Once we know how our bodies work, we can treat them like we treat our cars: keep the parts we’re pleased with, and replace or repair the components that we would like to be different. Thanks to science, our physique will be completely malleable. 
This is liberating, because it increases our ability to get what we want. There will be nothing about the human body which must be taken as a given. Next step: our brains. Disenchanting the human mind Science treats human beings as a part of the natural world; it tells us how we work. Once we know how we work, we can develop technologies of self-transformation: ways of making our bodies and our minds more pleasing to ourselves. After all, our brains determine how our minds work. Therefore, if we control our brains, we control our minds. Once we understand the neurological basis of desire, we can manipulate its causes in the brain: a science of the mind yields a technology of the mind. That would allow us to change what we want in the first place. If we cannot get what we want, we can simply change some chemical stuff — change the secretion of these or those neurotransmitters or whatever — thereby changing our desires and freeing ourselves from this unattainable need that we were so silly to harbor. This is not some far-fetched, futuristic fantasy. To give one present-day example: anti-depressants such as Prozac influence the level of serotonin in our brain. This affects all sorts of emotions, reactions and attitudes. By taking it, people can deal with, for example, their low self-esteem or their desire to remain with abusive partners not by satisfying it, but by destroying it. Such desires need no longer be taken as a given: we can simply decide not to have them. What a wonderful illustration of the liberating power of science! Surely psychiatric drugs can free us from unwanted mental states just as the discovery of antibiotics once liberated us from the terror of tuberculosis? Choice overload “I believe that much of the distinctive value of the natural world and of things related to it, depends on their relative imperviousness to rational control.” — Stephen Darwall There’s one small problem. Instead of contributing to a better life, this newly won freedom might remove the meaning from it. The philosopher David Owens writes: “Science invites us to exercise control over our lives by finding out what we want, working out how to get it and then acting accordingly. But now we are being told that we shouldn’t take our desires as given, that we can act to change them as well. But if we change what we want, what basis is left for choice or decision?” Think about this question for a minute. When the technology of the mind is complete and we can alter our desires as we like, how shall we judge what to value in life? After all, when the science of our brains is complete, we could control which desires, needs and wants we have to begin with. What seemed like an expansion of self-control threatens to rob us of any grounds for making a choice. Without unchosen desires, when we don’t need to take any value judgment as given, what is left as a basis for deciding how to live and what to do? Here’s Owens again: “If man is just a bag of chemicals, once we know what these chemicals are, we can re-mix them at will. And by re-mixing them at will we can give ourselves whatever character we like. But if we can choose a character at random, our current needs and interests lose their authority as grounds for taking any decision. And what other grounds for taking decisions are there?” My choice That I — deep inside — harbor a bundle of personality traits, needs, desires and values which I cannot willingly alter makes my decisions meaningful as my decisions. 
Too much control removes that personal aspect from our decisions. It seems that the ability to make a meaningful choice — a choice that is distinctively mine — requires that this choice is subject to some constraints which I cannot influence. This unchosen core is me, and without such an unchosen base there could be no decision being made by me at all. The very possibility of personal choice requires that there are unchosen restrictions on this choice. These unchosen aspects of me make me uniquely me. And, perhaps contrary to the narrative of science as the big liberator, there is nothing regrettable about finding oneself, in the ultimate analysis, left with a fundamental core against which one is powerless. Without such a constitutive essence, the rationale for categorizing life’s most important choices as my personal choices evaporates. We’ll all be every-one — we’ll all be no-one.
https://medium.com/the-understanding-project/powerlessness-5fdeb607e831
['Maarten Van Doorn']
2019-06-17 18:58:44.713000+00:00
['Philosophy', 'History', 'Self Improvement', 'Life', 'Psychology']
Why I Walked Away From Teaching, Confessions of an Introvert
I loved teaching. I loved standing up in front of classes of often-intimidating children and testing myself in some of the most challenging circumstances. I loved that look on a student’s face when you finally got through to them. When my subject was no longer boring. When that lesson was more than just ‘alright sir’. I loved the different personalities and characteristics. I loved not being stuck behind a desk as much as when I had an office job. I loved getting out of the classroom on field trips. If I’m honest, I loved being the ultimate number one authority in the room. That surprisingly, most of the time, they did what I told them to do. I knew fairly quickly into my third year of teaching I couldn’t do it anymore. I no longer teach. Here’s why. Image by Jan Vašek from Pixabay Teaching is a challenging and demanding career. It’s one of many criminally underfunded and yes, underpaid, professions. Still, I had definitely faced higher levels of stress in previous careers. I had worked 24/7 shift patterns, given presentations to rooms full of 100 people, met impossible deadlines, been unfairly overlooked for promotion, whistle-blown, had to discreetly tell a member of my team to improve their personal hygiene. I could go on. So what was it about teaching which made me walk away? Could it be because I am naturally an introvert? An introvert is often thought of as a quiet, reserved, and thoughtful individual. They don’t seek out special attention or social engagements, as these events can leave introverts feeling exhausted and drained. Healthline.com Too much noise and constant interactions can overwhelm me. Being around large groups of people for too long can be draining. Possibly not an ideal fit for the chaos of the school environment. Image by athree23 from Pixabay It’s more complex than that. I don’t think you have to be extroverted to be a teacher, but in many schools it certainly helps. It also fails to explain the thousands of introverted teachers out there getting on just fine. I actually enjoy public speaking. Yes, I still get nervous beforehand, but I enjoy it, it makes me feel alive. Being an introverted teacher could also have its benefits. Such as being able to better read quieter students and, using enhanced observation skills, build a clearer picture of what is going on in the room. Possibly my introversion made me a better listener. Undoubtedly it’s useful for students to have teachers with as many different traits and characteristics as they do. Photo by Alexander Catedral on Reshot I am still, however, an introvert. Other people sap my energy. As soon as you lose the energy battle in the classroom, your level of control suffers, the standard of teaching slips. Teaching requires acting skills — playing different characters for different classes. As a teacher, I could assume roles different from my natural introverted self. It’s the aspects of teaching you can’t control which get on top of you. The icebreaker-heavy constant professional development sessions for example. Meetings with disruptive students and their parents/carers. Phone calls home. It’s relentless, it never stops.
https://medium.com/age-of-awareness/why-i-walked-away-from-teaching-confessions-of-an-introvert-d8453014fac
['N.J. Edwards']
2020-03-17 19:45:23.625000+00:00
['Work', 'Life Lessons', 'Mental Health', 'Teaching', 'Education']
Leap Into These Interesting Leap Year Facts
Leap Into These Interesting Leap Year Facts By Adam Barrett “Today is an ephemeral ghost” — Vera Nazarian “Might as well jump” — “Diamond” David Lee Roth If you spend enough time talking to people, eventually, somebody is going to mention how 2020 feels like it’s moving at a snail’s pace. January crawled, folks are predicting March will come in like a lion and leave… also like a lion, but February? Well, this month seems to be zipping along, which is especially strange, considering it’s technically longer than it’s been since 2016. That’s right, 2020 is a leap year — something that only happens once every four (or so) spins around the sun. Since this isn’t an every day (or even every year) occurrence, it’s a great chance to share a few facts about leap year with you! First Things First — What is a Leap Year Anyway? A leap year is an event that occurs on the Gregorian calendar (aka, the calendar on your phone, your computer, and your wall). It involves adding an additional day (known as leap day) to February, bringing it from 28 days to 29. This isn’t just some fun way to spice up the calendar every few years — a leap year actually serves an essential purpose. There are 365 days in the regular Gregorian calendar year, but it takes 365.24 days (365 days, five hours, 48 minutes, and 45 seconds to be precise) for Earth to complete a solar orbit (known as a tropical year). This means each 365-day calendar year falls about six hours short of a full tropical year. While this may not seem like a big difference, scope creep is real in all aspects of life — even astronomical ones. Without adding an extra day every four years to make up this difference, the calendar year would begin drifting out of sync with the tropical year. The seasons would shift, April showers would no longer bring May flowers, and — eventually — the Northern Hemisphere would be gearing up for winter in the middle of July. In short… When is a Leap Year Not a Leap Year? Okay, so, every four years, without fail, we have a leap year in the calendar. Easy peasy, right? Wrong! You’re actually thinking of how things worked with the Julian calendar (don’t you hate when that happens?). The Julian calendar, named after Roman general Julius Caesar, introduced leap days every four years to better align the calendar year with the tropical year. But, eventually, this extra day was found to be too much of a course correction. As a result, we moved from the Julian calendar to the Gregorian calendar, introduced by Pope Gregory XIII. The major difference? Instead of a leap day every four years, one occurs every year that is divisible by four, except for those divisible by 100 and not divisible by 400. So, the year 2000? Leap year. The year 1600? Leap year. 1700, 1800, and 1900? Not leap years. Feeling confused? Then you’re going to love this — on occasion, scientists have also been known to add “leap seconds” into the calendar. In fact, since 1972, 27 extra seconds have been added to keep clock time in line with the Earth’s gradually slowing rotation. Luckily, we don’t make the calendars, we just follow them! Leap Year Birthdays About 0.07% of the world’s population — roughly 4.8 million people — celebrate their birthdays on February 29. A few of them you may have even heard of! Ja Rule was born February 29, 1976 (happy 11th birthday, Ja Rule), the same day as famous character actor Dennis Farina (1944), Canadian hockey player Cam Ward (1984), and singer/actress Dinah Shore (1916). 
And if you’re a true crime fanatic, it may keep you up at night to know “The Night Stalker” Richard Ramirez (1960) and Aileen Wuornos (1956) were also born on February 29. So, what are the odds of being born on leap day? Honestly, it’s not as much of a long shot as you might think — one in 1461. That said, according to Guinness World Records, only one family has ever had three consecutive generations born on leap day. The line begins with Peter Anthony Keogh, who was born on February 29, 1940. His son, Peter Eric, followed on leap day in 1964. Then, granddaughter Bethany Wealth arrived in 1996. Perhaps even more interestingly, a woman named Karin Henriksen gave birth to children on three consecutive leap days — a daughter in 1960, followed by sons in 1964 and 1968. Coincidences? The most impressive attempts at family planning ever? Who can say? Leap Year Events In addition to births, a few interesting things have happened throughout the (leap) years on February 29. In 1692, the first arrest warrants for the Salem witch trials were issued. In 1940, Hattie McDaniel became the first African American to win an Academy Award for her role in Gone with the Wind. In 1980, 51-year-old hockey legend Gordie Howe scored his 800th career goal as a member of the Hartford Whalers. And in France, La Bougie du Sapeur has been published every leap day since 1980 — making it the world’s least-frequently published newspaper. One event that folks in Greece try to avoid during a leap year is getting married. That’s because they believe tying the knot any time during a leap year — not just on February 29 — is bad luck. The same can be said for Italians, who have an extremely light-hearted expression summing up their leap-year feelings, “Anno bisesto, anno funesto” (leap year, doom year). A Few Final Fast Facts For Leap Year Here are a few more leap year facts to think about. Let’s start with the bad news: if you’re a salaried employee, you’re probably not getting paid for your extra day. But it could be worse: if you were in jail over a leap year, you’d have to serve an additional day behind bars. On the flip side, you’ll have a free night’s stay in your apartment on a leap year — renters aren’t charged for the extra calendar day. And if you find yourself traveling on February 29, we can’t think of a better spot to be than Anthony, Texas — the Leap Capital of the World! Every leap day, this West Texas town hosts an officially sponsored leap year birthday parade and festival. And there you have it, everything you could ever possibly need to know about leap year! Now, if you’re looking for something to do on your extra day (a Saturday, no less), we can help. Get caught up on our other great re:VERB articles or listen to the latest episode of the re:VERB Podcast. They’re sure to have you jumpi— er, leaping for joy! Adam is a Digital Copywriter and Content Strategist with VERB Interactive — a leader in digital marketing, specializing in solutions for the travel and hospitality industry. Find out more at www.verbinteractive.com.
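The divisibility rule above is compact enough to express directly in code. The sketch below is illustrative only: the function name and the sample years are ours, not the article's, but the logic is exactly the Gregorian rule described earlier (divisible by 4, unless divisible by 100 and not by 400), plus the arithmetic behind the "one in 1461" birthday odds.

```typescript
// Gregorian leap-year rule: divisible by 4, unless divisible by 100
// and not by 400.
function isLeapYear(year: number): boolean {
  return year % 4 === 0 && (year % 100 !== 0 || year % 400 === 0);
}

// The years cited above: 1600 and 2000 qualify; 1700, 1800, and 1900 do not.
console.log([1600, 1700, 1800, 1900, 2000, 2020].map(isLeapYear));
// -> [ true, false, false, false, true, true ]

// The "one in 1461" birthday odds: a typical four-year block holds
// 365 * 3 + 366 = 1461 days, exactly one of which is February 29.
const daysInFourYears = 3 * 365 + 366; // 1461
console.log(`Odds of a leap-day birthday: 1 in ${daysInFourYears}`);
```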
https://medium.com/re-verb/leap-into-these-interesting-leap-year-facts-97617752f676
['Verb Interactive']
2020-02-28 14:04:54.100000+00:00
['Facts', 'Fun', 'Marketing', 'Interesting', 'Digital Marketing']
The Refiner’s Fire
Photo via flickr.com The other day my friend was going through a thing with one of their parents. It was just the usual and fairly regular parent-child dynamic. It featured the familiar refrains of “I have a better idea”, “you could do it this way”, and “what the hell is wrong with you” circulating in the air. It also contained the pretty typical mix of recrimination, guilt, frustration, years of history and every single button being pushed by the ones who installed them (i.e. this person’s parent). Needless to say, my friend was upset, both at themselves for exposing themselves to such familiar criticism and, then, for getting upset by the criticism. They were also upset at their parent for not seeing them as the adult that they are. An adult who is capable of making choices that, while imperfect, and not those that their parent would make, are still very much okay decisions. My friend’s upset carried over from one day to the next and as we were discussing it I found myself missing my own father’s recriminations, judgments and criticisms while simultaneously being reminded to be grateful for my mother’s recriminations, judgments and criticisms which remain ongoing. As my friend and I were talking, I said to them, “enjoy it all while you have it. My Dad could be super harsh and angry and weird. His last — I think? — words to me were ‘you talk too much’ and even though I hated it then I’d kill to have him criticize me now. I miss it! LOL! So that’s the approach I’m trying to take now. Just take it all in gently, accept it, don’t get amped and let it slide off because someday it’ll, oddly enough, be missed.” Author’s note: my Dad was right as I do in fact talk too much. Another author’s note: this is in fact how I am now trying to live my life and engage in my relationships. Soften, accept where I am and where other people are too. Especially other people because if I don’t like where someone is or how they are being there, so what? None of my business. My business is to be loving and supportive, keep a safe distance where needed and to be grateful I’m not walking in their — at least as they seem to me — really clunky and really uncomfortable shoes. It is not my job to dissect or separate one part of a person from another part of that person. My job is to accept their whole being and, when framed in that light, the gig actually seems pretty easy. Don’t get me wrong, I wish my Dad had been… a little softer, more forgiving and less critical of himself and others. I also wish that not nearly every one of our conversations had included the phrase “I have a better idea.” But, our conversations did often include that much-maligned phrase and my Dad was who he was: exacting, tough, angry, complicated, critical, sensitive, free-spirited, curious and, at the bottom of it all, very, very loving especially toward me. He was the one who taught me to hear my own voice the loudest and to not get swayed by the immaterial opinions of others. Yes, consider things from those who matter, but, at the end of the day, make your own choices and then be prepared to live with those choices. My Dad adulted me even when I was a kid. He was not explicit with his lessons and didn’t sit me down for the ‘talks’ I longed for as Danny Tanner might have with his kids on “Full House”. But, as my friend Anne-Marie reminded me: Lou was Lou. He wasn’t Danny Tanner, and, while my Dad wasn’t a subtle man overall, his teaching style was subtle. So while the lessons weren’t explicit, neither were the lectures. 
He wouldn’t ever sit me down for a shaming but he’d Dad me and criticize and correct me in more casual conversation. Pretty regular casual conversations but, for right now, I digress. The story that best exemplifies my Dad’s ‘teaching style’ involves a communion dress. In my family’s culture communions were a big deal. Even though I attended communion classes I kind of lost focus with what was going on (probably thinking about “Full House” or something). So, while I was going through the ritual I didn’t really know the reason. Had I known the reason I probably would have taken a pass but, again, I digress. All I knew was that there was a costume. Fashion was the light at the end of this First Communion tunnel and I was there for it. What can I say, I have always been into the lewks. Anyway, as we all know any good lewk begins with a mood board and mine centred on my Aunt Sue who had just married my Uncle Al the previous August and, as far as I was concerned, had exemplified the height of 80’s fashion. Full skirt, long train, head piece, sparkles aplenty. I was in love and knew my communion costume was going to be modelled after that dress. We went shopping for my communion dress / costume on a bright but cold Saturday and my Mom and I went into the shop while my Dad parked the car. By the time he entered, a battle was in full swing. Like any good miniature bride marrying a stranger, I had found the dress. It was Aunt Sue’s wedding dress, headpiece and all, in miniature form. While I was in love, my Mother was in horror. The dress was great for Sue and for her occasion, less great for me and for mine. My Dad asked me if I liked the dress and I told him yes. He asked me (seeing the hindsight of twenty years’ worth of writing on the wall) if I looked back on the pictures and still didn’t like it, would that be okay, and I answered yes. My Mom continued to protest and my Dad asked her whose occasion it was and she, obviously, answered that it was mine. Bottom line: my costume party, my choice. I was (thanks Mom!) dissuaded from the train but I still walked out of that store as happy as an eight-year-old with a fabulous costume could be. My own voice, thanks to my Dad, had been allowed to come through and so I didn’t just get a dress, I also got a little more trust in myself. I also got to learn how to look back on a wide range of poor choices I have made and live with each of them. That costume moment was one when I could accept my Dad; a part of him that was pleasing to me. In the moments when he wasn’t behaving perfectly, there was not quite so much acceptance. And what do you know? In those moments of non-acceptance, I became what I feared and what I didn’t like about him and sought to avoid: harsh, exacting, unaccepting and unyielding. I can’t say that I understand my father better now than I did when he was alive, but, I can say, I accept him now in a way I didn’t before and, for me, that will always lead to a pang of regret in my heart. Why couldn’t I accept him fully and on his own terms when he was alive? Not all of his behaviours certainly but all of him. Why couldn’t I just let more things slide off of my back? The simple answer is simply that I was afraid and I wasn’t very evolved. A common but toxic combination. The truth is, I was evolving and refining then and I am refining and evolving now. I am (and I really hope Michelle Obama is proud) in an ongoing state of becoming. I am also learning and this is one powerful lesson I have learned. 
While I didn’t want my father to die in order for me to learn that lesson, he did die. I didn’t want him to suffer even though I know he was suffering and I can’t change any of what occurred. I couldn’t change it then and I can’t change it now but I can let it instruct me. I can let it shape the qualities my soul needs and wants to experience more fully. It doesn’t all have to be for naught. None of it has to go to waste and that’s true of anything. And so, while I may never fully know the answers to those questions, I am starting to think it was a combination of factors, the most salient being my own inability to separate person from behaviour. I didn’t like a lot of my father’s behaviour then and I probably wouldn’t like it now. There wasn’t always a lot to like. In fact, some of what he did was hurtful and unacceptable then and it probably still would be now. The difference now though is me and my ability to assert myself more effectively, to set boundaries where needed and to accept myself, and other people, more wholly and on each of our respective terms; wherever we are in our respective refinements. Previously, I would shut down instead of open up. Rather than let my Dad be who he needed to be, I’d either stick around, take whatever he threw out or try and push back to change it. The flipside was I would just walk away and be angry. In fact, I’d be stewing, getting harder and harder in my own heart because I was resisting rather than accepting. I couldn’t see that while I didn’t have to stick around and accept certain behaviour, I still ought to have accepted all of him. I didn’t need to be around him as he behaved churlishly, but, I could no more separate that part of my Dad from him than I could the things in him that I could easily accept. Things including his free spirit, his strong will, his curiosity and his desire to learn; his sense of adventure, his sense of generosity and his kindness. I am not one of those people who can saint someone once they’re gone. I’m not even someone who looks back really in nostalgia or an attempt at revisionist history. I can still see who my Dad was — the good and the bad. The difference being now though, I can see him and accept him in his totality. There are no longer segments and fragmentation. I am no longer trying to surgically carve out the pieces I can’t stand while only preserving that which I love. I see him now. All of him and I just wish I had the insight to see it all sooner. But time is a trickster and a healer and, while it hopefully doesn’t change the clarity and the accuracy of what we see, it can certainly soften a lot of things, including the harder edges we put on ourselves, on each other and on the world. Time is the ultimate refiner’s fire. Melting away and softening that which we never needed and that which we certainly no longer need to keep. When my Dad initially became terminally ill, I felt the oddest physical sensation. For weeks on end, I felt like I was burning. Literally, physically on fire. I knew very clearly that I wasn’t set ablaze but I felt it. This burning coursing throughout my body. I even told a friend, “I feel like I’m going through a fire and waiting for, and asking to be, delivered from something I can’t be delivered from.” 
Photo copyright Christine Quaglia After a time the physical sensation of burning passed and as my Dad’s illness progressed toward death, I never thought much about that sensation again, until many months later, when I received a note from a friend wishing me restoration and refinement for the upcoming New Year. This friend talked about the refiner’s fire and how, while painful, it is also purifying and it brought me back to that sensation that I had felt so acutely eighteen months before. The burning. The fire. The need to be delivered from something that I believed I could not be. At the time, I felt as though the fire would consume me. That it would swallow me whole and burn me up into nothing. And the fire did burn me up but not into nothing. The fire collided with the gift of my “one wild and precious life” and it refined that gift. Made it something new. I was delivered after all. I see that now and I see that my Dad was delivered too. My Dad, who had resisted his illness and prognosis, raged and railed (as he was wont to do) against it and walled himself off from those who loved and wanted to help him. And, yet, at the end of it all, he died with more peace and grace and love than any of us would have dared to think possible. At his sickest, he was going through a fire, he was being purified and refined; travelling to places within and beyond himself that those around him could not touch. We all go through fires and we may do so willingly or unwillingly; with grace or without but we will all go through them and the fire will always do its work and refinement. Initially, my Dad went through his refining fire unwillingly, but, as is true for us all, there were forces beyond his control alchemizing and changing him. Elevating and increasing his being even as his physical body seemed to diminish and reduce. It is no accident his favourite book was “The Alchemist.” There was pain and suffering and decay, but, also, purification and refinement along the path. That occurs along all of our paths whether or not there is a terminal illness. Life, and each part of it, is all a part of the refiner’s fire. All part of the refinement itself. And the more we each surrender to our refiner’s fires, the more we each become purified and take on the exact shape we are supposed to take on right now. I wish I had known that sooner than the hour of my father’s death. I wish I had seen that when I felt he was critical of me or behaving in ways I didn’t like. I wish I had seen with greater clarity that he was in a process of refinement and I was too. We all are always. I see that now. I see that life is an ongoing refiner’s fire and that acceptance of whole beings puts us even the tiniest bit closer to knowing unconditional love. Sometimes the refinement from pain is really obvious, other times the refinement from joy is really obvious and, other times still, the refinement is happening without our even knowing it. We are all being alchemized all the time and awareness of that helps us to accept the refining rather than resist it. Acceptance also helps us to accept the refinement of others as well as our own. I read a quote today that says, “[y]our weirdness will make you stronger. Your dark side will keep you whole. Your vulnerability will connect to the rest of our suffering world. Your creativity will set you free. There’s nothing wrong.” And that is true for me, for my Dad, for all of us as we continue to be refined. 
Each of our stories, experiences and perceptions is unique, and while we are not obligated to always agree wholly with each other, we ought to try and accept each other wholly; for all of our sakes. Everything and everyone changes and everything and everyone, eventually, comes to an end just like this epically great fall that we have just experienced. It came to an end and it is now refining into winter. There will be another epically great fall and a not so great one and everything in between. We can learn to accept all of the seasons and all of our own seasons and those of each other as we each get purified, refined and, in one form or fashion, emerge new from each and every fire.
https://medium.com/change-your-mind/the-refiners-fire-80381d90c24d
['Christine Quaglia']
2020-11-18 12:54:13.967000+00:00
['Parenting', 'Growth', 'Self Improvement', 'Development', 'Self']
Never give it for granted
Never give it for granted And how getting back into writing became a steep challenge for me Over the last two and a half years, writing has become something I got more serious about. From something that I knew I enjoyed, to something I realised I loved and wanted to do regularly, even for a living, or as part of it. Because of this, I committed to it. I started doing it more and more often, to the point of turning it into a daily practice. After that, I reached my highest peak so far; not only did I start writing longer and more elaborate pieces, but it also became easier for me to do, and I got confident and comfortable with the process while enjoying it. I developed a style and became able to recognise what would work and what wouldn’t, what was right or wrong, and when an idea was complete. I learned how to beat the blocks. I learned to make peace with the fact that not every piece will be the best I will ever write, and that letting go and hitting publish was as much a part of the process as coming up with ideas and typing the words on my computer. That it’s better done than perfect. That publishing something is better than editing forever, but also, that editing is not only as important as writing but, at times, as hard or harder. I also learned that sometimes a long edit is not about perfectionism, not knowing how to progress or just avoidance, but my inner voice telling me there’s something else to a piece. I learned to listen to my gut when something was right, wrong or not quite there yet. I learned that after publishing, I would always go back and find ways to make a piece better. I also came to feel proud, afterwards, about how well some of those stories or ideas were able to pass the test of time, become more relevant after a while or validate my thinking towards the future. So? How did I find myself in this position? When and how did it become so difficult? Why did I stop? These are some of the questions I keep asking myself every time I realise I have not been writing. Every time I realise the year has passed and I have not managed to create even a fraction of what I’ve done in the past, but more importantly, that the progress, mastery and skills I managed to gain feel like they have abandoned me. That it feels difficult, almost like beginning again. That ideas do not seem to emerge with the same effervescence and frequency as before, and at times, when they do, it’s like I don’t know how to grasp them, knead them and turn them into words. But here I am, trying to get back the only way I know how. This is not about comparing myself with the things I’ve done in the past, what I’ve done, what I achieved and how much I’ve written; instead, it is a reminder that I’ve done it. That, although the only thing I seem to remember is how much I did it, I’m pretty sure there’s a memory somewhere inside me, about how hard it was when I first started, many years ago, with no aspirations or pretensions, and if that memory does not seem to exist, then it might be because I never thought about writing that way. It was just about the fun, the enjoyment and the realisation not only that I could be good at it and the joy it brought me, but, first and foremost, about sharing ideas and putting them out there into the world. Never take anything for granted, but also, never let it go if you really want it, because the only way to get it or keep it is to continue doing it. So here I go again.
https://medium.com/thoughts-on-the-go-journal/never-give-it-for-granted-aa5fb7e0e3d
['Joseph Emmi']
2019-09-09 23:45:35.189000+00:00
['Personal', 'Personal Growth', 'Writing', 'Personal Development', 'Journal']
The Modern Content Paradigm
By: Sam Bobo In today’s digital era, websites are the primary gateway to organizations for customers, partners, and investors. Organizations rely specifically on content to drive access to core web applications, to disseminate important information to prospective customers (e.g white papers, infographics, value statements, and feature overviews), or to even outline key documentation for Developers. Modern content is required to be intuitive, impactful, and engaging. This content is ever-changing as organizations grow and evolve with their messaging and branding. Content must reach audiences in all locations: via desktop, mobile, and other internet-connected devices. At a high level, what I have described above sounds similar to a product. However, most organizations fail to treat their content as a product. Contentful is working to transform how modern websites are built by decoupling website creation and content population, a method called “Headless CMS.” This decoupling happens when content is abstracted out into a structured, yet customizable format, known as the Content Model. Once a content model is built, it can be translated into a portable, scalable, and changeable format — i.e.: Javascript Object Notation (JSON) — that provides omnichannel deployment via RESTful APIs, allowing for the seamless creation, reading, updating, and deletion of content. Once the content is decoupled and translated into JSON, the information becomes reusable, with the ability to be repurposed and utilized among applications and sites. Through this new paradigm, Contentful is transforming the way modern product teams operate with respect to their individual roles, transforming the way content is integrated into the development process, and transforming the way organizations think about content. Transforming Team Roles In the Agile product development paradigm (and more specifically in Extreme Programming), one of the core practices is one known as the “Whole Team.” The Whole Team can be defined as a cross-functional group of people who contain the necessary skills and expertise to help a product come to fruition. Traditionally, this includes a Product Manager, many Engineers, a Designer, and QA. Outside of the core team includes stakeholders such as the Business Owner, Marketing, Sales, Legal, etc. When these entities come together, the core team contains the skillsets needed to operate independently while allowing for quick iterations and development cycles. Typically, team members work with their individual tools: Product Managers create requirements on Agile planning tools, Developers write code using integrated development environments (IDEs) and source code management tools, and marketers draft copy on word processing tools. What would happen, if Product Managers, Engineers, and Marketers could work on the same page (or need I say, tool!)? Contentful takes the idea of “Whole Team” a step further by unifying the “whole” product team with their unique Content as a Service (CaaS) offering, which provides both a programmatic and visual toolset for creating and managing content and content models. For Product Managers, Contentful provides a point-and-click, drag-and-drop interface through the Contentful Web App that displays an editable visual representation of the content model and contains tools to empower Product Managers to collaborate with Engineers to define new models. 
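Before moving on to the other roles, it helps to see what the serialized format mentioned above can look like. Here is a simplified, illustrative sketch of a single delivered entry; the content type and field names (heroSection, title, ctaLabel and so on) are invented for this example rather than taken from a real space, and the sys block is abbreviated:

{
  "sys": {
    "id": "example-entry-id",
    "contentType": { "sys": { "id": "heroSection" } }
  },
  "fields": {
    "title": "Build once, publish everywhere",
    "subtitle": "Content modelled independently of any page layout",
    "ctaLabel": "Get started",
    "ctaUrl": "https://example.com/signup"
  }
}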
For Marketers, Contentful provides the ability to create, revise, and distribute content from preview to published easily on the Content Hub through a powerful markdown editor. Finally, these capabilities are bundled together in an intuitive series of REST APIs for Developers to tap into, via the Content Management API. Collectively, these tools connect to the same content model and content, allowing the product team to iterate even quicker! Contentful’s strategic series of offerings and CaaS platform enables a “whole” product team to rapidly model, draft, publish, and iterate on content that powers pages that serve users. The beauty of Contentful is that “pages” are no longer created, rather, content is grouped into models that make sense from a user experience perspective versus considering what would make sense displayed on the screen (the UI). These content models can then be utilized when building your user interface (UI). Not only does the content make more sense from an administrative side, but the content also avoids duplication of work as one model can be used across multiple pages. Furthermore, as new requirements arise, teams can operate in an Agile fashion to quickly and iteratively make modifications in response to business needs. Transforming Deployment Continuous Integration and Continuous Delivery (“CI/CD”) are best practices observed by Agile product teams. In CI/CD, when new requirements are worked on by the Engineering team, a new branch is created to write tests and complete the functionality independent from the production code. When the requirement has been reviewed by QA and accepted according to the acceptance criteria, the code is pushed into the production “master” code and run through a pipeline that consists of tests, security checks, and more which should take no more than 10-minutes maximum. Builds can occur a few times a week or even a few times a day, depending on the production velocity of the team, allowing for small iterative releases. Since content and its associated model are represented as a JSON object inside of code, in accordance to the “headless” CMS paradigm, it can be treated similarly to production-ready code: developed, tested, and deployed through the same CI/CD pipeline as all other code. Contentful provides Developers with their Content Management and Content Delivery APIs to ensure the seamless integration of content with production code. Additionally, Developers can leverage webhooks to automatically trigger a build when new data is published, customized down to the environment. While a development pipeline is a technical concept, its benefits are massive to businesses. When code is pushed to production, it undergoes a series of processes that must be cleared — code quality, bug check, unit and integration testing, security, and more. By treating content as code instead of text, businesses can ensure that content being produced does not lead to any negative changes to the overall webpage experience. What should be noted, however, is that Contentful’s integration into your development pipeline is not an out-of-the-box operation. TribalScale, a Contentful Technology and Solutions Partner, is composed of highly skilled Engineers and Product Managers trained on Contentful with a record of successful implementations to deliver projects faster. 
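As a rough sketch of what that integration can look like from the Developer side (the space ID, access token and content type name below are placeholders, and the snippet uses Contentful's JavaScript SDK rather than raw REST calls), a frontend or build step might pull published entries like this:

// Illustrative only: space, accessToken and the content type are placeholders.
const contentful = require('contentful');

const client = contentful.createClient({
  space: 'your-space-id',
  accessToken: 'your-delivery-api-token'
});

// Fetch all published entries of an assumed "heroSection" content type.
client.getEntries({ content_type: 'heroSection' })
  .then(response => {
    response.items.forEach(entry => {
      console.log(entry.fields.title);
    });
  })
  .catch(console.error);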
Transforming Content When breaking down a product into feature requirements or adding new requirements resulting from iterative feedback from users, Product Managers utilize a best practice of writing small, independent, valuable user stories that can be accomplished within a single iteration. This functional form of writing user stories enables Product Managers to continuously prioritize items in the backlog, and for the Engineering team to freely take those stories and build them (for more: read my blog on Writing Functional User Stories). Similar to a content model, user stories can be modeled in a hierarchy: from epics, to features, to user stories that piece together to “model” the product being built. Contentful follows a similar modular paradigm within their CaaS tool. Content models allow teams to modularly define content based on its use and application. Imagine playing with building blocks; each block typically has a specific dimension and color. You can construct the building blocks in a multitude of shapes and sizes, however, the block remains defined independently. You can think about a content model as a building block and a website as your structure. This can be visualized in the image below from Contentful: Now that content is modularized, it can be utilized across distribution channels (websites, mobile, IoT devices, etc.) or repurposed in a multitude of locations on a page and across multiple contexts. As Contentful puts it in their whitepaper: Your content model is the blueprint for your website, it guides the Contentful APIs to place each block of content where you specify, so that your website delivers a fully realized digital experience at every endpoint. With content modularized, organizations can more efficiently run A/B tests for marketing, apply advanced technologies such as machine learning to create targeted messaging, and drive traffic intentionally to realize intended outcomes, such as sales conversions. Conclusion Contentful’s unique perspective and approach to Content Management is driving change among digital teams in this modern era of development. By decoupling content from sites, Product Managers can work alongside Engineers to granularly define specific models to meet business needs, Engineers can ensure higher quality content production code through environment promotion and CI/CD hooks, and Marketers can repurpose content across a multitude of channels. TribalScale, a digital innovation firm, believes that treating content as a product and utilizing Contentful’s CaaS and “headless CMS” paradigm is the future. Our team of highly skilled Engineers and Product Managers have experience implementing Contentful inside of top financial services institutions. We suggest following these tips when implementing Contentful: Train and ensure everyone has a general understanding of what “modular sites” are. Decide on an initial set of sections that will comprise the majority of your site based on existing content, or that which will be added in the future. Hold a number of workshops to build out proposed pages using those existing sections. Virtualize these pages, with each section independent enough to define the content and to easily shift them around. Once the structure is decided, use the sections as a guide for copy and final changes. Sections are particularly designed so that content and assets are added as intended. You never know how the sections will come together until they are actually created in Contentful with actual content. 
For those who have not tried Contentful’s Content as a Service tool, I urge you to sign up and give it a try! Getting started with Contentful is extremely easy! Make sure to check out Contentful’s integration of the Content API with GraphQL, a modern, intuitive, data-query language for APIs that minimizes API calls and allows Frontend Developers to aggregate data quickly and efficiently through a queryable model abstracted from the content schema.
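For a taste of what such a query can look like (the heroSection content type is an assumed example; Contentful's GraphQL API exposes each content type through a corresponding Collection field), a request for a handful of entries might be written as:

query {
  heroSectionCollection(limit: 5) {
    items {
      title
      subtitle
      ctaLabel
    }
  }
}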
https://medium.com/tribalscale/the-modern-content-paradigm-1c69d5377c22
['Tribalscale Inc.']
2019-07-02 15:18:41.891000+00:00
['Content Is King', 'Design', 'Agile', 'Product Development', 'Content']
Python’s Sorted and Sort. A short and sweet study
Let's take a look at how this works on real data. Here is the head of my coffee data frame: First two partial rows of my coffee reviews data frame From the data frame, I created a simple list of the 'Coffee Country' values (the column 'Coffee Country' cannot be seen in the data frame above, but it's there to the far right): coffee_country_list = list(df['Coffee Country']) The coffee_country_list looks partially like this (remember, it's 2195 entries long!): A partial list of 'Coffee Country' from the data frame In my Jupyter notebook I tried out sort and sorted: sorted(coffee_country_list) coffee_country_list.sort() The result is the same: A partial list of results from using list.sort() and sorted() However, even though the result looks the same, there is actually a difference between the two sorts. Applying the sorted() function to the coffee_country_list leaves the list unchanged. The function does not modify the list that is passed in, and if I call coffee_country_list again the original is unchanged: coffee_country_list unchanged with sorted() The coffee_country_list that had .sort() applied, on the other hand, is changed permanently. The sort method modifies the original list in place rather than creating a new one.
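To make the difference concrete, here is a minimal, self-contained sketch; the country names are placeholders rather than the real coffee data:

# sorted() returns a brand-new list and leaves the original untouched.
countries = ['Kenya', 'Brazil', 'Ethiopia', 'Colombia']
new_list = sorted(countries)
print(new_list)    # ['Brazil', 'Colombia', 'Ethiopia', 'Kenya']
print(countries)   # ['Kenya', 'Brazil', 'Ethiopia', 'Colombia'] -- unchanged

# list.sort() sorts in place and returns None.
result = countries.sort()
print(result)      # None
print(countries)   # ['Brazil', 'Colombia', 'Ethiopia', 'Kenya'] -- changed permanently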
https://medium.com/swlh/python-sorted-and-sort-515918b5d0da
['Annika Noren']
2020-10-02 18:37:52.273000+00:00
['Python', 'Data Science']
Amazon Brings Shows to Life to Stand Out In Streaming Space
As more streaming services like Disney+ and WarnerMedia launch for viewers, Amazon Studios is finding it has to do more to stay top-of-mind for customers. “At the end of the day we’re competing for customer’s time, and so it’s up to us to make sure that our customers realize we’ve got shows that are relevant, interesting, and high quality,” Amazon Studios head of marketing Mike Benson told Cheddar. “Whether we’re competing with another streaming service or a broadcast network, we’re competing for people’s time.” This year, there will be more than 500 original scripted series going into production, Benson pointed out. And with about 600 movies a year plus other video services like YouTube, there’s tons of content out there for people to choose from, he added. “We think adding more streamers into the streaming business is good overall,” he said. “I think on the customer side of things, it gets challenging to figure out since there’s so much, what do I watch? I think that becomes the bigger challenge for us as a marketer, how do we make it more relevant and create a more personalized experience for customers.” While Amazon still uses traditional TV and print advertising, it’s also focusing on creating show-based experiences to stay top- of- mind. It’s a strategy many streaming companies including HBO and Netflix have taken, creating multi-million dollar exhibits and experiences to bring their series to life. Recently, Amazon Studios created the Garden of Earthly Delights at South by Southwest in Austin to promote its show “Good Omens.” It also brought back the Carnegie Deli during the Upfronts in New York in connection with “The Marvelous Mrs. Maisel.” In July, the company will rent 60,000 square feet outside San Diego Comic-Con to promote three of its shows: “The Expanse,” “Carnival Row” and “The Boys.” On top of “participatory theater, stage performance and tech” exhibits during the day, the space will also host screenings and parties. “In many ways, it’s a mini-Disneyland we’re creating in this space,” said Dustin Callif, managing partner of Tool North America, which is Amazon Studio’s experiential agency that created the “Marvelous Mrs. Maisel” deli, “Good Omens” Chattering Nuns and “Grand Tour” battle cars. By turning its shows into can’t miss events, Amazon Studios hopes fans will go home and tell their friends and family. People will also share photos and videos on Instagram and other social platforms, creating more “free” buzz online for shows. It also gets the attention of the press, whose coverage of these events could give programs an additional boost. “While there are plenty of media that still work for us (like) television and digital marketing, experience and creating experiences for customers that allow conversations to start let us use social to scale,” Benson said. The idea is to create a cultural event to stand out in a crowded marketplace. “It’s about going beyond the binge,” said Callif. “How do you keep the conversation going with fans and get them not only excited and continuing with these properties, but also to hopefully be sharing it with their friends and family to get them engaged?”
https://medium.com/cheddar/amazon-brings-shows-to-life-to-stand-out-in-streaming-space-eac688c2c189
['Michelle Castillo']
2019-06-04 14:43:24.440000+00:00
['Amazon', 'Streaming', 'Advertising', 'Entertainment', 'Television']
When Your Muse Has Gone Missing
I’ve looked in the closet of my childhood bedroom and the pockets of my worn-down jeans. I’ve searched in the sky at the edge of the sunset and in the summer storm that raged outside my window. I’ve scoured through books and in between the pages of unsaid conversations. I’ve turned over my morning rituals and my evening routines. I’ve even rummaged through seashells at the edge of the ocean and ridden my bike into the woods of my youth. And yet, somehow, I can’t find my muse. I’ve gone hunting for the inspiration that used to live at the fringes of my soul and would burst with words that wouldn’t let me breathe until they were out. I’ve turned over my heart to see if I could find her. She used to soothe my heavy chest by releasing the weight onto the page. She used to bridge the gap between uncertainty and hope and the space between pain and healing. She was my confidant, my friend, and I thought that we would be in this thing called life together until the end. And yet. I feel as if she has deserted me and gone somewhere without me. She has left me to the depths of my unknowing and to the anxiety that coils against my lungs. I used to wake up with her soothing whisper in my ear until words on blank pages would naturally appear. Now I wake up worried about everything, and my language seems lost or incomplete. It’s as if every time I try to write, I am building a 500-piece puzzle with half of the pieces missing. I feel insufficient, deficient, and wanting. I am frustrated and voiceless without the drive that once lined my heart with fire and without the inspiration that filled my mind with imagery and soliloquies until it all poured out into stories that knew no boundaries or poetry that helped me sleep soundly. I am pent up and much too full of anecdotes that don’t connect. I am bloated with words I cannot express, and I feel as if both my intelligence and my creativity have been put to the test. And in this dense space, I somehow cannot let myself rest. I must keep looking, I tell myself. Maybe she is at the bottom of the page if I rant for long enough. Perhaps she is hiding at the bottom of a glass of wine that’ll ease my troubled mind (but just one, of course). Maybe I need to try harder, to be better, and to reach higher into my consciousness. Perhaps if I run a little faster and try to live my life a little fuller, then she will appear along with my highest self. Maybe she is just testing me to look for her more creatively. Or maybe there’s a different story to tell. Maybe my muse has a narrative of her own. Perhaps she became tired of me. She could’ve gotten sick of me always having these big, big dreams but never chasing them consistently. Maybe she felt exhausted from inspiring someone who procrastinated so hard on delivering. Perhaps she felt fatigued of always running after a person who couldn’t stop to see the page she was living in before writing the next chapter she wanted to be in. Maybe my muse has been trying to tell me something for far too long, but I refused to listen. Perhaps I only heard what I wanted to hear, and then I closed the book when it felt like too much or when I was too close to the thing I had always wanted. I was always a sentence away, and then I’d rip off the page. Or I’d put stories on a bookshelf after only getting through halfway. Maybe she felt unseen, unheard, and even unwanted. Perhaps it was me who left her and not the other way around. Maybe it’s time that I stopped the search. 
After all, they say the things that truly belong to you always return. So, perhaps, I focus more on making sure I am someone worth returning to. Maybe I slow down enough to add life between the sentences I speak and pause as much as I breathe. Maybe I hug myself before I sleep to remind myself that it’s okay to sometimes feel incomplete. Perhaps I inspire myself with a smile in the mirror instead of rushing out of the door. Maybe I become more by trying less. Maybe I take it one day and one word at a time. And maybe I use this opportunity to surrender to the absence of my muse and explore what’s left behind, explore what’s inside.
https://medium.com/scribe/when-your-muse-has-gone-missing-31fa069ef651
['Sonya Matejko']
2020-06-08 17:17:45.961000+00:00
['Writing Life', 'Writers Block', 'Writers On Writing', 'Writing', 'Muse']
These Are Not Love Poems
Poet’s Note Did you know we are in the middle of a poetry renaissance? Instagram and Twitter are filled with poets if you know where to look. Bookstores are displaying poetry collections in the front near the register instead of in the far back corner. Our society is starting to remember that we first told stories worth remembering in poetry. Love poems seem to always be the bulk of the verse humans produce. Just listen to any pop song, chances are it’s about love, or using sex as a substitute for love. The subversive side of me couldn’t resist writing some small stories about what happens when there is no love, or when love has gone. I call these haiku “not love poems.” Of course, almost all of my work could be called “not love poems.” The truth is these ten haiku really are love poems. Love exists in the gaps between each of us. Too often we fail to close the gaps, and we let love flitter away.
https://medium.com/weirdo-poetry/these-are-not-love-poems-a7d13d0e2df3
['Jason Mcbride']
2020-08-23 18:07:05.947000+00:00
['Fiction', 'Love', 'Writing', 'Haiku', 'Poetry']
Building a Pure CSS Animated Gradient Colour Button Is Easier Than You Think
Final Boss — Animated Gradient Button In CSS, we can't transition gradients. It would be awesome to see smooth animation with CSS like this: But it won't work. It immediately changes to the other one without transition. There are a few hacks to do it, but my favourite is to animate background-position. Firstly, we need to add two properties to our button: background-size: 200% auto background-position: left center Then on hover: background-position: right center In this case, I added a gradient starting with a white colour. It enhances the impression of an animated border. HTML: <a href="/" title="Hello button" class="btn">Hello</a> CSS: Standard and hover And that's it! You can play with the final button on CodePen.
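Since the standard and hover rules only appear as screenshots in the original post, here is a rough CSS sketch of the idea; the gradient colours, sizing and timing are placeholder values rather than the exact ones from the CodePen:

.btn {
  display: inline-block;
  padding: 15px 45px;
  color: #fff;
  text-decoration: none;
  border-radius: 10px;
  /* gradient starting with white to suggest an animated border; colours are placeholders */
  background-image: linear-gradient(to right, #ffffff 0%, #6a82fb 51%, #fc5c7d 100%);
  /* make the background wider than the button so it has room to slide */
  background-size: 200% auto;
  background-position: left center;
  /* animate the position instead of the gradient itself */
  transition: background-position 0.5s ease;
}

.btn:hover {
  background-position: right center;
}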
https://medium.com/better-programming/pure-css-animated-gradient-colour-button-is-easier-than-you-think-f19e86bbbc4f
['Albert Walicki']
2020-12-01 17:20:22.350000+00:00
['Programming', 'CSS', 'HTML', 'UX', 'Design']
Stop Comparing Your Career Journey to Others
Peaking at Different Ages Just as some of the greats achieved early on, others were late bloomers who gained notability later in life. Bryan Cranston was 44 when he starred in the career-defining Malcolm in the Middle. J.K. Rowling was 32 when, after multiple rejections, Harry Potter was finally published. Some people spend years working to achieve what they do. Others get lucky and have access to the resources and opportunities that help them shoot to success much quicker. Regardless of how hard you work, there will always be factors outside your control that affect the result of your efforts. Maybe you're close, but don't quite have the natural abilities to be good enough. Or perhaps the right opportunity that will define your big break hasn't come up yet. There are very few things that we are naturally good enough to succeed at. We have to spend years practicing to get to where we want to be. And some people progress and learn at tasks faster than others. That might be why they achieve the desired results faster. Just because someone achieved things quicker than you doesn't mean you're not on track to reach your goals. It just means it's not your time yet. Even when we are good enough, it takes just as long for that effort and ability to actually be recognized. Maybe you need a bit more practice. Or maybe you are good enough, and just need some luck in finding that big break. In either sense, you aren't past your best, but you do need to work a little harder to prove yourself. In the words of Julissa Loaiza (as quoted by Jay Shetty): "… You're not falling behind, It's just not your time." Different Peaks Success is a weird concept. It's so subjective and can be found in almost anything. Because of that, you get to define what success means in your life. Maybe getting out of bed is a big deal. Or handing in that school assignment. Or singing in a stadium full of millions of people. We all have different circumstances and day-to-day challenges, and these set the parameters for what we consider a success. No matter how big by comparison, other people's achievements cannot invalidate yours. For they have different circumstances — for some, affording food could be just as big a challenge as it was for Zuckerberg to design Facebook. You could be aiming towards a completely different conception of success to others. In a similar vein, you could be moving towards the same conception, but be at an earlier stage of your journey. Running an online publication with over 200,000 monthly readers is a success I often downplay. It's not a revolutionary social media platform or a world-famous pop song, but it is something I am immensely proud of. Some people will be envious of my achievements, in the same way I am of my more successful peers. I'm not oblivious to that fact. But if you're feeling envious of others' achievements, take a minute to acknowledge the heights you've already achieved. Achievements you once desired, which are admired by those on the same path who haven't yet reached that checkpoint.
https://medium.com/the-post-grad-survival-guide/why-you-should-stop-comparing-your-career-journey-to-others-152fa56dad1e
['Jon Hawkins']
2020-12-23 14:15:03.082000+00:00
['Work', 'Advice', 'Careers', 'Life', 'Psychology']
Building Pinterest’s A/B testing platform
Shuo Xiang | Pinterest engineer, Data As a data-driven company, we rely heavily on experiments to guide products and features. At any given time, we have around 1,000 experiments running, and we’re adding more every day. Because we’re constantly increasing the number of experiments and logging corresponding data, we need a reliable, simple to use platform engineers can use without error. To eliminate common errors made by experimenters, we introduced a lightweight config UI, QA workflow and simplified APIs supporting A/B testing across multiple platforms. (For more information about our dashboards and data pipeline, check out our previous experiments post.) We prioritized the following requirements when building the experiments platform: Realtime config change: We need to be able to quickly shut down or ramp up experiments in real time without code deploy for each config change, in particular when fixing site incidents. Lightweight process: Setting up the experiment shouldn’t be more complicated than a normal feature launch, yet should prevent the user from making predictable errors. Client-agnostic: The user shouldn’t have to learn a new experiment method for each platform. Analytics: To make better experiment decisions, we built a new analytics dashboard that was easier to use. Scalability: We needed the entire system to scale in both online service and offline experiment data processing. Simplified process Experiments at Pinterest follow a common pattern: Create the experiment with an initial configuration, create a hypothesis and document approach to test that hypothesis. Expose the experiment to Pinners, add new groups, disable groups and modify the audience via filters. Finish the experiment by shipping the code to all Pinners or rolling it back and documenting results. In our prior framework, these changes were handled via code, however we wanted to structure these changes in a UI to provide interactive feedback and validation, and in a configuration-based framework to push changes independent of code release. Common experiment mistakes like syntax errors, imbalanced group allocation, overlapping groups or violation of experiment procedures are verified interactively. We also proactively provide typeahead search suggestions to reduce the amount of human input, as shown in Figure 2. Now making an experiment change is usually a couple of clicks away. In order to make the configuration accessible by an arbitrary client in real-time, we take advantage of our internal system to store all experiment settings in a serialized format and synchronize them to every host of our experiment system within seconds. A typical config file has the following content after deserialization: {"holiday_special": { "group_ranges": { "enabled": {"percent": 5.0, "ranges": [0, 49]}, "hold_out": {"percent": 5.0, "ranges": [50, 99]} }, "key": "holiday_special", “global filter”: user_country(‘US’), “overwrite_filter”: {“enabled”: is_employee()}, "unauth_exp": 0, "version": 1 } } The benefit of the separation of config and code is the instant update of experiment settings, meaning configuration changes such as increasing the traffic of a treatment group doesn’t require code deployment. This frees up the experiment from the production deployment schedule and greatly speeds up the iteration, particularly when urgent changes are needed. Quality assurance A single experiment could affect millions of Pinners, so we have high standards for experiment operations and critical quality assurance tools. 
The experiment web app is also equipped with a review tool, which creates a review process for each experiment change. Figure 3 shows a pending change that modifies group ranges and filters. Reviewers are specified through the UI and will be notified by email. For most experiments we have a cross-team helper group made up of platform developers, users and data scientists. Almost every change is required to be reviewed by a helper who closely examines planning, hypothesis, key results, triggering logic, filter set up, group validation and documentation. Such a process is enforced on our web app so that each change is required to fill in an helper. We also have a regular experiment helper training program to ensure each team has at least one person who’s certified. An experiment is often associated with code changes that embed the control/treatment group information into the decision logic. We require experiment users to add a Pull Request (PR) link in the experiment platform via the Pull Requests button, so it’s easier for helpers and analysts to trace the experiment behavior and potentially debug if needed. In addition, we also send every change as a comment to the corresponding PR in Phabricator (our repository management tool), as shown in Figure 4. Users can create a test-only copy of the ongoing experiment in the UI (as shown in Figure 1). They’ll then be ported to a test panel shown in Figure 5. Any changes made in the test panel will not affect the experiment in production and will only be visible to the testing engineer, who can use the one-click Copy To Prod button to enable it in production. API The experiment API is the interface users will call to link their application code to the experiment settings they made via the UI. Two key methods provided are: def get_group(self, experiment_name) def activate_experiment(self, experiment_name) Specifically, the get_group method returns the name of the group to which the caller will be directed. Internally, the group is computed by computing a hash value based on experiment information, and the method has no side effect. On the other hand, calling activate_experiment sends a message to the logging system and contributes to the analytics result. These two methods sufficiently cover the majority of user cases and are commonly used in the following way: # Get the experiment group given experiment name and gatekeeper object, without actually triggering the experiment. group = gk.get_group("example_experiment_name") # Activate/trigger experiment. It will return experiment group if any. group = gk.activate_experiment("example_experiment_name") # Application code showing treatment based on group. if group in ['enabled', 'employees']: # behavior for enabled group pass else: # behavior for control group pass The gatekeeper object gk in the code above is a wrapper of user/session/meta information needed for an experiment. In addition to the Python library shown above, we have a separate JVM (Scala and Java) library implemented. Support for Javascript and mobile apps (Android & iOS) are also available. Design and architecture The experiment platform is logically partitioned into three components: a configuration system, a set of APIs and the analytics pipeline. They’re connected by the following one directional data flow: Configuration system persists user changes made on the web UI to our experiment database, whose information is regularly published at sub-minute granularity in a serialized format to each service. 
Experiment clients pick up the experiment configuration and make API calls to determine the experiment logic, such as experiment type and group allocation. The experiment activation logs generated by various clients are sent to Kafka through our internal Singer service, from which the analytics pipeline will create experiment reports with user defined metrics and deliver them on the dashboard. Summary This system rolled out last summer and supports the majority of experiments inside Pinterest. Team specific functionalities such as real-time metrics dashboard, experiments email notification, interactive documentation and collaboration tool and SEO API/UI are also being added to the system. If you’re interested in experiment framework and analytics platforms, join us! Acknowledgements: Multiple teams across Pinterest provide insightful feedbacks and suggestions shaping the experiment framework. Major contributors include Shuo Xiang, Bryant Xiao, Justin Mejorada Pier, Jooseong Kim, Chunyan Wang and the rest of Data Engineering team.
https://medium.com/pinterest-engineering/building-pinterests-a-b-testing-platform-ab4934ace9f4
['Pinterest Engineering']
2017-02-21 19:49:44.512000+00:00
['A B Testing', 'Analytics', 'DevOps', 'Data Engineering', 'Data']
Men Who Won’t Wear Masks Are Dopes
Photo: HBO Men Who Won’t Wear Masks Are Dopes These freedom fighters are defending their right to spread COVID-19 The Centers for Disease Control recommends wearing cloth face coverings in order to slow the spread of COVID-19. It’s a simple, proven, strategy: the only way to protect your loved ones — and neighbors — from a highly contagious and potentially fatal infectious disease is to wear a face mask. I don’t like masks although I’ve learned how to wear them so my glasses don’t fog up. I wear face masks in public because it’s the smart thing to do. I do not believe the severity or duration of the pandemic can be lessened or shortened with a defiant “fuck you” teenage attitude. I wish I could visit my family or eat at Red Lobster or just hang out with friends. But I can’t and no amount of complaining changes that. I wish this virus hadn’t happened. But it did and it must be dealt with. The reason Americans should self-quarantine, social distance, and wear masks is that these strategies work. And yet, because we live during an age of vanity and tribalism, there are people who refuse to do what it is best for their community. These people are, mostly, men. What’s up, fellas. It has come to my attention there are big strong men who think wearing cloth masks during a pandemic is a sign of conformity and weakness. They won’t be told what to do by fancy scientists and uppity doctors. These are probably the same geniuses protesting emergency stay-at-home orders. It’s just a hunch. You’ve seen the pictures of these maniacs: well-fed show-offs wearing expensive camouflage and posing with decorative semi-automatic rifles all while standing up to local governments scrambling to save lives. Why are these self-appointed freedom fighters — many of them masked — really marching on state capitols? Because they’re good boys who do what they’re told by conservative politicians who want the country to return to a normal that’s never coming back, at least not anytime soon. I know for a fact that submicroscopic infectious agents are not political. There is no such thing as a Democratic or Republican virus. To think otherwise is madness. I also know the economy will not recover if people are afraid of another outbreak. Do you know how to guarantee another outbreak? Ignore CDC guidelines. Confuse tyranny with common sense. Go shopping without a mask. Yesterday, Republican Senators refused to wear masks during a hearing where Dr. Anthony Fauci, the nation’s top infectious disease expert, told them that abandoning the strategies that have slowed the virus prematurely would result in needless “suffering and death.” I wrote that these no-mask fanatics are mostly men, but I’d like to point out Republican Senator Susan Collins from Maine is a woman. Gender is a construct and all humans can be selfish ding-dongs. These GOP Senators knew they’d be on TV and they were just following orders too. Those orders were coming from the President of the United States, who refuses to wear a mask despite his personal valet testing positive for coronavirus. Even the Vice President’s press secretary and wife of Trump senior advisor Stephen Miller — tested positive. The White House is slowly becoming a fever clinic. I don’t know if the president thinks refusing to wear a mask is manly. I know some dudes think working while sick is a feat of strength. I have a friend who once proudly superglued a wound shut because he didn’t want to go to the ER. 
When I told him it was rational to fear the emergency room he got defensive: he wasn't afraid of the emergency room. Only wusses go to doctors, was the gist. The president's anti-mask strategy could also be grounded in good old-fashioned ignorance. The man may just not know how viruses work. During a recent meeting with maskless Republicans, the President said of Katie Miller, Mike Pence's press person: "She tested very good for a long period of time. And then all of the sudden today she tested positive." Yes, Mr. President. That's how it works. One day you are well, then you get sick. There are times I wonder if Donald Trump — a former bimbo playboy forced to work for the first time in his life — thinks he's the star of a Twilight Zone episode about a game show host trapped in the game show he hosts. Naw. That would require self-awareness.
https://medium.com/humungus/men-who-wont-wear-masks-are-biological-weapons-23181a8a83a8
['John Devore']
2020-05-20 23:15:10.498000+00:00
['Politics', 'Masculinity', 'Men', 'Coronavirus', 'Pride']
For Lucy, who loves my content
Here’s an excerpt from the profoundly insightful blog post Lucy wanted me to link to. A heartfelt email in response to your highly personalized robo-spam. SUBJECT: Love your content! (and a proposal) Hi , I just wanted to follow up and see what you thought of linking to our site on your blog. Just double checking you received our previous email. Hi there, I was just browsing your website and as I was reading your site, I noticed you mentioned tech (http://www.alexandrasamuel.com/tag/changeevery), and so I thought you might also be interested in linking to a resource we put together on the ways technology is improving your health. Here is a link for your review: [link] <snip> If you were willing to add our link to that page, I would be more than happy to share it to our tens of thousands of social followers to help you gain some more visibility in exchange. Lucy P.S. I understand some people don’t like to be reached out to, if so, just let me know :) Lucy, Thank you so much for taking the time to follow up on your recent unsolicited email. So often I find that an email that refers to my writing as “content” is a dead giveaway that I’m just dealing with some random robot at a random content farm. But your automated follow-up email let me know that I’m dealing with someone who is prepared to go the extra mile by automating a follow-up message. And your astute observation that I have written a blog post about tech — well, that’s the kind of careful review of my website that makes me feel like I’ve really connected with someone special. Since you went to such effort I feel like the least I can do is tell you why I’m declining to link to the “content” you sent me, and which you so helpfully described for me as “link”. I couldn’t help noticing that “link” took me to a pretty spammy-looking URL before it redirects to a webpage of actual content™; gosh, that sure did leave me mighty curious about what you would direct my readers to after I linked to you: Porn? Offshore sports betting? Alt-right news sites? And then there was the content itself: gosh, it was a joy to behold. Your ambition in sourcing an article making the extraordinary assertion that technology can be useful for some things: well, it’s not every day I come across such insight. I was particularly impressed with the way your piece touched on so many almost-points without ever actually saying something. Not everyone can pull that off. So I guess you are right, Lucy: I guess you have to count me as one of those people who don’t like to be reached out to. Or maybe I am one of those people who only like to be reached out to by actual humans rather than content-farming robo-spammers. Alexandra
https://medium.com/i-reply-to-spam/for-lucy-who-loves-my-content-b3d1fd18384e
['Alexandra Samuel']
2017-09-26 01:05:42.366000+00:00
['Email', 'Humor', 'Marketing Automation', 'Marketing', 'Email Marketing']
Korean startups are jumping on the bandwagon
You know it's serious when the camera crew is here Many startups and established corporations are finding ways to tap into blockchain technology. More and more companies are showing interest as word goes around that South Korean regulators are considering a reversal of the ICO ban. On March 22nd, Gyoung-Pil Nam, the governor of Gyeonggi province, hosted a meeting bringing blockchain experts, pioneers, and evangelists together. CUBE INT, MYcreditChain, UUNIO, Medibloc, and Overnodes A lot of successful people came to share their ideas on what they're trying to accomplish. Medibloc has successfully concluded its ICO, having sold $30M worth of its tokens. It is a decentralized healthcare information ecosystem built on blockchain. UUNIO was working on an interesting project as well. They're building a platform that supports a community with a reward system based on blockchain technology. Although it sounded similar to what Steemit is doing, UUNIO's approach was a little different from that of Steemit. "The STEEMIT community turned into a place where only the fittest survived and profit was determined by Steem Power rather than the quality of contents," said the CEO of UUNIO. Personally, I'm not a Steemian and not aware of its problems, but considering Facebook's recent data privacy scandal, a blockchain version of social networking services definitely seems to be necessary. Korea is known to be the world's third-largest cryptocurrency market in terms of trading. There were only a few who tried to incorporate blockchain technology into their business, mainly because of the regulations. However, more and more people are now showing interest and some are even quitting their jobs to jump on the bandwagon. There's also an increasing number of blockchain accelerators and crypto VCs. The future seems bright despite all the FUD.
https://medium.com/overnodes/south-koreas-startups-jumping-on-the-bandwagon-575c3f47c33c
[]
2018-07-27 05:34:22.351000+00:00
['Bitcoin', 'Blockchain', 'Overnodes', 'Startup', 'Cryptocurrency']
The art of joining in Spark
Broadcasting or not broadcasting First of all, let’s see what happens if we decide to broadcast a table during a join. Note that the Spark execution plan could be automatically translated into a broadcast (without us forcing it), although this can vary depending on the Spark version and on how it is configured. We will be joining two tables: fact_table and dimension_table. First of all, let’s see how big they are: fact_table.count // #rows 3,301,889,672 dimension_table.count // #rows 3,922,556 In this case, the data are not skewed and the partitioning is all right — you’ll have to trust my word. Note that the dimension_table is not exactly “small” (although size is not information that we can infer by only observing the number of rows, we’d rather prefer to look at the file size on HDFS). By the way, let’s try to join the tables without broadcasting to see how long it takes: Output: Elapsed time: 215.115751969s Now, what happens if we broadcast the dimension table? By a simple addition to the join operation, i.e. replace the variable dimension_table with broadcast(dimension_table), we can force Spark to handle our tables using a broadcast: Output: Elapsed time: 61.135962017s The broadcast made the code run 71% faster! Again, read this outcome having in mind what I wrote earlier about absolute execution time. Is broadcasting always good for performance? Not at all! If you try to execute the snippets above giving more resources to the cluster (in particular more executors), the non-broadcast version will run faster than the broadcast one! One reason why this happens is because the broadcasting operation is itself quite expensive (it means that all the nodes need to receive a copy of the table), so it’s not surprising that if we increase the amount of executors that need to receive the table, we increase the broadcasting cost, which suddenly may become higher than the join cost itself. It’s important to remember that when we broadcast, we are hitting on the memory available on each Executor node (here’s a brief article about Spark memory). This can easily lead to Out Of Memory exceptions or make your code unstable: imagine to broadcast a medium-sized table. You run the code, everything is fine and super fast. A couple of months later you suddenly find out that your code breaks, OOM. After some hours of debugging, you may discover that the medium-sized table you broadcast to make your code fast is not that “medium” anymore. Takeaway, if you broadcast a medium-sized table, you need to be sure it will remain medium-sized in the future! Skew it! This is taking forever! Skewness is a common issue when you want to join two tables. We say a join is skewed when the join key is not uniformly distributed in the dataset. During a skewed join, Spark cannot perform operations in parallel, since the join’s load will be distributed unevenly across the Executors. Let’s take our old fact_table and a new dimension: fact_table.count // #rows 3,301,889,672 dimension_table2.count // #rows 52 Great our dimension_table2 is very small and we can decide to broadcast it straightforward! Let’s join and see what happens: Output: Elapsed time: 329.991336182s Now, observe on the SparkUI what happened to the tasks during the execution: As you can see in the image above, one of the tasks took much more time to complete compared to the others. This is clearly an indication of skewness in the data — and this conjecture would be easily verifiable by looking at the distribution of the join key in the fact_table. 
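One quick way to check that conjecture, sketched here against Spark's Scala API with join_key standing in for the real column name, is to count rows per key and look for outliers:

// Sketch only: "join_key" is a placeholder for the actual join column.
import org.apache.spark.sql.functions.desc

fact_table
  .groupBy("join_key")
  .count()
  .orderBy(desc("count"))
  .show(20)  // a few keys with enormous counts is a strong sign of skew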
To make things work, we need to find a way to redistribute the workload to improve our join’s performance. I want to propose two ideas: Option 1 : we can try to repartition our fact table , in order to distribute the effort in the nodes : we can try to , in order to distribute the effort in the nodes Option 2: we can artificially create a repartitioning key (key salting) Option 1: Repartition the table We can select a column that is uniformly distributed and repartition our table accordingly; if we combine this with broadcasting, we should have achieved the goal of redistributing the workload: Output: Elapsed time: 106.708180448s Note that we want to choose a column also looking at the cardinality (e.g. I wouldn’t choose a key with “too high” or “too low” cardinality, I let you quantify those terms). Important note: if you cannot broadcast the dimension table and you still want to use this strategy, the left side and the right side of the join need to be repartitioned using the same partitioner! Let’s see what happens if we don’t. Consider the following snippet and let’s look at the DAG on the Spark UI If we don’t specify a partitioner, Spark may decide to perform a default repartitioning before the join As you can see, it this case my repartitioning is basically ignored: after it is performed, spark still decides to re-exchange the data using the default configuration. Let’s look at how the DAG changes if we use the same partitioner: Using the same partitioner allows Spark to actually perform the join using our custom options Option 2: Key salting Another strategy is to forge a new join key! We still want to force spark to do a uniform repartitioning of the big table; in this case, we can also combine Key salting with broadcasting, since the dimension table is very small. The join key of the left table is stored into the field dimension_2_key, which is not evenly distributed. The first step is to make this field more “uniform”. An easy way to do that is to randomly append a number between 0 and N to the join key, e.g.: As you can see we modified the dimension_2_key which is now “uniformly” distributed, we are on the right path to a better workload on the cluster. We have modified the join key, so we need to do the same operation on the dimension table. To do so, we create for each “new” key value in the fact table, a corresponding value in the dimension: for each value of the id in the dimension table we generate N values in which we append to the old ids the numbers in the [0,N] interval. Let’s make this clearer with the following image: At this point, we can join the two datasets using the “new” salted key. This simple trick will improve the degree of parallelism of the DAG execution. Of course, we have increased the number of rows of the dimension table (in the example N=4). A higher N (e.g. 100 or 1000) will result in a more uniform distribution of the key in the fact, but in a higher number of rows for the dimension table! Let’s code this idea. First, we need to append the salt to the keys in the fact table. This is a surprisingly challenging task, or, better, it’s a decision point: We can use a UDF : easy, but can be slow because Catalyst is not very happy with UDFs! : easy, but can be slow because Catalyst is not very happy with UDFs! We can use the “rand” SQL operator We can use the monotonically_increasing_id function Just for fun, let’s go with this third option (it also appear to be a bit faster) Now we need to “explode” the dimension table with the new key. 
The fastest way that I have found to do so is to create a dummy dataset containing the numbers between 0 and N (in the example between 0 and 1000) and cross-join the dimension table with this “dummy” dataset: Finally, we can join the tables using the salted key and see what happens! Output: Elapsed time: 182.160146932s Again, execution time is not really a good indicator to understand our improvement, so let’s look at the event timeline: As you can see we greatly increased the parallelism. In this case, a simple repartitioning plus broadcast, worked better than crafting a new key. Note that this difference is not due to the join, but to the random number generation during the fact table lift. Takeaways
https://towardsdatascience.com/the-art-of-joining-in-spark-dcbd33d693c
['Andrea Ialenti']
2020-12-07 01:34:03.753000+00:00
['Big Data', 'Apache Spark', 'Scala', 'Sql', 'Data Science']
One year as DNC CTO
One year as DNC CTO And less than 150 days until the general election In May 2019, motivated by the opportunity to tackle some of the most meaningful challenges in political tech, I joined the DNC as the Chief Technology Officer. The events over the past year — and the past months in particular — have only reinforced how critical it is to fight for progressive leaders. Reform is necessary to eliminate the systemic racism that plagues our institutions. I am honored to lead the Democratic Party’s efforts to leverage data, technology, and security to help progressive campaigns at every level of the ballot win elections. As someone who worked on the 2016 presidential cycle, I saw firsthand how the cyclical nature of campaigns is antithetical to ongoing, deep technical development and iteration. Having led analytics teams Facebook and Etsy, I knew there was an opportunity to create lasting change in the democratic ecosystem by establishing strong data infrastructure powered by strong technologists. This is the promise of DNC Tech: by building reliable and secure data infrastructure, innovative data products, and first-rate security & counter-disinformation practices, the DNC Tech Team can advance every Democratic campaign in this election cycle, but also in those to come. In the last year, our phenomenal team has built transformative infrastructure and data products that will help Democrats win. Our team has grown by more than 30% and is now more than 55 people strong. We’ve attracted top-tier talent from the tech industry, including Amazon, Uber, Twitter, and Facebook; politics, including members of the Beto, Booker, Buttigieg, Warren, and Sanders teams; and academia, including PhDs and advanced degrees from Duke, Penn, Michigan, London School of Economics, and many others. I’m particularly proud that our tech staff is 43% female and 28% people of color. Our team members are united by the desire to build lasting solutions to ensure the success of the Democratic Party. Leading this group of talented, thoughtful, and dedicated technologists has been the highlight of my year. I am humbled every day by them, and honored to represent their work. DNC Tech Team featuring Mambo — our unofficial mascot I am also immensely grateful for the partnership of the democratic state parties and sister committees. The incredible voter file managers and data directors at democratic state parties are the engine behind data-driven campaigns at every level of the ballot. Their feedback and partnership has helped us develop and deliver more effective data products and tools. Our work So, what exactly does the DNC Tech team do? In short, we focus on the shared resources used by campaigns at every level of the ballot — data infrastructure, the national voter file, tools for voters themselves, countering disinformation online, and security resources. Our users include the presidential nominee, but also state parties and sister committees, and by extension, all down ballot races. Our work is guided by two core principles: The infrastructure needed for effective, secure and efficient voter outreach should be built once and leveraged by every Democratic campaign. Every dollar a campaign spends building tech or cleaning data is a dollar that could go to talking to voters about the issues they care about. Our goal is to save campaigns time and money so they can focus on what matters most: the voters. 
The DNC is uniquely positioned to aggregate, enhance, and maintain the decades of data campaigns and the Democratic Party has collected about voters. Highlights of the last year Upgrading the party’s data infrastructure In the last year, we have significantly improved our data infrastructure, addressing technical debt accumulated over previous election cycles, from a variety of systems. As someone who personally worked through the fabled woes of the 2016 Democratic data infrastructure, it has been thrilling to equip our users with new data infrastructure. We built a new data warehouse that supports Democrats up-and-down the ballot for the 2020 cycle (and future election cycles). It is far more scalable, secure and usable. You can read more about Phoenix, our data warehouse, here. Supporting 20+ presidential campaigns During the primaries, the DNC Tech team provided data, training, and support to all presidential campaigns. As you can imagine, this kept us quite busy. We provided neutral, but deep support to every presidential campaign and loved learning from the innovative work happening at each one. Now that there is a presumptive nominee, we work exclusively to support the Biden for President team. Acquiring insightful data for campaigns to reach voters In order to help campaigns become more accurate and more efficient, we have worked tirelessly over the last year to acquire and provide data to ensure that Democrats are reaching the right voters. This includes acquiring over 45+ million cell phones for all Democratic campaigns to use for voter outreach, partnering with innovative new companies like Deck, and investing in faster and more frequent ingestion of the latest voter files from our partners in state parties. Developing innovative data and data science products In the last year, we’ve released a number of analytics and data science products, designed to improve the utility of the national voter file. A new record linkage algorithm de-duplicates 30+ million voter records, leading to a 9% increase in efficiency across the voter file. A new foundational dataset, Blueprint, helps campaigns quickly run analyses, create reports, and build custom models. We have also released several new models that leverage the DNC’s unique store of data, including hundreds of thousands of polling survey responses and millions of field IDs collected in races up and down the ballot since 2004. Example inputs for DNC data science models Building a counter-disinformation program & security culture The scale of malicious activity and disinformation operations is increasing. I’m grateful that Bob Lord has led ongoing and relentless efforts to improve the security stance of both the DNC and the broader Democratic ecosystem. (Speaking of which… have you completed your security checklist?) Over the course of the last year, we also established a Counter Disinformation Team dedicated to identifying and responding to bad actors targeting Democratic campaigns online. Having seen the intricacies of Facebook first-hand, I feel passionately that we all must hold a light to the platform and its effects on national discourse. Learn more about our disinformation program here. Road to the general With fewer than 150 days before the general election, the DNC Tech Team and I are acutely aware of the shifting state of the world. Our country is fighting dual epidemics — COVID-19 and systemic, institutionalized racism. Both of which disproportionately impact Black men and women. 
Our team is outraged by the recent murders of George Floyd in Minnesota, Ahmaud Arbery in Georgia, and Breonna Taylor in Kentucky. Words cannot express the deep sadness we feel for their families and communities. Our work today, to elect Democrats at every level of government, is more important than ever as we fight for justice, fair democratic processes, and the safety of Americans. We are thrilled to be partnering closely with the Biden for President team, state parties, and sister committees to develop and deliver data, products, and tools to effectively reach voters through new and varied channels to guarantee the success of Democrats at all levels throughout the United States. We hope that you will join us in our efforts, either by joining our team or by making a donation that will allow us to continue to make a difference in this election cycle and many more to come.
https://medium.com/democratictech/one-year-as-dnc-cto-76fa813b7836
['Nellwyn Thomas']
2020-06-11 20:06:01.961000+00:00
['Engineering', 'Politics', 'Data', 'Data Science', 'Democratic Party']
OpenCV Swift Wrapper
What should you do when working on a project that involves a lot of image processing? We had to build an iOS app that needed to work relatively fast. As the input and output were quite large, server-side processing was out of the question if we wanted to deliver a nice and friendly user experience. What to do then? Oh yeah, we could use OpenCV. I'd been tinkering quite a lot with it at university and knew that it should do the trick. What you should be aware of in this particular case is that OpenCV is not as straightforward as installing a regular CocoaPod. It needs additional tinkering before you can write your methods. We're going to show you how. Are you ready?

Setup

1. Create a new Xcode project. Select "Create a new Xcode project", choose "Single View Application", and name it however you want; I'm going with OpenCVProject. Then set up CocoaPods using "pod init".

2. Add OpenCV to the Podfile ("pod 'OpenCV'") and run "pod install" in the terminal.

3. Click New -> File -> New File and select Cocoa Touch Class.

4. Name it "OpenCVWrapper", make it a subclass of NSObject, and set the language to Objective-C.

5. When Xcode asks whether you would like to configure an Objective-C bridging header, choose to create one. The bridging header is the file where you import Objective-C classes so that they can be visible in Swift.

6. Add #import "OpenCVWrapper.h" to OpenCVWrapper.h:

//
//  OpenCVWrapper.h
//  OpenCV Test
//
//  Created by Alexandru Ilovan on 31/10/2019.
//  Copyright © 2019 S&P. All rights reserved.
//

#import <Foundation/Foundation.h>
#import "OpenCVWrapper.h"

NS_ASSUME_NONNULL_BEGIN

@interface OpenCVWrapper : NSObject

@end

NS_ASSUME_NONNULL_END

7. And import <opencv2/opencv.hpp> in OpenCVWrapper.m, alongside "OpenCVWrapper.h":

//
//  OpenCVWrapper.mm
//  OpenCV Test
//
//  Created by Alexandru Ilovan on 31/10/2019.
//  Copyright © 2019 S&P. All rights reserved.
//

#import "OpenCVWrapper.h"
#import <opencv2/opencv.hpp>

@implementation OpenCVWrapper

@end

8. In order to use C++ inside Objective-C (OpenCV is written in C++, and C++ cannot interface directly with Swift), you need to change the file extension from OpenCVWrapper.m to OpenCVWrapper.mm.

9. Add #import "OpenCVWrapper.h" to the bridging header:

//
//  Use this file to import your target's public headers that you would like to expose to Swift.
//

#import "OpenCVWrapper.h"

10. Click New -> File -> New File, select Prefix Header, and create one.

11. And add the #ifdef __cplusplus guard with the OpenCV include to it:

//
//  PrefixHeader.pch
//  OpenCV Test
//
//  Created by Alexandru Ilovan on 31/10/2019.
//  Copyright © 2019 S&P. All rights reserved.
//

#ifndef PrefixHeader_pch
#define PrefixHeader_pch

// Include any system framework and library headers here that should be included in all compilation units.
// You will also need to set the Prefix Header build setting of one or more of your targets to reference this file.

#ifdef __cplusplus
#include <opencv2/opencv.hpp>
#endif

#endif /* PrefixHeader_pch */

12. Go to your project navigator. Under Build Settings, search for Prefix Header and add the correct path for your .pch file. It should be "$(SRCROOT)/PrefixHeader.pch" or "$(SRCROOT)/YOUR_PROJECT/PrefixHeader.pch".

13. Now you can add methods to the OpenCVWrapper for your image processing and call them from Swift. To test it out, we'll show you some code snippets that take an image and convert it into a matrix.
In OpenCVWrapper.mm, add the matFrom and imageFrom methods, which we will mark private with a #pragma mark Private. Don't worry about the implementation details; they basically take an image and convert it into a matrix of pixels, and back.

//
//  OpenCVWrapper.mm
//  OpenCV Test
//
//  Created by Alexandru Ilovan on 31/10/2019.
//  Copyright © 2019 S&P. All rights reserved.
//

#import "OpenCVWrapper.h"
#import <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

@implementation OpenCVWrapper

+ (NSString *)openCVVersionString {
    return [NSString stringWithFormat:@"OpenCV Version %s", CV_VERSION];
}

#pragma mark Public

+ (UIImage *)toGray:(UIImage *)source {
    cout << "OpenCV: ";
    return [OpenCVWrapper _imageFrom:[OpenCVWrapper _grayFrom:[OpenCVWrapper _matFrom:source]]];
}

#pragma mark Private

+ (Mat)_grayFrom:(Mat)source {
    cout << "-> grayFrom ->";
    Mat result;
    cvtColor(source, result, COLOR_BGR2GRAY);
    return result;
}

+ (Mat)_matFrom:(UIImage *)source {
    cout << "matFrom ->";
    CGImageRef image = CGImageCreateCopy(source.CGImage);
    CGFloat cols = CGImageGetWidth(image);
    CGFloat rows = CGImageGetHeight(image);
    Mat result(rows, cols, CV_8UC4);
    CGBitmapInfo bitmapFlags = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = result.step[0];
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image);
    CGContextRef context = CGBitmapContextCreate(result.data, cols, rows, bitsPerComponent, bytesPerRow, colorSpace, bitmapFlags);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, cols, rows), image);
    CGContextRelease(context);
    return result;
}

+ (UIImage *)_imageFrom:(Mat)source {
    cout << "-> imageFrom ";
    NSData *data = [NSData dataWithBytes:source.data length:source.elemSize() * source.total()];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGBitmapInfo bitmapFlags = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = source.step[0];
    CGColorSpaceRef colorSpace = (source.elemSize() == 1 ? CGColorSpaceCreateDeviceGray() : CGColorSpaceCreateDeviceRGB());
    CGImageRef image = CGImageCreate(source.cols, source.rows, bitsPerComponent, bitsPerComponent * source.elemSize(), bytesPerRow, colorSpace, bitmapFlags, provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *result = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return result;
}

@end

There is also a method for transforming the colours of the matrix to grey (_grayFrom, above), and finally the public toGray method under #pragma mark Public, which chains the helpers together:

+ (UIImage *)toGray:(UIImage *)source {
    cout << "OpenCV: ";
    return [OpenCVWrapper _imageFrom:[OpenCVWrapper _grayFrom:[OpenCVWrapper _matFrom:source]]];
}

Also, don't forget to add the method header to OpenCVWrapper.h:

//
//  OpenCVWrapper.h
//  OpenCV Test
//
//  Created by Alexandru Ilovan on 31/10/2019.
//  Copyright © 2019 S&P. All rights reserved.
//

#import <Foundation/Foundation.h>
#import "OpenCVWrapper.h"
#import <UIKit/UIKit.h>

NS_ASSUME_NONNULL_BEGIN

@interface OpenCVWrapper : NSObject

+ (UIImage *)toGray:(UIImage *)source;

@end

NS_ASSUME_NONNULL_END

OK, next, go to Main.storyboard, add an imageView and a button, then add a stock image to the assets and set it on the imageView.
Next, connect the IBOutlets like so and call the toGray method from the OpenCVWrapper:

//
//  ViewController.swift
//  OpenCV Test
//
//  Created by Alexandru Ilovan on 31/10/2019.
//  Copyright © 2019 S&P. All rights reserved.
//

import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var saltImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    @IBAction func didPressedButton(_ sender: Any) {
        let grayImage = OpenCVWrapper.toGray(saltImageView.image!)
        saltImageView.image = grayImage
    }
}

Finally, run the app. If you tap the button, it should greyscale the image. You've done it! There you have it: an OpenCV Swift wrapper with one of the most basic operations you can do in image processing. Have fun and keep on learning!

Originally posted on https://saltandpepper.co/2019/11/11/opencv-swift-wrapper/ by Alexandru Ilovan, Mobile Lead
https://medium.com/salt-pepper/opencv-swift-wrapper-6947ba236809
['Salt Pepper']
2020-06-18 14:36:49.469000+00:00
['Mobile', 'Development', 'How We Learn', 'iOS']
How to Make a Festive Vegan Meringue Wreath
Okay, let’s talk strategy — this meringue wreath is much, much easier than it looks, but it requires some organisation and careful attention to the instructions — I definitely recommend reading through the whole recipe first before starting it. The ideal process is as follows: On the morning of the day before you intend to eat the meringue wreath, chill the aquafaba and coconut cream. Later that evening, make the meringue, and while it cooks, make and chill the topping. I also recommend using this time to extract the pomegranate seeds, which you can store in a sealed container in the fridge till required. The cooled meringue can be simply kept in the oven until needed, but if you’re understandably going to be using it for other things, it can be carefully transferred to a sealed container and kept in the fridge. The next day, just before serving, transfer the meringue wreath to a plate, and cover with the prepared toppings. I love a little Christmas Eve late-night baking, but you could make this two days ahead — keeping the cooked meringue and the topping in sealed containers in the fridge. Either way, it still needs to be decorated right before serving. With everything prepared, this should a swift and serene undertaking. Feel free to change the decorations to suit your needs, but the sour crunch of pomegranates and gentle zing of the raspberries is perfect against the creamy sweetness below, and the pistachios look gorgeous against all that red. Strawberries, cherries, or redcurrants would all be equally visually effective. If you can’t find pomegranates, just use more berries —and I am yet to find pre-packaged pomegranate seeds which taste any good but if you can find some, feel free to use those instead of fresh. If you can only get hold of frozen fruit, allow it to thaw first in a sieve over a bowl, otherwise their liquid will dissolve the meringue. The important thing to know is, you can definitely achieve this, and however it turns out, it will be delicious. I made my meringue wreath on the most humid day of the year, with the air as thick as a kale smoothie, and it still turned out perfectly. The wreath broke a little at the seams as I transferred it to the plate, and yours might too, but you cannot tell once it’s pushed back together and covered in the toppings. And I accidentally crushed one of the meringues as I spread over the coconut cream, but again — it does not matter, and everyone will love it. Vegan Meringue Wreath Serves 10 (or one ;)) 3/4 cup aquafaba from a 15oz can low-sodium chickpeas, chilled 1/4 teaspoon cream of tartar 1 cup sugar 2 tablespoons cornstarch 1/2 teaspoon vanilla extract To decorate: 1 x 15oz can full-fat coconut cream, chilled for at least four hours (check the ingredients — there should be at least 70% coconut extract. Thickeners are fine.) 1 teaspoon vanilla extract pinch salt 1 pomegranate 1 cup fresh raspberries 1/4 cup shelled pistachios, roughly chopped 4 x mint sprigs, or as many as you prefer
https://medium.com/tenderlymag/how-to-make-a-festive-vegan-meringue-wreath-d40222ade5e3
['Laura Vincent']
2020-12-24 18:43:41.009000+00:00
['Vegan', 'Recipe', 'Christmas', 'Dessert', 'Food']
The impact of incentives on SMS/text message survey response rates on an mHealth platform in Southern Africa
AUTHORS: Charles Copley, Eli Grant

Photo by Jen Theodore on Unsplash

Surveys are often used to get feedback from users of a service. However, the kinds of people who respond to surveys might not be the kind of people whose feedback you really want! This is known as selection bias. Reducing this bias requires increasing the proportion of people who respond. One way to do so is to offer financial incentives. Another might be to vary the length of the invitation message: perhaps someone can process a shorter message more easily, making them more likely to respond.

To test this, we put together an SMS experiment that invited mothers to participate in an online survey of a mobile health platform in South Africa. The experiment randomly assigned invitees to different forms of incentivization, both in mode (i.e. guaranteed fixed amount vs lottery) and in amount, which ranged from R0 to R50. We expected that the incentive amounts would increase response rates; however, we also needed to assess how the incentives might affect the accuracy of the survey responses. In addition to the incentive amounts, we also randomly allocated people to receive invitations of different lengths.

In order to test how incentives affected the validity of a person's response, we embedded three questions with known answers into the survey. The questions we chose were:

In which province did you register? (There are only 9 provinces in South Africa, so this is a very easy question to answer correctly, and the response does not say anything personal about the person.)

What is your age? This is a more identifying question that users may not want to answer if they do not trust the service. It tests how likely a user is to answer identifying questions correctly.

Who registered you on the service? On this health platform women are registered by mobile phone. This is done either on their own phone (with assistance from a health care worker) or on the health care worker's phone, which may be done at a later stage from a written register provided by mothers in the waiting room in order to save time. This is a more difficult event to remember, and so tests the recall bias of survey responses.

The experiment was designed as shown in the diagram below:

Incentive Type

For this study we randomly allocated participants to receive differing types of incentives. The difference is clear from the invitation messages given below:

None: Is the service helpful to you? Please take a survey to help us improve the service. Your identity will stay private. Answer the survey questions by replying with the number that matches your choice (it's FREE). If you have any questions about the survey, reply to this SMS. Want to start the survey? Reply 'JOIN'.

Lottery: Is the service helpful to you? Help us improve by taking a quick survey (it's free). By participating you stand a 1/5 chance to WIN R50 airtime! Your identity will stay private. Reply 'JOIN' to start.

Fixed: Is the service helpful to you? Help us improve by answering 8 quick questions. When you finish we'll give you R50 airtime! Your identity will stay private. Reply 'JOIN' to start.

As can be seen below, we found that the Fixed invitation produced significantly higher response rates as well as completions. What is more, we found that the lottery invitation did not produce a higher response rate than a non-incentivized survey request.
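(A side note for readers who want to check differences like these themselves: response-rate comparisons between two invitation arms can be tested with a chi-squared test on the two-by-two table of responders versus non-responders. The snippet below only illustrates the method; the counts in it are made-up placeholders rather than our study data, and the helper function is our own naming.)

# Illustrative sketch only: the counts below are placeholders, NOT the study's data.
from scipy.stats import chi2_contingency

def compare_response_rates(responders_a, invited_a, responders_b, invited_b):
    # 2x2 table: rows = invitation arm, columns = responded / did not respond
    table = [
        [responders_a, invited_a - responders_a],
        [responders_b, invited_b - responders_b],
    ]
    chi2, p_value, dof, _expected = chi2_contingency(table, correction=False)
    return chi2, dof, p_value

# Hypothetical example comparing a 'Fixed' arm with a 'None' arm
chi2, dof, p = compare_response_rates(180, 1000, 120, 1000)
print("X2 = %.3f, df = %d, p = %.4f" % (chi2, dof, p))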
Incentive Amounts In this study we randomly allocated people to receive differently communicated amounts as well as differing airtime incentive amounts (between R5 and R50 as seen in the diagram below. As shown in the diagram above, the lottery style did not increase the response rate with increased incentive amounts! On the other hand, we did find that the response rates increased when increasing the fixed incentive amounts. The rate of increase is not linear over this range. Invitation Length A final test that we embedded into the survey was whether the invitation length affected the response rate. We had two invitations of this form e.g. Is the service helpful to you? Please take a survey to help us improve the service. Your identity will stay private. Answer the survey questions by replying with the number that matches your choice (it’s FREE). If you have any questions about the survey, reply to this SMS. Want to start the survey? Reply ‘YES’. (54 words) (FIRST MESSAGE) Is the service helpful? Please help us improve by answering 8 questions for FREE. Your identity will stay private. Want to start the survey? Reply’YES’. To skip the survey reply ‘NO’. Replies are FREE. (33 words) (SECOND MESSAGE) You will now receive the questions. Answer by replying with the number that matches your choice (it’s FREE). If you have any questions about the survey, reply to this SMS with your question. (33 words) As can be seen above, there is weak evidence (p=0.054) that a shorter invitation is more likely to generate a survey response. Effects on response accuracy In order to test whether incentives had an effect on response accuracy we embedded three questions into the survey, that we already knew about the participants. The overall response accuracy rates to the different questions is given below: We then investigated whether incentivization (of any kind) increase the rate of false response. This is given below: We found that incentivization only had a significant impact on the question “How were you registered?” (X² = 4.6105, df = 1, p < 0.05), Conclusion When we started this experiment, most people at our organisation believed that incentivizing people to respond to surveys would heavily skew our results. We found that while this is true, it is not as strong an effect as most people had assumed. We also found that lottery style rewards were often the de facto method used in our services to increase uptake given limited budget. However our findings showed that lotteries have no effect on uptake of services. It is always important to question assumptions.
https://medium.com/patient-engagement-lab/the-impact-of-incentives-on-sms-text-message-survey-response-rates-on-an-mhealth-platform-in-8e850ae2b08
['Charles Copley']
2019-10-02 14:06:44.460000+00:00
['Startup']
How to Clean your Mind from Negative Thoughts and Embrace your Positive Thinking
Our mind doesn’t stop. Human beings think at the rate of 350 to 700 words per minute, but when we speak or listen we do so only at 150. What does that mean? That the largest number of words are the ones we say to ourselves. We have thousands of thoughts a day. Sometimes we are aware of them, sometimes they are like background noise. Sometimes we think about things that happened, sometimes about things that are happening to us now, sometimes about things that are yet to come. What happens in our mind has a lot of weight in our state of mind. That is why it is so important to clear our minds of negative thoughts. Thoughts and emotions are closely related. Emotion is not a thought, but through our thinking, we can manage it and moderate its intensity, as well as choose the state of mind in which we will find ourselves later. If we are always thinking, it is logical that during the day we feel many emotions. The problem comes when we are overwhelmed by a torrent of bad thoughts. Only by being aware of our negative thoughts can we take steps to deactivate them. Here are the most common negative thoughts and an antidote to neutralize them.
https://medium.com/change-your-mind/how-to-clean-your-mind-from-negative-thoughts-and-embrace-your-positive-thinking-cf36f6def95c
['Desiree Peralta']
2020-12-14 13:17:46.699000+00:00
['Mental Health', 'Personal Development', 'Self Improvement', 'Advice', 'Inspiration']
Understanding Consumer Behavior During COVID-19
Understanding Consumer Behavior During COVID-19

PurchaseLoop Research Insight Study, March 2020

By Ryoji Iwata via Unsplash

Introduction

While the events of COVID-19 have been transforming the world, LoopMe has been staying on top of consumer sentiment. Using our proprietary PurchaseLoop Research platform, LoopMe (one of our portfolio companies) surveyed 8,000 people within the LoopMe audience pool across 8 global markets for the week ending March 27, 2020. We then analyzed that global sentiment data against our DMP across 320 dimensions of data to produce insights that are shaping our new ecosystem. In this report, you'll find the key findings from those cross-sections to help you understand consumer sentiment today within an ever-changing landscape.

The four areas of sentiment explored by the PurchaseLoop Research survey are the following:

Media consumption index — what kind of media are you consuming most this week?

Buying index — thinking about your total purchases this week, how did your spending compare to last week?

Aspirational index — what do you aspire to do when the COVID-19 crisis is over?

Outlook index — what is your current outlook around COVID-19?

LoopMe data analysis

In addition to the market research surveys, we took a look at the data within the LoopMe DMP to reveal the growth in consumer device behavior now that most areas have adopted stay-at-home restrictions. Over the last two weeks of March, LoopMe's audience platform has grown significantly as more people increase their screen usage and TV consumption. We are seeing more global scale, more activity, and more net new profiles to reach. Furthermore, we've seen a decrease in devices leaving our audience platform, showing consistency in increased reach figures.

LoopMe has seen a 9% increase in reach, showing an increased total scale of 2.4B devices.

LoopMe has seen an 11% increase in reach against active devices, which are devices we've seen at least twice in 30 days. This shows that not only is reach increasing, but it is increasing against actively targetable devices.

LoopMe has seen a 20% increase in net new devices (over 114M), showing a more unique scale.

LoopMe has seen dormant devices decrease by 16%; these are devices we don't see for 30–60 days. This points to sustainable growth of net new and current devices.

Media Consumption Index

LoopMe asked respondents what type of media they were consuming the most this week: Gaming, News/Social Media, Movies, TV Shows, or Reading. We looked at this data holistically, by age and by country, to help us understand any changing trends in media consumption, both at home and abroad, across various demographic sets.

Not surprisingly, News/Social Media is the highest channel of media consumed. Looking at News/Social consumption by country, Singapore (38%) topped the list and France (20%) rounded out the list, though all geographies showed this as a top form of media consumption. Additional expected media consumption trends present themselves, with Gaming and TV Shows indexing high globally. While reading popped most with older demographics, it did index as the lowest preferred media vertical globally. This supports the pre-COVID-19 trend of increased time in front of screens.
At LoopMe, we’re expecting the current media trends to continue to see growth, such as increased video viewing & gaming engagement, rather than disruption among consumer media consumption patterns, but we’ll keep an eye on this as the weeks roll on. Buying Index LoopMe asked respondents how their total purchases for this week compared to last week to understand their buying sentiments. The possible responses were more, less, or the same. Regardless of country, age or gender, one trend loomed over the rest: overall spending is down. Where certain sectors like CPG/FMCG/OTC-Pharma and other daily-life essentials are flying off the shelves, general spending is down. Discretionary spending isn’t a priority for most of the globe, mirrored by dipping financial markets, store closings, unemployment rising, and possible pending economic recession. We haven’t seen any areas where people reported spending more week over week. Brands that work with LoopMe are noticing the same trends, and are acting on it. We're seeing brands mirror these consumer trends across our platform, shifting from product-based messaging to brand-based awareness campaigns. KPI’s are shifting from in-store foot traffic and towards attitudinal metrics. As seen throughout past economic downturns, maintaining brand awareness while consumer spending is down has been proven successful for brands in driving performance when spending picks back up. We encourage clients to engages with our team about attitudinal goals that can help elevate your brand from the noise during this unprecedented time. Aspirational Index LoopMe wanted to know what our respondents were aspiring to do after the events of COVID-19 resolve and we re-emerge back into society as we had before. Response options included purchases of high (car, house, etc.) or medium value (phone, television, etc.), returning to socializing outdoors, travel, or not to change the lifestyle they knew a few months back. Overwhelmingly and surprisingly, across most markets the top choices include: Return back to their lifestyle: United States survey respondents (50%) topped the list for this option. Spend more time socializing outdoors: Italy (52%) and France (49%) topped the list for this option. The more notable insights we uncovered here include the global aspirations around travel and buying behavior. Globally, all respondents aspire to travel ahead of making a purchase. Wanderlust is setting in as people are in lockdown. The UK, US, and Canada were the most conservative in terms of aspirational buying intent compared to EMEA and APAC. APAC regions surveyed have the highest aspiration to travel and buy medium and large ticket items. Outlook Index LoopMe wanted to not just include media consumption and purchase behaviors in this study, but also look at the psyche of our global respondents to see how optimistic they felt around the globe. Depending on their current phase in the COVID-19 pandemic will help uncover how sentiment is changing as various parts of the world are impacted. Through basic responses of Good, Bad, Neutral Opinion, we uncovered the following insights around that topic. While globally the unanimous outlook was pessimistic, some countries with early prevention measures and strong political measures have shown more optimism than others. For example, Germany had a higher positive response than a negative one. Comparatively, the US, UK, Canada, Singapore, and France all showed +20% difference favoring a negative outlook. 
Hong Kong was a focal point of the virus spread but now shows under 50% of respondents with a negative outlook. Conversely, Singapore has become more recently affected and is showing 8% points higher in negative response rates. This could point to people returning to confidence after weathering the anticipated worst of the storm. To further illustrate this trend, US, Canada, France, Singapore, and UK are also leading the way in a negative outlook as the pandemic has reached their shores ate later dates. All are above 50% in a negative outlook. Those countries that are under 50% in negative outlook are Germany, Italy and Hong Kong. In Conclusion Media Consumption is Now a Cross-Screen World Now more than ever media is being consumed in households at staggering rates. News, social media and gaming are on the rise, with device usage growing. The ‘Stay At Home’ mandate is forcing businesses and consumers’ lives to adapt to an always-connected, virtual, online world. Marketers need to shift messaging to adapt to this new reality. There is a great opportunity to learn more about consumers in the home like never before and smart brands will tap into the audience insights now. Purchasing Behavior is a Now an Uncharted Course Yesterday’s purchasing behavior doesn’t matter in our new reality. While overall spending is down, how and what consumers are looking to buy –– or what they are able to obtain –– changes by market and by day. Real-time data is critical to understand audiences buying intent and current purchase data. Outlook and Aspirational Goals Vary by Market In countries that have better control around COVID-19, we have seen more positive outlooks from consumers –– as we’ve seen in Germany –– whereas in countries or regions where the surge is coming sentiment was less positive. Marketers have an opportunity to lean into purpose-driven messaging to calm uncertainty among their consumers. Authenticity will help maintain relationships with their consumers. Additionally, while the travel industry saw an immediate impact from the COVID-19 pandemic, our survey reveals that aspirationally, consumers would like to travel as soon as they can, providing an important indicator for this vertical.
https://medium.com/dataseries/understanding-consumer-behavior-during-covid-19-af078596e656
[]
2020-04-10 12:12:13.281000+00:00
['Covid 19', 'Marketing', 'Consumer Behavior']
A generative oracle in a few lines of code using DeepPavlov
Recently the CISS 2 summer school on deep learning and dialog systems took place in Lowell. It was organized by the Text Machine Lab of UMass Lowell and Neural Networks and Deep Learning Lab at MIPT (iPavlov project), Moscow. People traveled from around the world to the school, to meet top-notch researchers and learn the state of the art. Besides the lectures and tutorials, the school held a competition between the participants, concerning team projects that we should carry out during the school. There was limited time to work on the team projects since the lectures and tutorials took up most of our days. Hence efficiency was a huge matter, and teams were worried about managing to have a working dialog system at school’s end. We told a little about the experience in this post. Manuel, Beatriz, and Estevão, left-to-right Our team was composed by Estevão Uyrá, Beatriz Albiero, and Manuel Ciosici. Estevão and Beatriz are from Brazil and work together at the Serasa Experian DataLab. Manuel, originally from Romania, was just finishing his Ph.D. in Denmark at Aarhus University. In the end, our team wrote a simple fortune telling chatbot that placed second in the competition. In this post, we will describe how we managed to create a complete chatbot powered by deep learning in just a few lines of code. Because we had lots of DeepPavlov experts at the summer school, we decided to follow their track of the tutorials. This was in some way a risky decision since we knew nothing about the library, but it really paid off. We knew from the beginning that we wanted to build a question-answering bot, but beyond that, we didn’t really know exactly what kind of conversation we wanted it to perform. Since time was short, we decided to start quickly from something simple and rethink our final product later on. Iterating fast We started by implementing a simple system to identify if a given text span from Wikipedia contains an answer to a given question. For this, we used the SQuAD 1.1 data set, which we pre-processed into triples of (question, sentence, flag). The flag indicated if the sentence text contains the answer to the question, and was used as the label. Put simply, we reduced the original problem that is to predict the precise span of the answer in the text into the binary classification problem, aiming to understand better how difficult was the problem, get some grasp of the dataset, and start coding. We then used DeepPavlov to download pre-trained GloVe embeddings using the DeepPavlov GLoVe embedder. In four lines we can embed each sentence in our dataset as the mean vector of its word embeddings, first downloading the model and then using it. Interestingly to note, the GloVeEmbedder class is able to use any file in the simple GloVe format, meaning that user-created embedding can also be used (see more in the docs). We concatenated question and sentence and inputted this to a shallow feed-forward neural network consisting of a layer with ReLU units followed by a sigmoid unit for which we used Keras in TensorFlow. Due to the straightforwardness of DeepPavlov and Keras, we only needed to write a few lines of code, which gave us more time to understand the problem we were working on. The precision value of .4 we found informed us that the task was indeed hard, and it would be challenging to train our own custom model to perform well enough to be the brain of a bot, especially since our computing power (and time) was short. 
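The notebook snippets for this step were embedded in the original post; as a rough, hedged reconstruction of what the baseline looked like (the GloVeEmbedder import path and call signature vary between DeepPavlov versions, and the layer size here is a placeholder rather than the exact value we used), the code was roughly:

# Rough sketch of the baseline described above, not the exact original code.
# Assumption: GloVeEmbedder lives at this import path and, when called on a
# list of token lists, returns one vector per token; check the DeepPavlov docs
# for your version.
import numpy as np
import tensorflow as tf
from deeppavlov.models.embedders.glove_embedder import GloVeEmbedder

embedder = GloVeEmbedder(load_path="glove.6B.100d.txt")  # any file in the simple GloVe format

def sentence_vector(tokens):
    # Mean of the word vectors for one tokenized sentence
    vectors = embedder([tokens])[0]
    return np.mean(vectors, axis=0)

def make_features(question_tokens, sentence_tokens):
    # Concatenate the mean question vector with the mean sentence vector
    return np.concatenate([sentence_vector(question_tokens),
                           sentence_vector(sentence_tokens)])

# Shallow feed-forward network: one ReLU layer followed by a sigmoid unit
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(200,)),  # 200 = 2 x 100-d GloVe
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision()])
# model.fit(X, y, ...) over the (question, sentence, flag) triples built from SQuAD 1.1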
After having gained this appreciation for the question answering task, we decided to use pre-trained question answering models. Luckily, DeepPavlov comes with pre-trained question answering systems trained on SQuAD, so when we wanted a proper question answering system, we could get one in just two lines of code. Downloading, in this case, didn’t even need a separate function, as all dependencies (char embeddings, word embeddings and model weights) are downloaded in the first run and stored for later ones. Before trying the DeepPavlov model, we were ready to spend most of a day making the pre-trained model work and were thinking of our next step to move the model into Telegram. In reality, the model worked right out-of-the-box and we had a full extra day of coding to do. Calling model with a background context and a question, like we did above, returns a span from the context that answers the question together with the starting character of the span, and a confidence score. Moreover, the response time was under a second. Aiming high We decided we had time to make a fortune-telling bot, by adding to our Q&A capability an extra pre-trained language model that could generate text on its own. The language model should generate coherent predictions about the future. Our get_future_prediction function does exactly that, by leveraging GPT2 capacity to generate coherent text after some seed text. The user’s name is provided as a parameter to the function which then creates a starting text containing some seed tokens and a starting sentence containing the user’s name. We use this as a starting point and generate a continuation of the sentence using GPT2. To prevent GPT2 from drifting off-topic, we only generate 80 tokens of text, after which we remove trailing tokens so that we have complete sentences. We put our starting text and the generated continuation text back into GPT2 as starting text and generate 80 tokens more. We go through this process until we have a 400 long text that usually looks like a vague future prediction that an oracle would make. We got the pre-trained GPT2 model from the official OpenAI implementation and had to spend some time making it work. After some adjustment, the model generated futures in approximately 2 min, which we considered enough to our purpose. We just made our telegram bot ask the user to wait, and then send a message when the future was ready to be questioned. The Oracle Time to put it all together. An overview of our bot’s architecture is shown in the image. When a user starts a conversation with our oracle, the chatbot starts by generating a future prediction that is 400 characters long based on the user’s name and some standard fortune-telling sentences. Once the future prediction is generated, the chatbot uses this as the background context to answer questions about the user’s future. This is where the DeepPavlov pre-trained SQuAD question answering system comes in. It takes the user’s question and the generated text, tries to find a span of the generated text that appears to answer the user’s question, and then sends the answer back to the user. The entire source code for the bot is in the GitHub repo. Generation of the oracle predictions (GPT2 text generation) is quite resource consuming, so instead of running locally, we ran it on Colab. If you run the notebook on a computer with a CPU only, expect several minutes to be spent on this task. 
We recommend you either run the notebook on a computer with a GPU or use Google Colab and select a GPU run time, which you can use for free. If you try the bot alone, 2 min is more than enough to generate a future. With many people, as we discovered in our live demo, it may take a lot more. That’s it. You now have your own personal fortune-telling oracle chatbot in a few lines of code. Beware, because you may not want to see what the future holds.
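For reference, the pre-trained question-answering piece described above really does boil down to very little code. The sketch below is a hedged approximation: the config name, download behaviour, and return format should be checked against the DeepPavlov documentation for your version, and the example strings are obviously made up rather than taken from the bot.

from deeppavlov import build_model, configs

# The first run downloads the char embeddings, word embeddings and model weights
qa_model = build_model(configs.squad.squad, download=True)

# In the bot, the context is the future text generated with GPT-2;
# here it is just a placeholder string.
future_text = "Manuel will travel to a warm place and reunite with an old friend."
question = "Where will I travel?"

# Returns the answer span, the index of its first character, and a confidence score
answers, start_positions, scores = qa_model([future_text], [question])
print(answers[0], start_positions[0], scores[0])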
https://medium.com/deeppavlov/a-generative-oracle-in-a-few-lines-of-code-using-deeppavlov-c76a7fe4c690
['Estevão Uyrá Pardillos Vieira']
2019-07-24 00:20:57.068000+00:00
['Deep Learning', 'Community', 'Artificial Intelligence', 'Machine Learning']
Look forward to this.
Look forward to this. Looks to me this is the next gen of social media – a frictionless sharing of momentariness, that is at once not contrived or carefully curated yet well considered. This is where the “phone” as my natural extension of the hand, mouth, mind, ear, eye will play its role in creating stories. Stories that are spontaneous, true, transparent, momentary yet a memory. Looks like Hardbound has figured _that_ space well. Can’t wait to see.
https://medium.com/thoughts-philosophy-writing/look-forward-to-this-d9217c15c143
['Arindam Basu']
2016-04-13 20:13:54.797000+00:00
['Hardbound', 'Storytelling']
“Mama in the Time of Corona” or “This is your SCHOOL on Corona”
GREAT. NOW I’M DEPRESSED AND IRRATIONAL: WHAT WERE THOSE REASONS AGAIN? Photo by Anthony Tran on Unsplash 1.) Is it true that the mental health of our kids is better in person no matter what? Experts do not agree. a.) As Robin Fierstein, PSY.D., of Philadelphia posted on Facebook: “as a child and family therapist, I strongly disagree with the arguments that schools should re-open for children’s emotional health.” She calls opening schools back up “short sighted and illogical,” and cites examples of why: “children [potentially] experiencing more deaths of loved ones, friend’s loved one’s and community members; having to obey rigid and developmentally inappropriate behavioral expectations” (like social distancing, and wearing masks for hours); restricting peer engagement when peers are “right in front of them”; meeting educational standards amid all the changes; and the lack of predictability as Covid possibly takes staff members and/or classmates away. If kids are largely in school, what does that do to the rate of future infection? b.) Balanced against these health concerns are the educational needs of children. Experts agree students clearly learn better in schools, and in-school class is best practice. The reality in many parts of Texas (and elsewhere) is that kids learning remotely will be at an enormous disadvantage as some will struggle just to get online. For many kids, being in school also means access to food, medical care, and safety from abuse. Keeping schools closed [in any state] for a prolonged stretch has worrisome implications for social and academic development, some child development experts say. OUR FAMILY CONCLUSION: While kids clearly learn best in an in-class setting, and that is our ultimate goal, my job is freelance. I am able to be home. My kids can get online and get help from me when/where they need it. It’s not a long-term solution, but still early in the academic year, for us the health interests overshadow the academic/social-emotional concerns. (Though they are still enormous concerns.) 2.) As infections are inevitable, what is in place to mitigate exposure? Photo by Izzy Park on Unsplash a.) I just have to start here: I am exceedingly aware of the stark inequity when comparing our private school resources with that of most public schools. The discrepancy is real and it isn’t fair. I feel gross about it. But the unfair truth is, our school has resources such that they have gone above and beyond in terms of safety measures and specific protocol should outbreaks occur. They have planned for the possibility of shutting school down throughout the year as positive cases happen. They have set up a covered outdoor space, improved the ventilation system, and installed hand washing stations. We have lower student to teacher ratio in general, and walls have been knocked down in places to create bigger classrooms. I am keenly aware these changes are luxuries to which not everyone is privy. b.) Even with extensive precautions, there is no guarantee that the virus won’t spread in our school community. Children tend not to get exceedingly ill from the virus or appear to contract it as often, true, but children also show higher concentrations of the virus (from a nose test, for example). Do children just have better immune systems to combat the virus, and thus, even if they are exposed, don’t result in serious symptoms? Maybe. 
So parents sending kids to school, fairly confident their kids aren’t going to contract the virus, or that if they do, it’s not such a big deal, are probably right. For kids anyway. But if kids tend to be less symptomatic, and thus unaware of spreading the virus, will eventual exposure end up having a disproportionate effect on adults (teachers and staff)? If asymptomatic kids spread the virus unknowinly, there can be serious repercussions if adults contract the virus and become seriously ill. If that happens, will adults die? What would the effect be on the community were that to happen? How do we weigh that with the social/emotional advantage of in-class instruction? OUR FAMILY CONCLUSION: While we have been utterly impressed with all our school has done to prepare for possible exposure, the inability to control for the spread due to possible asymptomatic students, for one, doesn’t seem at this point, worth the risk. We will have more information over time to re-evaluate. 3.) What are the long-term effects on adults and/or children who get the virus? Photo by CDC on Unsplash a.) This one for me may be the kicker: we can’t know the long-term implications for anyone who contracts the virus — not even for those asymptomatic or who experience only mild symptoms. Even when someone has contracted the virus and came through it, for some there have been lingering cardiovascular, respiratory, and kidney issues. We currently have no idea what this means long term. b.) Reports in adults increasingly suggest that death is not the only severe outcome. Many adults seem to have debilitating symptoms for weeks or months after they first contract the virus. Which leads to the question, are kids who are infected vulnerable to those long-term consequences as well? OUR FAMILY. CONCLUSION: The only answer here is a big fat question mark. Although, as one of my dear friends reminds me, we don’t know the long- term effects of anything really, including breathing the air or ingesting whatever chemicals all day every day and that hasn’t kept us at home. I hear this and I get it. There are risks associated with stepping outside our door. The possibilities are endless. But time is going to answer some of these questions, and this question mark for us just feels too big. 4.) What is the most important factor in deciding whether children go back to school? a.) Researchers seem to agree that community transmission is the most important factor when deciding whether or not children return to school. As one doctor put it, “We just can’t keep a school free from the coronavirus if the community is a hotbed of infection.” b.) Dallas is as of now, down to orange designation, down from red and moving in the right direction. However, when we made the call, Dallas was still in the red. “STAY HOME STAY SAFE” seemed pretty clear. Obviously different zip codes have different numbers of cases, and ours were lower. Still, that red risk level was a line for us. Photo by CDC A COUPLE OF FINAL POINTS 1.) I have NO JUDGEMENT about anyone who decided (or decides) to send their kids to in person school. It was a tough and personal call for every family. It still is. Without clear information, it can feel like a crap shoot. Some help at the federal level would be peachy, but that’s another post! 2.) Gross inequities are at play here in our decision and we know it. 
As my children and I enjoy the snow or miss our friends or both, we are beyond lucky to have the choice we are afforded by attending a school that is able to provide exceptional safety measures. I feel like shit about this. But does feeling guilty and shitty help? Nope. Not any more than white privilege guilt helps the African American community. Individually, I need to find ways to help kids who do not have the amenities we do. So far, some volunteering has filled that description, but now that school is on, continuing that remains an open question. I know it isn’t nearly enough. Just for the hell of it at this point, the CDC has several charts on their website aimed to help parents decide what makes sense for their family and school. Still interesting, to a geek like me anyway, to look at the breakdown even if you’re decision is made https://www.cdc.gov/coronavirus/2019-ncov/community/schools-childcare/decision-tool.html. www.cdc.gov/coronavirus No matter what your choice, be safe and careful out there. Can’t wait to meet you in person. WRITTEN BY Freelance writer of plays/short stories/poetry/narrative non-fiction; lover of humor, chocolate, prat falls, my children and husband (in no particular order..).
https://medium.com/write-speak-play/mama-in-the-time-of-corona-or-this-is-your-school-on-corona-3ada34987d38
['Erin Ryan Burdette']
2020-09-17 20:28:05.276000+00:00
['Schools', 'Parenting', 'Kids', 'Decisions', 'Coronavirus']
Twelve Terrific Stories From Writers Worth Your Time
I got to know Aimée from devouring her posts and realizing they resonate. Deeply. Editor of Age of Empathy, I love her intuitive approach to writing. A piece she has shared with us is one of her many strong suits — fiction. Melissa brings positivity and cheer through her pieces and that’s what makes you gravitate toward her. She shares other writers’ works in her Engaged Writers series, and I would love to pay it forward. I admire his drive and discipline when it comes to writing. What’s charming about him is that he’s hilarious but is oblivious to it. He writes many inspirational posts, but here he has shared one humor piece you need to read for a good laugh. Ryan is a sweetheart. And a machine. I think he is one of the youngest writers on Medium but the most prolific. I don’t know how he does it, but he consistently writes quality content daily on every topic under the sun. One area he does well with is true crime. I linger on the words she writes because they’re beautiful. If you are easily moved to tears, get tissues ready for this story about losing her brother. Real, raw, and honest. This is the kind of stuff I love to read. I had the pleasure of working with Roz once. She is self-proclaimed “worth every penny,” and I can attest to that. A writing coach with many years under her belt, I have learned and continue to learn much from her. She makes Medium fun, as you can tell from her voice. She also loves the word “wombat.” Susan is the type of person who wants everyone to grow and succeed. Strong, supportive, and caring; she helped me early on with words of encouragement. I know she can help you, too, with her wisdom! From fiction to poetry, a writer of her caliber can light your screen on fire. I love her work and recently purchased her book Quintessence, waiting for it to brighten my mailbox. This piece about coming out will tug on your heartstrings.
https://medium.com/age-of-empathy/twelve-terrific-stories-from-writers-worth-your-time-bdfce332b807
['Tracy Luk']
2020-12-30 14:23:43.916000+00:00
['Inspiration', 'Writing', 'Aoe Prompt', 'Reading', 'Engaged Writers']
Tokenizing Public Infrastructure Pt 2: Standing the Test of Time
Tokenizing Public Infrastructure Pt 2: Standing the Test of Time Innovation Through Cryptoeconomic Creative Recombination Originally written and published by Steven McKie for the Amentum blog, here. I’m continually thinking a lot about America and how we can improve our country while things socially and politically continue to trudge forward in a state of apparent dysfunction. This is the same statement I made when I published Part 1; months later, that doubt has not receded from my mind, it has worsened. So, I share my thoughts in hopes of an epiphany, an answer. I think as a nation we are close to something truly breath-taking innovation wise. Below I expound my thoughts further, in hopes to spur a similar mindset in others. The tokenization of public infrastructure is a relatively new thought; one bearing vast implications on the mobilization of society as cryptoeconomic primitives (more on this, here) and mechanism design finally approach center stage.. Something that I am proud to continue to expound on more thoroughly following part 1 of this series. Since that post was published, things have gotten pretty exciting. The city of Berkeley in California is considering issuing token-assets backed by city bonds to fund new housing development; even the country of Venezuela has attempted to launch its own token, the “Petro”, which they state is “backed” by the countries’ oil reserves (and they were even sanctioned by the Trump administration further for their digital currency launch, a world-first). I was even personally contacted by a small city in Louisiana, and a major city in California to discuss these ideas further after my first post — cities are looking for sustainable solutions It’s becoming increasingly clear that we’ve reached an inflection point as a species. We know that we are inherently bound by the political and financial interests of those with more power than ourselves. Whether you’re the city of Berkeley, CA trying to fund further improvements in its city by leveraging new tech as the political climate clamps down on sanctuary cities; or whether you’re an impoverished nation with a volatile power struggle and exponential inflation like Venezuela, it’s clear these technologies offer new avenues of acquiring wealth for various, or nefarious purposes. The creation of tokenized assets for cities, countries, has many benefits. But, the most important of all is their ability to allow States and Cities the chance to experiment with new economic and social models, ones that could eventually find their way to being implemented by the U.S. Federal government (if we can work the kinks out). This ability to remix order, incentives, and structure is a very special component unique to cryptoeconomics — I like to call it creative cryptoeconomic recombination. Creative Cryptoeconomic Recombination With the implementation of tokenized assets with embedded micro and macro rewards, you can create economically unique solutions to gamify and create sustainable ecosystems that reward positive social order and behavior. These unique individual systems then become puzzle pieces, that when placed in the right unique combination, creates a picture worth saving. Or said in other terms: With the right economic modeling and incentives, your systems become fixture assets in a local economy, beholden to and empowered by the unique needs of the skilled individuals residing in that particular area. 
As this point, tokenized city and state assets begin deriving their value monetarily, socially, and politically, based on the contributions of that localized economy (learn more on blockchain-based governance, here). That value is then further compounded and speculated upon when viewed from a national, or international lens depending on the type of economic outputs they enable regionally. The greatest impact of tokenized publicly accessible goods and infrastructure is that they enable swift shifts in the political and economic influence of incumbents; thereby creating power where there exists inequity, or citizens financially uninspired and disenfranchised. This is in essence a grapple with authority, wrestling it firmly into the hands of those best suited to manage it economically. By creatively recombining cryptoeconomic primitives, public utilities, services, local business, and community development, we can condition the state to be reinvigorated for the 21st century. These would be enlightened, technical systems that have no obligations to ill-authority, only the stability and support of the people. The great thing about systems like these are their forward-extendability. If the redundant state of them is fairness and equity, then the propensity of future modular improvements and replacements of these systems are more apt to be continuously more efficient, stable and fair. Cryptoeconomic Remixes Just in the last year we’ve seen the evolution and proliferation of Token Curated Registries, Non-Fungible Tokens, ERC-20 assets…the list goes on (check out my recap on ETHDenver for more). The best part of these creations, aside from showcasing just how creative and flexible platforms like Ethereum are, is ideating around how to remix them accordingly for your particular use case. When designing tokens for cities and states, it’s important you construct city incentive mechanisms that maintain a new or existing urban sprawl, and lessen the potential for abandonment, while simultaneously incentivizing new growth. I’ll now explore some of the fundamental tools we’ll need to get there. Geo-Fencing to Incentivize Commerce: If you’re not familiar, geo-fencing is when you enable certain actions and services within a pre-determined geo-locational area. There are some very impressive projects seeking to enable this efficiently by rethinking location attestation (proving you are where you claim) with the assistance Proof of Location focused protocols. By enabling self-sovereign, verifiable locational data, you can create a slew of various applications by utilizing geofencing (more from the FOAM project on these topics, here). One of my favorite personal applications of this would be the invigoration of dilapidated downtown sprawl. Typical look of many U.S. downtowns. A sparse graveyard of uninspiring architecture and middle-class wage dreams. Uninspiring, and often occupied by small-business owners, downtown areas in most of modern America look similar to the above scene. Shopfronts empty, retail spaces with no foot-traffic. And to make these sections of town viable for business again as the world moves to internet specific retail, we need to rethink how we maintain businesses as we go through technological booms and busts. Cities that have their own token-based systems could subsidize tax payments to areas that experience low traffic and usage to certain geo-areas, and make spending that particular asset or sub-assets more powerful by increasing their purchasing power there (i.e. 
most areas of town pay full price for goods, but cities could offer a 1–5% discount for citizens that shop from those geofenced retailers). Dynamic Monetary Spending Mechanisms: If we assign ourselves to the idea that overtime cities will charge fixed transactional fees on their tokenized systems in lieu of taxes (easier accounting, greater economic efficiency), then we can get really creative. If all data is being aggregated and shared (perhaps using zero-knowledge proofs to protect privacy) we could easily determine, in real-time, where to create more incentives or reduce costs to spur commerce. By tokenizing a formulaic approach like Cost-Volume and Profit Analysis, as a business I could reference historical data and make informed decision on where and how I can reduce costs to incentivize my customers. Or, in times of high demand, increase prices to ensure maximum profitability by strategically competing for best-value against your competitors (can’t avoid it, greed is an inherent human function, embrace it). Token Curated Registries for City Planning: Another difficult aspect of maintaining a cities’ future progress and growth is planning. Electing officials to appropriately follow the will and vision of the people to improve conditions is the norm (or rather, that is the intended point of democracy). However, obeying the whims of the majority can be difficult, especially for larger cities. If you’ve not heard of Token Curated Registries (TCRs, we love acronyms in crypto) yet, you can learn more, here. But, put simply, they are a way for stakeholders in a tokenized ecosystem to seamlessly vote and come to a quorum on additions to, or removal of, items in a registry (think yearly budgets, infrastructure initiatives, etc). Trying to decide whether the city budgets for a new park, filling pot-holes, or investing in new books and laptops for a school district? This can be done more easily with a TCR. By allowing the stakeholders (citizens) or elected officials access to the TCR, they are forced to vote on the needs of the people and have it be transparent, or private (depending on the type of accountability necessary). Voting can be done in a matter of hours on key issues, allowing the public to mobilize wealth and time more efficiently, and not let city improvements get lost in the bureaucratic machine. Staking Bounties for City Initiatives: Do you live in a town encumbered with youthful college students and teenagers? Why not put the younger generation to work with real-time incentives? As a citizen, a general purpose digital bulletin board for road adoption, park clean-ups, local event volunteering and more could allow citizens to stake assets or tokens as a reward for the completion of, or continued participation in, activities that support and sustain your local economy. Tools like these could enable a local network of individuals to continuously help each other in real-time. This increase in odd-job economic activity helps build character and skills for the younger individuals, and creates an avenue for the working-class adults to put the money where their mouth is to improve order, and improve their community. Location Attestation and News Coverage: The last example I will provide you all is likely one of the most simple and most important. The news world has changed. 
If you can prove you are where you are with the assistance of Proof of Location-specific protocols, you could ensure that news coverage is locationally accurate, reported by individuals with a monetary incentive to prove they were where they claim to have been when reporting facts. This data (pictures and videos) could be stored historically on the city’s blockchain and referenced by businesses like insurance companies, allowing the real-time estimation of rates that are fair and frequently updated based on current conditions (with respect for people’s privacy). Conclusion There is no right or wrong way to build the tokenized cities of the future. We are limited only by our imagination. But improved efficiency, privacy, security, and the interoperability of these systems are the keys to their long-term sustainability. Help us build a future humanity can be proud of by first reshaping a nation we can all be proud of, together. “Never doubt that a small group of thoughtful, committed citizens can change the world: indeed, it’s the only thing that ever has.” -Margaret Mead, anthropologist, recipient of the Planetary Citizen of the Year Award in 1978. Interested in exploring these ideas and more? I’ve compiled a list for you to get started:
https://medium.com/blockchannel/tokenizing-public-infrastructure-pt-2-standing-the-test-of-time-7acea3b9e13e
[]
2018-05-16 23:41:22.929000+00:00
['Blockchain', 'Sustainability', 'Ethereum', 'City Planning', 'Cryptoeconomics']
I Have Been a Digital Nomad for Over Ten Years — Here’s How to Adapt to Remote’s Work “New Normal”
I Have Been a Digital Nomad for Over Ten Years — Here’s How to Adapt to Remote’s Work “New Normal” The new way of working presents us with numerous magnificent opportunities to live more consciously A regular “work-from-anywhere” office setting at my current location. (Photo: Javier Ortega-Araiza) Back in March, as panic wreaked havoc and we all struggled to understand what the new normal might mean, and where it might go, and how our lifestyles would be affected — in some cases, temporarily; in others, forever — we all came to grips with the acceptance that this was a new world — that how we used to live, and conduct business, had come to a radical turning point — and that the best we could do was to either adapt or perish. For some of us, the main changes came from our businesses being in industries that were directly hit or brought to a halt — hence now we needed to scramble to find alternate ways to make ends meet. For others, who were able to keep their steady jobs, the disruption came from now having to work from home — performing the same activities, but in a different setting. Besides, we had to consider the impact of these disruptions depending on our living situation — if we were with families, partners, or living by ourselves. Each particular case required a different adaptation strategy. There are plenty of articles that talk about whether this shift is good or bad, and that dissect all the pros — positive environmental impact, the ability to spend more quality time with loved ones, people relocating from crowded cities— and the cons — the loss of some sort of serendipity, the irreparable damage to the office worker-dependent economy — and how it might affect creative hubs such as Silicon Valley. All of them are available a Google — or a Medium — search away. This piece is more about how we can adapt, regardless of our situation, and about how we can have a more effective transition to a way of working that, in my opinion, is here to stay.
https://medium.com/build-something-cool/i-have-been-a-digital-nomad-for-over-ten-years-heres-how-to-adapt-to-remote-s-work-new-normal-6690b39907a7
['Javier Ortega-Araiza']
2020-10-14 13:30:41.614000+00:00
['Consciousness', 'Remote Work', 'Entrepreneurship', 'Self Improvement', 'Digital Nomads']
Should I wait another week to have my first user? — Planning a product release
Should I wait another week to have my first user? — Planning a product release When deciding on when to release a new product become the blocker. Let's put us in context, you have been researching and validating this new product. You have identified different users and now you need to plan the execution and release of the product. That's the point this story begins. MVP vs MLP I don't want to go into a lot of details in the differentiation between MVP and MLP, there are plenty of articles about it, the basic idea is that on an MLP you aim to get a few early users who really love your product rather than a ton of users who just like or use it. By building your first MLP you are aiming to also get early promoters, this is something really important as to make them love your product you really need to focus yourself on some specific aspect they appreciate, and some times this is not a functional requirement. This doesn't mean that your functional side should suffer, but maybe you should focus more on getting what thing you should do amazing instead of making a lot of things good enough. Planning what to build, and why to build it It's important that you define what you will be building for your MLP, but it's even more important that you know why are you building it. This is not about quantity, it's about quality and value for your user. You need to focus on building high-value features really well. Identify your early users Talk to all your users, and identify different commonalities across them to be able to segment your user base. Maybe your product public is too big, so it's important that you are conscious of who are you targeting in the early days. Probably that huge client who already has a system in place that they've been using the last 15 years and you are aiming to replace is not your ideal first user, or maybe they are, it really depends on your product and your understanding of your users. Where is the value? Identify where the value is, and exploit it without losing the count of effort required to do it. Focus on providing a great experience and solving the high-impact issues for your user. But be mindful of the relationship between value and effort, you don't want to have a forever in-development product. No value is actually there until it's shipped and used by your user. Photo by Marvin Meyer Planning what to release, and when to release it Besides the planning on execution, you also need to plan on how you'll release the new product. What strategies will you take to handle the release and follow up the early feedback so you can iterate on the current product? Cost of delay There is a cost we sometimes forget, the cost of delay. Not releasing some times can be lethal, all our products have a limited shelf life, and every moment we are holding them, they are losing value. Dave Masha goes through this a little bit more in detail in the following video. The value of getting early feedback The earliest you can release it's the earliest you can get feedback from real users and iterate over your current solution. You might have an idea of what is best for your users, and some hypotheses about them, but getting to observe them using the product by themselves is usually invaluable. Also, you can spot pain points, frustrations, and prioritize the next iteration based on your users' feedback. 
It’s better to learn that a certain interaction is not intuitive for users during the first couple of months, after implementing just a couple of features, than one year later with a full product shipped. It’s an exaggerated example, but it makes the point. Scale your release This is not possible with every product, but something I’ve found useful in my experience (mainly B2B) is to plan an incremental release, including more users on each iteration as we add features that will actually provide value for them. What I mean is that sometimes you can onboard certain users with less functionality than others, because some features simply don’t apply to them. Let’s say your product is a stock manager for stores: not all users will be interested in managing multiple stores, as they might have only one store, so you don’t need to wait until the multi-store management feature is ready to release to this group of stores. Also, if you have a good relationship with your users, they might be open to forgoing some functionality; for example, if there are 6 parameters, and 4 of them are only modified on a yearly basis, then you can talk to them about adopting an initial release where those 4 parameters can’t be managed by them. Summary
https://uxdesign.cc/should-i-wait-another-week-to-have-my-first-user-planning-a-product-release-3b46f3aba872
['Mariano Cocirio']
2020-07-25 14:24:50.291000+00:00
['Product', 'Management', 'Startup', 'Technology', 'Product Management']
How MEDIUM Pays You (Debunked)
Now that you generally know HOW Medium pays you, the goal is to generate income. It is time for the second part of this article! How to make $100 Per Month or More on Medium Without Curation An extra $100 a month is a cool 1.2 grand a year. A nice addition to any other income, but far out of reach for many writers. Well, today we are going to create a plan to achieve this goal. I specifically said without curation because curation is not in our hands. This way, if your article is curated, it provides extra bonus income. Let us begin… Quick Calculation For all of you math haters, I am sorry but this has to be done. $100 a month divided by 30 days = $3.33 per day. This is the tricky part, but based on calculations I have done using my own stories, I estimate that you need about 2–3 hours of member reading time per day. How do I know this? I promise you I’m not full of shit; here are all my calculations laid out step by step: Now that we know all of this we can get started! The Plan I realized early on that submitting to those major publications where you see stories get thousands of claps is not worth it. I got published in The Ascent, and curated in Startups and Leadership. These two are some of the largest topics on Medium, yet this is how my story performed: The Overview They also take forever to look at and respond to your stories, so we are not going to use these methods. First, you need to get followers. Like anything, writing on Medium is an investment. Go follow other people! Comment on their work! Highlight key parts! The more you interact with the community, the more followers you will get. Remember, this is not SOCIAL MEDIA; it is not a question of popularity and ratios but of interaction and connections. You are going to do this for around 30 minutes a day. It can be during your lunch break, before you go to bed, or when you are taking a shit. The timing doesn’t really matter as long as you get it done. The good thing about being a writer is that you really do not need a specific topic. As long as you can write to keep the reader’s attention, you are golden. How do you do so? I’ll cover that in the tips and tricks section. Once you have decided on your topic, the goal is going to be to crank out a nice 5–10 minute article every two days. Remember, we have determined that member reading time is key, so short articles will not cut it. Still, don’t write 30-minute articles because the chances of people reading them fully are very slim. Tips and Tricks Once you have cranked out this 5–10 minute article and proofread it, we need to meet the member reading time requirements to make $6.67 every two days. This means we need 4–6 hours of member reading time. INTERACT. External views DO NOT count toward money, so while it is good to have them, we are going to focus on members. If you are interacting with people’s posts for 30 minutes a day, we can usually count on 25–30 minutes of reading time to be covered from there. As you interact more, the time people spend checking out your articles will increase. Join Medium Facebook groups. Doing so will allow you to post, and if you interact with others, they will definitely interact with your posts. We can get another 10–15 minutes from here. Submit to smaller publications. If you write a banger of an article, then go ahead and submit it to a large publication. But these articles may not be published for DAYS. On top of this, I got 55 cents from being published in The Ascent. 
The problem is not with the publications, but the sheer volume of submissions they receive. Before y’all go hating, I started a publication called Never Fear, and will try to get your story at least 30 minutes of member reading time, as well as published under 4 hours. You can read the story here: We have now reached about an hour, which leaves 4 more that we need. Creative Steps Tagging authors. If you spam authors, they are not going to check out your content and add reading time. Also, you will soon run out of authors to tag and this strategy will not work. If you read content like yours, then I would say try to tag 4–5 authors in your story with a sentence two describing why they are there. Fit them in, don’t force them, and this will get you around 30 minutes. Post on Linkedin, and use LinkedIn groups. I don’t think this an option that is used much, but people on LinkedIn are usually interested in topics about the field they work in. Form connections in that field and you will get a lot of viewership. I would say with 500+ connection I have I can get a solid 2–3 hours of member reading time. For people just starting out, it can be around 30 minutes to an hour. Our total now is 135 to 165 minutes, which is around an hour shy of our goal. The rest is up to your own methods of getting viewership, but don’t worry. You see, the numbers I gave you above are rookie numbers. As time goes on, this begins to compound with more followers, more posts AND the old articles bringing large amounts of in revenue. If you do want to become a writer for Never Fear, fill out the google form below and I will add you! Also, if you think you are not good enough, we take a look at ALL submissions, and you are most definitely a better writer than you think.
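For anyone who wants to sanity-check the quick calculation above, here is a rough back-of-the-envelope sketch in Python. Every figure is an assumption taken from this article, not an official Medium rate; the per-minute payout is simply implied by the author's own estimate of 2–3 hours of member reading time per $3.33 day:

# Back-of-the-envelope sketch of the plan above.
# All figures are assumptions from the article, not official Medium rates.

monthly_goal = 100.00            # dollars
daily_goal = monthly_goal / 30   # about $3.33 per day

# The article estimates roughly 2–3 hours of member reading time per day,
# so take the midpoint and derive the implied payout per member-minute.
est_reading_minutes_per_day = 150
implied_rate = daily_goal / est_reading_minutes_per_day

# One 5–10 minute article every two days therefore needs:
per_article_goal = daily_goal * 2            # about $6.67
minutes_needed = per_article_goal / implied_rate

print(f"Daily goal: ${daily_goal:.2f}")
print(f"Implied rate: ${implied_rate:.3f} per member-minute")
print(f"Member reading time needed per article: {minutes_needed / 60:.1f} hours")

Running it lands at roughly 5 hours per article, which is where the 4–6 hour target in the plan comes from.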
https://medium.com/never-fear/how-medium-pays-you-debunked-a8c59de7ab92
['Aryan Gandhi']
2020-09-04 11:31:11.925000+00:00
['Growth', 'Writing', 'Life', 'Success', 'Finance']
Comparing Ourselves to Others
Abbie Parr (2019) | Getty Images Late Thursday, reports came out that the Los Angeles Dodgers acquired all-star outfielder Mookie Betts from the Boston Red Sox as a part of a three-team trade that also involved the Minnesota Twins. Up until last night, the two teams rumored to be in heavy pursuit of Betts were the Dodgers and the San Diego Padres. When the deal was announced, my social media feeds were flooded by disappointed Padres fans, bummed out at the tease of getting a premium player of Betts’ caliber and even more angry at the fact that he landed with the bitter rival Dodgers, who have been utterly dominant over the last decade while my Friars have scuffled near the bottom of the division each year. In many respects, my feelings align with those of all the pissed off fans who thought San Diego would come out of this deal on top. “Of course we didn’t get Mookie Betts — we’re the Padres! But why the Dodgers?! Come on!” In this circumstance, it’s easy to be upset about the trade, especially considering how close the beginning of the season is. From a fan perspective, that is okay. As fans, we opt for the dramatic and tend to wear our feelings on our sleeve. We are not the ones putting on a uniform and competing every night. From the perspective of the team, giving up on the year because of a trade that didn’t go your way would be pretty concerning. This particular situation for the Padres brings to mind a simple life reminder: we have nothing to gain by comparing ourselves to others. Image from @MiserableSDFan on Twitter In 2019, the Padres finished with a record of 72–90, exactly 36 games behind the Dodgers, who finished a whopping 106–56. In 2018, they finished 25 1/2 games behind them. In 2017, 33 games. The gap between the two teams is much greater than the sum of what one player, even Mookie Betts, could provide. Would Betts have made the Padres significantly better in 2020? Absolutely. Seeing him, Fernando Tatis Jr. and Manny Machado at the top of the batting order would have been beyond exciting. Would adding Betts have helped the Padres win the division and dethrone the rival Dodgers? Highly unlikely. Comparing ourselves to our peers is a taxing, tiresome and demoralizing way of thinking. Spending too much time dwelling on what we can’t do because of what we don’t have, as opposed to thinking of what we can do with what we do have, doesn’t help us get any closer to achieving our goals and improving our lives. For Padres fans, the sting of losing out on Mookie Betts to the Dodgers will probably linger for a while. We will gripe and complain throughout spring training, and depending on how April and May go, maybe even through the first chunk of the season. For the actual players on the team, worrying about Betts and the Dodgers will bring no benefit to their performance on the field. All they can worry about is their season and how they can best prepare to win games for themselves.
https://roberto-johnson.medium.com/30-day-writing-challenge-day-8-comparing-ourselves-to-others-7a3c5f3f139b
['Roberto Johnson']
2020-02-20 17:23:43.524000+00:00
['Sports', 'Personal Growth', 'Baseball', 'Writing', 'Self Improvement']
Byte Limes #1: Custom Iterables in JavaScript, Truly Reactive Forms, Auto Prefix Jira Issues to Git commit…
Byte Limes #1: Custom Iterables in JavaScript, Truly Reactive Forms, Auto Prefix Jira Issues to Git commit… Hi friend, hope you’re well! Here are some articles that I think would be worth your time: Photo by Reid Zura on Unsplash Wouldn’t it be great if we could use the for…of loop to iterate over our own data structures? In this article, we will learn how to do this… Read more
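As a taste of what that linked piece covers, here is a minimal JavaScript sketch (my own example, not taken from the article) of a class that becomes usable with for…of by implementing Symbol.iterator:

// A tiny container made iterable by implementing Symbol.iterator
class Playlist {
  constructor(...songs) {
    this.songs = songs;
  }

  [Symbol.iterator]() {
    let i = 0;
    const songs = this.songs;
    return {
      next() {
        return i < songs.length
          ? { value: songs[i++], done: false }
          : { value: undefined, done: true };
      },
    };
  }
}

for (const song of new Playlist('Track 1', 'Track 2', 'Track 3')) {
  console.log(song); // Track 1, Track 2, Track 3
}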
https://medium.com/bytelimes/byte-limes-1-custom-iterables-in-javascript-truly-reactive-forms-auto-prefix-jira-issues-to-8fc37f6e9a0d
['Nivrith Gomatam']
2020-11-21 05:06:01.860000+00:00
['Software Engineering', 'Programming', 'Software Development', 'Web Development', 'JavaScript']
Reflections
I’m doing my weekly roundup again! All my posts for the previous week together for anyone who’s interested. Here’s last week’s featuring poetry, and a short story. I appreciate all the kind feedback on these posts and I hope that if it’s your first time reading them that you’ll enjoy them too.
https://medium.com/echoes-of-the-soul/reflections-3cf746743e47
['Heather Ann']
2019-06-16 15:31:02.746000+00:00
['Short Story', 'Fiction', 'Poetry On Medium', 'Writing', 'Poetry']
7 Tips to Go From Beginner to Advanced in Vue.js
1. Fully Understand Reactivity How reactivity works Reactivity is a simple concept in front-end development libraries and frameworks. Understanding how it works at a deep level, though, can be hard. But it is well worth your time. Here’s a small example: <h1>{{ name }}</h1> When the value of name changes, the UI will be updated accordingly. This is a very basic way to explain reactivity, but there are many more advanced examples to help you understand how it works. Where reactivity goes wrong Things can go wrong if you’re accessing a property within an object, as explained in this example: In the example above, we define myObject as an empty object in the data method. Later, we give myObject.message a value. This results in {{ myObject.message }} never displaying anything even though it receives a value at some point. Why is that? That is basically because Vue does not know of the existence of the myObject.message property and therefore cannot react to changes in its value. How do I fix this? There are a couple of ways to make sure that Vue reacts to changes in the myObject.message property. The most simple is to initialize it with an empty or null value: myObject: { message: '' } If myObject.message exists in the data method, then Vue will listen and react to changes in its value and update the UI accordingly. Another way to make sure the UI is updated is to update the full myObject object this way: this.myObject = {message: 'Hello'} Since Vue listens and reacts to changes in myObject , it will pick up this change and update the UI accordingly. In short, Vue doesn’t listen to property changes in an object unless it knows these properties. Properties need to be defined in the data method or you need to update the whole object instead of the properties to make sure Vue tracks changes. Learn more about reactivity by reading the “Reactivity in depth” section of the official documentation. By understanding reactivity well, you can:
https://medium.com/better-programming/7-tips-to-go-from-beginner-to-advanced-in-vue-js-af7ca56ea31d
['Aris Pattakos']
2020-11-18 16:31:33.928000+00:00
['Programming', 'Software Development', 'JavaScript', 'Vue', 'Vuejs']
“Not Pink Or Blue, Mom. Silver; I’m Silver.”
“Not Pink Or Blue, Mom. Silver; I’m Silver.” Here was a 10 year old child, grasping the notion of who they were at their innermost core better than most adults I knew, including myself. Photo by Eric Patnoudes on Unsplash Five years ago my youngest first learned about gender neutral pronouns. It happened during a visit to our local LGBT center. I’ll never forget the look on their face when my then 10-year-old discovered a glass bowl on the front desk filled with alternative pronoun lapel pins. I followed my child’s gaze and watched their body magnetically pull towards that bowl before reaching their hands in. None of us knew for sure yet why our (presumed) boy rejected everything masculine, but in that moment, I became part of an ethereal epiphany. As I watched this young child — but old soul — gently scoop handfuls of small, shiny, ‘singular they’ lapel pins, letting them sift like beach sand through their slight fingers, I sort of got it, all at once. My child whispered, “this is me.” And from within me, tears welled up. Both at the simple beauty of the moment, and the complex, difficult reality that my child was not merely a feminine gay boy, or a binary trans girl, as I’d estimated in previous years. This was going to be so much harder. But for my child, realizing that “they/them/theirs” was an option? It was like suddenly finding this huge piece of an impossible gender puzzle. And it was right there all along, hiding in plain sight. It was their fifth grade year in public school, and my child had already begun socially transitioning, although in retrospect, I’m not sure my husband and I knew that’s exactly what was happening at the time. We only knew our child was miserable and desperately wanted to express their gender in a way that was more authentic to their identity. Only, not just at home anymore; they were ready to enter the rest of the world like this, even with knowing how ignorant, how cruel the rest of the world could be. This social transition included a total switch from traditional boys clothes to traditional girls clothes, shoes, hair gadgets, and accessories; growing their hair as long as they could; and beginning to tell others — not something easy to do in fifth grade, let alone, public school, in the conservative south, where peers and teachers have known you for several years as “he/him.” Undoubtedly a boy. I worked in the same school my child attended, so I saw (and heard) a lot of things up close and personal. It was both a blessing and a curse. I savored the lovely moments, but it was damn near impossible to remain professional and contain my inner “MamaBear” when I heard homophobic and transphobic slurs directed at my child. One afternoon on our drive home they said, “Mom, I’ve started asking some of my friends to start using my they/them pronouns.” I smiled. Then they told me about a few classmates who responded (as expected), saying, “Nuh-uh! You’re a boy and you know it! You’ll always be a boy!” “They still don’t get it, Mom. I just don’t know what to say back to them anymore,” they said in a tired voice. And just like that, this became one of those MamaBear moments. Moms, in general, tend to launch into immediate problem-solving mode during occasions like these, so I started answering as if my child hadn’t issued only a rhetorical statement. “Well… I would take that as a great learning opportunity,” I offered. “You can really educate people on what it means to be you. Or, you know, you could say something like ‘you’re partly right. 
My sex assigned at birth was male, but —” And then my kid cut me off. “Okay. Mom, I’m going to have to stop you right there and let you know that if I use that word, they will all start running around and screaming, ‘He said a bad word!’ He said a bad word!’” “Sex isn’t a bad word,” I reasoned. “In fifth grade it is,” they countered. I should’ve taken the hint and gracefully exited the question and answer portion right then and there. After all, my kid had given me a convenient out. But instead, I kept going. (This is why my kids sometimes avoid talking to me now. Lesson learned.) At the time my child had frequently insisted that they felt neither male nor female. Whenever anyone asked the notorious “do you feel more like a boy or a girl” question, they’d always respond without equivocation, “I feel more like just a person.” “Okay,” I continued. “How about, ‘I was assigned male at birth, but I have never identified with that label my whole life.’” My child actually considered this for a moment before saying, “You know, I think I was actually born with two souls.” “Yeah?” I asked with interest. “Tell me more about that.” “So, it’s like this: I started out in your belly with a pink soul and a blue soul, and then the two souls meshed together and created a silver soul. A little pink and a little blue made me, well, ME. Silver. And that’s just how I was born. Two colors came together and made something that was not pink or blue. A little bit of pink and a little bit of blue, combined to make silver — something totally new.” And then I felt those damn tears welling up again. Because here was a child, a ten year old child, who grasped the notion of who they were at their innermost core better than most adults I knew of, including myself. And to top it off, as the religious Hamilton fan they’d always been, and the child of a theatre person turned writer, they ended this little monologue with an impromptu rhyme (dear God, how could I not love this kid?!) But I held the tears back and told my child that actually, that was a perfect response for the kids at school. It was far better than anything I would’ve or could’ve suggested.
https://medium.com/an-injustice/not-pink-or-blue-mom-silver-im-silver-aa0870e5bf55
['Martie Sirois']
2020-12-16 01:20:13.616000+00:00
['Parenting', 'Transgender', 'LGBTQ', 'Nonfiction', 'Equality']
An MVC Approach to Flutter
An in-depth look at an MVC Project Template Sample App This is a follow-up to the free article, Your Next Flutter Project, where I introduced a ‘Project Template’ you can use for your next Flutter project. It contains the foundation, the files, and the directories, that make up a Flutter app based on the MVC design pattern represented by the library package, mvc_application. You’re to take it and fill it up with your code for your next Flutter app. I hope to someday make such a template a ‘New Project’ option in IntelliJ and Android Studio, but that’s another story. In this article, I wish to further demonstrate the suggested implementation using the Contacts sample app currently incorporated in one of the project templates. Download this zip file and start it up. You’ll be greeted with the Contacts app. It’s suggested you use your IDE’s debugger and ‘walk through the code’ to get a better understanding of how the MVC framework works and, what’s more, how the Flutter framework itself works. I Like Screenshots. Click For Gists. As always, I prefer using screenshots over gists to show concepts rather than just show code in my articles. I find them easier to work with, and easier to read. However, you can click/tap on them to see the code in a gist or in Github. Ironically, it’s better to read this article about mobile development on your computer than on your phone. Besides, we program on our computers; not on our phones. For now. No Moving Pictures, No Social Media There will be gif files in this article demonstrating aspects of the topic at hand. However, it’s said viewing such gif files is not possible when reading this article on platforms like Instagram, Facebook, etc. They may come out as static pictures or simply blank placeholder boxes. Please, be aware of this and maybe read this article on medium.com Let’s begin. Let’s Add Contacts When you begin using this sample app, you’ll no doubt start entering Contacts. Let’s look ‘under the hood’ to see what happens when, after you’ve typed up a particular Contact, you then press the Save button. Now in the MVC design pattern, the View aspect here, in this case, is the build() function found in the _AddContactState object listed below on the left-hand side. At a glance, you can see the ‘Save’ event is handled, and rightly so, by a Controller object called, con. The Navigator is then called to pop back to the previous screen. On the right-hand side is a screenshot of that onPressed() function. We see the function performs some validation routines before calling its own ‘add’ routine. After that, it calls its refresh() function. A lot is going on here. Further note, the name of the function mimics the API used by Flutter itself. In other words, the function is named after the named parameter, onPressed. We’ll talk more about that soon.
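To make that flow concrete, here is a heavily simplified Dart sketch of a View handing the Save press to a Controller and then popping back. The class and method names are illustrative only; they are not the actual mvc_application API, and the real _AddContactState also runs validation and refresh logic that is omitted here.

import 'package:flutter/material.dart';

// Illustrative Controller, not the real mvc_application class.
class ContactsController {
  final List<String> contacts = [];

  void onPressed(String name) {
    if (name.trim().isEmpty) return; // stand-in for the validation routines
    contacts.add(name);              // stand-in for the 'add' routine
  }
}

class AddContact extends StatelessWidget {
  final ContactsController con = ContactsController();
  final TextEditingController nameField = TextEditingController();

  // The View: the build() function delegates the Save event to the Controller.
  @override
  Widget build(BuildContext context) {
    return Column(
      children: <Widget>[
        TextField(controller: nameField),
        ElevatedButton(
          child: const Text('Save'),
          onPressed: () {
            con.onPressed(nameField.text); // the Controller handles the event
            Navigator.of(context).pop();   // then the View pops back
          },
        ),
      ],
    );
  }
}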
https://medium.com/follow-flutter/an-mvc-approach-to-flutter-f333d6288078
['Greg Perry']
2020-11-10 22:00:05.192000+00:00
['Programming', 'Flutter', 'Android App Development', 'Mobile App Development', 'iOS App Development']
What is Go? What Can Be Done With Go? Why Should I Use Go?
What is Go? What Can Be Done With Go? Why Should I Use Go? Aman Khanakia Follow Dec 28 · 3 min read Go , also known as Golang, is a valuable and open source programming language supported by Google that is becoming increasingly famous and that most programmers should learn as soon as possible. 1. What is Go? Go , also known as Golang, whose steps were taken in 2007, was clearly mentioned as of 2009, reached Go 1.0 by mid-2012, with developers such as Ken Thompson, Rob Pike and Robert Griesiemer behind it and supported by Google, stands out with its simplicity and performance. is an open source programming language that can be considered quite young. “Go lovers” call themselves gopher . Although Gopher is a TCP / IP protocol, it is also an American geranium symbolized by the logo. 2. What Can Be Done With Go? The primary purpose of the Go language is to make system programming. We can see Go, which is actively developed for server-side use, as a great language for developing servers and subsystems. Go can enable you to produce quality projects for the web with both fast development and high performance Although we are confronted with limited examples yet, Google has a plan to use Go on the mobile operating system Android. In the future, developers who know Go will not only be able to handle the server and system side tasks, but also will be able to efficiently develop their own Android mobile applications with Go. Go can also appear as a programming language used in embedded systems. Even though a large part of the developer community believes this, it may not be completely predictable for now due to the ongoing C and C ++ crusade wars on embedded systems. 3. Why Should I Use Go? There is usually only one way to do a job in Go. This means orderly codes and order that are understood by all. Go is compiled into a single file. It is enough to copy one binary. In summary, you can place your existing code on dozens of servers without any problems. You can get rid of the complicated syntax rules. There are only 25 keywords in Go (C has 37; C ++ has 84 and the number is increasing) Simple and backward compatibility is a distinct advantage. Concurrency, static typed and garbage collection are other important advantages. 4. Who Uses the Go Language? Being an advantageous and practical programming language has made Go language the target of giant brands. Google has given its users a faster internet experience through the Go language. However, there are other technology companies other than Google that use the Go language to improve their system. Some of these companies are: Google Amazon Dropbox Ubuntu Facebook Twitter Apple Github Koding Click to see other sites using Go . Turkey in the list is also available from many companies. 5. An Exemplary Go Program After talking about the Go programming language, let’s show what it looks like. In the example below, “Hello World!” You will see a printout that says. package main import "fmt" func main() { fmt.Println("Merhaba Dünya!") } How similar to C and C ++ language, right? Who knows, maybe it will replace it someday :) NOTE: Our Go lessons are starting very soon. By following our khanakia.com blog, you will have access to a simple and high quality Go course.
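The concurrency advantage mentioned above deserves a quick illustration. The snippet below is my own example rather than something from the article: goroutines and channels are built into the language, so fanning work out and collecting results takes only a few lines.

package main

import (
	"fmt"
	"sync"
)

func main() {
	sites := []string{"google.com", "github.com", "medium.com"}
	results := make(chan string, len(sites))
	var wg sync.WaitGroup

	for _, site := range sites {
		wg.Add(1)
		go func(s string) { // each "check" runs concurrently in its own goroutine
			defer wg.Done()
			results <- "checked " + s
		}(site)
	}

	wg.Wait()
	close(results)
	for r := range results {
		fmt.Println(r)
	}
}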
https://medium.com/khanakia/what-is-go-what-can-be-done-with-go-why-should-i-use-go-44114ca27eaa
['Aman Khanakia']
2020-12-28 05:08:00.110000+00:00
['Golang Development', 'Golang Tutorial', 'Golang Tools', 'Development', 'Golang']
Linked Lists and LL Algorithms in Swift
Implementing a Node in Swift -Integer A choice has been made to implement a node that contains an Integer as the data payload. If you wish to see a node that is generic (that is, can carry any data) please see the second half of this article. The setup for a node (the class) Since our Node class needs to reference itself, a class has been chosen to represent the Node type (although an enum is a value type that can reference itself, I have chosen not to use that type to represent a Node in this case) This initialiser requires both a piece of data, and the next node in the linked list. Create instances of a node In terms of creating an instance of a Node, Swift helps us out just as if we are creating an instance of any class. We are going to create three nodes that are linked as a linked list, containing three elements interestingly here I’ve chosen to create the nodes in reverse order. The reason for this is that each node is connected to the next — and doing them in normal order would mean linking a node to a node that does not yet exist. How can you link to a node that doesn’t exist? You can’t. So… Swift doesn’t give us a good view of what we’ve created — so how can we be sure that we’ve correctly created these three nodes? printing an element gives us the following output: __lldb_expr_10.Node yes, we get it. It’s a node. Make the Node conform to CustomStringConvertable When we conform to CustomStringConvertable we need to provide a description that Swift will provide when we print the element. Which (when we print the head ) gives the following output: Data: 0 { Data: 1 { Data: 2 { null } } } This is much nicer, as each node is described as part of the chain now. Reading out the data from a Linked List Here we are going to use a while loop to traverse the linked list . We know when we have gone to the end of the linked list since the last element always points to nil! This prints the following to the console: 0 1 2 But it might be worth exploring what happened here. which leads us to the output to the console (as expected) as 0,1 and 2. Remove an element Remove a node from the head of the linked list Remove an element (a node)from the linked list from the head is a relatively easy operation. In Swift though, we need to make sure that head has a next element — and this is implemented through optionals in Swift. Since the old headNode has no reference it can now be deallocated, so there is no dangling head node. Remove a node from the middle of the linked list This is slightly more tricky than removing a node from the head of a linked list. We remove this second node and deal with the hanging node. Reverse a linked list The theory goes something like the following: A linked list can be reversed through the following code in Swift. To be noted here, is the creation of a empty node initially that will be the new tail of data: Conclusion: Linked lists are really important in programming, and this guide has shown you how linked lists can be implemented in the best programming language (Swift). Reversing a singularly linked list is a little tricky, and with that we can easily introduce the concept of a doubly linked list to cope with that. But more on that topic on another day (i.e. follow me).
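The article's embedded code blocks don't come through in this excerpt, so here is a reconstruction of the Node type, the CustomStringConvertible conformance, and the traversal being described. It is a sketch based on the text above rather than the author's exact gists.

// A node holding an Int payload and a reference to the next node
class Node {
    let data: Int
    var next: Node?

    init(data: Int, next: Node?) {
        self.data = data
        self.next = next
    }
}

extension Node: CustomStringConvertible {
    var description: String {
        return "Data: \(data) { \(next?.description ?? "null") }"
    }
}

// Built in reverse, so each node links to one that already exists
let tail = Node(data: 2, next: nil)
let middle = Node(data: 1, next: tail)
let head = Node(data: 0, next: middle)
print(head) // Data: 0 { Data: 1 { Data: 2 { null } } }

// Traverse until the last node's `next` is nil
var current: Node? = head
while let node = current {
    print(node.data) // 0, 1, 2
    current = node.next
}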
https://stevenpcurtis.medium.com/linked-lists-and-ll-algorithms-in-swift-439a1da2eee
['Steven Curtis']
2020-04-27 15:26:47.380000+00:00
['Software Engineering', 'Software Development', 'Swift', 'Algorithms', 'Programming']
Transfer Learning with PyTorch
Dataset download and basic preparation Let’s start with imports. Here we have the usual suspects like Numpy, Pandas, and Matplotlib, but also our favorite deep learning library — Pytorch — followed by everything it has to offer. import os import numpy as np import pandas as pd import matplotlib.pyplot as plt from datetime import datetime import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader from torchvision.utils import make_grid from torchvision import models, transforms, datasets We’re writing this code in Colab, or Colab Pro to be more precise, so we’ll utilize the power of GPUs for training. If you don’t know what Colab is, or wondering is it worth it to upgrade to the Pro version, feel free to check these articles: Because we’re training on the GPU and that might not be the case for you, we need a robust way for handling this. Here’s a standard approach: device = torch.device(‘cuda:0’ if torch.cuda.is_available() else ‘cpu’) device >>> device(type=’cuda’, index=0) It should say something like type=’cpu’ if you are training on the CPU, but as Colab is free there's no need to ever do so. Now onto the dataset. We’ll be using Dog or Cat dataset for this purpose. It has a plethora of images of various sizes, something which we'll handle down the road. Right now we need to download and unzip it. Here’s how: After a minute or so, depending on your internet speed, the dataset is ready to use. Now we can declare it as a data directory — not required but will save us a bit time down the road. DIR_DATA = ‘/content/data/dogscats/’ Data Preparation The first part of the first part is now done. Next, we have to apply some transformations to training and validation subsets, and then load the transformed data with DataLoaders. Here are the transformations we applied: Random rotation Random horizontal flip Resizing to 224x224 — required for pre-trained architectures Conversion to Tensor Normalization Here’s the code: train_transforms = transforms.Compose([ transforms.RandomRotation(10), transforms.RandomHorizontalFlip(p=0.5), transforms.Resize(224), transforms.CenterCrop((224, 224)), transforms.ToTensor(), transforms.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) ]) valid_transforms = transforms.Compose([ transforms.Resize(224), transforms.CenterCrop((224, 224)), transforms.ToTensor(), transforms.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) ]) Now we load the data with DataLoaders. This step is also straightforward and something you are probably familiar with: train_data = datasets.ImageFolder(os.path.join(DIR_DATA, ‘train’), transform=train_transforms) valid_data = datasets.ImageFolder(os.path.join(DIR_DATA, ‘valid’), transform=valid_transforms) torch.manual_seed(42) train_loader = DataLoader(train_data, batch_size=64, shuffle=True) valid_loader = DataLoader(valid_data, batch_size=64, shuffle=False) class_names = train_data.classes class_names >>> ['cats', 'dogs'] If we were now to inverse-normalize a single batch and visualize it, we’d get this: A quick look at the image above indicates our transformation work as expected. The data preparation part is now done and in the next section, we’ll declare a custom CNN architecture, train it, and evaluate the performance.
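The inverse-normalization step mentioned above isn't shown in the excerpt, so here is one way it could look. The details are my assumption rather than the author's exact code: the helper simply undoes the Normalize transform so the batch displays with natural colors.

import matplotlib.pyplot as plt
import numpy as np
import torch
from torchvision import transforms
from torchvision.utils import make_grid

# Undoing Normalize(mean, std) is itself a Normalize with
# mean' = -mean/std and std' = 1/std.
inv_normalize = transforms.Normalize(
    mean=[-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225],
    std=[1 / 0.229, 1 / 0.224, 1 / 0.225],
)

images, labels = next(iter(train_loader))                       # one batch
shown = torch.stack([inv_normalize(img) for img in images[:8]]) # first 8 images
grid = make_grid(shown, nrow=8)

plt.figure(figsize=(12, 4))
plt.imshow(np.transpose(grid.numpy(), (1, 2, 0)))
plt.axis('off')
plt.show()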
https://towardsdatascience.com/transfer-learning-with-pytorch-95dd5dca82a
['Dario Radečić']
2020-06-11 21:40:31.095000+00:00
['Programming', 'Deep Learning', 'Python', 'Data Science', 'Machine Learning']
How To Market Your Committees’ Achievements
Promoting the accomplishments of your committee is important in building the image of the group to continue having the support of your donors and volunteers. However, this is always an overlooked aspect because most of the time, the resources of the group is allotted to other areas like operations or fundraising. This is also because the term marketing is often not viewed as a purpose of the committee, is sometimes confused with advertising, and assumed to take a lot of time, effort, and money. It is true that marketing the achievements of your committee does take a lot of hard work, it can be done in an easy and cost-effective way. As long as you understand the fundamentals of doing so. Marketing Success To be able to market anything successfully, consistency and engagement are key. It means that you have to do it regularly and it has to be enticing enough for your target market to take notice and participate. This will only happen through solid and good planning. Steps to Market Your Achievements 1. Know your objectives Think and brainstorm about what the committee wants to achieve through marketing in the short and long term. Are there marketing objectives set? If there is none, then talk to the team and create your goals and objectives. Identify as well what behaviour changes or action does the committee want to achieve upon implementing the marketing plan. 2. Analysis Identify the internal and external factors that can affect the implementation of the marketing plan. What resources would you need? Which people in the group should participate? Once you have identified the answers to these questions, then you’ll be able to implement the plan continuously which is important in the success of any marketing plan. 3. Target Market Create a profile of the people that you want to attract with the marketing plan. It is a common practice to try and target everybody, while this sounds like a good plan it does not speak to anyone in particular and becomes ineffective. It is better to study more about the marketing language that your target market understands and accepts, otherwise, they will just repel or ignore your message. 4. Message Once you have the objectives, resources and target market, it is now time to create your message. Take the time and consider the following factors: Achievements that you want to promote The format of the promotion (creative graphics, written visuals, memo type) Participants who contributed to the accomplishments What will make it relevant to the reader? What is the tone of the announcement? 5. Channels Once you have your announcement format, select channels where you want to promote it. Be sure that the channels you employ are reachable and familiar to your target market. It can be done, but not limited, through the following: Social Media Email Blasts Newsletters Group Boards Or all of the above! The best way to start your marketing plan, especially if it is the first one, is to employ a simple and straightforward method. This is much easier for your target audience to absorb because they can easily repel it if it doesn’t talk to them or if it annoys them. Gathering as much information as you can in terms of your target market is key. Once you are familiar with these, then moving to your next marketing plan will be easier, faster and more effective.
https://medium.com/process-pa/how-to-market-your-committees-achievements-c6bfed8ad900
['Process Pa Team']
2019-02-06 01:01:01.111000+00:00
['Management', 'Leadership', 'Marketing', 'Meetings', 'Governance']
Hacking Productivity — We tried a company-wide “Secure Cockpit”
I’m passionate about talent management and culture. My objective is to make Osedea the best workplace for our team.
https://medium.com/osedea/hacking-productivity-we-tried-a-company-wide-secure-cockpit-cf2f99662f29
['Ivana Markovic']
2020-09-09 13:59:38.646000+00:00
['Productivity', 'Technology And Design', 'Focus', 'Open Office']
How to Run MySQL and phpMyAdmin Using Docker
Setup There are two ways we can connect phpMyAdmin with MySQL using Docker. In the first method, we will use a single Docker compose file. For the second one, I’ll show you how to connect to an already running MySQL Docker container. First, you will need to install Docker. I’ll use macOS for both methods. Method 1 In this method, we will use a Docker compose file. We need to put docker-compose.yml inside a folder. The folder name used in this setup is phpMyAdmin. Let’s break down the individual ingredients of the docker-compose.yml file. version: '3.1' services: db: image: mysql restart: always environment: MYSQL_ROOT_PASSWORD: root MYSQL_DATABASE: test_db ports: - "3308:3306" phpmyadmin: image: phpmyadmin/phpmyadmin:latest restart: always environment: PMA_HOST: db PMA_USER: root PMA_PASSWORD: root ports: - "8080:80" First, we are using a version tag to define the Compose file format, which is 3.1. There are other file formats — 1, 2, 2.x, and 3.x. Get more information on Compose file formats from Docker’s documentation here. We follow our version tag by the services hash. Inside this, we have to define the services we want to use for our application. For our application, we have two services, db, and phpmyadmin. To make our setup process quick and easy, we are using the pre-built official image of MySQL and phpMyAdmin for the image tag. When we use always for the restart tag, the container always restarts. It can save time. For example, you don’t have to start the container every time you reboot your machine manually. It restarts the container when either the Docker daemon restarts or the container itself is manually restarted. We have defined the environment variables under the environment tag, which we will use for database and phpMyAdmin authentication. Finally, the ports tag is used to define both host and container ports. For the db service, it maps the port 3308 on the host to port 3306 on the MySQL container. For the phpmyadmin service, it maps the port 8080 on the host to port 80 on the phpMyAdmin container. Now run the following command from the same directory where the docker-compose.yml file is located. The command docker-compose up starts and runs your entire app. If you encounter the following error, that means you’re already running a Docker container on port 3308. To fix the problem, you just have to change to a different port, or you can stop the other container. Bind for 0.0.0.0:3308 failed: port is already allocated Now choose any web browser and go to the following address. Voila! you should see the web page like the one below on your browser. Admin panel As you can see, there is a warning message at the bottom. Let’s fix that now. First, click on the Find out why link Now click on Create link You are all set to manage your database! Method 2 In this method, you will learn how to connect a phpMyAdmin docker container to a MySQL container that is already running. It is helpful when you don’t have a phpMyAdmin service in your docker-compose file. First, we need to list all the current Docker networks using the following command. docker network ls Now you should see phpmyadmin_default from the list. Our goal here is to find the application network that we have created using the docker-compose file in method one. Since we didn’t specify a network name in the docker-compose file for our application, Docker will give the network name based on the name of the directory with _default at the end. In this case, phpmyadmin_default . 
If you’re interested in Docker networks, check here. Well done, you have successfully identified the network! Finally, we can run a stand-alone phpMyAdmin Docker container, which is connected to our desired network. docker run --name stand-alone-phpmyadmin --network phpmyadmin_default -p 8081:80 phpmyadmin/phpmyadmin:latest The docker run command is used to run a container from an image. Here we are using phpmyadmin/phpmyadmin:latest image. The --name flag (optional) is used to give the container a specific name. If you don’t want to provide one, Docker will randomly assign a name. The —- network flag is used to connect to a Docker network. The -p flag is already discussed in this post. Now choose any web browser and go to the following address. Use root as a username and password to log in. This method is helpful when you want to connect multiple Docker containers.
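If you want to double-check that the stand-alone container really joined the right network, a couple of quick commands help. Note that the db container name below assumes Compose's default <folder>_<service>_1 naming, so adjust it if yours differs.

# List the containers attached to the application network
docker network inspect phpmyadmin_default

# Peek at the MySQL logs if phpMyAdmin can't connect
docker logs phpmyadmin_db_1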
https://towardsdatascience.com/how-to-run-mysql-and-phpmyadmin-using-docker-17dfe107eab7
['Mahbub Zaman']
2020-08-04 01:15:24.346000+00:00
['Programming', 'Software Engineering', 'Data Science', 'Docker', 'MySQL']
A New System for Designing Motion With Both Sketch and Figma
Welcome to AEUX One of the goals of the new system is to support more host apps and increase flexibility when working between teams. Exchanging data within the Adobe ecosystem is now well supported in XD and Illustrator, but with AEUX, you can import layers from Sketch and Figma, and support new Sketch features. Plus… Speed has been boosted by 93 percent on build time for complex artboards. has been boosted by 93 percent on build time for complex artboards. Symbol overrides for text and nested symbols are now supported. Symbol masters are located more efficiently, putting an end to beach-balling. overrides for text and nested symbols are now supported. Symbol masters are located more efficiently, putting an end to beach-balling. Image exporting has been reduced by drawing native Ae gradients and eliminating redundant images. exporting has been reduced by drawing native Ae gradients and eliminating redundant images. Text layer accuracy has been majorly improved with position, tracking, leading, upper/lowercase overrides, rotation, and horizontal/vertical flipping. layer accuracy has been majorly improved with position, tracking, leading, upper/lowercase overrides, rotation, and horizontal/vertical flipping. Groups have the option to be automatically created as Ae precomps. Additionally, groups of layers may be precomped and un-precomped with a click even if you don’t use the AEUX importer. have the option to be automatically created as Ae precomps. Additionally, groups of layers may be precomped and un-precomped with a click even if you don’t use the AEUX importer. Additional new features like nested booleans, layer and group masking, shape blurs, options to works with paths or parametric shapes. Learn more and download AEUX AEUX + Figma While Sketch is widely used, many visual designers have also begun using Figma. AEUX was designed to support working seamlessly when switching between teams. Figma’s core feature is online collaboration, so exporting layer data is done through a web app that taps into the Figma API. As of right now, Figma plugins cannot run inside of the design environment, which means layer exporting is done as a more traditional export/import. With a design app that runs online, security is the most important consideration. After authenticating the AEUX export app, you are asked to enter a Figma file URL. The app will gather all the data for each of the frames (artboards) within the file. Each frame may be downloaded individually and any required images will be processed and zipped up as well. Drop the new AEUX.json file into the AEUX panel in After Effects and layers will build. Note: the AEUX app doesn’t track user data, and does not view or store your designs in any way. It is a blind robot that does the conversion of your designs into the AEUX JSON format right within the browser on your machine. Access to file data is managed by the owner of the file from the Share menu. Lessons learned Building tools based on how you imagine another designer working can be tough. Sometimes your best intentions aren’t really valuable or people find new paths that you hadn’t planned for. I learned a lot building Sketch2AE and then learned even more building Overlord (shameless plug for a commercial tool). The biggest lessons I’ve taken from these projects are to limit the amount of mental mode switching. If someone is trying to design, let them design, instead of forcing them to read a manual. 
Remove copy/paste Layer data is now transferred behind the scenes in order to limit confusion. It’s not totally instant, but by the time you switch from Sketch to Ae, the panel should update to show new layers ready to build. This simplifies the transfer process and allows you to focus more on design. Make it interactive For me, one of the most compelling parts of the design process is trying things and finding what works and what doesn’t. This sits pretty contrary to file importing where you must restart everything again if you don’t prep one layer right. Overlord’s core idea is to let you transfer what you need when you need it. I wanted this concept to be central in AEUX, which required getting away from menu-diving and into a floating plugin panel for Sketch to more closely match the expected tool experience in Ae. Show me what’s happening In the previous version, notifications showed only after a successful export and were easily missed at the bottom of the Sketch window. So you had to wait, and guess if it worked. (This was a really poorly designed experience. I’m sorry about that.) A new floating Sketch panel also provides a dedicated place for notifications (both success and failures). The idea of showing the process extends into Ae as well. From the panel updates to progress bars on heavy comps to failure notification for elements that Ae cannot currently draw. Moving forward UX motion design is growing and we’re all still learning what works and what doesn’t, and how to best execute and test these designs. As we do, we’re trying to share what we learn. I hope AEUX helps you work faster and enjoy the process more. Get started with AEUX Interested in working with us? Learn more about Google UX and send us your portfolio or demo reel.
https://medium.com/google-design/aeux-f79e06e01594
['Adam Plouff']
2019-07-11 17:46:54.767000+00:00
['Design', 'Motion Graphics', 'UX', 'Tools', 'Animation']
Day 79 — Insert, Delete, Random O(1)🏁
Day 79 — Insert, Delete, Random O(1) 100 Days to Amazon Photo by Kaze 0421 on Unsplash 100 Days to Amazon — Day 79 Insert, Delete, Random O(1) Out of Free Stories? Here is my Friend Link. Introduction🛹 Hey Guys, Today is day 79 of the challenge that I took. Wherein I will be solving every day for 100 days the programming questions that have been asked in previous interviews. You have a bonus at the end if you keep reading. You can find out the companies that have asked these questions in real interviews. All these problems are taken from the following e-book. 🎓 This is completely free 🆓 if you have an amazon kindle subscription. This e-book contains 100 coding problems that have been asked in top tech interview questions. It also has a guide to solving all the problems in 200+ ways. These problems I assure you has been asked in previous interviews. You have to decide whether you want to go unprepared for a tech interview or go ahead and quickly search for this guide to solve the 100 problems. Begin Your ascent to greatness🚀 Note: this e-book only contains the links to the solutions. Code for 40 have been added. Day 79 — Insert, Delete, Random O(1)🏁 AIM🏹 Design a data structure that supports all following operations in average O(1) time. insert(val) : Inserts an item val to the set if not already present. remove(val) : Removes an item val from the set if present. getRandom : Returns a random element from current set of elements. Each element must have the same probability of being returned. Example🕶 // Inserts 1 to the set. Returns true as 1 was inserted successfully. randomSet.insert(1); // Returns false as 2 does not exist in the set. randomSet.remove(2); // Inserts 2 to the set, returns true. Set now contains [1,2]. randomSet.insert(2); // getRandom should return either 1 or 2 randomly. randomSet.getRandom(); // Removes 1 from the set, returns true. Set now contains [2]. randomSet.remove(1); // 2 was already in the set, so return false. randomSet.insert(2); // Since 2 is the only number in the set, getRandom always return 2. randomSet.getRandom(); Code👇 Author: Akshay Ravindran Algorithm👨‍🎓 Use HashMap for retrieval and insertion in O(1). When removing we could directly remove from the hashmap. But due to the increase and decrease of the list. A hashmap won’t be able to remember these indexes. To solve this you have to introduce the ArrayList. This List keeps track of the changes in indexes and the insertion position. Use Random Class to Create a Random object. This random object if given the size of an input array will return the index within the range based on equal probability. Return the values as expected based on the functions.🔚 Conclusion🐱‍🏍 Have you come across this question in your interview before? Share it in the comment section below. 🤝 Don’t forget to hit the follow button✅to receive updates when we post new coding challenges. Tell us how you solved this problem. 🔥 We would be thrilled to read them. ❤ We can feature your method in one of the blog posts. Want to become outstanding in java programming? Click HERE 🧨🎊🎃 I have published an ebook. A compilation of 100 Java(Interview) Programming problems which have been solved. (HackerRank) 🐱‍💻 This is completely free 🆓 if you have an amazon kindle subscription. Companies Previous Blog Posts
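The embedded solution code doesn't come through in this excerpt, but a standard implementation of the algorithm described above (HashMap for O(1) lookup, ArrayList for O(1) random access, swap-with-last on removal) looks roughly like the following. This is my own sketch, not necessarily the author's exact code.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

class RandomizedSet {
    private final Map<Integer, Integer> indexOf = new HashMap<>(); // value -> position in list
    private final List<Integer> values = new ArrayList<>();
    private final Random rand = new Random();

    public boolean insert(int val) {
        if (indexOf.containsKey(val)) return false;
        indexOf.put(val, values.size());
        values.add(val);
        return true;
    }

    public boolean remove(int val) {
        Integer idx = indexOf.remove(val);
        if (idx == null) return false;
        int last = values.remove(values.size() - 1); // removing the tail is O(1)
        if (idx < values.size()) {                   // move the last element into the gap
            values.set(idx, last);
            indexOf.put(last, idx);
        }
        return true;
    }

    public int getRandom() {
        return values.get(rand.nextInt(values.size()));
    }
}

Swapping the removed slot with the last element is what keeps remove at O(1); a plain ArrayList remove from the middle would shift every following element and break the time bound.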
https://medium.com/javarevisited/day-79-insert-delete-random-o-1-5a6a2a90405a
['House Of Codes']
2020-12-18 13:32:57.077000+00:00
['Software Development', 'Coding', 'Interview', 'Java', 'Programming']
When Content Dies, All We’ll Have is Our Personal Narratives
When Content Dies, All We’ll Have is Our Personal Narratives On the Hellscape of Duplicitous Storylines Photo by Edu Lauton on Unsplash You hear it all the time, content is king. But is it? Or is that just another pile of vomitous clickbait being sprayed all over an audience of Gallagher fans who are excited, but also mildly perturbed that they sat in the front row? Are we all sitting in the front row of this content stream just sipping it like Country Time Lemonade on a warm summer day? Content is perceived to be everlasting, but the truth is — it’s not. Content is finite. Stories are everlasting. And when content dies, all we’ll have is our personal narratives. The real question is, as a writer, will you have any yarn to spin when content passes away? Or will your mind have deadened to the point of abject failure by the insane need to be seen and heard by using headline analyzers to determine how strong your words are? Almost everything we read that isn’t a personal narrative, current affairs op-ed, or fiction, we’ve read before. Almost every story on Medium that is formatted listimatically has been written before. Over and over into the void of our hopeful eyes, hoping to get that one lifehack that takes us from zero to hero, without doing any actual work. Does it ever bother you? This hellscape of duplicitous storylines. The neverending feed of all things you have read before, but with a new twist. And that twist being a reference from another author. Our words are eating each other and spitting them back out into a properly formatted listicle or long-form think piece on someone else’s long-form think piece. We are literary cannibals.
https://medium.com/assemblage/when-content-dies-all-well-have-is-our-personal-narratives-8c40b45066d5
['Jonathan Greene']
2020-02-11 15:38:51.843000+00:00
['Culture', 'Writing', 'Content Marketing', 'Blogging', 'Social Media']
Lambda, Map, and Filter in Python
Lambda A lambda operator or lambda function is used for creating small, one-time, anonymous function objects in Python. Basic Syntax lambda arguments : expression A lambda operator can have any number of arguments but can have only one expression. It cannot contain any statements and returns a function object which can be assigned to any variable. Example Let's look at a function in Python (the snippets embedded in the original are reconstructed below): the function's name is add, it expects two arguments x and y, and it returns their sum. Let's see how we can convert this function into a lambda function. In lambda x, y: x + y, x and y are arguments to the function and x + y is the expression that gets executed and whose value is returned as output. lambda x, y: x + y returns a function object which can be assigned to any variable; in this case, the function object is assigned to the add variable. If we check the type of add, it is a function.
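The two snippets referenced above were embedded as gists in the original article; here is a minimal reconstruction based on the surrounding description:

# A regular function...
def add(x, y):
    return x + y

# ...and its lambda equivalent
add = lambda x, y: x + y

print(add(3, 4))   # 7
print(type(add))   # <class 'function'>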
https://medium.com/better-programming/lambda-map-and-filter-in-python-4935f248593
['Rupesh Mishra']
2020-04-20 21:01:27.747000+00:00
['Functional Programming', 'Python', 'Python Programming', 'Python3']
From Cake Boss to Chic Sugars
Christmas is my favorite Holiday. I like it all; from the over the top decorations to the never-ending Holiday movies. But my most treasured pastime is dinner with my family. Every year we all sit together to laugh and reflect on the year’s special memories. Then finally, we close the night with a slice of cake in celebration of our actual reason for celebration, Jesus Christ’s birthday. But, as you can imagine this year will be different. There will be no togetherness or Holiday cake. Everything will be virtual. So, to at least try and recreate some of those feelings I spoke with Erica Oldham, founder of Chic Sugars, a full-service bakery based in New Jersey. Erica has created cakes for talents such as Nicki Minaj, Jay-Z, Missy Elliot, and more. In our discussion, we chatted about a few ways to celebrate the Holidays safely, her unique career path, and of course advice for entrepreneurs. Erica didn’t expect to create Chic Sugars. In fact, until one year on her daughter’s birthday, she hadn’t even baked a box cake. However, being a fan of the television show Cake Boss, she became dedicated to creating a cake of that magnitude. It took lots of determination and hard work but as time passed her skills became sharper and she started selling cakes to peers and colleagues. Through word of mouth and intentional networking Erica became the go-to person for elaborate cakes in the tri-state area and beyond. “As an entrepreneur, you can’t be afraid to get out there and pitch your business, you never know who may be interested. I would tell everyone, I remember seeing Emily and Fab in New York, and I walked right up to them and handed Emily a business card.” It was this mindset that landed her a partnership with popular NY radio station Hot 97. Whenever the station had a special event, she’d create their custom cake. And as demand grew, she decided to leave the comfort of her traditional career. “I’m a numbers person. I was already doing well by creating cakes in my off-time. I remember simply doing the math, if I dedicate “X” more hours to Chic Sugars, I could make even more than what I’m making.” And that’s exactly what happened, as word spread, her demand grew immensely. Erica’s hustle remains out of this world, but like many, a balance has become a major difficulty for her. “I have the mentality that you only eat what you kill. As an entrepreneur, you have to think like that. I remember once I agreed to an event in August but got pregnant in November. I delivered on my due date, and the next day I was back in the kitchen baking!” Erica’s career path is very non-traditional. Prior to creating Chic Sugars, she worked in social work and even attended law school but her goal has always been consistent, to help others. Through, The Birthday Party Project, she’s able to help homeless children celebrate their special day by providing cakes. The non-profit partners with homeless shelters to celebrate kid’s birthdays monthly. “Cakes are great but if we can uplift the community it’s even more important.” Unfortunately after a time, burnout happened. And after suffering a back injury and many missed moments, Erica was strongly considering changing career paths. However, when COVID-19 happened, the demand of clients reignited her passion. “We’ve been very busy from “Zarties,” Zoom parties. We’ve put cakes in jars and shipped them out so others can virtually enjoy them with their loved ones. We’ve done cake and sips, where customers decorate cakes while enjoying alcoholic beverages. 
We even sell cake kits; these consist of three layers of cake and all the decorations needed, with a picture of how the cake should look. This can be nice for date night, or to entertain the kids." Finally, they sell their popular Quarantine cakes, small treats perfect for small gatherings. Today her resume includes Food Network, a client list of celebrities and influencers, and a thriving business with plans of expansion. When asked how she knew when she "made it," her response was very surprising. "I knew I made it when I felt undervalued," she replied. "A country club reached out about Chic Sugars doing all of their weddings and special events. And I just didn't like the way they were speaking to me, it sounded disrespectful. Then, they accidentally sent me a client invoice instead of a personal one. And I learned that while they paid me $400 per cake they were charging the client $1000. All they were doing was taking the cake out the fridge, yet the company was making a $600 profit off my efforts." This experience taught her a valuable lesson about knowing her worth, as all entrepreneurs should. I thoroughly enjoyed my interview with Erica. She's very down to earth and her hardworking spirit shines through in conversation. Her transparent journey speaks to the importance of dedication, hard work, and relationship-building. I'm excited to try a few cakes in a jar and of course the cake and sip! I hope her story inspires you or at least provides some cheer during this unique Holiday season. - Kirby Carroll Wright (@askKirbyCarroll)
https://medium.com/swlh/from-cake-boss-to-chic-sugars-c79c5f278e68
['Kircarroll Wright']
2020-12-21 18:02:11.559000+00:00
['Cake Decorating', 'Baking', 'Entrepreneurship', 'Small Business', 'Quarantine']
Implementing an Effective Architecture for ASP.NET Core Web API
The Internet is a vastly different place than it was five years ago, let alone 25 years ago when I first started as a professional developer. Web APIs connect the modern internet and drive both web applications and mobile apps. The skill of creating robust web APIs that other developers can consume is in high demand. APIs that drive most modern web and mobile apps need to have the stability and reliability to continue operating, even when traffic is at the performance limits. This article describes the architecture of an ASP.NET Core 3.1 Web API solution using the ports and adapters pattern. First, we’ll look at the new features of .NET Core and ASP.NET Core that benefit modern Web APIs. The solution and all code from this article’s examples can be found in my GitHub repository ChinookASPNETCore3APINTier. .NET Core and ASP.NET Core for Web API .NET Core, unlike .NET Framework, is a new web framework that Microsoft built to shed the legacy technology that’s been around since .NET 1.0. For example, in ASP.NET 4.6, the System.Web assembly that contains all the WebForms libraries carries over into more recent ASP.NET MVC 5 solutions. By shedding these legacy dependencies and developing the framework from scratch, ASP.NET Core 3.1 gives the developer much better performance and is architected for cross-platform execution, so your solutions will work as well on Linux as they do on Windows. Dependency Injection Before we dig into the architecture of the ASP.NET Core Web API solution, I want to discuss a single benefit that makes .NET Core developers’ lives so much better: Dependency Injection (DI). We had DI in .NET Framework and ASP.NET solutions, but the DI we used in the past was from third-party commercial providers or open source libraries. They did a good job, but a good portion of .NET developers experienced a big learning curve, and all DI libraries had their idiosyncrasies. With .NET Core, DI is built right into the framework from the start. Moreover, it’s quite simple to work with. Using DI in your API gives you the best experience decoupling your architecture layers — as I’ll demonstrate later in the article — and allows you to mock the data layer or have multiple data sources built for your API. Many updates in ASP.NET Core 3.1 help our solutions, including not needing to manually add the Microsoft.AspNetCore.All NuGet package. The needed assemblies are added to the projects by default and give access to the IServiceCollection interface, which has a System.IServiceProvider interface that you can call GetService<TService>. To get the services you need from the IServiceCollection interface, you’ll need to add the services your project needs. To learn more about .NET Core DI, I suggest you review the following document on MSDN: Introduction to Dependency Injection in ASP.NET Core. We’ll now look at the philosophy of why I architected my web API as I did. The two aspects of designing any architecture depends on these two ideas: allowing deep maintainability and using proven patterns and architectures in your solutions. Architecture Building a great API requires great architecture. We’ll be looking at many aspects of API design and development, from built-in functionality of ASP.NET Core to architecture philosophy and design patterns. Maintainability Of The API Maintainability for any engineering process is the ease with which a product can be preserved. 
Maintenance activities can include finding defects, correcting found defects, repairing or replacing defective components without having to replace still-working parts, preventing unexpected malfunctions, maximizing a product’s useful life, having the ability to meet new requirements, making future maintenance easier or coping with a changing environment. This can be a difficult road to go down without a well-planned and well-executed architecture. Maintainability is a long-term issue, and you should be looking at your API with a long-term vision. You need to make decisions that lead to this vision, rather than shortcuts that may make life easier right now. Making hard decisions at the start will allow your project to have longevity and provide benefits that users demand. What gives a software architecture high maintainability? How do you evaluate if your API can be maintained? Changes to our architecture should allow for minimal, if not zero, impact on the other areas. Debugging should be easy and not require difficult setup. You should have established patterns and used common methods (such as browser debugging tools). Testing should be as automated as possible and be clear and simple. Interfaces And Implementations The key to my API architecture is the use of C# interfaces, which allows alternative implementations. If you have written .NET code with C#, you’ve probably used interfaces. In my solution, I use them to build out a contract in my Domain layer that guarantees any Data layer I develop for my API adheres to the contract for data repositories. It also allows the controllers in my API project to adhere to another established contract for getting the correct methods to process the API methods in the domain project’s supervisor. Interfaces are crucial to .NET Core. If you need a refresher on interfaces, check out this article. Ports And Adapter Pattern You want your objects throughout your API solution to have single responsibilities. This keeps your objects simple and maintainable — in case we need to fix bugs or enhance our code. If you have these “code smells” in your code, then you might be violating the single responsibility principle. As a rule, I look at the implementations of the interface contracts for length and complexity. I don’t have a limit to the lines of code in my methods, but if you passed a single view in your integrated development environment (IDE), it might be too long. Also, check the cyclomatic complexity of your methods to determine the complexity of your project’s methods and functions. The Ports and Adapter Pattern fixes this problem by having business logic decoupled with other dependencies — such as data access or API frameworks. Using this pattern allows your solution to have clear boundaries and well-named objects with single responsibilities, allowing for easier development and maintainability. Picture the pattern like an onion, with ports located on the outside layers and the adapters and business logic located closer to the core, as shown in Figure 1. The external connections of the architecture are the ports. The API endpoints that are consumed or the database connection used by Entity Framework Core 3.1 are examples of ports, while the internal domain supervisor and data repositories are the adapters. Figure 1 — Visualization Of The Ports And Adapter Pattern Next, let’s look at the logical segments of our architecture and some demo code examples. 
These three segments, shown in Figure 2, should follow a logical separation between the consumer end-point or API, the domain segment which encompasses the business domain for the solution and finally the segment that contains the code for accessing the data in our SQL Server database. Figure 2 — Segments Of Architecture Domain Layer Before we look at the API and Domain layers, I need to explain how you build out the contracts through interfaces and how you implement our API business logic. Let’s look at the Domain layer. The Domain layer has the following functions: Defines the Entities objects that will be used throughout the solution. These models will represent the Data layer’s DataModels. Defines the ViewModels which will be used by the API layer for HTTP requests and responses as single objects or sets of objects. Defines the interfaces through which your Data layer can implement the data access logic. Implements the supervisor that will contain methods called from the API layer. Each method will represent an API call and will convert data from the injected Data layer to ViewModels to be returned. Our Domain Entity objects are a representation of the database that you’re using to store and retrieve data used for the API business logic. Each Entity object will contain the properties represented, in this case the SQL table. For an example, reference the Album entity in Listing 1. Listing 1 — The Album Entity Model Class The Album table in the SQL database has three columns: AlbumId, Title and ArtistId. These three properties are part of the Album entity, as well as the Artist’s name, a collection of associated Tracks and the associated Artist. As you’ll see in the other layers in the API architecture, I’ll build upon this entity object’s definition for the ViewModels in the project. The ViewModels are the extension of the Entities, which help give more information for the consumer of the APIs. Let’s look at the Album ViewModel. It’s very similar to the Album Entity but with an additional property. In the design of my API, I determined that each Album should have the name of the Artist in the payload passed back from the API. This allows the API consumer to have that crucial piece of information about the Album without having to have the Artist ViewModel passed back in the payload (especially when we’re sending back a large set of Albums). An example of our AlbumViewModel is included below in Listing 2. Listing 2 — The Album API Model Class The other area that’s developed into the Domain layer is the contracts via interfaces for each of the Entities defined in the layer. Again, we’ll use the Album entity (shown in Listing 3) to show the interface that is defined. Listing 3 — The Album Repository Interface As shown in Listing 3, the interface defines the methods needed to implement the data access methods for the Album entity. Each entity object and interface are well-defined and simplistic, allowing the next layer to be well-defined, too. Finally, the core of the Domain project is the Supervisor class. Its purpose is to translate to and from Entities and ViewModels and perform business logic away from either the API endpoints or the Data access logic. Having the supervisor handle will also isolate the logic to allow unit testing on the translations and business logic. 
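The C# listings referenced above are not reproduced in this excerpt. Purely as a language-neutral illustration of the idea being described, a domain-level contract that any data layer must satisfy and a supervisor that only ever sees that contract, here is a short Python sketch; the names are hypothetical and this is not the article's code.

from abc import ABC, abstractmethod

class AlbumRepository(ABC):
    """Domain-level contract; any data layer implementation must honor it."""
    @abstractmethod
    def get_album(self, album_id: int) -> dict: ...

class SqlAlbumRepository(AlbumRepository):
    def get_album(self, album_id: int) -> dict:
        # A real implementation would query the database here
        return {"album_id": album_id, "title": "Example", "artist": "Example Artist"}

class AlbumSupervisor:
    """Translates entities to view models; knows nothing about storage."""
    def __init__(self, repo: AlbumRepository):
        self.repo = repo  # injected, so it can be swapped for a mock in tests

    def get_album_view(self, album_id: int) -> dict:
        album = self.repo.get_album(album_id)
        return {**album, "artist_name": album["artist"]}

supervisor = AlbumSupervisor(SqlAlbumRepository())
print(supervisor.get_album_view(1))

Swapping SqlAlbumRepository for a mock implementation of the same interface is what makes the supervisor's translation and business logic easy to unit test, which is the point being made above.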
Looking at the supervisor method for acquiring and passing a single Album APIModel from the API endpoint, as shown in Listing 4, we can see the logic in connecting the API front end to the data access injected into the supervisor, but still keeping each isolated. Listing 4 — Supervisor Method for a Single Album Keeping most of the code and logic in the Domain project will allow every project to keep and adhere to the single responsibility principle. Data Layer The next layer of the API architecture we’ll look at is the Data Layer. In my example solution, I’m using Entity Framework Core 3.1. This will mean that I have the Entity Framework Core’s DBContext defined, but also the Data Models generated for each entity in the SQL database. If you look at the data model for the Album entity as an example, you’ll see that three properties are stored in the database, along with a property containing a list of associated tracks to the Album and a property that contains the Artist object. While you can have a multitude of Data Layer implementations, just remember that it must adhere to the requirements documented on the Domain Layer. Each Data Layer implementation must work with the View Models and repository interfaces detailed in the Domain Layer. The architecture you’re developing for the API uses the Repository Pattern for connecting the API Layer to the Data Layer. This is done using Dependency Injection (as I discussed earlier) for each of the repository objects you implement. I’ll discuss how you use Dependency Injection when we look at the API Layer. The key to the Data Layer is the implementation of each entity repository using the interfaces developed in the Domain Layer. Looking at the Domain Layer’s Album repository in Listing 5 as an example, you can see that it implements the AlbumRepository interface. Each repository will inject the DBContext, which allows for access to the SQL database using Entity Framework Core. Listing 5 — Album Repository Based On The Album Repository Interface Having the Data Layer encapsulate all data access will allow facilitating a better testing story for your API. We can build multiple data access implementations: one for SQL database storage, another for a cloud NoSQL storage model and finally a mock storage implementation for the unit tests in the solution. API Layer The final layer we’ll look at is the area that your API consumers will interact with. This layer contains the code for the Web API endpoint logic including the Controllers. The API project for the solution will have a single responsibility, and that is to handle the HTTP requests received by the web server and return the HTTP responses with either success or failure. There will be a very minimal business logic in this project. We will handle exceptions and errors that have occurred in the Domain or Data projects to effectively communicate with the consumer of APIs. This communication will use HTTP response codes and any data to be returned located in the HTTP response body. In ASP.NET Core 3.1 Web API, routing is handled using Attribute Routing. You’re also using dependency injection to have the Supervisor assigned in each Controller. Each Controller’s Action method has a corresponding Supervisor method that will handle the logic for the API call. I have a segment of the Album Controller in Listing 6 to show these concepts. Listing 6 — Segment Of The Album Controller The Web API project for the solution is very simple and thin. 
I strive to keep as little code in this solution as possible, because it could be replaced with another form of interaction in the future. Conclusion As I have demonstrated, designing and developing a great ASP.NET Core 3.1 Web API solution takes insight to have a decoupled architecture that allows each layer to be testable and follow the Single Responsibility Principle. I hope this information will allow you to create and maintain your production Web APIs for your organization’s needs.
https://medium.com/rocket-mortgage-technology-blog/implementing-an-effective-architecture-for-asp-net-core-web-api-254f95b8a434
['Rocket Mortgage Technology']
2020-12-08 19:54:23.839000+00:00
['Architecture', 'API', 'Aspnetcore', 'Development', 'Dotnet']
Evolution of Natural Language Generation
Evolution of Natural Language Generation An article to draw attention towards the evolution of Language Generation Models Abhishek Sunnak, Sri Gayatri Rachakonda, Oluwaseyi Talabi Since the dawn of Sci-Fi cinema, society has been fascinated with Artificial Intelligence. Whenever we hear the term “AI”, our first thought is typically one of a futuristic robot from movies such as Terminator, The Matrix and I, Robot. Although we might still be a few years away from robots that can think for themselves, there have been significant developments in the fields of machine learning and natural language understanding over the past few years. Applications such as Personal Assistants (Siri/Alexa), chatbots and Question-Answering bots are truly revolutionizing the way we interface with machines and go about our daily lives. Natural Language Understanding (NLU) and Natural Language Generation (NLG) are among the fastest growing applications of AI due to the increasing need to understand and derive meaning from language, with its numerous ambiguities and varied structure. According to Gartner, “By 2019, natural-language generation will be a standard feature of 90 percent of modern BI and Analytics platforms”. In this post, we will discuss a brief history of NLG since the early days of its inception, and where it is headed in the coming years. What is Natural Language Generation? The goal of language generation is to convey a message by predicting the next word in a sentence. The problem of which likely word to predict (among millions of possibilities) can be tackled by using Language models, which are a probability distribution over sequences of words. Language models can be constructed at a character level, n-gram level, sentence level or even paragraph level. For example, to predict the next word that comes after “I need to learn how to ___”, the model assigns a probability for the next possible set of words which can be “write”, “drive” etc. Recent advances in neural networks such as RNNs and LSTMs have allowed processing of long sentences, significantly improving the accuracy of language models. Markov Chains Markov chains are among the earliest algorithms used for language generation. They predict the next word in a sentence by just using the current word. For example, if a model was trained using only the following sentences: “I drink coffee in the morning” and “I eat sandwiches with tea”. There is 100% chance it would predict “coffee” to follow “drink”, while there is 50% chance for “I” to be followed by “drink” and 50% to be followed by “eat”. A Markov chain takes the relationship between each unique word into consideration to calculate the probability of the next word. They were used in earlier versions of smartphone keyboards to generate suggestions for the next word in the sentence. Markov Model for an example sentence (Source: Hackernoon) However, by just focusing on the current word, Markov models lose all context and structure of the preceding words in the sentence which can lead to incorrect predictions, limiting their applicability in many generative scenarios. Recurrent Neural Network (RNN) Neural networks are models that are inspired by the workings of a human brain, offering an alternate method for computing by modeling non-linear relationships between inputs and outputs — their use for language modeling is known as neural language modeling. An RNN is a type of neural network that can exploit the sequential nature of the input. 
It passes each item of the sequence through a feedforward network and gives the output of the model as an input to the next item in the sequence, allowing for the storage of information from the previous steps. The “memory” possessed by RNNs makes them great for language generation, as they can remember the context of the conversation over time. RNNs differ from Markov chains, in that they also look at words previously seen (unlike Markov chains, which just look at the previous word) to make predictions. Unrolled Architecture of an RNN module (Source: Github) RNNs for Language Generation In every iteration of the RNN, the model stores in its memory the previous words encountered and calculates the probability of the next word. For example, if the model generated the text “We need to rent a ___ ”, it now has to figure out the next word in the sentence. For every word in the dictionary, the model assigns the probability based on the previous words it has seen. In our example, the words “house” or “car” will have a higher probability than words like “river” or “dinner”. The word with the highest probability is selected and stored in the memory, and the model then proceeds with the next iteration. Sentence Generation through unrolling of an RNN RNNs suffer from a major limitation — the vanishing gradient problem. As the length of the sequence increases, RNNs cannot store words encountered far back in the sentence, and only make predictions based on recent words. This limits the application of RNNs towards generating long sentences that sound coherent. Long Short-Term Memory (LSTM) Architecture of an LSTM module (Source: Github) LSTM based neural networks are a variant of RNNs designed to handle long-range dependencies in the input sequence more accurately than vanilla RNNs. They are used in a wide variety of problems. LSTMs have a similar chain-like structure to RNNs; however, they comprise a four-layer neural network instead of a single layer network for RNNs. An LSTM is composed of 4 components: a cell, an input gate, an output gate and a forget gate. These allow RNNs to remember or forget words over arbitrary time intervals by regulating the flow of information in and out of the cell. LSTMs for Language Generation Sentence Generation trough unrolling of an LSTM Consider the following sentence as an input to the model: “I am from Spain. I am fluent in ____.” To correctly predict the next word as “Spanish”, the model focuses on the word “Spain” in an earlier sentence and “remembers” it using the cell’s memory. This information is stored by the cell while processing the sequence and is then used when predicting the next word. When the full stop is encountered, the forget gate realizes that there may be a change in the context of the sentence, and the current cell state information can be overlooked. This allows the network to selectively keep track of only relevant information while also minimizing the vanishing gradients problem which allows the model to remember information over a more extended period. LSTMs and its variations seemed to be the answer to the problem of vanishing gradients to generate coherent sentences. However, there is a limitation to how much information can be saved as there is still a complex sequential path from previous cells to the current cell. This limits the length of sequences that an LSTM could remember to just a few hundred words. An additional pitfall is that LSTMs are very difficult to train due to high computational requirements. 
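To make the LSTM-based next-word idea concrete before moving on, here is a minimal sketch (assuming PyTorch; the vocabulary, training loop, and class name are illustrative and not from this article):

import torch
import torch.nn as nn

class NextWordLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.embed(token_ids)                 # (batch, seq_len, embed_dim)
        hidden_states, _ = self.lstm(x)           # (batch, seq_len, hidden_dim)
        return self.out(hidden_states[:, -1, :])  # logits over the next word

# Score the next word for a toy three-token "sentence" from a 1,000-word vocabulary
model = NextWordLSTM(vocab_size=1000)
logits = model(torch.tensor([[12, 7, 431]]))
next_word_id = logits.argmax(dim=-1)

The cell state inside nn.LSTM is what lets such a model carry information like "Spain" across many steps; an untrained model like this one would of course need to be fit on a corpus before its predictions mean anything.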
Due to their sequential nature, they are hard to parallelize, limiting their ability to take advantage of modern computing devices such as GPUs and TPUs. Transformer The Transformer was first introduced in the 2017 Google Paper “Attention Is All You Need”, where it proposed a novel method called the “self-attention mechanism”. Transformers are currently being used across a wide variety of NLP tasks, such as language modeling, machine translation and text generation. A transformer consists of a stack of encoders to process an input of any arbitrary length and another stack of decoders to output the generated sentence. Animation showing the use of a transformer for machine translation (Source: GoogleBlog) In the above example, the encoder processes the input sentence and generates a representation for it. The decoder uses this representation to create an output sentence word by word. The initial representation/embedding for each word are represented by the unfilled circles. The model then aggregates information from all other words using self-attention to generate a new representation per word, represented by the filled balls, informed by the entire context. This step is then repeated multiple times in parallel for all words, successively generating new representations. Similarly, the decoder generates one word at a time, from left to right. It attends not only to the other previously created words but also to the final representations developed by the encoder. In contrast to LSTMs, a transformer only performs a small, constant number of steps while applying a self-attention mechanism which directly models relationships between all words in a sentence, regardless of their respective position. As a model processes each word in an input sequence, self-attention allows the model to look at other relevant parts of the input sequence for better encoding of the word. It uses multiple attention heads which expands the model’s ability to focus on different positions regardless of their distance in the sequence. In recent times, there have been a few modifications made to vanilla transformer architectures which significantly improved their speed and accuracy. In 2018, Google released a paper on Bidirectional Encoder Representations from Transformers (BERT) which produced state of the art results for a variety of NLP tasks. Similarly, In 2019 OpenAI released a transformer-based language model with around 1.5 billion parameters to generate long, coherent articles using just a few lines of input text as a prompt. Language generation using OpenAI’s GPT-2 model (Source: Venture Beat) Transformers for language generation Recently, Transformers have also been used for language generation. One of the most well-known examples of transformers used for language generation is by OpenAI, in their GPT-2 language model. The model learns to predict the next word in a sentence by using attention to focus on the words previously seen in the model that are relevant to predicting the next word. Relationships determined by the self-attention mechanism in transformers (Source: Medium) Text Generation with Transformers is based on a similar structure to the one followed for machine translation. If we take an example sentence “Her gown with the dots that are pink, white and ___.” The model would predict blue, by using self-attention to analyze the previous words in the list as colors (white and pink) and understanding that the expected word also needs to be a color. 
Self-attention allows the model to selectively focus on different parts of the sentence for each word instead of just remembering a few features across recurrent blocks (in RNNs and LSTMs) which mostly will not be used for several blocks. This helps the model recall more characteristics of the preceding sentence and leads to more accurate and coherent predictions. Unlike previous models, transformers can use representations of all words in context without needing to compress all information into a single fixed-length representation. This architecture allows transformers to retain information across much longer sentences without significantly increasing the computation requirements. They also perform better than previous models across domains without the need for domain-specific modifications. The future of Language Generation In this blog, we saw the evolution of language generation from using simple Markov chains for sentence generation to using self-attention models for generating longer range coherent text. However, we are just at the dawn of generative language modeling, and transformers are just one step in the direction towards truly autonomous text generation. Generative models are also being developed for other types of content such as images, videos, and audio. This opens the possibility to integrate these models with generative text models to develop advanced personal assistants with audio/visual interfaces. However, we, as a society, need to be careful with the application of generative models as they open several possibilities for their exploitation in generating fake news, fake reviews and impersonating people online. OpenAI’s decision to withhold release of their GPT-2 language model due to the potential for its misuse is a testament to the fact that we have now entered an age where language models are powerful enough to cause concern. Generative models have the potential to transform our lives; however, they are a double-edged sword. By putting these models through appropriate levels of scrutiny, whether through the research community or government regulation, there is certainly going to be a lot more progress in this domain over the coming years. Regardless of the outcome, there should be exciting times ahead!
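The GPT-2 style generation discussed above can be tried in a few lines (a sketch assuming the Hugging Face transformers library is installed; the prompt is arbitrary):

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The future of language generation", max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])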
https://medium.com/sfu-cspmp/evolution-of-natural-language-generation-c5d7295d6517
['Abhishek Sunnak']
2019-03-16 04:21:50.343000+00:00
['Machine Learning', 'Naturallanguagegeneration', 'NLP', 'Artificial Intelligence', 'Deep Learning']
Why Being a Public School Teacher is the Greatest Job
Why Being a Public School Teacher is the Greatest Job Especially during this time of COVID-19 and remote work Photo by Ben Mullins on Unsplash I’ve always loved my job. Of course, many are the days that I’d prefer to stay home, go have lunch with friends and skip work. But, overall, I find teaching rewarding, challenging and fun. I’m an early childhood special education teacher at a public school in Connecticut. I teach a preschool class that includes students with moderate to significant disabilities along with non-disabled students. I’ve been doing this for the past 15 years. These past three weeks, I’ve felt more grateful than ever to have a job, this job specifically. Here are three reasons why: Job security Thousands of people have lost their jobs or income due to the pandemic. Hundreds more will follow. Many more still feel enormously anxious because they can’t be sure they’ll still have a job when this is over -even if, ultimately, they don’t end up unemployed. I have a secure job. Unless I fail miserably at it, I won’t be laid off. There aren’t that many qualified and credentialed folks looking for jobs as early childhood special ed teachers. I know. My school district hired one last summer and I participated in the interviews. My paycheck is a certainty, if there ever was one. I can count on a specific amount being deposited twice monthly. Of utmost importance, I’m able to pay for good health insurance for my family through my job. The best friendships During normal times, we’re a good group. During this crazy time, we’ve become “the best team,” as my colleague Violet says. The support, kindness and friendships are invaluable. The students and families Leaving the most rewarding part for last: the humans I serve. They take up a space of my heart every year and stay there forever. In real life, the kids provide instant gratification multiple times a day: when a child learns, smiles, says something funny, calls you “mom”, takes your hand, opens her eyes in wild amazement, pees in the potty for the first time ever — just to name a few examples. Even remotely, the rewards of the job are apparent every day: a parent’s email telling you her child cried when the video chat ended (because he was so excited to see you); a video showing Johnny standing in front of a screen following your rendition of Head, Shoulders, Knees and Toes; or, my favorite, a video of a child reading a self-made story with his own drawings of birds (the theme we’ve been focusing on).
https://medium.com/age-of-awareness/why-being-a-public-school-teacher-is-the-greatest-job-9187b7f14119
['Daniella Mini']
2020-04-02 21:05:01.909000+00:00
['Disability', 'Teaching', 'Remote Work', 'Coronavirus', 'Education']
Electron + React + Python (Part 3) — Boilerplate 2/3
Before we proceed, I urge you to read the following documentation: Preparing electron.js — main process The first question that you should be asking is why electron.js and not some other file? — This is the only piece of code that has access to the instance of ipcMain, and as you know an app can have no more than one ipcMain instance. We make use of ipcMain to set up listeners that listen for events being triggered from renderer processes. So any message from a renderer process (in our case, the React UI) can be sent to the main process. What we are aiming for… The following modifications in electron.js would be needed to set up a listener. ipcMain.on( 'EVENT_LISTENER_NAME', (event, args) => { /* ---- Code to execute as callback ---- */ } ); Note: Explaining Electron basics is not the objective of this article, but if you read the docs properly, you know that you can pass around data while triggering an event, and use that data while firing the callback at the listener. This will play a major role in helping us communicate with Python scripts. Note: There is an interesting thing to note about the first argument of the callback in the event listener — event. This argument gives you information about the process that triggered the event, and you can make use of this argument to send a message back to the triggering process. The Electron.js guys have provided really nice examples around this concept. So for the sake of having an easy example, let us just set up two event listeners. Why two? If you read the concept I described right at the beginning of this article, I said that the visible renderer process will send a request to the main process — for that we will need one event listener. But there is another hidden renderer that will be used to execute the background task, and when the hidden renderer wants to relay the processed data back to the visible renderer, it will have to do it via the main process as shown in the diagram — for this we will need the second event listener. After setting up electron.js, it should look something like this. Note: You may notice that I have left the callback blank, but that has been done on purpose. We will write the code for it when we revisit electron.js later in this article. Preparing React UI (app.js) — visible renderer process Now let's set up a few things in our UI to make sure we get a confirmation when our test runs successfully. There are two things the visible UI needs to do: Request the main process to start a background process and process the data being provided by the UI. Listen for an event that the background process will send to the visible renderer via the main process. So with the above points in mind, this is how we can visualize the app so far This is what we are aiming for… The following snippet (which will be placed in the React UI code) would be needed to trigger an event that the event listener we just set up in the main process would react to. ipcRenderer.send( 'BACKGROUND_PROCESS_START', { "Key1": 123, "Key2": 456 } ); So technically speaking, other than a few syntactic differences, everything is just the same for the main process and the renderer process.
To keep things simple, let us trigger the request for background processing right when our UI mounts — componentDidMount componentDidMount(){ ipcRenderer.on( 'MESSAGE_FROM_BACKGROUND_1', (event, args) => { const { message } = args; console.log(message); } ); ipcRenderer.send( 'START_BACKGROUND_1', { "number": 25 }); } After making the required changes, your code should look something like this. Note: Pay close attention to the import statements. Whenever you need access to the ipcRenderer object, you'll have to import it as shown in every file you access it in. const electron = window.require('electron'); const { ipcRenderer } = electron; If you get an error saying window.require is not a function then take a look at this: Note: So you'll just need to update a function in electron.js and you should be good to go. To have access to node modules in the renderer process, this change will be mandatory. Preparing Hidden renderer (background.html) — hidden renderer process Now for the background process. There are a few things to note about this process: It is just another renderer process, nothing special. As opposed to the visible renderer process, this will be a hidden browser window. So no form of user interaction will be possible with it. As this is yet another window and the user has no way of interacting with it, it will be your responsibility to clean up and close this window, or else you will end up leaking memory. All these things are pretty important, and handling them every time you need a background process becomes repetitive and bug prone after a while. So I wrote a little library to abstract away these things. But in this tutorial I'll show you how to do it without the library so that you know what the library is actually doing for you. So let's break it down into steps: When the visible renderer process asks for a background job to be done, the main process will create a new window. (We can dispatch the data we wish to send to the python script only after we are sure that this window has been created successfully). The hidden window gets created by the main process. After successful initialization, the hidden window fires an event telling the main process that it is ready to accept data. The main process sends appropriate data to the hidden process, and the hidden process fires up the python script with the data it has been provided. The python script sends the processed information back to the hidden renderer, which again makes use of events to relay this information back to the visible renderer via the main process. This is what the visualization would look like What we are aiming for… Note: We'll deal with the window creation when we revisit electron.js Keeping the above steps in mind, let's start by firing a BACKGROUND_READY event which the main process will listen to. ipcRenderer.send('BACKGROUND_READY'); We will use the following snippet to set up a listener that will trigger the actual function that will start the python script. This event will be triggered by the main process when it gets a confirmation that the hidden renderer process has been initialized successfully and is now ready to accept data.
ipcRenderer.on('START_PROCESSING', (event, args) => { const { data } = args; let pyshell = new PythonShell(path.join(__dirname, '/../scripts/factorial.py'), {pythonPath: 'python3', args: [data] }); pyshell.on('message',function(results) { ipcRenderer.send( 'MESSAGE_FROM_BACKGROUND', { message: results } ); }); }); After making the required changes, your code should look something like this. Revisiting electron.js — Gluing everything together Now let's come back to electron.js and make the final changes that we discussed in the above section. We need an event listener to listen to the BACKGROUND_READY event and send the data that we will store in a global variable. We also need an event listener set up to listen to the output from the background process and relay it back to the visible renderer. The red arrows are the confirmation messages that are used to give the main process a signal that our hidden renderer is ready and the main process can now send in the data for processing. This is what we are aiming for now… The following lines of code will do so: ipcMain.on('BACKGROUND_READY', (event, args) => { event.reply('START_PROCESSING', { data: cache.data, }); }); After this, we will put in the code to create the hidden window using the BrowserWindow class as shown in the snippet below. I have left a few of the logical explanations for the readers to work out on their own. I have heavily commented the whole code, but if you still get stuck, feel free to ask. ipcMain.on('START_BACKGROUND_VIA_MAIN', (event, args) => { const backgroundFileUrl = url.format( { pathname: path.join(__dirname, `../background_tasks/background.html`), protocol: 'file:', slashes: true, } ); hiddenWindow = new BrowserWindow({ show: false, webPreferences: { nodeIntegration: true, }, }); hiddenWindow.loadURL(backgroundFileUrl); hiddenWindow.webContents.openDevTools(); hiddenWindow.on('closed', () => { hiddenWindow = null; }); cache.data = args.number; } ); After the changes have been made, this is what electron.js should look like: Time to add in some python code Pay close attention to where I am keeping the python scripts. Edit: Thanks to a vigilant reader, do install python-shell before adding the following code: npm install --save python-shell Rule of thumb — Always execute npm install when you clone any node project from Github. 👍 This is how the python script will fit in our diagram. Nice and simple… For the sake of simplicity, the code can literally be anything trivial. Let's just find the factorial of a given number (a sketch of the script appears below). For now I have passed data to the python code via arguments, but there are even more creative and flexible ways of passing data back and forth. More on that in the next article.
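The factorial script itself was embedded as a gist in the original post; here is a minimal sketch of what it might look like, assuming the number arrives as a command-line argument via python-shell's args option and that anything printed to stdout is what pyshell.on('message') picks up:

# scripts/factorial.py
import sys

def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

if __name__ == "__main__":
    number = int(sys.argv[1])  # value passed in through python-shell's args option
    print(factorial(number))   # each printed line arrives in Electron as a 'message' event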
https://medium.com/heuristics/electron-react-python-part-3-boilerplate-2-3-a6da0244768f
['Aakash Mallik']
2020-06-23 06:31:15.632000+00:00
['JavaScript', 'Python', 'Coding', 'Electron']
Not Content With Contentful
This is the fourth in a short series on my attempts to monetize my parked domains. This one covers how Git-based workflows better serve my goals, and it actually starts to provide code. I've been looking for a centralized data store that 100+ domains can pull content from. A headless CMS is a must, since the same content may be served with vastly different formatting and styling between sites. As a developer, a git-based solution would be ideal. I couldn't find one I liked, so I started building one. Contentful Reader Data Flow Contentful — Headless CMS Contentful is the leading headless CMS provider. I've used Contentful for some of my own blog content as well as at places I've worked. It makes it easy to create complex, multi-dimensional data that is fully localized. Contentful even has a free Community Space that supports a pretty complete site. The Cons: Only one free space is allowed and additional spaces start at $489. The number of object types or schemas is limited. The response times are not amazing, especially for a "CDN". Netlify — Git-based CMS I'm intrigued by the promise of Netlify. It is git-based to fit in with the rest of the workflow, and you could even store an editor with the code. The Cons: Tightly coupled with the Netlify website. No easy way to mix and match content on sites. Ghost — Headless CMS Ghost seemed like it would be a good fit for a blog. But making a single blog stream into hundreds of blogs is more of a job for an application. The Cons: A limited object model paradigm. No easy way to mix and match content on sites. The Solution — Create a CMS on AWS The intent was never to write any code; I thought maybe I'd end up with a couple of CDK or Cloudformation resources. Being a cloud engineer, building a solution on AWS is always appealing. Done correctly it can be cost-effective and can scale forever with zero changes or human attention. Using AWS it can be fast and reliable with very little effort. Start with Contentful Export Files As I said, I have some blog content in a free Contentful account and would like to use that. If I can export it, I can use my free Community Space for playing with and testing Contentful using other content. Contentful also has a strong object paradigm that will be easy to build upon. Break it up! To make use of this large export file I created a Contentful Reader. Given a single large Contentful file, it breaks up the JSON into many smaller files for each content type or entry. The reader can handle files that are gigabytes in size and can process 60+ entries per second, so a 1 GB file with 500k entries should take less than 15 minutes to process. Git it together These files are ideal for committing to a git repository. This allows you to apply all the git workflow tools in the market to your Contentful data. Git-based workflows with CMS data are like peanut butter and chocolate: they go great together. Benefits of Git Easily support multiple concurrent branches or versions. Increased auditability. Easy to revert mistakes. Integrates with other development processes. Dynamo is Dynamite The last step between the CMS and web page still benefits from the flexibility of content stored in a database. Flat git files alone can't support a performant GraphQL API. For that we turn to DynamoDB and single table design. I use denormalization and hydration to create a Dynamo structure that can easily match my expected query patterns. The non-key field data in the table looks much like the data in the JSON files. The magic is in the keys created for the data.
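To make that key design concrete, here is a sketch assuming boto3 and a hypothetical table name: an illustration of single-table keys for this use case, not the actual schema, which is still being worked out below.

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("cms-content")  # hypothetical table name

# Denormalized item: the keys encode which "virtual blog" a post belongs to
table.put_item(Item={
    "pk": "site#example-domain",      # partition key groups one site's content
    "sk": "post#2020-11-20#my-slug",  # sort key orders posts by date
    "title": "Example post",
    "body": "...",
})

# One query returns a site's posts, newest first, already shaped for the API
posts = table.query(
    KeyConditionExpression=Key("pk").eq("site#example-domain"),
    ScanIndexForward=False,
)["Items"]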
The JSON files from the repo are used to create complex partition key, sort key and GSI key values. These keys make it easy to fetch the data already formatted in the way we need it. Serve it up The Dynamo data can easily be served through both a REST API and an AppSync GraphQL API. The data is stored in an optimized format so that little if any processing needs to take place when serving the data. Due to the nature of AppSync and its resolvers it’s unlikely the same Lambda would be able to service both REST and GraphQL requests. Any logic that is shared between the two would be moved to custom JavaScript modules. I have a simple Cloudformation file for creating your own NPM and PIP repository using AWS CodeArtifact. Save cash with cache From a consumer perspective the Dynamo data is read only. This means we can easily implement a Cloudfront distribution in front of it with caching enabled. This can limit the max calls to the backend to handful in a month no matter how much traffic a site may generate. Optimizing the Dynamo Schema First let’s address a common misconception, DynamoDB shouldn’t be thought of as “schemaless”. It should be thought of as multi-schema or flexible schema. A single table can have items that adhere to different schemas. The schema for those items may not be explicitly defined but it is there or the data would be useless in an API. I highly suggest defining your schema as explicitly as possible. Optimizing keys for parts and aggregations For the use-case of breaking one blog stream into many there are two important considerations. How are sub-parts of the data used? How are parts of the data aggregated? I’m still trying a few different things and modeling how things will work for importing, updating and serving efficiently. Part 1 — Parking for Pennies Part 2 — AWS SSL Certificates Part 3 — Mass Hosting Paradigm
https://medium.com/swlh/not-content-with-contentful-5b4d3bdb21b3
['Brian Winkers']
2020-11-20 19:02:42.253000+00:00
['Contentful', 'CMS', 'JavaScript', 'Headless Cms', 'AWS']
UI vs UX: Differences In Frontend Design
User experience (UX) and User Interface (UI) are two of the most important elements of a front end design. Why do I say that? Have you heard of the following quote? “For every dollar, a business spends on UX, they earn $100 from it.” That means that a business is earning a 9,900% Return on Investment! As much as the two terms, UI and UX are important, people often confuse them when discussing web design and application. Perhaps these confusing design terminologies are no one’s fault. The surface-level description of the UI/UX design loosely means the same thing. It is only when you scratch the surface when one starts noticing the thin line that separates the two. This is where I will try to explain (in detail!) what is a user interface and user experience. By the end of this discussion, you will be able to differentiate between the two and have a better understanding of both. What is UI Design? A user interface (UI) is anything a software user interacts with on their screen while using the online application/tool. By anything, I mean everything; from sound to lights to buttons to forms and screens to pages. UI refers to the visual elements of a digital product and the way the product presents those elements to users. As of January 2020, over 1.2 billion websites exist on the Internet. Moreover, we know that Google’s artificial intelligence also looks a the website’s UI and UX for ranking. So, how did the front-end design become such a huge deal for businesses? History of User Interface Going back to the 1970s, when computers first came into the market, we had to use the Command Line interface for even a small interaction. This was a simple black screen with white text and A LOT of lengthy commands. The graphical interface was not only unavailable but, the computers could not support them either. A strong know-how of programming language was necessary to run simplest commands. Fast forward to the 1980s, Xerox PARC invented a “graphical user interface” (GUI), which allowed users to interact with computers through icons, menus, checkboxes and other buttons. A simple but hugely helpful interface was now present. One of the first Macintosh computers by Apple Computers. Photo Credits: Museums Victoria Finally, around the same time, Apple (formerly known as Apple Computer) invented a mouse-system for “point-and-click”. The Macintosh quickly became one of the first computers commercially available for homes and offices. By the mid-1980s, it was clear that UI design was important to grow a digital brand and the Macintosh was a huge example. If the users could not interact easily with computers or any digital (or physical) product, it would not sell. This is how UI web design was born and has progressed into one of the fundamentals of building a great online product. Thanks to the incredible progress in IT and software development, UI designers have an almost limitless opportunity to build just about anything for different products. From smart wearable devices to computers, mobile phones and everything on the internet, UI design is a prevailing element. What is UX Design? A user experience (UX) is a process that designers use to create meaningful products and provide relevant experience to users. If the UI design is about how a product will look and function, the UX design is about the experience the product will offer. Creating a perfect user experience involves designing the entire process of acquiring and integrating the product with usability, branding, and function. 
Perhaps one can even say that the work of a UX designer goes beyond UI design. Specifically, you may even say that the UI is a subset and one of the important aspects of the broader term UX. Let’s hear it from Donald A. Norman, who is a co-founder and Principal Emeritus of Nielsen Norman Group. He is also a researcher, professor, and director at the University of California, San Diego, and the author of The Design of Everyday Things: Donald A. Norman on User Experience (UX). Video Credits: Nielsen Norman Group According to Norman, the first requirement for an outstanding user experience is to meet the exact needs of customers as a priority. It is about creating a simple-to-use tool and goes beyond giving customers what they say they want. Researching UX Design Techniques As far as we have discovered, user experience is about researching the best experience for end-users. It is about putting them at ease, even if it requires making smaller packaging boxes for equipment or adding a small handle to easily lift heavy equipment. User experience research methods are great for gaining insight and understanding of products or services. When research is combined with UX design activities, the overall product development effort becomes more effective and useful. One question that usually comes to mind is: “When should I start my UX design research?” The answer: Right Now! While the method of UX design research for a brilliant UI UX strategy is a complete subject on its own, I am only going to touch on it briefly. The earlier you begin the research, the more impact you will create with your product. Hence it is imperative that regardless of the stage of research you are in, you should begin your UX research at that moment. Important UX Research Methods & Activities. Photo Credits: Nielsen Norman Group User research is necessary at all stages and there is a useful method to go about it as well. The famous saying “the early bird gets the worm” is true for UX design research. Luckily for you, there is an incredible ‘cheat sheet’ available that I discovered for myself. The infographic at the left speaks about the major UX research activities and consists of: Discover Explore Test Listen What is important to note here is that not all stages are exactly as prescribed, and none of them are placed in a rigid step-wise list. The concept here is to start from somewhere and learn more as you proceed. If you are confused about where to start or how to conduct research, then these UX research methods are a great place to start. You may find some methods more feasible than others, depending on your time limitations, product/service type, or other concerns. This is why I would prefer that you proceed to try different methods at different product/service development stages, because all method types are aimed at achieving different goals and insights. What are the Differences Between UI and UX Design? If you require the most basic summary, then I would say that UI is simply the elements that users see and interact with while using a product or service. On the other hand, UX is purely about the experience that users get from using a product or service. For example: If UI is about making a curvy ketchup bottle that looks great, then UX is about making the curves of the bottle in such a way that consumers can easily grab it (even with oily hands!). Google is perhaps the best example. I would call them giants of the UI UX model as well. 
Consider this: The famous white Google Search Engine Results Page (SERP) offers more than just thousands of links for your keyword search. Google knows that when you come to their website, you are looking for accurate information. And, you want it instantly! Google built its entire environment around delivering accurate results in the shortest possible time. This is why it has dominated the world of Internet search engines. Statista reports that Google enjoyed a whopping worldwide market share of ~87% during July 2020. The remaining ~13% is divided between Yahoo!, Baidu, Yandex and Bing. Google’s Huge Market Share Recorded For July 2020. Photo Credits: Statista Imagine Google taking even 5 seconds to give you results. That would be a disastrous day for the multibillion-dollar company! Here is what Ken Norton, who is a partner at Google Ventures and a former Product Manager at Google, has to say about UI and UX design: “Start with a problem we’d like to solve. UX design is focused on anything that affects the user’s journey to solve that problem, positive or negative, both on-screen and off. UI design is focused on how the product’s surfaces look and function. The user interface is only one piece of that journey. I like the restaurant analogy I’ve heard others use: UI is the table, chair, plate, glass, and utensils. UX is everything from the food, to the service, parking, lighting and music.” If you are not in much of a reading mood, then here is a fantastic video to distinguish between UI and UX: A Quick Look At UI and UX. Career: Jobs of UI and UX Designers We now know the individual descriptions of UI design and UX design. We also know how the user interface is different from the user experience, and vice versa. Now it is time to analyze how the professionals of these respective fields work. Role of UI Designers UI designers are masters of visual design. They work on design research and customer analysis. They take care of branding and graphic design. A UI designer will typically engage in user guides and build storylines for their brands. Although building a brand is not the sole responsibility of a UI designer, translating the brand into visuals and interactions certainly is. UI designers do this by: UI prototyping, Creating interactive features and animations, Creating responsive screens for all display sizes, And building an overall integration with backend development. Role of UX Designers A UX designer’s role is spread out across several areas. As an overview, UX designers are also concerned with user research to develop the best customer experience. Competitor analysis is prominent along with customer analysis for optimization. Wireframing and prototyping also matter here. These include: Wireframing Testing & Iteration Development plans Prototyping User experience designers need to have strong coordination with user interface designers. They must coordinate with developers, analyze product performance and track goals. As you may notice, the UX role is challenging and multifarious. The design of any product has deep roots going into multiple research levels. It is about iterations and refinements until one develops a perfect product for its customers. It is about building a product that accurately represents a company brand and satisfies customer relationships. Wrapping Up — UI UX Matters! When aiming for the design of an online product or service, SEO and content are not the only things that matter. 
This is especially the case when trying to grab user attention to increase the reach of your digital product. Whether you are designing a web application, mobile application, any digital product or even a physical product, design and experience are two of the most important elements to remember. Growing software development companies already realize the importance of a great UI design structure. However, it is paramount that businesses also consider UX design in the process. A great UI/UX and branding will: Encourage user interaction with your platform, Generate customer and brand loyalty, Generate recommendations and referrals, Reduce development costs, Reduce internal costs, and Increase overall profits. Useful Stats Regarding UX Design. Photo Credits: InvoZone In this modern era, where consumers and clients are becoming aware of their rights and values, customer experience matters a lot. Consider the following stats by Forrester, Adobe and Google on the left. The fields of UI and UX are confusing only because they are relatively new terms in product design. However, people can avoid this unnecessary confusion, and this article should provide enough insight to get you going for the future. In a nutshell, while the user interface is about how customers will interact with a product, the user experience is about their overall thoughts before, during and after using your product.
https://uxdesign.cc/ui-vs-ux-revisiting-differences-between-frontend-design-aspects-in-2020-and-importance-592ce9ac6360
['Talha Waseem']
2020-12-31 00:00:00
['UI', 'Product Design', 'Design', 'Front End Development', 'User Experience']
YamFlow — Reference Machine Learning Workflow
Hello, World! We’re happy to announce our new startup YAM (www.yam.ai). We initially chose YAM to stand for Yet Another Machine (which meant Artificial Intelligence) and later made it a recursive backronym: YAM AI Machinery. At this startup, we strive to standardize the practices and frameworks of developing AI applications so that AI can be componentized for reuse and integration. With reusable and mashable AI components, we help enterprises build AI applications in a fast and proven fashion through consultancy. YamFlow Draft Specification First of all, we would like to share with you the draft version of our reference workflow for the machine learning (ML) development life-cycle. We name this workflow YamFlow. YamFlow aims to provide a canonical taxonomy for practitioners to understand and communicate the flows of activities and data involved in a typical ML process. It specifies the key activities of pipelining data, modeling ML, training ML, and serving the inference. Internally, we also use YamFlow as the baseline for YAM AI Machinery to design interoperable frameworks for composing ML tasks and data. If you’re involved in developing ML applications in an enterprise environment, we believe you’ll find YamFlow useful too. While we are maintaining YamFlow as a live specification, we wish to hear your comments so that we can keep improving it for more accuracy, practicality, and generality. Please check out YamFlow at https://flow.yam.ai, which is maintained as a GitHub project. We’d love to hear feedback from you so that we can improve YamFlow to cover your use cases. Lastly, we’d love to stay in touch on social media:
https://medium.com/yam-ai/yamflow-reference-machine-learning-workflow-ffdb4a7ccf33
['Thomas Lee']
2019-04-11 09:43:45.263000+00:00
['Yamflow', 'Artificial Intelligence', 'Neural Networks', 'Data Science', 'Machine Learning']
One-Stop News
4. Feature Engineering: Feature engineering is an essential part of building any machine learning model. Feature engineering is the process of transforming data into relevant features to act as inputs for the machine learning model. Good and relevant features boost model performance. We have transformed textual data into Tf-Idf vectors. Tf-Idf is a score that represents the relative importance of a term in the document and the entire corpus. 5. Model Creation: Model creation is the process of building models for the predictive tasks that we want to perform. For our case, we created models for classifying news headlines into different categories, summary generation, sentiment analysis of news articles and topic modeling. We have built machine learning models such as LDA, Random Forests and SVM. For model building, we have used the Python libraries Gensim and Scikit-learn. 6. Results: We have developed a web application to deploy our project. This web-app is the front end of our project where users can access all the functionalities that our project offers under a single web page. For developing our web app, we have used the Django web framework. Feature implementation and evaluation: Following are our features and the methods used to implement them: 1. Article Similarity: Text similarity determines how ‘close’ two pieces of text are, both in surface closeness [lexical similarity] and meaning [semantic similarity]. Here we have used lexical similarity to gather similar articles. The article similarity feature draws the similarity between scraped articles from multiple sources and outputs them to the dashboard. This functionality enables users to read about similar categories from multiple sources. Method: We perform cleaning and pre-processing of the scraped articles using the NLTK library. This involves filtering, tokenization, part of speech tagging, lemmatization, removing stop words and so on. We used the Doc2Vec package of the gensim library for model training, and the dataset used is the BBC News Dataset. Doc2vec is an extension of the word2vec approach towards documents. Its intention is to encode a (whole) document, consisting of lists of sentences, rather than lists of ungrouped sentences. The next step is to feed pre-processed articles into the model to generate vectors for each article. Now, the similarity between these articles was calculated using cosine similarity, and the most similar articles from each source were produced as a result. There are other measures like Euclidean distance, but here we have used cosine similarity as it measures the angle between two vectors in multi-dimensional space. It focuses on the orientation of documents whereas Euclidean distance focuses on the length of the documents. Hence, even if two documents are oriented closely, if their lengths vary a lot then Euclidean distance gives less similarity compared to cosine similarity. Evaluation: Since it is unsupervised learning, the model was tested by manually giving it multiple similar articles. 2. Summary Generation: Text summarization is the problem of creating a short, accurate, and fluent summary of a longer text document. People often get bored while reading long paragraphs of text. Summaries are always useful to get a gist of an article before diving deep into it. This feature generates a summary of similar scraped articles. A summary can be generated using an extractive or an abstractive approach. Here we have used an extractive approach. 
Method: Extractive text summarization involves the selection of phrases and sentences from the source document to make up the new summary. Techniques involve ranking the relevance of phrases in order to choose only those most relevant to the meaning of the source. During our first try, we generated summaries using the LSTM model but the results were not that good. In our final approach, we used the pre-trained BERT model for generating an extractive summary. This tool utilizes the HuggingFace Pytorch transformers library to run extractive summarizations. This works by first embedding the sentences, then running a clustering algorithm to find the sentences that are closest to the cluster’s centroids. Evaluation: To evaluate the generated summaries, we checked them manually and compared them with the original articles, and the approach performed really well. 3. Finding Sentiment: Sentiment analysis is the interpretation and classification of emotions (positive, negative and neutral) within text data using text analysis techniques. Sentiment analysis models detect polarity within a text (e.g. a positive or negative opinion), whether it’s a whole document, paragraph, sentence, or clause. News sources are often positively or negatively inclined towards the topic. Thus we provide functionality that predicts the sentiment of the article. We classify the given article into five categories: negative, slightly negative, neutral, slightly positive and positive. Method: We used the Kaggle movie review dataset for the training of machine learning models. EDA was performed on the dataset to remove data with excess length and to balance out the categories. The input articles are pre-processed and are converted into vectors. This vector model is then dumped to reuse it during the prediction of new articles. The vectors are given as input to the model along with the labels to train the model. Evaluation: We tried two models: Naïve Bayes and Random Forest. Naïve Bayes gave an accuracy of about 57% and that of the random forest was 68%. Thus we chose random forest as our classifier. 4. Topic Modeling: Topic modeling is an unsupervised machine learning technique that takes a set of documents as input, detects words and phrase patterns within them, and automatically clusters word groups and characteristics that best describe the set of input documents. For our project, we have used topic modeling to extract important words for a given news article. These extracted words give users an idea about the topic that the news article is talking about. We have used the Latent Dirichlet Allocation (LDA) model to extract words. LDA is an unsupervised machine learning model that takes documents as input and provides topics and important words describing each topic as output, in terms of probability with weights attached to each word. Method: We have used a subset of the BBC News Dataset, which contains 308 articles in different languages. We have filtered English articles using the ‘langdetect’ library. Then we tokenized the sentences and words of each article using the NLTK library, on which lemmatization was performed. Lemmatizing is the process of generating the root form of given words. We also removed stop words as these are words that are not important for model building. We tried bi-gram and tri-gram words to feed as input. After some experimenting, we chose tri-gram word input as it gave better accuracy. We performed fine-tuning by experimenting with different parameter values of the LDA model. 
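To make the topic modeling method described above a little more concrete, here is a minimal gensim sketch of the LDA step. It assumes the articles have already been cleaned and tokenized as described; the toy token lists, variable names and topic count are illustrative placeholders rather than the project's actual code.

from gensim import corpora
from gensim.models import LdaModel

# Toy stand-in for the tokenized, lemmatized articles produced by the NLTK pre-processing
tokenized_articles = [
    ["stock", "market", "profit", "share", "investor"],
    ["match", "tournament", "goal", "team", "coach"],
]

dictionary = corpora.Dictionary(tokenized_articles)                 # map each token to an integer id
corpus = [dictionary.doc2bow(doc) for doc in tokenized_articles]    # bag-of-words representation

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10, random_state=42)
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)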
Evaluation: As LDA is an unsupervised machine learning model, there are no defined evaluation metrics for it. Here we had to use human judgment to evaluate the model. For this, we printed the top 5 topics predicted by our LDA model for the given document and evaluated how well the model predicted the topic and related words. Classification: News article classification aims to classify the news articles into pre-defined categories. News article classification can be seen as text classification, which is an application of NLP. Document classification is a supervised machine learning problem. We have classified the news articles into five different categories: Business, Entertainment, Politics, Sports, Technology. We are scraping headlines from two renowned news websites, namely ‘The Guardian’ and ‘The New York Times’. Then we are predicting the category of each headline using our trained model. We have noticed that news websites usually do not give the category of trending news. So, with the help of our project, users can see the categories of the trending news and choose whether to read the headline based on their interests. Method: We have used the BBC News Dataset to train our model. The BBC News Dataset consists of 2,225 documents with corresponding categories labeled. The dataset contains five different categories: Business, Entertainment, Politics, Sports, Technology. First, we performed Exploratory Data Analysis to get an idea about the dataset. We found out the dataset is balanced as it contained approximately the same number of documents in each category. We also plotted the distribution of the average length of articles per category. By doing this, we found out that Politics and Technology news articles are longer than those of other categories. So, we filtered those two categories by retaining articles with a length of up to 1,000 words and discarding articles with more than 1,000 words. We used the matplotlib and Seaborn libraries to visualize the data. Before extracting features from the input dataset, we performed a few text cleaning tasks such as special character removal, removal of punctuation, lemmatization, and stop word removal. We performed all the text cleaning tasks using the NLTK library. Then we tokenized each cleaned document into words using the NLTK library. We converted tokenized words into Tf-Idf vectors as machine learning models only take numerical data as input. Tf-Idf gives a score to each term which represents the importance of that term in the document and the entire corpus. We used the Scikit-learn library to generate Tf-Idf vectors. For the classification task, we compared the performance of two machine learning models: Support Vector Machines and Random Forests. We performed hyperparameter tuning by defining parameter values for both models and running a randomized search. By this, we got to know the best performing model, which was SVM for our case. Then we performed Grid Search to tune the parameters more thoroughly by searching deep into the hyperparameter space. We performed these tasks using the Scikit-learn library. The final model predicts the category of the given news article with an accuracy of 95%. Evaluation: For the classification task, we used accuracy as the evaluation metric. The accuracy metric measures the ratio of correct predictions over the total number of predicted instances. For our case, SVM performed best with a testing accuracy of 94%. We have also visualized the Confusion Matrix for both models for model interpretation purposes.
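As a rough illustration of the classification pipeline described above (Tf-Idf features, an SVM classifier and a grid search), here is a scikit-learn sketch. It uses the 20 newsgroups corpus as a stand-in for the BBC News Dataset, and the parameter grid is an assumption for demonstration rather than the exact values used in the project.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Stand-in corpus with three categories
data = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"),
                          categories=["rec.sport.hockey", "sci.space", "talk.politics.misc"])
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target,
                                                    test_size=0.2, random_state=42)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("svm", LinearSVC()),
])

# Small, illustrative hyperparameter grid
param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "svm__C": [0.1, 1, 10],
}

search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)
print("Best parameters:", search.best_params_)
print("Test accuracy:", accuracy_score(y_test, search.predict(X_test)))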
https://medium.com/sfu-cspmp/one-stop-news-3dd8c4785785
['Tirth Patel']
2020-04-23 00:58:20.439000+00:00
['Big Data', 'Data Science', 'Machine Learning', 'NLP']
9 Feel-Good Reasons Why You Should Adopt a Senior Pet
Animal Advocacy 9 Feel-Good Reasons Why You Should Adopt a Senior Pet Share your love with an animal who really needs it. Photo by Tatiana Rodriguez on Unsplash Kittens and puppies are dazzling — they’re adorable, fun, and everybody should experience having one and watching them grow. The problem is because these littles are so cute and attention-grabbing, sometimes we ignore the fact older pets are in desperate need of homes too. More people adopt kittens and puppies than their senior counterparts, and the reasons that senior animals are in a shelter are almost always sad. They were surrendered because their family was incapable of caring for them, their humans got sick or died, or their family didn’t want them. I personally can’t stand to see those pictures of older dogs or cats in the shelter feeling sad and confused. You can see on their faces and in their eyes the rejection they feel. People have their reasons for returning their pets to the shelter, and some of those reasons are legitimate, but it still feels like a betrayal to the animal. Adopting a senior pet will get you all the good vibes. We all want to be loved and cared for, but no one is as deserving as a senior pet. They didn’t ask to be in this position — they thought their person would always be there to love and take care of them. Adopting a senior pet is a good deed, a mitzvah, and an act of kindness that will have positive repercussions in your life. It’s grateful for the things that you have and, in turn, helping an animal in need. You get to feel like a hero. Even if you go to a no-kill shelter, you may be a senior pet’s last chance at happiness. Adoption is an easy way to save or improve a life. You’ll feel good about yourself for giving a home to an older animal, and they’ll see you as a savior. They come already trained. Training a puppy or a kitten can be costly. Private training can cost anywhere from $45 to $125.00 per class, Obedience training is approximately $35 to $75 per day, and Boot Camp ranges from $120 and up depending on the program. You can save a ton of money just by expanding the acceptable age-range of a new dog or cat. Senior pets won’t take up too much of your time. They’ve already been trained, they’ll bond with you pretty quickly, and if you give them a treat or two, they’re deliriously happy. They have an understanding of how the world works. I’ve had senior, middle-aged, and young animals, and while kittens and puppies are cute and cuddly, they’re also very curious and haven’t developed a sense of danger. Puppies and kittens will blindly try anything, no matter how deadly it may be. You must be on the alert at all times so that your pet doesn’t put itself into a dangerous situation. A senior dog or cat knows the score and doesn’t need to figure everything out — they already know, and they’d rather take a nap than chew on an electric cord. You can teach them new tricks. The old saying you can’t teach an old dog new tricks isn’t true. You absolutely can, for you’re not starting from scratch but instead are building on a foundation of knowledge. Many dogs and cats love to learn new things, and they get excited to show off what they’ve learned. They’ve got a chill attitude. Senior pets have seen a lot in their lives, and they’ve learned to take things in their stride. Their moments of zoomies and frenzied behavior are behind them. Older animals are less likely to get over-excited, stressed-out, or agitated than their younger counterparts. 
Senior pets are content to take naps in the sun, hang out with their humans, and get some head scratches. You may not have to pay an adoption fee. Many shelters waive the adoption fee for senior pets, so you save money while doing a good deed. You may want to give the shelter or the rescue organization a donation anyway. If you’re of a certain age and want to adopt a senior pet, you probably won’t have to pay an adoption fee either. The flip side of age and adoption is if you’re older, some shelters or rescue organizations won’t let you adopt a kitten or puppy because they want their adopters to be around for the life of the animal. They’re grateful. No matter if the senior pet has been in the shelter for a month or an hour, they’re thrilled when someone gives them their forever home. They see you as a rescuer and as their savior, and they’re incredibly thankful for you. The adopted senior pet will give you unconditional love. Most pets will love you no matter what, but senior pets sense that their options are limited. They see the cute puppies and adorable kittens being snapped up by loving families, and it’s almost more than they can hope that a kind person will choose them.
https://medium.com/creatures/9-feel-good-reasons-why-you-should-adopt-a-senior-pet-87cc002883fa
['Christine Schoenwald']
2020-12-20 17:52:47.465000+00:00
['Pet Adoption', 'Mental Health', 'Cats', 'Life', 'Education']
The Critical Skill Organizations Need, But Many Are Overlooking
The Critical Skill Organizations Need, But Many Are Overlooking And three ways to develop it Photo by Mike Kononov on Unsplash The next industrial revolution is upon us as robotics, automation, and artificial intelligence disrupt every industry of the economy. We’re already seeing glimpses of the future—wearable technology, autonomous vehicles, drone delivery, connected supply chain, and smart farming. With these tools, companies are transforming into digital businesses powered by ever-growing amounts of data. COVID-19 has accelerated these digital transformation efforts as companies have had to adopt new tools and processes to stay relevant while also protecting their employees and customers. Even before this acceleration, however, many companies were struggling to fully achieve their digital transformation goals and of course many face even more challenges as their teams work remotely. Why? While there are many reasons to explain this gap, one of the biggest reasons is clear: digital transformation is not about the technology; rather, it’s all about the people. After all, what good are these new technologies or customer experiences if the organization doesn’t have the people to implement and leverage them to their full potential? That’s why one of the biggest organizational skill gaps and therefore a key barrier to any digital transformation success is data literacy. “Data literacy” as a concept means understanding the fundamental aspects of data — from creation to transformation to application in a business context — and being able to communicate with others using data. “Learning to ‘speak data’ is like learning any language. It starts with understanding the basic terms and describing key concepts.” — Gartner, 2018 Unfortunately, most businesses still believe that data literacy is only a necessary skillset for data and analytics roles. But poor data literacy across other roles in business and IT will manifest into miscommunications, lost productivity, and unrealized business value. If people all across the organization can’t speak data, it’s going to make it difficult to successfully implement any technology solution or drive business decisions through insights and analytics. In fact, a recent report on the Human Impact of Data Literacy shows that data-literate organizations are more likely to convert leads into customers, more likely to retain customers, and more likely to grow profits. Calculating the average return on investment in data literacy for the companies in the study, this equates to an increase of approximately $500 million in enterprise value. So how can organizations create a more data-literate workforce and ultimately achieve their digital transformation goals? 1. Infuse data into daily work Photo by Campaign Creators on Unsplash One of the best ways that people can start learning data right away is by using it as often as possible. Promoting a data-driven culture starts at the top. Leaders should talk about the company’s goals by using metrics and dashboards to show the actual performance by the facts and figures. They should also reinforce these in their communications, whether written or in meetings and presentations. “Speaking data during everyday interactions, from board meetings to team calls, begins to set the tone for the new mode of communication,” said Valerie Logan, Research Director, Gartner. It’s important that senior leaders set this tone, because they communicate their organization’s priorities through their behavior. 
Only after senior leaders are aligned and model how to use data in their daily work can the data-driven culture permeate all other levels of the organization. In order to go beyond communication, the next way to use data every day is for leaders to support the growth of their teams and individual employees through data-driven goal setting. While goal setting frameworks always sound simple enough, most companies still fail to successfully drive or measure the right outcomes. The biggest reason? Leaders simply try to do too many things, too quickly. A simple, yet powerful formula to overcome this problem is by using OKRs, or Objectives and Key Results. OKRs are a framework to set quarterly performance targets (objectives) and define the few important metrics that will demonstrate success toward achieving these targets (key results). When done well, one quarterly objective with three key results can help ground a team’s or an employee’s day-to-day work in data, which produces actionable insight and instant feedback. For example, perhaps a company has the objective to “grow sales by 2% this quarter”. The key results of this could look different by role, but for a sales rep might be something like 1) gain 5 new customers 2) grow sales in one specific product by 5% and 3) create a targeted cross-selling plan as a sales teams. With these key results in mind, the sales rep can come up with daily tasks to accomplish these goals and track progress toward achieving them by reviewing the data on a regular basis. In practice, the challenge for organizations is how they hold each other accountable to these goals and incorporate data in ways that are motivating and fun, rather than threatening and stress-inducing. Using data for the purposes of growing employees makes learning data more personal and rewarding while at the same time benefits the organization as a whole as people start to shape their work every day using the power of data. 2. Offer and incentivize data training courses In order to reinforce a data-driven culture, it’s important that leaders emphasize the value of data skills by offering free and easily accessible learning opportunities so employees can increase their data literacy. Major tech organizations like Salesforce and Amazon are already spending hundreds of millions of dollars to provide technical training programs as well as e-learning certifications to their employees. Beth Galetti, Amazon’s senior vice president of worldwide HR, recently commented, “The most consistent thing we see that’s changing is the need for some level of technical skills in any job.” According to the Human Impact on Data Literacy report, “Organizations need to recognize that the exponential growth in data usage has accelerated far beyond the skills and confidence of the employees required to use it. Only 25% of employees felt fully prepared to use data effectively when entering their current role.” As technology continues to shift the way people work, data literacy will become more and more important across all areas of the business. Since this shift is a continual journey, employee training should integrate learning into everyday work and reinforce a growth mindset to evolve and adapt as technology changes. For this reason, training should be quick, relevant to the employee’s task at hand, and fun to complete. Most important to driving adoption and reinforcing the data-driven culture, leaders must offer incentives and reward people who increase their data literacy. 
These rewards further highlight data as a priority and keep people motivated to continue with their personal growth. People are any organization’s biggest asset, so ensuring that the people have the right skills is critical for any business to be successful. For this reason, organizations must provide both training and incentives that promote data literacy, as these are sound investments for staying competitive in a quickly changing business landscape. 3. Drive change with data as the guide Photo by David Travis on Unsplash Due to rapidly-changing technologies, it’s also critical for all roles within an organization to develop “digital dexterity,” or the ability to make business decisions and take action based on data. Large, well-established businesses must now think and act like startups. This means breaking down traditional silos across the organization, democratizing data for wider use and application, creating cross-functional teams to solve new business challenges, and running data-driven experiments to test new solutions. “People are finally realizing that the ability to analyze information is no longer just the role of the IT or data scientists.” — Jordan Morrow, Global Head of Data Literacy at Qlik Leaders should encourage hackathons or proof-of-concept workshops to promote curiosity, agility, collaboration, and innovation — all characteristics of a data-driven organization. These kinds of exercises allow people to look to data for answers, but also become open to new ideas and solutions. And these concepts aren’t just for customer-facing products. Leaders should also promote these tools for improving the internal employee experience. The only condition to implementing permanent changes? They must be supported by data. This is how leaders can align the culture to the expected outcomes of their digital transformation initiatives and become a truly data-driven organization.
https://medium.com/slalom-business/the-critical-skill-organizations-need-but-many-are-overlooking-ca0dfac0c1c3
['Steven Hopper']
2020-11-17 15:32:22.015000+00:00
['Leadership', 'Motivation', 'Work', 'Business', 'Data']
Principal Component Analysis Deciphered
In machine learning, we often have to deal with high-dimensional data. But not all of the features that we use in our model may in fact be related to the response variable. Adding many features in the hope that our model would learn better and give accurate results often leads to a problem which we generally refer to as ‘the Curse of Dimensionality’, which states: As the number of features or dimensions grows, the amount of data we need to generalize accurately grows exponentially. To overcome this problem we need to identify the most important features in our dataset. One such method to identify the principal features of the dataset, thereby reducing the dimensionality of the dataset, is Principal Component Analysis (PCA). In the video above, consider how a larger picture of the dog is repeatedly shredded and re-attached to form four smaller pictures of the same dog. Intuitively, selecting the right features would result in a lower-dimensional form without losing much information. PCA emphasizes this variation and brings out the dominant patterns in a dataset. What exactly is PCA? 🤔 PCA takes in a large set of variables and uses the dependencies between these variables to represent them in a more manageable, lower-dimensional form, without losing too much information. PCA serves as a good tool for data exploration and is often done as part of exploratory data analysis (EDA). Suppose we have n observations and d variables in our dataset and we wish to study the relationship between different variables as part of EDA. For a larger value of d, let’s say 60, we get d(d-1)/2 two-dimensional scatter plots. Such a huge number of plots (1770, in this case) certainly makes it difficult to identify the relationships between features. Further, these 2D plots contain only a fraction of the total information present in the dataset. This is when PCA comes into the picture. PCA is a technique for feature extraction — so it combines the input variables in a specific way, then gets rid of the “least important” variables while still retaining the most valuable parts (or principal components) of all of the variables! Principal Components you say? A principal component is a normalized linear combination of the original features in the dataset. Suppose we start with d-dimensional vectors and want to summarize them by projecting down into a k-dimensional subspace such that the axes of the new subspace point in the directions of the highest variance of the data. Our final result would be the projection of the original vectors onto k directions, termed Principal Components (PC). Fig. 1: Plot between Ad Spending (in 1000s) and Population (in 10,000s) taken from a subset of the advertising data (ISLR) for 100 cities. The blue dot denotes the mean (μ). As evident from the plot (Fig. 1), the first principal component (the green solid line) direction has the maximum data variance, and it also defines the line that is closest to all n of the observations. The first principal component captures most of the information contained in the features, such that the larger the variability captured by the first PC, the more information is captured by the component. Fig. 2: First and second principal components in a subset of the advertising data (ISLR). The direction of the second principal component is given by the blue dotted line (Fig. 2). 
It is also a linear combination of the original features which captures the remaining variance in the dataset such that the correlation between first and second principal component is zero, and thus their directions are orthogonal or perpendicular to each other. Similarly, for d features in our dataset, we can construct up to d distinct principal components. But how many principal components do we need? Choosing the right number of principal components is essential to ensure that PCA is effective. A dataset containing n observations and d features accounts for min(n − 1, d) distinct principal components. But we are only interested in the first few important components that are enough to explain a good amount of variation in the dataset. One way to determine this is to look at the cumulative explained variance ratio which is a function of the number of components. A scree plot depicts this ratio explained by each of the principal components. The elbows of the plot signify the optimal number of principal components. Fig. 3: Cumulative explained variance ratio after PCA on LFW face recognition dataset. The curve shown in Fig. 3 quantifies how much of the total, the 200-dimensional variance is contained within the first n components. For example, we see that with the faces the first 40 components contain more than 80% of the variance, while we need around 150 components to describe close to 100% of the variance. Where would you use PCA? PCA has been widely used in many domains, such as computer vision and image compression. It is mainly used for the following applications: Data visualization: PCA allows you to visualize high dimensional objects into a lower dimension. PCA allows you to visualize high dimensional objects into a lower dimension. Partial least squares: PCA features can be used as the basis for a linear model in partial least squares. PCA features can be used as the basis for a linear model in partial least squares. Dimensionality reduction: Reduces features dimensionality, losing only a small amount of information. Reduces features dimensionality, losing only a small amount of information. Outlier detection (improving data quality): Projects a set of variables in fewer dimensions and highlights extraneous values. How is PCA formulated though? Given a matrix X, which corresponds to n observations with d features, and an input k, the main objective of PCA is to decompose matrix X into two smaller matrices, Z and W, such that X= ZW, where Z has dimensions n*k and W has dimensions k*d (see Fig. 4). Each row of Z is a factor loading. Each row of W is called a principal component. Fig. 4: PCA decomposes matrix X into two smaller matrix Z and W. In PCA, we minimize the squared error of the following objective function: There are three common approaches to solve PCA, which we describe below. Singular Value Decomposition (SVD) This approach first uses the Singular Value Decomposition (SVD) algorithm to find an orthogonal W. Then it uses the orthogonal Wto compute Z as follows. 2. Alternating Minimization This is an iterative approach that alternates between: Fixing Z, and finding optimal values for W Fixing W, and finding optimal values for Z 3. Stochastic Gradient Descent This is an iterative approach, for when the matrix X is very big. 
On each iteration, it picks a random example i and features j and updates W and Z with a small gradient step.

PCA in action: Feature Reduction

We already know that, by definition, PCA eliminates the less important features and helps produce visual representations of those features. Let’s see how this really applies to a feature reduction problem in practice. For this example, we will use the Iris dataset. The data contains four attributes: Sepal length, Sepal width, Petal length and Petal width, across three species, namely Setosa, Versicolor and Virginica. After applying PCA, 95% of the variance is captured by 2 principal components.

PCA in action: Feature Extraction

In an earlier example, we saw how PCA can be a useful tool for visualization and feature reduction. In this example, we will explore PCA as a feature extraction technique. For this, we will use the LFW facial recognition dataset. Images contain a large amount of information, and processing all features extracted from such images often requires a huge amount of computational resources. We address this issue by identifying a combination of the most significant features that accurately describe the dataset.

Download and view data

We will load faces data from sklearn.datasets.fetch_lfw_people. The dataset consists of 1867 images each having a 62x47 resolution.

import numpy as np
import matplotlib.pyplot as plt
import warnings
from sklearn.datasets import fetch_lfw_people

# Download dataset
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    faces = fetch_lfw_people(min_faces_per_person=40)

# plot images
fig, axes = plt.subplots(3, 10, figsize=(12, 4),
                         subplot_kw={'xticks': [], 'yticks': []},
                         gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
    ax.imshow(faces.data[i].reshape(62, 47), cmap='bone')

Applying PCA on the dataset

To produce a quick demo, we simply use scikit-learn’s PCA module to perform dimension reduction on the face dataset and select 150 components (eigenfaces) in order to maximize the variance of the dataset.

from sklearn.decomposition import PCA

faces_pca = PCA(n_components=150, svd_solver='randomized').fit(faces.data)

# Plot principal components
fig, axes = plt.subplots(3, 10, figsize=(12, 4),
                         subplot_kw={'xticks': [], 'yticks': []},
                         gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
    ax.imshow(faces_pca.components_[i].reshape(62, 47), cmap='bone')

Now we will use the principal components to form a projected image of faces and compare it with the original dataset.

components = faces_pca.transform(faces.data)
projected = faces_pca.inverse_transform(components)

# Plot the results
fig, ax = plt.subplots(2, 15, figsize=(15, 2.5),
                       subplot_kw={'xticks': [], 'yticks': []},
                       gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i in range(15):
    ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r')
    ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r')
ax[0, 0].set_ylabel('complete resolution')
ax[1, 0].set_ylabel('150-D projections');

As we can see, the principal features extracted using PCA capture most of the variance in the dataset and thus, the projections formed by these 150 principal components are quite close to the images in the original dataset.

Things to Remember

Here are some important points one should remember while doing PCA: Before doing PCA, data should first be normalized. This is important as different variables in the dataset may be measured in different units. 
PCA on an un-normalized dataset results in higher eigenvalues for the variable having maximum variance corresponding to the eigenvector of its first PC. PCA can be applied only on numerical data. Thus, if the data has categorical variables too they must be converted to numerical values. Such variables can be represented using a 1-of-N coding scheme without imposing an artificial ordering. However, a PCA is NOT to be conducted when most of the independent features are categorical. CATPCA can instead be used to convert categories into numeric values through optimal scaling. What did we learn? So we started with the curse of dimensionality and discussed how principal component analysis is effective in dimensionality reduction, data visualization in EDA and feature extraction. If implemented properly, it can be effective in a wide variety of disciplines. But PCA also has limitations that must be considered like patterns that are highly correlated may be unresolved because all principal components are uncorrelated, the structure of the data must be linear, and PCA tends to be influenced by outliers in the data. Other variants of PCA can be explored to tackle these limitations, but let’s leave it out for a later time. 🤓
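To tie the SVD formulation and the normalization advice together, here is a minimal NumPy sketch of PCA. The function name and the random demo data are purely illustrative; it standardizes the data, takes the SVD, and returns the projected data, the top-k components and the explained variance ratio.

import numpy as np

def pca_svd(X, k):
    # Standardize each feature first: PCA is sensitive to the scale of the variables
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    # SVD of the standardized data matrix: X = U * diag(S) * Vt
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:k]                                    # top-k principal components (k x d)
    Z = X @ W.T                                   # projected data / factor loadings (n x k)
    explained_ratio = (S ** 2) / np.sum(S ** 2)   # fraction of variance explained per component
    return Z, W, explained_ratio[:k]

X = np.random.default_rng(0).normal(size=(100, 5))  # toy data: 100 observations, 5 features
Z, W, ratio = pca_svd(X, k=2)
print(Z.shape, W.shape, ratio)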
https://medium.com/sfu-cspmp/principal-component-analysis-deciphered-79968b47d46c
['Vaishnavi Malhotra']
2019-03-14 23:10:48.521000+00:00
['Data Science', 'Machine Learning', 'Pca', 'Big Data', 'Principal Component']
Installing Apache Pig 0.17.0 on Windows 10
This article is a part of a series that we are publishing on TowardsDataScience.com that aims to illustrate how to install Big Data technologies on the Windows operating system. Previously published: In this article, we will provide a step-by-step guide to install Apache Pig 0.17.0 on Windows 10. 1. Prerequisites 1.1. Hadoop Cluster Installation Apache Pig is a platform built on top of Hadoop. You can refer to our previously published article to install a Hadoop single node cluster on Windows 10. Note that the latest Apache Pig version, 0.17.0, supports Hadoop 2.x versions and is still facing some compatibility issues with Hadoop 3.x. In this article, we will only illustrate the installation since we are working with Hadoop 3.2.1. 1.2. 7zip 7zip is needed to extract the .tar.gz archives we will be downloading in this guide. 2. Downloading Apache Pig To download Apache Pig, you should go to the following link: Figure 1 — Apache Pig releases directory If you are looking for the latest version, navigate to the “latest” directory, then download the pig-x.xx.x.tar.gz file. Figure 2 — Download Apache Pig binaries After the file is downloaded, we should extract it twice using 7zip (the first time we extract the .tar.gz file, the second time we extract the .tar file). We will extract the Pig folder into the “E:\hadoop-env” directory as used in the previous articles. 3. Setting Environment Variables After extracting the Pig archive, we should go to Control Panel > System and Security > System. Then click on “Advanced system settings”. Figure 3 — Advanced system settings In the advanced system settings dialog, click on the “Environment variables” button. Figure 4 — Opening environment variables editor Now we should add the following user variables: Figure 5 — Adding user variables PIG_HOME: “E:\hadoop-env\pig-0.17.0” Figure 6 — Adding PIG_HOME variable Now, we should edit the Path user variable to add the following paths: %PIG_HOME%\bin Figure 7 — Editing Path variable 4. Starting Apache Pig After setting the environment variables, let's try to run Apache Pig. Note: Hadoop services must be running. Open a command prompt as administrator, and execute the following command: pig -version You will receive the following exception: 'E:\hadoop-env\hadoop-3.2.1\bin\hadoop-config.cmd' is not recognized as an internal or external command, operable program or batch file. '-Xmx1000M' is not recognized as an internal or external command, operable program or batch file. Figure 8 — Pig exception To fix this error, we should edit the pig.cmd file located in the “pig-0.17.0\bin” directory by changing the HADOOP_BIN_PATH value from “%HADOOP_HOME%\bin” to “%HADOOP_HOME%\libexec”. Now, let's try to run the “pig -version” command again: Figure 9 — Pig installation validated The simplest way to write PigLatin statements is using the Grunt shell, which is an interactive tool where we write a statement and get the desired output. There are two modes to invoke the Grunt Shell: Local: All scripts are executed on a single machine without requiring Hadoop. (command: pig -x local) MapReduce: Scripts are executed on a Hadoop cluster (command: pig -x MapReduce) Since we have installed Apache Hadoop 3.2.1, which is not fully compatible with Pig 0.17.0, we will try to run Pig using local mode. Figure 10 — Starting Grunt Shell in local mode 5. References
https://towardsdatascience.com/installing-apache-pig-0-17-0-on-windows-10-7b19ce61900d
['Hadi Fadlallah']
2020-05-05 20:43:43.244000+00:00
['Hadoop', 'Hadoop 3', 'Apache Pig', 'Big Data']
Genome Assembly — The Holy Grail of Genome Analysis
Genome Assembly — The Holy Grail of Genome Analysis Assembling the 2019 novel coronavirus genome The 2019 novel coronavirus or coronavirus disease (COVID-19) outbreak has threatened the entire world at present. Scientists are working day and night to understand the origin of COVID-19. You may have heard the news recently that the complete genome of COVID-19 has been published. How did scientists figure out the complete genome of COVID-19? In this article, I will explain how we can do this. Genome A genome is considered as all the genetic material, including all the genes of an organism. The genome contains all the information of an organism that is required to build and maintain it. Sequencing How can we read the information present in the genome? This is where sequencing comes into action. Assuming you have read my previous article on DNA analysis, you know that sequencing is used to determine the sequence of individual genes, full chromosomes or entire genomes of an organism. Fig 1. A PacBio sequencing machine. PacBio is a third-generation sequencing technology which produces long reads. Image by KENNETH RODRIGUES from Pixabay (CC0) Special machines, known as sequencing machines are used to extract short random sequences from the genome we are interested in. Current sequencing technologies cannot read the whole genome at once. It reads small pieces of mean length between 50–300 bases (next-generation sequencing/short reads) or 10,000-20,000 bases (third-generation sequencing/ long reads), depending on the technology used. These short pieces are called reads. If you want to know more details about how viral genomes are sequenced from clinical samples, you can read the following articles. Genome Assembly Once we have small pieces of the genome, we have to combine (assemble) them together based on their overlap information and build the complete genome. This process is called assembly. Assembly is like solving a jigsaw puzzle. Special software tools called assemblers are used to assemble these reads according to how they overlap, in order to generate continuous strings called contigs. These contigs can be the whole genome itself, or parts of the genome (as shown in Figure 2). Fig 2. Sequencing and assembly Assemblers are divided into two categories as, De novo assemblers: assemble without the use of reference genomes (E.g.: SPAdes, SGA, MEGAHIT, Velvet, Canu and Flye). Reference guided assemblers: assemble by mapping sequences to reference genomes Two Main Types of Assemblers Two main types of assemblers can be found across bioinformatics literature. The first type is the overlap-layout-consenses (OLC) method. In OLC method, first, we determine all the overlaps between the reads. Then we layout all the reads and overlaps in the form of a graph. Finally, we identify the consensus sequence. SGA is a popular tool based on the OLC method. The second type of assembler is the de Bruijn graph (DBG) method [2]. Rather than using the complete reads as they are, the DBG method breaks reads into shorter fragments called k-mers (with length k) and then build a de Bruijn graph using all the k-mers. Finally, the genome sequences are inferred based on the de Bruijn graph. SPAdes is a popular assembler which is based on the DBG method. What can go wrong in Genome Assembly? Genomes contain patterns of nucleic acids that occur many times across the genome. These structures are called repeats. These repeats can complicate the assembly process and result in ambiguities. 
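To make the de Bruijn graph idea described above a little more concrete before we look at what can go wrong, here is a tiny Python sketch. The reads are made-up toy strings rather than real sequencing data: each k-mer contributes an edge from its (k-1)-length prefix to its (k-1)-length suffix, and an assembler then looks for paths through the resulting graph.

from collections import defaultdict

def build_de_bruijn(reads, k):
    # Every k-mer adds an edge: prefix (k-1 bases) -> suffix (k-1 bases)
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]  # toy reads
for node, neighbours in build_de_bruijn(reads, k=4).items():
    print(node, "->", neighbours)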
We cannot guarantee that the sequencing machine can produce reads covering the entire genome. The sequencing machine may miss some parts of the genome and there won’t be reads covering that region. This will affect the assembly process and those missed regions will not be present in the final assembly. Genome assemblers should address these challenges and try to minimise the errors caused during assembly. How to Evaluate Assemblies? Evaluation of assemblies is very important as we have to decide whether the resulting assembly meets the standards. One of the well-known and most commonly used assembly evaluation tools is QUAST. Listed below are some criteria used to evaluate assemblies. N50: minimum contig length that is required to cover 50% of the total length of the assembly. minimum contig length that is required to cover 50% of the total length of the assembly. L50: number of contigs that are longer than N50 number of contigs that are longer than N50 NG50: minimum contig length that is required to cover 50% of the length of the reference genome minimum contig length that is required to cover 50% of the length of the reference genome LG50: number of contigs that are longer than NG50 number of contigs that are longer than NG50 NA50: minimum length of aligned blocks that are required to cover 50% of the total length of the assembly minimum length of aligned blocks that are required to cover 50% of the total length of the assembly LA50: number of contigs that are longer than NA50 number of contigs that are longer than NA50 Genome fraction (%): percentage of bases that align to the reference genome Getting Hands Dirty Let’s get started with the experiments. I will be using the assembler SPAdes to assemble reads obtained from sequencing patient samples. SPAdes makes use of next-generation sequencing reads. You can download QUAST freely as well. You can get the code and binaries from the relevant homepages (which I have provided as links) and run these tools. Type in the following commands and verify whether the tools are working correctly. <your_path_to>/SPAdes-3.13.1/bin/spades.py -h <your_path_to>/quast-5.0.2/quast.py -h Download the data I assume you know how to download data from the National Center for Biotechnology Information (NBCI). If not, you can refer to this link. The reads for our experiments can be downloaded from NCBI with NCBI accession number SRX7636886. You can download the run SRR10971381 which contains reads obtained from an Illumina MiniSeq run. Make sure to download the data in FASTQ format. The downloaded file can be found as sra_data.fastq.gz . You can extract the FASTQ file using gunzip. After extracting, you can run the following bash command to count the number of reads in our dataset. You will see there are 56,565,928 reads. grep '^@' sra_data.fastq | wc -l You can download the publicly available COVID-19 complete genome[3] from NCBI with GenBank accession number MN908947. You will see a file in FASTA format. This will be our reference genome. Note that we have renamed it to MN908947.fasta . Assemble Let’s assemble the reads of COVID-19. Run the following command to assemble the reads using SPAdes. You can provide the compressed .gz file to SPAdes directly. <your_path_to>/SPAdes-3.13.1/bin/spades.py --12 sra_data.fastq.gz -o Output -t 8 Here we have used the general SPAdes assembler as a demonstration to this article. 
However, since the reads dataset consists of RNA-Seq data (read more about RNA in my previous article), it is better to use the --rna option in SPAdes. In the Output folder, you can see a file named contigs.fasta which contains our final assembled contigs.

Evaluating the Assembly Results

Run QUAST on the assembly using the following command.

<your_path_to>/quast-5.0.2/quast.py Output/contigs.fasta -l SPAdes_assembly -r MN908947.fasta -o quastResult

Viewing the Evaluation Result

Once QUAST has finished, you can go into the quastResult folder and view the evaluation results. You can view the QUAST report by opening the file report.html in your web browser. You will see a report similar to the one shown in Figure 3. You can click on "Extended report" for more information such as NG50 and LG50.
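To make the N50 and L50 definitions above more concrete, here is a short Python sketch (my own addition, not from the original article) that computes them from a list of contig lengths. QUAST calculates these and many more metrics for you; this is only meant to illustrate the definitions.

def n50_and_l50(contig_lengths):
    # Walk contigs from longest to shortest until the running total
    # covers at least half of the total assembly length.
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for count, length in enumerate(lengths, start=1):
        running += length
        if running >= half_total:
            return length, count  # (N50, L50)

# Example with five hypothetical contig lengths
print(n50_and_l50([100, 200, 300, 400, 500]))  # -> (400, 2)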
https://towardsdatascience.com/genome-assembly-the-holy-grail-of-genome-analysis-fae8fc9ef09c
['Vijini Mallawaarachchi']
2020-03-04 06:03:08.726000+00:00
['Dna', 'Biology', 'Data Science', 'Science', 'Bioinformatics']
What Is Office Housework and Why Should You Stop Volunteering for It?
How Does It Affect Your Career? Office housework often consists of non-promotable tasks that consume time you could be spending on promotable work. Time is essential and limited. If you have been raising your hand for such tasks often, stop. It might be holding you back from doing “glamour” work. My experience with office housework I used to volunteer for organizing pretty much every team activity. I once organized an event for 80 people in the office. My core job was data science work and I was using my spare time doing office housework. Office housework did give me access and visibility to senior leaders, but it wasn’t the type of visibility that helped my career. Since learning about office housework (and my “aha” moment), I have been mindful of things I volunteer for. Just because I am good at something doesn’t mean I need to do it. Hosting a conference with 900 attendees. The picture above is from a conference that I helped organize (office housework) and host (glamour work). Organizing the conference was months of effort. Did I need to do that? No. Did this work end up in my promo doc? No. However, hosting the conference as an MC did make it to my performance review. I helped organize it because I am good at it and wanted to, but in hindsight, I took away my time from other promotable work that would have ended up on my promo doc. Be mindful of how you use your time.
https://medium.com/better-programming/what-is-office-housework-and-why-you-should-stop-volunteering-for-it-f750e8456b64
['Sundas Khalid']
2020-08-07 14:42:44.238000+00:00
['Work', 'Women In Tech', 'Essentialism', 'Startup', 'Data Science']
Is the Orville really faster than the USS Enterprise?
In The Orville episode "Pria," Capt. Ed Mercer (Seth MacFarlane) tells Pria Lavesque (Charlize Theron) his ship is powered by a Dysonium quantum drive capable of speeds in excess of 10 lightyears (ly) per hour. This speed scale is far more intuitive than Star Trek's warp factor, but how exactly does it compare? Could the Orville beat the Enterprise?

What does the Orville's speed mean?

Captain Mercer mentions that his ship can go in excess of 10 ly an hour. We will assume this means the ship can travel faster for limited periods should an emergency arise. There are 8,760 hours in a year. If the Orville travelled at that speed for a year, it would travel a total distance of D = 10 × 8,760 = 87,600 ly. A lightyear is the distance a light beam will travel in one year. This means that the Orville can travel at 87,600 times the speed of light, or 87,600c.

Artist impression of a supermassive black hole at the centre of a galaxy by ESO/L. Calçada (ESO website)

To put this in context, our nearest star system, Alpha Centauri, is just 4.2 ly away. The Orville could travel that distance in a little over 25 minutes. If the crew gets hard-up for money, they can run an inter-solar-system pizza delivery service. Getting to the Galactic Center, the rotational center of the Milky Way about 27,000 ly away, will take a little over 112 days. Assuming the ship's engines can run constantly for extended periods of time, the Orville crew could get there in less than four months.

Enterprise NX-01

The Enterprise NX-01

The Enterprise (NX-01), built in 2151, was Earth's first warp 5 capable ship. It had a maximum speed of warp 5.06 but could reach warp 5.2 for short periods in the case of an emergency. We can convert this speed relative to the speed of light using the formula v = w³c, where w is the warp factor. Plugging the NX-01's maximum speed into the warp equation, we get a maximum speed of 129.6c. This is nowhere close to the Orville's top speed. Maybe the other ships to bear the name Enterprise will fare better.

Travel to Alpha Centauri will take this Enterprise close to 12 days. In a race against the Orville, the NX-01 loses easily. While it will take months for the Orville to make the trip to the center of the galaxy, it will take the Enterprise over two centuries, about 208 years to be exact. The crew of the Enterprise may begin the journey, but they won't live to see the crew of the Orville gloat over their victory.

USS Enterprise NCC-1701

The USS Enterprise NCC-1701

The Enterprise captained by James T. Kirk (William Shatner) is a much faster ship than the NX-01, coming in at warp 8.0. This means the NCC-1701 attains a speed that is 512 times the speed of light, or 512c. Again, this is nowhere near the Orville's speed, so it too will easily be beaten in a race. A race to Alpha Centauri will take this Enterprise about three days. A trip to the Galactic Center goes a little better than in her predecessor, but Kirk will still take several decades to reach the finish line, about 53 years to be exact. Again, the crew of the Orville will be waiting for some time before the Enterprise reaches the finish line, and they will probably be laughing while they are at it.

USS Enterprise NCC-1701D

USS Enterprise NCC-1701D

This is where things get interesting. For Star Trek: The Next Generation and subsequent series, Michael Okuda modified the previous formula to incorporate a few important differences.
For warp factors 1 through 9, the formula to calculate a ship's speed is v = w^(10/3)c, but for warp speeds between 9 and 10, the speed increases exponentially. This has come to be known as the Okuda scale, as this section of the graph was hand drawn. This means there is no known formula for the interval between 9 and 10, but fortunately Wolfram Alpha can extrapolate a value from the curve printed in the Star Trek: The Next Generation Technical Manual.

The USS Enterprise NCC-1701D has a maximum speed of warp 9.2 but can travel at warp 9.6 in emergency situations for up to 12 hours. Plugging those specs into Wolfram Alpha yields speeds of 1,649c and 1,909c respectively. Though more advanced than the previous ships to carry the name Enterprise, this ship is still much slower than the Orville.

Capt. Jean-Luc Picard (Patrick Stewart) manages to reach Alpha Centauri in a little under a day (22.3 hours), but the Orville crew has already had a full day's rest and relaxation as they wait for the Enterprise crew to cross the finish line. Running the engines at warp 9.6 will only shave three hours off the Enterprise's trip, which means the Orville still wins. If anything, the Enterprise-D fares better than previous ships in a race to the Galactic Center, but this is still nothing to boast about. A trip to the center of the Milky Way takes a little over 16 years. The crew of the Enterprise will make it in their lifetimes, but any children born on the Orville will already be in high school and probably laughing at the relic they learned about in history class as it crosses the finish line.

USS Enterprise NCC-1701E

USS Enterprise NCC-1701E

If the Enterprise-D can't beat the Orville, surely the Enterprise-E will. Right? It is, after all, a more advanced ship that can travel at warp 9.995. As this section of the graph increases asymptotically, it is several times faster than the Enterprise-D, clocking in at 14,507c. It seems that despite building ever more advanced ships, the Federation can't produce something that will beat the Orville.

A race to Alpha Centauri isn't as big a loss compared to previous Enterprises, with the Enterprise-E arriving in a little over two-and-a-half hours. They still lose the race, but it's not bad compared to the other ships. Things are also better in a race to the Galactic Center. Obviously Picard and his crew lose to the Orville, but they get there in a little over 1 year and 10 months. Maybe the Orville can have a little fun while they are at it. They can make about three round trips to the Galactic Center before the Enterprise-E makes it across the finish line.

Who is Faster?

Comparing ship speeds

Definitely the Orville. Maybe the next generation of Federation ships will beat what the Union has to offer, but I won't be holding my breath anytime soon. To be fair, Star Trek has not always strictly adhered to transit times in the TV shows or movies, but we can safely say that the Orville is truly the ship you want to be on: it will get you where you want to go with lots of time to spare. Check out the calculations behind this article in "Physics of Orville vs. Enterprise".
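As a quick recap of the arithmetic behind all of these comparisons (my own summary, not part of the original article), travel time is simply distance divided by speed. On the classic warp scale:

t = d / v = d / (w³c)

For Kirk's Enterprise (w = 8) racing to Alpha Centauri (d = 4.2 ly):

t = 4.2 ly / (8³ × c) = 4.2 / 512 years ≈ 0.0082 years ≈ 3 days

The Orville, cruising at 10 ly per hour, covers the same 4.2 ly in 4.2 / 10 = 0.42 hours, or roughly 25 minutes, which is the gap described above.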
https://medium.com/science-vs-hollywood/is-the-orville-really-faster-than-the-uss-enterprise-7cd1ed717241
['David Latchman']
2019-04-11 02:57:47.847000+00:00
['Star Trek', 'The Orville', 'Science Fiction', 'Science', 'Television']
How to Build a Welcome Email for Crypto-Oriented Services?
Over the last couple of years, the popularity of email marketing has decreased given the growing number of readily-available marketing channels. Despite this aspect, emails remain highly-relevant, especially when it comes down to communicating with customers, following up on orders, or encouraging them to purchase your crypto-based service or product. Some of the best marketing practices dictate that businesses should engage in contact with newly-registered users. After all, there’s no profitability in getting sign-ups to your service, but rather in encouraging users to commit and purchase a cryptocurrency service, subscription or product. Getting these conversions generally entails that users are given easy access to all required information for decision-making. In general, a purchase is only made once potential customers fully understand how products/services work, and once a relationship of trust has been attained between the involved parties. Based on these aspects, this article will highlight some of the main factors worth keeping in mind when building automated welcome emails for crypto-oriented services. Relevant email marketing rules worth considering prior to making automated emails 1. Choose a proper mailing service At this time, the market is filled with companies that will happily provide email-sending services. Generally, choosing a provider should only be done after careful research of all offers, meant to determine the best choice for your current demands. For most email services, you simply need to import your list of contacts, which can then be used for segmentation purposes. Segmentation acts as a targeting instrument, meant to ensure that your emails are being sent out to users who are relevant for your current campaigns. This helps with classing newly-registered users in a category of their own to help facilitate automated welcome campaigns. Mailing services generally offer several other perks, including open rate stats alongside a series of other interesting insights for emails being sent out. Similarly, you also have the possibility to A/B test newsletters in order to further improve their open rates. This is highly relevant for the crypto niche, where service providers regularly rely on email campaigns to enable better communication with customers. 2. Constantly verify whether emails are being sent to the spam folder Spam has been around for decades, and it’s certainly not slowing down. Due to this, numerous email service providers have implemented systems which rely on algorithms in order to check incoming emails for spam. Despite the honest work of these algorithms, there are times when necessary emails are still flagged as spam. Several settings can be tweaked to make sure that your welcome emails don’t have to endure the silent treatment. Anyhow, it is essential to test out all emails prior to sending them massively, since failure to do so can lead to the ineffectiveness of your email marketing campaigns. 3. Choose and rely on a template for all emails In the world of business-to-customer communication, standardization is key since it facilitates brand recognition,while also giving your emails a professional look. Therefore, it is important for crypto and blockchain-based companies to design templates for all emails being sent out. Some of the main elements worth taking into consideration include logos, images, heading, text, buttons and the footer. These elements should be designed and built together. 
When choosing a template, some of the best marketing practices dictate that you also consider data from user analysis tools. 4. Choose a writing style At this point in time, it doesn’t really matter whether you choose a formal or informal style. What’s important is consistency; thus, companies should not rely on different writing styles when conversing with the same customer. Similarly, when drafting up an email template, it is also important to make sure that you are speaking the same language as your users. We are in the cryptocurrency field here, so chances are that the user portrait is more tech-oriented. Most of the times, technicalities shouldn’t be explained in-depth in an email, but do make sure to offer all relevant information, according to user profiles. The welcome email — your first interaction with a client Nothing beats a first impression. This concept applies everywhere in real-life, and it certainly applies to the internet as well. As such, the welcome message represents the first direct form of communication between a business and a customer, so it’s certainly important to have your first emails sound appealing and professional. Here are some relevant tips worth keeping in mind: Write your first email on behalf of team members, or even the CEO In the world of email marketing, personalization goes a long way. Typically, users love seeing that an actual team member or even the CEO has taken the time to write up an email. Doing so showcases professionalism, and facilitates trustworthiness, brand loyalty and brand recognition in the long run. Use Call-to-Action A first-time email must have a call-to-action element implemented within its content. Potential customers should be guided towards checking out the shop or taking a product tour. Failure to include a call-to-action might remove all value from your emails. In the case of complicated projects, it’s best to include links to educational resources A well-informed customer is more likely to purchase a crypto-based product or service, as opposed to a user who doesn’t understand how the product works. Educational resources should be easy to understand, yet they should also discuss the topic in-depth. Show your strong points A welcome email must also show users why you’re the right pick for the product/service they are seeking. Consider your target audience and list all of the advantages of using your services. This is also the place to mention what differentiates your business from competitors. Include a welcome offer This tip can work wonders in the case of potential customers who have not yet made a purchase decision and are still looking for offers. In the case of welcome promotions, you can include some form of limited-time discount, or a fee waive for first-time purchases. Avoid sending out welcome emails that contain the users’ passwords This is a risky practice that can have several unexpected consequences. Therefore, it’s best to avoid including registration passwords in unencrypted emails. The follow-up email — your second chance at a positive first impression Sending out follow-up emails is a great method of keeping users in the know about your newest products, services, policies and promotions. While having one welcome message is a good marketing practice, organizing a set of automated emails will likely yield even better results. However, your automated emailing strategy should be organized in a way that protects your promotional emails from being classed as spam. 
Users shouldn’t feel bombarded with the emails that lack any actual value since this will likely lead to decreasing open rates. Rather, crypto companies should focus on creating a chain of welcome emails that contain relevant information for customers, and which are sent when the time is right. Doing so ensures that users will continue reading your emails, while also helping you achieve your goals as a service. Here are some examples of topics that can be used for email chains: You have recently registered, and we just noticed that you are interested in ‘the topic of your niche’. Here is an article exploring the subject in-depth. This approach will facilitate trust-building while also showcasing your expertise to potential customers. A video of your CEO discussing the project At this point in time, the cryptocurrency market is dealing with mass lack-of-trust, given the handful of failed and scam-based projects. A video from your CEO is bound to improve your credibility both in the short and long-term. Share feedback, testimonials and real-life use cases for your service or product By doing so, potential customers will be able to easily spot the value in your project, thus encouraging a purchase decision. For emails like this, you can also use the services of sphere influencers to make your email even more valuable. A promotional email reminding users about discounts they might want to consider Oftentimes, users are busy and tend to forget about all the discounts and promotional offers that they are entitled to. Bottom line Despite the increasing popularity of other marketing channels, email remains a highly-relevant tool that is bound to facilitate customer engagement, trust, credibility, brand loyalty, brand recognition, and of course, conversions. However, the positive impact of this marketing instrument can only be ensured if email strategies are done right. The tips that have been outlined so far should definitely put you on the right track for successful email campaigns, but only if done consistently. It is important to keep in mind that marketing professionals throughout the world advise testing out different strategies, prior to settling. At this point in time, new personalization and automation tools can make email marketing considerably easier, but also more effective since it also enables quick conversions.
https://medium.com/cointraffic/how-to-build-a-welcome-email-for-crypto-oriented-services-16983410dcd2
[]
2020-01-16 14:20:01.703000+00:00
['Cry', 'Marketing', 'Cryptocurrency', 'Email Marketing Tips', 'Email Marketing']
Cohesive Design for a React Component
Introduction

Backend developers, who often work with object-oriented programming languages, are well-versed in the use of design patterns and SOLID principles for better quality, maintainability, and conciseness of a program. This is often left by the wayside by front-end developers when designing React components. One important concept, "cohesion", is largely ignored, which leads to spaghetti code, and sooner or later the architecture starts to smell. A change in one place causes an apocalypse (maybe that's putting it too strongly): it affects multiple unknown places, and if this is prevalent in your codebase, you could end up rewriting your application. Many developers design React components that are hard to understand because they are complex and their intertwined relationships are difficult to interpret. We are so used to quickly writing a new component in a storybook and tacking it onto the application without much thought on the design of the component. Like me, most of you have started by just writing a new React component, and that is perfectly fine to get started. But working on the same application over time and iteratively adding more functionality to existing components soon ran me into problems, which made me a fanatic about emphasizing cohesion in components. Unlike other posts, the scope of this post is limited to the topics of cohesion, separation of concerns, and loose coupling in the context of React components. So let's get started.

What is Cohesion?

In computer science, cohesion refers to the measure of the strength with which different elements, such as functions, classes, data, presentation, business logic, and services, are tied together. In other terms, it is how focused a portion of code is on a unified purpose. High cohesion means less coupling, and it is often tied to the topics of loose coupling and separation of concerns, which we will see in the following sections. It is easier to understand cohesion by exploring these two related topics. It may be confusing whether cohesion defines loose coupling and separation of concerns or the other way around. Instead of piecing it together from different blogs, I would rather keep it to how I understand them. In my opinion, these topics are different but related to each other.

Separation of Concerns (SoC)

This is a fairly well-known topic for many backend developers. It is about separating things and grouping them in the place where they are used, so that they are easy to track and kept apart from others. For example, a basket of assorted fruits can be segregated by category into individual baskets, placed such that apples sit with apples, much the same as in a superstore.

How is cohesion related to SoC?

Given a basket of assorted fruits, if you were given the job of counting and replenishing the low quantities in the basket every hour, how difficult would it be? Wouldn't it be easier to have each fruit in a separate basket, labeled with the name, count, and maximum number of items? Indeed. We are essentially assigning duties, or separating the duties, of each basket to maintain these attributes. Based on some statistics of the data, you may choose how to arrange the baskets. It also enables us to decide to place the most-picked fruits nearby, and the arrangement can be revisited each week based on customer behaviour. Having a bin with a single responsibility is very much what SoC means and what cohesion implies. A component given a job should do it really well and should not be responsible for jobs other than the one it is designed for.
Though separated, it can be combined to create an abstraction by composing them. This is a building block for the cohesive design. Loose coupling Coupling is a connection between modules, classes, or two or more entities. Without this connection, a dependent become useless except all by itself. Low coupling is preferred over strong coupling. The high coupling means more interdependency between classes which makes reuse difficult. The change in one place affects one or more unknown places. Again take the example of the basket of fruits. When an item is picked from the bucket, it affects the entire count of the bucket and arduous need to keep track of each item. The same objective can be achieved by having each fruit placed in its own basket next to each other counting and replenishing with a breeze. Hmm, this is what loose coupling is all about. Limiting the coupling reduces the interdependency of the classes and make code more readable. It is not always possible to eliminate the dependency, but always try to keep it very thin. Loose coupling can be achieved with proper separation of concerns and cohesion. Why Cohesion is important? You may have come across a code that is technically correct and works fine, but something is not right when you try to add more functionality and extend it. The code we write is supposed to be extensible, maintainable, and easy to read. It is the design of the code that allows to peel layers like an onion to easily re-apply and move around like a lego. To give a simple example of cohesion — The Lego blocks to build different models that can be interchanged to build new models. In contrast, a change in a single part of the design of the rocket can significantly affect the overall design and can put the whole project in jeopardy. Well, well, well, all good so far but how to apply them to the React component? Let’s start with an example, always better than a bunch of words. Problem We are going to work with a simple Expandable component that toggles the content when clicked on the “show more” link. In the context of React component design, lack of cohesion can have consequences which are illustrated in this simple example. const Expandable = ({ isOpen, children, buttonText, openIcon = <Icon color="blue" name="angle up" />, closeIcon = <Icon color="blue" name="angle down" /> }) => { const [open, setOpen] = useState(isOpen); const expandableContainerRef = useRef(); const onClickHandler = () => { setOpen(!open); } useEffect(() => { if (expandableContainerRef && expandableContainerRef.current) { if (open) { expandableContainerRef.current.style.maxHeight = expandableContainerRef.current.scrollHeight + "px"; } else { expandableContainerRef.current.style.maxHeight = 0; } } }, [open, expandableContainerRef]) return ( <div className="container"> <button className="button" onClick={onClickHandler}> {buttonText} {!!open ? openIcon : closeIcon} </button> <div ref={expandableContainerRef} className="body"> {children} </div> </div> ); }; The example is nice and clean for now. The problem starts when more variations are added to the implementation. To add to the complexity, the body of the Expandable component handles a text with limited words that allows toggling between a set of words and all the content. Similarly, a table with limited rows. Let’s assume you end up implementing as the following code snippet. 
const Expandable = ({ isOpen, children, buttonText, variant, wordsSize, rowsSize, text, openIcon = <Icon color="blue" name="angle up" />, closeIcon = <Icon color="blue" name="angle down" /> }) => { let body = children; const [open, setOpen] = useState(isOpen); const expandableContainerRef = useRef(); const onClickHandler = () => { setOpen(!open); } useEffect(() => { if (expandableContainerRef && expandableContainerRef.current) { const currentScrollHeight = expandableContainerRef.current.scrollHeight + "px"; if (open) { expandableContainerRef.current.style.maxHeight = currentScrollHeight; } else { if (!variant) { expandableContainerRef.current.style.maxHeight = 0; } else { expandableContainerRef.current.style.maxHeight = defaultBodyHeight || currentScrollHeight; } } } }, [open, expandableContainerRef, variant, defaultBodyHeight]) if (variant === "limitByWords") { const truncatedText = text.split(" ").slice(0, wordsSize).join(" "); body = !!open ? text : truncatedText; } if (variant === "limitByRows") { const truncatedRows = React.Children.toArray(children).splice(0, rowsSize); body = !!open ? children : truncatedRows; } return ( <div className="container"> <button className="button" onClick={onClickHandler}> {buttonText} {!!open ? openIcon : closeIcon} </button> <div ref={expandableContainerRef} className="body"> {body} </div> </div> ); }; Without much thought on a design, one may choose to implement all these variants in the same component. Sure, you can do it and may get off without any bugs. But is it a good design? Maybe not. Let’s understand why. As we add more variations to the component, each time we change the component’s props thus the interface, and the implementation. This design exposes at least two problems: a) lack of separation of concern — Expandable component’s responsibility is to toggle the content of its body and not the what and how to render the content. b) tight coupling — an addition of a variant causes the component’s default implementation to change and also the number of props. Nevertheless, props become conditional and developers need to know their use. You could do a better job by adding brief documentation on how to use props, but self-documentation is better than additional notes. If we are able to achieve loose coupling and separation of concern, the Expandable component will be cohesive in nature. Different bits and pieces of a code can be made interoperable, composable, and extendable. How to solve the problem? Let’s start with the SoC. The job of the expandable is to toggle the content and that’s it. See the following code modification. export const Expandable = ({ isOpen, children, buttonText, initialBodyHeight, openIcon = <Icon color="blue" name="angle up" />, closeIcon = <Icon color="blue" name="angle down" /> }) => { ... ... return ( <div className="container"> <button className="button" onClick={onClickHandler}> {buttonText} {!!open ? openIcon : closeIcon} </button> { React.Children.map(children, child => React.cloneElement(child, { open, ref: expandableContainerRef, className: "body"})) } </div> ); }; As you can see, what and how to render the body is no longer the responsibility of the Expandable component. It is up to the child component. Not just that, we also made the Expandable component to have a single responsibility. Naturally, the dependencies on the props by the variants disappeared. Wait, how would I implement the child with different props? Exactly, they are child components. Let children define what they need. Okay! 
But how does the Expandable component know how to work with a child? Well, this is where the loose-coupling concept kicks in. Remember, as mentioned earlier, a dependency with no connection at all is of no use, and loose coupling is preferred over tight coupling. The child component must accept "ref" and an optional "open" prop to work with the Expandable. Notice that other props can still be passed to each child component. Hmm, so now I can pass variant-specific props to a child component and pass that child component as the children of the Expandable component. And this is how we apply loose coupling. The following code snippets are the implementations for each variant that is rendered as the body of the expandable.

DefaultDisplay

export const DefaultDisplay = React.forwardRef(
  ({ className, children }, ref) => (
    <div ref={ref} className={className}>
      {children}
    </div>
  )
);

Display Limit Words

export const DisplayWords = React.forwardRef(
  ({ className, text, wordsSize, open }, ref) => {
    const truncatedText = text
      .split(" ")
      .splice(0, wordsSize)
      .join(" ");
    return (
      <div className={className} ref={ref}>
        {open ? text : `${truncatedText}...`}
      </div>
    );
  }
);

Display Limit Rows

export const DisplayRows = React.forwardRef(
  ({ children, rowsSize, className, open }, ref) => {
    return (
      <div className={className} ref={ref}>
        {open ? children : React.Children.toArray(children).splice(0, rowsSize)}
      </div>
    );
  }
);

By now it is clear that each variant is free for reuse elsewhere, in a different context or on its own, as long as the needed props are passed. Adding a new variant with this design is as easy as creating a new component and letting Expandable pass in the props. Interesting! This means a change to the Expandable component is not needed at all. Neither do I have to touch any other components. Nonetheless, you can nest an Expandable within an Expandable to create a different visual structure. Well, this is what cohesion is: it allows us to compose things like layers that can be peeled off and applied elsewhere.

Using a cohesive Expandable component

export const DefaultExpandable = () => (
  <div style={{ margin: "10px" }}>
    <Expandable buttonText="Show more" initialBodyHeight="0px">
      <DefaultDisplay>
        <div>
          <div>Body goes here</div>
          <div>Body goes here</div>
          <div>Body goes here</div>
          <div>Body goes here</div>
        </div>
      </DefaultDisplay>
    </Expandable>
  </div>
);

export const LimitWordsExpandable = () => (
  <div style={{ margin: "10px" }}>
    <Expandable buttonText="Show more">
      <DisplayWords
        text="The slice() method returns a shallow copy of a portion of an array into a new array object selected from begin to end (end not included) where begin and end represent the index of items in that array. The original array will not be modified."
        wordsSize={10}
      />
    </Expandable>
  </div>
);

export const LimitRowsExpandable = () => (
  <div style={{ margin: "10px" }}>
    <Expandable buttonText="Show more">
      <DisplayRows rowsSize={2}>
        <div>Body goes here</div>
        <div>Body goes here</div>
        <div>Body goes here</div>
        <div>Body goes here</div>
      </DisplayRows>
    </Expandable>
  </div>
);

Nice! How does it look in practice? Exactly the same as the original implementation.

Bonus

Can you use it with any child component? Not in all cases; a child element of the Expandable has to work exactly like the "DefaultDisplay" child component (that is, forward the ref it receives). Wow!
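To illustrate the claim above that adding a new variant requires no change to Expandable, here is a hypothetical child component (my own sketch, not from the original article) that limits the body by number of characters; the names DisplayChars and charsSize are made up for this example. It only has to forward the ref and accept the optional open prop, exactly like the other variants.

Display Limit Characters (hypothetical)

export const DisplayChars = React.forwardRef(
  ({ className, text, charsSize, open }, ref) => (
    // Show the full text when open, otherwise only the first charsSize characters
    <div className={className} ref={ref}>
      {open ? text : `${text.slice(0, charsSize)}...`}
    </div>
  )
);

// Usage: drop it into Expandable like any other variant
// <Expandable buttonText="Show more">
//   <DisplayChars text="Some long text goes here..." charsSize={40} />
// </Expandable>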
https://afiz-momin.medium.com/designing-a-react-component-1d488dbbddf6
['Afiz Momin']
2020-12-17 02:26:53.211000+00:00
['Cohesion', 'React', 'Separation Of Concerns', 'Loosecoupling']
Gross is a really nebulous concept.
Gross is a really nebulous concept. On one hand it’s the gradual process by which things get bigger things get better and get slightly older. On the other it’s a process that is defined by the limits and the struggles against itself. True progress can happen without resistance. How do you handle the strain and stress that true growth requires. . #lettering #wordart #fountainpen #lamy #sotd #growth #watercolorpaper #sketch #brushpen #copicmarkers #pilotmetropolitan #ink #doodle #doodles #color #coffee #lettering #brushtype #mindfulness #realtalk #bonsai #random #healthy #mentalhealth #battleon #goodthoughts #brushlettering #latenight #sosleepy Originally published on Instagram
https://medium.com/design-is-brutal/gross-is-a-really-nebulous-concept-2258f7469ab0
['Paul Muller']
2016-12-19 01:07:30.727000+00:00
['Instagram', 'Mindfulness', 'Design']