Tachyons: The Hypothetical Faster-Than-Light Particles in Physics
Shortly after, it was established that tachyons would not under any circumstances lead to faster-than-light propagation, but would instead lead to the previously mentioned “tachyon condensation”, where unstable particles decay into stable ones through quantum fluctuations. More about that later. Relativity theory — what does Einstein have to say? Does relativity state that nothing can move faster than light? It actually only states that nothing with a non-zero mass can accelerate to the speed of light (or above), because that would require infinite energy and imply an infinite mass. Only if a particle is massless (like photons and gluons) can it travel at the speed of light. So, according to relativity, a particle cannot accelerate to faster than the speed of light. But what if it always had that speed, from the beginning of the Universe? What if it was born like that in the Big Bang? If such a particle existed, it would not be able to decelerate to slower than light. It would have the weird property that its energy would decrease as its velocity increased, and hence it would approach its lowest energy state as its speed approached infinity. Vice versa, it would require infinite energy to slow down to the speed of light and would hence never be able to cross that speed limit. This makes the speed of light a two-way barrier for particles traveling at speeds on either side of it. That would not break the rules of relativity; such a particle would, however, have other weird properties: Tachyons would have an imaginary rest mass (assuming that energy must be a real, positive value, this follows from the relativistic energy relation, reproduced at the end of this article: if the denominator is imaginary, then the numerator must be imaginary as well for the result to be real). An observer could see them travel in the opposite direction through time, and thus see them before they are created (in the frame of the observer), depending on the relative configuration of the frames in spacetime. Fun fact: while particles with imaginary mass are called tachyons, ordinary particles with a non-zero mass that travel slower than light are called bradyons (or tardyons), and massless particles are called luxons: Bradyons: v < c, m² > 0 Luxons: v = c, m² = 0 Tachyons: v > c, m² < 0 All three classes of particles can exist in theory (the first two we know quite well), and the third group does not break any physical laws. But no particle can switch to another class. Totalitarian principle in quantum physics: “Everything not forbidden is compulsory.” The Higgs field was a tachyonic field before spontaneous symmetry breaking All theories that involve tachyonic fields are cases of spontaneous symmetry breaking. The Higgs mechanism is an example of spontaneous symmetry breaking and hence also an example of a tachyonic field at the beginning of the Universe. First, a short explanation of spontaneous symmetry breaking. Usually, a physical system is symmetric in potential energy around its stable state at a local minimum, which means that the energy potential has only one lowest and hence one resulting stable state. But in the lowest energy state in a vacuum, a quantum system might not exhibit the same symmetry. Fluctuations might result in spontaneous symmetry breaking, where two lowest states are possible and one is chosen. The symmetry is broken with that choice of configuration. A visualization of the energy potential at a high enough energy level with symmetry (left).
At lower energy levels, the center becomes unstable, one of the two positions with a lower energy state is chosen, and the result is that symmetry is broken (middle-right). Credits: CC/commons.wikimedia.org An example of spontaneous symmetry breaking happened when the weak and the electromagnetic force separated from the unified electroweak force, when the Universe was about a picosecond old and 1,000,000,000,000,000 K (10¹⁵ K) hot. Before the Universe cooled down to this temperature**, the fundamental forces of electromagnetism and the weak force were unified as one force: the electroweak force. Given the same amount of energy (246 GeV), the two forces would unify again into one electroweak force. (**According to current theories it was at 10³² K at the Planck time — the shortest period of time that makes sense in calculations, 10⁻⁴³ sec.) Electromagnetism and the weak force appear to be two completely different forces with different corresponding force-carrying particles. While the electromagnetic force is carried by the massless photon, the carriers of the weak force, which can change the flavor of quarks and make particles decay, are the W and Z bosons, with relatively heavy particle masses. Those particles didn’t exist before the spontaneous symmetry breaking of the electroweak force. There were four massless bosons carrying the electroweak force: three W bosons (of the weak isospin group) and one B boson (of the weak hypercharge group). They were energy-symmetric and massless. With massless force carriers, every interaction happens at the speed of light, with particles colliding, annihilating and being created in quantum fluctuations. As the temperature and energy level drop to 246 GeV, something happens. The particles start interacting with the underlying Higgs field, which we call the Higgs mechanism. The symmetry of the massless W and B bosons breaks as they interact with the four components of the Higgs field, two charged and two neutral. This creates new particles and force carriers: Two charged W bosons, W⁻ and W⁺ (created from two of the W bosons and the charged Higgs fields). Two neutral particles, the Z⁰ and the photon, γ (created from the neutral Higgs fields). A Higgs boson is created as a consequence, coupling with all other particles and giving them their rest mass. While the W⁺, W⁻, and Z bosons get their masses from interacting with the Higgs fields, the photon remains massless. Before the symmetry breaking, the Higgs field (but not the particles) had an imaginary mass, meaning it was a tachyonic field. The imaginary mass really means that the field is unstable, sitting at a local maximum of the potential energy instead of a local minimum. Small fluctuations at the quantum level would lead the field to one of the local minima, as seen in the figure above. This is the tachyonic condensation that happened, breaking the electroweak symmetry, giving mass to the W and Z particles, and condensing the unstable tachyonic Higgs field into the stable Higgs field. I’ll stop throwing words at you now. Here’s a kitty in a Mexican hat instead (the Mexican hat cat!). Mexican hat cat! Credits: Own production with images from Canva. Time Travel, Dark energy, and all other cool stuff As mentioned, tachyonic particles might be observed traveling backward in time because their speed exceeds that of the light reaching the observer. That is, if they could be observed at all, which is not the case.
So far they exist only as hypothetical particles and cool science fiction stories. Unfortunately, the only cool consequence of a tachyonic field so far has been the spontaneous symmetry breaking that shaped the Standard Model of the Universe. But that happened an age of the Universe ago. On the positive side, there is still so much we don’t know about the physics of the Universe we live in, like what most of it (95%) is really made of. The fundamental particles that we currently know constitute only about 5% of the entire Universe. 27% of the Universe is dark matter, of which we only know that it is something with mass that we cannot observe (though we can observe the effects of the extra mass). It could be some exotic matter that doesn’t interact with photons and that we haven’t discovered yet. 68% is dark energy, which we know almost nothing about, other than that something is accelerating the expansion of the universe. We call it dark energy because we can’t see it and because it’s expanding something, like energy usually does. But other than that — we have no idea. One explanation using fluctuating tachyon-anti-tachyon pairs is given by two physicists, Herb Fried and Yves Gabellini, in the publication The Birth and Death of a Universe. Although it might sound like an exciting idea solving several problems at once, this theory makes radical assumptions about the existence of a specific type of tachyon, which makes most physicists wary of it. Herb Fried, one of the physicists behind the original publication, explains: “If a very high-energy tachyon flung into the real vacuum (RV) were then to meet and annihilate with an anti-tachyon of the same species, this tiny quantum ‘explosion’ of energy could be the seed of another Big Bang, giving rise to a new universe. That ‘seed’ would be an energy density, at that spot of annihilation, which is so great that a ‘tear’ occurs in the surface separating the Quantum Vacuum from the RV, and the huge energies stored in the QV are able to blast their way into the RV, producing the Big Bang of a new universe. And over the course of multiple eons, this situation could happen multiple times.” Why not? After all, we might be nothing but a temporary fluctuation.
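A short aside to make the imaginary-mass argument above explicit: the relativistic energy relation that the article refers to (originally shown as an image) can be reconstructed as follows; this is a reconstruction for illustration, not the author's own figure.

E = \frac{m c^2}{\sqrt{1 - v^2/c^2}}

For v > c the denominator becomes imaginary, since \sqrt{1 - v^2/c^2} = i \sqrt{v^2/c^2 - 1}. If E is to stay real and positive, the rest mass must be imaginary as well, m = i\mu with \mu real, which gives

E = \frac{\mu c^2}{\sqrt{v^2/c^2 - 1}}

This energy diverges as v approaches c from above and falls toward zero as v grows without bound, which is exactly the two-way barrier and the "lowest energy at infinite speed" behavior described in the text.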
https://medium.com/predict/tachyons-the-hypothetical-faster-than-light-particles-in-physics-bf910c79c0bd
['Lenka Otap']
2019-11-28 05:04:06.717000+00:00
['Time Travel', 'Space', 'Physics', 'Science', 'Dark Energy']
Dating Expert Andrea McGinty: “5 Things You Need to Know to Survive and Thrive During & After A Divorce”
If you were to write the script for this post-divorce chapter in your life, I want you to title it, “Your Excellent New Adventure.” We have talked about finding a hobby, making time for you, meeting new friends and the icing on the cake, dating. No more venting about your ex not understanding you, not paying attention to you, not noticing you, the communication gap that manifested inside and outside the bedroom! Now you have control and the freedom to get back out there and find someone who can give you all that plus, make you feel special and wanted again. Intrigued? Andrea McGinty is a premier dating coach and founder of the newly launched, 33 Thousand Dates and started It’s Just Lunch. McGinty says the first step is being ready for a thrill. Ilyssa Panitz: Let’s get the lowdown on you. Andrea McGinty: I have been doing this for over 25-years, so I consider myself quite the dating expert. I have clients that are regular people and celebrities you see in the spotlight. I got into this back when I was in my 20’s and in the process of getting married. Five weeks before my wedding my fiancé called me and basically dumped me. I was thinking, “How am I going to meet people?” The most common places are college, grad school and the workplace. I was thinking, who do I go to? I was living in Chicago at the time and while I had friends looking to fix me up, the matches were all wrong. I was meeting these people over lunch or a casual drink after work. I figured each would last an hour and then give me the option to get out of there if I was not having a good time. I knew I was good at fixing people up and compared it to be like a recruiter and the idea was born. Ilyssa Panitz: It is like you being here right now with me. You have to interview clients like I am interviewing you for this story? Andrea McGinty: Yeah. We ask them a lot of questions, so we get to know them before we send them out. I would want to know about a person’s background and what they do professionally. Once we found some commonalities, we will have you go meet for lunch and then call us with your feedback. This was so helpful to us because it is you telling us what you thought of the person such as: was the guy was good looking, he was too quiet and then apply those responses to your next date. I started sending people out on dates and before I knew it, I sent 33,000 people out on dates hence that became the name of my company. The majority of my business is people who are 20 and 30-somethings. It has now expanded to an older demographic. When I started my company, there was no online dating the way it is today. Ilyssa Panitz: I wanted to mention that. How has your business changed since online dating on your desktop computer and all of these apps on your phone developed? Andrea McGinty: It has changed a lot, of course. I know what it is like because I have kept up with it as it has evolved. I have helped people work on their profile, update their profiles, suggest a photo and choose the right site. I would say 90% of my business is teaching people how to use a site to work to their advantage. That means not sitting around and wasting your time answering responses such as: “You’re so cute we need to meet.” I try to teach them how to avoid being just a text buddy, FaceTime friend and deciding who is worth meeting and how to choose wisely from the pool to make that happen. Ilyssa Panitz: How does a FaceTime date call work? 
Andrea McGinty: You want to use that time wisely to decide if you can talk to this person face to face and see if there is enough substance to invest the time and meet for a real date. You can also use this time to see what he looks like, check out his manners, a little bit about where he lives and get a real feel of what their personality is like. I try to put a cap on how much time you should spend on a call like this. If you decide to turn it into a happy hour, look to see what they are drinking. That could say a lot about them and if they have a sense of humor. Ilyssa Panitz: Can you still meet in person given the Covid-19 pandemic? Andrea McGinty: Depending on where you live, sure. If you are somewhere and it is cold, some places have outdoor bubbles with heating lamps that abide by the safety guidelines so you can have brunch or a drink. I suggest scheduling the date for about an hour and make sure to talk about their feelings about the pandemic ahead of time. Also, see if they are staying current and relevant to what is happening. Ilyssa Panitz: Talk to me about dating after a divorce? Andrea McGinty: For people who have been married for a long time, the online dating thing is going to be new to them. We spend time teaching these clients about the dangers, mistakes and the pitfalls that can go along with it. Ilyssa Panitz: Such as? Andrea McGinty: Dating online gives more people options to meet someone. What that means is finding the right meeting places for a date and finding the right dating app that is best suited for where you are. For a hook-up, think Tinder. For a long-term relationship, think Match. If a divorced woman between 38–40 is getting back out there, and wants a serious relationship with a quality man, go to a site because that is where they tend to look. A great example is Elite. I also suggest just using one site at a time. You really need to fully understand how the site and the filters work so you can monitor it best to your advantage. I do not want people to get bummed out or overwhelmed. Men tend to respond to fresh meat and a new picture. When you get overwhelmed with an influx of messages and they are saying things like, “Hey baby you are so cute let’s meet,” I want them woman to put them in a pile and ignore them. Ilyssa Panitz: What do you do in a situation like that? Andrea McGinty: Go to the site filters and put in what “you” are looking for. Height, geographic location, religion (if it applies) and be proactive in what is a priority for you. Then go back and see who fits. If you respond to everyone it will be disastrous. Ilyssa Panitz: What are some eye catching key words you should use on your profile so it does not sound like the others? Andrea McGinty: A picture is a thousand words. Make sure it is a great picture of you. Once a guy sees if you are cute, try to include funny words too. Show off your humor. Maybe you can find something positive that came out of the pandemic. Also, highlight something interesting about yourself such as a book you wrote or a mountain you climbed. Ilyssa Panitz: What about those who shy away from online dating because they hear the horror stories? Andrea McGinty: Well, there are a lot of marriages that come out of them too. Don’t talk to too many people. You will get too many opinions and you need to get over the horror stories. Look at it as a new adventure. See it as fun! Ilyssa Panitz: I want to jump ahead to the first date. 
For people who are going through a divorce or are already divorced, why should you not overshare on the first in-person meeting? Andrea McGinty: You should never overshare on your first date. You are just getting to know this person and it should go slowly. People who want to talk about their divorce, or that they are still dealing with custody issues — these topics are negative, and you do not get a second chance to make a first impression. Instead, I encourage people to maybe mention something along the lines of, you have kids, but then leave it at that. Do not give the other person too much information. The only thing you should think about is if you want to go on a second date with them. Get to the second date and then try to imagine if you could see yourself bringing this person to a cocktail party with your friends. It is one baby step at a time. You need to keep some things private, especially in the beginning. You do not know this person well enough to disclose things like that. Ilyssa Panitz: How do you handle the question, “Why did you get divorced” without giving away too much? Andrea McGinty: You can handle that by being positive. Say, “I was divorced after 20 years and for me that was a home run being married for that long. It was amicable and I am moving on.” Then change the subject and move on to something new. Do not say, “Tell me about yours,” because a lot of divorces are not amicable and their situation can be nasty. Stick to a positive answer rather than a negative one. No one wants to hear crummy stories. Ilyssa Panitz: I am sure. Andrea McGinty: It is also a sign that you are not ready to date. If the first thing you are going to hear and talk about from the other person is why they got divorced, they probably have not moved on. Someone who is ready to date will not focus on it but rather talk about something they noticed in your profile picture or in your bio. When you are in a good place, your answer will change to something like, “I am loving my new hobby, my new job or extra time with my kids.” Ilyssa Panitz: For a woman who is coming out of a long-term marriage and seeking to meet someone new, what kind of questions do you ask of her and the men you can introduce her to? Andrea McGinty: Values is a big one. I ask about interests such as: Do you work out? What do you do in your free time? Tell me a bit about your family. Tell me about your last relationship and why it worked. What didn’t you like about the person? What was your favorite quality about them? Ilyssa Panitz: What are the red flags someone should look for and not ignore? Andrea McGinty: Attitude! You want to be on a date with someone who has a positive and great attitude. If he has a horrible attitude and is bashing his ex-wife, you are done! You should look at this as a fun adventure and treat it as “I get to meet someone new.” If it doesn’t work out, look at it as “I have three more dates lined up next week.” Take the whole process in stride; you are being introduced to a lot of interesting people. Also, look for signals that the person is talking too much about their ex at every opportunity. They are not ready to date, especially if they are fresh off the boat and just signed their divorce papers. You don’t want someone spending all of the time ripping their ex-wife and the alimony they have to pay. Ilyssa Panitz: How important is location, location, location? Andrea McGinty: It is very important. I once had a well-established male client, and he was upset about the types of women he was meeting.
When I looked at his profile, it stated he was willing to meet women up to 200-miles away from where he lived and he included pictures of himself without a shirt on. First off, 200-miles is a far distance, and second those images are a turn off. You have to be dressed when you are posting an image and narrow down a radius that is realistic. Everyone is busy and wants easy access to meet someone for a drink, dinner or spend an afternoon going on a hike. Try to pin it down to a ten-mile radius. Ilyssa Panitz: How do you inspire women to not get discouraged if the dating experience is not starting off well? Andrea McGinty: I start with tweaking the profile, I may add a second dating site and then fine tooth some of the responses to the questions. It may take a dozen or so dates until you meet someone who is well suited for you. Remember, there is nothing wrong with going back and adding something we forgot. It can open the pool up a lot and give her more opportunities to meet the right person. Ilyssa Panitz: What are five things someone needs to know to survive and thrive during/after divorce in the dating world? Andrea McGinty: One: Make sure you are ready to date. Be aware that everything that comes out of your mouth is neutral, happy and not negative. Two: Hire someone who knows what they are doing so you have a positive experience, especially coming off a divorce. It will cost you the same as signing up for a bunch of different dating websites. I charge a few hundred bucks but then again so do these sites over the course of time. Three: Go treat yourself to a make-over. Buy some new clothes for dating, get a new haircut or get some new make-up. Feel really good about yourself with a whole new look. Four: Have a positive attitude! You do not want to be with someone who is doom and gloom so neither should you. Five: Do not stop dating after your first online date. Keep going. Keep getting out there. Do not put all of your eggs in one basket just because one date went well or give up because the first one was a dud. You are more desirable to a man when you are actively dating because he realizes other men are interested in you too.
https://medium.com/authority-magazine/dating-expert-andrea-mcginty-5-things-you-need-to-know-to-survive-and-thrive-during-after-a-c2ca375a3244
['Ilyssa Panitz']
2020-12-28 22:03:35.723000+00:00
['Relationships', 'Wellness', 'Divorce', 'Women', 'Dating']
Submission for Queen’s Children
We’d like this network to be a Mutual Aid and Inspiration Community. FIRST OF ALL, FEEL FREE IN YOUR EXPRESSION MY FRIENDS! WE LOVE EACH ONE OF YOU AND WE FORM A MOSAIC! 🌑 🌘 🌓 🌕 🌗 🌒 🌑 Feel free to propose anything that empowers the feminine: words of love, words of wrath or words of humor, real-life stories, tales, poems, essays, fiction and art (photos, paintings, digital…). We ask you to take your time and to reread your texts. We won’t correct you, but we will communicate with you in a private message. You may propose non-English translations too, but only while or after the English version has been published in our publication. We hope to have a diversity of cultures, languages and continents. How to propose your text If you are enlisted as a writer: Once you have finished your draft, click on the three little dots between the bell and the Publish button in the right-hand corner of your draft page. Select Queen’s Children in Add to publication and click the submit button. If you are not on our writers’ list, feel free to comment on this page so that we can enroll you. For other topics, feel free to write to [email protected] Some rules Your story should empower the feminine. This means respect and elevation, even if it is not precisely about the Sacred Feminine. The Sacred Feminine includes casual things too. Any word is allowed in the right context, as long as you don’t abuse it. Choose your image carefully. Make sure that all images are attributed to the author, with a link. For your first image, please choose the right position, the fourth one, which allows it to adapt to any screen size. Please write some alt text to describe it for those who use an audio browser. Of the five tags you are allowed, one must be Sacred Feminine. With it, you should add one of these: * POETRY for poetry and tales * STORIES for life stories and fiction * ESSAY for reflections * ART for art work We ask you to take your time and reread your texts. We will communicate with you in a private message, if needed. Submit only unpublished drafts, or previously published work with agreement from the publisher and with a mention and a link to the original publication. After 30 days, you can publish in another publication; if so, please write at the top of the post: “Originally published in Queen’s Children” with a link to your story. The delay won’t be more than a day. We are happy to build this community around the Sacred Feminine.
https://medium.com/queen-s-children/submission-for-queens-children-3b64bdacfab
['Jean Carfantan']
2020-05-18 09:03:41.232000+00:00
['Sacred Feminine', 'Writing', 'Writers On Medium', 'Feminine Energy', 'Feminine Power']
Use These Free Tools To Generate New Writing Ideas
Use These Free Tools To Generate New Writing Ideas Some free tools and one freemium OpenAI headline generator. Writing is tough enough. Sometimes it seems like professional writers can think of attention-grabbing topics and headlines on the spot. For many of us, it’s not that simple. So, for everyone who is not gifted with unlimited writing ideas, there are tools. Copy.AI (OpenAI) CopyAI will help you automate the tedious, and oftentimes frustrating, aspects of headline (& ad) creation. They help you brainstorm high-quality, audience-based copy in real time. Hubspot’s Blog Idea Generator Portent’s Title Maker Impact’s BlogAbout Title Generator Linkbait Generator
https://medium.com/roi-overload/use-this-free-tool-to-generate-new-writing-ideas-9a3a3bd27013
['Scott D. Clary']
2020-11-10 03:14:12.334000+00:00
['Writing Ideas', 'Writing', 'Blogging', 'Writer', 'Content']
Sentiment Analysis and Topic Trending Analysis with Weibo Data
Chu Chu, Minyi Huang, Valerie Huang, Yinglai Wang Abstract As one of the most popular online social gathering platforms for microblogging in China, “Sina Weibo” (“新浪微博”) has become a rich database of Chinese text and has attracted extensive attention from academia and industry. Netizens express their emotions through Weibo, thus generating massive amounts of emotional text messages. Through data collection, data processing, model selection, sentiment analysis, hot search analysis and visualization, our project created an extended analysis of the emotional status of netizens on certain topics, their opinions on social phenomena, and their preferences, which not only has a certain commercial value, but also helps in understanding societal changes. Motivation and Background The sentiment trend of the social world, which presents what people care about over time and how they feel about hot topics, can not only provide a data-analytical basis for business operations or marketing decisions, but also reveal the emotional changes and attitudes of social groups and official accounts, as well as user behaviors across different time periods. Data Source: “Weibo” Weibo, short for “Sina Weibo” (“新浪微博” in Chinese), is an information sharing/microblogging website, similar to Twitter. It is one of the biggest social media platforms in China, with over 600 million monthly active users as of Q3 2019. Users can post information within 140 characters and share or repost instantly. It gives Internet users more freedom and convenience to communicate information, express opinions, and record events. Sentiment Analysis Sentiment analysis refers to the analysis of the emotional state implied by a speaker when conveying information, attitude, judgment or evaluation of his or her opinions. Sentiment analysis of the massive text data on Weibo helps us understand Internet public opinion trends, expand companies’ marketing capabilities, and anticipate emergency situations. Motivation Currently, research on sentiment analysis of Chinese microblogs is still in its early stages. There are many explorations of sentiment analysis on Twitter and other English-language social platforms, but applying them to the Chinese language has certain limitations from a natural language processing perspective, where grammatical rules and language habits are very different. We are interested in applying the data science techniques we have learned to understand people’s opinions, how they change, and how they reflect social mood. Our project integrates sentiment analysis and topic modelling through several parts: text preprocessing, information extraction and emotion classification. Text preprocessing includes word segmentation, part-of-speech tagging, stop-word removal, etc. Emotional information extraction is based on rules applied to the sentiment-bearing elements of a Weibo post; sentiment classification is then derived from the underlying sentiment information. The emotional information extracted is divided into words, topics and relationships, and comes down to sentiment calculation with a semantic dictionary and classification based on machine learning. Problem Statement 1. Sentiment Analysis Given a message, how can we decide whether it is of positive, negative, or neutral sentiment? 2. Topic Modelling How do people’s attitudes and focus on a certain topic change over a period of time? 3. Model Selection How well will the segmentation and sentiment analysis models perform?
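To make the word segmentation and stop-word filtering mentioned above concrete, here is a minimal sketch using the jieba tokenizer (the same package the project uses later for segmentation). The stop-word list and the sample sentence are placeholders for illustration, not the project's actual resources.

import jieba

# Tiny illustrative stop-word list; the real project used much larger filtering rules.
STOP_WORDS = {"的", "了", "是", "在", "和", "就"}

def tokenize(text):
    """Segment a Chinese sentence with jieba and drop stop words and whitespace tokens."""
    return [tok for tok in jieba.lcut(text) if tok.strip() and tok not in STOP_WORDS]

print(tokenize("天津新增一例新冠肺炎确诊病例"))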
Data Science Pipeline Methodology 1. Hot search analysis Data collection Hot Search Topic data collection was done by sending HTTP requests using the Weibo API. To align with the Weibo post data collection, the time period was also set from Jan 1, 2020 to Mar 26, 2020. We collected roughly 6000 Hot Search Topics in JSON format and extracted the information we needed (i.e., time_stamp, content, views, start_time, end_time). Data preprocessing We used the start time and end time to calculate the alive time in seconds and also extracted the start date and end date in the form of ‘YYYY-MM-DD’. Duplicated and null hot search contents were removed. When plotting the distribution of the length of alive time and the number of views over time, we normalized these two sets of data to make them more compact. Hot search word frequency Unlike English sentences, Chinese sentences are written character by character with no spaces between words. Topic content was tokenized using the segmentation tool ‘jieba’. We also created certain rules for filtering stop words and meaningless vocabulary. 2. Sentiment analysis Data collection We crawled the search results for the keywords we obtained from the Hot Search Analysis under the domain s.weibo.com/, and scraped the HTML elements as needed using BeautifulSoup. To avoid disruption from user verification, we used the Firefox webdriver and implemented a random break mechanism and automatic reconnection after timeouts. In this way, we kept the crawler running on an EC2 instance for a week and gathered about 90,000 raw Weibo posts. Considering the time limit on this project and the amount of work needed to label the data, we only used data from Jan 10, 2020 to Mar 26, 2020. Sentiment labeling had to be done before we pre-processed the data; otherwise the tokenized content would be hard for anyone to read. Four of us spent nearly a month labeling 45,000 Weibo posts with sentiment marks. Data preprocessing Text preprocessing techniques include word segmentation, part-of-speech tagging, syntactic analysis, and other natural language processing. These technologies are relatively mature. Though resources for Chinese NLP are limited, there are several packages and libraries available to us, including a complete set of XML-based Chinese language processing modules. The application of these laid a good foundation for our sentiment analysis. Based on the characteristics of microblog text, link addresses, “@” characters (for responding to or communicating with other users) and “#” characters (for topic categorization) were filtered out. Even though the data were scraped properly from the HTML elements, they were still highly unstructured and could not be fed directly into a classification model. All forwarded posts are treated as duplicates, and we managed to locate the original post and keep only that one in the data set. We also created 26 regular expression rules to filter the posts’ content and extract the attributes we needed. Here are some examples of the data.
Before: 729,0,新冠,2020–03–10,//@GonozQvQ://@JaneMere://@诛砂:是有人良心被狗吃了!//@紫飞SAMA: //@杨林-杨家枪法第六十七代传人:浙江还给新冠患者做肺移植,两例了。这医疗救治强度不敢想象//@维稳先锋卡菊轮:74万,换算下来就是10万刀多点,再一算,在美国也就够35人次的核算检测。,03月10日 14:13,0,0,0 127,2,新冠,2020–02–27,#天津爆料# 【2月27日6时至18时 天津新增1例新冠肺炎确诊病例 累计确诊病例136例 治愈出院6人】记者从市疾控中心获悉,2月27日6时至18时,天津新增1例新冠肺炎确诊病例,累计确诊病例136例。今日治愈出院6人,累计治愈出院102例。 第136例患者,男,41岁,为天津市海河医院呼吸与危重症医学科副主 ​ 展开全文c,02月27日 22:59,8,16,18 After: 729,__label__negative,新冠,2020–03–10,是 有人 良心被狗吃 了 浙江 还给 新冠 患者 做 肺 移植 两例 了 这 医疗 救治 强度 不敢 想象 七十四 万 换算 下来 就是 十万 刀 多点 再 一算 在 美国 也 就 够 三十五 人次 的 核算 检测,2020–03–10 14:13:00,0,0,0 127,__label__neutral,新冠,2020–02–27,天津 爆料 二月 二十七日 六时 至 十八 时 天津 新增 一例 新冠 肺炎 确诊 病例 累计 确诊 病例 一百 三十六 例 治愈 出院 六人 记者 从市 疾控中心 获悉 二月 二十七日 六时 至 十八 时 天津 新增 一例 新冠 肺炎 确诊 病例 累计 确诊 病例 一百 三十六 例 今日 治愈 出院 六人 累计 治愈 出院 一百零二 例 第一百 三十六 例 患者 男 四十一岁 为 天津市 海河 医院 呼吸 与 危重症 医学科 副 主 ​ ,2020–02–27 22:59:00,8,16,18 Current open source Chinese word segmentation tools or modules mostly have some comparison data on the closed test set, but this can only show the effect of these word segmentation models on a certain closed test set, and can not fully explain its performance. Sentiment classification We used fastText as the model for efficient learning of word representations and sentence classification. Its major advantage is multilingual word vectors, and supports multiprocessing during training. Some preliminary results seem to show that fastText embeddings are better than word2vec at encoding syntactic information. The original word2vec model seems to perform better on semantic tasks. By comparing their performance in some other downstream supervision tasks, it will be interesting to see the portability of these two models for different types of tasks. In our scenario, fastText as a N-gram classification model has its reputation by facebook, support for all unicode languages, multi-labeling and it’s easy to tune. Topics cluster The topics of the Weibo texts were extracted by Latent Dirichlet Allocation(LDA) model. LDA is a Bayesian probability model to identify the semantic topic information. NLTK is also used for data preprocessing. After removing the Chinese stopwords, we create the dictionary and corpus needed for topic modeling. In order to get an optimal number of topics for the model, Coherence value is used to evaluate the quality of a given topic model. We also built a number of LDA models with different values of the number of topics k and picked the one that gives the high coherence value. Since the coherence score seems to keep increasing, we choose the model that gave the highest CV before flattening out. 3. Visualization Hot Search Time-Series analysis of the Weibo Hot Search was necessary to investigate what people care about most in a period of time. We summed up the total views and alive time every day among three months and plotted a heat map as follows. It is obvious to see that the number of views on the 26th January are much higher than other days, and the total alive time on that day is also relatively long. Let us take a close look at what happened on that day. We picked the top 20 hot search on the 26th January. Surprisingly, there are 18 out of 20 records directly related to or caused by “coronavirus”. In addition, we segmented the hot search, and selected 20 higher frequency words to make two pie charts by the number of views and the length of alive time. It is not surprising that when the measurement is total views in the first pie chart, all the words are related to “coronavirus”. 
Therefore, we decided to choose “coronavirus” as our sentiment keyword. We could not only gain a comprehensive understanding of the development of this epidemic, but also see how people’s emotions have changed over time. Sentiment Analysis and Trend Sentiment score and proportion are used to measure the sentiment trend of Weibo posts. The sentiment score is the average of the label values, with negative labeled as -1, neutral as 0, and positive as 1. Proportion presents the fraction of each labeled class. Considering the difference between official and personal accounts, and wondering what kind of posts people prefer, we took posts with more than 50 likes/forwards/comments as effective data. From our results, people’s attitude toward bad events is not always negative, as shown by the red line in the two pictures on the left. During the outbreak of COVID-19, not only was the negative impact of events transmitted on social media; encouragement and positive energy were passed along as well. At the same time, people show a preference for positive posts, as can be seen in the two figures below. WordCloud WordCloud is a data visualization technique to vividly represent text data and is widely used for analyzing data from social network websites. The size of each word indicates its frequency or importance. It is a great tool to show our results for the most frequent words and topics through highlighting and coloring. We generated our WordClouds in Python with the following modules: matplotlib, pandas and WordCloud. From the segmented word list produced in pre-processing, we aggregated all post contents on each day, calculated the word frequency list and generated the WordCloud object. For each day, we generated six different WordClouds, to show the difference between: ⓵ all posts with fewer than 100 comments/likes/reposts, ⓶ all posts, ⓷ all posts with more than 100 comments/likes/reposts, ⓸ all posts with negative labels, ⓹ all posts with neutral labels, and ⓺ all posts with positive labels. Then we translated the six WordClouds to English. Take the day of 2020–02–20 as an example: Difficulty and Challenge Data collection Most websites only present real-time hot searches, not historical data. It is time-consuming to find a proper API to get hot search data. The way we collected our data is through targeted searching, which might trigger the website’s robot detection and prompt identity verification. We had to overcome this issue using random break times and kept the crawler running for weeks. Tokenizing Chinese sentences is also quite challenging; we had to fine-tune and evaluate six different segmentation models to find the appropriate one for this project. Locating the original post among tons of forwarded posts was another challenge. Pre-processing We had to deal with unstructured and non-grammatical texts, and align different numerical and date formats across multiple languages. Data Labeling People express opinions in complex ways, including rhetorical devices such as sarcasm, irony, implication, etc., not to mention the extensive usage of acronyms. All of this indicated that the job could only be done manually. In order to improve the performance of our model, we spent 30 days labeling over 45,000 Weibo posts. Model Selection Segmentation Model We evaluated 6 different text segmentation models (jieba > hanlp > pkuseg > thulac > snownlp > foolnltk) and picked the one that performed best on Chinese sentences.
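To make the fastText classification step described in the Methodology more concrete, here is a rough sketch. The file path, hyper-parameters and sample text are illustrative guesses rather than the authors' actual settings; the training file is assumed to contain one segmented post per line, prefixed with __label__negative / __label__neutral / __label__positive, in line with the “After” example shown earlier.

import fasttext

# Train a supervised classifier on pre-segmented, pre-labeled posts.
# "weibo_train.txt" is a hypothetical path with lines such as:
#   __label__neutral 天津 新增 一例 新冠 肺炎 确诊 病例
model = fasttext.train_supervised(
    input="weibo_train.txt",
    lr=0.5,          # illustrative hyper-parameters, not the project's tuned values
    epoch=25,
    wordNgrams=2,    # word n-grams, matching the n-gram model described above
)

# Predict the sentiment label of a new segmented post.
labels, probabilities = model.predict("治愈 出院 六人 累计 治愈 出院 一百零二 例")
print(labels[0], probabilities[0])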
Word Representation Model We compared Word2Vec and FastText models on Chinese sentences and found that FastText gives better results and is more compatible with our sentiment tagging procedures. Visualization Mandarin-English Conversion The hard limit for Google Translate API requests is 30K Unicode characters (code points). To overcome this, we split the vocabulary of over 60,000 unique words into 15 chunks and distributed 15 tasks across virtual machines. We then merged all the results locally to be used for the translation mapping. Model Evaluation Sentiment trend value The sentiment trend value is designed to present the overall trend of the data. It is the average of the classified results, with negative as -1, neutral as 0, and positive as 1. The predicted result is close to the labeled result, as follows: F1-score, Hamming loss and confidence These three scores were used to evaluate the classification accuracy. The F1-score is a weighted harmonic mean of precision and recall; higher values of the F1-score indicate that the classification method is more effective. Hamming loss is the fraction of labels that are incorrectly predicted. Confidence is the predicted probability of the test samples that were predicted correctly. The definitions of precision (P), recall (R) and F1-score, and the results, are as follows: P = Tp/(Tp+Fp), R = Tp/(Tp+Fn), F1 = 2PR/(P+R) PR Curve and ROC Curve PR/ROC curves are used for further understanding of the model. The PR curve shows the relationship between precision and recall. The ROC curve summarizes the trade-off between the true positive rate (TPR) and false positive rate (FPR) for the model using different probability thresholds. TPR and FPR are defined as follows: TPR = Tp/(Tp+Fn), FPR = Fp/(Fp+Tn) These indicators are defined for binary classification, but our model addresses a multi-class problem. Therefore, we use micro-averaging to present the PR/ROC curves for the data as a whole. Micro-averaging aggregates the contributions of all classes to compute the average metric. We also compute the PR/ROC curve of each class separately. The above results come from the model trained and tested on 60,000 Weibo posts. It performs better than we expected; a possible reason is that the posts for training and testing all belong to the same topic, so their contents could be similar. Before training with the full data, we also tried to train the model with only 1000 labeled samples, but that performed poorly. Data Product Our visualization results can be accessed at the following URLs: Result of Hot Search Topics http://ec2-18-218-241-59.us-east-2.compute.amazonaws.com:8051/ Result of Weibo Sentiment Analysis http://ec2-18-218-241-59.us-east-2.compute.amazonaws.com:8050/ Lessons Learnt Team Collaboration During this special quarantine period due to COVID-19, our team transitioned from offline to online meetings, across different time zones. This was challenging at times, but it also served as a great opportunity to practice code-sharing, self-discipline and remote communication, which are essential skills later in either academia or industry. Data Science Techniques Data Collection Through the challenges we faced, we learned that in order to have a decent result, data collection is the most important first step and should be considered thoroughly. In our case, data should be collected randomly and then classified or labeled, instead of using targeted searching. Multi-language Results We also learned how to efficiently prepare multi-language versions of all results, since our raw data is all in Chinese.
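Before moving on to the summary, here is a small illustration of how the evaluation scores described above (micro-averaged F1, Hamming loss, and a micro-averaged precision-recall curve) can be computed with scikit-learn on made-up labels. This is not the authors' evaluation code; in a real setting, predicted class probabilities would replace the binarized predictions used here as a stand-in.

import numpy as np
from sklearn.metrics import f1_score, hamming_loss, precision_recall_curve
from sklearn.preprocessing import label_binarize

# Toy ground truth and predictions: -1 = negative, 0 = neutral, 1 = positive.
y_true = np.array([-1, 0, 1, 1, 0, -1, 1, 0])
y_pred = np.array([-1, 0, 1, 0, 0, -1, 1, 1])

print("micro F1:", f1_score(y_true, y_pred, average="micro"))
print("Hamming loss:", hamming_loss(y_true, y_pred))

# Micro-averaged PR curve: binarize the three classes and flatten their scores.
classes = [-1, 0, 1]
y_true_bin = label_binarize(y_true, classes=classes)
y_score = label_binarize(y_pred, classes=classes)  # stand-in for predicted probabilities
precision, recall, _ = precision_recall_curve(y_true_bin.ravel(), y_score.ravel())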
Summary & Future Work Surprisingly, from the visualization we can tell that people’s attitudes toward bad events aren’t always negative or dominated by negativity. That’s also why data science is important: things might not always be the way we thought. Compared with traditional text sentiment analysis, our project on Weibo has both similarities and particularities. The basic structure of the data science flow is similar, but the particularities are mainly reflected in dealing with the differences of the Chinese language. As a new research direction in data science, Chinese Weibo sentiment analysis still has many areas worthy of in-depth exploration. Future sentiment analysis on Weibo could take the following directions: Dealing with Spam Content Pre-processing procedures need to be refined. The existence of spam information on Weibo will undoubtedly interfere with sentiment analysis, and currently there are very few established filtering algorithms. Labeling Data We can find a better way to label data, such as using crowdsourcing. Finishing the labeling of over 50,000 posts in a short time is overwhelming. Online Language Usage There is a lack of filtering and sentiment mining of “online language”. There are few related dictionaries or corpora available for use now. Label Expansion Weibo posts, like many texts, carry a variety of emotions, and analysis should not be limited to positive, neutral and negative classes. It can be extended to explore different emotions and their levels. Live Updates The data analysis product can be moved from using historical data to live updates, which requires scalable tools and cloud computing.
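As a closing illustration of the topic-modelling step from the Methodology (choosing the number of topics k by coherence), here is a condensed gensim-style sketch. The toy corpus and all variable names are placeholders, not the project's actual data or code.

from gensim import corpora
from gensim.models import LdaModel, CoherenceModel

# Toy tokenized posts; in the project these would be the segmented Weibo texts.
texts = [["疫情", "防控", "措施", "医院"],
         ["疫情", "病例", "确诊", "治愈"],
         ["口罩", "防护", "医院", "疫情"]]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# Fit LDA models for several topic counts and keep the one with the best coherence.
best_k, best_cv = None, float("-inf")
for k in range(2, 8):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
    cv = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                        coherence="c_v").get_coherence()
    if cv > best_cv:
        best_k, best_cv = k, cv

print("chosen number of topics:", best_k, "coherence:", best_cv)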
https://medium.com/sfu-cspmp/sentiment-analysis-and-topic-trending-analysis-with-weibo-data-7ff75e178037
['Valerie Huang']
2020-04-18 04:21:22.702000+00:00
['Sentiment Analysis', 'Weibo', 'Topic Modeling', 'Big Data', 'Data Science']
Embracing Uncertainty
Embracing Uncertainty Accepting an unknown future is one of the greatest powers you can develop. Photo by Anne Nygård on Unsplash The Question: Will I Be Safe? I’ve often wondered if I could truly know my future in all its rich detail, as if handed a biography and I could read through all the chapters to see where life would take me, would I really want to know? The answer is no, I wouldn’t. But we all carry within us this voice — sometimes small, other times quite vocal — that wants to be certain that we’ll be taken care of, we’ll be safe, and we’ll have our material needs met. We might be curious about where we’ll end up, if we will get to realize our dreams, if we will find love or success or self-realization or whatever it is that we hope lies for us in the future. That little voice — anxious, worrisome, needy, clingy — struggles with uncertainty. It’s the source of uncertainty. The possibility of pain, suffering, setback, or worse lurks in the unseen ticks of the clock, waiting to spring forth. You might experience this as a deep anxiety that you are doomed to failure or as an inability to rest comfortably when life is going smoothly, as if you’re waiting for the other shoe to drop. Uncertainty is the insecure ego’s way of painting possibilities with a paintbrush of fear. It takes infinite possibilities and highlights the negative ones, and in so doing, obscures the vast positive potential of the unlived moment. Uncertainty is the tightly coiled sphincter, the blocked root chakra, the overprotective caretaker who sees only danger, never fun. What is the antidote to this way of seeing the world through danger-tinged glasses? The Answer, Part 1: What Was the Lesson of Your Pain? First, you have to get to the root of your fear that life is threatening to you. You feel that you’re not enough, not good enough, or won’t have enough. The likely reason is that you’ve experienced failures, losses, setbacks, abuses, or even profound tragedies. Those experiences, and the emotions that accompany them, are not to be denied, but did you survive? The fact that you are reading these words right now is a testament to your resilience. Now you’re looking not to repeat them. That’s sound advice if you did something that caused you a lot of pain and can learn the lesson you were meant to learn. Maybe you tried to get away with something — some kind of deceit or harm — and got caught. Maybe you took a risk with some kind of venture, business, or another enterprise, and it “failed.” Did you see your experience as a learning opportunity, gaining valuable lessons or did you see the event as life smacking you in the face? No doubt much of your fear is that you resisted the lesson that this painful moment had to offer you. You’re still resisting the lesson, and, as a result, still feeling a lot of pain. This pain is still alive in your body. Its tendrils have wounded deep in your chakras, especially the root, but others as well. This is a kind of trauma response — your fears are not rational, and they don’t respond to reason. They get triggered, like a fight or flight response; a portion of your brain dedicated to survival is still on high alert. You have to embrace that pain, feel it, and give it permission to be experienced and released. Learn the lessons of those events, rather than simply looking to avoid their repetition in the future. The Answer, Part 2: Looking for the Mystery, Awe, and Wonder Second, embracing uncertainty requires you to perceive the future through a different lens. 
Rather than focusing on the negative potential of the future, shift your view to one of awe, wonder, and mystery. What good or amazing possibilities might unfold, including ones you have never thought possible? It’s a shift in feeling. You cultivate an openness to options you might not even be aware of, with a feeling of excitement, anticipation. It’s like saying to the universe, I like this menu. Some dishes sound great; others not so much. How about a chef’s menu with a selection of dishes not even on this menu? You’re asking for life to surprise and delight you, rather than school you with more painful lessons. This is why I don’t want to know my future, as if jumping ahead to the end of a book or movie, or skipping episodes in a t.v. series and going straight to the finale. We love not knowing the outcome and instead relish watching the decisions that our favorite characters make, and cringing when they repeat the same “mistakes” and don’t see their own patterns that seem as clear as day to us, the viewers. Why shouldn’t our own lives not inspire the same kind of mystery, awe, and delight? That’s not always easy. Our lives can be a mix of events, some well beyond the scope of our individual sense of self. The past month, for example, has presented the entire nation with a powerful lesson in being with uncertainty. Despite all the polls and predictions, the outcome of the Presidential election was never clear. No one predicted that Georgia would go for President-Elect Biden, for example. The election unfolded without any kind of major ballot snafu (read: Florida 2000), or any kind of infiltration of state election systems by foreign agents. But we were held in suspense until Saturday morning. Even now, we must be with the uncertainty of how this transition unfolds, as the current President defies the norms of how a normal administration concedes the election and begins the process of handing over power to the next President. You may find yourself wrestling with fear of what comes next. Do you meet this moment with fear and worry or with a wonder and a sense of trust about how it will all unfold? The Answer, Part 3: Embracing Your Power to Create This is the mystery of the present moment. Pregnant with multiple possibilities, each second a portal to new dimensions. Each moment offers you an array of choices for what you will create with the gift that the moment affords you. That’s the joy of uncertainty — the path ahead is not fixed. You may have more or fewer choices in any given moment based on prior choices. (That’s the law of karma.) But you always have more choices than you realize, including how you react to all that is unfolding in your life and in the world more broadly. What will you create with this moment? Practice being with not knowing, with the uncertainty, and be in touch with that inner voice that does not feel safe, that fixates on the future as a source of pain. Taking care of that voice by getting at its roots can allow you to let go of uncertainty and lead you to start trusting life’s mystery. Then you can open to each moment of the unknown with the belief, rooted in trust, that life is leading you always to growth, opportunity, and flourishing. That’s how you start co-creating with the universe — by allowing yourself to be led. Only then will you come to discover treasures and worlds that you never imagined were possible because you gave up focusing solely on ensuring a future that was safe and familiar. Is that scary? Yes, sometimes. 
The ego will delight for a bit, and then say, But what about this next moment? This one might be dangerous. Life, in this way, when you let go and allow yourself to be carried, can sometimes feel like a rollercoaster: You know that you’re still held in place, strapped in tight, and yet the twists and turns can still be exhilarating. Embrace uncertainty, and let the universe take you on the ride of your life.
https://medium.com/know-thyself-heal-thyself/embracing-uncertainty-267f3b2c309b
['Patrick Paul Garlinger']
2020-11-12 14:30:49.014000+00:00
['Spirituality', 'Personal Development', 'Life', 'Trust', 'Energy']
Want a Happier & More Fulfilling Life? Diversify Your Sources of Happiness
Want a Happier & More Fulfilling Life? Diversify Your Sources of Happiness Your happiness is your personal responsibility Happiness or misery is a product of our subjective relationship with expectations and reality. It’s an inside job. Epictetus once said, “Man is not worried by real problems so much as by his imagined anxieties about real problems.” Over the long-term, a feeling of contentment, happiness comes down to embracing the objective reality of life, leveraging what’s in your control and letting go of too many illusions. “The great source of both the misery and disorders of human life, seems to arise from over-rating the difference between one permanent situation and another,” says Adam Smith. Learning how to adjust your expectations can change your approach to life. Viktor Frankl, author of the best-selling book, Man’s Search for Meaning says two of the most important values in life are the experiential, or that which happens to us and the attitudinal, or our response in difficult situation. Life is a moving current — there will always be obstacles that challenge our emotional stability. No one has the privilege of being free from the burdens of life and living it, but there are things you can do and a mindset you can adopt to keep moving and still make the most of what life throws at you. Regardless of your current trajectory on life, the means to a rewarding and fulfilling life is possible. Anyone can learn to do more of what can guarantee fulfilment and happiness. Unexpected sources of happiness The only way not to fall into the misery trap is to expand your sources of happiness and decrease your sources of misery. It’s that simple — loosen your fixation on what makes you happy and do more of what makes you come alive. Learn to develop the flexibility to reshape your sources of happiness. Investors are great at diversification — it’s one of the golden rules of investing. It’s never safe to put all your eggs in the same basket. It’s also generally not wise to rely on just a single income stream if you work for yourself. The risk of loss becomes too high. In life, we all have a limited amount of time and energy to spend. And there are so many unimportant things that want your attention ( be ruthless in eliminating those things). Investing your time in things that can bring you joy can indirectly help you avoid a life of misery. Many people wait for something to happen or someone to make them happy. In many cases of unhappiness, people experience difficult circumstances that create paradigm shifts, whole new frames of reference by which they see the world and themselves and others in it, and what life is asking of them. When this happens, they can easily expect something or someone to make them happy. Explore your values, priorities, and the lifestyle you want and adopt habits and routines that can help you design your own happiness. Take it slow — you can choose the one idea at a time to feel the most impact on your life. Focus on adopting the new mindset and see how it works for you. Jim Taylor, PhD recommends you ask yourself the following questions — “What do you value most in your life? What aspects of your life do you want as your priorities? What kind of life do you want to lead in the short term and in the future, say, in 10+ years? What do you want your life to be filled with (e.g., marriage, children, travel, health and exercise, culture)?”. The good news is, a single individual or thing should not be the sole reason for your happiness and fulfilment in life. 
When you leave your happiness in someone else’s hands, you’ll end up being dependent on them, and when they disappoint you, you’ll become empty inside. There are many factors that may influence happiness — outlook on life, a comfortable standard of living, great relationships, new adventures, pursuing creative and meaningful goals, and skilled and meaningful activities. To find out what works best for you, create your own personal list of “sources of happiness” and save it in your favourite note-taking app. Then, the next time you are feeling down, you have a bunch of powerful, different choices to draw from when you need that extra boost to keep you going every day. It pays to build deeper relationships; it’s also important to build routines and other personal but fulfilling activities that can help you avoid a life of misery. It’s incredibly important to find sources of happiness in your life that aren’t tied to people or stuff, because it’s much easier to lose that happiness when they are gone. Take advantage of all the sources of happiness in your life. Your happiness is your personal responsibility. You have the ability to control your own emotions. Do not let anything or anyone rob you of your own happiness. When your happiness is in the hands of other people, they determine when you can be happy. Choose to do more experiential activities that’ll bring you joy, for as long as they are not detrimental to your long-term happiness.
https://medium.com/personal-growth/to-avoid-a-life-of-misery-diversify-your-sources-of-happiness-18e11a061d34
['Thomas Oppong']
2020-12-11 00:05:53.459000+00:00
['Happiness', 'Relationships', 'Self', 'Psychology']
The Garlic Freak
(9–17) Charles James Worthington never tasted garlic until he was sixteen years old. It was on one of his first clumsy dates. It was on a double date, actually. It was the standard movie then pizza date. It was the first time Charles had ever eaten a pizza not made by his mother. He finally learned what real pizza was like and he also discovered garlic. (And girls.) From that moment on Charles became wildly passionate about garlic. He did research on it. He went on a quest to try every ethnic cuisine that included garlic. He began going to grocery stores after school just to have a double handful of garlic in which to stick his nose. How on earth could he have gone sixteen years of his life without any experience with garlic whatsoever? It was because of his mother. The family never ate out in those sixteen years. Charles’ mother was an extreme penny pincher. She knew how to feed a family of seven on less than a buck-fifty. The only food Chucky (he was called Chucky back then) ever tasted was his mother’s cooking. Even the seemingly thousands of sack lunches that Chucky had eaten at school were made by her. And she hated garlic. She hated it so much that she vehemently refused to even allow it in her home. Chucky learned this when his mother found a few stray cloves of garlic in Chucky’s sock drawer. “Don’t you ever bring garlic into this house again! You understand me?” “No. I don’t understand your weirdness about garlic. What’s the deal? I’ve tasted it and it’s delicious.” She put her hand to her mouth, slowly dropping it to speak, “So you’ve tried it?” “Duh.” “I tried to shield you from it as long as I could but I guess deep down I knew that eventually you would succumb. I can’t protect you from garlic your whole life. You’ll be a man soon and you’ll need to make your own decisions. Just hear me now, Chucky; garlic is evil! It will inflame your loins and your stomach and it will turn your brain into mush. It’s what is used to control people. Promise me that you will stay away from garlic!” “What? No. I love garlic.” Once again, she covered her mouth. “And one more thing….” She didn’t move. “From now on, my name is not Chucky. It’s Charles!” Her hand still on her face, his mother let out a faint whimper then turned and left the room.
https://medium.com/recycled/the-garlic-freak-f9fd479541fa
['White Feather']
2019-05-21 18:24:49.004000+00:00
['Short Story', 'Fiction', 'Humor', 'Food', 'Psychology']
A Relationship With God Can Boost Mental Health, Even if You Don’t Believe
A Relationship With God Can Boost Mental Health, Even if You Don’t Believe New research points to mental health benefits for the devout and agnostic alike Photo: d3sign/Moment/Getty Images Maybe you don’t believe in God. But could cultivating a relationship with God, despite your agnostic stance, make a difference for your mental health? As a philosopher of religion, this question is of great interest to me — and now a recent research trend suggests the answer might be yes. For decades, researchers have wondered about the factors that account for the complex relationship between religion and mental health. Under certain circumstances, it appears that religion positively influences mental health — though not in all cases. One key variable in this equation has become increasingly clear over the past ten years: the value (to the believer) of a perceived relationship with God. Research in this area focuses on what psychologists call “attachment to God.” If “attachment” sounds familiar, it should — attachment theory (often invoked as a framework for understanding relationships) has received a great deal of attention in recent years. Just because a person lacks the sort of evidence they might require in order to believe in God doesn’t mean they won’t take an intellectual risk and try to engage in a relationship with God — just in case God is there. The idea of attachment to God was borne out of a research paradigm that began with studies of attachment relationships between infants and caregivers. Infants could be avoidantly attached — seeking to strike out on their own, cold toward their caregivers, not experiencing much need for them. They could be anxiously attached — constantly needing reassurance from their caregivers, afraid to engage their environment independently, and worried their caregivers might abandon them. Or, they could be securely attached and thus occupy a kind of happy medium in which they felt assured of their caregiver’s support (when needed), which enabled them to explore their world confidently. Roughly the same attachment orientations, it was theorized, might characterize a person’s relationship (or lack thereof) with God. A person might avoid engaging with God, might be anxious about their relationship with God, or might be secure in their relationship with God. Today, research that has accumulated for nearly 30 years makes it clear that people do vary in their attachment orientations toward God, and that difference makes a difference. Many studies support the idea that a secure attachment to God is important for a believer’s mental health. Secure attachment is associated with less drug and alcohol abuse, less problematic internet use, less loneliness and depression, and greater satisfaction with life. When believers are instead anxiously or avoidantly attached to God, these same behaviors can be troubled. Moreover, believers’ attachment to God appears to influence outcomes uniquely. In other words, when researchers have conducted studies that control for believers’ attachment relationships to family members or partners, their attachment relationship with God has still proven significant for their mental health and well-being. Yet, one might wonder whether these benefits are exclusively available to believers. In a 2016 study, psychologist Alyssa Strenger and her colleagues wondered about just this, arguing that research on attachment to God should attend to non-believers as well as believers. 
Their recommendation that studies give non-believers attention bucked the trend of the time. In Strenger and team's view, even non-believers might still hold mental representations of God that affect their behaviors, emotions, and cognitions. From a philosophy of religion perspective, Strenger's suggestion makes good sense. In philosophy, "theists" are typically understood to be those who believe in God, "atheists" are those who believe God doesn't exist, and "agnostics" neither believe God exists nor believe God doesn't. They're on the fence, as it were. Notably, this is a purely cognitive characterization. It has only to do with what a person believes or doesn't believe. Yet, much more is involved in a relationship — whether to another human or to God. Just because a person lacks the sort of evidence they might require in order to believe in God doesn't mean they won't take an intellectual risk and try to engage in a relationship with God — just in case God is there. The emotions, behaviors, and even perceptions that are the stuff of personal relationships can exist and function apart from our beliefs (or the lack thereof). This same idea is echoed in recent work by psychologist Steven Pirutinsky and colleagues. Pirutinsky conducted studies with Jewish populations, for whom cognitive aspects of religion are frequently less important in empirically verifiable ways. An example: weighing actions more heavily than beliefs when judging the religiosity of themselves and others. Because of this, Pirutinsky initially thought that attachment to God may not be significant for his Jewish participants' mental health. But it turned out he was wrong. His explanation of why attachment to God did matter for Jewish participants was along the lines of what I suggested above: whether a person cultivates a securely attached relationship to God is not constrained by their cognitive attitudes alone. Strenger, too, tested her suggestion that attachment to God may be significant for non-believers by focusing on eating disorder symptoms and God attachment. She found that secure attachment to God was important for both believers and non-believers when it came to eating disorder symptoms, and there was no significant difference in the role it played for believers versus non-believers. Specifically, for both believers and non-believers, lower levels of anxious attachment to God weakened the effects of sociocultural pressure. In my own recent work, I too have found further support for thinking that God attachment may be important for agnostics. I recently took a second look at two studies — here and here — of attachment to God and mental health in which data was collected for agnostics, but was not reported on in the original studies. In both, I found that attachment to God was significantly related to mental health for agnostics. Further, I found that while theists who are securely attached to God may gain a mental health edge on agnostics who are not securely attached to God, they don't gain this advantage on agnostics who are securely attached to God. Now, these are only a few studies. They must be interpreted with caution, and more work is needed on the topic. Yet they at least suggest that for some agnostics, cultivating a relationship with God may offer some surprising benefits — even if it isn't accompanied by belief.
https://elemental.medium.com/a-relationship-with-god-can-boost-mental-health-even-if-you-dont-believe-449e51cb645b
['T Ryan Byerly']
2020-12-28 06:32:41.070000+00:00
['Spirituality', 'Mental Health', 'Mindfulness', 'Religion', 'Self']
Simulating An Epidemic Outbreak With JavaScript — Part 7
TLDR: I made an epidemic outbreak simulation that can be played here. Show Me The Curves A lot of articles out there about the pandemic are talking about flattening the curve — reducing the peak infection numbers so the medical facilities don't get overwhelmed. So let's add some charts to our simulation to see how our curves are doing. Install ChartJS into our project with an npm install.

npm install chart.js --save

Import it into our app.js.

import Chart from 'chart.js'

Here's how I set up my chart. Essentially stripping out all the labels and lines and whatnot. I just want to see my curves.

const myChartCtx = document
  .querySelector( '.my-chart' )
  .getContext( '2d' )

const myChart = new Chart( myChartCtx, {
  type: 'line',
  data: {
    labels: [ 0 ],
    datasets: [
      {
        data: [ beds ],
        fill: false,
        pointRadius: 0,
        borderWidth: 1,
        borderColor: 'rgba( 66, 153, 225, .5 )',
      },
      {
        data: [ 0 ],
        fill: true,
        pointRadius: 0,
        borderWidth: 0,
        backgroundColor: 'rgba( 0, 0, 0, .1 )',
      },
      {
        data: [ 0 ],
        fill: true,
        pointRadius: 0,
        borderWidth: 0,
        backgroundColor: '#ED8936',
      },
      {
        data: [ total / 2 ],
        pointRadius: 0,
        borderWidth: 0,
      }
    ]
  },
  options: {
    legend: { display: false },
    tooltips: { enabled: false },
    scales: {
      xAxes: [ { display: false } ],
      yAxes: [ { display: false } ]
    }
  }
} )

function addData( who, data ) {
  myChart.data.datasets[ who ].data.push( data )
  myChart.update()
}

So if you noticed, I have 3 datasets in my chart setup. The first one is a single line that draws across my chart. This will represent the breaking point of the medical facilities. As mentioned in the previous articles, this is about 25% of my total population. The second is my death count. It is a black area chart with a 10% opacity so it will overlay on the chart to give a sense of the growing numbers of dead beans in our population. The last dataset is my infection numbers. This will include both infected beans and detected beans. This is the curve to flatten and keep below our breaking point line from above. Every time I loop through my particles to do a count, I push these data to my chart and update the chart. I also set a small deferral counter so that I don't push my data that often. Currently, my loop runs at 60 fps, and I don't want to update my chart that often.
https://medium.com/footprints-on-the-sand/097-simulating-an-epidemic-outbreak-with-javascript-part-7-c8876e9f62d5
['Kelvin Zhao']
2020-04-06 07:13:23.024000+00:00
['Development', 'Virus', 'JavaScript', 'Simulation', 'Coding']
Origins #36 — The Responsibility Is Real
Origins #36 — The Responsibility Is Real Delivering value and creating reactions with the product, while building relationships The last week prior to launching our online store, we put in a lot of effort to make sure that everything works fine, mostly payments and that the product inventory is displayed correctly on the website. We also approached testing the website, the same way we did developing our product. We had people from our community go on it and then communicated the feedback. Thanks to all the people who took the time to help us out! Feelings I felt a very strange sudden rush of responsibility, minutes after the store was live. All the work and content produced so far, was pointing towards this day, on which we start shipping the product we’ve been working on for a while. After some self talk, I managed to ease the tension and come to an understanding that it’s a false sense of a milestone event, as it is just the beginning, instead of an end goal that has been accomplished. So, thinking through it, I was able to get back into a more stoic mindset, ignore the self-imposed “importance” of the event and mentally move on to the next tasks at hand. First Shipments Luckily and thankfully we’ve been keeping people in the loop about what we are doing, while also building relationships through the podcast and other social networks. To many of those people, we sent out samples of the product, while it was still in development. A very good sign was that many of those early testers, liked the product so much, that during our first days of operation they bought a few shirts. This enabled us to go through the whole process of accepting an order and fulfilling it, making sure everything works and do so in a volume that is manageable, but not overwhelming. Having an established relationship, also allowed us to check in with them and ask how the payment process was, what the automated emails from our system look like and other customer facing communication. As of the day of writing this post, those initial shipments are traveling towards their final destinations and we’re eager to ask about initial reactions to a now finalised product + packaging. With the web store, everything has been very smooth so far, which gave us the green light to mentally step into the next stage of building this brand, which is figuring out how to market the product and get more people to hear about it. Online Marketing Strategy As I think I’ve mentioned in a previous post, we think about marketing in two ways, short-term/transactional and long-term/brand building one. We consider that in the short-term, in order to start generating some volume or orders, Facebook Ads will be our main mass marketing channel. Our expertise and interests naturally lie more into creating and investing more time in the long-term projects, which would be creating content and building relationships. So we’ve decided that we will outsource our Facebook marketing efforts to an agency, or a person, which would enable us to focus on brand and relationship building. We played with some Facebook Ads, before launch, but we realise that in order to be effective there, we either have to invest a lot of time to learn it, or get someone who knows what he’s doing. We choose the latter, which opens up time and focus to work on our naturally stronger points. That’s what we are currently up to, looking for a partner to take care of our online marketing efforts. We will let you know how that goes ;) Marin’s story on the DULO Instagram. 
Sourcing the community for knowledge ;) If you know anyone, or are someone who might be interested in working with us, let us know ;) Thank you for your time! Please give a few 👏 and/or leave a comment. It means a lot to us and helps other people hear about our journey! Also, say Hi on Instagram | Facebook | YouTube | Twitter | Snapchat | SoundCloud
https://medium.com/the-needle/origins-36-the-responsibility-is-real-aad5576bfac1
['Julian Samarjiev']
2020-09-02 10:49:01.267000+00:00
['Business Development', 'Marketing', 'Online Marketing', 'Digital Marketing', 'Social Media']
Web Automation
(Published initially November 23rd 2017) Information is power. This is such a basic principle in finance that those who make privileged use of it get seriously penalized. But on the other hand it is perfectly legitimate to analyze widely available information in a novel way to obtain an advantage in the markets. Although such information is publicly available on the Internet, there will not always be a beautiful API in JSON or XML ready for consumption by algorithms; so having some basic notions of web automation will be of great help in our work. Actually the heading "web automation" goes beyond the mere collection of data, a reviled concept known as web scraping, and also includes the possibility of interacting with the web pages themselves, verifying credentials, filling in forms, providing data and activating services. That is, full bi-directional autonomous interaction. A practical example would be the connectors that we have developed at Ágora Asesores Financieros to find the daily quotes of certain exotic funds (not available in the data pipes of Bloomberg or Factset) and update them selectively in the cloud that holds the portfolios of our clients, saving a lot of time, effort and mistakes. The Python programming language provides us with a series of simple ways to perform web automation. We are going to give a brief introductory tour through them with increasing degrees of functionality. 1. Bare, Naked and Raw The basic module for Internet interaction in Python is requests. With that alone we already have enough to work with. For example, let's get the X-Trackers ETF quote on the Euro Stoxx 50 from Morningstar.

import requests as req

res = req.get("http://www.morningstar.es/es/etf/snapshot/snapshot.aspx?id=0P0000HNXD")
text = res.text
start = ">EUR\xa0"
end = "<"
start_pos = text.index(start)
end_pos = text.index(end, start_pos)
print(text[start_pos+len(start):end_pos])

First we download the entire web page from Morningstar, and then find the value that sits between the two text strings contained in the start and end variables. No big deal, but it is not a very robust method and the expressions necessary to isolate text strings can become very ugly very fast. Regular expressions can save us temporarily; for example, the previous code would become:

import requests as req
import re

res = req.get("http://www.morningstar.es/es/etf/snapshot/snapshot.aspx?id=0P0000HNXD")
print(re.findall(">EUR\W([^\<]*)<", res.text)[0])

Brief, no doubt about it, but regular expressions can be devilishly complex to decipher as well. Nor do we solve the fundamental problem of fragility: if anything changes the text of the page a bit, even without changing its structure, or if, for example, the text "EUR" appears in an earlier section, the system will fail. 2. With Structure Luckily we have another option: instead of browsing plain text, we can navigate through the logical structure of the web page that we are visualizing, as defined by the HTML code. Python has the BeautifulSoup module that will allow us to navigate the structure using CSS expressions once we have installed it.

> pip install bs4

Our code would then become this:

import requests as req
from bs4 import BeautifulSoup as soup

res = req.get("http://www.morningstar.es/es/etf/snapshot/snapshot.aspx?id=0P0000HNXD")
html = soup(res.text, "html.parser")
print(html.select("#overviewQuickstatsDiv table td.text")[0].text)

We still have to clean off the "EUR" part if we want, and we are done.
The CSS search can be interpreted as “Find the element identified as overviewQuickstatsDiv, and give me the content of the first cell of class text you find inside your table”. As long as the structure of the page does not change, something quite unlikely, our search will succeed. 3. With Sessions Sometimes the web services will not be available directly but we will have to identify ourselves instead (in terms of Internet, start a session) and perform a series of steps before completing our task. No problem, Python is also willing to lend us a hand here. Although the requests module itself has support for sessions, the handling of forms, cookies and states can quickly turn complex, so it is advisable to use a higher grade library to encapsulate these tasks and change our journey into a simple walk through the digital park, following links, filling out forms and pressing buttons. Mechanize has traditionally been a highly dependable library, but unfortunately it has become somewhat outdated by only supporting Python 2.x. In contrast with that, RoboBrowser will provide us with full support for Python 3.x. > pip install robobrowser For these more elaborate examples I have created a project-lab on GitHub called ScrapHacks that you can clone on your local machine to perform your own experiments. You’re welcome! from lxml import etree url = "https://www.quefondos.com/es/planes/ficha/?isin=N2676" resp = req.get(url) html = etree.HTML(resp.text) value = html.xpath("//*[@id=\"col3_content\"]/div/div[4]/p[1]/span[2]") For the case of session management it is worth reviewing the file pricescrap.py, where we combine RoboBrowser with an alternative library to BeautifulSoup called Lxml, interesting because it allows to use XPath expressions in addition to CSS. > pip install lxml In the example file we download the price of three financial assets with methods similar to the previous ones, but then we navigate to a service in the cloud, we initiate session with private credentials and we dump these price updates in specific client portfolios. 4. With Browser Although in the previous example we talk about “surfing” or use the term browser in our code, it is good to understand that it is just a metaphor. RoboBrowser is not a full web browser in the sense that Chrome, Firefox or Safari are. It only emulates part of its functionalities but it lacks many others, such as the ability to execute the JavaScript code associated with the pages. This is important. Sometimes the JavaScript code is purely decorative, but in others its execution is fundamental for the correct interpretation of the page. Traditionally, the active composition of the page was done on the server side and when it came to the client side it was a static element, but due to the use of certain development frameworks, as well as for reasons of security and flexibility, increasingly the active composition of the page is performed on the client side. In such cases, being unable to interpret JavaScript will entail a miserable failure. But let’s not throw in the towel so soon. Python has an excellent integration with Selenium, a project that will allow us to take control of the browser of our choice and act as if we were sitting in front of the machine, clicking on buttons and filling in boxes so that it is practically impossible to distinguish a human session from an automated one. 
from selenium import webdriver driver = webdriver.Chrome() driver.set_window_size(1000, 1000) driver.get("https://www.duolingo.com") driver.find_element_by_id("sign-in-btn").click() driver.find_element_by_id("top_login").send_keys(credentials["username"]) driver.find_element_by_id("top_password").send_keys(credentials["password"]) driver.find_element_by_id("login-button").click() Please excuse me here while I open a little parenthesis: I am a regular user of the amazing services of Duolingo, with which I have already learned several languages and I hope to learn many more. However, the Mobile App does not allow to see the grammar lessons (for example the part of “Tips and Notes” here at the bottom of the page) that although can be ignored when a Spanish learns Portuguese, become essential if you want to survive while learning of a language as distant as Russian. However a gentleman does not complain about a gift, and anyway I am more a person of action, so for this example I have created a simple script called duolingoscrap.py that extracts and groups the grammar lessons in a convenient summary. The challenge is that the Duolingo website is interpreted on the client side, which makes the use of Selenium, combined with BeautifulSoup, essential. 5. With Traffic Control Up, up we go in our pyramid. What do we have left now that we are able to navigate the web as a human? Well, maybe some extra-human capabilities. Sometimes the interpretation of the JavaScript code is so incredibly complicated, often with the explicit desire to keep it safe from eyes too curious as ours, that it is impossible to find the web element we want to extract. In such cases what we can do is directly observe the traffic that enters and leaves our machine in order to find that element. We can achieve this by combining Selenium with BrowserMobProxy as an intermediary (proxy) between our browser and the world. It is a program written in Java so we will need to have a JRE running on our machine, but this convenient wrapper allows us to work with it as if it were another piece in our Python arsenal. from selenium import webdriver from browsermobproxy import Server browserMob = ".%sbrowsermob-proxy-2.1.4%sbin%sbrowsermob-proxy" % (os.path.sep, os.path.sep, os.path.sep) server = Server(browserMob) server.start() proxy = server.create_proxy() chrome_options = webdriver.ChromeOptions() chrome_options.add_argument("--proxy-server={0}".format(proxy.proxy)) driver = webdriver.Chrome(chrome_options = chrome_options) proxy.new_har("safaribooks") driver.get(url) har = proxy.har for entry in har['log']['entries']: # processing So we want to download a movie hidden in the code? I may not understand what the activation process of the video is, but I just set it in motion and when I see a multimedia element in my traffic I capture it. Done. This is precisely the use case for the safarihacks.py script, which uses Selenium to log in with a test account in the SafariBooks on-line library and then proceed to serially download the books in the catalogs of our interest. In the case of multimedia courses, the code gets support by BrowserMobProxy to identify and download the video files. Finally, the icing on the cake comes in the form of an integration with PDFReactor to automatically convert downloaded books to PDF format. With these basic techniques you can create very powerful web robots (also called spiders). 
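Since the session handling described in section 3 is only shown above through its lxml half, here is a minimal sketch of what the RoboBrowser side of such a flow can look like. This is an illustration rather than the actual pricescrap.py code: the URL, form id and field names below are placeholders that you would replace with those of the real service.

from robobrowser import RoboBrowser

# Hypothetical login page and form fields, for illustration only
LOGIN_URL = "https://example-portal.com/login"

browser = RoboBrowser(parser="html.parser")
browser.open(LOGIN_URL)

# Find the login form, fill in the credentials and submit it;
# RoboBrowser keeps the resulting session cookies for us
form = browser.get_form(id="login-form")
form["username"].value = "my-user"
form["password"].value = "my-secret"
browser.submit_form(form)

# From here on we can follow links and reuse the same kind of CSS
# selectors we used with BeautifulSoup in section 2
browser.open("https://example-portal.com/portfolio")
for cell in browser.select("table td.price"):
    print(cell.text.strip())

The convenient part is that the cookie and state management we would otherwise have to do by hand with plain requests happens behind the scenes, which is what turns the session workflow of section 3 into the simple walk through the digital park mentioned earlier.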
Have fun experimenting with the ideas in your own projects, and if this post has been useful to you, I would appreciate it if you could recommend it and share it in your social networks. Check as well this continuation article in which we cover the very relevant topic of Hybrid Web Automation. Good luck and see you next time! Summary Python provides means to turn web automation into a very easy task. Don’t miss https://github.com/isaacdlp/scraphacks with practical examples: Useful components:
https://medium.com/algonaut/web-automation-a66dd2be5c2a
['Isaac De La Peña']
2020-08-31 23:36:27.323000+00:00
['Programming', 'Web Automation', 'Python', 'Data Science', 'Web Scrapping']
10 Essential Skills for the Modern UI & UX Designer
Typography involves much more than choosing a great font. When used effectively, it can enhance usability by improving readability, accessibility, and hierarchy within an interface. If you’re reluctant to create typography guidelines from scratch, try the Material type scale generator to generate font sizes for paragraphs, headings, buttons, and so on. To create type scale guidelines for a UI project, here’s how I do it: Select a font to work with. My favorite places to get high-quality UI fonts are Google fonts or Adobe fonts. Avoid more than 2 typefaces. Instead of introducing new fonts to the interface, use font families. Fonts from the same family are designed to work together, so they’re flexible and consistent. Establish a base font size. I start by establishing the most commonly used type scale for body copy and determine a suitable line-height. Line height. Keep line-height range between 130% and 150% for optimal readability. This isn’t always true, but it’s a good place to start and then make adjustments as needed. Define a scale. A scale provides consistency, rhythm, and hierarchy because they are predictable. To set type scale for h1, h2, h3, body, captions, buttons, and so on, we need a scale value to multiply by our base font size. Common scales for type are 1.250x,1.414x, 1.5x, 1.618x. Test scales on devices. Test font with different scales on multiple device sizes to decide on the right value. 4. How to craft a perfect case study View a case study template I created — View Figma. The ritual of crafting a comprehensive case study at the conclusion of a design project is a design project in and of itself. Documenting the design process has become so ingrained in many up and coming designers that it seems designers go out of their way to use as many colorful sticky notes as possible to take the perfect picture for their case study. Here is a format that I use for case studies: Intro Overview. Provide a high-level description of the project. Client. Who was the client? Who was the solution for? Role. What was your role in the project? (i.e., Lead UX Designer) Duration. How long was the project? Tools. List the tools you used for the project. (i.e., XD, Whimsical, Defining the problem Hypothesis. State what problem was hypothesized that lead to the project being started. Create a problem statement. In exactly one sentence, sum up the problem you’re aiming to solve with your design. Discovery. This is the initial validation of our hypothesis. In the discovery phase, we research our problem and find existing solutions and possible opportunities for improvement. Testing Interviews. Assuming there have been potential opportunities uncovered in the discovery phase, it would now be time to interview potential customers. Outline the number of interviewees or surveys, age range (if applicable), gender (if applicable), and interview duration. Goals. Indicate the different discoveries that were trying to be made during the interviews. Insights & Opportunities Journey map. A journey map will help uncover these insights to refine your understanding of the users’ pain points and find potential opportunities. In your journey map, be sure to outline the opportunities in the user’s journey. Opportunities. Through discovery, research, and interviews, we’ve uncovered many potential opportunities for our product. Outline 3 key opportunities that your design could help solve. Solution State the solution. 
A solution statement that outlines how the design will solve the core problems people are encountering. Design Design principles. What principles guided design decisions? MVP. Show the minimum viable solution that you created to solve the aforementioned problems and opportunities. Gathering feedback. How did you gather feedback on the MVP? (i.e., user testing, hotjar, google analytics, surveys, etc.) Testing insights. Describe the findings that came out of your testing of the design. Include quotes from users as well. New iteration responding to feedback. The design process is inherently iterative and ongoing, so I like ending my case studies with updated designs based on initial feedback and a mention of roadmap features for the future. Conclusion Sum up your findings, challenges, client quotes, and other notes to bring it all together. 5. How to write effective UX copy In a perfect world, UX writing is the task of professional UX writers. However, companies often rely on their UX team to convey clear messaging instead of hiring UX writers. If your team has a UX writer, great! If not, I have some tips you can use to convey clear messages to users. Write all copy at once. It can be tempting to write the copy ad hoc while designing the product, but this can often lead to mixed tones and a lack of cohesion in the messaging. Create one document for all alerts, messages, modals, explainer text, and so on. Keep it short and sweet. There is likely fluff in UX copy that can easily be removed with a quick audit. Instead of saying: "Only Premium members have access to this feature" saying "Join Premium to Access" will keep it concise and to the point. Keep it consistent. When addressing the user, be sure to remain in first or second person, whichever was chosen. So instead of "edit your location in my account" say, "edit your location in your account." Avoid jargon. Unless we're designing an app for experts. Avoid industry-specific terminology like "buffering" or "configuring." Write in present tense. Instead of saying: "message has been sent" say, "message sent." Begin with the objective. When a phrase describes a goal and the action needed to achieve it, start the sentence with the goal. Instead of saying: "Drag a photo to the trash to remove it from this album" say, "To remove a photo from this album, drag it to the trash." 6. How to give design critique Giving constructive feedback and responding to less-than-constructive feedback is a critical skill that can be uncomfortable for new designers.
Lack of awareness about basic feedback skills leads to clients providing vague and useless feedback like "can you make it pop?" To provide effective feedback, there are a few things we can do: We must be clear and specific. "The imagery you used on the careers page doesn't represent our culture well. Let's show a playful and relaxed graphic to connect better with the candidates we hope to attract" is more useful feedback than "make it pop." Present the designer with the problem, not just the solution. At first glance, we may think that we know the solution and request the designer "use this graphic for the careers page instead." However, when we start by presenting the problem, the designer can understand why we want to change the graphic and can suggest ideas we may not have considered. Give high-quality examples. It's always helpful to share patterns from other companies or designs we've found in portfolios. Not because the designer should copy these, but it can be a helpful reference for how the design can be improved. Don't forget compliments. As designers, receiving a list of comments, revisions, issues, and so on can be daunting. Be sure to balance our feedback by calling out the parts of the experience that we think are working really well. For example, "I really like how you've laid out the welcome screen — it flows perfectly from the previous screen. However, it took me a second to notice the login button at the top. Can we make that button more prominent?" Tone matters. How we present our feedback and format sentences can greatly impact how designers (or people in general) respond to them. Instead of saying, "using that icon to represent a delivery doesn't make any sense," we could say, "the delivery icon is confusing to me. I'm used to seeing a package for delivery. Is this the best icon to represent delivery?" 7. How to create an unmoderated remote usability testing plan Many different user testing methods can be used to gather findings. Some common tests are usability testing, card sorting, tree testing, a/b testing, and feedback surveys. To understand the different options for testing, I recommend reading Quantitative User-Research Methodologies: An Overview. In this example, I will provide a template that can be used when conducting a qualitative test.
Define the goal The first step in creating a remote user testing plan is to define what we hope to achieve from the test. Writing a statement that defines the goal will be our guiding light when constructing the test. Write a simple statement like "explore if an onboarding flow is the best experience for onboarding new users." 1. Hypothesis Just like those science fair experiments in middle school, we should prepare a hypothesis. This will form the basis for what we are testing against. Continuing with the last example, our hypothesis might be, "users appreciate a seamless experience to welcome them into the app and explain the features as opposed to exploring for themselves." 2. Screening Questions Screening questions are asked to potential participants to ensure they're a good fit for the test. For example, if we're testing a workout app, then we may want to ask questions like, "do you currently use any workout apps?" or "do you exercise regularly?" If the tester answers no to these questions, they wouldn't be a good fit and wouldn't continue to our test. 3. Scenarios In this step, we need to define the different scenarios that a tester would go through to help us uncover insights. For example, a scenario might be testing out the app as a new user without an onboarding experience, exploring features on their own. Another scenario might be testing an onboarding experience that ushers new users through the app and explains features. 4. Tasks & questions for each scenario Initial — Gather the tester's expectations and explain the scenario. Questions & tasks — Explain the tasks to complete and define questions that will be asked throughout the test. Final questions — Ask how the experience compared to their expectations. It can help to see what they liked, didn't like, were confused by, and final thoughts. 5. Final questionnaire Gather final thoughts and ask any final questions to tie all the scenarios together. Which option did they prefer? How would you rate each experience on a scale of 1–5? Be sure to show visuals again to remind testers of each scenario. 6. Results Here are a few tips for gathering the best results from remote user testing: Not all users are tech-savvy, so it's crucial to create tests that don't require overly complicated tasks. Ask detailed screening questions to ensure we get the right testers. Or better yet, recruit our own users or target customers. Always use simplified phrasing as opposed to technical terms. According to the Nielsen Norman Group, we only need to test 5 users to uncover 85% of usability issues. 8. How to design for development When designing for development, there are considerations, constraints, and best practices we should keep in mind throughout the process. Considering development will make us better designers to work with and improve the overall quality of the products that we ship. Some steps to ensure a seamless handoff and easier lift for developers: Consider reusable patterns unless unique designs add real value. During the design process, we should audit our design when introducing new conventions, elements, animations, and so on.
If these additions aren't bringing additional value to the user, then they should be reconsidered. The elements that we do include (buttons, inputs, screen layouts, flows, and so on) should be reused as much as possible for consistency and to save time. Take advantage of collaboration tools like Zeplin for code snippets, inspection, easy asset downloading, and more. Figma, InVision, Adobe, or Marvel will also work just fine. Avoid introducing new, unnecessary features. Don't introduce features that will overcomplicate the development process while bringing no additional value to the application. Focusing on the business objectives, user needs, project scope, timeline, and how products are developed will help prioritize which features are essential. Organize all screens into sections in Zeplin and the design file. Name all artboards sequentially and appropriately for easy discovery and understanding. Mark assets for export. Keep an archive of old screens and ensure all new screens are up to date. 9. How to make low fidelity wireframes Wireframing with a tool like Whimsical is faster and more lightweight for throwing ideas together and getting a feel for the layout and hierarchy of our design. It's harder to fall in love with a design when it's only a wireframe, so we can take criticism and feedback while holding on to our dignity. Whimsical has predefined guard rails that make it easy to add components and define the hierarchy, layout, and content of screens — but not be distracted by the small details. Whimsical restricts us from getting caught up with colors, selecting typefaces, adding our own icon family, and so on. The simplicity helps keep me focused on the overall experience and not get sidetracked by the little details like spacing and colors as I would in Sketch. 10. How to become a better designer every day There will always be new trends, tools, design libraries, startups, product updates, and all the other things that we're sure will make us a better designer. Here are a few things I've done to help me improve my design skills by leaps and bounds:
https://uxdesign.cc/10-essential-skills-for-the-modern-ui-ux-designer-ee6e9b53fcf9
['Danny Sapio']
2020-12-14 19:11:24.033000+00:00
['Visual Design', 'Design', 'UI', 'UX', 'Product Design']
Hadoop Tutorial - A Comprehensive Guide To Hadoop
Hadoop Tutorial - Edureka If you are looking to learn Hadoop, you have landed at the perfect place. In this Hadoop tutorial blog, you will learn from basic to advanced Hadoop concepts in very simple steps. Alternatively, you can also watch the below video from our Hadoop expert, discussing Hadoop concepts along with practical examples. In this Hadoop tutorial blog, we will be covering the following topics: How it all started What is Big Data? Big Data and Hadoop: Restaurant Analogy What is Hadoop? Hadoop-as-a-Solution Hadoop Features Hadoop Core Components Hadoop Last.fm Case Study How It All Started? Before getting into technicalities in this Hadoop tutorial blog, let me begin with an interesting story on how Hadoop came into existence and why is it so popular in the industry nowadays. So, it all started with two people, Mike Cafarella and Doug Cutting, who were in the process of building a search engine system that can index 1 billion pages. After their research, they estimated that such a system will cost around half a million dollars in hardware, with a monthly running cost of $30,000, which is quite expensive. However, they soon realized that their architecture will not be capable enough to work around with billions of pages on the web. They came across a paper, published in 2003, that described the architecture of Google’s distributed file system, called GFS, which was being used in production at Google. Now, this paper on GFS proved to be something that they were looking for, and soon, they realized that it would solve all their problems of storing very large files that are generated as a part of the web crawl and indexing process. Later in 2004, Google published one more paper that introduced MapReduce to the world. Finally, these two papers led to the foundation of the framework called “Hadoop“. Doug quoted on Google’s contribution in the development of Hadoop framework: “Google is living a few years in the future and sending the rest of us messages.” So, by now you would have realized how powerful Hadoop is. Now, before moving on to Hadoop, let us start the discussion with Big Data, that led to the development of Hadoop. What is Big Data? Have you ever wondered how technologies evolve to fulfill emerging needs? For example, earlier we had landline phones, but now we have shifted to smartphones. Similarly, how many of you remember floppy drives that were extensively used back in 90’s? These Floppy drives have been replaced by hard disks because these floppy drives had very low storage capacity and transfer speed. Thus, this makes floppy drives insufficient for handling the amount of data with which we are dealing today. In fact, now we can store terabytes of data on the cloud without being bothered about size constraints. Now, let us talk about various drivers that contribute to the generation of data. Have you heard about IoT? IoT connects your physical device to the internet and makes it smarter. Nowadays, we have smart air conditioners, televisions etc. Your smart air conditioner constantly monitors your room temperature along with the outside temperature and accordingly decides what should be the temperature of the room. Now imagine how much data would be generated in a year by smart air conditioner installed in tens & thousands of houses. By this you can understand how IoT is contributing a major share to Big Data. Now, let us talk about the largest contributor to the Big Data which is, nothing but, social media. 
Social media is one of the most important factors in the evolution of Big Data as it provides information about people’s behavior. You can look at the figure below and get an idea of how much data is getting generated every minute: Social Media Data Generation Stats - Hadoop Tutorial Apart from the rate at which the data is getting generated, the second factor is the lack of proper format or structure in these data sets that makes processing a challenge. Big Data & Hadoop — Restaurant Analogy Let us take an analogy of a restaurant to understand the problems associated with Big Data and how Hadoop solved that problem. Bob is a businessman who has opened a small restaurant. Initially, in his restaurant, he used to receive two orders per hour and he had one chef with one food shelf in his restaurant which was sufficient enough to handle all the orders. Traditional Restaurant Scenario - Hadoop Tutorial Now let us compare the restaurant example with the traditional scenario where data was getting generated at a steady rate and our traditional systems like RDBMS is capable enough to handle it, just like Bob’s chef. Here, you can relate the data storage with the restaurant’s food shelf and the traditional processing unit with the chef as shown in the figure above. Traditional Scenario — Hadoop Tutorial After a few months, Bob thought of expanding his business and therefore, he started taking online orders and added few more cuisines to the restaurant’s menu in order to engage a larger audience. Because of this transition, the rate at which they were receiving orders rose to an alarming figure of 10 orders per hour and it became quite difficult for a single cook to cope up with the current situation. Aware of the situation in processing the orders, Bob started thinking about the solution. Distributed Processing Scenario - Hadoop Tutorial Similarly, in Big Data scenario, the data started getting generated at an alarming rate because of the introduction of various data growth drivers such as social media, smartphones etc. Now, the traditional system, just like cook in Bob’s restaurant, was not efficient enough to handle this sudden change. Thus, there was a need for a different kind of solutions strategy to cope up with this problem. After a lot of research, Bob came up with a solution where he hired 4 more chefs to tackle the huge rate of orders being received. Everything was going quite well, but this solution led to one more problem. Since four chefs were sharing the same food shelf, the very food shelf was becoming the bottleneck of the whole process. Hence, the solution was not that efficient as Bob thought. Distributed Processing Scenario Failure - Hadoop Tutorial Similarly, to tackle the problem of processing huge datasets, multiple processing units were installed so as to process the data parallelly (just like Bob hired 4 chefs). But even in this case, bringing multiple processing units was not an effective solution because: the centralized storage unit became the bottleneck. In other words, the performance of the whole system is driven by the performance of the central storage unit. Therefore, the moment our central storage goes down, the whole system gets compromised. Hence, again there was a need to resolve this single point of failure. Solution to Restaurant Problem - Hadoop Tutorial Bob came up with another efficient solution, he divided all the chefs in two hierarchies, i.e. junior and head chef and assigned each junior chef with a food shelf. Let us assume that the dish is Meat Sauce. 
Now, according to Bob’s plan, one junior chef will prepare meat and the other junior chef will prepare the sauce. Moving ahead they will transfer both meat and sauce to the head chef, where the head chef will prepare the meat sauce after combining both the ingredients, which then will be delivered as the final order. Hadoop in Restaurant Analogy - Hadoop Tutorial Hadoop functions in a similar fashion as Bob’s restaurant. As the food shelf is distributed in Bob’s restaurant, similarly, in Hadoop, the data is stored in a distributed fashion with replications, to provide fault tolerance. For parallel processing, first, the data is processed by the slaves where it is stored for some intermediate results and then those intermediate results are merged by the master node to send the final result. Now, you must have got an idea of why Big Data is a problem statement and how Hadoop solves it. As we just discussed above, there were three major challenges with Big Data: The first problem is storing the colossal amount of data. Storing huge data in a traditional system is not possible. The reason is obvious, the storage will be limited to one system and the data is increasing at a tremendous rate. Storing huge data in a traditional system is not possible. The reason is obvious, the storage will be limited to one system and the data is increasing at a tremendous rate. The second problem is storing heterogeneous data. Now we know that storing is a problem, but let me tell you it is just one part of the problem. The data is not only huge, but it is also present in various formats i.e. unstructured, semi-structured and structured. So, you need to make sure that you have a system to store different types of data that is generated from various sources. Now we know that storing is a problem, but let me tell you it is just one part of the problem. The data is not only huge, but it is also present in various formats i.e. unstructured, semi-structured and structured. So, you need to make sure that you have a system to store different types of data that is generated from various sources. Finally let’s focus on the third problem, which is the processing speed. Now the time taken to process this huge amount of data is quite high as the data to be processed is too large. To solve the storage issue and processing issue, two core components were created in Hadoop — HDFS and YARN. HDFS solves the storage issue as it stores the data in a distributed fashion and is easily scalable. And, YARN solves the processing issue by reducing the processing time drastically. Moving ahead, let us understand what is Hadoop? What is Hadoop? Hadoop is an open-source software framework used for storing and processing Big Data in a distributed manner on large clusters of commodity hardware. Hadoop is licensed under the Apache v2 license. Hadoop was developed, based on the paper written by Google on the MapReduce system and it applies concepts of functional programming. Hadoop is written in the Java programming language and ranks among the highest-level Apache projects. Hadoop was developed by Doug Cutting and Michael J. Cafarella. Hadoop-as-a-Solution Let’s understand how Hadoop provides solution to the Big Data problems that we have discussed so far. Hadoop as a Solution - Hadoop Tutorial The first problem is storing huge amount of data. As you can see in the above image, HDFS provides a distributed way to store Big Data. Your data is stored in blocks in DataNodes and you specify the size of each block. 
Suppose you have 512MB of data and you have configured HDFS such that it will create 128 MB of data blocks. Now, HDFS will divide data into 4 blocks as 512/128=4 and stores it across different DataNodes. While storing these data blocks into DataNodes, data blocks are replicated on different DataNodes to provide fault tolerance. Hadoop follows horizontal scaling instead of vertical scaling. In horizontal scaling, you can add new nodes to HDFS cluster on the run as per requirement, instead of increasing the hardware stack present in each node. Next problem was storing the variety of data. As you can see in the above image, in HDFS you can store all kinds of data whether it is structured, semi-structured or unstructured. In HDFS, there is no pre-dumping schema validation. It also follows write once and read many models. Due to this, you can just write any kind of data once and you can read it multiple times for finding insights. The third challenge was about processing the data faster. In order to solve this, we move the processing unit to data instead of moving data to the processing unit. So, what does it mean by moving the computation unit to data? It means that instead of moving data from different nodes to a single master node for processing, the processing logic is sent to the nodes where data is stored so as that each node can process a part of data in parallel. Finally, all of the intermediary output produced by each node is merged together and the final response is sent back to the client. Hadoop Features Hadoop Features- Hadoop Tutorial Reliability: When machines are working in tandem, if one of the machines fails, another machine will take over the responsibility and work in a reliable and fault-tolerant fashion. Hadoop infrastructure has inbuilt fault tolerance features and hence, Hadoop is highly reliable. Economical: Hadoop uses commodity hardware (like your PC, laptop). For example, in a small Hadoop cluster, all your DataNodes can have normal configurations like 8–16 GB RAM with 5–10 TB hard disk and Xeon processors, but if I would have used hardware-based RAID with Oracle for the same purpose, I would end up spending 5x times more at least. So, the cost of ownership of a Hadoop-based project is pretty minimized. It is easier to maintain the Hadoop environment and is economical as well. Also, Hadoop is an open source software and hence there is no licensing cost. Scalability: Hadoop has the inbuilt capability of integrating seamlessly with cloud-based services. So, if you are installing Hadoop on a cloud, you don’t need to worry about the scalability factor because you can go ahead and procure more hardware and expand your setup within minutes whenever required. Flexibility: Hadoop is very flexible in terms of the ability to deal with all kinds of data. We discussed “Variety” in our previous blog on Big Data Tutorial, where data can be of any kind and Hadoop can store and process them all, whether it is structured, semi-structured or unstructured data. These 4 characteristics make Hadoop a front-runner as a solution to Big Data challenges. Now that we know what is Hadoop, we can explore the core components of Hadoop. Let us understand, what are the core components of Hadoop. Hadoop Core Components While setting up a Hadoop cluster, you have an option of choosing a lot of services as part of your Hadoop platform, but there are two services which are always mandatory for setting up Hadoop. One is HDFS (storage) and the other is YARN (processing). 
HDFS stands for Hadoop Distributed File System, which is the scalable storage unit of Hadoop, whereas YARN is used to process the data that is stored in HDFS in a distributed and parallel fashion. HDFS Let us go ahead with HDFS first. The main components of HDFS are: NameNode and DataNode. Let us talk about the roles of these two components in detail. HDFS - Hadoop Tutorial NameNode It is the master daemon that maintains and manages the DataNodes (slave nodes) It records the metadata of all the blocks stored in the cluster, e.g. location of blocks stored, size of the files, permissions, hierarchy, etc. It records each and every change that takes place to the file system metadata If a file is deleted in HDFS, the NameNode will immediately record this in the EditLog It regularly receives a Heartbeat and a block report from all the DataNodes in the cluster to ensure that the DataNodes are live It keeps a record of all the blocks in HDFS and the DataNode in which they are stored It has high availability and federation features. DataNode It is the slave daemon which runs on each slave machine The actual data is stored on DataNodes It is responsible for serving read and write requests from the clients It is also responsible for creating blocks, deleting blocks and replicating them based on the decisions taken by the NameNode It sends heartbeats to the NameNode periodically to report the overall health of HDFS; by default, this frequency is set to 3 seconds So, this was all about HDFS in a nutshell. Now, let us move ahead to our second fundamental unit of Hadoop, i.e. YARN. YARN YARN comprises two major components: ResourceManager and NodeManager. YARN - Hadoop Tutorial ResourceManager It is a cluster-level (one per cluster) component and runs on the master machine It manages resources and schedules applications running on top of YARN It has two components: Scheduler & ApplicationManager The Scheduler is responsible for allocating resources to the various running applications The ApplicationManager is responsible for accepting job submissions and negotiating the first container for executing the application It keeps track of the heartbeats from the NodeManager NodeManager It is a node-level component (one on each node) and runs on each slave machine It is responsible for managing containers and monitoring resource utilization in each container It also keeps track of node health and log management It continuously communicates with the ResourceManager to remain up-to-date Hadoop Ecosystem So far you would have figured out that Hadoop is neither a programming language nor a service; it is a platform or framework which solves Big Data problems. You can consider it as a suite which encompasses a number of services for ingesting, storing and analyzing huge data sets along with tools for configuration management. Hadoop Ecosystem - Hadoop Tutorial Now in this Hadoop Tutorial, let us know how Last.fm used Hadoop as a part of their solution strategy. Last.fm Case Study Last.fm is an internet radio and community-driven music discovery service founded in 2002. Users transmit information to Last.fm servers indicating which songs they are listening to. The received data is processed and stored so that the user can access it in the form of charts. Thus, Last.fm can make intelligent taste and compatibility decisions for generating recommendations.
The data is obtained from one of the two sources stated below: scrobble: When a user plays a track of his or her own choice and sends the information to Last.fm through a client application. radio listen: When the user tunes into a Last.fm radio station and streams a song. Last.fm applications allow users to love, skip or ban each track they listen to. This track listening data is also transmitted to the server. 1. Over 40M unique visitors and 500M page views each month 2. Scrobble stats: Up to 800 scrobbles per second More than 40 million scrobbles per day Over 75 billion scrobbles so far 3. Radio stats: Over 10 million streaming hours per month Over 400 thousand unique stations per day 4. Each scrobble and radio listen generates at least one log line Hadoop at Last.FM 100 Nodes 8 cores per node (dual quad-core) 24GB memory per node 8TB (4 disks of 2TB each) Hive integration to run optimized SQL queries for analysis Last.FM started using Hadoop in 2006 because of the growth in users from thousands to millions. With the help of Hadoop, they processed hundreds of daily, monthly, and weekly jobs including website stats and metrics, chart generation (i.e. track statistics), metadata corrections (e.g. misspellings of artists), indexing for search, combining/formatting data for recommendations, data insights, evaluations & reporting. This helped Last.FM grow tremendously and figure out the taste of their users, based on which they started recommending music. I hope this blog was informative and added value to your knowledge. If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, Python, Ethical Hacking, then you can refer to Edureka’s official site. Do look out for other articles in this series which will explain the various other aspects of Big Data.
https://medium.com/edureka/hadoop-tutorial-24c48fbf62f6
['Shubham Sinha']
2020-09-10 09:39:54.454000+00:00
['Big Data', 'Hadoop', 'Hadoop Training', 'Hdfs', 'Mapreduce']
Multi-label Text Classification with Scikit-learn and Tensorflow
Multi-label models There exist multiple ways to transform a multi-label classification problem, but I chose two approaches: Binary classification transformation — This strategy divides the problem into several independent binary classification tasks. It resembles the one-vs-rest method, but each classifier deals with a single label, which means the algorithm treats the labels as independent of one another. Multi-class classification transformation — The labels are combined into one big multi-class problem called the label powerset. For instance, having the targets A, B, and C, with 0 or 1 as outputs, we have A B C -> [0 1 0], while the binary classification transformation treats it as A B C -> [0] [1] [0]. The evaluation metric used to measure the performance of the models is the AUC measure, which stands for “Area Under the ROC Curve.” A ROC curve is a graph showing the performance of a classification model at all classification thresholds. Figure 8 — AUC (Area Under the Curve) This curve plots two parameters: True Positive Rate TPR = TP/(TP+FN) False Positive Rate FPR = FP/(FP+TN) TP = True Positive; FP = False Positive; TN = True Negative; FN = False Negative A model’s performance is assessed after running it with 5 different seeds to try to mitigate any bias. Scikit-learn First of all, it is necessary to vectorize the words before training the model, and here we are going to use the tf-idf vectorizer. Tf-idf stands for term frequency-inverse document frequency, and the tf-idf weight is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus. from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(strip_accents='unicode', analyzer='word', ngram_range=(1,3), norm='l2') vectorizer.fit(X_train) X_train = vectorizer.transform(X_train) X_test = vectorizer.transform(X_test) 1. OneVsRestClassifier The estimator used was RandomForestClassifier, and since the labels are analyzed separately, the result is the average of the AUC scores of the categories. from sklearn.multiclass import OneVsRestClassifier from sklearn.ensemble import RandomForestClassifier Figure 9 — AUC score per category AUC score: 0.517097 2. BinaryRelevanceClassifier This method is very similar to OneVsAll, but not the same. If there are x labels, the binary relevance method creates x new datasets, one for each label, and trains single-label classifiers on each new data set. Each classifier answers yes/no, thus the “binary relevance.” This is a simple approach but does not work well when there are dependencies between the labels. The estimator used is GaussianNB (Gaussian Naive Bayes). from skmultilearn.problem_transform import BinaryRelevance from sklearn.naive_bayes import GaussianNB from sklearn.metrics import roc_auc_score classifier = BinaryRelevance(GaussianNB()) classifier.fit(X_train, y_train) predictions = classifier.predict(X_test) print('AUC score: {}'.format(roc_auc_score(y_test, predictions.toarray()))) AUC score: 0.544241 3. ClassifierChain This approach retains much of the computational efficiency of the Binary Relevance method while still being able to take the label dependencies into account for classification. On the other hand, that makes this method computationally more expensive.
The estimator used is LogisticRegression. from skmultilearn.problem_transform import ClassifierChain from sklearn.linear_model import LogisticRegression classifier = ClassifierChain(LogisticRegression()) classifier.fit(X_train, y_train) predictions = classifier.predict(X_test) print('AUC score: {}'.format(roc_auc_score(y_test, predictions.toarray()))) AUC score: 0.519823 4. MultiOutputClassifier This strategy consists of fitting one classifier per target (A B C -> [0 1 0]). This is a simple strategy for extending classifiers that do not natively support multi-target classification. The estimator used is KNeighborsClassifier. from sklearn.multioutput import MultiOutputClassifier from sklearn.neighbors import KNeighborsClassifier clf = MultiOutputClassifier(KNeighborsClassifier()).fit(X_train, y_train) predictions = clf.predict(X_test) print('AUC score: {}'.format(roc_auc_score(y_test, predictions))) AUC score: 0.564452 Tensorflow Text classification has benefited from the deep learning architectures’ trend due to their potential to reach high accuracy. There are different libraries available for deep learning, but we chose Tensorflow because, alongside PyTorch, it has become one of the most popular libraries for the topic. Word embeddings are low dimensional, as they represent tokens as dense floating-point vectors and thus pack more information into fewer dimensions. This technique normally gives a performance boost in NLP tasks, for instance, syntactic parsing and sentiment analysis. It is possible to either train the WordEmbedding layer or use a pre-trained one through transfer learning, such as word2vec and GloVe. For the following models, the vectorization used was texts_to_sequences, which transforms the words into numbers, and pad_sequences ensures all the vectors have the same length. from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences tokenizer = Tokenizer(num_words=5000, lower=True) tokenizer.fit_on_texts(data['description']) sequences = tokenizer.texts_to_sequences(data['description']) x = pad_sequences(sequences, maxlen=200) Class weights were calculated to address the imbalance problem in the categories. most_common_cat['class_weight'] = len(most_common_cat) / most_common_cat['count'] class_weight = {} for index, label in enumerate(categories): class_weight[index] = most_common_cat[most_common_cat['cat'] == label]['class_weight'].values[0] most_common_cat.head() Figure 10 — Class weights 1. DNN with WordEmbedding We started with a simple model which only consists of an embedding layer, a dropout layer to reduce the size and prevent overfitting, a max-pooling layer, and one dense layer with a sigmoid activation to produce probabilities for each of the categories that we want to predict. from keras.models import Sequential from keras.layers import Dense, Embedding, GlobalMaxPool1D from keras.optimizers import Adam import tensorflow as tf model = Sequential() model.add(Embedding(max_words, 20, input_length=maxlen)) model.add(GlobalMaxPool1D()) model.add(Dense(num_classes, activation='sigmoid')) model.compile(optimizer=Adam(0.015), loss='binary_crossentropy', metrics=[tf.keras.metrics.AUC()]) Figure 11 — DNN architecture AUC score: 0.890245 2. CNN with WordEmbedding Convolutional Neural Networks recognize local patterns in a sequence by processing multiple words at the same time, and 1D convolutional networks are suitable for text processing tasks.
In this case, the convolutional layer uses a window size of 3 and learns word sequences that can later be recognized in any position of a text. from keras.layers import Dense, Activation, Embedding, Flatten, GlobalMaxPool1D, Dropout, Conv1D filter_length = 300 model = Sequential() model.add(Embedding(max_words, 20, input_length=maxlen)) model.add(Conv1D(filter_length, 3, padding='valid', activation='relu', strides=1)) model.add(GlobalMaxPool1D()) model.add(Dense(num_classes)) model.add(Activation('sigmoid')) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[tf.keras.metrics.AUC()]) Figure 12 — CNN architecture AUC score: 0.886286 3. LSTM with GloVe WordEmbedding In this model, we will use GloVe word embedding to convert text inputs to their numeric counterparts, which is a different approach because this is a pre-trained layer. The model will have one input layer, one embedding layer, one LSTM layer with 128 neurons, and one output layer with 21 neurons (the number of targets.) from keras.layers import Flatten, LSTM from keras.models import Model deep_inputs = Input(shape=(maxlen,)) embedding_layer = Embedding(max_words, 100, weights=[embedding_matrix], trainable=False)(deep_inputs) LSTM_Layer_1 = LSTM(128)(embedding_layer) dense_layer_1 = Dense(21, activation='sigmoid')(LSTM_Layer_1) model = Model(inputs=deep_inputs, outputs=dense_layer_1) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[tf.keras.metrics.AUC()]) Figure 13 — LSTM architecture AUC score: 0.887574
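As excerpted here, the Keras snippets compile the models but never show the training and evaluation step that produced the quoted AUC scores. A minimal sketch of that step is shown below; the train/test split, batch size, and epoch count are my own assumptions, x is the padded sequence matrix built earlier with pad_sequences, and y is assumed to be the binarized multi-label target matrix.

from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical split of the padded sequences (x) and the multi-label target matrix (y)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)

# Train whichever of the compiled models above is currently bound to `model`
model.fit(x_train, y_train, batch_size=32, epochs=5, validation_split=0.1)

# The sigmoid output gives one probability per label, so roc_auc_score
# averages the per-label AUCs, matching the style of scores quoted in the article
y_pred = model.predict(x_test)
print('AUC score: {:.6f}'.format(roc_auc_score(y_test, y_pred)))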
https://medium.com/swlh/multi-label-text-classification-with-scikit-learn-and-tensorflow-257f9ee30536
['Rodolfo Saldanha']
2020-05-08 21:21:33.333000+00:00
['Deep Learning', 'Artificial Intelligence', 'Neural Networks', 'Towards Data Science', 'Machine Learning']
Glossier: A Brand You Can Be Friends With
Treat Customers Like Friends Glossier invested most of the day interacting with their followers on Instagram. They would answer questions and comments daily. They already had a big community — thanks to Into The Gloss, the beauty blog that kick-started the brand — but as they gained more of the spotlight, Glossier were able to build a cult-like following. Glossier set themselves apart from their competitors by continuously engaging their customers in order to create products that resonated with the followers’ needs. Every product they launched was the result of a co-creation process between the community and the brand. When the brand diligently cultivated interest to fulfill their customers’ needs, the customers would reward them with fierce loyalty. Glossier’s best-selling fragrance that claimed ‘You’ as its most important ingredient On Twitter, Glossier focused on building a friendly and approachable rapport rather than flooding their feed with non-stop promotion. Their tweets were witty and relatable, inviting people to retweet or respond to the posts. Glossier’s not overly promotional tweet Turn Customers Into Influencers & Evangelists Another thing that should be highlighted is Glossier’s strategy of encouraging their Instagram followers to post their own beauty regime and tag the brand’s social account. They turned ordinary girls into social media influencers and evangelists featuring their products. Their authentic, humanizing, and personal image seemed to be the mindset behind Glossier’s high engagement rate. People would post snazzy pictures that feature Glossier’s products and write good reviews to be worthy of a Glossier repost. A product review by Glossier’s customer This user-generated content played a dominant role in uplifting Glossier’s sales, as Emily Weiss, the CEO & Founder of Glossier, said that 70% of their sales come from direct/organic/referral traffic. Moreover, this strategy cut a large chunk of the budget used for marketing expenses. Not Glossier-centric Glossier implemented a slightly different strategy on YouTube. Their most-watched content was a series called Get Ready With Me, featuring women doing their beauty routines. The women were often bare-faced at the beginning of the video before gradually applying their skincare and makeup — not to cover up their flaws but instead to make their physical features stand out. The series was fascinating because the videos often included products from other brands. Glossier aimed to add value to their brand by producing informative and helpful content, as opposed to a blatant Glossier-centric sales pitch.
https://medium.com/digital-society/glossier-a-brand-you-can-be-friends-with-f1790e4cb29c
['Sashi A.']
2020-02-14 01:27:29.037000+00:00
['Glossier', 'Marketing', 'Digital Marketing', 'Digisoc1']
26+ Useful Machine Learning Blogs and Newsletters to Increase Your Productivity
These are blogs, sites, and newsletters that my colleagues read and I browse through. They help us keep up with the rapidly increasing Machine Learning knowledge base. These are good summations that guide us to the more detailed papers we should read in our specializations. Research Paper Tools I think many researchers use arxiv Sanity Preserver, but it is talked about rarely. I use arxiv Sanity Preserver solely to find articles of interest. I use Mendeley to store, read, and mark up those PDF-formatted articles that I find interesting through arxiv Sanity Preserver. Blogs Over the last eight years, a newer phenomenon has been the rise of blog sites that are great supplements to arxiv. The following are my “goto” blog sites: Medium is a myriad of blogs divided by class or category into on-line daily publications. It is one of our “goto” blog sites. Also, Medium publications such as towardsdatascience, theStartup, machine-learning, Artificial Intelligence, and programming explain and summarize techniques and papers and expose us to what is new or a leading trend. Distill is probably my favorite blog site. What draws me is the clarity of the topic under discussion. The second draw is illuminating graphics. There is an abundance of interactive graphics, probably done with hand-crafted JavaScript. I feel compelled to say, you can get close to these visual effects with the latest releases of HiPlot and Streamlit, both of which are covered in later blogs. The Deeplearning.ai blog site lists all the courses produced by Deeplearning.ai, a growing list of tutorials, the “Pie & AI” signup, and “Pie & AI” event descriptions and dates. I recommend that you go to the Deeplearning.ai site at least once a month to review the new material. Sites The Ethical Machine Learning’s awesome-production-machine-learning and awesome-artificial-intelligence-guidelines sites. I start here if I want to find the latest and greatest in the Machine Learning production or ethics categories. Machine Learning Production topics. Source: https://github.com/EthicalML/awesome-production-machine-learning The fast.ai site references a blog, book, package, community chat group, and three years of courses ranging from beginner to SOTA (state-of-the-art) on deep learning. The package and courses are based on Pytorch, except for the first course, which was based on Tensorflow. I expect another course version next year and another release of the fast.ai package. Last year they took a stab at Tensorflow on Swift. It will not surprise me if the upcoming course introduces Julia.
realpython has the best Python tutorials I have ever read on advanced Python fundamentals. People who are Python beginners, as well as multi-year experienced Python software engineers, can learn from the tutorials on this site. google.ai.hub has components (mostly docker images), documentation, tutorials, Jupyter notebooks, code, Tensorflow examples, Kubeflow pipeline example codes, and much more, all centered on Machine Learning. The google.ai.hub contents can be used on your local computers and the cloud. It is not specific to the Google Cloud Platform (GCP). Kaggle is the site for Machine Learning competitions, all artifacts associated with the Kaggle competitions, and various real-world domain datasets in Machine Learning. Many people have used the fast.ai package to learn machine learning techniques that place them in the top 10% of any Kaggle competition. PapersWithCode will not stop you from searching GitHub for Python packages but will help you get the source code associated with a published machine learning paper. More critically, this community effort has both TRENDING and SOTA tabs. PapersWithCode is a fantastic complement to arxiv Sanity Preserver. The mission of Papers with Code is to create a free and open resource with Machine Learning papers, code, and evaluation tables. — https://paperswithcode.com/about datasciencecentral.com does not generally have the best blogs, but there may be a few gems for you here. Other great machine learning blogs are: Newsletters TheSequence Scope is a free subscription, while the Edge has a $50/year subscription fee. The logo and quote state their mission well. TheSequence is an unusual way to learn and reinforce your knowledge about machine learning and artificial intelligence. The Algorithm from MIT Technology Review is a weekly newsletter that summarizes the latest machine learning news. The Algorithm is a newsletter for people who are curious about the world of AI. I’m here to help you cut through the nonsense and jargon to figure out what truly matters and where all this is headed. You’ll hear from me every Friday with updates and thoughts on the latest AI news and research (as well as some added magic and memes). — Karen Hao, Senior Reporter The Batch is a weekly newsletter written by Professor Andrew Ng and is one of several products of deeplearning.ai. Miscellaneous These are useful podcasts, especially if you are in a situation where you cannot read. I find it useful to print these out. I am old school, as you get Python doc snippets with a keystroke in most of the Python IDEs (Integrated Development Environments — code editors).
https://medium.com/swlh/26-useful-machine-learning-blogs-and-newsletters-to-increase-your-productivity-a5c4d171eaa4
['Bruce H. Cottman']
2020-12-26 08:45:42.683000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Knowledge', 'Blog']
Making Sense of Roman Numerals
In today’s world, we take for granted the numerical system that we use on a daily basis. The average human being is born with ten fingers, hence a counting system on base 10 is very natural to us. We have exactly 10 elementary numerical symbols, ranging from 0 to 9. Every other natural number can be expressed in a permutation of these 10 symbols. I was teaching my eldest daughter addition with change, and she was able to keep up with the lesson. This serves as proof that a five-year old would readily grasp numbers in base 10, and would be able to add any two natural numbers to form a third natural number using no more than the 10 elementary numerical symbols. What if human beings were born with other than 10 fingers? What if an alien race were born with 8 digits in total on both hands instead? Would they be computing in base 8? Sometimes, even when we have 10 fingers on 2 hands, the fact that it is possible to consider just the 5 fingers on 1 hand, civilisation could reach a different conclusion and decide to count in base 5. The Mayans used both fingers and toes to develop a base 20 system. Today’s post is inspired by this challenge on Edabit. So if you want to attempt to solve the puzzle by yourself beforehand without any spoilers, do not read any further. The Romans had a very interesting counting system that is still in use today. The system combines elements of 5s and 10s, as well as 5 less 1s and 10 less 1s. This logic is extended by each power of 10. So, think of 50s and 100s, 50 less 10s and 100 less 10s. And so on and so forth. When we break the counting method down to individual elements, we can group them into two distinct sets of data. In this example we arrange them into two Python dictionaries. In the first dictionary, we define what each latin numeral represents. Unlike Arabic numerals, which have 10 symbols, the Romans made do with 3 at each 10s. So at its simplest, you could constitute any number from 1 to 10 using just, I, V, and / or X. So II would represent 2, and VII would represent 7. The counting system also had a minus-one feature, where IV would represent 4, or V less I. This necessitates the second Python dictionary, which handles these minus-one examples, for both 5s, 10s, or its 10-multiples. So XL would be 50 less 10 to make 40, while CM would be 1,000 less 100 to make 900. With this basic understanding, we set off to write a program that would be able to convert a modern-day Arabic numeral into its Roman equivalent, and vice versa. So — I know we have not written these yet — let us imagine that we already have one program called convert_numeral_to_roman and another one called convert_roman_to_numeral. We can deduce the nature of the conversion by looking at the input’s data type. If a Roman numeral is fed into the program we want to write, it would be a string of Latin letters. If it were modern-day numerals, it would be an integer. So we could write the function-in-question like this: While we have not written both sub-programs, it is good to jot these two lines down first so we know there are two to dos in order to make the program work. So for the first part, how do we parse a latin numeral into its modern-day equivalent? One way which I found works is to translate the numbers from left to right, adding on as we go along. It is important to start from the minus-one dictionary first. As two elements are analysed at the same time, this resolves the disambiguation problem of the machine misinterpreting the Roman letters for its individual parts. 
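Before walking through the edge cases below, here is a minimal sketch of what such a program might look like. The two conversion function names come from the article itself; the dictionary names and the small convert dispatcher are my own assumptions.

ROMAN_VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
MINUS_ONE_PAIRS = {'IV': 4, 'IX': 9, 'XL': 40, 'XC': 90, 'CD': 400, 'CM': 900}

def convert_roman_to_numeral(roman):
    """Parse a Roman numeral left to right, checking the minus-one pairs first."""
    total = 0
    while roman:
        try:
            # Looking up a two-letter slice that is not a minus-one pair raises KeyError
            total += MINUS_ONE_PAIRS[roman[:2]]
            roman = roman[2:]
        except KeyError:
            total += ROMAN_VALUES[roman[0]]
            roman = roman[1:]
    return total

def convert_numeral_to_roman(number):
    """Encode an integer by exhausting the largest-value letters first."""
    combined = {**ROMAN_VALUES, **MINUS_ONE_PAIRS}
    # Swap keys and values so the integers become the lookup keys
    value_to_letters = {value: letters for letters, value in combined.items()}
    result = ''
    for value in sorted(value_to_letters, reverse=True):
        while number >= value:
            result += value_to_letters[value]
            number -= value
    return result

def convert(value):
    # Dispatch on the input's data type: strings are Roman, integers are Arabic
    if isinstance(value, str):
        return convert_roman_to_numeral(value)
    return convert_numeral_to_roman(value)

print(convert('MMXX'))  # 2020
print(convert(2021))    # MMXXI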
For instance, had we used the first dictionary, IV would be read as 1 plus 5 equals 6, instead of 4. It is only when the program is not able to find the pattern in the first two elements of the Roman numeral that we would resort to analysing just the first element. In Python, this could be done by purposely causing the program to throw an error by asking it to search for a key that does not exist in the minus-one dictionary, redirecting the code using the try-except arrangement. Once we are in the exception handling block, we change our analysis to just the first element of the Latin string. As the elements get analysed, we add the sum of the parts and successively shorten the remainder of the Roman numeral. The answer is found when there are no more Latin letters left to analyse. What if a number is fed to the program? This time, we would have to translate the number into its constituent Roman letters — starting from the largest possible value. The dictionary in use here would be a merger of the two original dictionaries since we only need one here. This time the access would go in reverse — the modern-day integers would be used to determine the Roman letters. So we use dictionary comprehension to swap the keys and values, utilising the items method native to the dictionary object in the process. Basically, we could think of dictionary.items() as a list of key-value tuples. The technique is a must-know in the art of dictionary manipulation. Once the combined dictionary with integer keys is in place, we can start encoding the Roman letters — exhausting the largest-value letters first before descending down the chain. This is why we loop by accessing the keys, sorted from largest to smallest. We keep encoding until the remainder is 0 and we subsequently return the answer. If you have followed this far, I hope that you would agree that while MMXX has not been easy — let us continue to do the best we can and emerge stronger once MMXXI comes along.
https://medium.com/swlh/making-sense-of-roman-numerals-c49d55e2b682
['Kelvin Tan']
2020-09-16 07:01:38.280000+00:00
['Roman Numerals', 'Dictionary', 'Number System', 'Python', 'Higher Order Function']
7 Reasons To Quit Garbage Writing
7 Reasons To Quit Garbage Writing Stop Writing and Start Reflecting Image by Gerardo Gómez from Pixabay Garbage writing is overbeaten advice for dealing with writer’s block. Write a chunk of crap, then give it a brutal pruning, says the advisor. Pen any thought that dances across your mind. Keep punching the keyboard till your fingers ache. Quality, they say, is born of quantity. Just write till your writer’s block disappears. Unlike speaking, where you can hover around sounding grandiose and saying absolutely nothing, quality writing requires deep reflection and concrete messages. The message is as important as the medium. If you can’t reflect, write, and rewrite your piece, then your writing blows the whole point. Don’t write for writing’s sake. More than any other means of communication, writing commands public perception, dictates the truth, and alters life decisions. Thus unsolicited writing can be harmful to writers and even more deadly when made public. If that’s not enough to scare you off the famished road, here are seven solid reasons you probably should reconsider garbage writing your way out of writer’s block. It Is Not Deep Work Garbage writing is not deep work. Deep work is meaningful work. Deep work is intentional, duly, and dutifully engaged. Deep work is the ability to focus without distraction on a cognitively demanding task. It provides a sense of true fulfillment that comes from craftsmanship. In Deep Work, author and professor Cal Newport emphasizes that deep work is the superpower in our increasingly competitive twenty-first-century economy. Writing whatever flashes through your mind is the archetype of shallow work. Roaming through thoughts to write whatever pops out lacks all intent and intensity. It is the sort of work done without focus. It is not satisfying, neither can it be fulfilling. It lacks all advantages and accounts for no meaningful progress. Deep work allows your art to contribute to the greater good. Shallow work adds up to nothing in particular. Writing for writing’s sake does not necessarily add to the body of literature you are crafting out of your writing career. Fact is: it will largely serve nobody, including you. It Might Crash Your Self-esteem When all you write all day are random thoughts, your motivation suffers and it goes on to hurt your self-worth. You lose confidence in your work. Art ceases to amuse you. The artist in you falls sick. You will feel dread in a lonely world. You might feel detested and amount to nothing. In a short while, you might begin to consider yourself a failure. It is not uncommon for writers to commit suicide. In fact, insanity, addiction, and suicide permeate all forms of art. Be careful of overstretching yourself to write and thus writing profanity. The backlash can be more ruinous than your discomfort at lacking something to write about. It Stops Working After two weeks of writing garbage, what happens? It stops working. You halt and become clueless about what next to trash out. Repetitive use of language will bore you out. You will get sick of it. Then, you are back to the problem you sought to escape from. You Might Become Addicted You can also get stuck writing trash after a stretch of time. New habits become solidified starting from the first 18 days. If you push through and keep up your garbage writing for that long, you might become addicted to it. At one point, I got so addicted to this that I referred to myself as a narrative poet.
I couldn’t necessarily communicate well-thought-out messages anymore, so I began to hide under the broad umbrella of poetry. If you fall victim, it might become difficult to rescue yourself from the vicious cycle. You Have Nothing To Publish Writing without a purposeful end amounts to junk. No publication takes junk from writers. No audience will swallow that either. Quite frankly, your introspective self will fight you not to embarrass yourself if you attempt posting it on your feed. Trust me, the lizard brain wins every round of such a fight. Over time, you are blessed with drafts. Possibly tens of them that are worth nothing. Bad for the eyeballs and hurtful to your ego. It Doesn’t Serve Your Audience KO! You won this time. You pushed through your trash. Pinned right on top of every other post, tweeted, and shared on Facebook. A few hundred views rush at it. Good comments or bad ones, engagement is engagement. But then, the guilt is real. You know writing trash doesn’t add an inch of value to your audience. Nobody can possibly learn from it. An uncollected stream of thoughts with fanciful words drives no points home. It Is Not Profitable The dream of every writer is to get published, control a large audience, and make some money. All of that is a mirage if your content fails to gain ground and break through the crowd. If your writing isn’t worth publishing, it just cannot build an audience either. As for the money, forget it altogether. A Word of Wisdom Deep work is intense, requires focus, and is cognitively demanding. Meaningful writing is deep work. Don’t bash empty words against blank pages. When stuck, do something worth writing about, then write about it. You have four ways to attack your writer’s block — why, what, how, and when. Get a new purpose for your writing — be firm on your why. The clarity of why makes magic happen. What to write about is what is worth reading about. Settle on an idea to explore and share your light on. This article is about garbage writing. Should you list out the key lessons, or take us through a chronology of how the world evolved? How is the process toward your why and the means to deliver your what. In his latest book, When — The Scientific Secrets of Perfect Timing, bestselling author Daniel H. Pink provides fresh insights from biological and behavioral science on the hidden patterns of everyday life. His research shows that through the day there is always a peak, a trough, and a rebound. Pay careful attention to your peak and rebound times to leverage the power of when to engage your writing. Finally, for consistently meaningful writing, it is best to gather enough experience and endeavor to experience enough. Pick up a new book, take on new challenges, figure out new ways of doing things, advise a younger you, or share your observations about the world.
https://medium.com/nano-writers/7-reasons-to-quit-garbage-writing-36e8c8eb4912
['Timmy Brain']
2020-11-21 06:47:38.565000+00:00
['Writing Life', 'Thinking', 'Writing', 'Quitting', 'Reasons Why']
Eco-packaging: what are the options?
Eco-packaging: what are the options? By Biyya Mansoer Cover image credit: Left background image: Unsplash, Right background: The Happy Bag Co. Centre image: @aaltointernational Nowadays, packaging from shopping does not only serve the purpose of protecting the products. Packaging can be aesthetically pleasing, fun, or luxurious and this brings us a joyful and unique unboxing experience. While we can get excited over the box, bag, or wrapper that our items come in, single-use packaging is a major contributor to the waste crisis today. In the EU alone, packaging waste represents approximately 87 million tonnes or about 170kg per person (according to Eurostat). Why single-use packaging is unsustainable Single-use packaging (plastics in particular) is often very difficult to recycle. According to U.N Environment, only 9% of the world’s nine billion tonnes of plastic has been recycled. Small items such as plastic bags or bubble wrap can get stuck into the crevices of recycling machines. Therefore, they are often rejected by recycling centres (Financial Times). In addition, plastics are not compostable; instead, they break down into smaller pieces of plastic called microplastics. The inability to be recycled or composted results in them being discarded as waste and then dumped into landfills. Unrecycled plastic waste can also end up floating around in our oceans, harming marine animals that mistake them for food (National Geographic). The fashion industry itself is no stranger to the current waste crisis. The drastic increase in online orders today suggests that there are more products delivered to consumers using single-use plastics. Luckily, there have been innovations of eco-friendly alternatives. We have previously discussed the impact of online vs offline shopping and how you can minimise your environmental footprint when shopping online in this article. But now, we are discussing the types of eco packaging and plastic alternatives that you can look out for to reduce your waste from online orders. But first, what exactly is eco-packaging? While ‘unsustainable’ packaging is single-use, you can probably guess that eco-packaging is the opposite. Eco-packaging is developed with the purpose of reducing waste and the environmental footprint in the life cycle of packaging. The concept of eco-packaging packaging may not be as straightforward as you may think; it can be a little complex. It is much more than just prioritising paper packaging or cardboard boxes. Why is that? There are so many types of packaging used in different stages of the supply chain. This can be from shipping boxes to plastic bags that protect clothing from moisture during shipping. And of course, the final packaging that holds the final product. Due to the complex nature of the supply chain, many companies have put minimal packaging as a common goal in their sustainability initiatives. This is a step in the right direction. The growing consumer pressure for sustainability and the rise of environmental activist groups in recent years have made a significant contribution to such efforts. We, as consumers, have become increasingly aware of the negative impacts that excessive waste can have on our planet. We can now assess a brand’s commitment to sustainability by looking at the type of packaging that they use. While not every stage of the supply chain uses sustainable packaging, you can look for better alternatives for the material that your shopping is packed in. So, what are the alternatives? 
Recyclable and recycled packaging Recyclable packaging is probably one of the most commonly used alternatives by many brands. Packaging that can be recycled is made from materials that can be transformed into something new after it has been used. Cardboard and paper packaging can often be recycled so make sure that you dispose of them in the recycling bins accordingly. Recycling is extremely important because it helps to divert waste from landfills. Image credit : Dawn Printing Many brands today are pushing to become more sustainable and this often means that they begin to disclose the type of packaging they use. Whether the brand is sustainable or not, they may use cardboard/paper packaging or wrappers that are recycled. To ensure that the packaging comes from responsibly sourced forests you can look for the Forest Stewardship Council (FSC) certification. The FSC is an international non-profit organisation that is committed to promoting the responsible management of forests worldwide. Another certification you may look out for is the Global Recycled Standard (GRS). The GRS verifies the recycled content of products as well as environmental and social responsibility practices. Certifications can ensure that you will not be greenwashed! Here are some brands on Renoon that incorporate recyclable and/or recycled packaging: # Eticlò and Reformation currently use plastic-free and 100% recycled paper products incorporated into their packaging that is also FSC certified. # Underprotection: all of their cardboard boxes, gift boxes, and wrapping paper is made from recycled material. Their gift boxes and bags are FSC certified. Postal bags are made from recycled plastics that are GRS certified. # Girlfriend Collective uses packaging that is 100% recycled and recyclable. # TALA currently utilises packaging that is recycled and recyclable as well as tags that are made from 100% plantable paper. # Woodstrk: packaging & hang tags are eco-friendly, plastic-free, and made from 100% recycled cardboard. Compostable packaging By definition, compostable material is biodegradable but with an added benefit: they decompose and become food for new plants (Bio plastics News). Packaging that is compostable is made from plant-based materials that can break down. Although, keep in mind that just because your packaging is compostable does not mean that you should dispose of it right away! While compostable packaging can be biodegradable, it may not always be the case, according to WRAP. Therefore, it is useful to read the labels in your compostable package to know how you can discard it correctly because sometimes they specifically suggest that you place them in the compost bin. In the fashion industry, compostable packaging is still not widely accessible because producing and sourcing them can be expensive and time-consuming. Not to mention that it is still a fairly recent invention and therefore there is no existing system in place to enforce this as a standard. Fortunately, there are fashion brands on Renoon that have begun to use compostable packaging: # Pangaia and Gabriella Hearst use TIPA packaging — a part bio-based plastic alternative that can fully disappear within 24 weeks in a compost facility. TIPA packaging can be put in a home compost or industrial compost systems along with food waste. # Reformation has incorporated some compostable packaging made from biomaterials. 
# September The Line has begun to use postage bags that are 100% compostable and are made from sustainably-sourced plants # Santicler commits to a zero-waste business model and uses fully compostable packaging crafted from bio-plastics. In fact, their bags are made just like an orange peel — they are fully compostable in less than 30 days with no toxic residue! Image credit: Reformation (left), SupplyCompass (Right) Reusable packaging Although it is not widely accessible just yet, reusable packaging has been on the rise recently as an alternative to reduce waste. RePack is a reusable and returnable packaging service that has been collaborating with fashion retailers. On Renoon, brands such as Ganni and Mud Jeans have partnered with RePack as a solution to provide free reusable and returnable packaging for their online orders in the future. Would you like to see more brands on Renoon that are utilising reusable packaging? Or, would you like to be notified when your favorite brands on Renoon will start using reusable packaging? Let us know! Image credit: Aalto (left), Pack-pedia (right) What you can do to reduce your packaging waste Image credit: Pinterest (original source unknown) As eco-packaging is being used more and more, we can now choose to shop from brands that utilise eco-friendly alternatives. However, some types of eco-packaging such as the compostable or reusable types are not widely accessible just yet. Not to worry! You can try to shop less or buy multiple items in one single order to minimise the use of single-use packaging. It is all about doing the best you can to reduce your waste footprint. Is sustainable packaging a priority for you? Let us know! In the meantime, start your search for fashion that is good for our planet.
https://medium.com/age-of-awareness/eco-packaging-what-are-the-options-ee527478b41a
[]
2020-08-24 15:33:46.075000+00:00
['Packaging', 'Innovation', 'Sustainable Development', 'Sustainability', 'Fashion']
13 Surprising Ways Opposites Attract Like Magnets
13 Surprising Ways Opposites Attract Like Magnets Many people believe that couples who share the same hobbies, personalities, skills, and career interests stick together for life. Still, there is proof that opposites attract with a love that never ends. Photo by Marc A. Sporys on Unsplash Is it possible for opposite partners to maintain happiness and love in their relationship? I believe it’s true because I’ve witnessed it with my own eyes in both sets of my grandparents, who remained married and deeply in love until their deaths. Science also proves it. According to the Myers-Briggs test, people draw closer to those who have opposite personality traits. Thirteen ways to know for sure that opposites can attract and maintain a loving relationship for life are: 1. Your partner is mannerly, and you’re the wild kid. Imagine an intelligent, smart, and mannerly man who works in the medical field with a young woman who was a wild child throughout her years. She has a strong sense of humor, has lived through harsh life experiences only to overcome them, and works in the legal field. She’s also a few inches taller than him. His ambitiousness, honesty, loyalty, intelligence, gentleness, maturity, and respect is a significant turn on for someone like her. She may also remind him to enjoy life now and then and embrace the feeling of fun and humor. Joseph Cilona, PsyD., a licensed clinical psychologist in Manhatten, explains, “Many important aspects of relationships, especially personality traits, needs, and preferences are a much better fit when they are opposite or complementary, rather than similar.” 2. An introvert and extrovert balance out each other. An introvert may never feel like socializing or going out, but the extrovert can fuel the desire to go out now and then. On the other hand, an extrovert feels like going out frequently but will learn to enjoy and love relaxing days at home with their introvert. It eventually becomes a balance. 3. You don’t care what other people say or think, are madly in love with them for who they are, and have nothing to prove to anyone. Everyone may ask, “What do you see in him/her? He/she’s not attractive. He/she’s too old/young for you. You have nothing in common. It won’t work!” Listen to your heart instead of seeking the approval of everyone else. You know you truly love someone when they are opposite of you, you can’t stop loving them or thinking about them, and you’re sincere about your feelings. Some people date the wrong person to prove to their families or friends that they have an open mind or spite them. It’s crucial to recognize that you’re dating an opposite person sincerely for yourself and not for spite. 4. You’ll embarrass each other but find them adorable. This one reminds me of both sets of my grandparents. Both of my grandfathers were tall men, and my grandmothers were incredibly petite. My mom’s dad was the wild child, the social butterfly, spoke his mind, had an extreme sense of humor, and my grandma is an introverted, quiet, and shy book nerd. My dad’s dad was business-like, proper, and mannerly, and my other grandma was an outgoing, tomboyish, and sassy little spitfire. The way they teased and embarrassed each other in front of us never ended, and it was an expression of their love toward each other. They were significantly different, but their passion and attraction to each other remained the same, even in old age. 
As I look back through my childhood and teens, I’ve recognized the same “wild child” traits in my maternal grandpa and paternal grandma in myself. In my recovery from trauma, I’m gradually and gratefully becoming that “wild child” again. 5. You’ll add to each other’s strengths. In the beginning, an introvert and extrovert can be shy and nervous around each other. Then, the socialite eventually begins talking and cracking jokes quite a bit. The extrovert teaches the introvert ways to communicate and have a little fun with the conversation. Another example is an introvert who loves to comfort their partner by physical and emotional touch. The extrovert might have always recoiled at cuddling, but their introverted partner will show them the art of love in that way to help them feel comforted. 6. You’ll love learning their views and respect them. When you truly love someone, you develop mutual respect without arguing. Both of you may have different political views, but you respect each other’s boundaries and opinions without forcing each other to do anything. When you both have mutual respect out of true love, your relationship has the potential to last a lifetime. 7. You’ll start practicing some of their characteristics and habits. Every person changes through time, and it happens more often in couples who have different characteristics. You may admire one or more of your partner’s habits and begin making them yours. Implementing their practices can lead them to have more patience with you, and you’ll also enjoy it. 8. You’ll complement, not complete each other. No one on this earth needs another person to feel whole. You’re the only one who can make yourself feel that way. On the contrary, it’s a beautiful thing when you have another person to enhance you, and you have many differences. Examples of great opposites are: He’s short and has trouble reaching things, and she’s tall and can reach almost anything. He’s a cuddler, but she has never felt comfortable cuddling. He shows her the comforting feeling of it, and she gradually begins to love cuddling with him. He’s the serious, mannerly, organized, homebody, and she’s the funny tomboyish thrillseeker who gets him out of his shell now and then. 9. You’ll develop an open mind to learn and try new things. Although you and your partner may have many differences, the extrovert can awaken the introvert’s inner child. For example, let’s say that the introvert hasn’t had a pet since he was a kid. The extrovert has a pet and shows the introvert how it felt to have a pet. The introvert will most likely adopt one too, form an inseparable bond with it, and feel more than grateful. The introvert will also love explaining why his pet may knock over the food or water bowl and show him prevention methods since she has had long-term experience raising a pet. The extrovert will also enjoy learning the “proper” ways of the introvert. 10. Your patience will grow. As the couple learns their differences and adapts to them, there will be times they can irritate each other. No relationship is perfect, and that’s where their devotion, true love, and patience get tested. No marriage or relationship is ever simple. Although my grandparents loved each other deeply and were significantly different, there were times they got on each other’s nerves. Their tolerance of each other increased as they spent more time together, and their love and compassion grew through the years. 11. You’ll become less judgmental and more understanding. 
There’s no doubt that you’ll be less critical of other people when you spend more time with someone who’s much more different than you. Both will develop a new way of thinking and behavior that connects you closer. After acknowledging your differences, you’ll be more patient with each other and the people around you. 12. You’ll become more spontaneous. Dr. Joseph Cilona says, “When it comes to romantic compatibility, many people first think of similarities. While similar traits can certainly increase romantic compatibility, it’s not always the case and can backfire for some couples.” It gets boring when you date someone who shares all the same interests as you. I’ve been there. You do everything together, it becomes the same routine, and it gets boring. It’s like you’re dating yourself and always expect the same things each day or week. Every date night is Netflix and chill, dinner, and sex. It gets old fast, and expectations start to get unmet. A spontaneous partner does the unpredictable and spices up the relationship. 13. You’ll embrace each other’s differences and encourage them. Having too many similarities can create problems because you’ll begin worrying when your lover does something different. Frequently, it causes couples to attempt controlling each other due to unmet expectations. You’re more motivated to do something different when you don’t have as many things in common. You love each other’s differences, and both of you remember that those differentiating characteristics are what pulled you together like a magnet in the first place. Worlds apart attract each other beyond control like a magnet. It’s a fact that opposites attract, and these couples have the potential to maintain a fulfilling relationship for life. Their passion is like a blazing fire. It’s not about being with a person who has everything in common with you, works in the same field, or has the same personality. A loving and passionate relationship is all about loving what makes you unique and having an open mind.
https://medium.com/an-idea/13-surprising-ways-opposites-attract-like-magnets-4fe1fcb03526
['Mari Colham']
2020-10-05 06:55:43.372000+00:00
['Relationships', 'Love', 'Lifestyle', 'Psychology', 'Dating']
Memories of 25 (A Quadrille)
Memories of 25 (A Quadrille) An Ode to My Late Grandmother My late Maternal Grandmother. Time period? Unknown to me. Photo enhancement? My Own. cancer stripped you of life. 25 needed you, but you were already gone. claimed by a disease that terrifies me in my sleep. I become reacquainted — with you in nature and in the glossy pupils of blinkless eyes. you are alive, even in death.
https://medium.com/a-cornered-gurl/memories-of-25-a-quadrille-9969bfba2ffd
['Tre L. Loadholt']
2017-11-28 21:19:10.822000+00:00
['Loss', 'Quadrille', 'Death', 'Writing', 'Family']
A scientific reason why you shouldn’t compare yourself to others
A scientific reason why you shouldn’t compare yourself to others Instead, try celebrating with them Rawpixel on Unsplash Any student of science knows that in an experiment, comparisons are only valid if you control all the other variables that could affect the outcome. If you are studying the effect of physical exercise on heart rate, you need to control for things like caffeine consumption, age, previous heart conditions, and a host of other things that could also affect heart rate. Ironically, I’ve also compared myself to other people for years. I felt I wasn’t as hard-working as them or as confident or that I was falling behind in some other way. I conveniently forgot about all the control variables. When you compare yourself to someone else, you forget all the variables that made them who they are — their genes, their upbringing, the events of their life. Funnily enough, you overlook the fact that others may be comparing themselves to you, while you take all the positive variables of your own life for granted. Or at least, that’s what I do. Such comparisons are scientifically invalid. Instead of envying someone their good fortune or achievements, we can aspire to what I’d like to call peer cheering (as opposed to scientific peer review). By this I mean finding something to celebrate when you hear someone else’s good news and feeling glad that they will enjoy it. As hard as this may seem to do, especially when they’re in a situation you’d really like to be in yourself, I’ve found it’s possible. I’ve tried doing it lately, and when I can manage it, it feels great. We’ve been fed a false narrative that one person getting what they want somehow makes others less likely to do so. Isn’t the opposite often true? Haven’t you at some point been inspired by those who’ve already managed to achieve your dreams? I think it simply comes down to wanting to feel good for others. When you come across someone whose hard work has finally paid off or who’s had a stroke of luck or is simply content for some reason, picture them dancing around with joy, and wish them well. Even if you don’t genuinely feel it at first (that was my worry!), if you do it enough, you start to really feel happy for them. I’ve repeated this experiment a few times so I know it works. That doesn’t mean I no longer feel any envy, of course — but it’s a step forward. Less comparison + Being able to feel glad for others = Expansion of your own potential for happiness Maybe I should be sharing this with the students I teach science to, alongside the equation for photosynthesis.
https://medium.com/the-maths-and-magic-of-being-human/a-scientific-reason-why-you-shouldn-t-compare-yourself-to-others-and-why-you-should-celebrate-1a4378139467
['Roshan Daryanani']
2019-05-27 23:18:34.699000+00:00
['Happiness', 'Celebration', 'Envy', 'Science', 'Comparison']
The AniGay Guide to Lupin the Third: Intro & Part 1
The AniGay Guide to Lupin III: Part 1 Like the rest of the Lupin TV series, Part 1 is available in full over on Crunchyroll. Unlike the rest of the series, Part 1 also has an easily available and reasonably-priced DVD set, which even includes the otherwise-impossible-to-find “pilot film” created just before Part 1 began airing in 1971. Cool! And now, without further ado, let’s talk about Jigen and Zenigata. Because when you talk about queerness in Lupin III: Part 1, that’s what you’re talking about. More specifically, the subject at hand is Lupin’s complicated relationships with Jigen and Zenigata, both of whom are depicted as being (in two very different ways) in love with Lupin. Does he reciprocate? Are these relationships consistent in any way from episode to episode, series to series? And what do they mean to the characters involved? There’s only one way to find out. Zenigata Zenigata’s relationship to Lupin is remarkably consistent across the entire series, and tends to teeter on the edge between literally queer and metaphorically queer. How is that possible? Well, when Zenigata says he’s tied to Lupin by “destiny” or “fate,” he’s ostensibly talking about his repeated attempts to arrest Lupin, but the language Zenigata uses to describe these feelings occasionally (okay, pretty frequently) crosses the line into so-unambiguously-romantic that it’s clear the creators want us to interpret it that way. In fact, the majority of what Zenigata says about Lupin can be interpreted either way—is he talking about trying to capture Lupin like a cop captures a criminal, or is he using another, less literal meaning of the word “capture”? This double-meaning is sometimes acknowledged explicitly by other characters, and frequently hinted at by the visuals — like the flower petals in the gif above — and soundtrack, blatantly enough to push things into that “literally queer” category. But Zenigata’s professional-cum-romantic interest in Lupin often feels just as much like commentary on the inherently romantic overtones of the “cat and mouse” trope as it does a considered characterization of Zenigata. Like everything else about queerness in Lupin III, this varies. Not only does Part 1 establish Zenigata’s conflicted feelings super early on (the extremely romantic gif above is from the very first episode), it also lays the foundations for Lupin’s flippantly affectionate attitude toward Zenigata’s obsession with him. And though Zenigata as a character will soften up a bit after Part 1 (for the most part), his playfully antagonistic relationship with Lupin is one of the series’ most stable, all things considered. Jigen Jigen and Lupin’s relationship is a different story. But though Jigen’s role in Lupin’s life may appear to change from one episode to another, the fact that he has romantic feelings for Lupin can be relied upon to stay the same. This is partly because Jigen is depicted as gay with almost 100% consistency over the course of Lupin the Third’s half-a-century run. Jigen’s romantic distaste for women is well-documented, though often poorly translated, and the fact that his sexuality is defined by his dislike for one gender rather than his preference for another is more than just a nod to queer Japanese history (more on that in a future post); it also allows him to fly somewhat under the radar as a semi-openly queer character. 
I say “semi-openly” because Jigen’s sexual orientation manages to be both surprisingly overt and conveyed almost exclusively via hints, subtext, and coding: There’s no possible non-queer interpretation for the pair of scenes above, in which Lupin’s date with Fujiko is starkly juxtaposed with Jigen’s frustrated dart-throws at a heart-shaped target, but the queerness is also theoretically possible to miss because it’s never stated outright. For this reason, though it gets extremely close to explicit, I’d almost always put “Jigen is gay” squarely in the “implicit-literal” queerness quadrant. But what about Lupin? Where does he fit into the equation? I know I sound like a broken record by now, but: It varies! The status quo of Lupin and Jigen’s relationship takes on a few different forms. In some episodes, like the one above (1.9), the dynamic is that Lupin’s primary (almost never requited) romantic interest is Fujiko (or, less often, another woman), while Jigen plays the unlucky third party in a Lupin-centric love triangle. (And I’ll be referring to it as the Love Triangle configuration.) Sometimes this is contentious (again, above) and sometimes Jigen and Fujiko seem to have reached an understanding. Importantly, Jigen and Fujiko’s relationship runs the gamut from mutual hatred, to begrudging respect, to affectionate friendship, again depending on the episode. Another typical version of the Lupin/Jigen dynamic is a partnership that feels much more reciprocally romantic—though Lupin’s reciprocation always remains implicit. I’ll refer to this configuration as Crime Husbands. Generally, the Crime Husbands dynamic appears in episodes that don’t centrally feature Fujiko, though occasionally some hijinks conspire to cause Lupin to “choose” or “end up with” Jigen over Fujiko, which appears to strengthen their bond. Those are the episodes that end with Lupin and Jigen literally walking off into the sunset together, physically embracing while the screen fades to black, and so on. Lupin is always over-the-top touchy-feely, but Crime Husbands episodes emphasize this between Lupin and Jigen to a degree that can only be described as: cute. There are other Jigen/Lupin(/Fujiko) dynamics that pop up over the course of Lupin III, but when Part 1 focuses on the queerness of Jigen’s relationship with Lupin, it’s primarily based on the two templates I just listed. And I promise I’ll eventually stop repeating myself, but pretty much none of this makes sense if you try and fit it all into one consistent timeline. It’s not that linear time doesn’t exist in the world of Lupin the Third; it’s that multiple linear timelines exist, so trying to merge them is an exercise in futility. Queerness and Crime There’s one more aspect of Lupin the Third that’s worth pointing out early on, even if I won’t delve super deep into it for now. As I touched on briefly in my Beyond “Canon Gay” post, criminality is one way in which queerness can be conveyed metaphorically: Criminals, especially career criminals, live on the fringes of society permanently and don’t fit into its “normal” expectations.¹ Obviously, Lupin and his gang are all prime examples of characters who refuse to adhere to conventional, heteronormative life paths. So even regardless of the particulars of their literal sexual orientations, this makes the entire framework of the Lupin universe inherently metaphorically queer. 
In Lupin the Third, this overarching metaphor is rarely if ever used as a direct or overt means to discuss queerness, unlike in series I would categorize more decisively as “metaphorically queer” (Devilman Crybaby; Miss Kobayashi’s Dragon Maid; etc), but it’s still interesting to keep in mind because the Lupin gang’s commitment to their respective criminal lifestyles is, at the very least, a guaranteed way of keeping the pressures of heteronormative society off their backs (and out of their show). Okay! So you’ve got a good idea of the basic queer structures in place for Lupin the Third: Part 1. Ready to watch some episodes? Below are a few that no queer Lupin viewer should miss.
https://medium.com/anigay/lupin-guide-part-1-c9b299976c55
['Elizabeth Simins']
2018-10-15 15:35:25.438000+00:00
['LGBTQ', 'Anime', 'Queer', 'Lupin The Third', 'Analysis']
10 Terribly Misread Bible Verses
1. “One flesh” doesn’t mean sex! The phrase “one flesh” in Genesis 2:24 is often thought by Christians to mean the basic function of marriage is having sex. “Therefore shall a man leave his father and his mother, and shall cleave unto his wife: and they shall be one flesh.” No. The word translated “flesh” is a very typical word for “family.” Like “flesh and bone,” it refers to shared family. Note Leviticus 25:49, where ‘flesh’ (basar) is translated as ‘clan’. As David Instone-Brewer, writes: “The phrase ‘they shall be one flesh’ would probably have been interpreted to mean ‘they shall be one family’.” 2. Women aren’t cursed to “desire” men Christian men like to say that women are cursed by God to “desire” them. It’s a beguiling reading of Genesis 3:16, when God is talking to Eve. “I will make your pains in childbearing very severe; with painful labor you will give birth to children. Your desire will be for your husband, and he will rule over you.” The Dead Sea Scrolls were helpful in untangling a mess Christian tradition had made. Joel N. Lohr’s 2011 paper “Sexual Desire? Eve, Genesis 3:16, and תשוקה” lays out the evidence. The rare word, teshuqa, translated “desire” actually means “turning” in the sense of “returning.” As Lohr paraphrases Genesis 3:16: “Despite increased pain in childbearing, Eve would actively return to the man.” She’ll accept the pain of childbirth, and keep reproducing the race. Someone’s gotta do it? 3. “Adultery” doesn’t mean cheating When Christians read most anything in Old Testament law, like Exodus 20:14, they introduce a lot of confusion. The famous words go: “Thou shalt not commit adultery.” This does not mean a husband or wife in a monogamous marriage gets sexy with someone else. First of all, the Bible assumes polygamy, and biblical heroes from Abraham to Solomon have sex with slaves, prostitutes, concubines and wives. These women are viewed as possessions. ‘Adultery’ is a crime in property law, since men often like to see wives as property. To commit adultery is to steal a married woman, and that’s it. “The commandment,” as David J.A. Clines writes, is “not concerned with sexual ethics or social stability or anything other than the threat of theft to a man and his property.”. 4. God doesn’t like sex interrupted! Traditionally, Christians get hives around the Song of Songs — all that sex! — but the one line they like is the refrain (2.7; 3.5; 8.4): “Do not arouse or awaken love.” And they’ve taken this to mean: you’re supposed to wait until the right time to be sexual. Like when your parents and clergy say it’s okay? In “A ‘Do Not Disturb’ Sign? Reexamining the Adjuration Refrain in Song of Songs,” Brian P. Gault lays out facts. The verse is saying that lovemaking will last “as long as it desires,” so the lovers are indicating they’re not to be interrupted. 5. “Lust” is not a sexual word! Lots of Christians think Jesus says it’s a crime to think ladies are looking sexy and fun and you wouldn’t mind spending time with them. Isn’t that Matthew 5:28? “But I tell you that anyone who looks at a woman lustfully has already committed adultery with her in his heart.” Fact check. The word translated “lust,” the Greek word, epithumia, is the regular word for ‘desire’. Jesus ‘lusts’ in Luke 22:15 — to eat Passover! A fact you might not hear in church: the language of Greek, like Hebrew, has no different words for ‘woman’ and ‘wife’, so translators actually don’t know if Matthew 5:28 refers to a woman who is single, or married. 
Jason Staples notes that epithumia can point to the Hebrew word for ‘covet’—a crime not of sex, but theft of property. It seems Jesus is identifying, not inner sexual thoughts, but an intention to steal. 6. “Eunuch” doesn’t mean a non-sexual person In Matthew 19:12, Jesus praises someone often thought highly disfavored: the eunuch. He indicates, then, a religious category called the “eunuch for the sake of the kingdom of heaven.” For Christian tradition, this was taken to mean a non-sexual person. Aren’t eunuchs non-sexual? So you were allowed to be single—as long as you weren’t having sex, is how the thinking went. As J. David Hester reminds us in “Eunuchs and the Postgender Jesus,” the eunuchs of the ancient world were not celibate, often having phallic functioning, and famous for other sexual skills. Also note—non-married sex is not a crime in the Bible. There was never evidence to think eunuchs are okay with Jesus because they’re unsexual. 7. Paul’s “burning” doesn’t refer to sex In yet another effort to pretend that God hates sex, the Christian tradition reads 1 Corinthians 7:9 rather badly. “But if they cannot contain, let them marry: for it is better to marry than to burn.” What is this burning? Christians say ‘sex’—as they typically combine ideas of sexual desire and hellfire. The classics scholar Ann Nyland points out in her Source Bible translation that the ‘burning’ is probably grief. In ancient Rome, we find phrases like a mother is “burned up” with grief over a child’s death. Tombstones have inscriptions like: “father and mother who are burning with grief.” Then note that Paul, in 2 Corinthians 11:29, writes of those who’ve left the Christian community: “Who is led into sin, and I do not inwardly burn?” They’re gone, and he grieves them. So let’s re-read 1 Corinthians 7:9. Paul is speaking to candidates for missionary work. It’s more for single people, he says. But some are regretting they’d have to abandon a love interest. For them, Paul says, “it is better to marry than to burn”—which means: Get married and be happy. Fact: In the teachings of Jesus, love has great significance. 8. God’s “bosom” means His womb In the eerie scene of John 13:23, Jesus and an unnamed younger man are lying together, and the other guy is “in the bosom of Jesus.” Elizabeth E. Platt offers a literal rendition: “There was reclining, one, from among his disciples, in the bosom of Jesus, whom Jesus loved.” This “bosom” is also found in John 1:18, where Jesus is in the “bosom” of the Father. Except “bosom” is really just the regular Greek word for womb. In “The Kolpos of the Father (Jn. 1:18) as the Womb of God in the Greek Tradition,” Daniel F. Stramara, Jr. lays out the evidence. In John 13:23, we’re finding the phrase: in the womb of Jesus. If you don’t like sexual duality in the divine, the Bible is not for you. 9. No, God doesn’t want women to keep quiet What could be more clear than 1 Corinthians 14:34? “Women should remain silent in the churches.” God thinks women should be quiet in church, right? No, sorry. As scholars have been documenting for a century, the 1 Corinthian letter is clearly structured with passages of Q&A, with lots of language like “In regards to…” The fact is, the ‘letter’ is essentially a dialogue. As many scholars have documented, like David W. Odell-Scott in “Let the Women Speak in Church,” Paul approves of female speech. The strongest reading of this passage is that he includes a quoted section of a question, likely from a traditional Jew. 
(A ban on women speaking is found in the Talmud, owing not to ideas of female inferiority, but to menstruation purity codes.) Paul sharply disagrees. His reply starts in v. 36: “What? came the word of God out from you? or came it unto you only?” (KJV) 10. No, Paul doesn’t list being gay as a ‘vice’ Christians are amazing at injecting lunacy into weird corners of the Bible. Typical translations of 1 Corinthians 6:9 (cf. 1 Tim 1:10) read: “Do not be deceived: neither the sexually immoral, nor idolaters, nor adulterers, nor men who practice homosexuality…” In 1980, the historian John Boswell, in Christianity, Social Tolerance, and Homosexuality, reminded the faith of basic facts—like bisexuality being the norm in the ancient world. Also, he noted, there was almost nothing known about that very rare word, arsenokoita. A scholarly search ensued for traces of what it could mean. William L. Petersen, in “On the Study of ‘Homosexuality’ in Patristic Sources,” documented the trail of evidence leading to an unexpected scene in Greek mythology: the god Zeus abducting the shepherd Ganymede and taking him to Olympus. Petersen writes: “In the taking of Ganymede, Zeus is the model of someone who commits ἀρσενοκοιτία.” Except this wasn’t a sex scene—the shepherd had godlike beauty, and Zeus wanted to show him to the other gods. Even if it was sexual, Zeus was not ‘homosexual’, as Petersen notes, but “seduces women and men with equal relish.” The crime in view remains a puzzle. Do you have to know every little jot and tittle before you can tune into Jesus’ main message? Which seems to be the insistent refrain: ‘love one another’.
https://medium.com/belover/ten-terribly-misread-bible-verses-4b482a862723
['Jonathan Poletti']
2020-12-25 17:43:05.924000+00:00
['Religion', 'Christianity', 'Bible', 'Analysis', 'Life']
How to Teach Journalism in 2019
Don’t get mad, but I was kind of excited about the Covington High School mess. I’m not a think piece person, and I take no delight in potent I-was-right-you-were-wrong moments from either side of the aisle. It wasn’t the actual event that engaged me, but rather the discussion of how the media handled it, or should have handled it, or might handle such a thing in the future. I wasn’t excited because I have a strong opinion. I wasn’t really excited for me at all. I was excited for my students. I teach high school journalism at a terrific public arts high school in Chicago. The class meets once a week for three hours. The school itself amazes me constantly, and its students amaze me more. But best of all is the job of teaching journalism to an unusually diverse cross-section of 18-year-olds in the 21st century. Our classroom is tiny, but the school provides all the kids with their own MacBook outfitted with the Adobe Cloud and Microsoft Suite. That’s really all you need to make journalism in 2019 — well, that plus an internet connection and the little camera-computers 100 percent of high schoolers keep in their pockets 24–7. My students did not walk into my classroom at the beginning of the school year particularly interested in journalism. Last spring, they had to decide whether to take this class with me (a relatively established teacher at the school) or a literary magazine class with a new and unknown teacher. I’m guessing they picked the teacher more than the subject matter. Over the past semester, however, I’ve watched a love of journalism grow and blossom in my students, I have seen them take on more and more ambitious stories, and I’ve noticed them growing more engaged with the news at large. Also, lest this dewy-eyed assumption go unchecked, several students have straight up told me that they weren’t even a tiny bit into journalism until this year. The Covington High School story was interesting because it got to the very center of the most important thing with which a room of young journalists should grapple. The Covington High School story was interesting because it got to the very center of the most important thing with which a room of young journalists should grapple. Here are some of the questions that came up as I was reading (and reading and reading) about this story: What is the truth? How do we tell it? How can we be sure we have all the details we need? What is context? How much context is necessary? What does the Covington High School story tell us about citizen-driven video reporting? What stories most need to be told, heard, learned, and understood, and why? I know some of those questions seem pointed, but I assure you I don’t know the answers. I had some answers to these kinds of questions 10 years ago, but — and this is, for the most part, wonderful — the answers are rapidly changing. In 2001, I was excited to grow up and become a journalist, the career I’d chosen for myself at age seven. Journalism at my own high school in Portland, Oregon, was a daily class: 50 minutes five days a week devoted to all things newsworthy. In the first week, we learned about the inverted triangle, why the word lede was spelled that way, and how to use the marvelously thick Associated Press Stylebook. The newspaper office had the boxy early Apple computers with blue or green backs that made them look more modern and hip than the boring gray behemoths we had at home. They came with a design program called Quark, which we used to lay out the paper. The students did everything. 
We had class jobs, we assigned stories, we wrote and edited articles, we laid out the paper, and we sent it to the printer. There were roughly 50 journalistically eager kids in the class. The teacher, Mr. U, took attendance and read through the articles to make sure there were no egregious errors or offensive language. For the most part, we took the class seriously. Everyone read the school newspaper, newspapers were cool, and we wanted our newspaper to be great. When I started teaching journalism, I ordered individual copies of The Associated Press Stylebook for my students. They also got copies of The Elements of Journalism by Bill Kovach and Tom Rosenstiel. I had fond memories of looking up how to include a title or a number. In writing, so little is mathematical; it’s satisfying to have a right and a wrong way of doing certain things. But my students were intimidated and insulted. Surely this was on the internet somewhere, they said. (It is, I said, but isn’t it so nice to flip through the pages? To smell the binding glue? They said no.) The Elements of Journalism is an exceptional book that picks apart with great detail the philosophical groundwork of journalism. The gist is this: Tell the truth, and remember that the news belongs to the people (as opposed to, say, the government). The authors claim that there are 10 core principles, but they basically boil down to these two ideas, both of which I completely love and fundamentally agree with. But this book could not have been written by two whiter, more intellectualized men. These dudes are writing about how journalism is at the center of freedom and revolution, but their audience is wearing a suit and getting into a car with leather seats; their audience gives generously to NPR every year and has a summer home on the coast. The people at the forefront of revolution in pursuit of freedom look and sound nothing like the authors, so no matter how valiant and subversive the book’s ideas may be, it won’t ever find its way into the hands of the people who need it. Instead, it’s a purloined letter: My students literally had this book in their backpacks, and the ideas inside it could help them be the great activists they want to be, but they would never read it because it looks and sounds like it belongs in the briefcase of someone named Robert. I make my students read the book in class (trust me: no one is grabbing it off their bedside table), and I assign the reading alongside a quiz so no one will fall asleep. The quiz is designed to be fairly easy, highlighting the writers’ most important points. My students do not like taking these quizzes, but every time they do someone else stays after class to say, “This is actually a kind of radical book. Did you know that? These ideas are low-key amazing.” I did know that. But now we need to figure out how to get more Generation Zers to learn them. A recent study (cited in Forbes) showed that for people born between 1995 and 2012, the average attention span is eight seconds, compared with 12 seconds for millennials like me. This is at least partly because Gen Zers look at more screens at one time than any previous generation. On average, according to the same study, they might pivot among five screens at once. The first time I saw this in practice, my mind was blown. A student had her iPhone, iPad, and laptop playing different video files, and she was typing an assignment while she bounced among the devices, listening to and watching different screens. 
She wrote the whole assignment while doing this. She didn’t turn out a Pulitzer Prize–winning piece of writing, but it wasn’t bad either. The biggest misconception about this generation (and about mine, too) is that the selfies and videos and screen addictions are indicative of narcissism. But Generation Z is not a narcissistic generation. When displays of compassion, empathy, and selflessness take a different form, they can be hard to recognize. I’ve never seen kindness manifest as clearly as it does among my students. I can also see how painful it is to be shackled to a device; how much anxiety, fear, and confusion is amplified by access to constant virtual connection and communication. My students suffer a great deal, and they know each other’s suffering intimately. They take care of each other in ways that are largely unknown to other generations. All these realities offer implications about how we have to start teaching journalism. If attention spans are short, it will be hard to get kids to read long-form articles. Gen Zers are also not ignorant to the ways that their devices cripple them. In my classes, I use this language to get the kids to put their phones away: Your phone is a little anxiety-producing monster that sucks up your time, energy, and happiness. It isn’t your fault you are constantly being forced to know 30,000 things at one time; keeping track of all your notifications and connections is a major burden. It is a gift you give to yourself to put your phone all the way away. Nothing terrible will happen in the next three hours. I give you permission to focus on just one thing at a time in this class, to take care of your own self and your own mind for a change. My students react with gratitude and understanding. Those of us who didn’t grow up with smartphones may not grasp how much energy they require. They aren’t always — or even usually — conducive to pleasure. The students understand that their phones are frequently a source of great but seemingly necessary pain. All these realities offer implications about how we have to start teaching journalism. If attention spans are short, it will be hard to get kids to read long-form articles. Gen Zers aren’t clicking through the New York Times or the Washington Post the same way millennials still do. Even if these sources— which have significant financial resources—invest in interactive news-delivery systems, they don’t know how to get young people (the intended audience) to navigate to them. And, crucially, when a million things are flashing and blinking and lighting up all at once on a single smartphone, it’s increasingly (and terrifyingly) difficult to get kids to recognize and steer clear of fake news. Just because a task is difficult does not mean it is impossible. But our lesson plans need to accommodate the values and realities of an increasingly digital generation. That does not mean we should eliminate or even minimize thorough, well-written, substantial reporting. My students are enraptured with excellent articles. The key is to get kids to understand the purpose and value of good journalism, and then ask them to own the work of bringing it to one another. Back to Covington High School. I brought in my lesson plan for the article the Tuesday after the media maelstrom. Everyone who was going to be talking about the subject was already doing so; Twitter was just beginning to calm down. I asked my students if they’d heard anything about the story. Three of 16 raised hands. 
(My students are diverse in almost every sense of the word. I have kids of every race, socioeconomic status, gender orientation, and sexual orientation. To be fair, the class skews nonwhite and female, and its political spectrum is way left of center.) I added, “It was the photo of the boy with the MAGA hat smiling near the Native American man that was all over Twitter”; two more hands came up. As a fairly plugged-in millennial, my social media had been flooded with this story all weekend long. I couldn’t get away from it. But my kids aren’t plugged into the same networks. The students who raised their hands had one thing in common: active Twitter feeds. The ones who congregate primarily on Instagram and YouTube were out of the Covington loop. This was perfect for the lesson I had planned because it allowed most students to take in all the information at once and draw conclusions around my big questions. I brought in 16 different articles about the story from all over the lawless internet. There were informational pieces and think pieces; editorials on the near left, editorials on the far left, editorials on the right. Each student read a few articles, and then they worked as groups of four to piece together what they thought had happened and what they thought journalists should have done with this story. “I’m still having trouble understanding what actually happened,” said one student. “Did the Covington kids start it? Or was it the Black Hebrew Israelites? What is a Black Hebrew Israelite?” “I think the point is that we don’t really know what happened,” said another. “Even with the video, it’s hard to tell. There are a lot of versions of the story and even though we can see and hear some things, we still can’t know everything. Even the video can’t tell us everything.” “Yeah. I think that some of these op-ed guys have a point — we shouldn’t leap to conclusions. We’re so quick to judge now. This whole story is all about confirmation bias,” someone offered. And then this: “But whose voices do we normally hear? And what are the consequences for these different groups? My friend got up in a white protester’s face in California, and that got caught on video, but not the way the white guy had been instigating her. And she got arrested. These guys didn’t get arrested. They’re being celebrated. It wouldn’t have been the same if the groups had been switched.” Bingo. That’s the kind of nuance and careful differentiation that my students have the opportunity to bring into focus. They can challenge the idea that neutrality and truth are synonymous. They can pull identity and privilege into the context, and insist that this is non-negotiable when it comes to reporting the news. In every single journalism class, we analyze the news to see whose stories are being told and by whom. Overwhelmingly, decorated journalists are still white and male. Journalism, like (notably) tech and management, needs to reckon with the fact that the long-accepted way of doing things may need tweaking. Our style books may need rewriting; our language could stand for re-evaluating. The New York Times is supposedly written at a 10th-grade level, but it’s still written for an academic audience, and it’s hard for some of my 12th graders to understand. This is not because they’re unintelligent or illiterate, but because they are literate in different ways. 
My students speak more languages — I’m talking about the myriad languages embedded in a fully digital world — with more adeptness and frequency than their generational predecessors ever could have dreamed. The purpose of journalism is to bring the truth to the people. This means we need to (1) have an unfailing, nonpartisan commitment to what is true, and (2) adapt language and communication methods to speak to larger groups of people—not the other way around. Generation Z wants to act; it wants to make the world a better place. All things considered, journalism is an easy sell. If everyone knew what was going on in the world, we would all be able to make better and more informed decisions. Governments lie, corporations lie, and regular human beings lie. As those entities grow stronger and louder through digital media, it will take courage and tenacity to tell the truth. We can’t rely on old rules, outdated language, and traditional methods of communication to make it happen. So we have to keep asking the hard questions that will propel our students to act.
https://medium.com/s/story/how-to-teach-journalism-in-2019-aa33773dfcd9
['Sophie Lucido Johnson']
2019-02-08 00:49:41.063000+00:00
['Journalism']
Seeking Purpose: What an Old Story Can Teach New Leaders
It’s remarkable that a story so short is so rich with characters. They are subtly threaded together into a narrative tapestry that reveals more lessons each time we scan our eyes through its colorful weaves. That’s the beauty of these stories. Through time, they begin to carry the familiar comfort of an old, trusted blanket. We have, of course, the Shepherd and the Caliph at the center of the story from whose stories we can pull great meaning — but looking closer, we can see a few more. There is also the Vizier, the Bodyguard, the Servants, the Shepherd’s Brother, and the Sheep. Each of them, for all their modesty in the telling, plays a huge role in the story’s unfolding. Much like all the vocations of Morocco, each of these characters describes their own unique purpose, as ancient as it is urgent. Let’s give them each their due examination. The Shepherd I have felt like the Shepherd many times in my life. The urgency and excitement of his reaction to his discovery implies a deep wandering that came beforehand. Perhaps it was aimless. Perhaps it was driven by seeking. Whatever came before, the moment he tasted that water became the moment he found his purpose. He feels immediately that something must be done about this and so his reaction is twofold: First, to abandon everything that brought him there, passing off his flock to his brother. Second, to bring his discovery straight to the highest authority he knows, enduring the pains of long distance travel to reach the capital with breathless speed. What we, as the omniscient beholders of this story, discover in the end is that his excitement is naive. In the grander scheme of the caliph’s empire, there is nothing new or unique about this water at all. In fact, it may very well be very low grade. Yet, he is made none the wiser. Buffered from shame or indignity, he returns home triumphant with a newfound purpose sanctioned by the highest power in the land that will be carried on for generations after him. We are left here with a question. Is the Shepherd a fool? He is no more a fool than all of us. For is there not always a grander scheme that renders even our most impassioned endeavors naive? Perhaps what matters most about our purpose is less the audacity of its meaning and more the audacity of the actions we take to support it. Perhaps as a start, we should consider that this spirit of seeking, wherever we encounter it, may be worth protecting from the shame that would destroy it. The Caliph The story begins with a testament to the wisdom and diplomacy of the Caliph, Harun al-Rashid. With that, we are primed to regard his character as the ineffable moral center of the tale, which he delivers on brilliantly. What the Caliph gives us is a critical lesson in leadership. Passion borne from purpose must always be handled with care. Trivialize someone’s purpose and you also diminish their passion. Without passion, an empire will fall. Among the many difficult duties of leaders is the job of knowing a grander scheme. With that also comes knowing which truths of that grander scheme to communicate to their team and when. This can become a trap into hubris for any leader. Knowledge doesn’t reveal power, it reveals purpose. To interpret the Caliph’s actions as a pat on the head for a naive underling is to miss an entire side of the wisdom. Even an empire as great as the Caliph’s is but one grander scheme in an infinite chain of even grander schemes.
The highest wisdom, then, is to recognize that there will always be greater hands moving pieces on the board than our own — and that true power doesn’t need to always descend from the heavens above, but can sometimes spring up from the most humble ground below. How we react to it is everything. Who could know what role any new oasis may play for an empire generations into its future? Like the water of paradise itself, an impassioned sense of purpose must be treasured and protected no matter how remote. Who better to be the steward of an idea than the one who feels driven by it most? The Vizier Yahya the Barmakid, vizier to the caliph, stands as a lesser image of power. He is an example of one corrupted by their own sense of purpose, which has festered into self-importance. Immediately, his response to the Shepherd is dismissive. He is too busy. So too must be the Caliph; at least in his mind. There is little to be found heroic about his character in this story. In fact, as the closest thing to its villain, he stands as a steadfast obstacle to the resolution our hearts desire in its telling at every turn. We may ourselves want to dismiss him from this tale entirely — but we can’t and we shouldn’t. Therein lies the rub. You may find his presence disagreeable to the scenery, but what matters most is that he is involved. Like all the other characters, what we see in the Vizier is only a reflection of ourselves. He, as this story’s shadow, also holds one of the story’s many sacred keys. The Vizier challenges us to stay alert, guarded, and always moving forward. On the surface, to be guarded from outside distractions or threats. In greater depths of wisdom, to be guarded from the blinding force of arrogance that hides within ourselves. With our desire to move things forward — be it in the form of asking each other qualifying questions about careers, or promoting ourselves, marketing brands, guarding doors, or making deals — we must recognize the difference between holding power and being adjacent to it. We must know that one of these invites purpose and the other invites exchange. The Servants, The Bodyguard, The Brother, The Sheep (and the Missing Women) With the lesson of the Vizier fresh in our minds, the purpose of these remaining characters can be drawn into higher relief. They are the most humble of all in this story, yet without any of them the wheels that spin the threads of its tapestry cannot turn. Who was it that found the water of paradise? It was the Sheep. Who was it that enabled the Shepherd to take his journey to meet the Caliph? It was the Brother. Who was it that enabled the Caliph to taste the water? It was the Bodyguard. Who was it that actually walked the Shepherd into his life of purpose? It was the Servants. Lastly, who is it that is missing entirely? It is the Women. Unfortunately, this story makes no mention of the women, but we know they are there. All too often unmentioned and all too often bearing burdens men like me would never know — so long as all we are shown is the empty spaces where their stories belong. Still, those empty spaces serve as a reminder of the importance of the unseen behind any accomplishment. If truth is genuinely our guide, then we must never forget there is always something or someone missing from how our stories get told. No matter how central a player we may feel to our own personal myths, we must remember that any purpose, especially the most bold, cannot be fulfilled alone.
Whatever our purpose may be and from wherever it comes— be it inherited, discovered, or purchased — it will always be just one thread upheld in a broader tapestry of others, woven together to reveal a higher pattern whose meaning we can only behold with soft enough eyes. No purpose is ever achieved alone.
https://medium.com/swlh/seeking-purpose-what-an-old-story-can-teach-new-leaders-15e065f17350
['Will Cady']
2020-07-30 21:09:38.272000+00:00
['Purpose', 'Inspiration', 'Culture', 'Humanity', 'Storytelling']
Tokenism and Representation: A Fine Line in Popular Media.
It’s the 21st century. Fantasy, sci-fi, horror, comedy, and every other genre seems to be played out under the sun; being woke is a currency and diversification is key. Production houses seem to be bringing out show after show, milking views from their target audience with clean-cut character-types that would fit their demographic. And while new scenarios and stories find their way to mainstream media, the people striving to bring these stories to the screen find themselves balancing on the tightrope of building the lives of people on-screen. For the average media consumer, the entertainment industry seems to have everything for everybody. Fantasy and sci-fi storylines stemming from real-life disasters, political satires weaving their way into dramatic plotlines, common vices uncovered in feel-good family movies, deep-rooted societal issues bringing reality to the horror genre, etc. Sociologists find that popular media in society could be understood as a reflection of its people. Finding an original and creative niche, decades into popular storytelling, can be hard for writers, and thus, the building of characters, as any mainstream writer would find, isn’t so easy when one’s society has a problematic gaze. An easy look at the evolution of Hollywood could be a good indication of that: The mainstream cinema-going audience forgets that while art imitates life, life also tends to imitate art, and thus by depicting particular views, one projects the stereotypes of the majority onto the minority. And while these minority communities try to participate in society, they are forced to fit into these prejudicial boxes. These are milder versions of deep-rooted issues present in a society, like that of racism and colorism, patriarchy, and religious radicalism that also found a horrifying place in common society in those times. And as long as the lens was held in the hands of the silent majority, cinema would only exist as a window into privileged homes for afflicted minorities, rather than a mirror for the majority to rewrite its norms. Even as present-day activists would have hoped for another way, this had to slowly begin with the white nod to the black lens. This came from a promising place of a mutual understanding of personal narratives, and the passing on of the lens to the community. When minorities found themselves in the writers’ rooms, collaborating with the majority to build their own cinematic histories, and eventually found themselves in the director’s chairs, narratives that depicted the normal lives of minorities came through. Narratives of their struggles, failures, victories became synonymous with those of society. This could be seen in extremes like that of ‘12 Years a Slave,’ and in stories made for a general audience like ‘The Marvelous Mrs. Maisel,’ and in many others. Representation seemed to come through. A still from the movie, ’12 Years A Slave.’ But consequently, even as these new characters come on screen today, the gaze could make or break the narrative. Mainstream movies like ‘The Help’ and ‘Blue is the Warmest Colour’ (both adapted from books) were popularly discussed as depicting important perspectives of minorities. Yet, at the grassroots level, they displayed problematic views, in turn hurting the African-American and LGBTQ+ communities respectively.
A still from the movie ‘Blue is The Warmest Colour.’ The power dynamic of ‘The Help.’ Actors of the adaptations spoke out about the discomfort and the realisation of the impact of their portrayal, the ‘Blue is The Warmest Colour’ actors stating that, as lesbian lovers on-screen, they were worried that they were playing out a male fantasy due to the narrative. Hollywood actor Viola Davis regretted her role in ‘The Help,’ as it played into a white saviour narrative. Interestingly, both of these adaptations were made by directors of the majority, for the depicted minority, to ostentatiously congratulate themselves for being ‘allies.’ This marked the early stages of regression, or what educated activists would call ‘Tokenism.’ Emboldened by the system, Tokenism could be considered the watered-down, diet version of the prejudices of the past, its impact creeping into communities under the cloak of allyship. Minorities came to see themselves on screen, not under any prejudicial gaze, but as contributing to the storylines. The primary difference now is that writers found a way to slyly work around the creative mechanism, and try to liken it with diversity. For one, Hollywood remakes like those of ‘Ghostbusters’ and ‘Ocean’s 8’ were branded as feminist films, even as the community recognised the harm of attempting to masculinise femininity for global recognition, rather than celebrating femininity. The mere creation of these roles pigeon-holed the characters into fitting legendary parts written with men in mind. Writers often get cornered into the ideal of writing a character with no specific sociological characteristics, so that any individual could fit into portraying that role. The Doctor Who franchise, popular for The Doctor’s morality and experiences since its revival in the early 2000s, saw a steep drop in views in Season 11, fans accusing the creators of attempting to blatantly tokenise its characters when the entire cast was abruptly changed to include one individual from every minority group in the UK, seemingly to bring views to a franchise that was already dying. The problem could be best understood when, because of lazy writing, these individuals don’t seem to have any autonomy. They are driven by the momentary storylines and don’t have any character depth beyond them, or they are written only to supplement the stories of a privileged majority. All the Doctors from ‘Doctor Who.’ A simple run-through of popular films and shows today could reveal the underlying issue. The tropes of the ‘token black guy’ who exists only to die early in horror movies, the ‘trophy wife/bimbo girlfriend’ who seems to have no individual agency, the ‘gay best friend’ who only exists for shopping trips and boy advice; popular shows including characters like Kevin, the gay best friend in the teen-drama show Riverdale, Raj in the sitcom The Big Bang Theory, and Stanley Hudson in The Office, all with different formulas of tokenism, should have been red flags from the beginning. This is most prominent with the representation of females in popular media. According to Katha Pollitt, a popular media practice would be to include only one woman in an otherwise entirely male ensemble, where she was hyper-sexualized and existed only in reference to men, like Princess Leia in Star Wars, Black Widow in The Avengers, Elaine in Seinfeld, etc. She coined it ‘The Smurfette Principle.’ These characters would consequently fail the Bechdel test, revealing deep-rooted gender bias.
All of this was done for more views, without addressing societal effects. Needless to say, one could see how this could be problematic. After the Oscars were boycotted half a decade ago under the hashtag #OscarsSoWhite, the Academy tried to rebrand itself and announced new diversity rules this year for films to be nominated, calling for representation on screen, in the crew, and at the studio. Yet many predict that filmmakers will take the easy way out and tokenise their characters and crew since the bar is set so low. The only way to get away from lazy character sketches, and to tread the fine line between diversity and tokenism, would be to bring about systemic change. Only when minorities are brought into every sphere of creating content for the masses will the sense of being tokenised be transformed into the essence of being empowered. And finally, when communities have systems set in place that provide opportunities to own their perspectives, they’ll be able to drive change in how they’re seen in popular media and in society. This would, in turn, allow them to influence the direction of the production of media. Character depth and narrative matter, and only when popular media is able to portray them sensitively will representation become commonplace.
https://medium.com/the-volume-collective/tokenism-and-representation-a-fine-line-in-popular-media-6fa803dc939b
['Joanna Dias']
2020-12-05 15:27:34.566000+00:00
['LGBTQ', 'Women', 'Hollywood', 'Representation', 'Storytelling']
Einstein’s Formula for a Happy Life
Einstein’s Formula for a Happy Life A few days before Einstein twirled into the Reaper’s grim arms, his assistant — Dukas — found him in the hospital bed, “in agony, unable to lift his head.” Yet on the very next day, a mere 24 hours or so away from his death-day, Einstein “asked Dukas to get him his glasses, papers, and pencil, and he proceeded to jot down a few calculations.” “He worked as long as he could,” noted biographer Walter Isaacson, “and when the pain got too great he went to sleep,” for the final time. Indeed, Einstein died doing the one thing he loved most — working. Ah, circumstances reveal character! “Genius is one percent inspiration,” said Einstein, “and 99 percent perspiration.” Indeed, it’s not by accident that no one has ever become great by accident. After all, as Einstein once noted: “Only a monomaniac gets what we commonly refer to as results.” Show me someone great and I’ll show you someone obsessed. Besides, what more is “greatness” than the child of an obsession? For the above reason, when Einstein was asked for the secret to a happy life, though the questioner expected an answer long and sour, Einstein kept it short and sweet: “If you want to live a happy life, tie it to a goal, not to people or things.” Here lies Einstein’s formula for a happy life.
https://medium.com/mind-cafe/einsteins-formula-for-a-happy-life-b29aff61a9c7
['Genius Turner']
2020-12-27 12:14:20.610000+00:00
['Life Lessons', 'Self Improvement', 'Life', 'Self', 'Productivity']
window.location Cheatsheet
If you’re looking for a site’s URL information, then the window.location object is for you! Use its properties to get information on the current page address or use its methods to do a page redirect or refresh 💫
window.location.origin → 'https://www.samanthaming.com'
.protocol → 'https:'
.host → 'www.samanthaming.com'
.hostname → 'www.samanthaming.com'
.port → ''
.pathname → '/tidbits/'
.search → '?filter=JS'
.hash → '#2'
.href → 'https://www.samanthaming.com/tidbits/?filter=JS#2'
window.location.assign('url')
.replace('url')
.reload()
.toString()
window.location Properties
Difference between host vs hostname
In my above example, you will notice that host and hostname return the same value. So why do we have both of these properties? Well, it has to do with the port number. Let’s take a look.
URL without Port
window.location.host; // 'www.samanthaming.com'
window.location.hostname; // 'www.samanthaming.com'
window.location.port; // ''
URL with Port
window.location.host; // 'www.samanthaming.com:8080'
window.location.hostname; // 'www.samanthaming.com'
window.location.port; // '8080'
So host will include the port number, whereas hostname will only return the host name.
How to change URL properties
Not only can you call these location properties to retrieve the URL information, you can also use them to set new values and change the URL. Let’s see what I mean.
// START 'www.samanthaming.com'
window.location.pathname = '/tidbits'; // Set the pathname
// RESULT 'www.samanthaming.com/tidbits'
Here’s the complete list of properties that you can change:
// Example
window.location.protocol = 'https'
.host = 'localhost:8080'
.hostname = 'localhost'
.port = '8080'
.pathname = 'path'
.search = 'query string' // (you don't need to pass ?)
.hash = 'hash' // (you don't need to pass #)
.href = 'url'
The only property you can’t set is window.location.origin. This property is read-only.
Location Object
window.location returns a Location object, which gives you information about the current location of the page. But you can also access the Location object in several ways.
window.location → Location
window.document.location → Location
document.location → Location
location → Location
The reason we can do this is that these are global variables in our browser.
window.location vs location
All 4 of these properties point at the same Location object. I personally prefer window.location and would actually avoid using location. Mainly because location reads more like a generic term and someone might accidentally name their variable that, which would override the global variable. Take for example:
// https://www.samanthaming.com
location.protocol; // 'https'
function localFile() {
  const location = '/sam';
  return location.protocol; // ❌ undefined
  // b/c the local "location" has overridden the global variable
}
I think most developers are aware that window is a global variable. So you’re less likely to cause confusion. To be honest, I had no idea location was a global variable until I wrote this post 😅. So my recommendation is to be more explicit and use window.location instead 👍 Here’s my personal order of preference:
// ✅
1. window.location // 🏆
2. document.location
// ❌
3. window.document.location // why not just use #1 or #2 😅
4. location // feels too ambiguous 😵
Of course, this is just my preference. You’re the expert of your codebase; there is no best way, the best way is always the one that works best for you and your team 🤓
window.location Methods
window.location.toString
Here’s the definition from MDN: This method returns the USVString of the URL.
It is a read-only version of Location.href. In other words, you can use it to get the href value of the current URL. As for which one to use, I couldn’t find much information on which is better; but if you do, please submit a PR on this 😊. But I did find a performance test on the difference. One thing I want to note about these speed tests is that they are browser specific. Different browsers and versions will produce different outcomes. I’m using Chrome, so href came out faster than the rest. So that’s the one I’ll use. Also, I think it reads more explicitly than toString(). It is very obvious that href will provide the URL, whereas toString seems like something is being converted to a string 😅
assign vs replace
Both of these methods will help you redirect or navigate to another URL. The difference is assign will save your current page in history, so your user can use the “back” button to navigate to it. Whereas with the replace method, it doesn’t save it. Confused? No problem, I was too. Let’s walk through an example.
Assign
1. Open a new blank page
2. Go to www.samanthaming.com (current page)
3. Load new page 👉 `window.location.assign('https://www.w3schools.com')`
4. Press “Back”
5. Returns to 👉 www.samanthaming.com
Replace
1. Open a new blank page
2. Go to www.samanthaming.com (current page)
3. Load new page 👉 `window.location.replace('https://www.w3schools.com')`
4. Press “Back”
5. Returns to 👉 blank page
Current Page
I just need to emphasize the “current page” in the definition. It is the page right before you call assign or replace.
1. Open a new blank page
2. Go to www.developer.mozilla.org
3. Go to www.samanthaming.com 👈 this is the current page
4. window.location.assign('https://www.w3schools.com'); // Will go to #3
4. window.location.replace('https://www.w3schools.com'); // Will go to #2
How to Do a Page Redirect
By now, you know we can change the properties of window.location by assigning a value using =. Similarly, there are methods we can access to perform some actions. So in regards to “how to redirect to another page”, well, there are 3 ways.
// Setting href properties
window.location.href = 'https://www.samanthaming.com';
// Using Assign
window.location.assign('https://www.samanthaming.com');
// Using Replace
window.location.replace('https://www.samanthaming.com');
replace vs assign vs href
All three do redirect; the difference has to do with browser history. href and assign are the same here. They will save your current page in history, whereas replace won’t. So if you prefer creating an experience where the user can’t press back to the originating page, then use replace 👍 So the question now is href vs assign. I guess this will come down to personal preference. I like assign better because it’s a method, so it feels like I’m performing some action. Also there’s an added bonus of it being easier to test. I’ve been writing a lot of Jest tests, so by using a method, it makes it way easier to mock.
window.location.assign = jest.fn();
myUrlUpdateFunction();
expect(window.location.assign).toBeCalledWith('http://my.url');
Credit StackOverflow: @kieranroneill. But for those that are rooting for href to do a page redirect: I found a performance test, and running it in my version of Chrome, it was faster. Again, performance tests vary with browsers and versions; it may be faster now, but perhaps in future browsers the order might be swapped. Scratch your own itch 👍 Okay, a bit of a tangent to give you a glimpse of how this cheatsheet came to be.
I was googling how to redirect to another page and encountered the window.location object. Sometimes I feel a developer is part journalist, part detective — there’s a lot of digging and combing through multiple sources to gather all the information available. Honestly, I was overwhelmed by the materials out there; they all covered different pieces, but I just wanted a single source. I couldn’t find much, so I thought, I’ll cover this in a tidbit cheatsheet! Scratch your own itch, I always say 👍
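One small sketch building on the .search property above (not part of the original cheatsheet, just an illustration): modern browsers ship URLSearchParams, which is a less error-prone way to read an individual query parameter than slicing window.location.search by hand. The 'filter' key below is simply the example value shown in the property list.
// Assuming the current page URL is https://www.samanthaming.com/tidbits/?filter=JS
const params = new URLSearchParams(window.location.search);
const filter = params.get('filter'); // 'JS', or null if the key is missing
if (filter) {
  console.log(`Showing tidbits filtered by: ${filter}`);
}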
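And since assign vs replace mostly comes down to whether the page you are leaving should stay in the browser history, here is one way you might wrap that choice in a tiny helper. This is only a sketch: the function name goToPage and its keepHistory option are made up for illustration, not part of any library.
// Hedged sketch: redirect, letting the caller decide whether the current
// page stays in history (i.e. whether "Back" can return to it).
function goToPage(url, { keepHistory = true } = {}) {
  if (keepHistory) {
    window.location.assign(url); // "Back" returns to the page we left
  } else {
    window.location.replace(url); // the page we left is dropped from history
  }
}
// Usage (URLs borrowed from the examples above):
goToPage('https://www.w3schools.com'); // back button still works
goToPage('https://www.w3schools.com', { keepHistory: false }); // back skips this page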
https://medium.com/dailyjs/window-location-cheatsheet-f7ff4eff7604
['Samantha Ming']
2020-04-27 13:45:34.107000+00:00
['Software Engineering', 'JavaScript', 'Software Development', 'Programming', 'Web Development']
Credit Where Credit is Due
Credit Where Credit is Due Navigating Academic Credit and Authorship Credit assignment and decisions about paper authorship are a surprisingly difficult topic to navigate, particularly for junior researchers. I distinctly recall working really hard on my first paper, reflexively listing myself as last author — given my initials, I’m accustomed to being last on lists of names — and discovering through the oblique comments of my co-authors that name ordering on papers was actually a thing. That notion seemed rather comical to me at the time, but I quickly learned that people, bizarrely, really cared about it. General Principles Most professional organizations have a standard for authorship (for instance ACM, IEEE, APA, ICMJE). At my own institution, the golden rule is: ‘Fairness, leaning on the side of inclusiveness.’ This broad, purposely non-specific policy lets researchers decide on which standards, often set by their own community, to adhere to. We also provide researchers with a mechanism to seek advice and handle escalations. Standards vary in subtle ways, from whether being an active writer of the paper is a necessary condition for authorship, to practices surrounding acknowledgments and citations. My central advice to junior researchers is to talk about credit assignment: talk about it early, and repeat the conversation as often as needed. ‘Success is directly proportional to how many awkward conversations you’re willing to have’ — Tim Ferriss Most issues I have encountered have to do with mismatched expectations: ‘You said I would be a co-author / But you did nothing / But you said…’, or ‘You submitted this without acknowledging me?!’. Talking about it is the best antidote. Early conversations about authorship need not be understood as contracts: things change, people get busy with other things, and new people join in. This is why making a habit of having those conversations regularly is helpful, as long as everyone involved is willing to engage, as they should. One thing that greatly helps grounding these conversations is shared progress documents, where everyone’s input into the project is documented along the way. And these are difficult conversations! Some would say ‘Crucial Conversations.’ It is very tempting to procrastinate on having them, thus missing the chance to uncover problems early. I have seen cases where the author list would only be settled on the day the camera-ready paper was due, which puts an inordinate amount of stress on collaborators, particularly those in junior roles. There is no universal template on how to have those conversations, as much depends on the respective roles of the participants and how mature the work is at the time. A good place to start is as soon as a narrative for the research starts taking shape. As collaborators start discussing the ‘what’ and ‘when’ of a potential publication, it is an opportunity to steer the conversation towards the ‘who’ as well, while setting the expectation that this will be an ongoing process. Associating your name with any body of work should not be taken lightly. Use acknowledgements liberally — being acknowledged on someone’s work is generally an appreciated recognition, but never acknowledge or add someone as a co-author on a paper without asking them first. There are a number of people who take steps to not have their name or affiliation publicized online, and it is important to respect their choice of privacy. 
I have also declined to be co-author on papers I had nominally made contributions to because I disagreed with the quality of the outcome or value of the work as a standalone publication. Note also that the authorship standards on patent filings tend to be much more stringent than on academic publications, since a co-inventor on a patent being shown not to have participated in the invention may be grounds for invalidating the patent altogether. Common Patterns There are some common scenarios that tend to be fertile sources of questions around credit. One is the role of team managers, leads, or academic advisors. Simply put, in my team, we don't practice 'honorary authorship.' Leads should not be added as authors for providing headcount, funding, attending meetings or merely having a pulse. There are intangible contributions such as steering the research, advising or enabling it organizationally in specific ways that can be worthy of recognition, but I don't believe that the all-too-common practice of adding the lab leads on every paper is healthy, especially since it cheapens somewhat the contributions of senior team members. Another question that comes up often is how to recognize the impact of software engineers who contributed infrastructure in support of the research. Any engineering work that is specifically done in the service of a research project is unequivocally grounds for an invitation to contribute to the paper. Work that is more horizontal, and contributes to multiple projects without specialization, such as tooling and frameworks, is generally worth acknowledging or citing when there is a corresponding publication. It is worth noting that academic credit is not uniformly valued by those who haven't built their career around research outcomes. It is important to try and understand how to reward their work in ways that matter to them. The issue of prior art is another common one, particularly when it comes to building on top of one of your colleagues' work, for which they may feel some degree of ownership. The general standard here is that any published work is grounds for a citation, not co-authorship. Demanding to be co-author on any derivative of one's work is a common impulse, but is to be resisted unless it involves unpublished work. Many communities try to normalize for that by measuring impact by the number of non-self citations, though that standard is not uniformly applied. On the flip side, use of specific, actionable, and previously unpublished ideas warrants inviting the individual who came up with the idea to co-author the paper. Issues tend to arise when people seek credit for general concepts that they have shared, but not developed further than merely discussing them at a high level. It is easy to write down and circulate an idea, without qualifying or attempting to publish it, in the hopes that someone else will work on something similar and then be duty-bound to credit you. But this is a very damaging dynamic, particularly in greenfield areas where there are many 'obvious' ideas to try and the hard work is to qualify and validate them. My antidote to this is to consider any claim of prior art on generic, non-specific ideas to be grounds for an acknowledgment, not co-authorship.
It is a delicate balance because you still want to incentivize the free flow of ideas, but not to the extent that every embryo of a concept coming out of one's mouth becomes a flag planted in the research landscape that says 'keep out or give me credit.' In large teams, with long-running efforts involving a number of people and aspects to a problem, one very common struggle is the issue of optimizing for credit being attributed to the team at large versus the desire for smaller parts of the effort to publish more quickly on their own sliver of the issue. There is often a temptation to 'self-scoop': quickly publish a paper that either competes with a larger effort, solves a part of it without addressing the larger scope of the project, or uses new infrastructure built for the project that is unpublished. It is one of the few cases where perfectly solid and publishable research may need to be held back, and the larger project's interests might rightfully preempt the interests of a few. Unconscious Biases and Incentives I find it very useful to try and understand what implicit biases and incentives may be at play when collaborating on a project because it is easy to unwittingly create a difficult situation out of the best of intentions. Picture for instance: A senior contributor adds a second junior contributor to a project, threatening 'first authorship', Conversely, a junior member seeks advice from another senior contributor, threatening 'last authorship', A collaborator adds #Celebrity to the project, delighted to get a chance at co-authoring a paper with them. Other team members are unhappy because it is now a '#Celebrity project' and their own contributions will be eclipsed, The authors agree on author order upfront on the basis of certain expectations, but the project evolves, Aaron Aappy suggests alphabetical ordering, Zoe Zzzywk is not amused. In many fields, Ph.D.s are made out of first-author papers, tenure cases out of last-author papers, and the number of shots one has at building one's portfolio is limited, creating strong incentives in favor of or against certain types of collaborations. There are many other ways one's personal history can come to bear on someone's perspective on fairness: if you are from an underrepresented group in your field, maybe 'equal authorship' doesn't feel as equal to you. Or if you're an engineer whose contributions to research efforts have been overlooked in the past, a mere acknowledgment of your work, as opposed to co-authorship, may feel like yet another slight. This is why I emphasize talking about those issues openly in a workgroup from the start because as long as those biases remain unconscious, to yourself as well as to others, they get in the way of even defining what fairness means for everyone. No matter how much we want to think that fairness can be objectively defined, it remains a largely subjective measure in this context. Credit is not an additive quantity A final common cognitive bias I encounter often is this notion that adding another author to a paper always dilutes one's contributions, as if credit were a fixed pie to be carved between co-authors. This is completely untrue: nobody will care if you're co-author on a 4-person or a 5-person paper — it may well be quite the opposite in the context of single-author papers. In fact, if you have a very well-established researcher as a co-author, to some extent they may lend your paper credibility, effectively adding to the overall credit pie more than they take away.
The Difficult Cases There are also many ways things can get complicated in a less benign manner. Picture these: Alice presents some ideas at the team meeting. Months later, Bob submits a paper on a very similar idea. Alice was not aware. Advisor suggests a set of projects. Alice and Bob independently pick up on the same idea and, without speaking to each other, independently run with it. Bob starts a collaboration with Alice, but doesn't find time to work on it. The paper is written before he gets a chance to free up enough time to contribute. Bob suggests an idea. Alice says 'I had the same idea a few months ago, let me write it up', and quickly produces a document describing the idea. Alice and Bob, first and second authors respectively, submit a paper to a conference but it gets rejected. Alice has other things to do, but Bob works with Chris to revise it substantially and resubmit, and it gets accepted. Bob does a lot of work for the project, but the work doesn't end up getting included in the write-up. In each of these scenarios, there is a natural setup for conflicting perspectives, or room for actual blame, and the right thing to do may depend on a number of factors. This can be where having frank conversations, ones that include both facts and people's interpretations and feelings surrounding those facts, matters the most. This is also where some form of independent adjudication may become particularly useful. Questions of authorship can also ultimately be one of those decision points where getting to a degree of closure that is acceptable to everyone involved is more important than the actual outcome. If push comes to shove, tossing a coin is not the worst way to yield a decision. In particular, it doesn't come with the 'baggage' of rationalizing why a specific decision was made, and enables everyone to move on. Should I Care? Many would regard issues around authorship as petty and unworthy of their attention. I'll admit to being in that camp for much of my career, focusing a lot more on moving the scientific needle than on the details of credit appropriation. But as my role grew into becoming a research lead, I came to appreciate that not wanting to care didn't absolve me from paying attention, because it genuinely matters to others around me, and for reasons that I can't simply dismiss out of hand. As much as I'd love for ego and career development to be merely secondary to the drive toward exceptional research outcomes, I understand that they themselves also fuel a large fraction of scientific progress. People respond to a wide palette of intrinsic and extrinsic motivators, but their self-image tends to always be at the very center of their motivation, and there is no escaping the fact that seeing one's name recognized in the public sphere is an important part of it. So I've turned around and decided that embracing people's desire for appropriate and fair recognition, and helping promote fairness and transparency in academic credit, was worthy of attention, if only so that people's attention may be confidently redirected to the more important questions of producing good science together. If good fences make good neighbors, fair and inclusive credit makes good research, and that's reason enough to care. Thanks to Aleksandra Faust for helping put together the material this article is based on, and to Joyce Noah-Vanhoucke for extensive editing.
https://towardsdatascience.com/credit-where-credit-is-due-ff9d3c38c940
['Vincent Vanhoucke']
2020-01-08 23:42:16.333000+00:00
['Publishing', 'Science', 'Academia', 'Research']
Propfolio x Decent Labs
Until recently, real estate management was stuck in the Stone Age. Physical copies of investment reports, antiquated spreadsheets, and long chains of command made obtaining and compiling property data endlessly time-consuming. When Propfolio came onto the scene in 2017, their founders saw beyond the industry's status quo. As three former commercial real estate professionals, Peter Bird, Tom Cartlidge and Angus Abbott knew the intimate details of their trade, and exactly what they needed to create to improve it. They saw the future of real estate: property data at your fingertips, personalized portfolio analysis at the push of a button and transparency at every turn. They imagined a virtual space that encompassed all of these features, as well as the ability to recognize key risks, while streamlining and strengthening day-to-day management processes. In short, they wanted real estate management to be intelligent, accessible, and simple. Since their founding, Propfolio has generated considerable enthusiasm, receiving their first investment from Pi Labs in 2018, and being selected just last year to join ten other startups in The Arcadis City of 2030 Accelerator powered by Techstars. Propfolio's dynamic drive and exceptional product have made our work with them at Decent Labs a natural partnership. Since February of this year, our joint efforts have been engaged in retaining the original spirit of Propfolio while working to push boundaries within a rapid design and development process. Aiming to embody the same holistic approach the London-based start-up was first founded on, the designers at Decent Labs have transitioned from designing a simple prototype to constructing the live, go-to-market version of Propfolio! With the vision of a thoughtful, clean user interface and the intentions of a comprehensive experience for real estate professionals, Peter, Tom and Angus's ideas have begun to take shape. "Decent Labs understood our problem, they saw beyond a demo model and helped us create a truly original product that our clients will be able to put to immediate use," Peter explained, "From the initial branding to prototyping and development, what they helped us build will grow with our company and will be able to scale even more effectively than we first hoped." Propfolio was just warming up when Decent Labs was brought on board. "Through the support of Decent Labs's designers, we were able to watch our sketches turn into wireframes, and saw the core of our brand being consistently represented with each step." From the prototype Propfolio was able to present to investors on Techstars's 2020 Demo Day, to the upcoming MVP launch, "Decent Labs has consistently impressed and excited us with how easy they've made building out our product." The next step in this collaboration? Onboarding and introducing clients to the app itself. Peter emphasized, "As members of the industry ourselves, we know our clients, and we knew that our product had to be approachable and uncomplicated, and that's exactly what we've created." Both the Propfolio team and Decent Labs have concentrated on building a UX that feels comfortable, while at the same time able to support complex asset information and smarter performance reporting.
“It’s the core of our product, the ability to have all your data at the push of a button, when you need it, wherever you are.” Propfolio and Decent Labs have worked tirelessly to bring real estate management to the 21st century, and with the arrival of the Propfolio app right around the corner, there is no doubt that these ambitions will become a reality.
https://medium.com/decentlabs/propfolio-x-decent-labs-34b1410a166b
['Emily Loughran']
2020-09-09 21:08:21.007000+00:00
['Startup', 'Real Estate', 'Investing', 'Venture']
Nine Million New Jobs in a Pandemic
Nine Million New Jobs in a Pandemic COVID hasn't killed the resilient U.S. jobs machine Photo by grmarc on Vector Stock. The American economy is incredibly dynamic — even in a soul-crushing pandemic. Lost in the disheartening news about the depressing number of firms going bust and workers losing their jobs is the other side of the ledger: new and existing firms are adding millions of new jobs. I estimate that about three jobs were created for every ten lost through furloughs or lay-offs — or almost nine million new jobs in all — during the dark first six weeks of the pandemic. This process of simultaneous job creation and destruction is a vital, if underappreciated, feature of our economy, and will be an important force in our recovery from the pandemic recession. Still, the overall labor market picture is grim and unlikely to be fully reversed soon. Over 21 million Americans lost their jobs in March and April (Fig. 1), the first two months of this nascent recession, equal to 14.5% of all jobs, according to the Bureau of Labor Statistics (BLS); less than half of these jobs have been recovered in the three months since, as the economy began to reopen (May through July). The astonishing 20.8 million jobs lost in April alone was ten times the previous record monthly job loss. In only six weeks we lost virtually all the jobs created in the last decade (22.8 million) — almost three times as many jobs as were lost in the entire Global Financial Crisis (GFC), which itself was the high-water mark for economic downturns since the Great Depression 90 years ago. The unemployment rate jumped from just 3.5% in February — its lowest rate in a half-century — to 14.7% in April, its highest rate since the Depression, before easing back down to 10.2% in July, though still above the high-water mark during the GFC. But even these metrics do not fully capture the extent of labor market distress as the "headline" unemployment figure counts only individuals actively seeking new work. With much of the country locked down and many jobless people otherwise either fearful of venturing out of their homes or skeptical of the prospect of finding new work in this environment, job-searching activity is limited, thereby depressing the measured unemployment rate. A more comprehensive measure of unemployment that also includes "underemployment" — people working part-time involuntarily or people who have dropped out of the labor force — rocketed from 7.0% to 22.8% in April and was still at 16.5% in July, near the 17.2% peak in the GFC. Add to that the many gig and part-time employees working fewer hours for less income and we have over a quarter of Americans working less than they want or not at all — on par with the lowest depths of the Great Depression. And yet, despite all this depressing news, there is cause for some optimism. For one thing, the vast majority of workers who lost their jobs were only temporarily "furloughed" or "completed temporary jobs" rather than permanently "laid-off" (Fig. 2), which theoretically should simplify and expedite rehiring as government-mandated shutdowns are relaxed. To wit, a survey conducted by the National Federation of Independent Business early in the pandemic found that half of its members expected to rehire all of their former workers and another quarter expected to rehire most. And a Washington Post — Ipsos poll found that more than three-fourths of newly unemployed workers believed they were highly or somewhat likely to get their old job back. But were these expectations realistic? Not fully.
Many firms have been unable to quickly or fully rehire many of their former employees — whether furloughed or laid off — even once allowed to reopen. The practical challenges of restarting and conducting their businesses, restocking their inventory, and getting their employees safely to the worksite are daunting for many firms. For others, consumer demand will remain depressed until customers feel safe, which may be prolonged for services such as restaurants and concerts where social distancing and other safety requirements will range from difficult to near impossible. Already the share of pandemic-related layoffs that swung from "temporary" to "permanent" has more than doubled from April (9.7%) to July (22.3%). The speed of employment recovery will also hinge on the degree of financial distress among businesses and households: How many businesses will not survive the lockdown? How many households will go bankrupt? The longer the lockdowns continue, or firms are forced to operate at reduced levels, the greater the number of businesses that will never reopen, particularly under-capitalized small businesses, which are already starting to fail or announce permanent closures. Likewise, the longer workers remain unemployed or on reduced income, the greater the financial distress among consumers, compounding the downturn. The early evidence is that the job losses are highly concentrated among lower-wage workers, magnifying the financial strain among households least able to afford gaps in income. No doubt the CARES Act was helping many businesses and households stay afloat in the short term, though most of the programs have since expired. And with the coronavirus continuing to spread in the absence of a robust national program for testing and contact tracing, it seems inevitable that periodic shutdowns and the economic downturn will continue for quite some time — and beyond the capacity of the federal government to keep propping up the economy. All of which raises the prospect of widening financial insolvency among businesses and households, thereby muting and delaying the rehiring of furloughed workers. Some Firms and Industries Thrive During Adversity While there is considerable doubt about how much and how quickly ailing firms will be able to rehire their former employees, other firms are actually thriving, even surging. Most obviously, e-commerce is booming at the expense of physical retailers, while groceries are flourishing as restaurants languish. To cite some noteworthy announcements: Walmart plans to hire 50,000 more workers, on top of the 150,000 they just hired to deal with increased demand, while Amazon is adding another 75,000 new workers to the 100,000 previously announced. That's 375,000 new jobs from just two firms. Add to that 100,000+ new hires at grocers like Safeway, Kroger, and Albertsons; drug chains like CVS (50,000 new jobs), Walgreens, and RiteAid; and home improvement stores like Lowe's (30,000) and Home Depot. The food delivery startups (DoorDash, Uber Eats) are also adding workers, as are pizza chains and dollar stores. Even the fledgling legal cannabis industry is seeing a hiring boom. All this despite the net loss of almost two million retail jobs overall over the past two months, equal to 14% of all retail jobs. But job creation is hardly limited to just the retail sector.
Jobs are growing across many sectors for firms that make products or provide services that help us either deal with the health crisis or otherwise adjust to the "new normal." Of the former, it's all hands on deck, and many new hands, for pharmaceutical companies seeking COVID-19 vaccines or treatments and manufacturers of masks and hand sanitizers. Cleaning services are seeing a surge in demand, as are delivery services. Thus, the economy continues to create many new jobs, as well as new firms, even as many more are destroyed or at least waylaid. A Very Dynamic Economy This process of simultaneous job creation and destruction is not unique to this pandemic. Only the scale is unprecedented. When we hear that 200,000 jobs are added in a monthly jobs report — the average over the past decade of economic expansion — we implicitly focus on only these net new jobs created. But the reality is much more dynamic, with a vastly greater number of jobs begun and ended each month, as shown in the following graph, which goes back to 1999 when the government started tracking the flow of gross job gains and losses (Fig. 3). The U.S. economy created an average of 7.3 million private-sector jobs each quarter during the ten-year expansion that ended with the pandemic, while destroying 6.7 million jobs, for a net gain of about 535,000 private-sector jobs each quarter. This net is reversed in a recession, with the number of jobs destroyed exceeding the number of jobs created. In 2009, the worst year of the GFC, the economy shed 30.9 million private-sector jobs while adding 25.4 million, for a net loss of 5.5 million jobs. By contrast, in 2014, which was the strongest year of job growth during the last business cycle, the economy added 29.3 million private-sector jobs while losing 26.3 million, for a net gain of 2.9 million jobs. In fact, our economy reliably destroys many jobs even in strong growth years and creates many jobs even in the depth of recessions. The net — plus or minus — is relatively small in comparison to both the gains and losses but makes all the difference to our economic prosperity. New Jobs in a Pandemic . . . Which brings us to the current recession. The media reports of hiring at retail and delivery firms illustrate that the economy is still adding many jobs, even as we're hemorrhaging jobs overall. So when we read that the country lost over 22 million jobs during March and April, we now understand that's net jobs lost. But how many gross jobs were lost and, more importantly, how many were created? Estimates of gross jobs gained and lost for the current period will not be released for months, but we can estimate gross job losses using the number of "initial claims" for unemployment as an imperfect proxy.[1] Gross job losses historically track pretty closely to unemployment claims, and thus claims can be used as a rough gauge of gross job losses. Some 30.8 million people filed new claims for unemployment benefits in March and April. With 22.2 million net jobs lost over those two months, that implies the economy added about 8.7 million new jobs over the same period (Fig. 4). In other words, the economy gained almost three new jobs for every ten it lost (8.7M versus 30.8M). This finding almost precisely matches the findings from a Federal Reserve Bank of Atlanta study by Dave Altig, John Robertson, and other Atlanta Fed economists and researchers, which concluded that "COVID-19 caused 3 New Hires for Every 10 Layoffs." And it's broadly consistent with another Fed report that found that 4% of U.S.
adults surveyed said they started a new job in March compared to 13% that lost their job during the month (again, a 3 to 10 ratio). [1] Using initial unemployment claims data as a proxy for gross job losses isn't perfect. Some people who lost work couldn't file for unemployment for one reason or another (and thus are not counted in the initial claims data). On the other hand, some people who did file for unemployment were subsequently hired for other jobs (overcounting jobless claims). Also, the time periods for the unemployment claims and the employment surveys do not coincide exactly; the period for jobs data ends earlier in the month, though Fed staff analysis of ADP data, as well as the initial claims data itself, suggest lay-offs slowed sharply in the second half of April, narrowing the discrepancies between the two sources. If at least directionally accurate, the 8.7 million jobs created in the early days of the pandemic were as extraordinary as the pace of job losses making the headlines. In the two decades for which we have data, the rate of gross jobs created is remarkably steady, typically hovering between 7.1 million and 7.8 million new jobs per quarter, and it has hit 9.0 million in a quarter only once. The 8.7 million jobs added in just two months, if confirmed, demonstrates the huge changes in how our economy is functioning. . . . And New Businesses, Too A related but distinct feature of our dynamic economy is that even in a recession, firms are opening new branches and entrepreneurs are forming new companies. If history is any gauge, one in five new jobs would be at new establishments, whether a new branch of an existing firm or an entirely new entity.[1] The share of jobs created in expanding versus new establishments has been remarkably consistent over time. As with the ratio of jobs created to jobs destroyed, we won't know the actual recent share of jobs attributable to new establishments for quite some time, but my guess is that it's somewhat below its historical ~20% average: The Census Bureau reports that although the number of new business applications rose 2% in the second quarter of 2020 relative to a year earlier, "Business Applications with Planned Wages" — that is, the applications most likely to result in new jobs — are down more than 13%. Conclusions and Implications With all the ugly economic headlines in the first two months of the pandemic, it is perhaps comforting to learn that upwards of nine million jobs were created over this period, even as three times as many positions were lost. Likely more than a million of these new jobs were created in either new firms or the new branches of existing firms. Capitalist economies are endlessly adaptable, as firms and workers continuously seek new ways to identify and capture changing market opportunities, even during harsh economic conditions. This dynamism gives hope that our economy will continue to evolve and create new opportunities for enterprising workers and investors as we collectively confront the many challenges of containing and ultimately conquering COVID-19. There is no escaping the fact that the U.S. labor market was in a free fall from mid-March through mid-April, though some of the carnage has since been reversed. A net 9.3 million jobs were created or revived in May through July — even as an additional 23.8 million workers filed for unemployment, implying that almost 33 million Americans returned to work, either at their old jobs or in new positions.
Still, the cold, hard reality is that some 13 million fewer Americans hold jobs now than just prior to the pandemic. Many more gig and part-time workers are working fewer hours for less income. And given the continued rise in coronavirus cases across the country, and ominous warnings from epidemiologists and public health professionals of a potential major infections wave in the fall, it seems inevitable that some of the recent progress we’ve made in restarting the economy and rehiring workers will be reversed — particularly with the expiration of the various federal business and household income support programs. Restoring our pre-pandemic prosperity — imperfect as it was — will be a long, tough haul. [1] According to the BLS, “An establishment is defined as an economic unit that produces goods or services, usually at a single physical location, and engages in one or predominantly one activity. A firm is a legal business, either corporate or otherwise, and may consist of several establishments.”
https://medium.com/the-innovation/nine-million-new-jobs-in-a-pandemic-covid-hasnt-killed-the-resilient-u-s-jobs-machine-a735d73b0c08
['Andrew Nelson']
2020-09-03 19:47:03.552000+00:00
['Covid 19', 'Jobs', 'Economy', 'Entrepreneurship']
Stop playing dice with paradise!
By all accounts, J&K has become a nerve-wracking problem for all the stakeholders in that region, right from the Kashmiris themselves to the mechanical arrangements that involve India and Pakistan in this long and complex arena of conflicts, apathy, pathos and despair. It'd hardly be lucid to say anything about 'K' without addressing the evolution of this state. Kashmir has been a lingering issue that should have been resolved way back. I agree India has failed Kashmir. Pakistan too has. They have failed to address people's concerns. It has rather become fashionable to speak over the ownership rights of this disputed land. Grandiosity from both sides of the border steals the limelight while addressing people directly takes a back burner. No wonder, empty rhetoric is what is left on the table. When the British finally decided to relinquish the Indian subcontinent, they were discussing how the future state/s would take shape. Various ideas emerged, right from the creation of nations on the basis of language, basic culture, region & religion. Of all the factors, religion became the ultimate criterion. Though the Indian subcontinent boasted of nearly all existing world religions, right from the Indic beliefs of Hinduism, Jainism, Buddhism & Sikhism to Abrahamic yet Indianised versions of Islam, Christianity & Judaism, considering social passions it was decided to include Islam and 'the rest' as the primary dividing factors, and thus the modern states of multi-religious India and largely Muslim Pakistan got created. But some regions became a thorn in the resolution, primarily the princely state of Jammu & Kashmir. Kashmir, being a Muslim majority area, was claimed by Pakistan for obvious reasons, whereas India's claims rested on the 'accession' agreement signed between the Maharaja of J&K and the Indian government. It also needs to be pointed out that though Pakistan was proclaiming itself as the homeland for sub-continental Muslims, even then the modern partitioned India had more Muslims in entirety than Pakistan. Also, Pakistan's convictions further deteriorated when its eastern wing got separated to form the present independent nation of Bangladesh. The struggle for an independent B'desh from the united 'Islamic Republic of Pakistan' was on a linguistic basis, not with religion as the cornerstone. It somehow proved that religion couldn't always be a cohesive factor. Now coming back to 'K'. More than 60 years have died giving birth to this problem as it is, and still we are running in political circles. People who've been following Kashmir can easily vouch for the fact that not all is well in paradise. Sadly, it is burning. And I suppose people with a benevolent heart and mind would feel pained to witness their agony and would like to see an end to their sufferings. Kashmiris themselves are tired and are demanding 'Azaadi'. 'Azaadi' from the daily humiliation they go through and 'Azaadi' to live in a functional society with a pristine atmosphere. After all, it's a basic right for every breathing human being. No one likes the Army or, for that matter, any non-civilian body interrupting the daily course of life, and that too on a weak hint of suspicion. And it's a naked secret that defense forces have used coercion and violation at the drop of a hat. Having said that, playing devil's advocate, what I don't understand are the voices coming from the Valley with statements such as "Kashmiri society and Indian society are different" and "Kashmiri culture and Indian culture vary". My questions are — What is Indian culture?
Can anyone define Indian culture? Is India too homogenous to assimilate Kashmiri influence within its society and national frame? Can you identify any single aspect (say language/religion/culture) and declare it truly Indian and the rest as not? On the contrary, Indian society is a vast and diverse phenomenon. India's diversity is capable of holding a variety of interests and ideas even when they conflict with each other, eventually leading to broad-based assimilation. Can't beautiful Kashmir fit into exotic India? Don't you think, of all the existing options for Kashmir, its continuance as part of India would be a crown for its own welfare as well as for an idea called India? Normality must return to the Valley, at the earliest. We all agree. But what after that? A permanent solution must emerge, taking all the relevant factors into consideration. People who have long reneged on their Kashmiri identity in favor of power should keep their mouths shut and minds open. They are misleading the masses into false utopian beliefs. It will be nothing more than a farce and eventually too late to reconcile with truth. Arrogant Army powers must be severely amended, if not repealed completely. Crime is a crime and that applies to everyone. The Shopian rape case showed how fragile the judiciary is in J&K. The culprits were not held accountable for their misdeeds. It was blindfolded Themis that went to trial and was acrimoniously disrobed. Such events shouldn't be allowed to repeat. And yes, there is an undeniable gap between mainland Indians and people in the northernmost state. Lack of communication has been a huge deterrent. Tourism in J&K, which helped a lot in fostering the economy, goodwill and camaraderie, was, no wonder, attacked by the secessionist/terrorist elements. As long as people-to-people connect is absent, all other efforts will only be on paper. If we want to call ourselves a country with 28 states, then we better not act like a 27-state nation. Interestingly, Bollywood of the past glorified Kashmir as a tourist destination, but today, even it prefers the Swiss Alps. My post here reeks of parochialism and I can't do anything about it for a very simple reason: I am an Indian. My nation was built on common aspirations, common dreams and a search for common identity. We weren't forced to shout "I am an Indian" at any point of time. The sense of Indianness trickles from our heart no matter which state we belong to or whatever tongue we speak. Yes, we are facing problems in the form of poverty, Naxalism, corruption and whatnot, but talking of secession of Kashmir, I don't think it helps the case. India has a timeless history of tolerance. Even today, we tolerate a lot. But frankly, toleration of sedition is one thing and secession, another. We don't have to look weak. All we have to do is be right, and right now, we are far from right. We are in the middle of somewhere. Prosperous Jammu & Kashmir, Progressive India and South Asian haleness should be the ultimate aim.
https://medium.com/shaktianspace/stop-playing-dice-with-paradise-6c989c69d084
['Shakti Shetty']
2017-01-10 10:45:49.684000+00:00
['Jammu And Kashmir', 'Politics', 'Democracy', 'Religion', 'Pakistan']
Scientists Edited Human Embryos in the Lab, and It Was a Disaster
Scientists Edited Human Embryos in the Lab, and It Was a Disaster The experiment raises major safety concerns for gene-edited babies Photo illustration, sources: Wellcome Trust, ZEPHYR/Science Photo Library/Getty Images Reengineering Life is a series from OneZero about the astonishing ways genetic technology is changing humanity and the world around us. A team of scientists has used the gene-editing technique CRISPR to create genetically modified human embryos in a London lab, and the results of the experiment do not bode well for the prospect of gene-edited babies. Biologist Kathy Niakan and her team at the Francis Crick Institute wanted to better understand the role of a particular gene in the earliest stages of human development. So, using CRISPR, they deleted that gene in human embryos that had been donated for research. When they analyzed the edited embryos and compared them to ones that hadn’t been edited, they found something troubling: Around half of the edited embryos contained major unintended edits. “There’s no sugarcoating this,” says Fyodor Urnov, a gene-editing expert and professor of molecular and cell biology at the University of California, Berkeley. “This is a restraining order for all genome editors to stay the living daylights away from embryo editing.” While the embryos were not grown past 14 days and were destroyed after the editing experiment, the results provide a warning for future attempts to establish pregnancies with genetically modified embryos and make gene-edited babies. (The findings were posted online to the preprint server bioRxiv on June 5 and have not yet been peer-reviewed.) Such DNA damage described in the paper could cause birth defects or genetic diseases, or lead to cancer later in life. “This is a restraining order for all genome editors to stay the living daylights away from embryo editing.” Since CRISPR’s debut as a gene-editing tool in 2013, scientists have touted its possibilities for treating all kinds of diseases. CRISPR is not only easier to use but more precise than previous genetic engineering technologies — but it’s not foolproof. Niakan’s team started with 25 human embryos and used CRISPR to snip out a gene known as POU5F1 in 18 of them. The other seven embryos acted as controls. The researchers then used sophisticated computational methods to analyze all of the embryos. What they found was that of the edited embryos, 10 looked normal but eight had abnormalities across a particular chromosome. Of those, four contained inadvertent deletions or additions of DNA directly adjacent to the edited gene. A major safety concern with using CRISPR to fix faulty DNA in people has been the possibility for “off-target” effects, which can happen if the CRISPR machinery doesn’t edit the intended gene and mistakenly edits someplace else in the genome. But Niakan’s paper sounds the alarm for so-called “on-target” edits, which result from edits to the right place in the genome but have unintended consequences. “What that means is that you’re not just changing the gene you want to change, but you’re affecting so much of the DNA around the gene you’re trying to edit that you could be inadvertently affecting other genes and causing problems,” says Kiran Musunuru, a cardiologist at the University of Pennsylvania who uses CRISPR in his lab to research potential heart disease therapies. 
If you think of the human genome — a person’s entire genetic code — as a book, and a gene as a page within that book, CRISPR is like “ripping out a page and gluing a new one in,” Musunuru says. “It’s a very crude process.” He says CRISPR often creates small mutations that are probably not worrisome, but in other cases, CRISPR can delete or scramble large sections of DNA. This isn’t the first time scientists have used CRISPR to tweak the DNA of human embryos in a lab. Chinese scientists carried out the first successful attempt in 2015. Then, in 2017, researchers at the Oregon Health and Science University in Portland and Niakan’s lab in London reported that they’d carried out similar experiments. Ever since, there have been fears that a rogue scientist might use CRISPR to make babies with edited genomes. That fear became reality in November 2018, when it was revealed that Chinese researcher He Jiankui used CRISPR to modify human embryos, then established pregnancies with those embryos. Twin girls, dubbed Lulu and Nana, were born as a result, sending shockwaves throughout the scientific community. Editing eggs, sperm, or embryos is known as germline engineering, which results in genetic changes that can be passed on to future generations. Germline editing is different from the CRISPR treatments currently being tested in clinical trials, where the genetic modification only affects the person being treated. While many scientists have opposed the use of germline editing to create gene-edited babies, some say it could be a way to allow couples at high risk of passing on certain serious genetic conditions to their children to have healthy babies. Beyond preventing disease, the ability to edit embryos has also raised the possibility of creating “designer babies” made to be healthier, taller, or more intelligent. Scientists almost universally condemned He’s experiment because it was done in relative secrecy and it wasn’t meant to fix a genetic defect in the embryos. Instead, he tweaked a healthy gene in an attempt to make the resulting babies resistant to HIV. In the United States, establishing a pregnancy with an embryo that has been genetically modified is prohibited by law. More than two dozen other countries directly or indirectly prohibit gene-edited babies. But many countries have no such laws. Since He’s fateful gene-editing experiment became public, a researcher in Russia, Denis Rebrikov, has expressed interest in editing embryos from deaf couples in an attempt to provide them with babies that can hear. Niakan could not be reached for comment, but in a December 2019 editorial in the journal Nature, she argued that much more work on the basic biology of human development is needed before gene editing can be used to create babies. “One must ensure that the outcome will be the birth of healthy, disease-free children, without any potential long-term complications,” she wrote. The embryos edited by Niakan and her team were never intended to be used to start a pregnancy. In February 2016, her lab became the first in the U.K. to receive permission to use CRISPR in human embryos for research purposes. The embryos used are left over from fertility treatments and donated by patients. Niakan’s paper comes as the U.S. National Academies, U.K.’s Royal Society, and the World Health Organization are contemplating international standards around the use of germline genome editing in response to the global outcry over He’s experiment. 
The committees are expected to release recommendations this year or in 2021. But because these organizations have no enforcement power, it will be up to individual governments to adopt such standards and make them law. Urnov says the new findings should influence those committees' decisions in a substantial way. Musunuru agrees. "Nobody has any business using genome editing to try to make modifications in the germline," he says. "We're nowhere close to having the scientific ability to do this in a safe way."
https://onezero.medium.com/scientists-edited-human-embryos-in-the-lab-and-it-was-a-disaster-9473918d769d
['Emily Mullin']
2020-08-18 15:37:30.322000+00:00
['Gene Editing', 'CRISPR', 'Science', 'Reengineering Life', 'Dna']
How to Be Comfortable Speaking a New Language
Despite spending months learning a language, you still feel unease speaking it. You’re starting to understand more but it still feels foreign to you. The words coming out of your mouth feel wrong, even when they aren’t. This is a struggle all language-learners have to go through. One we’ve all stumbled against at least once. One we fear we can never pass. One we wish never existed. The journey to becoming comfortable in a new language isn’t the journey to become fluent. It’s only part of it. An important part, to say the least. How can we feel comfortable speaking the language quickly when it took us years to be at ease with our native language? Do we have to spend years to reach such a level? We don’t. As adults, we have more tools at our disposal. We don’t have to be passive and wait for this unease to disappear, we can tackle it head-on. How? Here are 5 ways I’ve used in the past decade to rid myself of this feeling and enjoy the journey even more.
https://medium.com/the-language-learning-hub/how-to-be-comfortable-speaking-a-new-language-10527510f978
['Mathias Barra']
2020-08-09 12:44:55.967000+00:00
['Language', 'Learning', 'Education', 'Self Improvement', 'Productivity']
Small talk your way to becoming an influential UX designer
I am a self-taught UX designer who made the transition from architecture in 2019. My first day of work at a real-life tech company was something of a culture shock. I had always worked in design agencies and boutique architecture firms where I sat shoulder to shoulder with my fellow designers, ready to help each other out in an instant. In contrast, I was now working not only with a team of 10 UX designers (spread company wide), but I also supported two agile teams of developers, engineering managers, and product managers. I had to quickly learn the ins and outs of the complex technical requirements, business KPIs and the needs of our users, while mastering this whole UX thing as I went along. Spoiler Alert: I eventually found my way around the place, through a deliberate regime of small talk. There is a reason why one of the top skills required for UX designers today is not mastery of design tools but communication skills. Learning software can be done through practice; learning design patterns is about copying and adapting what is out there; but understanding your product, business, and users cannot be achieved by just quietly doing your homework. UX design is about an amplified, multidimensional kind of teamwork, across teams, disciplines and hierarchies in an organisation. Small talk can be described as pointless chatter. "It's 'small' because you talk about unimportant things, in a way that fills up silences and makes you both feel more comfortable and friendly with each other." The smallness of the conversation can feel like a burden to many, but I prefer to focus on the second half of this definition: making people feel comfortable with one another. Here is how small talk helped me become a better UX designer and how it can maybe help you. Small talk helps you better navigate an organisation. It might help to think of small talk as the opportunity to collect little nuggets of knowledge about our co-workers. Collecting these nuggets every day helps paint a picture of someone beyond the operational facts we learn in a workday. You may quickly discover more things you have in common with your co-workers and develop a rapport with one another. It is equally important to recognise who hates small talk wholly and treat them differently. Starting a new job, there is an initial scramble to understand the culture and business and get to know your colleagues. Friendly chatter helps us identify quickly who the possible go-to people are when problems arise. We can use these conversations to help us draw a map of knowledge and influence. The sooner you build a support system at work, the faster you can solve problems and become an influential designer. Small talk is a gateway to more meaningful conversations. One of my favourite colleagues is an Engineering Manager who I eventually small-talked into a friendship. He was quick to warn me that he despises small talk. Whenever we chat, I come up with creative ways to gauge his mood. How many meetings have you had today? On a scale of 1–10, how productive was your day? My camouflaged small talk helped us evolve our discussions out of the realm of day-to-day pleasantries. Think of small talk as the quest for stories about people's days and lives. These stories help you build better connections at work and can turn colleagues into friends. This one doesn't apply just to UX, but friendships at the office can make work a real joy.
I benefited immensely from the support of my newfound friends at work, and our conversations and often heated debates were a great way for me to learn and think more holistically about the product I was working on. Small talk helps create a collaborative atmosphere. I desperately crave harmony at work and in life. As an ENFP personality type, I am also hypersensitive to awkward social situations. I practice small talk in an attempt to make people feel more relaxed. Since the switch to remote work, fewer people seem to make the extra effort to be friendly during video conferences. By opening up a dialog at the beginning of these meetings, the entire tone can change and become more collaborative. This can be a great time to ask a question about something you can see in their background or just break the silence with some chit-chat. When people feel more comfortable, they share their ideas and thoughts more readily. Small talk will make you a better user researcher. The exact same skills mentioned above will help designers conduct better user research. Beyond being a great communicator, this is one of the most sought-after skills for designers. The most skilled moderators allow the conversation to flow naturally and let the interview navigate seamlessly between tasks and questions. The candidate should have no idea the interview has even started. The other side of the coin is moderators who skip small talk entirely and abruptly list off a bombardment of questions as they are read from the discussion guide. Dominating conversations in this way can mean that the testing environment becomes so unnatural that the results are less reliable. These kinds of tests lack fascinating off-hand insights, which can be some of the best learnings from user interviews. Use small talk as a tool to set the tone for a casual conversation, which hopefully leads to more reliable research and some more talkative users. Small talk can help you navigate conflicts and mediate discussions. A considerable part of the job is empathising not only with our users but also with the many stakeholders involved in building a product. A healthy tension between Product Managers, Developers, and UX is fundamental for any successful product development process. Small talk can be a perfect tool to break down tensions and let conversations become less volatile. A conversation about something outside of work can help everyone remain calm and civil. This should not play out in a manipulative fashion; it is more of a reminder that the workplace's misunderstandings should not strip us of our humanity. A bit of empathy for one another can go a long way toward coming up with the best resolutions as a team. Small talk helps workshops and ideations run smoother. This is a no-brainer. Another highly sought skill for UX designers is running workshops and ideations. I have experienced my fair share of workshops run by someone who awkwardly tries to engage a group of people in a lackluster manner. Remote workshops are even more difficult, and activating the group is about making everyone feel comfortable and engaged. A UX designer should know who to invite and how to get everyone involved in the conversation. This is where the skills above become crucial in combination. Successful workshops are about facilitation, mediating conflicts, and fostering collaboration. No one is better suited than the friendly neighbourhood UX designer to pull it off. Small talk makes you adorably annoying.
I am unapologetic about how small talk has helped me grow as a designer and find my way around a complex organisation. I tend to focus on the adorable and not the annoying, so I am advocating for this approach. Some people won’t be into it and will probably ignore your social advances, which is totally fine. Participating in small talk is about being open to people and hoping a “How are you” could one day progress to a “Could you help me out?” Without the relentless friendliness and openness, I might still be feeling like I did on day one: Disconnected, lost, and not knowing where my place is in the organisation. I know this kind of social proactiveness may seem insincere and ridiculous to the introverted designers out there. Nonetheless, I implore you to do it anyway. If you are the best designer in the whole world, you will not make an impact if no one knows who you are. Remember being enthusiastic is worth 25 IQ points, so put on a smile, give it a try and let me know how it goes. Phase 1: Small talk. Phase 2: ??? Phase 3: Profit. References and fun reads: Ogeleka, C. (2016, July 29). HOW TO TURN SMALL TALK INTO SMART CONVERSATION. Retrieved December 12, 2020, from https://medium.com/@christianachiogeleka/how-to-turn-small-talk-into-smart-conversation-c9d873ea1d3a McAndrew, F. (2020, January 18). Why Small Talk Is a Big Deal. Retrieved December 12, 2020, from https://www.psychologytoday.com/us/blog/out-the-ooze/202001/why-small-talk-is-big-deal Chatting with a Purpose: Introverts and Small Talk. (n.d.). Retrieved December 12, 2020, from https://www.16personalities.com/articles/chatting-with-a-purpose-introverts-and-small-talk Sterling, B. (2020, November 18). Life advice from guru Kevin Kelly. Retrieved December 13, 2020, from https://www.wired.com/beyond-the-beyond/2020/04/life-advice-guru-kevin-kelly/
https://medium.com/design-bootcamp/small-talk-your-way-to-becoming-an-influential-ux-designer-db82c146ff7c
['Reem Alwahabi']
2020-12-26 21:19:03.494000+00:00
['Remote Working', 'Product', 'Careers', 'Coronavirus', 'UX']
2 Simple Ways to Reprogram Your Mind to Do Anything
How to Change Your Programming to Achieve the Results You Desire. If negative thoughts turn into beliefs that determine your behavior, then the opposite holds true. Positive thoughts and beliefs can lead to positive action. This means we can reprogram our minds to become successful, happy, fulfilled, etc. There are two ways you can reprogram your mind: (1) self-hypnosis, and (2) repetition. #1. Self-hypnosis We spend most of our waking time in the beta state. Here our consciousness is directed towards cognitive tasks and the outside world. When we relax — before we go to sleep, while we daydream, when we meditate — our brain enters the alpha state. The theta state is reached during sleep or deep meditation. In the theta state, your conscious mind shuts off, and your subconscious mind takes over; you’re back in the hypnotic state from your early years of childhood. If you listen to affirmations or a self-help audio program during sleep/deep meditation, your subconscious mind will insert pieces of the recording into your programming. Your brain always records what goes through your mind, consciously or unconsciously. I like to listen to affirmations as I drift off to sleep. I’ll set a timer on my phone (anywhere between 1–3 hours) so that when the alarm goes off, it’ll automatically stop whatever is playing. I recommend Higher Vibration Meditations on Youtube. They offer a variety of different affirmations as well as guided meditations. You can also record your own affirmations and play them on repeat. Not to mention, it’s a soothing way to go to sleep. #2. Conscious Repetition When we’re in our waking state, the only way we can learn something is through practice and repetition. Much like learning how to ride a bike, you have to repeat certain actions to form a habit. When you become conscious of a negative thought, you have to replace it with a positive one. Avoiding negative thoughts will not form the habit because we think in pictures, not words. If you tell yourself “don’t think about how insecure you are”, you’re going to think just that because our subconscious mind doesn’t operate through negatives. This is where affirmations become particularly important. If you’re trying to become more confident, say “I am confident”, repeatedly. All the time. Especially when thoughts of insecurities pop into your head. Through repetition, your subconscious mind will insert those statements into your programming, and your conscious mind will make it real. Once it’s programmed, you have no more work to do. The belief will stay until you rewrite the program again. You’ve heard the phrase, “fake it until you make it.” Your job isn’t to believe your affirmations (you won’t) but to constantly repeat them. Your subconscious mind will take care of the rest.
https://medium.com/the-innovation/2-simple-ways-to-reprogram-your-mind-to-do-anything-f1110f568a22
['Brenda Abigail']
2020-12-18 15:32:19.416000+00:00
['Self Improvement', 'Personal Development', 'Psychology', 'Life', 'Success']
Lunch with The Waltons: Finding Comfort Among Old Friends
PANDEMIC COPING Lunch with The Waltons: Finding Comfort Among Old Friends PS: Don’t call between 12 and 1 Photo by Andrea Piacquadio from Pexels Thanks to an unprecedented worldwide health emergency, most of us haven’t hugged a friend in nine months. Or been in the same room with extended family or co-workers. Or attended a wedding, a funeral, or a party to share joys and sorrows. Many of us are feeling a void. A need for human connection. Shhhh…don’t tell I don’t tell many people because they’d likely find it funny or pitiable. But here it is: I watch The Waltons reruns five days a week at noon as a way to cope. Religiously. Fervently. And on days I can’t watch it, I record it to savor later. Go ahead and snicker. I deserve it. I fell into the habit early in my pandemic work-from-home regimen. Grilled cheese and ‘70s family programming during my lunch break were just what I needed to adjust to a new, more solitary work day. ’Twas salve for the soul grieving interaction with the outside world. Cheesy, predictable plot lines set in a simple time with kindhearted, wise characters solving life-and-death problems in 48 minutes. Swoon. The oldies TV station plays the series — all 221 episodes over nine seasons — sequentially. Which means, nine months into this routine, I have yet to see a rerun and I enjoy fresh anticipation each weekday. It wasn’t supposed to last The Waltons’ original run began in 1972 and continued to 1982. The series was developed as a half-hearted response to congressional hearings on television programming and an implied encouragement for more “family-friendly” entertainment. CBS network executives didn’t expect it to catch on and even slotted it against high-rated programming on other networks so that it would not linger. (My family must have watched the other shows, because I don’t remember seeing much of the Walton family.) 1972 promotional photo for The Waltons TV show. Photo credit: CBS Television To their surprise, the homey, rose-colored stories of a large, rural family surviving the Great Depression struck a chord with viewers, and it became a popular series in its first season. The cast of mostly unknown actors gained quick renown, with Richard Thomas (John-Boy Walton, the eldest child) an easy standout. It wasn’t long before TV and movie producers tried to woo him to other projects, finally succeeding after five seasons. But the series continued for another four years without Thomas, owing to strong viewer loyalty and a strong ensemble cast of well-developed characters. Does this obsession with a 40+-year-old series make me look old? I’ve often rolled my eyes at people who are stuck in the past. You know the sort…the 65-year-old sporting a mullet and still talking about that Led Zeppelin concert he went to when he was 17, or the octogenarian savoring Frank Sinatra tunes, over and over and over. The world is a big place with exciting new things happening all the time. New music, new movies, new technology. Getting stuck in the past is a quick way to get old, I’ve always reasoned. When you have kids, you get dragged along through pop culture changes and manage to stay tuned in to what’s popular with their generation. Even though my kids are pretty much out of the nest, I still get cultural infusions from them, thank goodness. You know, it’s just part of what makes me a “cool mom.” Or at least that’s what I tell them for a laugh. 
Yet, I now find myself mired in a 1970s telling of 1930–40s Americana, which is sort of a double-whammy for “old.” And, silly or not, I love it. I’ve cried with the practically perfect Virginia family over the attack on Pearl Harbor. I’ve laughed at Grandpa’s lovable antics. I’ve grinned through various Walton’s Mountain weddings. I’ve even longed to sip a bit of the Baldwin Sisters’ famed “recipe.”
https://medium.com/bigger-picture/lunch-with-the-waltons-finding-comfort-among-old-friends-a827359cc5b8
['Tina L. Smith']
2020-12-22 20:24:40.426000+00:00
['Mental Health', 'TV Series', 'Culture', 'Lifestyle', 'Remote Working']
How to Use Kustomize for Your Kubernetes Projects
3. Add Changes to the Existing K8s Resource Files Changing the label of the existing Kubernetes deployment Inside the resources section, you can add your K8s resource files, such as deployments, services, namespaces, ConfigMaps, etc. At the moment, I only have a K8s deployment. After specifying the resources, you can then move to the next section where you define your changes. commonLabels: someName: #some-value owner: #mike app: #test The commonLabels field allows you to define labels for your YAML configuration file. As shown above, you can add your labels to it. Once you’re done with the changes, you can run the command below to apply them to your deployment: $ kubectl delete -f nginx-deployment.yaml First, you need to remove the existing pod. But before deleting it, make sure to run the kubectl get all --show-labels command to check what you have in the Labels section of your pod: // output NAME READY STATUS RESTARTS AGE pod/nginx-deployment 1/1 Running 0 26s LABELS app=nginx,pod-template-hash=66b6c48dd5 Then you can apply your changes by running the command below: $ kubectl apply --kustomize . OR $ kubectl apply -k . Let’s run the kubectl get all --show-labels command and see whether it has taken the changes: // output NAME READY STATUS RESTARTS AGE pod/nginx-deployment 1/1 Running 0 9m14s LABELS app=nginx,owner=randil,pod-template-hash=6dcc7ddc48 NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-deployment 1/1 1 1 9m14s LABELS app=nginx,owner=randil NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-deployment-6dcc7ddc48 1 1 1 9m14s LABELS app=nginx,owner=randil,pod-template-hash=6dcc7ddc48 Note: If you’re running an older version of Kubernetes, you have to install Kustomize on your computer before getting to this process. Then, you can run the commands below to apply the changes. $ kustomize build . > test-deployment.yaml $ kubectl create -f test-deployment.yaml Changing the names of the existing Kubernetes deployment Kustomize helps you change the names of your K8s resource files. For that, you can use the namePrefix and nameSuffix fields: resources: - nginx-deployment.yaml commonLabels: owner: randil namePrefix: test- nameSuffix: -dev namePrefix prepends your custom text to the beginning of the existing name and nameSuffix appends your custom text to the end of it. // output - extracted only the relevant part NAME deployment.apps/test-nginx-deployment-dev Refer to the docs for more options.
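To make the pieces above easier to see in one place, here is a minimal kustomization.yaml sketch combining the fields discussed in this section; the resource file name, label value, and prefix/suffix are illustrative stand-ins rather than a verbatim copy of the article's files.

# kustomization.yaml (illustrative sketch)
resources:
  - nginx-deployment.yaml    # the existing K8s deployment to transform
commonLabels:                # labels added to every resource and its selectors
  owner: randil
namePrefix: test-            # prepended to resource names
nameSuffix: -dev             # appended to resource names
# Result: deployment.apps/test-nginx-deployment-dev carrying the label owner=randil,
# applied with "kubectl apply -k ." as shown above.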
https://medium.com/better-programming/how-to-use-kustomize-for-your-kubernetes-projects-42a90c00bd56
['Randil Tennakoon']
2020-12-16 16:14:12.429000+00:00
['Programming', 'Kubernetes', 'Docker', 'DevOps', 'Containers']
Athletes, the new business angels banking on tech
If you were watching the Ellen DeGeneres Show over the summer, you might have caught ex-NBA superstar Shaquille O’Neal explaining how he invested in Google pre-IPO in the late ’90s after watching someone else’s kid at a Four Seasons Hotel in Los Angeles. That person ended up putting him onto the Google investment opportunity, and Google went on to become one of the biggest tech companies in the world. According to Shaq, he made “a really big return” and still owns shares as of today. In fact, an investment right after Google’s IPO in 2004 would be yielding a 15x return today, and I think Shaq is looking at somewhere around a 100x return. As funny and surprising as it may sound, the 7ft1 and 325lbs investor might have been one of the earliest athletes banking on tech, and it’s interesting that it happened by chance. The thing is, tech investing has long been reserved for a certain type of investor, and atypical HNWIs such as athletes have been overlooked. The public image of athletes not understanding what to do with their money is still in the heads of many, and their financial advisers often didn’t bother suggesting complicated tech investments. On the other hand, good early stage tech companies are very hard to find, and they would likely prioritize targeting more educated investors. As athletes became more aware of the investment opportunities available to them and with the tech ecosystem maturing, the gap between athletes and tech has been reduced. Investing in tech notably became a lot more accessible since technology started spreading across all industries. As a result, there was an increasing number of tech companies going public, which created liquidity as well as diversification options for investors. The internet further contributed to bringing athletes and tech companies closer together by offering direct communication channels but also by allowing both parties to collect information more efficiently; but the gap is still there. Over the last decades, economies started growing at a slower pace and tech companies became a portfolio “must have” for investors struggling to find good investment opportunities. Innovation and disruption have been key to creating real economic value, driving garage startups to become some of the most powerful public companies in the world today. In 2019, 7 of the 10 largest companies were tech companies (Microsoft, Amazon, Apple, Google, Facebook, Alibaba, Tencent). In 2009, it was only one (Microsoft, at about a fifth of today’s value). From an investor’s perspective, tech companies and tech stocks have simply been generating some of the most exciting returns for some time now. By taking a simple look at the performance of the NASDAQ (index tracking the biggest US tech companies) and the S&P 500 (index tracking the biggest US companies at large) over the last 10 years, the NASDAQ has returned roughly 108 percentage points more (289% vs. 181%). Relatively recent mind-blowing return examples include Amazon, Google, and Netflix. For instance, if you invested $100 in Netflix back in the 2002 IPO, you’d be looking at more than $33k today. Let alone if you invested pre-IPO. Indeed, returns in early and growth-stage startups can sometimes go as high as two- to three-digit multiples. Among some of the best VC deals, Sequoia Capital turned a $60 million investment in WhatsApp into $3 billion from 2011 to 2014 when it was acquired by Facebook. 
Peter Thiel, famously known as Facebook’s first investor, received more than $1 billion in proceeds from his $500,000 initial investment. 2019 will be remembered as a breakthrough year for athletes investing in tech as a few of them reportedly made a killing in major tech IPOs. Carmelo Anthony, Stephen Curry, and Lance Armstrong invested respectively in Lyft, Pinterest, and Uber while Andre Iguodala was an investor in Zoom, Jumia, and PagerDuty. In fact, Armstrong was vocal about his investment in Uber and told CNBC it even “saved his family”. The road to success for Lance Armstrong has been bumpy to say the least. The cancer survivor and former road racing cyclist was once deemed the greatest cyclist to ever do it after winning a world championship, seven Tour de France titles, and an Olympic bronze medal. At the peak of his career, Lance Armstrong was worth an estimated $125 million from salaries and endorsements that brought him as much as $20 million per year at some point. However, Armstrong later admitted to doping and was dropped by his sponsors, including Nike, Trek, Anheuser-Busch, and Oakley, while being held liable for damages. In the end, Armstrong lost his Tour de France titles and his Olympic medal, but he also suffered losses of about $75 million in terms of lawyer fees, endorsements, and legal settlements. Nevertheless, before the financial meltdown Armstrong made a $100,000 investment in Uber through Chris Sacca’s Lowercase Capital as early as 2009. According to Bloomberg, the Uber investment returned Armstrong about $20–30 million 10 years later. That’s somewhere close to half of his $50 million net worth as of 2018. The recent success stories of athletes investing in tech come as no surprise, as a select group of athletes has been pushing an influential movement of players investing in tech for a few years now. Carmelo Anthony was the first one to really get some skin in the game when he took advantage of his trade to the New York Knicks to develop his business activities. In 2013, he started Melo7 Tech Partners with Stuart Goldfarb, former president and CEO of a billion-dollar corporation and member of the WWE Board of Directors. Melo7 Tech Partners invested in more than 30 early stage tech companies, and 6 years later the fund had realized 12 exits, including deals like Lyft (ridesharing app, IPO), Bonobos (clothing brand, $310 million acquisition by Walmart), and Luxe (on-demand valet parking app, undisclosed acquisition by Volvo). The portfolio also includes fast-growing startups like Casper (mattress manufacturer) and Andela (HR tool). Tennis superstar Serena Williams might lead you to think that she got the tech buzz after marrying Reddit and Initialized Capital co-founder Alexis Ohanian. In fact, even though she publicly announced the launch of Serena Ventures this year, the fund has been investing since 2014 in early-stage companies led by women and people of color, and those that value “individual empowerment” and creativity. Williams teamed up with former asset manager at JPMorgan, Alison Rapaport, to oversee the investment activities, and at the time of writing, Serena Ventures has invested in more than 30 companies including startups like Impossible Foods (food from plants), Mayvenn (hairstylist platform), and Daily Harvest (food delivery). The duo notably just realized their first exit with Olly (wellness and nutrition product manufacturer, undisclosed acquisition by Unilever). 
Former NBA Champions Andre Iguodala, Stephen Curry, and Kevin Durant just so happened to all end up playing for the Golden State Warriors, in the Bay Area — birthplace of Silicon Valley and HQ of some of the biggest tech companies in the world such as Apple, Google, or Facebook — and piled up close to 50 investments altogether in the past 3 years. Iguodala, Durant, and Curry have all been very active in the tech ecosystem, developing relationships with venture capital wizards like Ben Horowitz, speaking at major tech events like TechCrunch Disrupt, and most recently launching the Players Technology Summit which has been running for 3 years now. Iguodala, who’s probably the most prolific tech athlete-investor of the bunch, has partnered on his investments with Rudy Cline-Thomas, founder and managing partner of VC fund Mastry Inc. With a dozen investments in his portfolio and a previous exit with Tristan Walker’s Walker & Company (undisclosed acquisition by Procter & Gamble), Iguodala just added a great trio of exits this year with Zoom (video conferencing, IPO), Jumia (e-commerce platform, IPO), and PagerDuty (operations performance platform, IPO). He’s also a shareholder in hot startups like Casper, Lime (electric scooters), The Players’ Tribune (publishing platform), and GOAT (sneakers marketplace). Iguodala has been one of the most vocal players in getting athletes to follow his path and is well on his way to becoming a figure in the VC ecosystem post-retirement. Durant started Thirty Five Ventures in 2016 with his partner Rich Kleiman, an ex-agent and co-founder at Roc Nation Sports, and has been a very active investor to say the least. KD invested in more than 30 startups with shares in some of the hottest deals including Lime, Postmates (on-demand delivery), Acorns (trading app), or Coinbase (cryptocurrency exchange). Through Thirty Five Ventures, Kevin Durant had his first exit with Grove (financial planning, undisclosed acquisition by Wealthfront) and is also doing well with other investments, like his stake in Postmates which has reportedly multiplied by 10 already. Curry founded SC30 Inc in 2017 with an ex-Davidson teammate and a Stanford graduate, Bryant Barr, and joins the athletes realizing their first exits this year, with Pinterest (visual bookmarking tool, IPO). Curry invested in a decent range of other startups as well, including TSM (e-sport organization), CoachUp (sports coaching), Team SoloMid (gaming platform), Brandless (ecommerce), Hooked (chat entertainment), Slyce (marketing automation platform), and SnapTravel (hotel deals messenger). Following his Uber exit, Lance Armstrong just launched Next Ventures, a $75 million venture capital fund to back startups in the sports, fitness, nutrition and wellness markets. Armstrong notably teamed up with Lionel Conacher, a former investment banker and senior executive in several public companies. Finally, Andy Murray partnered with Seedrs, an equity crowdfunding platform, and has done more than 30 deals since he first started in 2015. Murray invests mainly in tech startups related to health and wellbeing, nutrition, or even dogs. He reportedly works with a team of advisers to assess his investment opportunities. Athletes and tech startups are a great match and I expect a lot more of them to keep pouring into the space. The recent surge of athletes investing in tech has already led many others to jump right in and the recent successes will keep the momentum going. 
With tech spreading into consumer-facing areas where athletes have an unfair advantage (sports, lifestyle, nutrition, wellness, or entertainment), they can use their leverage to add value to some of the hottest tech verticals such as marketplaces, e-gaming, foodtech, media and streaming, medtech, cannabis, or sportstech. It’s worth noting that investing in tech also grew popular because of its societal dimension. Tech investments are often compelling to investors looking to have an impact in the world, and the idea of investing in tech companies fueled by smart young people building cool stuff to solve complicated world-scale problems surely sounds like a good pitch. On the other side, tech startups usually struggle to get funding in the very early stages of the company, namely the pre-seed and seed stages (often referred to as “the valley of death”). Athletes have the potential to be influential early-stage business angels and to help these companies reach later stages of financing with big premiums on their shares. In general, the Series A round is the tipping point where funding goes from angel investing to institutional investors (VCs or Corporate Ventures) and where the risk of investing decreases considerably since companies are a bit more mature. Consequently, it’s a lot harder for angel investors to join the rounds from Series A onwards because of the increased competition and ticket size. And if investing in early stage startups is riskier, investors have ways to mitigate the risk and maximize the upside. For instance, they can hedge their bets with multiple investments of smaller tickets, co-invest with other experienced business angels, and even be granted tax exemptions. Don’t let the incredible successes of tech companies and the hype of investing in startups fool you, however. Even though investing in tech companies can bring sizeable returns to investors, it remains complex and riskier than other traditional asset classes. In technology, the only constant is change, and companies have rapid obsolescence cycles. Companies often have to adjust along the way and pivot multiple times. It’s not even uncommon to see companies on top of their market go on to raise hundreds of millions of dollars only to end up bankrupt a few years later, especially in economic downturns (e.g. pets.com or eToys during the dot com bubble). On top of that, the greatest challenge of early stage investors is what we call the deal flow, or the incoming flow of startups screened by the investor. The best deals are usually in high demand and investors must be part of certain networks or have a significant track record to be granted a seat at the table. For instance, if it weren’t for Chris Sacca, Armstrong probably would never have had the opportunity to invest in Uber. Even with a good deal flow, the norm in the industry is that experienced VCs usually pick 10 out of 1000–2000 startups to hit a home run on 1 out of 10 investments. The other investments are more spread out, with 2–3 investments performing well and the rest being flat or underperforming. Not to mention that shares of startups are illiquid and that investing cycles usually last 5 to 7 years before investors can exit and make a profit. Any amateur investor rushing into tech investing is almost sure to lose money. Indeed, it takes time before investors get to see quality startups on their radar, and even then, it takes even more time and experience before investors can make sense of their deal flow to pick the winners. 
Athletes are notably often besieged by the wrong kind of companies, and it can be extremely confusing at first. This brings us back to athletes needing to surround themselves with a qualified team to make investment decisions, and it couldn’t be truer with tech startups. Finding good startup investments works somewhat like the law of large numbers: the greater the number of startups screened, the greater the chance of discovering good deals. In fact, very few athletes have done well with occasional tech investments, and that’s probably why athletes successful in tech opted to set up their own VC funds or to partner with one instead. Even the great investor Magic Johnson reportedly partnered with Detroit Ventures. Interestingly, in August 2018, one of the most renowned VCs in the world, Andreessen Horowitz, launched a $15 million Cultural Leadership Fund featuring Afro-American athletes (led by Kevin Durant) and entertainers to “invest in companies in the Andreessen Horowitz portfolio who are interested in partnering with the cultural leaders who invested in the fund”. The same summer, Intel and the NBA reportedly worked together on investing in startups in sports while the NFL Players Association launched One Team Collective, a technology and business accelerator for new businesses in the world of sports. These initiatives come in support of a SportsTech ecosystem that has been growing at a fast pace over the last few years with an increasing number of sports dedicated funds, incubators, or innovation hubs joining the space. The list includes, but is not restricted to: Hype Sports Innovation, Sapphire Sport, Global Sports Venture Studio, NYVC Sports, Stadia Ventures, Courtside VC, Bruin Sports Capital, Advantage Sports Tech Fund, Aser, Bitkraft, Shorai, Sports Investment Partners, Trust Esport Ventures, Full Stack Sport Ventures, Capital Sports Ventures, PodiumVC, Le Tremplin, leAD Sports, KICKUP Sports, Active Lab, GSIC Sport Thinkers, Score, SportTech Hub, TenKan-Ten, UEFA Startup Challenge, Wylab, Stakrn, SportUp, Platform A, Spin Accelerator, Dodgers Accelerator, Blue Star Accelerator, Black Lab Sports, Sport eXperience, The Pitch, Sloan Sports, ASTN, Sixers Innovation Lab, Chelsea Foundation, Arsenal Innovation Lab, and Barça Innovation Hub. Most recently, FC Barcelona announced a €120 million fund (Barça Ventures), and Seventure and La Caisse d’Epargne partnered up on an €80 million fund (Sport & Performance Capital) to invest in sports and nutrition related startups. I believe this is the beginning of a trend that is about to get much bigger. Athletes will increasingly look to take advantage of their social capital in the tech ecosystem, while other investors and startups will put in an effort to bring them into more deals. This will push traditional VC investors and sports-dedicated funds to offer more opportunities for athletes to invest alongside them and leverage their influence. Eventually, athletes will grow even more influential. It will be interesting to see how athletes embrace these new opportunities and how it affects the way they get through their careers financially. As athletes gradually get better financial counseling, I’m particularly looking forward to seeing the tech ecosystem allowing them to unleash all their investment potential.
https://medium.com/swlh/athletes-the-new-business-angels-banking-on-tech-c4170b2d9d20
['Etienne Boutan']
2019-09-13 19:06:27.691000+00:00
['Sports', 'Startup', 'Venture Capital', 'Technology', 'Finance']
Toxic traits that need addressing in any design team
Individualism I don’t know about you, but I certainly tried to do everything myself in the early years of my career. And because I tried to do everything myself, I experienced burn-outs more times than my then-fragile mental health should’ve allowed. When I wanted things done “correctly” (whatever that means), I had to do them myself. The mentality was flawed and evidently dangerous. By letting myself be more collaborative with my peers, I slowly got out of this bad habit of mine. I ask for help more often these days and make fewer assumptions along the way. In my experience, I’m not an exception in any way. Individualism is not uncommon in the Design industry. The mainstream public still believes the superstar-designer myth, which in turn affects how we see ourselves as professional designers. For instance, Jony Ive didn’t single-handedly create Apple’s most iconic designs. I know firsthand that Apple’s Design teams are massive, spanning many disciplines. Although I never met him, I see Jony as a great Design leader, and there’s no doubt he will go down in history as one of the most prominent Design figures; but no, he did not do it all alone. Even if we, the designers doing the work day in and day out, know Design can’t be done in a silo, many still internalise individualism (including myself). Our stubbornness for it is even more evident when we fly solo. By learning the difference between owning the work we do and isolating ourselves from our peers (within Design & beyond), we can learn how to let go, to delegate, to better collaborate, and, therefore, to develop productive and healthy work practices. While a flat structure promises autonomy, overvaluing those who can get things done on their own without needing supervision or guidance can lead to a lack of accountability in the team’s culture. Moreover, more junior members may feel like they’re being neglected, with no assistance or long-term career development. The flat structure should instead be about transparency and open communication, focusing on a shared mission and how everyone is working toward it; i.e. holding people accountable as a group rather than as individuals. Meanwhile, make sure that credit is given to all those who participate in each project, not just the leaders or the most public person. When building a new team, leaders ought to intentionally develop a culture where people bring problems to the group. One tangible tactic is using staff meetings not only as a place to report activities, but also as a place to solve problems that arise within the organisation.
https://uxdesign.cc/toxic-traits-that-need-addressing-in-any-design-team-388bc7a60685
[]
2020-12-20 07:01:11.168000+00:00
['Work', 'Design', 'Design Thinking', 'Technology', 'UX']
Could Real Dragons Ever Evolve?
Dragons are the single most ubiquitous and recognizable mythological creature ever dreamed up by the human consciousness. They crop up in hundreds of cultures around the world, from China and Hawaii to New Zealand and Iceland. These mighty, fantastical beasts hold a special place in our tales. From Smaug to Toothless, King Ghidorah to Mushu, Falkor to Fafnir, dragons entrance us with their power, graceful shapes and movement, and sometimes even wisdom. David E. Jones, an anthropologist, believes he has come up with an explanation for this in his book An Instinct for Dragons. Jones studied vervet monkey troops of the African savannah and realized that their primary predators are raptors (birds of prey), snakes and big cats. These would have been roughly the same for humanity’s earliest ancestors such as the australopithecines. He posits that this gave rise to a deep-seated, inherited fear of such animals which has stuck with us throughout our evolution, and made its way into our nightmares and, by extension, tales of monsters. Beowulf, St. George and the dragon, the story of the dragon’s pearl — stories of these chimeric beasts go back thousands of years, to the very dawn of written language, and likely back to the beginnings of spoken language as well. The ancient Chinese philosopher Wang Fu even wrote of dragons as having the claws of eagles, feet of tigers and necks of snakes. Dragons, as we imagine them, are generally reptilian in most of their features, although many possess hair, horns and even feathers. Another thing they usually have in common is large size. These factors are also seen in the “thunder lizards” that we know existed in our real world’s distant past: the dinosaurs. We know of hundreds of dinosaur species that evolved for flight like the microraptor, as well as archosaurs such as the pterosaurs. The largest of these seems to have been Quetzalcoatlus northropi with a wingspan that reached 36 feet, roughly matching the size of many dragons of myth. A common misconception is that birds evolved from these flying dinosaurs. However, paleontologists have known for decades now that our modern birds actually descend from terrestrial dinosaurs related to small theropods like the velociraptor. These creatures took to the trees to hunt and scavenge. Those with mutations that helped them survive falls — such as flaps of skin growing between fingers and arms, and proto-feather scales — lived longer and had more offspring, passing on these mutations until they became more prominent. Eventually, the forearms became wings, and the feathers grew larger, and the Aves class was born. In our fictions, there are numerous basic body plans for dragons. The three primary ones are: four legs with two separate wings that sprout from above the foreleg shoulders; an avian body plan with two rear legs and the front limbs as wings; and the elongated serpentine body with (or without) two or four legs. No larger, more complex land animals have ever evolved on Earth that exhibited more than four limbs. Indeed, all mammals, reptiles, amphibians and birds either have, or are directly descended from animals that had, four limbs. Fish are the one complex class of animal today that departs from this body plan, though all of the other classes (tetrapods) evolved from them. The most realistic possibility, therefore, would likely be a giant reptile evolving toward the body plan of a bird, as seen in Game of Thrones. Size becomes the most constraining factor, however. 
Thickly built, muscular dragons of substantial dimensions would not be able to generate enough lift with wings that were remotely proportional to their bodies. Our modern flying birds, and the flying reptiles of the past, only managed flight due to hollow bones and either very large wings and/or robust feathers. But what about some other very common characteristics of dragons: intelligence, extremely long lives, and the ability to “breathe fire”? In some modern large reptiles and birds we see intelligence comparable to that of some primates. Crocodiles and monitor lizards can be canny hunters, while crows consistently rank among some of the most intelligent problem-solving animals. We also find a number of reptiles that can live for over a century, such as tuatara lizards and giant tortoises. Lastly, we have the oft-described fire breath. A number of creatures eject poisons, such as spitting cobras. The archerfish shoots a tiny blast of water to knock insect prey down into the water. But shooting flame is another thing altogether. However, this may not be impossible for a giant reptile. An insect called the bombardier beetle combines hydrogen peroxide and hydroquinone and fires the mixture out from its abdomen in a stream of chemicals heated to the boiling point of water (100 degrees C). It may not be so far-fetched to imagine a similar mechanism evolving in a giant winged reptile: glands could exist in its gullet that secrete multiple chemicals that, when joined, produce a superheated reaction which is then expelled through the creature’s open mouth. Research biologist Brian J. Ford posits that such a chemical could easily be produced biologically. Animals in a state of ketosis naturally create acetone, which is highly flammable. Perhaps our dragon controls this production and emits a stream of acetone, which in turn passes by a specialized structure of teeth or scales that strike together to create a spark and ignite the acetone — thus generating a gout of flame that sears the beast’s prey. While the existence of any mythological beings should always be dealt with skeptically, it is not completely beyond the power of evolution to result in something that physically resembles and behaves in the way of legendary dragons. Such creatures could have existed in the distant past, some hundreds of millions of years ago, or might still evolve millions of years from now. If (or when) another mass extinction event takes place, and humans are knocked from our place at the top of the mountain, an opening might remain in which some reptilian or avian survivors advance over the ensuing millennia. Growing in size and strength, ultimately these creatures could reach the status of apex predator, ruling sky and earth. And, considering the growing capabilities of gene editing, we could one day even create dragons ourselves… Thank you for reading and sharing!
https://medium.com/predict/could-real-dragons-ever-evolve-4e347aed44a0
['A. S. Deller']
2020-12-28 13:12:51.459000+00:00
['Science', 'Biology', 'Fantasy', 'Evolution', 'Monsters']
Don’t let anyone, including Tim Denning, tell you that pursuit of money will cause you to give up…
Don’t let anyone, including Tim Denning, tell you that pursuit of money will cause you to give up writing. I love most of his work, but there are times when his words come from another planet – as in a planet where it rains money. I’ve been a happy freelance writer for 30+ years and money is my principal motivator. Have I written for free? You betcha! I have a brain that communicates best in written language and a soul that demands I do so. Mr. Denning is also condescending when he says $1,000–2000 isn’t enough to quit your job. For thousands of us it is the exact right amount we need for life changes. Read his story about News Break if you want to add to his bank account. Or here’s mine:
https://medium.com/everything-shortform/dont-let-anyone-including-tim-denning-tell-you-that-pursuit-of-money-will-cause-you-to-give-up-2f35233d40d6
['Melinda Crow']
2020-12-15 23:49:22.927000+00:00
['Writing', 'Money', 'Publishing', 'Ideas', 'Life']
First Steps
Pixabay First Steps Microfiction I turned my back for a second. That’s all. One second. When I spun around, holding the blanket, Josh wasn’t there. Lips wobbling, I reached my shaking arms towards the cot. Where’d my baby gone? It didn’t make sense. The sides were too high for him to climb and there was nowhere to hide. I couldn’t drag my eyes away from the empty bed. His blanket fell from my hands, blue elephants mocking me, dancing before my eyes. “Josh? Where are you?” Even I could hear the note of desperation in my voice. I gazed until my eyes watered. Nausea rose in my stomach and my hands trembled. But just before I screamed, the air flickered, like a heat haze, distracting me. Then came a ‘pop’, and there he lay, giggling. My mouth dropped open. Invisibility? And early manifestation. Looked like he’d got the genes. I thought he would, but I’d had my money on X-ray vision….
https://medium.com/stevieadlerteachandblog/microfiction-first-steps-eee7da48cc30
['Elise Edmonds']
2018-07-21 06:26:44.155000+00:00
['Fiction', 'Microfiction', 'Fantasy', 'Writing', 'Superheroes']
95% of users rely on reviews. And now what?!
This report shows that 92% of consumers trust recommendations from the people they know. Bringing the online to the offline: Batiste Dry Shampoo with star ratings on the lid. Let's be clear here… This shampoo is not for me 🙄 Don't know if you've checked my profile photo, but I have a very minimalistic hairstyle. This example is only to show how the offline world merging with the online one can influence a decision. Do you know any other dry shampoo with a score of 4.6 out of 5? If you are in a shop, are you going online to double-check if other brands are better rated? This quote from the book "The Brain", by neuroscientist David Eagleman, can help us understand this behaviour a bit better. We are a splendidly social species. We need each other. We need each other. And that explains a lot… Online world Baymard Institute did some amazing research and found that 95% of users relied on reviews to learn more about products. It's not just up to businesses to tell their users/customers how great their product is, it's up to other humans as well. As a matter of fact, a simple online review might be more powerful than a huge amount of copy trying to convince you that a product is good. Read about "The Coolest Cooler". It's about one of the biggest (if not the biggest) fails in Kickstarter history. It started off as the most amazing cooler ever, raising the astonishing amount of 13 MILLION DOLLARS. However, they failed after infinite bad reviews and orders not fulfilled… 💸 👉 Read about the coolest cooler fail, here. Reviews are fundamental for massive businesses Some new business models are only working because of user reviews. Airbnb: Would you stay in a stranger's house solely based on the stranger's description of his house? No, thank you. Trip Advisor: Who's actually advising? Users! eBay: Would you buy a product from a seller with zero reviews and no ratings? Trustpilot: With 45 million reviews, Trustpilot released a very insightful report stating that: "Trust and reputation must be earned." Yelp: Users have written more than 155 million reviews on the platform. Michael Luca is a professor at Harvard Business School and his study showed that: "Restaurants which increase their ranking on the platform by one star raise revenue by 5–9%". Amazon: Can you imagine shopping at Amazon without reading the reviews?
https://uxdesign.cc/95-of-users-rely-on-reviews-and-now-what-dd4afc0b6422
['Flavio Lamenza']
2020-04-21 08:53:23.976000+00:00
['Design Process', 'UX Design', 'User Experience Design', 'Marketing', 'User Experience']
Apple’s New M1 Chip is a Machine Learning Beast
I watched the keynote and saw the graphs, the battery life, the instant wake. And they got me. I started to think, how could one of these new M1-powered MacBooks make their way into my life? Of course, I didn’t need one, but I kept wondering what story I could tell myself to justify purchasing another computer. Then I had it. My 16-inch MacBook Pro is too heavy to carry around all the time. Yeah, that’ll do. This 2.0 kg aluminium powerhouse is too much to go gallivanting around with. Wait… 2.0 kg, as in, 4.4 pounds? That’s it? Yes. Wow. It’s not even that heavy. C’mon now… let’s not let the truth get in the way of a good story. I had it. My reason for placing an order on a shiny new M1 MacBook (or two). My 16-inch MacBook is too heavy to lug around to cafes and write code, words, edit videos and check emails sporadically. And Apple seems to think their new M1 chip is 11x, 15x, 12x, 3x faster on a bunch of different things. Thought-provoking numbers, but I’ve never measured any of these in the past. All I care about is: can I do what I need to do, fast. The last word of the previous sentence is the most important. I’ve become conditioned. Speed is a part of me now. Ever since the transition from hard drives to solid-state drives. And I’m not going back. I bought the 16-inch in February 2020. I’d just completed a large project and was flush with cash, so I decided to future-proof my workstation. Since I edit videos daily and hate lag, I opted for the biggest dawg I could buy and basically maxed everything except for the storage (see the specs below). Thankfully I’ve still got a friend at Apple who was able to apply their employee discount to the beast (shout out to Joey). Anyway, we’ve discussed my primary criterion: speed. Let’s consider the others: Speed. If it’s not fast, get lost. Cost. A big factor but I didn’t mind paying for the higher spec machine nor do I mind paying for a quality computer. It’s my primary tool. I use it to make art, I use it to make money, I use it to learn, I use it to communicate to the world. Portability. I don’t like sitting in an office all day. Can I take this thing to a cafe or library for a few hours without searching for a power outlet? Consider portability a combination of battery life and weight. Why test/compare them? Why not? But really, I’m a nerd. And an Apple fan. Plus, I wanted to see how my big-dawg-almost-top-of-the-line 16-inch MacBook Pro fared against the new M1 chip-powered MacBooks. Plus, I can’t remember being this excited for a new computing device since the original iPhone. Other reasons include: carrying around a lighter laptop and tax benefits (if I buy another machine before the end of the year, I can claim it on tax). Mac specs Whenever I buy a new machine, I usually upgrade the RAM and the storage at least a step or two from baseline. 512GB storage and 16GB RAM seems to be the minimum for me these days (seriously, who is running a 128GB MacBook effectively?). So for the M1 MacBooks, I upgraded both of their RAM from 8GB to 16GB and for the 13-inch Pro, I upgraded from 256GB to 512GB storage. The 16-inch MacBook is my current machine, which I’d never had a problem with until running the tests below. Specifications of each of the Macs tested. Note: there are cheaper configurations of each but I typically upgrade the RAM and storage on each of my machines. *Price (actual) is the actual price I paid for each model. 
Note for the MacBook Pro 16-inch, I actually paid ~$5,500AUD since I have a friend who works at Apple and applied his employee discount (thank you Joey). **Price (baseline) is the price you’d pay if you upgraded all processing components (e.g. 8GB -> 16GB RAM on the M1 models and 2.3GHz -> 2.4GHz on the Intel model) except storage (since storage is usually the most expensive upgrade). The tests Apple’s graphs were impressive. And the GeekBench scores were even more impressive. But these are just numbers on a page to me. I wanted to see how these machines performed doing what I’d actually do day-to-day: Writing words. I assume they all perform well at this. Browsing the web. Same as above. Editing videos. One of the primary uses I was interested in and one of the main reasons I bought the 16-inch MacBook Pro with dedicated GPU. Writing code. A text editor doesn’t require much but Xcode is getting pretty hefty these days. Training machine learning models. I write a lot of machine learning code. I don’t expect to be able to train state-of-the-art models on a laptop but at least being able to tweak things/experiment would be nice. Reflecting on the above, I devised three tests: Video exporting with Final Cut Pro. I made a pretty hefty video earlier in the year (2020 Machine Learning Roadmap), it’s 2 hours, 37 minutes+ long. So I figured it’ll be cool to see how long each machine takes to export it. Machine Learning Model training with CreateML. Apple’s black-box machine learning model creation app. I don’t have any large Xcode projects handy, but I decided to see how the CreateML app handles training machine learning models on the new silicon. Native TensorFlow code using tensorflow_macos. The test I was most excited for. Apple and TensorFlow published a blog post saying the new TensorFlow for macOS fork sped up model training dramatically on the new M1 chip. Are these claims true? Why not test more extensively? These are enough for me. I’ve got other sh*t to do. Alright, time for the results. The best results for each experiment have been highlighted in bold. Experiment 1: Final Cut Pro video export For this one, all machines were given ample time to pre-render the raw footage. So when the export button got clicked, they all should’ve been relatively on the same page. Experiment details: Video length: 2 hours, 37 minutes+ Export file size: 26.6GB (quoted), ~6.5GB (actual) No surprise here, the 16-inch MacBook Pro exported in the fastest time. Most likely because of the dedicated 8GB GPU or 64GB of RAM. However, it seems using the dedicated GPU came at the cost of battery life drain and fan speed (in the video you can hear the fans going off like a jet). During the video export, the M1-powered MacBook Air and MacBook Pro 13-inch remained completely silent (the MacBook Air had no choice, it doesn’t have a fan, but the MacBook Pro 13-inch’s fan never turned on). Experiment 2: CreateML machine learning model training I’ve never actually used a machine learning model trained by CreateML. However, I decided to see how one of Apple’s custom apps would leverage their new silicon. For this test, each MacBook was set up with the following CreateML settings:
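For context on the third experiment, the native TensorFlow test essentially boils down to timing how long model training takes on each machine. Below is a minimal sketch of that kind of benchmark in plain tf.keras; it is not the author's actual script, and the dataset, model size, and epoch count are stand-ins.

import time
import tensorflow as tf

# Small standard dataset as a stand-in for the article's benchmark data.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train / 255.0  # scale pixel values to [0, 1]

# A deliberately small CNN, just enough to exercise the hardware.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Wall-clock time for a few epochs is the number being compared across machines.
start = time.time()
model.fit(x_train, y_train, epochs=3, batch_size=64)
print(f"Training took {time.time() - start:.1f} seconds")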
https://towardsdatascience.com/apples-new-m1-chip-is-a-machine-learning-beast-70ca8bfa6203
['Daniel Bourke']
2020-12-24 23:09:23.067000+00:00
['Apple', 'M1', 'Machine Learning', 'Apple Silicon', 'Editors Pick']
Flee your country by using Python to find the best job offers at LinkedIn
Note that the parameters for the request are like this. There’s no problem if you remove all the other parameters, by the way. If you keep scrolling down, you’ll notice this: new jobs load as the start parameter increases in increments of 25. 👩‍💻 Coding time Let’s first create our CSV file. Let’s make some requests to the LinkedIn address we have just discovered. https://www.linkedin.com/jobs/search?keywords=jobName&location=locationName&start=25 We will be getting the first 100 jobs to filter, as we said before, increasing the start parameter in increments of 25. If you try to filter more than a hundred jobs, you will be getting a lot of outdated data. We are making several requests here to get 100 job results from the LinkedIn website. As we saw at the beginning of this article by looking at the HTML, all the cards on the left are ‘a’ elements whose class is ‘result-card__full-card-link’. All these elements have a link, which you can see in the ‘href’ property. Also, note that the link is actually the job page you are seeing on the right side of the page. So we first find the div that contains all the job cards we are looking for. soup.find(class_='jobs-search__results-list') Then we find all 'a' elements whose class is 'result-card__full-card-link'. They are actually our job cards. .findAll(‘a’, { ‘class’ : ‘result-card__full-card-link’ }) And finally, we append the results of every iteration to an array called joblist. This way our variable joblist holds a list of a hundred 'a' elements. 😱 The next step is to iterate over those jobs to get the name and the link for the job's page. After getting the link, we'll make get requests to fetch the job’s description which, as pointed out before, is as simple as getting a div whose class is show-more-less-html__markup. To find the keyword visa in the description we're going to use regular expressions. description.findAll(string=re.compile(r"\bvisa\b",re.I)) So in the code above, the function re.compile receives the regular expression "\bvisa\b" as a parameter as well as the constant re.I, which makes the match case-insensitive. You can use other keywords like 'relocation' or 'sponsorship'. Let us know in the comments whatever works best for you. Now run your code and start practicing for code interviews. 🤓
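Putting those snippets together, a condensed sketch of the whole flow could look like the following; the keyword, location, and output file name are placeholders of mine, while the URL pattern and CSS classes are the ones quoted above.

import csv
import re
import requests
from bs4 import BeautifulSoup

BASE = "https://www.linkedin.com/jobs/search?keywords={kw}&location={loc}&start={start}"
visa = re.compile(r"\bvisa\b", re.I)  # case-insensitive whole-word match

joblist = []
for start in range(0, 100, 25):  # first 100 results, 25 per page
    page = requests.get(BASE.format(kw="python developer", loc="Berlin", start=start))
    soup = BeautifulSoup(page.text, "html.parser")
    results = soup.find(class_="jobs-search__results-list")
    joblist += results.findAll("a", {"class": "result-card__full-card-link"})

with open("jobs.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "link"])
    for job in joblist:
        detail = requests.get(job["href"])  # each card links to the full job page
        description = BeautifulSoup(detail.text, "html.parser").find(
            class_="show-more-less-html__markup")
        # keep only jobs whose description mentions the keyword
        if description and description.findAll(string=visa):
            writer.writerow([job.get_text(strip=True), job["href"]])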
https://medium.com/analytics-vidhya/flee-your-country-by-using-python-to-find-the-best-job-offers-at-linkedin-43ae2e506d99
['Matheus V De Sousa']
2020-08-15 16:14:11.175000+00:00
['Web Scraping', 'Immigration', 'Python', 'Jobs', 'LinkedIn']
Localization vs Internationalization
Internationalization is the process of making your software easy to adapt to different languages or regions. Internationalization is often abbreviated as i18n. On a website, i18n can mean using Babel-style projects (e.g. flask-babel in Python, gettext in PHP, FormatJS in JavaScript, …) Localization is the process of adapting your software to a specific language or region. Localization is often abbreviated as l10n. l10n includes translation, adjusting currencies ($, €, £, ¥, …), adapting systems of measurement (metric vs imperial), and adjusting time zones. A language change might make a lot of other changes necessary. For example, it may well be that the layout no longer works. It might mean that you need to collect more data about the user, because the new language has different conventions for addressing people. Localization is not defined by the language You might be tempted to use the language as an identifier for the locale. That doesn’t work: English: The UK has calendars that start on Monday, but the USA has calendars starting on Sunday. The UK has Pounds, the US has Dollars. German: Switzerland has the Swiss franc, Germany has the Euro. Localization is not defined by the country This one is far closer, but there are many more detailed rules about time zones. For example, the US has four time zones and Russia has 11 time zones. Also, the language is not defined by the country. French is the official language of Quebec, but the rest of Canada speaks English.
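To make the split concrete, here is a small Python sketch: the gettext call is the i18n side (the code is written so its strings can be translated), while the Babel formatting calls are the l10n side (adapting currency and dates to a locale). The translation domain and locale directory are placeholders, not part of the original article.

import gettext
from datetime import datetime
from babel.dates import format_datetime
from babel.numbers import format_currency

# i18n: strings are looked up through a translation catalog instead of hard-coded.
# "myapp" and "locale/" are placeholder domain/directory names.
translation = gettext.translation("myapp", localedir="locale",
                                  languages=["de_DE"], fallback=True)
_ = translation.gettext
print(_("Welcome back!"))  # rendered in German if a de_DE catalog exists

# l10n: the same amount and timestamp, adapted to each locale.
now = datetime.now()
print(format_currency(9.99, "USD", locale="en_US"))  # $9.99
print(format_currency(9.99, "EUR", locale="de_DE"))  # 9,99 €
print(format_datetime(now, locale="en_GB"))          # day-first British date style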
https://medium.com/plain-and-simple/localization-vs-internationalization-fd2561dfdbcb
['Martin Thoma']
2020-09-26 17:24:15.132000+00:00
['Software Engineering', 'Software Development', 'Business', 'English Language', 'Terminology']
Automatic Tableau Data Refreshing Through Google Cloud and Sheets
Automatic Tableau Data Refreshing Through Google Cloud and Sheets A Tableau Public data pipeline automation tutorial via Python, Google Sheets, and Google Cloud This post walks through the steps needed to build an automated data refresh pipeline for Tableau Public through Python, Google Sheets, and Google Cloud. In a previous post, Auto Refreshing Tableau Public, I explained how to connect a Tableau Public workbook to Google Sheets to take advantage of the daily Tableau Public-Google Sheet refresh. I described how to schedule a launch daemon locally to update the data contained within the Google Sheet, thereby refreshing the data in the connected Tableau Public workbook. While this setup works well for relatively infrequent refresh cycles, it does not work well for data that needs to be updated daily (or more frequently). The setup detailed below solves this problem, creating a truly automated data refresh procedure. Why should you automate your refresh pipeline? While I was pretty pleased with my original data refresh approach to update the Premier League table every week, I’ve since built Tableau dashboards that needed to be updated more frequently. For my MLB Batting Average Projection Tool, I needed to refresh the data on a daily basis for it to be relevant. Opening my personal computer every single day of the MLB season wasn’t a great option, so I started looking around for a reliable task scheduling approach. I ultimately settled on the workflow below: Schedule an instance to run using Google Cloud Scheduler Kick off a cron job within the instance to run my code to pull the updated data and load it to Google Sheets (a sketch of such a refresh script appears just below) Schedule the instance to stop running using Google Cloud Scheduler (to save money since it only really needs to be on for five minutes a day) I considered using Google Cloud Scheduler to execute the script directly, instead of using cron in the instance, but I like having the instance to ssh into and I was already familiar with using a virtual instance, so it was the path of least resistance. I also considered using Airflow, which I use at work, but it would have required a similar scheduling setup and an extra layer of deployment with the web server. However, I am in the process of transitioning this pipeline to Airflow, so I can more easily schedule new jobs in the future, and will update with a follow-on post once complete. Getting started with Google Cloud If you’re setting up Google Cloud for the first time, I’d recommend following this guide. First-time users on Google Cloud get a $300 credit for the first year, though you must enable billing and fill in credit card information to use it. You can also use Google Cloud’s free tier, which has usage limits. The free tier limits your available memory and processing power, but you should certainly have enough to perform basic operations. I use the second smallest tier instance size for this script, but it’s very cheap since I only run this for 5 minutes a day. Creating an instance I use the same instance each time so I can store my code for repeated use. There is probably a better way to do it, but this was the most straightforward way for me. To move code between my local machine and the instance, I use GitHub. Obviously GitHub makes version control easier, but it’s also a much simpler way to move code than scp-ing (secure copying) from my local machine to the instance each time I need to update the script. 
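As a sketch of what the cron-driven refresh script in that workflow might look like, here is a minimal pygsheets example; the data source URL, sheet name, file paths, and crontab line are hypothetical placeholders, not the author's actual setup.

# refresh_sheet.py, the kind of script a daily crontab entry could run, e.g.:
#   0 9 * * * python3 /home/me/refresh_sheet.py
import pandas as pd
import pygsheets

def main():
    # Pull fresh data; any pandas-readable source works here (placeholder URL).
    df = pd.read_csv("https://example.com/latest_stats.csv")

    # Authorize with the client_secret.json copied to the instance, then
    # overwrite the first worksheet of the sheet Tableau Public reads from.
    gc = pygsheets.authorize(client_secret="client_secret.json")
    wks = gc.open("tableau-data-source")[0]
    wks.clear()
    wks.set_dataframe(df, start=(1, 1))

if __name__ == "__main__":
    main()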
Creating a project To get started, you’ll first need to create a new project, since Google Cloud organizes everything within projects. To create a new project, go to the project page and click “create project”. You can name it whatever you want, but make sure it’s something easy to type in case you end up referencing it from command line. After you’ve set up your project, you’ll probably want to enable the Compute Engine API. Go to your project’s console (the home page for your project — to get there click Google Cloud Platform in the upper left) and click on APIs. At the top of the next screen, click “Enable APIs and Services”, then search for the Compute Engine API and add it. Launching an instance After enabling the API, you can navigate back to console and click on the “Go to Compute Engine” link (if it doesn’t appear, click on the sidebar icon in the upper left, scroll down and click on Compute Engine). When you land in the Compute Engine, you’ll have the option to create an instance. Click “Create” to create your instance. You can give your instance a name (again, preferably an easy one to type), then select a region and availability zone. These are the Google Cloud server locations where you can host your virtual machine. The typical guidance is to choose a region close to you, but I don’t think it matters that much. Your zone selection isn’t particularly important either. However, when you go to launch your instance, it will launch in that region, zone combination by default. You can move it across zones in case your default zone is down (which happens occasionally), but I’ve never needed this option. After selecting your region and zone, you’ll select your instance type. I use the series N1, machine-type g1-small. There are a whole bunch of options based on your computing needs. The g1-small has served me well for this and other efforts so I’ve kept it! From there, you’ll want to click “Allow full access to Cloud APIs” under Access Scopes. This will ensure your instance can be scheduled to start and stop. Lastly, you’ll want to allow HTTP and HTTPS traffic. You’ll need them to run a script that gets data from somewhere, then stores it in Google Sheets. You can change these options later, but it’s easier to set them up from the start. Once your instance is set up, you can launch it by clicking on the instance, then hitting start! Setting up your instance To connect to your instance, you can either open the connection in a new window, follow one of the other options to open it in browser, use another ssh client, or connect through gcloud (the Google Cloud command line interface). I use a mix of Console and gcloud to work with Google Cloud, but you can comfortably use either. However, when connecting to instances, I prefer gcloud so I can interact with them more natively. To install gcloud, follow the instructions here. To connect to your newly created instance through gcloud, you can either type out the command in your local terminal or copy the command from the dropdown and paste it into your local terminal. If you aren’t sure if it worked, you’ll know you’re in your instance if you see that your terminal lists your location as <your google username>@<your instance name> (for me that’s irarickman@instance-1). Congrats, you’re now in your virtual machine! Installing packages and tools For my specific use case, I needed to set up a few things to get it ready to run my MLB data refresh script. 
Your own setup may differ depending on your needs, but I needed the following:

1. Python packages — your VM should come with Python. If it doesn't, follow the second step below.
- Run sudo apt update
- (If you don't have Python) Run sudo apt install python3
- Run sudo apt install python3-pip to install the latest pip
- Install any packages you need via pip3. For me, this was mainly pybaseball, pygsheets, and a few smaller ones.

2. Install Git and clone your code repo — if you don't have Git installed already, follow the steps below. This assumes you want to pull code from GitHub or GitLab. If not, skip this step!
- Run sudo apt install git
- Clone your repo as you normally would! I used HTTPS auth, which may prompt you for your username and password. If you use SSH, you'll need to go through the normal ssh-keygen setup.

3. Create a Google Sheets API app and connect to it — to avoid recreating another tutorial, I recommend following Erik Rood's excellent Google Sheets API setup. After you've set up your credentials, you will want to secure copy them into your instance for use. To secure copy, open a new terminal tab so you're back in your local directory and run gcloud compute scp <file_path>/client_secret.json <googleusername>@<instance-name>:<~/file path>. The first time you scp, you'll be asked to create a passphrase; if you just press enter twice, it will not create one. If you do enter a passphrase, you'll need to enter it each time you scp, so skipping it can be very helpful when you try to scp again months from now and can't remember what it was. If you run into any errors connecting to the instance, you may need to specify the project and zone (and remember the instance also needs to be running)! For more guidance, I recommend checking out the GCP documentation.

Once your creds are loaded, you can authenticate your app. This is a one-time authentication. Your browser may try to warn you that the application is unsafe; you can hit advanced and proceed anyhow. To set up authentication, you can either just try running your script (making sure you set your authorization file location appropriately) or run python from the command line in the location where you moved your credentials and type:

import pygsheets
gc = pygsheets.authorize()

You'll be directed to complete the authentication flow by copying a url into your browser. Follow the ensuing instructions, paste the key into the command line, and you should be all set! You can see how my code uses Google Sheets here.
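In case it helps, here is a rough sketch of what the Sheets-loading step of a refresh script can look like with pygsheets and pandas. The credential path, spreadsheet title, and data below are placeholders, not the actual code from my project:

import pandas as pd
import pygsheets

# Authorize with the OAuth client secret that was copied onto the VM.
# The path is a placeholder - point it at wherever you put client_secret.json.
gc = pygsheets.authorize(client_secret="/home/<google username>/client_secret.json")

# Stand-in data; in practice this would come from pybaseball or another source.
df = pd.DataFrame({"player": ["A", "B"], "batting_avg": [0.301, 0.276]})

sh = gc.open("mlb-projections")      # hypothetical spreadsheet title
wks = sh[0]                          # first worksheet in the spreadsheet
wks.clear()                          # drop the stale data
wks.set_dataframe(df, start=(1, 1))  # write the refreshed table starting at A1

Tableau Public then picks up the new values on its next daily sync with the connected sheet.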
Scheduling your instance

This is the part that allows you to start and stop your instance on a regular schedule. To set up the full workflow, you'll need to create each of the following:

Pub/Sub topic — a message that will carry the notification to kick off an event.
Cloud Function — a function to actually perform an event.
Cloud Scheduler task — a scheduled command to kick off the workflow.

Setting up the Pub/Sub topic

To start, navigate to Pub/Sub and click on "Create Topic". You'll want to give it an easy name to track, such as "start-instance".

Setting up the Cloud Function

Next, hop on over to your Cloud Functions and click "Create Function", then follow the steps below:

1. Give your function a name, probably something like "startInstance".
2. Pick your region (again, you probably want to keep it in the same region as your instance).
3. Select Pub/Sub as your trigger. This is what will kick off your function. The Pub/Sub topic is really just delivering a message to your function to let it know it needs to start; in this case, it also delivers the zone and instance to start. Choose the "start-instance" Pub/Sub topic in the drop-down.
4. Choose whether to "retry on failure". Depending on the frequency and structure of your task, you may or may not need to retry. I do not for mine.
5. Hit "Next" and you'll arrive at a code editor. In the "Entry Point" input field, enter the name of the function (e.g., startInstance).
6. In the index.js editor, erase the existing code and enter the code below. Be sure to replace your function name where it says "exports.<enter function name e.g., startInstance>" on lines 33 and 77.

This code can also be found in Google's tutorial repo; however, I made a few small changes in lines 38–39, 82–83, and 120–122. The script provided by Google calls for a label to be passed in the scheduled task. I don't label my Google Cloud resources, so I removed the label component from the search. The version below can be pasted into the index.js editor for both the start and stop functions — just remember to change the stop function's name. To be clear, the start and stop code don't need to live in separate files; for convenience, all of the code is included below.

// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// [START functions_start_instance_pubsub]
// [START functions_stop_instance_pubsub]
const Compute = require('@google-cloud/compute');
const compute = new Compute();
// [END functions_stop_instance_pubsub]

/**
 * Starts Compute Engine instances.
 *
 * Expects a PubSub message with JSON-formatted event data containing the
 * following attributes:
 *   zone - the GCP zone the instances are located in.
 *   label - the label of instances to start.
 *
 * @param {!object} event Cloud Function PubSub message event.
 * @param {!object} callback Cloud Function PubSub callback indicating
 *   completion.
 */
exports.<enter start function name> = async (event, context, callback) => {
  try {
    const payload = _validatePayload(
      JSON.parse(Buffer.from(event.data, 'base64').toString())
    );
    // const options = {filter: `labels.${payload.label}`};
    const [vms] = await compute.getVMs();
    await Promise.all(
      vms.map(async (instance) => {
        if (payload.zone === instance.zone.id) {
          const [operation] = await compute
            .zone(payload.zone)
            .vm(instance.name)
            .start();

          // Operation pending
          return operation.promise();
        }
      })
    );

    // Operation complete. Instance successfully started.
    const message = `Successfully started instance(s)`;
    console.log(message);
    callback(null, message);
  } catch (err) {
    console.log(err);
    callback(err);
  }
};
// [END functions_start_instance_pubsub]

// [START functions_stop_instance_pubsub]
/**
 * Stops Compute Engine instances.
 *
 * Expects a PubSub message with JSON-formatted event data containing the
 * following attributes:
 *   zone - the GCP zone the instances are located in.
 *   label - the label of instances to stop.
 *
 * @param {!object} event Cloud Function PubSub message event.
 * @param {!object} callback Cloud Function PubSub callback indicating
 *   completion.
 */
exports.<enter stop function name> = async (event, context, callback) => {
  try {
    const payload = _validatePayload(
      JSON.parse(Buffer.from(event.data, 'base64').toString())
    );
    // const options = {filter: `labels.${payload.label}`};
    const [vms] = await compute.getVMs();
    await Promise.all(
      vms.map(async (instance) => {
        if (payload.zone === instance.zone.id) {
          const [operation] = await compute
            .zone(payload.zone)
            .vm(instance.name)
            .stop();

          // Operation pending
          return operation.promise();
        } else {
          return Promise.resolve();
        }
      })
    );

    // Operation complete. Instance successfully stopped.
    const message = `Successfully stopped instance(s)`;
    console.log(message);
    callback(null, message);
  } catch (err) {
    console.log(err);
    callback(err);
  }
};
// [START functions_start_instance_pubsub]

/**
 * Validates that a request payload contains the expected fields.
 *
 * @param {!object} payload the request payload to validate.
 * @return {!object} the payload object.
 */
const _validatePayload = (payload) => {
  if (!payload.zone) {
    throw new Error(`Attribute 'zone' missing from payload`);
  }
  // else if (!payload.label) {
  //   throw new Error(`Attribute 'label' missing from payload`);
  // }
  return payload;
};
// [END functions_start_instance_pubsub]
// [END functions_stop_instance_pubsub]

In the package.json editor, erase the existing code in the editor and enter the following:

Click Deploy and your function should be set up! Note: in the step below, we'll pass the zone and instance name from the scheduler, which will be delivered via Pub/Sub to the function so it knows what to start.

Setting up your Cloud Scheduler task

Finally, go to the Cloud Scheduler, hit "Create", then follow the steps below:

1. Select a region for your job (probably the same as your instance).
2. On the next page, give your job a name and description. I use "start-instance" for the one that starts it and "stop-instance" for the one that stops it!
3. Specify the schedule. You'll need to use the unix-cron format. The nice thing about cron scheduling is that it's flexible enough to schedule for every 5 minutes or the third day of every month at midnight. For more info, check out this help page.
4. Select your timezone. Be careful when doing so. Later on we'll discuss setting cron jobs in your instance; these default to UTC, so if you decide not to change your cron timezone, you'll want to make sure the schedules are aligned. I like using UTC for both as it is not affected by daylight savings.
5. Select Pub/Sub as your target.
6. Enter your topic name — it should be the name you used in the Pub/Sub step above.
7. In the payload section, you'll tell your task what zone and instance to start. Paste in and edit the code below:

{"zone":"<zone>","instance":"<instance-name>"}

Setting up the stop instance workflow

The workflow above is great for starting your instance, but the whole point of this process is to start, then stop the instance. To set up the stopping workflow, follow the same steps, just change the names to stop and double check that you fill in the stop function name in the Cloud Function script. Remember to set the time intervals between starting and stopping appropriately so you stop the script after it's been started (and vice versa).

Scheduling your Python script

Once you've set up your instance to start and stop, you'll want to set up your script to run via crontab in the instance. This process is fortunately much more straightforward. Start up your instance and ssh in. Once in your instance, type crontab -e. You'll be asked to choose your editor (I prefer nano), then you'll be taken to the crontab file. To read more about this file, check out crontab.guru. There you can also find a helpful editor for testing crontab timing.

Once in your crontab file, you can schedule your script. Again, be mindful of timing and time zones! The goal is to schedule your crontab to run while your instance is running. Your crontab will be running on UTC by default.
You’ll therefore want to take into account the appropriate UTC time to align with your instance’s start/stop time. Once you find the right time, enter a command similar to the one below to schedule your script: 0 10 * * * python3 /home/<google username>/projections_code/update_batting_avg-gcp.py If you have multiple python installations (e.g., python 2.7, anaconda, etc.) you will need to specify exactly which python executable to use. Similarly, you will likely want to adjust your path based on where your file is located. Lastly, I recommend testing your scheduler and cron times to make sure they’re in alignment. I tested mine by setting the timelines to run a few minutes later, then adjusting the actual scripts once I knew it worked. While it was a good amount of work up front, it’s certainly saved me some time since and makes for a fun Tableau dashboard. I hope you found this guide informative — please feel free to reach out with feedback!
https://towardsdatascience.com/automatic-tableau-data-refreshing-through-google-cloud-and-sheets-13aeb3962fd8
['Ira Rickman']
2020-08-11 16:20:09.572000+00:00
['Google Cloud', 'Tableau', 'Python', 'Data Automation', 'Data Pipeline']
What you need to know before starting your crowdfunding campaign
It’s the year of crowdfunding for journalism: Recent examples like The Correspondent or Tortoise show how it has increasingly become a tool for journalists and newsrooms to finance their ventures. Yet, orchestrating a successful campaign seems like a complex and demanding task to many. What needs to be considered before taking the leap? Through our programmes like the News Impact Academy and the Engaged Journalism Accelerator, we got to talk with four news innovators who have already successfully executed their own crowdfunding campaigns. Sophie Lacroix Guignard helped heidi.news to raise 120.000 USD before the media’s sale started. Sean Dagan Wood and his team at Positive News ran the #OwnTheMedia campaign, becoming the first global media co-operative to be established through crowdfunding. Krautreporter’s editor Sebastian Esser and his colleagues launched their advertisement free outlet with the help of 15.000 supporters. Clara Jiménez Cruz is the co-founder of Maldita.es, an independent, community-funded non-profit organisation that fights disinformation. Here is their advice for everyone looking to start their own crowdfunding campaign. 1. Make sure crowdfunding is the method for you Crowdfunding is a tricky and complex endeavour and not a solution for every newsroom. Consider alternatives to crowdfunding first to make sure that this method really fits your needs, plans and current situation. Sean: “It’s always worth looking at all the options for how to raise finance, from equity to grants to crowdfunding and everything in between, and ensure that crowdfunding makes the most sense for your particular circumstances. But if you’re creating independent journalism, crowdfunding could be well suited because you can appeal to the fact that raising money from the community fits with your values and protects your credibility and accountability.” Clara: “If you’re thinking of crowdfunding each year, I would say you should try a membership or subscription model since you’re aspiring to get funded by a community you need to take care of.” Sebastian: “Remember that one-off crowdfunding also has disadvantages: It’s a funding method, an investment by your community in you and your work. It is not a business model. Once you’ve spent all the money, it’ll be gone. So what’s your plan for after you’ve spent the money? How can you turn your journalism into a sustainable way of earning regular income? How can your crowdfunding become an investment into something sustainable instead of a straw fire? One fairly new path that is becoming more and more common, especially in Europe: membership. Turn your fans into monthly paying members and use the crowdfunding as a kick-off campaign to build something lasting and sustainable. You can use membership platforms like Patreon, Memberful or my company, Steady, which is made for independent publishers in Europe.” 2. You’ll need to invest first Most likely, you consider crowdfunding to generate more funding. The hard truth is though: Before you can start collecting money, you will need to invest both financially and time-wise into your campaign to get the help you need and ensure your project’s success. maldita.es Clara: “Hire someone to take care of the overall crowdfunding. Trust me, it’s worth it. 
You have to put a huge effort and time into it; you get exhausted and you’re probably not doing a good enough job.” Sean: “Identify what you’re good at (which might be creating content, for example) and what you’re not good at, but which is needed to pull off an effective campaign. Find a way to bring in or pay for that expertise. The return on your investment will be well worth it. A successful campaign is dependent upon continued momentum and needs a lot of project management expertise and resource, so be cautious about spreading yourself or your team too thin. Otherwise, you won’t be able to generate the necessary ongoing engagement to convert people to support. We are a very small team so we hired a full-time campaign manager on a commission to bolster our team in both the planning and execution stages of the campaign, and worked with a couple of additional freelancers and consultants too.” 3. The launch is just the tip of the iceberg A successful campaign usually requires months of planning. If you pin down all logistics before you go live, your campaign becomes more trustworthy and you are more likely to succeed. Sean: “We began planning our new structure as a ‘community benefit society’ and our new business strategy years in advance and starting priming our established audience for the crowdfunding campaign a few months before launch. Hitting the ground running with our core audience piling in to buy shares created a crucial start — and because of effective planning, we were ready to then maintain that momentum through our constant marketing and PR activity.” Many crowdfunding campaigns sink because people launch a campaign and think the work starts there. Sebastian: “Do the maths. 5% of your community will give you around 5 Euros on average if you ask them 5 times. So if you need 1.000 Euros, you’ll need 200 people who pay (the 5%) or 4.000 people including those who don’t pay (the 95%). Now: how can you reach 4.000 people 5 times? Email, Facebook, Instagram, YouTube? Partners, events, PR? Play with the numbers until it sounds doable to yourself.” Clara: “You need to think about what you are and what you’re offering: if you have a committed community you probably don’t need to think of great material rewards and you have to encourage the message of feeling part of that community; if people don’t know you and you haven’t ‘done anything’ for them they’re probably going to want something in exchange and therefore you will be spending more on rewards.” 4. You’ll need to know your values to convince others A clear vision that draws your readers in is just as important as figuring out the logistics of your campaign. Sebastian: “Be clear about your mission. You are not only asking people for money, but you are also asking them to join your movement. What is your movement? You have less than 10 seconds to convince somebody with a crystal clear pitch, that is inspiring and easy to understand. Sanity check: Does your pitch tell the world mainly why you need the money (not good)? Or does it tell people why they should join your movement (good)?” Sophie: “Take enough time to work on your argument and make your case: Why should people support something they have not seen yet? What are the reasons people should support you at this stage? In the case of heidi.news, we drew a list of 5 arguments (available in English on the page heidi.news/en). Try to put yourselves in their shoes, and understand their expectations.” 5. 
Transparency is key When you are asking your readers to become your partners and support you financially, you need to be able to answer their questions on your processes, finances and goals. Sean: “If there is integrity behind what you want to achieve (ie. you’re in it for the journalism, not just the money) then be transparent in your communications and this will give you the ability to make bold and clear asks for support. Look at how others achieved what they achieved but ensure you’re speaking to your community in a voice that is real for you.” Sebastian: “Be painfully transparent. What is the money for? What will you do with it? Why can’t you pay it yourself? Why does it have to be that much?” 6. Be prepared for self-promotion To reach as many people as possible, you should give some thought into how you want to communicate during the different phases of your campaign. Clara: “You need to constantly tell people that you’re crowdfunding and why you’re crowdfunding. Sean: “Create a clear narrative around why you’re doing what you’re doing, which engages people’s head and heart. Continue developing the story at all stages during (and after the campaign), regularly marking milestones in the campaign and bringing out the drama of it. Create tension between the progress of the campaign and the chance it could fail. All the while, connect people back to why your project is special and needed: the core purpose. Be able to sum up that purpose in one phrase and hammer it throughout the campaign.” Bonus advice Sebastian: “Pro tip: People don’t want to pay your salary. If you tell them: “I want to get paid” it’s not going to work (it’s unfair, I know).” Clara: “Bear in mind what rewards you’re giving: Mugs are cool but they have high mailing costs and often break on the way.” Sophie: “Don’t leave too much time between the campaign and the launch. People get impatient. Make sure you are able to deliver quickly afterwards.” Sean: “No one likes a boring video with cheesy music. Don’t be vague about why you’re running your crowdfunding campaign, and don’t communicate with desperation (a ‘confident urgency’ is better).”
https://medium.com/we-are-the-european-journalism-centre/what-you-need-to-know-before-starting-your-crowdfunding-campaign-edc92142ec55
['Stella Volkenand']
2019-02-27 10:37:18.673000+00:00
['Media', 'Membership', 'Insights', 'Crowdfunding', 'Journalism']
Build Advanced React Input Fields Using Styled Components and Storybook.js
The Multiple-Value Clearable Input

In addition to the clear icon, we'll try to make the combo input more interesting. Whenever a user hits the Enter key, the existing non-empty value will be extracted out of the input field to become an item, displayed with a gray background. Duplicated item values indicate an error: the combo input will be bordered by red lines, and we can change the clear icon to red too. This MultiValueClearableInput can wrap around to the following lines if the size exceeds the width. Entering Backspace will delete the text in the input field and then the items, one by one, from back to front. When there are no duplicated item values, the combo input border will turn back to black. Clicking the clear icon will clear all of the items, along with the current value in the input field.

This MultiValueClearableInput can be built with a container composed of three things: a list of divs, an input field, and a clear icon. In addition, the list of divs and the input field are grouped by an input container. We copy the ClearableInput code and expand it to accomplish the MultiValueClearableInput. Here's src/components/AdvancedInputs/MultiValueClearableInput.js:

Line 13 defines the container's border color, which is based on whether there's an error. isError is passed to the props at line 77. Line 53 adds a state to track item values. Lines 55-57 calculate the state and whether there's an error (duplicated item values). Lines 79-81 specify a list of items, in addition to the input field and the clear icon. Line 84 handles the KeyDown event. The event handler is defined at lines 59-74. Why do we use KeyDown instead of KeyPress? It's because the KeyPress event is invoked only for printable character keys, while the KeyDown event is raised for all keys, including nonprintable ones such as Control, Shift, Alt, Backspace, etc. (The Enter key is a printable character.) Line 94 sets the SVG color based on whether there's an error. Line 97 clears the item values along with clearing the input field (line 96).

This file adds code to the ClearableInput. Instead of having two files, we could add a parameter for it to handle both cases. Do you want to take it as an exercise?
https://medium.com/better-programming/build-advanced-react-input-fields-using-styled-components-and-storybook-js-a231b9b2438
['Jennifer Fu']
2020-12-30 00:25:19.495000+00:00
['Programming', 'Storybook', 'Nodejs', 'React', 'JavaScript']
Misters Darcy, Ranked
Misters Darcy, Ranked A Listicle With Some Commentary Picture Mr. Darcy in your mind’s eye. What do you see? Brown hair? Some eyes? Rich-man ruffle shirts and short pants? Or rather, whom do you see? Colin Firth or Matthew MacFadyen? Well all of this is wrong according to a panel of experts. Apparently he looked more like a regular schmo, with gray hair to boot: Wait, looked? “Real” Mr. Darcy? Yes, that’s right. We’re arguing over the visual integrity of a real fake literary character. Well in the spirit of such nonsense, why don’t we take a ride back and rank ALL the Misters Darcy we’ve come across through the years. I’m not including spinoffs like “Lizzie Bennet’s Diary” or “Bride and Prejudice” or whatever. I am including the zombie one though because that preview got me for like one second. Do not try to tell me Bridget Jones’s Darcy counts because he does not. Here are the Darcies, in order from worst to best: Sam Riley from Pride and Prejudice Zombies (2016) Laurence Olivier, Pride and Prejudice, 1940 Bow-tie-for-hair Darcy, Alex Balk, 2017 Matthew Rhys, Death Comes to Pemberley, 2013 Not-dancing Darcy, Nicole Dieker, 2017 Billing-statement Darcy, Megan Reynolds, 2017 David Rintoul, Pride and Prejudice, 1980 (BBC Miniseries I) Napkin Darcy, by Mike Dang, 2017 Guardian Guy, 2017 (honestly the way this guy is drawn, it’s not so bad) Image: UKTV/Nick Hardcastle/PA Boy Darcy, Kelly Conaboy, 2017 Matthew MacFadyen, Pride and Prejudice, 2005 Extremely Accurate Darcy, Christine Friar, 2017 Colin Firth, Pride and Prejudice, 1995 (BBC Miniseries II) Which one would you fuck?
https://medium.com/the-hairpin/misters-darcy-ranked-b863da77449b
['Silvia Killingsworth']
2017-02-09 18:44:06.475000+00:00
['Pride And Prejudice', 'Jane Austen', 'Movies', 'Mr Darcy', 'Books']
“Press” Premieres on PBS Oct 6
Image Credit: IMDb.com Public Television “Press” Premieres on PBS Oct 6 Journalism drama reflects real-world U.K. newspaper life What is Press About? Press revolves around two rival fictional British newspapers, The Herald and The Post, and their staff. Even though the BBC has a well-established reputation for high-quality period dramas, one should not underestimate what it can do with contemporary set productions. Journalists at The Herald, a left-leaning broadsheet newspaper, look to write serious hard-hitting news content without compromising journalistic integrity. In Wendy Bolt’s words, the people at The Herald think it is “a prize-winning crusading liberal lefty paper exposing hypocrisy and corruption.” During the first episode, Bolt, played by Susannah Wise, was seen undercover at The Herald. She was there to dig up dirt on the journalistically honest broadsheet for The Post. Although Evans describes Bolt as a “provocative twenty-first-century icon,” Edwards more accurately referenced the author as “a toxic right-wing troll.” From the way Edwards speaks of Bolt, you would be forgiven for thinking The Herald’s investigative reporter was referencing real-life Fox News commentators. Directed by Tom Vaughan, in the era of digital media and the 24-hour news cycle, the six-part series highlights the changing nature and growth of modern journalism. Media is truly the industry that never sleeps. Who is in the Press Cast? Holly Evans, played by Charlotte Riley, is the deputy news editor at The Herald. Constantly annoyed by the easily correctable errors she sees being published, the news editor believes The Herald’s content should be accurate and without reproach. The editor of The Post, more concerned with filling pages with populist entertainment dribble than hard-hitting news content, is Duncan Allen. “We do the news,” Allen said, “but we cheer people up.” Allen, played by Ben Chaplin, has virtually zero home life because of the commitment he has to his career. The broadsheet shares a building with The Post. The Post, owned by Worldwide News CEO George Emmerson, is a tabloid publication suggestive of the U.K.’s newspaper The Sun. It’s therefore not surprising the tabloid’s CEO, played by Poirot actor David Suchet, is reminiscent of real-world media mogul Rupert Murdoch. With both publications reporting on the same stories from different angles, the tabloid staff are more interested in getting the scoop than they are with moral and ethical standards. According to Evans, The Post is “sexist sensationalism that doesn’t check it’s facts.” Further to Riley, Chaplin and Suchet, “Press” also stars Priyanga Burford, Al Weaver, Ellie Kendrick, Brendan Cowell, and Shane Zaza as Amina Chaudury, James Edwards, Leona Manning-Lynd, Peter Langly and Raz Kane, respectively. Is there a Trailer?
https://medium.com/harsh-light-news/press-premieres-on-pbs-oct-6-c8940565fdb6
['Shain E. Thomas']
2019-09-24 12:37:47.721000+00:00
['Pbs', 'Journalism', 'BBC', 'Drama']
Alex Jones: Putting the Con in Conspiracy Theories
If you don’t know who Alex Jones is, congratulations. I envy you. He is basically just a walking trash bag. Alex Jones is a far-right media personality who spews garbage conspiracy theories and basically just yells at the internet. Most notably, he spread lies about the children of the Sandy Hook massacre being a hoax. Already, after reading that, I’m sure you despise him. Join the club. Now, I’ve been loosely aware of conspiracy theories that are fringe and outlandish as we many of us are, from the earth being flat to the moon landing being fake. They are, simply put, preposterous. Anyone who thinks logically would know there is nothing true about them. If anything, it’s a fun exercise in using your imagination. Unfortunately, during Covid-19, many of these absurd media personalities are gaining new traction. An old friend of mine has become an extreme right-wing activist who genuinely believes in these things and shares this stuff on Facebook every hour of the day. Because of that, I’ve decided it’s time to point out the flaws in the logic of people like Alex Jones. The funny thing about conspiracy theorists is that they are critical of everyone but themselves. Obviously, that is because of confirmation bias, a psychological phenomenon where one listens only to those they agree with. Now, let’s try and work around this bias, and be critical of him if you aren’t already. I am begging you, or your wacky cousin, to actually look closely at what people like Alex Jones gain by spreading such misinformation. He is not your friend. Alex Jones is a slimy salesman who is taking advantage of people who will believe things without substantiated evidence. If you believe the moon landing was fake, then you’re also likely to belive Alex Jones’ brain pills might actually work. Maybe his toothpaste that prevents COVID will also prevent you from getting sick! Fun fact: They don’t. Labs have tested the products he sells and many of them will not live up to the claims. Yes, the items do have the ingredients listed, but the claims are largely exaggerated. That’s how marketing works. In many cases, his products are just an overpriced version of something you can already buy for $5 at any drug store. Let’s take a step back and look at the model of InfoWars. How does it make money? By selling garbage products to people who blindly believe things without any research. In fact, he makes much of his money by selling these “miracle” products to his viewers. He preys on the gullibility and vulnerability of his audience to sell his products. If you follow Alex Jones and buy his products, please look closely at how someone benefits from these kinds of flat-out lies and conspiracies. If someone is telling you white people are being eradicated one minute, and trying to sell you a bulletproof vest the next, what are their real intentions? To scare you, and profit off your fear. He’s a Con Artist much more than he is a Conspiracy Theorist — that’s what the are. While that might be a bit of a damper on your mood, there is some good news. The families of the Sandy Hook victims have successfully sued Alex Jones and were awarded $100,000 dollars. If they’ll get it, who knows, he is dick after all. It’s a start, but hopefully, court cases like these will discourage other people from spreading misinformation like our least favourite douche canoe.
https://medium.com/the-innovation/alex-jones-putting-the-con-in-conspiracy-theories-9a82ebb23edd
['Victoria A. Fraser']
2020-09-20 19:45:03.223000+00:00
['Conspiracy Theories', 'Propaganda', 'Advertising', 'Research', 'Psychology']
Hierarchical Clustering
Hierarchical Clustering

Ravasz and Girvan-Newman Algorithms

Figure 1 Tree of Life. Hierarchical clustering defining the 3 biological domains: Archaea (red), Bacteria (blue) and Eukarya (green). Source

Two categories of hierarchical clustering algorithms are presented here: agglomerative (Ravasz algorithm [1]) and divisive (Girvan-Newman algorithm [2, 3]).

Ravasz Algorithm

It is divided into 4 sequential steps.

Definition of a Similarity Matrix

Matrix entries can represent evolutionary distances between nodes or the number of neighbors a pair of nodes has in common. In the second case, each entry is the topological overlap of nodes i and j:

x_ij = J(i, j) / (min(k_i, k_j) + 1 − Θ(A_ij))

where J(i, j) is the number of neighbors common to i and j (plus 1 if there is a direct link between them), k_i and k_j are the two degrees, and Θ(A_ij) equals 1 when i and j are connected and 0 otherwise. This implies x_ij = 0 when nodes i and j are not connected and have no neighbors in common. The maximum value, x_ij = 1, is obtained when both nodes are connected and have the same neighbors.

Group Similarity Criteria

After joining the most similar nodes, clusters need to be compared with the remaining elements of the network (nodes/clusters). Three clustering approaches can be used:

Single Linkage: the similarity between two groups equals the similarity between their most similar elements;
Complete Linkage: analogous to the previous measure, but using the most dissimilar nodes of each cluster as reference;
Average Linkage: considers the average similarity over every possible pair combination across the 2 clusters.

Hierarchical Clustering Procedure

Having defined a similarity matrix and a similarity criterion to compare clusters, the following steps are executed:

1. Assign a similarity value to every pair of nodes in the network;
2. Identify the most similar community/node pair and join both. The similarity matrix is updated based on the group similarity criterion;
3. The second step is repeated until all nodes are in the same community.

Dendrogram Cut

At the end of the execution, a single tree joining all nodes is obtained — the dendrogram. Although it makes it possible to identify the most similar nodes, it does not return the best partition of the network. In fact, the dendrogram can be cut at one out of several levels. To solve this, modularity is calculated for each partition and the one with the highest value is chosen.

Combining the four steps, the complexity of the algorithm is estimated:

Step 1: the similarity between every pair of nodes is calculated, requiring O(N²) operations, N being the number of nodes in the network;
Step 2: each newly formed community is compared against the others, which requires O(N) calculations per merge;
Steps 3 and 4: using a convenient data structure, in the worst-case scenario the dendrogram can be built in O(N log N) steps.

Overall, the algorithm runs in O(N²). Although it is slower than some algorithms presented in the next sections, it is significantly faster than the brute-force approach, whose number of operations grows exponentially with N.

Girvan-Newman Algorithm

Instead of connecting nodes based on similarity criteria, the algorithm developed by Michelle Girvan and Mark Newman removes edges based on centrality criteria. This is repeated until no edges remain.

Defining Centrality

The GN algorithm identifies the pair of nodes that most likely belong to different communities and removes the link connecting them. The centrality matrix needs to be recalculated after each removal. Each entry is calculated using one of 2 alternative approaches:

Link Betweenness: proportional to the number of shortest paths, over all pairs of nodes, that cross the respective link. Complexity is O(LN), or O(N²) in sparse networks, L being the number of links;
Random-Walk Betweenness: after picking a pair of nodes, a random walk between them is traced. Doing this for every combination of nodes, the average number of times the link is crossed is recorded; the link's centrality is proportional to this value. The first step of this calculation requires the inversion of an N×N matrix, with computational complexity O(N³); averaging the flow over all pairs of nodes requires O(LN²) steps. In the case of a sparse network, the overall complexity is O(N³).

Hierarchical Clustering Procedure

After choosing one of the two centrality criteria:

1. Calculate the centrality of every link;
2. The link with the highest centrality is removed from the network. In case of a tie, one is picked at random;
3. The centrality matrix is updated;
4. The two previous steps are repeated until no links are left in the network.

Dendrogram

Similarly to Ravasz's, the Girvan-Newman algorithm does not predict the best partition. Again, modularity is used to determine the optimal cut in the dendrogram.

Regarding the complexity of the algorithm, the limiting step is the centrality calculation. If link betweenness is chosen, its complexity is O(LN). The overall complexity is obtained by multiplying this by the number of times the centrality matrix has to be recalculated — L, until all links are removed. This means the final complexity is O(L²N), or O(N³) for a sparse network.
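To make both procedures concrete, here is a short Python sketch assuming networkx, numpy and scipy are available. The topological-overlap matrix is a rough stand-in built from the formula above, and Zachary's karate club graph is just a convenient toy network:

import itertools
import numpy as np
import networkx as nx
from networkx.algorithms.community import girvan_newman, modularity
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)

# Ravasz-style agglomerative clustering: topological overlap -> distance -> average linkage.
common = A @ A                                             # common neighbors of i and j
sim = (common + A) / (np.minimum.outer(deg, deg) + 1 - A)  # x_ij from the formula above
np.fill_diagonal(sim, 1.0)
Z = linkage(squareform(1.0 - sim, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")            # cut the dendrogram into 2 clusters
print(labels)

# Girvan-Newman: repeatedly remove the highest-betweenness link; keep the
# level of the dendrogram that maximizes modularity.
best_partition, best_q = None, -1.0
for communities in itertools.islice(girvan_newman(G), G.number_of_nodes() - 1):
    q = modularity(G, communities)
    if q > best_q:
        best_partition, best_q = communities, q
print(len(best_partition), round(best_q, 3))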
Regarding both the Ravasz and GN algorithms, it is important to ask whether hierarchical structure is really present in real networks or whether the algorithms are imposing it. Are there nested modules inside bigger ones? Is it possible to assess, a priori, if a network has this structure? One way to check whether hierarchical modularity is present is by analyzing how the clustering coefficient scales with the degree:

C(k) ~ 1/k

This dependence on the node's degree lets us identify whether such a pattern is present. In many real networks this phenomenon is observed: scientific collaboration, metabolic and citation networks. As expected, under degree-preserving randomization the community structure disappears and the scaling vanishes, resembling Erdős-Rényi random networks, where these structures are not present.

References

[1] E. Ravasz, A. L. Somera, D. A. Mongru, Z. N. Oltvai and A. L. Barabási, "Hierarchical Organization of Modularity in Metabolic Networks," Science, vol. 297, no. 5586, pp. 1551–1555, 2002.

[2] M. E. J. Newman and M. Girvan, "Finding and evaluating community structure in networks," Physical Review E, vol. 69, 2004.

[3] M. Girvan and M. E. J. Newman, "Community structure in social and biological networks," Proceedings of the National Academy of Sciences of the United States of America, vol. 99, no. 12, pp. 7821–7826, 2002.
https://medium.com/swlh/hierarchical-clustering-64846f9935bc
['Luís Rita']
2020-05-29 22:13:11.082000+00:00
['Ravasz', 'Girvan Newman', 'Algorithms', 'Hierarchical', 'Clustering']
AlphaFold 2 Explained: A Semi-Deep Dive
Image by Dale

At the end of last month, DeepMind, Google's machine learning research branch known for building bots that beat world champions at Go and StarCraft II, hit a new benchmark: accurately predicting the structure of proteins. If their results are as good as the team claims, their model, AlphaFold, could be a major boon for both drug discovery and fundamental biological research. But how does this new neural-network-based model work? In this post, I'll try to give you a brief but semi-deep dive behind both the machine learning and biology that power this model.

First, a quick biology primer: The functions of proteins in the body are entirely defined by their three-dimensional structures. For example, it's the notorious "spike proteins" studding the coronavirus that allow the virus to enter our cells. Meanwhile, mRNA vaccines like Moderna's and Pfizer's replicate the shape of those spike proteins, causing the body to produce an immune response. But historically, determining protein structures (via experimental techniques like X-ray crystallography, nuclear magnetic resonance, and cryo-electron microscopy) has been difficult, slow, and expensive. Plus, for some types of proteins, these techniques don't work at all.

In theory, though, the entirety of a protein's 3D shape should be determined by the string of amino acids that make it up. And we can determine a protein's amino acid sequence easily, via DNA sequencing (remember from Bio 101 how your DNA codes for amino acid sequences?). But in practice, predicting protein structure from amino acid sequences has been a hair-pullingly difficult task we've been trying to solve for decades.

This is where AlphaFold comes in. It's a neural-network-based algorithm that's performed astonishingly well on the protein folding problem, so much so that it seems to rival in quality the traditional slow and expensive imaging methods. Sadly for nerds like me, we can't know exactly how AlphaFold works because the official paper has yet to be published and peer reviewed. Until then, all we have to go off of is the company's blog post. But since AlphaFold (2) is actually an iteration on a slightly older model (AlphaFold 1) published last year, we can make some pretty good guesses. In this post, I'll focus on two core pieces: the underlying neural architecture of AlphaFold 2 and how it managed to make effective use of unlabeled data.

First, this new breakthrough is not so different from a similar AI breakthrough I wrote about a few months ago, GPT-3. GPT-3 was a large language model built by OpenAI that could write impressively human-like poems, sonnets, jokes, and even code samples. What made GPT-3 so powerful was that it was trained on a very, very large dataset, and based on a type of neural network called a "Transformer." Transformers, invented in 2017, really do seem to be the magic machine learning hammer that cracks open problems in every domain. In an intro machine learning class, you'll often learn to use different model architectures for different data types: convolutional neural networks are for analyzing images; recurrent neural networks are for analyzing text. Transformers were originally invented to do machine translation, but they appear to be effective much more broadly, able to understand text, images, and, now, proteins. So one of the major differences between AlphaFold 1 and AlphaFold 2 is that the former used convolutional neural networks (CNNs) and the new version uses Transformers.
Now let’s talk about the data that was used to train AlphaFold. According to the blog post, the model was trained on a public dataset of 170,000 proteins with known structures, and a much larger database of protein sequences with unknown structures. The public dataset of known proteins serves as the model’s labeled training dataset, a ground truth. Size is relative, but based on my experience, 170,000 “labeled” examples is a pretty small training dataset for such a complex problem. That says to me the authors must have done a good job of taking advantage of that “unlabeled” dataset of proteins with unknown structures. But what good is a dataset of protein sequences with mystery shapes? It turns out that figuring out how to learn from unlabeled data-”unsupervised learning”-has enabled lots of recent AI breakthroughs. GPT-3, for example, was trained on a huge corpus of unlabeled text data scraped from the web. Given a slice of a sentence, it had to predict which words came next, a task known as “next word prediction,” which forced it to learn something about the underlying structure of language. The technique has also been adopted to images, too: slice an image in half, and ask a model to predict what the bottom of the image should look like just from the top: Image from https://openai.com/blog/image-gpt/ The idea is that, if you don’t have enough data to train a model to do what you want, train it to do something similar on a task that you do have enough data for, a task that forces it to learn something about the underlying structure of language, or images, or proteins. Then you can fine-tune it for the task you really wanted it to do. One extremely popular way to do this is via embeddings. Embeddings are a way of mapping data to vectors whose position in space capture meaning. One famous example is Word2Vec: it’s a tool for taking a word (i.e. “hammer”) and mapping it to n-dimensional space so that similar words (“screw driver,” “nail”) are mapped nearby. And, like GPT-3, it was trained on a dataset of unlabeled text. So what’s the equivalent of Word2Vec for molecular biology? How do we squeeze knowledge from amino acid chains with unknown, unlabeled structures? One technique is to look at clusters of proteins with similar amino acid sequences. Often, one protein sequence might be similar to another because the two share a similar evolutionary origin. The more similar those amino acid sequences, the more likely those proteins serve a similar purpose for the organisms they’re made in, which means, in turn, they’re more likely to share a similar structure. So the first step is to determine how similar two amino acid sequences are. To do that, biologists typically compute something called an MSA or Multiple Sequence Alignment. One amino acid sequence may be very similar to another, but it may have some extra or “inserted” amino acids that make it longer than the other. MSA is a way of adding gaps to make the sequences line up as closely as possible. Image of an MSA. Modi, V., Dunbrack, R.L. A Structurally-Validated Multiple Sequence Alignment of 497 Human Protein Kinase Domains. Sci Rep 9, 19790 (2019). According to the diagram in DeepMind’s blog post, MSA appears to be an important early step in the model. Diagram from the AlphaFold blog post. You can also see from that diagram that DeepMind is computing an MSA embedding. This is where they’re taking advantage of all of that unlabeled data. To grok this one, I had to call in a favor with my Harvard biologist friend. 
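As a quick aside, here is a toy Python sketch of what an alignment looks like once the gaps are in place. The sequences are invented purely for illustration; real MSAs are computed with dedicated alignment tools, not by hand:

from itertools import combinations

# A tiny hand-made alignment: gaps ("-") have been inserted so the sequences line up.
msa = [
    "MKT-AYIAKQR",
    "MKTLAYIAKQR",
    "MRT-AYVAKQR",
    "MKT-AYIAEQR",
]

def percent_identity(a: str, b: str) -> float:
    """Fraction of aligned (non-gap) positions where the two sequences agree."""
    aligned = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    matches = sum(1 for x, y in aligned if x == y)
    return matches / len(aligned)

# Pairwise similarity across the toy alignment; more similar pairs are more likely
# to share an evolutionary origin, and hence a similar structure.
for a, b in combinations(msa, 2):
    print(a, b, round(percent_identity(a, b), 2))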
It turns out that in sets of similar (but not identical) proteins, the ways in which amino acid sequences differ is often correlated. For example, maybe a mutation in the 13th amino acid is often accompanied by a mutation in the 27th. Amino acids that are far apart in a sequence typically shouldn’t have much effect on each other, unless they’re close in 3D space when the protein folds-a valuable hint for predicting the overall shape of a protein. So, even though we don’t know the shapes of the sequences in this unlabeled dataset, these correlated mutations are informative. Neural networks can learn from patterns like these, distilling them as embedding layers, which seems to be what AlphaFold 2 is doing. And that, in a nutshell, is a primer on some of the machine learning and biology behind AlphaFold 2. Of course, we’ll have to wait until the paper is published to know the full scoop. Here’s hoping it really is as powerful as we think it is.
https://towardsdatascience.com/alphafold-2-explained-a-semi-deep-dive-fa7618c1a7f6
['Dale Markowitz']
2020-12-09 16:23:33.585000+00:00
['Data Science', 'Technology', 'Python', 'Machine Learning', 'Biology']
Verification for beginners: a journalism trainer writes
Verification for beginners: a journalism trainer writes Learning ‘the four second rule’ to avoid spreading fake #coronanews. To be clear: I’m not key worker. I’m not a doctor; nor am I a virologist, epidemiologist, or health expert, except in the way in which we are all armchair immunologist these days. In fact, I’ve only got one science qualification. GCSE, biology, grade C, class of 1989. But fake science, false rumours, and dodgy information? It’s been a big part of my professional life for the last two decades. I train journalists* around the world, mostly in developing and transitional countries. I’ve worked with junior reporters, desk editors, station bosses, and everything in-between, from newsgathering teams to programme-makers and individual reporters. Specifically I’ve worked extensively in countries where journalism is not normally a financially rewarding profession; so it’s less competitive. Fewer people have access to higher education and as a result, people often end up reporting on health issues, for example, without having a science background. battling fake news from the bunker of accuracy… I’ve written handbooks and online courses for journalists and communicators too, and whilst it’s not my specialist subject, I’ve delivered a fair amount of training on verification skills over the years. It’s a rapidly changing field, but the underlying questions are the same. Is this information true? Can we broadcast it? Print it? Go live with it? In this current pandemic, with so much false information around, it seems some of these basic journalism skills might be useful for ordinary folk who feel overwhelmed by information. This is a pretty basic introduction to the topic, and aimed at a non-technical, non-medical audience: older relatives, perhaps, or that friend you have who, er, overshares. But I’ve included links for those who want to dig deeper. It’s a relatively long read, but since we are in lockdown, I think it’s worth it. And the process gets much quicker: verification, formerly known as fact-checking, is essentially a filtering process. It becomes almost instinctive, once you answer a few basic ‘W’ questions with satisfaction. I think it takes about four seconds. What??? (What’s the story? And is it too good to be true? Or too bad?) Is it too good to be true that you could avoid a respiratory infection from a nice cup of tea? Yes. It is literally too good to be true, but it’s one of the many fake rumours out there. Is it true that petrol pumps are causing the spread of COVID-19? Yes, but all hard surfaces have this potential, and a WhatsApp message that claimed the ‘virus seems to be spreading quickly via petrol pumps’ is most certainly fake. Note: There is nearly always a bit of truth mixed up with the fake stuff; it adds credence to the report. Here’s a quick and easy check you can do. Copy and paste a couple of key phrases into Google, or a dedicated factchecking site, like Fullfact.org. In this case try ‘virus seems to be spreading quickly via petrol pumps’. You can add a couple of other keywords like ‘factcheck’, ‘Snopes’, or even ‘hoax’ if you already have doubts. That’s what I did with the petrol pump story: it took about four seconds to do. source: screengrab from Google search page 22 March, created by author. It’s confusing when lazy journalists don’t do their verification properly. In this example, you can see that the Sun fell for a hoax (although a later edition did), the Daily Mail didn’t. Does it matter? 
Potentially; if people fixate on one particular source of transmission, they ignore others and the behaviour can lead to the spread of a virus which kills people. Having said that, I just got back from a shop where an old guy blew his nose into his hanky, and rifled for an eternity through the tabloid newspapers with the same hand. Some people are beyond help. Note: The copy and paste key words technique works better on a big screen. My personal theory is that people are using phones for information which means they skim read more, and copy and paste less. This will get worse as lockdown progresses. It’s just a theory. Who??? (AKA what’s the source?) Where did this information come from? Not the sharer, your pal, who is a trustworthy, straight up guy. But the original source who you don’t know at this stage. In an editorial team meeting, what’s the source? is usually the second follow up question, if a story shows legs. With inexperienced news teams I have to ask this a lot. What’s the source? A friend What’s his source? He works in the hospital Did he actually see somebody rob a doctor for his ID in order for the thief to use it to steal bog roll? No What’s his source then? *reporter checks* He read it on Twitter *story spiked* At the back of her mind, the news editor is thinking, who can we speak to to go on record to confirm or deny this rumour? Who is credible? With fake corona virus news, as with much fake news in general, look for the ‘official’ element (to add believability) mixed with twist of insider/secrecy (which is why the mainstream media has not picked up on this). Look at this tweet, which I noticed today (23 March) whilst researching this article: It falls into the ‘too bad to be true’ category. Surely not? People wouldn’t do that? Several people responded, expressing disgust at humanity. The Tweet has been shared several hundred times. But to my cynical eye, it just smelt like a rat, for loads of reasons. Firstly, this would be a big story, if true. Why wasn’t it being reported? People are desperate for C-19 stories, and London is a big city. Somebody would go on the record; I’d send reporters to investigate, if I were a news editor. Also, stealing photo ID didn’t make sense. To steal from a hospital you’d need to know where the store cupboard is, which would mean asking around in a crime scene which has been infected by a deadly virus using fake ID. Who would take that risk? Strange things do happen, but it didn’t make sense. Finally, there was a couple of specific phrases in the initial mail. ‘Junior doctors’: why would only junior doctors be told about this, surely it would be all staff? ‘one London hospital’: surely if this were happening in one London hospital, all hispitals would be at risk? So I aked the who question. Quite a few times. I found that the first mention of seems to have been made by a Russia Today reporter called Afshin Rattansi. Russia Today (AKA RT) is Kremlin-funded propoganda, and not a reliable source of news. Specifically RT sets out to portray a negative picture of British life, and is not considered trustworthy. He pinned his Tweet. Needless to say, Afshin did not reply. But his comments were successful in causing arguments by those who responded; some people called for the army to be brought in, some blamed those on the right, others blamed those on the left. Inevitably, some blamed immigrants. 
“Our country is going to hell,” said one user, perhaps unwittingly parroting RT’s editorial line which is basically, ‘the West is collapsing, look at this story which proves it’ It drew out a whole load of other commentary from people who felt the same, and horror stories too. People are spitting on doctors and the police in Holland, claimed another. Digging deeper, and upsetting a Dutch lady on Twitter in my efforts to find a source. It turns out to have been a single incident, a drunk lady who remembered nothing about what happened. Horrible, but not a pandemic of spitting: and at a policeman not a doctor. But it seemed some people wanted to believe the worst. It was also curious to see how involved the arguments were about who to blame for this thing that never happened. I flagged up the flaws in the story, and got a handful of positive messages. But mostly people kept on sharing it, even after I’d exposed the story as fake. Hundreds of them. Sarah Muldoon, if that is her real name, also smelt a rat and did something about it that I was too lazy to do; she contacted the Met Police, who confirmed they had not been contacted about any incident. Sarah, and many like her, are amongst the unsung heroes of the pandemic. The fact-checkers. At this stage, the story does not have any evidence associated with it. (update #1 : 24 hours later, there is still no evidence that this story is true, although as the lockdown was announced the need for medical staff to wear ID at all times to avoid being stopped by the police has been discussed. So the advice not to wear ID could turn out to be the opposite of future government advice. Source: BBC Radio 4 Today programme interview. update #2: There were some muggings of doctors, it seems, and medical teams told to be careful; but it was a few months ago. Not C19 related, or anything to do with ID so far as I can tell. Still no word from the original poster of this story (Russia Today). The lesson from this, is beware suspiciously vague insider knowledge. It’s the ‘stable lad/friend of the star/Jürgen Klopp’s hairdresser thing’ thing. Let’s look at another example. The source is quoted as ‘an internal email for staff in London St George’s Hospital’. The alarm bells ring loudly just from this first sentence. Why would a NHS restrict access to vital information about COVID-19? Which doctor would withold information that could save lives? Why no recipient or time code? I printed it out, and did a detailed-ish analysis on this, after it was shared on my five-a-side football WhatsApp group; the guy who posted it was genuinely asking for verification, because the guy was suspicious. It’s an interesting case study (requires biggish screen).
https://medium.com/swlh/verification-for-beginners-a-journalism-trainer-writes-fbcb858ffc02
['Nick Raistrick']
2020-04-23 19:25:18.109000+00:00
['Covid 19', 'Verification', 'Journalism', 'Lockdown', 'Training']
How Self-Driving Cars Could Help Extend Our Human Life Spans
Dr. Lance Eliot, AI Insider (Ed. Note: For reader’s interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/) What is the secret to achieving old age? Jeanne Calment, having lived to the age of 122, had attributed her longevity to her diet which was rich in olive oil. Or, you might find of interest the case of Susannah Jones, she happily consumed four strips of bacon for breakfast each morning, which was included with her scrambled eggs and grits, and was known to eat bacon throughout each day — she lived 116 years. Does this mean that if you are desirous of reaching a ripe old age that you should rush out to buy lots of olive oil and bacon? Well, maybe. I can’t say for sure that this won’t help you, but nor can we say with any certainty that it will help you to make it into your hundreds. One acrimonious debate about old age is whether you are born with the ability to reach it or whether it is your environment that can produce it. In this nature versus nurture debate, some would argue that your environment is the primary influence for successfully reaching old age. If you live in a place that provides a suitable climate, if you live nearby those that can help care for you when you get older, if you have medical assistance that can apply the latest life extending care, under these conditions you have a chance of achieving older age. Someone that might have a perfectly nature-designed old-age DNA can be readily wiped out sooner by living in a place and time that does not foster living to an older age. Maybe both nature and nurture intertwine such that we cannot separate one factor from the other. AI Autonomous Cars And Maximizing Human Life Spans What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars and also keenly interested in how self-driving cars will be used by society. Here’s a thought provoking assertion: AI self-driving cars will help to maximize human life spans. I’ve debated this topic at some industry conferences and thought you’d like to know about it. There are already assertions that AI self-driving cars will reduce the number of car related deaths, which is considered one of the largest benefits to society for the advent of self-driving cars. I agree that someday it is likely that AI self-driving cars will reduce the number of car related deaths, but I also claim that it is many years into the future and that for the foreseeable future it won’t materially impact the number of car related deaths. Indeed, I argue that this whole idea of “zero fatalities” is a gimmick and misleading or stated by those that are perhaps misinformed on the matter. Even if the advent of AI self-driving cars eliminated all car related deaths, you need to realize that the number of car related deaths per year in the United States is about 40,000. There are about 325 million people in the United States. As such, though every life is precious, the saving of 40,000 lives out of a population of 325 million is important but not something that will cure all deaths from happening. There are an estimated 650,000 deaths each year in the U.S. due to heart disease, and another 600,000 deaths due to cancer. 
In theory, if we were only looking at number of deaths as a metric, we would say that we should take all the money spent toward AI self-driving cars and put it toward curing heart disease and cancer, since that has a much higher death rate than car related deaths. The point here is that the AI self-driving car emergence will not presumably alter the likelihood of achieving older age by the act of reducing or eliminating deaths in the population. That’s not going to move the needle on the old age achievement scale (though, allow me to emphasize that each life lost due to a car accident is a tragedy). Mobility As A Factor In Longevity What then might the AI self-driving car be able to do to advance our ages? One aspect that is touted about AI self-driving cars is that it will increase the mobility of humans. There are some that say we are going to become a mobility-as-an-economy type of society. With the access to 24×7 car transportation and an electronic chauffeur that will drive you wherever you want to go, it will mean that people today that aren’t readily mobile can become mobile. Kids that can’t drive today will be able to use an AI self-driving car to get them to school or to the playground or wherever they need to go. The elderly that no longer are licensed to drive will be able to get out of their homes and no longer be homebound, doing so by making use of AI self-driving cars. So, we can make the claim that via the use of more prevalent mobility, it could allow those that are older to be able to more readily visit with say medical advisers and ensure that their healthcare is being taken care of. Need a trip to the local hospital? In today’s terms, it might be logistically prohibitive for the homebound elder to make such a trip. In contrast, presumably with ease they will be able to call forth an AI self-driving car that can give them a lift to the nearby medical care facility. Access And Frequency Of Healthcare Healthcare can also more readily come to them, including having clinicians that go around in AI self-driving cars and can visit with those that need medical assistance. If you are willing to believe that having timely medical care is an important factor in achieving and maintaining older age, the AI self-driving car can be a catalyst for that to occur. Boosting The Spirit And Reduce Isolation Another case of how an AI self-driving car might contribute to the aging process in terms of prolonging life might be due to increased access to other humans and presumably gaining greater mental stimulation and joy in life. Want to visit your grandchildren? Rather than having to arrange for some convoluted logistics, you just get the AI self-driving car to take you to them. Some say that isolation tends to lead to early deaths. AI self-driving cars have the potential for increasing socialization and reducing isolation. This is achieved by the ease of mobility. Physical Fitness As A Factor Another factor might be physical fitness. If you are at home and isolated, you might not be inspired to do physical fitness. Admittedly there are more and more in-the-home treadmills and bikes that will allow you to virtually interact with others across the globe, but this still doesn’t seem to be as meaningful and motivating as doing so in-person. With an AI self-driving car, you could readily get to some location whereby physical fitness with others is able to take place in-person. It might be to get you to the yoga shop or the local gym. 
Food And Nutrition Importance Food and nutrition seem to be a factor in extending life. Once again, the mobility aspects of the AI self-driving car can assist. We already have lots of ridesharing like services emerging today that will bring food to your home. The emergence of AI self-driving cars is going to certainly expand that capability. The so-called “boxes on wheels” will be food delivery vehicles that are being operated as AI self-driving cars. The ease of getting food delivered to your home will be simplified. This all seems pretty good and an encouragement that AI self-driving cars might have another significant benefit to society, namely extending our life spans. Other Side Of The Coin As with anything that can be a benefit, the odds are that there will be potential unintended adverse consequences too. The AI self-driving car could actually become a life limiter, rather than a life extender. You could use the mobility for purposes that put you at greater risk. Maybe you have the self-driving car bring you fatty foods every day to home and to work. Perhaps you use the self-driving car to avoid having to contend with visitors by never being at home? Conclusion You’ve likely seen the famous sigmoid graph that shows the typical mortality rate for humans. It’s a kind of “S” curve that starts up, then stays at a relatively constant rate of increase, and then tails off at the end. Benjamin Gompertz was the famous mathematician that is most known as the formulator of the “law of mortality” and for which he asserted that the human rate of death is related to age as a sigmoid function. A variant is the Gompertz-Makeham law that includes the sum of age-independent components. Is there perhaps no true ceiling for human aging? Is the sky the limit? Gompertz’s indication that resistance to death decreases as the years increase might either be an immutable law of nature, or maybe it is something that we can defy or at least extend. If you are looking for more reasons to want to have AI self-driving cars, one could be that it might aid our societal efforts to maximize our life spans. I’ll see you on the other side of 150 years of age. For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website The podcasts are also available on Spotify, iTunes, iHeartRadio, etc. More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/ For his Medium blog, see: https://medium.com/@lance.eliot For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot Copyright © 2019 Dr. Lance B. Eliot
https://lance-eliot.medium.com/how-self-driving-cars-could-help-extend-our-human-life-spans-93e264da8013
['Lance Eliot']
2019-09-06 19:51:13.838000+00:00
['Self Driving Cars', 'Driverless Cars', 'Autonomous Cars', 'Autonomous Vehicles', 'Artificial Intelligence']
An Intuitive Explanation of Random Forest and Extra Trees Classifiers
A Single Stump vs 1000 Stumps Suppose that we have a weak learner, a classifier whose accuracy is slightly better than a random decision, with a classification accuracy of 51 %. This could be a decision stump, a decision tree classifier with its depth set to one. At first glance, it would appear that one shouldn't bother with such a weak classifier; however, what if we consider putting together 1000 slightly different decision stumps (an ensemble), each with 51 % accuracy, to make our final prediction? Intuitively, we can see that on average, 510 of these classifiers would correctly classify a test case and that 490 would misclassify it. If we collect the hard votes of each classifier, we could see that on average there would be about 20 more correct predictions; consequently, our ensemble would tend to have an accuracy higher than 51 %. Let's see this in practice. Here we will build a decision stump and compare its predictive performance to an ensemble of 1000 of them. The ensemble of decision trees is created using the Scikit-learn BaggingClassifier. The decision stump and the ensemble will be trained on the Iris dataset, which contains four features and three classes. The data is randomly split to create a training and test set. Each decision stump will be built with the following criteria: All the data available in the training set is used to build each stump. To form the root node or any node, the best split is determined by searching in all the available features. The maximum depth of the decision stump is one. First we import all the libraries that we will use for this article. Script 1 — Importing the libraries. Then, we load the data, split it, and train and compare a single stump vs an ensemble. The results are printed to the console. Script 2 — Stump vs Ensemble of 1000 Stumps The accuracy of the stump is 55.0 % The accuracy of the ensemble is 55.0 % The results show that the ensemble of 1000 decision stumps obtained an accuracy of 55 %, showing that they are no better than a single decision stump. So what happened? Why are we not getting better results? Well, we basically created 1000 decision stumps that were exactly the same. It's like we asked a single person what their favorite food was 1000 times and, not surprisingly, obtained the same answer 1000 times.
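Script 1 and Script 2 referenced above are not reproduced in this text, so here is a minimal sketch of what the experiment could look like, assuming scikit-learn's Iris loader and BaggingClassifier; the split, random_state and the base_estimator parameter name (renamed to estimator in scikit-learn 1.2+) follow the description above rather than the author's exact code.

```python
# Hedged reconstruction of the stump-vs-ensemble experiment described above;
# exact parameters (test size, random_state) are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A single decision stump: a tree limited to a depth of one.
stump = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train)

# An ensemble of 1000 stumps, each trained on all rows and all four features,
# mirroring the criteria listed above (use estimator= on scikit-learn 1.2+).
ensemble = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=1000,
    bootstrap=False,           # every stump sees the full training set
    bootstrap_features=False,  # every stump searches all available features
    n_jobs=-1,
).fit(X_train, y_train)

print(f"The accuracy of the stump is {accuracy_score(y_test, stump.predict(X_test)):.1%}")
print(f"The accuracy of the ensemble is {accuracy_score(y_test, ensemble.predict(X_test)):.1%}")
```

Because bootstrapping and feature sampling are both switched off, all 1000 stumps see identical data and pick identical splits, which is exactly why the ensemble scores no better than the single stump in the results quoted above.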
https://towardsdatascience.com/an-intuitive-explanation-of-random-forest-and-extra-trees-classifiers-8507ac21d54b
['Frank Ceballos']
2020-04-06 03:36:45.595000+00:00
['Random Forest', 'Decision Tree', 'Python', 'Data Science', 'Machine Learning']
Whose second half can you influence?
He even missed the backboard in the first half. Twice. Once from the right. Then from the left. It wasn’t pretty. The Team At the half, he came to me. “Coach, I shouldn’t play in the second half. I can’t make a shot. I’m terrible.” I shot back quickly. “Yep, you were awful,” and I hit him in the arm and laughed. He was surprised, but laughed, too. I kept going, “But you missed from the right then you missed from the left. The next ones are going straight down the middle and into the basket. You had an awful first half, but that first half has nothing to do with the second half. The first half is in the past and the second half is in the future.” It seemed like he was listening. Which was nice. So I kept going. “The team needs you in the second half. Things are going to turn around for you, you’ll see.” There was hope in his eyes. “Now go grab a sandwich and get some water and we’re going to turn this around, OK?” “OK, coach,” he ran off to the bench to get a sandwich. On the first play of the second half, he put up an off-balance shot and it went in. You could practically feel the electricity in the stadium go from negative to positive–as clear as the two sides of a battery. It was all coming from him. He turned to me and gave me a full-toothed smile and big thumbs up. He hopped like a bunny to get back on defense and I’m not sure he knew he was still part of this earth. He made a few more shots soon in the second half and his world was right again. His defense was on, he had loads of energy, and he was smiling during it all. “Sticks and stones may break my bones, but words will never hurt me.” Words can hurt us. They can also help us. Who do you know who’s had a rough first half and could use a lift heading into the second? Pull them up. Extend your hand. Give them a lift. It might cost you a minute of your time and it might mean the difference between a “1” in the win column, a smile for the rest of the day, or dare I go out on a limb and say that it could be the beginning of the second half of something much greater that turned around because of one moment in one day and something you said to help them, to make a difference. According to the laws of the universe, we weren’t supposed to win last week. But we did. According to the laws of the universe, that player was supposed to have a bad second half. He didn’t. Whose second half can you influence?
https://medium.com/the-ascent/whose-second-half-can-you-influence-89b653ba954c
['Bradley Charbonneau']
2018-12-17 23:31:00.931000+00:00
['Coaching', 'Parenting', 'Inspiration', 'Second Chances', 'Motivation']
Storyboarding in UX Design
Image credit: wikiHow Storyboarding in UX Design by Nick Babich In user experience design we're familiar with user research techniques like workshops and interviews. We synthesise our research into user stories and process flows. We communicate our thinking and solutions to our teams with artefacts like personas and wireframes. But somewhere in all of this lie the real people for whom we're designing. In order to make our product better, we must understand what's going on in their worlds and how our product can make their lives better. And that's where storyboards come in. In this article I'll focus on storyboards as a medium to help explore solutions to UX issues, as well as communicating these issues and solutions to others. What is a Storyboard? Storyboards are illustrations that represent shots that ultimately represent a story. Basically, it's sequential art, where images are arrayed together to visualise the story. This method came from motion picture production. The Walt Disney studio is credited with popularising storyboards, using sketches of frames since the 1920s. Storyboards allowed them to build the world of the film before they actually built it. The Lion King storyboard art from Disney. Stories are the most powerful delivery tool for information: Visualization. Pictures are worth a thousand words. Illustrating things works best for understanding any concept or idea. The images can speak more powerfully than just words by adding extra layers of meaning. Memorability. Stories are 22 times more memorable than plain facts. Empathy. It's possible to tell a story that everyone could see and relate to. We often empathize with characters who have real-life challenges similar to our own. Engagement. Stories capture attention. People are hardwired to respond to stories: our innate sense of curiosity draws us in and we engage more when we can sense a meaningful achievement about to be had. What is storyboarding in the context of UX design? Storyboarding in UX is a tool which helps you visually predict and explore a user's experience with a product. It's very much like thinking about your product as if it were a movie, in terms of how people would use it. It helps you to understand how people would flow through the interaction with it over time, giving you a clear sense of how to create a strong narrative. Why does storytelling matter in UX? Stories are an effective and inexpensive way to capture, relate, and explore experiences in the design process. In relation to the UX design process, this technique has the following benefits: Human-centered design approach. Stories put a human face on analytic data. Storyboards bring our solutions to life, so that designers can walk in the shoes of their users, and see solutions as they see them. Storyboarding helps designers to understand existing scenarios, as well as test hypotheses for potential scenarios. 'Pitch and critique' technique. Storyboarding is a team-based activity and everyone can contribute (not just designers). Just as in the movie industry, each scene should be presented and critiqued by all team players. Approaching UX with storytelling inspires design concepts and brings teams closer together around a clearer picture of what's being designed. Iterative approach. Storyboarding relies heavily on an iterative approach. The action of sketching out role-play tests for design concepts lets designers experiment at little or no cost. Nobody gets too attached to the ideas that are generated because they're so quick and rough. 
Creating your own storyboard When thinking about storyboarding, most people focus on their ability (or inability) to draw. The good news is that you don't need to be good at drawing before you start drawing scenario storyboards. Storyboard frame from Martin Scorsese film 'Goodfellas' What is far more important is the actual story you want to tell. Clearly conveying information is key. A designer's main skill is not Photoshop or Sketch, but the ability to formulate and describe a scenario. How to work out a story structure? If you are going to create a visual representation of stories to communicate user issues to others, there's some preparation to be done to make them logical, understandable and convincing in their arguments. By understanding the fundamentals of the story and deconstructing it into its building blocks, we can present it in a more powerful and convincing way. Each story should have the following essential elements: Character. The specific persona involved in your story. Their behaviours, appearance, and expectations, as well as any decisions they make along the way, are very important. Revealing what is going on in your character's mind is essential to a successful illustration of their experience in the storyboard. Scene. It's the environment that the character finds herself in (a real-world context that involves place and people). Plot. All too often designers jump straight into explaining the details of their design without first explaining the back story. Don't be one of them — your story must be created with a structure in mind; there should be an obvious beginning, middle, and end. The narrative that unfolds in your storyboard should focus on a goal for the character. The plot should start with a specific trigger and end with either the benefit of the solution, or a problem that the character is left with. Try using Freytag's Pyramid in structuring your plot. Stories tend to follow a narrative structure that looks a lot like a pyramid. Freytag's Pyramid, showing the five parts, or acts: Exposition, Rising Action, Climax, Falling Action (or final suspense and resolution) and Denouement (Conclusion). Ben Crothers added a quick story into the pyramid about a guy and his phone that won't work. To make your story powerful, here are some points to think about: Authenticity. The main thing is to make the character, their goal, and what happens in their experience as clear as possible. If you're writing something that doesn't resonate with your product, your users will be able to tell. Thus, keep the focus on real humans in real contexts, and your audience will empathise with them. Simplicity. Cut out any unnecessary extras. No matter how good a sentence, picture, or page may be, if it doesn't add value to the overall message, you should remove it. Emotion. It's essential to communicate the emotional state of your character throughout their experience. Using storyboards to illustrate experiences Starting the storyboard can be a little daunting, especially if you're not confident in your drawing skills. But don't worry: the guidelines below will help you turn out a better scenario storyboard. Start with plain text and arrows. The main thing is to break the story up into moments (the context, the trigger, the decisions a character makes along the way, and the benefit or problem they end up with). A sequence of moments. Add emotions to your story. Add emoticons to each step, to help others get a feel for what's going on inside the character's head. 
Remember to illustrate any reactions to success or pain points along the way (what is the character expecting to happen, and how does the result affect him/her?) Try drawing in each emotional state as a simple expression. The same sequence of moments as in the example above, but with added emoticons, to help get a sense of the character's emotional state. Translate each step into a storyboard frame. Emphasize each moment, and think about how your character is feeling about it. Storyboard frame Design a clear outcome. Make sure your storyboard leaves your audience with no doubt about the outcome of the story: if you're describing an unfavorable situation, end with the full weight of the problem; if you're presenting a solution, end with the benefits of that solution to your character. Smiles and sadness on human faces can add emotion to your story, and it comes alive in the hearts and minds of your audience. Image credit: Chelsea Hostetter, Austin Center for Design Conclusion Storyboarding in UX is not easy. But it does work. Visuals are a great way to bring a story to life, so try to utilise them wherever possible. Every bit you can do to understand a user is tremendously useful. Thank you! Follow UX Planet: Twitter | Facebook Originally published at babich.biz References
https://uxplanet.org/storyboarding-in-ux-design-b9d2e18e5fab
['Nick Babich']
2020-05-12 11:58:27.967000+00:00
['Design', 'UX', 'User Experience', 'User Research', 'Emotions']
The History Behind: The Antikythera Mechanism
Investigations into the problematic piece were dropped, the device largely ignored and written off, until 1951, when the eminent British physicist and historian of science Derek John de Solla Price became interested in what the discovery had actually been. Price and the Greek nuclear physicist Charalampos Karakalos published an extensive paper in 1974 under the title Gears from the Greeks: The Antikythera Mechanism, a Calendar Computer from c. 80 BC. The comprehensive 70-page work included X-ray and gamma-ray images of the device and laid out how it may have worked. Price was the first to conclude that the Antikythera Mechanism had been used to predict the position of planets and stars dependent on the month. He stated that the main gear would move and represented the calendar year; this, in turn, would move the smaller cogs which represented the planets, sun and moon. With the user providing input and the clockwork mechanism making a calculation to give an output, the device could legitimately be considered a basic computer. "The mechanism is like a great astronomical clock … or like a modern analogue computer which uses mechanical parts to save tedious calculation." Derek J. de Solla Price, Scientific American The mechanism had initially been recovered in a single heavily encrusted piece, soon breaking into three and, since, many more as smaller bits have fallen off through handling and cleaning. Other parts of the device were later found on the sea bed during an expedition by the famed French diver Jacques Cousteau. There are 83 known surviving parts overall, with seven of those being mechanically significant. These parts contain the majority of the device's mechanism and inscriptions. There are also sixteen smaller parts of the device which have incomplete engravings. Reconstruction | Moravec, Wikimedia Commons, (CC BY-SA 4.0) The device was encased in wood and had doors, with inscriptions on the back acting as an instruction manual of sorts. Inside the device, there is a front face and a rear face, with internal clockwork gears working an adjustable mechanism controlled by a hand crank. Adjusting the device would allow the user to predict astronomical positions and solar events such as eclipses decades in advance. The 30+ gears of the machine would follow the movements of the moon and sun through the zodiac, even modelling the moon's orbit. Knowledge of the technology used to create the Antikythera Mechanism was lost. Despite similar devices appearing during the Islamic golden age, nothing of such complexity would be made again until the invention of the astronomical clock in the fourteenth century. However, there is evidence that the devices may not have been all that rare in Ancient Greece. Writing in the first century BC, the famed Roman statesman Cicero mentioned two such machines that predicted the movement of celestial bodies. Cicero said that these mechanisms were built by the scientist Archimedes and brought to Rome by General Marcus Claudius Marcellus following the siege of Syracuse in 212 BC. Marcellus had taken the device with him, reportedly being saddened by the death of Archimedes, whom he'd held in the highest regard. The plunder then became a family heirloom and was still in existence at the time of Cicero's writing. 
Antikythera mechanism right side view, showing the inner workings of the device, Thessaloniki Technology Museum | Gts-tg, Wikimedia Commons, CC BY-SA 4.0 The two devices in Roman hands were said to be very different, one described as somewhat crude-looking compared to a second, more ornate form, perhaps indicating either a level of development or that unique versions of the device existed for the more affluent. The more elaborate form of the machine had been deposited at Rome's Temple of Virtue by Marcellus. The links to Archimedes have been reinforced by later Roman writers such as Lactantius, Claudian, and Proclus. One of the last great Greek mathematicians of antiquity, Pappus of Alexandria, said that Archimedes had written extensively on the subject of the machines, penning a manuscript by the name of On Sphere-Making. Sadly this is now lost. Other documents do survive, however, with some even including drawings of such mechanisms and instructions on how they worked. One of these devices was the odometer, the modern version of which is an essential component of any car dashboard. The original invention was used by the ancient Romans to place their famous mile markers alongside Roman roads. While the first descriptions of the device came from Vitruvius around 27 BC, the odometer has been attributed to Archimedes himself over 200 years prior. When scientists attempted to build the device depicted in the images, it failed to work until the square gears shown were replaced by cogs of the type found in the Antikythera Mechanism, leading to speculation that the mechanism and Archimedes are linked. Tying in with the reports from Cicero, it seems that the Antikythera Mechanism may well have been invented by Archimedes of Syracuse. However, it could not possibly have been one of the devices mentioned, with both stated to exist in Rome long after his death. Besides the two devices already highlighted, Cicero also identifies a third in production by his friend Posidonius which, again, can't have been the artefact found in 1901. This then leads to the conclusion that the devices were not as uncommon as perhaps initially thought, with at least four known to exist and possibly many more. The technology of Ancient Greece and Rome was seemingly lost for centuries following the conquest of Greece by Rome in 146 BC and then subsequently the fall of the Western Roman Empire. Similar technology would appear again, however, in the Byzantine Empire before flourishing in the Islamic World. In the 9th century, the Caliph of Baghdad commissioned the Banū Mūsā brothers, noted scholars, to write the Book of Ingenious Devices, an extensive illustrated work on technical devices, amazingly including automata. The brothers were working at the legendary Bayt al-Hikma (House of Wisdom), where Islamic scholars pored over ancient Greek and Roman texts, largely forgotten and ignored in the West. The Banū Mūsā brothers described all manner of devices that would have been considered wonders in 9th-century Europe, such as automatic control systems and feedback controllers. Other automata included fountains, musical instruments and automated cranks. "Nothing like this instrument is preserved elsewhere. Nothing comparable to it is known from any ancient scientific text or literary allusion. It is a bit frightening to know that just before the fall of their great civilization the ancient Greeks had come so close to our age, not only in their thought but also in their scientific technology." Derek J. 
de Solla Price There is a tendency in the West to believe that computers, automata and other modern marvels are the work solely of Britain or the United States, and that our age alone is the first to see technological innovation. Yet this is far from the truth. While much of the world was in darkness, Rome and Greece were making spectacular advances in computation and sciences such as astronomy. While Europe was fighting off Vikings, the Islamic world was deep in study, reviving these ancient technologies and adding its own modifications and advancements. Eventually, these theories of science and philosophy would drift into the West during the Enlightenment, the dark ages that had covered Europe following the fall of Rome finally being overcome. The Antikythera Mechanism stands as a symbol of what was lost with that fall, and equally of what might have been possible had Greece and Rome continued to thrive. The Caliphs of Baghdad knew that these ancient empires had much to tell us, and that remains true even today, with much left undiscovered about the real power and technology of philosophers, thinkers and scholars such as Archimedes, Hipparchus and hundreds more besides.
https://medium.com/the-mystery-box/the-history-behind-the-antikythera-mechanism-4ca6240146d5
['Michael East']
2020-12-07 15:23:57.978000+00:00
['Archaeology', 'History', 'Ancient History', 'Technology', 'Science']
How to increase app rating in AppStore?
How to increase app rating in AppStore? Sayler8182 Photo by Charles Deluvio on Unsplash Many magnificent apps have a problem with very few ratings in the App Store. The main reason is that users have no incentive to rate, or don't even know how to do it. An application can ask users for their opinion about the app, but it should be done at the right moment. No one will rate five stars right after the first launch. To get the best feedback, the user should have used the app for a couple of days and should be able to postpone our question. How to code it? What should the app do? We need to make some assumptions: - save the first launch date - count each app launch - ask for a review under certain conditions - force the app to ask for a review (e.g. a "Rate app" button) Now we can define the interface. Let me show you the complete code implementation; it will let you better understand my explanation. We are using UserDefaults to save data shared between app sessions: - the application's first launch date - the application's launch count - the last asking date — the user can postpone the review Our micro-service is also parameterizable: - the minimum launch count after which the popup can be shown - the minimum time after which the popup can be shown - the minimum interval after which the popup can be shown again after a postpone Available methods: - save the first launch date - increase the launch count - force popup presentation - present the popup only when the conditions are met With the release of iOS 14, changes to this API were introduced. From now on we can show the popup on a chosen scene (relevant mainly on iPad and Mac). We can add support for this to our code:
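The implementation itself did not survive in this text, so the following is only a rough sketch of how such a review-prompting helper could look, assuming Swift and the documented SKStoreReviewController API; the class name, UserDefaults keys and threshold values are invented for illustration.

```swift
// A minimal sketch of the review-prompting micro-service described above.
// All names and thresholds are assumptions, not the author's actual code.
import StoreKit
import UIKit

final class AppReviewService {
    private let defaults = UserDefaults.standard
    private let firstLaunchKey = "review.firstLaunchDate"
    private let launchCountKey = "review.launchCount"
    private let lastAskedKey = "review.lastAskedDate"

    // Parameters from the description above; the values are illustrative.
    private let minimumLaunches = 5
    private let minimumDaysSinceFirstLaunch = 3.0
    private let minimumDaysBetweenPrompts = 30.0

    // Call once per app launch: stores the first launch date and bumps the counter.
    func registerLaunch() {
        if defaults.object(forKey: firstLaunchKey) == nil {
            defaults.set(Date(), forKey: firstLaunchKey)
        }
        defaults.set(defaults.integer(forKey: launchCountKey) + 1, forKey: launchCountKey)
    }

    // Present the popup only when every condition is met.
    func requestReviewIfAppropriate(in scene: UIWindowScene) {
        guard defaults.integer(forKey: launchCountKey) >= minimumLaunches,
              daysSince(firstLaunchKey) >= minimumDaysSinceFirstLaunch,
              daysSince(lastAskedKey) >= minimumDaysBetweenPrompts else { return }
        forceRequestReview(in: scene)
    }

    // "Rate app" button path: skip the conditions entirely.
    func forceRequestReview(in scene: UIWindowScene) {
        defaults.set(Date(), forKey: lastAskedKey)
        if #available(iOS 14.0, *) {
            SKStoreReviewController.requestReview(in: scene) // scene-based variant added in iOS 14
        } else {
            SKStoreReviewController.requestReview()
        }
    }

    private func daysSince(_ key: String) -> Double {
        // No stored date (e.g. never asked before) counts as "long enough ago".
        guard let date = defaults.object(forKey: key) as? Date else { return .greatestFiniteMagnitude }
        return Date().timeIntervalSince(date) / 86_400
    }
}
```

A typical usage would be to call registerLaunch() at startup and requestReviewIfAppropriate(in:) at a calm moment in the flow, such as after the user completes a task.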
https://medium.com/macoclock/how-to-increase-app-rating-in-appstore-9efc1281bef
[]
2020-11-14 06:33:01.948000+00:00
['Programming', 'App Development', 'iOS', 'Mobile App Development', 'Swift']
Why You Shouldn’t Let Go of Your Past
"Let go of the past. Live in the present. Don't worry about the future" is an adage that we personal and spiritual growth coaches like to use as a developmental benchmark. Easier said than done, right? Letting go of our past is probably one of the most difficult things to do. We lived those experiences. They are ingrained in our memories. We can't make them go away or un-happen. Shit happened, so what are we going to do about it? While I totally agree with the importance of letting go of certain emotional aspects of your past, there are good reasons why you shouldn't completely let it go. "The secret of change is to focus all your energy, not on fighting the old, but building the new." — Socrates Your past matters. Sure, it's ugly, riddled with mistakes, regrets, pain, failures, guilt, and shame. It is your story. It is what makes you who you are today. You are the main character in your story. Character development is an essential part of any good story. In fiction writing, character development is the process of building a unique, three-dimensional character with depth, personality, and clear motivations. Character development also refers to the changes a character undergoes over the course of a story as a result of their actions and experiences. You can't develop if you have nothing to change. You grow and evolve chapter after chapter. When you take what you learned from your previous chapters and apply it to your present and future chapters, then the story gets even better. If you erased your past, then you'd be looking back at a blank page rather than a very juicy story, not knowing how you got to where you are or where you're going. Your past is your foundation. It is the record of your life, your rearview mirror, which shows you where you've been and why you ended up where you are today. Sure, we all want a clean slate, but that's something you can start over with today with the added benefit of having a past jam-packed with lessons and experiences to draw upon. "Those who cannot remember the past are condemned to repeat it." — George Santayana I look at my past as my personal guidance system to show me the way to what I want to become. Lord knows I've made plenty of mistakes, and there are a lot of things in my past that I am certainly not proud of, but I had to decide what I was going to do about it. I could either use it to my advantage or let it drag me down. I'm not advising you to hang onto the emotions from the past. That's called "baggage". You can't move forward and evolve if you keep hanging onto those negative emotions and replaying them over and over again. To your mind, it's like you're repeatedly reliving the unfortunate experience. It doesn't know any different. It's only doing what you tell it to do. Your emotions and response to your past are your choices. We are 100% responsible for how we choose to respond, react, feel, and behave about any situation in life, past or present. You can choose to wallow in self-pity, or you can choose to get over it, find the lessons and blessings from it, and enjoy life regardless. We always like to say "if I knew then what I know now…" Well, you do know now, so what are you going to do with that wealth of information and all the stuff that you've gathered for the past however many years? Don't just toss it away because it's not pretty. Forgive it, appreciate it, bless it, and leverage it. Your past has caused you to know what you do want and what you don't want. It is the planting part that has allowed you to grow. 
It protects you and guides you. It’s the study and research material for your life’s exam. Don’t try to shove it aside as if it never happened for fear those skeletons in your closet will come back to haunt you. Look those skeletons straight in the eye sockets and tell them “Ok. You happened. Thank you for all you taught me. I am a better, stronger, more aware person because of you” and lovingly send them on their way. When you honor your personal history, you benefit from the lessons, which shaped who you’ve become. “We’re only as old inside as the wounded child who sustained our oldest hurts — neglect, ridicule, criticism, abuse, etc.,” said Tina Gilbertson, LPC. You can’t heal if you don’t tend to your wounds. When you try to hide from your personal history and insist upon focusing on the present and the future, you can’t help that child heal. This is how emotional maturity is developed. Your entire story, no matter how gruesome it may be, deserves your interest and attention. Don’t live in your past and replay the negative emotions of it. Let go of those. Simply look at it like the goldmine it is rather than something that should be erased or brushed under the rug. You can’t lose it so you might as well use it. Namaste.
https://medium.com/one-minute-life-hacks/why-you-shouldnt-let-go-of-your-past-5c2d3575b85f
['Kelly Lee Reeves']
2020-10-18 18:04:41.750000+00:00
['Personal Development', 'Personal Growth', 'Life Lessons', 'Letting Go', 'Storytelling']
My new ‘claps’ policy
My new ‘claps’ policy I’ll be clapping more than once, after all When Medium first released their ‘claps’ feature, I wasn’t very sure if I’d like it or not. Claps seemed a bit of a vague way to express appreciation for something. And there was always the big question: how many times should I clap? Clapping once Medium, as I quickly found out, weights your claps depending on your average. If you usually clap once while I clap ten times on average, then your claps will have ten times as much weight as mine. That gave me a convenient loophole to get out: I could pretend that claps were still recommends, and clap only once. I wouldn’t have to worry too much about ‘how much’ to clap, that way. And, for the few articles that especially resonated with me, I could simply hold down the button for a little longer. Clapping again That was when I found another way in which claps are useful. The old ‘❤️️ recommend’ buttons, though you may not notice it, were actually used not for one purpose but for two: To recommend an article To acknowledge a comment This could be a carry-over from Facebook, where the convention is to ‘👍 like’ a comment, if you want to acknowledge it but don’t really have anything to say. The problem is that, in Medium, each response is its own story. So, if I recommend a comment to say “thanks”, it gets the same positive response as if I recommend a very deep and inspiring story. I mean, I like your comment and all, but it wasn’t that good either! I’m sure, there are algorithms to handle all that, but it just seems nicer to follow the rule: clap once for comments, clap more for articles. Clapping more It was only much later that I recognised the true purpose of claps. For paying members of Medium, it’s not just a way to share your article, it’s also a way to control where your funds go. Each member gets a fixed budget, which is paid to authors of members-only articles. How much each article gets depends, among other things, on how much the member has ‘👏 applauded’ it. To put it simply: the more you clap for a piece, the larger the share of your money it gets. Of course, it’s not only claps that count. The revenue-giving algorithm takes many other factors (how many people have read the article? how much time have they spent on it)to figure out the overall ‘value’ of a piece. But the claps factor surely helps a lot in deciding. Over time, I’ve begun ‘extra-clapping’ more often. Instead of just one or two claps, I actually hold the button a bit depending on how much I liked the article. If it actually helps, then why not? Clapping better Clapping is not a calculated thing. I don’t decide how to clap; I just go by the feeling. And, I think that’s how claps are meant to be. You don’t sit and decide “okay, I’ll give 23 claps to this person and 6 to that one”. You don’t say how you liked the article. You express it. But don’t stop there. Just because you’ve expressed your feeling doesn’t mean you can say it as well. When claps first came out, I wondered if it would make the number of responses go down. People may feel they’ve expressed themselves enough. So, don’t stop at clapping. Say how you feel as well, by writing a response. Which article will you do that to? Perhaps you could start with the one you’ve just finished reading ;-)
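Purely as an illustration of the proportional weighting described above (Medium's real algorithm is not public and involves more signals than claps alone), a toy sketch:

```python
# Toy model only: a reader's claps normalized by that reader's own average,
# so a habitual 1-clapper and a habitual 10-clapper carry the same weight.
def clap_weight(claps_given: int, readers_average_claps: float) -> float:
    return claps_given / readers_average_claps

print(clap_weight(1, 1.0))    # 1.0 -- a single clap from a one-clap reader
print(clap_weight(10, 10.0))  # 1.0 -- ten claps from a ten-clap reader, same weight
print(clap_weight(30, 10.0))  # 3.0 -- clapping well above your own average
```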
https://medium.com/dayone-a-new-perspective/my-new-claps-policy-24bb03a593b5
['Badri Sunderarajan']
2017-10-12 12:40:53.751000+00:00
['Medium', 'Product', 'Medium Claps', 'Design', 'You Tell Me']
Dates That Sucked
II. The player The flat party was busier than I was expecting. There wasn’t a theme or any dress ups this time — my university housemates loved themes — but there was a rugby game on the television and everyone was loud, yelling, laughing, drunk. He caught my eye across the room. I recognized the pull in my gut and smiled. I could tell he thought I was attractive and suddenly he was attractive too. The dark mop of curls, the casual way he dangled his leg over the arm of the couch. I made my way to the chair next to his. But I was too sober to fall for his drunk compliments. “Show me your room,” he slurred. He leaned in and kissed my neck. I was tipsy enough to let him. “Come on. I’ve never met anyone like you.” I shook my head. “Give me your phone number then,” he looked up at me with soft brown, begging eyes. I knew these college guys: the ones who are too smooth, too charming. “No, I don’t think so.” “Let’s go nightclubbing,” he said. My friends were keen. I gave in. He paid for my drinks, placed his hand on the small of my back, acted like we were on a date instead of a sweaty dance floor. “I have to see you again this week,” he said. “Fine,” I laughed. The ‘date’ ended in a dark street filled with people and a deep thrum of bass from the club. He said goodbye with promises and tender kisses. He pressed his body against mine and held me like he couldn’t stand to let me go. He never called. My housemate discovered later he already had a girlfriend. III. The socialite Was it my birthday? It was definitely a special occasion, maybe an anniversary. He picked a fancy restaurant in our town, not the fanciest one but up there. It was attached to a hotel, which thrilled me. Hotel restaurants always make me feel like I’m on vacation or going somewhere exotic. Vintage mirrors on the cream walls reflected other couples: backs of heads, wine glasses in hands, women with more jewelry than me. We took a seat at one of the tables-for-two in front of the large windows. He browsed the menu, sitting upright, elbows on the table; looking more “business” than usual in an ironed long-sleeved shirt. We ordered — scallops for me, chicken for him — and he spoke with the waiter. He knew him socially. They’d hung out on a beach, played guitar and sung around a fire. Their eyes locked, they mirrored each other’s animated gestures, the conversation buzzed with chemistry. I spun my wedding band. Then the waiter was called away. He swiveled in his chair, scanning the room for someone else to speak to. Anyone except me. That was our last date. IV. The “funny” guy My aunty died, a woman who helped raise me, and I missed the funeral. Too far to go from where I live. There isn’t the time or resources when you’re a solo mum to up and leave whenever you want to. He called for a second date. The first had gone okay and company seemed nice. Having someone to hold, to be held by, seemed nice. He talked about himself and wanted to watch comedy shows. A distraction he said. He recited poetry: “I Fuck Sluts”, and gave me a look. I laughed and cringed. I looked at him sideways. “What are you telling me with that?” He didn’t answer. He couldn’t stay. I knew dating should be better than this but I didn’t want to be alone. Abandoned. The cut of divorce was fresh and the comfort of a man’s arms around me was cool salve on my still bleeding wounds. My body craved it. “Don’t be clingy,” he said. He screwed up his face, looked pissed off. A light came on in my mind. With it a flood of shame. Did I need a date so badly? 
I laughed and pushed him away, “I’m joking. Go!” I meant it. Something shifted under my ribs. He wasn’t who I wanted. This wasn’t what I needed. I waited until he left before I cried.
https://kellyeden.medium.com/dates-that-sucked-46c4b7848f78
['Kelly Eden']
2020-12-27 23:10:56.461000+00:00
['Relationships', 'Love', 'Dating', 'Nonfiction', 'This Is Us']
AI-powered Spell-check and Grammar-check in Business Applications
Wud yu read this artcle if it was ful of speling mistaks? Of course not. Incorrect spellings are not limited to personal life; unfortunately they also exist in business applications. Nowadays most of our writing is done via a word processor like Word or on mobile phones. These already have features built in to highlight spelling mistakes and even correct them using an autocorrect feature. However, in the business world there are many applications, such as master data management, call-center applications, resource planning, customer relations applications etc., which do not have any autocorrection features. This allows spelling mistakes to creep into the system, and that creates many problems while using these applications. Let us look at some of the common problem scenarios in businesses. Incorrect text in master data Master data, such as product master data, customer master data and others, is a key piece of information necessary for the smooth working of business applications. Generally master data has a key, attributes and a description. For example, product master data will have a product identifier, attributes (such as color, weight, size etc.) and a product description. Out of all these fields, the product description is one of the most important, as it is the only field which gives information about what the product really is. Also, it is highlighted on e-commerce websites and is one of the first things seen by prospective buyers. As the product description data originates from humans, it is prone to mistakes. These mistakes occur simply because of human nature, attempts to find abbreviations for long product names, or trying to combine two different words into one. Some real-life examples of such mistakes in product names are PINK NEW BAROQUECANDLESTICK CANDLE, PINK OVAL JEWELLED MIRROR, LUNC BAG RED RETROSPOT, PAPER CHAN KIT 50'S CHRISTMAS, LIGHT GARLAND BUTTERFILES PINK. When such mistakes are highlighted on company websites, they can put clients off buying the products. Wrong spelling, words and grammar in business communications Business communication mainly involves B2C (business to client), B2B (business to business) and internal communications. With the rise of different communication channels such as classic email, web chat, support services, social media etc., the amount of text being communicated has gone up in recent years. This means that the chances of errors in spelling, words as well as grammar have also gone up. Many business applications, such as web support services or B2B portals, might not have a basic orthography or grammar check. Also, there are many cases where automated spell checkers would not be able to catch the errors. Here are some real-life examples of business communications where spell check was either not active or did not work: Let's us meet tomorrow morning an then we can present you our offering We will send you male with detail description on how to solve the issue We will dedicate tile to understand your pan point With the ever increasing workload and stress, not all humans proofread or re-read all communications. Such mistakes do not create a good impression with clients, customers or internal employees, and can sometimes lead to losing business or trust, or to cultural non-acceptance. Just having an end-note of "Please excuse any typos" would not help, as it conveys the message that the person is not very meticulous and is rather sloppy. 
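To make that last point concrete, here is a tiny illustration (the word list is a made-up stand-in for a real dictionary): a plain dictionary lookup passes the "male" example untouched, which is exactly the gap the AI-based approach in the next section tries to close.

```python
# Illustrative only: a dictionary-based check cannot flag "male" in the example
# above, because "male" is a perfectly valid English word used in the wrong context.
vocabulary = {"we", "will", "send", "you", "mail", "male", "with", "detail",
              "description", "on", "how", "to", "solve", "the", "issue"}

sentence = "We will send you male with detail description on how to solve the issue"
flagged = [word for word in sentence.lower().split() if word not in vocabulary]

print(flagged)  # [] -- nothing is flagged, even though "male" should have been "mail"
```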
Unchecked AI-generated text There is a rising number of AI-based applications which generate text, such as automated personalized emails or chatbots. Generally one would expect that spelling or grammar mistakes would not occur in such automated text generation, but there are chances of such errors. The main reason for AI-generated text to go wrong is insufficient training data or an incorrectly parameterised AI architecture. So generally it is a good idea to put an AI-based spell and grammar check on top of AI-generated text. You can think of this as one AI program controlling another AI program. Making machines learn to predict correct spellings and words Can a machine learn how to spell a word correctly within a given context? For example, can it recognize the error in the word male in the sentence: "We will send you male with detail description on how to solve the issue"? The answer is yes. AI can be used to go beyond what existing spell-checkers can provide. It can not only identify spelling mistakes, but also identify correctly spelled words used in the wrong context. At the center of advanced spell-checking or autocorrection lies an algorithm to recognize sequential patterns of characters or words. For example, the word "sequential" is orthographically correct but the word "sequentialy" is misspelled, even though the first ten characters ("sequential") are correct. AI programs are very good at sequential pattern detection and can help differentiate the right pattern ("sequentially") from a wrong pattern ("sequentialy"). The types of AI architecture which work on sequential pattern data are called Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) networks. Let's look at these techniques with the help of an example: identifying the right character to follow a sequence of characters. Given the pattern "sequentiall", the network can identify that the next letter would be "y" and not anything else. A simplified representation of an RNN or LSTM would be an input sequence and an output sequence. For example, we can train the RNN to have "sequentiall" as input and "y" as output. The input is fed into neurons, which are represented by circles. Simplified representation of RNN or LSTM We need to train our network on various sentences and words, for which we need data. One possibility is to take already available correct text, such as from Wikipedia or any other source. Each line in the source is broken up into multiple sequences in order to use it for RNN or LSTM training purposes. Once the RNN or LSTM is trained on sufficient data, it will create a weight matrix for each letter. This weight matrix is a kind of latent representation of each letter. It represents what the artificial intelligence has learned about each letter. The size of the weight matrix is number of characters * number of features. The number of characters is the number of unique characters occurring in the text corpus used for training. The number of features is up to us to select. If, for example, the number of features is chosen to be 128, then each character will be represented by a bunch of 128 weights. This weight matrix can be seen as the "understanding" which the AI developed by looking at the text corpus. Here is the weight matrix created after training an RNN or LSTM on randomly selected Wikipedia data (around 100 GB). The rows represent different characters and the columns represent the 128 weights for each character. The lower the weight, the blacker the color. 
The higher the weight, the bluer the color. AI brain internal representation is a weight matrix The weight matrix is an internal representation of what the AI has learned about each character. This matrix is a bit difficult for humans to understand. However, we can apply some data science techniques to this matrix in order to understand what the AI has learned. One such technique is clustering, which will group together characters which have similar weights. After applying a clustering algorithm (K-Means) to the weight matrix, we see the following clusters. AI learns to differentiate between alphabets, numbers and punctuation These cluster results are very interesting. Just by looking at the corpus text, the AI has learned to differentiate between alphabets, numbers and punctuation. This shows the self-learning power of AI and how it can think like a human just by looking at the text. This "knowledge" of characters, represented by weights, is used in the RNN or LSTM architecture to predict the next character. This process is shown here. RNN or LSTM process For each character, the weight matrix is read and the hidden state corresponding to the neuron is updated. The hidden state of one neuron serves as input to the next neuron. The process continues until the output of the last neuron is used to predict the next letter. The architecture can be used in a similar fashion to predict the next word. For example, the input can be "we will dedicate" and the architecture can predict that the word "time" has a higher probability of occurring next than the word "tile". Integrating business applications with the AI model So once we have our AI trained to predict correct spellings or the next word, there are different ways in which we can leverage it. One of the common ways is to develop an API which wraps the AI logic (also called the model) and exposes it via an API service. Any business application which would like to use a spell check or word check can make a call to the API and see if the predicted word is equal to the word in the business application. For example, the business application can send the word "sequentialy" and the API can send back a response that the word is probably misspelled, as well as the likely correct word "sequentially". Based on the business application, there could be different possible scenarios on how to leverage the AI model. Concluding thoughts Spell-checks, grammar-checks and autocorrection should not be restricted to mobile or productivity applications. It is very important that all these basic sanity checks are also integrated into business applications. Various state-of-the-art AI algorithms and techniques can be used to achieve this and ensure smooth business functioning as well as a good external image. Hope you have enjoyed reading this article.
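None of the scripts or figures from the original post survive in this text, so the sketch below shows one way the character-level next-character model and the K-Means clustering step could be put together, assuming Keras and scikit-learn; the corpus file name, sequence length, layer sizes and epoch count are illustrative, not the author's actual setup.

```python
# Hedged sketch: a character-level "predict the next character" LSTM plus
# K-Means clustering of the learned per-character weights, as described above.
import numpy as np
from sklearn.cluster import KMeans
from tensorflow.keras.layers import Dense, Embedding, LSTM
from tensorflow.keras.models import Sequential

corpus = open("corpus.txt", encoding="utf-8").read()  # any manageable body of correct text
chars = sorted(set(corpus))
char_to_id = {c: i for i, c in enumerate(chars)}

seq_len, n_features = 40, 128
X = np.array([[char_to_id[c] for c in corpus[i:i + seq_len]]
              for i in range(len(corpus) - seq_len)])
y = np.array([char_to_id[corpus[i + seq_len]] for i in range(len(corpus) - seq_len)])

model = Sequential([
    Embedding(input_dim=len(chars), output_dim=n_features),  # the characters x 128 weight matrix
    LSTM(256),
    Dense(len(chars), activation="softmax"),  # probability of each possible next character
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=5)

# Predict the character most likely to follow "sequentiall" (ideally "y").
probe = np.array([[char_to_id[c] for c in "sequentiall"]])
print(chars[int(model.predict(probe).argmax())])

# Group the learned per-character weights, as in the clustering step above.
weights = model.layers[0].get_weights()[0]  # shape: (number of characters, 128)
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(weights)
for cluster_id in range(3):
    print(cluster_id, [chars[i] for i in np.where(clusters == cluster_id)[0]])
```

The Embedding layer's weights play the role of the characters-by-128 matrix discussed above, and clustering them is what would separate letters, digits and punctuation into the groups the article describes.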
https://pranay-dave9.medium.com/ai-powered-spell-check-and-grammar-check-in-business-applications-6dc316224ab0
['Pranay Dave']
2018-08-25 07:27:07.244000+00:00
['Business Apps', 'Business Value', 'Artificial Intelligence', 'Machine Learning']
Ram Dass Inadvertently Declares Autism as Natural Enlightenment
Mental Processing Since many people are looking for a cure for autism, they think it is a misfortune, a blemish, a disease, or a disorder, yet it is none of the above. In a society set up to prioritize intellectual capacity and override bodily wisdom, it's not surprising that sped-up thought processing is also an aspect of autism – being highly sensitive, aka absorbent of the general intellectual-priority orientation of our times. Photographic memories and savant skills are fawned over, but the fast processing required is a general feature of the autistic mind – it's a matter of whether the mind has space to flourish or is bombarded by a sea of strong NO's being ignored or refused. Give the person a sea of full-bodied YES's and watch what they can do. This is true for anyone. Most of us are not trained to follow or even allow full-bodied YES's. And, with the sped-up cognitive processing of the general psychedelic trip participants — aka autistic people (is the argument I'm making) — ultimately the absorbing and reflecting happens at a radically increased rate given compassionate activation versus traumatic stress. If the results within each person are reflective of the person's expectations (set up by early-years conditioning, no doubt) as well as their current environment, then conditioned beliefs and behavioral patterns must be attended to consciously. When not attended to consciously, the person is likely to attract or choose environments that reflect major aspects of the original environment where the conditioning took root. It can be a negative feedback loop where a person can be stuck at this first rung on the ladder of experiences, overwhelmed by hypersensitivity and a hyperactive mind… unavailable to harness it happily.
https://medium.com/psychologically/ram-dass-inadvertently-declares-autism-as-enlightenment-7a9f13dc0b5c
['Kelsey Jean Marie']
2020-11-05 12:10:15.814000+00:00
['Spirituality', 'Self', 'Psychology', 'Mindfulness', 'Autism']
Clinging on the Unknown
Clinging on the Unknown or how to be by yourself, first. ‘You were born with wings, why prefer to crawl through life?’ Habits are what keeps people sane of which 2 are certain in any case: sleeping and waking up. So we structure our lives around our biological needs. Eat, don’t eat, just breathe. Lately, the need of being alone has increased. Aloneness as a habit/ fact of life. How can we ‘give’ when we are alone? To whom? We become self-absorbed, artists of our own egocentrism. No more objectivity, no more science. Just us as our own center of the universe. And everything goes around. I ran away from authority only to be yearning for structure anyway. Like a leaf in the sky, I tremble in my own insecurities colored by the night. Darkness scares me because I cannot see in- the future. The light reminds me of the possibilities of the days wasted. I am in between completely. And the world, the same. We run on experiments, perhaps, and what’s a year’s time frame in the infinity of space? Just a little bit of content.
https://medium.com/one-minute-life-hacks/clinging-on-the-unknown-70030d7b8702
[]
2020-10-09 08:38:52.729000+00:00
['Self-awareness']
Medium Publication Followers
Tips for Increasing Your Medium Publication Followers 1. Mention your publication in your Medium profile. Your Medium profile is one of the best ways to promote your Medium publication. Make sure that you include a URL linking directly to your publication. 2. Pick a well defined niche. Aiming to become your own mini “Medium within Medium” is not a good strategy. Pick a niche. Something that you are passionate about and have the desire to explore. State in your description what types of topics you cover, and which topics you don’t. 3. Market your publication shamelessly. Getting people to actively follow a Medium publication is surprisingly difficult. One of the biggest reasons is because many readers are confused by Medium’s overly complex layout. Readers might think they are following your publication when they follow your profile, but this is not true. Similarly, their is no way to add readers as followers to your publication. This means that even a reader who opts into a writer’s external newsletter or mailing list would have to separately follow a publication. So make sure you ask readers to follow your publication specifically! The Startup does this well: 4. Make sure that your publication is listed in all the essential aggregated lists for Medium publications — Some places you can list your publication are directly on Smedian, on the comments of related niche content (outside of Medium), and on social media pages that help readers become writers for your publication. 5. Include a CTA at the end of your articles. Even if your article is about an unrelated topic, readers are typically curious about the other projects writers are involved with. Here is an example from one of my CTAs: 6. Make sure that you fill out all of the relevant search tags, under your publication settings. These tags play a major role in helping viewers discover your publication in internal Medium search results. If readers can’t find a publication, chances are they won’t follow it. For example, for my publication, Digital Marketing Lab, I utilize the following five search tags: Digital Marketing, Strategic Communication, Social Media Marketing, Email Marketing, and Branding. You want to make sure your publication tags are broad enough that readers will search for them (digital marketing is broad enough) but not too broad where your publication will have to compete against larger publications (marketing may be too competitive of a tag to make it worthwhile for this publication). 7. Do not send too many letters to your publication followers — I have seen many publications do this. Just because you can reach all of your followers with ease is not a reason to send excessive messages. After all, followers can choose to stop receiving your letters (while still remaining followers) or they could choose to unfollow your publication altogether. I would generally suggest not sending more than 1 letter per week.
https://medium.com/blogging-guide/medium-publication-followers-ac40f2064339
['Medium Formatting']
2020-07-10 00:23:52.283000+00:00
['Medium', 'Writing', 'Publication', 'Followers', 'Social Media']
How My Medium Articles Rank #1 on Google — and How Yours Can, Too
So how exactly did I snatch that #1 spot on Google? I don’t know. I’m sorry. There’s no magic bullet and luck certainly played a big role. That said, there are some strategies that will make you more likely to rank high on Google. I’m no SEO expert, but I’ve been studying the topic for a little over a year, and have learned some tricks of the trade to ensure the success of my future posts. Even if you pick just one or two strategies from the following list and add them to your arsenal, it could significantly increase your chances of seeing your search traffic skyrocket. Note: this guide is meant to apply to any platform. Whether you write on Medium or your personal WordPress/Squarespace blog, you’ll be able to use all of these strategies. Let’s dive in. #1 Use Clever Keywords Finding the right keywords is a balancing act. On the one hand, you want a keyword with high volume (lots of searches). But at the same time, you want one that has low competition (so you have a chance of actually getting that number one spot on Google). If we take my Nudge-story as an example, picking ‘nudges’ as my keyword would’ve been too broad; you’d probably get the book with that title by Nobel Laureate Richard H. Thaler — no way I would ever beat that. Making things a bit more specific though — “creative examples of nudges” — works wonders. Why? It’s still good overall search volume, but without the intense competition from Thaler’s bestselling book. Toss in some relevant long-tail keywords and a solid meta description (more on that later), and voilà: the #1 spot on Google. If you’ve read anything about SEO, you probably came across that phrase, ‘long-tail keywords’. It simply means you’re using target keyword phrases that are four to six words long. Doing so is becoming more important than ever: “The shift toward long-tail keywords will be even more essential to SEO success. Why? Because of voice search.” — Jeff Keleher Thanks to voice search, long-tail keywords are the new norm. It’s now smarter than ever to use longer keyword phrases because people are asking full-sentenced questions when they use Siri, Alexa, or Google Home devices. To find the right long-tail keywords, try Answer The Public, an incredible keyword research tool that’s perfect for finding questions your audience is asking and uncovering golden keyword opportunities with ease. You might also want to use a free tool like Google Trends to check how many people are actually searching for these key phrases. #2 Write an Engaging Headline First of all, great headlines increase social media shares. More social media shares mean more attention and, as search engines are becoming smarter and smarter, they are taking into account the total number of social media shares a story receives. Second, interesting and well-crafted headlines affect click-through rates, which eventually becomes a very important aspect of gaining high rankings. If a story gets more click-through’s, it is an indication that users find that piece more interesting than other ranked pages. Simply put, the internet is flooded with content. Over 2 million blog posts are published every single day. You need a headline that catches people’s attention. Here’s a useful Medium story by Shannon Ashley with some great tips on how to write engaging headlines: #3 Optimize Images Google doesn’t just look for images. It looks for images with alt text — the short keywords that describe what the image is about. 
For your Medium stories, simply click on an image to write a brief description that explains what the image is in relation to your article. #4 Share on Social Media Google allocates weight to social media signals in their ranking. One of the best ways to get your posts seen is to share them on social media. Assuming you already have an audience on various platforms, you should be sharing your posts with your existing fans and followers. “Don’t build links. Build relationships.” — Rand Fishkin Facebook or LinkedIn groups and online forums can also be great places to promote your posts. Just be clear on the group’s promotion policies so you don’t get into trouble for self-promotion and spam. #5 Update Frequently Many people, myself included, think of writing online as a set-it-and-forget-it type of thing. You write an article, it either goes viral or flops, and whatever happens next is out of your control. Wrong. Google openly encourages content creators to update their content on a regular basis. The world is constantly changing, and Google wants to present its users with the most up-to-date content possible to reflect this. This means that you’ll have to update your articles frequently. Fortunately, this doesn’t have to be a difficult process. I usually add some new tips, clean out the no-longer-relevant bits, change time-sensitive information, and hit re-publish. Google will then recognize the changes I made. #6 Include a Compelling Meta Description If you don’t provide a meta description, Google will show a random snippet of your article. Usually, this isn’t the kind of message you want to deliver to potential readers. Instead, you should provide Google with a custom phrase that shows up underneath your headline in the results: your meta description. It’s a snippet of up to 156 characters which summarizes a page’s content. It looks like this: Meta description of my Medium article on Snickers’ famous ad campaign. On Medium, you can edit your meta description under More Settings > SEO Title and SEO description. Use your key phrase from step #1 (along with its synonyms) and add a brief summary. If you want to test your meta description and check if the length is right, you can use this Google SERP Snippet Tool to predict how your story will look in Google’s search results. Bonus: Three Quick SEO Wins Article length. Google prioritizes articles with substance. If your story has 250 words, chances are it won’t pop up on Google any time soon. Instead, from experience, I’d recommend shooting for around 1200–2200 words. Google prioritizes articles with substance. If your story has 250 words, chances are it won’t pop up on Google any time soon. Instead, from experience, I’d recommend shooting for around 1200–2200 words. Check the URL Medium automatically generated for your article. If needed, adapt a personalized URL that suits your SEO needs. You can do so under More Settings > Advanced Settings > Customize Story Link. If needed, adapt a personalized URL that suits your SEO needs. You can do so under More Settings > Advanced Settings > Customize Story Link. Canonical links. If you’re importing stories from elsewhere (e.g. your personal website), you need to tell Google that this shouldn’t be seen as duplicate content. Indicate which article should take priority under Settings > Advanced Settings > Canonical Links. Closing Thoughts If you want your blog articles to rank higher on Google, you’ll need to put in some work to make it happen. 
By using the strategies and tactics recommended in this article, you can make some massive SEO gains. However, at the end of the day, the most important part of SEO is to simply create content that is useful and engaging for your target audience — aim for good content over SEO every time.
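If you want to automate the two numeric guidelines above (roughly 1200 to 2200 words of body text and a meta description of at most 156 characters), a small script can flag drafts before you publish. This is only an illustrative Python sketch using the rough thresholds quoted in this article, not official limits from Google or Medium:

# Rough pre-publish sanity checks based on the guidelines in this article.
def check_seo_basics(body_text, meta_description):
    word_count = len(body_text.split())
    if not 1200 <= word_count <= 2200:
        print(f"Body is {word_count} words; this article suggests roughly 1200-2200.")
    if len(meta_description) > 156:
        print(f"Meta description is {len(meta_description)} chars; keep it to 156 or fewer.")
    else:
        print("Meta description length looks fine.")

check_seo_basics("word " * 1500, "A short summary of the post for search results.")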
https://medium.com/the-innovation/how-my-medium-articles-rank-1-on-google-and-how-yours-can-too-d45c22cb663b
['Yannick Bikker']
2020-09-10 19:14:13.048000+00:00
['Writing', 'Blogging', 'Social Media', 'SEO', 'Search']
The Universe Might Be In Ground-Hog Day
Hubble Deep Field — WikiCC
We have all heard about the horrible fate that awaits the Universe, but a new theory may give existence hope of new life. The heat-death of the Universe has been the commonly expected fate since the discovery of Dark Energy in 1998. Since then, science has predicted that the Universe will expand at an ever-increasing rate until all the stars have died and Black Holes have evaporated, leaving a cold, dead sea of fundamental particles across all of existence. Quite a sad death for such a magnificent Universe, although it will take trillions of years. But the famous Roger Penrose (Stephen Hawking's close colleague) may have discovered that the Universe is immortal and will be reborn over and over again. Hoorah! To prove this bizarre theory correct, we need ancient aliens and infinitesimally small ripples in spacetime. But before we dive into that, let's start at the beginning. You may have heard of the Big Bounce, which was a prevalent theory before we discovered Dark Energy. In this theory, gravity eventually pulls the Universe back from its expansion after the Big Bang and crushes it all together to form a new singularity. That singularity then goes on to 'Bounce' back and create a new Big Bang. This cycle continues over and over again, forming a cyclical Universe. Sadly, we know this can't be the case because of Dark Energy, the force causing the acceleration of the expansion of the Universe. But Penrose has come up with a theory that allows for a cyclical Universe without needing a Big Bounce. Penrose's breakthrough came when he realised that the state of the Universe at the beginning of the Big Bang and at its heat-death are functionally the same. At least from the photons' perspective… Let me explain. At the moment of the Big Bang, all the matter-energy of the Universe was skewed towards energy. The vast majority of matter hadn't condensed out of the high-energy soup that was the early Big Bang. Instead, it would have been full of energetic photons of light. The Big Bang is far more complicated than this, but this picture of a bright Big Bang helps explain Penrose's theory easily. During the life of the Universe, the matter-energy balance shifts in favour of matter. Right now we have lots of matter in the form of stars, planets and the objects that inhabit them, but the amount of energy in the Universe has decreased. In the heat-death of the Universe, however, matter gets transformed back into energy as particles decay. So the Universe's matter-energy balance gets skewed back towards energy as everything slowly decays and turns into photons of light. But photons are very special particles. They have no mass, and so they travel at the speed of light. That speed means that they don't experience time or distance (in the direction they travel in). So, from the photons' perspective, they get emitted and appear at their destination instantly. This weird property of photons means that they don't 'know' how big the Universe is; to them, distance has no meaning. A photon flying about at the end of the Universe would think it looked identical to the Big Bang, with lots of other photons of light and no matter, while the Universe appears to be squashed into a distanceless point. This led Penrose to suggest a new model for a cyclic Universe called Conformal Cyclic Cosmology (CCC).
It states that as the Universe undergoes heat-death, the shift of matter-energy towards mostly energy causes a new Big Bang to take place, starting the Universe up once more, and so on and so on. This means that there could be countless Universes that existed before our own, and there will be countless after our own, which is challenging to get your head around. CCC, unlike the Big Bounce, doesn't require the Universe to collapse back on itself to start anew. Instead, it's better to think of it as distance and time becoming meaningless, as there is nothing that can experience them in the distant future, where everything is just photons. Then, from this distanceless, timeless period, a new Big Bang can start, which creates its own distance and time. This theory isn't as mad as you might think. The very complicated maths involved in creating CCC, which is based on Einstein's field equations, all works out perfectly fine. What's more, this theory does answer some really puzzling problems with the Universe, like the reason for Big Bang uniformity and the Black Hole Information Paradox. But there are some predictions of this theory that could give us the telltale signs to prove Penrose correct. There are two, and they are both even weirder than the theory itself! CCC predicts that we should see hot rings in the Cosmic Microwave Background (CMB). This isn't a mark of the previous Universe but a message from an advanced civilisation in the previous Universe: Penrose has shown they could leave us a message in the background noise that is spread across the whole Universe. So have we found such rings? If so, it would show that another Universe existed and that advanced aliens existed! Two birds, one stone.
The Cosmic Microwave Background — ESA
While it's hard to see any in the CMB maps, computers have found what could be these rings hidden in plain sight; they are just very, very faint. These results are contested, but nonetheless the CMB could prove Penrose correct; we need a more accurate map to find these rings, if they do exist. It is worth stating that these rings would be the discovery of the century, possibly even the millennium! The second prediction is based on the recent gravitational-wave results at LIGO, which could also show us some evidence for CCC without the need for aliens. In theory, as the Universe decays towards a 'mostly energy' state, it releases low-level gravitational waves. This is known as Erebon Decay. The equations in CCC predict that some Erebon Decay should have already happened and that we should have some minimal gravitational-wave radiation washing over us from it. LIGO, our best gravitational-wave detector, has picked up noise from both its detectors that appears to correlate. In other words, the noise that they have filtered out might be the small gravitational waves from Erebon Decay. Penrose himself has even published a paper describing how LIGO and other gravitational-wave experiments could prove him right. What's even more incredible, CCC allows gravitational energy from a previous Universe to radiate into the next. This could actually be the force behind Inflation and Dark Energy, the mysterious force that is pushing the Universe to expand faster and faster. Needless to say, while Penrose's theory and equations can explain a plethora of unexplained phenomena, the idea isn't without its critics, but it does seem to answer many of the problems physics throws up. So, will our Universe be born again like the Phoenix from the ashes?
It looks promising, and if it is accurate, we can open up a whole new world of science and peer into a time before the Big Bang. To find the answer, all we need to do is get better CMB and gravitational-wave data, which we are already in the process of doing. The future of CCC looks very promising indeed. It gives me comfort to think that our Universe might be immortal, being born again and again, creating untold Solar Systems, stars and natural wonders, rather than this magnificent entity dying a long, cold death for eternity.
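The claim earlier in this piece that photons experience neither time nor distance follows from the standard special-relativity formulas for time dilation and length contraction (textbook results, not taken from the article itself):

\Delta\tau = \Delta t \, \sqrt{1 - \frac{v^2}{c^2}}, \qquad L = L_0 \, \sqrt{1 - \frac{v^2}{c^2}}

Both factors go to zero as v approaches c, so anything travelling at the speed of light registers no elapsed proper time and no travelled distance, which is exactly the photon's-eye view that makes the heat-death look like the Big Bang.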
https://medium.com/predict/the-universe-might-be-in-ground-hog-day-79dba78715ba
['Will Lockett']
2020-10-09 21:56:01.890000+00:00
['Science', 'Space', 'Technology', 'Physics', 'Data Science']
Enabling Industry 4.0 — Why we invested in Geomiq
In a world where companies are as efficient as their supply chain and every day is key not to fall off the innovation curve, a broken quote-to-order process means that a significant part of the efficiencies of Industry 4.0 are lost on the way. Now, more than ever, is the time for intelligent manufacturing and agile supply chains. But first and foremost, now is the time to fix the quote-to-order process so as to enable a real end-to-end Industry 4.0 revolution. And here’s where Geomiq comes into play: custom on-demand manufacturing of parts in 3-days to fuel Industry 4.0. Innovation has driven immense progress in industrial manufacturing since the 18th century, back when mechanical production was powered by steam. We have gone from mass production using electrical power in the 19th century to automated production powered by electronics and IT systems in the 20th century. And it is now time for technology to take manufacturing to the next level, integrating advanced manufacturing techniques with the internet of things (IoT) to create manufacturing systems that are interconnected and can communicate, analyse, and autonomously use the information they are provided with. The time for the autonomization and digitisation of manufacturing is finally here. Industry 4.0: a new era in industrial production. Source: IBT Industry 4.0. Intelligent production. Autonomous robots that know what they need to do without being told. AI, of course. Smart factories. Machines that can see, think and decide. It certainly sounds very exciting. Very techie, and very futuristic. So much that it even made industrial manufacturing look (very) sexy to the eyes of a VC investor like me. However, to be honest, also a bit too deep-tech for an investor in tech-enabled marketplaces and platforms. The future on the manufacturing floor looks like this. And it’s already here. Source: Forbes But then I met Sam and Will, co-founders of Geomiq, and Industry 4.0 took a whole different meaning. It’s certainly groundbreaking to think of fully automated manufacturing floors run end-to-end by robots with no human intervention. But all the efficiency and productivity gains of all that innovation is certainly hindered if the most basic step of the process — i.e. making the manufacturing order — is broken. The communication step between engineers and manufacturers is broken. Source: Samaipata. CAD and new generation “CAD- like software” has digitalized the design step and advanced manufacturing techniques are powering intelligent manufacturing. But sourcing/procurement is lagging behind, mainly driven by the complexities of working with suppliers (i.e. manufacturers). Companies still face long lead-times (c. 6 weeks per prototype part in the UK) caused by a uselessly complex, long, iterative and unreliable quote-to-order process: it takes a design engineer forever to find the right supplier, then it takes the supplier forever to get back to them with a quote, put some quality issues into the mix and add the typical delays…and you get very long lead times. Now think that if an average assembly has 30 parts and you need to do this for each of them…you get innovation projects paralized for weeks. And in a world where companies are as efficient as their supply chain and every day is key not to fall off the innovation curve, a broken quote-to-order process means that a significant part of the efficiencies of Industry 4.0 are lost on the way. Definitely, now is the time for intelligent manufacturing. 
But first and foremost, now is the time to fix the quote-to-order process so as to enable a real end-to-end Industry 4.0 revolution. And here’s where Geomiq comes into play. Geomiq (stands for Geometry IQ) is a data-driven manufacturing marketplace building the world’s best and most sustainable supply chain by directly connecting design engineers with pre-vetted manufacturers. Engineers upload the CAD file of their design, get a quote within 24h and have the manufactured part with them in as little as 3 days. Custom on-demand manufacturing of prototypes and low-volume production to fuel Industry 4.0. A year and a half old, as of today Geomiq has manufactured more than 500k parts for c. 2k engineers working at industry leaders such as Delphi Technologies and Shadow Robots and is growing at a 60% average MoM. When we invested in Geomiq and issued a Term Sheet two and a half (sleepless) weeks after meeting the team for the first time, the world was a very different place than it is today. But today, in a world hit by Covid-19, our investment thesis holds more than ever. We do not know what the post-Covid-19 world will look like, but we are certain that it will require more innovation, more agile supply chains and more autonomous manufacturing processes. In this new future, Industry 4.0 will become evermore relevant and we will need more founders like Sam and Will developing sustainable tech solutions to enable it. Sam and Will are founders on a mission and they have a secret Some call it founder-market fit. We call it ‘founders with a secret’. We mean industry insiders, who understand their market and customers inside out. And who are obsessed with solving the problem they are after. Founders on a mission. Please meet Sam: mechanical engineer by background and at heart, 12-years design engineer and a force of nature. He was running a €600m marine project in Germany and got frustrated over and over again at how inefficient it was to get prototype parts manufactured. One day he just had enough and left to launch Geomiq. And on his way out, he met Will. Second-time marketplace entrepreneur, sales machine and a people’s person now also turned engineer. Not sure he had ever heard about CNC machining before meeting Sam but he definitely gets people and the frustration they feel when they can’t do their job properly because of inefficiencies in the process. He is on a mission to fix that. And you should listen to him talking about CNC machining now…a natural! What we really see when we look at Sam and Will. Discussion on who’s who is still live. On top of it all, and many other things like a lightspeed execution, Sam and Will are also wonderful humans. Which is key for us if we are bound to work together for years. A very telling story about Sam and Will. About a month and a half ago, when it was already obvious that Covid-19 was a serious threat to people’s lives and to our healthcare system — and Will and Sam were flat out redesigning operations so the team could work from home — I got a call from them. They thought Geomiq could help. They had spent the weekend navigating engineering groups and were organizing a hackathon that night to unite efforts in one direction and make them actionable. Geomiq was at a privileged position to help bring PPE (and potentially a ventilator) to UK hospitals as fast as possible and Sam and Will had decided to put all of Geomiq resources into that. Countless hours followed and Covid 19 Makers was born. 
They have made a lot of progress and are now working with many medical devices providers to accelerate their manufacturing processes. In the past few weeks we have seen the unimaginable become our new reality, our livelihoods striped of from their very essence, and a lot of pain around many of us. But we have also seen the best in people rise up. In a situation when we are all trying to survive, it takes a lot of humanity, sense of responsibility and generosity to put all your business and personal resources to work to try and help the common good. Geomiq is a defensible and scalable business… Network effects are inherent to the model but more than that, Geomiq has built-in SaaS features to make the product more sticky for both engineers and manufacturers. Moreover, it has built a strong community of advocates, both on the supply and the demand side. Very high retention and repetition rates speak for that. Geomiq is also a very capital efficient business as it beautifully combines B2C dynamics in bottom-up acquisition of engineers (lighter B2B sales forces, lower CAC, virality…) with the higher LTV of B2B models (higher AOV, higher retention and engagement…). …in a large growing market… At Samaipata, we spend a lot of time thinking about where the future is going and making sure we invest in businesses that are riding on strong socio-economic and technology trends. We call it ‘why now’ — why is it now the right time to launch this business as opposed to 3–5 years ago. With Industry 4.0 upon us, that was a no-brainer in the case of Geomiq. Moreover, we look for very big and growing markets. And manufacturing is a huge $6.7tn global market. Just prototyping and low-volume production is a €10bn market in the EU alone, being conservative. Think about it: every physical good around you has many parts (even if you don’t see it). Well, each of those parts has been prototyped, several times, in several materials. And this space is growing, constantly, as a more sophisticated consumer demands shorter product life cycles and faster innovation, and new materials are developed, enabling what wasn’t possible before. Take 3D-printing, one of the additive manufacturing techniques Geomiq’s platform users. After the consumer 3D-printing hype (and crush) of 2013–2014, 3D printing is now ripe to revolutionize manufacturing and is expected to grow from $10bn in 2018 to $97bn in 2024, according to ARK Big Ideas 2020 Report. Case study of Terran 1 Rocket built by Relativity Space: 3D printing collapses the time between design and production, shifts power to designers, and creates products with radically new architectures and less waste, at a fraction of the cost of traditional manufacturing. …where a marketplace will be the winning model… At Samaipata, we have a deeply specialized investment thesis and invest in pre-series A marketplaces and digital platforms. So, while looking at Geomiq, we spent a lot of time thinking whether a decentralized on-demand marketplace powered by multiple suppliers (manufacturers) in the back-end was the winning model in this market vs. a single fully vertically integrated manufacturer. We concluded with full conviction that that was the case in prototype and low-volume production as it is a market: very fragmented, with numerous players on both the supply and the demand side. 
There are more than 40k mechanical engineers and 10k advanced manufacturers in the UK only with high heterogenity in demand needs (materials, manufacturing techniques, sizes…) and supply capabilities, which makes it very unlikely for just one manufacturer only to be able to cater all of the demand needs where one fully-vertically integrated manufacturer, regardless of the size and budget, can never compete in price with a network of multiple manufacturers. Let me explain why. In this market, where volumes are low and procurement times irregular (as they are driven by R&D cycles), there are no economies of scale. Price efficiency is driven by capacity — actually, by excess capacity: in an industry where c.80% of the manufacturing costs are fixed costs, a supplier with underutilized capacity will quote for a new job at variable cost (20%)+ little more (% of fixed costs) to try and cover at least some the fixed costs. On the other hand, a manufacturer at full capacity, will quote at 100% costs (variable and fixed) + margin. In a nutshell: the more excess capacity, the more competitive in price a manufacturer will be. Thus, by definition, a marketplace powered by hundreds of manufacturers will always have more potential excess capacity and more room to be more competitive in price — as different manufacturers will have excess capacity at different times — than one single manufacturer by itself (hopefully so for the single manufacturer, as if it always has spare capacity it won’t last long in business). …and that makes the world a little more human. Today’s manufacturing supply chain is socially and environmentally unsustainable. It is opaque, wasteful, resource intensive and heavily reliant on fossil fuels. Geomiq’s platform sits between the designers and the manufacturers and will be a critical part of the new infrastructure needed to build a more sustainable and transparent supply chain. Through better product specification and disciplined material and supplier sourcing, Geomiq’s platform can drive efficiency and productivity gains, reduce waste and promote the use of more sustainable materials. We are proud investors of any business that uses tech to make the world a little more human; and even more so alongside the incredible guys from Eka Ventures, kings and queens of sustainable consumer technology, who are also joining Geomiq on their quest to build the world’s most best and sustainable manufacturing supply chain. Welcome to Samaipata guys, it’s an honour to be your co-pilot…let’s continue breaking speed limits! #letsdothis
https://medium.com/samaipata-ventures/industry-4-0-why-we-invested-in-geomiq-75aaa93f4701
['Carmen Alfonso-Rico']
2020-04-28 07:03:16.318000+00:00
['Manufacturing', 'Industry', 'Startup', 'Venture Capital', 'Tech']
Spring Boot OAuth2 Login With GitHub
Spring Boot Code
To enable Spring Security OAuth 2.0, we need to add the following starter:

compile 'org.springframework.boot:spring-boot-starter-oauth2-client'

or for Maven:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-client</artifactId>
</dependency>

Now we'll need to modify our application.yml:

spring:
  security:
    oauth2:
      client:
        registration:
          github:
            clientId: ${GITHUB_CLIENT_ID}
            clientSecret: ${GITHUB_CLIENT_SECRET}

The GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET are environment variables that hold the values you get back once you register your application on GitHub (the same goes for Google, Facebook, or any other provider). Now let's configure our security:

@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Require authentication for every request and enable OAuth 2.0 login.
        http.authorizeRequests()
            .anyRequest().authenticated()
            .and()
            .oauth2Login();
    }
}

In the above code, we want every request to be authenticated. We add oauth2Login in order to configure authentication support using OAuth 2.0. Now if we try to access localhost:8080 in our browser, we'll be forwarded to the GitHub sign-in page. So what happened here? When a request is made to localhost:8080, Spring Security tries to find an authenticated principal but fails to, so it redirects to: http://localhost:8080/oauth2/authorization/github. Internally, this request is handled by OAuth2AuthorizationRequestRedirectFilter, whose doFilterInternal method matches the /oauth2/authorization/github URI and redirects the request to GitHub's authorization endpoint; the redirect_uri it sends contains the same value we supplied when we registered our application. After we successfully authenticate against GitHub, the user is redirected to login/oauth2/code/github with the authorization code in the request parameters. This is handled by OAuth2LoginAuthenticationFilter, which performs a POST request to the GitHub API to exchange the code for an access token.
https://medium.com/swlh/spring-boot-oauth2-login-with-github-88b178e0c004
['Maroun Maroun']
2020-12-23 07:39:55.938000+00:00
['Java', 'Spring', 'Security', 'Oauth2']
Students Prototype Design for School Security System Using Raspberry Pi
Students from BGA High School Prototype School Security Solution School safety is on everyone’s mind but no one more than students. They are the ones who go to school every day and have to consider what might happen if an intruder enters their campus. This is something Amy Yarbrough, former student at Battle Ground Academy in Franklin, TN, has given a lot of thought to. After writing a paper on the ethics of teachers carrying handguns, Amy brainstormed better ways to make their school safer. BGA has an entrepreneurship class that teaches students about local businesses and allows them to work on starting a business themselves. Initial State CEO Jamie Bailey spoke to the class about his company and technology. This gave Amy an idea: she could create a school security system using Initial State, a Raspberry Pi, and a smart lock. The students had to present ideas of how they could use Initial State in a business and Amy’s Raspberry Pi project idea won. Now she and her team member Evelyn Zhu, current student at Battle Ground Academy, needed to take their project from idea to reality. That is where the Initial State team came in to help. Based on their idea and outline, the team put together a system that would read a finger print scanner connected to a Raspberry Pi, send a mass text alert to students and faculty, use smart lock to secure the room, and live stream updates to a dashboard. Raspberry Pi with fingerprint scanner Here is how the system works. It is built with a fingerprint scanner connected to a Raspberry Pi 3 Model B+ outfitted with a case and touchscreen. The program is started by running a python script in the command line on the Pi. The fingerprint scanner and Raspberry Pi will sit at a teacher’s desk. The program is set to constantly run and the Pi is connected to WiFi. The teacher’s fingerprint is indexed in the system. If an emergency occurs, the teacher can initiate the program by pressing a button on the touch screen. This will turn on the fingerprint scanner and allow them to scan their fingerprint. Once the fingerprint is matched, the prompt asks if the teacher would like to start an emergency alert. The teacher can start an alert or cancel if it is not an emergency. Raspberry Pi running School Security Program Starting an emergency alert will initiate Twilio to send a mass text to all faculty, teachers, and students on the contact list. The text message says, “Code Blue.” This lets everyone know an emergency is happening and to follow the safety procedures. Twilio text message The next step in the process is for the teacher to ensure the door is closed and secure it with a Haven smart lock. The Haven smart lock can be initiated with a FOB if the door is already closed. The lock will ensure an intruder cannot open or break down the door with force. It is outfitted with a SmartThings sensor. This IoT sensor can send data to an Initial State dashboard to show if it is open or closed. This will confirm that the classroom is secure. Haven smart lock with SmartThings sensor attached While all this is happening, data is being sent from the Raspberry Pi to an Initial State dashboard. The Raspberry Pi will send an update showing an emergency alert is active, that the mass text has successfully been sent, and the name of the teacher who started the alert. The SmartThings sensor on the Haven smart lock will show it is activated and secured. Another SmartThings contact sensor on the door will show that it is closed. 
If this school security system were installed in each teacher's room, the dashboard could show updates for every room. Within the dashboard, users can measure the time between an alert being initiated and the doors being secured. This would allow the school to measure response times for each room and gauge the system's effectiveness.
Initial State school security dashboard
Once the emergency has been resolved, the teacher can end the alert on their Raspberry Pi and the program will restart to await a future emergency. Two students in a high school entrepreneurship class were able to envision a system that would keep their school safe. A system like this is fairly inexpensive and easy to implement, and it could even be used as a home security system. It is a viable option for schools that don't have a big budget but need a solution.
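For readers who want a feel for how the pieces fit together, here is a condensed Python sketch of the "Code Blue" flow using the Twilio and Initial State Python libraries. All credentials, phone numbers and bucket names are placeholders, and the fingerprint-scanner and smart-lock handling from the students' build is left out:

# Minimal sketch: send a mass "Code Blue" text via Twilio, then stream the
# alert status to an Initial State dashboard. All identifiers are placeholders.
from twilio.rest import Client
from ISStreamer.Streamer import Streamer

TWILIO_SID = "ACxxxxxxxx"
TWILIO_TOKEN = "your_auth_token"
CONTACTS = ["+15551234567", "+15557654321"]  # faculty and student numbers

def send_code_blue(teacher_name):
    # Mass text to everyone on the contact list.
    client = Client(TWILIO_SID, TWILIO_TOKEN)
    for number in CONTACTS:
        client.messages.create(to=number, from_="+15550000000", body="Code Blue")

    # Stream the alert status to the dashboard.
    streamer = Streamer(bucket_name="School Security", bucket_key="school_security",
                        access_key="ist_placeholder_key")
    streamer.log("Alert Active", True)
    streamer.log("Alert Started By", teacher_name)
    streamer.flush()
    streamer.close()

send_code_blue("A. Yarbrough")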
https://medium.com/initial-state/students-prototype-design-for-an-inexpensive-reliable-school-safety-system-93cbe6ef2053
['Elizabeth Adams']
2019-11-19 22:05:47.340000+00:00
['Raspberry Pi', 'Entrepreneurship', 'DIY', 'Security', 'Education']
[Who allowed you to demo without testing?] - On Potentially Shippable
Teacher Kuma's Software Engineering Classroom. Welcome to the Kingdom of Software Engineering
https://medium.com/kuma%E8%80%81%E5%B8%AB%E7%9A%84%E8%BB%9F%E9%AB%94%E5%B7%A5%E7%A8%8B%E6%95%99%E5%AE%A4/%E8%AA%B0%E5%87%86%E4%BD%A0%E6%B2%92%E9%A9%97%E6%B8%AC%E9%81%8E%E5%B0%B1%E4%BE%86demo%E7%9A%84-%E8%AB%87potentially-shippable-5bff6f5def6e
['Yu-Song Syu']
2019-01-28 08:57:28.095000+00:00
['Scrum', 'Software Engineering', 'Agile', 'Potentially Shippable', 'Testing']
Meta-Programming in Python
Meta-Classes Now that we’ve seen decorators, they are for decorating functions. But there is more to meta-programming than decorators, such as meta-classes. Meta-classes are special types of classes, rather than ordinary classes in Python. Where an ordinary class defines behavior of its own instance, a meta-class defines the behavior of an ordinary class and its instance. A meta-class can add or subtract a method or field to an ordinary class. Python has one special class, the type class, which is by default a meta-class. All custom type classes must inherit from the type class. For instance, if we have class Calc , with three class methods, and we want to provide debug functionality to all the methods in one class then we can use a meta-class for this. class Calc(): def add(self, x, y): return x + y def sub(self, x, y): return x - y def mul(self, x, y): return x * y First, we need to create a meta-class MetaClassDebug , with debug functionality, and make the Calc class inherit from MetaClassDebug . And, when we call any method from the Calc class, it will get invoked with our debug_function . def debug_function(func): def wrapper(*args, **kwargs): print("{0} is called with parameter {1}".format(func.__qualname__, args[1:])) return func(*args, **kwargs) return wrapper def debug_all_methods(cls): for key, val in vars(cls).items(): if callable(val): setattr(cls, key, debug_function(val)) return cls class MetaClassDebug(type): def __new__(cls, clsname, bases, clsdict): obj = super().__new__(cls, clsname, bases, clsdict) obj = debug_all_methods(obj) return obj class Calc(metaclass=MetaClassDebug): def add(self, x, y): return x + y def sub(self, x, y): return x - y def mul(self, x, y): return x * y calc = Calc() print(calc.add(2, 3)) print(calc.sub(2, 3)) print(calc.mul(2, 3)) **************** output **************** Calc.add is called with parameter (2, 3) 5 Calc.sub is called with parameter (2, 3) -1 Calc.mul is called with parameter (2, 3) 6 Bingo! In the above snippet, we created a meta-class MetaClassDebug and wrote a new method which is responsible for creating an instance of class and applies our decorator function debug_function to the object (instance), which will get created for every class that inherits MetaClassDebug . Calc is inherited from MetaClassDebug , hence every method has been decorated by debug_function from debug_all_methods . This way, we can add new behavior to all the methods within a class and also control the instance creation of a class using a meta-class. We can achieve a lot with a meta-class, such as adding a method or field to class, removing a method or field from a class, and many more. I wanted you to have a quick look at meta-programming in Python, so I wasn’t able to cover all the things in this post. I hope that this article has helped you to familiarize yourself with the concept of meta-programming. Criticism is always welcome!
https://medium.com/better-programming/meta-programming-in-python-7fb94c8c7152
['Saurabh Kukade']
2019-10-17 13:22:36.934000+00:00
['Metaprogramming', 'Python Programming', 'Programming', 'Functional Programming', 'Python']
The Augmented Bollinger Bands.
The above curve shows the proportion of values that fall within a given number of standard deviations. For example, the area shaded in red represents values roughly within 1.33 standard deviations of the mean of zero. We know that if data is normally distributed then: About 68% of the data falls within 1 standard deviation of the mean. About 95% of the data falls within 2 standard deviations of the mean. About 99% of the data falls within 3 standard deviations of the mean. In principle this carries over to financial returns data; studies show that financial data is not in fact normally distributed, but for the moment we can assume it is so that we can use such indicators, and this issue does not greatly hinder the method's usefulness. Hence, the Bollinger Bands are simply a combination of a moving average that follows prices and a moving standard-deviation band that moves alongside the price and the moving average.
GBPUSD (in black) with its 20-period Bollinger Bands. (Image by Author)
To calculate the two bands, we use the following relatively simple formulas: Upper Band = MA + k·σ and Lower Band = MA − k·σ, where MA is the moving average, σ is the moving standard deviation, and k is the constant, i.e. the number of standard deviations that we choose to envelop prices with. By default, the indicator calculates a 20-period simple moving average and two standard deviations away from the price, then plots them together to get a better picture of any statistical extremes. This means that at any time we can calculate the mean and standard deviation of the last 20 observations, multiply the standard deviation by the constant, and finally add it to and subtract it from the mean to find the upper and lower bands. The chart below is easy to read: every time the price reaches one of the bands, a contrarian position is most suited, and this is evidenced by the reactions we tend to see when prices hit these extremes. So, whenever the EURUSD reaches the upper band, we can say that statistically it should consolidate, and when it reaches the lower band, we can say that statistically it should bounce.
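As a rough illustration of the calculation just described, here is a short Python sketch using pandas; the closing prices are placeholder values rather than real GBPUSD or EURUSD data:

# 20-period Bollinger Bands on a placeholder closing-price series.
import pandas as pd

close = pd.Series([1.2950, 1.2961, 1.2955, 1.2972, 1.2980] * 10)  # placeholder prices

period, k = 20, 2  # 20-period simple moving average, 2 standard deviations
middle = close.rolling(period).mean()
std = close.rolling(period).std()
upper = middle + k * std
lower = middle - k * std

bands = pd.DataFrame({"close": close, "middle": middle, "upper": upper, "lower": lower})
print(bands.tail())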
https://medium.com/swlh/the-augmented-bollinger-bands-9f632609e9cc
['Sofien Kaabar']
2020-12-26 08:51:40.666000+00:00
['Data Science', 'Machine Learning', 'Artificial Intelligence', 'Trading', 'Finance']
Creating a Twitch Command Script With Streamlabs Chatbot
Appendix B: Nice to Have Here is some neat stuff you could add to your command to make it just a little bit cooler, but they’re by no means necessary to create your commands. Adding a cooldown period You might not want your commands to be available to everyone all the time, even though they’re awesome. You could have a busy chat or someone could be a troll and spam the command all the time. That’s where cooldowns come in. SC has a few handles to add and check for cooldowns on a user or a command. I’m going to show the user-specific cooldown here. It involves two small additions: Add the user cooldown at the end of the Execute(data) method, using data.User to get the user ID of the viewer and specify 30 seconds as the cooldown time: That should put a dent in the number of triggers. Now, at the beginning of the Execute(data) method, in the command check, include an extra check for the user cooldown. If a user calls this command again while still on cooldown, we don’t want to execute our logic. Adding that bit of logic: Now the trolls have been thwarted! Checking whether I can trigger the command shortly after each other yields a promising result. The command does not execute, because I triggered it in the last 30 seconds: I guess he must be speechless after his faulty conclusion last time. Now you know how to add a user cooldown. Adding a cooldown for the command itself has a similar flow, simply exchange the user cooldown methods with the command cooldown methods. Adding UI You may or may not have seen an interface in SC for some commands where the controller can change values used in the script. It looks something like this: The huge added benefit is that things like cooldown time and other values can be changed from outside of the script, without having to touch the script at all. This saves quite a bit of work and makes the script easier to handle for people who aren’t used to scripting. I want this for our script, so let’s dive into it: First off, let’s create the UI_Config.json file in the same folder as our Python file. Get the naming right, create it and open it in your IDE: It’s kinda empty in here… SC has the format and options of the file documented on their GitHub Wiki page. First, we have to choose the name and type of file our values will be dumped in to use in our script. Add the following to the file: Mind the brackets and the quotes. We need two values for our script: the probability in percentages and the cooldown in seconds. Both are numbers, so we’ll need a numberbox for both. Let’s add those and fill in the fields: value: the initial value, label: what appears above the box, tooltip: when you hover over it with your mouse. Save the file, go back to the Scripts section in SC and reload the scripts. When you now click on the Mulder command, it shows our fresh new UI: What’s this!? Your Mulder is evolving! We’re not there yet, friend. Remember the output file we defined? Hit Save Settings in the UI and head back to your IDE. You should see two new files created: So far, so good. We’re interested in the settings.json file, which is the file we defined. If you open it up, you’ll see our defined values and their data: This is automatically generated by SC. Nice, huh? We now want to use these dynamically updated values instead of the hardcoded ones in our file. To this end, we’ll need to import some libraries to help with reading out this settings file. Add the following above the global script information variables: We’ll get to using these next. 
Before we load the file, we need to have something to store these variables in. Let’s create a global settings object to that end: Empty at first, we’ll load the values next. We only want to read these values in once, when the script is (re)loaded. There is no need to read those every time the script executes. Which method runs only once per (re)load? You remembered! It’s the Init() method. Let’s start adding logic to it step-by-step. First, we need to let Python know we are going to change our global settings object: Now Python knows that we mean our global settings object when we use it in the method. We’re going to need to access the settings.json file. As it’s in the same place as the current script we’re in, we can ask Python to get the path to the directory we’re in right now for later use: Just a little longer, you can do it! Alright then! Now, let’s get to the juicy bit: reading the file and storing the contents in our settings object. We’re going to construct the full path to the file with the working directory we defined before and join that with the actual filename. Make sure the filename is exactly the same as the filename defined in UI_Config.json : We have loaded the settings! Great! Now that we have loaded the settings, we can use that object to access the values defined in the UI. Let’s replace the hardcoded values with the dynamic reference to their counterpart (Make sure the spelling matches the spelling in the UI_Config.json file): Test time! Head towards SC, go to the Scripts section and reload the scripts. Calling the command in the console yields the result we had before, but now we can dynamically change our values from outside of our script: Same, but not the same? Bonus point: We can catch a possible error when trying to read our settings file. In our current scenario, a read error would result in the script breaking, but if we catch and handle this error, we’re able to build a fallback. This is the cleaner way to handle this. The following code catches any error, or exception in Python lingo, logs the error for research, and provides fallback values so the script can continue while you calmly investigate what’s happening: Make sure the spelling is correct! And that’s how you incorporate UI!
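Putting the settings-loading steps above together, a condensed sketch of the Init() logic looks like the following. The settings keys and fallback values are illustrative, since they depend on what you defined in UI_Config.json:

# Sketch of (re)load-time settings handling for a Streamlabs Chatbot script.
import os
import json
import codecs

settings = {}

def Init():
    global settings
    path = os.path.join(os.path.dirname(__file__), "settings.json")
    try:
        with codecs.open(path, encoding="utf-8-sig") as settings_file:
            settings = json.load(settings_file)
    except Exception as error:
        # Log `error` with your preferred logging helper, then fall back to safe
        # defaults so the script keeps working while you investigate.
        settings = {"probability": 50, "cooldown": 30}

Init()
print(settings)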
https://medium.com/better-programming/creating-a-twitch-command-script-with-streamlabs-chatbot-step-by-step-a9f8cccd680d
['Nintendo Engineer']
2019-08-29 02:59:51.305000+00:00
['Programming', 'Twitch', 'Commands', 'Python', 'Streaming']
Dependency Injection — What is It, and How to Use It.
What is a Dependency? In software engineering, there is a complex-sounding technique called dependency injection that aims to help organize code modularly by creating objects that depend on other objects. The objects that other objects depend on are called dependencies. The use of dependency injection helps solve the problem known as ‘spaghetti code.’ If you haven’t heard of this term, it refers to software that is ‘held’ together by bad design and architectural planning, in which each object is connected to one another. This makes codebases hard to maintain. To build software that lasts — thoughtful planning and execution are crucial, and dependency injection can help with the process of producing modular code. Using Dependencies in Code Before we dive into injecting dependencies, I will show you a basic example of how to use one. To reiterate, a dependency is an object that other objects depend on in order to operate. In the following code, you will see a Colony class with a queen property, an initializer, and a formColony method. There is also the QueenBee class and the Bee protocol. class Colony { var queen: Bee init() { queen = QueenBee() } func formColony() { queen.startMating() } } class QueenBee: Bee { func startMating() { print("Begin mating flight.") } } protocol Bee { func startMating() } When an instance of Colony is initialized, the queen property is assigned to an instance of the QueenBee class. Note that this queen property can be anything that is of the Bee type. The formColony method calls the queen object's startMating method. As you can see, the QueenBee class conforms to the Bee protocol and will print "Begin mating flight." when the startMating method is called. The dependency in this setup is the QueenBee object inside the Colony initializer. Since Colony directly references QueenBee in the initializer, it is considered tightly coupled with the QueenBee object. This is not good, because now Colony depends on QueenBee to function correctly. The use of dependency injection will help avoid using dependencies as you have seen here. Dependency Injection Using Swift Within the Swift programming language, there are a few different ways to go about dependency injection — initializer injection, property injection, and the lesser-used method injection. Initializer Injection When using initializer injection, you pass the dependency object through to another object via its initializer. The usage of the dependency object (sometimes called a service) is defined within the object it’s being passed to (sometimes called a client) — but the actual creation doesn’t happen until it’s passed through the client’s initializer. To modify the previous code to adapt dependency injection using the initializer injection method: class Colony { var queen: Bee init(queen: Bee) { self.queen = queen } func formColony() { queen.startMating() } } let firstQueen = QueenBee() let firstColony = Colony(queen: firstQueen) firstColony.formColony() Now, Colony doesn't directly reference the QueenBee object in its initializer. Which means the tight coupling problem has been solved, and any object of the Bee type can be used with Colony . The above code will print "Begin mating flight." Note that I mentioned any object that is of the Bee type can be passed into the initializer. This is great because you can exchange the type of bee used as the colony's queen. 
Of course, this wouldn't happen in the real world because bee colonies must have a queen bee - but a good example is changing the type of hive the colony lives in. I've made slight modifications to the code to show this: class Colony { var queen: Bee init(queen: Bee, hiveType: Hive) { self.queen = queen } func formColony() { queen.startMating() } } let firstQueen = QueenBee() let topBar = TopBarHive() let firstColony = Colony(queen: firstQueen, hiveType: topBar) You can now change the type of bee as well as the type of hive used to form this colony. Another way to integrate dependency injection is through the property injection method. Property Injection Property injection is pretty much exactly what it sounds like — you pass the dependency directly through to an object’s property. Here is a modified version of the Colony class: class Colony { var queen: Bee! func formColony() { queen.startMating() } } let firstQueen = QueenBee() let firstColony = Colony() firstColony.queen = firstQueen firstColony.queen.startMating() The queen is being assigned via a property, and all methods of the queen will operate as expected when called as seen. In other words, this will again print “Begin mating flight.” Method Injection A lesser-used way to integrate dependency injection is by using a setter method. Setter methods are custom methods of an object that use a parameter to set a certain property’s value based on what was passed through the parameter. It works somewhat like initializer injection (in that it uses a parameter to give value a property), but you have to call it yourself after the object has been created. class Colony { var queen: Bee! func formColony() { queen.startMating() } func setQueenBee(_ queen: Bee) { self.queen = queen } } let firstQueen = QueenBee() let firstColony = Colony() firstColony.setQueenBee(firstQueen) Here, the setter method is the setQueenBee method within Colony. When an object that conforms to the bee protocol is passed through to that method's parameter, it will set the bee property to the value of the parameter. This is another way of integrating dependency injection, but it's not the most convenient. Conclusion That’s it for dependency injection! It really isn’t as scary as it sounds, and after you try it out for yourself, it will become much more instinctive. It’s a simple technique that helps developers build better software by making it modular, maintainable, and scalable. Thanks for reading! If you’d like more programming content, be sure to check out the rest of the Rusty Nail Software Blog. You can read the original posting here.
https://andrewlundydev.medium.com/dependency-injection-what-is-it-and-how-to-use-it-61ea7b33411
['Andrew Lundy']
2020-12-28 06:09:10.388000+00:00
['Software Engineering', 'Swift Programming', 'App Development', 'Programming']
One in two Pythonistas should learn Golang now
Your average software engineer is still in love with Python. Married, even. But not those at Google, Uber, Dropbox, Soundcloud, Slack, and Medium. The programmers at top corporations have long fallen for the language with the cute mascot. That's not to say that Python is no good. It's great! But whether it's for APIs, web services, or data processing, while most developers are still using Python, top performers are adopting Golang, or Go, more and more. Because it rocks.

Created by pioneers

Go was invented by an all-star trio at Google: Robert Griesemer was one of the heads behind Google's V8 JavaScript engine and a main developer of Sawzall, another language invented at Google. Rob Pike co-developed the Unix environment and co-created the Limbo programming language. With Ken Thompson, the team had the inventor of Unix and the creator of the B language, the predecessor of C, on board.

Google was originally written in Python (yes, Python is still cool), but around 2007, engineers were searching for a better language to perform typical tasks at Google. They were encountering problems like these, according to a talk by Rob Pike in 2012:

Slow builds: Producing new code was taking forever. Sounds familiar to me!

Uncontrolled dependencies: Have you ever tried to install a software package, only to find out that you have to install at least five other dependencies and umpteen sub-dependencies to get it to work? It turns out that even Googlers have that problem.

Each programmer using a different subset of the language: In Python, one developer might use the numpy package, another one prefers scipy, and so on. When the programmers want to blend their code into one package, things get messy.

Poor program understanding: People who say they understand code the minute they read it are lying. At least if it's not a dead-simple "Hello World" program. And the documentation of the code often doesn't help; in most cases it doesn't even exist, or it's badly written.

Duplication of effort: Have you ever copied a piece of code from one part of the program, just to copy it somewhere else? Bad practice. But most programming languages make it easy to do.

Cost of updates: With such a mess as described above, does it really surprise you that updating your software is going to take a lot of time and brainpower? Not cool.

Version skew: With duplicate code floating around the place, engineers might only update one version of the original code snippet and forget about the others.
So you end up with a version that contains both new and old code. Sounds chaotic? It is.

Difficulty of writing automatic tools: It's possible to write programs that write code themselves; in fact, most programs do that at some stage. But with modern programming languages, that is still hard to pull off.

Cross-language builds: You know the problem: Python is great for small-to-medium scripts, C++ is great for elaborate programs, Java is great for web development, Haskell is great for lazy but robust code. The result is that a single program often contains snippets from many different languages. But for compiling, debugging, and the sake of cleanliness, it is much better to write a program in one single language.

So the trio set out to design a language that was clean, simple, and readable. A language that would eliminate, or at least ease, these all-too-common problems in software engineering.

A lean language…

The root of many of these common problems is the complexity of modern languages. Think of Python or C: have you ever tried to read the whole documentation? Good luck with that. In contrast, the greatest feature of Go is its simplicity. That doesn't mean you can't build complicated code with it. But Go is very deliberate about not having features that bring more complexity without solving the problem. For example, Go doesn't have classes like other object-oriented languages. A much-used feature of other languages, classes are great for making one object inherit the properties of another object. The problem is that if you try to change the structure of one object without changing that of the others, you'll break the code. Go has an alternative, the struct, that favors composition over inheritance.

The gopher for the Google App Engine. Image from the Golang website.

Other key features of Go are:

Type safety: In C, you can use pointers to do just about anything, including crashing the program. Go doesn't let you mess around like that.

Readability: Like Python, Go puts readability first. This makes it more beginner-friendly than most languages, and makes code easier to maintain.

Documentation: Junior developers especially find it tedious to write a blurb about their code so that others can use it. With Godoc, this process is much more automated than in most languages, and the developers don't have to waste valuable time writing down what they've been doing.

Orthogonality: This means that if you change one object in your code, no other object will change because of that.
In this sense, a radio is orthogonal because the volume doesn't change if you change the station. Much unlike C, for example, where if you change one thing, then others can depend on it and also change. Go is orthogonal, and that makes things simpler.

Minimality: In Go, there's only one way to write a piece of code. Compare that to Python, where you have zillions of ways to write one thing!

Practicality: Important stuff should be easy to code, even if that means that other things are impossible to do in Go. The logic here is that you want to increase the productivity of a developer by making recurring tasks fast and easy. And if there is a more complex problem, which is a rare occurrence anyway, they can always write that in another language.

All this might sound boring and uncreative. And in a sense that's true: this is no language with funky features that you could use to impress others, no plethora of ways to solve a problem, no freedom without limits. Go is not a language that is there to explore, to do research with. But it's amazing when you're trying to build something that works. When you're on a team with lots of different people from different backgrounds working on the same code. When you're tired of all of the mess that you encounter with other languages.
https://towardsdatascience.com/one-in-two-pythonistas-should-learn-golang-now-ba8dacaf06e8
['Rhea Moutafis']
2020-05-15 13:06:56.504000+00:00
['Python', 'Golang', 'Software Development', 'Towards Data Science', 'Programming Languages']
The Perfectionist’s Impossible Rules Of Writing
The Perfectionist’s Impossible Rules Of Writing Are you a rule follower or do you break the rules with your writing? Photo by Mark Duffel on Unsplash When it comes to writing, I follow the rules, partly thanks to my perfectionist nature. In fact, I’m a ‘rule-follower’ in most of my daily life, but that’s a whole other story. When I see a writer who is a ‘rule breaker’ I usually think one of two things (and it has to be two because two is a perfect, even number): (a) They’re lazy (b) They’re amazing Let me explain. (a) They’re lazy When a writer blatantly ignores the rules of writing and forgets to put in a capital letter or writes could of instead of could have, I instantly assume they’re being lazy. I get it, we’re all busy and sometimes we don’t have time to edit our work. But that’s not good enough. Not for the perfectionist in me, anyway. (b) They’re amazing When a writer does all of the rule-breaking on purpose, I’m in awe of that person. I secretly wish I could be like them but can’t seem to find the key to unlock my ‘inside the box’ thinking. While I’m a fan of predictability in daily life, (routine is my best friend) I love being thrown out of my comfort zone when I read. Whether it’s a writer breaking the fourth wall or creating a new language, I get excited when I read something new or unexpected. But when it comes to writing my own words, I’m predictably a rule follower. Therefore, I’ve created a set of rules so perfect they’re impossible to follow, but the anxious perfectionist in me can’t help but strive to abide by them. The Perfectionist’s Impossible Rules Of Writing Rule #1: You are not permitted to make a mistake Not only do I err on the side of caution, but I try not to err at all. That’s because failure is the worst possible outcome for the perfectionist within me, and: Mistakes = Failure There’s a little voice in my head telling me that mistakes are healthy, failure leads to growth, you need to make mistakes to learn. But then there’s a bigger voice in my head telling me that I’m not allowed to fail. Because failure means I’m not perfect. And if I’m not perfect then I’m not good enough. It’s a terrible thought process, and when I’m in a calm, receptive state of mind I can see how silly it all sounds. But when I’m anxious and have just failed at something, (no matter how big or small) then the voice telling me, “you’re not allowed to fail!” sounds perfectly rational and my reactions usually aren’t exactly perfect. Rule #2: If you do happen to make a mistake, you must anxiously worry about it for at least 3 nights When I submitted my first children’s manuscript to a publisher I felt a sense of relief that I’d finally taken the plunge to do it. But that relief was soon replaced by dread when I realised I’d made not one mistake in the manuscript but several. I wanted to email the publisher and tell them to ignore the mistakes. I didn’t. I contemplated sending a revised draft and pretend that I’d sent them the wrong copy, but that went against all the advice I’d received about submitting your work to publishers. It was supposed to be a polished manuscript, but I’d failed. This led to sleepless nights because night-time seems to be when my brain goes through the worst possible scenarios. It was useless to worry but at the time it’s how I chose to react. The funny thing is the manuscript wasn’t that great even without the errors. It never got published and I wasted three nights worrying about nothing. 
Rule #3: You must rewrite your work a minimum of 3 times The delete key on my phone and laptop gets the most use. That’s because I’m constantly rewriting a sentence or paragraph. I find a better way to say a sentence. Or I just hate the sound of what I’ve written and delete it all and start again. The perfectionist in me tells me nothing is good enough, I need to rewrite until it’s perfect. It’s hard not to listen to that niggling voice, but if I did then I’d never get anything written. Rule #4: Nothing is good enough. You must edit your work meticulously Once I manage to get an article written, and then rewritten, it’s time to start the intense editing process. This usually takes three times as long (there’s that number three again, why can’t it be a perfect number two?) to complete than actually writing an article or manuscript. Removing redundant words, (my overused ones are “that” and “just”) fixing spelling mistakes, (autocorrect doesn’t pick up the difference between, “their,” “they’re,” and “there”) and making sure the piece flows are essential aspects of editing that take a while to do. The editing process is usually a bit easier than the rewriting process, as there are (finally) words on the page. As much as it’s time-consuming, I secretly enjoy the editing process. It makes me feel as though I’m achieving something (perhaps by trying to perfect my work?) Rule #5: Always worry about what other people think I’ve finally published my work and it’s time to sit back and relax. Wrong. Now it’s time for the worry and anxiety to hit their peak. Especially when people give negative feedback or reviews, I fall apart. I need to find a way to put myself together and remind myself I’m not perfect. I never will be. And that’s ok. That’s human. Rewriting the rules I am good enough and I need to keep moving forward. I need to keep writing. Don’t give in to the self-critic. Don’t worry so much about what other people think. Those who love you will love you regardless of whether you’re a good or bad writer. Enjoy the journey and never give up. You are good enough.
https://medium.com/mama-write/the-perfectionists-impossible-rules-of-writing-3a2eb127fd8
['Lana Graham']
2019-11-02 11:22:41.506000+00:00
['Rules', 'Perfectionism', 'Rules For Writing', 'Writing', 'Writing Tips']
Can bad students give good feedback?
A modern approach to teaching and learning is to use peer feedback. The idea is to let students give feedback to each other, both as a way to ensure that students get more feedback and, more importantly, because students learn a lot from giving feedback to their peers (Sadler 2006, Liu 2006). When discussing the concept of peer feedback with teachers and students, the most common response is: What if my students don't understand the material? Will the less well-performing students be able to give feedback? It is natural to think this way, that students who are not high performers academically are not able to provide useful and constructive feedback to their peers. But as Habeshaw (1993) writes, all students "... are in the best position to know what their difficulties are and to judge what kind of feedback is helpful". Studies show that feedback needs to be rational and supported by suggestions (Kim 2005) and that receiving 'justified' comments in feedback improves the performance of students (Gielen 2010).

Teachers love data ❤️

Let's look at some actual data! To try and answer the question, we have dug into our data at Peergrade to see if we can get a clearer picture. By reviewing data from more than 10,000 students across 500 courses, we looked at the correlation between the quality of the feedback a student provides (as evaluated by the receiving peer) and how well they performed on their own assignments. The horizontal axis shows each student's academic performance, which is the average of their score relative to other students for each assignment. The vertical axis shows the average feedback score of the feedback they gave to their peers. When we look at the data, we see that there is a very weak correlation (r = 0.11, p-value = 0.0007) between how good students are at giving feedback and how good their own work in the course is. Surprisingly, students who perform significantly worse than their peers in the assignments are usually able to provide great feedback to others.

All students learn from peer assessment

The important takeaway is that all students are able to learn from peer assessment, and that it is possible to use it as a pedagogical tool even when teaching classes where students are not all on the same level. If you are interested in trying peer assessment for yourself, head over to Peergrade and give it a go; it is totally free for teachers! The content in this post is part of a research paper submitted to the 12th International Conference on E-Learning.
https://medium.com/peergrade-io/can-bad-students-give-good-feedback-eef1534887a2
['David Kofoed Wind']
2019-05-03 09:00:35.728000+00:00
['Feedback', 'Edtech', 'Teaching', 'Science', 'Education']
How to log in Apache Spark
An important part of any application is the underlying log system we incorporate into it. Logs are not only for debugging and traceability, but also for business intelligence. Building a robust logging system within our apps can give us great insights into the business problems we are solving.

Log4j in Apache Spark

Spark uses log4j as the standard library for its own logging. Everything that happens inside Spark gets logged to the shell console and to the configured underlying storage. Spark also provides a template for app writers, so we can use the same log4j libraries to add whatever messages we want to the existing, in-place logging implementation in Spark.

Configuring Log4j

Under the SPARK_HOME/conf folder, there is a log4j.properties.template file which serves as a starting point for our own logging system. Based on this file, we created the log4j.properties file and put it under the same directory. The configuration in log4j.properties boils down to this: we want to hide all the logs Spark generates so we don't have to deal with them in the shell, and redirect them to the file system instead. On the other hand, we want our own logs to be written both to the shell and to a separate file so they don't get mixed up with the ones from Spark. From here, we will point Splunk to the files where our own logs are, which in this particular case is /var/log/sparkU.log. This log4j.properties file is picked up by Spark when the application starts, so we don't have to do anything aside from placing it in the mentioned location.

Writing Our Own Logs

Now that we have configured the components that Spark requires in order to manage our logs, we just need to start writing logs within our apps. In order to show how this is done, let's write a small app that helps us with the demonstration. Our app simply logs a Hello demo message when it starts and an I am done message when it finishes. Running this Spark app will demonstrate that our log system works: we will be able to see the Hello demo and I am done messages being logged in the shell and in the file system, while the Spark logs only go to the file system.

So far, everything seems easy, yet there is a problem we haven't mentioned. The class org.apache.log4j.Logger is not serializable, which implies we cannot use it inside a closure while doing operations on some parts of the Spark API. For example, if we reference the log object inside a closure passed to an RDD operation, this will fail when running on Spark. Spark complains that the log object is not serializable, so it cannot be sent over the network to the Spark workers.

This problem is actually easy to solve. Let's create a class that does something to our data set while doing a lot of logging. Mapper receives an RDD[Int] and returns an RDD[String], and it also logs each value that is being mapped (a sketch of such a class follows at the end of this section). In this case, note how the log object is marked as @transient, which allows the serialization system to ignore it. Now, Mapper is serialized and sent to each worker, but the log object is resolved when it is needed on the worker, solving our problem. Another solution is to wrap the log object in an object construct and use it all over the place. We would rather have the log within the class we are going to use it in, but the alternative is also valid. At this point, our entire app consists of the log4j configuration plus the small Spark job and the Mapper class described above.

Conclusions

Our logs are now being shown in the shell and also stored in their own files. Spark logs are being hidden from the shell and logged into their own file. We also solved the serialization problem that appears when trying to log from different workers.
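To make the @transient idea concrete, here is a minimal sketch of what a Mapper class like the one described above might look like. The original code listings are not reproduced in this copy of the article, so the names and details below are assumptions for illustration, not the author's exact code:

```scala
import org.apache.log4j.Logger
import org.apache.spark.rdd.RDD

// Transforms an RDD[Int] into an RDD[String], logging each value as it is mapped.
// The logger is @transient so it is not serialized along with the class instance;
// being lazy, it is re-created on each worker the first time the closure needs it.
class Mapper(n: Int) extends Serializable {

  @transient lazy val log: Logger = Logger.getLogger(getClass.getName)

  def doSomeMappingOnDataSetAndLogIt(rdd: RDD[Int]): RDD[String] =
    rdd.map { i =>
      log.warn(s"mapping: $i")
      (i + n).toString
    }
}
```

The combination of @transient and lazy is what makes this work: the driver never ships a Logger over the network, and each worker initializes its own instance the first time the mapping closure runs.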
We can now build more robust BI systems based on our own Spark logs, just as we do with the other, non-distributed systems and applications we have today. Business intelligence is a very big deal for us, and the right insights are always nice to have.
https://medium.com/hackernoon/how-to-log-in-apache-spark-f4204fad78a
['Nicolas A Perez']
2017-07-13 17:42:15.222000+00:00
['Spark', 'Big Data', 'Scala']
Thank You To Everyone
January Adeline Dimond kickstarted 2020 with this thought-provoking nine-minute read. The mission, to spark political discourse. February I like articles that challenge conventional views and this one by Emme Beckett certainly dropped a few jaws! “I’m a whore, not a plumber,” states Emme. Love it. March By March, we were all knee-deep in the pandemic and Medium switched to an all-out info war for Covid. There was no end to the relentless tide of Covid articles. Fortunately, Ena Dahl piqued everyone’s interest with her knife fetish. I’m continually in awe of Ena and her empowering displays of sexuality. This article literally cut through the mire. April It was all about the hot sauce in April as TBI embraced our sex writers. There were so many great posts, we couldn’t keep up! From Yael Wolfe getting naked for her art, Emme Beckett pooping her bed, or Demeter deLune wanting sex with another man. For me, the highlight was this beautiful post from Alex Woodroe wanting nothing more than love from her father. Poignant and heartbreaking. May For a while, this story was all anyone saw in the trending section, and rightly so. Phoenix Cocklove’s powerful post about a marriage crumbling broke many hearts. The sadness evoked when she simply states “He used to notice everything” is gut-wrenching. Sadly, this was to be one of Phoenix’s last articles as by August she promptly disappeared. Come back Phoenix, we miss you! June Joe Duncan is a writer I greatly admire. He holds nothing back and refuses to dumb-down for an audience. His work often dives deep into the emotional state of a man where no subject is taboo. But it’s his political discourse that really floats my boat. This barnstorming piece on police brutality was nothing short of epic. (Do check out his comical Guide to Tinder for a lighter side of Joe!) July TBI owes a massive debt to one of our early champions and Editor, Edward Anderson. His written reports on true crimes are a joy to read. Well researched and filled with drama, Ed knows how to weave a tale. Here’s Ed retelling the story of President Harding. Great stuff. August It’s hard to choose between these two political articles. Both were filled with gravitas and explored the nature and response from society. I love articles like this and Pete Ross and Tré Ventour brought the fire. Articles that challenge the status quo and INCITE! In between these two was TBI’s dear friend and all-around lovable rogue, Ryan Fan. He graced us with two stories that brought a lighter, irreverent note and worth another mention with his ugly cars and top writing badges. September Linda Caroll fights back against ageism. This was a spectacular piece of writing that fired shots at the ‘woke’ crowd and people slinging generalizations. Linda is an awesome writer. It’s the way she breaks down the attitudes of the millennial and blindsided them with fierce logic! A definite highlight of the year! October It took me a long time to gather the courage and ask Jessica Wildfire to write for TBI. Like many, I’m simply in awe of her talent and grateful she submitted to the TBI cause! I like to think your humble editor from New Zealand was the inspiration behind this piece. And yes, I gushed for a week or two! November This was a toss-up between Joanna Henderson’s open letter to Trump supporters and Tracy Stengel’s attempts to not get a complex! I love the lighter side to both of these articles. Here’s Tracy trying to convince her mom to read her work. December Oh my god! 
December has awakened so many new and exciting writers for TBI. It’ll be an injustice not to mention a few here. GrayMatter is always a must-read! His mental health articles are filled with emotion and longing for self-discovery. Hogan Torah challenges everything and fits snugly in the TBI family like an impromptu flash mob at your grandad’s funeral. Rozali Telbis and Stephanie Tolk both hit hard with their take on society and finally a mention to long-serving fan favorite, M. C. Frances and MonalisaSmiled.
https://medium.com/the-bad-influence/thank-you-to-everyone-35fc979d02d9
['Reuben Salsa']
2020-12-23 20:07:44.056000+00:00
['Salsa', 'Year In Review', 'Writing', 'Ideas', 'The Bad Influence']
7 Best Rust Programming Courses and Books for Beginners in 2021
7 Best Rust Programming Courses and Books for Beginners in 2021

Want to learn Rust in 2021? Here are the best online courses and books you can read to learn Rust from scratch.

Hello guys, if you are looking to learn the Rust programming language in 2021 and are looking for useful resources like books, tutorials, and online courses, then you have come to the right place. Earlier, I shared the best Golang courses, and in this article, I am going to share some of the best books and courses to learn Rust from scratch in 2021.

Rust is a relatively new (born in 2015) and powerful programming language which combines the power of C++ with the safety of Java and other interpreted languages. When a programming language is designed, it is usually designed either for power, like C/C++, or for safety, like Java and Python, but we didn't have the best of both. There have been many attempts to combine the power of C/C++ with the safety offered by Java, and it looks like only Rust has got that right.

Since its debut in 2015, Rust has gained the attention of the world and the developer community. One significant proof is that it has been voted the most desired programming language for the last four years in the StackOverflow survey. Its popularity is also growing day by day. According to GitHub Octoverse, Rust was the second-fastest-growing language last year, just behind Dart, and it is also rising in Google Trends.

The significant advantage of Rust is the performance it offers, which makes it suitable for system programming. For a long time, the system programming and embedded programming space has been dominated by languages like C/C++. While they provide full control over programs and hardware, they lack memory safety. It is also hard to write concurrent code using C++. Java solves some of C++'s problems with respect to safety and concurrency, but it does so at the expense of performance: it offers safety but needs a bulky runtime called the Java Virtual Machine, or JVM. Because of their significant runtime, languages like Java are not suitable for system programming and never really made an inroad into that space.

Rust seems to offer the middle ground: it provides the blazing-fast speed that was previously only possible with C/C++ code, and it also provides the safety of languages like Java, Haskell, and Python. This is the main reason for Rust's rise in the space of system programming and the Big Data domain. It offers a credible alternative to languages like C/C++, D, and Golang for system programming. If you are looking to learn a new programming language that will improve your overall programming skills and practices in 2021, then the Rust programming language is the best choice.

My Favorite Online Courses to Learn the Rust Programming Language in 2021

When it comes to learning a new programming language, I generally follow my 3-point formula, which starts with an online course and finishes with a personal project. After learning the basics and core parts using an online course, I generally read a book while working on my own project built with the new programming language. If you want to learn Rust, you can also follow this 3-point formula. Anyway, without wasting any more of your time, here is my list of some of the best courses to learn Rust in 2021.

I like the learn-by-doing approach, and that's why, when I saw this course on Udemy, I couldn't resist. This is one of the best online courses to learn Rust in 2021 for beginners.
Created by Lyubomir Gavadinov, this practical Rust programming course will teach you the fundamentals of Rust. The format is a bit different from most other courses. Instead of jumping between unrelated concepts in every video and showing examples that have nothing to do with the real-world use of the language, you will learn entirely through practice.

Here are the key things you will learn in this Rust course:

The fundamentals of the Rust programming language

Low-level memory management

Rust's unique approach to memory safety

How to troubleshoot common compiler errors

You will build real Rust applications, and new concepts are introduced only when they are needed to solve actual problems. For example, you will learn the Rust basics by building a command-line application and then move on to create a complete working HTTP server using the Rust programming language.

Here is the link to join this Rust course — Learn Rust by Building Real Applications
https://medium.com/javarevisited/7-best-rust-programming-courses-and-books-for-beginners-in-2021-2ed2311af46c
[]
2020-12-17 07:59:51.178000+00:00
['Rust', 'Software Development', 'Coding', 'Development', 'Programming']
Intro to Post-Structuralist French Philosophy for Data Scientists (Part I)
Quick Background: Hegel (1770–1831), Marx (1818–1883), Nietzsche (1844–1900) Much of the modern French philosophical thinking we’ll explore has been deeply influenced by the German philosophers G.W.F. Hegel, Friedrich Nietzsche, and Karl Marx . Without briefly touching on some key concepts they introduce, it will be difficult to give context to the later work of Foucault, Deleuze, and others. So in what follows, I’ll try to hash out just what you need to know in order to make sense of things later. Keep in mind that the post-structuralists tend to eschew direct argumentation in favor of developing vast, sweeping philosophical visions. They don’t so much as make arguments as present their readers with alternate possibilities for reality. My apologies if I seem to ramble at times. Hegel claims to have discovered the emergent logic of thought. Concepts paradoxically contain opposing aspects (thesis & antithesis) which are resolved at higher levels (synthesis). Source: Denise Spivey’s Pinterest. Hegel: Conceptual Dynamism and Fuzzy Logic It’s difficult to do justice to Hegel in a few paragraphs, so I’ll just focus on aspects of Hegel that I think are most relevant to data science. To start, Hegel’s ideas suggest conceptual limits on the power of supervised machine learning to account for reality. Hegel was not a fan of binary, either-or logic, the kind embodied in the crisp set theory of traditional statistics used in machine learning. Things could both be and not be at the same time. For example, you are not, strictly speaking, the same person you were yesterday. Billions of cells which make up your body have died and been replaced by new ones. Yet you appear to be the same person and we refer to you using the same name. Child-Travis and adult-Travis are clearly different, yet still the same person. How can we reconcile this fact? Seen this way, Hegel can be thought of as an intellectual forebear of Lofti Zadeh’s Fuzzy Set Theory, which posits crisp set theory as a special case where set membership functions only take on the values 1/0. Hegel’s speculative logic and “possibility theory” (as opposed to probability theory) share conceptual roots in my view. Hegel’s understanding of the logic of scientific thinking is immanent in nature. By this he means that concepts contain within themselves their own negation. On the surface, it sounds paradoxical and in violation of the law of the excluded middle (something is either p or not p), which undergirds modern probability theory, but it shares a likeness with many spiritual concepts, such as samsara in Buddhism. The image above illustrates how concepts evolve over time by integrating these positive and negatively immanent aspects. Synthesis is a creative act resulting from thesis and antithesis “annihilating” one another in an act of creative destruction (thus the samsara reference). We should note that destruction is a necessary step in self-realization. What appears as contradiction can in fact be reconciled in a higher unity if we are willing to go along with it far enough. In mathematics we often find some abstraction to be a mere special case of an even more abstract conception. The dot product is a special case of the more abstract inner product, for instance. Contradiction is not to be avoided, but to be embraced if we are to understand the nature of spirit (Geist) in its absolute form. 
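Since this piece is aimed at data scientists, a tiny Python sketch may help make the crisp-versus-fuzzy contrast concrete. The "adult" example and the particular membership curve below are my own illustrative choices, not anything Hegel or Zadeh specified:

```python
def crisp_is_adult(age: float) -> int:
    """Crisp set membership: you are either in the set or not (1 or 0)."""
    return 1 if age >= 18 else 0

def fuzzy_is_adult(age: float) -> float:
    """Fuzzy membership: degrees of belonging between 0 and 1.

    Below 13 you are clearly not an adult, above 21 you clearly are,
    and in between membership rises gradually.
    """
    if age <= 13:
        return 0.0
    if age >= 21:
        return 1.0
    return (age - 13) / (21 - 13)

for age in (10, 15, 18, 25):
    print(age, crisp_is_adult(age), round(fuzzy_is_adult(age), 2))
```

Crisp logic forces child-Travis and adult-Travis into separate boxes at a single cutoff; the fuzzy membership function lets the same person belong to "adult" to a degree, which is much closer to the both-and flavor of Hegel's dialectic.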
Hegel is perhaps most famous for his claim that there is a logic to history, that it unfolds according to a rational order, of which individual humans — as conscious, rational subjects — also partake. For Hegel, reason and freedom are linked: freedom is precisely the unfolding of reason in this rational order as it strives towards a point of totality or singularity, a point where it comprehends itself as such. What we see as reason is really just the self-expressive movement of reason on this dialectical journey, manifested in human consciousness, human institutions, works of art, and so on. As rational, self-conscious creatures, we can, at best, go along with this dialectical ride, but we cannot escape the “cunning of Reason.” Hegel famously spoke of the “slaughterblock of history,” and saw both good and bad events as necessary realizations of an inexorable march towards self-realization of “absolute spirit.” Reason cannot be tamed. In my view, a major difference between Continental and Analytic schools of philosophy concerns the essence of logic. Is it intrinsically dynamic and free as Hegel contends? Or can it be subdued and formalized, as Frege, Whitehead, and Russell hoped? There is a distinctive Zen-like element to Hegel’s Logic of Science. According to Stephen Houlgate’s The Opening of Hegel’s Logic: From Being to Infinity, Hegel’s exploration into the nature of scientific thinking is based on the presupposition-less observation of thought’s own dynamics. In other words, Hegel aimed to uncover the nature of thought by letting thought move on its own. We should not, as Kant did, assume there are certain a priori categories to which thought must be constrained. In modern form, the transhumanist movement views the human species as a mere step in a larger cosmic unfolding of Absolute Spirit’s self-realization (the so-called Singularity). We should also point out that this idea of the rational unfolding of history is taken up by Marx. Marx’s Communist utopia was the final realization of human history: capitalism was supposed to be just a stop on the way. Interpretivist social science has been shaped by similar ideas from Husserl’s phenomenology, which aimed to study the objects of consciousness precisely as objects of consciousness, rejecting earlier rationalist claims that substances of mind and body could be cleanly separated. Natural scientists steeped in Enlightenment rationalism may thus initially feel uncomfortable with Hegelian dialectical thinking, which avoids the simple, binary oppositions of self-other, inside-outside, good-bad, and male-female. Indeed, clear boundaries between self and other can and do break down at the biological and molecular levels, as those suffering from auto-immune disease can attest to. Self-consciousness and the Struggle for Recognition by Others Hegel is possibly the first philosopher to explicitly grapple with self-consciousness and reflexivity in cognition. For him, recognition by “others” provides grist for self as object. In other words, self-realization — recognizing oneself as a “self” — fundamentally depends on the social recognition of other autonomous objects (persons) who recognize your existence as an individuated person with unique desires and goals. Our personal identities depend on this act of recognition by others. We are not disembodied Cartesian egos: we are interdependent and social creatures embedded in social environments. 
Axel Honneth extends Hegel’s ideas to interpret cries of social injustice by marginalized communities as the struggle for recognition. On this view, oppressed groups are fighting for claims of recognition and social legitimacy. Judith Butler, in her book Undoing Gender, explains the struggle for recognition in a way that highlights Hegel’s commitment to a precursor of fuzzy logic: “To be called a copy, to be called unreal, is one way in which one can be oppressed, but consider that it is more fundamental than that. To be oppressed means that you already exist as a subject of some kind, you are there as the visible and oppressed other for the master subject, as a possible or potential subject, but to be unreal is something else again. To be oppressed you must first become intelligible. To find that you are fundamentally unintelligible (indeed, that the laws of culture and of language find you to be an impossibility) is to find that you have not yet achieved access to the human, to find yourself speaking only and always as if you were human, but with the sense that you are not, to find that your language is hollow, that no recognition is forthcoming because the norms by which recognition takes place are not in your favor.” Seen from the limits of crisp set theory, transgender and other marginalized groups of people appear as contradictions. We are taught that sexual identities must be either-or, 1 or 0. The mere existence of something like a transgender identity threatens to undermine the most basic divisions of reality, leading some to violence and anger at such metaphysical denial. But Hegel would say that by learning to embrace contradictions, we can actually achieve an understanding of something much greater. The social effects of Industrialization deeply influenced Marx. Today we are dealing with the social externalities of Algorithmization. Photo by Museums Victoria on Unsplash Marx: Alienation and Ideological Superstructures A follower of Hegel, Marx of course was famous for his Communist Manifesto, but his contributions to social science run deep, even today. Though many of his historical predictions never came to pass, his critique of the capitalist system and conceptual approach is still highly influential. Case in point, see Shoshana Zuboff’s monumental book Surveillance Capitalism. Borrowing from Hegel, Marx believed there was a rational shape or arc to historical development which would ultimately result at some future point in a Communist utopia, in which the “shackles” of the oppressed proletariat class would be torn off. At the same time, it is often said Marx “turned Hegel on his head” or “inverted” his ideas. For Hegel, the dynamism of thought ultimately accounted for our experience of reality, a philosophical position known as idealism. Hegel famously wrote, “What is rational is real, and what is real is rational.” But in Marx it is the material, economic facts that accounted for reality. So while Hegel was an idealist, Marx was a materialist. In Marxian thinking, facts about the mode of production of material objects determine the structure of society. As just mentioned above, Marx was a materialist in this sense, as he believed the material conditions(i.e., the (economic relations between capitalists and laborers) of a society determined its non-material structure. Marxians refer to the means of production as the base or substructure upon which society is formed. Everything else, including all social norms, morality, law, and culture are part of the superstructure. 
The Capitalist Superstructure Marx was concerned with how the factory system of his time effectively separated the lone worker from the fruits of his labor in exchange for a wage. This observation was important because Marx believed that humans derived meaning and enjoyment from their productive activities. Humans were essentially creatures of labor, capable of building and creating novel objects to meet their needs. Consequently, the dispossession of the fruits of their labor by the capitalist ruling class was deeply troubling for Marx. In short, capitalism alienates people from their work and replaces moral questions of value with dollar signs. Further, the overspecialization inherent to organizations in capitalist systems leads to what the French sociologist Durkheim captured in his concept of anomie. Assembling widgets — or soldering iPhone components — for most of your waking moments is certainly not a life Marx saw as conducive to human flourishing. We can already see some scholars using this image of dispossession to explain how persons are alienated from their personal data in the pursuit of advertising profits by corporations like Google and Facebook. Check out Zuboff’s Surveillance Capitalism if you’re interested in reading more about this kind of thinking and how it applies to the major Behavioral Big Data (BBD) platforms. Nietzsche believed Christian morality was fundamentally backwards: it valorized precisely those things that prevented greatness and creativity in individuals. Photo by Christoph Schmid on Unsplash Nietzsche: Slave Morality and The Will to Power Nietzsche would have been a great digital marketer: he knew how to use shock value in order to get the attention of his readers. In his classic On the Genealogy of Morality, Nietzsche lays out a powerful critique of Western Judeo-Christian values. According to Nietzsche’s psychological analysis, Christian morality is based on a reversion of Roman upper-class values. Oppressed early Christians harnessed their “resentment” to glorify their social and political weakness. Consequently, meekness, equality, and other expressions of a lack of power are held up as moral ideals, when in fact they are merely manifestations of what Nietzsche believes to be a kind of “slave morality” of the Lumpenproletariat. We Westerners have been duped by such slave morality, Nietzsche would say, leading to a kind of moral and spiritual stunting that prevents us from living life in the fullest, most creative, and passionate way. In this sense, there is a clear affinity with Marx’s notion of the ideological superstructure, where bourgeois (capitalist) morality has supplanted our more basic and pre-industrial way of living. Rousseau would agree that we moderns seem to have lost our lust for life. It’s no secret that Nietzsche was an elitist. He espoused a kind of mythology of the unique, creative, and powerful individual embodied by the Greek heroes in Homer’s Odyssey, for instance. In the grand scheme of history, we forget the masses and remember the great movers and shakers of society. We really only care about developing those Black Swans whose lives leave an indelible mark on future generations, for better or worse. Notice the similarity to the ideas of Ayn Rand and her technochauvinist followers. Based on his Genealogy of Morality, it’s clear that Nietzsche believed moral values were not based on any kind of deep, unchanging metaphysical truths, but rather on the interests and values of the ruling classes. Might makes right. 
This idea of moral perspectivism goes back to the era of Socrates, but Nietzsche takes it to its logical extreme. Robert Solomon, in his book Living with Nietzsche, explains that perspectivism is the idea that “all doctrines and opinions are only partial and limited by a particular point of view.” What we know is thus intrinsically limited to our contexts of knowing, perceptual limitations, languages, social upbringing, etc. There’s no God’s eye view, according to Nietzsche. We will see this idea resurrected by Foucault, Derrida, and Latour in later sections. For now, we can already see how such perspectivism would seem to limit claims of ML objectivity, especially when for-profit companies decide which data to collect. Nietzsche would agree that raw data is an oxymoron. At the same time, however, Nietzsche would likely deplore the efforts of social justice warriors for greater equality and claim that these new data representations of persons — whether biased or not — are expressions of a corporate will to power by Facebook and Google. We shouldn’t hold it back. Foucault claims the power of “the gaze” (constant surveillance) derives from its ability to be internalized, thus creating consumerist masses of “docile bodies.“ Photo by Paweł Czerwiński on Unsplash Foucault (1926–1984): Disciplinary Power & The Panopticon, Michel Foucault is perhaps the most important thinker for understanding current issues of social justice in tech. Foucault was really interested in dissecting and illuminating the “invisible” forms of power. He wasn’t so interested in obvious examples of political repression or violence against citizens by the state, but in the way social norms subtly worked to “produce” subjects with easily distinguishable labels. Foucault, like Nietzsche, was worried by the oppression of the unique individual by the conformist masses. In fact, Foucault saw the advance of statistical theory, with it use of theoretical populations, as a case in point of this oppression of the individual and the emergence of what he called biopower. For Foucault, categorical labels (e.g., “insane,” “man,” “woman,” “black”) allowed states to achieve more or less the same degree of conformity as physical violence might, but with a patina of Enlightenment humanism. In modern society, as opposed to Medieval Europe, deviance from norms was not punished through physical violence, but through labels and the institutions associated with them. For Foucault, schools, prisons, and hospitals were designed to engender docile bodies and mold the masses. We can clearly see that Foucault is launching a critique of the modern way of life. Further, we can see influence of Nietzsche in his rejection of slave morality (glorifying the unique, powerful, and passionate individual) and the support for the exercise of our innate will to power, which serves to distinguish us as unique individuals. Foucault most famously illustrated the invisible nature of power through his metaphor of the Panopticon prison. In this wheel-and-spoke-like prison design, a central guard could watch each individual cell, but no prisoner could know he was being watched. Without needing to physically brutalize the prisoners, guards could instead achieve conformity merely through the possibility of the gaze. According to Foucault, the power of the gaze arises through its ability to become internalized by an individual. 
Once internalized, the gaze functions to influence and control the individual in her dispositions to behave in certain ways without any physical intervention. It’s an invisible form of power. Knowledge and Discourse as Expressions of Power Ruha Benjamin’s recent book, Race After Technology, draws deeply from the kinds of ideas Foucault explored. Like Marx before him, Foucault was concerned with describing the “ideological superstructure” associated with various socio-historical periods, from the Renaissance until the modern period. Combined with Freud’s theory of the unconscious, Foucault, in his book The Order of Things, called these superstructures epistemes and set out to uncover the unconscious “rules” determining the form of discourse of a particular period. Epistemes can be thought of as the preconditions for the possibility of various discourses, which prescribed various and often unconscious criteria for knowledge. They set the rules for what counted and what didn’t, what had value and what didn’t. Thomas Kuhn’s idea of scientific paradigms is a great example of this kind of thinking. Outside of a given paradigm, certain questions don’t even make sense to ask. For Foucault, the ability to dictate the epistemes of a period is what really amounts to power: it’s a power to shape the existential narratives of subjects who must operate within the confines of a given discourse. Foucault initially believed discourse was all-prevailing in determining our experience, institutions, and social practices. As Wittgenstein would also argue, we can’t escape from these linguistic and social discourses which shape how we think and behave. From this conclusion we see claims in Science and Technology Studies (STS) such as, there’s no such thing as raw data independent of our linguistic and cognitive apparatus. Lastly, I should also mention that some critics dislike the way in which Foucault removes agency from persons and places them at the mercy of the current discourses in society. Why resist or protest for change if we are powerless to act outside of the prevailing discourse? Deleuze was intrigued by the way in which unitary objects could be divided and distributed, generating new forms and possibilities for being. Photo by Matt Artz on Unsplash Deleuze (1925–1995): Control, Modulation, and Dividuals Enter our last philosopher for this post, Gilles Deleuze. Like Foucault and Nietzsche before him, Deleuze is fascinated by power, creative expression, and perspective. But he updates their work in light of the age of the computer and digital revolutions. Unlike Foucault’s older “disciplinary societies,” Deleuze sees us now living in “societies of control,” which modulate persons through “perpetual training” (e.g., schools) to achieve conformity. The enclosed spaces that Foucault described are now replaced by distributed networks. The mass/individual distinction of societies of discipline no longer holds. Hegelian dialectical thinking appears again as we enter this new technological era, described as an era of the transhuman. Traditional boundaries between what is human and non-human thus become blurred. This is, however, not necessarily a bad thing for Deleuze, as we will see. In the era of big data, individuals are now dividuals, digitally spread through databases and social networks. Power is exercised by breaking down unitary things into distributed forms. Questions of identity are no longer clear once objects have been cut and reduced into their component parts. 
While Foucault’s society of discipline relies on mechanical objects like “levers, pulleys, and clocks” to exercise power, societies of control are more about “energy, entropy and computers.” According to Deleuze, we have entered a new stage of capitalism where services, not products, are the goal. The corporation has replaced the factory. We don’t have persons, but ambiguously “coded figures,” such as “stockholders.” They are faceless and without clear identity. Marketing, not the production of goods, has become the “center or the ‘soul’ of the corporation.” Digital technology splits up individuals and finds new ways of recombining them and extracting value. As marketing theorists Cluley & Brown (2015) explain the shift: “power is exercised by manipulating and extracting value from parts or micro-assemblages.” The Rhizome is a defining metaphor for Deleuze’s thought. It suggests diverse networks of objects can give rise to new, emergent phenomena. Photo by Matt Artz on Unsplash Life is Difference: Deterritorialization and Recombination Deleuze was fascinated with the process of becoming and the related concept of identity. Art and philosophy were special in that they allowed for a ‘deterritoralization’ of ideas into new environments. This deterritorialization process allows us to create new things. Deleuze wants to go beyond the correspondence theory of truth, which drives much of traditional, rationalist scientific thought, to fully detach map from territory. If you’re a data scientist, you can imagine ‘deterritorialization’ as a change of basis operation (matrix multiplication) for a given vector representation (here an idea or concept). In the new ‘basis representation” (context), we may derive new and previously unnoticed insights regarding the concept’s underlying nature. If we ditch the correspondence theory, though, some might worry that language will no longer have anything to “hook on to,” and we would just become depressed nihilists. What’s the point if nothing refers back to anything real? Deleuze turns this conclusion on its head, just as Nietzsche did in his Genealogy of Morality. Instead of giving up on the project of science because language is really just self-referential at its base, Deleuzian thought celebrates the fact there are dividuals that can be freely computed and deleted without any clear semantic connection to real people. By divorcing people from their digital representations, we can play with them in new ways. Just as CTRL-Z gives you the freedom to create things you might otherwise be too afraid to attempt, Deleuze is trying to show how this “detached” aspect of digital reality leads to creativity and new forms of being. Deleuze believed the goal of philosophy was to create new concepts. For him, life is difference. The goal of life is to think differently, to become different and create differences. We should be happy that we can’t fit our experience into the closed and easily bounded structures of Foucault’s society of discipline. Deleuze is clear: This isn’t a failure, but a reason to celebrate and explore the possibilities for invention and creation. The goal of philosophy and art is to create difference rather than agreement and common sense. We should embrace difference as it fuels the process of becoming.
https://towardsdatascience.com/intro-to-post-structuralist-french-philosophy-for-data-scientists-c74019122f17
['Travis Greene']
2020-10-01 14:29:40.324000+00:00
['Social Justice', 'Philosophy', 'Artificial Intelligence', 'Data Science', 'Technology']
How Might Dialects Developed In Captivity Affect Reintroduction Success For Parrots?
The spectrograms were then grouped based on their similarities to determine whether they could be accurately assigned to their source populations based on their acoustic structure. As Ms Martínez suspected, each of the four populations had its own discrete dialect. Further, she found that contact calls produced by the relict flock at El Yunque were radically different from contact calls produced by all the other populations, and was described as a single repeated syllable, confirming anecdotal reports. It was almost as if the last truly wild flock was speaking a different language entirely. This contrasts with the contact calls produced by the other flocks of parrots, which are comprised of least two different syllables. Careful analysis revealed that the El Yunque captive, the Río Abajo captive and the Río Abajo free-flying populations all shared at least two call variants, but even still, each population’s contact calls were nevertheless distinctive. How did these dialects arise? “The reasons why this happened vary between populations”, Ms Martínez said in email. “To start, the first captive population (the one in El Yunque) was founded by vocally naive parrots.” “[T]he founders of the first captive population were brought into captivity as eggs and chicks and had not yet learned the wild dialect of their parents. These captive birds had few opportunities to interact with their wild counterparts because there were very few wild Puerto Rican Parrots left at this time and because the captive facility was not located very close to the few remaining wild parrot roosting and foraging sites”, Ms Martínez said in email. “Since they lacked vocal tutors of their own species, we believe that the first captive parrots may have modeled their vocalizations on Hispaniolan amazons.” Hispaniolan amazon parrots, Amazona ventralis, are a closely-related species (more here) that is comparatively plentiful on their native Caribbean island of Hispaniola, which is split between the nations of Haiti and the Dominican Republic. These parrots were regularly used as foster parents for iguaca chicks in the early days of the conservation program to help boost the population numbers of iguaca and thus, large numbers of them were kept at the captive breeding facility for this purpose. A pair of Hispaniolan parrots (Amazona ventralis) in a cage. (Credit: TJ Lin / CC BY-SA 2.0) “Vocal divergence occurred a second time when the second captive breeding facility was founded in Río Abajo”, Ms Martínez went on in email. Unlike the facility in El Yunque, this population was founded by adult birds so birds in the Río Abajo flock had the opportunity to learn their calls from members of their own species. “However, when populations are separated, cultural traits can undergo a process of cultural drift”, Ms Martínez said in email. Vocal learning, like any learning or copying process, is imperfect so innovations and tiny changes — “errors” — can emerge as one bird learns its calls from another. “This is one way in which dialects emerge in nature and we believe this is what happened in Río Abajo”, Ms Martínez explained in email. “A similar process likely explains the vocal divergence that occurred when parrots were released into the wild in Río Abajo. Cultural drift again causes the vocalizations to change. For the wild birds in Río Abajo, we believe that the vocal divergence was reinforced by the social interactions that occur outside of captivity. 
Wild birds model their vocalizations on other wild birds because these birds need to be able to communicate with each other to find food and mates.” Parrots identify flock members on the basis of their vocalizations, and being a member of a flock comes with a number of other advantages besides finding mates, like evading predators, and working together to find ephemeral food sources. Tragically, the original dialect spoken by the last 50 or so remaining wild iguaca was forever lost when Hurricane Maria roared across Puerto Rico, destroying the last remnants of the precious relict population living in El Yunque. Hurricane Maria was a huge setback for the Puerto Rican Parrot Recovery Program “Losing an entire wild population is an incredible blow to the recovery of any endangered species”, Ms Martínez explained in email. “But that wasn’t even the worst of it.” Not only did Hurricane Maria completely destroy the wild relict population in El Yunque, but it also caused a 40% decline in the free-flying iguaca population in Río Abajo. Hurricane Maria also caused significant damage to both breeding facilities in Río Abajo and El Yunque and damaged another facility in the Maricao Commonwealth Forest where a third captive population was being prepared for reintroduction by the Recovery Program. “We’ve been able to make a comeback in recent years though”, Ms Martínez added in email. “Today, the wild population in Río Abajo surpasses that of its pre-Maria numbers. A group of wild parrots was released in El Yunque at the beginning of 2020. The captive breeding facilities are being remodeled and there are plans to re-initiate the release efforts in Maricao Forest as early as next year.” Are there any plans to teach the captive-bred iguaca their original calls? “Preserving the original calls at this point seems very unlikely”, Ms Martínez replied in email. “The relict dialect went extinct when hurricane Maria destroyed the relict population in El Yunque and there are no members of that population left that produce this dialect.” The original contact calls made by the relict flock of iguaca can only be heard in audio and video recordings. “However, the surviving dialects are still functional to the wild birds as evidenced by how well the wild flock in Río Abajo is doing”, Ms Martínez elaborated in email. “The calls are functional as long as all members of a given population respond to them within the appropriate contexts. That’s why it’s important to give captive birds every opportunity to listen and learn the wild calls of the population that they will be released into.”
https://medium.com/swlh/can-parrots-that-speak-different-dialects-understand-each-other-5867e2cd44ed
['𝐆𝐫𝐫𝐥𝐒𝐜𝐢𝐞𝐧𝐭𝐢𝐬𝐭', 'Scientist']
2020-11-09 20:39:04.773000+00:00
['Captive Breeding', 'Conservation', 'Ornithology', 'Science', 'Language Learning']
Introduction to Istio Service Mesh
Istio Architecture The Istio Service Mesh is divided into two parts: the data plane and the control plane. All the Envoy proxies belong to the data plane, and all communication between microservices in your Kubernetes cluster flows through these proxies; the traffic between Envoy proxies is known as data plane traffic. The control plane components configure how that traffic is routed by the Envoy proxies. In earlier versions the control plane components were deployed as individual pods; as of Istio version 1.6 they are all deployed as a single pod called istiod. Some of the tasks performed by istiod: It converts Istio's YAML configuration files into configuration that the Envoy proxies understand. It propagates these configurations to the Envoy proxies at run time. It also manages and generates TLS certificates to allow mutual TLS connections between Envoy proxies in the data plane. Some Important Features of Istio Service Mesh 1) Traffic Routing Features: a) Request Routing b) Fault Injection c) Traffic Shifting d) Ingress e) Traffic Mirroring 2) Security: a) Certificate Management b) Authorization and Authentication 3) Observability and Visualization of your Service Mesh: a) Kiali Dashboard b) Grafana c) Prometheus
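As a concrete illustration of the traffic-shifting feature listed above, here is a minimal sketch of the kind of Istio YAML that istiod translates into Envoy configuration. The service name reviews and the subsets v1/v2 are placeholders of my own, not taken from the article; the weights split traffic 90/10 between two versions of the same service.

```yaml
# Hypothetical VirtualService: shift 90% of traffic to v1 and 10% to v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

istiod would convert a rule like this into Envoy route configuration and push it to the data plane proxies at run time, which is exactly the first two istiod tasks described above.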
https://pavan1999-kumar.medium.com/introduction-to-istio-service-mesh-2bc68d2ffdac
['Pavan Kumar']
2020-07-25 06:31:08.796000+00:00
['Istio Service Mesh', 'Istio Service Tutorial', 'Kubernetes', 'Istio', 'Envoy Proxy']
Alexander Stans Achilles: Heroism Intersects with Fandom
Strabo describes Alexander as a man who was “fond of Homer” (Strabo 13.1.27). The Macedonian monarch’s most treasured possession was his copy of the Iliad, annotated by Anaxarchus, Aristotle, Callisthenes and Alexander himself (Martin 2012). Allegedly, the two items Alexander always placed under his pillow was a copy of the Iliad and a knife. All of this reinforces the impression that the text continued to influence his thinking. The Conquest Eastward and its Homeric Parallels The Iliad details an East-West clash between a coalition of Greek states and Troy and its allies, which many ancient Greeks believed to be a factual account (Chrissanthos 2008: 79). To Alexander, this text was reportedly “a viaticum of the military art” (Plutarch 8.3). The text was thus a prism through which to view war (Higgins 2010). In the Iliad there are intense descriptions of combat, almost delighting in the slaughter of war. Homeric conflict can bring renown, but may also destroy lives; generate fortunes, but also cause chaos; inflict cruelty, but also create comradeship. At the centre of this maelstrom stands Achilles, who fully embodies the ambivalent nature of war itself. The meaning of the warrior’s life was to rush onward into the maws of danger, earning glory (kleos) (Higgins 2010). In the Iliad, Diomedes gives voice to these ideals when he is urged to turn back (Iliad 5.251–254): Argue me not toward flight, since I have no thought of obeying you. No, for it would be ignoble for me to shrink back in the fighting or to lurk aside, since my fighting strength stays steady forever. In this incident, Diomedes exhibits the noble character that Aristotle argued for (On Rhetoric 2.15.3), because the hero trusts in his strength, even in the face of extreme danger (Martin 2012). This is a crossover point between the Iliad and Aristotle’s thinking. A warrior’s reward for exhibiting these traits was glory and fame, as described by Achilles (Iliad 9.412–416): [I]f I stay here and fight beside the city of the Trojans, my return home is gone, but my fame shall be immortal; but if I return home to the beloved land of my fathers, my distinguished fame is gone, but there will be a long life left for me, and my end in death will not come to me quickly. During his conquest of the Persian Empire (c. 334–328 BCE), Alexander sought to exhibit this forceful spirit. In May 334 BCE, Alexander reportedly reached Ilium, where he sacrificed to Athena and poured a libation for heroes, after which he visited the tomb of Achilles (Plutarch 15.7). This act created a metaphorical link between the Iliad, a story detailing an ultimately successful Greek conquest of a power in the East, and the very campaign he had just launched. “Alexander the Great at the Tomb of Achilles” by Giovanni Paolo Panini. From Wikimedia. At the temple of Athena in Ilium, Alexander would also exchange his own armour for an (allegedly) Trojan-era set of armour, which would henceforth be carried out in front of him prior to battles (Arrian1.11.13). One of the sacred shields of Ilium, believed to have been Achilles’, was used to shield Alexander during his operations in a city of the Malli (326 BCE), during his operations in the Punjab region (Edmunds 1971: 373). The link to the Iliad wasn’t only a fleeting romanticism, but there was a genuine desire to bring icons of the Homeric stories into combat, to absorb their spirit. 
In late 326 BCE, at the Hydaspes River, there would occur the first pushback against Alexander’s indefatigable drive, when his soldiers refused to go any further (Martin 2012). Alexander wanted to go beyond his heroes’ conquests, but to his men it looked increasingly like endless labours. During this standoff, Alexander reportedly addressed his army with the following words (Arrian 5.26.4–5): Exertions and dangers are the price of deeds of prowess, and it is sweet for men to live bravely, and die leaving behind them immortal renown. Or do you not know that it was not by remaining in Tiryns or in Argos or even in the Peloponnese or Thebes that our ancestor attained such renown that from a man he became, or was held, a god? If this account is to be believed, then the Macedonian king reveals his own drive — to exhibit noble virtues linked with the divine — to his troops, in the hope it would provide inspiration for them. Similar to the heroes of the Iliad (Iliad 5.251–254), Alexander did not want to turn back from a challenge (in this case everything that lay beyond the Hydaspes River); instead, he wanted to test his virtue against unknown opponents (Martin 2012). The Homeric ideal of continually striving for the best permeated through Alexander, but the soldier’s desire to see their homes won out, marking the Hydaspes as the furthest extent of Alexander’s empire. The Effect of Fandom and Heroes Fandoms have become a major feature of life in the 20th and 21st centuries. Fan, short for fanatic, is derived from the Latin term fanaticus, which could mean “belonging to the temple, a temple servant, a devotee” (Schulman 2019). A devotion to a variety of topics is possible, from fictional universes to celebrities, and the intensity of the dedication to it can differ from person to person. In the 21st century, ardent fans would become known as “stans” (derived from a 2000 Eminem track, where he raps about a fictional stalker fan) (Schulman 2019).
https://medium.com/history-of-yesterday/alexander-stans-achilles-heroism-intersects-with-fandom-b464b878dfd5
['C.S. Voll']
2020-06-15 19:16:01.064000+00:00
['Media', 'Culture', 'Philosophy', 'History', 'Psychology']
The Social Cost of Scaling Software — and Living With It.
“A system of cells interlinked within cells interlinked within cells interlinked within one stem, dreadfully distinct…” —Baseline test, Blade Runner 2049 (2017) What’s the problem? If you work within a fairly large software development organization (say, >150), your organisational structure might have different supergroups, each composed of multiple teams. It might look like this: I blame Spotify. Each supergroup would have a different mission and priorities, which branch off to multiple individual teams, each with a different mission and priority, composed of individual developers with their own internal rubric of missions and priorities — You get the idea. If you’re particularly lucky (or unlucky), the group might be developing to a single product — and in some cases, a single stack. This is hard, for reasons that are very fundamentally human. I’m going to spend some time explaining why. A Thought Experiment. Let’s say you (Developer Red) joined a team at the same time as another developer (Developer Blue), so you can work on the same things for say…5 years. You shared the same incidents, ran through the same sprints, did the same kickoffs, had the same leaders. I would submit that at the end of that period, you and Developer Blue would develop similar priorities and sensibilities over time. Not completely of course, but there would be things that you both would hold as important. This sounds glib, but let’s say we flip to the opposite. We made a copy of you — with everything that you are, everything you hate, and everything you hold important. But we split you off to two different teams for the same time period. Off-screen: John Locke and Immanuel Kant screaming at my explanation. One version of you went through a team that had a lot of crunch, had leadership that indexed towards customer value as important. The other had a bit of flex and time to index for quality. Different projects, different leaders, different teams. I would submit that at the end of that process, you would end up with two very different developers. I guess what I’m trying to say is, our sensibilities and the things that we hold important are largely (and in some thought circles, completely) defined by our immediate environment, the constraints that we live in, and the people who are around it. Why this makes things hard. Unfortunately, a very fundamental aspect of software delivery is being in discussions with other people who have those different fundamental priorities and sensibilities. 90% of the job. Remember: each tribe would have their own priority, their own mission — so these conversations would be happening all the time. The problem being human makes it vulnerable to human constraints. For some people, it takes a large amount of effort to empathise, reconcile, and understand another viewpoint. Whether it’s due to an avoidance of conflict, or the expense of emotional labor, people will end up creating groups who hold the same sensibilities. We like talking to people who aren’t a lot of work. “They’re so rooted in their opinion, I just can’t deal.” And if we’re not careful, we might accidentally end up with hard-drawn bifurcations of opinion (and perhaps even social groups). And if we’re really not careful, the social partitions that we have end up accidentally representing themselves in the software that we create. How many times have we seen different organisational groups with fundamentally different stacks or practices? Conway’s Law basically, explained in a roundabout way. 
I use the word accidentally deliberately; there is a choice that software development organisations must consciously make when they get to a size that enables the above social problems. We see this all the time in different frames (Standards vs. Autonomy, Consistency vs. Enablement, Leverage vs. Independence) — which is essentially a question of: How much do we have to be aligned? I don’t think there’s a “right” choice here — only a set of tradeoffs that would apply themselves differently in different organisational contexts. However, it is important that an active choice is made as it implies an understanding of the tradeoffs. After all, if we don’t make a decision, then a decision will be made for us by circumstance. A caveat. Briefly: This is just one of the possible failure modes of large software organisations — one that I care about deeply. There’s a lot more, covered much more in depth, but we’re focusing on just this for now.
https://medium.com/swlh/the-social-cost-of-scaling-software-and-living-with-it-b588cd103b97
['John Contad']
2020-12-15 10:59:57.930000+00:00
['Management', 'Leadership', 'Engineering', 'Software Development']
Why Are There No Volcanoes in South Asia?
Why Are There No Volcanoes in South Asia? Why are earthquakes prevalent in south Asia, but almost no volcanoes? 18 earthquakes have hit Delhi NCR (National Capital Region) of India in the three months preceding today (9th of July, 2020). Few more have hit the states of Gujarat in the west and Mizoram in the east in the same duration, in addition to the two Cyclones which struck India and Bangladesh. Over the millenniums, the Indian subcontinent has witnessed a whole barrage of natural disasters except one- volcanic eruption. It is not a mere coincidence that the subcontinent, despite being categorically rich and diverse in the physiography, from the frosty Himalayas in the north to lush green Nilgiris in the south, there is absolutely no volcanic activity. except for Barren Island in the Andamans, India. South Asia is defined as the geographical area encompassing Afghanistan, Bangladesh, Bhutan, India, Maldives, Nepal, Pakistan, and Sri Lanka. The union territory of Ladakh is the frontier of Indian Himalayas in the north and boasts of the highest motorable road in the world | ‘Leh’ ~ Image by Anuj Bansal on Unsplash Are we floating over magma? We need to understand a bit of geography to understand the reason behind the frequent earthquakes but the absolute absence of volcanism in south Asia. The earth can be portrayed as a thin crust floating over a solid mantle, which encapsulates a core of iron and nickel. As per the most widely accepted theory of Plate Tectonics, the present-day continental landmasses, as well as the oceans, are in fact ‘plates’ (portions of crust) floating over asthenosphere (the molten part of the upper mantle which contains the ‘magma chamber’). These plates are either converging towards or diverging from or sliding past each other. The plates can either be Continental or Oceanic, depending on the feature above the crust. For instance, the largest ocean of the world is situated over an oceanic plate- the Pacific plate. Present-day South Asia, including India, is a part of the continental Indo- Australian plate. However, the physical geography of the world was quite different 225 million years ago, when India was a large landmass (synonymous with the present-day peninsular area of India), floating off the Australian coast, separated from the Eurasian mainland by a body of water- ‘Tethys Sea’. About 200 million years ago, the landmass started its 6,000-kilometre journey northward, towards a major plate- the Eurasian plate. About 40 million years ago, the denser Indian landmass collided with and plunged below the Eurasian plate, and the resultant compression, metamorphosis and folding of sediments led to the formation of the Himalayas. The Himalayas are one of the youngest mountain ranges of the world and bearer of the highest point on earth ~ Mount Everest | Image by Martin Jernberg on Unsplash The Indian and Eurasian plates are still converging at the rate of about 5 cm every year and consequently, the Himalayas are still rising by a few cm every year as evidenced from satellite mounted high precision atomic clocks and desiccation of lakes in Tibet. Desiccation: Whenever a region is uplifted, the lakes in the region lose water and level of granular terrace changes. The Ring of Fire Across the world, almost all the earthquakes and volcanoes are located in the zones where an oceanic and a continental plate collide and converge. 
In such an event, the denser oceanic plate subducts 5 to 30 kilometres below the lighter continental plate, and the magma makes its way upward through the weak subduction zone that forms. For illustration, the Pacific plate, upon converging with the North American plate in the east and the Eurasian plate in the west, forms a 40,000 km long horseshoe-shaped ring around the Pacific: the Ring of Fire, which houses 75% of the world's volcanoes and 90% of its earthquakes. The regions marked in red constitute the Pacific Ring of Fire, which houses the most violent, large and active volcanoes and earthquake-prone zones of the Earth | Illustration by Digitally Learn The classic case of Himalayan convergence In the case of the formation of present-day South Asia, the collision involved two continental plates and not an oceanic plate. In such cases, folding and faulting still build up seismic strain, which leads to frequent earthquakes, but the zone of subduction lies as deep as 50 to 70 kilometres. The magma from the asthenosphere cannot penetrate such a thick continental crust, so it stays in the crust, as a result of which there are no volcanoes in India or anywhere else in South Asia.
https://medium.com/environmental-intelligence/why-are-there-no-volcanoes-in-south-asia-6cf0ec8b3d8f
[]
2020-07-10 01:15:54.647000+00:00
['India', 'Outdoors', 'Geography', 'Earth', 'Science']
Attention Please: Document Classification
So, how does attention work? It’s one thing to look at a given sentence and say which words are important. However, this model is obviously useless if it’s not generalizable, so it needs to somehow learn the properties of words, as well as how these properties interact and which interactions result in significance. Mathematical Depiction of Described Steps Step One: Represent each word in the vocabulary as an embedding vector of N dimensions. This is a super common approach in NLP, more information here. Step Two: Send each sentence of embedding vectors through a GRU. The GRU is going to have a hidden state in between each word. Typically for prediction, we only care about the final state, but for this model, we want to keep track of each intermediate state as well. Let hᵢ be the vector that represents the hidden state after word i. Note that, while likely beneficial, it is not strictly necessary to do so; I followed the paper in using a bidirectional GRU. This means that the model runs through the sentence forward and backward. Each word i then has hidden states hᵢᶠ and hᵢᵇ, and we simply concatenate these two vectors into hᵢ and proceed. Step Three: Feed each of the hᵢ through a fully connected linear layer, including a bias term. The paper recommends that the output size have dimension 100; I have not yet explored the efficacy of tweaking this hyper-parameter, although I think that could be an interesting research area. For each element in the resulting vector, take the tanh. Call this new vector uᵢ, again corresponding to word i. Step Four: Send each of the uᵢ through another linear layer, this time without a bias term. This linear layer should have a scalar output, so now we have a single scalar value associated with each word i. Then apply the softmax function for each sentence; the scalars will sum up to one for each sentence. Let the scalar for word i be called αᵢ. Step Five: We’re almost at prediction time. We now have, for each word i in a given sentence, a vector hᵢ and an importance scalar αᵢ. It’s crucial to understand here that these hᵢ vectors are different from the original word embeddings, as they have memory of the sentence in both the forward and backward directions. We take the weighted sum of the hᵢ, each scaled by its αᵢ, over all words in the review; call this the review vector s. Step Six: The function applied to s differs based on the objective of the model, but because this model is interested in binary document classification, I applied a final linear layer to the vector s, which returns a single value p, the probability of belonging to class 1. These steps are summarized compactly below.
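A compact way to write Steps Two through Six, using notation of my own (W and b are the parameters of the first linear layer, u_w is the weight vector of the second, bias-free layer, and σ is a sigmoid, which is one way to turn the final score into a probability; the article leaves that last mapping implicit):

```latex
\begin{align*}
h_i &= [\,\overrightarrow{h_i}\,;\,\overleftarrow{h_i}\,]
      && \text{Step 2: concatenated BiGRU states for word } i\\
u_i &= \tanh(W h_i + b)
      && \text{Step 3: linear layer with bias, then } \tanh\\
\alpha_i &= \frac{\exp(u_w^{\top} u_i)}{\sum_j \exp(u_w^{\top} u_j)}
      && \text{Step 4: scalar score, softmax over the words}\\
s &= \textstyle\sum_i \alpha_i h_i
      && \text{Step 5: attention-weighted review vector}\\
p &= \sigma(w_c^{\top} s + b_c)
      && \text{Step 6: probability of class 1}
\end{align*}
```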
https://medium.com/towards-artificial-intelligence/attention-please-document-classification-7be927e758a
['Jon-Ross Presta']
2020-06-11 17:22:54.256000+00:00
['NLP', 'Artificial Intelligence', 'Data Science', 'Featured', 'Machine Learning']
10 Pitfalls In Reactive Programming
10 Pitfalls In Reactive Programming with Pivotal’s Reactor I’ve been doing Scala projects with Akka Streams for quite a few years now and I have a reasonably good feel for things to watch out for. At my current project we are doing Java and we are using a different implementation of the Reactive Streams Specification: Reactor. While learning the library I stumbled upon many common mistakes and bad practices which I’ll be listing here. Credits to Enric Sala for pointing out these bad practices. Reactive Streams Firstly, let’s have a look at the Reactive Streams Specification and see how Reactor maps to that. The spec is pretty straightforward. There’s a Publisher that is a potential source of data. One can subscribe to a Publisher with a Subscriber. One passes a Subscription to a Subscriber. The Subscription is used to demand elements from the Publisher. This is the core principle of Reactive Streams: the demand controls whether data can flow through. With Reactor there are roughly two basic types that you are dealing with: Mono, a Publisher containing 0 or 1 element, and Flux, a Publisher containing 0..N elements. There’s a type called CoreSubscriber which implements the Subscriber interface, but this is more like an internal API. As a user of the library you don’t really have to use it directly. One can “subscribe” to a Mono or Flux in a blocking way by using one of the block method variants. One can also use the subscribe method to, for instance, register a lambda. This will return a Disposable type which can be used to cancel the subscription. 10 pitfalls Alright, enough theory. Let’s dive into some code. Below I’ll list 10 potentially problematic code snippets. Some will be plain wrong, others are more like a bad practice or a smell. Can you spot them? #1: Whoop Whoop Reactive! Let’s start simple and try to use a Mono type. So what’s going on here? In our problem method we are calling an update method which returns a Mono<Void>. It’s a void, because we don’t really care about the result, so what could be wrong here? Well, the update method actually won’t be executed at all. Remember that the demand determines whether data can flow through, and that the demand is controlled by the subscription? In this snippet we didn’t subscribe to the Mono at all, hence it won’t be executed. The fix is pretty simple. We just have to use a terminal operation, like one of the block or subscribe variants. Alternatively, we could propagate the Mono to the caller of the problem method. #2: Reactive + Reactive = Reactive Now we know how to deal with reactive methods, let’s try to compose them. First we are calling create and then we use doOnNext to make a call to the update method. The then() call ensures we are returning a Mono<Void> type. Should be fine, right? It might surprise you that in this case, too, the update method won’t be executed. Using doOnNext or any of the doOn* methods does NOT subscribe to publishers. #3: Subscribe all the Publishers! Cool, we know how to fix this! Just subscribe the inner publisher, right? This might actually work, however the inner subscription won’t be nicely propagated. That means we don’t have any control over it as a subscriber to the publisher returned by the problem method. The takeaway here is to only use doOn* methods for side-effects, e.g. logging or uploading metrics. To fix this code properly and propagate the inner subscription we need to use one of the map flavours. Let’s use flatMap since we want to flatten the inner Mono and compose a single stream. A sketch of all three cases follows below.
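Since the original snippets didn’t survive here, the following is a rough Java reconstruction of the three cases above. The create and update methods are stand-ins of my own with the same shapes as in the article (create returns a Mono<String>, update returns a Mono<Void>):

```java
import reactor.core.publisher.Mono;

class UpdateExample {

  // Hypothetical service calls standing in for the article's create/update methods.
  Mono<String> create(String name) { return Mono.just(name); }
  Mono<Void> update(String id) { return Mono.empty(); }

  // Pitfall #1: nothing subscribes to the Mono, so update() never executes.
  void problem1() {
    update("42"); // assembled, but never subscribed -> no effect
  }

  // Pitfall #2: doOnNext does not subscribe to the inner publisher either.
  Mono<Void> problem2() {
    return create("foo")
        .doOnNext(id -> update(id)) // inner Mono is created and silently dropped
        .then();
  }

  // Pitfall #3 fix: compose with flatMap so the inner subscription is propagated.
  Mono<Void> fixed() {
    return create("foo")
        .flatMap(id -> update(id)); // already a Mono<Void>, no then() needed
  }
}
```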
We can also drop the then() call, because flatMap will already return the type of the inner publisher; Mono<Void> . Just flatMap that sh*t! Sweet :) #4: I didn’t quite catch that… Are you ready for another one? This time we do subscribe to the Mono returned by the update method. It could potentially throw an Exception so we apply defensive programming and wrap the call in a try-catch block. However, as the subscribe method doesn’t necessarily block, we might not catch the exception at all. A simple try-catch structure doesn’t help with (potentially) asynchronous code. To fix it we can either use block() again instead of subscribe() or we can use one of the built-in error handling mechanisms. You can use any of the onError* methods to register an “on-error hook” and return a fallback publisher. #5: Watch me Let’s have a look at the following snippet What we are trying to achieve here is to subscribe to the update and transform the result to a Mono<Integer> . Hence, we use the map operation to get the length of the string foo. Although the update will be executed at some point, we are again not propagating the inner subscription, similar to pitfall #3. The inner subscription is detached and we have no control over it. A better way would be to once again use flatMap and transform the result using the thenReturn operator. Should you bother to use subscribe at all? Most of the time not. There are a few potential use cases: Short-lived fire-and-forget tasks (e.g. telemetry, uploading logs). Please be mindful about concurrency and execution context. Long-running background jobs. Remember the Disposable that is being returned. Use it for lifecycle control. #6: Don’t count on it… The next one might be a tricky one Here we are simply accumulating all numbers flowing through our stream using a doOnNext operator and print out the resulting sum when the stream completes using the doOnComplete operator. We are using an AtomicInteger to guarantee thread-safe increments. This might seem to work when calling problem().block() once or even multiple times. However, we will a completely different outcome if we subscribe to the result of problem() multiple times. Moreover, if for whatever reason downstream a subscription gets renewed the count will be off too. This happens due to the fact that we are collecting state outside of the publisher. There is shared mutable state amongst all subscribers, which is a pretty bad smell. The proper way would be to defer the initialisation of the state to the publisher, for instance by wrapping it in a Mono as well. That way every subscriber keeps its own count. #7: Close, but no Cigar The next one has a similar issue. Can you spot it? Here we are trying to upload an input stream and our UploadService is nice enough to close it for us when we are done using the doFinally operator. To ensure we finish the upload successfully we want to retry five times on any failure using the retry operator. When a retry kicks in we will notice that the input stream is already closed and all our retries will be exhausted with an IOException . Similar to the previous case we are dealing with state outside of the publisher here, namely the input stream. We are closing it, hence changing its state, by using the doFinally operator. This is a side-effect which we should avoid. The solution once again is to defer the creation of the input stream to the publisher. 
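Again as a hedged reconstruction rather than the author’s original snippets, this is roughly what the fixes for pitfalls #4 to #7 look like in Reactor: handle errors inside the pipeline, propagate the inner subscription, keep mutable state inside the publisher, and tie a resource’s lifecycle to each subscription so retries get a fresh one. Method names and types are illustrative:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.atomic.AtomicInteger;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

class MorePitfalls {

  Mono<Void> update(String id) { return Mono.empty(); }
  Mono<Void> upload(InputStream in) { return Mono.empty(); }
  Flux<Integer> numbers() { return Flux.range(1, 10); }

  // #4: handle errors inside the pipeline instead of a try-catch around subscribe().
  Mono<Void> updateWithFallback(String id) {
    return update(id)
        .onErrorResume(e -> Mono.empty()); // fallback publisher instead of try-catch
  }

  // #5: propagate the inner subscription instead of calling subscribe() inside map().
  Mono<Integer> createUpdateAndMeasure() {
    return Mono.just("foo")
        .flatMap(s -> update(s).thenReturn(s.length()));
  }

  // #6: keep state *inside* the publisher so every subscriber gets its own counter.
  Mono<Integer> sum() {
    return Mono.fromCallable(AtomicInteger::new)
        .flatMap(total -> numbers()
            .doOnNext(total::addAndGet)
            .then(Mono.fromCallable(total::get)));
  }

  // #7: tie the InputStream's lifecycle to each subscription, so retries get a fresh one.
  Mono<Void> uploadWithRetry(Path file) {
    return Flux.using(
            () -> Files.newInputStream(file),   // resource created per subscription
            in -> upload(in),                   // use it
            MorePitfalls::closeQuietly)         // closed when the subscription terminates
        .then()
        .retry(5);
  }

  private static void closeQuietly(InputStream in) {
    try { in.close(); } catch (IOException ignored) { }
  }
}
```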
#8: Trick or Thread The following issue is likely the most subtle one out of the ten, but nevertheless good to be aware of. Here we seem to be doing everything right at first glance. We are once again composing two publishers, this time by using flatMap. This code will probably work, but it’s worthwhile realising what’s going on behind the scenes. While flatMap looks like a simple mutator similar to the ones on a collection-like API, in Reactor it’s an asynchronous operator. The inner publisher will be subscribed to asynchronously. This leads to uncontrolled parallelism. Depending on how many elements our Flux<String> findAll() will emit we are potentially starting hundreds of concurrent sub-streams. This is probably not what you want, and I think the Reactor API should be more explicit about this, if not disallow it altogether. With Akka Streams for instance this wouldn’t even be possible. The corresponding operator is explicitly called mapAsync, which gives you a clear indication that you are dealing with concurrent execution here. Moreover, it strictly requires you to limit the concurrency explicitly by passing a parallelism integer parameter. Luckily there’s an overload for flatMap in Reactor that allows you to configure the parallelism as well. Often you wouldn’t even need parallelism at all. If you just want to compose two streams synchronously you can use the concatMap operator. #9: My Stream is Leaking Almost there. When writing reactive code you sometimes have to integrate with non-reactive code. This is what the following snippet is about. This code is almost too simple. We are dealing with a Flux<String>, but we don’t want our API to expose reactive types. Therefore, we are converting our stream to an Iterable<String> using the built-in toIterable method. While this will probably lead to the expected result, transforming a Reactor stream to an Iterable in this way is a smell. An Iterable does not support closing, so the publisher will never know when it’s done. Frankly, I don’t understand why toIterable is even part of the stream API. I think we should avoid it! The alternative is to convert to the newer java.util.Stream API using the toStream method. This does support closing of the resources neatly. #10: I don’t want this to end If you came this far, congrats! You might not want this to end, like in the code snippet below. Here we continuously want to observe a stream and save each element as it flows through. This will be a potentially endless stream, so we don’t want to block the main thread. Therefore, we are subscribing on the elastic Scheduler using the subscribeOn operator. This scheduler dynamically creates ExecutorService-based workers and caches the thread pools for reuse. Finally, we call subscribe() to make sure the stream will be executed. The issue here is that any failure in either the observe upstream or the inner publisher created by save will result in termination of the stream. We are lacking error handlers or a retry mechanism. One can, for instance, register an error handler using one of the onError* operators, use any of the retry operator variants on either the inner or outer publisher, or use the doOnTerminate hook to restart the complete stream. Conclusion So, lessons learned. If you can take away a few things from this, it would be the following. Don’t make any assumptions about other publishers: the upstream can fail, so you need to handle potential errors and think about retries and fallbacks. Control concurrency and execution context.
Keep things simple and prefer concatMap over flatMap if you don’t strictly need parallel execution. If you do need parallelism, be explicit about its limits using the flatMap(lambda, parallelism) overload. Also, in those cases use subscribeOn to pick an appropriate Scheduler. Don’t make any assumptions about other subscribers: avoid side-effects and closing over mutable state outside the publisher, and make sure it is always safe to (re)subscribe. A consolidated sketch of these last points follows below. Thanks for reading! I hope you enjoyed it and learned some things, like I did :)
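For completeness, here is a small consolidated sketch of the advice from pitfalls #8, #9 and #10: explicit concurrency with concatMap or the flatMap overload, exposing a closeable java.util.stream.Stream instead of an Iterable, and a long-running background subscription with a retry and its own scheduler. The service methods are placeholders of my own, and boundedElastic() stands in for the elastic scheduler mentioned in the post:

```java
import java.util.stream.Stream;
import reactor.core.Disposable;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

class FinalPitfalls {

  Flux<String> findAll() { return Flux.just("a", "b", "c"); }
  Flux<String> observe() { return Flux.never(); }
  Mono<Void> enrich(String s) { return Mono.empty(); }
  Mono<Void> save(String s) { return Mono.empty(); }

  // #8: be explicit about concurrency.
  Flux<Void> enrichAll() {
    return findAll().concatMap(this::enrich);     // one inner subscription at a time
    // or: findAll().flatMap(this::enrich, 4);    // at most 4 concurrent inner streams
  }

  // #9: expose a closeable java.util.stream.Stream rather than an Iterable.
  Stream<String> asStream() {
    return findAll().toStream();                  // callers should close() it when done
  }

  // #10: long-running background stream with bounded concurrency, retry and its own scheduler.
  Disposable watch() {
    return observe()
        .flatMap(this::save, 4)                   // limit concurrent saves
        .retry()                                  // resubscribe on failure; swap in a bounded retry variant as needed
        .subscribeOn(Schedulers.boundedElastic()) // keep the main thread free
        .subscribe();                             // keep the Disposable for lifecycle control
  }
}
```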
https://medium.com/jeroen-rosenberg/10-pitfalls-in-reactive-programming-de5fe042dfc6
['Jeroen Rosenberg']
2019-11-28 19:16:17.495000+00:00
['Java', 'Streaming', 'Reactive Streams', 'Reactor', 'Reactive Programming']
Article Crash Investigation
Article Crash Investigation Deconstructing My Latest Writing Failure Photo by Tanja Zöllner on Unsplash “Thanks so much for sending this story our way. We have to be really selective at the moment, so we’re going to have to pass on this occasion.” I appreciated the editor taking the time to let me know my story had been rejected. But it didn’t make the rejection sting any less. One of the first things I read on Medium when I joined the platform two months ago were articles from writers, writing about writing, for other writers. Most common threads in these articles were that you must develop a thick skin and learn how to handle constant rejection. So I actually went into my Medium writing career fully prepared to face endless rejection. And then something weird happened. Things went well for me. Within the first couple of months, I was published thrice in a couple of major publications and got curated for those articles as well, under fairly big topics such as ‘Work’, ‘History’, and ‘Relationships’. Now let’s keep it real — I’ve only made about $30 from my writing so far in two months on the platform, so it hasn’t changed my life. Yet. However, I’ve had enough encouragement from people’s responses to my writing to think that this is something worth taking seriously. So last week I resolved to work on my craft even more and level up. And that’s what made this particular rejection even worse — I worked harder on this article than any other article I’ve written so far, even more than the one that has had the most success and delivered the majority of my Medium earnings to date.
https://medium.com/honest-creative/article-crash-investigation-f32521bcd68e
['Ali Q']
2020-10-21 12:50:44.797000+00:00
['Work', 'Writing', 'Personal Development', 'Success', 'Writing Tips']
I’ll Never Be Less Than Me
By all accounts, I am “too much.”⁣ Complicated, as a dear friend has often called me. Too driven, too intense, too feeling. Too wild, too free, too deep. Unable, if not incapable of, settling into the ways of being most people expect or desire.⁣⁣ ⁣I am too much.⁣⁣ I am complicated.⁣⁣ I am intense.⁣⁣ ⁣⁣ I am someone who many people do not want all parts of (I know this, because they’ve told me). What makes me who I am is not fully desired or welcome in relationships and spaces of all kinds. Because it’s triggering and different and chock full of emotion. It’s uncomfortable and consistently challenging to all that is familiar and known.⁣⁣ ⁣⁣It has taken years, but I can finally say to you with my whole heart: I’m at peace with my too-muchness, and I hope that you are too.⁣⁣ The problem isn’t that we’re too much, friends… but rather, that we decided to agree with the ones who said it as if it was bad and wrong. We decide to give those people and their opinions more weight than they deserve, rather than responding with, “yes, and?”⁣⁣⁣⁣Of course you and I are too much. That too-muchness is all things necessary to being who we’re here to be and doing the work we’re here to do. It is needed right now in a way that we’re only just beginning to understand and appreciate.⁣⁣⁣⁣ My too-muchness is my gift, just the same as yours. And I, for one, will never allow someone to throw it in my face as if it’s bad or wrong again. Maybe it’s unwanted or unwelcome, and that’s okay. It’s simply not for them. But I will never be less than who I am for the sake of someone else’s comfort again. Not ever.⁣⁣ I hope you won’t either.
https://medium.com/thrive-global/ill-never-be-less-than-me-b0807970cfad
['Stephenie Zamora']
2019-04-03 15:06:08.003000+00:00
['Self-awareness', 'Relationships', 'Love', 'Happiness', 'Self Improvement']
Love Is Patient, Love is Kind
Love Is Patient, Love is Kind Love does not boast I have struggled with that emotion for as long as I can remember. Saying it can be a scary, vulnerable prospect. What if I throw out the L word and receive an awkward silence? I grew up in a home where I knew I was loved by my mother. She was an alcoholic and had some mental issues, but there was no doubt she loved me and was proud of me. She would have done anything for me. The problem with my mother is that she had low self-esteem. She was a middle child sandwiched between a beautiful tall drink of water and a younger sister that was an overachiever. Something was different for her. I don’t know what happened to her-what made her struggle, but she walked a little slumpier than the other two. I learned the story about my existence when I could fully comprehend it. I learned that my Biological Father and my mother had relations. He was a couple of years younger than her. He was somewhat of a gigolo around town. I picture him shirtless driving a hot rod, but that could just be my imagination. When my mom found out she was pregnant, he gave her 60 dollars for an abortion and left. I learned earlier this year, that my mom did go to the abortion clinic, but obviously didn’t follow through thanks to her sister’s pleading. I feel like that abandonment dug a deep hole in my mom and she never recovered. During my birth, she had her tubes tied (at 23) and never married. Her life from then on was characterized by her boyfriend after boyfriend. Each one was more abusive than the last. Her alcoholism continued to deteriorate her self worth and I slowly began to lose respect for her. It is hard to watch someone you love make horrible choices and slowly I had to fade away. Perhaps because of my upbringing, I wore a hard exterior. I acted like love didn’t really matter to me. I played tough and hard to get, having many loveless flings-where internally I wanted more, but I would never let on. When I finally fell in love again after a failed marriage. I was talking to my internet boyfriend (Yes, I met him on My Space) and I remember saying, “so when do we start saying I love you.” I could tell the feeling was mutual even though we had only been dating a few months. When you know, you know. I am happy to report, that although I can still be tough as nails I have learned a vulnerable side too. Through being seen during childbirth, marriage, and twelve years together I have had to work on shedding that “I don’t care” attitude. It only makes me feel empty and alone. It’s Okay to Share our Vulnerabilities Caring is okay. It is perfectly okay and wonderful to care for another human being unless they don’t care for you back, and then it’s painful as hell. The loneliest existence is being with someone who doesn’t care about you as much as you care for them. When you are at the bottom of their “to do” list and they are at the top of yours. What I’ve learned over the years is that I am responsible for filling up my own cup. Whatever extra I get is overflow and it’s great, but I don’t rely on it or expect it. I have to ask for what I need. My husband can’t read my mind, and stomping around and slamming cupboards doesn’t make him realize that I am needing something from him, it just creates tension and fuels a non-existent fire. I didn’t learn many relationship tools growing up, but through a lot of therapy and outside help I’ve been able to find a calm demeanor when dealing with relationship challenges. 
I still hate talking about money, and if you mention a spreadsheet I’m out of there, but for the most part I feel comfortable running my ideas by my spouse, versus in the past, when I bombed past the discussion and went straight to what I wanted when I wanted it. I’m thankful that there are no knock-down-drag-out fights and drama like I created in my 20s. Most importantly, when things don’t go my way, I no longer just run away. I am in it for the long haul. I’ve learned about commitment, and I have been able to stay strong in faith for love itself. It always protects, always trusts, always hopes, always perseveres. 1 Corinthians 13:4–8
https://medium.com/blueinsight/love-is-patient-love-is-kind-18b03fe656ac
['Melissa Steussy']
2020-12-04 15:14:04.623000+00:00
['Self-awareness', 'Blue Insights', 'Relationships', 'Love', 'Self Improvement']
That Big Data problem — Thinking the Hadoop way
That Big Data problem — Thinking the Hadoop way What is the “big data problem”? “On the night of July 9, 1958 an earthquake along the Fairweather Fault in the Alaska Panhandle loosened about 40 million cubic yards (30.6 million cubic meters) of rock high above the northeastern shore of Lituya Bay. This mass of rock plunged from an altitude of approximately 3000 feet (914 meters) down into the waters of Gilbert Inlet (see map below). The impact generated a local tsunami that crashed against the southwest shoreline of Gilbert Inlet. The wave hit with such power that it swept completely over the spur of land that separates Gilbert Inlet from the main body of Lituya Bay. The wave then continued down the entire length of Lituya Bay, over La Chaussee Spit and into the Gulf of Alaska. The force of the wave removed all trees and vegetation from elevations as high as 1720 feet (524 meters) above sea level. Millions of trees were uprooted and swept away by the wave. This is the highest wave that has ever been known.“ (quoted from http://geology.com/records/biggest-tsunami.shtml) Now lets use our imagination a bit, and pretend we’re on a digital world, and that an even bigger wave can be seen on the horizon, only that the wave is made up of 1’s and 0’s. That’s the current status of information on the net right now. A huge wave of data is being generated every second, ranging from user generated information such as tweets, status updates, uploaded pictures, blog posts, comments, text messages, e-mails and so on to machine generated data, like server access logs, error logs, transaction logs, etc. And that’s not even the problem, the problem is that we need to start thinking in terms of TB or even PB of information, billions of rows instead of millions of them in order to be able to handle this big wave that’s coming. Normally, when you have to store information on your application you ask yourself one basic question: What do I need this information for? And from the answer you get, you plan your storage and you start saving that specific information. Lets look at an example, from two different perspectives: Traditional way of thinking: Say for example, you’re a web development company and you’re asked to create a basic web analytics app for your company site. So you ask yourself: What do I need the information for? As an answer, you might get something like: To get number of visits to each page. To get a list of referrer sites. To get the number of unique visits. To get a list of web browsers used on the site. It’s a short list, I know, but this is a basic example. Back to the problem: You have your answer, all that information can be fetched from the server’s access log, so you configure your log files to store that information, great! You’re done! Yes, you’re done, you got your system ready, it shows the information you were asked to show, but you also closed the door to other potential analytics that could come out of the information stored on those access logs (like request method used, response code given, size of the object returned and so on) and other sources of information. Thinking in “big data” terms: Thinking in “big data” terms means (at least to me), saving all the information you’re working with on your project and then finding out new and exiting ways to interpret that information and get results out of it. 
Back to the problem, with the “big data” way of thinking this time: This time around, you think in “big data” terms, so you already have lots of data being saved for every visit, such as: Access log information. Error log information. User input (if there is any). User behavior data (such as clicking patterns and similar), and so on. That’s because when you created your website, you asked yourself a different question: What is all the information I can get from my website? And since you changed your question, you significantly changed the answer to your problem. You now have a vast amount of information to analyse and get insight from. This is great, but where do we store all this log information? It could potentially become too much for a single machine and we don’t want to lose any information by rotating logs or using other such techniques. So another valid question would be: What kind of hardware do I need to store and process all that information in a timely manner? What kind of hardware do we need then? We need some kind of setup that will allow us to: Store vast amounts of data. Process this data in a timely manner. Be able to grow as much as we want (storage and processing power wise). Be fault tolerant (storage and processing power wise). Be affordable. That is a lot to ask of a single computer (especially if we consider the last point), isn’t it? So the answer will probably come in the form of a distributed system. Enter Hadoop What is Hadoop? In a nutshell, Hadoop is the solution to our problems (well, one of them, mind you), but a pretty powerful one at that. In more detail, Hadoop is an open source Apache project, dedicated to solving two major problems related to big data: Where to store all of the information? How to process that information at an affordable cost and in a reasonable amount of time? To answer these questions, Hadoop provides the following solutions: HDFS This is the Hadoop Distributed File System; it allows us to store all the information we need in a reliable way. It works by interconnecting commodity machines (affordable) and using the resulting shared storage (store vast amounts of data). HDFS takes whatever we throw at it and splits the files into evenly sized chunks of data, and then spreads them throughout the cluster. At this stage, it also replicates the files, providing data redundancy and fault tolerance. Thanks to HDFS we can have as much storage capacity as we need, by adding new machines to the cluster (be able to grow as much as we want). We also gain a very important asset, which is fault tolerance. Since we’re replicating the information onto several nodes of the cluster, our commodity machines are free to fail and the only place where that will affect us is performance (no data loss or incomplete information). MapReduce This is the other “leg” of Hadoop, an implementation of the MapReduce algorithm proposed by Google in 2004. The MapReduce algorithm allows us to process large amounts of information (terabytes of it) in a distributed (thousands of nodes) and fault-tolerant manner (process this data in a timely manner). And if we consider that we already have a cluster of computers working for us with HDFS, MapReduce is the perfect match to take advantage of the computational power sitting there on every node of the cluster. The algorithm has two basic steps: Map step: In this stage, the input data will be split into smaller chunks to be analyzed and transformed by processes called “mappers”.
Thanks to the integration with HDFS, the main node will effectively schedule map jobs to use the data that’s already on the nodes they’re running on, allowing the system to use very little bandwidth. The output of these mapper jobs will be a set of (key, value) tuples. Reduce step: The output of the mappers will be sent into the reduce jobs. These jobs process the information with the added benefit of knowing that their input will be given in a sorted manner by the system. Their main purpose is to aggregate the information given by the mappers and output only that which is needed. There is an implicit step between 1 and 2, the shuffle & sort step, done by the system automatically. In this step, the system will sort the output of all mapper nodes by the key of each tuple and send these sorted results to the reduce nodes, ensuring that all tuples with the same key go to the same reducer. Graphical representation of the MapReduce steps Thinking the Hadoop way So, we have our data, we have our questions to ask of that data, we have our needs and we have our solution. What now? Your following steps could include: Installing and configuring your Hadoop cluster: For this step, the company Cloudera has a standardized distribution of Hadoop, which they call Cloudera’s Distribution Including Apache Hadoop (CDH). You can download it for free and it comes with several other projects from the Hadoop ecosystem (such as Pig, Hive, HBase, and so on). And for managing your cluster, you could use their Cloudera Manager, which allows you to manage up to 50 nodes for free. Upload the information to your HDFS. Transform your information using a MapReduce job: I consider this step to be optional. I would use a “hand-written” MapReduce job if I had to transform my data set in a specific way in order to query it later on. Query your data set: There are several ways to do this; tools like Pig or Hive allow you to write MapReduce jobs (for data transformation) in a higher-level language (Pig Latin or SQL). Others like HBase and Cassandra work better for quick queries to that data; they work directly over HDFS, ignoring the MapReduce framework, but you’re a bit limited in what you can do with the information. And finally, a pretty common question: Is Hadoop the best solution for big data analysis out there? Probably not, since “the best” is always relative to your needs, but it’s a pretty darn good one, so give it a try. Besides, all the cool kids are doing it: Facebook — 15 PB of information the last time they revealed the number. Ebay — 5.3 PB of information on their clusters. LinkedIn Twitter And many others, check out the complete list here.
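To make the map and reduce steps above concrete, here is a minimal sketch of a MapReduce job in the spirit of the web-analytics example from the beginning of the post: counting hits per page from access logs. It assumes the standard org.apache.hadoop.mapreduce API and a common log format where the request path is the seventh whitespace-separated field; both of those details are my own assumptions, not taken from the article.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PageHitCount {

  // Map step: one access-log line in, (url, 1) out.
  public static class HitMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text url = new Text();

    @Override
    protected void map(LongWritable key, Text line, Context ctx)
        throws IOException, InterruptedException {
      // Assumes e.g.: 127.0.0.1 - - [10/Oct/2020:13:55:36] "GET /index.html HTTP/1.1" 200 2326
      String[] fields = line.toString().split(" ");
      if (fields.length > 6) {
        url.set(fields[6]);
        ctx.write(url, ONE); // emit (url, 1)
      }
    }
  }

  // Reduce step: receives (url, [1, 1, ...]) after shuffle & sort, emits (url, total).
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text url, Iterable<IntWritable> counts, Context ctx)
        throws IOException, InterruptedException {
      int total = 0;
      for (IntWritable c : counts) {
        total += c.get();
      }
      ctx.write(url, new IntWritable(total));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "page hit count");
    job.setJarByClass(PageHitCount.class);
    job.setMapperClass(HitMapper.class);
    job.setCombinerClass(SumReducer.class);   // local pre-aggregation on each mapper node
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory on HDFS, e.g. the uploaded logs
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory on HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Reusing the reducer as a combiner gives local pre-aggregation on each mapper node before the shuffle & sort, which keeps bandwidth usage low, in line with the data-locality point above.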
https://medium.com/moove-it/that-big-data-problem-thinking-the-hadoop-way-6fc0a617d954
['Blog Moove-It']
2016-08-04 12:14:19.985000+00:00
['Analytics', 'Big Data', 'Hadoop']
David S. Ware New Quartet’s Théâtre Garonne, 2008
Out November 15 on Aum Fidelity Reviewed by John Payne When David S. Ware passed away on in October of 2012, the world lost a sound it’s never getting back again. That sound was revolutionary, it was a tough sound, a punk-jazz sound that asks a lot of questions and can’t wait around for answers. Ware’s sax tone was a raspy, ragged, haaard-blowing, Ayler-ish thing that frequently produced a kind of fear ­­– fear that the man was gonna explode, he’s blowing so hard. That concern is palpable on this live concert recording, Théâtre Garonne, 2008, the latest issue from the David S. Ware Archive Series on the ever-righteous Aum Fidelity label. The set showcases the fact that Ware had already been suffering the strains of the illness that eventually killed him. The incomparable David S. Ware Ware here is giving his all, as a kind of final statement, perhaps. Whatever the case, he goes out expressing joy, and that joy permeates these performances. He also seems determined to communicate with his audience, proffering a a freely improvised and quite noisy jazz that relies heavily on an old-school thing called melody. And not just any old melody, but in particular ones like the main theme as heard over the course of the first track, “Crossing Samsara, Part 1” and its “Part 2” reconfiguration. As on all of the album’s pieces, the quartet jabs out a Monkish, bopping theme that’s really more like a heavy rock riff, which quickly explodes in all directions. Though the pieces expand into enormously complex sonic vistas, that nice little theme is stated very briefly, then all hell breaks loose, as if the band is impatiently warming us up before getting to the real message, the real meat of the matter, which is what a theme implies, suggests, triggers and maybe ought to be musically argued with. Right out the gate, Ware’s tone is already indicative of a pent-up, well, not rage but passion. And his passion is not pretty, it’s rough and raw and chainsaw-edged. He and his clan blast that theme to atomic bits; they’re a bad bunch, no pussyfooting. Drummer Warren Smith’s rolling, rattling toms feel their way across the unison-played theme, Ware’s rasping sax and William Parker’s double bass like spiderwebbing across guitarist Joe Morris’ mildly abstruse flights of fancy. If these three guys are roughly playing “tonal,” or in the same key, it matters not, as these initially wild sax/bass/guitar blasts are relatively short and all briskly return to the theme. The theme is Ware’s handy devise to help listeners make sense of this densely interwoven improvised music. With “Crossing Samsara, Part 2” we have a remelodicized variation on the theme, briefly played in the same rhythm, then here quickly comes Ware’s next bullfrog-butterfly sax solo: His grinding tone, somewhat akin to Tunisian Mizwid reed instrument’s, sears through and chews up phrases deriving out of the theme, spraying a multitude of images and emotional terrains: He blows like water rushing, a plant uncoiling, a dog chasing a ball, a cat chewing a rat. These truly probing musicians’ prodigious techniques aid enormously in their search for aural nirvana. Morris’ particle-smashing “Part 2” guitar solo stretches out in astounding flights across the neck of the guitar, creating a centrifugal force as dizzying lights spin ‘round our heads. Interesting, too, how his clean tone ­­– dry, almost flat, no distracting effects — directs attention to the theme of the piece, little bits of it, anyway. 
Clean lines, especially from the guitar and sometimes the sax, work to emphasize the melody in Ware’s pieces, which had been obscured somewhat in his past work with pianists such as Matthew Shipp. Shown here: David S. Ware, saxophone; William Parker, bass; Muhammad Ali on drums As heard in “Durga,” this springboarding of heavy-duty spontaneous jazz squawking off supremely melodic thematic material (it’s “rhythmelodic,” in Ware’s terminology) graces the entire set with a balancing, even accessible feel. There is a wonderful visuality in Ware’s solo playing on this one: He takes us down to the river. When “Reflection”’s reedy opening sax solo pokes its pointy head in, a low-key guitar bit looks over its shoulder in curiosity, like, What’s up? Like the huge hum of an old turbo-prop plane, Ware’s split-toned sax hovers in sustained drones, which he discovered in the course of his improvisation. In “Namah,” an opening duet is all tiny parts, skittering bass over rolling, quiet drums, then guitar chords that slant over Ware’s beefy sax walls in lacy latticework. And here comes the theme clearly back in, little shards of it scattered over the rest. It’d be hard to overemphasize the overall mental and physical effect of Ware’s “rhythmelodic” compositions, which turn what will facilely be heard as a massive mountain of shrieking free jazz into what it’s possible to perceive as real, true songs, albeit of a super-modernized shape and size. @riotmaterial
https://cvonhassett.medium.com/david-s-ware-new-quartets-th%C3%A9%C3%A2tre-garonne-2008-183bf38985ae
['Riot Material']
2019-11-04 16:43:18.200000+00:00
['Jazz', 'Review', 'Music', 'Culture', 'Art']