Fields: title, text, url, authors, timestamp, tags
My ‘To Be Played’ List is Out of Control so Naturally I Started Playing Skyrim Again
It’s kinda hard to believe that Skyrim came out 9 years ago. In some ways, it seems like it’s been here forever. Certainly it’s remained in the forefront of my gaming consciousness. Maybe its assumed permanence is some side effect of its ubiquity – it’s been ported practically everywhere. And I’ve been there for each iteration: on the Xbox 360 at launch in 2011; the Legendary Edition on PC a few years later; on the Xbox One in 2016, with mod support; on the Switch in 2017 – Skyrim on the go! Thanks to Xbox’s streaming functionality, I’ve even played Skyrim briefly on my phone. Skyrim was the last game I waited in a line to purchase on the day it released; the occasion remains unique in that I clearly remember the mostly uneventful transaction nine years later. Most games launch on Tuesdays, but November 11, 2011 was a Friday. I believe they launched on a Friday because of the cool 11/11/11 date. I was appreciative of it for less symbolic reasons: I’d taken the day off from work, which meant I had 3 uninterrupted days of play ahead of me. There were probably a half dozen of us queued up outside the GameStop in the mall, waiting for employees to roll up the gate at 10 AM and let us have at thee. My wife had come along good-naturedly, even though I’m sure there were other things she’d rather be doing on a day off. The kids had been shuffled off to school and we were waiting to buy a videogame. I clearly remember thinking that we were probably the only people who had to adjust our schedules to be there, a pair of 30-year-olds surrounded by college kids. The guy in front of us was a portly twenty-something wrapped in a green denim duster, greasy hair askew at awkward angles. He kept rattling the cage and demanding the employees “gimme my skrim”, like an imprisoned madman yelling at the guards. He quieted once we were let inside, but renewed his demands with vigor once he had the complete attention of the cashier. I honestly believe it’s the only thing he said the whole time. My own transaction was notable only in that it was quick and without any ranting. My first character was a burly Nord in the style of Conan the Barbarian. He carried a giant sword and eschewed magic. I don’t recall his name, but before I stopped playing him we’d conquered the dragon threat, reclaimed the north for the Nords, and stood atop the Companions. I purchased the DLC but put the game aside before I made any headway. Later forays into Skyrim came through selective mods, which enhanced, improved, or modified the gameplay, freshening up an experience I’d already sunk several hundred hours into. I no longer cared about the Dragonborn storyline, one of the game’s weakest. Rather, I just kinda wandered and let the game find me, pursuing some of the quests I’d not previously attempted. Last week I fired up Skyrim for the first time in at least a year, reviving a game with a date stamp of 2017. Thus was Mister Whiskers reborn from the digital ashes and turned loose on a world in search of heroes once again. But Mister Whiskers is nobody’s hero. Hence all the tomb raiding.
https://medium.com/fan-fare/my-to-be-played-list-is-out-of-control-so-naturally-i-started-playing-skyrim-again-40877fc233c2
['Eric Pierce']
2020-12-28 15:08:01.291000+00:00
['Gaming', 'Pop Culture', 'Skyrim', 'Xbox', 'Writing']
Dissociative Identity Disorder
An Explanation in Layman’s Terms Photo by Elijah Hiett on Unsplash There are many psychological disorders which, if you are not in the know, may seem obscure or strange. Dissociative identity disorder (DID) is one such condition. The public view of DID has been shaped by what they see in the movies and popular television programming. Many folks, upon hearing the many myths about the disorder, find it to be frightening or even something to be desired. The facts connected to DID need to be talked about openly so that the public can be better informed about what dissociative identity disorder really is and how it affects those who live with it. Awareness is the driving force behind the writing of this article, which offers a brief explanation of dissociative identity disorder, written in layman’s terms. Dissociation Dissociation is a fancy word for “zoning out”, and all humans do it. In fact, dissociation is the human brain’s way of dealing with, among other things, overwhelming circumstances and boredom. A good example of a common dissociative incident, one that most people will find familiar, is the movie theater experience. You go to the theater to see a movie you have been looking forward to for months. You sit down in an empty row with your popcorn and soda, and the movie begins. Soon, you get thoroughly engrossed in the film’s plot. After the movie ends, you are surprised at the late hour. Not only this, but you suddenly become aware that there are people sitting beside you who weren’t there when you began watching the movie and that you have eaten your popcorn and drunk your soda. You have little recollection of the other people seating themselves beside you, or of your eating and drinking your treats. When Dissociation Goes Terribly Wrong Photo by Jurica Koletić on Unsplash The human mind is a marvelous complex of organized thought. However, sometimes we are met with circumstances that are too hard to deal with in any organized manner. When we experience these overwhelming circumstances, we utilize what are termed defense mechanisms. Dissociation is one of these defense mechanisms, and it works well when we find ourselves overwhelmed or bored. When we find ourselves in stressful or boring situations, we simply “check out” or dissociate until our intellect is needed once more. Dissociative identity disorder is a defense mechanism taken to the extreme, where dissociation becomes a life-changing obstacle. With dissociative identity disorder, this human ability to dissociate causes one to become disconnected from one’s thoughts, feelings, and memories. In this state, the survivor is protected from what their mind has determined to be overwhelming circumstances. This is all done unconsciously, just as with the theater experience. While dissociation can be a wonderful coping mechanism when one is in danger, or bored, it can also be very destructive. The life of a person living with dissociative identity disorder is full of destroyed friendships, ended romantic relationships, lost jobs, and the loss of many other important things in life that most take for granted. Sometimes survivors lose their sense of right and wrong while dissociated and get into trouble with the police. Another effect might be financial problems due to alters who do not understand that credit and debit cards are not bottomless. The Alternate Ego States The hallmark of dissociative identity disorder, and the symptom best known to the public, is the presence of alternate ego states (alters).
Ego states are a normal function of the human mind and are found in everyone. We form a new ego state with each new experience, to be triggered when we experience something similar in the future. This enables us to know how to cope with the new situation by drawing on what we did previously. In most people, these states of consciousness can communicate with one another, and together they form what is perceived as a cohesive personality, with a running timeline of the events in that person’s life. Like other humans, persons living with dissociative identity disorder form new ego states in new situations to help cope with similar situations in the future. However, many of their ego states were formed during highly traumatic and emotionally charged past events. This sets up the survivor for the perfect storm. When triggered by a stimulus that reminds the survivor’s brain of a long-ago traumatic event, the brain treats that event as happening in the now. Since the event from the past was highly traumatic, the survivor utilizes the coping mechanism of dissociation to escape. In this disconnected state, the old ego state doesn’t merely tell the person how to cope, it is forced to take over. Since the events that forced the creation of the ego states in the past were supercharged with emotion and fear, they have become separated by amnesiac walls for self-protection and self-preservation. These barriers prevent proper communication between the ego states, and as a result, the events that happened during the original trauma, as well as the events that happen while the person is in a dissociated state, are not communicated to the original personality or to each other. The survivor experiences the consequences of these dissociative events when people tell them about things they have said or done that they do not remember. This is frightening and becomes very disruptive to their lives. Their personality has become so fragmented that the experience of having a continuous timeline of life events is lost. Time Loss Photo by Jon Tyson on Unsplash Time loss is another hallmark of dissociative identity disorder, and those who live with it describe it as their number one enemy. The effect of lacking the reassurance that they will not dissociate and awaken hours, days or years later is staggering. To understand this phenomenon, one must first speak of how most humans experience time. Most people experience time as the illusion of it linearly passing from moment to moment. Although they may not remember every event of each day, because their lives run in a predictable sequence, there is the comforting feeling of knowing pretty much what has happened in any given hour. The triggers that activate an ego state are all around us, and the average person experiences them as fond reminiscences of pleasant times in their past. These trips down memory lane do not last long, and often leave the one experiencing them feeling warm and fuzzy. As such, they do not disrupt the experience of their timeline. People who live with dissociative identity disorder do not have this luxury. To a survivor, triggers do not feel warm and fuzzy. They are experienced as flashbacks to horrendous events in the person’s past, and this memory thread, in turn, activates a disconnected ego state. The result is a dissociative event, which means that the time experienced by the person living with dissociative identity disorder is chopped up and not continuous.
Their timeline is splintered and experienced in leaps and jumps which can sometimes span hours, days or even years. One cannot overstate how disruptive this effect can be on a person’s life. Defeating the Stigma Surrounding DID People who live with dissociative identity disorder face many obstacles in their lives, but the one most agree is the hardest is the stigma. According to Webster’s New World Dictionary, the short definition of stigma is “a mark of disgrace or reproach or a perceived negative attribute that causes someone to devalue or think less of the whole person.” Unfortunately, stigma is often thought of as being synonymous with shame, and this thinking keeps many who are living with a severe mental health diagnosis like DID from seeking and receiving the help they need. Receiving a diagnosis such as dissociative identity disorder is difficult enough. One must not only own the horrible facts of the past but also grieve over a lost childhood and any illusion that it was normal. To have to face family, friends, and co-workers who shun or shame them is an enormous burden. There is staggering pain involved in working on the issues that caused the disorder to form, and it takes many years of very hard work to overcome their effects on life today. Very often people who live with DID are faced with devastating isolation due to the horrific stigma involved with their disorder. These innocent victims of a disorder they do not want and did not cause are ridiculed, mocked and feared by society. The Exploitation of DID A popular movie, released in 2017, helps to perpetuate the myth that people struggling with this disorder are endowed with supernatural powers and are murderous psychotic killers. One must agree that the character in the movie makes for a fantastic horror story, but the fear it instills in the public’s perception of people who live with DID is more than troubling. The truth is that most people who experience dissociative identity disorder are much more likely to be victims of crime than perpetrators. Yes, every population demographic contains a certain percentage of people who are criminally minded or even dangerous. However, the portrayal of people living with DID in the movies has been historically tilted to show them as insane or worse. There are even questions posed regularly on social media forums wondering if people who have developed dissociative identity disorder can climb walls. The answer to these inquiries and wonderings is a lot less glamorous than Hollywood would like. In a dissociated state, persons living with dissociative identity disorder are extremely vulnerable to being mugged, raped, and murdered. The main reason for this vulnerability is that the ego state that is in control during the dissociative event does not understand the complexities of society or how to protect itself. Because of this, they can easily fall prey to unscrupulous people. The Stigma Must Be Combated Photo by Mag Pole on Unsplash There are ways to combat this public misunderstanding, which has been fueled by the use of DID as a money-making venture in the media. One way is to openly discuss the realities of dissociative identity disorder, having people whose lives are restricted by its effects tell what it has done to their lives. Putting a face on the disorder, and allowing the public to glimpse the tragic ways persons living with DID struggle daily, may help end the stigma.
Another way that may be much harder to accomplish, is to force the media to talk about these realities before their films are shown in theatres. A short clip, explaining that their film is fiction and that persons who live with dissociative identity disorder experience life in a much different way than what is to be portrayed, can help remind moviegoers that what they are seeing is make-believe and not factual. Not Weird or Strange The main objective of this piece has been to help people understand that survivors living with dissociative identity disorder are not weird or strange. They are ordinary people who have taken the human ability to escape overwhelming trauma through dissociation to a higher level. Indeed, many would have gone insane or died were it not for the very human ability to flee into their minds. The next time you are involved in a discussion where a myth is being propagated about dissociative identity disorder, speak up. Do not remain silent. The tragic effects on the lives of survivors demand that we all dispel the myths and misinformation rampant in society today. After all, we’re not talking about animals, demons or monsters. We are speaking of human beings who have survived overwhelming odds. They deserve dignity and respect.
https://shirleydavis-23968.medium.com/dissociative-identity-disorder-31ca16c3fbef
['Shirley J. Davis']
2019-08-18 15:47:19.333000+00:00
['Truth', 'Stigma', 'Mental Health', 'Did', 'Laymen']
Modern Reflections of the Past
I can still recall his face and, on good days, his voice. My grandfather was my favorite person. He was more than a grandfather — he was the sturdy tree we would climb on when we were children, tossing us over his shoulder and making us laugh until we were exhausted. He was the smell of summer and the excitement that came with visiting his beach house each weekend. My grandfather died when I was in high school; it was the longest day of my life so far. When I first heard about The History Project and began planning our workshop series, I couldn’t help but think about him. The stories I now make up, the places my imagination goes while looking at old photos. Working with The History Project and the team was an incredible and reflective experience. I was instantly moved by Niles’s story and all that the platform could do, organically making you think of memories that may have been tucked away in the sock drawer of your mind. I am the Program Associate of Senior Planet, the country’s first technology-themed senior center. My job allows me to work closely with the thousands of members who come through our door on a monthly basis; it is the most rewarding part of my job. Working with The History Project introduced me to a group of our members on a deeper level. I was able to hear their stories, their memories — their lives. What occurred before they stumbled upon our center, what brought them to us, to me. I think technology is so important to seniors; it gives them the opportunity to reflect on some of their favorite life moments. No matter what age we are, technology reminds us of our stories. The History Project and similar tools bring something personal to technology, something that leaves us all hungry, not just for the past, but for the power of memories and how we can create more of them.
https://medium.com/the-history-project/modern-reflections-of-the-past-72611168aecb
['Emily Guzewicz']
2016-08-09 15:53:26.018000+00:00
['Storytelling', 'History']
How to Add Scroll to Top Feature in Your Vue.js App
Photo by Mark Rasmuson on Unsplash If a page has a long list, then it is convenient for users if the page has an element that scrolls to somewhere on the page with one click. In plain JavaScript, there are the window.scrollTo and element.scrollTo functions, which take the x, y coordinates of the screen as parameters, which isn’t too practical for most cases. There’s also the scrollIntoView function available for DOM element objects. You can call it to scroll to the element it’s called on. With Vue.js, we can do this easily with the Vue-ScrollTo directive located at https://github.com/rigor789/vue-scrollTo. It allows us to scroll to an element identified by its ID and also add animation to the scrolling. It makes implementing this feature easy. In this article, we will build a recipe app that has tooltips to guide users on how to add recipes into a form. Users can enter the name of their dish, the ingredients, the steps and upload a photo. In the entry, there will be a ‘Scroll to Top’ button to let the user scroll back to the top automatically by clicking the button. We will build the app with Vue.js. We start building the app by running the Vue CLI. We run it by entering: npx @vue/cli create recipe-app Then select ‘Manually select features’. Next, we select Babel, Vue Router, Vuex, and CSS Preprocessor in the list. After that, we install a few packages. We will install Axios for making HTTP requests to our back end, BootstrapVue for styling, V-Tooltip for the tooltips, Vue-ScrollTo for scrolling and Vee-Validate for form validation. We install the packages by running npm i axios bootstrap-vue v-tooltip vee-validate vue-scrollto . Now we move on to creating the components. Create a file called RecipeForm.vue in the components folder and add: <template> <ValidationObserver ref="observer" v-slot="{ invalid }"> <b-form @submit.prevent="onSubmit" novalidate> <b-form-group label="Name" v-tooltip="{ content: 'Enter Your Recipe Name Here', classes: ['info'], targetClasses: ['it-has-a-tooltip'], }" > <ValidationProvider name="name" rules="required" v-slot="{ errors }"> <b-form-input type="text" :state="errors.length == 0" v-model="form.name" required placeholder="Name" name="name" ></b-form-input> <b-form-invalid-feedback :state="errors.length == 0">Name is required.</b-form-invalid-feedback> </ValidationProvider> </b-form-group>
<b-form-group label="Ingredients" v-tooltip="{ content: 'Enter Your Recipe Description Here', classes: ['info'], targetClasses: ['it-has-a-tooltip'], }" > <ValidationProvider name="ingredients" rules="required" v-slot="{ errors }"> <b-form-textarea :state="errors.length == 0" v-model="form.ingredients" required placeholder="Ingredients" name="ingredients" rows="8" ></b-form-textarea> <b-form-invalid-feedback :state="errors.length == 0">Ingredients is requied.</b-form-invalid-feedback> </ValidationProvider> </b-form-group> <b-form-group label="Recipe" v-tooltip="{ content: 'Enter Your Recipe Here', classes: ['info'], targetClasses: ['it-has-a-tooltip'], }" > <ValidationProvider name="recipe" rules="required" v-slot="{ errors }"> <b-form-textarea :state="errors.length == 0" v-model="form.recipe" required placeholder="Recipe" name="recipe" rows="15" ></b-form-textarea> <b-form-invalid-feedback :state="errors.length == 0">Recipe is requied.</b-form-invalid-feedback> </ValidationProvider> </b-form-group> <input type="file" style="display: none" ref="file" <b-button v-tooltip="{ content: 'Upload Photo of Your Dish Here', classes: ['info'], targetClasses: ['it-has-a-tooltip'], }" >Upload Photo</b-button> </b-form-group> @change ="onChangeFileUpload($event)" /> @click ="$refs.file.click()"v-tooltip="{content: 'Upload Photo of Your Dish Here',classes: ['info'],targetClasses: ['it-has-a-tooltip'],}">Upload Photo <img ref="photo" :src="form.photo" class="photo" /> <br /> <b-button type="reset" variant="danger" </b-form> </ValidationObserver> </template> Submit @click ="cancel()">Cancel <script> import { requestsMixin } from "@/mixins/requestsMixin"; export default { name: "RecipeForm", mixins: [requestsMixin], props: { edit: Boolean, recipe: Object }, methods: { async onSubmit() { const isValid = await this.$refs.observer.validate(); if (!isValid || !this.form.photo) { return; } if (this.edit) { await this.editRecipe(this.form); } else { await this.addRecipe(this.form); } const { data } = await this.getRecipes(); this.$store.commit("setRecipes", data); this.$emit("saved"); }, cancel() { this.$emit("cancelled"); }, onChangeFileUpload($event) { const file = $event.target.files[0]; const reader = new FileReader(); reader.onload = () => { this.$refs.photo.src = reader.result; this.form.photo = reader.result; }; reader.readAsDataURL(file); } }, data() { return { form: {} }; }, watch: { recipe: { handler(val) { this.form = JSON.parse(JSON.stringify(val || {})); }, deep: true, immediate: true } } }; </script> <style> .photo { width: 100%; margin-bottom: 10px; } </style> In this file, we have a form to let users enter their recipe. We have text inputs and a file upload file to let users upload a photo. We use Vee-Validate to validate our inputs. We use the ValidationObserver component to watch for the validity of the form inside the component and ValidationProvider to check for the validation rule of the inputted value of the input inside the component. Inside the ValidationProvider , we have our BootstrapVue input for the text input fields. Each form field has a tooltip with additional instructions. The v-tooltip directive is provided by the V-Tooltip library. We set the content of the tooltip and the classes here, and we can set other options like delay in displaying, the position and the background color of the tooltip. A full list of options is available at https://github.com/Akryum/v-tooltip. The photo upload works by letting users open the file upload dialog with the Upload Photo button. 
The button would click on the hidden file input when the Upload Photo button is clicked. After the user selects a file, then the onChangeFileUpload function is called. In this function, we have the FileReader object which sets the src attribute of the img tag to show the uploaded image, and also the this.form.photo field. readAsDataUrl reads the image into a string so we can submit it without extra effort. This form is also used for editing recipes, so we have a watch block to watch for the recipe prop, which we will pass into this component when there is something to be edited. Next we create a mixins folder and add requestsMixin.js into the mixins folder. In the file, we add: const axios = require("axios"); const APIURL = " http://localhost:3000 ";const axios = require("axios"); export const requestsMixin = { methods: { getRecipes() { return axios.get(`${APIURL}/recipes`); }, addRecipe(data) { return axios.post(`${APIURL}/recipes`, data); }, editRecipe(data) { return axios.put(`${APIURL}/recipes/${data.id}`, data); }, deleteRecipe(id) { return axios.delete(`${APIURL}/recipes/${id}`); } } }; These are the functions we use in our components to make HTTP requests to get and save our data. Next in Home.vue , replace the existing code with: <div class="page" id='top'> <h1 class="text-center">Recipes</h1> <b-button-toolbar class="button-toolbar"> <b-button </b-button-toolbar> Recipes @click ="openAddModal()" variant="primary">Add Recipe <b-card v-for="r in recipes" :key="r.id" :title="r.name" :img-src="r.photo" img-alt="Image" img-top tag="article" class="recipe-card" img-bottom > <b-card-text> <h1>Ingredients</h1> <div class="wrap">{{r.ingredients}}</div> </b-card-text> <b-card-text> <h1>Recipe</h1> <div class="wrap">{{r.recipe}}</div> </b-card-text> <b-button href="#" v-scroll-to="{ el: '#top', container: 'body', duration: 500, easing: 'linear', offset: -200, force: true, cancelable: true, x: false, y: true }" variant="primary" >Scroll to Top</b-button> @click ="openEditModal(r)" variant="primary">Edit </b-card> @click ="deleteOneRecipe(r.id)" variant="danger">Delete <RecipeForm </b-modal> @saved ="closeModal()" @cancelled ="closeModal()" :edit="false" /> <RecipeForm :edit="true" :recipe="selectedRecipe" /> </b-modal> </div> </template> @saved ="closeModal()" @cancelled ="closeModal()":edit="true":recipe="selectedRecipe"/> <script> // @ is an alias to /src import RecipeForm from "@/components/RecipeForm.vue"; import { requestsMixin } from "@/mixins/requestsMixin"; export default { name: "home", components: { RecipeForm }, mixins: [requestsMixin], computed: { recipes() { return this.$store.state.recipes; } }, beforeMount() { this.getAllRecipes(); }, data() { return { selectedRecipe: {} }; }, methods: { openAddModal() { this.$bvModal.show("add-modal"); }, openEditModal(recipe) { this.$bvModal.show("edit-modal"); this.selectedRecipe = recipe; }, closeModal() { this.$bvModal.hide("add-modal"); this.$bvModal.hide("edit-modal"); this.selectedRecipe = {}; }, async deleteOneRecipe(id) { await this.deleteRecipe(id); this.getAllRecipes(); }, async getAllRecipes() { const { data } = await this.getRecipes(); this.$store.commit("setRecipes", data); } } }; </script> <style scoped> .recipe-card { width: 95vw; margin: 0 auto; max-width: 700px; } .wrap { white-space: pre-wrap; } </style> In this file, we have a list of BootstrapVue cards to display a list of recipe entries and let users open and close the add and edit modals. We have buttons on each card to let users edit or delete each entry. 
Each card has an image of the recipe at the bottom which was uploaded when the recipe is entered. For scrolling to top functionality, we used the v-scroll-to directive provided by the V-ScrollTo library. To make scrolling smooth, we set the easing property to linear . Also, we set the duration of the scroll to 500 milliseconds. el is the selector of the element we want to scroll to. Setting force to true means that scrolling will be performed, even if the scroll target is already in view. cancelable is true means that the user can cancel scrolling. x set to false means that we don’t want to scroll horizontally, and y set to true means we want to scroll vertically. container is the selector for the container element that will be scrolled. offset is the offset in the number of pixels when scrolling. The full list of options is at https://github.com/rigor789/vue-scrollTo. In the scripts section, we have the beforeMount hook to get all the password entries during page load with the getRecipes function we wrote in our mixin. When the Edit button is clicked, the selectedRecipe variable is set, and we pass it to the RecipeForm for editing. To delete a recipe, we call deleteRecipe in our mixin to make the request to the back end. The CSS in the wrap class is for rendering line break characters as line breaks. Next in App.vue , we replace the existing code with: <template> <div id="app"> <b-navbar toggleable="lg" type="dark" variant="info"> <b-navbar-brand to="/">Recipes App</b-navbar-brand> <b-navbar-toggle target="nav-collapse"></b-navbar-toggle> <b-collapse id="nav-collapse" is-nav> <b-navbar-nav> <b-nav-item to="/" :active="path == '/'">Home</b-nav-item> </b-navbar-nav> </b-collapse> </b-navbar> <router-view /> </div> </template> <script> export default { data() { return { path: this.$route && this.$route.path }; }, watch: { $route(route) { this.path = route.path; } } }; </script> <style lang="scss"> .page { padding: 20px; margin: 0 auto; max-width: 700px; } button { margin-right: 10px !important; } .button-toolbar { margin-bottom: 10px; } .tooltip { display: block !important; z-index: 10000; .tooltip-inner { background: black; color: white; border-radius: 16px; padding: 5px 10px 4px; } .tooltip-arrow { width: 0; height: 0; border-style: solid; position: absolute; margin: 5px; border-color: black; } &[x-placement^="top"] { margin-bottom: 5px; .tooltip-arrow { border-width: 5px 5px 0 5px; border-left-color: transparent !important; border-right-color: transparent !important; border-bottom-color: transparent !important; bottom: -5px; left: calc(50% - 5px); margin-top: 0; margin-bottom: 0; } } &[x-placement^="bottom"] { margin-top: 5px; .tooltip-arrow { border-width: 0 5px 5px 5px; border-left-color: transparent !important; border-right-color: transparent !important; border-top-color: transparent !important; top: -5px; left: calc(50% - 5px); margin-top: 0; margin-bottom: 0; } } &[x-placement^="right"] { margin-left: 5px; .tooltip-arrow { border-width: 5px 5px 5px 0; border-left-color: transparent !important; border-top-color: transparent !important; border-bottom-color: transparent !important; left: -5px; top: calc(50% - 5px); margin-left: 0; margin-right: 0; } } &[x-placement^="left"] { margin-right: 5px; .tooltip-arrow { border-width: 5px 0 5px 5px; border-top-color: transparent !important; border-right-color: transparent !important; border-bottom-color: transparent !important; right: -5px; top: calc(50% - 5px); margin-left: 0; margin-right: 0; } } &[aria-hidden="true"] { visibility: hidden; 
opacity: 0; transition: opacity 0.15s, visibility 0.15s; } &[aria-hidden="false"] { visibility: visible; opacity: 1; transition: opacity 0.15s; } } </style> to add a Bootstrap navigation bar to the top of our pages, and a router-view to display the routes we define. Also, we have the V-Tooltip styles in the style section. This style section isn’t scoped so the styles will apply globally. In the .page selector, we add some padding to our pages and set max-width to 700px so that the cards won’t be too wide. We also added some margins to our buttons. Next, in main.js , we replace the existing code with: import Vue from "vue"; import App from "./App.vue"; import router from "./router"; import store from "./store"; import BootstrapVue from "bootstrap-vue"; import VTooltip from "v-tooltip"; import "bootstrap/dist/css/bootstrap.css"; import "bootstrap-vue/dist/bootstrap-vue.css"; import { ValidationProvider, extend, ValidationObserver } from "vee-validate"; import { required } from "vee-validate/dist/rules"; extend("required", required); Vue.component("ValidationProvider", ValidationProvider); Vue.component("ValidationObserver", ValidationObserver); Vue.use(BootstrapVue); Vue.use(VTooltip); Vue.config.productionTip = false; new Vue({ router, store, render: h => h(App) }).$mount("#app"); We added all the libraries we need here, including BootstrapVue JavaScript and CSS, Vee-Validate components along with the validation rules, and the V-Tooltip directive we used in the components. In router.js we replace the existing code with: import Vue from "vue"; import Router from "vue-router"; import Home from "./views/Home.vue"; Vue.use(Router); export default new Router({ mode: "history", base: process.env.BASE_URL, routes: [ { path: "/", name: "home", component: Home } ] }); to include the home page in our routes so users can see the page. And in store.js , we replace the existing code with: import Vue from "vue"; import Vuex from "vuex"; Vue.use(Vuex); export default new Vuex.Store({ state: { recipes: [] }, mutations: { setRecipes(state, payload) { state.recipes = payload; } }, actions: {} }); to add our recipes state to the store so we can observe it in the computed blocks of the RecipeForm and Home components. We have the setRecipes mutation to update the recipes state, and we use it in the components by calling this.$store.commit("setRecipes", response.data); like we did in RecipeForm . Finally, in index.html , we replace the existing code with: <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width,initial-scale=1.0" /> <link rel="icon" href="<%= BASE_URL %>favicon.ico" /> <title>Recipe App</title> </head> <body> <noscript> <strong >We're sorry but vue-tooltip-tutorial-app doesn't work properly without JavaScript enabled. Please enable it to continue.</strong > </noscript> <div id="app"></div> <!-- built files will be auto injected --> </body> </html> to change the title. After all the hard work, we can start our app by running npm run serve . To start the back end, we first install the json-server package by running npm i json-server . Then, go to our project folder and run: json-server --watch db.json In db.json , change the text to: { "recipes": [] } This gives us the recipes endpoints that the functions in requestsMixin.js call. After all the hard work, we get a working recipe app where each card has a ‘Scroll to Top’ button that smoothly brings you back to the top of the page.
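If all you need is the basic “back to top” behavior without Vue-ScrollTo’s extra options, the native browser API mentioned at the start of this article is enough on its own. Below is a minimal sketch (not part of the tutorial’s code; the component and file names are made up) of a standalone button component that uses window.scrollTo with smooth scrolling. The v-scroll-to directive used above remains the better choice when you want custom easing, offsets, or cancelable scrolling.

<!-- ScrollToTopButton.vue (hypothetical component, for illustration only) -->
<template>
  <!-- A reusable "back to top" button; clicking it scrolls the window to the top -->
  <b-button variant="primary" @click="scrollToTop">Scroll to Top</b-button>
</template>

<script>
export default {
  name: "ScrollToTopButton",
  methods: {
    scrollToTop() {
      // Native browser API: smooth-scrolls the page back to the top.
      // No library needed, but there is no custom easing, offset, or cancel handling.
      window.scrollTo({ top: 0, behavior: "smooth" });
    }
  }
};
</script>

Registering this component once and dropping it into each card would give roughly the same user-facing behavior as the directive version, just with fewer knobs to turn.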
https://medium.com/swlh/how-to-add-scroll-to-top-feature-in-your-vue-js-app-9adf799a04c1
['John Au-Yeung']
2019-11-28 15:36:38.455000+00:00
['Technology', 'Programming', 'Software Development', 'JavaScript', 'Vuejs']
Designing for Peace of Mind
Growing up is one of the few experiences that everyone can relate to. And whether it’s a distant memory or a more recent one, a hallmark of childhood is that healthy push-pull between freedom and safety. We want breathing room for exploration and growth, but also parameters within which to feel safe and respected. How those are defined and enacted differs for every family, however, so when a group of people at Microsoft came together to build the Family Safety app, you can imagine the breadth and complexity of that purview. It wasn’t just about identifying core needs and corresponding solutions, it was also about creating a design framework that would be robust and flexible enough to apply atop any family’s needs. Ultimately, the app’s ability to drive productive conversations proved central to their design thinking. Conversations support family achievement in ways that are effective, sustainable, and collaborative because they foster co-creation and respect the agency of kids and adults alike. Short or long-term goals, screen time, grades, driving — these are among the many topics that families come together to define, discuss, and plan for. From a design perspective, we can create experiences that support and spark those conversations. Conversations where you co-define the environments you want to create, habits you want to build, and adjustments you might need to make while en route because, as 2020 has reminded us in ALL CAPS, life is unpredictable. Through building better habits and conversations, the app hopes to provide families with more peace of mind. Creating emotive and inclusive visual designs
https://medium.com/microsoft-design/designing-for-peace-of-mind-defebeddd6da
['Cayla Dorsey']
2020-08-06 18:07:37.597000+00:00
['Family', 'Safety', 'Technology', 'Microsoft', 'Design']
WordPress: One of the Best Content Management Systems
WordPress is an impressive content management system (CMS) built upon a PHP / MySQL foundation. By all counts, it is the most utilized CMS in the world, installed on some 60 million plus websites and used by over 20% of the top 10 million websites. WordPress enables people to maintain and manage website content through a web-based “backend” system. It provides relatively simple mechanisms for creating blog posts and static pages, then organizing those posts and pages into categories and menus. It also provides a nice media library tool useful for managing the image assets in use within your site. The media library handles technical overhead like thumbnail generation. For developers, WordPress delivers well-documented mechanisms for extending its core functionality through a theming architecture, a plugin architecture, function overrides, and action hooks & filters. This flexibility has given birth to a huge community of developers who have contributed themes and plugins that help people customize their site and get more functionality out of it. And of course, expert WordPress developers can use this functionality to build wholly unique themes and features for their clients and employers. Pros: Provides a working CMS “out of the box” Simple customizations are pretty easy to do Lots of themes and plugins to satisfy common needs Relatively fast, especially with performance plugins Great for SEO with SEO plugins Non-technical people can maintain and manage the site content Great for news, opinion, magazine, and small business marketing sites where the focus is on content It’s free and open source! Cons: Its flexibility enables inexperienced developers to implement bad practices that can make upgrades difficult or impossible Not well suited for use with highly custom sites where unique functionality is the focus Not well suited for use in situations with many different structured document types Does not naturally take advantage of many of the latest techniques and technologies in web development Is WordPress Right for Me? Great question, and, of course, it depends. The question of which platform to develop your project on is an important one, and should be carefully considered with respect to the vision for your site. If you expect your site’s success will be driven primarily by the content you (or your writers) produce, and that content will have a relatively simple organizational structure, then WordPress is probably a great platform for your site. On the other hand, if you expect the primary value of your site to be in its unique features, or you have complex data structures, or expect lots of growth in unpredictable ways, then you may want to consult with some experts and consider alternative platforms. Endertech’s WordPress Tips & Tricks Use the Yoast SEO Plugin For most sites, SEO is a very important consideration. You need to be producing content for your site that searchers will find relevant, and you need to structure the content so that search engines know what the page is about. The Yoast SEO plugin provides both tools and a lot of helpful guidance to assist in these matters. Use the W3 Total Cache Plugin Performance is a critical component of your site’s success. People are used to the speed of Google and other top echelon sites, and can quickly get frustrated with slow performing sites. 
Many of the techniques for achieving top speeds are quite sophisticated, and the W3 Total Cache Plugin does a great job of simplifying their implementation and providing an interface for switching various performance features on and off. Use Child Themes Many amateur developers, just trying to do things quick and easy, will hack away at installed themes and core files to get the desired look or function for a WordPress site. This is the wrong approach. The proper way to extend a theme is by following the established process for creating a Child Theme. This pattern will prevent you from hacking on core files, and enable the underlying WordPress installation to be safely updated. Don’t Hack the Core Similar to using Child Themes… don’t modify core WordPress files. This breaks future compatibility, and can introduce serious security concerns. Extend functions within your Child Theme, or create your own plugins for most advanced functionality. Be aware that the best WordPress plugins often times provide hooks you can use to override default functionality with minimal effort. Interact with Us! → Instagram | Facebook | Twitter See our latest and most read stories on our new Medium publication, Endertech Insights !
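To make the Child Theme tip above concrete, here is a minimal sketch of the two files a child theme needs. This is an illustration rather than Endertech’s own code, and the theme and folder names (“My Child Theme”, “parent-theme”) are placeholders; the Template header must match the parent theme’s directory name.

my-child-theme/style.css:

/*
 Theme Name: My Child Theme
 Template:   parent-theme
*/

my-child-theme/functions.php:

<?php
// Load the parent theme's stylesheet so the child theme can override
// styles and templates without ever editing the parent or core files.
add_action( 'wp_enqueue_scripts', function () {
    wp_enqueue_style(
        'parent-style',
        get_template_directory_uri() . '/style.css' // resolves to the parent theme's directory
    );
} );

Activating the child theme in the WordPress admin then lets you add overrides safely, and parent theme updates no longer wipe out your changes.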
https://medium.com/endertech-insights/wordpress-one-of-the-best-content-management-systems-ef1273b673ef
[]
2017-09-12 18:55:20.862000+00:00
['WordPress', 'Content Marketing', 'Development', 'Web Development', 'SEO']
Corporate Storytelling
Corporate Storytelling Preserving stories of companies & businesspeople Corporate Books We have broad experience in producing high-quality books for institutions & corporations. From the history of a company to the professional trajectory of an entrepreneur, an executive or the dean of a school. Here are some samples of our work. “Flash of Genius” The history of Andrew Corporation Founded in 1937 in Chicago, Illinois, the Andrew Corporation was the most important manufacturer of satellite antennas. A few years ago the company was sold, and in order to preserve the legacy of his grandparents, and that of the company, Edward Andrew Jr. hired MSB Storytelling to produce “Flash of Genius.” Browse the book online A life in the company A retirement surprise present The CEO of Carlson Wagonlit Travel, Geoffrey Marshall, receiving his surprise retirement book. Companies such as American Express, Aegon, Pfizer, and Carlson Wagonlit, among others, have hired MSB Storytelling to produce a surprise retirement book as a present to honor top executives retiring from their companies. Watch Geoffrey Marshall’s testimonial A hundred years, thousands of lives Schools and Colleges anniversary books We produced several books for educational institutions celebrating their 100- and 50-year anniversaries: Belgrano Day School (100 years), St. Agnes School (50 years) and St. Hilda’s College (100 years). We also made many surprise retirement books for their Deans and Directors. Watch a General Director testimonial. THE STEEL COVER BOOK Company Anniversary Book Guidi Industries, a metallurgical engineering company that produces metal parts for the automotive industry, celebrated its 70th anniversary. At MSB Storytelling we designed an original anniversary book with molded steel covers.
https://medium.com/msb-storytelling/corporative-storytelling-454b5b8ed2a5
['My Special Book']
2018-02-09 19:51:34.578000+00:00
['Anniversay', 'Corporate Books', 'Retirement Gift', 'Storytelling']
Warby Parker’s Online Vision Test Provides Clues About The Difficulty Of Disrupting Healthcare
Can the healthcare industry be disrupted? Warby Parker is one of the companies that is certainly trying. It has created a platform on which customers can take an online vision test and get prescription glasses ordered and delivered. Founded in 2010 by four classmates from Philadelphia’s Wharton School of Business, Warby Parker is rumored to be valued at about $1 billion. There has also been some excitement about Amazon’s recent moves into the pharmacy business. Can these new entrants, with a technology background succeed in these industries? When we think of disruption, we are often impressed by companies that develop new technologies and clever business models. But successful disruption requires more than that. In fact, disrupting industries such as healthcare, education and transportation will require more than just clever products, services and business models — companies will have to think more deeply about their business environment and work hard to influence policy decision-making. The Third Wave In the book The Third Wave, Steve Case identifies three waves of internet technologies. In the first wave (1985–1999), companies like Cisco, IBM, AOL and others were building the necessary technologies for the internet. Their work mostly involved laying the foundations of the online world that we have today. In the second wave (2000–2015), the app economy and mobile revolution took hold. Companies like Amazon, Google, Facebook, Twitter and others were able to leverage the foundations built in the first wave to create great products and services. There is evidence that this second wave has peaked and the next technology wave is now starting to take place. According to Steve Case, during the third wave (2016-present), the internet will be fully integrated into everything we do and every product we use. In the era of the internet of everything, industries such as healthcare, education, transportation and food production will be impacted by ubiquitous connectivity. Beyond Innovation All this is exciting news if you are an emerging startup, but caution needs to be taken. During the second wave, successful startups were created on top of infrastructures that had already been built during the first wave. A high school senior could create a disruptive startup from their bedroom. This was the era of incubators, accelerators, minimal investments and rapid growth. The third wave is much more similar to the first wave. Disrupting incumbent companies during the third wave will be a lot more difficult and expensive. Innovators will have to make larger financial investments, form partnerships with other companies and influence government policy. These requirements play well into the inbuilt advantages of large companies, who have the resources to invest in the R&D required to create healthcare products. When it comes to resources, Warby Parker is not exactly poor. As already noted, the company is rumored to be worth $1 billion. However, in the United States, there are currently 11 states in which the use of online vision testing is illegal. People wanting to buy new prescription glasses have to go and see an optometrist. In these states, the business environment matters more than Warby Parkers clever technology and business model. The company will only succeed in these states if it can somehow get this legal policy changed. The Business Environment Uber have faced similar challenges, as they have tried to scale in Europe. 
They have been banned or voluntarily pulled out of several countries such as Denmark, Greece and Hungary. This is not because their product or service is awful. In my opinion, Uber has a great business model — perhaps one of the best in the way it leverages technology to deliver on demand transportation services. However, in order for Uber to scale globally into every country it has to work with policy makers within those geographies. The business environment matters more for Uber than its clever technology and business model. Airbnb has also faced similar challenges with regulations in different cities across the world. Tesla and other electric car makers are on track to change how we drive over the next decade. But it is still a fact that the global success of companies working in this industry will depend on policy changes and significant infrastructure developments. Beyond how great the products are, the business environment matters. On Disruption The healthcare industry can and will be disrupted. The same is true for education and transportation. But the companies that will succeed are those that are not just focused on their technology and business models. The companies that will succeed will be those that will invest time, energy and resources into changing their business environment. This will involve challenging vested interests and getting governments and other institutions to change their policies and laws. This is not a job for the high school senior working from their bedroom. This article was first published on Forbes where Tendayi Viki is a regular contributor. Tendayi Viki is the author of The Corporate Startup, an award winning book on how large companies can build their internal ecosystems to innovate for the future while running their core business.
https://tendayiviki.medium.com/warby-parkers-online-vision-test-provides-clues-about-the-difficulty-of-disrupting-healthcare-b9fee46fe4a4
['Tendayi Viki']
2018-11-05 07:31:01.676000+00:00
['Innovation', 'Entrepreneurship']
How to Select All <div> Elements on a Page using JavaScript
How to Select All <div> Elements with jQuery For completeness, I think it is important to look at how you would select all <div> elements using the popular jQuery library, in case you start working on a project using jQuery or just prefer jQuery’s syntax. Though I don’t usually use jQuery myself, I find it’s useful to be aware of jQuery syntax so I know what I’m looking at when I see it. While jQuery’s popularity has been decreasing in favor of native syntax and frameworks like React or Vue, you’ll commonly see it on older projects. Needing to know jQuery syntax is especially useful when looking at legacy code that someone might not have updated in the last five or ten years. The jQuery syntax is a little simpler, but not much more: $("div") is the jQuery equivalent to document.getElementsByTagName("div") . Here is the API documentation on jQuery’s element selector: “Description: Selects all elements with the given tag name. → element: An element to search for. Refers to the tagName of DOM nodes. JavaScript’s getElementsByTagName() function is called to return the appropriate elements when this expression is used.” — jQuery Docs As you can see, $("div") is actually just a wrapper for the JavaScript function we already covered — syntactic sugar for convenience purposes. Here is a brief code example, where I load jQuery from the console and then use it to select all the <div> elements on the page. View the raw code as a GitHub gist Note that jQuery’s $("div") method returns the exact same HTMLCollection as the getElementsByTagName() call — the jQuery selector is just a wrapper around the native JavaScript syntax that makes it more concise.
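For a concrete comparison, here is a small sketch (my own example, separate from the gist embedded above) that selects every <div> on the page both ways and confirms the two approaches find the same elements.

// Native DOM API: returns a live HTMLCollection of all <div> elements.
const nativeDivs = document.getElementsByTagName("div");

// jQuery equivalent (assumes jQuery has already been loaded on the page).
const jqueryDivs = $("div");

// Both selections cover the same elements.
console.log(nativeDivs.length === jqueryDivs.length); // true

// HTMLCollection has no forEach, so convert it to an array to iterate.
Array.from(nativeDivs).forEach((div) => {
  console.log(div.id || "(no id)");
});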
https://medium.com/datadriveninvestor/how-to-select-all-div-elements-on-a-page-using-javascript-9b2cd16af740
['Dr. Derek Austin']
2020-11-19 20:03:26.272000+00:00
['Software Engineering', 'Programming', 'Software Development', 'Web Development', 'JavaScript']
It Gets Dark Before the Light
“Where thoughts go, energy flows.” — a spiritual proverb You were told not to be stupid, but you were already smart. They convinced you that you needed to create an identity, but you were born perfect. They said that you needed to exceed the average. It was a lie. Average people don’t exist in the real world. You were unique. You were sorry that you were different. They said they loved you, but they didn’t know how. (They tried their best.) They demanded that you fight their demons for them because they failed. You tricked yourself into believing they were your demons. If you were a boy, you needed to make enough money, said the people who failed at love. If you were a girl, you were too fat or your hair was too curly, said the sad people who failed at love. You bought the clothes. You bought the makeup. You bought the house. You bought everything that anyone would ever want, just in case. You thought you were kind. No one bought it. The ground beneath your feet began to shift violently. Doubt began to seep. You stopped believing you were smart. The days became long. You took every relationship that had shown you unconditional love and smashed it to pieces. You sighed with relief. There was nothing left to screw up. You were finally alone. You let yourself cry. You stopped struggling. Everything became very still. You thought you would die but your heart kept beating. It didn’t care what you thought. Your heart stopped apologizing. It became quiet. You were still breathing. You saw the upside to the blackness of the night. If you fell, you knew you could get up. It was hard at first. You stopped fretting and you started exploring. You practiced falling. From the dark, there comes the light. Thank god for the dark. Merry Christmas, everyone, and a happy New Year. Call to action Hit the ❤ button if you liked this article! You’ll help others find it. Sign up for my free weekly newsletter (packed with thoughts I don’t share anywhere else) at janehwangbo.com.
https://medium.com/personal-growth/it-gets-dark-before-the-light-1bff1d97d2d8
['Jane Hwangbo']
2017-04-06 21:52:13.265000+00:00
['Love', 'Life Lessons', 'Entrepreneurship', 'Life', 'Poetry']
My Own Time
Photo credit: Pixabay So between my pending surgical procedures and my mother-in-law’s recent diagnosis of T-cell lymphoma, I find myself “working” from home again. I helped my client find a full-time caregiver since I am no longer able. Ok, so I haven’t exactly figured out where the money will come from yet, however, I have been working hard, taking more classes in copywriting, doing little projects and sending work out here and there. Suddenly though, my family seems to think that just because I no longer work outside the house, that I have all the time in the world! “You aren’t doing anything now, you should be able to go shopping when you want, take Claudette to all of her appointments, etc.” Uh, no. What I said was, if I was able to, I wouldn’t mind doing what I can for whomever needs it to keep my husband, or brother-in-law from missing any work. That doesn’t mean I’m at everyone’s beck and call 24/7! I’m trying to write here, I’m working on my own time, fulfilling my dream of becoming a freelance writer, not twiddling my thumbs at home. And actually, it isn’t my whole family making me feel that way, just my sister-in-law. She really doesn’t get it I guess, that what I do here IS a job. Even if I haven’t made any big money yet, just pennies really. But someday, I’ll have it all worked out. Alright, just had to get that off my chest, rant over. Anyone else ever feel that way? I know you do, I’ve heard it before from many of you writers out there. How do you deal? What do you say to these non-believers that your work is justifiable? Me, I’m just waiting until that day the check arrives in the mail, or I finally sign that first client. Then I can show her and say, “Told ya.”
https://medium.com/100-naked-words/my-own-time-30e7b4268abe
['Kim Smyth']
2017-10-12 19:01:01.134000+00:00
['100 Naked Words', 'Freelance Writing Jobs', 'Work Life Balance', 'Writing']
A girl born with 4 legs and 2 vaginae!!?
Often, when people tell someone about themselves, they mention their date of birth, date of marriage or similar things related to their life. But have you ever read of a person giving details of their body parts in their introduction? Like, 'I have two hands, one eye and two ears.' It sounds utterly silly, but there was someone who needed to do just that. She was different from the rest of the world, and in her introduction she had to say, 'I have four legs, not two.' This is Myrtle Corbin Myrtle Corbin was born in Tennessee, United States. Her birth surprised everyone. The very innocent and lovely Myrtle came into this world with only one difference compared to the rest of the people: her four legs. Yes, Myrtle had not two but four legs, two in the usual position, and the other two attached to a separate pelvis in the middle of her body. According to doctors, the two inner legs were weak, and Myrtle could not fully control them. These legs were shorter and more delicate than her other legs. These feet belong to none other than Myrtle According to doctors, the middle two legs were not her own, but those of her conjoined twin sister. This may be hard to understand and seem strange, but it is true. Doctors say that sometimes twins are conjoined, and the body of one develops to full size while only some parts of the other's body remain, joined to the first child. This means that Myrtle had a twin sister who was in her mother's womb with her but was never born as a separate child; Myrtle came into the world carrying her twin sister's legs. This world is amazing It is strange and bizarre to be born carrying someone else's body parts. According to doctors, Myrtle could control her unborn sister's limbs to a degree, but it was very challenging for her to use them while walking. It was also said that those two inner legs had only three toes each. This strange thing about Myrtle also made her famous all over the world. When she was only 13 years old, a biography was written about her life, named 'Biography of Myrtle Corbin.' Myrtle also married Myrtle also had a sister named Willey Ann. She was married in the year 1885 to a boy named Locke Bunnell. Locke had a brother, Doctor James Clinton Bicknell, who proposed marriage to Myrtle shortly after his brother's wedding. Myrtle and James's marriage was a testament to true love. There is one more important point here: not only Myrtle's body but also her unborn sister's could have sex. That is, not one but two vaginas were present in Myrtle's body. Myrtle is said to have given birth to eight children, three of whom died in childhood. It has also been said that some of her children were born from one vagina and the others from the second. Whether this fact is true or not, it is considered medically possible. This world truly is a miracle. Just think: how would you feel if you had 4 legs or 3 hands or 3 eyes or 4 ears? I am amazed, and you?
https://medium.com/illumination/a-girl-born-with-4-legs-and-2-vaginae-62c3b8cafefc
['Nilesh Mithiya']
2020-12-19 19:26:38.310000+00:00
['Mystery', 'Girls', 'Biology', 'Vagina', 'Science']
Digital radio brings tight targeting and happy listeners
Comedians Lee and Dean, who present special shows for Fix Radio in May 2018. One of the benefits of digital platforms is that you can launch new, niche radio stations there. My daughter goes to sleep listening to a children’s radio station on her digital radio, as one example. The programming from Kinderling Kids Radio from Sydney, is safe and age-appropriate — and successfully relaxes her for a good night’s rest. Another example is a Chicago start-up, Quantum Music, which is running online radio stations for the seven million Chinese ex-pats who live in North America. There’s clearly a need for a station like this — but probably not possible for an FM frequency everywhere. And then, in the ultra-competitive market of London, where a typical listener can find over 110 radio stations on their DAB receiver, there’s Fix Radio. See what they did there Ask anyone who’s had work done on their home or office recently, and they’ll tell you that builders, plumbers, plasterers and the rest are heavy radio listeners, right through their work day. Fix Radio — tagline “We’re nailing it!” — is a radio station specifically aimed at builders and tradespeople. The station, available on DAB, online and through an app, has just celebrated its first birthday, and is growing both in listenership and in advertising — describing itself as an “important platform to the construction industry’s biggest brands”. Listener marketing in a city which is often cold and damp has been well-targeted and simple: the “Bacon Butty Tour” consists of brightly-coloured vans which drive to building sites across London: they have so far handed out over 20,000 hot butties to promote the station. Craftily, the vans also visit clients, too. Butty delivery van. White, naturally. If the weather is important to the marketing of the station, it’s also important content. Detailed weather updates keep outdoor workers informed — laying concrete in the rain isn’t a good plan, it turns out — and the station’s website nicely contains specific weather forecasts for some of London’s most busy construction areas. Programming is nicely tailored to the audience, who listen long and often to the station. There’s a “no-repeat guarantee” for the music they play, which is all up-tempo. There’s daily construction industry news, and sports news; lots of “music marathons” as well as focused chat in drivetime periods. The station also airs a weekly show with advice on everything from tool thefts to tax advice and doing your own accounts; and as any good station does, it supports a charity that is relevant to its listeners — a construction industry charity, in this case. New platforms are, essentially, making these types of radio stations possible. Stations aimed at this tight demographic couldn’t operate in the limited world of FM spectrum; but with digital, radio stations can target much tighter. Fix Radio seems to be a good example of what you can do with a clear focus on your audience, and a digital platform to reach them.
https://jamesrcridland.medium.com/digital-radio-brings-tight-targeting-and-happy-listeners-4c99665198b9
['James Cridland']
2018-04-29 01:28:22.375000+00:00
['Radio', 'Targeting', 'Marketing', 'Digital', 'Content']
Cover the King
Cover the King Author’s note- This story is an homage to the genre of mystery/crime short that once appeared in magazines like Ellery Queen or Arthur Hitchcock Magazine. It was while Bethanne was constructing the timeline for the homicide detective that she declared that everything that occurred that morning stemmed from an innocuous invitation the year before. “I should have known then, that we were here on borrowed time,” she said. “Known when?” asked Detective Simpson. “Right after we moved into the complex. Our neighbors, the Rileys came to introduce themselves. They said we’d have to come over for a night of cards sometime. I should have known right then and there that no good would come of it. Not that I could have stopped it, of course. Saying the words “cards” to Dick was like saying “cheesecake” to someone on a diet. Once it got into his head, he’d become obsessive about it.” “Liked to play, did he?” asked the policeman. “For money, or just simple enjoyment?” “Both,” responded Bethanne. “Although I don’t think his enjoyment was simple,in any sense. It was more . . . primordial might be the only way to describe it, or maybe primal. You see, he simply had to win.” “Sore loser, then?” Detective Simpson scribbled into his notebook and added an asterisk and underlined the sentence. Bethanne shook her head wryly. “That doesn’t even come close to describing it, Detective. If he lost, it could only be through sinister forces in the universe working against him or maybe a cheating opponent. He would actually throw the cards on the floor, like a four year old. It was the most embarrassing thing you could possibly imagine. If only I had listened to my mother. Oh well.” She frowned and began scraping at a spot on her apron. “Your mother didn’t like him?” asked Detective Simpson. “Oh no, she never met him” answered Bethanne. “But he failed her boyfriend test. Before she died she told me it was fail-safe and to never get involved with any man who tanked on both parts.” The female officer who was sitting in on the interview, Officer Lucas, who until now had been sitting silently making her own notes during the conversation, suddenly perked up and leaned in to ask her a question. “What is this ‘boyfriend test’ of your mother’s? Professional interest only” she added when Simpson slightly sniggered. “Oh, when I was about sixteen she told me that all you needed to know about whether someone was good boyfriend or even husband material, was to give him two tests as early as possible, so as not to waste time”. “And these were, what?” asked Lucas. “ Should I leave the room?” interrupted Detective Simpson. “You’ll tell me but then have to kill me, violation of the sisterhood sort of thing?” Bethanne looked at him and smiled. “You’ll think they’re very silly. But it’s as accurate as a Geiger counter and radioactivity”. “Please, don’t keep us waiting.” Officer Lucas looked like she was going to snap her pencil in suppressed impatience. “Number 1 — ask him in a very public and crowded place to hold your purse while you go to the ladies room. If he refuses, strike one. Very, very serious strike.” Detective Simpson shifted uncomfortably. “So what’s the big deal if he doesn’t want to hold your purse?” “Insecure in his masculinity” replied Bethanne. “He doesn’t know anyone there, what does he care if a bunch of anonymous strangers see him holding a purse for a couple of minutes? If he’s that insecure, warning, warning,red alert, abandon ship. 
Probably in constant need to be domineering and in control of his docile female partner at all times. At least according to my mom.” “Seems pretty flimsy to me,” Detective Simpson snorted. “Baby /bathwater thing. Hardly a firing offense.” “Test 2?” Officer Lucas was determined that they not be diverted. “Test 2 is to play a game with them and see how they handle both winning and losing. Someone who has to always win has no sense of priorities or possibly even compassion or empathy. It’s fine to enjoy winning, but it’s a matter of degree, do you see?” Bethanne stared intently at Officer Lucas, totally ignoring Detective Simpson. Simpson took the opportunity to go over to the pot and refresh his coffee. He recognized that Lucas had achieved one of her famous ‘bonding moments’ with the subject and he had no desire to impede the process. “Oh, I see completely. My ex-husband enjoyed nothing more than bankrupting everyone else in Monopoly. Even the kids when they were little! I like your mom’s test. I can think of a few people to share it with.” Officer Lucas tapped some quick notes into her personal phone before closing it and picking up her official notebook again. “I think we’ve strayed from the subject at hand,” said Detective Simpson, returning to the table. “Let’s get back to the card party at the Riley’s. That was when — last October? What happened then?” Bethanne sighed. “Everything that I would have expected to happen. Dick started off okay, but as the night wore on and he had more than a few drinks, the real Dick made his appearance. He ridiculed people for their plays, insisted on showing them how they could have managed their hand better, disparaged the food, and won a great deal of money. Detective, have you ever heard anyone chortle?” “Chortle? You mean like laugh? That’s Advanced Placement English to me. Who chortles except in Victorian novels?” Detective Simpson looked at his watch and sighed. Would this interview ever end? He’d had enough of maternal character tests and vocabulary quizzes. Get on with it, lady! “Why is this relevant?” Officer Lucas, ever diligent, thought Bethanne in her own meandering way, was leading them down the road to the all important motive responsible for the corpse in the dining room. Bethanne twisted her wedding ring and looked at Simpson. “You’re right, Detective. Most people have never heard an actual chortle in real life. It’s the sound of triumph, glee, greed and victory all wrapped up together in one sound. That was Dick’s only laugh. His single enjoyment was when he had reduced someone else to rubble. Anyway, that night all he did was chortle. He made everyone feel like imbeciles and poor cooks. I saw the looks and I knew that we would never be invited to another social gathering at Meadows of Runnymeade as long as we lived here. Same old story.” She started to sniffle and accepted the tissue offered by Officer Lucas before continuing her narrative. “Officers, we have moved every other year for as long as I can remember. He didn’t mind being a social pariah, but I did. I got lonely, you see. All I ever wanted was to have people over to dinner and play cards and have shopping outings with girlfriends like other people but Dick always spoiled it. He always says that this time it will be different, he’ll keep it in check, be everyone’s best bud, but he never does. Show him the deck of cards and he has to be Master of the Universe. So we’re looking at new complexes now that our lease expires next month. I really really liked it here. 
I wanted to stay, but Dick says who wants to live surrounded by village idiots.” A single lonely tear traced it’s way down Bethanne’s cheek. “Tell us what happened this morning, Mrs. Morrow. There must be a reason for what you did.” Officer Lucas spoke softly and kindly to the drab little woman who sat across from her at the table and tried not to notice the numerous speckles of blood that covered her apron. “I was making breakfast for Dick and listening to my morning show on the radio. He had gone out to get the mail and had just come back in. I had my solitaire game laid out on the table so I could get back to it after Dick ate and I had done the dishes. I had just learned this game a little while ago- double decks and quite complicated. I never win, maybe 1 in fifty. I was about to have a win, I could just feel it. All the kings were covered. But breakfast has to be right at nine, or else, so I had to break away. So Dick comes in with the mail,takes a look at my game, picks up the cards and plays out the hand in about two minutes. I didn’t even know he knew this game! Anyway, after winning, he pushed all the cards together, both decks (!), so sorting them will take forever, and then says “Bethanne, why do you waste your time on that rubbish? Surely there’s something more exciting for you to do aside from endless solitaire. I think your brain is rotting from inactivity. And this garbage on the radio!” and then he turned off my show. I served him his breakfast and we ate in total silence. I felt very strange, like there was a big weight sitting inside of me that was bursting to get out. I kept quiet and took deep breaths and thought of nice things, just like my daughter showed me. She’s a beautiful girl, lives in Seattle now so we never get to see her. And then I did the dishes. Dick went through the mail and there was something there from our property manager. We own a few rental properties, Dick buys them at tax sales. They’re all really run down and I’m embarrassed to even own them. Anyway, we had this one tenant, a really nice woman who’s lived there for five years but lately the rent has been slow because she got sick, poor thing. I asked if we couldn’t give her a break as all the other rents were coming in just fine and we had a surplus in the account but Dick said that while one old cow feeling sorry for another old cow was touching, business was business and it was no concern of his if someone got sick, that was their problem. Dick insisted on reading the letter from the property manager out loud, even though I said he didn’t need to. But he just had to share his victory. It said that our “issue” on River Street had been resolved, the eviction was successful and we would get to keep the entire security deposit. And then he did it. I was drying my old heavy cast iron skillet and I had it right in my hands when he did it. I had a kind of flash and I thought my brain had actually exploded. I don’t really remember what happened next, but when I came out of it, he was lying there and I still had the skillet in my hand and it had blood all over it. So did my apron and the tablecloth and the rug. I can’t believe that I did that. If only he hadn’t done it, maybe he would still be alive and we could move again and get a fresh start.” Bethanne finally lost her composure and slumped over the table, her body wracked by giant heaving sobs. “Mrs. Morrow, I’m missing something. What exactly did your husband do that upset you so much? Did he try to hit you or abuse you in anyway? 
Was it self defense?” Detective Simpson waited for the answer that he knew would determine this woman’s future — whether charges would be pressed, a trial held, prison time or not, the whole ball of wax. “He. . . he . . . CHORTLED! And I knew it couldn’t go on. I couldn’t go on and he couldn’t go on, so I just tried to make it stop.” *** Later, after booking was complete and all the reports filed, Detective Simpson and Officer Lucas headed out to the Central Marketplace and Food Court to catch a bite to eat after a long and arduous day. They were waiting in line for the latest craze, Hong Kong bubble wrap waffles with short ribs, when Detective Simpson felt something shoved into his hands. “Hold this, while I catch the Ladies Room. I won’t be a sec,” Officer Lucas dashed off and melted into the throngs of consumers. Detective Ernie Simpson looked down and saw that he was now custodian of Officer Eleanor Lucas’ handbag. — — — — — — — — — — — — — — — — — — — — — — — — © 2018 Valerie Kittell
https://medium.com/thecabbagegarden/cover-the-king-9134ad9c7656
['Valerie Kittell']
2020-08-22 17:43:37.269000+00:00
['Short Story', 'Fiction', 'Lit', 'Writing', 'Mystery']
Chi-Square Hypothesis Testing in Statistics
Chi-Square Hypothesis Testing in Statistics Testing the association between categorical features Photo by William Iven on Unsplash The chi-square test is a non-parametric hypothesis test used to check the association between two categorical features in bivariate data. Non-parametric tests are distribution-free tests: they rest on very few assumptions and do not require the data to be normally distributed. They are useful when the target variable is not normally distributed, for example when the target is ordinal or nominal, or when outliers are present. A chi-square statistic is also used to test whether the variance of a sample equals the variance of the population from which the sample was taken, which is why it is sometimes called the hypothesis test for population variance. To test whether one categorical variable is associated with (has an effect on) another categorical variable, we set up the two hypotheses shown below: H0: The two categorical variables are independent of each other. H1: The two categorical variables are not independent of each other. H0 and H1 are the null and alternate hypotheses, respectively. If the test tells us to reject the null hypothesis, we accept the alternate hypothesis, which says the two categorical variables have some level of association. The test is interpreted through the p-value: if the p-value is less than 0.05, we conclude the two variables are associated; if it is greater than 0.05, we treat them as independent. The formula for the chi-square statistic is shown below: Chi-Square Formula. Image by Author The chi-square distribution can be thought of as the distribution of a sum of squared standard normal (Z) variables; its shape is shown below: Chi-Square distribution. Image by Author This test applies only to categorical data such as gender (male, female), color (red, green, orange, etc.), and other categorical variables. Many learners are unsure which test to pick for a given pair of variables, so the tree below gives a hint on how to choose a test for bivariate data. Hint to choose test and plot for bi-variate data. Image by Author We will take an example of the preference for ice-cream versus chocolate among adults and children. The two hypotheses are given below: Age and preference for ice-cream or chocolate are independent. Age and preference for ice-cream or chocolate are not independent. Consider the table for the analysis, as shown below: Category & Category data. Image by Author The next step is to add the row and column totals and use them to compute each cell’s expected value (row total times column total, divided by the grand total). Getting the expected values Now that we have both the observed and the expected value for every cell, we calculate the chi-square contribution of each cell by applying the formula above. Calculated chi-square values for each cell. Image by author After adding up all the cell values, the overall chi-square value is 4.102. This statistic plays the same role that the z statistic plays in a z test. Next we need the critical chi-square value for the appropriate degrees of freedom. The DOF is the product of (number of rows minus one) and (number of columns minus one); here we have two rows and two columns, so the DOF will be as shown below: DOF = (rows-1)*(columns-1) = (2-1)*(2-1) = 1 Knowing the degrees of freedom, we can look up the critical chi-square value for a given alpha value. The alpha value follows from the chosen confidence level (alpha = 1 minus the confidence level). Confidence interval and Alpha value. Image by author The alpha value can be chosen from this table, as shown in the image.
Looking up the chi-square table for 1 degree of freedom and a 5% alpha value, the critical value is 3.84. We can see both values on a chi-square distribution plot. View of the two values. Image by author The computed chi-square value (4.102) is greater than the critical value, so we reject the null hypothesis. If instead we choose an alpha value of 1%, the critical value is 6.64. The p-value therefore lies between 5% and 1%: with a 5% significance level we still reject the null hypothesis, but at a 1% significance level the chi-square value is less than the critical value, so we fail to reject the null hypothesis. Types of Chi-Square test Test for independence (two-way chi-square test): used to check the association between categorical variables. Test for goodness of fit (one-way chi-square test): used to check whether observed values differ from the theoretical (expected) values. Conclusion: The chi-square test is very useful when we have categorical features in a data set. Reach me on my LinkedIn. Mail me at [email protected]. Recommended Articles 2. Python Data Structures Data-types and Objects 3. MySQL: Zero to Hero
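To run the same independence test in code, scipy's chi2_contingency does the bookkeeping (expected counts, statistic, degrees of freedom, p-value) in one call. The sketch below uses a made-up 2x2 table, since the article's actual counts appear only in an image, so the output will not reproduce the 4.102 above; correction=False disables Yates' continuity correction so the statistic matches the plain formula.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: rows = adults/children,
# columns = prefer ice-cream / prefer chocolate (made-up counts).
observed = np.array([[45, 55],
                     [60, 40]])

# correction=False turns off Yates' continuity correction so the
# statistic matches the plain sum((O - E)^2 / E) formula.
chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)

print(f"chi-square = {chi2:.3f}, dof = {dof}, p-value = {p_value:.4f}")
print("expected counts:\n", expected)

alpha = 0.05
if p_value < alpha:
    print("Reject H0: age and preference appear to be associated.")
else:
    print("Fail to reject H0: no evidence of association.")
```

For a 2x2 table the reported dof is 1, matching the hand calculation above, and the reject/fail-to-reject decision follows the same p-value logic described in the article.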
https://medium.com/towards-artificial-intelligence/chi-square-hypothesis-testing-in-statistics-87884bc73d99
['Amit Chauhan']
2020-12-31 02:34:40.653000+00:00
['Programming', 'Statistics', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
Why Leaving a Project Unpolished Is OK
Does Your Customer Really Want the Bells and Whistles? “80 percent of features in the average software product are rarely or never used. Publicly-traded cloud software companies collectively invested up to $29.5 billion developing these features, dollars that could have been spent on higher value features and unrealized customer value.” — Pendo’s 2019 Feature Adoption Report. Don’t spend time adding low-value features backed by assumptions. Perfectionists often tailor products to their ideal image, not necessarily the customer’s. The customer may appreciate the extra features, but not all improvements will change their decision to purchase or retain a product. Example: A web developer spends a month trying to create a user experience that shows data in a table with filterable/sortable columns. Unexpected outcome: The customer uses the export feature to download the data and view it in Excel. Sometimes a simple conversation with a customer can save tons of time and money. In this real-world case, the customer provided the data to a third-party for auditing. There would never be a case where the auditors would be allowed access to the fancy web table.
https://medium.com/better-programming/why-leaving-a-project-unpolished-is-ok-fe4a43836f40
['Kevin Fawcett']
2020-10-23 16:44:56.947000+00:00
['Programming', 'Agile', 'Startup', 'User Experience', 'Product Management']
Dreams Do Come True
Introduction to ILLUMINATION-MIRROR Photo by bruce mars on Unsplash I’ve been writing since I was a young child. I wrote in spurts and was never consistent. Through my early 20s, it was mostly when I was down: a relationship break-up or some sort of personal catastrophe. I began writing again about a year ago at the urging of a friend, Linda Aileen Miller. She helped me discover ILLUMINATION and ILLUMINATION-Curated and a great community of writers. I have to say, I didn’t write in earnest until recently. I didn’t have the confidence or commitment to take responsibility for what I needed to do. Thanks to many fine writers, I began to write more seriously and more often. Some of my stories are sad, some are light, some are whimsical, and some tend to be serious. Take a look at this article I wrote as I was losing my beloved King Charles, Maggie Mae. It gave me great comfort and solace to write that story and begin the process of her transitioning to the Rainbow Bridge. Nothing will replace her in my heart, but writing it helped me tremendously to move on in the grieving process. I also write about family experiences and mental health issues, and sometimes just for fun! I have been leaning more towards poetry, and there are so many outlets to share your writing on these platforms. Change is the name of the game. We’re embracing it here and I hope you’ll join us! This is an article on mental health that I wrote about panic attacks. I tried to keep it light and to include both personal experience and facts! Once again, there are so many incredible writers here, in every way you can imagine. I hope you’ll join us! Feel free to reach out to me! I’ve learned so much in this past year, and I’m happy to oblige! Participate in challenges. It not only introduces your work, it introduces you to a wider audience and to many writers, and it is an opportunity for growth and fun!
https://medium.com/illuminations-mirror/dreams-do-come-true-8f1aa312ab4d
['Janny S Heart']
2020-12-29 16:55:22.284000+00:00
['Illumination', 'Illumination Curated', 'Writing', 'Self Improvement', 'Illumination Poetry']
Five lessons on the most pressing challenges for newsroom leaders
Newsrooms are still largely driven by the on-going, underlying sense of urgency that came with the digital age of news production. But leaders need time to reflect if they want to build a powerful editorial strategy, run a newsroom successfully, and create a space where ideas can flourish. That’s exactly what we invited our participants to do at the News Impact Academy in Copenhagen, led by Design Strategist Tran Ha. She applies design thinking to a newsroom environment to provide our 19 participants with the tools they need to tackle their most pressing challenges. These are the main takeaways for newsroom leaders: 1. How to drive innovation in your newsroom Strong leadership is often associated with revolutionary ideas, but innovation doesn’t have to be bright and shiny. It happens by introducing small changes on a daily basis. “We would all love to come up with that one idea that no-one has ever thought about, but in reality, most ideas already exist,” said Tran. To be innovative you need to look at the problem that everyone else is looking at, but frame it in a way so that you can actually take action on it. Put effort into finding the right questions instead of rushing into solutions. “We’re just not looking at needs and opportunities in the right way,” said Tran. This approach forces you to take a step back and spend a lot of time in the problem sphere. “It will feel uncomfortable and messy,” she said. But when you listen to people — your team or your audience — you’ll find out where the friction point is, which allows you to identify the need. Once you have a solution for this need in mind, set up a small experiment that allows you to back out quickly if it’s not working; e.g. a weekly debrief, daily team-lunch, an internal newsletter. If your prototype catches on, build upon it, turn small innovations into habits so they stick and real change happens. Participants working on a prototype as part of a design thinking exercise 2. How to lead a team while managing the day-to-day Leadership can be a lonely path, so it’s important to build a network of people who really understand your challenges, for example, the difference between leadership and management. “Leading is about giving a vision and managing means achieving that vision,” said one participant. Our default mode tends to be managing, according to Tran, which is when you’re focusing on tasks. “But the hardest work is the strategy,” she added, “that’s why people don’t do it. A lot of leaders are not good at leadership, because they are focusing on tasks,” Tran said. On a daily basis, it’s coming and going between managing and leading, putting out fires, thinking long-term, micro-managing, and giving autonomy. In a busy environment, where you have to get things done, you need to know when you can take time to focus on your leadership. You can switch to leadership mode when… you or your team are stuck and you don’t know how to get it done the team is out of alignment you have a high profile project, you want to get it right, and the answer is not so straight forward you can take time to reflect 3. How to motivate staff “Autonomy and freedom are the best drivers for creativity,” said Jakob Nielsen, one of our guest speakers at the News Impact Academy. He is the Editor-in-Chief of Altinget, one of Denmark’s most innovative newsrooms, who produce a slow version of political news, primarily for B-to-B subscribers. 
Christina Andreasen, Deputy Managing Editor at Berlingske and Jakob Nielsen, Editor-in-Chief at Altinget in conversation Jakob’s leadership strategy is all about trust. You have to be clear about goals and red lines while giving people freedom and ownership, according to him. “My team can experiment and if they fail that’s fine, I’m not on their backs,” he said. “Give people freedom!” For example, his team is encouraged to apply for external funding to set up new projects— a creative competitive process that brings good energy to the newsroom. This allowed Altinget to become the leader in robot journalism in Denmark and enabled them to bring in new skills through such projects. 4. How to lead multi-disciplinary teams Jakob’s collaboration with project managers made him realise that you can do great things when you’re open to change. “For 15 years I’ve been working with people that did what I did [produce stories], then for the first time I got to work with people doing stuff I didn’t know anything about,” he said. Many newsrooms are hiring programmers, graphic designers and other people with skill-sets that are fairly new to journalism. “Developers and designers think very differently,” said one participant. De-marginalising these new roles within the newsroom is key to bringing innovation to journalism. “If you only work with people that do what you do, you’ll end up with the same answer,” said one participant. 5. How to deal with resistance to change A (remote) discussion with Anita Zielina about change management on 27 September 2019 You’ll always have people who are resistant to change, according to Anita Zielina, director of innovation and leadership at Craig Newmark J-School at CUNY, who led our discussion on change management. “What makes it so hard is that human beings, in general, are change-averse,” she said. “People who are excited about change tend to forget that not everyone sees it that way”, Anita Zielina, director of innovation and leadership at Craig Newmark J-School at CUNY Anita recommends leaders to focus their attention on caring for change-makers instead of spending time with the resistant ones. “Your stars are valuable allies who will drive the rest of the people and make others feel excited about change,” she said. The best way to support your change-makers is to create as much freedom as possible, help them in their careers, and show them your plans, according to Anita. “Let them have fun and keep the negative energy away,” she said, “try to battle the fights on culture so that they don’t have to do it.” Change projects need constant direction and active prioritising, according to Anita. “They’re like a plant that needs watering every day,” she said. Ideas become successful only when people talk about it every day to point the way and the light at the end of the tunnel. “You want people to ask questions and discuss ideas so don’t be afraid to over-communicate,” she said, “when you sound like a broken record, you may still have people who will ask: Excuse me, why are we doing this?” But when the energy levels come up and things start to grow and scale, you’ll know you’re on the right track.
https://medium.com/we-are-the-european-journalism-centre/five-lessons-on-the-most-pressing-challenges-for-newsroom-leaders-593eaf1d596a
['Ingrid Cobben']
2019-10-15 05:16:01.573000+00:00
['Leadership', 'Media', 'Insights', 'Journalism', 'Design Thinking']
Do we get to Choose how to Live?
This train is a high-speed womb. Compressed into my burgundy window seat in a carriage decked out in shades of red and pink, I watch the French, then Belgian, then Dutch countryside zip past. Three hours and twenty minutes after leaving Paris, I am reborn on a platform at Amsterdam Centraal station. Gingerly, I make my way to the elevator, unsure how to proceed with a present so full of promise it looks unreal, such is the contrast with what came before. I can’t shake off the feeling that my being in the Netherlands is yet another anomaly, in line with the many more that now make up my daily reality. After the parasite in my head curtailed my freedom and held me hostage for five years, leading an autonomous life again is a shocking experience. Not a day goes by that I’m not mildly surprised I’m still around, all the more as all I actively wanted for five years was to find a way not to be. Unable to think, unable to write, unable to support myself, unable to imagine a future that stretched beyond the next hour, I was done with life. Over the course of five years, major depressive disorder was the slow death that almost disappeared me. Little by little, I lost track of what made me me as I sank deeper into an affective, emotional, professional, and intellectual coma. I was never supposed to wake up. Or exit the void. Or throw myself back into and at life. And yet, here I am, suitcase in tow, blown away by it all. Granted, it is also windy in Amsterdam.
https://asingularstory.medium.com/do-we-get-to-choose-how-to-live-4ab64d3b8a63
['A Singular Story']
2020-04-17 20:02:01.167000+00:00
['Travel', 'Personal Growth', 'Life Lessons', 'Mental Health', 'Self']
Pandas Trick #1 — Change the default number of rows returned from the head method
The pandas DataFrame head method returns the first 5 rows by default. This is controlled by the parameter n . In this trick, we will use partialmethod from the functools standard library to set n to a different number. This trick is available on the Dunder Data YouTube channel (Subscribe!). Become an Expert If you want to be trusted to make decisions using pandas, you must become an expert. I have completely mastered pandas and have developed courses and exercises that will massively improve your knowledge and efficiency to do data analysis. Master Data Analysis with Python — My comprehensive course with 800+ pages, 350+ exercises, multiple projects, and detailed solutions that will help you become an expert at pandas. Get a sample of the material by enrolling in the free Intro to Pandas course. First, let’s read in a sample DataFrame containing bike rides from the city of Chicago and call the head method with the defaults. Note, that it returns 5 rows. import pandas as pd bikes = pd.read_csv('data/bikes.csv') bikes.head() The functools standard library comes packaged with partialmethod which allows you to set parameters of a particular method. You can set any number of parameters with it. Below, we reassign the DataFrame head method so that it returns 3 rows as a default instead of 5. from functools import partialmethod pd.DataFrame.head = partialmethod(pd.DataFrame.head, n=3) bikes.head() Why do this? I use the head method frequently to shorten the displayed output of the DataFrame. When I am creating live tutorials, 5 rows can take up too much space on the screen, so changing this default to 2 or 3 makes sense and saves a bit of time. It can also be helpful if you have written a large report that makes use of many calls to the head method, and want to shorten all of those outputs with a single command. General usage partialmethod is only available in python 3 and can be used for all methods to set some or all of the parameters to a particular value. Master Python, Data Science and Machine Learning Immerse yourself in my comprehensive path for mastering data science and machine learning with Python. Purchase the All Access Pass to get lifetime access to all current and future courses. Some of the courses it contains: Exercise Python — A comprehensive introduction to Python (200+ pages, 100+ exercises) — A comprehensive introduction to Python (200+ pages, 100+ exercises) Master Data Analysis with Python — The most comprehensive course available to learn pandas. (800+ pages and 300+ exercises) — The most comprehensive course available to learn pandas. (800+ pages and 300+ exercises) Master Machine Learning with Python — A deep dive into doing machine learning with scikit-learn constantly updated to showcase the latest and greatest tools. (300+ pages) Get the All Access Pass now!
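For convenience, here is the whole trick as one self-contained sketch. The tiny DataFrame is made up so the snippet runs without the bikes CSV, and keeping a reference to the original method lets you undo the change when you are done.

```python
from functools import partialmethod

import pandas as pd

# Small made-up DataFrame so the example runs without the bikes CSV.
df = pd.DataFrame({"trip_id": range(10), "duration": range(0, 100, 10)})

original_head = pd.DataFrame.head   # keep a reference so we can restore it

# Make head() return 3 rows by default instead of 5.
pd.DataFrame.head = partialmethod(pd.DataFrame.head, n=3)
print(df.head())        # 3 rows
print(df.head(n=7))     # an explicit keyword argument still overrides the default

# Restore the normal behaviour when you are done.
pd.DataFrame.head = original_head
print(df.head())        # back to 5 rows
```

The same pattern applies to any other pandas method whose defaults you want to change for a session, as noted in the General usage section.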
https://medium.com/dunder-data/pandas-trick-1-change-the-default-number-of-rows-returned-from-the-head-method-bc7c21ce0d53
['Ted Petrou']
2020-08-25 03:12:24.866000+00:00
['Data Science', 'Pandas', 'Programming', 'Python']
TensorFlow is dead, long live TensorFlow
Head of Decision Intelligence, Google. Hello (multilingual) world! This account is for translated versions of my English language articles. twitter.com/quaesita
https://medium.com/datos-y-ciencia/tensorflow-est%C3%A1-muerto-que-viva-tensorflow-f18c86ca5515
['Cassie Kozyrkov']
2019-10-10 02:43:27.155000+00:00
['Artificial Intelligence', 'Ciencia Y Datos', 'Data Science', 'Technology', 'Machine Learning']
I Wanted to be Sexually Liberated, So I Became a Stripper
I Wanted to be Sexually Liberated, So I Became a Stripper All my life, I thought the naked body was sinful and meant to be hidden. At 23, I finished business school and started stripping. Photo by Ashley Byrd on Unsplash I grew up in a conservative home, in an equally conservative town, and my parents had my future planned out from a very early age; Marriage, children, and a corporate job, just like my mother and father. There’s no denying how lucky I was as a kid to have everything I needed. I went to church every Sunday with my family, had great grades in school, and competed in gymnastics in my free time. I was the baby, daddy’s little girl, and I lived to please my parents. Naturally, when I entered puberty, and my body began changing — to the appreciation and fascination of boys my age — it confused and delighted me to be recognized for something I hadn’t worked for. My body was just part of me, yet my new curves were celebrated by the opposite sex, criticized by my female classmates, and chastised by my mother and grandmother. I know it may sound crazy, but even my relationship with my father changed when I could no longer wear my training bras. He became noticeably awkward around me, and he stopped inviting me out with him after he witnessed men his age paying me attention at the hardware store. It was a confusing feeling for a 13-year-old girl to be suddenly punished by her father for something out of her control. As an adult, I covered up my body and fell into place because I thought my relationship with my dad depended on it. I let go of any wild dreams of accepting my physical self and exploring my sexuality, and instead, I dated men who were just as conservative and concerned with marriage as I was — men who were just as protective about my body as my father. I wasn’t a virgin past 19, but I held back so many parts of myself for the fear of being “sinful.” It’s no secret that children who grow up in homes like mine — homes where they can’t make their own choices about hobbies or the clothes they wear, homes where they are taught that sex and the naked body is shameful — grow up resentful and desperate to rebel. They want to do anything and everything to go against the status quo. Is it any wonder that I rebelled right into working at a strip club? I think not.
https://medium.com/erin-taylor-club/i-was-taught-to-be-ashamed-of-my-body-and-at-23-i-became-a-stripper-3a862de93647
['Erin Taylor']
2020-12-27 01:09:49.564000+00:00
['Mental Health', 'Sex', 'Sexuality', 'Self', 'Sex Work']
10 things to consider before you create a chart about COVID-19
https://medium.com/nightingale/%E0%B8%AA%E0%B8%B4%E0%B9%88%E0%B8%87%E0%B8%97%E0%B8%B5%E0%B9%88%E0%B8%84%E0%B8%A7%E0%B8%A3%E0%B8%84%E0%B8%B3%E0%B8%99%E0%B8%B6%E0%B8%87%E0%B8%96%E0%B8%B6%E0%B8%87-10-%E0%B8%82%E0%B9%89%E0%B8%AD-%E0%B8%81%E0%B9%88%E0%B8%AD%E0%B8%99%E0%B8%97%E0%B8%B5%E0%B9%88%E0%B8%84%E0%B8%B8%E0%B8%93%E0%B8%88%E0%B8%B0%E0%B8%AA%E0%B8%A3%E0%B9%89%E0%B8%B2%E0%B8%87%E0%B9%81%E0%B8%9C%E0%B8%99%E0%B8%A0%E0%B8%B9%E0%B8%A1%E0%B8%B4%E0%B9%80%E0%B8%81%E0%B8%B5%E0%B9%88%E0%B8%A2%E0%B8%A7%E0%B8%81%E0%B8%B1%E0%B8%9A-covid-19-5c57b7853cf4
['Pattarawat Chormai']
2020-04-10 10:22:16.931000+00:00
['Coronavirus', 'Data', 'Data Visualization', 'Covid 19']
Open Source Libraries for Apps on iOS
The Action We had no way to push or influence the release of the app. So we decided to pick out the most innovative features we had built for it and, using a completely different approach to both design cases and development, turn them into open-source libraries that anybody can pick up and use in their own projects. On the development side, in short, we decided to write code that is as simple and clean as possible in Swift with UIKit, with itemized documentation that follows the guidelines. That is my best definition of what we set out to do on the dev side 😂. I hope my teammate will soon write up his own side of the story. First concepts My approach was to design all of the selected features as parts of a single product, so that together they could be presented visually as a concept for a real app. I decided to design them for a popular brand’s product that had not been released yet and was expected to launch at roughly the same time we planned to publish our libraries. That product was Apple TV+. All of the views are our interpretation of how that application might look with our inventions in it. The Cellular Number Mask We had been building an ecosystem of wholesale products. It would be impractical to force users to register on each platform individually when they use several of our platforms, so the obvious move was to create our own user-account system that grants access to all products through one account, like a Google Account. The most convenient registration method is signing in with social media, because it does not require additional authorization, but our system was configured to allow only an email address or a mobile phone number. Since the first iterations of the main service were released in 2014, we already had a serious base of customers with accounts, and from that data we found that 72% of users log in with their phone number and the remaining 28% with their email address. 28% is far too large a share to ignore, so we could not make the mobile phone number the only path. We had to support both methods, but we absolutely did not want two separate input fields in the registration window. The challenge was how to combine the two flows into one input field. Email input needs nothing beyond an ordinary text field, but phone numbers are much more convenient to type into a mask with a predefined country code and cells for the remaining digits. According to the research we found, only 0.009% of users in the world have an email address that starts with a digit. My idea was to design an input that looks like a normal field in its idle state and transforms into a custom field with a cellular (phone-number) mask as soon as the first digit is typed. While designing this feature we found similar solutions, including an open-source library that almost covered our needs, and to save time we used it with a list of customizations. That experience is also what motivates us now to publish an open-source library with our own vision, to help other teams the way we were helped. Theoretically, our Cellular Number Mask can also be useful for fintech projects, where strict security guidelines likewise prevent the use of social networks for signing in to the application.
https://medium.com/swlh/open-source-libraries-for-apps-on-ios-9c85b85c8593
['Ehrlan Zholdosh']
2020-10-30 10:39:52.340000+00:00
['Libraries', 'Design', 'Open Source Software', 'iOS', 'Swift']
The (un)surprising cause of the next financial crisis.
The (un)surprising cause of the next financial crisis. You know that it is coming, you just don’t know how … The next financial crisis. Most predictions agree on the timing (2019 / 2020) and that it will be one of the deepest crises in the world history (the economy has been growing consistently for 10 years now), but no one can predict how it will happen. Knowing that, of course, would be priceless. Traders and hedge fund managers who took the ‘right side of the trade’ and bet against the 2007 subprime crisis made hundreds of billions over night. Jon Paulson and many others got famous and (even much) richer. Crises are certain; they are inevitable like death and taxes because the world economy is based on the concept of greed. Human greed is good — it is what drives the economy up … and then gets it going again after it falls over the cliff. It is a necessary substance; a serum and a poison in one. It was in the U.S. that greed got reinvented as a good thing. That is, it was the new world that injected poor immigrants with the idea of American dream and made money into a universal merit of success. And ever since it has been the U.S. that gave the world market bubbles. Last time it was the subprime mortgages and derivatives that no one understood; this time it is the tech sector. There are two sides to the tech sector. Facebook and Google are what Standard Oil Co. Inc. was at the beginning of the 20th century—a well tolerated monopoly. Together with Apple, they form the ‘old’ side of the tech sector — the one that actually makes money … and tons of it. But beyond that,there is the utopian tech sector. You know: Silicon Valley, its millennial founder-entrepreneurs, with zen-like smiles, enlightened modesty and open shirts. So very unlike the old world dominated by Wall Street, they and their tiny little start-ups inspired the beginning of the present economic recovery by promising to create an infinitely more efficient and cleaner way of doing things, exchanging factories and steel for smartphones and abstract products. Three companies — Amazon, UBER and Tesla (ATU) — are a perfect embodiment of this trend. To some extent, ATU are truly revolutionary — making our lifes cheaper (Amazon), more practical (UBER) and exciting (Tesla) — but for the most part, people love them because they are a sensation. Good enough is just not good enough when it comes to financial markets. People don’t want to be told that they should stick their savings into a 1.5%-yielding super-slow-growing annuity. No, we all want our chance at becoming millionaires, and if that requires us to chase a sensation, then so be it. This is not to say that UBER hasn’t changed the urban transportation for good, that Tesla doesn’t make innovative cars, or that Amazon doesn’t offer convenience and price that others can’t match. No. The problem is that somewhere down the line, ATU have become a religion of future and people forgot to value them on the cold basis of profits & losses. Somewhere down the line, ATU freed themselves from the dryness of discounted cash flow analyses, and they took the whole market with them on a trip to forever. Amazon — a nightmare of the old-industry CEOs — continues on its path toward the world domination because it can afford to reinvest everything it makes and more*. It is a company doomed to die. For now, the world is still large enough. Amazon can keep on growing, expanding into other sectors and devouring its competitors. 
That, of course, makes it into an irresistible investment proposition — a company on a path to become the ultimate monopoly; the one place where we buy everything. Inevitably, though, it will reach a point when it will grow too big. It will either have to be cut in pieces by the government labelled as a ‘monopoly’, or it will be re-priced by the market, increasing its cost of capital because the promise of further growth and domination will make no sense anymore (people will begin to ask for a profit eventually). *Amazon is a life work of an ex-hedge fund manager (who else better to trick investors’ minds) — a company which functions like a perpetuum mobile. Jeff Bezos has once drawn his Amazon flywheel concept on a napkin. Lower prices lead to more customers, which attract more outside sellers to Amazon, who should be OK with smaller profit in order to be able to sell on Amazon. Reinvestment of all profits improves company and reduces cost leading to lower prices again in a vicious endless circle. One arrow is missing — that arrow going out: ‘profits to shareholders’. UBER — a clever, but first and foremost a cheaper way to do urban transportation. A company that has burned through billions of private money to become the one and only player in the business that it created — intelligent transportation. But, for all its dominance, UBER is facing a ‘Groupon moment’. Its technology is becoming commoditized and re-used by cheaper alternative providers. Tesla — This one is easy. A tech company on a mission to beat auto industry on its home turf. Tesla burns cash at a breathless pace in a race against time to get its Model 3 production fully running before other giants enter the space with better alternatives. Blessed by what has been until now an inexhaustible level of positive investor sentiment, the company’s plans are truly colossal, but the last estimates show that it will burn through its cash reserves by the end of this financial year. If Musk is forced to go to financial markets asking for money again this year, it could be a make-or-break moment for Tesla. People always fight the last war. Scarred by 2007, investors are looking for the next big short; the next Collateralized Debt Obligation, or other structured product that could bring the markets down. There might not be one this time round. All financial crises are caused by some sort of a market bubble; a situation when prices overshoot value, which eventually leads to an ‘aha’ moment and a downward spiral. ATU and other utopian tech companies have infected markets with a profound belief that they are about to revolutionise world. This made them into cash-burning giants with a zero cost of capital. And that’s fine for now. But, their eventual (and unavoidable) collapse may prove to be systemic, dragging down the market, not because of their size (which is insignificant), but because investors will reprice the market, taking the promise of infinitely more efficient tomorrow out of the equation. See you on the other side,
https://medium.com/thatmeaning/the-un-surprising-cause-of-the-next-financial-crisis-dc59eb2379c4
['George Salapa']
2018-05-05 17:04:13.652000+00:00
['Economics', 'Market', 'Amazon', 'Tesla', 'Uber']
Predicting Movie Genres using NLP
Introduction I was intrigued going through this amazing article on building a multi-label image classification model last week. The data scientist in me started exploring possibilities of transforming this idea into a Natural Language Processing (NLP) problem. That article showcases computer vision techniques to predict a movie’s genre. So I had to find a way to convert that problem statement into text-based data. Now, most NLP tutorials look at solving single-label classification challenges (when there’s only one label per observation). But movies are not one-dimensional. One movie can span several genres. Now THAT is a challenge I love to embrace as a data scientist. I extracted a bunch of movie plot summaries and got down to work using this concept of multi-label classification. And the results, even using a simple model, are truly impressive. In this article, we will take a very hands-on approach to understanding multi-label classification in NLP. I had a lot fun building the movie genre prediction model using NLP and I’m sure you will as well. Let’s dig in! Table of Contents Brief Introduction to Multi-Label Classification Setting up our Multi-Label Classification Problem Statement About the Dataset Our Strategy to Build a Movie Genre Prediction Model Implementation: Using Multi-Label Classification to Build a Movie Genre Prediction Model (in Python) Brief Introduction to Multi-Label Classification I’m as excited as you are to jump into the code and start building our genre classification model. Before we do that, however, let me introduce you to the concept of multi-label classification in NLP. It’s important to first understand the technique before diving into the implementation. The underlying concept is apparent in the name — multi-label classification. Here, an instance/record can have multiple labels and the number of labels per instance is not fixed. Let me explain this using a simple example. Take a look at the below tables, where ‘X’ represents the input variables and ‘y’ represents the target variables (which we are predicting): ‘y’ is a binary target variable in Table 1. Hence, there are only two labels — t1 and t2 ‘y’ contains more than two labels in Table 2. But, notice how there is only one label for every input in both these tables You must have guessed why Table 3 stands out. We have multiple tags here, not just across the table, but for individual inputs as well We cannot apply traditional classification algorithms directly on this kind of dataset. Why? Because these algorithms expect a single label for every input, when instead we have multiple labels. It’s an intriguing challenge and one that we will solve in this article. You can get a more in-depth understanding of multi-label classification problems in the below article: Setting up our Multi-Label Classification Problem Statement There are several ways of building a recommendation engine. When it comes to movie genres, you can slice and dice the data based on multiple variables. But here’s a simple approach — build a model that can automatically predict genre tags! I can already imagine the possibilities of adding such an option to a recommender. A win-win for everyone. Our task is to build a model that can predict the genre of a movie using just the plot details (available in text form). 
Take a look at the below snapshot from IMDb and pick out the different things on display: There’s a LOT of information in such a tiny space: Movie title Movie rating in the top-right corner Total movie duration Release date And of course, the movie genres which I have highlighted in the magenta coloured bounding box Genres tell us what to expect from the movie. And since these genres are clickable (at least on IMDb), they allow us to discover other similar movies of the same ilk. What seemed like a simple product feature suddenly has so many promising options. 🙂 About the Dataset We will use the CMU Movie Summary Corpus open dataset for our project. You can download the dataset directly from this link. This dataset contains multiple files, but we’ll focus on only two of them for now: movie.metadata.tsv: Metadata for 81,741 movies, extracted from the November 4, 2012 dump of Freebase. The movie genre tags are available in this file Metadata for 81,741 movies, extracted from the November 4, 2012 dump of Freebase. The movie genre tags are available in this file plot_summaries.txt: Plot summaries of 42,306 movies extracted from the November 2, 2012 dump of English-language Wikipedia. Each line contains the Wikipedia movie ID (which indexes into movie.metadata.tsv) followed by the plot summary Our Strategy to Build a Movie Genre Prediction Model We know that we can’t use supervised classification algorithms directly on a multi-label dataset. Therefore, we’ll first have to transform our target variable. Let’s see how to do this using a dummy dataset: Here, X and y are the features and labels, respectively — it is a multi-label dataset. Now, we will use the Binary Relevance approach to transform our target variable, y. We will first take out the unique labels in our dataset: Unique labels = [ t1, t2, t3, t4, t5 ] There are 5 unique tags in the data. Next, we need to replace the current target variable with multiple target variables, each belonging to the unique labels of the dataset. Since there are 5 unique labels, there will be 5 new target variables with values 0 and 1 as shown below: We have now covered the necessary ground to finally start solving this problem. In the next section, we will finally make an Automatic Movie Genre Prediction System using Python! Implementation: Using Multi-Label Classification to Build a Movie Genre Prediction Model (in Python) We have understood the problem statement and built a logical strategy to design our model. Let’s bring it all together and start coding! Import the required libraries We will start by importing the libraries necessary to our project: Load Data Let’s load the movie metadata file first. Use ‘\t’ as the separator as it is a tab separated file (.tsv): meta = pd.read_csv("movie.metadata.tsv", sep = '\t', header = None) meta.head() Oh wait — there are no headers in this dataset. The first column is the unique movie id, the third column is the name of the movie, and the last column contains the movie genre(s). We will not use the rest of the columns in this analysis. Let’s add column names to the aforementioned three variables: # rename columns meta.columns = ["movie_id",1,"movie_name",3,4,5,6,7,"genre"] Now, we will load the movie plot dataset into memory. This data comes in a text file with each row consisting of a movie id and a plot of the movie. We will read it line-by-line: Next, split the movie ids and the plots into two separate lists. 
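The file-reading code referenced above is embedded as a gist in the original post and does not render here, so this is a rough sketch of those two steps: reading plot_summaries.txt line by line, then splitting the ids and plots into two lists. It assumes each line is tab-separated, with the movie id before the plot summary.

```python
import pandas as pd

plots = []
# plot_summaries.txt: one movie per line, "<movie_id>\t<plot summary>"
with open("plot_summaries.txt", "r", encoding="utf-8") as f:
    for line in f:
        plots.append(line.strip())

movie_id = []
plot = []
# split each row into its id and its plot text
for row in plots:
    fields = row.split("\t", 1)   # split only on the first tab
    if len(fields) == 2:
        movie_id.append(fields[0])
        plot.append(fields[1])

# the two lists become the columns of the movies dataframe
# (movie_id is a string here; cast it, or the metadata id, so both
# sides of the later merge share the same dtype)
movies = pd.DataFrame({"movie_id": movie_id, "plot": plot})
print(movies.shape)
```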
We will use these lists to form a dataframe: Let’s see what we have in the ‘movies’ dataframe: movies.head() Perfect! We have both the movie id and the corresponding movie plot. Data Exploration and Pre-processing Let’s add the movie names and their genres from the movie metadata file by merging the latter into the former based on the movie_id column: Great! We have added both movie names and genres. However, the genres are in a dictionary notation. It will be easier to work with them if we can convert them into a Python list. We’ll do this using the first row: movies['genre'][0] Output: '{"/m/07s9rl0": "Drama", "/m/03q4nz": "World cinema"}' We can’t access the genres in this row by using just .values( ). Can you guess why? This is because this text is a string, not a dictionary. We will have to convert this string into a dictionary. We will take the help of the json library here: type(json.loads(movies['genre'][0])) Output: dict We can now easily access this row’s genres: json.loads(movies['genre'][0]).values() Output: dict_values(['Drama', 'World cinema']) This code helps us to extract all the genres from the movies data. Once done, add the extracted genres as lists back to the movies dataframe: Some of the samples might not contain any genre tags. We should remove those samples as they won’t play a part in our model building process: # remove samples with 0 genre tags movies_new = movies[~(movies['genre_new'].str.len() == 0)] movies_new.shape, movies.shape Output: ((41793, 5), (42204, 5)) Only 411 samples had no genre tags. Let’s take a look at the dataframe once again: movies.head() Notice that the genres are now in a list format. Are you curious to find how many movie genres have been covered in this dataset? The below code answers this question: # get all genre tags in a list all_genres = sum(genres,[]) len(set(all_genres)) Output: 363 There are over 363 unique genre tags in our dataset. That is quite a big number. I can hardy recall 5–6 genres! Let’s find out what are these tags. We will use FreqDist( ) from the nltk library to create a dictionary of genres and their occurrence count across the dataset: I personally feel visualizing the data is a much better method than simply putting out numbers. So, let’s plot the distribution of the movie genres: Next, we will clean our data a bit. I will use some very basic text cleaning steps (as that is not the focus area of this article): Let’s apply the function on the movie plots by using the apply-lambda duo: movies_new['clean_plot'] = movies_new['plot'].apply(lambda x: clean_text(x)) Feel free to check the new versus old movie plots. I have provided a few random samples below: In the clean_plot column, all the text is in lowercase and there are also no punctuation marks. Our text cleaning has worked like a charm. The function below will visualize the words and their frequency in a set of documents. Let’s use it to find out the most frequent words in the movie plots column: Most of the terms in the above plot are stopwords. These stopwords carry far less meaning than other keywords in the text (they just add noise to the data). I’m going to go ahead and remove them from the plots’ text. You can download the list of stopwords from the nltk library: nltk.download('stopwords') Let’s remove the stopwords: Check the most frequent terms sans the stopwords: freq_words(movies_new['clean_plot'], 100) Looks much better, doesn’t it? Far more interesting and meaningful words have now emerged, such as “police”, “family”, “money”, “city”, etc. 
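The clean_text helper and the stopword-removal step above are also embedded as gists in the original article. Below is one plausible shape for them, based on what the text describes (lowercasing, stripping punctuation, then filtering NLTK's English stopwords); the exact regexes in the original may differ, and remove_stopwords is my own name for the second helper.

```python
import re

import nltk
from nltk.corpus import stopwords

nltk.download("stopwords")
stop_words = set(stopwords.words("english"))

def clean_text(text):
    """Lowercase the plot and strip everything except letters and spaces."""
    text = re.sub(r"\'", "", text)          # drop apostrophes so "don't" -> "dont"
    text = re.sub(r"[^a-zA-Z]", " ", text)  # replace punctuation/digits with spaces
    text = text.lower()
    return " ".join(text.split())           # collapse repeated whitespace

def remove_stopwords(text):
    """Drop common English stopwords from an already-cleaned plot."""
    return " ".join(w for w in text.split() if w not in stop_words)

movies_new["clean_plot"] = movies_new["plot"].apply(clean_text)
movies_new["clean_plot"] = movies_new["clean_plot"].apply(remove_stopwords)
```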
I mentioned earlier that we will treat this multi-label classification problem as a Binary Relevance problem. Hence, we will now one hot encode the target variable, i.e., genre_new by using sklearn’s MultiLabelBinarizer( ). Since there are 363 unique genre tags, there are going to be 363 new target variables. Now, it’s time to turn our focus to extracting features from the cleaned version of the movie plots data. For this article, I will be using TF-IDF features. Feel free to use any other feature extraction method you are comfortable with, such as Bag-of-Words, word2vec, GloVe, or ELMo. I recommend checking out the below articles to learn more about the different ways of creating features from text: tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=10000) I have used the 10,000 most frequent words in the data as my features. You can try any other number as well for the max_features parameter. Now, before creating TF-IDF features, we will split our data into train and validation sets for training and evaluating our model’s performance. I’m going with a 80–20 split — 80% of the data samples in the train set and the rest in the validation set: Now we can create features for the train and the validation set: # create TF-IDF features xtrain_tfidf = tfidf_vectorizer.fit_transform(xtrain) xval_tfidf = tfidf_vectorizer.transform(xval) Build Your Movie Genre Prediction Model We are all set for the model building part! This is what we’ve been waiting for. Remember, we will have to build a model for every one-hot encoded target variable. Since we have 363 target variables, we will have to fit 363 different models with the same set of predictors (TF-IDF features). As you can imagine, training 363 models can take a considerable amount of time on a modest system. Hence, I will build a Logistic Regression model as it is quick to train on limited computational power: from sklearn.linear_model import LogisticRegression # Binary Relevance from sklearn.multiclass import OneVsRestClassifier # Performance metric from sklearn.metrics import f1_score We will use sk-learn’s OneVsRestClassifier class to solve this problem as a Binary Relevance or one-vs-all problem: lr = LogisticRegression() clf = OneVsRestClassifier(lr) Finally, fit the model on the train set: # fit model on train data clf.fit(xtrain_tfidf, ytrain) Predict movie genres on the validation set: # make predictions for validation set y_pred = clf.predict(xval_tfidf) Let’s check out a sample from these predictions: y_pred[3] It is a binary one-dimensional array of length 363. Basically, it is the one-hot encoded form of the unique genre tags. We will have to find a way to convert it into movie genre tags. Luckily, sk-learn comes to our rescue once again. We will use the inverse_transform( ) function along with the MultiLabelBinarizer( ) object to convert the predicted arrays into movie genre tags: multilabel_binarizer.inverse_transform(y_pred)[3] Output: ('Action', 'Drama') Wow! That was smooth. However, to evaluate our model’s overall performance, we need to take into consideration all the predictions and the entire target variable of the validation set: # evaluate performance f1_score(yval, y_pred, average="micro") Output: 0.31539641943734015 We get a decent F1 score of 0.315. These predictions were made based on a threshold value of 0.5, which means that the probabilities greater than or equal to 0.5 were converted to 1’s and the rest to 0's. 
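(For completeness, here is a rough sketch of the two steps referenced above whose code cells are not shown, the MultiLabelBinarizer encoding and the 80–20 train/validation split; the random_state is my own choice.)

from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split

# one-hot encode the genre lists: one binary column per unique genre tag
multilabel_binarizer = MultiLabelBinarizer()
y = multilabel_binarizer.fit_transform(movies_new['genre_new'])

# 80-20 split of the cleaned plots and their encoded labels
xtrain, xval, ytrain, yval = train_test_split(
    movies_new['clean_plot'], y, test_size=0.2, random_state=9)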
Let’s try to change this threshold value and see if that improves our model’s score: # predict probabilities y_pred_prob = clf.predict_proba(xval_tfidf) Now set a threshold value: t = 0.3 # threshold value y_pred_new = (y_pred_prob >= t).astype(int) I have tried 0.3 as the threshold value. You should try other values as well. Let’s check the F1 score again on these new predictions. # evaluate performance f1_score(yval, y_pred_new, average="micro") Output: 0.4378456703198025 That is quite a big boost in our model’s performance. A better approach to find the right threshold value would be to use a k-fold cross validation setup and try different values. Create Inference Function Wait — we are not done with the problem yet. We also have to take care of the new data or new movie plots that will come in the future, right? Our movie genre prediction system should be able to take a movie plot in raw form as input and generate its genre tag(s). To achieve this, let’s build an inference function. It will take a movie plot text and follow the below steps: Clean the text Remove stopwords from the cleaned text Extract features from the text Make predictions Return the predicted movie genre tags Let’s test this inference function on a few samples from our validation set: Yay! We’ve built a very serviceable model. The model is not yet able to predict rare genre tags but that’s a challenge for another time (or you could take it up and let us know the approach you followed). Where to go from here? If you are looking for similar challenges, you’ll find the below links useful. I have solved a Stackoverflow Questions Tag Prediction problem using both machine learning and deep learning models in our course on Natural Language Processing. The links to the course are below for your reference: End Notes I would love to see different approaches and techniques from our community to achieve better results. Try to use different feature extraction methods, build different models, fine-tune those models, etc. There are so many things that you can try. Don’t stop yourself here — go on and experiment! Feel free to discuss and comment in the comment section below. The full code is available here. You can also read this article on Analytics Vidhya’s Android APP Related Articles
https://medium.com/analytics-vidhya/predicting-movie-genres-using-nlp-46d70b97c67d
['Prateek Joshi']
2019-06-10 06:22:02.641000+00:00
['NLP', 'Python', 'Data Science', 'Text Analytics', 'Machine Learning']
How I went from broke college kid to traveling the world as a Digital Nomad
While in my second year of undergrad, I was feeling stuck and confused about my future. I was working part-time, making $150 per week while accumulating debt pursuing a degree that I wasn’t passionate about. I hoped to achieve big things in my life but was unsure what steps I needed to take to get ahead. I had just transferred to a much bigger university than my previous school and didn’t know anyone. To make matters worse, I was living in an off-campus apartment with a random guy who didn’t even attend my school, and I had to commute to classes every day. My situation was the result of poor planning on my part, but regardless of how I got where I was, I wasn’t living the exciting college life that I had hoped or expected I would be. Although I eventually made friends and acclimated to my situation, I knew that I couldn’t stand to remain in college for the duration of my 4-year degree. I knew I would have to face the reality of my situation, sooner or later. The truth that a marketing major wasn’t something I was interested in pursuing as a career and that I was only in college because I didn’t know what else to do at that stage of my life. Social pressure was the main reason I went to college in the first place, and staying made it appear to my parents and friends from high school that I was doing something with my life. I knew that even if I managed to make it to graduation, I would be receiving a marketing degree that would be obsolete by the time I received it. I had to switch my mindset from waiting for the right information to be given to me, to being an autodidact and seeking out the proper knowledge and education. If I wanted to escape college and still succeed in my life, then I knew I needed a plan. Dropping out without a plan would likely result in me working minimum wage jobs and living at home, which wouldn’t be an improvement to college life by any stretch. I knew I needed two things before I could drop out A marketable skill — most college dropouts that fail don’t have one. I couldn’t enter the real world without something of value to offer. Without skills, I was only worth the hours that I was willing to work. Not to mention that I’d be replaceable by anyone willing to do the same work as me for less. Marketable skills are what college and universities are supposed to be providing to their students, but they often fail to do so. The good news is that it’s not the 1800s or even the 1900s anymore — in today’s world, most skills can be acquired online for free. Unless you’re studying to become a doctor — you probably shouldn’t learn how to perform surgery from Youtube videos. The best part about the internet is that you can use it to learn skills and to find people who are willing to pay for those skills. The majority of people hiring freelancers on the internet don’t care about degrees; they care about results. I knew that if I could build a portfolio of work around a skillset, then it would be just as good or better than earning a college degree.
https://dansapio.medium.com/how-i-went-from-a-broke-college-kid-to-traveling-the-world-as-a-digital-nomad-21c0f213faa1
['Danny Sapio']
2019-11-03 22:42:22.487000+00:00
['Travel', 'Freelancing', 'Entrepreneurship', 'Graphic Design', 'Digital Nomads']
Making big moves in Big Data with Hadoop, Hive, Parquet, Hue and Docker
Making big moves in Big Data with Hadoop, Hive, Parquet, Hue and Docker Jump and run in this brief introduction to Big Data What data at most big companies in 2020 looks like. Seriously. The goal of this article is to introduce you to some key concepts in the buzzword realm of Big Data. After reading this article — potentially with some additional googling — you should be able to (more or less) understand how this whole Hadoop thing works. So, to be more precise in this article you will: Learn a lot of definitions (yay) Spin up a Hadoop cluster with a few bells and whistles via docker-compose. Understand Parquet files and how to convert your csv datasets into Parquet files. Run SQL (technically HiveQL but it is very similar) queries on your Parquet files like it’s nobody’s business with Hive. Also, be expected to have some basic knowledge in Docker & docker-compose, running Python scripts etc. — nothing crazy but it is better if you know in advance. If you are already using an alternative to Hadoop’s ecosystem — cool — this article is more geared towards readers that have to get familiar with Hadoop due to work, university etc. and is just one “solution” to the Big Data conundrum with its pros and cons respectively. Big Data: this is the big one. Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time. Now you might ask the simple question (and a lot of people have): “How big is big data?” Well, to be frank that is a very hard question and depending on how fast technology moves, what one considers “big” data today might be “small” data tomorrow. Nonetheless, the definition above is pretty timeless since it refers to sizes beyond the ability of commonly used tools — this is your reference line; so in 2020 let’s bite the bullet and say the following is true: when you start dealing with double digit TB datasets in your DBs and above, you are probably hitting the limits of some of the more run-of-the-mill tools and it is maybe time to look into distributed computing and potentially this article. Hadoop: a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. The 3 most important core modules you should know about are: (Storage) HDFS: a distributed file system with high-throughput access and redundancy (copies of all files are maintained across the cluster) at the same time (Resource Management) YARN a.k.a. Yet Another Ressource Negotiator: a framework for job scheduling and cluster resources management e.g. which nodes are available etc. (Processing) MapReduce: a YARN-based system for parallel processing of large datasets. This is the main algorithm used to seamlessly distribute computational tasks across the nodes of the cluster. You can read upon the origins of MapReduce around the Web. Popular alternatives are TEZ and Spark, which were developed later for processing data even more efficiently. Parts of the Hadoop Ecosystem in one diagram. Focus on HDFS, YARN, MapReduce and Hive for now. Hive: a data warehouse software that facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive. 
So, basically Hive sits on top of the aforementioned Hadoop stack and it allows you to directly use SQL on your cluster. Hue: an open source SQL Assistant for Databases & Data Warehouses i.e. an easy GUI for looking into HDFS, Hive etc. Very handy for beginners! It is maintained by Cloudera and you can find it on GitHub. Parquet: a columnar storage* format available to any project in the Hadoop ecosystem. You can read why this is a good idea with big data sets in the explanation below. Again, there are a lot of alternatives but this technology is free, open-source and widely used in production across the industry. *columnar storage: in normal row-based DBs e.g. MySQL you are storing data in rows, which are then distributed across different blocks if you cannot fit your data on one block. Now if you have a lot of columns and rows in your data distributed across multiple blocks, things can get pretty slow. Which is why you could instead store each column in a separate block. In that case you can access all the data of a column by just accessing one block. There is a great longer explanation on the concept here. As AWS puts it (unrelated to Parquet but still true): “Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk.” Here is also another comparison to CSV, which shows how much storage you can save and what kind of speedups you can expect. Just a great Jaws reference and this is here solely to brighten your mood if the article is already too much. Building something Ok. Time to build something! Namely a Hadoop cluster with Hive on top of it to run SQL queries on Parquet files stored in HDFS all the while visualizing everything in Hue. Does this sentence make more sense now than in the beginning of the article? Cool. There are a lot of ways to do this and Hadoop is famous for operating computer clusters built from “commodity hardware” but since this is just a learning exercise, it is a bit easier to quickly spin up a small Hadoop cluster with the aforementioned stack in Docker with docker-compose — you can of course do this in Kubernetes too but it is way beyond the scope of this article. This setup here will not be even close to production but then again this article should merely serve as a gateway for your big data journey. Here is the repo for this article: A nicer view of the docker-compose file in the repository, which should roughly sketch out the architecture of the project. Some highlights from the implementation in Docker This was not quite as straightforward as initially thought, so here are some pointers from the development in case you want to customize this further. The focus of this article is not to give you a crash course in Docker and docker-compose either, so this section is brief and only highlights some places where you could get stuck. If you want anything to work with Hue you need to figure out to override Hue’s default configs by mounting the hue-overrides.ini file (you will find it in the repo and the override in docker-compose). Obvious right? Wink, wink. In hue-overrides.ini you should be looking at: [[database]] => this is the internal Hue DB, [[hdfs_clusters]] => connecting to HDFS to view files in Hue, [[yarn_clusters]] => setting up YARN and [beeswax] => connecting to Hive to run SQL queries in Hue. 
If you do not have this line thrift_version=7 in hue-overrides.ini, Hue will refuse to connect to the Hive (=Thrift) Server because it is defaulting to a Hive Server version that is too high. This took hours. If you use Hue’s default SQLite DB, you will get the message “database locked” when you try to connect to Hive => that is why there is a db-hue Postgres DB in the docker-compose file. Something about SQLite not being suitable for a multi-threaded environment as described here. Cloudera should work on their error messages… POSTGRES_DB, POSTGRES_USER, POSTGRES_PASSWORD in hadoop-hive.env can be used with the official postgres Docker image to directly create the DB user when you spin up your container. Check. Watch out with your 5432 port and not exposing it multiple times since PGDB is running more than once for this project (once as a metastore for Hive and once as a DB for hue) tl;dr on the next steps Ok. Short summary on what will happen next for the impatient engineers: Start Hue, Hive and your Hadoop nodes with docker-compose up Download a .csv dataset from Kaggle & convert it with the supplied Python script Import said Parquet file to HDFS via Hue & preview it to make sure things are OK Create empty Hive table with the Parquet file schema after inspecting the schema with the parquet-tools CLI tool Import file from HDFS into Hive’s table Run some SQL queries! Starting the cluster and launching Hue with docker-compose Well, since everything is already setup, just clone the repository on your computer and type docker-compose up in your terminal. That’s it. Then go to localhost:8888 and you should (after setting up the initial password for Hue) see this screen: This screen shows you your Hadoop cluster’s HDFS while the sidebar shows you the DB tables in Hive’s metastore — both of which are empty in this case. Uploading Parquet files on HDFS and previewing them in Hue When trying to open some (quite a few) Parquet files in Hue you are going to get the following error message: “Failed to read Parquet file” and in your docker-compose logs: NameError: global name ‘snappy’ is not defined Turns out that Hue does not support the snappy compression that is the default for a lot of Parquet converting tools like pandas. There is no workaround for this except for recreating your Parquet files (if they are using snappy). Worst UX ever on Cloudera’s side… In the GitHub repository you will find a parquet_converter.py, which uses pandas and specifies the compression as None, so one does not default to snappy that will subsequently breaking Hue. Meaning you can take any dataset from e.g. Kaggle in .csv format and convert it to Parquet with the provided Python module. At this point — if you are unfraid of the CLI — the best suggestion is for you to forget Hue and just use Hive and HDFS directly for your Parquet files. But if you stuck around with Hue like me you could see a UK Punctuality Statistics report from Kaggle that was converted with the Python script mentioned above and then uploaded as a file: The File Browser in Hue when you click on the successfully imported Parquet file. You can access the File Browser from the dark sidebar on the left. Creating a Hive table from your Parquet file and schema After seeing that your data was properly imported, you can create your Hive table. 
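(As a point of reference, the conversion step described above, pandas with compression switched off so that Hue can still preview the result, might look roughly like this; the CSV file name is a placeholder and the actual parquet_converter.py in the repo may differ.)

import pandas as pd

# read the Kaggle CSV and write it back out as Parquet.
# compression=None is the important part: snappy (the pandas default)
# is what makes Hue fail to preview the file.
df = pd.read_csv("201801_Punctuality_Statistics_Full_Analysis.csv")
df.to_parquet(
    "201801_Punctuality_Statistics_Full_Analysis.parquet",
    engine="pyarrow",
    compression=None,
)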
For this you should run the following command in your command line in the folder where you converted your file (probably /your_github_clone/data): parquet-tools schema 201801_Punctuality_Statistics_Full_Analysis.parquet This will output the schema you need to create the table in Hue (UTF8 = string in Hue): message schema { optional binary run_date (UTF8); optional int64 reporting_period; optional binary reporting_airport (UTF8); optional binary origin_destination_country (UTF8); Time to create your table: The preview of the new table you created. Go the DB icon in the dark sidebar and create a new table manually with the schema as described above. Afterwards click on the import button in the dark sidebar and import your Parquet file into the empty table. Afterwards you should see the above screen. Running SQL queries Running SQL queries was promised and it shall be delivered. The first icon in Hue’S sidebar is its query editor. What if you wanted to find out all the flights from Poland that had an average delay of more than 10 minutes? SELECT * FROM `2018_punctuality_statistics` WHERE origin_destination_country=’POLAND’ AND average_delay_mins>=10; The auto-complete feature in the editor is fantastic, so even if you are a SQL novice, you should be able to play around with your data without too much effort. Finally. Time to let go Dear reader, sadly you have reached the end of this article. Please write in the comments below if you feel like this journey should have a sequel. So, to recap you learned how to run a Hadoop cluster with Hive for running SQL queries all the while visualizing everything in Hue with docker-compose. Not bad. This is of course a very very simplified look into what is possible with Hadoop but chances are that you are just starting out in the field, so give yourself some time and build on this knowledge and infrastructure. Furthermore, there are great online courses out there that you can look into next. Looking into 2020 and beyond Now, if you have been sort of listening in on Hadoop’s ecosystem over the last few years you would have seen that the two biggest players on the market — Cloudera and Hortonworks — merged around a year ago amid a slow down of the Hadoop big data market. Add the fact that people seem way more interested in Kubernetes than in older Hadoop specific technologies like YARN for resource management and orchestration, the fast adoption of DL frameworks like PyTorch and you sort of have a perfect storm forming for the ageing Hadoop stack. Nonetheless projects like Apache Spark are chugging along by e.g. introducing Kubernetes as an alternative to YARN. Exciting times for the ecosystem! Sources:
https://towardsdatascience.com/making-big-moves-in-big-data-with-hadoop-hive-parquet-hue-and-docker-320a52ca175
['Nikolay Dimolarov']
2020-08-06 08:13:18.985000+00:00
['Big Data', 'Analytics', 'Hive', 'Hadoop', 'Docker']
What Is Amazon FBA and How Does It Work? [COMPLETE GUIDE]
https://medium.com/blog-do-dinheiro/o-que-%C3%A9-e-como-funciona-o-amazon-fba-guia-completo-b2b3771c0088
['Bruno Cunha']
2019-12-18 16:57:16.453000+00:00
['Ganhar Dinheiro Internet', 'Ganhar Dinheiro', 'Amazon', 'Ganhar Dinheiro Online', 'Amazon Fba']
In search of framework
When it comes to execution, it is not just the state-of-the-art technology or the people, it is THE HOW (a structured approach equipped with methods, techniques, frameworks) that makes all the difference. With unabated enthusiasm in analytics and data-driven decision-making (DDDM), I extensively looked around for information on existing frameworks for moving from data to decisions and how data analytics projects and programmes are managed. To my total surprise, the search results on this topic were pretty disappointing and unimpressive to say the least. Whatever happened to the information deluge? There are two possibilities for this, though: 1. We may well be in the very infancy phase of it (data to decisions). 2. Of course, my search skills can be questioned. Before I dive into my findings on frameworks from known players in the arena, a very quick definition: what is a framework? A framework is a real or conceptual structure intended to serve as guidance. It is defined as an underlying set of ideas, principles, agreements or guidelines that provide a ‘roadmap’ and supportive structure for achieving a specific goal. Top 3: 1. By Gartner. The only framework with a holistic approach to the analytics programme, spanning across strategy and performance. Gartner uses business analytics as an umbrella term to represent BI, analytics, and performance management, combining with it data-driven decisions. (Rest covered in a separate blog.) 2. By PwC. Four aspects of the framework: Discovery — define the problem, develop a hypothesis, collect and explore data. Insights — perform the data analysis. Actions — link insights to actionable recommendations. Outcomes — execution plan and review of the outcomes that have transformed the business. 3. BADIR by Aryng. The data-to-decision journey is a five-step process, from determining business questions to recommendations. BADIR is an acronym which stands for: Business Question, Analysis Plan, Data Collection, Insights, Recommendations. It can’t get any simpler than this; the titles of all 5 steps are indicative and self-explanatory, so allow me to skip the unnecessary detailing just for the sake of it. The top 3 are chosen purely on the basis of their ease of use and flexible approach to incorporating different methods and techniques (as highlighted by articles). To summarise, in all honesty, a framework is neither a silver bullet, nor does it guarantee to make the project/analytics effort fail-proof. But hey, the lack of any framework would let decision makers either drive in the wrong direction or drown in the data.
https://medium.com/datacrat/in-search-of-framework-f707c3f2fca6
[]
2019-01-03 15:42:14.470000+00:00
['Big Data', 'Data Analytics', 'Data Driven Decisions', 'Data Science', 'Business Analytics']
Are You Substituting Real Life For Online Life?
Are You Substituting Real Life For Online Life? How easy it is to fall into the mindless scrolling routine and not even notice One of my favourite parts about social media is how easy it is to get lost in all it offers, especially when you’re in dire need of a distraction. All you’re required to do is scroll through and it engulfs you. You’re in a trance, wandering through the land of tweets, pictures, stories and best of all–memes. It’s working. You’re laughing, it’s distracting you from your problems; it’s great. Until it’s not. But you don’t realize this for a while. Most people don’t. I’ve found that most of us don’t know who we are or the reason we do the things we do. There’s this set image we have of ourselves, the person we believe we are (which is most likely the person we were at a point but aren’t anymore). We haven’t processed the fact that we’ve grown and changed into a different person over time–sometimes negatively. This is the reason why when other people tell us we are at fault or have a flaw, it’s tough to admit. We assume if it were true, we would have noticed. We feel we are fully in control of ourselves and unsusceptible to any form of change we don’t condone. When people spoke about their unhealthy relationship with social media, I scoffed and thought “can’t be me.” Like most people do, I believed I was immune and those kinds of things don’t happen to me. That I’m too smart to let it happen. So I was indifferent about my relationship with social media, never giving it much thought. Twitter is my most used platform, so when I refer to social media, it takes up ninety percent of that space in my life. At the start of the year, I went on breaks I planned to be a weeklong but ended up barely lasting three days. Every time Twitter triggered any negative emotion, I’d deactivate my account only to be back in a week right where I started, mindlessly scrolling and falling deeper into this virtual world. It got worse every time I came back. I knew I needed to distance myself, but I didn’t know how to do it. It took some time, but I finally saw I had become so attached that I had substituted my actual life for it. I was in a “social media induced reverie” where I was more concerned with maintaining my image online than dealing with the life in front of me. The way I looked and portrayed myself there mattered more. I ended up spending more time online than I needed to. My mental health was suffering. It gave me anxiety; I compared myself to random people I didn’t know and wished I was living their lives–they seemed to have things under control. Social media is supposed to be fun, but it wasn’t anymore. It was an act I was trying to keep up with but couldn’t because of how quickly things change on there. It kept outrunning me; I kept falling short, and it exhausted me. I asked how much of myself I will sacrifice for a world that existed on-screen. After my previous failed attempts, I tried another social media break but this time with a purpose and no set time frame so as to reduce the pressure — this time, for as long as I could manage. My goal was to come out of it less attached, dependent and influenced by social media. To stop being obsessed with the image others had of me in their minds and how to tweak it in my favour. I stayed away from Twitter for a month and two days. I was so proud of myself because not too long ago, I couldn’t handle three days without a relapse. The best thing during this break was the peace of mind I experienced. 
There was no image “upkeep” to worry about. No one to compare myself to. I felt a lighter, like this baggage I had been dragging around wasn’t with me anymore. It was amazing. The thing about a social media hiatus, it forces you to do things you keep putting off because all the time you complained you never had is suddenly there. So, I tried to create a habit out of reading an article a day; I immersed myself in books, movies and music that I loved and worked on improving my writing. I saw that the more I stayed away, the more I wanted to stay away. I never thought I’d reach a point in my life where I would consider leaving social media for good–especially Twitter. But I started contemplating never going back because I realised it took way more than it gave me and memes weren’t a good enough reason to stay. Although, I’m not so ignorant and dense as to believe that Twitter is the sole cause of all my problems. They existed prior to Twitter but it just happens to be a trigger and contributor to the bigger issue of my insecurities which I need to deal with personally. Even if I leave Twitter behind, it wouldn’t take long for something else to trigger the anxiety and misbelieves in me and get me struggling with the issues I thought I left behind when I gave up Twitter. After pondering on it for a long time, I decided not to give it up. I’ve learnt a lot from Twitter, made amazing friends and gotten a lot of inspiration. It was the way I engaged with it and let it get to me that was the problem. So the plan is to use it in moderation, to my advantage and surround myself with people and things that inspire me not things that bring me down and make me depressed. And more importantly to take the space to work on myself away from all social media. But saying these things and doing them are different. Going back, I’ve found it’s easy to get sucked back in and fall into the mindless scrolling routine again. So, I have to give myself time to work out the best combination that won’t leave me down. I know the journey isn’t linear and I don’t know how much time it’ll take. But I know I don’t want to spend my time on there trying to be a perfect person for the benefit of strangers who couldn’t care less.
https://medium.com/an-injustice/are-you-substituting-real-life-for-online-life-f5b2c900748
['Fatima Mohammed']
2020-05-06 15:13:47.618000+00:00
['Social Media', 'Mental Health', 'Addiction', 'Insecurity', 'Culture']
No Money, No Problem — 6 Ways to Get Capital for Your Startup
No Money, No Problem — 6 Ways to Get Capital for Your Startup Some common ways to approach scaling your business If you are considering raising extra money for your startup, here are 6 ways to get investments and turn your business into a multi-billion dollar corporation! I spent the last 6 years in Silicon Valley working for companies like Yelp and Airbnb, being surrounded by founders on one side and Venture Capital investors on the other. The Silicon Valley dream is to start a company which: Has a mission is to change the world Has the potential to grow into a multi-billion-dollar company Impacts the lives of millions (or ideally, billions, of people) Has the potential to raise impressive funding rounds from the most reputable VC funds (like Sequoia, a16z, or Founders Fund). Annie Spratt at Unsplash However… Getting money from VCs is only one of the ways to get funding. Let’s explore it further. There are funds that focus on a specific company stage (early stage, series A, B, etc) and multi-stage funds who diversify their portfolio across companies’ tenure. Venture Capital investors are making a lot of bets, which is expensive. Funds are looking for a high ROI, to cover their previous losses and to get a return to the LPs who invested in the fund. For example, if you had invested $10,000 in Uber in 2010 it would be worth $127.5M today (!). VCs not only give you money, they ideally give you support and the network. Raising capital accelerates your growth and an opportunity to conquer the market quicker. But keep in mind that taking venture capital means signing up to scale and grow rapidly, as VCs are looking for some exit for your startup within the 10 year period, which might not be right for every team and business. It’s important not to disillusion yourself with VC money. Sometimes venture capital is necessary for your success as a startup. But sometimes it might be a trigger on the road to failure. Startup team at Unsplash 2. An alternative to VCs is bootstrapping your business. You don’t need venture capital to get started in most industries to solve a real problem for customers and charge money for it. Here are three ways to think about this: Bootstrapping a startup means starting lean and without the help of outside capital. It means continuing to fuel growth internally from cash flow produced by the business. Quite a few famous companies opted-in for this path and scaled billion-dollar ventures. Spanx, GoPro, Github, just to name a few. You are forced to immediately start thinking about how your business will be making money. It’s sometimes disappointing to see that profitability is almost the last thing some companies think about — they comfortably continue burning through VC funding and might end up dying without figuring out how to make their business model work. Bootstrapping certainly has pros like staying in control and dictating the direction of the business, but it also comes with certain cons, such as lack of capital to grow when you really need to, no access to the support network that could come through going to an accelerator or raising VC money, and of course, the risk of burning through your savings. Danielle Maclnnes at Unsplash 3. Accelerators are also a path to get funding and accelerate your startup; some of the most famous accelerators include YC, 500 startups, and Techstrars. An accelerator is a 3 to 6 months program meant to accelerate your startup. 
Accelerators are not technically free, depending on the way you look at it — they offer you funding for a % of your company. Accelerators typically offer seed money in exchange for equity in the company. This may range from $10,000 to over $120,000. For example, YC takes 7% equity for 120k. Accelerators can be great. They provide a high-pressure environment, peer support, and mentor network support to help you grow your company. And usually, accelerators have a “demo day” where you would present your startup to get noticed by potential future investors. It’s important to have an action plan and a clear “why” for joining an accelerator — you must be committed to the idea, understand that you are giving away equity, and ensure that this is the right accelerator for your specific startup: look at the team running it, make sure it’s the right set of mentors and the program can give you the right support. Faria Anzum at Unsplash 4. Another way to get funding for your startup is to turn to angel investors. Angel investors are accredited individuals who support founders and usually write checks between 5 to 100k. Angels primarily invest in pre-seed and seed companies and are focused on qualitatively evaluating an opportunity: the founding team, their interest in the space, etc. 5. Crowdfunding has seen a rise in the last decade. Oculus, Larian Studios, Pebble all had their kickstart through crowdfunding. I believe we will see more and more influencers crowdfunding their ideas — they already have the needed reach to get their projects funded. Crowdfunding could be great if you’re trying to prove a concept or test the demand. It does go without saying though that platforms like Indigogo and Kickstarter mostly offer a payment collection gateway: success of the campaign still hugely depends on the work you put into spreading the word about it. 6. Last but not least, I want to cover grants. A startup small business grant is monetary funding from the government or an organization that is given in order to help small companies and nonprofits succeed in building and growing their business. Unlike loans, you don’t have to pay this money back. It may come with rules that dictate how you can spend it. To conclude, there are a lot of paths you can take to get capital for your business. The most important thing to keep in mind is to optimize for your business, your goals, how you want to grow the business, what you want the exit to look like and basing your decisions off of these factors. P.S. I also made a YouTube video about this topic. You can check it out here: https://www.youtube.com/watch?v=kYeTtcER99o
https://medium.com/startup-grind/no-money-no-problem-6-ways-to-get-capital-for-your-startup-ce38268f9568
['Luba Yudasina']
2020-09-30 16:21:57.575000+00:00
['Startup Lessons', 'Startup', 'Venture Capital', 'Fundraising', 'Scaling']
BubbleTone: New Version Released
BubbleTone is a new generation messenger with a number of useful features. Customized for the needs of mobile operators, the messenger provides additional revenue streams on voice calls and SMS messages from the app and other telecom services. The Bubbletone team is glad to announce that a new version of our messenger has been released on October 26. We have fixed a number of software bugs and implemented a number of improvements to make it easier to use the application. So, what has been changed, exactly? Interface The first/main tab now contains a list of calls that a user has made, answered or missed; New icons for the “make a call within the Bubbletone network” (peer-to-peer calls) and “make a call to PSTN” options; The option “save gender information” in the “personal information” section in a user’s personal account now works properly; The last message sent or received by the user now stays visible, even while the user is typing a new message with the pop-up keyboard; There are a number of Android phones that have the following defect: their proximity sensors do not work properly. As a result, when calls came in, the screens of these phones would go black. This problem has been solved; The settings menu includes a section called “additional info”. Among other information, this section includes the user’s Bubbletone number and balance. The Bubbletone number is the number which the user receives after they have signed an agreement with an operator. In the previous app version, this information disappeared from time to time. This problem has been solved. SMS messages In the previous version, if a user had a large number of text messages stored on their phone, the app worked slowly. In the new version, the app works quickly regardless of the amount of messages stored on the phone. Distribution of broadcast messages Operators are now able to receive notifications on whether their users have received and read the messages broadcasted to them. Audio and video connection We have upgraded codecs. As of now, we use VP8 codecs to provide a better quality of audio and video connection, provided the users are using Internet with high bandwidth; When a user would make a call, they would have to listen to a ringback tone. As of now, originators of calls can enjoy listening to music when placing a call, while the destination terminal is alerting the receiving party. Messages Previously, after starting a Hide-Chat — a secret chat, hidden from prying eyes — the user could not make the chat visible in case there was no need in hiding it anymore; Users can now make Hide-chats visible or invisible again whenever they need to; The development team has built the final version of Smoke-Chat — a solution that prevents secret messages from being forwarded or copied. After a message has been read, it disappears (it is shown burning down, and then the smoke clears) so the recipient cannot save, copy or forward it; In the previous version, to send a video to other users, a user was able to record a video with the time limit of 30 seconds. As of now, there is no time limit. Users can record videos that last as long as they need. Settings The additional section “Notifications and Ringtones” has been added to the settings menu. This section allows users to choose ringtones for notifications when they receive messages and calls, switch vibration on/off and send notification signals to headphones. Our development team continues to work on the application. In the coming future, more useful features will be added. 
In November, our team will also start to actively promote the app among mobile operators. More news will be coming soon. Stay tuned! The messenger is available on the App Store and Google Play: https://itunes.apple.com/us/app/bubbletone/id1298142945?mt=8 https://play.google.com/store/apps/details?id=com.countrycom.bubbletone
https://medium.com/bubbletone-blockchain-in-telecom/bubbletone-new-version-released-a2788646cdaf
['Bubbletone Blockchain In Telecom']
2019-05-15 09:57:59.443000+00:00
['Android', 'Mobile Application', 'iOS', 'Messenger', 'Mobile App Development']
Improving Your Mental Health Begins By Talking To Other People
Improving Your Mental Health Begins By Talking To Other People You don’t need to deal with your problems alone. Photo by Julian Hanslmaier on Unsplash I used to cry in my bedroom for several hours until the early hours of the morning. It was as if nobody understood me or could genuinely relate to what I was going through. I felt overwhelmed. My anxiety was at an all-time high. Every day, my mental health was worsening. The quality of my relationships was deteriorating as I felt like I couldn’t confide in anyone. I didn’t want to deal with my problems alone. But at the time, I had no choice. That’s when I realized something that changed my life forever. Healthy relationships are built on a mutual sense of trust and understanding. So when you feel like nobody else can understand what you’re going through, the quality of your relationships will inevitably suffer as a result. That is why it’s essential to spend more time with your friends, family, and spouse. Strong relationships are a vital element of a happy life. Our loved ones support us during moments of adversity and help us keep everything together while our entire life feels like it’s falling apart.
https://medium.com/publishous/improving-your-mental-health-begins-by-talking-to-other-people-336d7f010a9d
['Matt Lillywhite']
2020-11-23 15:09:43.125000+00:00
['Relationships', 'Life Lessons', 'Mental Health', 'Self Improvement', 'Loneliness']
The LMS: The Moral of the Story
From researching this market for over 6 years, I’ve decided to put my thoughts on paper…I mean, today’s version of paper being an online blog. If you are interested in the future of education technology and how it will impact students, faculty and administration then you must be interested in where the Learning Management System market is going. I’ve decided to break it down by phases of the origins, to the future. From paper to online: the early players Early players in the LMS market created platforms that essentially brought courses from being completely offline with paper and pencil, to online. Resources were made available on a faculty site, grades were put online and announcements were possible. What the early players did best was make a file system that was secure for the faculty to upload resources. The cons… These systems evolved in a way that other similar technologies have evolved. They became loaded with features upon features upon features. And then that file system became loaded with folders, and folders. And then some more folders… Faculty and student reactions to using this poor and ugly system The system got so out of touch, slow, and tedious, that even for the most simple task of uploading a file or assignment became ridiculously time consuming. Wasn’t this system supposed to make it easer to access files instead of printing things out for classes to get? It was starting to not be as simple and for faculty, some decided enough was enough, and they moved to other platforms or got rid of the system entirely. This is where the market started to shift and a few new players arrived. The incremental system As a phase II of the LMS market, Canvas by Instructure has done good things but mainly has been incrementally better than the previous legacy systems. Canvas by Instructure is a cloud based learning management system with additional grading and analytics features. The platform is in my opinion a little better face to what Blackboard, D2L, Moodle, or any of the other legacy players have created. What Canvas did best was bring the LMS to the Cloud. They tout about a better gradebook, but in actuality, it’s pretty much the same as any other legacy LMS. I call them the Blackboard 1.5. What really differentiates Canvas from the other players is their marketing. They are good at selling and making things sound much better than they actually are in my opinion. And with that marketing, they were able to do something no one thought was possible — take over Blackboard market share. From 2008 when they were founded to 2010, they had zero contracts. However, from 2010 in their first contract signing to 2015, they went from zero to over half a billion in value. Goes to show that you don’t have to do much to overtake the competitors in the market. If you have a slightly better product and good marketing, you can win. The next generation is social It’s clear with the technology we use like facebook, twitter and linkedin that social media has taken over the modern interface. Messaging apps and social media are norms in the real world but in education, it seems that we are still around 10 or 15 years behind the game. What I expect is a move to social media, mobile and apps within the LMS. It’s normal for students and faculty who utilize these applications on a daily basis outside of education. Why not have them in our educational lives? This is quite similar to a few revolutions in technology over the past decade. 
Here’s some examples that can show how markets have previously been disrupted and how it relates to education. iPhone vs Blackberry Blackberry was the market leader of the phone market at one time. What Blackberry did best was provide an all-in-one solution with a great keypad and messaging app. Problem was that they built everything in-house. Meaning, it was very difficult to integrate applications outside from Blackberry itself. And they had higher costs than new comers like Apple since they were the one’s developing all the apps rather than the community doing it for them. The phone of apps Apple revolutionized the market not with the best core applications but with the combination of the core applications it had (i.e. Phone, Messaging, Safari, Music, etc) + Touch Design + App Store. The App Store is really what took the iPhone to the next level and conquered the phone market. The App Store allowed for any individual to customize their phone with applications they wanted. It allowed for a community to develop and make the iPhone that much better, every time. And guess what, Apple didn’t have to pay for this to occur. They could literally sit and watch as their ecosystem grew like wildfire. Before you knew it, they had many more apps than Blackberry could ever have. Apple created a new category. An app phone. Airbnb vs Hotels You’d think with an industry that has been around for a very long time, that it would be impossible to break into it. I mean, that’s what the investors say… Your home, anywhere But with Airbnb, what they did best was prove the naysayers wrong. Hotels are often expensive and do not showcase a local vibe that many want when they are traveling. Airbnb provided a low cost and fun experience to gain the local culture of a community. For the first time in this market, a company quickly took over the traditionally stagnant. And by doing so, they created a new category. The sharable home. Uber vs Taxi’s Hailing a cab will soon be a thing of the past Anyone 5 years ago would have said that Taxi’s were just annoying but we just ‘had to deal with them’. The guys at Uber thought differently and said why not shake this up and make it mobile. Basically what uber did was made the taxi go mobile and made it more comfortable and convenient for drivers and passengers to get together. They created a new category. The mobile taxi. Moral of the story All of these markets are breakable. Just because a market has been around for awhile does not make it impossible to break into. I firmly believe that the LMS market will be disrupted as well with a new category. The social learning platform. Where’s the community in online education today? Some things yet to be done in this LMS market: Connecting the campus community to the LMS Opening up the community to create applications into a single platform Having a mobile ready platform (*not just an app as a checkbox for an RFP) And, a framework based on a social network (i.e. community) These are the fundamental differences in the LMS of the future. Imagine a platform that can make that happen. It’s just a matter of when this happens, not if.
https://medium.com/notebowl/the-lms-the-moral-of-the-story-a11d8f601d63
['Andrew Chaifetz']
2017-08-23 02:07:06.219000+00:00
['Startup', 'Edtech', 'Education Technology']
More Americans Willing to Take COVID Vaccine
PUBLIC OPINION AND COVID VACCINES More Americans Willing to Take COVID Vaccine Public confidence campaigns seem to be working In a recent USA TODAY/Suffolk University Poll, 46% of Americans say they will take the vaccine as soon as they can. Compared to a USA TODAY poll in late October, that is close to double. Also in the new poll, 32% say they will wait for others to get the shots before they do so themselves. This is great news, and it is indicative of the success of public campaigns by physicians, nurses, other healthcare leaders as well as public officials — including the esteemed Dr. Anthony Fauci — to boost confidence in the vaccines. I myself did the same thing (see below), and thank God, so far the vaccine has been very well tolerated by me and my colleagues. The vaccine is only truly effective at achieving herd immunity if a majority of the population gets the vaccine. So, the more people take it, the faster we can finally see an end to this nightmare. I am very encouraged by this new poll finding, and I encourage everyone who gets their vaccine — especially physicians and nurses on the front lines — to publicize their vaccination and get the word out.
https://medium.com/beingwell/more-americans-willing-to-take-covid-vaccine-5ec4dfece751
['Dr. Hesham A. Hassaballa']
2020-12-23 21:54:25.004000+00:00
['Covid 19', 'Vaccines', 'Public Health', 'Medicine', 'Science']
What Is GitHub
Introduction Humans have always thought of ways to connect and to work together as a team. Over the years, we have created new ways to connect virtually, allowing us to work together without having to worry about the distance. Applications like Facebook and Twitter are now indispensable for us. Almost all video games allow you to play online with friends or strangers from all over the world. GitHub is another one of these pieces of software that allow people to connect and work together in a more efficient, fast, and dynamic way. In this blog, I will explain how GitHub works and the different things that we can use it for. How it works GitHub, in simple terms, is just a space (or a cloud, if you will) where you can save, share, and edit files simultaneously with someone else. Every file, set of files, or project that you save into GitHub is called a repository or “repo”. Imagine that you work at Netflix as a product manager and one day you realize that you want to make changes to the homepage. You talk to the developer team and you explain all the changes that you want to make. Let’s suppose that all the code for Netflix is saved on GitHub in different files. The lead developer would then divide the task into different chunks and send the instructions to each developer. It would be very difficult if everyone was working on the same files, right? One of the main features is being able to create a copy of the repo and have it on your computer. That way, if you write code that doesn’t work, it won’t affect the main project, because you would be working on a copy. This is called cloning a repo.
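(As a tiny illustration, and not something from the original post: one way to clone a repository to your computer is with the git command line, here wrapped in a short Python script; the repository URL is just a placeholder.)

import subprocess

# make a local copy of a public repository, i.e. "clone" it, by calling the git CLI
repo_url = "https://github.com/octocat/Hello-World.git"  # placeholder public repo
subprocess.run(["git", "clone", repo_url, "hello-world-copy"], check=True)

# you now have a full local copy in ./hello-world-copy that you can edit
# without touching the original project on GitHub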
https://medium.com/swlh/what-is-github-423f9049ab2d
['Sebastian De Lima']
2020-07-13 02:51:40.612000+00:00
['Programming', 'Computer Science', 'Github', 'Technology', 'Productivity']
Low-Code Founders: Tracy Smith, founder of Vipii
Low-Code Founders: Tracy Smith, founder of Vipii Inside the mind of an entrepreneur: our founder of the week tells us about the remote queuing solution he developed on Bubble to reduce physical queues in stores and lower the number of high-risk interactions. What led you to entrepreneurship? I’ve always been attracted to entrepreneurship, that’s why I’ve always done side projects alongside my studies. Even though I have a lot of ideas, I still can’t devote as much time to it as I would like. I studied product management and am now an IT tech consultant. Although I’m not a developer from the start, I was able to extend my field of action and my skills thanks to Bubble which allowed me to develop new projects such as Vipii. Can you introduce us to Vipii? During the quarantine, many local stores had to close, preventing people from obtaining goods they needed. When they reopened, I noticed that people were flocking to the stores and there were a lot of queues, which was not very Covid friendly. I figured that these owners and people were in need of a queuing solution to limit these high-risk social contacts. After doing market research to see what solutions already existed and especially their prices, I realized that they were not accessible to everyone, especially small businesses. That’s why I decided to develop a cheaper remote queue solution that would be simple to use. This idea is not revolutionary in itself, its real added value is its small price which allows everyone to have access to it in these difficult times. To benefit from it as a business, you just have to register your venue on the application. As a visitor, all you have to do is search for the store you wish to visit and register directly on its page, without having to signup or download anything. The application then announces you when the way is clear. What challenges have you encountered? From a technical point of view, it has not always been easy to take Bubble in hand. Responsiveness and application optimization are particularly hard to achieve. The ideal would be to develop a very complete library of responsive elements in order to save time and gain fluidity, as you probably already have at Cube. Then, there’s a lot of testing and fixing work to be done to make sure that everything works well, that there are no bugs and that the different solutions used to build the application (Twilio in my case) are well connected. It took me several days of viewing online tutorials, reading manuals and instructions for each solution used to feel comfortable with it. How was the learning and development process on Bubble? I liked the tool very much right away. I trained with it for a year and a half and I am now convinced that it will have a major role to play in the future. Originally, I had some experience in backend development and it helped me a lot to understand how to work with Bubble. On the other hand, the ability to build a responsive application is not innate and I had a hard time with that part. I also had to use code to be able to put all the features I wanted, so I bought plugins and integrated them. Today, I know that if one day I leave my job to start my own project, I will use Bubble to develop my product because I really appreciate the fast and simple iteration capacity that the tool offers compared to traditional code (with which it takes at least 2 weeks to fix the slightest bug when you can do it in a few seconds on Bubble). I even talked to some developer friends who tested it, and they really liked the tool. 
They find that it allows them to focus on the things that really matter and add real value. What advice would you give to those who are hesitant to embark on an entrepreneurial project? The most important thing is to take the plunge and get your idea validated before spending too much time on Bubble building the product. You have to be sure that the solution you bring addresses someone's problem, that there are real needs. Otherwise, it is risky to invest time (and therefore money) in it. This is true even if you only want to develop a simple landing page: always have your idea validated beforehand by talking about it and getting feedback.
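As an illustration of the integration work described above (not part of the original interview), here is a rough sketch of the kind of notification step a remote queue can delegate to Twilio. Vipii itself is built visually in Bubble, so this Python version is only an analogy; the credentials, phone number, and store name below are placeholders.

```python
# Hypothetical sketch of the "your turn is coming up" notification a remote
# queue might send through Twilio. All identifiers below are placeholders.
import os
from twilio.rest import Client  # pip install twilio

def notify_next_visitor(visitor_phone: str, store_name: str) -> None:
    """Send an SMS telling the next visitor in the queue that the way is clear."""
    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
    client.messages.create(
        to=visitor_phone,
        from_=os.environ["TWILIO_FROM_NUMBER"],  # your Twilio number
        body=f"It's your turn at {store_name}. Please head to the entrance.",
    )

# Example call (hypothetical number and venue):
# notify_next_visitor("+33612345678", "Boulangerie du Coin")
```

In a Bubble workflow the same step would be a visual action wired to the Twilio plugin or API connector, which is exactly the "connecting the different solutions" work mentioned in the interview.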
https://medium.com/cube-insider/low-code-founders-tracy-smith-founder-of-vipii-94f37ce425ef
['Melanie Bialgues']
2020-12-29 08:16:14.964000+00:00
['No Code', 'Queue', 'Low Code', 'Entrepreneurship', 'Bubble']
This Is My Playbook for Crushing It on Medium in 2021
SATIRE This Is My Playbook for Crushing It on Medium in 2021 I survived 2020, 2021 is the time to thrive Photo by Tnarg on Pexels. I started writing seriously on Medium only in September but I have already published 24 articles, been curated 5 times, and am a top writer in two topics. In these three months, I have grown from zero followers to 111 followers (and counting). I say this not to boast but to show you I have mastered the Medium opening game. Tip 1: It is important to establish your authority early in the article. I’ve spent weeks studying the advice of the established Medium gurus. I’ve signed up to so many free Medium guru newsletters that my Gmail can no longer distinguish between spam and ordinary e-mail. I’ve done all this work to make this — my 2021 Medium Playbook— to raise my standings in the Medium game. I’m offering this playbook FREE for all you dear readers now, for this LIMITED TIME ONLY. Once I’ve set up my Substack newsletter, my Zoom consulting business, and set up my ConvertKit e-mail thread, this playbook will only be available to subscribers to my $49 course. So, act fast! Just kidding. I don’t have any of these yet. But did you feel the urgency? Tip 2: Hit them hard with the Call to Action Without further ado, here are my three key approaches to crushing it on Medium.
https://medium.com/muddyum/this-is-my-playbook-for-crushing-it-on-medium-in-2021-d95e5d45b288
['U-Ming Lee']
2020-12-22 18:02:11.929000+00:00
['Satire', 'Humor', 'Culture', 'Writing', 'Comedy']
Journey to the UX — Part 1. Start asking questions!
Journey to the UX — Part 1 Start asking questions! Today’s business people don’t just need to understand designers better. They need to become designers — Roger Martin To step into the UX design world, the first thing we can develop in ourselves is a prepared mind. You need to understand that the biggest part of user experience is not what you objectively see but, rather, what you feel. Because as UX designers, you are there to advocate for users. If you don’t understand what their pain points, goals, and desires are, who does? And in order to successfully build a rapport with your users, you should be able to empathize with them, to view your products through their eyes. You don’t have to be a born artist or a gifted designer to begin with; initially, you can simply start asking questions — begin with the most mundane things, like doors. To push or to pull — it’s a lifetime question Now, you’ve probably never given much thought to doors. In fact, you’d ask, “Doors? What’s with them?”. Please, bear with me though. Imagine this: on a sunny day, we pace along with our favorite songs in our ears on a journey to a fancy coffee shop. Swiftly, without pausing, we reach out to push a door that will lead us to delicious cupcakes and a swirly cappuccino, then… *THUD* the door doesn’t budge because it ain’t a f-king PUSH door. As we look around, flustered, for any instruction on this door, our confidence cracks and our mood gets disrupted — gone from superb to whatever. All because of a door!!! Almost every one of us has experienced a situation where a door didn’t behave as we had intuitively expected. Some could have been in way more embarrassing scenarios — we could be slamming into a glass door without even realizing it, then shamefully finding a way to get through it. We could have been lost in the most innovative thought that might have opened up an entirely new discovery, but it was shattered by the hindrance of a badly designed door. A closer look Let’s take a step back and make some observations. Most of the time, instead of opening in both directions, doors are one-way trips, meaning either inwards or outwards — or, in human language, push or pull. We tend to spend as little effort as possible; our instinct is to conserve energy whenever we can, which is a sophisticated way of saying we are naturally so lazy that we will more often than not choose to push a door first. And yet, when we see something that fits the grip of our hands, our brain screams “PULL THE LEVER, KRONK!”. Reference to an old meme to show my coolness. (©Disney’s The Emperor’s New School) Sometimes, we get confused facing doors that look like they should be pushed while their labels signal us to pull, and vice versa. This is where things get interesting. An example of the confusing door. We start realizing that no matter how big or small the door is, we interact with it mostly through its handle; except for automatic doors, since they aren’t manhandled (duh!). And there are so many different designs for that — namely lever, knob, grip, latch, etc. Occasionally, we encounter slight variations of them; however, in the end, each of them reasonably works in certain cases and fails in others. There are way more doorhandle designs than these. We can ask a bunch of generic questions like: What is the exact problem? Who are the users? Why do some doors open outwards while some open inwards?
Why do certain doors have instruction labels (push/pull) on them and others don’t? What would be good examples of door design? Why do the existing designs work/fail? As we’re getting into the topic, let’s go even deeper by asking more specific questions: Do users unknowingly expect different types of door handles and behave accordingly? In which situations should doors be set to open outwards? Would the door’s aesthetics be affected if it had different handle types on either side? (E.g. one for push behavior, another one indicating pull action) And so on. Then, I did a little digging myself and… My initial investigation concluded that most exterior doors in buildings are set to open outwards to prevent the cold wind from blowing the doors wide open and getting inside the building during winter. On another note, the design choice also helps the doors close tighter against their frame, which reduces heat loss. Well, I was wronger than wrong. When the weather gets colder, the pressure built up from the warmer air inside the building will actually try to push its way out to balance itself; hence my wrong conclusion. The correct answer is to prioritize evacuation flow. People need to get out as fast as possible during safety hazards like fire, so when the horde marches, there won’t be a situation where the door can’t be opened because of the mass of people pushing against it. Door with an obvious hint for grabbing and pulling. How do we choose door handles now that we know our typical user behavior? As long as doors open outwards, we can use handles with a strong “pull” cue on the outside, which tend to have a long vertical shape and are easy to grasp. But do we need to use the same type on the other side too? The answer is “it depends”, as many would lean towards symmetry for aesthetic purposes. In case we want it to be symmetrical, meaning the same type that hints “pull” is used on the inside, we should put “push/pull” labels on both sides to indicate the distinct actions. Although it may look beautifully balanced, it definitely creates bad functionality and leads to the user frustration discussed above, which is something we are trying to avoid. Otherwise, a horizontal bar should be used on the inner side, because it is an effective solution that implies a “push” behavior. I have often seen it used mainly on emergency fire exit doors. Some even push their cue stronger (puns intended) with vertical push plates that offer nothing to grab. The doors on the left, with horizontal bars, are used for a common emergency exit, while the doors on the right are equipped with push plates, which are commonly seen at hospitals. Although this solution can’t really be used with glass doors because, you know, aesthetic reasons. Thus, there is no truly perfect solution, and bad doors are everywhere.
https://medium.com/the-shortcut/journey-to-the-ux-part-1-1285e368ab2c
['Nam Nhu']
2019-04-08 18:51:10.667000+00:00
['Design', 'UX Design', 'User Design', 'User Experience']
How to excel at designing a great user onboarding experience
As a child who grew up in Delhi, I was always fascinated by the rich heritage that Delhi offers to its residents. The Mughal monuments, their grandeur past, the spectacular architecture- everything about them was just so enticing. Except for one thing. I was unaware of how to experience these monuments, the right way . I wanted to delve into the rich history of these monuments but I always found myself unhinged to my country’s magnificent past. At first, I blamed myself for not paying enough attention in my history classes. But then, as I talked to other people, they tagged along in my plight. Here’s what I felt when I visited these monuments- I didn’t feel any connection while walking down the historical alleys of the fort. Even when I hired a travel guide, they just incessantly talked about irrelevant information, often swaying from one thing to another. The touts only cared about their money and had no interesting facts to share. Call it lack of knowledge or lack of empathy for visitors who came to spend their entire day at the monument. All I really wanted to do was to bask in the glory of a marvelous past. Alas! No one really cared what I wanted! This was my experience until a few years back. However, it changed recently on my tour of a very famous Delhi monument. I opted for an audio guide this time and by the end of my tour, I was in awe of the whole experience. It was, as if, every nook and corner of the monument spoke to me, told me stories about the forlorn past. I was intrigued and, at the same time, happy. Happy, because this was the first time someone uncovered the layers of my questions and valued my time. User onboarding is quite similar to that. If you look at the analogy, just like people who visit monuments, users come to visit your website/application for a purpose. They use the product as a medium to accomplish something in their lives — it could be sending bulk emails to their subscribers or booking tickets for their vacation. So, the onboarding process should help your users understand how they can do so quickly without feeling too overwhelmed or helpless. The onboarding experience should be such that it delights them and makes them loyal users of your platform. Therefore it’s important to identify- The steps that you want the users to take, and How can you guide them at each step and help them perform the right actions? Now, let’s dive straight into how you can design a better onboarding experience for your users. The experience should help users feel connected The ‘real’ purpose that people use an application is because they want a solution to their ‘problems’. For example- they are on social media to make new friends, connect with old ones, share their thoughts and be virtually present for all those guilty pleasures. Therefore, for any kind of platform, figuring out what ticks with users should be the first step while designing the onboarding experience. For example- How can you give flexibility to users when they want to sign up? Can they use their existing social credentials to access the web/mobile app? Do you really need all their personal details like date of birth, age, maiden name or alternate email address? Is it absolutely necessary to gather a user’s account/credit card information? Let’s look at Twitter as an example. When you are new to a platform, the first thing that piques your interest is- how does it work? What is it about? Twitter has answered this question on their landing page- And the best part? 
Once you are successfully signed up after verifying your email address, it doesn’t force you into any compulsory actions. Every screen has a ‘skip’ stage and you can get back to any step later. In the example below, Twitter tells you why you follow some people. It’s because only when you follow someone, their tweets will appear on your timeline. They also ask for your interests (with a valid reason) so that they can personalize your experience. Onboarding experiences are like the first impression. They’ll fail if you are putting up your user for a wild goose chase without informing them why they ought to chase it. So the important thing to keep in mind is that a successful sign up doesn’t count as a conversion. Your application is a success when your users understand the app and come back again and again to use it. The experience should win users’ trust According to a survey, 77% of users drop an app within 3 days after they download it. Of course, it’s not just the onboarding experience which is to blame. But it’s true that a good onboarding experience can help improve customer conversion and retention. And for a good onboarding experience, you have to win over people’s trust. When users feel safe sharing their information, they are more likely to use the application again and again. For instance, consider FinTech apps which can’t function properly without the friction of asking for confidential financial information. This friction point is inevitable to ensure smooth functioning of the application. However, it can be made more bearable for users by designing a comforting onboarding experience with detailed explanations on the safety features that you’ve included. It’s important to make users feel confident about using the app by entrusting that their money or personal info is safe with you. So, always ensure that both safety and simplicity are ticked off in your onboarding designs. A good example is of Robinhood which is a very popular trading app. It gives detailed information on why it’s asking for particular information. For example- when you are prompted to enter your country, it tells you that you need to fill this because federal laws demand it. Similarly, it communicates proactively why the app needs to have your SSN before you proceed with any further steps. The experience should meet users’ expectations Every user downloads an app or starts using a service for a reason. They have either heard rave reviews about the product from their friends or your marketing team has done an excellent job at grabbing those eyeballs. Therefore, when someone downloads the app, you have just one chance to grab that user through your excellent onboarding process. It’s your only shot to show that you’ll meet (or, even surpass) their expectations. And to meet their expectations, you first need to understand your users and their mental models. Why are they here? What do they want to accomplish? Have they previously used a similar application? This is important to consider because when the user is new to the platform, there is a possibility that (s)he has zero vocabularies/expertise in the area. For example- when users sign up on Duolingo, they want to learn a new language. The good thing about their onboarding experience is that they give an option of taking a placement test before starting any lessons. This way they put the users at ease and if they have a prior experience they can revisit their fluency in the language. Also, they do not prompt you to unnecessarily sign up on their platform. 
And when they do, they give a valid reason for it: they need the information to save your progress. The final word Considering that there are countless apps these days, it would be difficult to drill down on a fixed checklist which can help create an excellent onboarding. Just as there isn’t one approach to solving problems, there also isn’t any one silver bullet to nail user onboarding. Onboarding is something which will always be very product and geography specific. People from various backgrounds, with different mental models, are going to use the application. The choice is with us: we can either let users wander about and swim in uncertainty, or we can welcome them with a flexible and resilient experience that serves them well. One way to excel at it is to involve real users. You can prototype your user onboarding experience and get feedback from real users. And once it’s live, use the power of data to analyze whether any further improvements can be made to it. We hope this helps you look at user onboarding from a different perspective and make it count as a crucial part of the application’s success.
https://uxplanet.org/how-to-excel-at-designing-a-great-user-onboarding-experience-b87129485529
[]
2019-12-24 11:03:23.290000+00:00
['Onboarding', 'Design Thinking', 'User Onboarding', 'Design Process', 'Design']
Requiem for The Carnegie Deli
It was the winter of 1985, and my friend Jerry, his son Tim, and I had driven from East Tennessee to New York in order to interview the playwright/screenwriter, Horton Foote. Jerry was writing a book on Foote, and I was finishing my dissertation, also about Foote’s life and work. Not everyone is lucky enough to meet the subject of his Ph.D. dreams, but I was. Horton was a gracious man, and he and his wife Lillian welcomed us into their East Village apartment. I wish I remembered more about the couple of hours we spent there, but the time buzzed me as if I were a kid sitting for those brief moments on at the Pizitz Department Store Santa’s knee in downtown Birmingham. After we left, and we had driven all night and so were plenty tired, Jerry suggested we go uptown and have lunch at the Carnegie Deli. Didn’t have to ask twice, either. I had been to only one other Jewish restaurant in New York at this point, and that was an obscure dairy restaurant that will be featured later in this series. So, here was to be my first taste of an authentic New York Jewish deli. I have to add here that my wife and I had been semi-vegetarian until I left for this trip. We did eat fish and occasionally chicken, but NO RED MEAT. So at the Carnegie, I could have opted for any of the multitude of smoked fish plates, salads, or sandwiches. Could have, but didn’t. I chose the Jackie Mason, or was it the Henny Youngman, sandwich?: mounds of corned beef and pastrami on rye, with mustard, and horseradish on the side. If you never had the pleasure, these sandwiches weigh two-to-three pounds whether or not the waiter keeps his thumb on the scale. My joke later on was that I spent the next week digesting that sandwich, though I don’t know whom I thought I was fooling, or why I thought that was a joke. That much red meat is hard on the digestive tract anyway, but having not consumed any red meat for the previous four years? Imagine. I likely ate two whole kosher half-sour pickles before our food was served, too, and maybe I looked as green afterward as that semi-cucumber. Still, whatever color you had in your mind, nothing could match the beet-horseradish color of Tim, Jerry’s fourteen year-old son, who decided that what he wanted for lunch was the hamburger. I’m sure you’ve heard stories about gruff, brusque NYC waiters, and maybe they’re that way because goyish boys order burgers in a deli. In any case, our waiter intervened in Tim’s epicurean life: “LISTEN KID, IF YOU WANTA BURGER, GO ACROSS THE STREET TO MCDONALD’S. THIS IS A DELI AND IN A DELI, YOU ORDER DELI SANDWICHES.” You know when someone you don’t know means what he’s saying, and Tim was no exception. Jerry and I watched the poor guy actually gulp, and then say, “yessir, i will have a corned beef sandwich.” Which he went on to eat with gusto, eyeing the waiter from then on with either love or fear (or both) in his eyes.
https://medium.com/one-table-one-world/requiem-for-the-carnegie-deli-dec7fda4f295
['Terry Barr']
2020-08-30 18:53:27.110000+00:00
['One Table One World', 'New York', 'Food', 'Jewish', 'Family']
Why Not Use Kubernetes?
Why Not Use Kubernetes? Is Kubernetes really right for your stack? When to choose Kubernetes? Many teams are excited to start using Kubernetes. Some are interested in the resilience, elasticity, portability, reliability and other advantages Kubernetes offers natively. Some are technology enthusiasts and just want an opportunity to work with this platform, to get to know more about it. Some developers want to acquire experience with it, so they can add another highly demanded skill to their resumé. In general, most developers these days want to work with Kubernetes at some point. That may be a really good idea and it may not. Kubernetes is Designed to Solve Distributed Architecture Issues As defined by the official documentation website, “Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more.” It’s not exclusively made for distributed systems, but for containerized applications. Even so, it does provide many resources that make it easier to manage and scale distributed systems, like Microservices solutions. It’s also considered an orchestration system. Automation and orchestration are different, but related concepts. Automation helps make your business more efficient by reducing or replacing human interaction with IT systems and instead using software to perform tasks in order to reduce cost, complexity, and errors. In general, automation refers to automating a single task. This is different from orchestration, which is how you can automate a process or workflow that involves many steps across multiple disparate systems. When you start by building automation into your processes, you can then orchestrate them to run automatically. — What is orchestration? RedHat official website In other words, Kubernetes makes it easier to manage complex solutions that would be hard to maintain without a proper orchestration system. While you can implement DevOps engineering practices on your own, it’s not scalable if you go from dozens to hundreds of services.
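To make the orchestration point concrete, here is a minimal sketch (not from the original article) of how scaling can be automated against the Kubernetes API using the official Python client. The deployment name, namespace, and replica count are hypothetical, and it assumes a cluster reachable through your local kubeconfig; in practice you would usually declare the replica count in a manifest and let Kubernetes reconcile it for you.

```python
# Minimal sketch: scale a Deployment with the official Kubernetes Python client.
# Assumes `pip install kubernetes`, a reachable cluster, and a Deployment
# named "my-app" in the "default" namespace (both hypothetical names).
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the Deployment's scale subresource to the desired replica count."""
    config.load_kube_config()  # reads ~/.kube/config
    apps = client.AppsV1Api()
    body = {"spec": {"replicas": replicas}}
    apps.patch_namespaced_deployment_scale(name, namespace, body)
    print(f"Requested {replicas} replicas for {namespace}/{name}")

if __name__ == "__main__":
    scale_deployment("my-app", "default", 5)
```

Once the desired state is set, Kubernetes works to keep the observed state in line with it, restarting or rescheduling pods as needed, which is the resilience and failover behavior the documentation describes.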
https://medium.com/better-programming/why-not-use-kubernetes-52a89ada5e22
['Grazi Bonizi']
2020-06-04 13:39:28.884000+00:00
['Azure', 'Programming', 'Kubernetes', 'DevOps', 'Aks']
How the U.S. Chamber of Commerce wrecked the economy — and made the pandemic worse
A version of this article appeared earlier in Salon As hospital intensive care units overflow again, and delays in COVID-19 testing reports reach record levels in many cities, a conversation I recently had with Sen. Ed Markey, a Massachusetts Democrat, reminded me that I had forgotten something utterly critical: Donald Trump’s decision to unilaterally disarm America in the face of the coronavirus invasion was urged upon him by an ostensible defender of American business: the U.S. Chamber of Commerce. When the pandemic reached America, we were not ready — any more than we were ready when Japan bombed Pearl Harbor. But Trump had the tools to do what the U.S. has often done: make up for lack of preparedness. The crucial gaps to fill in March were supplies for testing to limit the spread of the virus, and medical equipment to treat those who got sick — testing kits, swabs, reagents, masks, gowns and gloves — by the billions. Government health agencies estimated that if the pandemic took hold, the country would need, for example, 3.5 billion N95 medical masks. We had 12 million. Presidents have available, and have routinely used, the Cold War-era Defense Production Act to overcome such critical supply shortages — not just in wartime, but also to ensure adequate relief supplies after natural disasters. DPA can be used to put emergency purchases at the head of a supply chain, but also to require factories to convert their output to provide needed equipment in adequate volumes. Members of Congress urged Trump to appoint a military official as DPA czar to coordinate production and distribution of essential pandemic-related medical supplies, as was done in the Korean War. In March, Trump was leaning towards robust use of the DPA, fashioning himself as a wartime president and the battle against the coronavirus as America’s “big war.” He invoked the DPA to require General Motors to speed up its production of ventilators. But he quickly discovered that he had an enemy within — not the virus as such, but the U.S. Chamber of Commerce. Within days of Trump’s announcement that he would mobilize as if for war and use the DPA, the Chamber of Commerce’s lobbying army swung into action. Among the Chamber’s arguments: DPA would impose “red tape on companies precisely when they need flexibility to deal with closed borders and shuttered factories.” Unstated: if the government took charge of the supply chain for tests, masks, and gowns, it could also prevent bidding wars from competing hospitals and states that would, and did, drive prices — and profits — through the roof. In response, labor unions representing nurses, hospital staff and other frontline workers who were unable to be tested, or to obtain masks or gowns, protested and urged the Chamber to join a national mobilization to defeat the coronavirus. The appeal was not answered. On March 23, Markey and his Massachusetts colleague, Sen. Elizabeth Warren, wrote to the Chamber demanding an explanation. The Chamber’s response: Defense Production Act reliance was unnecessary, because “American companies will do whatever it takes to support America’s response to the pandemic and shore up the economy.” But what the workers had feared — insufficient production, soaring prices and profits, chaotic delivery mechanisms — took over the health care market during the first wave of virus spikes in the Northeast. Some hospitals were paying 15 times the usual price for masks. 
The Chamber continued its strenuous opposition to enabling the government to ensure an adequate supply of tests and PPE. Even Trump conceded that profiteering had taken over the market, and ultimately, he did invoke the DPA to prevent it — but only when it came to the export market. Supplies increased, but often at exorbitant prices. Only when the shutdowns of most of the American economy brought the number of hospitalizations down sharply did supply and demand come into temporary balance. Americans believed that if there was a second wave later in the year, at least health care workers on the front lines would have the tools they needed. Trump, denied by Chamber opposition an easy pathway to acting like a heroic wartime president, seemingly lost interest. By June, his focus had shifted from fighting the virus to reopening the economy and then reigniting the culture wars. Media headlines proclaiming his lack of interest did not even provoke “fake news” tweets from the East Wing of the White House. The virus had a longer attention span. With major states like Florida, Texas and Arizona opening rapidly and prematurely, the holes in the jerry-built testing, tracing, and quarantine systems each state had fashioned without federal guidance gave the virus its chance. Cases — although not, initially, fatalities — began to soar. Suddenly, what any nationally coordinated effort would have been tracking and resolving all along — that the nation had stepped up production of tests, masks and gowns to meet the needs of an economy in shutdown, but had nowhere near the level of supplies for a massive second wave — came home to roost. By early July, new cases were flooding the hospital capacity of even some of the nation’s major health care centers, such as Houston. In New Orleans, testing centers had to close eight minutes after they opened. In 100-degree heat in Phoenix, lines at testing centers were eight hours long. San Antonio and Austin were forced to limit testing to those with symptoms, leaving the system utterly unable to detect the asymptomatic cases through which, research suggests, up to 40% of infections take place. Wait times for test results soared with caseloads, so that even those who got tested might not find out they were positive until they were well past the contagious stage. Nurses were again forced to use one N95 mask for weeks at a time. Prices for what was actually available soared. States reported they could only fulfill 10% of their orders. Even now, in July, the U.S. has nothing like the 3.5 billion N95 masks that we knew in January we would need. Indeed, how could we? No manufacturer could have guessed in March how severe the crisis would be by fall. With no one managing the system, it was clearly foolish for any private company to produce, on speculation, several billion masks. Market signals cannot prepare America for the massive increase in scale required by such a pandemic. Nor can individual cities and states trigger the necessary ramp-up of supplies. Only a systematic national plan designed to ensure that we were ready for the worst-case scenario could have protected us. Inevitably, the “market solutions” favored by the U.S. Chamber of Commerce have failed America not once but twice. In failing, they have taken the nation’s economy over the precipice into a deep, long economic decline. That the loudest voice claiming to represent U.S.
business chose to defend profiteering, embrace short-term thinking, and risk the collapse of the American health care system and economy is both shameful and shocking. And that most Americans, including me, had already forgotten this is scary. This is not the first time the Chamber has successfully pursued policies that put us enormously at risk — for years it has been one of the major forces preventing Washington from adopting even the most modest efforts to accelerate a transition off fossil fuels to save lives and protect the climate. But if the Chamber can get away once again with having caused massive economic damage while risking the health of millions, and with its public reputation unscathed, it is unlikely to change its behavior.
https://carlpope.medium.com/how-the-u-s-chamber-of-commerce-wrecked-the-economy-and-made-the-pandemic-worse-dd0ad8ee803d
['Carl Pope']
2020-07-24 18:01:41.802000+00:00
['Trump', 'Coronavirus', 'Chamber Of Commerce', 'Economy', 'Ppe']
Can I Be Bothered?
Can I be bothered? No. Let’s be honest I have two choices. Yes or no. And, let’s face it, if you’re approached by a salesman and they’re not very attractive you’d be inclined to always say no. No is easier. It also means less work. A better question, perhaps, would be why should I be bothered? I mean, this question asks for more; encourages deeper thought and reflection. So, why should I be bothered? Bothered to get out of bed, to have a shower, to leave my room, to go to work, to go to university, to write, to make and have a website. Since I only have eight hundred words and I’m under assignment, I will only address one of these. Why should I be bothered as an aspiring author, to have a website? Do I even need one? For starters, I’m a full-time student, work when I’m not studying, have a part-time blog and have religious commitments. I don’t have time. I don’t have the experience or knowledge. You also hear all those horrible stories about internet thieves stealing your details and hacking into your websites. Besides, doesn’t everyone use social media now? It’s easier to use social media, because I’ve been using it for a while. Plus, I bet it’s very expensive to have your own website. Though, with a little research I found out that over four billion people around the world use the internet (Kemp, 2018). That’s a massive amount. I’d never be able to reach over four billion people in one place in one day, the traffic alone would be ridiculous! But, even if only 0.0001% of people were to look at your website that’s still over 4,000 viewing your site! I do get why businesses would want to have a website. It’s also a good way of getting a product out there. Promoting your products with 24/7 advertisement and marketing. You can be promoting your business while you sleep. That is a nice thought, but how does it benefit me? I’m already lacking a great deal of sleep. I read an article from a web-page on Writer’s Edit, a website dedicated to aspiring authors, and in that it argues that “Essentially for an author, your name is your brand; and for any brand, a website is an integral element in valuable promotion.” (Edstein). They also mention, Joanna Penn, bestselling author of over twenty-five fiction and non-fiction books, and on this subject she said: “I’ve built a multi-six-figure business off the back of my author websites… Your website is one of the most important things to get sorted if you’re serious about your author Career…It’s your home on the internet and the hub for your books. It’s how readers, agents, publishers, journalists, bloggers and podcasters judge how professional you are.” (Penn). On closer evaluation, I decided to look up a variety of my favourite authors to find that indeed they all have their own websites, each providing wonderful detail on their novels and news on future releases. While also providing links to various social media forums to follow. It seems a website is like a source for which you can centralise your crafted creativity while also filtering condensed teasers through all forums you’re a part of. It’s all united through one medium. It also doesn’t cost anything to have a website. Sure, to get good, personalised domain names you will need to get out your wallet; but, in my opinion, I don’t need a personalised domain name just yet. I’m still honing my craft and building up a rapport. I can learn all the while the domain name is free and gain necessary experience and skills. So that when I am ready for branching out I will know what I’m doing. 
Social media, I’ve learnt from my searching, is like the bread crumbs that lead potential readers to your writing home, where you can sit them down with your wonderful ideas and get them to stick around as part of your fan base. The only concern I had is that there’s still the potential of hacking. But Helen Keller once said, “Life is either a daring adventure or nothing.” I guess you either take the leap and get the experience or you don’t and miss out on what could be life changing. It seems to me that the answer to why I should be bothered to have a website as an aspiring author can be accompanied by a self-reflecting question: Am I serious about becoming an author? There are only two answers to that question. So, will I? You can find my blog here. References: Edstein, Eloise. “Should Writers Have a Website?”. Writer’s Edit. https://writersedit.com/fiction-writing/should-writers-have-website/ Keller, Helen. Let Us Have Faith, p. 50. Doubleday & Company, 1940. Kemp, Simon. “Digital in 2018: World’s Internet Users Pass the 4 Billion Mark”. We Are Social. 30 January 2018. https://wearesocial.com/blog/2018/01/global-digital-report-2018 Penn, Joanna. “Step By Step Tutorial: How To Build Your Own Self-Hosted Author Website In 30 Minutes”. The Creative Penn. https://www.thecreativepenn.com/authorwebsite/
https://medium.com/clippings-autumn-2018/can-i-be-bothered-3765aaa89a9c
['Olivia Pettman']
2018-12-12 10:22:51.112000+00:00
['Website', 'Writing', 'Writer', 'Cccu', 'Social Media']
(Truly) Poor Economics
Esther Duflo and Abhijit Banerjee, who share a 2019 Nobel Memorial Prize in Economic Sciences with Michael Kremer, answer questions during a press conference at the Massachusetts Institute of Technology in Cambridge on Oct. 14. SCOTT EISEN/GETTY IMAGES Why RCTs Are Not a Promising Tool for Development He was a bold man that first ate an oyster. — Jonathan Swift If there has been a“next big thing” in the field of economics during the past decade, it is the application of techniques from medical research — specifically, “randomized controlled trials,” or RCTs — to assess the effectiveness of development or other government-initiated projects. The general thrust of the work is as simple as it is seemingly brilliant: rather than employ complex, and often unreliable, econometric methods to tease out the extent to which a project or policy actually had a beneficial impact on intended beneficiaries, why not follow the tried and true methods employed by pharmaceutical researchers to assess the efficacy of medical treatments? The steps are: Randomly divide the experimental population into a treatment group that receives “benefits” from the program, and a control group that doesn’t. Assess outcomes for both groups. Determine whether or not a significant difference exists between the two groups. This approach, championed by MIT economists Esther Duflo and Abhijit Banerjee (recent recipients of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, also known as the Nobel Prize in economics, and the coauthors of the book Poor Economics) among others, seems a self-evidently fabulous improvement on the status quo in development aid, which not infrequently in the past involved assessing program effectiveness by simply checking whether or not the money was spent. Clearly, a world in which aid money is allocated to projects that actually benefit people is preferable to one in which money goes to whoever most effectively maneuvers to get the money to start with and then most reliably manages to spend it. In this way, the use of RCTs in development arguably advances the goal of aid effectiveness, famously championed by NYU economist William Easterly in a series of books and a popular blog titled Aid Watch whose slogan is “Just Asking that Aid Benefit the Poor.” Describe your favorite entrepreneurial initiative to one of the many “development” economists influenced by this line of thinking and the most probable response you’ll receive is: “Sounds wonderful, but where’s the evidence of effectiveness?” This question will be followed by the following admonition: “In order to find out whether or not it worked, you really should do an RCT.” So what is the problem with applying RCTs to development? Aside from their expense (administering an RCT generally costs upwards of $100,000), the Achilles heel of RCTs is a little thing known to the statistically inclined as “external validity” — a phrase that translates informally to “Who cares?” The concept of external validity is straightforward. For any assessment, “internal validity” refers to the mechanism of conducting a clinical trial, and the reliability of results on the original setting. A professionally conducted RCT that yields a high level of statistical significance is said to be “internally valid.” However, it is fairly obvious that an intervention rigorously proven to work in one setting may or may not work in another setting. 
This second criterion — the extent to which results apply outside the original research setting — is known as “external validity.” External validity may be low because the populations in the original and the new research setting are not really comparable — for example, results of a clinical trial conducted on adults may not apply to children. But external validity may also be low because the environment in the new study setting is different in some fundamental way, not accounted for by the researcher, from the original study setting. Econometric studies that seek to draw conclusions about effectiveness from data that span large geographical areas or highly varied populations thus typically have lower levels of internal validity, but higher levels of external validity. The fundamental issue is not the purity of the methodology employed (as exciting as such methodological purity is to the technically inclined) but rather the inherent complexity of the world being studied. As (actual development economist) Ricardo Haussman states it: Another method that looses its appeal in a world of high dimensionality is the randomized trial approach. A typical program, whether a conditional cash transfer, a micro-finance program or a health intervention can easily have 15 relevant dimensions. Assume that each dimension can only take 2 values. Then the possible combinations are 2^15 or 32,768 possible combinations. But randomized trials can only distinguish between a control group and 1 to 3 treatment groups. Or as Don Berwick states in the context of public health interventions: How can accumulating local reports of effectiveness of improvement interventions, such as rapid response systems, be reconciled with contrary findings from formal trials with their own varying imperfections? The reasons for this apparent gap between science and experience lie deep in epistemology. The introduction of rapid response systems in hospitals is a complex, multicomponent intervention — essentially a process of social change. The effectiveness of these systems is sensitive to an array of influences: leadership, changing environments, details of implementation, organizational history, and much more. In such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect. (Emphasis added) Finally, as Angus Deaton (also a Nobel Laureate in economics) and Nancy Cartwright helpfully clarify: RCTs can play a role in building scientific knowledge and useful predictions but they can only do so as part of a cumulative program, combining with other methods, including conceptual and theoretical development, to discover not “what works,” but “why things work”. Those who most vociferously and naïvely advocate that we apply techniques from medical research to economics make a fundamental error: They fail to appreciate the fact that, when it comes to external validity, medical research is the exception that proves the rule. Indeed, in aid-led development in general, of the few real historical successes, nearly all are in public health. Outside of public health, few of the large-scale, top-down development programs have in fact succeeded. Why is this? Multiple conjectures are possible. But one persuasive one is this: when it comes to biophysical function, people are people. For this reason, a carefully developed medical protocol proven to be effective for one population is highly likely to work for another population. 
The smallpox vaccine tested on one population tended to work on other populations; this made it possible to eradicate smallpox. Oral rehydration therapy tested on one group of children tended to work on other groups of children; millions of children have been spared preventable deaths because the technique has been adopted on a global basis. Indeed, medical protocols have such a high level of external validity that, in the United States alone, tens if not hundreds of thousands of lives could be saved every year through a more determined focus on adherence to their particulars. These huge successes were achieved, and continue to be achievable, through bold action taken by public health officials. They are rightly celebrated and encouraged, but — outside of other public health applications — not easily replicated. Successes in medicine contrast sharply with failures in other domains. Decades of efforts to design and deploy improved cook-stoves — with the linked aims of reducing both deforestation and the illness and death due to indoor air pollution — have so far primarily yielded an accumulation of Western inventions maladapted to needs and realities in various parts of the world, along with locally developed innovations that cannot be expanded to meet the true scale of the challenge. For development programs in general, and RCTs in particular, public health is the exception that proves the rule. What does work in areas outside of public health? How is it possible to design, test, and implement effective solutions in environments where complexity and volatility are dominant? The general principle applies: Success requires adaptability and flexibility as well as structure — a societal capacity to scale successful efforts combined with an ingrained practice of entrepreneurial exploration. As the uniquely insightful Mancur Olson wrote in his classic Power and Prosperity: Because uncertainties are so pervasive and unfathomable, the most dynamic and prosperous societies are those that try many, many things. They are societies with countless thousands of entrepreneurs who have relatively good access to credit and venture capital. What works in development, according to Olson, is experimentation. Why? Because we don’t know what works.
https://auerswald.medium.com/truly-poor-economics-dc21944b7747
[]
2020-08-11 20:13:38.849000+00:00
['Economic Development', 'Randomized Control Trials', 'Entrepreneurship']
Career Advice for My 20-Year-Old Daughter
Prioritise being happy in what you do, because a career is a long time We spend a lot of time working and while this sounds like a cliché, it’s true that happiness is what matters. You don’t necessarily get to choose where you work or who you work with. You’d better believe that even the most ideal-seeming job is going to be a drag at times. With all this in mind, you’d better ensure that you choose work that makes you happy for most of the time, otherwise you’ll spend a lot of days struggling to get up in the mornings. Money is just one reason for working — Don’t make it into the only thing that you pursue I was convinced when I started working, that I could do anything asked of me as long as the pay was good. I completely underestimated just how fleeting the sense of fulfilment is from getting a big pay-cheque. What’s more enduring is the sense of worth you get from the work you do, and how you feel about doing it. Creativity, a sense of meaning, a belief that your work is of value and service of others, being able to express yourself and your values through what you do and freedom to choose how you do your job, all count massively. Having autonomy about how and where I work has also been an enormous factor for me. When I’ve had to commute daily to a certain location and have had to clock-on and off at set hours, I’ve found the structure to be constraining and frustrating. Having the freedom to be able to work at home when I’ve wanted and being trusted to get the job done while working the hours it requires, have both made me feel empowered and valued. Money can quickly become an assumed part of life rather than a motivating factor. After a certain point money ceases to be a motivator for many, once their living expenses are met. Freedom, favourable working conditions and the ability to direct time as you see fit all count for much more. I’ve written extensively about why I enjoy working from home. I don’t waste time commuting, I can fit in my side-hustle around my day job and it’s easier to fit in things like exercise around work which also enriches my quality of life. It won’t be for everyone, but I mention this as an example of where working conditions can outweigh the perceived importance of merely striving to make more money. Don’t extend your living costs and become too dependent upon your income Nothing strips work of enjoyment and meaning more than feeling like you’re stuck in a job and have to keep doing it whether you want to or not, merely because you are dependent on the paycheck to meet your commitments. I spent many years of my working life treating each payrise as an opportunity to increase my standard of living and my committed outgoings as a result. Bigger houses, fancier cars, hire-purchase agreements and indulgent spending meant that I became more and more reliant on my salary as it increased through my career. If I’d realised earlier that ‘stuff’ doesn’t equate to happiness, and that there are few pleasures as great as knowing that you are financially stable and have a decent chunk of money saved and invested, then I would certainly have invested more and spent less. Beyond a certain basic level we don’t really need more in life. When we get a pay rise this is an opportunity to start saving more with the goal of accumulating enough money that we can theoretically retire from work earlier. If I had my time again, I’d prioritise that over consumer spending. 
Being free of work earlier in life as a result of having built up a solid financial buffer is about as close as we can possibly get to buying more time in our lives. Photo by Saulo Mohana on Unsplash Don’t underestimate the value of boring old stability and safety Safe and stable will mean different things to different people. The definition evolves as we get older, and when we’re young we might value this less than we will once we start a family. When I started working, exhilaration and excitement were all I really wanted (besides money of course). I quickly found out that what I actually needed was a degree of stability and comfort in my life; the sort of stability that comes from having a lifestyle that I can comfortably afford, and enough money to save and invest regularly. Unfortunately it’s taken most of the last 20-years to achieve that state and I spent most of that time craving and chasing it. If you can turn that on its head and build that stability from the ground up, I think it will serve you well. Not everyone wants to be married with a home and 2.4 kids by the time they’re 30. Whatever level of stability you require, whether that’s enough money saved to fund your basic living costs for a year, or enough money for a plane ticket home from whatever corner of the world you decide to make home, prioritise putting that in place from the off. Your income doesn’t have to come from a single source I believe it will become more commonplace and more widely understood in future that people build portfolio careers and generate income from multiple sources rather than relying on one job for all their income. My parents always advocated saving and investing as well as the power of property to generate income, but for many years into my early adult life I didn’t appreciate that this was about more than just making a provision for retirement. Had I embraced the idea sooner I might have been less dependent on my day-job than I remain to this day. I’ve established a few small sources of income that now generate money for me regardless of whether I do anything with them or not. I’ve written and self-published books on Amazon, launched online training courses on sites like Udemy and I write regularly here on Medium. All of these generate evergreen passive income and I’m working hard to grow these. I also have my day job and do a small amount of freelance writing and ghost-writing in the evenings, to make extra money. There are new opportunities to make money emerging all the time via technology, and there’s no better time to leverage and exploit these trends than when they first come about — be an early adopter. In recent years, trends that I’ve dabbled with have included: Podcasts using Anchor.fm Self-publishing on Amazon Amazon Merch Becoming a Fulfilled By Amazon retailer Blogging on Medium Creating a YouTube Channel Email Marketing Not all of these have been successful or made money. The key thing is to give them a go though, not just to dabble but to pick one of the many business opportunities that exist and to give it a good few months of effort to see what is possible. Each one that contributes $1 every month has the possibility to grow and scale over time. Success takes a long time Reviewing and retracing the steps taken in my career and in my side-hustles it’s difficult to identify any landmark moments when my fortunes radically changed. Success on even a small scale has taken time. Progress is reflected in the compounded results from many average days of work put in over many years. 
There are many failures that have been necessary to yield the results that eventually lead to success. This isn’t like the world of celebrity, where stars seem to rise from nowhere. Interestingly though, while those who are famous just for being famous seem to become so overnight (and fade away just as quickly), those who are genuinely successful and accomplished in their field, have usually taken many years to reach that state: “I start early and I stay late, day after day, year after year. It took me 17 years and 114 days to become an overnight success” -Lionel Messi Very few people start out in their career without a desire to progress, improve and succeed on at least some level. Unfortunately, many of us do underestimate how long it will take and how difficult it is to make this happen. I think it’s unfortunate that for my daughter’s generation there are numerous factors conspiring against them to get what they want as quickly as they expect it. This video from Simon Sinek puts it better than I ever could: It might be tedious and demoralising to think of the time it will take to succeed and to notionally ‘make it’. With lengthening lifespans and growing competition in the global workforce, this trend seems more likely to increase rather than decrease the time and effort that success demands. It would be as well to prepare for this. Summing up As I consider the advice for my daughter, while on one hand the situation could seem bleak (for the increased competition from a global workforce for example) I think the possibilities and opportunities for making money in unconventional ways, make this an exciting time to be entering the world of work. I have at least another ten years (or more likely, twenty) before I hope to be able to stop working. I will be exploiting as many of these as I can for myself too. I’m excited to share this journey with her.
https://medium.com/the-post-grad-survival-guide/career-advice-for-my-20-year-old-daughter-1cbc673931ed
[]
2019-11-15 15:01:03.105000+00:00
['Work', 'Parenting', 'Entrepreneurship', 'Self', 'Career Advice']
Amazon reimbursed me for a price drop after my purchase. Here’s how I did it.
Amazon reimbursed me for a price drop after my purchase. Here’s how I did it. Spoke to 7 Amazon representatives in one month. Photo by Christian Wiediger on Unsplash You know that shitty feeling when you purchase something online only to discover a huge price drop shortly after? I found out firsthand when I bought a Kindle as a gift earlier this year. I screamed “ethics!!!” but really, I was just a pawn of capitalism. But I was not going to let it go. So I spent almost a month trying to get back the grand figure of $70 and I did. Here’s what happened and how I did it: Case in question Product: Kindle Paperwhite — Now Waterproof with 2x the Storage — 8GB (International Version) (note: damn you fancy marketing titles) Purchase price: $210 Price after sudden price drop: $140 Reimbursement sought: $70 Amazon representatives spoken to: 7 24 Feb — Order delivered All great for now. But then came the sudden price drop. I didn’t take a screenshot of it, a rookie mistake because Amazon could just play it off. So take your screenshots if you ever find yourself in such a situation. 03 Mar — The complaint Note: I am seeking a “reimbursement” because, frankly speaking, they are not obligated to “refund” me. I started off a little humbly, hoping to get my $70 back and move on. But then came the reply. Amazon rep #1 — Tanveer Tanveer’s response made me more determined to be reimbursed. I let him know what I thought of his “return for full refund and make a new purchase” suggestion. 04 Mar — Escalate Still fairly calm. But as you can see, I’m not wasting time with Tanveer. The first line of customer service (read: defence) can only offer so much. Always request to escalate to a higher authority when your issue is not immediately resolved. Next, Habib came along. Amazon rep #2 — Habib Okay, Habib is not helpful either, but the key is to persist. 06 Mar — Escalate #2 Not wasting time with Habib either. Straight to the point. 18 Mar — Enter Amazon Rep #3 Amazon rep #3 — Zareena Judging from the response crafted, Zareena is probably a more senior rep. So it’s likely that my case had been escalated, but still, there was no resolution. Firstly, Zareena didn’t say anything new besides fluffing up the service recovery. Second, notice the gap in response. It was a war of attrition. But joke’s on them, I was well prepared for battle — I was an unemployed fresh graduate who really needed his $70 back. 19 Mar — Leverage and reiterate I reiterated what I said to Tanveer and Habib and also leveraged the poor service response. But out of nowhere, Purohit arrived. 19 Mar — Purohit confusion Amazon rep #4 — Purohit Zareena had disappeared. At this point, it became clear that they had switched to a strategy designed to confuse me. First, with the sudden change in rep, then in claiming some error with my email address. I responded quickly. 20 Mar — Zareena reappears But before I could sort things out with Purohit, Zareena reappeared. It only got more confusing. Amazon’s non-personalised service emails mean neither Zareena nor Purohit were cc-ed in each other’s emails. But what became clear was that they were testing my resolve. Confusion was just another tactic in their art of attrition. Well played, but not this time, Amazon. 24 Mar — Persist As you can tell from the 4-day gap, I contemplated giving up. But I persisted, and asked to get on a direct call to resolve this circus show. 25 Mar — New challenger Lahari Unfortunately, Zareena ghosted me again. However, a new rep came to my “service”.
Amazon rep #5 — Lahari Lahari continued to sing the same tune as the 4 Amazon reps before her. But notably, she added that the return window for my purchase had expired, which meant the "return and full refund" solution was no longer available to me. She was essentially saying "Too bad bro, we played the long game and you lost." Now I knew I had to recalibrate my angle of attack and move to Amazon's live chat. 27 Mar — Amazon live chat Here, I spoke to Tarunkishore and asked him to look into my case. It seemed like live chat worked, because Tarunkishore finally agreed to reimburse me. I believe it was because he reviewed my case and saw how persistent I was. An automated email followed to ask if my problem was solved. Amazon rep #6 — Tarunkishore Dear reader, I wished it was. But it wasn't. 27 Mar — False resolution When Tarunkishore reimbursed me, he issued me a $70.18 Amazon gift card even though I had specifically requested a refund to the card I used for the purchase. I was not going to be trapped into spending $70.18 on Amazon. I wanted a cash reimbursement to spend my money poorly elsewhere. But this is the problem with Amazon and their customer service system: computer-generated email addresses that try to frustrate you into dropping the matter. 31 Mar — Resolution Amazon rep #7 — Harish 7 Amazon reps and almost a month later, I finally got my $70 back. At what cost? An unnecessary amount of time and effort. Was it worth it? I'd like to think yes. So, the next time you encounter a price drop and want to get a reimbursement, get on a call or live chat. Escalate if your problem is not immediately resolved. State your argument clearly and persist if their solution does not make sense. That could save you the trouble of going through a month-long journey with 7 different customer reps. But then again, maybe that's just how the game goes. So… seek a reimbursement at your own risk, mate. If you have a similar crazy customer service story, share it with me in the comments. Would be nice to know I'm not alone. P.S. Special thanks to Tanveer, Habib, Zareena, Purohit, Lahari, Tarunkishore, and Harish.
https://medium.com/the-haven/amazon-reimbursed-me-for-a-price-drop-after-my-purchase-heres-how-i-did-it-f6fd1acad9bf
[]
2020-10-24 03:50:14.768000+00:00
['Customer Service', 'Amazon', 'Sales', 'Comedy', 'Shopping']
Lives in Hiding, Emotional Abuse, the Closet, and Other Concerns
The Reconnection I was pleasantly surprised when Dinu decided to contact me again. I thought I had lost a friend for good but then here she was after five years, telling me that she was stupid and scared. Please forgive her. Five years and many arguments after, her boyfriend had sat her down and told her she might be bisexual and finally gave her “permission” to talk to me. As expected, after telling her of her sexual orientation, he also suggested a threesome to help her make sure that she’s bisexual. So you stopped her from owning a part of her sexuality out of fear and then after many fights and disappointments told her it’s ok? Why did you need her to be 100% straight in the first place? Why did you feel threatened by someone who lived at least a thousand miles away and only chatted with your girlfriend roughly once a month? Why do you still casually joke and shame her for her attraction to women? Call her a lesbian and think it’s a derogatory term? Can you even claim to love her? The New Personality As much as I was happy to reconnect with a “lost” friend, I was met with a changed version of the former friend. The confusion our talk created was so much that I had to google signs of emotional abuse just to remind myself what it is. The details of our conversation went as below. She tells me that he finally felt secure enough in her love for him. My question is, what did she have to sacrifice to get there? The answer lies in her current state of being — it seems she had to break herself down to prove her devotion. She tells me she has developed depression and an anxiety disorder. She has suicidal thoughts. She tells me she’s broken. She uses the identity of broken without question. She tells me her boyfriend is amazing because he’s fixing her. Fixing? Broken? The friend I used to know did not speak this way about herself. Not even once. If no hugely traumatic occurred in the past few years, then how could there be such a change in self-perception? She asks me if I find her pretty. The person I used to know didn’t ask questions like “am I pretty?”. What kind of question is that? Maybe she should have asked these questions when she was supposed to be in her insecure adolescent phase. She was similarly feeling unsure and lost about other aspects of her life. What happened? Where did her self-esteem go? In addition, she has developed a high emotional reactivity that wasn’t there before: She was super quick to assume I don’t like her as a person anymore when I didn’t feed validating answers to all her insecure questions. She even imagined I hate her. She got a little angry, all in the span of one conversation. She showed jealousy over other people in my life, lacked boundaries, asked questions that are too familiar for someone who just reappeared in my life out of nowhere. I could no longer recognize this person. She wasn’t like this before. Has she been emotionally abused? As far as I know, emotionally abused people tend to develop emotional instability as a result of the abuse. Emotional abuse can even impact friendships because emotionally abused people often worry about how people truly see them and if they truly like them. — Sherry Gordon, Very Well Mind Am I seeing things that are not there? I am sure that Dinu has changed but can I know for sure that there is something off about her relationship as well? Is she suffering because of her boyfriend or are they suffering together because she can’t integrate the part of herself that likes other women into her current reality? 
Then again, I remember that she wasn’t trying to disown her sexual attraction to women all those years prior to confiding in him.
https://medium.com/prismnpen/lives-in-hiding-emotional-abuse-the-closet-and-other-concerns-b0b5dcd854d
['Tima Loku']
2020-12-13 03:51:05.677000+00:00
['Relationships', 'LGBTQ', 'Mental Health', 'Women', 'Creative Non Fiction']
I Keep Hearing That Working From Home Is The Future — And It Bugs Me to No End
Remember how, just until prior to the whole COVID-19 predicament, a lot of employers seemed to be firmly planted in their conviction that workers at home are not to be trusted? That they must be slacking off because, apparently, they have no self-control, let alone a desire to actually earn their salary? Yup. And suddenly — less than a year after — these same employers are telling us that working from home is the future and that they are considering leaving their office spaces behind for good. The way I see it, there are two possibilities as to why. Either some employers are genuinely convinced that working from home is the wish of every single worker — and thus, now that company policy allows it, we might as well just all switch, because we all agree — right? Or — more likely, in my opinion — they realized they can fully get rid of the significant expenditure of renting office space, buying toilet paper, office supplies, coffee, and what have you. Of course, that can only be pulled off if every single employee works from home. Or, well, from anywhere except the office. But here comes the catch — not everyone is into this whole working from home business. In fact, most friends and coworkers I’ve spoken to — people who had no prior experience as, say, freelancers — expressed that they would want to go back to the office sooner than later. And I agree wholeheartedly. I miss being around people. I need to see others puttering around the office, I need to be able to go pick up a coffee with a colleague, to sit on a sofa and chat, or just reposition myself every hour just to freshen up my environment. Doesn’t matter if I wear headphones through it all. The mere presence of others working beside me is both calming and inspiring. Also, I live in an apartment that Google tells me is the equivalent of 215sqft. That’s tiny. I’ll go nuts staying here. And I don’t know about most people, but my chances to escape home would be slim to none. Where else could I go if not my office? Add to that the fact that, if you deal with confidential data or need reliable Internet all day as I do, you can’t really be sitting on some public wi-fi hotspot in a hip café (post-Corona, clearly). While on that topic — I’ve had crazy anxiety about my Internet connection failing on me. If my domestic network fails and I can’t get into the meeting, it’s pretty much my problem to solve. When something like that happens in an office, no employee is under any real pressure — except maybe for the one who is in charge of keeping that connection up and running. But it’s mostly the company’s problem. I just detest the concept of being liable for so much more while getting paid the same and slowly eroding my mental health as the line between life and work deteriorates. There is a myriad of reasons why someone’s personal space might not be conducive to work. As I’ve mentioned — my own place is tiny, cramped, my desk is two steps away from my bed, and of course, there’s no company. Someone else’s home will have different limitations. What I have, that a lot of employers seem not to, is the awareness that everyone has challenges I have no idea about. That applies to WFH. It’s not even only about the work part. Home used to be my sacred space — where I took care of myself and could wind down after the day’s challenges. Now it has become the place where I’m stressed more often than not. As nice as it is to be at the office, it’s just as nice to be able to leave it at the end of the day.
https://medium.com/illumination/i-keep-hearing-that-working-from-home-is-the-future-and-it-bugs-me-to-no-end-1318c6894f94
['Alice Audie']
2020-12-23 12:33:04.275000+00:00
['Covid 19', 'Work', 'Home', 'Mental Health', 'Self']
How We Use RocksDB at Rockset
In this blog, I’ll describe how we use RocksDB at Rockset and how we tuned it to get the most performance out of it. I assume that the reader is generally familiar with how Log-Structured Merge tree based storage engines like RocksDB work. At Rockset, we want our users to be able to continuously ingest their data into Rockset with sub-second write latency and query it in 10s of milliseconds. For this, we need a storage engine that can support both fast online writes and fast reads. RocksDB is a high-performance storage engine that is built to support such workloads. RocksDB is used in production at Facebook, LinkedIn, Uber and many other companies. Projects like MongoRocks, Rocksandra, MyRocks etc. used RocksDB as a storage engine for existing popular databases and have been successful at significantly reducing space amplification and/or write latencies. RocksDB’s key-value model is also most suitable for implementing converged indexing, where each field in an input document is stored in a row-based store, column-based store, and a search index. So we decided to use RocksDB as our storage engine. We are lucky to have significant expertise on RocksDB in our team in the form of our CTO Dhruba Borthakur who founded RocksDB at Facebook. For each input field in an input document, we generate a set of key-value pairs and write them to RocksDB. Let me quickly describe where the RocksDB storage nodes fall in the overall system architecture. When a user creates a collection, we internally create N shards for the collection. Each shard is replicated k-ways (usually k=2) to achieve high read availability and each shard replica is assigned to a leaf node. Each leaf node is assigned many shard replicas of many collections. In our production environment each leaf node has around 100 shard replicas assigned to it. Leaf nodes create 1 RocksDB instance for each shard replica assigned to them. For each shard replica, leaf nodes continuously pull updates from a DistributedLogStore and apply the updates to the RocksDB instance. When a query is received, leaf nodes are assigned query plan fragments to serve data from some of the RocksDB instances assigned to them. For more details on leaf nodes, please refer to Aggregator Leaf Tailer blog or Rockset white paper. To achieve query latency of milliseconds under 1000s of qps of sustained query load per leaf node while continuously applying incoming updates, we spent a lot of time tuning our RocksDB instances. Below, we describe how we tuned RocksDB for our use case. RocksDB-Cloud RocksDB is an embedded key-value store. The data in 1 RocksDB instance is not replicated to other machines. RocksDB cannot recover from machine failures. To achieve durability, we built RocksDB-Cloud. RocksDB-Cloud replicates all the data and metadata for a RocksDB instance to S3. Thus, all SST files written by leaf nodes get replicated to S3. When a leaf node machine fails, all shard replicas on that machine get assigned to other leaf nodes. For each new shard replica assignment, a leaf node reads the RocksDB files for that shard from corresponding S3 bucket and picks up where the failed leaf node left off. Disable Write Ahead Log RocksDB writes all its updates to a write ahead log and to the active in-memory memtable. The write ahead log is used to recover data in the memtables in the event of process restart. In our case, all the incoming updates for a collection are first written to a DistributedLogStore. 
The DistributedLogStore itself acts as a write ahead log for the incoming updates. Also, we do not need to guarantee data consistency across queries. It is ok to lose the data in the memtables and re-fetch it from the DistributedLogStore on restarts. For this reason, we disable RocksDB’s write ahead log. This means that all our RocksDB writes happen in-memory. Writer Rate Limit As mentioned above, leaf nodes are responsible for both applying incoming updates and serving data for queries. We can tolerate relatively much higher latency for writes than for queries. As much as possible, we always want to use a fraction of available compute capacity for processing writes and most of compute capacity for serving queries. We limit the number of bytes that can be written per second to all RocksDB instances assigned to a leaf node. We also limit the number of threads used to apply writes to RocksDB instances. This helps minimize the impact RocksDB writes could have on query latency. Also, by throttling writes in this manner, we never end up with imbalanced LSM tree or trigger RocksDB’s built-in unpredictable back-pressure/stall mechanism. Note that both of these features are not available in RocksDB, but we implemented them on top of RocksDB. RocksDB supports a rate limiter to throttle writes to the storage device, but we need a mechanism to throttle writes from the application to RocksDB. Sorted Write Batch RocksDB can achieve higher write throughput if individual updates are batched in a WriteBatch and further if consecutive keys in a write batch are in a sorted order. We take advantage of both of these. We batch incoming updates into micro-batches of ~100KB size and sort them before writing them to RocksDB. Dynamic Level Target Sizes In an LSM tree with leveled compaction policy, files from a level do not get compacted with files from the next level until the target size of the current level is exceeded. And the target size for each level is computed based on the specified L1 target size and level size multiplier (usually 10). This usually results in higher space amplification than desired until the last level has reached its target size as described on RocksDB blog. To alleviate this, RocksDB can dynamically set target sizes for each level based on the current size of the last level. We use this feature to achieve the expected 1.111 space amplification with RocksDB regardless of the amount of data stored in the RocksDB instance. It can be turned on by setting AdvancedColumnFamilyOptions::level_compaction_dynamic_level_bytes to true. Shared Block Cache As mentioned above, leaf nodes are assigned many shard replicas of many collections and there is one RocksDB instance for each shard replica. Instead of using a separate block cache for each RocksDB instance, we use 1 global block cache for all RocksDB instances on the leaf node. This helps achieve better memory utilization by evicting unused blocks across all shard replicas out of leaf memory. We give block cache about 25% of the memory available on a leaf pod. We intentionally do not make block cache even bigger even if there is spare memory available that is not used for processing queries. This is because we want the operating system page cache to have that spare memory. Page cache stores compressed blocks while block cache stores uncompressed blocks, so page cache can more densely pack file blocks that are not so hot. 
As described in Optimizing Space Amplification in RocksDB paper, page cache helped reduce file system reads by 52% for three RocksDB deployments observed at Facebook. And page cache is shared by all containers on a machine, so the shared page cache serves all leaf containers running on a machine. No Compression For L0 & L1 By design, L0 and L1 levels in an LSM tree contain very little data compared to other levels. There is little to be gained by compressing the data in these levels. But, we could save some cpu by not compressing data in these levels. Every L0 to L1 compaction needs to access all files in L1. Also, range scans cannot use bloom filter and need to look up all files in L0. Both of these frequent cpu-intensive operations would use less cpu if data in L0 and L1 does not need to be uncompressed when read or compressed when written. This is why, and as recommended by RocksDB team, we do not compress data in L0 and L1, and use LZ4 for all other levels. Bloom Filters On Key Prefixes As described in converged indexing blog, we store every column of every document in RocksDB in 3 different ways and in 3 different key ranges. For queries, we read each of these key ranges differently. Specifically, we do not ever lookup a key in any of these key ranges using the exact key. We usually simply seek to a key using a smaller, shared prefix of the key. Therefore, we set BlockBasedTableOptions::whole_key_filtering to false so that whole keys are not used to populate and thereby pollute the bloom filters created for each SST. We also use a custom ColumnFamilyOptions::prefix_extractor so that only the useful prefix of the key is used for constructing the bloom filters. Iterator Freepool When reading data from RocksDB for processing queries, we need to create 1 or more rocksdb::Iterator s. For queries that perform range scans or retrieve many fields, we need to create many iterators. Our cpu profile showed that creating these iterators is expensive. We use a freepool of these iterators and try to reuse iterators within a query. We cannot reuse iterators across queries as each iterator refers to a specific RocksDB snapshot and we use the same RocksDB snapshot for a query. Finally, here is the full list of configuration parameters we specify for our RocksDB instances.
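The full configuration list referenced above did not make it into this copy of the post, but most of the tuning described here maps onto a small set of RocksDB options. The sketch below is illustrative only: Rockset's engine is C++ and the values shown are invented placeholders, not their actual settings. The same knobs are, however, exposed through a recent RocksJava binding, which is what the sketch uses.

import java.util.Arrays
import org.rocksdb._

object RocksDbTuningSketch {
  RocksDB.loadLibrary()

  // One block cache shared by every RocksDB instance on the leaf node.
  val sharedCache = new LRUCache(256L * 1024 * 1024) // placeholder size, roughly "25% of pod memory" in the post

  def openTunedInstance(path: String): RocksDB = {
    val tableConfig = new BlockBasedTableConfig()
      .setBlockCache(sharedCache)           // shared block cache across shard replicas
      .setFilterPolicy(new BloomFilter(10)) // bloom filters...
      .setWholeKeyFiltering(false)          // ...built on key prefixes, not whole keys

    val options = new Options()
      .setCreateIfMissing(true)
      .setLevelCompactionDynamicLevelBytes(true) // dynamic level target sizes for ~1.111 space amplification
      .setCompressionPerLevel(Arrays.asList(
        CompressionType.NO_COMPRESSION,          // L0: no compression
        CompressionType.NO_COMPRESSION,          // L1: no compression
        CompressionType.LZ4_COMPRESSION,         // L2 and deeper: LZ4
        CompressionType.LZ4_COMPRESSION,
        CompressionType.LZ4_COMPRESSION,
        CompressionType.LZ4_COMPRESSION,
        CompressionType.LZ4_COMPRESSION))
      .useFixedLengthPrefixExtractor(8)          // placeholder prefix length feeding the bloom filters
      .setTableFormatConfig(tableConfig)

    RocksDB.open(options, path)
  }

  // Apply a sorted micro-batch of updates with the WAL disabled:
  // the DistributedLogStore already plays the write-ahead-log role upstream.
  def applySortedMicroBatch(db: RocksDB, updates: Seq[(String, String)]): Unit = {
    val writeOptions = new WriteOptions().setDisableWAL(true)
    val batch = new WriteBatch()
    updates.sortBy(_._1).foreach { case (k, v) =>
      batch.put(k.getBytes("UTF-8"), v.getBytes("UTF-8"))
    }
    db.write(writeOptions, batch)
    batch.close()
    writeOptions.close()
  }
}

The application-level write throttle and the iterator freepool described above sit outside RocksDB itself, so they do not appear in the options; they would be implemented around calls like these.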
https://medium.com/rocksetcloud/how-we-use-rocksdb-at-rockset-c500876fd6d4
[]
2019-09-05 19:04:14.545000+00:00
['Database', 'Rocksdb', 'Storage', 'Indexing', 'Software Engineering']
Blaming the Victim
Blaming the Victim Literally Literary and The Writing Cooperative prompt Photo by Clayton Cardinalli on Unsplash For three years as an undergraduate, I claimed a Social Work major. I wanted to do good work for the poor and disadvantaged. I was minoring in English and decided to take a course in Southern Literature in which we read several Faulkner short stories and his masterwork, Absalom, Absalom! As I read Faulkner’s prose about the benighted south, I felt kinship. I was an Alabamian, after all. I said goodbye to Social Work then and became an English major. Yet from my social work days, I will always remember William Ryan’s seminal text, Blaming the Victim, from which he reflected on our nation’s pervasive, negative attitude toward our “have-nots.” An attitude that blames these unfortunate people for the downward course of their lives. That the poverty, neglect, and abuse they suffer is entirely their doing. That they don’t have a job because they are too lazy to hold one. Getting a job isn’t easy under any circumstance. I applied for approximately 100 college teaching positions up and down the eastern half of this country. My wife, herself finishing her Master’s degree in Counseling Psychology, was willing to go wherever I could get a job. Of those 100 applications, I landed two interviews. The one I got was at Presbyterian College in Clinton, South Carolina. Clinton, a former mill town, had fallen on hard times. The mill had shut down, and the town’s population, hovering at 10,000, was on the wane. So upon my hiring, the first question I had for the college was whether or not it could also employ my wife, and if it couldn’t do that, could it help her find a job? I noticed early on that many faculty member’s spouses had staff jobs on campus, in the library and in administration. However, the college already had a counselor, a member of the psychology faculty. As we all know, one counselor for 1200 students is surely adequate. To her credit, though, this professional helped my wife find a job in another town to the south of Clinton— a good state job with benefits. The catch was that since we had freely chosen to live in Greenville, forty-five miles to the north of Clinton, our commute would be close to 200 miles per day, given that we had one car. We made this commute for two years until my wife became pregnant, quit that job, and then after our baby was a few months old, went into private practice. In the end, everything worked out for us, and we’ve been in these same jobs for over thirty years, have raised two daughters, and finally got that second car. So I shouldn’t complain. I am a full professor, have tenure, love my colleagues, students, and the courses I teach. Sure, there have been conflicts and down times, and the salary isn’t up to competitive standards, at least for those at my rank. Over my tenure, I have taught courses in Southern Literature, Film Studies, Creative Nonfiction, and I inaugurated a course in American Literature and Ethnic Identity, in which I have taught such works as Typical American, by Gish Jen, The Romance Reader, by Pearl Abraham, and Caucasia, by Danzy Senna. I started this ethnic lit course for three reasons: Our students are predominately Caucasian. My wife is a native of Iran. I am half-Jewish. Of these three reasons, the last two formed the problem that I can never forget. My “Exulansis.”
https://medium.com/literally-literary/blaming-the-victim-e8474f9e5425
['Terry Barr']
2019-11-27 02:28:23.729000+00:00
['Literally Literary', 'Exulansis', 'Nonfiction', 'Family', 'Education']
Useful Mobile Marketing Tools to Grow Your Business in 2020 (Pricing Included)
Starting a winning mobile marketing strategy is anything but simple. There’s a lot that goes into it. You’re required to master many different techniques and juggle multiple mobile marketing channels at the same time. Unless you stay on top of your mobile marketing strategy, it will fall apart quickly. That’s why smart marketers use different tools to help them. Mobile marketing tools not only make it easier to keep track of everything but improve your campaigns and grow your business. To make your life easier, we have compiled a list of top mobile marketing tools you need to be a winner in 2020. The types of tools we cover range from analytics, social media, and text marketing tools to SEO, ASO, and multi-channel marketing tools. Analytics Tools Mobile marketing is all about the numbers. If you don’t follow key metrics, you won’t have any success with your mobile marketing campaign. Unless you get really, really lucky. However, the probability of that is very low. So just stick with analytics. Here are a couple of analytics tools that will help you rule the mobile marketing world. Google Analytics is an all-around great tool to have in your marketing belt. It offers you an incredible number of features for free. Even though Google Analytics is not made exclusively for mobile, there are still a lot of relevant metrics you can follow. Google Analytics Features Traffic reports Campaign tracking Conversion tracking Custom dashboard Keyword referrals Goals Event tracking Real-time reporting Attribution Custom metrics Pricing Free Enterprise (customized pricing) AppsFlyer This powerful tool specializes in marketing analytics and mobile attribution. You can use it to track and improve your mobile campaigns, monitor your app engagement, see where the installs come from, etc. This tool helps you use the power of data in order to make better marketing decisions. AppsFlyer is great for gaming, retail, travel, financial, entertainment, food & drink, and many other industries. AppsFlyer Features Impressions and cost reports Multi-channel measurements Lifetime value reports Return of interest reports Ad revenue attribution Uninstall attribution Deep linking Email marketing management A/B testing Social media integration Audience segmentation Cohort analysis Contextual targeting Tracking of in-app events Text messages Website analytics Location-based marketing Pricing Plan Basic Plan — pay as you go (free trial) Custom Plan — Full features. Contact AppsFlyer for more details. App Annie This powerful tool is used for app analytics and mobile market research. It gives you access to all the data and analytics you need to improve your mobile marketing strategy. You can easily track the performance of your app on a user-friendly dashboard, and compare different metrics. Market analysis is another great App Annie feature. You can do extensive research and even track your competitors’ app store revenue. If you want to take your mobile marketing strategy to a new level and make your app a success, you should try this mobile marketing tool. The basic plan is free, so why not? App Annie Features App store tracking Paid search App store optimization Download and revenue estimates App usage estimates Advertising estimates SDK insights App ranking Event tracking App category intelligence App keyword ranking Pricing Plan Free (only basic features) Premium (full features). Cost determined according to many different factors like country location, number of apps, company size, etc. 
Soomla Soomla is a leading tool in monetization measurement. It provides you with important in-app advertising insights and allows you to control your ads. For example, Soomla lets you identify ads that are increasing churn rates and helps you eliminate them, which is quite useful. Soomla Features App monetization Attribution Cohort analysis Funnel analysis In-app events analysis Ad revenue attribution Pricing Plan Basic tier ($999 per month) Blue tier (price upon request) Green tier (price upon request) Blue tier (price upon request) If you want to check out even more mobile app analytics platforms, click here. Social Media Tools Social media is an integral part of any mobile marketing strategy. As long as users keep spending hours glued to their screens and scrolling through social media feeds, marketers will try to capture their attention. However, managing social media accounts requires great planning, organization, and strategy. That’s why markets use scheduling tools to keep track of social media., Hootsuite Hootsuite is probably the most popular social media scheduling tool out there. There are many reasons for that. Hootsuite allows you to schedule posts for all top social media platforms, manage your content plan, and even track the performance of your posts with social analytics. What drives many people to use Hootsuite is their free plan which allows you to manage up to 3 social media profiles and schedule 30 posts. However, there are also paid plans for more advanced users. What I like about Hootsuite is their easy-to-use dashboard, and content plan that makes it simple to keep track of scheduled posts. Hootsuite Features Scheduling Monitoring Content curation Social analytics Team management 24/7 support Pricing Plan Free Professional ($19 per month) Team ($99 per month) Business ($599 per month) Enterprise (contact Hootsuite for pricing) SMS Marketing & Notifications Tools Another important mobile marketing channel is text and notifications. It is a great technique for increasing your reach and keeping your audience engaged. Here are some easy-to-use SMS marketing tools for setting up your text campaign. EZ Texting EZ Texting tool makes it easy to get started with text marketing. It’s great for beginners and has all the features you need, like scheduling, tracking, personalization, and reports. 160 thousand marketers use the platform and more than 4 billion messages have been sent through it. EZ Texting Features SMS marketing Sign-up forms Keywords MMS messages Text to vote SMS polls Shortcode services Personalization Drip campaigns Scheduling Tracking Reports Pricing Plan 14-day free trial Plus ($49 per month) Select ($94 per month) Elite ($149 per month) Pro ($250 per month) Bronze ($450 per month) Silver ($1100 per month) Gold ($2000 per month) Custom plan Airship Airship is a great tool for handling in-app messages, push notifications, and text messages. Use it to send relevant messages to your audience at the right time and boost user engagement. Airship Features Push notifications In-app messaging SMS Web notifications Mobile wallet Open channel API Automation Personalization Optimization Analytics Pricing Price available upon request WhatsApp Business If you’re looking for a 100% free texting tool, then you should try WhatsApp Business. It’s not as fancy as other messaging tools and doesn’t have as many features, but it’s still quite useful. 
WhatsApp Business Features Business profile Quick replies Contact labels Automated messages Pricing Free Multi-Channel Marketing Tools Trying to balance multiple marketing channels at once can be very frustrating and time-consuming. Unless, of course, you have some tools to help you. Iterable To manage your marketing efforts across multiple mobile channels, try Iterable. This tool uses AI to analyze user behavior and come up with the best time and channel to engage them. I found this to be a standout feature. However, you can use Iterable for many other things like creating and managing lifecycle campaigns and retargeting users. Iterable Features Multi-channel integration Audience identification Lifecycle campaign creation Personalization Optimization Data integration Real-time metrics Pricing Plan Free trial Subscription starting from $500 SendPulse SendPulse allows you to handle text messages, push notifications, web notifications, emails, Facebook messenger, and Viber all in one place. I would single out their drag and drop editor as an exceptional feature. Send Pulse Features Drag and drop editor Subscriptions forms Trigger emails Reporting API Web push monetization Automated sending Email scheduler Personalized notifications Segmentation Unsubscriber list Chatbots A/B testing Pricing Free trial Customizable subscription Pay as you go option VIP Plan App Store Optimization Tools If you have an app, ASO is one of the key things you need to focus on to make it successful. To find out how to do app store optimization like a pro, check out our simple guide. AppTweak This tool is great for increasing the visibility of your mobile game or app. AppTweak offers you all the data you need to optimize app stores and increase downloads. Top mobile leaders like PayPal, Microsoft, Expedia, Amazon, and LinkedIn use this mobile marketing tool. AppTweak Features Keyword research ASO report Keyword monitoring Keyword shuffler Category ranking Keyword data Organic keywords Paid keywords Keyword auto-suggestion Algorithm change detector Keyword counter Revenue estimates API Visibility score Pricing Plan Starter ($69 per month) Guru ($299 per month) Power ($599 per month) Enterprise (custom plan) SEO Tools Search engine optimization is crucial for the success of your mobile marketing strategy. Here’s a great tool that will help you boost your organic mobile traffic. SEMRush With more than 4 million users, SEMRush is one of the leading SEO tools on the market. It allows you to do everything from competitor analysis, keyword research, deep link analysis, rank tracking, and traffic analysis. SEMRush Features Organic research Organic traffic insights Keyword research Backlink building Position tracking Analytics Rank tracking Site audit On-page SEO checker Search engine sensor Content audit SEO content template Traffic analytics Pricing Plan Pro ($99 per month) Guru ($199 per month) Business ($399 per month) Enterprise (custom plan) Have you used any of these mobile marketing tools? Are there any mobile marketing tools you think we should add to this list? Tell us in the comments below! Read More About Mobile Marketing 👇 About Udonis: In 2018 & 2019, Udonis Inc. served over 14.1 billion ads & acquired over 50 million users for mobile apps & games. We’re recognized as a leading mobile marketing agency by 5 major marketing review firms. We helped over 20 mobile apps & games reach the top charts. Want to know how we make it look so effortless? Meet us to find out.
https://medium.com/udonis/starting-a-winning-mobile-marketing-strategy-is-anything-but-simple-f4e5aa22bca4
['Andrea Knezovic']
2019-12-04 15:22:07.201000+00:00
['Marketing Tools', 'Digital Marketing Tools', 'Marketing', 'Mobile Marketing', 'Mobile Marketing Tips']
Ahmad Hasan Dani, RIP
The subcontinent lost a distinguished resource in archeology in the passing away of the great Pakistani Indologist, Ahmad Hasan Dani on January 26. Dani was born in Basna (now in Chhattisgarh) in 1920 and became the first Muslim to graduate from Banaras Hindu University (1944). His works on Harappa, Mohenjo-daro and the Sarasvati civilization have had a profound impact on our understanding of the subcontinent’s history. Dani was one of the few scholars who continued to challenge the two popular, yet divergent theories of whether the Indus Valley Civilization (IVC) was Dravidian (supported by scholars like Asko Parpola), or Aryan (“Out of India Theory”). He also continued to question theories of IVC’s cultural and religious continuity into early Vedic Hinduism, disagreeing, again, with his contemporaries on the issue. In 1949, Dani was the first to propose a connection between this reference to “Hariyupiah” in the Rig Veda and the IVC center of Harappa: In aid of Abhyavartin Cayamana, Indra destroyed the seed of Varasikha. At Hariyupiya he smote the vanguard of the Vrcivans, and the rear fled frighted. (Rig Veda, XXVII.5) Professor Dani was a recipient of the Hilal-e-Imtiaz (“Crescent of Excellence”), Pakistan’s second highest honor. During his career, he published more than 30 books, and lived and worked all across the subcontinent, including Dhaka, Peshawar and South India. He was fluent in 14 Indo-European and Dravidian languages. Professor Dani’s intervention in 2005 prevented the construction of an amusement park over an archaeological site in Harappa. His research has helped a region burdened by 400 years of subjugation and slavery, rediscover itself. May he rest in peace.
https://medium.com/ini-filter-coffee-archives/ahmad-hasan-dani-rip-7abe807a512
['Rohan Joshi']
2017-02-27 17:13:58.159000+00:00
['Indus Valley Civilization', 'India', 'Archeology', 'Pakistan', 'World']
Understanding the Spark insertInto function
Photo by @marcusloke on Unsplash Raw data ingestion into a data lake with Spark is a commonly used ETL approach. In some cases, the raw data is cleaned, serialized and exposed as Hive tables used by the analytics team to perform SQL-like operations. For this, Spark provides two options for table creation: managed and external tables. The difference between them is that for managed tables Spark controls both the storage and the metadata, while for an external table Spark does not control the data location and only manages the metadata. In addition, a retry strategy to overwrite some failed partitions is often needed. For instance, a batch job (timestamp partitioned) failed for the partition 22/10/2019 and we need to re-run the job to write the correct data. Therefore, there are two options: a) regenerate and overwrite all the data, or b) process and overwrite the data only for the needed partition. The first option is discarded due to performance issues: imagine having to reprocess an entire month of data. Consequently, the second option is used, and fortunately Spark has the dynamic partitionOverwriteMode option, which overwrites data only for the partitions present in the current batch. This option works perfectly when writing data to an external data store like HDFS or S3, cases where it is possible to reload the external table metadata with a simple CREATE EXTERNAL TABLE command. However, for Hive tables stored in the metastore with dynamic partitions, there are some behaviors that we need to understand in order to keep the data quality and consistency. First of all, even though Spark provides two functions to store data in a table, saveAsTable and insertInto, there is an important difference between them: SaveAsTable: creates the table structure and stores the first version of the data. However, the overwrite save mode works over all the partitions even when dynamic is configured. insertInto: does not create the table structure; however, the overwrite save mode works over only the needed partitions when dynamic is configured. So, saveAsTable can be used to create the table from a raw dataframe definition and then, after the table is created, overwrites are done using the insertInto function in a straightforward pattern. Nevertheless, insertInto presents some not well-documented behaviors while writing partitioned data and some challenges while working with data that contains schema changes. Order of the Columns Problem Let's write a simple unit test where a table is created from a data frame.
it should "Store table and insert into new record on new partitions" in {
  val spark = ss
  import spark.implicits._
  val targetTable = "companies_table"
  val companiesDF = Seq(("A", "Company1"), ("B", "Company2")).toDF("id", "company")
  companiesDF.write.mode(SaveMode.Overwrite).partitionBy("id").saveAsTable(targetTable)
  val companiesHiveDF = ss.sql(s"SELECT * FROM ${targetTable}")
So far, the table was created correctly. Then, let's overwrite some data using insertInto and perform some asserts.
  val secondCompaniesDF = Seq(("C", "Company3"), ("D", "Company4"))
    .toDF("id", "company")
  secondCompaniesDF.write.mode(SaveMode.Append).insertInto(targetTable)
  val companiesHiveAfterInsertDF = ss.sql(s"SELECT * FROM ${targetTable}")
  companiesDF.count() should equal(2)
  companiesHiveAfterInsertDF.count() should equal(4)
  companiesHiveDF.select("id").collect().map(_.get(0)) should contain allOf("A", "B")
  companiesHiveAfterInsertDF.select("id").collect() should contain allOf("A", "B", "C", "D")
}
This should work properly. However, look at the following data print: As you can see, the asserts failed due to the positions of the columns. There are two reasons: a) saveAsTable uses the partition column and adds it at the end. b) insertInto works using the order of the columns (exactly as when calling a SQL INSERT INTO) instead of the column names. As a consequence, adding the partition column at the end fixes the issue, as shown here:
// partition column should be at the end to match the table schema.
val secondCompaniesDF = Seq(("Company3", "C"), ("Company4", "D"))
  .toDF("company", "id")
secondCompaniesDF.write.mode(SaveMode.Append).insertInto(targetTable)
val companiesHiveAfterInsertDF = ss.sql(s"SELECT * FROM ${targetTable}")
companiesHiveAfterInsertDF.printSchema()
companiesHiveAfterInsertDF.show(false)
companiesDF.count() should equal(2)
companiesHiveAfterInsertDF.count() should equal(4)
companiesHiveDF.select("id").collect().map(_.get(0)) should contain allOf("A", "B")
companiesHiveAfterInsertDF.select("id").collect().map(_.get(0)) should contain allOf("A", "B", "C", "D")
}
Now the tests pass and the data is overwritten properly. Matching the Table Schema As described previously, the order of the columns is important for the insertInto function. Besides, let's imagine you are ingesting data that has a changing schema and you receive a new batch with a different number of columns. New Batch With Extra Columns Let's first test the case when more columns are added.
// again adding the partition column at the end and trying to overwrite partition C.
val thirdCompaniesDF = Seq(("Company4", 10, "C"), ("Company5", 20, "F"))
  .toDF("company", "size", "id")
thirdCompaniesDF.write.mode(SaveMode.Overwrite).insertInto(targetTable)
While trying to call insertInto, the following error is shown: Hence, a function that returns the columns missing in the table is needed:
def getMissingTableColumnsAgainstDataFrameSchema(df: DataFrame, tableDF: DataFrame): Set[String] = {
  val dfSchema = df.schema.fields.map(v => (v.name, v.dataType)).toMap
  val tableSchema = tableDF.schema.fields.map(v => (v.name, v.dataType)).toMap
  val columnsMissingInTable = dfSchema.keys.toSet.diff(tableSchema.keys.toSet)
    .map(x => x.concat(s" ${dfSchema.get(x).get.sql}"))
  columnsMissingInTable
}
Then, the SQL ALTER TABLE command is executed.
After this, the insertInto function works properly and the table schema is merged as you can see here:
val tableFlatDF = ss.sql(s"SELECT * FROM $targetTable limit 1")
val columnsMissingInTable = DataFrameSchemaUtils.getMissingTableColumnsAgainstDataFrameSchema(thirdCompaniesDF, tableFlatDF)
if (columnsMissingInTable.size > 0) {
  ss.sql((s"ALTER TABLE $targetTable " +
    s"ADD COLUMNS (${columnsMissingInTable.mkString(" , ")})"))
}
thirdCompaniesDF.write.mode(SaveMode.Overwrite).insertInto(targetTable)
val companiesHiveAfterInsertNewSchemaDF = ss.sql(s"SELECT * FROM $targetTable")
companiesHiveAfterInsertNewSchemaDF.printSchema()
companiesHiveAfterInsertNewSchemaDF.show(false)
New Batch With Fewer Columns Let's test now the case when fewer columns are received.
val fourthCompaniesDF = Seq("G", "H")
  .toDF("id")
fourthCompaniesDF.write.mode(SaveMode.Overwrite).insertInto(targetTable)
The following error is shown:
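The screenshot of that error did not survive this copy, but it is the usual mismatch insertInto raises when the incoming dataframe has fewer columns than the target table. One common workaround, shown here only as a sketch (the helper name is invented and this is not code from the original article), is to pad the dataframe with typed nulls and reorder it to the table schema before calling insertInto; whether silently inserting nulls is acceptable depends on the pipeline.

import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
import org.apache.spark.sql.functions.{col, lit}

// Add any columns missing from df as typed nulls and reorder everything to the
// table schema, since insertInto matches columns by position, not by name.
def alignToTableSchema(df: DataFrame, targetTable: String)(implicit ss: SparkSession): DataFrame = {
  val tableSchema = ss.table(targetTable).schema
  val present = df.columns.toSet
  val aligned = tableSchema.fields.map { field =>
    if (present.contains(field.name)) col(field.name).cast(field.dataType)
    else lit(null).cast(field.dataType).as(field.name)
  }
  df.select(aligned: _*)
}

// Usage sketch, with dynamic partition overwrite enabled for the session:
// ss.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
// alignToTableSchema(fourthCompaniesDF, targetTable)
//   .write.mode(SaveMode.Overwrite).insertInto(targetTable)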
https://towardsdatascience.com/understanding-the-spark-insertinto-function-1870175c3ee9
['Ronald Ángel']
2019-10-23 13:26:05.149000+00:00
['Spark', 'Big Data', 'Hive', 'Dynamic Partitions', 'Insert Into']
I Am Living In A Box
I am living in a box
I created it to keep me safe
I thought I was free
I could no longer see my box
but it was there
I kept bumping into it
Invisible restrictions
Constrictions
How to live prescriptions
All part of my box
I created it to keep me safe
I thought I was free
I can now feel my box
I know it is there
I've tried kicking and punching it
It flinches
Moves a few inches
Then invades me like chinches
They live in my box
I created them to keep me safe
I thought I was free
I thought I was free
I thought I was free
https://medium.com/illumination/i-am-living-in-a-box-d61c60ffff30
['John Walter']
2020-07-28 05:28:09.574000+00:00
['Poetry', 'Freedom', 'Self', 'Safety', 'Mental Health']
Microsoft Research Unveils Three Efforts to Advance Deep Generative Models
Microsoft Research Unveils Three Efforts to Advance Deep Generative Models Optimus, FQ-GAN and Prevalent bring new ideas to apply generative models at large scale. Generative models have been an important component of machine learning for the last few decades. With the emergence of deep learning, generative models started being combined with deep neural networks, creating the field of deep generative models (DGMs). DGMs hold a lot of promise for the deep learning field as they have the ability to synthesize data from observation. This capability can be key to improving the training of large-scale models without requiring large amounts of data. Recently, Microsoft Research unveiled three new projects looking to advance research in DGMs. One of the biggest questions surrounding DGMs is whether they can be applied to large-scale datasets. In recent years, we have seen plenty of examples of DGMs applied at a relatively small scale. However, the deep learning field is gravitating towards a "bigger is better" philosophy when it comes to data, and we regularly see new models being trained on unfathomably big datasets. The idea of DGMs that can operate at that scale is one of the most active areas of research in the space and the focus of the Microsoft Research projects. Types of DGMs A good way to understand DGMs is to contrast them with their best-known counterpart: discriminative models. Often described as siblings, generative and discriminative models encompass the different ways in which we learn about the world. Conceptually, generative models attempt to generalize everything they see, whereas discriminative models learn the unique properties in what they see. Both discriminative and generative models have strengths and weaknesses. Discriminative algorithms tend to perform incredibly well in classification tasks involving high-quality datasets. However, generative models have the unique advantage that they can create new datasets similar to existing data and operate very efficiently in environments that lack a lot of labeled data. The essence of generative models was brilliantly captured in a 2016 blog post by OpenAI, which stated: "Generative models are forced to discover and efficiently internalize the essence of the data in order to generate it." In that same blog post, OpenAI outlined a taxonomy for categorizing DGMs which included three main groups: I. Variational Autoencoders: An encoder-decoder framework that allows us to formalize the problem in the framework of probabilistic graphical models, where we maximize a lower bound on the log likelihood of the data. II. Autoregressive Models: This type of model factorizes the distribution of the training data into conditional distributions, effectively modeling every individual dimension of the dataset from previous dimensions. III. Generative Adversarial Networks: A generator-discriminator framework that uses an adversarial game to generate the data distributions. In recent years, we have seen major advancements applying DGMs to large-scale models such as OpenAI's GPT-2 or Microsoft's Turing-NLG. These models followed similar learning principles: self-supervised pre-training with task-specific fine-tuning. The biggest question remains whether DGMs can be systematized for large-scale learning tasks. In that regard, Microsoft Research recently unveiled three major research efforts. 
Optimus In the paper Optimus: Organizing sentences with pre-trained modeling of a universal latent space, Microsoft Research introduces a large-scale VAE model for natural language tasks. Optimus provides an innovative DGM that can be both a powerful generative model and an effective representation learning framework for natural language. Traditionally, large-scale pre-trained natural language models have been specialized in a single role. Models such as GPT-2 or Megatron have proven to be powerful decoders, while models like BERT have excelled as large-scale encoders. Optimus combines both approaches in a novel architecture shown below: The Optimus architecture includes a BERT-based encoder and a GPT-2-based decoder. To connect BERT and GPT-2, Optimus uses two different approaches. In the first approach, the latent variable (z) is represented as an additional memory vector for the decoder to attend to. In the second approach, the latent variable (z) is added to the bottom embedding layer of the decoder and used directly in every decoding step. The initial tests of Optimus showed key advantages over existing pre-trained language models: 1) Language Modeling: Compared with all existing small VAEs, Optimus shows much better representation learning performance, measured by mutual information and active units. 2) Guided Language Generation: Optimus showed unique capabilities to guide language generation at a semantic level. 3) Low-Resource Language Understanding: By learning unique feature patterns, Optimus showed better classification performance and faster adaptation than alternative models. FQ-GAN In the paper Feature Quantization Improves GAN Training, Microsoft Research proposed a new DGM approach to image generation. FQ-GAN's innovation relies on representing images in a discrete space rather than a continuous space. Training with large datasets has been one of the main challenges of generative adversarial networks (GANs). Part of that challenge has been attributed to the fact that GANs rely on a non-stationary learning environment that depends on mini-batch statistics to match the features across different image regions. Since the mini-batch only provides an estimate, the true underlying distribution can only be learned after passing through a large number of mini-batches. To address this challenge, FQ-GAN proposes the use of feature quantization (FQ) in the discriminator. A dictionary is first constructed via a moving-average summary of features in recent training history for both true and fake data samples. This enables building a large and consistent dictionary on the fly, which suits the online fashion of GAN training. Each dictionary item represents a unique feature prototype of similar image regions. By quantizing the continuous features of traditional GANs into these dictionary items, the proposed FQ-GAN forces true and fake images to construct their feature representations from a limited set of values when judged by the discriminator. This alleviates the poor-estimate issue of mini-batches in traditional GANs. The following diagram illustrates the main components of the FQ-GAN architecture: The initial tests with FQ-GAN showed that the proposed model can improve image generation across diverse large-scale tasks. The FQ module proved to be effective in matching features in large training datasets. The principles of FQ-GAN can be easily incorporated into existing GAN architectures. 
Prevalent In the paper Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training, Microsoft Research introduces Prevalent, a DGM agent that can navigate a visual environment following language instructions. The challenge that Prevalent addresses is a classic one: training deep learning agents on multi-modal inputs is nothing short of a nightmare. To address the multi-modal input challenge, Prevalent proposes to pre-train an encoder to align language instructions and visual states into joint representations. The image-text-action triplets at each time step are independently fed into the model, which is trained to predict the masked word tokens and next actions, thus formulating visual-and-language navigation pre-training in the self-learning paradigm. To model the visual-and-language navigation tasks, Prevalent relied on three fundamental datasets: Room-to-Room (R2R), cooperative vision-and-dialog navigation (CVDN), and "Help, Anna!" (HANNA). R2R is an in-domain task, where the language instruction is given at the beginning, describing the full navigation path. CVDN and HANNA are out-of-domain tasks; the former is to navigate based on dialog history, while the latter is an interactive environment, where intermediate instructions are given in the middle of navigation. In the Prevalent architecture, the image-text-action triplets are collected from the R2R dataset for pre-training, and the model is then fine-tuned for the tasks in the R2R, CVDN and HANNA environments. The result is an agent that is able not only to master the three environments but also to effectively generalize knowledge to unseen environments and tasks. DGMs are a key element in scaling deep learning models. The Microsoft Research efforts with Optimus, FQ-GAN and Prevalent present new ideas that can be incorporated into the new generation of DGM models. Microsoft Research has open sourced the code related to these efforts together with the research papers.
https://jrodthoughts.medium.com/microsoft-research-unveils-three-efforts-to-advance-deep-generative-models-b1d2fe3395e8
['Jesus Rodriguez']
2020-04-27 12:59:53.496000+00:00
['Deep Learning', 'Invector Labs', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
The 3 Most Common Ways Law Firms Waste Money on Marketing
Getting distribution for your products or services is usually the single most challenging aspect of building any business, including a law firm. Anybody with a law degree can open up a solo practice. But how will you reach enough clients? And more importantly, how will you do so in a way that costs less than the amount of revenue you can generate from serving each one? Answering those questions is the very foundation of your business model. Here are 3 common ways law firms waste money on marketing to help you cut down on unnecessary expenditures, and set your business model up for success. Law Firm Marketing: The Holy Grail The legal industry is highly fragmented, with big firms at the top serving large corporations, and thousands of smaller firms competing for all the individual and small business clients. This fragmentation makes the market highly competitive, and drives up the cost of marketing and advertising. Just look at this list of 2015’s highest priced search terms through Google advertising, and notice how many of them are related to finding a lawyer. That’s a lot of advertising dollars being directed at your target clientele by your competition. Given the tremendous competition and high advertising costs, it’s clear that finding a way to effectively market your law firm without breaking the bank is the holy grail for your continued success and growth. The particular marketing approach that will work best varies from one firm to the next. But, here are 3 common ways that many law firms waste money on marketing to give you a good idea of what not to do as you implement your law firm marketing plan. The 3 Most Common Ways Law Firms Waste Money on Marketing 1. Not reaching the right audience As we have written about before, the key to establishing a strong law firm brand is having a well-defined target audience, and positioning your law firm to create awareness within that group. If you are spending your marketing dollars aimlessly, without a clear target audience in mind, there’s a very high chance you will be wasting money. Many law firms just throw up a billboard or bus stop ad with a phone number and hope for the best. But hope is not an effective marketing strategy. Before you put up a physical ad, you need to be absolutely sure that your target clientele will not only see that ad, but also be in a position where they would be likely to contact you after seeing it. This is quite difficult to measure, which is one reason why online marketing can be more cost effective. Unlike physical ads, you have a much better ability to gauge the intent of a prospective client at the time they are interacting with your ad or marketing message (e.g. via direct search advertising, SEO optimized articles or blogposts, etc.). But regardless of where and how you plan to market your law firm, it’s critical that you have first identified the demographics of your target audience. Think about the types of people that become your clients most frequently — age, sex, religion, ethnicity, location, social class, etc. are all factors to consider. Each of these data points will correspond closely to the marketing methods and channels that make the most sense. Identifying your audience is one of the most important steps to take before implementing a marketing plan, so don’t overlook it, or you’ll be setting yourself up to waste money. 2. Using the wrong marketing strategy Marketing is a game of different strokes for different folks, as they say. 
But it’s not about preference as much as it is about what strategies work, and why. We covered this topic in detail in our post about the law firm marketing spectrum. The general concept is that the marketing methods available to you are largely determined by the type of law practice you run, the average cost of your services, and the lifetime of an average client. You’ve got to carefully consider key performance indicators such as average client acquisition cost, client lifetime value, and ROI when deciding on your marketing strategy. Failing to pick the right marketing strategy is a surefire way to waste money, so here are some general guidelines about the types of marketing strategies that work for various types of practices: High Volume, Low Value: typically one-off legal matters such as traffic, criminal, basic business/real estate transactional work, etc. word of mouth referral partnerships social media blogging Medium Volume, Medium Value: typically more complex matters such as estate planning, family law, civil litigation, or contingency cases like personal injury and employment paid lead generation PPC advertising SEO video marketing print/physical advertising Low Volume, High Value: typically longer term clients such as high net-worth estate planning/family law, corporate law, etc. events referral partnerships networking content marketing There’s not a formula for picking the best marketing strategy for a law firm, and it may require some trial and error. The right marketing strategy will be one that enables you to reach your target audience and communicate your message to them in a cost effective manner. 3. Not tracking the results and adjusting over time Marketing is anything but a “set it and forget it” activity. The world is constantly changing, and marketing strategies that worked in the past will eventually become obsolete as competitors catch on to trends, and new technologies change the landscape. Not understanding the numbers and continuing to spend money on channels that don’t produce results is one of the biggest ways law firms waste money on marketing. You’ve got to keep up with the trends, or your marketing efforts will gradually become less effective, and the costs will go up. You should constantly be tweaking your approach, maintain an open mind, and utilize technology to analyze the results. Here are some basic guidelines about what data you should be collecting, how to capture it, and how to interpret it in order to continually improve your marketing strategy with time: What to Track The most important things to track are the number of leads each marketing channel produces, the total amount of money spent on each marketing activity, and the conversion rate for each of those channels over a given period of time. How to Track It In order to track these things, you will want to employ a combination of strategies. For phone based lead channels, it’s important to have different phone numbers assigned to each particular advertisement or marketing method so that you can identify which channel produced a call. For web based lead channels, you should set up Google analytics to track how many visitors you received, how they got to your website (e.g. from search, from social media, from an ad, etc.), and what percentage of them completed a goal (such as filling out your contact form, calling your office, or signing up for your newsletter). 
You also need a good system in place, such as a law firm CRM, in order to track every lead and identify the source from which that lead came to your law firm. How to Interpret It With this data in hand, you should look at the total cost to acquire each paying client and compare that number across each of the channels. If any channel isn’t producing good results, eliminate it immediately so you’re not wasting money. Next, create an action plan for ways to decrease your client acquisition costs and increase your conversion rates wherever possible. For instance, strive to improve your SEO ranking so that you get more organic website traffic, rather than paying per click. Use the suggestions in this post to maximize your website conversion rate so that there is a higher likelihood of each visitor contacting you. And don’t fall victim to this bottleneck which costs many firms clients when getting engagement letters signed. Continue to measure the results of your marketing on a monthly or quarterly basis, keep doing more of the things that work, and toss out the things that don’t. It’s that simple. Summary Creating an effective marketing plan is not going to be easy. In fact, it is often the hardest part of growing a law practice. Understanding the 3 biggest ways law firms waste money on marketing is an important first step. The three biggest mistakes many firms make are: Not identifying and targeting the right audience Not utilizing the right marketing channels for your audience Failing to track the results and adjust your strategy with time The good news is, if you avoid making these money-wasting mistakes, you’ll give yourself a good shot to implement a marketing strategy that’s destined for success from the start.
https://medium.com/law-firm-marketing/the-3-most-common-ways-law-firms-waste-money-on-marketing-b8c52cfde57d
['Aaron George']
2016-12-05 20:56:16.300000+00:00
['Law Firm Marketing', 'Growing A Law Firm', 'Business Of Law', 'Marketing', 'Kpi']
Open Source & Secret Santa with Santulator
By Adam Carroll

Introduction

It’s time for a festive post on the King Tech Blog! A few weeks ago I released the first full version of Santulator, a fun, Open Source program to help you run Secret Santa draws. I’m from a big family and I initially wrote this with families in mind, but it has since proven to be useful for office Secret Santa draws too. It’s completely free, so download it and give it a try yourself. Santulator is a personal project and not a King project. Since this is a technology blog, I’d like to share some details about how Santulator works. The project is under active development, so I’ve fixed all the links to v1.1.0 to avoid them becoming stale as changes are made to the project on a regular basis.

Open Source

Open Source projects encapsulate an open technology exchange, collaborative participation, rapid prototyping, transparency, and community development. Santulator is entirely Open Source and is available on GitHub. If you are interested, I’d encourage you to fork the project and play around with the code yourself. Internally, Santulator is built on lots of other great Open Source software, too many projects to mention them all here. I’ll highlight a few in this post to give you a flavour of what is available. Personally, I’m a big fan of Open Source and it’s fun to participate in the various software development communities. I’m also the author of another Open Source project, VocabHunter, and through my work on that I have developed my enthusiasm for the world of Open Source software development.

User Interface

The Santulator user interface is built using JavaFX 11. This makes it possible to build a cross-platform desktop application for Mac, Linux and Windows. You can see the main Santulator screen here with the list of participants in the draw: Another article I wrote for the King Tech Blog, “How JavaFX was used to build a desktop application”, explains in detail how this can be done. The user interface itself is relatively simple. It consists of the main screen that I presented above along with a guided “wizard” that you use for running the draw. Here’s the wizard in action: The ControlsFX Open Source library provides the components for the wizard, along with those used in various other parts of the Santulator user interface.

Festive Colours

Corporate colour schemes in business applications have their place, but that place is probably not in a Secret Santa program. One of the things that I wanted to achieve with Santulator is a festive look and feel: I want it to look fun and not like a productivity tool. JavaFX makes it easy to change the colours and styles of user interface components using a dialect of Cascading Style Sheets (CSS). I took advantage of this by applying a festive colour scheme to the user interface: A couple of years ago I went to the JavaOne conference (now renamed Oracle Code One) through my job at King. There I attended a presentation, “JavaFX Tips and Tricks”, by Dirk Lemmermann. One of Dirk’s tips was to get to know the JavaFX default Modena style sheet and to learn from it. This tip proved to be very useful for this work, as it was the key to finding exactly the styles that I needed to change. You can see the Modena style sheet in the JavaFX repository on GitHub here. I chose a set of three base colours based on ideas of Christmas trees and wrapping paper.
You can see them here: I define these three basic colours in colours.css, as follows: -colour-principal: #4FB684; -colour-bow: #FFDA90; -colour-significant: #B21118; I then derive most of the other colours in the user interface, based on these three. For example, the colours for the “About” dialogue, are defined in colours.css by overriding styles from the Modena style sheet. The definition of the background colour for this dialogue looks like this: .about { -fx-base: derive(-colour-principal, -40%); } Overall we ended up with what I think is a nice festive “About” dialogue: Packaging Santulator Just as it was important for me to try to make using Santulator fun and simple, I also wanted to make it easy to install. Santulator is a cross-platform application that runs on Mac, Linux and Windows and I was keen to include an appropriate installer for each of those systems. For example, a Mac user should feel comfortable installing Santulator just as they would any other Mac application without needing to know that underneath it is implemented in Java. I was able to achieve my goal of creating “native” installable bundles for Santulator using the Java Packager. These bundles include everything that the user needs to run the program, including Java itself. Also, thanks to JLink, only those Java 11 modules that are actually needed are included in the bundles. JLink was introduced in Java 9 and can be used to build a custom Java runtime image containing a specific set of modules. If you’re thinking about doing this in your own project, have a look at my article “Using the Java Packager with JDK 11”. The Draw Results When you run your Secret Santa draw, Santulator creates a collection of PDF files containing the draw results. One PDF is created for each participant telling them who they will buy a present for. Just for fun and to maintain the secret of the draw, the files can be protected with a password. That way you won’t accidentally see all of the draw results if you’re distributing the files. Behind the scenes, Santulator uses the OpenPDF library to generate the PDF files and Bouncy Castle for the password protection. Thanks to these Open Source libraries, the Santulator code to generate the PDF files is nice and simple, as you can see here in PdfGiverAssignmentWriter. Testing As any software engineer will tell you, good automated tests are the foundation on which high-quality software is built. Santulator includes a suite of JUnit 5 tests that run automatically whenever the Gradle build is executed. In addition to the unit tests, Santulator also includes an automated GUI test. I have found that this is a great way to catch problems that might otherwise have slipped through the net. The GUI test is built using TestFX and to learn more about this, see my article “User Interface Testing with TestFX”. The Santulator automated GUI test normally runs in “headless” mode. This means that the test doesn’t take over your display while you’re running the build. Sometimes, it’s useful to switch off headless mode while developing. Just for fun, here’s a video of the test in action with headless mode disabled. See if you can keep up with the testing robot! Plans and Ideas for the Future I’m really pleased to have released this first version of Santulator and to see people using it and sharing it with others. I also hope that the ideas in the Open Source Santulator code and in this article will be useful to people working on other projects. 
I’ve received a lot of feedback and some good ideas for future directions so I’ll have to see what I have time for in the new year. And being Open Source, I welcome technical contributions in the form of issue reporting and pull requests on GitHub. Several users of the system have mentioned that it would be great to be able to email the results of the draw directly to the participants from within the program. Each email could contain an attachment with the password-protected PDF file that Santulator generates, personalised for the participant, containing the name of the person for whom they will buy a present. Another thing I’d like to do is to make the system available in other languages. To make a start at this, all of the messages that the user sees have already been extracted to the Java Resource Bundle file, SantulatorBundle.properties. Since I live and work in Barcelona, it would seem natural to start with Spanish and Catalan language versions of Santulator. I have lots of other ideas so if you’d like to keep up with the progress, use the “watch” function on the Santulator GitHub project. Happy Holidays! I hope you’ve enjoyed this quick run through of the technology behind Santulator. Feel free to fork the repository on GitHub and experiment with the code. It’s all Open Source and available free of charge. And if you’re going to run your own Secret Santa draw, why not download Santulator and give it a try? Have fun and a very happy holiday everyone!
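P.S. For anyone who wants to experiment with the same PDF building blocks, here is a rough, illustrative sketch of generating a password-protected PDF with OpenPDF (which keeps the classic com.lowagie API) and Bouncy Castle on the classpath for the AES encryption. This is not the Santulator code itself; the class name, file name, names and password below are invented for the example, so have a look at PdfGiverAssignmentWriter in the repository for the real implementation.

import com.lowagie.text.Document;
import com.lowagie.text.Paragraph;
import com.lowagie.text.pdf.PdfWriter;
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;

public class SecretSantaPdfExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical draw result: participant "Alice" buys a present for "Bob".
        String participant = "Alice";
        String recipient = "Bob";
        String password = "mince-pies";

        Document document = new Document();
        PdfWriter writer = PdfWriter.getInstance(document, new FileOutputStream(participant + ".pdf"));

        // Encryption must be configured before the document is opened.
        // AES-128 encryption assumes Bouncy Castle is available on the classpath.
        byte[] pw = password.getBytes(StandardCharsets.UTF_8);
        writer.setEncryption(pw, pw, PdfWriter.ALLOW_PRINTING, PdfWriter.ENCRYPTION_AES_128);

        document.open();
        document.add(new Paragraph("Secret Santa draw result for " + participant));
        document.add(new Paragraph(participant + " buys a present for " + recipient + "."));
        document.close();
    }
}

Opening the resulting file in a PDF viewer should prompt for the password before revealing who buys for whom, which is exactly the little bit of secrecy described above.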
https://medium.com/techking/open-source-secret-santa-with-santulator-9101972359fc
['Tech At King']
2018-12-14 09:32:02.759000+00:00
['Programming', 'Open Source', 'Java', 'Javafx', 'Technology']
I Am a Mom Whose Neighborhood Burned During the Protests. It’s Time to Defund the Police.
It was yesterday. I was scrolling Twitter on my phone, and my sweet toddler hit me in the leg. When he hit me, it did not matter that some days earlier he had said, “Pippy nice.” The fact was, he hit me. I have learned, through my training as a mental health therapist, to ask questions about behaviors like this. Behaviors, after all, are a kind of communication. What was my son trying to tell me? Was he too tired? Did he get too much screen time? Did he need my attention? What was going on just before he hit me? I have learned, as a mother to young children, that you have to answer behaviors the same way you answer language. This is the context from which I say: The behaviors of the police demand answers. I do not know all of the reasons the police are not agents of safety in my own community and many others. There are many ideas from many people who have been more awake to the problems of policing much longer than me. But I do know that the police’s behaviors are communication. It is time to listen to what police officers’ actions are saying, to what they have been saying for years. The police are not solving our crimes. The police hurt people and then hurt the people who ask them to stop hurting people. Police officers are telling us with their bodies that there is a very big problem with policing in the United States. When my toddler hits me, I work to understand what he is saying with his body. At the same time, when he hits, I do something else. I gently and firmly hold his hands. I say, “I won’t let you hit me.” I do this because I love him. Calling to defund and demilitarize the police is not necessarily an aggressive call to action, though it can be. It is not a way to demonize all people in uniform. It is loving boundaries, which we can set with anyone. Especially those we love. These boundaries can keep our community — all of our community — truly safe.
https://medium.com/fearless-she-wrote/i-am-a-mom-whose-neighborhood-burned-during-the-protests-its-time-to-defund-the-police-423b2ae7ea4d
['Emily Pg Erickson']
2020-06-08 14:54:06.583000+00:00
['Mental Health', 'Parenting', 'Politics', 'Equality', 'Racism']
Would it be wise to use iPhone 5s in 2020?
The iPhone 5s has been around since 2013. While it looks like the iPhone 5, it brought some key upgrades. When the iPhone 5s first came out, everybody loved the device because of its small form factor and the most secure Touch ID in any smartphone at that time. The iPhone 5s was also Apple’s first iPhone with a 64-bit mobile processor. Now we are in 2020, and if you check the smartphone market, especially the prices of Apple iPhones, you will feel that the prices of new iPhones are too high, and not many people can afford or want to spend that much money on a smartphone. Photo by Daniel Roe on Unsplash

So the question arises: is it really worth using a 7-year-old device in 2020? To give you a good review, I used an iPhone 5s for a week to test whether it is really worth using or not. Let’s start:

Price: You can easily buy an iPhone 5s from eBay for $50 to $80, which makes it the cheapest up-to-date iPhone you can buy in 2020.

Design: The iPhone 5s has an old-style design with big top and bottom bezels. It is not like 2020-style smartphones with small notches and punch-hole camera cutouts in the screen, but it still has a premium-looking design. The body is made of aluminum and glass, which makes this phone feel very solid. It weighs only 112 grams and has a 4-inch display, which makes the device easy to hold and to use with one hand. The best thing about this device is that it comes with a capacitive fingerprint sensor which works pretty well; it is a little slow compared with the latest devices, but it still gets the job done.

Software Support: The thing I like best about this device is that it runs iOS 12, which is two-year-old iPhone software, and the amazing thing is that Apple still gives software updates to this 7-year-old device. As I am writing this article, I got a new software update for iOS 12.4.8 which includes some important security patches. I am really amazed that Apple still gives software support to a 7-year-old device. It supports the latest versions of apps available on the App Store, including all the latest social media applications.

iMessage: There are many methods you can find online to get iMessage without buying any Apple product, because Apple products are expensive and not everyone can afford or wants to spend a lot of money on electronics. There are also many people who love to use Android but want to carry an iPhone as well, because many of their friends and family members use iMessage for communication. So if you want iMessage but don’t want to spend a lot of money on Apple’s latest devices, you can easily use an iPhone 5s for messaging.

Social Media Experience: I tried a few of the popular social media applications people usually use these days, like Facebook, Instagram, Snapchat, TikTok, YouTube, WhatsApp, Discord, etc. In my experience, you can use messaging applications like WhatsApp, Discord or Facebook Messenger very smoothly, but when it comes to content-heavy applications like Facebook, Instagram and Snapchat, I noticed a little bit of lag, including when using their story features. YouTube, on the other hand, works perfectly fine, but if you are a person who likes to watch videos on a bigger screen, then this phone is not for you.

Gaming Experience: You can already guess how well a 7-year-old device with only 1GB of RAM runs the latest games.
But I tried PlayerUnknown’s Battlegrounds (PUBG) on it, and it works perfectly fine on low settings. Yes, it heats up the device, but the game is playable overall. I also tried many light games on it, like Brawl Stars, and they run pretty smoothly. Still, I can easily say that if your main purpose is to play games, then this phone is not for you.

Battery Experience: It is an old device, so don’t expect too much in the battery department. If you are looking for an older device like this, I am sure you are not going to use it for heavy workloads; rather, this device is for people with very light phone usage, like making phone calls, sending messages or using social media apps. I used this phone for more than a week and could easily get 4 to 4.5 hours of screen-on time, which I think is very good.

Camera Experience: Camera performance is not good, but also not bad. It comes with an 8-megapixel rear camera which records 1080p HD video at 30 fps, and a 1.2-megapixel front selfie camera. It takes very good photos in bright light, but in low-light environments the performance of the camera is not really good.

Overall Usage Experience: Overall, the performance is pretty good for people who only use their smartphone for normal tasks like phone calls, messaging, and light social media and camera usage. It is a 7-year-old device, but I am amazed at how quick and responsive it is. Apps easily load in 3–4 seconds, which is not a big deal for many users. It is also very easy to use with one hand because of the small form factor.

Conclusion: Good for people who only use their phone for calling or messaging. Good for people who sometimes use their phone to consume social media. Good for people who have smaller hands, like me. Good for people who love to use Android but also want to carry an iPhone as a secondary phone to use Apple services like iMessage. Good for people who play light games on their phones.

If you like my article, please feel free to give it a clap. If you have any questions or suggestions for me, please feel free to tell me in the comments section.
https://medium.com/macoclock/would-it-be-wise-to-use-iphone-5s-in-2020-3f08575ec19a
['Umar Usman']
2020-07-17 05:38:21.396000+00:00
['iPhone', 'iOS', 'Apple', '2020', 'Smartphones']
Objective Tragedies
Start with the objective. There’s the virus, with all the havoc it’s wreaked — the poorest students falling further behind (no internet at home), the badly needed therapy appointments cancelled or unattended, passersby spitting in the direction of folks with Asian descent. Then there’s the subjective. Your favorite Indian restaurant closing, indefinitely. Not getting to hug your friend goodbye. Breaking up, but tenderly, each of you cupping the other’s face the way you’d hold a robin’s egg. The grocery store being out of milk. Your world is small these days and it’s hard to triage these things. But since we’re all public health workers now, it’s become your job to triage, so you will. You will be a fish in a Venetian canal, where the waters (you may have heard) are now clear. Not, I will add, because of decreased pollution, but because of decreased boat activity, which allows sand and sediment to remain on the canal floor. The difference matters little to the fish, who can suddenly see where they’re swimming. No time to distrust it. Keep going. Keep going. Keep going.
https://medium.com/literally-literary/objective-tragedies-2ff491f8b11f
['Emma Jane Laplante']
2020-03-31 02:25:11.274000+00:00
['Relationships', 'Love', 'Travel', 'Environment', 'Poetry']
Chance Encounters
Subtle Glint a poem Photo by Daniel A. Teo Gleams of the sun delicately caresses the ardent ripples of blue horizon composed as the two elements homogenizes into ornate gold Thin sheets of cloud shadows land from heaven’s glow as day dims and we settle down after the perspiration on the fine shores the very serenity amidst the tempest of the sea And as the waves collide into the rocks we espy the last glimpse of around still dazzling from the glows of the sun when they still remain gleaming but instead under the subtle glint of the evening lamp poles.
https://medium.com/chance-encounters/subdued-glint-91329e911057
['Daniel A. Teo']
2020-11-27 17:45:12.144000+00:00
['Poetry', 'Prose', 'Writing', 'Photography', 'Chance Encounters']
What Should I Look for When Hiring a Book Editor?
What Should I Look for When Hiring a Book Editor? A helpful list of the good and bad Image credit: Christina Morillo from Pexels Congratulations! You’ve written a book! Or maybe you’re still in the process of writing it, but you’ve already begun the search for an editor. The first thing you’ll want to do before you even start looking for someone is to figure out what type of editing you need. Now that you’ve narrowed down what you need, it’s time to find out who is best suited for the job. More specifically, who is best suited for YOUR job. Because there are many factors that come into play, the editor who did such a great job for your writing friend’s memoir may not be the right person for your YA science fiction, and that’s okay. Here are a few things to consider: Does the editor have a website? Academic editors often don’t have their own websites; working for universities or research facilities will keep their name in the right circles, and they don’t need to advertise in the same way other editors do. An email address is sufficient. But if you’re looking for a book editor, you’re more likely to find them if they have a website. If they’re part of a professional editing association, you should be able to find them in the directory of that association even if they don’t have a site of their own. The website can be elaborate or as simple as a landing page that takes you to their services and a contact form. The important thing to note here is that the editor can be found somewhere other than a Goodreads thread, claiming they edit all types of books for $100. Someone who takes the time and effort to set up a website shows that they are willing to invest in their own business. Why is it important to look for this investment? In a nutshell, we don’t tend to value that which costs us nothing. If an editor has invested in a website, they’re more likely to have invested in their education as well, because they see the benefit that education and experience can bring to the table. Can you verify their claims? If the editor you’re looking into says they’ve edited dozens of bestsellers, can you find proof of that claim? If they have their own business, you should be able to make use of the “Look Inside” feature on Amazon to see if their name or business name is somewhere in the front matter. If no mention of the business is made, you can look in the acknowledgments for a name or even send a quick email to the author or publisher to double-check. You don’t have to be suspicious of everyone, but if you’ve found an editor on your own, you can’t just assume they’re telling the truth. I re-edited books for someone years ago after a not-really-an-editor butchered them, and that fake editor still had the books listed on her site long after her name was nowhere on them anymore due to extensive rewrites. The author even received an email from another author at one point — someone doing their research before hiring, thank goodness — asking how she liked that person’s services, and she was able to steer him clear of wasting his money after explaining the situation. Can you get a sample edit of the editor’s work? This one’s a little tricky, but I’ll explain. Many editors, myself included, offer a sample edit on a portion of your manuscript before agreeing to the job. This serves two purposes: you can see if that editor knows what they’re doing, and the editor can see how much work the MS needs (and will price accordingly). There are authors who insist that you should never, ever pay for a sample edit. 
I used to think this way until I joined a group of professional editors and heard the reasons many of them chose to only do paid samples. Those who do free samples will only do them on a small portion of the MS, usually 500–1000 words (2–4 pages), to limit their time spent on the tire-kickers. Even doing that small sample, pricing out the project, and sending an email with the information can take an hour of work time that pays nothing — and it often results in a thank-you with no actual contract signed, through no fault of your own other than the fact that someone else was cheaper. Those who do paid samples are willing to edit a longer portion of the MS for a small fee that compensates at least some of the time spent. If the author books the job, that fee comes off the total price. I offer both free and paid sample edits now. Note: if you’re looking for a developmental editor, you’re not likely to be able to get a sample, since developmental editing is such a broad-view type of edit on the manuscript as a whole. Some developmental editors will offer a paid sample on a few chapters, but be aware that the cost will be much higher than that of a copyedited sample, due to the nature of the edit. Can you look at samples of the editor’s finished work? As I mentioned above, the “Look Inside” feature on Amazon and other book-selling sites is a handy tool. Not only can you verify that your potential editor’s name is somewhere in there, but you can also see the first chapter or download a sample from the book. These are used as a teaser to entice buyers, but they’re a wonderful resource. Just as you can tell whether a book is going to be interesting or not from the first chapters, you should be able to see whether the editor did a decent job or not in those same chapters. If you find a host of errors in the first handful of pages — genuine errors, not stylistic choices — that’s probably a good sign that you should remove that particular editor from your list of maybes. You’re not likely to be able to see a before & after of another author’s work, because that would violate privacy. But you should be able to find enough samples of finished work to satisfy you. Does the editor get referred by others? One of the things I love about the authors I work with is that they refer me to others. That tells me they’re not only happy with my work, but that they’re willing to trust that I’d do a good job for someone else, too. Word of mouth can make or break someone’s business. If someone is hesitant to recommend a particular professional, ask why. Are they known as hard to work with? Do they cause division in their professional groups? I was referred to an author recently by an editor I’d never actually worked with, and when I dropped a quick note to thank the editor for the job, she said she knew we all had similar skills, but that I seemed to be nice and not contentious in the groups we were both part of. I ended up with a great project to work on and a new connection as well. Does the editor have testimonials? I feature author testimonials on my website because each of them seems to focus on a different aspect of my services. A couple of them were written for me upon request, but the ones I like the best are those candid ones I’ve grabbed (with permission) from emails between the author and me. I love to capture those moments when they’re excited and encouraged about what I’ve sent back to them, or when they’ve mentioned me in a blog post while promoting their book. 
This is how I end up with testimonials like, “I couldn’t find a single thing to fight with you about; rather disappointing.” And I enjoy every word of it. Can you afford them? Let me start off by saying that editing isn’t cheap. It’s a skilled job and it takes a lot of time, and you need to be willing to pay for that expertise and the time it takes to do the job well. That said, there are enough editors out there who do their best to work with authors and budgets, so I wouldn’t start your search by stressing over cost. There will always be the extreme lows and extreme highs, and it’s up to you to see what’s best for your project and price range. There is an editor who will fit your writing style and your budget. You just need to take your time and look thoroughly. The old saying of “good, fast, cheap — choose any two” is true. If you are trying to keep your costs down, you may find an editor who’s willing to work for a lower price by fitting your work bit by bit in between other, full-paying jobs. You may decide to go with a scaled-down version of the editing you want, or you may have done your homework early enough that you’ve saved enough to be able to afford exactly what you want with no compromises. Everyone is different, and every budget is different. Some editors are willing to take multiple payments when they see a manuscript that really has potential, and others insist on full payment up front. The important thing is to ask if you have questions about pricing or payments, because you never know if something can be worked out. The lowest estimate isn’t always the worst editor and the highest price doesn’t always guarantee the best editor. Look at all the edits from the samples you’ve gotten and see who really “gets” you and complements your writing style, and go from there. Who are they really? A final thought on hiring an editor for your book: check out who they are when they’re not being an editor. Do they have a social media presence? You don’t need to be a stalker, but it’s not a bad thing to check out someone’s Instagram, Twitter, Facebook, or other social media sites. Can you still work with someone whose skills you admire, but who posts hateful slurs toward other cultures on their Twitter? What if you see a Facebook rant where they go on and on about “that idiot author” they’re working with? Do you want to be the next potential social media fodder for that person? An author friend told me years ago about reading a tweet sent to him by a friend, with a “Check this out” attached. The tweet was from his editor, moaning about the book she was editing, how boring it was and how she wanted to gouge her eyes out. She mentioned specific instances and phrases that confirmed it was indeed his, and basically ridiculed his work, never considering that he’d see the tweet. Personality does count. If an editor doesn’t seem to respect the opposite sex (let’s face it, this happens on both sides), then perhaps they’ll try to bully you into thinking you can’t question any changes they make in your MS. Or they’ll try to impose their own set of values in your writing to make it sound more like something they believe in (and again, this can happen on both sides of the fence). 
My husband has a great saying: “Don’t forget what you already know about someone.” I suppose it goes along with Maya Angelou’s reminder of “When someone shows you who they are, believe them the first time.” There are enough great book editors out there that you should never have to work with someone you don’t respect, or who doesn’t respect you. How can you know what to do? Don’t panic. Take your time, look across a wide range of avenues, and do your research. Have I mentioned that you shouldn’t panic? Because you really shouldn’t. Choose with care and you could end up with a long and healthy working relationship. If you’re currently looking for an editor for your book, I’d love to talk to you! Check out my website (the link is in my profile here on Medium) and fill out my contact form. I don’t set standards for others that I don’t meet myself, so feel free to check me out on any of my own social media. Logo image property of Easy Reader Editing, LLC Sign up to get my free ebook with over 60 resources for writers! You just read another exciting post from the Book Mechanic: the source for writers and creators who want to make more work that sells and sell more work they make. If you’d like to read more stories just like this one tap here to visit
https://medium.com/the-book-mechanic/what-should-i-look-for-when-hiring-a-book-editor-85222c299476
['Lynda Dietz']
2020-12-30 02:36:46.746000+00:00
['Book Editing', 'Book Editor', 'Editing And Proofreading', 'Writing', 'Editing']
Good Advertising and Bad Advertising
Consumers have a complicated relationship with advertising campaigns. On one hand, the popularity of ad-blocking software bears witness to the growing ambivalence that online readers have toward AdWords, commercials, and promotional offers. This has become an issue both for websites that depend on ad revenue, and for companies that depend on ads to bring in clients and traffic. On the other hand, research shows that consumers don’t hate advertising per se — they just want it to be better. Furthermore, successful advertisements not only win new customers to a brand, but sometimes go viral in completely unironic ways.

Bad Advertising

Not every company has the money to launch a multi-million dollar commercial. But by looking at prominent examples of advertising campaigns that failed and backfired spectacularly, it’s possible to isolate trends that marketers should avoid at all costs.

The Kendall Jenner Pepsi Commercial

In the history of bad advertising, Pepsi may have won a world record with its magnificently bad decision in early 2017 to launch a commercial that mysteriously tied its product to political activism. The self-styled “short film” follows celebrity Kendall Jenner through the streets of a generic American city in the midst of a nondescript rally attended by hordes of armored police. Although the situation seems tense, Jenner wins one of the officers over by offering him a can of (apparently magical) Pepsi soda. Pepsi clearly believed they had something good going on with this commercial. After all, it cost them millions of dollars to produce. But on the day of its release, the Internet’s reaction was so universally negative that the company was forced to pull the ad before 24 hours had even elapsed, and publicly apologized; echoing public sentiment, Time Magazine called the stunt “an inauthentic cash-in on many people’s unhappiness”.

McDonald’s “Dead Dad” Commercial

To promote its Filet-O-Fish sandwich in the U.K., fast food giant McDonald’s launched a bizarrely contemplative commercial about a boy who approaches his mother to ask about his deceased father. Naturally, the two go to a local McDonald’s for solace, where the boy orders a Filet-O-Fish with tartar sauce. His mother longingly remarks, “That was your dad’s favorite too.” The advertisement was panned, prompting Twitter outrage and an article on the BBC’s website. Eventually McDonald’s formally apologized for something that many considered “distasteful”, and the U.K.’s Advertising Standards Authority decided to reevaluate the ad’s run after receiving reports from traumatized viewers across the country.

Burger King “O.K. Google” Commercial

Not to be outdone by their competitor, Burger King quickly took on the challenge of irritating the TV- and YouTube-watching public with a commercial that attempted to hijack users’ Google Home devices with the wake-word “Okay Google,” followed by the question, “What is a whopper burger?” In theory, a Google Home device would have answered with a description lifted from Wikipedia: The Whopper is a hamburger, consisting of a flame grilled beef patty, sesame seed bun, mayonnaise, lettuce, tomato, pickles, ketchup, and sliced onion. Users found the tactic exploitative and annoying; Google responded by deliberately breaking the gimmick. But before this official response, Internet trolls worked hard to manipulate the Wikipedia article so that victims would be told the Whopper was made of “100% medium-sized child,” “cyanide,” and similar libels.
Reactions to the commercial were so negative that — excluding news commentary like the one posted here — all versions of the ad were removed from YouTube.

Good Advertising

An advertising campaign doesn’t have to become famous to be successful — but now that we’ve talked about infamously bad adverts, it’s time to talk about famously good ones.

The Man Your Man Could Smell Like

In 2010, a simple thirty-second ad began a long series of commercials featuring former NFL receiver Isaiah Mustafa. In a bathroom, Mustafa pities his audience for lacking him as a significant other, but assures them that Old Spice deodorant can help to take away the sting. By the end of his surreal and rambling monologue, he is sitting on a horse at a beach. The commercial received near-universal acclaim, and is up to 53 million views on YouTube. This number is particularly important, because it shows that people watched this ad — and still watch it — on purpose for fun, not because they’re forced to. Old Spice saw a 107% increase in body wash sales after the commercial launched, and the company did not stop there: 186 more videos were produced immediately afterwards as part of a “Response Campaign” just to field messages from fans of the ad. This campaign raked in 5.9 million views on its first day — more than Barack Obama’s victory speech had received on its first day. Truly this is a level of consumer interaction most brands can only dream about.

Purple Mattress Commercials

Purple Mattress exploded onto the scene in 2016, to the tune of $76 million in revenue. Much of this success can probably be attributed to the integrated creatives who simultaneously manage its social media presence and advertisements. At 81 million views, the first Purple Mattress commercial, featuring an “egg test” to demonstrate the unique structure of the product, is already more popular than the Old Spice commercial on YouTube. Other commercials by the company feature a mother Sasquatch explaining the benefits of a Purple Mattress in the wilderness. The creativity and humor in these ads are widely commented on, but another conspicuous attribute is the company’s dedicated social media presence, as evinced by Twitter interactions with ecommerce editor Samantha Gordon:

Chuck Testa

The holy grail of online advertising is to become a meme — not merely to go viral, but to actually become a meme featured in organic image macros across the web. In 2011, this happened to unsuspecting California taxidermist Chuck Testa after he uploaded a seemingly low-budget commercial to the Internet. The ad represents an elusive balance of “so bad it’s good,” with moments hilariously awkward enough to be endearing, and the weird but eminently meme-worthy slogan “Nope! Chuck Testa”. At first glance, the whole thing looks like a happy accident — a low-quality, local commercial that became ironically famous. In reality, the camp aesthetic was completely intentional, and engineered by Commercial Kings, brainchild of self-made Internet celebrities Rhett & Link, with the intention of going viral. It worked.
https://medium.com/online-marketing-institute/good-advertising-and-bad-advertising-b8da6b89fd7
[]
2017-10-28 15:21:58.536000+00:00
['Advertising', 'Online Marketing', 'Marketing', 'Viral Marketing', 'Digital Marketing']
Hey! You!
Photo by JD Mason on Unsplash January 2018: Have you ever gotten neon light messages from The Universe? Well, that was my week. Bill Engvall’s famous line “Here’s your sign!” kept running through my mind over and over as I walked through My Life the last few days. I suppose that’s what happens when you take your head out of your ass and start listening. You see — I recently recommitted myself to my meditation practice. I had been meditating all along on a regular basis, but I sat down (literally) and I decided I was going to find a slot, put it into my schedule every day. Not just when I felt like it, not just when there was time, not just when I was at the end of my rope and I knew I ‘needed’ it. Every. Single. Day. So here is what happened. White Feather recently replied to one of my stories and got me thinking about connections and our ‘one-ness’. And then there was this post from one of my favorite authors which popped up on my Medium feed. Add to that my random morning meditation selection, which played heavily on the ‘all beings are connected’ theme. The Universe had spoken. I sat on my meditation cushion at the end of the meditation and I pondered some in silence. Message received, but what do I do with this information? How do I apply it? What challenge(s) lie ahead where this bit of insight is going to come in handy? The difficulty of resolving and unifying the complex components of our personalities is an ongoing lifelong journey. It may, in fact, be THE journey. The reason we arrive, live, love and exist. Just to sort out and remember who we really are. To mesh all those bits back to ‘one-ness’. (Thank you White Feather for that!) And not just ‘one-ness’ within ourselves but also with everything. Every. Single. Thing. Every soul. Every creature. Every living thing. Even every non-’living’ thing. From flora to fauna. Plants, animals, the wind, the sun, the moon, The Universe. We are all connected. We come from the same place — literally, figuratively, spiritually, and energetically. Nearly every religious practice out there claims the same thing. Do unto others. Because don’t you see? The “others” aren’t really others…they are us. Or certainly part of us. Even scientists agree — living creatures share more of their DNA in common than they don’t. I just Googled it — we have 50% of our DNA in common with the banana I just had for breakfast. The fruit fly buzzing over the rest of the bunch — 60%. The lab mice we torture — 75–90% depending on if you look at theirs in us or ours in them. They have 90% of ours. We have 75% of theirs. There are no lines that divide us/them, black/white, good/bad. There is only this. Life. Yoga means ‘to yoke’ in ancient Sanskrit. The reason yogis began to do the asanas — the postures associated with what westerners think of as yoga — was to prepare their bodies to sit and meditate for longer and longer periods of time. Yoga has many branches — many ways of ‘yoking’ our spirit, soul, and body back together. Many paths back to ‘one-ness’. I can only speak for My Path, of course. Every person gets to ponder their own steps along the way and decide for themselves what makes sense and what is complete bullshit. I never did face any ultimate challenge this week which gave me an ‘ah-ha’ moment. What I did have though was a peaceful flow in My Life as I faced the everyday shit which normally made me sigh with discontent. I found myself more grateful. For everything. I looked for the ‘one-ness’ in everyone. And everything.
I suppose that’s a pretty big ‘ah-ha’ moment in itself. There is a quote I saw recently — again The Universe whispering in my ear — it goes like this: “Prayer is talking to The Universe, meditation is listening to it” — Ankit Vekariya Namaste.
https://medium.com/recycled/hey-you-4a641ecd4193
['Ann Litts']
2019-07-27 22:48:48.408000+00:00
['Self-awareness', 'Spirituality', 'Life Lessons', 'Meditation', 'Life']
How Do Language Models Predict the Next Word?🤔
How Do Language Models Predict the Next Word?🤔 N-gram language models - an introduction Photo by Mick Haupt on Unsplash Have you ever guessed what the next sentence in the paragraph you’re reading would likely talk about? Have you ever noticed that while reading, you almost always know the next word in the sentence? Well, the answer to these questions is definitely Yes! As humans, we’re bestowed with the ability to read, understand languages and interpret contexts, and we can almost always predict the next word in a text, based on what we’ve read so far. Can we make a machine learning model do the same? Oh yeah! We very well can! And we already use such models every day; here are some cool examples.

Autocomplete feature in Google Search (Image formatted by author)

Autocomplete feature in messaging apps (Image formatted by author)

In the context of Natural Language Processing, the task of predicting what word comes next is called Language Modeling. Let’s take a simple example: The students opened their _______. What are the possible words that we can fill the blank with? Books📗 📒📚 Notes📖 Laptops👩🏽‍💻 Minds💡🙂 Exams📑❔ Well, the list goes on.😊 Wait…why did we think of these words as the best choices, rather than ‘opened their Doors or Windows’? 🙄 It’s because we had the word students, and given the context ‘students’, words such as books, notes and laptops seem more likely and therefore have a higher probability of occurrence than the words doors and windows. Typically, this probability is what a language model aims at computing. Over the next few minutes, we’ll see the notion of n-grams, a very effective and popular traditional NLP technique, widely used before deep learning models became popular.

What does a language model do? Described in formal terms:

Given a text corpus with vocabulary V,

Given a sequence of words x(1), x(2), …, x(t),

A language model essentially computes the probability distribution of the next word x(t+1).

Probability distribution of the next word x(t+1) given x(1)…x(t) (Image Source)

A language model thus assigns a probability to a piece of text. The probability can be expressed using the chain rule as the product of the following probabilities:

Probability of the first word being x(1)

Probability of the second word being x(2), given that the first word is x(1)

Probability of the third word being x(3), given that the first two words are x(1) and x(2)

In general, the conditional probability that x(i) is word i, given that the first (i-1) words are x(1), x(2), …, x(i-1)

The probability of the text according to the language model is:

P(x(1), …, x(T)) = P(x(1)) × P(x(2) | x(1)) × … × P(x(T) | x(T-1), …, x(1))

Chain rule for the probability of a piece of text (Image Source)

How do we learn a language model? Learn n-grams! 😊 An n-gram is a chunk of n consecutive words. For our example, The students opened their _______, the following are the n-grams for n = 1, 2, 3 and 4:

unigrams: “the”, “students”, “opened”, ”their”

bigrams: “the students”, “students opened”, “opened their”

trigrams: “the students opened”, “students opened their”

4-grams: “the students opened their”

In an n-gram language model, we make the assumption that the word x(t+1) depends only on the previous (n-1) words. The idea is to collect how frequently the n-grams occur in our corpus and use it to predict the next word.
Dependence on previous (n-1) words (Image Source)

On applying the definition of conditional probability, this yields:

Probabilities of n-grams and (n-1)-grams (Image Source)

How do we compute these probabilities? To compute the probabilities of these n-grams and (n-1)-grams, we just go ahead and start counting them in a large text corpus! The ratio of the probability of the n-gram to the probability of the (n-1)-gram is estimated by counting occurrences:

P(w | previous (n-1) words) ≈ count(previous (n-1) words followed by w) / count(previous (n-1) words)

Count occurrences of n-grams (Image Source)

Let’s learn a 4-gram language model for the example: As the proctor started the clock, the students opened their _____

In learning a 4-gram language model, the next word (the word that fills up the blank) depends only on the previous 3 words. If w is the word that goes into the blank, then we compute the conditional probability of the word w as follows:

Counting number of occurrences (Image Source)

In the above example, let us say we have the following counts:

“students opened their” occurred 1000 times

“students opened their books” occurred 400 times -> P(books | students opened their) = 0.4

“students opened their exams” occurred 200 times -> P(exams | students opened their) = 0.2

The language model would predict the word books. But given the context, is books really the right choice? Wouldn’t the word exams be a better fit? Recall that we have: As the proctor started the clock, the students opened their _____

Should we really have discarded the context ‘proctor’?🤔 Looks like we shouldn’t have. This leads us to some of the problems associated with n-grams.

Disadvantages of the n-gram language model

Problems of Sparsity

What if “students opened their” never occurred in the corpus? The count term in the denominator would go to zero! If the (n-1)-gram never occurred in the corpus, then we cannot compute the probabilities. In that case, we may have to revert to using “opened their” instead of “students opened their”, and this strategy is called back-off.

What if “students opened their w” never occurred in the corpus? The count term in the numerator would be zero! If the word w never appeared after the (n-1)-gram, then we may have to add a small factor delta to every count, which accounts for all words in the vocabulary V. This is called ‘smoothing’.

The sparsity problem increases with increasing n. In practice, n cannot be greater than 5.

Problem of Storage

As we need to store counts for all possible n-grams in the corpus, increasing n or increasing the size of the corpus both tend to make the model storage-inefficient.

However, n-gram language models can also be used for text generation; a tutorial on generating text using such n-grams can be found in reference [2] given below. In the next blog post, we shall see how Recurrent Neural Networks (RNNs) can be used to address some of the disadvantages of the n-gram language model. Happy new year everyone! ✨ Wishing all of you a great year ahead! 🎉🎊🥳

References

[1] CS224n: Natural Language Processing with Deep Learning

[2] NLP for Hackers
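To make the counting concrete, here is a small, self-contained sketch of the idea in Java. It is only a toy illustration (it ignores back-off and smoothing, and the tiny corpus is made up): it counts n-grams in the corpus and predicts the next word by picking the continuation with the highest count, and therefore the highest conditional probability.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NGramModel {
    private final int n;  // order of the model, e.g. 4 for a 4-gram model
    // (n-1)-gram context -> next word -> how many times that continuation was seen
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();

    public NGramModel(int n) { this.n = n; }

    // Count every n-gram: the first (n-1) words form the context, the last word is the continuation.
    public void train(List<String> words) {
        for (int i = 0; i + n <= words.size(); i++) {
            String context = String.join(" ", words.subList(i, i + n - 1));
            String next = words.get(i + n - 1);
            counts.computeIfAbsent(context, c -> new HashMap<>()).merge(next, 1, Integer::sum);
        }
    }

    // Predict the most likely next word given the last (n-1) words, or null if the context was never seen.
    public String predict(List<String> context) {
        String key = String.join(" ", context.subList(Math.max(0, context.size() - (n - 1)), context.size()));
        Map<String, Integer> continuations = counts.get(key);
        if (continuations == null) return null;  // sparsity: a real model would back off or smooth here
        return continuations.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }

    public static void main(String[] args) {
        List<String> corpus = new ArrayList<>(List.of("the students opened their books".split(" ")));
        corpus.addAll(List.of("the students opened their exams".split(" ")));
        corpus.addAll(List.of("the students opened their books".split(" ")));

        NGramModel model = new NGramModel(4);  // conditions on the previous 3 words
        model.train(corpus);
        System.out.println(model.predict(List.of("students", "opened", "their")));  // prints "books" (seen 2 times vs 1)
    }
}

A production model would add the back-off and smoothing strategies described above, but even this toy version shows why the table of counts blows up as n and the corpus grow.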
https://medium.com/towards-artificial-intelligence/how-do-language-models-predict-the-next-word-66e0a705583e
['Bala Priya C']
2020-12-28 13:17:28.960000+00:00
['NLP', 'Artificial Intelligence', 'Machine Learning', 'Language']
SwiftUI Tutorial — Lists and Navigation
List We’re going to start by adding a struct called EmojiItem that will represent each of the emoji in our List. Next, we’re going to define an array of EmojiItem that we will display in the List; you can add this array inside the ContentView struct. Then we’re going to define a struct that we will use to display each emoji in a circle. Finally, we can go to the body section of our ContentView to work on our app’s UI. We’re going to embed all the content of the app inside a NavigationView; inside it, we will add a List view containing the emojiList we defined earlier. We’re going to embed each EmojiItem inside an HStack that will show the emoji inside an EmojiCircleView, and the emoji’s name in a Text view. If we run our app now, we should be able to see the list of emoji, but nothing happens when tapping on a row in the list… yet.
https://medium.com/swlh/swiftui-tutorial-lists-and-navigation-16e1b4dbb98b
['Ale Patrón']
2020-10-14 19:31:35.503000+00:00
['Programming', 'Software Engineering', 'Xcode', 'iOS', 'Swift']
Graph Data Modelling in Java
Modelling data is a crucial aspect of software engineering. Choosing appropriate data structures or databases is fundamental to the success of an application or a service. In this article, I will discuss some techniques related to modelling data domains with graphs. In particular, I will show how labelled property graphs, and graph databases, can be an effective solution to some of the challenges we sometimes encounter with other models, such as relational databases, when we deal with highly-connected data. By the end of this article we will have a simple — but fully functional — implementation of an in-memory labelled-property graph in Java. We’ll use this graph to run some queries on a sample dataset. All the code presented here can be found on GitHub. Sample domain: book reviews Before writing this article, I headed over to kaggle.com and browsed through some of the data sets available there. Eventually, I picked this book review data set which we’ll use as a running example throughout this article. This data set contains the following CSV files: BX-Users.csv, with anonymised user data. Each user has a unique id, location and age. BX-Books.csv, which contains each book’s ISBN, title, author, publisher and year of publication. This file also contains links to thumbnail pictures, but we won’t use those here. BX-Book-Ratings.csv, which contains a row for each book review. If we picture this data set as an entity-relationship (ER) diagram, this is how it would look like: Some fields are denormalized, e.g. Author , Publisher , and Location . While this might or might not be what we eventually want (depending on the performance characteristics of certain queries), at this point let's assume we want our data in normalized form. After normalization, this is our updated ER diagram: Note that we split Location into 3 tables ( City , State , and Country ) because the original field contained strings such as "San Francisco, California, USA" . Now, suppose we store this data into a relational database. As a thought exercise, how easily can we express these two queries with SQL code? Query 1 : get the average rating for each author. : get the average rating for each author. Query 2: get all books reviewed by users from a specific country. If you’re familiar with SQL, you’re probably thinking joins: whenever we have relations that span multiple tables, we typically navigate the relations by joining pairs of tables via primary and foreign keys. However, both queries involve a non-trivial amount of joins — especially the second one. This is not necessarily a bad thing — relational databases can be quite good at executing joins efficiently — but it might lead to SQL code that is difficult to read, maintain, and optimize. Moreover, in order to navigate many-to-many relationships, we had to create join tables (e.g. BookRating or BookPublished ). This is a common pattern which, however, adds some complexity to the overall schema. We could denormalize some of the data, but this would have the side-effect of locking us into a specific view of our data and causing our model to be less flexible. Domain modelling with Java classes Now let’s suppose we want to store our entire data set in memory. The sample data set that we got from Kaggle contains less than a million entries, so it will easily fit in memory. Storing our data in memory is a trivial but perfectly valid approach, especially if we want to support a read-only workload and we don’t need to guarantee write consistency and persistency. 
If we need to offer those guarantees, we can still store the data in memory, but we’ll probably need to ensure that our data structures are thread-safe and that we somehow persist the changes to non-volatile memory. We’ll start with some classes, for example: Soon, though, we are confronted with a question: how do we link these classes together? How do we establish relationships between books and authors? In the case of Book and Author , we might consider that a book has one author (let's keep things simple and suppose each book has only one author), and use a direct reference: This makes it trivial to get the author of a book. However, the reverse (getting all books written by an author) is more expensive, because we need to scan the entire list of books. For example, if we’re looking for all books by Dan Brown, we need to write something like this: When we have many (e.g. millions) books, this will perform poorly. We could store references to books in the Author class itself: However, this solution doesn’t “feel” good. Every time we want to change a book’s author, we have two places to change. Besides, how should we handle relationships that contain extra data? For example, the relationship between a book and its publisher contains a piece of extra data, i.e. the year of publication. It’s not clear where we should put this field: should we declare it in the Book class? Then how do we handle cases when a book has multiple publishers and/or years of publication? Should we create a new class (e.g. BookPublished ), essentially adopting the join table pattern of relational databases? Labelled property graphs I would argue that both the relational and the Java reference models share a common trait: they represent relationships as entity data. In both cases, not only does an entity contain attributes about itself (e.g. a Book table contains its title and ISBN code) but it also contains data about how it's connected to other entities. This entity-centric (or table-centric) model, which is adopted by RDBMSs, has been traditionally successful thanks to its efficiency in storing and retrieving huge amounts of entities. If our data has a lot of loosely-connected entities, a table-centric model works well. However, when our data has a lot of relationships, representing them as data might result in more complex queries and, overall, in a less flexible data model. Graph models adopt a different approach: in a graph, relationships are modelled explicitly and are treated as first-class citizens of the data model, just like entities. You may recall from mathematics that a graph is a collection of nodes (also known as vertices) and edges (sometimes called relationships). Each node stores some data, and each edge connects two nodes. Here’s a picture of a sample graph with 6 nodes and 6 edges: On top of this, a labelled property graph model adds a few extra features: Each node and each edge have a label that identifies their role in the data model. Each node and each edge store a set of key-value properties. Each edge has a direction, i.e. it is a directed graph (as opposed to undirected graphs, where edges don’t have directions) This is essentially the model adopted by production graph databases such as Neo4j and Titan. So how can we model the book review domain as a labelled property graph? Here’s my first take at it: Nodes have labels (e.g. the Book node is labelled "Book") and some properties -- for example, the Book node has two properties, isbn and title . 
Properties such as isbn and title correspond to entity attributes in our ER diagram. Edges have labels too (e.g. the edge connecting User and Book is labelled "Reviewed"). Some edges also have properties: for example, the Reviewed edge has the rating property, which stores the rating a user gave to a book, and the Published by edge has a year property, i.e. the year the book was published. Other edges don't have properties: for example, the In City edge, which connects a user to the city where they live, doesn't have properties because we don't need to store extra data on that relationship. The picture above represents the schema of our graph model. When we create a graph instance and store some data in it, here is how it could be pictured (this is just a subset of the entire graph): This could have easily been drawn on a whiteboard. In fact, when it comes to connected data, adopting a graph model is arguably quite natural and intuitive. For more information about labelled property graphs, and how to model data with them, I recommend checking out Neo4j’s Graph Modeling Guidelines or Kelvin Lawrence’s free book on Gremlin (Apache TinkerPop). Implementing a labelled property graph in Java Let’s see how we can implement a labelled property graph in Java. All the code in this section can be found in this repository on GitHub (the repository also contains code to parse the CSV files from Kaggle). We’ll start by defining the Node and Edge classes; a sketch of how these classes might fit together appears a little further down. We'll also define a common superclass, GraphElement, which represents elements that have a label and properties. Nothing surprising here, except maybe those outgoingEdges and incomingEdges fields in the Node class. This is essentially how we connect nodes and edges together, and how we'll navigate the graph to extract meaningful data (we'll see that soon). I chose to represent outgoingEdges and incomingEdges as lists, but these might as well be sets (e.g. hash sets or tree sets) or other structures. The choice depends on a number of factors (e.g. whether we need to guarantee uniqueness of each edge). However, these considerations are beyond the scope of this article; if you are looking for efficient in-memory graph databases, you might want to consider products such as Memgraph or Neo4j embedded. For this example I decided to keep things simple and use plain array lists. Also, each node has an id field. Unsurprisingly, the primary function of this field is to guarantee the uniqueness of each node. Next, we’ll define the Graph class, which exposes methods for creating nodes and edges. The two maps nodeIdToNode and nodeLabelToNode allow us to retrieve nodes by their id and label, respectively. This will become especially useful when we start writing queries. We define a BookReviewGraph class as a subclass of Graph. We don’t use randomly generated ids (e.g. UUIDs); instead, we repurpose entity data as ids. This has a couple of advantages: it’s easier to debug, and it’s easier to query (if we’re looking for books by author, we can just get the node with id = "author-" + authorName). Sample queries Now that we have a basic graph implementation for our book review domain, let’s see how we can implement the two queries we introduced above: Query 1: get the average rating for each author. Query 2: get all books reviewed by users from a specific country. The pattern for both queries will be the same: starting with a node (or a set of nodes), we navigate through edges and nodes until we reach the data that we want to return.
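Here is the sketch mentioned above of how these classes might fit together. The names (GraphElement, Node, Edge, outgoingEdges, incomingEdges, nodeIdToNode, nodeLabelToNode, BookReviewGraph) come from the text, while the method bodies are assumptions; the author’s actual implementation is in the linked GitHub repository.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Every graph element has a label and a set of key-value properties.
abstract class GraphElement {
    final String label;
    final Map<String, Object> properties = new HashMap<>();

    GraphElement(String label) { this.label = label; }

    void setProperty(String key, Object value) { properties.put(key, value); }
    Object getProperty(String key) { return properties.get(key); }
}

class Node extends GraphElement {
    final String id;
    final List<Edge> outgoingEdges = new ArrayList<>();
    final List<Edge> incomingEdges = new ArrayList<>();

    Node(String id, String label) {
        super(label);
        this.id = id;
    }
}

class Edge extends GraphElement {
    final Node source;
    final Node destination;

    Edge(String label, Node source, Node destination) {
        super(label);
        this.source = source;
        this.destination = destination;
    }
}

class Graph {
    private final Map<String, Node> nodeIdToNode = new HashMap<>();
    private final Map<String, List<Node>> nodeLabelToNode = new HashMap<>();

    Node addNode(String id, String label) {
        return nodeIdToNode.computeIfAbsent(id, key -> {
            Node node = new Node(key, label);
            nodeLabelToNode.computeIfAbsent(label, l -> new ArrayList<>()).add(node);
            return node;
        });
    }

    Edge addEdge(String label, Node source, Node destination) {
        Edge edge = new Edge(label, source, destination);
        source.outgoingEdges.add(edge);
        destination.incomingEdges.add(edge);
        return edge;
    }

    Node getNodeById(String id) { return nodeIdToNode.get(id); }
    List<Node> getNodesByLabel(String label) {
        return nodeLabelToNode.getOrDefault(label, new ArrayList<>());
    }
}

// Domain-specific subclass that repurposes entity data as node ids.
class BookReviewGraph extends Graph {
    Node addAuthor(String name) {
        Node author = addNode("author-" + name, "Author");
        author.setProperty("name", name);
        return author;
    }

    Node addUser(String userId, int age) {
        Node user = addNode("user-" + userId, "User");
        user.setProperty("age", age);
        return user;
    }
}
```

As a usage example, mirroring the edge directions used in the queries below: graph.addEdge("Written by", bookNode, authorNode) records that a book points to its author, and graph.addEdge("Reviewed", userNode, bookNode) records a review, with the rating stored as a property on that edge.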
In general, querying a graph means extracting a subgraph, i.e. a specific pattern of nodes and edges, that yields the desired answer. Query 1: average rating for each author Let’s start by writing a query that returns the average rating for a specific author. The query will traverse a section of the graph in order to extract the data we want (the average rating for that author). We can break it down into 4 steps:
1. Start from the node corresponding to the author we are interested in.
2. Navigate through the “Written by” edges that point to the author node.
3. Get the source nodes of the “Written by” edges. These will be book nodes.
4. Navigate through the “Reviewed” edges that point to the book nodes and extract the rating property from these edges.
Using this query, we can easily find the average rating for each author. Here’s the output of the query with the data set we got from Kaggle:
Gustavs Miller 10.0
World Wildlife Fund 10.0
Alessandra Redies 10.0
Christopher J. Cramer 10.0
Ken Carlton 10.0
Jane Dunn 10.0
Michael Easton 10.0
Howard Zehr 10.0
ROGER MACBRIDE ALLEN 10.0
Michael Clark 10.0
...
Query 2: books reviewed by users from a specific country This query adopts the same pattern as the previous one. We start with country nodes and traverse the graph until we get to book nodes, then we collect book titles. Here’s the output we get with the Kaggle data set, when we query for books reviewed in Italy:
La Tregua
What She Saw
Diario di un anarchico foggiano (Le formiche)
Kept Woman OME
Always the Bridesmaid
A Box of Unfortunate Events: The Bad Beginning/The Reptile Room/The Wide Window/The Miserable Mill (A Series of Unfortunate Events)
Aui, Language of Space: Logos of Love, Pentecostal Peace, and Health Thru Harmony, Creation and Truth
Pet Sematary
Maria (Letteratura)
Potemkin Cola (Ossigeno)
...
Improving performance with indices In both queries above, the starting point into the graph is a specific node, which we get via the graph.getNodeById() method. This works when we know the id of the starting node. However, in many cases we might not have a starting node — instead, we might need to start with a set of nodes. For example, suppose we want to code the following query: Query 3: given a book title, get the average age of users who reviewed a book with that title. Our starting set of nodes is made up of the books with the given title. We could go through each book node and filter out books with a non-matching title. This is essentially what databases refer to as a full table scan: in order to find the books we want, we have to go through all of them sequentially. The cost of this operation is linear in the number of books, so this can potentially take a long time. In almost all cases (except maybe when the sequence is very short) we can improve performance by using a map (essentially a simplified version of an index in a database). First, we create a map (booksByTitleIndex) which we update every time we add a new book, and then we use this index at the beginning of our query (see the sketch at the end of this section). When we run this query on the Kaggle data set, here’s the output we get with title “Dracula”: 30.338983050847457. A HashMap is a good choice when we want to find exact matches on a certain property, because the average cost of finding exact matches is constant (O(1)). When we want to find entities based on the total order of a certain property (e.g. get all users whose age is above 25), we can use a different data structure, e.g. a TreeMap, where the cost of finding elements is logarithmic.
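And here is the sketch of the index referred to above: the booksByTitleIndex map, kept up to date as books are added, and a version of Query 3 that starts from it instead of scanning every book node. It builds on the hypothetical classes sketched earlier and is not the article’s actual code (which lives in the linked GitHub repository).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class IndexedBookReviewGraph extends BookReviewGraph {
    // A simplified index: exact-match lookups on the title property, O(1) on average.
    private final Map<String, List<Node>> booksByTitleIndex = new HashMap<>();

    Node addBook(String isbn, String title) {
        Node book = addNode("book-" + isbn, "Book");
        book.setProperty("isbn", isbn);
        book.setProperty("title", title);
        booksByTitleIndex.computeIfAbsent(title, t -> new ArrayList<>()).add(book);
        return book;
    }

    // Query 3: average age of the users who reviewed a book with the given title.
    double averageAgeOfReviewers(String title) {
        List<Node> books = booksByTitleIndex.getOrDefault(title, new ArrayList<>());
        double totalAge = 0;
        int reviewers = 0;
        for (Node book : books) {
            for (Edge edge : book.incomingEdges) {
                if ("Reviewed".equals(edge.label)) {   // Reviewed edges point from User to Book
                    totalAge += ((Number) edge.source.getProperty("age")).doubleValue();
                    reviewers++;
                }
            }
        }
        return reviewers == 0 ? Double.NaN : totalAge / reviewers;
    }
}
```

The same traversal pattern, starting from an author node and following incoming "Written by" and then incoming "Reviewed" edges, is what produces the per-author average rating of Query 1.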
If we wanted to find elements by prefix, a Trie would be the right choice to implement an index. Conclusion In this article we’ve seen how we can model a data domain using graphs, and how to implement a simple labelled property graph in Java. The code is quite basic and lacks error handling and thread-safety, but it’s a starting point for understanding and applying graph modelling to data domains. In the past year, I have worked on a project that has involved the analysis of high volumes of interconnected data. Modelling this data as graphs has allowed my team and me to have an intuitive understanding of how the data is connected: transitioning from whiteboard brainstorming to data model implementation was a straightforward process. Initially we were unsure how the data would eventually be queried; adopting a graph model has allowed us to keep our model flexible and easy to query throughout the project. The last decade has seen a growing interest in graph databases in the software community. Most of the technology powering these databases is not really new; what’s new is the fact that we now have a lot of interconnected data, and we want to query this data based on how elements are connected together, rather than just the values they hold.
https://medium.com/swlh/graph-data-modelling-in-java-dc815a0b8b24
['Alberto Venturini']
2020-09-22 10:31:53.804000+00:00
['Data Modeling', 'Java', 'Neo4j', 'Graph Database', 'Graph']
Are America’s Cultural Struggles the Result of Risk-Averse Incrementalism?
America is at war with itself — but are racial and political divisions really to blame, or is something deeper fueling our discontent? We’re joined by Dr. John Parmentola, Adjunct Staff Member at the RAND Corporation, to discuss his upcoming book, “Creating Wealth from Worthless Things”, which argues that a risk-averse focus on efficiency & incrementalism has hurt the economy, reduced individual opportunities, and widened the wealth gap underlying today’s social conflicts. Welcome, John! Right now, there’s a lot of social unrest in America, with increasing social fragmentation along racial & political fault-lines, a rise of interest in socialism, and growing distrust of the financial system. How are diminishing opportunities driving these channels of unrest? When opportunities are limited for quality jobs and social pathways to make a living, people will seek other ways to fulfill their needs. It’s fundamentally about survival, but it doesn’t always express itself that way — especially when it’s politically expedient to divide people into victim groups because of economic strife and despair. Dr. John Parmentola, Adjunct Staff Member at the RAND Corporation. (Website) People in difficult situations are highly susceptible to manipulation, and blaming others for their problems is an easy way to capture votes. However, this political strategy doesn’t solve the serious problem of economic strife and doesn’t help the over 40 million people in this country who would like to get out of poverty. This problem doesn’t depend on your color, ethnicity, creed, gender, or sexual orientation. We need to provide these people with new social pathways that will enable them to improve their quality of life. Socialism & communism promise to do this and they’re easy to sell because they scapegoat the rich, but history tells us these models fail to fulfill vital human needs. None of the issues we’re discussing are new — so what’s made 2020 such a “perfect storm” for violence & social unrest? It seems like COVID-19 and the quarantine sparked it — what’s driving it now? The COVID-19 crisis has amplified the number of people in desperate situations and essentially flattened the economic strata of our country. The pandemic has pushed many people into desperation, and George Floyd was the spark that set everything ablaze. It was a tipping point for many people. People are upset, and rightfully so — but this situation highlights symptoms of a much deeper problem that shouldn’t be overlooked. The virus will cause jobs to be eliminated because businesses will seize the opportunity to replace workers with technology. Businesses will strive to survive by reducing costs and becoming more efficient — but will workers be able to be retrained and can they adapt to this changing environment? That remains to be seen. US National debt as a percentage of GDP is at record levels. (PGPF) Whether government or private, the financial system is driven today by short-term interests; however, the deeper problem we face is a long-term one. Over the last 40 years, the government has accelerated its borrowing from the future to pay for immediate needs for political reasons. This strategy cannot create the future, as it is completely focused on today’s problems. As for the private sector, it is primarily driven by quarterly returns to satisfy its stakeholders and investors’ financial interests. The private sector is dominated by efficiency and incremental improvements that are low-risk. 
People should be angry about being left behind, but the blame is misplaced. The government has failed to provide them with quality education and investments in human potential to expand the frontiers of knowledge in science, engineering, and mathematics. What is required is a strategy that has the potential to expand human imagination as to what is possible, feasible and practical. This strategy can create new opportunities for quality jobs, new industries, new products and services, and economic growth. Instead, what we see are empty, quick-fix promises that may sound comforting, but won’t make a substantial difference in the lives of the people and their families. It’s just the same old political dogma. A protestor carries a flag upside down, a sign of distress, in Minneapolis in May 2020. (AP News) Would it be accurate to say that when opportunities are reduced for people to make a decent life for themselves and their families, then people on the bottom of the socio-economic pyramid get hurt first? Are these protests a canary in a coal mine, so to speak? This problem has been building up for some time now. We went from a single earner that could work hard enough to raise a family and realize the American dream to multiple earners with multiple jobs that struggle to accomplish the same thing. The quality of available jobs has declined substantially. Meanwhile, our education system’s quality has also steadily declined since around 1980, especially in critical areas of the future, such as science, technology, engineering, and mathematics. There are not enough highly qualified teachers to prepare our precious youth for a highly competitive world where China and India are able to outproduce us in terms of graduates. Is it any surprise we’re falling behind in business? This situation didn’t happen overnight — it’s the result of persistent decay that started nearly 40 years ago, and it has changed our national culture from one capable of risk-taking and bold steps to one of risk avoidance and incrementalism. This isn’t how the U.S. became the world economic and military power after WWII. Over 40 years, we’ve reduced our investment in human potential and pushing out the frontiers of knowledge like we did after WWII. Those long-term investments after WWII produced an explosion of discoveries and science-inspired inventions that the world wanted and admired. Every invention we hold dear today came from those investments. We still depend and improve upon all these inventions to this day. They are a consequence of the digital, communications, and biotechnology revolutions that were all inspired by scientific discoveries. Government R&D investment in basic research created new industries after WWII. (website) Let’s talk about Summers and Bernanke’s secular stagnation, where savings exceed investment which leads to a chronic lack of demand. This is claimed to be a reason that the GDP growth rate has declined since 1980, and said to be exacerbated by income & wealth inequality. What can you tell me about that? I’m not an economist, but I’m aware of three contributing trends to the systematic decline in the GDP growth rate since 1980. First, corporate America has been focused on efficiency, which has driven down the costs and prices for goods and services, and lowered the GDP growth rate by reducing the monetary value of all goods and services sold over time. 
Another trend is a shift from wealth creation to wealth transfer, which happens as the private sector has been replacing legacy industries with more efficient ones that do the same thing in different ways. Finally, since 1980 we have seen increasing market competition from countries like Japan, Germany, China, and South Korea that has limited growth. Historically, from 1950 to 1980, the annual GDP growth rate exceeded 5% about 20 times. The US GDP growth rate has declined by half since 1950. (Tradingeconomics) During this period, the U.S. produced many unique things the world wanted, and we had little competition. Since 1980, the annual GDP growth rate exceeded 5% only twice. Increasing global competition combined with a focus on efficiency has reduced scarcity of things and lowered prices to negatively impact GDP. So, if efficiency and competition are stifling growth, the question is how to foster it? The answer is by investing in R&D at the levels after WWII to acquire new knowledge about how the natural world works. The discoveries that come from this investment expand human imagination through education, which inspires the creation of unique products and services for consumption. Fundamentally, the value of unique knowledge is very important to our nation’s economy by allowing the creation of entirely new industries that foster long-term economic growth and the creation of quality jobs. Now, does decreasing wealth inequality play a role in increasing demand? Sure, but if we took the wealth of the top 20% of U.S. households and redistributed it evenly, it would temporarily increase spending, but that will not sustain long-term economic growth. Corporate wealth-transfer through higher efficiencies won’t stimulate growth, and neither will private wealth-transfer through tax redistribution. So, I don’t see wealth transfer as a path to solving the problem of opportunity shortage. It would be a short-term fix to disparities in wealth, but the longer-term economic growth issue would still exist. In my view, “we the people”, working through the government, must do what the private sector cannot. We need to overcome short-term risk-averse thinking and invest in basic R&D to expand human imagination to create opportunities for growth. In my judgment, the current path we are on will likely lead us to stagnation and more social strife. Federal R&D funds have been replaced by risk-averse private sector funding. (ITIF) In your upcoming book, “Creating Wealth from Worthless Things,” you explore the creation of wealth through the expansion of human knowledge in contrast to optimizing the efficiency of production processes and incrementally improving products and services. Can you tell me a bit more about the key topics you cover in it? The cause of economic growth has been an unresolved mystery pursued by leading economists for nearly two hundred and fifty years. Creating Wealth from Worthless Things is groundbreaking in that it identifies the actual cause of economic growth and wealth creation. The book accomplishes this by describing the history of unexpected discoveries that inspired each of the six extraordinary technological revolutions that shaped the modern world’s creation. These events led to the creation of new industries, enormous numbers of jobs, and unique products and services that improved the entire world’s quality of life. 
The discoveries I’m describing came from accumulating rules of how the natural world works, which in themselves had no commercial value — in other words, they were “worthless things”. The vast majority of them were funded through risk-taking by governments and wealthy patrons, and enabled the British Empire to become dominant by 1900 and made the U.S. the world-leading economic and military power after WWII. Unfortunately, over the last 50 years, our culture has changed from one that was willing to take risks and bold steps to one that now embraces risk aversion and incrementalism. In the book, I explain the social and political forces that have caused this cultural transformation, and how this transition limits our future growth and increases the likelihood that China will replace us as a world leader. This book concludes with a plan to avoid these potentially grave outcomes and explains the steps required to advance our economic growth. I also describe a new and unexpected technological revolution that will create new opportunities for a prosperous future for all humankind. John Parmentola’s upcoming book, “Creating Wealth from Worthless Things”. (website) Federal R&D spending has been on the decline since the 1960s, hasn’t it? Does that also apply to industries like computing, or is that highly successful market an exception that proves the rule? The outcome of high-risk R&D is uncertain, which has led the private sector to focus on low-risk development commonly called innovation. This is a process of adapting inventions to an application in the form of marketable products or services. In contrast, R&D, especially research, is about the discovery of new rules of how nature works, which leads to massive breakthroughs but has a high degree of risk. Since 1965, U.S. federal R&D funding as a percentage of G.D.P. has declined by 65%. During this period, private sector R&D increased by a factor of about 3 compared to federal R&D; however, unlike federal R&D, private sector R&D is primarily low-risk D to support existing product lines and services. What has been lost in this significant decline of federal R&D is opportunities for creating new pathways to economic growth. Federal spending on basic vs. applied research development. (SSTI) To appreciate this, imagine that the rules of chemical reactions and the discovery of the 98 fundamental elements of the periodic table never happened. Would the worldwide chemical industry today be generating $5 trillion per year and impacting the numerous industries that depend on it? I do not think so. In regard to computing, except for the pursuit of quantum computing, the private sector has been squeezing out as much performance as they can out of well-understood materials and designs, but like other technologies and industries, they are reaching fundamental performance limitations. Computing innovation continues with new approaches like A.I. and Machine Learning, which exploit familiar technology for new applications, but physics will eventually limit these as well unless something unexpected comes along. In 2019, Parmentola spoke on “Creating Wealth from Worthless Things” at MIT. (Watch it on YouTube) It sounds like you’re saying that slowing high-risk R&D and a focus on efficiency are hurting long-term growth and the availability of jobs. Do you view these trends as “putting the squeeze” on opportunities in America and fueling the social unrest that we’re seeing in the news? 
Increasing efficiency and incremental improvements in things can produce valuable short-term results — the question is how long we can run our society on a model of incremental improvement, especially when increasing efficiency typically results in the net loss of jobs. Moore’s Law is one example of incrementalism that may not last. (InfoQ) Historically, the incremental improvement of technology based on current knowledge and available materials always has a finite lifetime. Performance eventually runs out of headroom, and incremental improvements eventually become too small and costly to pass on to consumers. Many major technologies produced by the commercial sector and defense industry are reaching performance limits in such areas as transportation, computing, communications, energy production and storage, and defense technologies. Unless they overcome formidable performance barriers, we are headed toward stagnation in technology performance. However, rather than focus on discovering unique materials and new knowledge to overcome performance barriers, companies in today’s risk-averse world have dedicated themselves to process improvement to increase efficiency. This involves incremental improvements in logistics or fulfillment and business processes. That’s not a revolution; it’s a path to stagnation, and one that eliminates jobs and removes social opportunities for people to achieve success in life. Ultimately, the question of R&D spending leads us back to the topic of creating quality jobs and social pathways for people to realize the American dream, and it leads us to a future with reduced social tension between the “haves” and the “have nots.” New knowledge inspires new inventions, industries, and jobs — and the more of it there is, the less tension and strife there will be. About Our Guest Dr. John Parmentola has built a highly distinguished career over four decades as a scientist, teacher, entrepreneur, inventor, innovator, a pioneer in founding new research fields, an international human rights activist, and a leader of complex research and development organizations with broad experience in the private sector, academia and high-level positions within the federal government and defense community. He received the 2007 Presidential Rank Award for Meritorious Executive from President George W. Bush for his service to the Department of the Army. Dr. Parmentola was also an Air Force Intelligence Agency nominee for the 1996 R. V. Jones Award of the Central Intelligence Agency for his work in arms control verification, and a recipient of the Outstanding Civilian Service Award and the Superior Civilian Service Award for his many contributions to the U.S. Army. He is an Honorary Member of the U.S. Army STs, a Fellow of the American Association for the Advancement of Science, and a recipient of the U.S. Army 10 Greatest Inventions Award, the Alfred Raymond Prize, and the Sigma Xi Research Award. He has presented more than 500 speeches and published numerous scientific papers and articles on science and technology policy. He is also the author of an authoritative book on space defense. Dr. Parmentola is currently a consultant to The RAND Corporation, where he works on defense, energy, and science and technology assessment, strategy, and planning issues for government agencies, both domestic and foreign. He also works on a volunteer basis for the National Academy of Sciences. Before this role, Dr.
Parmentola served as a Senior Vice President at General Atomics, Director for Research and Laboratory Management for the U.S. Army, Chief Scientist at the U.S. Department of Energy, and Chief of Advanced Systems & Operations at the Defense Threat Reduction Agency. Dr. Parmentola has a Ph.D. in Physics from M.I.T. and has served on the faculty of M.I.T. and West Virginia University, and as a Fellow at the John F. Kennedy School of Government. Learn more about him on his website at: https://johnparmentola.com/
https://medium.com/predict/are-americas-cultural-struggles-the-result-of-risk-averse-incrementalism-5c9d6452f09c
['Tim Ventura']
2020-10-12 02:48:02.982000+00:00
['Innovation', 'America', 'Government', 'Technology', 'Science']
Enabling Data-Driven Decisions
The Financial Times has always relied on facts and data to deliver the highest-quality journalism to our readers. The data-driven culture has always been part of the company values. Therefore, how we manage our data internally within the organization is very important to us. Fundamental to this is having a dedicated central platform for telemetry and data management as part of our FT Core technology group. This is a group that is part of FT Product & Technology, owning three of the central technology platforms. These platforms power our customer-facing products, spanning content publishing, content metadata, paywall, and analytics data. My team is responsible for the Data Platform. This is the platform for telemetry and analytics data, and our mission is to deliver reliable, high-quality data in a timely manner to the internal users and teams at the FT to enable decision making and new product development. We have a very large and diverse group of direct users and indirect consumers using and benefiting from the valuable data our platform collects and stores.
Financial Times board members — for strategic and tactical decision making.
Marketing teams — for campaign design and planning to acquire and retain subscribers.
Editorial teams — for monitoring the performance and the readership for the articles and content they produce.
Advertising teams — for identifying subscriber groups to sell to and target for different products.
Our Product teams — to design better products for the FT readers and drive personalization to help acquire and retain them.
Analytics, Business Intelligence, Contact Strategy, and Data Science teams are among the main direct users of the Data Platform, using the data to conduct analysis, build dashboards and reports, and train models that are then widely used across the FT. Let’s review several use cases for the data delivered by the FT Core Data Platform. Power up Analytics At Financial Times we use a variety of business metrics to better understand the impact and the opportunities for future growth. Some of them are around engagement. It is absolutely essential to understand how engaged our readers are. We do that by using a metric called RFV (Recency, Frequency, and Volume). What we look at, for every single reader, is:
When did they last come to our site? — this is recency
How often do they come in any given time period? — this is frequency
How much content do our customers read when they come to our site? — this is volume
Based on RFV we can determine an engagement score for every single reader. Power up Retention As well as a user’s current level of engagement, it is useful to know who may be about to become engaged or disengaged. The Data Science team at the FT is developing and training RFV predictive models to identify individual and corporate subscribers that are moving from engaged to disengaged over the next 4 weeks and vice versa. Based on the results provided by these propensity models, our Customer Care team is able to contact the readers that are likely to disengage and, in many cases, successfully retain them.
Envoy is an internal decision engine used across our customer-facing product line that helps us build smarter products. This engine uses the results from the same models to consistently target subscribers who are predicted to disengage, offering them personalized newsletters and predicting the next best action for them. This is another example of the data-driven culture at the FT. And the raw data comes entirely from the Data Platform. Power up New Product development FT.com is offering relevant content based on personal preferences When new readers subscribe to FT.com, they provide information about the industry they are involved in. We store it as part of our arrangement models within the Abstraction Layer. It is further used for enabling a variety of convenient features like personalized topic recommendations as part of the subscriber’s personal myFT feed page. This helps our readers save time in finding relevant content based on their personal preferences. Power up Internal Tools The performance of the best read ever article at FT.com Lantern is an internal monitoring tool powered by data from the Data Platform. It is an editorially focussed tool that provides analytics for the content Financial Times is publishing. It is used by Editorial teams to monitor the performance of the content they are producing. The main metric used is Quality Reads, which is calculated within the Streams Layer of the Data Platform and provided for consumption to Lantern with minimal latency. How do we support all these different use cases? Streaming the data into the Data Lake and further flowing downstream to the consumers We ingest clickstream data for the usage tracked on the websites and the mobile apps for the digital version of Financial Times titles. That data is streamed in near real time via the Data Platform Stream Layer for further data processing. In addition to FT product usage data, we also ingest data from internal systems and external data vendors, for example, our internal platform managing the subscribers’ membership, the platform for the content our journalists are publishing with its metadata, as well as some external systems like Zuora for payments and Salesforce for corporate subscriber contracts. All the data is stored with minimal latency within the Data Platform Data Lake and further used for building the Abstraction Layer, where we generate valuable data models providing insights to the business departments. Clear and well-established data contracts provide those insights to the Analytics Layer, where multiple audited metrics are calculated and ready to be used by the internal business departments. The same insights feed the generation of a variety of data science models. Democratizing the data All the way from the Streams Layer through the Data Lake and the Abstraction Layer to the Analytics Layer, the Data Platform ensures data is highly secure, validated, and of the highest quality. We ensure that the insights we are generating do not reveal any personally identifiable information and thus are ready to be democratized and widely used for decision making and to drive growth. To ensure data democratization we provide tools for better visibility and understanding of the data, along with some self-service enablement for easy data access and the ability to build new data workflows easily.
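As a purely illustrative aside: none of the FT’s pipeline code is shown in this piece, so the sketch below only suggests what a batch transformation from raw lake events to an abstraction-layer table can look like, using the Java API of Spark (one of the technologies mentioned later in this piece). The paths, column names, and aggregation are hypothetical and are not how the FT computes its metrics.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.avg;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.count;

public class ClickstreamToAbstractionJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("clickstream-to-abstraction")   // hypothetical job name
                .getOrCreate();

        // Raw clickstream events landed in the lake (hypothetical path and schema).
        Dataset<Row> events = spark.read().parquet("s3://example-data-lake/raw/clickstream/");

        // A made-up per-article aggregation standing in for an abstraction-layer model.
        Dataset<Row> perArticle = events
                .filter(col("eventType").equalTo("page-view"))
                .groupBy(col("contentId"))
                .agg(count(col("contentId")).alias("views"),
                     avg(col("attentionSeconds")).alias("avgAttentionSeconds"));

        perArticle.write().mode("overwrite")
                .parquet("s3://example-data-lake/abstraction/article_engagement/");

        spark.stop();
    }
}
```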
We are constantly working on extending the platform with new capabilities to enable the rising demand for data, be able to generate more and more valuable insights and power up machine learning and various dashboards as part of business intelligence. How are we building the Data Platform? The mechanics for governing the streams, the lake, and the flows The FT Data Platform lives entirely in the Cloud. We are building it using AWS managed services by preference as we are reducing the operational cost significantly. Our big project now is to consolidate all our Stream Layer services to ingest new data via AWS MSK with Spark jobs and Apache AirFlow for the workflows orchestration. Currently, we are using Kinesis streams, SNS and SQS but our research shows that with the newly proposed approach we will have better scalability and more effective cost management. Our data is landing and stored in the Data Lake. Now we are working on standardizing the formats by using the Parquet in S3 where it will be available for reading via Redshift Spectrum. Our Abstraction and Analytics Layers reside in the main Enterprise Data Warehouse using Redshift. For virtualizing the variety of underlying formats and data stores we plan to use Presto service. As an initial evaluation, we plan to use our own deployment of Presto within AWS (aka “Vanilla” Presto). Long term if we prove that the concept works for us we may migrate to some of the managed Presto services. As a foundation of the platform, we are moving now from EC2 and ECS to EKS. We anticipate this migration will increase the platform’s scalability and operational cost-effectiveness significantly. For the data lineage, we are planning to try Apache Atlas. For monitoring and observing the system’s health, we will continue relying on Grafana, Splunk, and our internal monitoring platforms BizOps and Heimdall. For monitoring the data quality and automating this process across the internal FT systems, we built and integrated within the platform a homegrown Data Quality Metrics Checks framework. How are we enabling the data to be consumed? Execution Environment for Data Workflows For enabling our tech teams to easily build new workflows, and our product teams to do product discovery and development more effectively, we recently developed a new capability. We named it internally E2 which stands for Execution Environment but relates nicely with Euler’s number. It is a processing engine for different algorithms like Data Science Models and Machine learning on top of the data we have in our Data Lake. Our ambition is to support scalable workflows written in a variety of programming technologies like R, Python, Java, and Spark. We will continue sharing our adventures and challenges in building our core platforms using the latest technology trends. Stay tuned and if you recognize our mission as yours consider joining us in this exciting journey!
https://medium.com/ft-product-technology/enabling-data-driven-decisions-564359b79788
['Elena Georgieva']
2020-05-18 11:25:52.890000+00:00
['Big Data', 'Data Platforms', 'Financial Times']
Rights and wrongs when creating profile
The biggest GDPR fine in Germany, which H&M is to pay, has uncovered a delicate yet scandalous problem — spying on employees. It was mentioned there that some profiles were created and continuously developed, bringing details of about two hundred employees to a number of managers. But is there such a thing as a righteous profile? H&M will be charged €35.3 million — a penalty imposed by the Data Protection Authority of Hamburg. The company, which has a service center in Nuremberg, is accused of collecting and storing private life data of its employees. H&M has allegedly been gathering more data than it had a right to about hundreds of its employees since 2014. In the press release describing the incident it is said that lots of private details got documented by the company’s management, including information about family issues, religion, illnesses and diagnoses. These records, in some cases quite elaborate and full of particularities, made for further processing and analysis, were available for dozens of employees to access. The arbitrariness with which the company’s managers acted, collecting and recording private life data during casual talks and keeping a history of such details to create an illicit profile, is among the key problems which the Data Protection Authority is trying to convey by exacting the biggest GDPR fine within Germany so far. The fact that the company collects private information surfaced unexpectedly in 2019, after a configuration error in the system exposed the data, letting anyone working at the company access the information. The exposed details were available for hours. 60GB of information was provided by H&M to the DPA. The company took many steps to restore the level of security and transparency — the brand has reportedly invested in compliance and launched a data protection program in Nuremberg. A new employee has been assigned to coordinate data protection. The new risk management framework of the affected service center included mechanisms for whistleblowing and for updating privacy status. It has been noted that H&M was going to reimburse the employees for the major inconvenience. Interestingly, according to Trust Anchor, considering the company’s turnover the penalty should have been about two times higher — about €61 million, but thanks to the cooperation with the DPA it was cut in half. Now back to the concept of a righteous profile and what it can be. The desire to create a profile for each key employee in an organisation is not really outrageous; it is all about what can and can’t be done. Recording casual talks with your colleague or making notes in order to hand them over for further processing is definitely not a legitimate option, but a major privacy breach. When it comes to employee assessment and evaluating how appropriate someone is for a job assignment, there are legal measures which should be taken. An automated profiling solution can be implemented within a company’s system. It is developed to help the HR department improve decision-making regarding appointing a specialist, allocating tasks, entrusting employees with crucial responsibilities, enhancing overall performance, increasing productivity, and identifying skillful and promising employees. Only correspondence conducted via corporate channels during work hours and on corporate devices can be analysed in order to create an unbiased understanding of a staffer’s skills, diligence, strengths and weaknesses.
Knowing these details allows avoid prejudices, erroneous opinion or overly personal likes and dislikes. The system runs on strictly set algorithms and analyzes data excluding personal opinions and emotional baggage. In some cases, companies, which hire remote employees and never have any opportunity to know each specialist in person, to pay attention to professional progress, involvement into the business processes and projects or growing disinterest, might want to learn a bit more about whether a hired professional’s goals, intents and dedication meet the same ones of the company. Employee analysis helps enhance communication within a team and between managers and employees in many ways: • Some jobs require specific qualities, and some of these jobs are serious enough to not accept even few mistakes or a person who pretends to be suitable but in fact quite unfitting for that very job. Of course, there are tests which can be made before taking such professionals onboard and letting them lead the project, but tests won’t show you the real picture of an employee in progress, whereas this is exactly what is important — to see how a specialist deals with an assigned job in progress, whether something changes in comparison to what was at the start. • Know your employee individuality in order to not spoil the cooperation spirit by wrong attitude, have a unique approach to someone if necessary. • Information can’t be trusted equally to everyone who gets access to corporate assets. Automated profiling has been developed to the extent when emotionally and informationally limited corporate correspondence can draw out results based on criteria which matter only within job-related issues, for example, whether a specialist can be trusted with the top secret data. Appointing new CEO, bringing a teacher to work with children — it might be helpful to make sure that no recruitment mistake was made. Automated profiling lets you identify prevailing type of conduct, personality traits, purposes and respond to management decisions, influencing colleagues, motivation, loyalty and criminal tendencies without resorting to such illicit and utterly indecent ways of pulling information as spying and whistleblowing divulging private life details. Automated profiling lets a company receive reliable and impartial staff assessment with no emotional baggage, know its implicit leaders who can manage the team if needed, make important decisions concerning the level of responsibility, appointing new positions, granting access to sensitive data and even measure healthy workplace environment.
https://medium.com/major-threats-to-your-business-human-factor/rights-and-wrongs-when-creating-profile-c8822b7f2fc1
['Alex Parfentiev']
2020-10-08 14:25:54.551000+00:00
['Profile', 'Privacy', 'Recruitment', 'Psychology', 'Data Breach']
Third Law of the Interface: Interfaces form an ecosystem
Interfaces live in an ecosystem and there is a fertile and conflicting exchange among them. When the engineers who created the first computers needed a device to program them, they simply adapted what they already had: the typewriter QWERTY keyboard. And when in the 1960s the computer needed a real-time output device, they had no doubts: the television screen was waiting for them. Like the synapses of a neuron or the valences of a chemical element, interfaces have the possibility of linking with other interfaces. Interfaces, as Claude Lévi-Strauss (1964) said about myths, engage in a dialogue and “think each other”. The dialogue between interfaces does not discriminate against any type of device or human activity. What today is on the screen, yesterday was in the real world, and what will appear tomorrow in a videogame, will later be found on the Web. Interfaces form a network that looks like an expansive hypertext in perpetual transformation that carries out operations of movement, translation, transduction and metamorphosis. The evolution of interfaces depends on the correlations that they establish with other interfaces. If the interface does not engage in a dialogue with other interfaces, it does not evolve and it runs the risk of being extinguished (Fourth Law). The impossible interface Sometimes the interface does not find good interlocutors for dialogue. The printing press, invented in China a millennium before Johannes Gutenberg, could not become consolidated in that society because it was almost impossible to dialogue with a system of ideographic writing in which each sign corresponds to a concept. As Marshall McLuhan (1962) explained, the interface of the Chinese press lacked an interlocutor: the Latin alphabet. The Gutenberg machine, on the other hand, integrated into one interface the wine press, the Latin alphabet, paper, binding systems and the techniques of fusing and molding lead. Five hundred years after Gutenberg something similar happened with graphic interfaces. Several companies attempted to market a personal computer with a user- friendly interface (Apple Lisa in 1980, the Xerox Star in 1981), but they failed. Finally, in the prophetic year of 1984, the miracle occurred: the Macintosh, the machine for the rest of us, conquered the public. Why did the Mac succeed where the Apple Lisa and the Xerox Star had failed? Because it established a dialogue between its graphic operating system, the printer laser of Hewlett-Packard and the PostScript language of Adobe. The union of these three technologies revolutionized the way the world understood computing, created new professional fields such as Desktop Publishing (DTP) and generated the conditions for the personal computer revolution in the 1980s (Lévy, 1992). Perfect interfaces The situation happened again at the beginning of the 21st century. As Steven Levy explains in The Perfect Thing (2006), the appearance of the iTunes software, the progressive reduction in the size of hard disks, the lower price of memories and the development of the Firewire interface converged into the coolest product of the new decade: the iPod. The iPod is an interface that integrates different hardware and software elements – a 1.8-inch hard drive, the Firewire connection, the MP3 format for audio compression – with the former Macintosh application for playing and managing music: iTunes. As in 1984 with the Macintosh, the interconnection of actors determined the success of the perfect thing. 
Just one year after Levy’s book was published it was already old. On 29th June 2007 Apple introduced a new perfect thing with an even more extended network of actors: the iPhone. This description of high-technology devices that converge into a single interface should not eclipse the human actors that participate in them. Designers (the Apple design team, not just Steve Jobs), institutions of any kind (media, markets, Apple Stores, research labs, etc.) and, obviously, consumers, participate and interact in the network built around these almost perfect – new improved models are presented every semester – interfaces. Theoretical networks Social sciences have had an intermittent interest in technological change. Classical thinkers like Adam Smith, David Ricardo or Karl Marx saw mechanization or division of labor as fundamental topics of their economic theories. Nevertheless, from the end of the 19th century to the 1950s the economy was more interested in the equilibrium of variables so that the attention was focused on other fields. The development of a new school of thinking around the Austrian economist Joseph Schumpeter brought the problem of technology, innovation and entrepreneurship into focus again. For many years, researchers believed that the role of inventors was central in the innovation process: that’s why we still talk about James Watt’s steam engine, Thomas Edison’s light bulb, Alexander Bell’s telephone and Steve Jobs’ Macintosh. To every name there is a corresponding artifact, or more than one (Thomas Edison also ‘invented’ the phonograph, and Steve Jobs the iPod, the iPhone and the iPad). This conception is based on the heroic role played by each individual inventor in the creation of a new artifact. Researchers like Nathan Rosenberg (1992), one of the most recognized historians of economy, denounced this ‘heroic theory of invention’ that impregnates our language, patent system and history books. In this context, the Laws of the Interface prefer to establish a dialogue with conceptions and theories like the Social Construction of Technology (SCOT) (i.e. Bijker, Hughes and Pinch, 1987; Bijker and Law, 1992; Bijker, 1997), the Actor- Network Theory (ANT)(Callon, 1987; Law and Hassar, 1999; Latour, 2005), media ecology (McLuhan, 1962, 2003; McLuhan & McLuhan, 1992; Scolari, 2012, 2015; Strate, 2017), media archaeology (Huhtamo & Parikka, 2011; Parikka, 2012) and media evolution (Scolari, 2015, 2019). The contributions of Arthur (2009), Basalla (1988), Levinson (1997), Logan (2007), Frenken (2006), Manovich (2013) and Ziman (2000) have also been integrated into this interdisciplinary and polyphonic conversation. The Laws of the Interface, in a few words, proposes an eco-evolutionary approach to socio-technological change based on the contributions of all of these authors and disciplines. The content of one interface is always another interface What happens when we deconstruct an interface? The windmill was one of the most important inventions of the Middle Ages. If we deconstruct a windmill, what do we find? A combination of the water mill and ship sails, two technologies invented in Antiquity. If we dismantle the water mill we will find a wheel, an axis and many other technological actors that interact with them. When we open an interface we always find more interfaces. This fractal dimension of interfaces could take the form of a new law or at least a corollary: the content of one interface is always another interface.
https://uxdesign.cc/third-law-of-the-interface-interfaces-form-an-ecosystem-e6293a108089
['Carlos A. Scolari']
2019-10-09 23:49:23.079000+00:00
['UI', 'Usability', 'Design', 'Interfaces', 'Technology']
The Latest Theory That May Answer the Origin of Covid-19
The Latest Theory That May Answer the Origin of Covid-19 The Mojiang Miners Passage (MMP) hypothesis explains many oddities of the Covid-19 pandemic. Image by JohannaIris from Pixabay In a July commentary, “A Proposed Origin for SARS-CoV-2 and the COVID-19 Pandemic,” Jonathan R Latham, virologist doctorate, and Allison Wilson, a professor of biology, presented the Mojiang Miners Passage (MMP) hypothesis that provides “a plausible and parsimonious explanation of all the key features of the COVID-19 pandemic and its origin,” they stated. “It accounts for the propensity of SARS-CoV-2 infections to target the lungs; the apparent preadapted nature of the virus; and its transmission from bats in Yunnan to humans in Wuhan.” Let’s see what the hypothesis is about. What is known about the origin of SARS-CoV-2? First, let’s start with the known facts. The closest relative to SARS-CoV-2 is RaTG13, a bat sarbecovirus isolated from the Yunnan Province of China, with about 96% genetic identity. A recent genomics study in Nature Microbiology shows that SARS-CoV-2 descended from RaTG13, which indicates that SARS-CoV-2 came from bats without any intermediate host. “Current sampling of pangolins does not implicate them as an intermediate host,” stated the study authors. Prior research in May also said that pangolin is not the intermediate host that passed SARS-CoV-2 to humans. The pangolin coronavirus (pangolin-CoV-2020) and SARS-CoV-2 are only 90.32% identical. “Bat-CoV-RaTG13 was more genetically close to SARS-CoV-2 at both individual gene and genomic sequence level compared with the genomic sequence of pangolin-CoV-2020 assembled in this study,” this research concluded. “Our study does not support that SARS-CoV-2 evolved directly from the pangolin-CoV.” SARS-CoV-2 came from a bat sarbecovirus called RaTG13, not pangolin, human-made, or the wet market. How it got spread into humans is still a mystery. Another fact is that SARS-CoV-2 is not human-made. Genetic engineering leaves a ‘fingerprint’ in the organism’s genome, which can be caught with genetics techniques. In January, the Massachusetts Institute of Technology (MIT) used the Finding Engineering-Linked Indicators (FELIX) tool of the US Director of National Intelligence to confirm that SARS-CoV-2 was never genetically manipulated from any known coronaviruses. In fact, the FELIX tool shows that SARS-CoV-2 best matches are naturally occurring coronaviruses. In late May, the Chinese CDC ruled out the Huanan wet market in Wuhan as the source of the SARS-CoV-2 outbreak. The WHO announced the same. SARS-CoV-2 was not found in any animals tested from the wet market. And a third of early Covid-19 patients never had contact with the wet market. Therefore, SARS-CoV-2 came from somewhere else besides the wet market in Wuhan. So the fact is that SARS-CoV-2 came from a bat sarbecovirus called RaTG13, not pangolin, human-made, or the wet market. How it got spread into humans is still a mystery. “The organizations [of the U.S. intelligence community] decided to continue investigating two alternatives,” Sarah Scoles, a freelance science writer, wrote in OneZero. “The more likely explanation that the virus jumped from an animal to a human, and the more remote possibility that it was a natural virus released in a lab accident, which still hasn’t been ruled out [by the US intelligence community].” What happened in Mojiang Mine in 2012 In late April of 2012, six miners at Mojiang Mine fell ill with unknown pneumonia. 
They were brought to the Kunming University Hospital in Yunnan, which is about 250 km from the mine. Their symptoms include dry cough (all patients), sputum (all), high fever (all), difficulty breathing (5), myalgia (5), low blood oxygen levels (4), headaches (3), and blood clots (2). Treatments used were steroids (all), antivirals (5), ventilation (3), and blood thinners (2). And half of the miners (3) died in the end. Samples (at least blood and thymus tissues) from the miners were sent to the Wuhan Institute of Virology to determine the causative agent. The conclusion made was that “the unknown virus lead to severe pneumonia could be: The SARS-like-CoV from the Chinese rufous horseshoe bat,” wrote the authors. So, the miners had a coronavirus infection. (Please also note that the original study that detailed the Mojiang miners’ pneumonia is a Masters thesis, which is still a credible scientific source supervised by a committee of academics.) In that same year, ZhengLi Shi, director of the Center for Emerging Infectious Diseases at the Wuhan Institute of Virology, led a surveillance study at the Mojiang Mine. Her team collected fecal swabs from 276 bats. Using genetic sequencing, they detected nine coronaviruses species, of which six were never classified and one was RaTG13. (Recalled that RaTG13 is a bat sarbecovirus that is 96% identical to SARS-CoV-2). Some of these coronavirus species had likely infected the miners, although no comparative studies have confirmed this. Such events are not at all new as bat-to-human spillovers of viruses had occurred before. MMP hypothesis part I: Human passage Recalled that in the surveillance study in Mojiang Mine, Zhengli Shi’s team discovered the bat coronavirus RaTG13, which is 96% identical to SARS-CoV-2. Although RaTG13 is the closest relative of SARS-CoV-2, a 4% genomic difference is still too vast, which requires 20–50 years of evolution. Hence, an intermediate host is likely involved, as was the case of SARS and MERS, because coronavirus evolution rate speeds up in a different species. Contrary to this conventional view, the MMP (Mojiang Miners Passage) hypothesis states that RaTG13 may have evolved into SARS-CoV-2 in the miners in the Mojiang cave. Coronaviruses usually infect the upper respiratory tract. Miners work in conditions of poor air quality, which compromises respiratory health. Thus, RaTG13 bat coronavirus might have made its way into the miners’ lower respiratory tract where the lungs reside. Lungs are a large organ, and since the miners’ pneumonia was severe enough to require prolonged hospitalization, the virus load must have been enormous. “Evolutionary change is in large part a function of the population size,” Dr. Latham and Prof. Wilson explained. “The lungs of the miners, we suggest, supported a very high viral load leading to proportionately rapid viral evolution.” (The term ‘passage’ refers to a standard technique to ‘culture’ viruses in a new set of cells. As viruses can only replicate using another cell’s machinery, the passaging of viruses is required for research purposes. By this analogy, the RaTG13 bat coronavirus was passaged in humans in the Mojiang Mine.) As mentioned above, the miners’ symptoms closely resemble that of Covid-19. “Anyone presenting with them today would immediately be assumed to have COVID-19,” Dr. Latham and Prof. Wilson remarked. And the corresponding treatments administered to the miners — steroids, antivirals, blood thinners, and ventilation — are precisely the same for Covid-19. 
Therefore, patient zero of the pandemic might have been one of the miners. Why did the miners not spread the disease to others? The novel coronavirus is most contagious during the early phase of the disease, probably one to two days before symptom onset. The miners were only taken to the hospital when their pneumonia had become severe. And mask-wearing was probably widely practiced in hospital settings. Therefore, the coronavirus at that time might not have had much of a chance to spread. MMP hypothesis part II: The escape Recall also that samples (blood and thymus tissues) from the miners were sent to the Wuhan Institute of Virology for research purposes. As their labs were under construction at the time of sample collection, virologists may have begun experiments in 2017 or 2018, Dr. Latham and Prof. Wilson said. Then the virus may have leaked from the lab by accident. "The more likely explanation that the virus jumped from an animal to a human, and the more remote possibility that it was a natural virus released in a lab accident, which still hasn't been ruled out [by the US intelligence community]." It may be an outrageous thing to state, but unintentional lab leaks of microbes have happened many times around the world. According to USA Today, over 1100 lab accidents involving the escape of bacteria, viruses, or toxins capable of harming agriculture or humans were reported to federal regulators between 2008 and 2012. Even a 2009 paper in the New England Journal of Medicine admitted that the 1977 re-emergence of the H1N1 swine flu — which had disappeared from the human population in 1957 — was "probably an accidental release from a laboratory source." Moreover, SARS has escaped from labs six times — once in Singapore, once in Taiwan, and four times in Beijing. So it would not be surprising if a coronavirus had leaked from the Wuhan Institute of Virology as well. "Accidents happen on a regular basis. We have seen a few cases of high-profile labs in recent years where accidents happened or mistakes were made," Dr. Filippa Lentzos, a biosecurity expert at King's College London, stated. "For instance, in 2014 at the CDC there were safety lapses involving Ebola virus, anthrax and bird flu, and there have been lapses at the NIH [National Institutes of Health] involving variola virus which causes smallpox." Not to mention that the Wuhan Institute of Virology received two official warnings from American embassy officials in 2018 concerning inadequate laboratory safety measures. A Chinese national team also found that the Wuhan lab did not meet national standards in five categories. There were also reports of accidents in which lab workers were wounded by bat attacks or exposed to bat urine, according to VOA News. Still, this narrative is not meant to blame research. There is no doubt that science is the cornerstone of human civilization. It is just that sometimes unfortunate accidents happen. MMP hypothesis in a nutshell "We suggest, first, that inside the miners RaTG13 (or a very similar virus) evolved into SARS-CoV-2, an unusually pathogenic coronavirus highly adapted to humans," Dr. Latham and Prof. Wilson said. "Second, that the [Zhengli] Shi lab used medical samples taken from the miners and sent to them by Kunming University Hospital for their research. 
It was this human-adapted virus, now known as SARS-CoV-2, that escaped from the WIV [Wuhan Institute of Virology] in 2019." "The closest known relative to SARS-CoV-2 is a virus sampled by Chinese researchers from six miners infected while working in a bat-infested cave in southern China in 2012. These miners developed symptoms we now associate with Covid-19. These viral samples were then taken to the Wuhan Institute of Virology…," agreed Jamie Frederic Metzl, an author and senior fellow at the Atlantic Council. "If the virus jumped to humans through a series of human-animal encounters in the wild or wet markets, as Beijing has claimed, we would likely have seen evidence of people being infected elsewhere in China before the Wuhan outbreak. We have not. The alternative explanation, a lab escape, is far more plausible." What the MMP hypothesis explains For one, SARS-CoV-2 binds to the human ACE2 receptor with remarkable efficiency. "Such exceptional affinities, ten to twenty times as great as that of the original SARS virus, do not arise at random, making it very hard to explain in any other way than for the virus to have been strongly selected in the presence of a human ACE2 receptor," Dr. Latham and Prof. Wilson noted, such as in the workers in the Mojiang Mine. And the bat sarbecovirus RaTG13 can indeed bind to the human ACE2 receptor. A study published in May in The Lancet Microbe has also shown that SARS-CoV-2 does not replicate efficiently in cultured (in a lab dish or plate) kidney and lung cells of bats — suggesting that SARS-CoV-2 probably evolved in a human host rather than in bats. "In short, the MMP theory is a plausible and parsimonious explanation of all the key features of the COVID-19 pandemic and its origin. It accounts for the propensity of SARS-CoV-2 infections to target the lungs; the apparent preadapted nature of the virus; and its transmission from bats in Yunnan to humans in Wuhan." Second, viruses usually undergo rapid evolution when they replicate in a new host. For instance, when MERS and SARS first adapted to humans, phylogenetic analyses revealed numerous mutations and recombinations in the viruses' genomes. But such rapid evolution was not observed with SARS-CoV-2 genomes at the beginning of the pandemic. Yet SARS-CoV-2 has infected far more people than SARS and MERS did. "That is to say, its evolutionary leap to humans was completed before the 2019 pandemic began," stated Dr. Latham and Prof. Wilson. "It is hard to imagine an explanation for this high adaptiveness other than some kind of passaging in a human body." Third, the MMP hypothesis complements known facts about the origin of SARS-CoV-2 — that it evolved from a bat sarbecovirus called RaTG13 and did not come from pangolins, the Huanan wet market, or human design. A complementary theory to the MMP hypothesis Another convincing theory regarding the origin of SARS-CoV-2 concerns gain-of-function research. Laboratories around the world have intentionally made microbes adapt to different species — through serial passaging experiments — to study the possibility of epidemics. Hence, a possibility exists that SARS-CoV-2 was cultured (or passaged) in laboratory settings to improve its binding affinity to the ACE2 receptor. This 'gain-of-function lab escape' has been theorized by some research groups to explain the uncannily rapid adaptability of SARS-CoV-2 to humans. 
One is a peer-reviewed research paper in BioEssays in August, titled "Might SARS‐CoV‐2 Have Arisen via Serial Passage through an Animal Host or Cell Culture?" Another example is the commentary of Birger Sørensen, a Norwegian virologist specializing in HIV vaccine research, and his colleagues. This 'gain-of-function lab escape' theory does not contradict the fact that SARS-CoV-2 is not genetically manipulated. Serial passaging mimics natural evolution in an accelerated fashion in the lab, so it does not count as direct genetic engineering or a human-designed virus. It also does not contradict the MMP hypothesis, but rather complements it: RaTG13 or SARS-CoV-2 isolated from the Mojiang Mine may have undergone gain-of-function experiments in the Wuhan Institute of Virology before its accidental escape. Short abstract and closing The MMP hypothesis states that the bat sarbecovirus RaTG13 infected workers in the Mojiang Mine in 2012. This RaTG13 then underwent rapid evolution upon encountering a new host, evolving into SARS-CoV-2. These infected miners also had signs of pneumonia that were indistinguishable from Covid-19. The miners also underwent treatments used today for Covid-19. Samples from the miners were taken to the Wuhan Institute of Virology for research purposes. SARS-CoV-2 may have then leaked from the lab by accident. In between sample collection and viral escape, a possibility exists that gain-of-function experiments were done. The MMP hypothesis (maybe plus the gain-of-function theory) explains many facets of the pandemic — such as its ability to infect the lower respiratory tract (which is uncommon for coronaviruses), its unusual adaptability to humans within a short timeframe, and its mysterious zoonotic transfer from bats in Yunnan to people in Wuhan. This hypothesis also does not contradict the known facts that SARS-CoV-2 came from bats and not from pangolins, human design, or the Huanan wet market in Wuhan. Lastly, note that the MMP hypothesis is only a possibility and not proven yet. The WHO announced in August that it will soon conduct epidemiological investigations, with the help of China, into the early source of SARS-CoV-2 in Wuhan.
https://medium.com/microbial-instincts/the-latest-theory-that-may-answer-the-origin-of-covid-19-d9efbe7072ae
['Shin Jie Yong']
2020-10-16 09:13:43.909000+00:00
['Covid 19', 'Life', 'Technology', 'Science', 'Education']
Automation eating your industry? (Answer: Yes.) These are the skills that will always be valued in the workplace.
by Alison E. Berman ✍️ If you’d asked farmers a few hundred years ago what skills their kids would need to thrive, it wouldn’t have taken long to answer. They’d need to know how to milk a cow or plant a field. They needed general skills for a single profession that barely changed. This is how it’s been for most of human history. But in the last few centuries? Not so much. Each generation, and even within generations, we see some jobs largely disappear, while other ones pop up. Machines have automated much of manufacturing, for example, and they’ll automate even more soon. But as manufacturing jobs decline, they’ve been replaced by other once-unimaginable professions like bloggers, coders, dog walkers, or pro gamers. In a world where these labor cycles are accelerating, the question is: What skills do we teach the next generation so they can keep pace? More and more research shows that current curriculums, which teach siloed subject matter and specific vocational training, are not preparing students to succeed in the 21st century; a time of technological acceleration, market volatility, and uncertainty. To address this, some schools have started teaching coding and other skills relevant to the technologies of today. But technology is changing so quickly that these new skills may not be relevant by the time students enter the job market. In fact, in Cathy Davidson’s book, Now You See It, Davidson estimates that, “65 percent of children entering grade school this year (2011) will end up working in careers that haven’t even been invented yet.” Not only is it difficult to predict what careers will exist in the future, it is equally uncertain which technology-based skills will be viable 5 or 10 years from now. So, what do we teach? Finland recently shifted its national curriculum to a new model called the “phenomenon-based” approach. By 2020, the country will replace traditional classroom subjects with a topical approach highlighting the four Cs — communication, creativity, critical thinking, and collaboration. These four skills “are central to working in teams, and a reflection of the ‘hyperconnected’ world we live in today,” Singularity Hub Editor-in-Chief David Hill recently wrote. Hill notes the four Cs directly correspond to the skills needed to be a successful 21st century entrepreneur, when accelerating change means the jobs we’re educating for today may not exist tomorrow. Finland’s approach reflects an important transition away from the antiquated model used in most US institutions; a model created for a slower, more stable labor market and economy that simply no longer exists. In addition to the four Cs, successful entrepreneurs across the globe are demonstrating three additional soft skills that can be integrated into the classroom — adaptability, resiliency and grit, and a mindset of continuous learning. These skills can equip students to be problem-solvers, inventive thinkers, and adaptive to the fast-paced change they are bound to encounter. In a world of uncertainty, the only constant is the ability to adapt, pivot, and get back on your feet. Like Finland, the city of Buenos Aires is embracing change. Select high school curriculums in the city of Buenos Aires now require technological education in the first two years and entrepreneurship in the last three years. 
Esteban Bullrich, Buenos Aires' minister of education, told Singularity University in a recent interview: "I want kids to get out of school and be able to create whatever future they want to create — to be able to change the world with the capabilities they earn and receive through formal schooling." — Esteban Bullrich, Minister of Education, Buenos Aires The idea is to teach students to be adaptive and equip them with skills that will be highly transferable in whatever reality they may face once out of school. Embedding these entrepreneurial skills in education will enable future leaders to move smoothly with the pace of technology. In fact, Mariano Mayer, director of entrepreneurship for the city of Buenos Aires, believes these soft skills will be valued most highly in future labor markets. This message is consistent with research highlighted in a World Economic Forum and Boston Consulting Group report titled New Vision for Education: Unlocking the Potential of Technology. The report breaks out the core 21st-century skills into three key categories — foundational literacies, competencies, and character qualities — with lifelong learning as a proficiency encompassing these categories. From degree gathering to continuous learning This continuous learning approach, in contrast to degree-oriented education, represents an important shift that is desperately needed in education. It also reflects the demands of the labor market — where lifelong learning and skill development are what keep an individual competitive, agile, and valued. Singularity University CEO Rob Nail explains, "The current setup does not match the way the world has and will continue to evolve. You get your certificate or degree and then supposedly you're done. In the world that we're living in today, that doesn't work." Transitioning the focus of education from degree-oriented to continuous learning holds obvious benefits for students. This shift in focus, however, may be a difficult situation for academic institutions to adapt to as education as an industry becomes increasingly democratized and decentralized. Any large change requires that we overcome barriers—and in education, there are many — but one challenge, in particular, is fear of change. "The fear of change has made us fall behind in terms of advancement in innovation and human activities," Bullrich says. He goes on: "We are discussing upgrades to our car instead of building a spaceship. We need to build a spaceship, but we don't want to leave the car behind. Some changes appear large, but the truth is, it's still a car. It doesn't fly. That's why education policy is not flying." Education and learning are ready to be reinvented. It's time we get to work.
https://medium.com/singularityu/automation-eating-your-industry-22173e674d04
['Singularity University']
2017-03-29 01:49:39.584000+00:00
['Teaching', 'Entrepreneurship', 'Future Of Work', 'Future Of Learning', 'Education']
The Hidden Yet Incredibly Powerful Benefits Of Being Self-Aware And Empathetic
The Hidden Yet Incredibly Powerful Benefits Of Being Self-Aware And Empathetic As it turns out, facing myself is scary, but only at first. Then it's the best thing I could've ever done. Photo from Pexels.com People might describe me as driven or ambitious but, truth be told, I've never been a career person. My life has always been focused on relationships — romantic or not. I care deeply about the relationships in my life and they have played a huge role in my personal fulfillment and well-being. When I was younger, I thought this was a bad thing. I felt ashamed for being so sensitive and emotionally impacted by romantic relationships. While many of my peers went out and about to network and talk about their impressive CVs, all that I found a real interest in was anything concerning people. I was unknowingly honing those so-called soft skills that keep being pointed out as the most significant qualities yet never seem to have enough weight in the real world. Deep down, I'd always known what my core gifts are, but back then I didn't know they were gifts and what they were good for. For example, my empathy and self-awareness. I couldn't use these two qualities to directly make money. I couldn't really quantify them (or at least didn't know how) to show others that I could be of use to them and prove to myself I was good enough. For a long time, I didn't think very highly of myself. I even hated my own gifts. I was convinced that my empathy and sensitivity were my weaknesses which held me back in life. I wished I could be a detached, too-cool-to-care-about-love, career-driven woman. I developed an attraction towards busy, emotionless people who seemed successful on the outside but had little empathy and depth. Here are the quotes from my favourite self-help book called Deeper Dating by Ken Page that sum up my past dating experiences well: Deep inside we know that these Core Gifts are worthy, and we never stop longing to find someone who treasures them, but after getting the message that these gifts are risky or unlovable, we learn to hide and bury them. and If we deny or dishonor a Core Gift, we are likely to choose someone who also dishonors it — and then to be intensely vulnerable to any negative judgment they have about us. The author of this book and I went through similar journeys. He wrote: As I came to value my sensitivity (a journey that still continues), my life began to change in wonderful ways. I started building love, not just chasing it down with people who weren't particularly interested. I began to spend time with the precious people who honored me for who I was. I gradually stopped looking for tawdry sex and found myself meeting kind and available men more often. The more I embraced my authentic self, the more the quality of men I dated improved. I could relate to this completely. Eventually, like the author, I also came to realise that I could choose to embrace my core gifts and let authentic love come into my life. My whole world shifted. All along, I had asked myself what my self-awareness and empathy were for. I had thought that these qualities were only self-directed and could not make me valuable to anyone. If anything, they might have even caused me overwhelming stress and made letting go almost an impossible task. But now I know. Embarking on this journey of embracing my core gifts, I've gained a level of clarity that is both mature and transformative. It's true that my self-awareness and empathy are more self-directed than benefiting others. 
But it’s not entirely true. It works like this: My self-awareness and empathy help me find myself first, so I can selflessly help others. And the gift is that it has happened at a rather young age. I could imagine, without these qualities, I would end up wasting my time doing many jobs that don’t suit me, being in many relationships that don’t work for me, and regret it all later in life when the consequences become permanent. My self-awareness and empathy have allowed me to put many psychological issues in the past and, thanks to this, I have so much time and space ahead of me to enjoy myself peacefully and do good for others. The most amazing thing is that I have stopped feeling like I’m on a journey. There’s a deep sense of knowing in me that I’m here. Every day, I’m here, I’m at the right place, and I’m true. Being highly self-aware and empathetic has also led to having high emotional intelligence and good interpersonal skills, which serve my lifelong interest in relationships very well. Most notably, when my core gift is empathy, the right people for me are, luckily, not those who are detached, careless, insensitive, or obsessed with work. They’re those who are empathetic, understanding, and caring, with an emotional depth of an ocean. I’m grateful every day that they’re the people I’ve already had in my life — the people I’ve intuitively navigated towards amidst pain and confusion. Because of these people, there’s so much compassion, kindness, and lovingness in my world. It’s abundant and overflowing. It’s always been there and ready for me to come and bathe myself in it. The moment I chose to appreciate those qualities in myself, I unlocked an ability to appreciate them in others and recognise their presence and power in everything I do. As it turns out, facing myself is scary, but only at first. Then it’s the best thing I could’ve ever done.
https://medium.com/tinglymind/the-hidden-yet-incredibly-powerful-benefits-of-being-self-aware-and-empathetic-4c5d4c021386
['Ellen Nguyen']
2020-08-02 21:29:22.683000+00:00
['Self-awareness', 'Self Improvement', 'Life', 'Self', 'Empathy']
Paint the Wall With Pain
Find the place the scar lies and pick the scab. No brushes are needed. Find the loose thread on the stitches. Pull. Slowly. This may hurt. Or fast if you prefer. Like a bandaid. One quick motion. Do you feel better? No? I didn’t think so. Reach inside. Wipe your hands on the walls. Swirl into shapes. Flowers, trees, hearts. Swords to slay your dragons, your monsters, your own evil twin. Sometimes you have to lose yourself again and again before you can be found. See that bucket beside you? You’ve collected your tears. Throw it there. Against the wall. Wash it all away. Not enough? Cry. Rinse. Repeat.
https://medium.com/brave-inspired/paint-the-wall-with-pain-4cdaff433268
['Gretchen Lee Bourquin', 'Pom-Poet']
2019-10-16 13:48:05.646000+00:00
['Mental Health', 'Courage', 'Prose Poem', 'Poetry', 'Healing']
The Psychological Facts That People Don’t Know About Book Reading
The Psychological Facts That People Don't Know About Book Reading A study done at the University of Sussex found that reading can reduce stress by up to 68%. Photo by Matias North on Unsplash Good books can inform you, enlighten you, and lead you in the right direction. Once you start reading books, you experience a new world and start loving the habit of reading; eventually, you get addicted to it. I love reading because it visualizes a new life experience or teaches valuable life lessons. Every book holds some unique lesson; when we start exploring, we learn something different from each one. A 2009 study found that reading is a way to relax and reduce stress. It is important to read a good book for at least a few minutes every day to stretch the brain's muscles. A book is truly a best friend when we are bored, upset, depressed, or annoyed. A good book always guides us along a great path.
https://medium.com/the-innovation/the-psychological-facts-that-people-dont-know-about-book-reading-554d366665af
['Ashish Nishad']
2020-11-19 16:23:08.149000+00:00
['Anxiety', 'Personal Development', 'Reading', 'Depression', 'Books']
Lie to the world — don’t lie to yourself
Image by Kevinsphotos from Pixabay Telling a lie is forgivable, but living a lie is not. By constructing your world from lies, only destruction will come. Even the smallest lie, if left unacknowledged, can kill any form of love. Occasionally, I’ll ask people if they’re honest. Are you a liar? I’ll say. Pointless question, right? The dishonest would just lie about it. More revealing is how people think about their own lies — the specifications they give. How often? When? The details mean more. The reasons are way more interesting. The most dangerous liars aren’t the ones who lie all the time. The real shits are the ones who lie all the time AND claim to be terrible liars. Oh, I suck at lying. I’m incapable of telling anything but the truth. Be wary of anyone who says this. They’re the same people who say they don’t understand the point of telling a lie—that such a reprehensible quality is simply beyond them. Everyone knows the point of lying — it’s to distort reality. The real honest people — the truth seekers, the ones with integrity, the people who are self-aware — these people usually admit to it when it matters most. When the stakes are high, when the cost involves hurting another — when the effects actually mean something, the honest people come clean. They don’t double down. They want to purge themselves of this awful tendency. They would feel appalled by how much they’ve lied, dismayed to the point of guilt and disappointment. So, ultimately, I don’t fear the casual liar, because I am one. All writers are. It’s a very human thing. It’s a very real thing. Anyone who uses language can frame something in such a way that it no longer resembles the truth. The people I really fear are the ones who have so thoroughly convinced themselves of their own honesty that they lose all ability to even recognize the most obvious deceit within themselves. It is the self-deceiver who is most disconnected from reality and causes the most damage and pain to others.
https://medium.com/scuzzbucket/lie-to-the-world-dont-lie-to-yourself-ec29983d0d88
['Franco Amati']
2020-11-25 23:58:36.895000+00:00
['Prose', 'Scuzzbucket', 'Psychology', 'Lying', 'Poetry']
Creating and training a U-Net model with PyTorch for 2D & 3D semantic segmentation: Training [3/4]
In the previous chapters we created our dataset and built the U-Net model. Now it is time to start training. For that we will write our own training loop within a simple Trainer class and save it in trainer.py. The idea is that we can instantiate a Trainer object with parameters such as the model, a criterion etc. and then call its class method run_trainer() to start training. This method will output the accumulated training loss, the validation loss and the learning rate that was used for training. Here is the code: In order to create a trainer object the following parameters are required:
model: e.g. the U-Net
device: CPU or GPU
criterion: loss function (e.g. CrossEntropyLoss, DiceCoefficientLoss)
optimizer: e.g. SGD
training_DataLoader: a training dataloader
validation_DataLoader: a validation dataloader
lr_scheduler: a learning rate scheduler (optional)
epochs: The number of epochs we want to train
epoch: The epoch number from where training should start
Training can then be started with the class method run_trainer(). Since training is usually performed with a training and a validation phase, _train() and _validate() are two functions that are run once for every epoch we train with run_trainer() (line 33–53). If we have an lr_scheduler, we also perform a step with the lr_scheduler. To visualize the progress of training, I included a progress bar with the library tqdm — it just needs an iterable. Now let's take a closer look at what happens when calling _train() and _validate(). If you are familiar with using PyTorch for network training, there is probably nothing new here. In _train() we basically just iterate over our training dataloader and send our batches through the network in train mode (line 56–64). We then use this output together with our target to compute the loss with the loss function for the current batch (line 65). The computed loss is then appended to a temporary list (line 66–67). Based on the computed loss, we perform a backward pass to obtain the gradients and then a step with our optimizer to update the model's parameters (line 68–69). At the end we update our progress bar for the training phase to show the current loss (line 71). The function outputs the mean of the temporary loss list and the learning rate that was used. In _validate(), similar to _train(), we iterate over our validation dataloader, send our batches through the network in validation mode and compute the loss. This time, without computing the gradients and without performing a backward pass (line 78–97).
Start training with the Carvana dataset
Let's create our Carvana data generators once again, but this time run the code within a Jupyter notebook. The only thing we have to change to make it work as intended is line 3 in trainer.py from: from tqdm import tqdm, trange to: from tqdm.notebook import tqdm, trange. 
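Before running it in a notebook, here is a minimal sketch of what such a Trainer class could look like, assuming the parameters and behavior described above. It is a simplified stand-in rather than the author's exact trainer.py, so the line numbers cited above refer to the original file, not to this sketch:

import torch
from tqdm import tqdm, trange

class Trainer:
    """Bare-bones training loop; parameter names follow the description above."""
    def __init__(self, model, device, criterion, optimizer,
                 training_DataLoader, validation_DataLoader=None,
                 lr_scheduler=None, epochs=10, epoch=0):
        self.model, self.device = model, device
        self.criterion, self.optimizer = criterion, optimizer
        self.training_DataLoader = training_DataLoader
        self.validation_DataLoader = validation_DataLoader
        self.lr_scheduler = lr_scheduler
        self.epochs, self.epoch = epochs, epoch
        self.training_loss, self.validation_loss, self.learning_rate = [], [], []

    def run_trainer(self):
        for _ in trange(self.epochs, desc='Epoch'):
            self.epoch += 1
            self._train()
            if self.validation_DataLoader is not None:
                self._validate()
            if self.lr_scheduler is not None:
                self.lr_scheduler.step()          # optional LR schedule step per epoch
        return self.training_loss, self.validation_loss, self.learning_rate

    def _train(self):
        self.model.train()
        batch_losses = []
        for x, y in tqdm(self.training_DataLoader, desc='Training', leave=False):
            x, y = x.to(self.device), y.to(self.device)
            self.optimizer.zero_grad()
            loss = self.criterion(self.model(x), y)   # forward pass + loss
            loss.backward()                           # backward pass (gradients)
            self.optimizer.step()                     # parameter update
            batch_losses.append(loss.item())
        self.training_loss.append(sum(batch_losses) / len(batch_losses))
        self.learning_rate.append(self.optimizer.param_groups[0]['lr'])

    def _validate(self):
        self.model.eval()
        batch_losses = []
        with torch.no_grad():                         # no gradients in validation
            for x, y in tqdm(self.validation_DataLoader, desc='Validation', leave=False):
                x, y = x.to(self.device), y.to(self.device)
                loss = self.criterion(self.model(x), y)
                batch_losses.append(loss.item())
        self.validation_loss.append(sum(batch_losses) / len(batch_losses))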
# Imports
from utils import get_filenames_of_path
import pathlib
from transformations import Compose, AlbuSeg2d, DenseTarget
from transformations import MoveAxis, Normalize01, Resize
from sklearn.model_selection import train_test_split
from customdatasets import SegmentationDataSet
import torch
from unet import UNet
from train import Trainer
from torch.utils.data import DataLoader
import albumentations

# root directory
root = pathlib.Path('/Carvana')

# input and target files
inputs = get_filenames_of_path(root / 'Input')
targets = get_filenames_of_path(root / 'Target')

# training transformations and augmentations
transforms_training = Compose([
    Resize(input_size=(128, 128, 3), target_size=(128, 128)),
    AlbuSeg2d(albu=albumentations.HorizontalFlip(p=0.5)),
    DenseTarget(),
    MoveAxis(),
    Normalize01()
])

# validation transformations
transforms_validation = Compose([
    Resize(input_size=(128, 128, 3), target_size=(128, 128)),
    DenseTarget(),
    MoveAxis(),
    Normalize01()
])

# random seed
random_seed = 42

# split dataset into training set and validation set
train_size = 0.8  # 80:20 split
inputs_train, inputs_valid = train_test_split(
    inputs, random_state=random_seed, train_size=train_size, shuffle=True)
targets_train, targets_valid = train_test_split(
    targets, random_state=random_seed, train_size=train_size, shuffle=True)
# inputs_train, inputs_valid = inputs[:80], inputs[80:]
# targets_train, targets_valid = targets[:80], targets[80:]

# dataset training
dataset_train = SegmentationDataSet(inputs=inputs_train,
                                    targets=targets_train,
                                    transform=transforms_training)

# dataset validation
dataset_valid = SegmentationDataSet(inputs=inputs_valid,
                                    targets=targets_valid,
                                    transform=transforms_validation)

# dataloader training
dataloader_training = DataLoader(dataset=dataset_train, batch_size=2, shuffle=True)

# dataloader validation
dataloader_validation = DataLoader(dataset=dataset_valid, batch_size=2, shuffle=True)

Please note that I resize the images to 128x128x3 using Resize() to speed up training. This will generate batches of images that look like this:

from visual import Input_Target_Pair_Generator, show_input_target_pair_napari
gen = Input_Target_Pair_Generator(dataloader_training, rgb=True)
show_input_target_pair_napari(gen)

I can then instantiate the Trainer object and start training:

# device
if torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')

# model
model = UNet(in_channels=3,
             out_channels=2,
             n_blocks=4,
             start_filters=32,
             activation='relu',
             normalization='batch',
             conv_mode='same',
             dim=2).to(device)

# criterion
criterion = torch.nn.CrossEntropyLoss()

# optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# trainer
trainer = Trainer(model=model,
                  device=device,
                  criterion=criterion,
                  optimizer=optimizer,
                  training_DataLoader=dataloader_training,
                  validation_DataLoader=dataloader_validation,
                  lr_scheduler=None,
                  epochs=10,
                  epoch=0)

# start training
training_losses, validation_losses, learning_rates = trainer.run_trainer()

Training will look something like this:
Improve the data generator
Although training was performed on an NVIDIA 1070, it took 1:19 min to train 2 epochs with only 96 images (size 128x128x3) for each epoch. Why is that? The reason this is so painfully slow is that every time we generate a batch, we read the data in full resolution (1918x1280x3) and resize it. And we do this for every epoch! 
Therefore, it would make more sense to either store the data in a lower resolution and then to pick the data up, or better: store the data in cache and access it when it's needed. Or both. Let's slightly change our custom SegmentationDataSet class (I will create a new file and name it customdatasets2.py, but you can replace your customdatasets.py with this one): Here we added the argument use_cache and pre_transform. We basically just iterate over our input and target list and store the images in a list when we instantiate our dataset. When __getitem__ is called, an image-target pair from this list is returned. I added the pre_transform argument because I don't want to change the original files. Instead, I want the images to be picked up, resized and stored in memory. Again, I included a progress bar to visualize the caching. If you want to run it correctly in Jupyter, please import tqdm from tqdm.notebook in line 4 in customdatasets2.py. Let's try it out. The changes in code are the following:

# pre-transformations
pre_transforms = Compose([
    Resize(input_size=(128, 128, 3), target_size=(128, 128)),
])

# training transformations and augmentations
transforms_training = Compose([
    AlbuSeg2d(albu=albumentations.HorizontalFlip(p=0.5)),
    DenseTarget(),
    MoveAxis(),
    Normalize01()
])

# validation transformations
transforms_validation = Compose([
    DenseTarget(),
    MoveAxis(),
    Normalize01()
])

# random seed
random_seed = 42

# split dataset into training set and validation set
train_size = 0.8  # 80:20 split
inputs_train, inputs_valid = train_test_split(
    inputs, random_state=random_seed, train_size=train_size, shuffle=True)
targets_train, targets_valid = train_test_split(
    targets, random_state=random_seed, train_size=train_size, shuffle=True)

And it looks something like this: The first progress bar represents the training dataloader and the second the validation dataloader. Let's train again for 2 epochs and see how long it'll take. Training took about 2 seconds only! That's much better. But there is one part we can still improve. Creating the dataset that reads images and stores them in memory takes a bit of time. When you look at the code and the CPU usage, you'll notice that only one core is used! Let's change it in a way, so that all cores are used. Here I use the multiprocessing library. Before we perform training, let's also make a quick detour and talk about the learning rate.
Learning rate finder
The learning rate is one of the most important hyperparameters in neural network training. Choosing proper learning rates throughout the learning procedure is difficult as a small learning rate leads to slow convergence while a high learning rate can cause divergence. Also, frequent parameter updates with high variance in SGD can cause fluctuations, which makes finding the (local) minimum for SGD even more difficult. To identify an optimal learning rate, we can test different learning rates empirically with a learning rate range test. Inspired by the best practices I picked up from the fast.ai course, I recommend using a learning rate finder before starting the actual training. Sylvain Gugger from fast.ai wrote a really good summary about this problem. The code that I will show you is based on Tanjid Hasan Tonmoy's pytorch-lr-finder, which is an implementation of the learning rate range test from Leslie Smith. I only slightly modified the code and included a progressbar (yes, I like them). Let's perform such a learning rate range test. 
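The modified lr_finder code itself is not reproduced in this text, so as a rough, self-contained sketch of the underlying idea (exponentially increasing the learning rate each step while recording the loss), something like the following would do; the function name and defaults here are illustrative, not the author's API:

import copy
import torch

def lr_range_test(model, criterion, optimizer, dataloader, device,
                  start_lr=1e-7, end_lr=1.0, num_steps=100):
    """Exponentially increase the LR each step and record the loss."""
    model_state = copy.deepcopy(model.state_dict())      # restore afterwards
    optim_state = copy.deepcopy(optimizer.state_dict())
    gamma = (end_lr / start_lr) ** (1 / num_steps)        # multiplicative LR step
    lr, lrs, losses = start_lr, [], []
    data_iter = iter(dataloader)
    model.train()
    for _ in range(num_steps):
        try:
            x, y = next(data_iter)
        except StopIteration:                             # small dataset: loop again
            data_iter = iter(dataloader)
            x, y = next(data_iter)
        for group in optimizer.param_groups:
            group['lr'] = lr
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        lrs.append(lr)
        losses.append(loss.item())
        lr *= gamma
    model.load_state_dict(model_state)                    # undo the test updates
    optimizer.load_state_dict(optim_state)
    return lrs, losses

Plotting losses against lrs on a logarithmic x-axis then shows where the loss decreases fastest, which is roughly where a sensible learning rate lies.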
Since our dataset is rather small (96 images), we'll perform some extra steps (1000). The upper progressbar displays the number of epochs and the lower progressbar shows the number of steps we perform on the current epoch. Let's plot the results of the test: 0.01 seems to be a good learning rate. We'll take it. Let's train for 100 epochs, and visualize the training and validation loss. For that I will use matplotlib and write a function that I can add to the visual.py file. Let's see what the function plot_training() will output when we pass in our losses and the learning rate. Training looks good!
Why you shouldn't write your own training loop
Training can be carried out very easily in plain PyTorch. But you should actually do it only if you want to learn PyTorch. It is generally recommended to use higher-level APIs, such as Lightning, Fast.ai or Skorch. But why? This is well explained in this article. You can probably imagine that if you want to integrate functionalities and features, such as logging, metrics, early stopping, mixed precision training and many more, into your training loop, you'll end up doing exactly what others have done already. However, chances are that your code won't be as good and stable as theirs (hello spaghetti code) and you'll spend too much time on integrating and debugging these things rather than focusing on your deep learning project. And although learning a new API can take some time, it might help you a lot in the long run. If you want to implement your own features anyway, take a look at this article, and use callbacks. For this series, I'll try to keep it very simple and without the use of out-of-the-box solutions. I will only implement a very basic data processing pipeline/training loop with good ol' PyTorch.
Summary
In this part, we performed training with a sample of the Carvana dataset by creating a simple training loop. This training loop can be visualized with a progressbar and the result of training can be plotted with matplotlib. We noticed that training was painfully slow because our data was picked up very slowly by the data generator. Because of that, we changed it in a way so that data is only read once and then picked up from memory when needed. Additionally, we added a learning rate range finder to determine an optimal learning rate, which we then used for model training. In the next chapter, we'll let the model predict the segmentation maps of unseen image data.
https://johschmidt42.medium.com/creating-and-training-a-u-net-model-with-pytorch-for-2d-3d-semantic-segmentation-training-3-4-8242d31de234
['Johannes Schmidt']
2020-12-12 15:03:56.737000+00:00
['Python', 'Deep Learning', 'Semantic Segmentation', 'Pytorch', 'Tutorial']
Does the Moon Move with You? No, That’s a Parallax.
SCIENCE Does the Moon Move with You? No, That's a Parallax. Parallax or motion parallax is the phenomenon in which an object's apparent position varies with respect to another object or the background when viewed from different positions. Photo by Benjamin Voros on Unsplash For example, if one moves to the right relative to the viewing direction (for example, while sitting in a vehicle looking to the left of the direction of travel), the direction in which one sees an object in the foreground will turn to the left faster than the direction in which an object in the background is seen. Distant objects thus seem to move partially with the observer's movement relative to nearby objects. The brain translates this into relative depth. This is known as depth perception. Photography. An object to be photographed appears slightly different through the viewfinder of a simple viewfinder camera than through the recording lens. The lens and viewfinder are not in the same position or directly behind each other; they are a few centimeters apart. With a photo of an object that is a few meters away from the lens, the parallax does not cause any major problems because the lens and viewfinder then 'see' almost the same thing. However, the closer an object is to the camera, the greater the difference between the viewfinder image and the actual picture being taken; that difference is the parallax. With a twin-lens reflex camera, such as those made by Rollei, Yashica, and Mamiya, the parallax can be canceled by the parallax converter, which is located between the camera and the tripod. After focusing through the top lens, the camera is raised by the converter so that the recording lens sits where the viewfinder lens was previously. However, this only works with stationary objects. There are also cameras such as the Mamiya C 330 that show a parallax correction in the viewfinder so that the above is not an issue. A single-lens reflex camera fundamentally solves the parallax problem. Because the viewfinder shows exactly what the lens sees, there are no differences between the viewfinder crop and the actual image. What can occur is that the final image contains slightly more than was visible in the viewfinder. This is because most SLR cameras have a viewfinder coverage of about 90%. Only (semi-)pro cameras reach 100%. With this type of camera, the image is transferred by the recording lens via a mirror and a prism directly into the viewfinder. The mirror will pop up momentarily during the actual recording. This creates a different kind of parallax, time parallax. You lose your subject for a while, which does not happen with a rangefinder camera. There are also digital cameras where the viewfinder image is electronically transferred from the image sensor to a monitor (LCD). These cameras are also parallax-free. Another parallax problem occurs when creating a panorama by stitching individual photos together: there are often areas where the images cannot be fitted together properly. Measuring instruments. When reading analog measuring instruments, errors may occur due to parallax. Two types of parallax can be distinguished here: the parallax between the measuring instrument and the object, and the parallax in the measuring instrument itself. Digital readouts are parallax-free because the result does not depend on the viewing angle. Parallax between measuring instrument and object. Parallax can occur between the scale of the measuring instrument and the object being measured. A well-known example is a ruler that is placed on an object. 
If the scale is on top of the ruler, there is a distance between the scale and the object being measured. An observer who moves his head slightly to the left or right sees the effect of parallax: the scale appears to move relative to the object. Even with a transparent ruler with the scale on the bottom, parallax can occur due to light refraction. If the observer cannot read straight from above, the scale is perceived as slightly shifted. The part of the object directly below the ruler is shifted by the same amount, but the part outside it is not. Reading through the ruler gives the best result: the marking line and the point to be measured have the same deviation so that the parallax is, in fact, compensated. The photo shows that the parallax compensation is best close to the edge; further from the edge, parallax increases because the ruler's top is slightly inclined. Parallax in the measuring instrument itself. Parallax can also occur in the measuring instrument itself when reading a pointer against a scale's background. The simplest example is reading the time on an analog clock that is observed obliquely. A commonly used method to combat this measurement error is the so-called mirror reading: just below or above the scale, there is a groove approximately 5 mm wide in the dial, with a mirror behind it. When looking perpendicularly at the dial, the mirror image and the pointer coincide, and the reading is correct. Another method, especially used in cheaper instruments, is the knife pointer, where the pointer's tip lies in a vertical plane. When the tip of the pointer appears as thin as possible, the line of sight is perpendicular from above. This method is much less accurate than that with the mirror scale. The essence of these corrections is to improve the reproducibility of the reading angle. Even if the mirror or pointer is misaligned, the result is the same as it was when the instrument was calibrated. Artillery. When using artillery in a combat situation, the fire control system must consider the position differences of the individual guns. Although this is not directly related to observation (usually geographic coordinates are the focus), this problem's geometric explanation is the same as for parallax. Parallax is the basis for measurement methods. The relative displacement, or shift, is recorded photographically in photogrammetry when taking aerial photographs from two different positions. In a stereoscope, the overlapping sets of photos can be viewed in three dimensions. With a parallax meter equipped with a micrometer, the parallax difference of each point can be determined. The parallax shift is a measure of the height difference. For example, height differences of buildings and objects can be determined, or terrain height maps can be made. In terrestrial photogrammetry, differences in distances are determined in photographs taken from the ground. In astronomy, parallax is used to measure distances. The extent to which a star apparently shifts when observed from two opposite points of the Earth's orbit (i.e., taking two measurements at the same place on Earth, but with a six-month difference in time) is used as a measure to determine the distance. The parsec (a contraction of parallax seconds) is a unit of distance used in astronomy, defined as the distance at which an object must lie to be seen from Earth with a parallax of 1 arcsecond.
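As a brief worked example of this standard relationship (a well-known formula, not specific to this article), written in LaTeX notation:

d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]}, \qquad 1\ \mathrm{pc} \approx 3.26\ \text{light-years} \approx 3.086 \times 10^{16}\ \mathrm{m}

For Proxima Centauri, the measured annual parallax is roughly p \approx 0.77'', so d \approx 1/0.77 \approx 1.3 pc, or about 4.2 light-years.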
https://medium.com/carre4/does-the-moon-move-with-you-no-thats-a-parallax-c1368123146c
['Bryan Dijkhuizen']
2020-11-22 11:08:48.874000+00:00
['World', 'Science', 'Parallax', 'Math']
Web Notifications Experiments for the June 7 U.S. Primary
Web Notifications Experiments for the June 7 U.S. Primary A quick word on our most recent experiment. Tonight (Tuesday, June 7) we’ll be sending out three sets of experimental notifications about the last significant primaries of the 2016 presidential election. We’ll send individual state results as they come in, insights from our reporters in the field at candidates’ events and, tomorrow morning, a recap of the ten most important highlights of the night. If you’re already receiving notifications: You’ve probably landed on this page because you tapped on an alert. For future alerts, pull down on the notification to expand it to see all of the text. If you’d like to sign up: Web notifications are currently only available on Chrome, so if you have an Android mobile phone (Samsung, included!), sign up here. We’ll also be gathering feedback from those who participate and will send out a follow-up survey after the last alert on Wednesday. And when we have data and a good sense of what was most engaging to users, we’ll write up what we learned and post it here on Medium. Questions or observations? Drop us a note: [email protected] The Guardian Mobile Innovation Lab operates with the generous support of the John S. and James L. Knight Foundation.
https://medium.com/the-guardian-mobile-innovation-lab/web-notifications-experiments-for-the-june-7-u-s-primary-8b966aea2e0b
['Sasha Koren']
2016-06-07 22:54:18.958000+00:00
['Mobile', 'Notifications', 'Journalism']
HURON RIVER HAIKUS
Tobias Freeman HURON RIVER HAIKUS Two Damselflies slow green river snaps concentric rings bullet out one damsel flies free. Yellow Leaf slow green river grabs yellow leaf in eddy mouth summer spins then sinks. Leaving slow green river hears long necks trumpeting, we fly less a few cygnets. Unmapped slow green river chills below terrapins burrow and aquifers invaded. Cold Memory frozen green river records ice-storm limb fallen cruel change cut in ice. Snow Melt churned green river bends sunshine skips across ripples without frogs humming. Ollie-Ollie-Oxen-Free! hometown river was pooh-sticks, rock steps, soakers — gone childhood friends and days. Droughty cracked green river mud trunks rise tangled, still dead. no sliders sunning. Flash Flood high green river fast watershed drinks runoff — xenocide byway. Dioxane our green river sang once upon a time through towns — swim advisory. PolyFluoroAlkylS green river lazy worms on hooks, favorite holes — fish advisory. Superfund in parts per billion poisoned river dies — not a molecule floats free.
https://medium.com/resistance-poetry/huron-river-haikus-d993f2a8c6b2
['Lisa Patrell']
2020-06-29 20:59:58.669000+00:00
['Swimming', 'Water', 'Environment', 'Resistance Poetry', 'Pollution']
About Me — Yemeece. Beauty, Grace, Power, Complexity…
Writing took me on a journey inwards into me. I began to explore the parts of me I never knew existed. I fell in love with reading, and the books I read showed me that whatever I was feeling had been felt before, and that stripped me of my loneliness. I gulped up many books like a thirsty stranger who just survived a desert storm and had only found an oasis. “Whatever l was feeling had been felt before by someone else and that stripped me of my loneliness.” Unlike anything else in my life, including myself, writing was gentle with me. It sharpened my reasoning and helped me to ask the right questions. It began to strip my mind of toxic and limiting mindsets. I began to heal and know the freedom I find hard to describe in words. Gradually, I began to develop the courage to share my poems with the world. In 2019, I published my first collection of poems titled Stirred by life. This book has been an oasis for travelers like myself journeying through the desert of life. My name is Yemeece. I am a writer and poet. I write for people who need a balm for their pain. I write to destroy toxic mindsets and harmful beliefs. I write for the underdogs like myself who need a silver streak in the grey sky to keep believing. I write for the afraid, the marginalized, the curious, for individuals who seek a healthy mind and a hearty soul. I write to you. I write for me.
https://medium.com/about-me-stories/about-me-yemeece-28287dcc64bd
['Yemisi Fadiya']
2020-12-08 11:02:05.539000+00:00
['Writing', 'Introduction', 'Female', 'Life', 'About Me']
Building Android Apps with Mirah and Pindah
I gave a talk earlier today at the Beijing Ruby group about my experiences building native Android applications with Mirah and Pindah. Although I generally prefer building mobile web apps over native development when possible, Mirah is a really promising alternative to Java if you have to go native. Check out the preso below and get involved in the growing Mirah / Android community. Make sure to grab the code for the example app on the GitHubs.
https://medium.com/zerosum-dot-org/building-android-apps-with-mirah-and-pindah-72420197d686
['Nick Plante']
2017-11-04 17:27:18.439000+00:00
['Android', 'Android App Development', 'Ruby', 'Mirah', 'Mobile App Development']
How to Buy a Quality Business on Flippa
I heard recently from the head of Flippa that my business partners and I had collectively bought and sold the most businesses by gross value. This was a surprise, as that's not our day job, and we've done fewer than a dozen transactions. Our actual job is building SmartrMail. Flippa is the biggest marketplace for buying and selling online businesses. We've used Flippa to help start businesses, grow our existing businesses, and to generate free cash flow as 'side hustles' while we were employed. We've done well financially from it: everything we've sold has been for at least 3x what we paid. It's also taught us some important lessons on running and growing different types of internet businesses. I've had friends often ask 'how do I buy a website?'. So, for them and anyone else interested, here's my guide on how we buy websites on Flippa: 1. Decide What Type of Business To Look For First off, think about the type of business you might want to run. We've bought a range of businesses, from Design Blogs to Digital Template Stores, to Marketing Tools. Some are going to require more hands-on work to improve, grow and maintain. An eCommerce store will take more day to day running than a blog. But none of them are 'passive'. All require effort and work to improve and maintain. My personal view on web businesses is if they're not growing they're dying, so you will need to always be actively growing your business. Our personal preference is older, established websites that have an audience, but haven't been given much attention from their owners. So they may have a steady (but declining) stream of organic traffic, or have an established customer purchase history, but haven't been updated for a long time. We're happy to buy old, ugly websites, as long as they're on an easy to improve platform. In our experience, sites on platforms like WordPress, BigCommerce, Shopify, and Magento are best as they can quickly be improved. In eCommerce we tend to like digital goods over real goods as there's no stock to handle. We avoid drop-shipping as it seems too hard to get good margins. Established blogs are great if you like and understand the topic. I'll go into more detail around why we like blogs, and a few ways we've quickly monetized blogs to 10x earnings, in another post. It's up to you what type of business you choose. We look at hundreds to find quality and value, and only buy businesses we can understand or confidently learn about quickly. 2. Search for Businesses at least 2 years old Set Website Age to 2+ Years. To quickly weed out 90% of the junk and scam businesses, limit your search to businesses that are a minimum of two years old. Does this mean you might overlook a good, young business? Certainly. But it saves you from having to sift through the bulk of the silt to find a speck of gold. Most scammers or quick flippers don't have the patience to set up and run a scam website for more than a year. You can set Web Site Age in advanced search. 3. Understand Valuation Multiples You're buying a business, so you should understand the basics of how business is valued, which is Annual Earnings Multiples. If you're buying shares of a public company on a stock exchange, you might pay 20x Annual Earnings for those shares. That is, that company's annual earnings are 1/20 of the value of the company. Public companies are considered a 'safe' investment as there's a lot of information about them, their books are audited by a trusted 3rd party, and there's high liquidity as you can easily sell your shares. 
Buying on Flippa is the complete opposite. There's an incredibly high risk of getting ripped off when buying a business from an online marketplace, so the value should be much lower than 20x earnings. When looking at a business's earnings, make sure you use the last 12 months of data to calculate 'annual earnings'. Often people will look at the last 3 months and multiply that by 4. That's a mistake. It's human nature to optimise for earnings when you're selling a business, so the last 3 months earnings are likely jacked up (by any means). Also, just assume some costs have been stripped out to boost earnings, especially any owner's time. 4. Buy Value We look to buy companies on a Price to Earnings (P/E) of 1x. That is, if you held the company for a year, with no change in current earnings, it would pay back the value of the purchase price. We've bought businesses on valuations between 0.8x-3x earnings before. Mostly we've bought at around 1x-1.5x. Sometimes we'll pay up to 3x on a business that has obvious ways to grow earnings quickly, or that can provide one of our other businesses with immediate earnings growth. Mostly we'll stick to paying 1x — 1.5x earnings. The risk of buying something worth less than it seems is so high that we'd prefer not to go over this. It's better to miss out on something good that isn't cheap than lose your money. 5. Set Your Budget and Earnings Range So let's say you've set an earnings range of 1x-2x P/E. You can work out what Monthly Profit range to set on search by dividing your total budget to buy a business by 24 and 12 (1x-2x). For example, assume you've got $2,500 to spend on a business. The Monthly Profit slider could be set between $50–250 Profit per month. If you pay $2,500 for a business earning $250 per month, that's a P/E of 0.8x (great!). Paying $2,500 for a business earning $50 per month is 4.2x, which is expensive. Ideally you'd pay up to $1,200 max for a $50 per month business. 6. Search for a Business You Can Run Ok we understand how to value a business, so it's time to browse away. Look for anything that you understand or think you could learn about. If you're buying a blog or ecommerce site, it's pretty easy to understand what you're getting. For example, buying a blog about something you're already interested in means you can hit the ground running with new content, and new marketing, as you understand the topic, and the audience. Neil Patel wrote a comprehensive guide to building up your blog audience. Read it in a day and that's enough to grow your blog. As you build your audience you can add different streams of revenue outside of just Ads, which I'll go into another time. The beauty of buying an established business, as opposed to starting one, is you're getting many years of hard work for not a lot of money. If you stick to looking at businesses built on top of one of the established platforms mentioned above, you can easily find people on Upwork to help with technical and design updates. 7. Understand the Technology Risks If you're buying a more complicated tech business, you need to understand the technology risk. For example, if you're looking at a SaaS, you need to research the market, and the technology it's built with. Understand the cost to continue development and support. Same goes for native mobile apps on iOS and Android. If you're a developer and you find a business in an area you understand, then you're at a distinct advantage in assessing it. 
If you're not technically minded, then I'd suggest either partnering with someone who is, or avoiding businesses that rely heavily on technical innovation. 8. Be Patient and Thorough There are thousands of listings, but most of the businesses on there are junk. The rest are scams. Seriously. Only a handful are quality, so deep research and patience are needed. Expect to be looking at new listings for at least a couple of weeks. I'd suggest creating an account, and saving your particular search. That way you'll get sent a daily email digest of new listings matching your search criteria. This is the most important step: the patience to look at hundreds of businesses and say no quickly, until you find one that meets all the above criteria. It's something Warren Buffett and Charlie Munger talk about a lot: doing nothing for a long time until they find something excellent. 9. Be Ready to Move Quickly When you find a business you like, that's priced within the P/E range you're comfortable with, message the seller. Ask: To add you to their Google Analytics (you'll need to give your Gmail address) so you can verify their traffic sources. What's their reserve? What would they do to grow the business? Check out the seller's selling history: has the site been sold previously, and if so, what were the comments on it? Google the seller. Go to the website for sale, search around it, sign up, dig in and use it. Go to their social channels, check what people are saying about the business. If it's selling something, buy something to see what it's like. If everything checks out, next set up a time for a Skype/Hangouts call with the seller. Ask the same questions, reasons for selling, and make them screenshare and log in to their payment gateway (PayPal, Stripe etc) and show you the actual earnings on the call. Do the same with the website's admin panel, and email inbox. In-depth research here is the most crucial step to avoid getting your face ripped off. Expect to do deep research like this 3–10 times before you find and successfully buy a business. If you start asking questions straight away, you'll be prepared in case a business suddenly gets a Buy It Now price added. For businesses that have a high reserve price, just 'watch' them. It likely won't sell, and if it does be thankful it wasn't you that overpaid. Often first-time sellers have unrealistic price expectations, and may relist a business a few times before it comes into a reasonable sale range. 10. Use Escrow to Buy If you're buying on Flippa, just use their Escrow services. Give yourself at least a 7-day escrow period. This means the business, email and social accounts will be transferred over to you on day 1, and you've got 7 days to check it's legit before the funds are released to the seller. Use the time to check that the traffic (and sales, if there are any) is as it should be. Woohoo, you're now a business owner! Next step, growing your business…
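To make the arithmetic in step 5 concrete, here is a tiny Python sketch of the divide-by-24-and-12 calculation. The functions and numbers are purely illustrative and are not part of any Flippa tooling.

def monthly_profit_range(budget, min_pe=1.0, max_pe=2.0):
    """Return the (low, high) monthly profit band to search for."""
    low = budget / (max_pe * 12)   # least profitable business you'd consider at 2x
    high = budget / (min_pe * 12)  # most profitable you can afford at 1x
    return low, high

def price_to_earnings(price, monthly_profit):
    """P/E multiple paid for a business earning monthly_profit per month."""
    return price / (monthly_profit * 12)

if __name__ == "__main__":
    print(monthly_profit_range(2500))                 # roughly ($104, $208) per month
    print(round(price_to_earnings(2500, 250), 2))      # 0.83x, a bargain
    print(round(price_to_earnings(2500, 50), 2))       # 4.17x, too expensive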
https://medium.com/on-startups-and-such/how-to-buy-a-quality-business-on-flippa-4fa9701a069c
['George H.']
2020-11-05 05:49:32.950000+00:00
['Growth Hacking', 'Growth', 'Startup', 'Flippa', 'Hustle']
Wake Up at 4.… AND Get Enough Sleep?
To this day, I don't know how I made it through high school. Homeroom bell rang at an ungodly 7:13 a.m., and I lived in a rural part of the county. If I rode the bus to school, that meant a full hourlong ride from the bus stop (which was a 5-minute sprint from my house) to the school doors. So, I awoke at… 6:04 a.m.? Probably. Once I was able to drive — and my mom was nice enough to find me a crappy car — I was able to make the drive in about 23 minutes (though it really should have taken me 34). I don't know how I survived that... But I also don't know how I managed to get out of bed that early for that long — while I was a teenager. And teenagers, let's face it, all suck at getting out of bed. All of them. You know they do. I remember my mom RAGING at me for what felt like hours. WHY wouldn't I get out of bed? Didn't I know I was just making things worse? The longer I waited, the more of a hurry I would be in, the more dangerous the drive. The more she had to yell, the more miserable I was making her… and everyone in the house. I remember finally getting up and turning on the shower… then falling asleep on the toilet for 20 more minutes. More railing, weeping and wailing and gnashing of teeth. You're wasting the water! You're wasting the heat! Do you not have any respect for how hard I have to work to pay those bills?! Get ready and go to school you lazy $@#&! Ok, my mom was nicer than that… but not much. (And we all know she was thinking it.) I somehow made it through, and off to college. College was a dream! I could schedule all my classes to suit MY schedule (and my body clock). (Which might have meant it took a few extra years, since most classes do not start after 12 noon… but I digress.) I was able to do it how I wanted, and for several years I found myself very happily drifting to bed around 1 or 2 in the morning, and sleeping till about 10 or 11am. That felt perfectly natural for me. And, in the intervening life cycles where I've been able to pick my own schedule — including working at bars/restaurants after college, working in theatre during, and a couple of unemployment stints in the years since — that schedule is largely the one my body naturally gravitates toward, and has for a couple of decades now. That's right: decades.
https://medium.com/swlh/wake-up-at-4-and-get-enough-sleep-36f6fb404e12
['Heather Nowlin']
2020-07-03 20:22:46.951000+00:00
['Morning Routines', 'Sleep', 'Happiness', 'Life', 'Productivity']
¿Hola?
Digital product designer that also happens to do a bunch of other stuff on the side. Jack of all trades.
https://medium.com/postales-de-la-tormenta/hola-a58ba4624069
['Nicolás J. Engler']
2019-06-05 13:31:00.926000+00:00
['Short Story', 'Fiction', 'Creative Writing', 'Nonfiction']
React’s Context API Explained
Usage of Context API MovieContext.js Let’s start by renaming our Movie.js file to MovieContext.js . This file will be responsible for passing data to other components. Here we will start by importing createContext from the react module. Thus, alter your import statement as follows: Imports in MovieContext.js This createContext method will help us create a Context instance, which will aid in sending data to various other components. Next, let’s export our Context instance like so: Exporting the MyContext context instance Now, we will export a function called MovieContext and then define it like so: Further code in MovieContext.js Lines 2–16 : Standard definition of our movies state In our return block, write the following code (where code goes here is written on line 19 ): Further code in MovieContext.js These lines of code indicate that we are now fully capable of sharing data between components without passing down props manually. The value attribute in the MyContext.Provider tag in line 1 is the data that we will share to various components. props.children on line 2 means that the components that will be rendered between the MovieContext tags will have access to the data located in MovieContext . If this is unclear, don’t worry. It will be explained through code later in this post. MovieList.js If you recall, this component was used to display the movies array. Let’s modify this file. First, import MyContext from MovieContext : Imports in MovieList.js Other than that, import useContext like so: Further imports in MovieList.js Within the MovieList function definition, start by writing the following line of code: Using the MyContext object This line basically declares a context hook called NewContext . This useContext function takes in an argument that asks for what context object it should use. As we want to use the MyContext instance, we pass in MyContext as the argument. One question though: What is the value of the MyContext variable? We’ll find out its value shortly. Before that, we’ll have to change our code further. Now, write the following code after the NewContext declaration: Further code in MovieList In this code, we are simply outputting the value of NewContext and then using the export statement so that the MovieList function can be used in other files. In the end, our file will look like this: App.js First, add the following imports to App.js : Imports in App.js Now, in the return block, write the following code: Further code in App.js This code indicates that now MovieList has access to the data that was shared by MovieContext . In the end, App.js looks like this: Run the code. This will be the output: The output of the code So where did this string come from? Let’s backtrack to MovieContext.js and find the following piece of code: In MovieContext.js : Code to find in MovieContext.js This means that the data we write in the value attribute will be shared with the components.
https://medium.com/better-programming/reacts-context-api-explained-baebcee39d2f
['Hussain Arif']
2020-08-04 07:56:12.767000+00:00
['JavaScript', 'React', 'Reactjs', 'Nodejs', 'Programming']
Reasons to not install Hadoop on Windows
A few years ago, I kept hearing from my colleagues, "don't ever think about installing Hadoop on the Windows operating system!". I was not convinced by this saying because I am a big fan of Microsoft products, especially Windows. In the past few years, I worked on several projects where we were asked to build a Big Data ecosystem using Hadoop and related technologies on Ubuntu. It was not so easy to work with these technologies, especially since there is a lack of online resources. Last month, I was asked to build a Big Data ecosystem on Windows. Three technologies had to be installed: Hadoop, Hive, and Pig. At the end of the project, the only conclusion I have is: "Think 1000 times before installing Hadoop and related technologies on Windows!". In this article, I will briefly describe the main reasons for this conclusion. Are these technologies developed to run on Windows? The first releases of Hadoop were demonstrated and tested on GNU\Linux, while they were not tested under the Win32 operating system. For Hadoop 2.x and newer releases, Windows support was added, and a step-by-step guide was provided within the official documentation. This should be OK if you are only going to install Hadoop. But when it comes to other related technologies such as Apache Hive, not all releases are supported (for Apache Hive, only 2.x releases support Windows). You will need to do some hacks and workarounds to install these technologies, such as using the Cygwin utility to execute GNU/Linux shell commands or copying cmd scripts from other releases. Besides, some services may not work correctly. As an example, we installed Apache Hive and Apache Pig. We tried to connect to those technologies through the WebHCat REST API or using the Microsoft Hive ODBC driver. The connection was made successfully, but we couldn't execute any command since even simple commands were throwing timeout exceptions. In brief, Windows is not as stable or as well supported as Linux. Other Reasons Cost One other reason is the licensing cost. Linux is a free and open-source operating system. It will be costly if you need to deploy a multi-node Hadoop cluster on Windows machines. Lack of resources In general, Big Data technologies don't have many online resources, and most of those that exist are related to Linux, so you may struggle with even a small issue in a Windows environment. Even if you ask for support on an online community like Stack Overflow, most experts work with cloud-based Hadoop clusters or a Linux on-premise installation. What to do if you are using Windows? If you are using Windows, and need to use Hadoop and related technologies, you may: Use Linux virtual machines to install Hadoop (note that your machine must have sufficient resources). Use a cloud-based Hadoop service such as a Microsoft Azure Hadoop cluster. If you cannot go with either of those suggestions and you have to install Hadoop, you need first to search for the supported releases of all required technologies (Hadoop, Hive, Spark …). Then, you must choose the compatible ones. As an example, only Hive 2.x releases support Windows, while 3.x needs some hacks, and not all its features may work properly. So, if you need to install Apache Hive, you have to use Hadoop 2.x releases since newer releases are not compatible with Hive 2.x releases.
https://medium.com/munchy-bytes/reasons-to-not-install-hadoop-on-windows-5bf22f3f0005
['Hadi Fadlallah']
2020-08-27 10:35:08.877000+00:00
['Hadoop', 'Windows', 'Big Data']
How to Create an Evening Routine for a Productive Tomorrow
For some people, it seems so easy to follow a routine from the moment they wake up. They jump out of bed, journal, meditate and drink immune-boosting tea infused with fennel and goji berries. Then they go about their day being awesome. You might want to begin your day with as little to think about as possible. You might have a family and children to organize. You might just not be a morning person. That’s okay. To optimize your day, one of the most valuable things you can do is set up an evening routine for the night before. Your mornings may be gloriously unstructured, but you might thrive on having a process in the evenings to springboard into tomorrow from. A routine should be like breathing. You shouldn’t have to constantly think about it or struggle to remember which steps to do and what order to do them in. It should mold seamlessly to your natural rhythm, making it easier to sustain the habit over time. Creating a successful routine to optimize your day tomorrow can be broken down into three steps. These are reflection, preparation, and relaxation. If you naturally have more energy in the evening, you might want to plan your day in detail before bed. If you're someone whose brain switches off after 6 pm, you might just want to take a shower and then jump into bed with a hot drink and a good book. Building a routine that works for you is all about finding your natural rhythm. These three principles will help you build an effortless evening routine that will set you up for an amazing day tomorrow.
https://medium.com/swlh/how-to-create-an-evening-routine-for-a-productive-tomorrow-b02f9af10197
['Ruth Matthews']
2020-12-21 12:32:10.989000+00:00
['Happiness', 'Personal Development', 'Self Improvement', 'Self', 'Productivity']
Do People Still Talk On The Phone?
A Tiny Moment of "Picking Up Where You Left Off" Photo by Philipp Lansing on Unsplash The Moment I just got off the phone with a friend I had not spoken to in nine years. No major fight or disagreement explaining the time gap. I just hadn't tried to call. Before our phone call, I was checking out his LinkedIn and it alerted him to my presence. Minutes later a strange number appeared on my phone and went to voicemail. I usually erase these calls. Damn spammers! For some reason, I decided to listen to the message. Lo and behold, it was one of my former chef buddies checking up on me. The Reflection My first thought was nobody calls anymore and I texted him to that effect. I followed up with a phone call, and boom, it was on! We checked in to review how we were getting along. Addition of children, career changes, and other life stuff. During our fifteen-minute conversation, we talked for maybe two minutes about the pandemic and other US craziness. I am going to cliché to you now. Yes, it was like we had just talked the other day, but that's the connection you have with some folks. At the end of our conversation, I felt energized and positive about my current life. I know, right? How so in a catastrophic time? Let me offer a different outlook. Like Doc Holliday said to Wyatt Earp in the film Tombstone, "there's no normal life, just life." There's no crazy life right now, just life. Don't stop living it. The Takeaway Don't let Covid or a splitting country stop you from being you. Keep connecting with others. They want to connect too. Every day, every moment doesn't have to be shrouded in nihilism. Give out a yell, scream an ahhhhhh shit! Things stink! Then go do human stuff the best you can. Don't let technology steal your human connection. You are not a text or an Instagram or a Youtube or an email. Josh Kiev is an actor and chef, and he decided not to be negative today!
https://medium.com/tiny-life-moments/do-people-still-talk-on-the-phone-9d4c74fd61e8
['Josh Kiev']
2020-11-21 09:50:17.638000+00:00
['Positive Thinking', 'Connection', 'Motivation', 'Friendship', 'Tiny Life Moments']
How Concept Drift Ruins Your Model Performance
The world is inherently dynamic and nonstationary — that is, constantly changing. It is inevitable that the performance of many machine learning models will decline over time. This is particularly relevant for models related to human behavior. Contemporary machine learning models do not generalize well to new environments without explicit training examples. Model performance may start to degrade as data and the “ground truth” change over time. These are critical weaknesses that must be accounted for in machine learning systems. But what are the characteristics of such changes, which result in low-quality responses from models? How can you monitor and maintain your models so that the downstream user always sees quality outputs? It is essential to anticipate how changes in data from dynamic real world environments will affect your models and how to handle these changes. This article will first provide a theoretical foundation for understanding how concept drift can appear and behave and then discuss approaches for addressing drift. Understanding how data and models can drift is essential for designing a model monitor and response plan. What is Concept Drift? The image below provides a simple illustration of a “concept”. The 2-dimensional data points are mapped to either the color red or green, according to some unknown process G. The “concept” is the true mapping G→{green, red} of data points to color. The light grey line shows the learned mapping F that distinguishes between red and green data points that approximates G. A concept — the distinction between red and green data points. The grey line represents the learned concept distinguishing between red and green. Simply stated, concept drift occurs when there is a change in G, the underlying function that generates the data you observe. Of course, we cannot observe G directly. Instead, we observe G indirectly by sampling from the data generated by G. This definition assumes that we receive data points over time that describe some phenomenon that we want to model, such as toilet paper consumption. In the case of toilet paper demand, the underlying data generating function is the quantity of toilet paper consumed (and hoarded). Actual flushing (G) and hoarding of toilet paper cannot be observed directly of course. Instead, we can only “sample” from actual consumption by observing the quantity purchased through various supply channels. We then train a model, F, of toilet paper consumption using observed purchase data. COVID-19 induced changes in toilet paper consumption and purchase patterns — aka drift in G. What Does Concept Drift Look Like? Drift can appear as either virtual drift (no immediate impact on model performance) or real drift (impact on model performance). Note that the distinction is relative to a model. Real Concept Drift Real concept drift is a change in the mechanism that generates your data, such that your model’s performance decreases. As shown by the illustration below, the concept (distinction between red and green data points) has rotated and changed shape. A model that learned the concept during Regime A is now obsolete under Regime B and will have poor performance. Absent updates to the model after real concept drift, the model will no longer correctly describe the full target concept space. Real concept drift from Regime A to Regime B. The function learned during Regime A misclassifies some data points observed in Regime B. 
In terms of toilet paper, the pre-COVID consumption pattern is “Regime A” and the post-COVID consumption pattern is “Regime B”. A model that proficiently forecasts TP demand in 2019 before the pandemic would perform poorly in 2020 during the pandemic. Vice versa, a model that has been updated to forecast TP demand during COVID-induced economic lockdowns, where people seldom leave home, would likely perform poorly on historical data where people often leave home. Virtual Concept Drift Concept drift in G does not necessarily affect the accuracy of your model F. In virtual concept drift, the distribution of observed data has changed, but the previously learned mapping F still correctly applies to the new data generating process G’. The chart below illustrates virtual drift from Regime A to Regime B. Although the distribution of the data in each regime is different, the model F still assigns the data point color correctly per the light grey line. Thus, your machine learning model only becomes obsolete under real concept drift. Virtual concept drift from Regime A to Regime B. The function F (grey line dividing red and green points) trained on data from Regime A is still valid in Regime B. In the real world, your model performance will not indicate that virtual drift has occurred because your model is still performing well! Indeed, even if you are monitoring the distribution of the data itself, you may require a “large” number of observations to ascertain the presence of virtual drift. (Monitoring the distribution of your data directly is usually impractical if it has more than a couple dimensions). Is virtual drift a problem? It depends. If your objective is to ensure that your model continues to meet given performance metrics, then your model is sufficient in the presence of virtual drift. Virtual drift presents a hidden risk that the model may treat certain outlier observations from the new distribution incorrectly. Hidden risk of virtual drift: What are the correct classifications of the grey data points in Regime B? Will the model (grey line) continue to produce accurate predictions for these points? Drift Severity The severity of drift can vary widely. Most concept drift occurs as intersected drift, where part of the input space has the same target class in both the old and new concepts. In the extreme, severe drift occurs when all examples are misclassified under the new target concept. Misclassifications can be due to new classes in the target concept and changes in the class definition. Intersected drift occurred between Regime A and Regime B as only some data point classifications changed. Severe drift occurred between Regime A and Regime C, where all the classification of all data points changed. If you can detect and measure drift, you could specify a threshold to distinguish between “major” and “minor” drift. Drift magnitude for a binary classification task could be measured as the percentage of observations whose class has changed. If the drift magnitude is less than the threshold, the model simply requires updating with new data. If the drift magnitude is greater than the threshold, the drift is “severe” and the model ought to be abandoned in favor of a freshly trained model. Severe drift is typically rare and the presence of severe drift is often apparent without the use of sophisticated drift detection. Drift Velocity and Stability Abrupt drift Concept drift with a short drift duration is known as abrupt drift. 
Abrupt drift occurs when the data generating function suddenly stops generating data with concept G and suddenly generates data according to concept G’. The abrupt drift paradigm assumes that concept drift occurs over discrete periods of time, bounded by stable periods without drift. As an example of a sequence of abrupt drifts, the medical journal Lancet reported that China changed the case definition for COVID-19 seven times between Jan 15 and March 3, 2020. Each change in definition caused a change in how the cases were counted; thus each change caused an abrupt drift in the concept of daily COVID case counts in China. Incremental Drift and Stability Incremental drift implies a long drift duration, also known as continuous drift. In this case, the change is a steady progression from concept G to concept G’. The speed, or duration, of concept drift is the number of time steps for a new concept to completely replace an old concept. In the case of classification, at each subsequent timestep, fewer data points are classified according to the old concept G and more data points are classified according to the new concept G’. You could think of each of these time steps between G and G’ as distinct, intermediate concepts. An example of incremental drift would be how consumer behavior gradually changes as COVID-19 economic lockdowns are lifted in an area — people may be hesitant to return to “normal” and only return slowly. Each small change in behavior is an intermediate concept between G (behavior during lockdown) and G’ (“normal” behavior after restriction is lifted). It is worth noting that the progression from behavior in G (lockdown) to G’ (“normal”) is nonlinear as people idiosyncratically alter their behavior in response to exogenous factors like changes in case numbers. A concept is unstable between concepts G and G’, the period between concepts because it will change again. This instability could be observed as greater noise in the data and in your model metrics. Further, some concepts are inherently unstable and chaotic, never arriving at a stable concept, such as market price movements. The chart below illustrates incremental drift that progresses over a period of 100 time steps. The functions v1(t) and v2(t) model the probability that an example from the old and new concepts, respectively, will be presented at time t. The speed of drift is the slope of v2(t). Illustration of incremental drift. (Source: Minku, White, and Yao 2010) How do you detect and address drift? This discussion, of course, would be incomplete without solutions to detecting and correcting for drift in models and data. Many refer to the subject of drift detection as model monitoring. The basic approach to addressing concept drift is to monitor your model to detect drift, retrain the model, and deploy the new model version. This basic approach works well for regimes where you expect concept drift to abruptly shift from one stable concept to a new stable concept. It can also be acceptable in cases with small and/or incremental drift if there is some tolerance for variation in model performance. For less stable regimes or situations where continuous drift is expected, incremental updates of your model with new observations may be more appropriate. In the case of online model updates, it is still important to monitor for model drift. There are three basic approaches to monitoring for concept drift. 
Monitor model performance Monitoring model performance is straightforward in principle: if the model performance declines below some expected level, reevaluate the model. To implement this, you need to make several decisions that will impact the sensitivity and frequency of your drift detection. 1. How many new predictions do you use in calculating model performance? 2. Which performance metric(s) do you evaluate and which threshold(s) do you apply? 3. How often will you monitor for drift? 4. How error-tolerant is the user of your model? This will help inform your answers to (1)-(3). 5. How do you respond to drift? Manual evaluation and retraining of your model? Automatic updates? Monitor statistical measures of confidence in model predictions Another approach is to monitor the distribution of model prediction or residual values, or the confidence in those values. It is far easier to monitor distributional changes in the value(s) produced by your model than in the potentially high-dimensional input data. The specific formulation of the statistical monitor depends on the speed and quantity of predictions to monitor. In addition to the questions in the previous section, some relevant decisions to make include: Does the Kolmogorov–Smirnov (KS) test indicate that the distribution of your prediction or residual values has changed? Some minimum number of examples is required in order to accurately compare two distributions with the KS test. Does a given prediction / residual fall within a given confidence interval of the distribution of predictions / residuals observed during training? For other adaptive test statistics for drift detection in the academic literature, refer to Dries and Ruckert (2009) in the References section. Online updates One option is to train your model online — that is, automatically update your model weights with new observations on a periodic basis. The periodicity of updates could be daily, weekly, or each time you receive new data. This solution is ideal if you anticipate incremental concept drift or an unstable concept. This option is not foolproof because there is still some risk that the model drifts away from the true target in spite of the online updates. This could occur for a number of reasons. An outlier could have an outsized influence on the model in online training, pulling the learned model further away from the target concept. This risk can be mitigated by conducting online updates with batches of observations, rather than with single data points. The learning rate could be too small, preventing the model from updating quickly enough in the presence of large drift. The learning rate could also be too large, causing the model to overshoot the target concept and to continue to perform poorly. For these reasons, it is still important to monitor models that are updated online. Other approaches Various algorithms have been proposed in the academic literature to detect concept drift. Work in drift detection generally aims to efficiently identify the true points of concept drift with accuracy while also minimizing the drift detection time. A review of these proposals is outside the scope of this article.
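To make the KS-based monitoring described above concrete, here is a minimal Python sketch. It assumes you keep a reference sample of predictions (or residuals) from training time and compare it against a recent window of live values with scipy; the window sizes, the synthetic data and the alpha threshold are purely illustrative.

import numpy as np
from scipy.stats import ks_2samp

def ks_drift_check(reference_values, recent_values, alpha=0.01):
    """Return (drifted, statistic, p_value) comparing two samples with the two-sample KS test."""
    statistic, p_value = ks_2samp(reference_values, recent_values)
    return p_value < alpha, statistic, p_value

# Example with synthetic model scores: the recent window has shifted upward.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.40, scale=0.10, size=2000)  # scores logged at training time
recent = rng.normal(loc=0.55, scale=0.10, size=500)      # scores from the last monitoring window

drifted, stat, p = ks_drift_check(reference, recent)
print(f"drift={drifted}, KS statistic={stat:.3f}, p-value={p:.2e}")

In a real monitor this check would run on a schedule (for example daily), and a positive result would trigger a closer look at the model rather than an automatic rollback.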
https://towardsdatascience.com/concept-drift-can-ruin-your-model-performance-and-how-to-address-it-dff08f97e29b
['Alexandra Amidon']
2020-07-11 18:30:44.886000+00:00
['Modeling', 'Regime Change', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
How to Easily Fetch Binance Historical Trades Using Python
Coding Time Parsing the arguments The script will use the following arguments: symbol : The symbol of the trading pair, defined by Binance. It can be queried here, or it may be copied from the URL of the Binance web app, excluding the _ character. Remove the ‘_’ from the last part of the URL and you get the symbol starting_date and ending_date : Self-explanatory. The expected format is mm/dd/yyyy , or, in Python slang, %m/%d/%Y . To get the arguments, we’ll use the built-in sys (nothing too fancy around here), and to parse the date, we will be using the datetime library. symbol = sys.argv[1] starting_date = datetime.strptime(sys.argv[2], '%m/%d/%Y') ending_date = datetime.strptime(sys.argv[3], '%m/%d/%Y') + timedelta(days=1) - timedelta(microseconds=1) We are adding one day and subtracting one microsecond so that the ending_date time portion is always at 23:59:59.999 , making it more practical to get same-day intervals. Fetching trades With Binance’s API and using the aggTrades endpoint, we can get at most 1,000 trades in one request, and if we use start and end parameters, they can be at most one hour apart. After some failures, by fetching using time intervals (at some point or another, the liquidity would go crazy and I would lose some precious trades), I decided to try the from_id strategy. The aggTrades endpoint is chosen because it returns the compressed trades. In that way, we won’t lose any precious information. Get compressed, aggregate trades. Trades that fill at the same time, from the same order, with the same price will have the quantity aggregated. The from_id strategy goes like this: We are going to get the first trade of the starting_date by sending date intervals to the endpoint. After that, we will fetch 1,000 trades, starting with the first fetched trade ID. Then, we will check if the last trade happened after our ending_date . If so, we have gone through all the time period and we can save the results to file. Otherwise, we will update our from_id variable to get the last trade ID and start the loop all over again. Ugh, enough talking, let’s code. Fetching the first trade ID First, we create a new_end_date . That’s because we are using the aggTrades by passing a startTime and an endTime parameter. For now, we only need to know the first trade ID of the period, so we are adding 60 seconds to the period. In low liquidity pairs, this parameter can be changed because there is no guarantee that a trade occurred in the first minute of the day that’s been requested. Then, parse the date using our helper function to convert it to a Unix millisecond representation by using the calendar.timegm function. The timegm function is preferred because it keeps the date in UTC. def get_unix_ms_from_date(date): return int(calendar.timegm(date.timetuple()) * 1000 + date.microsecond/1000) The request’s response is a list of trade objects sorted by date, with the following format: So, as we need the first trade ID, we will be returning the response[0]["a"] value. Main loop Now that we have the first trade ID, we can fetch trades 1,000 at a time, until we reach our ending_date . The following code will be called inside our main loop. It will perform our request using the from_id parameter, ditching the startDate and endDate parameters. And now, here’s our main loop, which will perform the requests and create our DataFrame . 
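The request helper and the main loop were shown as images in the original post, so the sketch below is a rough reconstruction rather than the author's exact code. It assumes Binance's public aggTrades REST endpoint plus the requests and pandas libraries, and it reuses the get_unix_ms_from_date helper defined above.

import time
import requests
import pandas as pd

AGG_TRADES_URL = 'https://api.binance.com/api/v3/aggTrades'

def get_trades(symbol, from_id):
    # Fetch up to 1,000 aggregated trades starting from a given aggregate trade ID.
    r = requests.get(AGG_TRADES_URL, params={'symbol': symbol, 'fromId': from_id, 'limit': 1000})
    r.raise_for_status()
    return r.json()

def fetch_all_trades(symbol, from_id, to_date):
    df = pd.DataFrame()
    current_time = 0
    while current_time < get_unix_ms_from_date(to_date):
        trades = get_trades(symbol, from_id)
        from_id = trades[-1]['a']        # last aggregate trade ID in this batch
        current_time = trades[-1]['T']   # timestamp of the last trade fetched
        print(f'fetched {len(trades)} trades, up to timestamp {current_time}')
        df = pd.concat([df, pd.DataFrame(trades)])
        time.sleep(0.5)                  # be gentle so Binance does not return HTTP 429
    return df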
We loop while the current_time (which holds the timestamp of the latest trade fetched) is still earlier than our to_date, and on each pass we: fetch the trades using the from_id parameter; update the from_id and current_time variables, both with information from the latest trade fetched; print a nice debug message; pd.concat the trades fetched with the previous trades in our DataFrame; and sleep a little so that Binance won't give us an ugly 429 HTTP response. After assembling our DataFrame, we need to perform some simple data cleaning. We will remove the duplicates and trim the trades that happened after our to_date (we have that problem because we're fetching in chunks of 1,000 trades, so it's expected that we get some trades executed after our target end date). We can encapsulate our trim functionality: def trim(df, to_date): return df[df['T'] <= get_unix_ms_from_date(to_date)] And perform our data cleaning: df.drop_duplicates(subset='a', inplace=True) df = trim(df, to_date) Now, we can save it to file using the to_csv method: filename = f'binance__{symbol}__trades__from__{sys.argv[2].replace("/", "_")}__to__{sys.argv[3].replace("/", "_")}.csv' df.to_csv(filename) We can also use other data storage mechanisms, such as Arctic.
https://medium.com/better-programming/how-to-easily-fetch-your-binance-historical-trades-using-python-174a6569cebd
['Thiago Candido']
2020-05-07 14:22:50.551000+00:00
['Blockchain', 'Programming', 'Python', 'Cryptocurrency', 'Crypto']
Evaluation Metrics Part 2
Let us discuss, in brief, the other metrics in this picture. Prevalence Prevalence is the fraction of the total population that is labeled positive. Negative Predictive Value Negative Predictive Value or NPV is the proportion of negatively predicted samples which are originally labeled negative. Positive and Negative Predictive Value can again be expressed in terms of prevalence, specificity and sensitivity (the formulas are collected at the end of this post). False Discovery Rate FDR FDR is the proportion of positively predicted samples which are originally labeled negative. In other words, it is the proportion of false positives out of all the positively predicted samples. False Omission Rate FOR FOR is the proportion of negatively predicted samples which are originally labeled positive. In other words, it is the proportion of false negatives out of all the negatively predicted samples. False Positive Rate FPR, Fall-out FPR is the proportion of negatively labeled samples which are incorrectly predicted positive. False Negative Rate FNR FNR is the proportion of positively labeled samples which are incorrectly predicted negative. Positive Likelihood Ratio LR+ LR+ is the ratio of the probability of a sample being predicted positive given that the sample is originally labeled positive to the probability of the sample being predicted positive given that the sample is originally labeled negative. In a real-life scenario, LR+ denotes the probability of a person who has a disease testing positive divided by the probability of a person who does not have the disease testing positive. The higher the value of LR+, the more likely a positive test result is a true positive. On the other hand, LR+ < 1 indicates that a positive test result is likely to be a false positive. Negative Likelihood Ratio LR- LR- is the ratio of the probability of a sample being predicted negative given that the sample is originally labeled positive to the probability of the sample being predicted negative given that the sample is originally labeled negative. In a real-life scenario, LR- denotes the probability of a person who has a disease testing negative divided by the probability of a person who does not have the disease testing negative. Diagnostic Odds Ratio DOR DOR is the measure of the effectiveness of a diagnostic test (or model), and is defined as the ratio of the odds of the test (prediction) being positive if the sample (subject) is originally positively labeled relative to the odds of the test (prediction) being positive if the sample (subject) is originally negatively labeled.
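The formula images from the original post are not reproduced here. Written out in terms of the confusion-matrix counts (TP, FP, TN, FN), sensitivity (TPR), specificity (TNR) and prevalence p, the standard definitions are:

\[
\text{Prevalence} = \frac{TP + FN}{TP + FP + TN + FN}, \qquad \text{NPV} = \frac{TN}{TN + FN}
\]
\[
\text{PPV} = \frac{TPR \cdot p}{TPR \cdot p + (1 - TNR)(1 - p)}, \qquad \text{NPV} = \frac{TNR\,(1 - p)}{(1 - TPR)\,p + TNR\,(1 - p)}
\]
\[
\text{FDR} = \frac{FP}{FP + TP} = 1 - \text{PPV}, \qquad \text{FOR} = \frac{FN}{FN + TN} = 1 - \text{NPV}
\]
\[
\text{FPR} = \frac{FP}{FP + TN} = 1 - TNR, \qquad \text{FNR} = \frac{FN}{FN + TP} = 1 - TPR
\]
\[
LR^{+} = \frac{TPR}{FPR}, \qquad LR^{-} = \frac{FNR}{TNR}, \qquad DOR = \frac{LR^{+}}{LR^{-}} = \frac{TP \cdot TN}{FP \cdot FN}
\]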
https://medium.com/the-owl/evaluation-metrics-part-2-756e380cd7f3
['Siladittya Manna']
2020-06-26 02:49:32.214000+00:00
['Deep Learning', 'Metrics', 'Python', 'Data Science', 'Machine Learning']
NLP visualizations for clear, immediate insights into text data and outputs
NLP visualizations for clear, immediate insights into text data and outputs Using Plotly Express and Dash to explore data and present outputs in natural language processing (NLP) projects. Samples of NLP visualizations Extracting information from text remains a difficult, yet important challenge in the era of big data. Whether it comes to customer feedback, social media posts, or the news, the sheer volume of data to be analyzed can overwhelm information to be extracted. This is where modern natural language processing (NLP) tools come in. They can capture prevailing moods about a particular topic or product (sentiment analysis), identify key topics from texts (summarization/classification), or amazingly even answer context-dependent questions (like Siri or Google Assistant). Their development has provided access to consistent, powerful, and scalable text analysis tools for individuals and organizations. Still, aspects unique to languages can make it difficult to explore data for NLP or communicate result outputs. For instance, metrics that are applicable in the numerical domain may not be available for NLP. (E.g. what would be a mean, or a standard deviation of a set of word tokens?) Even if they could be calculated, presenting the data to audiences can be challenging. Data visualization can help with this, of course, but it can be time-consuming to learn a particular package. Building a web dashboard can be even more challenging—often requiring languages unfamiliar to NLP practitioners such as CSS, HTML, and JavaScript. So, in this article, we wanted to share with you ways that Plotly Express and Dash can ease some of this pain. Plotly Express and Dash were designed with code readability and succinctness as priorities, to enable easy creation of high-quality local (Plotly Express) and web dashboard (Dash) visualizations. In other words, they aim to have data visualization support your work, not have it become a new headache. With that said, let’s get into it! We use a consumer complaints database corpus for this example, but the concepts and visualizations we discuss should be universally applicable. The code is available on this GitHub repository, and a deployed version of the app. Please feel free to follow along with this article, clone it, and make improvements! (All analysis and notes here are for demonstration purposes only.) Local visualizations Data exploration Our dataset contains over 18,000 rows and three columns. While this isn’t large by modern standards, it’s not really possible to ‘eyeball’ this raw data. Let’s explore this dataset with Plotly Express, starting with the distribution of complaint counts by their date (to see trend over time): Histogram of complaint counts by date Now we’ll plot a histogram for the 20 companies with the most complaints: Histogram of complaint counts by target company Or by narrative length: Histogram of complaint counts by narrative length You may have noticed the succinctness of our code. Analysis by multiple variables, or changing to a log scale is also a cinch — just pass additional parameters as shown below: Histogram of complaint counts by date (x-axis) and company (color) Even better, these Plotly charts integrate seamlessly into Dash for dashboard generation as you will see later. Now that we have looked at the distributions, let’s move on to review the text data in substance, starting with n-grams. 
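For reference, each of the charts above is essentially a one-liner in Plotly Express. In this sketch the CSV path and column names ('Date received', 'Company') are assumptions about the complaints dataset rather than the exact ones used in the app.

import pandas as pd
import plotly.express as px

df = pd.read_csv('complaints.csv')  # hypothetical path to the complaints data
fig = px.histogram(df, x='Date received', color='Company', log_y=True)
fig.show()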
Visualizing n-grams N-grams are simply sequences of tokens (words), and have many practical applications as well as being a great exploratory method. As single words can only tell us so much, let’s move straight to plotting counts of top bigrams. Counts of top bigrams Isn’t that neat? Most of these bigrams appear to indicate sensible groups of complaint types, and the counts show the volume of each group (credit report and credit card related complaints appear to be most common). To drill down further into this data, a hierarchical visualization, such as a treemap, could be used. This example below divides the data by company and then whether the phrase ‘credit report’ is included. Box sizes indicate group sizing, and color indicates average narrative length. Treemap showing the total share of complaints, portion mentioning credit reports, and average lengths Notice that the visualization immediately reveals length-related patterns. Credit report related complaints tend to be longer, and a couple of companies’ complaints also stand out generally. In some cases, you may wish to compare proportions of complaint bigrams for each company, in which case a stacked bar might be useful: Stacked bar chart showing complaint proportion by bigram Companies with higher volumes of credit card complaints pop out to the eye, as does one with a high student loan-related complaint. For a closer review, we may even compare two companies directly, as done here for top 50 bigrams: Bigram comparisons for two companies This enables an easy comparison of two datasets by subject matter. Qualitative comparisons While we don’t have time to get into the technical weeds, very broadly speaking, word embeddings (dense embeddings to be precise) enable qualitative comparisons of words. They can represent words, and, by extension, concepts or documents as high dimensional vectors, which also provide opportunities for interesting visualizations. Take a look at this simple representation of bigrams using a bubble chart: Displaying bigram concepts in a bubble chart Here, high-dimensional bigrams are represented as two-dimensional representations using a dimensionality reduction technique called t-SNE. Similar charts could be produced for any subset to compare text similarities and insights — say, for each company, or by length. This might be a good opportunity to highlight that each of these charts were created in just a few lines of code using Plotly Express. Not only that, although you see static screenshots here, Plotly will generate interactive charts in your browser or notebook. Crucially, they can easily be incorporated into a live dashboard with Dash. NLP dashboards made easy with Dash The value proposition of Dash is similar to, and intertwined with, those that made Python the leading language for NLP. It has a low learning curve, readable yet succinct code, a thriving community of users, as well as useful libraries and modules that can be leveraged to create dashboards. Significantly for data scientists who are not also web developers, Dash abstracts many elements of web development to Python, allowing you and your team to remain in the Pythonic state of mind if desired. Take a look at this Dash example for a navigation bar — notice that the HTML/DOM elements all created from within Python. This is the web app that the snippet was taken from. Demo Dash web app (link) Dash provides Python interfaces to web-based components, while being declarative and reactive. 
Together, it enables easy creation of flexible, informative front ends that are accessible for everyone to interact with, whether for data exploration or presentations. As foreshadowed above, incorporating one of these Plotly Express charts into Dash is straightforward. For example — the word embedding bubble chart can be implemented in Dash like this: As implemented, the user can select a parameter (perplexity) as a dropdown item, which initiates the callback function and updates the graph reactively — changing the 2-dimensional representation of the vectors. Below is a comparison of the bubble charts, at two different perplexity values. Dash app t-SNE graphs at different parameters This two-company bigram comparison is also incorporated in the Dash application as shown below. Dash app — N-gram comparison component More importantly, we only needed around 30 lines of code to add each Plotly Express chart to the Dash app, including interactivity and formatting, all without ever leaving Python. We think that this will ultimately improve productivity and efficacy for data scientists such as yourself. Obviously, this is just a quick skim of what is possible in NLP visualization, but we hope to have shown you the kind of simplicity and ease of use that we believe makes Dash and Plotly powerful tools for NLP practitioners. We invite you to explore the app and the code yourself, and create your own visualizations and dashboards and applications.
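To give a feel for that wiring, here is a minimal Dash sketch in the spirit of the perplexity dropdown described above, assuming a Dash 2.x install. The component IDs, file names and callback body are placeholders for illustration, not the actual app code.

import dash
from dash import dcc, html
from dash.dependencies import Input, Output
import pandas as pd
import plotly.express as px

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(id='perplexity',
                 options=[{'label': p, 'value': p} for p in (5, 15, 30)],
                 value=15),
    dcc.Graph(id='embedding-chart'),
])

@app.callback(Output('embedding-chart', 'figure'), [Input('perplexity', 'value')])
def update_figure(perplexity):
    # In the real app this would re-run t-SNE at the chosen perplexity;
    # here we simply load precomputed (hypothetical) 2-D coordinates.
    df = pd.read_csv(f'tsne_perplexity_{perplexity}.csv')
    return px.scatter(df, x='x', y='y', size='count', hover_name='bigram')

if __name__ == '__main__':
    app.run_server(debug=True)

The pattern is the same for any of the charts shown earlier: a layout component for user input, a dcc.Graph, and a callback that returns a Plotly figure.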
https://medium.com/plotly/nlp-visualisations-for-clear-immediate-insights-into-text-data-and-outputs-9ebfab168d5b
['Jp Hwang']
2020-03-30 16:37:04.161000+00:00
['Data Science', 'Programming', 'Python', 'Machine Learning', 'Technology']
Swift — Creating a Custom View From a XIB (Updated for Swift 5)
9. Use Your View We’re done! Now we’ve got a custom view, made from a XIB file, that we can use throughout our project. Let’s see how that looks. Click on your storyboard file, and drag in a UIView. Click on the identity inspector in the upper right, and change the class to TestView (or whatever you named your UIView). You can place it anywhere you want on the screen. We constrained it to the center of the screen, and had it take up 50% of the width, and 50% of the height — just for demonstration purposes. Open up your assistant editor again. This time it should open to your view controller file. Drag the TestView in as an IBOutlet. Your view controller should now look like this: One last thing — remember that label on our view? Let’s change the text. I’m going to use my old boss’s favorite greeting: That’s it. Let’s run the project! There you have it — an app that simulates life in an NBA front office. Remember when you set up the XIB, and I said the size doesn’t really matter? This is why. The XIB is going to take up the size of this view — however big or small, or whatever shape you make it in your storyboard (or code). All that really matters is how you’ve constrained all of the elements within your XIB. You can use this view throughout your app. If you’d like to go back and redesign how it looks, you can just change it in your XIB file, and all those changes will flow throughout the app. Questions or comments? Let me know! For anyone interested in investing, please check out Stock Genius, a brand new stock free tracking app I built featuring real time prices directly from the exchange (and many custom views built from xibs!).
https://medium.com/better-programming/swift-3-creating-a-custom-view-from-a-xib-ecdfe5b3a960
['Brian Clouser']
2019-07-02 23:03:29.913000+00:00
['iOS', 'Swift', 'Mobile App Development', 'Swift 3', 'Programming']
Can You Solve The Mystery Of Edgar Allan Poe’s Death?
Can You Solve The Mystery Of Edgar Allan Poe’s Death? Here’s the evidence that’s puzzled doctors for 150 years. “EDGAR ALLAN POE died in Baltimore on Sunday last. His was one of the very few original minds that this country has produced. In the history of literature, he will hold a certain position and a high place. By the public of the day he is regarded rather with curiosity than with admiration. Many will be startled, but few will be grieved by the news. He had very few friends, and he was the friend of very few — if any.” — Richmond Semi-Weekly Examiner, October 12, 1849 Poe was a superstar, adorning cigar boxes. | Hulton Archive/Getty A pioneer of detective fiction, short stories and sci-fi, America’s most famous horror writer left us with a whodunit uncomfortably similar to his own work. On the night of October 3, a journalist from the Baltimore Sun found Edgar Allan Poe, 40, sprawled half-conscious near a pub called Gunner’s Hall, wearing someone else’s soiled, shabby clothes. Poe had left Richmond, VA almost a week before, bound for Philadelphia to edit a book for a minor poet. A delirious Poe asked the journalist to contact Joseph E. Snodgrass, a magazine editor with medical training. Snodgrass arrived and brought Poe to Washington College Hospital, where he lapsed in and out of consciousness and was sometimes violent. He died four days later. The cause of death was listed as phrenitis: swelling of the brain. In the century and a half since, numerous friends, enemies and biographers of Poe have offered no less than ten theories explaining his death, and almost all are based on circumstantial evidence. Illustration of Poe’s most famous poem, “The Raven.” | Hulton Archives/Getty Poe the drunkard Poe famously had a problem processing alcohol: One drink lit him to the gills. A temperance activist, Snodgrass blamed the writer’s death on liquor in speeches nationwide. But Poe’s attending physician, Dr. John J. Moran, published a pamphlet claiming a “perfectly sober” Poe did not smell of liquor when admitted to the hospital, and refused it when offered, drinking only water. That said, Moran, who became a star on the speaking circuit with his colorful accounts of the poet’s death, has discredited himself in the eyes of historians with his widely varying reports of what Poe said and did in his final days. Murder most foul Moran and other chroniclers have said Poe fell afoul of “cooping,” a practice whereby one was drugged, put into different clothes to disguise identity and forced to vote several times. Gunner’s Hall was indeed a polling site in a local sheriff’s election being held the night he was found. It’s also been theorized that Poe was waylaid by ruffians — or the three brothers of his wealthy fiancée, Elmira Shelton — who beat him and forced liquor down his throat. Historians have largely dismissed these theories, and while alcohol could have caused delirium, science suggests otherwise. Poe’s obituary noted his “extreme personal beauty.” | Hulton Archive/Getty Poe’s frankly awesome hair debunks several theories In 1999, public health researcher Albert Donnay analyzed clippings of Poe’s hair taken after his death. Donnay had conjectured the writer had died of carbon monoxide poisoning from exposure to the coal gas that was used for heating at the time. But the text was not conclusive. It did show low levels of lead, indicating Poe had indeed refrained from drinking. 
The test also showed elevated levels of mercury, leading Chris Semtner of the Poe Museum in Richmond to propose that Poe had died of the mercury chloride prescribed after he was exposed to a malaria epidemic three months before his death. But the mercury found was 30 times below levels consistent with that. Did the cat do it? Possibly not. At a pathological conference in 1996, Dr. R. Michael Benitez was given a list of symptoms for an anonymous patient and asked to determine cause of death. The patient turned out to be Poe, and the diagnosis Benitez gave was rabies. Poe did own pets, including a cat who died soon after he did. But people with rabies are strongly averse to drinking water, which conflicts with Dr. Moran’s (admittedly unreliable) account of Poe downing the stuff. Benitez further acknowledged his diagnosis could not be proved without DNA evidence. Poe’s (second) final resting place in Baltimore. | Authenticated News/Getty That’s no brain! Poe was initially buried in an unmarked grave in the Poe family plot in Baltimore. After visiting the unceremonious resting place, poet Paul Hamilton Lane raised funds to build a fine new monument. Poe’s body was relocated there in 1875, almost exactly 26 years after his death. Brain tumors don’t decay as quickly as brains do. | stockdevil/Getty The sexton overseeing the reburial, George W. Spence, later remarked that he had lifted up Poe’s skull when the body was exhumed and that “his brain rattled around inside just like a lump of mud, sir.” Knowing as we do now that the brain is one of the first parts of the body to decompose, author Matthew Pearl consulted with doctors who confirmed that the object could not have been Poe’s shrunken, dessicated brain — but could well have been a brain tumor. This would explain his delirious behavior. We may really never know Of course, the cause of the poet’s death could be far less fantastic. Poe was not feeling well before setting out from Richmond for Philadelphia, and both his fiancée and his doctor had advised him not to go. Flu, perhaps? Equally mundane causes like encephalitis or meningitis might have explained his brain swelling and other symptoms. “I think he had a brain tumor. I don’t know that much of anything can be definite from our vantage point…And just because he had a brain tumor, doesn’t mean other things didn’t happen. You can have a brain tumor AND be murdered, or run over by a train or any other variety of things!” — Matthew Pearl Other mysteries remain. Moran’s notes indicated that the night before Poe’s death, he repeatedly uttered the name “Reynolds.” No scholarship has been able to determine who this person was. Even Poe’s final words are in doubt. Moran, whose lurid accounts of the poet’s death made him a star, at one point claimed Poe uttered, “Lord help my poor soul.” Contradicting himself even on this final point, Moran alternatively recalled far more grandiose last words: “He who arched the heavens and upholds the universe, has His decrees legibly written upon the frontlet of every human being and upon demaons incarnate.” Author Pearl believes the mystery will never be solved: “I think an exhumation could answer questions using forensic analysis. But there’s no way it will really happen, and even if it somehow did any analysis would probably raise as many questions as propose answers.”
https://medium.com/omgfacts/can-you-solve-the-mystery-of-edgar-allan-poes-death-2e29e777effe
['New Visions']
2016-09-23 22:05:12.751000+00:00
['Edgar Allan Poe', 'Mind', 'Poetry', 'Crime', 'Books']
Your Life Is Full of Porn. Stop Getting Yourself Off.
This Is Your Life On Porn
You wake up. You go to the toilet and relieve yourself. Your day starts out well. You have many goals you want to achieve and your to-do list is ready and waiting for you. “Just one sec,” you say to yourself. Then porn enters the bathroom while your undies are wrapped around your legs.
Lifestyle Porn
You whip out your phone and open the app of your favorite magazine. Everywhere there are consumer-focused ads telling you that your life could be better. They show the lifestyle you could be living if you weren't stuck in this 9–5 nightmare you call a life. You think to yourself, “I'm so stupid. How do I escape the rat race?” Lifestyle porn is all about people you'll never meet and places you may never get to go. It's a curated list of the top 1% of experiences you could have in your life. The lifestyle looks perfect. You never see what it takes to earn the lifestyle, only the end result. It's frustrating as hell to watch lifestyle porn. Your lifestyle is never going to be the same as someone else's, and that's the point. You can create your own lifestyle rather than copy a porn version you'll never have.
Money Porn
Laptop. Laptop. Laptop. They always have a laptop on their knees and a down payment on a Ferrari ready to be delivered during a socially distanced unveiling at their mortgaged home. The laptop is a symbol for one word: easy. A laptop makes you think making money is easy. If making money were so gosh damn simple, we'd all be rolling in it and nobody would ever need to wake up early for work again. Money porn sells the dream that cash will solve all your problems. If you only had money, then you'd have happiness. Money porn is a lie. Without meaning and fulfillment, money won't do a thing for you. In fact, money can make your life worse, not better, if you haven't discovered meaning or fulfillment first. Money can turn you into a jerk, addicted to the ridiculous goal of having to be first while others lose. Nobody has to lose for you to win, and that's the problem with money porn.
Startup Porn
All you need is a business and you'll be successful. In Australia, where I live, 9 out of 10 startups fail in the first five years. By that measure, startup porn is statistically designed to ruin your life. A business is hard work. Easing your way into business is a superpower, and all the startup porn ignores that. The startup peddlers tell you to walk away from everything and start a business. That's stupid advice. Adding too much risk to your life will only stress you out, leading you to make terrible decisions you'll regret later in life. You can be happy without a startup. (If you've got a regular job, then you're already an entrepreneur with one customer anyway.)
Revenge Porn
Social media makes this version of porn really easy. You can sabotage other people in the comments section of their posts or in the hidden chamber of secrets known as direct messages. Seeking revenge feels good. Seeing people lose so you can win seems obvious. When you eradicate the idea of winners and losers from your life, you welcome the gift of opportunity through the door of your mind. The more people you help, the better you do for yourself. If you help people, they will help you in return. You can do more when you collaborate than you can by yourself. The game of life is rigged against you. You can never be a winner at everything, so why even try? It doesn't make sense. The need to win only leads to eventual disappointment. You can crush it today by giving up revenge porn and helping people do better.
People “Doing It” Porn
Porn consisting of people having sex is not good for you either. Most of this content shows scenes and acts you can never replicate. The bar you set for physical looks and crazy sexual acts will only keep rising. Drop porn for real-life sex with your partner. It's much better.
Influencer Porn
If you have a personal brand and lots of followers, you'll do incredibly well. A personal brand is everything, they say. No, it's not. Gary Vee explains social media nicely: “I really miss when people understood that people who consume their content are a community, not a group of people that are there to serve their ambitions.” Nobody gives a damn about how many followers you have or your brand. The influencer movement is a lie designed to keep you on social media platforms so you continue to play the game. Use social media, absolutely — but use it to be helpful and for a cause greater than your own selfish desires. Take it from somebody who knows the social media game well — 100,000 followers feels like 1,000. Followers and a brand won't make you rich, successful, or happy, and they won't help you die with no regrets. Influencer porn is designed to sell you products, not make you successful in life. You don't need any of it.
Cure Your Porn Addictions with This.
You don't have to live a life of porn. Porn is the default option and we don't even know it. I lived a life of porn too. Not anymore. The secret to killing all forms of porn is discipline. Discipline yourself to focus on what you know is good for you. You already have a list of habits that you probably follow — like exercise, reading, leisure time, meditation — and you can focus your time there and get far better returns than from the endless porn-fuelled addiction to meaningless nonsense. Porn is an addictive distraction from the work you know you want to do. Getting started with your life's work each day is hard, but so is continually distracting yourself with life porn. Whatever your version of porn is, abolish it. You can get yourself off with life rather than porn. It feels better too.
https://medium.com/the-ascent/your-life-is-full-of-porn-stop-getting-yourself-off-c16cc0b092f1
['Tim Denning']
2020-07-31 17:01:01.525000+00:00
['Addiction', 'Life', 'Money', 'Social Media', 'Productivity']
Truly Customizing Power BI with React, Angular, or any web framework
With the growth of the amount of data available in organizations, presenting it in a clear and direct way is increasingly important. In this context, Power BI — Microsoft's business analysis tool — has gained prominence. Although it ships with integrated components and navigation mechanisms that are enough to meet most regular enterprise needs, the platform still stands out for its customization possibilities. Besides customizing the platform's built-in components, it is possible, with some front-end engineering skills, to develop new ones from scratch.
Developing a Power BI Custom Visual
New components are developed as Custom Visuals, using the PowerBI Visual Tools package, or pbiviz, which can be installed with the Node Package Manager (NPM).
[Image: the pbiviz command-line interface]
Developing a Custom Visual requires only knowledge of conventional web technologies such as TypeScript, HTML and CSS, and it can be enhanced by frameworks such as React, Angular or D3.js. The scaffolding of a Custom Visual, with everything it needs, is generated by the CLI tool provided by the NPM package mentioned above. The web developer needs to write very little code: essentially two methods, the constructor and update of a class that implements IVisual. In addition, the file capabilities.json, also generated by the CLI tool, lets the developer declare properties, such as colors and fonts, that the end user can later customize in Power BI.
export class Visual implements IVisual {
    constructor(options: VisualConstructorOptions) {
        // code here: one-time setup when Power BI instantiates the visual
    }

    public update(options: VisualUpdateOptions) {
        // ... and here: react to new data, formatting options and viewport changes
    }
}
The tooling provided by pbiviz gives the web developer instant feedback on their work, with changes applied through hot reload, all supported by the Power BI Service.
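To make the constructor/update contract above a little more concrete, here is a minimal sketch of what a hand-written src/visual.ts might look like. It is illustrative only: it assumes a project scaffolded with pbiviz new (so the powerbi-visuals-api typings are available) and a categorical data mapping declared in capabilities.json, and the rendered output (a plain paragraph showing the viewport size and row count) is invented for the example rather than taken from the article.
"use strict";
import powerbi from "powerbi-visuals-api";
import IVisual = powerbi.extensibility.visual.IVisual;
import VisualConstructorOptions = powerbi.extensibility.visual.VisualConstructorOptions;
import VisualUpdateOptions = powerbi.extensibility.visual.VisualUpdateOptions;
import DataView = powerbi.DataView;

export class Visual implements IVisual {
    // Container element handed to the visual by the Power BI host.
    private target: HTMLElement;
    private output: HTMLParagraphElement;

    constructor(options: VisualConstructorOptions) {
        // One-time setup: keep the host element and attach our own node to it.
        this.target = options.element;
        this.output = document.createElement("p");
        this.target.appendChild(this.output);
    }

    public update(options: VisualUpdateOptions) {
        // Called whenever data, formatting options or the viewport change.
        const { width, height } = options.viewport;
        const dataView: DataView | undefined = options.dataViews && options.dataViews[0];

        // Assumes a categorical dataViewMappings entry in capabilities.json;
        // adjust the access path to whatever mapping the visual declares.
        const rows = dataView?.categorical?.categories?.[0]?.values?.length ?? 0;

        this.output.textContent =
            "Viewport: " + Math.round(width) + " x " + Math.round(height) +
            " | rows received: " + rows;
    }
}
In day-to-day use, pbiviz start serves a sketch like this with hot reload (after enabling the developer visual in the Power BI Service settings), and pbiviz package bundles it into a .pbiviz file that can be imported into a report.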
https://towardsdatascience.com/truly-customizing-power-bi-with-react-angular-or-any-web-framework-5652b86a723e
['Thiago Candido']
2020-09-14 13:28:41.472000+00:00
['React', 'Angular', 'Power Bi', 'Web Development', 'Data Science']