Songs Of The Sea
Songs Of The Sea

The sea sings me melodies,
While I sit and watch it churn,
Churning every thought and dream,
Of every dreamer on the shore!

It sang me a symphony of love,
While crashing waves against the rocks,
As love is like the waves which crash,
Not knowing if they’ll ever make it to the shore!

It sang to me in its loneliness,
Accompanied by the soothing sound of the waves,
As solitude is like a balm,
It keeps numbing away all the pain!

-Babar Mir
https://medium.com/poets-unlimited/songs-of-the-sea-bec3a60f49e4
['Babar Mir']
2017-04-02 19:03:45.596000+00:00
['Love', 'Writing', 'Feelings', 'Poetry', 'Emotions']
Button UX Design: Best Practices, Types and States
Image credit: designshack Button UX Design: Best Practices, Types and States by Nick Babich Buttons are an ordinary, everyday element of interaction design. Despite this, because buttons are a vital element in creating a smooth conversational flow in websites and apps, it’s worth paying attention to these basic best practices for buttons. We’ll also go over button types and states — important information you need to know to create effective buttons that improve the user experience. Best Practices for Buttons Make Buttons Look Like Buttons Think about how the design communicates affordance. How do users understand the element as a button? Use shape and color to make the element look like a button. The Groupon sign-in form focuses on the primary action. And think carefully about touch target size and padding when designing. The size of buttons also plays a key role in helping users identify these elements. Various platforms provide guidelines on minimum touch targets. Results of an MIT Touch Lab study found that averages for finger pads are between 10–14mm and fingertips are 8–10mm, making 10mm x 10mm a good minimum touch target size. Image Source: uxmag Location and Order Put buttons where users can easily find them or expect to see them. For example, the iOS UI guidelines show known locations for buttons. Mind the order and position of buttons. The order that buttons go in, especially if there are corresponding pairs (such as ‘previous’ and ‘next’), is important. Ensure the design puts emphasis on the primary or most important action. In the example below, red is used for the button that performs a potentially destructive action. Notice how the primary action is not only stronger in color and contrast, but is also on the right-hand side of the dialog. The “Delete” button is more prominent than the “Cancel” button. Labels Label buttons with what they do. Add a clear message of what happens after the click. The same example as above, but without proper text labels. Feel the difference? No labels for buttons. Call to Action (CTA) Make the most important button (especially if you use it for a call to action) look like it’s the most important one. Create Resume is clearly a CTA button. Button Shapes Usually, you’ll want to make buttons square or square with rounded corners, depending on the style of the site or app. Some research suggests that rounded corners enhance information processing and draw our eyes to the center of the element. Rounded rectangular button You can be more creative and use other shapes such as circles, triangles or even custom shapes, but keep in mind the latter might be a bit more risky. The floating action button is a good example of a custom-shaped button. Be sure to maintain consistency throughout your interface controls, so the user will be able to identify and recognize your app’s user interface elements as buttons. Button Types and Behavior 1. Raised Button A raised button is a typically rectangular button that lifts (the shading indicates that it is possible to click). Raised buttons add dimension to mostly flat layouts. They emphasize functions on busy or wide spaces. Use Inline, to give more prominence to actions in layouts with a lot of varying content. Behavior Raised buttons lift and fill with color on press. Example Raised buttons stand out more than flat buttons. Example from an Android application. 2. Flat Button Flat buttons do not lift, but fill with color on press. 
The major benefit of flat buttons is pretty simple — they minimize distraction from content. Flat button on the app canvas. Use In dialogs (to unify the button action with the dialog content) Flat buttons in an Android dialog. On toolbars Flat buttons on a toolbar. Source: Material Design Inline with padding, so the user can easily find them Flat buttons. Source: Material Design Behavior Example Flat button in an Android application dialog. Flat buttons in a dialog. Source: Material Design 3. Toggle Button A toggle button allows the user to change a setting between two (or more) states. Toggle button. Use The most common toggle button is the On/Off button in preferences. Toggle buttons can also be used to group related options, but your layout should be arranged in a way that conveys that certain toggle buttons are part of a group. A toggle button group also requires that you: have at least three toggle buttons in the group; label buttons with text, an icon, or both. Toggle button with one option selected. Source: Material Design. Icons are appropriate for toggle buttons that allow a single choice to be selected or deselected, such as adding a star to an item or removing it. They are best located in app bars, toolbars, action buttons or toggles. Toggle button for Twitter “Like”. Source: Ryan Duffy It’s very important to choose the right icon for your button. I’ve covered this topic in the article Icons as Part of an Awesome User Experience. Example Apple iOS uses toggle buttons in the Settings section. 4. Ghost Button Ghost buttons are those transparent and empty buttons that have a basic shape, such as a rectangle. They are generally bordered by a very thin line, while the internal section consists of plain text. Different ghost buttons. Source: Dadapixel Use Using a ghost button as a primary CTA is usually not such a good idea. In the Bootstrap example, the ghost button Download Bootstrap looks the same as the main logo, which may confuse users. Download Bootstrap is a button. Have you noticed that? A ghost button is best used for secondary or tertiary content, since it will not (or should not) compete with your primary CTA. You ideally want the user to see your main CTA and then (if not relevant) skip over it to the secondary button. The positive action has much higher contrast, and the user sees a clear action. The primary button (CTA) is Purchase Now, and the ghost button is a secondary button. Behavior Normal state (left) and Focused state (right). Source: Dadapixel Example The AirBnB website has ghost buttons for the action “Become a Host”. AirBnB website. 5. Floating Action Button The floating action button is part of Google Material Design. It’s a circular material button that lifts and displays an ink reaction on press. Use Floating action buttons are used for a promoted action. Behavior They are distinguished by a circled icon floating above the UI and have motion behaviors that include morphing, launching, and a transferring anchor point. Choosing Button Type Choosing a button style depends on the primacy of the button, the number of containers on screen, and the screen layout. Button type selection suggested by Google Material Design. Z-depth. Function: Is the button important and ubiquitous enough to be a floating action button? Dimension: Choose the button type depending on the container it will be in and how many z-space layers you have on screen. Layout: Use primarily one type of button per container. Only mix button types when you have a good reason to, such as emphasizing an important function. 
Button States This point is not so much about what the initial button looks like to the user; it’s about hovering over a button and finding that nothing changes. The user might get confused: “Is it a button, or not? Now I have to click to find out if that thing that looks like a button is actually a button. Well...” A button isn’t a one-state object. It’s multi-state. And providing visual feedback to users to indicate the current button state should be a top-priority task. Normal State The main rule for this state: the button should look like a button in its normal state. Windows 8 is a good example of this problem — it’s hard for users to know whether things in the Settings menu are clickable or not. Buttons in the Normal state in Windows 8. Focused State Offering good visual feedback to users that they’re hovering over a button is good practice. The user instantly knows their action was accepted, and they will be delighted by visual rewards. Pressed State By animating different elements of your design you can add a bit of excitement and delight your users with creative and helpful motion. Inactive State There are two possibilities — either hide the button or show it in a disabled state. Arguments for hiding the button: Clarity. Only showing what is needed for the task at hand. Saving real estate. It allows you to change the controls, using the same space for different purposes. This is handy when there is a lot going on. Gmail does this. Gmail hides unused buttons and makes them visible only when the user has made an appropriate action. Arguments for using a disabled state: Show the action possibility. Even if the button isn’t in use, the user has a chance to learn that the action is possible. You may even have a tooltip explaining the criteria for use. Control location. The user can learn where controls and buttons live within the interface. Disabled state button Conclusion A button is meant to direct users into taking the action you want them to take. A smooth handover keeps the conversation flowing; glitches, such as being unable to find the right button, act at best as interruptions and, at worst, as breakdowns. Button UX design is always about recognition and clarity. Think of the website or app as a conversation started by a busy user. The button plays a crucial role in this conversation.
https://uxplanet.org/button-ux-design-best-practices-types-and-states-647cf4ae0fc6
['Nick Babich']
2017-07-16 18:08:45.739000+00:00
['UX', 'Design', 'User Experience']
Legible Lambdas
Photo by Math on Unsplash We all love lambdas, don’t we? Lambdas are powerful (passing methods around, getting rid of anonymous classes…you get the picture) and with great power comes great responsibility. When we switched to using Java 8 at work, I was excited about finally getting to use lambdas! But very quickly, I found myself cramming all my code into a lambda. I was not only overusing lambdas, I was also writing code that was very unreadable. Over the course of the past couple of years, I have gathered some “wows” and “gotchas” with using lambdas that I have run into AND, more importantly, run away from (all examples pertain mainly to Java): Using Consumers, Functions, and Suppliers Consumers are like methods with a void return type and one input argument. Functions are processing methods that take an element of type A and produce an element of type B (A and B could also be the same type). Suppliers are comparable to methods that take no input arguments but always produce an output. It took me a while to get the hang of these nuances. Understanding these differences helps a bunch when you have to refactor some code using one of these interfaces. For example, consider the following snippet of code: someList.stream().map(listItem -> { Step 1; return result of Step 1; }).map(step1Item -> { Step 2; return result of Step 2; }) someOtherList.stream().map(listItem -> { Step 1; return result of Step 1; }).map(step1Item -> { Step 3; return result of Step 3; }) In order to be able to reuse applying Step 1 to listItems, we could extract the input to the first map method into a Function interface, and with that change, the code would now look as follows: someList.stream().map(applyStep1()) .map(step1Item -> { Step 2; return result of Step 2; }) someOtherList.stream().map(applyStep1()) .map(step1Item -> { Step 3; return result of Step 3; }) Function<A, B> applyStep1() { return a -> { Step 1; return result of Step 1; }; } An easy way to do this: let your IDE help you with extracting inputs to maps into Functions (select the entire block of code inside the map -> Right click and refactor -> Extract -> Method -> name the Function and TADA). This can also be done for other interfaces like Consumers and Suppliers! Reusing reduction methods Want to get the sum of all the items in a list? The average? Look no further, the streams API has a method for both! integerList.stream().mapToInt(Integer::intValue).sum() integerList.stream().mapToInt(Integer::intValue).average() The point I am trying to make here is that there are reduction methods provided out of the box, and it is a good idea to always look before venturing out to write your own :) Everything does not have to use the streams/parallel streams API The streams API was one of the most widely celebrated features of Java 8, and rightly so. It plays very well with lambdas and, as someone new to this, I was subconsciously converting ALL my collections to streams irrespective of whether or not it was required. Similarly, streams vs. parallel streams. Parallel is good, right? Yes. Is it good ALL the time? ABSOLUTELY NOT. The internet is full of articles and performance benchmarks on these topics and I would highly recommend doing your research before streaming through EVERYTHING in your code base. Break up the giant lambdas! Say we are required to apply forty-four steps to our input and we decide to use a map. But are we required to apply all forty-four steps in a single map method? Well, let’s see. 
So if we were to use only one map method, this is what our code would look like: someList.stream().map(listItem -> { Step 1; Step 2; Step 3; Step 4; . . . Step 44; return result of all above Steps; }); Next consider this: someList.stream().map(listItem -> { Step 1; return result of Step 1; }).map(step1Item -> { Step 2; return result of Step 2; }).map(step2Item -> { Step 3; return result of Step 3; }).map(step3Item -> { Step 4; return result of Step 4; }); . . . I believe one of the biggest advantages of using lambdas is how elegantly you can break up processing steps into their own map method (there are other methods one could use and I am just citing map as an example here). I always like to break up big map methods into individual ones that are more readable and maintainable (this also allows for reusability). At the same time, I would recommend against blindly having only one line of execution within every map method. We could always combine processing steps into a map as seen fit (for example, Steps 1–3 could be inside a single map). map() with an if block vs. a filter You can filter items in a collection using filter(). How long was it before I moved ifs inside my maps to actually be filter predicates? Long enough. What I am saying is this: someList.stream().map(listItem -> { if (listItem.startsWith("A")) { //Do Something } }); can instead be written as this: someList.stream() .filter(listItem -> listItem.startsWith("A")) .map(listItem -> { //Do Something }); Though this may or may not provide a performance bump, it adds to readability and ensures the use of appropriate methods. Switching to lambdas was a big jump for me that took a long time to get used to, and it continues to surprise, frustrate, and wow me ALL at the same time!
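Since the snippets above are intentionally pseudocode, here is a minimal, self-contained Java sketch (the class name, list contents, and the upper-casing "Step 1" are invented for illustration, not taken from the article) showing a Supplier, a Consumer, and an extracted Function reused in a stream pipeline, together with filter() replacing an if inside map():

import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.stream.Collectors;

public class LegibleLambdasDemo {

    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", "Alan", "Grace", "Linus");

        // Supplier: no input arguments, always produces an output
        Supplier<String> defaultName = () -> "Anonymous";

        // Consumer: one input argument, void return type
        Consumer<String> printer = name -> System.out.println("Hello, " + name);

        // Function: takes an element of type A (String) and produces a B (Integer)
        Function<String, Integer> length = name -> name.length();

        // Reuse the extracted Function, and use filter() instead of an if inside map()
        List<Integer> lengthsOfANames = names.stream()
                .filter(name -> name.startsWith("A"))
                .map(applyStep1())
                .map(length)
                .collect(Collectors.toList());

        lengthsOfANames.forEach(len -> System.out.println("Length: " + len));
        printer.accept(defaultName.get());
    }

    // The extracted "Step 1" as a reusable Function (here it simply upper-cases the item)
    private static Function<String, String> applyStep1() {
        return name -> name.toUpperCase();
    }
}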
https://medium.com/javarevisited/legible-lambdas-4259c831918e
['Janani Subbiah']
2020-11-22 09:58:37.316000+00:00
['Technology', 'Programming', 'Software Development', 'Java', 'Coding']
The Introvert’s Struggle: Eating Lunch With Coworkers
The Introvert’s Struggle: Eating Lunch With Coworkers No, I don’t want to talk about food while eating food Photo by bantersnaps on Unsplash I have no doubt that I can go days, weeks, months without uttering a single word to another human and be better for it. Sound extreme? Then you’re probably an extravert. You might want to stop reading here. I love eating meals alone too. Maybe that’s why I respect my cats so much. I envy their quiet, solitary lives. The most admirable creatures are the ones that have no need to prove their strength or ferocity to anyone but themselves. So many people are showoffs, braggarts, constantly trying to prove they’re the best. In an environment where everyone tries to portray their optimal self, it’s no wonder all people want to do is talk all the time. The worst time to be around people is at work when they’re eating lunch. When I worked in an office, this was by far the most challenging part of the day to get through because I felt really pressured to socialize. And it was hard to disguise the anxiety I was feeling. I didn’t want to seem tired or exhausted (which everyone was, but still). I didn’t want to show my coworkers that I wasn’t really interested in what they were like outside of work. Now, I’m sure some people have already tuned out at this point. This guy’s anti-social. I’m not going to bother with this. He’s over-generalizing. Sure, a lot of workplaces are really great, and there’s nothing wrong with loving your coworkers. I completely agree with that. I wish I had that kind of camaraderie with this particular group of people. But I just didn’t. They were all pretentious. And they ate pretentious foods, like organic stuff with quinoa and all kinds of grains that I had never heard of. They ate oats and skyr for breakfast. Come on, can we get some cereal and milk in here once in a while? What’s with all the muesli? And no, I don’t want to add chia and flax seeds. And the conversations… oh the conversations. The only three topics these coworkers ever talked about at the table were the following: where they’ve traveled what shows they were binge watching and… food! Right — food. They loved to talk about food while they ate food. This was by far the most common conversation topic. I remember thinking, wow, I bet zombies have better conversations while they’re eating corpses. I mean—I get it—it’s sort of natural to be like, oh this sandwich is great. That’s fine. But when the primary content of discourse is food while you’re stuffing your face with it — it’s just too much. It’s like, listen, I’m just eating my food here, and it’s decent — I don’t need to talk about the nuances and the nutrition and the preparation and the subtle combinations of flavors. I’m putting it all in my mouth right now — I can taste all the combinations. I’m chewing and digesting, and in a minute I’ll be worrying about whether or not it’s gonna come back out of me before I have to start working again. I don’t need all this. Can we talk about something else please? No. They take it even further. It goes from what they’re eating to all the cool things they’ve ever eaten in the history of their lives. All the fantastic dishes consumed on vacations to exotic countries. Or even more lame, all the fancy meal prep and planning they do on Sundays (again, while they’re binge watching their favorite TV shows). It’s the introvert struggle—we’re forced to talk to uninteresting and unoriginal people all the time. 
And the sad thing is that I understand the reason they’re so boring and unoriginal. It’s because everyone’s so overly consumed by their careers. The only real things ordinary people have to talk about are the little scraps of non-work related activities they manage to squeeze into their schedule when they’re not inundated by responsibilities. I mean, I guess if you really think about it, a substantial portion of the time most people spend on non-work related tasks has a lot to do with bodily functions, like eating. And we can’t really talk about pooping or farting or having sex, so eating is the natural go-to. And when they get their two weeks of PTO, and they actually get to live life for a little while — in impressive fashion mind you, because they all work in tech — it’s natural that they want to brag about it. As sad as food dialogue can be, it might be that binge-television is an even sadder topic. It’s the only thing left at the end of the day, right? When your brain is total mush — it’s all your body is capable of doing in those two hours between dinner and sleep. Put on that Netflix or Hulu or Prime, or whatever overpriced streaming service you like, and have at it. Photo by Nik Shuliahin on Unsplash We all watch the same stuff too, because the algorithms in our devices are all tuned into the same data, and we’re all being recommended the same poorly written garbage, and then we’re telling our coworkers to watch it. Why not read books or do some art or write poetry? Well, most of that stuff takes too much time, thought, and imagination. Who has energy for that? So I’m out of luck, really. My coworkers probably think there’s something wrong with me. That I’m awkward or that I don’t know how to socialize. They see that I never really have much to say, so there must be something wrong with me. I’m sure they notice that I’m so quiet. To them, I’m the weirdo for not constantly filling the atmosphere with my voice in between bites of kale. And then there’s after work—don’t even get me started on the pressure to socialize off the clock. I’m certain they all wonder why I have no desire to join them for happy hour—why I don’t want to gather around at the loudest bar in town and get plastered. But they gotta do it, right? They have to drown out the sound of the nothingness in their own heads. Why don’t I want to hear you sing karaoke? Because I already hear you sing music that isn’t yours all day long. I don’t need to hear more of it. Workplace lunch tables all across the universe are like this. So what’s an extreme introvert to do? I remember I would try to escape. Since all these people lived by the clock, I would employ strategic timing to get away from them. If I didn’t have a meeting between 11:30 and 12:00, I would get up and go to lunch by myself. This way I would beat all the pre-lunch social roaming that took place. When all the morning meetings ended, they’d float around like hungry, deflating balloons, meandering from cubicle to cubicle, asking to see who was available for lunch. And by then, I would already be gone, somewhere, anywhere that didn’t require me to be in earshot of more insipid conversation. It was hard, though, to find a peaceful place to eat lunch in a busy city. Sometimes it was impossible to locate an area with any kind of seating at all, let alone somewhere that wasn’t insanely crowded and required you to wait about a third of your lunch break in line just to get the food. It was a nightmare. 
Big, bustling cities are not a place for introverts who want to use their lunch break as a quiet time to clear their heads. No respite from the chaos. No haven to recapture a bit of your sanity. No luck in a metropolis so mad and all-consuming. No wonder that janitor who cleaned the office would eat lunch by himself in the maintenance elevator. He knew what was up. Small talk. Pointless conversations. It’s all noise on top of noise on top of noise. I can’t stand being part of conversations that don’t go anywhere and don’t mean anything. If I’m not learning something profoundly interesting about you or sharing something incredibly interesting about me, then I want out. All I can hope is that there’s at least one other person out there who understands.
https://medium.com/scuzzbucket/the-introverts-struggle-eating-lunch-with-coworkers-adda2913e6b3
['Franco Amati']
2020-10-25 17:30:58.271000+00:00
['Work', 'Self', 'Introvert', 'Psychology', 'Lunch']
Love is not Free
True Love is not free. We’ve all heard it said many times and in many ways, and have quite possibly repeated it on many occasions, that love is free, love doesn’t cost a thing or money can’t buy love. Although it is true that money cannot buy love, I strongly disagree with the sentiment that love for another human being does not have a price. When we are truly in love with someone, we are willing to pay for that love with much more than silver and gold. Love is not a commodity to be bought, sold and traded, but it is a precious energy that costs us our life force. A person in love is willing to go to any lengths to ensure that his/her loved one is safe, happy and content. There are countless ways that we pay the price of loving another. Here are just 5 of the many attributes of true love that are worth much more than money: 1. Commitment Being in love is to be fully committed and available to the one we love. This means that we must be willing to sacrifice our valuable time and energy towards nurturing and developing the relationship with our loved one. True love is unconditional. A committed relationship calls for dedication and hard work for it to evolve and flourish. True love costs our commitment. 2. Honesty Everybody lies. This is just a fact of life and sometimes a small ‘white lie’ may be required, depending on the circumstances. A loving relationship, however, is an exception to this unspoken rule and leaves no room for dishonesty. We must be prepared to communicate with our partners and it is virtually impossible to communicate clearly and effectively without complete honesty. True love costs our honesty. 3. Loyalty When our loved ones really need us, we are there to validate and support them. This is loyalty. Love is not something that we just turn off and on when it is convenient for us. A relationship is an obligation to be there for the one we love, even and especially through the hard times. Loyalty is the backbone of a successful relationship; it is a rare quality that must be appreciated and reciprocated. True love costs our loyalty. 4. Respect Mutual respect is an essential quality of any healthy relationship. It is necessary for us to hold each other’s best interests in high regard. We must be willing to sacrifice certain things out of respect for our partner. When we love someone we always consider the impact of our words and actions. The mutual respect of each other’s wants, needs, and boundaries, is a necessity in a loving relationship. True love costs our respect. 5. Trust Lack of trust can be detrimental to any relationship. There is no love without trust, and there is no trust without honesty. If we do not believe that our partner is being honest with us then it is impossible to trust them. Trusting another person with our heart is one of life’s most vulnerable acts. This is why it is very important that we only trust our hearts to those we truly love. True love costs our trust. To love another fully and completely is to surrender your heart into their possession and trust that it is in good keeping. No price is too high for the ones we love. Of course, I am exclusively referring to true love and not a toxic or abusive ‘love’. By its very definition, abuse is not, nor is it ever, a price of true love. When we enter a relationship of true love, it is essential that we are willing to communicate, learn and grow with our partner. 
If we are not able to pay this price, then we may not yet be ready for a committed relationship; and we should always be open and honest about this. Every loving and lasting relationship must be built on the firm five-point foundation of commitment, honesty, loyalty, respect, and trust.
https://bobbyjmattingly.medium.com/love-is-not-free-328b89ca8c02
['Bobj Mattingly']
2019-09-27 08:53:08.609000+00:00
['Self-awareness', 'Relationships', 'Love', 'Self Improvement', 'Love And Sex']
Star Wars Symbology: The Actual Sacred Texts Of Star Wars
Star Wars Symbology: The Actual Sacred Texts Of Star Wars The Rise Of Skywalker Ends With A New Call To Adventure — For Us. “We’ve passed on all we know, a thousand generations live in you now.” — Luke Skywalker Whether you’re aware of it or not, the Jungian aspects of Star Wars are what you love most about Star Wars. Let me explain. First of all, we’re drawn in, with heightened awareness (consciously or not), when we perceive that the symbolic language of the archetypes has been communicated. Put another way, we perk up when an archetype appears in a story. Why is that? I’ve been writing about Star Wars using the framework of Jungian Psychology for a while now, and I’ve received a ton of feedback — both positive and negative. (Like I mentioned, people perk up at the archetypes.) I believe this is due to the fact that Star Wars is the great myth of our generation. Not Harry Potter. Not the MCU. Star Wars. As such, like every era’s great myth, Star Wars is now embedded in what Carl Jung called the Collective Unconscious. But why Carl Jung? Because Carl Jung is to George Lucas as Anakin Skywalker is to Ben Solo. The chain of influence in Star Wars goes like this: Carl Jung → Joseph Campbell → George Lucas → JJ Abrams By now, it’s accepted canon that George Lucas had the movie script for a “Space Western about a guy named Luke Starkiller” until he got his hands on Joseph Campbell’s The Hero With A Thousand Faces. The Jung connection is less explored. But there’s no doubt that Joseph Campbell’s theories were directly influenced by his study of Carl Jung, beginning first and foremost with the concept of the archetypal hero’s journey. In The Hero With A Thousand Faces, Joseph Campbell outlines the monomyth. The monomyth is the myth at the core of all myths. He called the monomyth the Hero’s Journey. Campbell’s theory that the universal monomyth flows beneath and within everything is a brilliant distillation and direct descendant of Carl Jung’s theory of the collective unconscious. Back jacket of The Hero With A Thousand Faces by Joseph Campbell The basic framework of the archetypal hero’s journey is always this: “A hero ventures forth from the world of common day into a region of supernatural wonder: fabulous forces are there encountered and a decisive victory is won: the hero comes back from this mysterious adventure with the power to bestow boons on his fellow man.” — The Hero With A Thousand Faces Does that sound familiar? Perhaps like the logline or template of every screenplay ever? Take, for example, the following quote from screenwriting guru Blake Snyder’s book, Save The Cat! Strikes Back: “Logline Template” from the best-selling screenwriter’s playbook, “Save The Cat! Strikes Back” by Blake Snyder There’s a reason why Snyder referenced Star Wars throughout Save The Cat! Strikes Back. Because Star Wars: A New Hope was the first movie to consciously aim at hitting all “the beats.” What screenwriters like Blake Snyder call “hitting the beats,” Joseph Campbell called the steps along the hero’s journey and the monomyth. What Campbell called the monomyth, Jung called the collective unconscious. Jung believed there are numerous archetypes that are all integrated into the collective unconscious. Jung, in turn, was influenced by the work of Nietzsche, Goethe, Freud, William James, the Greeks, the Bible, and others. 
But it’s worth noting that it was Jung who survived his own hero’s journey and “dark night of the soul” (which is a classic story beat) to distill his theories and bring forth the archetypes of the collective unconscious. “There exists a collective, universal, and impersonal nature which is identical in all individuals. This collective unconscious does not develop individually but is inherited. It consists of pre-existent forms, the archetypes.” — Carl Jung Now, the Jungian archetypes are recognizable throughout Star Wars: Reylo, the Anima and Animus The archetypes include but are not limited to the Hero, the Shadow, the Wise Old Man, the Trickster, and the Anima & Animus. Carl Jung’s depiction of The Wise Old Man archetype from The Red Book, looking a lot like Obi-Wan from E4. Star Wars Broke The Fourth Wall Which brings us to The Rise of Skywalker, and Luke’s line in the trailer, “We’ve passed on all we know, a thousand generations live in you now.” In the context of the movie, we assume Luke tells that to Rey. But this sentence mirrors what Carl Jung taught about myth. The archetypes in myth express the core of human wisdom encoded throughout the generations in the collective unconscious. In The Last Jedi, a big deal is made of burning the sacred texts — and starting anew. We are led to believe the sacred Jedi texts were destroyed by a bolt of Force lightning brought on by Master Yoda. But as the Millennium Falcon made its getaway at the end of the film, we see the sacred texts stored safely on board their new home. Thus, when everything is taken together, I believe that when Luke said, “We’ve passed on all we know, a thousand generations live in you now,” he said it to us. In reality, you could argue this is where Luke breaks “the fourth wall” of cinema and addresses the audience directly. Luke acknowledges the end of the ninth and final film of the core saga — and tells us that Star Wars has passed on all it knows to us. It’s worth noting that the choice of the phrasing “A thousand…” is a subtle but obvious nod to the actual sacred texts of Star Wars. Not the sacred Jedi texts aboard the Millennium Falcon — but the source text that turned Star Wars from a 1970s Space Western into the Myth of a Generation. In short, the true sacred texts of Star Wars are the books we’ve been discussing — Joseph Campbell’s The Hero With A Thousand Faces and The Collected Works of Carl Jung, including The Red Book. The ideas in these books form the backbone of the Star Wars mythos. And thus, when Luke Skywalker references what we’ll call “the actual sacred texts of Star Wars,” he not only breaches the fourth wall — Luke sounds the Call to Adventure. There are actual sacred texts in this world. The call to adventure is another beat, another step on the hero’s journey. The ultimate role of a myth is to become a tool. A tool that we use to guide us and show us how to live our own lives. And ultimately, the final role of the Hero is to return to his tribe with the boon. Luke breaching the fourth wall, calling us to adventure, offers that. We are his tribe. And Luke Skywalker’s final act is to grant us — Star Wars Fans — the ultimate boon. Luke is giving us the ultimate treasure map. Luke is pointing the way. Luke is telling us to track down the actual sacred texts of Star Wars. 
Put another way, read Campbell, read Jung, read the works that influenced them, watch lecture videos on myth and religion, find scholars who continue the work today, dive down the rabbit hole of a Rene Girard, take responsibility for your education — form your own curriculum. Seek wisdom from the sacred texts, and apply it to your life. This treasure map, these texts, and the inner journey they offer, is where the adventure of Star Wars lives on… The Star Wars Saga Fades To Black With a Call To Adventure, For Each Of Us Because as our real world expands and evolves, we must never forget that the greatest human battles are waged within. Our greatest locus of control is our self, our thoughts, our reactions. What will you do next? Our Star Wars storytellers have told us to explore the works of psychology, mythology, spirituality, and religion. This is not a two hour movie, it’s a lifelong journey. But by waking up to the deeper possibilities within, we have a chance to accept this call to adventure, fulfill our fuller potential, and become the heroes of our own lives. And our communities need heroes. And the next generation is watching us. One of the pages inside “The Hero With A Thousand Faces” by Joseph Campbell “The problem is nothing if not that of rendering the modern world spiritually significant.” — Joseph Campbell This is how a story lives forever. One last thing, the Jedi has been described as a warrior-monk. Thus, if we listen closely, Star Wars is also telling us to maintain our bodily health. Thanks for reading. And may the force be with you, always.
https://medium.com/jung-skywalker/star-wars-symbology-the-actual-sacred-jedi-texts-of-star-wars-19b112249645
['Brian Deines']
2020-08-28 16:07:46.778000+00:00
['Philosophy', 'Star Wars', 'Symbolism', 'Psychology', 'Film']
Component-Based Forms
Image attribution: superoffice.com Reactive sub-forms implemented with independent, highly re-usable Angular Components This article was inspired by a recent discussion with a manager regarding how to integrate someone with a math/Aerospace Engineering background (that would be me) into a group that worked on heavily form-based applications. I inquired about the nature of their forms and they were mostly very long and complex, a few of which required a bit of math in the process of validation. The manager was skeptical that multiple devs could work on the same form at the same time, especially given the size and complexity of just a single form. I talked about the concept of Component-Based Architecture and how the same idea could be (and, in fact, has been) implemented in Angular forms. The process is simple: delegate form groups to individual Components. Have those Components return a reference to a FormGroup and make each component responsible for validation of its sub-group. Construct the original, large FormGroup from the constituents created by individual Components. Not only does this process allow multiple developers to work on a single, large form, it promotes a high level of re-use since the same combinations of controls in a form group are likely to be encountered multiple times across one or more applications. This concept is nothing new, and a number of articles have been published online on the topic. This article discusses how to apply Component-Based Forms to the most common operation in e-commerce: credit-card payment. Front-End Credit-Card Processing The second form I ever created back in the late 1990s involved credit-card processing. That was an interesting experience because it involved an introduction to Luhn numbers and Luhn validation. Now, before you get a case of math anxiety, understand that absolutely zero math will be covered in this article. Any necessary computations have already been encapsulated away into a small library of Typescript functions that are available in the GitHub repository that accompanies this article. The credit-card portion of a payment form typically involves three items: 1 — The credit card number (optional selection of card type) 2 — Card expiration date (mm/yyyy — may be one or two form controls) 3 — Card CVV (usually xxx or xxxx) Front-end credit card processing is often backed by a data file that specifies the list of accepted cards and relevant data for each card. This file may be static or server-generated. For the group of controls to be valid, the following conditions must apply: 1 — The credit card must be in the list of supported cards 2 — The credit card number must be theoretically valid for its type 3 — The credit card expiration date must be the current month/year or in the future 4 — The CVV must have the correct number of digits for the card type You may have seen payment forms that require entering a card type. This is not actually necessary, as the first 4–6 digits of the credit card number are the BIN (Bank Identification Number). The card type can often be deduced from the first two entered digits with a RegEx pattern. The concept of a card number being ‘theoretically’ valid means that the card number is the correct length for the card type and that it passes a metric known as Luhn validation. A Google search will yield a substantial amount of material on this topic. For purposes of this article, Luhn validation is coded for you in a Typescript library not all that different from the JavaScript and ActionScript versions I wrote decades ago. 
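The Luhn routine itself lives in the author's accompanying library; purely as an illustrative TypeScript sketch (the function name isValidLuhn and its signature are assumptions here, not necessarily the library's actual API), the standard check looks roughly like this:

// Illustrative stand-in for the library's Luhn check, not the real API.
function isValidLuhn(cardNumber: string): boolean {
  // Strip the spaces and dashes the input mask may allow
  const digits: string = cardNumber.replace(/[\s-]/g, '');
  if (!/^\d+$/.test(digits)) {
    return false;
  }

  let sum = 0;
  let doubleIt = false;

  // Walk right to left, doubling every second digit
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = Number(digits.charAt(i));
    if (doubleIt) {
      d *= 2;
      if (d > 9) {
        d -= 9;
      }
    }
    sum += d;
    doubleIt = !doubleIt;
  }

  // A valid Luhn number sums to a multiple of 10
  return sum % 10 === 0;
}

// Example: a well-known Visa test number
console.log(isValidLuhn('4111 1111 1111 1111')); // true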
One issue with front-end credit card processing is that a theoretically valid card number with an acceptable expiration date and CVV may not actually be valid to charge. The expiration date and/or CVV may have been entered incorrectly or the card might be stolen or not yet activated. In these cases, the otherwise valid card number returns from a server check as being not valid for charges. What we hope to achieve with client-side validation is to minimize the chance that a good card comes back from a server check as invalid for simple reasons such as one digit in the card number was incorrect. The user might have chosen the correct expiration month, but forgot the year, causing the expiration date to be behind the current date. Or, perhaps the user intended to use their Visa card but mistakenly started typing their MC number. It is good UX to indicate to the user that a MC card number is being typed while the number is being entered. Correctness of a credit-card form group is a unique example of cross-field validation. For example, we can not simply state that a CVV is valid when the user has entered any three digits. The number of required digits varies based on card type, so we must first know the card type. The card type is available after a partial entry of the card number, but may become invalid if the user changes their mind, backspaces, and starts to enter another card number. The next few sections discuss how to handle the individual aspects of front-end credit-card validation and then we will see how to tie it all together in a reusable Angular Component. Credit Card Data For the code covered by this article, supported credit cards are specified by several pieces of information, indicated in the CCData Interface in the file, /src/app/shared/cc-data/card-data.ts Typically, credit card numbers are fixed-length, but some card numbers are allowed to vary between an optional minimum and maximum number of digits. Presence of these properties overrides the length property. The RegEx pattern is the minimal pattern necessary to identify the card, which usually requires the first two digits. The information in the above file is old as it does not consider MasterCard BIN numbers that can begin with 2 as of 2017. I’m willing to wager that almost every dev knows more about RegEx than I do, so modifying the pattern is left as an exercise should you wish to use this code in production. Functions For Credit Card Type and Valid Numbers There are four Typescript functions in the folder, /src/app/shared/libs, get-card-type.ts (access credit card type from current number) in-enum.ts (is the supplied value in a string Enum?) is-length-valid.ts (is the card number length valid given a card type?) is-valid-luhn.ts (does the card number pass Luhn validation?) The card type (i.e. MasterCard or American Express) is computed in get-card-type.ts and is determined by matching a RegEx pattern as the credit card number is typed by the user. Once the card type is known, it can be used to further validate the card number (in terms of proper length) and the CVV. Once the card type is determined, the correct number of digits (or range of digits) can be looked up and used to further validate the card number. The card number length (number of typed digits excluding spaces/dashes) is checked with is-length-valid.ts. While correct number of digits is a necessary condition, it is not sufficient to completely validate the card number. 
After the correct number of digits have been entered, is-valid-luhn.ts performs Luhn validation on the entered card number. If that validation passes, then we have checked the card number to the maximum extent available on the front end. The card may still be invalid to charge, but we have eliminated user entry error as a cause for an invalid card number. Identification of card type and reflection of this information in the UI also helps the user understand that they may have accidentally typed a Visa card number in when they intended to put this particular charge on a MC or Amex. Expiration Date There is not much that can be done other than validate that the card expiration is in the future. We want to catch situations where the current month is March, for example, and the user selects January as the expiration month, but forgets to select the correct year. The form is likely to be initialized with the current year as the expiration year, so this is an easy mistake to make for someone in a hurry to get through checkout :) CVV Validation The CVV is likely to be a three- or four-digit number and all we can do is verify that it is a number with the correct number of digits for the known card type. So, now that we understand how to validate each individual element, let’s see how it all comes together in an actual example. Main Form The primary form in /src/app/app.component.html represents a subset of a typical payment form containing name, address, and credit card payment information. The relevant portion of this form is shown below. <form [formGroup]="paymentForm" (ngSubmit)="onSubmit()"> <label> First Name: <input type="text" formControlName="firstName"> </label> <label> Last Name: <input type="text" formControlName="lastName"> </label> <!-- remainder of form here --> <app-credit-card [placeHolder]="'Enter Card Number'"> </app-credit-card> <button class="submit-button form-pad-top" type="submit" [disabled]="!creditCardComponent.valid">Submit</button> </form> Note that the area of the form expected to contain the credit card number, expiration, and CVV controls has been replaced by a component , CreditCardComponent (/src/app/credit-card/credit-card.component.ts), with the selector, app-credit-card. This main payment form is a typical reactive form, whose creation can be found in /src/app/app.component.ts, @ViewChild(CreditCardComponent, {static: true}) public creditCardComponent; public ngOnInit(): void { this.paymentForm = new FormGroup({ firstName: new FormControl(''), lastName: new FormControl(''), creditCard: this.creditCardComponent.ccSubGroup, }); } This demo is unconcerned with validation of first and last name. Note that layout for credit-card controls and creation of a FormGroup for that layout has been delegated to CreditCardComponent. Since all credit-card related validation is also delegated to that component, it can be easily re-used anywhere we need a set of credit-card controls inside any form. To better understand how this all fits together, let’s walk through that component in detail. CreditCardComponent The sub-form layout contains a text Input for the credit-card number (it may contain spaces or dashes), two select boxes for expiration month/year, and an number Input for the CVV. The layout also contains an area to display an image of the credit card type as soon as it can be detected from user input. Some DIV’s are provided to display textual explanation of various errors to the user. 
Note that this is not a form in and of itself; this component controls the layout of a FormGroup inside another Form — the main payment form for our example. The CreditCardComponent constructor creates the ccSubGroup FormGroup as follows, This demo illustrates one production feature I’ve been asked to implement in the past, disabling the expiration date and CVV controls until a valid credit-card number is entered. I’ve had mixed results in the past binding to the disabled attribute and setting an initial value in the FormControl constructor, so the enable/disable operations are handled via code. You could also create a separate group for these controls and enable/disable the entire group. The ccSubGroup may be accessed by a parent component since it is already public for binding purposes. This is exactly what we saw in the FormGroup creation for the main payment form, above. Before deconstructing any further, it is necessary to discuss validation strategy for this entire set of controls. As mentioned earlier, this is an interesting case of cross-control validation. Typical practice is to apply a single validator to an entire form group instead of one validator per control. Consider, however, the choreography of interaction with our group of controls. 1 — User begins typing. Credit card type is identified within a small number of digits. Change the credit card image. Card type is constant unless the user deletes enough digits to invalidate the current card. Then, card type is ‘unknown.’ 2 — Length of the credit card input is invalid until the correct number of digits is entered. Luhn validation is then applied to the card number. If that validation passes, the credit card number is considered valid. 3 — Expiration month and year may be set at any time and we can only verify that it is the present month/year or in the future. 4 — CVV can not be validated until the card type is known since the number of digits may vary. So, card number and CVV validation are coupled in that the card number specifies the card type. Now, it is possible to validate the group as a whole, but there are redundant operations such as fetching the card type. Typically, the card type can be determined with two digits. The remainder of the card type lookups are not necessary. It is also clumsy to have a control validator communicate the card type outside the validator in order to dynamically switch the card-type image. Here is such a validator should you wish to apply such an approach. It may be found in /src/app/shared/validators/card-validator.ts. Note, however, that the card type is not communicated outside the validator, so it must be looked up again outside the validator in order to dynamically switch the card-type image. My personal preference with this type of coupling between controls is to provide complete programmatic control over the validation process inside a key-up handler. This approach has the benefit of being simple, efficient, and can easily accommodate a wide variety of change requests. The first step in this process is to offload input management and validation of the credit card number to an Angular attribute directive. 
This can be seen in the CreditCardComponent template, /src/app/credit-card/credit-card.component.html, <label for="creditcard"> Credit Card <input creditCardNumber [class]="ccnClass" type="text" id="creditcard" formControlName="ccNumber" placeholder="{{placeHolder}}" (onCreditCardNumber)="onCardNumber($event)" (onCreditCardType)="onCardType($event)" (onCreditCardError)="onCardError($event)" > </label> This directive is located in /src/app/shared/directives/credit-card-number.directive.ts. To conserve space, its implementation is summarized; you may review the source code at your convenience. 1 — Three Outputs are provided, one of which indicates the credit card type. This Output is only emitted when the card type changes from its previously set type. The second Output emits the currently typed card number (which may include spaces or dashes). The final Output is emitted on any error in the credit card number. 2 — A ‘keydown’ HostListener allows only a specific set of characters and other keys such as Backspace or Delete. The allowable list is minimal, so add allowable keys as needed. 3 — A ‘keyup’ handler cycles through all the validation checks. The card type is cached as a class variable, so it is only updated when the card type changes. Luhn validation is only performed when a card number of the correct number of digits has been entered. 4 — Tests are made against a specific set of errors and, if found, dispatched to the host component. There may be more than one error, so only the first one found is emitted. Optimization Note: The HostListeners applied to keyup and keydown fire change detection on the attribute directive. For a simple credit card number input with no children, this is likely not to be an issue. Consider using RxJS fromEvent as an alternative. This is left as an exercise for the reader. This approach could be called ‘old school,’ but it is simple and efficient. Most of the validation work is performed by the utility functions listed earlier in this article. Now, we can return to CreditCardComponent. The component provides the handlers onCardNumber, onCardType, and onCardError for the onCreditCardNumber, onCreditCardType, and onCreditCardError Outputs, which deliver the credit card number, type, and errors from the card number directive. The card type is used to look up a credit card name and image that are displayed in the component through simple binding. Validation of expiration month and year is relatively straightforward. We only check that the selected expiration month and year are the current month/year or in the future. An input handler is added to the CVV Input control to check the CVV value as it is typed, <input [class]="cvvClass" id="cvv" type="number" min="1" max="9999" step="1" formControlName="cvv" (input)="onCVVChanged($event)"> Validation of the CVV currently checks the exact number of digits for the current card type. You could use an Angular ValidatorFn factory that accepts a fixed digit range (since this is known at the time the form group is constructed) and returns a ValidatorFn that checks against the digit range (a sketch of such a factory appears below). This is less exact, but more in keeping with the ‘Angular’ way of validating form controls. Note that the current treatment of the CVV Input only demonstrates validation; this control does allow decimal entry, which should be disallowed in a production control. Classes applied to various controls are also set programmatically. This allows for very detailed control over visual appearance as the user types, at the expense of having to write more code. 
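As a concrete illustration of the ValidatorFn-factory alternative mentioned above (the factory name cvvLengthValidator and the 'cvvLength' error key are invented for this sketch and are not part of the article's repository), such a digit-range validator might look like this in TypeScript:

import { AbstractControl, ValidationErrors, ValidatorFn } from '@angular/forms';

// Hypothetical factory: returns a validator that accepts a CVV whose digit
// count falls inside [minDigits, maxDigits].
export function cvvLengthValidator(minDigits: number, maxDigits: number): ValidatorFn {
  return (control: AbstractControl): ValidationErrors | null => {
    const value: string = String(control.value ?? '').trim();

    // Let a separate required-validator handle empty input; only check format here
    if (value.length === 0) {
      return null;
    }

    const isAllDigits = /^\d+$/.test(value);
    const lengthOk = value.length >= minDigits && value.length <= maxDigits;

    return isAllDigits && lengthOk
      ? null
      : { cvvLength: { requiredRange: [minDigits, maxDigits], actual: value } };
  };
}

// Usage sketch: cvv: new FormControl('', [cvvLengthValidator(3, 4)])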
Some DIV’s are optionally rendered in the layout to provide textual explanation of the current error. For example, <div *ngIf="cardError === CreditCardErrors.INVALID_LENGTH" class="form-error-message">Length Invalid</div> <div *ngIf="cardError === CreditCardErrors.INVALID_NUMBER" class="form-error-message">Invalid Card Number</div> You could also use one DIV and set the message imperatively through binding. Such options are left to you as an exercise. The component also provides an accessor that is responsible for indicating to the parent component (who owns the complete payment form) that the credit card group (or sub-form) is fully valid. public get valid(): boolean { return this.isValidCardNumber && this.isValidExpDate && this.isValidCVV; } Integrating the Sub-Form with the Main Payment Form Let’s return to the main app component, /src/app/app.component.ts and review the reactive form setup for the full payment form, public ngOnInit(): void { this.paymentForm = new FormGroup({ firstName: new FormControl(''), lastName: new FormControl(''), creditCard: this.creditCardComponent.ccSubGroup, }); } The credit-card sub-group is handled completely by the CreditCardComponent and for demo purposes, its valid accessor is used to control whether or not the Submit button is enabled. <form [formGroup]="paymentForm" (ngSubmit)="onSubmit()"> <label> First Name: <input type="text" formControlName="firstName"> </label> <label> Last Name: <input type="text" formControlName="lastName"> </label> <!-- remainder of form here --> <div>.</div> <div>.</div> <div>.</div> <app-credit-card [placeHolder]="'Enter Card Number'"></app-credit-card> <button class="submit-button form-pad-top" type="submit" [disabled]="!creditCardComponent.valid">Submit</button> </form> The creditCardComponent variable is a ViewChild that provides a direct reference to the CreditCardComponent in the template, @ViewChild(CreditCardComponent, {static: true}) public creditCardComponent; As soon as the user types in a valid credit card number, sets a correct expiration month/year, and a valid CVV, the Submit button is enabled and the CC form Inputs are marked with ‘valid’ styling. Re-Using the CreditCardComponent in Another Application It clearly takes more effort to break a large form into sub-forms and implement each sub-form with a separate component. The benefits of this approach include cleaner, simpler implementation of large, complex forms and the ability to re-use the sub-form components in another application. Suppose a second application is developed that also requires credit-card processing. Like the above example, the new payment form requires a credit-card number, expiration month/year, and CVV, so the newly created CreditCardComponent should be applicable. However, a different layout is requested, namely that the CVV field should be below the expiration month and day select boxes. After all, you know designers … they love to change things :). Let’s also presume that different red/green colors have been requested to indicate error and valid conditions. The designer does not wish to display the credit-card image indicating the current card type. So, we have two sets of changes in the new application, styles and layout. Styles are simply a matter of a new style sheet and layout requires only a different template. The internal operations of CreditCardComponent are unchanged in the new application. So, an easy way to re-use the component is to extend CreditCardComponent and overwrite the metadata. 
This is illustrated in /src/app/credit-card-2/credit-card-2.component.ts,

import { Component } from '@angular/core';
import { CreditCardComponent } from '../credit-card/credit-card.component';

@Component({
  selector: 'app-credit-card-2',
  templateUrl: './credit-card-2.component.html',
  styleUrls: ['./credit-card-2.component.scss']
})
export class CreditCard2Component extends CreditCardComponent {
  constructor() {
    super();
  }
}

Usage of the new component is illustrated by a second main application component, /src/app/app-2.component.ts. You may see the new component in action by changing the switch variable in /src/app/app.module.ts that controls which main component is used to bootstrap the application,

const example: string = 'example1';

@NgModule({
  declarations: [
    AppComponent,
    App2Component,
    CreditCardComponent,
    CreditCardNumberDirective,
    CreditCard2Component,
  ],
  imports: [
    BrowserModule,
    ReactiveFormsModule,
  ],
  providers: [],
  bootstrap: example === 'example1' ? [AppComponent] : [App2Component],
})
export class AppModule { }

Change the string 'example1' to 'example2', then re-run the application to see the new layout and styles in action. Credit-card processing and logic, however, are unchanged between the two applications. So, we maintain flexibility along with a high level of re-use of the sub-form. The alternative would have been to copy and paste template and code blocks from one application to another. Then you end up with two monolithic forms that share a lot of duplicated template sections and component code. The sub-form approach also lends itself nicely to implementation inside a monorepo framework such as Nx. Importing one component and making modifications as illustrated above is incredibly simple! I hope you have found some useful ideas (and code) in this article. Good luck with your Angular efforts!
https://medium.com/ngconf/component-based-forms-29b7e8a20cdf
['Jim Armstrong']
2020-09-30 19:46:08.047000+00:00
['Angular Forms', 'Typescript']
My Productivity Mega List | 14 Strategies
I’ve read, watched, learnt, practiced and preached productivity for a while. There’s always a steady stream of great strategies out there and I’m a huge fan. I’ve picked these strategies up along the way and use them every single day. Mind you, I could write a 10-minute piece on each of these making my case as to why they’re great. But the list is long so I kept each one short. They’re a great help to me; hopefully you’ll find at least one of them useful!

1 | The Morning Memo

At the end of every day, set out a small but substantial goal for the next day and, regardless of what you do the next day, you have to complete that task. Whatever else happens that day, you will have achieved something.

2 | The 3 Second Rule

Force your survival instinct to kick in while you leave your analytical mind behind. If you know you need to do something, count down from 3 and do it. When you get to 0, you have to do it, otherwise this rule will never work for you again. This rule will force you to throw away all the time you would have otherwise spent talking yourself out of it or procrastinating. It’s basic, it’s simple, and it seems like an ill-thought-out throwaway ‘rule’. I promise it’s not; try it.

3 | Hardest Thing First

Tackling the hardest part of your day as the first thing you do will make the rest of your day oh so smooth. Everything will come effortlessly and you’ll be exponentially more productive, able to keep the momentum going as each task gets easier and easier.

4 | Divide And Conquer

The trouble with productivity, or lack thereof, is trying to take something huge and tackle it head on. This will almost always be so daunting that you will resort to procrastination. If you divide up the work into more digestible chunks, you’ll be able to bring yourself to do the work and, one step at a time, finish the previously gargantuan task.

5 | Effort Management and Delegation

You have 1000 effort points a day. You want to spend all those effort points on the things that matter. Let’s say the optimal task to spend a single effort point on brings you $1. If task X takes 200 effort points (1/5 of your daily limit, or $200 in potential value) but it only costs you $100 to get someone else to do it, then just delegate and don’t do the task yourself. Spend the effort on what matters, delegate smaller tasks. Logically it makes perfect sense; practically, we all neglect it.

6 | Rubber Duck Debugging

You have a problem that you’re working through in your head; the issue with that is that things in your head can get a little complex and muddled. This causes you to hit an all-time productivity low. You have a persistent problem and it’s hard to work through. To fix this, you get a rubber duck and explain the problem to the rubber duck. This helps you verbalise the problem and, most of the time, when you verbalise it by using actual communication, you tend to fix the problem. (You don’t really need to use a rubber duck; an inanimate object or a disinterested friend is OK too.)

7 | The Knowledge Card Trick

Sticky notes, Trello, a whiteboard… something. Getting tasks from head to paper is a fantastic way to de-clutter the mind. There’s no better way of getting things done than actually knowing what needs to be done. And make no mistake, even though those tasks are in your head somewhere, you tend not to know about them until they’re facing you in the ‘To-Do’ column in Trello. Bonus points if you stick a deadline on each task; this does a lot of preventative work. What are we preventing, you ask? A never-ending To-Do List, I answer.
8 | The Marathon vs Sprint

You should put maximum effort into not relying on motivation. I love motivational and eye-opening quotes from better people, but that doesn’t exactly help me. Neither I, nor you, nor anyone else will achieve their goals from random, sporadic moments of motivation followed by a decline in effort. Avoid going cold turkey on bad habits and avoid going all in on good habits. You aren’t a switch, you’re a human, and you need to condition yourself by making small changes that will eventually turn into big results.

9 | The Community Architect

The old adage of “you’re the average of the 5 people you spend the most time with” has a lot of truth to it. We’re social creatures and, as intrinsically valued as we might be, we will always look for external validation, competition and support, among other things. If you’re a hustler who hangs out with drones, your hustling won’t improve, won’t be appreciated, and feeling out of place might put you in a state of mind that won’t be healthy for anyone, least of all yourself. You’re the architect of your own community, so take extra care when picking your friends.

10 | Steam, Serenity, Sports and Sex

You’re not a robot and neither is anyone else. You can’t be expected to achieve, and shouldn’t expect of yourself, perfection and pure productivity. So put away that guilt, because I know you feel it. Everyone needs to blow off steam, meditate one way or another, keep physically active and get physical as much as possible. These things aren’t a luxury, they’re requirements. If you still feel guilty, treat it like maintenance, because that’s what it is.

11 | The False Substitute

I hate Gary Vaynerchuk. Not because he’s a bad guy — he’s not, not because his content isn’t valuable — he’s got great tips, and not because of that aura of arrogance. You know, the usual reasons why people aren’t a fan. I hate Gary because, whether it’s intentional or not, an army of aspiring entrepreneurs neglect sleep and substitute it for work because of his content. Look, if you need 8 hours, don’t do 6. You don’t need to do 6. Getting 2 extra hours at the expense of losing quality for every other hour in your day is just not worth it. It’s a false substitute. I need 7–8 hours of sleep for maximum on-the-ball time. You can bet that I’m taking those hours. If you can have maximum on-the-ball time with 5–6 hours, power to you sir and/or madam. If a ‘hustler’ laughs at you for sleeping 8-hour nights, crack a smile and move on.

12 | 15 Minute Self Care Routine

Every day, I block out time for deliberate and focused self care. Making my bed, tidying my desk, shaving and showering. I’ve found that I can’t fully take care of tasks and problems throughout the day if I don’t respect myself enough to spend some time on myself. It doesn’t have to be over the top, it doesn’t have to be huge. Making it deliberate and focusing on only that will be enough to build up a small routine with huge dividends. If you haven’t walked into a tidy work space, or ended the day by retreating into a tidy bedroom, give it a try.

13 | Social Media Trading Hours

Delete all social media applications from your phone, and have a designated time in the day to check these on your desktop/laptop.
You can try setting the schedule and being disciplined enough not to look at them until the time comes, but bear in mind… Top apps tend to be owned by companies worth billions of dollars, and they tend to invest quite a bit of time and money to make you an addict through behavioral triggers and complex algorithms, neatly packed into smart UX design. Get rid of them and enjoy the extra hour in your day that you magically created.

14 | Fear Deconstruction

If your productivity around new undertakings, and your general progression toward more fulfilling things, has ever been hindered, you can blame fear. Starting a new workout regimen, trying a new diet, taking a new class are all productive. But fear of the most mundane sort will take over and stop you from doing these things. Physically write down every fear you have relating to the action. You’ll find that writing them down and taking a moment to read them will help you realise how ridiculous some of these fears are, eventually allowing you to work through them.
https://medium.com/swlh/my-productivity-mega-list-14-strategies-4bbb36e650e6
['Sah Kilic']
2019-12-11 02:06:17.066000+00:00
['Personal Growth', 'Life Lessons', 'Self Improvement', 'Life', 'Productivity']
How The Design World Is Being Ripped Apart Right Now
Now that Design is on the move from the marketing & communication departments to the boardrooms, designers are faced with a huge dilemma. The Design world is changing and designers have to make a move. Consultancy agencies are moving into the new space that is opening up between business strategy, IT development and creative design. The complex, fast-paced challenges clients are facing require a new kind of integrated approach that combines the best of business, technology, and design. In this new playing field, designers have to position themselves. They have to respond. They have to come up with an answer or get pushed out of the game. The value dilemma On the one hand, designers see the huge opportunities opening up to move into a more strategic position. On the other hand, the (business) capabilities that are required for this leap are far removed from the reason why they started with design in the first place, why they get up in the morning. This week I saw two interviews with two different people who occupy the exact opposite ends of this spectrum: Stefan Sagmeister and Stephen Gates. One is arguing for a restoration of the value we place on beauty. The other argues that, if designers let go of the idea of beauty as the goal and focus more on opening up their creative process, design can move into the most valuable value proposition ever. This is a question about value: Should Design defend the value of beauty and be less aligned with the ideas of value that companies have? Or should Design let go of the idea of beauty as a goal, focus on opening up the creative process, and become more valuable as a service to companies? Noun or verb, that is the question The dilemma circles around the idea of what design is: Is Design a noun? Is it the thing that designers make, the object? Or is Design a verb? Is it the thing that designers do, the process? The case of Stefan Sagmeister Stefan Sagmeister is a well-known designer from New York. In the discussion about beauty versus functionality, Sagmeister pushes the idea of the function of beauty. He argues that making an object more beautiful adds to its functionality. He talks about adding emotion, delight, love, and care to make objects more human, more sustainable. He also argues for keeping the process closed. He says that great designs are always made by one person, never a group. The result of his ideology is that he is able to produce designs that are stunning and make people happy. The downside of his approach is that it takes more time to create things of true beauty, the type of beauty that really adds a sustainable delight function to objects. The last 10 percent to get it perfect typically takes as long as getting from 0 to 90 percent. In software development, they call this the 90–90 rule, but it is the same in design. In a commercial world, the question is: who is going to pay for that? Culturally aware clients with deep pockets? Or the designer himself? Sagmeister is known for creating an environment that gives him the freedom to do low-paying jobs. But this space and independence come at a price. You have to be able to work with very little overhead, be creative with finances and be able to deliver super high-quality work that justifies all the effort and sacrifices you make. In the design donut, he is way over on the side of the artist. In his interview, he states that making money is the only disadvantage of his approach. Design can have a sustainable functional value, but the price is high.
Either the designer or the client has to pay the price. Sagmeister uses the following value system for designers: Beautiful products create happiness. Happiness creates value for companies because happier users buy more of the product or are willing to pay a premium. This value for companies gives beauty its value. The case of Stephen Gates Stephen Gates is the newly appointed Head of Design Transformation at InVision. He takes a totally different approach. He sees design as a set of capabilities that can be applied to more processes than the traditional design process that produces beautiful objects. He believes that creativity is going to be the stock-in-trade of designers. Complexity, pace, and the need to connect to the user create a huge need for creativity, visualization, and making things in business today. And Design is in an excellent position to meet those needs and have a huge impact on the performance of companies. He argues for opening up the process to others, becoming more empathetic to the needs and input of stakeholders and users, and being willing to fail publicly. The result is that if designers take this approach they can move to more strategic tables, have more impact, add more value to a business and charge higher fees. The downside of this approach is that all people involved have to get comfortable with being uncomfortable. This approach works if you can take the client on a journey to a place that is unknown at the start. It also requires an open attitude from designers. They have to open up their process to others. Designers also need to up their business game. They have to understand how businesses work, what challenges businesses face and how design can help. And business thinking and business language are not things most designers like. Business people are a totally different breed and most designers don’t want or are not able to cross that bridge. Gates uses the following value system for designers: The creativity designers bring makes business projects perform better. Better projects create more value for companies. This value for the business gives creativity its value. Choose your battles Both approaches need clients who are willing to pay and are willing to participate in a specific process. You either pay for beauty or for creativity. You either hand over a briefing or co-create. Both approaches ask a lot of designers. You either have to be able to produce very high-quality beautiful work or be able to walk and talk business. You either value beauty or creativity. One is moving against and the other with the grain. With the rise of Design Thinking, Service Design, Business Design and whatever you call it, the momentum seems to be towards design as a process. Fighting for beauty is going against conventional wisdom. Both positions are valid. Both positions require certain skills and talents. It’s a matter of choosing your battles and assessing your skillset. Faustian Bargain Can’t designers just add creativity to business processes and create high-quality beauty at the same time? This dilemma might seem to some like a Faustian Bargain, in which the designer abandons his or her spiritual values or moral principles in order to obtain knowledge, wealth or other benefits. Sell your Design-soul if you open up the process If you listen to Sagmeister, you could think you are betraying Design, or even the human race, if you open up your process and deliver creativity instead of beauty. He sees an opportunity for digital designers.
If they can create beauty online as he creates it in offline products, the riches will be immense. What he doesn’t realize is that it takes more than a nice User Interface to create beauty online. The service you are using also has to work properly and deliver value. And that means working with the technology department, the organizational processes, business cases, project management, corporate strategy etc. It’s useless to act as the sole genius designer in this context. Stakeholder engagement is far more important than expressing your personal idea about beauty. If the designs get too outspoken, they get in the way. That doesn’t mean there is no room and need for delight. When everything becomes more uniform, the small details in the interaction can build a connection, add personality. But in a world of Agile development, it’s tough to prioritize quirky interactions over functional features. It’s one thing to design something, it’s another to get it built. Extraordinary design not only takes more time to design, but it also takes more time to get built. You have to make a business case for the additional effort, you have to show the added value or pay for it yourself. Sell your Design-soul if you stick to beauty If you listen to Gates, you might think you are doing Design a disservice if you care about beauty. He sees an opportunity for designers that want to open up their process and use creativity as their service. The problem with this approach is that if you neglect beauty, the creativity of the designer becomes less powerful. Beauty gives the designer his power. Beauty creates the engagement, the inspiration, the energy that enhances the performance of projects. If you can incorporate the function and value of beauty in the creative process, your process becomes more powerful. The synergy model I believe the true power of the designer lies in the combination of the two value systems for the designer mentioned above. If the designer can provide both beauty and creativity, he can meet both the needs of the business: happiness and performance, the soft and the hard. And he can use both parts of his skill set: creating objects and the creative process. This way the designer can have his cake and eat it. Some designers might operate more on the process side, but they can deliver more value if they also use the beauty value systems a bit. Other designers will want to work on the object side. But their designs will also benefit from focussing more on an open process and co-creation. This way you have access to better questions, and deeper insights.
https://medium.com/design-leadership-notebook/how-the-design-world-is-being-ripped-apart-right-now-efe9f01a33d8
['Dennis Hambeukers']
2018-09-28 05:44:07.800000+00:00
['Synergy', 'Design Thinking', 'Design', 'Beauty', 'Design Leadership']
Atlas Coughed
Atlas Coughed Donald Trump has steadily turned masks into symbols — not of government overreach, but of governmental impunity. Photo: Drew Angerer/Getty Images By Megan Garber During Wednesday evening’s vice-presidential debate, as he refused to acknowledge that climate change is an existential threat and to agree that he would accept the results of the upcoming presidential election and to elaborate on the Trump administration’s alleged plan to ensure that Americans will continue to have health care during a raging pandemic, Vice President Mike Pence uttered the following line to Kamala Harris: “Stop playing politics with people’s lives.” The wrongness of the comment was made even more acute by the events that followed the debate: After the event concluded, Pence was joined on the stage — outfitted with plexiglass, in weak acknowledgment of the fact that politics is people’s lives — by his wife, Karen. She was pointedly not wearing a mask. Pence then posed for pictures with fellow Republicans. None of them wore a mask. “Shocking but not surprising” has been a constant refrain of the Trump years, as applicable to the fact that leaders who reject science have failed to control the pandemic as it is to the fact that the president himself, last week, was taken to the hospital with a case of COVID-19. But maskless Trumpists are, at this point, neither shocking nor surprising. Their particular brand of medical vigilantism is, on the contrary, entirely logical. Donald Trump and those in his orbit have spent months insisting that wearing a mask is not what it is in reality — a simple act of solidarity and public health — but is instead a symbol, laden with meaning. To wear a mask, they have suggested, is to capitulate. To wear a mask is to engage in empty performance. (“Every time you see him, he’s got a mask,” the president said of Joe Biden during last month’s presidential debate. “He could be speaking 200 feet away from them and he shows up with the biggest mask I’ve ever seen.”) To wear a mask, perhaps above all, is to betray the leader. In June, Trump suggested that people were wearing masks not to keep others safe, but to signal their disapproval of him. Earlier this week, Trump returned to the White House after his stay at Walter Reed National Military Medical Center. One of the first acts the still-contagious president engaged in was to remove his mask, dramatically, on the balcony of the South Portico. People made jokes about Evita; the photographer who was on the balcony to capture Trump’s return was likely much less amused. After Wednesday’s vice-presidential debate between Senator Kamala Harris and Vice President Mike Pence, Pence’s wife, Karen, was pointedly not wearing a mask. Photo: Alex Wong/Getty Images Masks, in this conception of things, are not tools of public health; they are weapons in America’s ongoing culture war. They cannot be understood outside the context of partisanship, of factionalism, of “playing politics” — of battle. The absurdity of it all is also a tragedy: Trump and his allies are now trapped in a flawed argument of their own making. They are reluctant to wear masks precisely because they have spent months insisting that masks are not objects, but signals of virtue (“virtue,” apparently, being territory they have ceded to the other side). They have caught COVID-19 to own the libs. And now they are exposing more people — people who often have little choice in the matter — to a virus that is spread through human breath. 
It is a profound dereliction of duty and of basic decency. It did not have to be this way. In June, Adam Aron, the CEO of AMC Theaters, made an announcement: The chain’s Cineplexes, he said, would not require patrons to wear masks when the theaters gradually reopened to the American public. AMC, Aron explained, didn’t “want to be drawn into a political controversy.” (Aron, his announcement having drawn him into a political controversy, later reversed that decision.) Also that month, as Trump was preparing for an in-person rally in Tulsa, Oklahoma, MSNBC interviewed a man who was waiting to attend the event. The man was not wearing a face mask. The reporter asked him why. “We had a friend who died from COVID, and his son was on a ventilator; he almost died,” he explained. “So we know it’s real. But then at the same time, you don’t know what the facts are; you feel like maybe one side plays it one way and the other side plays it another.” So the man understood, intimately, the threat of the virus; he chose not to wear a mask anyway. And he subjected everyone around him to the same logic. This is another way the Trump administration has eroded norms; the norm in this case is community itself. Notions of shared responsibility, of common fates, of mutual compassion all crumple under the weight of a mask that has been weaponized. You can see that erosion, as well, in many people’s insistent gendering of masks — as objects that are symbolic, allegedly, of the assorted weaknesses of femininity. (“Might as well carry a purse with that mask, Joe,” the conservative commentator Tomi Lahren scoffed earlier this week, in response to a video the Democratic presidential candidate shared of himself wearing a mask.) You can see it in messaging that treats Donald Trump’s catching of the virus not as the result of profound recklessness, but instead as an excuse for an embattled president to prove his mettle on the field. Trump is “a true warrior,” Eric Trump tweeted, after his father was diagnosed with COVID-19. “You are a warrior and will beat this,” Ivanka Trump agreed. Kelly Loeffler, the senator from Georgia, tweeted a manipulated clip from a World Wrestling Entertainment publicity video of Donald Trump tackling the WWE chair, Vince McMahon, in 2007: Superimposed over McMahon’s face is an image of the coronavirus. The Fox News contributor Greg Gutfeld found the ultimate twist of the logic: Trump, he suggested, got sick as a brave and selfless sacrifice for the rest of us — a leader leaping off his steed to do his fighting on the ground. “He didn’t want America to hide from the virus,” Gutfeld said. “He was going to do the same thing; he was going to walk out there on that battlefield with you.” The effect of it all is to obscure the selfish recklessness that caused the president of the United States to contract the virus in the first place. War, as rhetoric, emphasizes the clash rather than the cause. If you can move the conversation toward the war itself, you might give people permission to forget why the battle is being waged in the first place. And you might be able to shift the terms of the discussion away from the sweeping failures of the federal government and toward the familiar reductions of the culture war. You can erode the matter down to the basics of “personal responsibility” versus “governmental intrusion.” You can break the rules, openly, and justify doing so under the logic that the rules do not apply to your side of the war. 
“I will put a mask on when I think I need it,” the president said during his in-person debate with Biden, when he might well have been carrying the virus. The American president was very possibly putting his opponent, and the many others in that room, in mortal danger. He was playing politics with people’s lives. Trump is, in that way, out of step with his constituents: In a recent poll, 74 percent of Americans said that they always wear a mask when out in public. The message of the president’s bare face is clear: It refuses to acknowledge the preferences — or the needs — of other people. To go maskless in this moment is to flout the rules, proudly and wantonly and dangerously. It is to declare war against other Americans when the enemy at hand is a virus that threatens bodies on a bipartisan basis. The president’s maskless appearances this week have been extensions of all the other times he has broken the rules, whether they relate to his finances or his judicial appointments or his lies or his performative cruelties. They have been preening displays of impunity. And they are reminders that war itself can be weaponized, as a justification for things that would never be permissible in times of peace. Yesterday morning, Mike Lee, the Utah senator, tweeted, “Democracy isn’t the objective; liberty, peace, and prospefity [sic] are. We want the human condition to flourish. Rank democracy can thwart that.” Lee was laying out a framework through which “rank democracy” — which is to say, the will of the people as demonstrated through their votes — might be rejected. His tweet was reckless and deeply revealing. It is unsurprising, and not shocking at all, that Lee wrote that tweet while infected with the coronavirus: He was at the White House event thought to be responsible for the outbreak among high-ranking members of the government. At that event, Lee chatted with fellow guests and hugged them. He was not wearing a mask.
https://medium.com/the-atlantic/atlas-coughed-9407704d4649
['The Atlantic']
2020-10-09 18:20:41.964000+00:00
['Donald Trump', 'Politics', 'Masks', 'Covid 19', 'Coronavirus']
MagicOnion — Unified Realtime/API Engine for .NET Core and Unity
Interface that is strongly typed by C#

By using a shared C# interface between a server and a client, both client-to-server and server-to-client method calling is strongly typed. For example, let’s say that the following interface and class will be shared. By having both the server and the client share these, error-free communication can be established between them simply by implementing this interface on both sides. In this way, there is no need to generate code from an intermediate language, and methods can be called over the network just by calling them (even with multiple inputs or primitive-type variables) in a manner that is coherent with C# syntax. Of course, it supports autocompletion.

An actual implementation is outlined below. The server implements an interface defined as IGamingHub. It is all done asynchronously (tasks relaying return values are asynchronous). Values can be returned (if an exception is caught, it will be relayed to the client as such). Grouping by Group makes it possible to send to the clients in a group using Broadcast(group). The client side can receive data broadcast from the server by implementing an interface defined as IGamingHubReceiver. Also, IGamingHub itself acts as a network client that is automatically implemented on the server.

As everything is strongly typed as C# variables, the IDE’s refactoring tracks changes to a method’s name and its inputs on both the server side and the client side. An incomplete implementation results in a compile error, allowing you to spot and fix it. String-free communication improves efficiency (method names are automatically converted to ID numbers, so no string is sent). Primitive-type variables can be sent in a natural manner (there is no need to wrap them in a designated request class). When using Protocol Buffers, you need to manage .proto files (IDL: Interface Definition Language), worry about how to generate code from them, etc., but as long as everything is written in C#, none of this occurs.

Zero deserialization mapping

In RPC, especially in real-time communication involving frequent transmission of data, it is often the serialization process, where data is converted before being sent, that limits performance. In MagicOnion, serialization is done by my MessagePack for C#, which is the fastest binary serializer for C#, so it cannot be a limiting factor. In addition to performance, it also provides flexibility regarding data, in that variables of any type can be sent as long as they can be serialized by MessagePack for C#. Also, taking advantage of the fact that both the client and the server run on C# and data stored in internal memory is expected to share the same layout, I added an option to do mapping through a memory copy, without serialization/deserialization, in the case of a value-type variable. Nothing needs to be processed here, so it promises the best performance theoretically possible in terms of transmission speed. However, since these struct-type variables need to be copied, I recommend handling everything as ref as a rule when you need to define a large struct type, or it might slow down the process. I believe this can be easily and effectively applied to sending a large number of Transforms, such as an array of Vector3 variables.
Why gRPC’s Bidirectional Streaming is not enough

gRPC comes standard with Bidirectional Streaming, which implements bidirectional communication. In fact, the streaming RPC of MagicOnion is constructed upon Bidirectional Streaming.

// Bidirectional Streaming definition by proto
rpc BidiHello(stream HelloRequest) returns (stream HelloResponse);

However, it is difficult to use Bidirectional Streaming as an RPC for real-time communication for many reasons. The biggest reason is that, since it is not an RPC at this point, after a connection is established, the Request/Response defined using oneof (one type containing multiple types) must be manually branched to the method that needs to be called. That may be feasible, but there are still many hurdles. For example, the client cannot wait for the server to complete an operation (once the request is sent, the next line of code is executed). Not being able to wait for the response means that the client cannot receive return values or exceptions. And there is currently no way to bundle multiple connections. Even if you construct a system to handle these issues, you can never escape from the template of Bidirectional Streaming that is generated by proto, so it messes up the code. While MagicOnion’s StreamingHub uses Bidirectional Streaming to establish a connection, it communicates using a unique lightweight protocol within this communication frame, realizing an RPC for real-time communication that feels natural to C# developers.

Why I chose a distributed model and gRPC

In contrast to other real-time communication engines for Unity, MagicOnion itself does not have its own load balancer. There are several strategies to realize distributed processing, and I recommend using cloud platforms or other pieces of middleware. For example, when hosting in internal memory completely independently, one way is to have an external Service Discovery/Matching Service decide which server to use. Another is to distribute the load completely using a TCP load balancer while delegating the process of broadcasting by Group to Redis, which makes it possible to send data to clients connected to different servers. This function comes standard with MagicOnion as MagicOnion.Redis. This is suited to implementing chat functionality, notifications, etc. Also, much like gRPC itself, MagicOnion is suited to implementing what are called Microservices, so you can build a server-to-server connection and construct a server-to-server RPC structure. Now, MagicOnion is built on gRPC, but it completely ignores the need to provide language-independent RPC using .proto, which is its most notable characteristic. Moreover, the fact that network communication is limited to HTTP/2 (TCP) does not necessarily make it ideal for creating games. However, there are good reasons why I chose gRPC. One reason is the maturity of the library. There are almost no other communication libraries that support server/client implementation including Unity, and the core part (gRPC C core, which is shared across all languages) is used by almost all developers, including Google, which means it is highly stable. It may be possible to implement an original communication library composed of parts that are specific to communication in games, but ensuring stability from the ground up is not an easy task. Do not reinvent the wheel, right? However, I am not satisfied with the C# binding in gRPC in terms of performance. That is why I think it may be a good idea to keep using gRPC C Core while completely replacing the C# binding.
At least, if it is limited to the Unity side (client communication), I believe it is both feasible and effective. Another reason is the ecosystem. gRPC has established itself as the de facto standard for modern RPC, so it is supported by many servers and pieces of middleware. HTTP/2 and gRPC being industry-standard protocols, there are many advantages to using them, such as using them with Nginx or request-based load balancing by Envoy. Also, there are many blogs and slideshows providing information on gRPC, which makes it easier for developers to build a better system. MagicOnion has an original application layer built into it, but its infrastructure is gRPC, so any piece of middleware or any shared knowledge can almost always be applied directly. I believe that a modern server should have a cloud-ready architecture, and that a system that fully utilizes infrastructure and middleware supplied by a cloud provider has a better chance of performing well than a system that attempts to do everything by itself. Therefore, the framework that deals with the infrastructure should be lightweight, composed of essential functions only.

Supporting API communication

The goal of MagicOnion is to be a Unified Network Engine. What I mean by “Unified” here is not that both the server and the client use C#, but that the real-time communication system and the API communication system are unified. The API communication system shares the same interface, and is designed to generate client code automatically if a method is defined using C# syntax. Also, with API communication, everything about the framework is thoroughly made asynchronous and non-blocking. What makes this look almost natural is the async/await feature provided by the C# language itself. It also comes with a filtering function that hooks the execution before and after a request is made, which also contributes to the natural asynchronous processing. The filter can also be used with StreamingHub in the same manner.

Swagger

It is hard to check whether APIs are working properly: it is not easy to debug from Unity all the time, and gRPC cannot be debugged using a tool like Postman. Therefore, I designed MagicOnion so that it automatically generates API documents that can be executed by Swagger. As MagicOnion acts as a hosting HTTP/1 server, there is no need to set up an external proxy server, and all you need to do is add several lines of code to the part that handles launching. This is all you need to do to be able to check if the APIs are working properly, and just by defining debug commands as APIs, they show up on Swagger, so it may be possible to easily prepare commands that operate on the database for debugging. StreamingHub does not support it at the moment, but I am planning to make a WebSocketGateway that connects WebSocket and MagicOnion.

Deployment and hosting

In the past, the biggest issue on the C# server side was how to deploy and how to host. It was, after all, running on a Windows Server. The fact that gRPC is not IIS-based made things even more difficult. However, now it is easy. If you make a container using Docker, there is nothing special about doing things using C#. There is nothing complicated about turning a MagicOnion application generated by .NET Core into a container. In fact, it is quite easy (as it is just a .NET Core console application). Once it is done, all you need to do is to deploy it inside a Linux container. It does not matter where, whether it be ECS, Fargate, GKE, or AKS.
There are many online articles on this, and you can apply their practices directly. Making a container, when it comes to C# today, is not really about constructing a local environment. It is about easily carrying things into development/deployment environments and allowing people who are not particularly familiar with C#/Windows to build on rich infrastructure knowledge without learning anything special. That, I think, is the largest advantage.

Conclusion

You can start using MagicOnion just for real-time communication, and you can also use it for API communication, which will perform really well. As it supports fast transmission of data and data serialization designed with compression in mind, it will make all your communication-related worries go away. Also, as async/await is utilized in Unity, it may serve as a gateway to incorporating the latest C#. As a real-time communication framework, it only provides Client-Server RPC. However, that is the only thing you need, and you can build all other functions yourself. (It depends, but generally speaking, it will not require much work.) Free of all unnecessary functions, I believe that it ensures the best coding experience when it comes to RPC. (I wish I could say the best performance as well, but there are a few things that can be improved in the way it handles the gRPC C# binding, so I hope I will be able to say that when I release the next version.) Also, as it is an independent closed system, you can, for example, use it to exhibit VR/AR content just by keeping the server running within the same LAN, even if that network has limitations…! I hope you will give it a try. I hope to be able to keep writing about MagicOnion, as well as how things are going with UniRx, UniRx.Async, MessagePack for C#, etc., on this blog.
https://neuecc.medium.com/magiconion-unified-realtime-api-engine-for-net-core-and-unity-21e02a57a3ff
['Yoshifumi Kawai']
2019-02-28 05:39:12.169000+00:00
['Unity3d', 'Csharp', 'Programming']
Planning and Implementing a Government Focused Market Development Fund (MDF)
The Insider Perspective: [Credit: This program is based on ScaleUP USA’s Successful Corporate and Government Sales Acceleration Program] In my 25+ year technology career, I have worked in leadership positions in Federal, State, and local governments. During this time I have been “bombarded” with thousands of pieces of (useless) marketing material and sales calls. The offices I led have bought billions of dollars worth of technology products and services. Unfortunately, I have observed that 9 out of 10 sales calls lead to no follow-up wins, frustrating both the seller and the buyer. More importantly, the marketing campaigns and promotions that companies direct toward their federal and government clients and prospects are often outdated in their approach and meaningless. The question is: why is this happening, and can it be corrected? This situation has arisen for several reasons. First, government marketing and sales campaigns are currently run based on the seller’s perspective of the buyer and not on the buyer’s insight or needs, and this needs to change. Second, government marketing and advertising budgets are utilizing older methodologies and tactics which do not work in the hyper-intensive, heavily multi-tasked government world. When you have a hundred unread emails, the possibility that you will open and read a white paper attached to an email blast and then call up the seller is unrealistic at best! Essentially, plain old government advertising and marketing is dead! Relationship- and trust-building focused edusales is very much in vogue! Change your government campaigns or be left behind. This is especially true if you are targeting large federal, state or local government accounts for sales. Best Practice #1: Build government marketing and sales campaigns based on government buyers’ needs and not the seller’s perspective of the government buyer! What has this got to do with the deployment of business to business (B2B) or business to government (B2G) Market Development Funds? Everything. Each year tens of billions of dollars worth of these funds are utilized to run ineffective government advertising and marketing campaigns and are wasted. What if we could use some of the best practices to improve their impact so all OEMs, channel partners, and buyers get the results they seek? Traditional Advertising and Marketing (Old) vs. Relationship Building Advertising and Marketing (New) Market Development Funds Vs. Co-Op Funds: Before we get into the details of best practices in implementing government Market Development Funds (MDF), I want to address the difference between MDFs and Cooperative Advertising Funds (Co-Op Funds). Market Development Funds are incentives made available by Manufacturers and Brands (OEMs) for channel partners like distributors, value-added resellers, affiliates, and others to create targeted local market awareness and to generate sales leads. MDFs are typically provided before the actual marketing program is implemented by the channel partner. If you are an OEM and don’t have an MDF program, set one up. You could get something going for as low as $25K per year, though larger amounts are much preferred. Cooperative advertising funds are typically promotional reimbursements made to channel partners like distributors, value-added resellers, affiliates, and others after implementation of the actual marketing program, based on some pre-agreed sales performance criteria and implementation guidelines.
BEST PRACTICE #2: We suggest that government MDFs generally work better than government Co-Op funds, as they are forward-looking and the OEM has a better opportunity to strategize execution with the channel partner before implementation, versus Co-Op programs, which are historical in nature, where the money is already spent and there is not much the OEM can do to improve performance while reimbursing the amount to the channel partner. Don't Get Left Behind. The Digitization of Biz Dev is Here! Government Market Development Funds, Funding Mechanisms Government MDFs can be distributed through various mechanisms. These range from flexible funding (the channel partner controls fund utilization) to focused funding (the manufacturer controls fund utilization). Funding can also be based on the channel partner’s past success (peak performer, average performer, and low performer). BEST PRACTICE #3: Typically, government MDFs are distributed either as a standard or fixed allowance or as a full stipend that the channel partner can spend on the OEM’s pre-approved programs. This is the ideal option and a best practice, especially when the stipend covers all the costs associated with the channel partner’s specific marketing program. What is critically important here is to make sure the channel partners not only have the necessary funds but also have the necessary expertise to build and conduct a modern, relevant, and impactful government marketing campaign, or else a half-baked government marketing program will emerge which will benefit no one. The latter happens all the time. Typically, the OEM or channel partner may work with a third party like ScaleUP USA to ensure that a proper campaign is developed and implemented. Typical impactful campaigns can cost upwards of $25K per channel partner per year and the developed program can have a shelf life of about a year — about the same cost as participating in a trade show, but with year-round publicity and results! We at ScaleUP don’t like “quick spend” techniques like a significant expenditure at a trade show. Such activity, unfortunately, does not build the relationship with the buyer (prospect) upfront, which is very critical for large B2B or B2G selling. Alternatively, the OEM may distribute the Market Development Funds as a subsidy or a discount for the channel partner’s marketing and promotions expenditure. This is not recommended, as the balance needs to come from the channel partner and that can constrain the proposed campaign, unless the OEM needs to stretch their MDF and is willing to risk their share of the funding. Finally, the other option is that the OEM enables a rebate or a payback after the channel partner has spent the money. We at ScaleUP USA are not fond of the rebate or payback option at all — as you now have a problem motivating the channel partner to pay upfront first and then fill in the paperwork for reimbursement. This one is a loser! Government Market Development Funds, Focus, and Clarity are Important. While developing the MDF strategy, clarity of objective is critical for the MDF program designer. As a best practice you should ask the following questions while defining this program: Why: Why are we deploying these MDF funds? Who: Who will these MDF-funded promotions target? When: What should be the timing for this targeting? Where: What is the geographic location or category of prospect targeting? How: How can the MDF funds be used for maximum impact? What: What exactly will you do with the funds?
Result: What specific result are you expecting from the program? BEST PRACTICE #4: Based on ScaleUP USA’s assessment, it is critical that these government Market Development Funds MUST deliver value to the OEM, the channel partner and the client or the government prospect. If the value equation for any of the three is not delivered, the marketing promotion will not succeed. It is, therefore, critical that the OEM ask for and the channel partner submit a government MDF utilization plan. We typically can work with either the OEM or the channel partner or both to develop and implement this MDF program. Single Company Road Shows Often Result in Poor Returns Due to the “Selling” Focus! Traditional Government Market Development Funds Usage Models: Traditional government marketing campaigns include email blasts, newsletters, social media, white papers, face-to-face events, webinars/webcasts, online presentations, research reports, blogs, articles, micro-sites, videos, advertising, and infographics. BEST PRACTICES #5–11: Whatever you do use as your medium, here are some best practices relating to the options: #5: MUST DELIVER VALUE to the OEM, the channel partner and the client or the government prospect. #6: STOP SELLING your company, products or services. Start educating and influencing the government prospect or client and build a relationship. #7: INDEPENDENCE of the message and the messenger matters to the government prospect. Use a neutral brand to promote the story. #8: TRUST BUILDING should be the major aim of the government MDF campaign. Build trust in your company, products, services, and channel partners. #9: LONGER SHELF LIFE is better. Quick start-stop activities like hosting events and attending trade shows are costly and less impactful over the long run for government marketing campaigns. #10: CONTINUITY IS CRITICAL. Build a government MDF campaign that will continue steadily and grow over time by building on past success. #11: MEASURES OF SUCCESS for the government MDF program should be defined, measured, and reported on regularly. Challenges in Government Market Development Funds Deployment: As I mentioned earlier, during my long technology career I had oversight of buying billions of dollars’ worth of technology products and services and therefore attended a large number of sales calls and promotional events. I recognized then that large government organizations do not buy because of the sales calls a seller makes, but because the organization has an emergency, need, challenge or opportunity and is convinced the seller can address the problem successfully (think trust). Unfortunately, making the precise match between the exact buyer problem and the needed seller solution is very difficult, so market development is key too. BEST PRACTICE #12: We suggest you focus on how “organic problem solving” happens in large organizations and then focus on educating the problem solvers, influencers, and decision-makers early and often on how your company can best solve their problem using digital “edusales” means. The digitization of business development is happening. Don't get left behind. Start by digitally building internal champions, create communities of interest, and connect them to your company, products, and services. You must become your government buyer’s trusted advisor even before you ever speak to them! In the process, you will get the much-needed government exposure, brand awareness, and timely business leads for your company.
This is the core focus of “buyer-driven sales methodology” discussed in the ScaleUP USA’s Corporate and Government Sales Acceleration program. About the Author: Nitin Pradhan is an educator, technologist, and social entrepreneur based in the Washington DC metro region. He is the former award-winning CIO of the US Department of Transportation and an Obama Appointee. He leads the ScaleUP USA’s Digital Business and Career Growth Accelerator including the Federal Business Accelerator, the Federal Career Accelerator, and the Career Trajectory programs. Business and government leaders can connect with him on LinkedIn.
https://medium.com/launchdream/12-best-practices-for-oems-and-channel-partners-in-implementing-a-market-development-fund-mdf-ae2b6702b2f9
['Nitin Pradhan']
2019-12-02 18:09:01.308000+00:00
['Marketing', 'Digital Marketing', 'Advertising', 'Government', 'Sales']
For everyone who survived the election season, here’s a special treat.
For everyone who survived the election season, here’s a special treat. We had too many great stories about embracing life as we age to fit in our last newsletter, so you’re getting two batches of Crow’s Feet stories in one week. This is 40. 42 to be exact by Jen Kleinknecht The True Secret to Happiness. Ask an ancient person by Julia E Hubbel Was it Dementia? Or Just Stress. Adventures in Aging by Roz Warren The Day I Realized I Was Turning Into My Mother. It happens to all of us — despite our best efforts by Rose Bak Twenty Years. A high school reunion and retirement story by Dennett Unfinished Embroidery. Poignant Reminders by Thewriteyard For My Granddaughters.“You can be anything you want to be.” By Ann Litts Now Is The Time To Do What You Really Want To Do. Because today is yesterday’s tomorrow by Bev Potter The Analogy. A pandemic story by Dennett A Seventyish Woman Has a Dream for America. I think the strongest feeling I have about the results of the election is disappointment in almost half of my fellow Americans by Jean Anne Feldeisen Thankfully the Election is Over. Now it’s time to be good citizens by Jean Anne Feldeisen November Coming. Moody light toys with autumn colors by Jean Anne Feldeisen Two Chances to Get New Ideas About Aging. Now that it’s snowing, I need stimulation inside my house by Nancy Peckenham The Gift of Temporary Blindness. How badminton changed the way I see by Max K. Erkiletian The Freedom of Age. Getting More Colorful Over Time by Max K. Erkiletian Tiny Love Stories. Big feelings in a few well-chosen words by Cindy Shore Smith STRESS! What it does to your mind and body and how to stop it! By Jo Ann Harris You’re Not Funny, You Old Fart. Cautions for Hoary Humorists by Randy Fredlund Give a gift filled with ideas about to get the most out of later life, Crow’s Feet: Life As We Age, now in paperback and ebook on Amazon, Barnes & Noble and Bookshop, where sales benefit your local independent bookseller.
https://medium.com/crows-feet/for-everyone-who-survived-the-election-season-heres-a-special-treat-7396bd92bac4
['Nancy Peckenham']
2020-11-11 12:18:04.759000+00:00
['Healthy Lifestyle', 'Aging', 'Seniors', 'Wellness']
Increase Your Article Views 25% By Writing For a 6th Grade Reading Level
Reading Level Is the Key Have you ever checked what reading level your writing is suitable for? In this day and age of short attention spans (did you just see that blue car drive by?!), it’s getting harder and harder for people to focus. This extends to the complexity of reading levels as well — more and more, people are unwilling to sift through advanced prose to get to the information they’re after. If you’re writing on a blog or other platform, that means you might be writing at a non-optimal level for your potential readers. Ryan McCready did an in-depth analysis of popular stories and found that writing for a sixth-grade reading level increased recommendations by a whopping 25%. That’s a big increase for just changing a few words here and there. The NN Group ran an interesting study comparing language complexity on a pharmaceutical website. They found huge increases in understanding by changing the text to be easier to understand. Their reasoning? They geared the text towards simpler levels (grades five through six) to cater to lower literacy individuals — apparently 40% of the US population. This chart from Shane Snow at Contently further amplifies the point. Image by Shane Snow at Contently If you’re writing at an eighth or ninth grade level as I have been, you’re missing out on potentially half of your readers! Still not convinced? Even Google apparently uses the ease of reading as one of the factors in its ranking engine. You could see significant gains by adjusting your writing to a fifth or sixth-grade level.
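If you want to check a draft yourself, one widely used measure is the Flesch-Kincaid grade level. The sketch below is a rough TypeScript illustration; the syllable counter is a crude heuristic, and the studies cited above did not necessarily use this exact formula.

// Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59.
// A result near 6 corresponds to roughly a sixth-grade reading level.
function countSyllables(word: string): number {
  const w = word.toLowerCase().replace(/[^a-z]/g, '');
  if (w.length === 0) { return 0; }
  // Count vowel groups, ignoring a trailing silent 'e' (rough heuristic).
  const groups = w.replace(/e$/, '').match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}

function fleschKincaidGrade(text: string): number {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
  const words = text.split(/\s+/).filter(w => w.length > 0);
  const wordCount = Math.max(1, words.length);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  return 0.39 * (wordCount / sentences) + 11.8 * (syllables / wordCount) - 15.59;
}

// Example: fleschKincaidGrade('The cat sat on the mat. It was happy.') comes out well under 6.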
https://medium.com/better-marketing/increase-your-article-views-25-by-writing-for-a-6th-grade-reading-level-6b862153d654
['J.J. Pryor']
2020-03-18 04:30:35.180000+00:00
['Writing', 'Reading', 'Advice', 'Inspiration', 'Culture']
The Future of Data Science, Data Engineering, and Tech
The Future of Data Science, Data Engineering, and Tech 6 experts’ views on tech in 2021 Photo by Kelly Sikkema on Unsplash. As 2020 comes to a close, we wanted to take a moment to reflect on all the changes in technology as well as look to see where things are going. Whether you are looking at startups and their IPOs, improvements in technology, or you paid attention to Amazon re:Invent, we saw a year filled with companies continuing to try to push boundaries. A personal favorite announcement from 2020 was AWS’s SageMaker Data Wrangler that is designed to speed up data preparation for machine learning and AI applications. This seems like a great move towards having more fluid machine learning pipelines that will hopefully further make machine learning more accessible to companies not focused on tech. But 2020 is ending, so we asked people from various parts of the tech world to provide their insights into what they were looking forward to in 2021 — whether that be new startups, technologies, or best practices. Let’s see what they had to say.
https://medium.com/better-programming/the-future-of-data-science-data-engineering-and-tech-7f0a503745fd
[]
2020-12-17 18:33:07.658000+00:00
['Machine Learning', 'Python', 'Python3', 'Data Science', 'Programming']
The Stack That Helped Opendoor Buy and Sell Over $1B in Homes
The Stack That Helped Opendoor Buy and Sell Over $1B in Homes Originally posted on StackShare About Opendoor Unless you’re in San Francisco or New York, selling your home is a giant headache that typically lasts three months. Opendoor removes the headache — go online, ask for an offer, answer a few questions and we’ll buy your home directly from you. We’ll take it from there and deal with selling the home to another buyer while you can go on with your life. Right now we operate in Phoenix, Dallas, and Las Vegas. We’ve completed over 4,800 real estate transactions — over $1B in homes. For a company about to turn 3 years old, it’s pretty crazy how far we’ve come. There’s a lot that goes into one real estate transaction. First, there’s what you might consider our core engineering challenge: making an accurate offer on each home. If we offer too much, we’ll lose money and go out of business; if we offer too little, we’ll seem like scammers and offend our customers. After we buy the home, we’ll work with contractors to do any necessary repairs and touch-ups, then put the home on the market and find a buyer. Since we own every home, we can do clever things like putting smart locks on all the doors and offering all-day open houses. I’m a frontend engineer, and mainly like to work on the consumer-facing website. I’m currently working on improving the experience for first-time home buyers. The process can be really scary for people who don’t know anything about real estate. Engineering Organization Our team is split between product engineering and data science: the tools used by each team are different enough that the teams work in separate code bases. Of course, the resulting product has to be well-integrated, and the product team pulls a lot of data from data science APIs. This coordination is tricky to get right; Kevin Teh from the data science team wrote about it in some detail in a recent post. At first, we split the product team into “customer-facing” and “internal tools” groups. It was nice to have all the frontend engineers on the same team, but we noticed that some projects didn’t have clear owners. For example, our buyer support team uses some internal tools we’ve built. Should those tools be developed by the “internal tools” team, or is support part of the customer experience? Now the team is split into cross-functional teams based around parts of the business. The Seller team handles people selling to us; the Homes team handles renovations and inventory; and the Buyer team puts our homes on the market and finds buyers. As we grow, the lines between teams often get blurry, so we expect that the structure will always be evolving. It’s common for engineers to move between teams, including between the product and data science teams. Product Architecture We started in 2014 with a Ruby on Rails monolith and Angular frontend, both of which were good ways to move fast while we were very small. The MVP of our customer-facing product was a multi-page form where you could enter information about your home to get an offer, but that was just the tip of the iceberg. We had to build internal tools to help our team correctly price homes and manage the transaction process. We used Angular and Bootstrap to build out those tools; the main goal was to add features quickly, without fiddling around with CSS — in fact, without requiring any frontend experience at all. We use Puma as our webserver, and Postgres for our database — one big benefit is the PostGIS extension for location data. 
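The post doesn't show Opendoor's actual queries, but to illustrate why PostGIS is handy for location data, a comparable-homes lookup might look roughly like the sketch below. The table and column names are invented for the example; the only real APIs assumed are psycopg2 and PostGIS's ST_DWithin.

```python
import psycopg2

# Hypothetical schema: homes(id, price, sold_at, geom geography(Point, 4326))
conn = psycopg2.connect("dbname=homes_example")

def recent_comps(lon: float, lat: float, radius_m: int = 1600):
    """Return homes sold in the last 90 days within radius_m meters of a point."""
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, price, sold_at
            FROM homes
            WHERE sold_at > now() - interval '90 days'
              AND ST_DWithin(
                    geom,
                    ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                    %s)
            ORDER BY sold_at DESC
            """,
            (lon, lat, radius_m),
        )
        return cur.fetchall()

print(recent_comps(-112.07, 33.45))  # a point somewhere in Phoenix
```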
Sidekiq runs our asynchronous jobs with support from Redis. Elasticsearch shows up everywhere in our internal tools. We use Webpack to build our frontend apps, and serve them using the Rails Asset Pipeline. We use Imgix to store photos of our homes, as well as most of the icons and illustrations around our site. We mainly use Imgix’s auto-resizing feature, so we never lose track of our original images, but can later load images of appropriate size for each context on the frontend. Monolith to Microservices Where appropriate, we try to break isolated logic out into microservices. For example, we’re working on a service which calculates our projected costs and fees. Our cost structure changes frequently, and we want to estimate how policy changes might affect our fees. This code wasn’t a great fit for the Rails app because we wanted it to be accessible to our analysts and data scientists as well. We’ve now split this logic out into its own service. It uses a version-history-aware computation graph to calculate and back-test our internal costs, and (soon!) will come with its own React frontend to visualize those calculations. Our data science stack is also a fully separate set of services, so there’s a lot of inter-app communication going on. To let these services authenticate to one another, we use an Elixir app called Paladin. Opendoor engineer Dan Neighman wrote and open-sourced Paladin, and explains why it’s helpful in this blog post. Authentication is based on JWTs provided by Warden and Guardian. Data Science Architecture I’ve always found data science at Opendoor interesting because it’s not the “grab as much data as you possibly can, then process it at huge scale” problem I’m used to hearing about. To find the price of a house, you look at nearby homes that sold recently, then squeeze as much information out of that data as you possibly can by comparing it to what you know about the market as a whole. Our co-founder Ian Wong has a more in-depth talk here. We can group most of the data science work into several core areas: Ingesting and organizing data from a variety of sources Training machine learning models to predict home value and market risk Quantifying and mitigating various forms of risk, including macroeconomic and individual house-liquidity Collecting information in a data warehouse to empower the analytics team For data ingestion, we pull from a variety of sources (like tax record and assessor data). We dump most of this data into an RDS Postgres database. We also transform and normalize everything at this phase — we’re importing dirty data from sources that often conflict. This blog post goes into more detail on how we merge data for a given address. For our machine learning model, we use Python with building blocks from SqlAlchemy, scikit-learn, and Pandas. We use Flask for routing/handling requests. We use Docker to build images and Kubernetes for deployment and scaling. Our system lets us describe a model as a JSON configuration, and once deployed, the system automatically grabs the required features, trains the model, and evaluates how well the model did against performance metrics. This automation lets us iterate really fast. We’re starting to use Dask for feature fetching and processing. Other companies often use Spark and Hadoop for this, but we need support for more complex parallel algorithms. Dask’s comparison to PySpark post describes this perfectly: Dask is lighter weight and is easier to integrate into existing code and hardware. 
If your problems vary beyond typical ETL + SQL and you want to add flexible parallelism to existing solutions then dask may be a good fit, especially if you are already using Python and associated libraries like NumPy and Pandas. The final piece of our data science architecture is the Data Warehouse, which we use to collect analytics data from everywhere we can. For a long time we used a nightly pg_dump to move Postgres data from each service’s database directly into a home-built Data Warehouse. We recently migrated to Google’s BigQuery instead. BigQuery is faster, and lets us fit more data into each query, but the killer feature is that it’s serverless. We have many people running queries at “peak hours”, and don’t want things to slow down just because we have a preallocated number of servers available. High-Tech Open Houses Since Opendoor actually owns all the houses we sell, we can be creative about how we show them to potential buyers. Typically, if you want to see a house for sale, you have to call the listing agent and schedule a time. We realized early on that we could make open houses way more convenient by installing automatic locks on our doors so the homes could be accessed at any time. For version 0 of the project, we literally posted our VP of Product’s phone number on the doors of all our houses — buyers would call in, and he’d tell them the unlock code. For version 1, we added Twilio so we could automatically send unlock codes over SMS. For version 2, we built a mobile app. Customers expect a good mobile experience these days, but our all-day open house feature made it twice as important. You can use the app to find nearby homes as you’re driving around, and explore them on a whim — a huge improvement from the traditional process! We built our app in React Native. A major part of that choice was pragmatic — our team had a lot of experience with web technologies, and almost no experience with native technologies. We also wanted to support both iPhone and Android from early on, and React Native let us do that (we released our app for iPhone first, and adding Android only took an extra couple weeks). Not everyone wants to install an app, so it’s still possible to access our homes via SMS. We’ve added a few security mechanisms — one worth mentioning is Blockscore, which lets us quickly run identity verification using phone numbers. For riskier numbers, we disable the automatic entry system and have our support team call the customer to collect their information. Tools and Workflows We manage our repositories and do code reviews on GitHub. All code is reviewed by at least one other engineer, but once it’s in the master branch, it’s assumed to be ready to deploy. If you want to deploy your code, you can do it in three steps:
1. ./bin/deploy staging
2. Check your work on staging
3. ./bin/deploy production
This takes 10–15 minutes in total. We’ve worked hard to automate the process so we can move fast as a team. We use Heroku for hosting, and run automated tests on CircleCI. Slack bots report what’s being deployed. There are a lot of external services we rely on heavily. To run through them briefly: Help Scout and Dyn for emails; Talkdesk and Twilio for calls and customer service; HelloSign for online contract signing; New Relic and Papertrail for system monitoring; Sentry for error reporting. For analytics, we’ve used a lot of tools: Mixpanel for the web, Amplitude for mobile, Heap for retroactive event tracking. We mainly use Looker for digging into that data and making dashboards.
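Returning to the data science section for a moment: the post doesn't show what "a model as a JSON configuration" actually looks like, so here is a minimal sketch of the idea. The config keys, feature names, and data file are all invented for illustration; pandas and scikit-learn are the only real libraries assumed, and this is not Opendoor's actual pipeline.

```python
import json
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical model description, in the spirit of "describe a model as a JSON configuration".
config = json.loads("""
{
  "target": "sale_price",
  "features": ["sqft", "beds", "baths", "year_built", "days_on_market"],
  "model": {"n_estimators": 300, "learning_rate": 0.05}
}
""")

df = pd.read_csv("homes_training_data.csv")  # placeholder data source
X, y = df[config["features"]], df[config["target"]]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the model described by the config, then score it against a simple metric.
model = GradientBoostingRegressor(**config["model"])
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```

The appeal of this pattern is that changing the feature list or the hyperparameters is a config edit rather than a code change, which is what makes the "deploy a JSON file, get a trained and evaluated model" workflow described above possible.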
Joining Opendoor Engineering Opendoor has a very entrepreneurial, pragmatic culture: Engineers here typically talk with customers, understand their needs, and take the initiative on projects. We’re big on ownership and empowering others, and are aggressively anti-snark. We’re looking for engineers of all backgrounds: it doesn’t matter what languages you work with now, we’re sure you’ll ramp up fast. Find out more about Opendoor jobs on StackShare or on our careers site. Huge thanks to Kevin Teh, Mike Chen, Nelson Ray, Ian Wong, and Alexey Komissarouk for their help putting together this post.
https://medium.com/hackernoon/the-stack-that-helped-opendoor-buy-and-sell-over-1b-in-homes-4a2e59fbcea7
[]
2017-07-17 16:51:08.636000+00:00
['Startup', 'Technology', 'StackShare', 'Data Science', 'Real Estate']
Is artificial intelligence a ticket to Borges’ Babylon?
Is artificial intelligence a ticket to Borges’ Babylon? A thought experiment under construction Versión en castellano (Spanish Version) As I tried to imagine what a world completely governed by un-explainable AI would look like, I was reminded of Jorge Luis Borges’ “The Lottery in Babylon”. In this story Borges describes how a simple Lottery eventually became a complex (and secret) institution. At first, like every other lottery, it oversaw the random process of assigning a participant the jackpot. Eventually, all inhabitants of Babylon were forced to participate and the lottery became more complex: people could lose (or gain) a job, a position among the nobles, the love of their lives, life itself, honor… “The complexities of the new system are understood by only a handful of specialists (…) the number of drawings is infinite. No decision is final; all branch into others.” At its peak, every aspect of a person’s life became subject to the secret rulings of the Lottery. This piece does not include a claim that AI will trigger the end of humanity. Yet the era of enlightenment, in which human rationality was granted center stage in our social system, could be coming to an end. AI is being presented to us as a sorcerer that offers magic to those willing to take a leap of faith. If we cave (or perhaps even if we don’t?) our lives may end up governed by an endless chain of secret lotteries. But what is artificial intelligence? When you tag a picture of your friend Ana on Facebook you are basically helping train Facebook’s AI to distinguish Ana from every other friend you have on the network. You tag her in a pic in which she’s smiling. And one in which her lips are covered by a coffee mug, and one in which she’s sleeping. At first Facebook typically suggests the wrong tags. But over time it becomes quite good at figuring out the combination of factors that make Ana different from Emma. Creepy success! You’ve trained the AI system to be better than that professor who still can’t tell Ana from Emma, even though you are already several months into the course. What’s AI? AI is basically a catch-all phrase used to describe a broad set of methodologies. Machine learning, among the most popular, involves “training” a model on a case-by-case approach, such as tagging and correcting wrong tags on Facebook pictures. Through this method the machine learning system eventually develops an implicit system of rules and exceptions underlying the collection of teachings, which it then uses when exposed to new cases (such as a new photo of Ana). A revolutionary component of some AI methods includes the possibility of continuous training, unsupervised by humans. In this way, progress regarding how to best execute a specific task–like distinguishing between people, or choosing the best move in a game of chess– is exponentially quicker than how humans learn the same tasks. Machines don’t need a break. Expectations regarding how AI could upgrade healthcare, education…[you name it]… are sky high. These expectations are often based on myths that go far beyond what the technology affords today. The same positive hype is mirrored by an equivalently extreme set of fears . “With artificial intelligence, we are summoning the demon”, claimed Elon Musk in a recent interview. Yet most specialists in the field argue today’s artificial intelligence is too basic to instill such grand fears. 
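To make the earlier "training on a case-by-case approach" description concrete before moving on, here is a deliberately tiny, hypothetical scikit-learn sketch: labeled examples go in, an implicit set of rules comes out, and the model is then applied to a new case. The numeric "photo features" are made up for illustration; real face recognition works on raw pixels and vastly more data.

```python
from sklearn.linear_model import LogisticRegression

# Made-up numeric features standing in for photos: [hair_length, face_width, smile_score]
photos = [[0.9, 0.4, 0.7], [0.8, 0.5, 0.2], [0.3, 0.7, 0.9], [0.2, 0.6, 0.4]]
labels = ["Ana", "Ana", "Emma", "Emma"]  # your tags are the training signal

model = LogisticRegression().fit(photos, labels)

# A new, previously unseen "photo": the model applies the rules it inferred from your tags.
print(model.predict([[0.85, 0.45, 0.5]]))  # most likely ['Ana']
```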
If the robot uprising is your uncle’s obsession, reassure him that they won’t be taking over the world anytime soon, and have him watch this gif on a loop for a while. As a reaction to the anxiety, some governments have chosen to draft regulation. In 2018 a right to an explanation will come into force in the EU. This is meant to ensure individuals affected by automated decision-making processes can be told why their specific case led to a specific decision. It establishes that algorithms that substantively affect people’s lives can’t hide their inner processes in a black box. They have to be understandable by the people they affect. But what does this right to an explanation actually require? This is still being debated. Some claim the complexity of AI systems means that the explanations that can be developed would be meaningless to a human. Imagine Facebook’s algorithm trying to explain why Ana isn’t Emma: It wouldn’t say one has a freckle, thinner hair, etc. Computers don’t abstract like we do. The AI system probably turned their faces into pixels, and isn’t assessing the freckle as “a freckle”, but as a disturbance in the pattern of pixels. And so on with each difference. Producing an explanation might require backwards-engineering the effect of each of the (potentially millions of) images the system was exposed to throughout the training process. The quantity of information this backwards-engineering process would spit out is of similar size and complexity to the original problem…the problem we decided to delegate to computers precisely because of its size and complexity. Therefore, the argument goes, we can’t expect to enjoy the benefits of AI systems AND understand how these outputs came to be. There is a trade-off. If this trade-off is inherent to AI systems, and the benefits of AI are as high as expected, the candle of rationality that we have told ourselves has been key to understanding our world and personal history might be blown out. A complete paradigm shift. Not having an explanation for something like Facebook’s distinction between Ana and Emma might be fine. When a computer is allowed to tell a judge you are guilty of a crime, or unworthy of credit, it’s another matter. Let’s go back to Borges’ Babylon now. As mentioned, a place where every aspect of a person’s life was subject to the secret rulings of the lottery. “The [Lottery], with godlike modesty, shuns all publicity”, begins the closing paragraph. The narrator then wonders whether or not the Lottery still governs the fate of Babylonians today…or if the Lottery ever existed in the first place, but concludes that “it makes no difference whether one affirms or denies the reality of the shadowy corporation, because Babylon is nothing but an infinite game of chance.” Juan Ortiz Freuler CC-BY But…does it matter if the Lottery exists? This was Borges’ mind game decades before AI and explainability were up for debate. Let’s assume that in both worlds you would face the exact same fate: would you be indifferent between a world in which an unknown third party is executing such fate, as compared to one where it merely occurs? I believe the answer is no. We are not indifferent. Borges’ narrator mentions having overheard back-alley discussions regarding whether the Lottery was corrupt and actually favoring a privileged few. The narrator mentions the Lottery publicly denied these claims and insisted that the Lottery representatives were mere executioners of fate with no actual power.
The very existence of a space for doubt should tilt us towards picking the world in which no third party has the power to decide whether or not to meddle with our lives in such a way. But then again we are told there is an inherent trade-off between accuracy and explainability. So the contrast is not merely between two equivalent outcomes. Unexplainable models, so they say, come with extra benefits. Would we choose the world of unexplainability for an extra dollar a day? Probably not worth the dread. -Ok. What about a million dollars? -Hmm…? Yet the problem with constructing the option in this way is that, if the artificial layer we allow Companies to build on top of our existing chaos is in fact unintelligible, we wouldn’t really be able to assess the trade-off itself! We might no longer know if we were given an extra dollar, a million, or if the system actually sucked up our wallet. As in Borges’ story, we might be told that actually “No decision is final; all branch into others.” So it turns into a matter of trust… Would such a trade-off require our societies to abandon rationality as a guiding principle and have us build up our faith in the gods of silicon? Borges’ Babylon seems run by a pretty unaccountable Company. Questioning outcomes was something to be done in a dark alley, not in the public square. At this point in time, being afraid of AI as an entity is as reasonable as being afraid of dice. Self-conscious AI is many decades away according to the optimists, and even further away according to the rest of the experts, many of whom claim it is not something that can be achieved. Yet we should learn from the past. Those who claim to be interpreters of the whims of god tend to reserve for themselves a disproportionate share of “god’s gifts”. Such is the effect of power on humans. Thus today we should focus on questions like: Are the dice loaded? Who gets to define and/or execute the consequences of a draw? We should focus on those building AI systems, and those who sign their checks. These are the people who might be fiddling with the idea of setting up tomorrow’s lottery. We should not be lured into trading our drive to understand the world for shiny mirrors. Nor should we feel paralyzed by these challenges. Tangible progress towards the explainability of algorithms is already underway. More can and needs to be done. What‘s our compass in these troubled waters? #1- Fairness Those who broker the distribution of the efficiency gains enabled by AI in low-risk cases should ensure a reasonable portion of these benefits is geared towards the development of auditable and fair outcomes of AI in high-risk cases. This research is particularly urgent in areas such as access to public services and the judicial system. #2- Don’t cave to oppression Until we develop robust mechanisms to interpret and explain outputs, and ensure a degree of fairness, governments should not impose these systems on people. Offering them as an alternative to human decision-makers is something that might sound attractive. Given systemic discrimination and existing human bias, some people might reasonably prefer a black box to a racist human judge, for example. Understanding these baselines is important.
Such is the way the jury system is being deployed in the Province of Buenos Aires: understanding that the introduction of juries might significantly alter the odds of being convicted, offering the jury system as an option the accused can opt in to ensures they do not perceive this change as a violation of their right to a fair and impartial trial. The jury is not an imposed feature of the process as such. Perhaps, as in Babylon, the lottery begins as “a game played by commoners”. Those with nothing to lose. The two-tiered system that would ensue is unethical, even if it’s in the narrow interest of each person who chooses it. As such, we need to foster a broader conversation which acknowledges that governments have a duty to eliminate the underlying systems of oppression. #3- Public disclosure Our political representatives need to create incentives for developers to open the black boxes before our governments contract their services or buy their products. This could take the form of conditions to be included in public tenders, tests to be carried out as part of public tenders, or liability for not disclosing certain risks, for example. Over time, advances in explainability in high-stakes areas such as those driven by government contractors could lead to the development and adoption of explainable models in more areas. That is precisely the role of government: to look into the future and design an incentive structure that, honoring the rights and balancing the interests of each individual and group that forms the social fabric, triggers the coordination required for the construction of a world. A world that each and every one of us can look forward to. As such, in times in which technology is actively reshaping social relations and the distribution of wealth, it needs to double down on these responsibilities. So, is artificial intelligence a ticket to Borges’ Babylon? Not necessarily. Babylon is nothing but a possible world. One we should not settle for. - *Working draft. Comments and suggestions welcome
https://juanof.medium.com/is-artificial-intelligence-a-ticket-to-borges-babylon-fbf90a449da5
['Juan Ortiz Freuler']
2020-02-07 21:00:34.920000+00:00
['Machine Learning', 'Jorge Luis Borges', 'Artificial Intelligence', 'Borges', 'Deep Learning']
What Works for &yet Chief of Strategy Sarah Bray: Intrapreneurship, Mission, and Confidence
The Nitty Gritty How Sarah Bray, entrepreneur, author, and digital strategist, transitioned from working exclusively for herself to joining the smart and passionate team at &yet, a design and development consultancy What it means to be an intrapreneur in the modern creative world — and what drew Sarah to the &yet team How to fuel your self-confidence, especially if you’re moving from the entrepreneurship world to a team culture Why Sarah and the &yet team create resources, like Leadershippy, that serve the company culture as well as the public to inspire, educate, and support them on their work/life journey Have you ever felt that you could never work for someone else, other than yourself? Sarah Bray, entrepreneur, author, and digital strategist, felt the same. That is: until she saw how she could give more life to her ideas by working on a team. Despite working independently for years, today, Sarah works as the Chief of Strategy at &yet, a design and development consultancy based out of Richland, Washington, that centers their work on possibility and people.. Listen to this inspiring episode of What Works to hear more from Sarah about her transition from working solo to working in tech. We release new episodes of What Works every week. Subscribe on iTunes so you never miss an episode. Tapping into your confidence as you transition from entrepreneurship to intrapreneurship “My confidence in what I could do and what I could bring came from those experiences and that validation. I was at a point in my growth that I didn’t have to seek out those people. I never had to sell my ideas to anyone because they’d been reading my work for a long time and they knew who I was.” — Sarah Bray The digital entrepreneurship world and the tech world are similar in many ways. Culturally, they’re both forward thinking and quick moving. There isn’t much bureaucracy (hopefully!) — autonomy and bold ideas are welcome. But the big difference between the online business world and the tech world is that the people who work within each realm don’t cross paths often. As Sarah shares, her new coworkers at &yet weren’t familiar with her digital work, besides her business partner Adam. But it didn’t matter because Sarah knew she created quality work… and she used that confidence to push forward from running solo to joining forces with others. If you’re considering making the jump from growing your own business to working for someone else, consider: what do you do really well? How is what you do well served by pivoting to a team-based environment? And how does this shift serve you personally and professionally? Embracing frustration to fuel your work “Frustration is the most amazing thing. Anytime there’s something I’m annoyed about or that’s driving me crazy, that’s the feeling that I know my own limitations well… and that I really need to be working with other people to move my ideas farther than I’m able to take them.” — Sarah Bray Something I love about &yet’s company culture is that they fully embrace the idea of possibility. But not as a grandiose vision that doesn’t feel grounded in reality. Instead, it’s at the heart of everything they do and something they highlight on their website’s homepage. Possibility is no doubt something that Sarah embraces in her life, too. If she didn’t, would she have considered working for someone else? Would she have believed that working with others could make more of her ideas come to life than what she could do alone? 
Embracing different perspectives to create better work “When you’re on your own, you do what you want to do because there isn’t anybody pushing back. On a team, it’s a lot different. It’s a good thing. It’s so good to have these differing perspectives coming at you.” — Sarah Bray As entrepreneurs, we often make all the decisions on our own. We have our grand vision that we work every day to bring that alive in the world… and sometimes we operate within an echo chamber. This might sound obvious but it’s a great reminder: collaborating with others can help you to create better work, regardless if that’s a consultancy that you contract work out to or if that’s you working on a team for someone else. How often do you feel like some of your ideas haven’t seen the light of day because you can’t do everything yourself? Even if you don’t plan to jump into an intrapreneur role, how can you embrace possibility in your business today? Hear more from Sarah Bray on this episode of What Works. We dig further into intrapreneurship while still staying true to your own values and mission.
https://medium.com/help-yourself/what-works-for-yet-chief-of-strategy-sarah-bray-intrapreneurship-mission-and-confidence-f9ff349bac8
['Tara Mcmullin']
2018-06-12 15:47:01.409000+00:00
['Leadership', 'Work Life Balance', 'Community', 'Intrapreneurship', 'Entrepreneurship']
Oddments Of You
I’m getting back in the race At an adequate pace Where time is my only space. Grounding, pounding in my face, In my head. It never lets up, Hasten the coursework until I’m dead. I dread the week, I dread my pay, I dread the weary light of day Most of all I hate how my heart is made up of oddments of you
https://medium.com/scrittura/oddments-of-you-6e2c3aa6287e
['Mary Jones']
2020-11-05 23:19:16.464000+00:00
['Depression', 'Poetry', 'Reflections', 'Anxiety', 'Mental Health']
Why it’s the era of Data Science but not of the Data Scientist
Intro Huig da Nerd alias @hugo_koopmans : uber-nerd/data-scientist > interested in data mining, (text) analytics, data visualization and how to make money with that. Managing partner at DIKW. Data science is hot. Everybody is talking about it. Data scientists are being worshiped, and rightfully so. You have been working in data science for decades now (you look much younger). Indeed I started out in the previous millennium with what we now call data science; back then we did “No cure, no pay” projects to prove we could predict customer behavior from data. Really? Really. By the way, we never lost a challenge, but still we had a hard time selling our product because it was so threatening to people. And actually that did not change… Would you say the current Data Science hype is deserved? And what is the impact on business? Well, “deserved” feels a bit awkward; I see it as an inevitable evolution of the human species (see Ray Kurzweil’s book “Spiritual Machines”, and yes I am a believer [Editor’s note: No.]) The impact on business is of course huge, even without the general public noticing. For example, the real-time auction of Google AdWords is still something that surprises a lot of nice-looking ladies at the parties I attend (as a hot (not so young) nerd you can imagine my party schedule…) Did your projects change (are they really getting cooler) compared with 10 years ago? I must say “Yes” to this question; especially the possibilities with real-time, location-based stuff are mind-blowingly cool… Let’s look at some of your gurus. Davenport told us businesses have to compete on analytics. Do companies know there is a competition? To be honest, no. Let me tell you a secret: Companies do not exist… a company is a collection of people… a brand is a concept to persuade consumers… but the people that work for companies are most of the time too busy fighting battles in their own little kingdoms and do not have the power to look over their own social network (max 50 people) and act on behalf of the company/organization (often more than 1000 people). But that is an evolutionary thing, as explained beautifully by Richard Dawkins (Blind Watchmaker, Selfish Gene | Meme). So individuals understand very well there is a competition; as said, it is like playing chess on two different boards at the same time. Do you like what you have read so far? Get a quarterly update of what I am busy with. Thomke convinced us that Experimentation Matters, but do companies care? Which companies have adopted a culture of experimentation? Not very many, and if they have, they have a hard time keeping it (see the last paragraph for why). Siegel shows us organizations can predict who will click, buy, lie or die. But how many companies truly have precog capabilities? Again, still very few companies are able to leverage the predictive power present in their data. I have seen examples of companies that fight their own size and inflexibility to the extent that it hurts every day, big-time. But the truth is that predictive capabilities are not (yet) the biggest challenges companies have. A lot of companies are still very, very busy with their core processes. And in the (database) marketing domain most people are just afraid that they cannot prove their added value, really… I have been in lots of places where I tried to spread the word, almost on a religious basis, to see marketing as a series of statistical science experiments. That is, to see a campaign as a statistical experiment.
Yes, including control groups to examine results so we can prove the added value of floors full of marketing people. But I still fail miserably. Oh, so that could be me? Hmmm… let me think about that. Next best question, please. With Data Science being the hottest topic around, Data Scientists get rockstar status, right? I can only imagine the VIP treatment they are getting. Boardroom influence, golden careers, pioneering the data revolution. It’s good to be a data scientist, right? Well, I love my trade and yes, I am pestered to work abroad more than once a week, but I feel that the biggest mistake being made right now is that there is no career path in these organizations for young people who are looking for a career in data science. Statistical skills are maybe valued more than in the past, but the transparency these skills bring to organizations, to what works and what does not work, is so threatening that data-based decisions are still not widespread and common practice. Let me close off with my favorite quote from the book CoA: “In Gods we trust, all others have to prove their point.” Amen. Do you like what you have read? Get a quarterly update of what I am busy with.
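To make the "campaign as a statistical experiment" idea mentioned above concrete, here is a minimal sketch of how a campaign group could be compared against a hold-out control group with a two-proportion z-test. The conversion numbers are invented, and the only real library assumed is SciPy's normal distribution.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical results: conversions / customers contacted.
campaign_conv, campaign_n = 260, 5000   # customers who received the campaign
control_conv, control_n = 200, 5000     # hold-out control group

p1, p2 = campaign_conv / campaign_n, control_conv / control_n
p_pool = (campaign_conv + control_conv) / (campaign_n + control_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / campaign_n + 1 / control_n))

z = (p1 - p2) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided test

print(f"uplift: {p1 - p2:.3%}, z = {z:.2f}, p-value = {p_value:.4f}")
```

If the p-value is small, the uplift over the control group is unlikely to be luck, which is exactly the kind of proof of added value the interview is asking marketing departments to produce.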
https://medium.com/i-love-experiments/why-it-s-the-era-of-data-science-but-not-of-the-data-scientist-1acd76a3b53c
['Arjan Haring']
2016-03-13 07:45:33.136000+00:00
['Nerd', 'Big Data', 'Data Science']
The Accurate Information About News Break’s Creators Program and My Personal Experience
News Break Creator Program The Accurate Information About News Break’s Creators Program and My Personal Experience You can earn a lot of money, especially for the first three months if you get accepted as one of their creators. Source: Metiza I have seen several articles about News Break that include some inaccurate information that I wanted to correct. I have just joined a little over a week ago and have been in contact quite a bit with the platform staff and New Break Social Media groups with members who have been creators since the program first began on October 15th. So the information I am presenting here is from a variety of sources and includes links to help you learn more about the program and navigate the site a little easier. A Bit About the Site and Company Looking into the company, I learned that the site and popular app had been founded in 2015 by Jerry Yang, a billionaire computer programmer, internet entrepreneur, and venture capitalist and co-founder and former CEO of Yahoo! Inc. and Jeff Zheng, another tech billionaire who was the head of Yahoo Labs in Beijing. Not too shabby. News Break is a website and popular app for iPhone and Android that aggregates local news based on the location of the user. News can also be selected in a lot of different categories such as culture, politics, education, entertainment, science, health, World News, and society to name a few. According to their about page “We surface the most impactful, most relevant news & information, wherever you live and work.” They boast 45 million monthly active users across iOS, Android, and web, claiming that “creators can expect their content to reach an incredibly diverse reader base on News Break that extends far beyond what some other platforms can deliver.” This means that similar to Medium, there is already an audience for the content published there. Another plus. My Motivation to Join I have read a lot of posts that express writers' general dissatisfaction with different areas of Medium. Likely the biggest area is earnings as there seem to be changes happening to the algorithm more frequently and some have questioned whether Medium is more in it to establish their own brand while ignoring the desires and wellbeing of its writers. I don’t know any more than anyone else what has been going on, only that my earnings have started again to plummet and for the first time in over two years it looks like I may drop below $200 this month. Yet even though financially I need to find a better way to earn money which I’ve come to doubt will ever be satisfied by Medium, I still have a love for Medium, both the platform and the writers here. In a perfect world, if I am not going to earn much more than pocket change here, I’d like to find another platform that might offset my financial needs so I can still enjoy interacting, writing, and publishing on Medium without feeling the pressure to just get things out in the hopes something might “hit”. So, when I received the email inviting me to apply to a brand-new platform called News Break, I thought I’d take a look. First Impressions I wasn’t exactly sure what to think about the News Break site when I first explored it. It seemed like it was just a news aggregator for stories written by major news brands so I wasn’t certain why I’d been invited to apply. NBC, CBS, Reuters, LA Times, Newsweek, the Associated Press, Forbes, New York Post, msn.com, and other mainstream publications from the U.S. and all over the world have content on the site. 
What I couldn’t figure out was where individual creators fit in or what types of stories we could write. I asked the team about how to find individual creators vs. content from large news sites and was told there is no one location where individual creator content lives and to search for areas I’m interested in and any creators would show in the results. I searched for half an hour and never found anything other than large brands. This left me confused and I wasted a few days since I wasn’t sure how or if there was real potential there for us. After being in contact with several team members and members of site-related groups, I learned that the reason it’s tough to find individual publishers unless you have a name is that since the program is so new, there aren’t that many compared to the major news brands. We are there however, it just takes a bit of time to build your network, like anywhere else. What You Can Publish I wasn’t certain at first what to write since it’s branded as a local news site so I initially assumed you could only write news stories. The content policy didn’t help as it also seemed to indicate that this was the case, having a statement about your sources and making sure the news you report isn’t over 30 days old. I wasn’t sure what I could write that might get views over similar articles from major news outlets. My first article was about Trump changing the citizenship test as yet another way to make it harder for people to immigrate to the states. I incorporated information about some of his other efforts to limit immigration and tied it into the topic. I made sure I had links throughout that went to reputable sources when I made a statement that might be fact-checked. While my leftist leanings showed, I tried to keep out overt opinion statements in keeping more with a news report. It was reviewed and accepted in a couple of hours which was the same for my other articles. It did okay, getting a couple of hundred views within the first 24 hours. Since I couldn’t find individual content creators, I had no examples to look at for what others were writing on the site. I searched for others on social media and through other types of publishing groups to see if I could locate any others who wrote on the site. I found several on other platforms I publish on, a couple of social media groups that were really useful in terms of helping “newbies” learn the ropes, and a few others on sites where content writers hang out. I learned that they are trying to expand the site past just journalistic news reports and want it to become more than just a news aggregator. It seems like they are attempting to become a competitor to Medium and from the looks of it, with the pay incentives I’ll describe in a minute, they have a good chance of taking a lot of writers and membership dollars away from Medium if it doesn’t change. As for what you can publish, it’s very similar to what you can publish on Medium except for fiction and poetry. While the content guidelines are a bit confusing as they seem to suggest you can only publish news, this is not the case. You can also publish personal essays, opinions, personal stories and can write from any perspective, including first-person, that you choose. You aren’t allowed to publish gratuitous sexual content, violence, hate speech, and the like. They seem to be a bit more limiting on sexual content in particular from what I’ve heard. 
You can also publish stories we have on other sites (though I recommend being careful about doing this without first changing them somewhat, since otherwise they may cannibalize each other and hurt your overall views). I also write on Hubpages and they’ve made some changes that made me less than happy about the performance of a group of articles I have there. I’ve begun updating and improving these, with the intention of transferring a few to News Break to see how they do. I’m planning on removing some of the articles from Medium to publish on News Break as well. They ask that you refrain from using any false, spammy, misleading, discriminatory, defamatory, or otherwise derogatory language. Here is News Break’s content policy. Applying for the Program This is one mistake I’ve seen in most articles. You can’t just sign up for the program; you have to apply, be vetted, and approved, after which they offer you a contract. The application doesn’t take long, only about ten minutes or so. You complete some basic information and attach the links to some of your writing samples. A plus for Medium writers is that you can give them the link to your profile so they can choose which of your articles to look at. I can’t say for certain whether that had anything to do with it, but my application was vetted and approved in about 12 hours, which I thought might have been influenced by presenting them with close to 1000 articles I’ve written on Medium. The instructions state that it typically takes 5–7 business days, though several people I’ve spoken to were approved in a couple of days. After I signed the contract, it was just a matter of setting up my profile, completing my tax forms, and deciding on the payment method I preferred. Payment Options There are four options, each of which has different fees associated with it. These are:
Direct deposit: USD $1
Wire transfer: USD $15
Check: USD $3.00 (a 1.9% to 3% FX fee may apply)
PayPal: 2% of the amount + USD $2.00, maximum USD $3.00 (FX fee: 1.9% to 3%)
Earnings Potential This is where I’ve seen other articles get things wrong and what people are the most interested in. It’s a little confusing in terms of the wording in the contracts and subsequent offers that are being made, so I’ve again gone to the team members to get it straight. In general, from what is listed in various social media and other News Break-related groups, there is a guaranteed payment of $1000 every month you meet their criteria. The criteria consist of writing a certain number of articles a week (as of now it is three, for a total of twelve a month) of 1000 words or more that are approved and that remain on the site, having a certain number of followers by the end of the month (currently set at 500), and averaging a specific number of views per article by the end of the month (for now it is 500 views per article). There is also the possibility of a rev share model for income that comes from advertising, and extra earnings for achieving more impressions and followers than required. Reader referrals and creator referrals may also provide bonus awards, and there is mention of possible prize money for unique challenges and promotional periods throughout the year. Since the program is new, and they’d like to attract as many quality writers as they can as quickly as possible, they are offering an additional bonus for some of those who are accepted into the program early on.
The bottom line is that if you are interested in joining this site, it will benefit you to join as early as possible, since they aren’t saying how long the different offers will be available. It’s possible that once they have signed enough quality writers, they could start dialing back some of the packages and perks they are including in their current offers. My Impressions While it took a bit of figuring out, which was a bit frustrating at first, once I found contacts inside and outside the company to go to, things began to fall into place. Since they just opened the program up to individual content writers on October 15th, there are still some kinks and navigational issues to work out. They are also working on a clearer set of guidelines for those in the Content Creator program. There are many potential perks to the program, especially if you apply and are accepted early. With 45M active users and 1.5B page views, they have already built up an audience similar to the way Medium has. However, their payment schedule isn’t based on how many paid memberships there are, so it isn’t limited each month to an exact amount. There are also a lot of bonuses and extras by way of earnings and opportunities. Since we are not allowed to provide any specifics about what is in each person’s contract, or about how the program works, because doing so breaches their confidentiality rules, the best way to learn about it is to go directly to the team. There is a contact button, and it sends you to a page with several options for asking questions and getting feedback. You can download the app or apply without paying anything to do so. Once you are accepted into the program you will receive a contract explaining the specifics of your deal to you. It’s still early in the game, and the first paychecks haven’t gone out yet, so it’s too soon to know for sure how this program will work for independent publishers. I’ll keep you updated as I learn more about it. Full Disclosure: This is my referral link for fellow writers. If you would like to sign up for the program, I’d really appreciate it if you would use it. If you would just like to download the app to read what is on the site without necessarily contributing to the content, you can do so here. If you prefer not to use my links, it’s no problem. My only intention in writing this article was to share this opportunity with other Medium writers who, like me, may be looking for other writing income streams where earnings are generated by a different method than Medium’s. I hope this information proves useful for many of you and that it helps you reach your writing goals. Happy Writing!
https://medium.com/the-partnered-pen/the-accurate-information-about-news-breaks-creators-program-and-my-personal-experience-a8c1aa283bbf
['Natalie Frank']
2020-11-29 14:30:27.233000+00:00
['Writing', 'Professional Development', 'Success', 'Writing Tips', 'Income']
Oracle ADF BC Reusing SQL from Statement Cache
Oracle ADF BC by default tries to reuse prepared SQL statements from the statement cache. It works this way when ADF BC runs with DB pooling off (jbo.doconnectionpooling=false). Normally we tune an ADF application to run with DB pooling on (jbo.doconnectionpooling=true); this allows an unused DB connection to be released back to the pool when a request is completed (and in this case, the statement cache will not be used anyway). If a View Object is re-executed multiple times during the same request, it will use the statement cache too. However, there are cases when, for a specific View Object, you would want to turn off statement cache usage. There could be multiple reasons for this — for example, you are getting a Closed Statement error after it tries to execute SQL for a statement obtained from the statement cache. Normally you would be fine using the statement cache, but as I said, there are those special cases. We are lucky because there is a way to override the statement cache usage behavior. This can be done in the View Object implementation class, either for a particular View Object or in a generic class. After the View Object is executed, check the log. If this is not the first execution, you will see the log message "reusing defined prepared statement". This means the SQL will be reused from the statement cache: To control this behavior, override the getPreparedStatement method: We create a new prepared statement in this method, instead of reusing one from the cache. As a result, each time the View Object is executed, there is no statement cache usage: Download the sample application from the GitHub repo.
https://medium.com/oracledevs/oracle-adf-bc-reusing-sql-from-statement-cache-1a23b557e49e
['Andrej Baranovskij']
2019-01-20 17:13:54.137000+00:00
['Oracle Adf', 'Jdbc', 'Jdeveloper', 'Java', 'Programming']
ALL Businesses That Do Not Survive Recessions Are Ponzi Schemes
Photo by JESHOOTS.COM on Unsplash Here is the thing: businesses are supposed to be based on products or services that do solve problems. These problems, usually, do not ‘magically’ disappear during an economic recession. In fact, their true nature of ‘necessity’ becomes apparent during such situations. Forget about ‘social entrepreneurship’ and non-profits, businesses are meant to address problems or inconveniences that communities face. We invented those other lame categories of ‘organisation’ (outside of traditional for-profit business) because most businesses, especially those related to banking and insurance were not, and never have, done a good job! Ponzi Schemes galore A ponzi scheme is when 1) the ‘winners’/beneficiaries/customers that are mostly rewarded are the ones who came first (taken to the mathematical limit, this would mean the founders), 2) the customers or beneficiaries who came last are not only the most ‘critical’ to the success of the Ponzi, but are the ones who lose the most!, 3) ‘the lie’ works until a problem is encountered (usually within the business model). Businesses tend to be founded on a set and/or series of assumptions that have to do with ‘market conditions’ and prevailing consumer interests (or fads!). A ‘good business’ is supposedly one that has made the ‘correct’ assumptions and has managed to execute diligently according to a particular plan or strategy. So far, in this definition, the entire paradigm of ‘a business’ is missing its raison d’etre. The traditional definition of a business, lacks a clear differentiating factor from that of a Ponzi Scheme. All Ponzi Schemes also start with a series of assumptions about consumer behaviour and ‘market conditions’. The ‘intent’ of a traditional businessman/woman, may very well differ substantially from that of a con-man running a Ponzi Scheme. But the entire system or business model is exactly the same until a problem is encountered either with the business environment/market-conditions or ‘assumptions’ made about consumers. A ‘business plan’ is just another form of a con: it is usually the investor who is the getting conned by a business plan. An ‘ad’ is also a form of a con: the customer is getting conned! An Anti-con Business Model The surest way to avoid turning an honest business into a mirror image of a con, is to once again, focus on solving unique problems. It’s my version of Peter Thiel’s ‘Build a Monopoly’ or the ‘Zero to One’ advice for startups. It is also a version of Richard Cantillon’s ‘demand/supply’ economic model for business assets. A business founded on solving problems cannot fail when the set of assumptions fail or when the market conditions change. A business founded on a ‘monopoly’ model creates a robust scarcity model that will both survive an economic downturn, and thrive during an economic boom. F.O.M.O. = Loss Aversion = A Con For the consumer, NEVER fall for a con! Even if the ‘ad’ or the ‘projected rewards’ are irresistible! DON’T do FOMO! For the merchant, NEVER ‘scheme’ to make money! If you do, don’t cry when the business collapses during a recession/unpredictable-event(s) and you realise that you lost a lot of your own money and perhaps that of an investor / a bank too! Don’t con! Don’t be conned! Only ‘necessary’ businesses are anti-cons. The rest have to be avoided like the plague! Clap, Follow or contact me on twitter to support this content.
https://medium.com/datadriveninvestor/all-businesses-that-do-not-survive-recessions-are-ponzi-schemes-7d7ce30996ae
['Lesang Dikgole']
2020-05-18 10:13:42.115000+00:00
['Insurance', 'Startup', 'Recensioni', 'Banking', 'Bitcoin']
The 1 Dumb Mistake 99% Of Content Marketers Make
Content creation isn’t about coming up with one brilliant idea. This is what very few people understand about winning the content creation game. You’re a designer. You’re a photographer. You’re a writer. You’re a columnist. You’re a creative agency. You’re creating content for your clients. Guess what — you’re all doing it wrong. The question isn’t, “What should we make? What is going to go viral? What is going to grab people’s attention?” The right question, the question you should be asking yourselves, as a team is, “What process can we come up with so we are creating something new, every single day?” Content creation is about Volume. Plain and simple. Smart companies don’t get the most exposure on their content marketing. CONSISTENT companies get the most exposure on their content marketing. Highly intelligent, highly prolific, highly talented individuals don’t get the most viewership on their content. Highly CONSISTENT, highly DISCIPLINED, highly COMMITED individuals get the most viewership on their content. This is the 1 dumb mistake so many content creators make. If you’re not putting out something new, something valuable, and something worth paying attention to, every single day, then you’re losing. Because the truth is, nobody cares about what you made yesterday. They care about what hits them right now. They care about what they want to share, right now. The Internet has created this sense of urgency within people, and unless you are catering to that sense of urgency, you’re not only falling behind — but you’re invisible. You’re forgotten. Nobody knows who you are anymore. Which is why I find it insane when people spend weeks upon weeks on a single piece of content. If that’s your masterpiece, go for it. I spent five years writing my first book. That was my golden egg, and I wanted it to be a certain way. But all throughout that journey, I also was writing every single day online. Why? Because I wanted to keep people paying attention. I wanted to make sure my audience was still there for the day my book was done. Your business is no different. If you are playing the content creation game, and you aren’t operating with a sense of urgency and prioritizing volume, you’re doing it wrong. Want to build your Instagram? Post something stunning, every single day. Want to build your blog? Write something new, every single day. Does that sound like hard work? That’s because it is. And that’s precisely why only a handful of people become “influential”. This is a marathon. I’ve been writing 2–3 pieces of content, every single day, for almost five years straight. Big-time YouTubers do the same. So do Instagram stars. That’s who you’re competing with. It’s not just about being creative. It’s about being creative, every single day. Volume wins.
https://nicolascole77.medium.com/the-1-dumb-mistake-99-of-content-marketers-make-8c857c6f0b12
['Nicolas Cole']
2020-05-28 18:51:47.837000+00:00
['Marketing', 'Content Marketing', 'Content Writing', 'Content', 'Content Strategy']
iPhone Settings You Need to Turn Off Now
#2. iCloud Analytics Like iPhone Analytics, turning this off will improve your iPhone’s battery life and give you more privacy. We just want to turn this off as we did for iPhone Analytics it’s just with iCloud on the same screen nice and easy. #3. Significant Locations Significant Locations tracks the places you visit most often and saves them onto your iPhone. Turning it off will improve your battery life and help you maintain your privacy. Significant Locations means that your iPhone keeps track of everywhere you go by using your GPS. They can then deliver relevant advertising to you, which is supposed to make your experience better but I think it’s kind of creepy. For instance it’s really weird when you’ve had it turned on for a long time and then you go to Significant Locations. It’s like here are the ten places you’ve been and I’m like wow I had no idea! It’s like a horror movie so let’s turn off Significant Locations go back to Privacy, tap Privacy in the upper left and scroll up to Location Services. Now scroll all the way to the bottom to System Services. You know they don’t want you to turn it off when they bury it in like five layers. Then scroll to Significant Locations, tap on that and turn it off, if I had left it on for a week it would have my home address, gym, work and any place I’ve been so turn it off and you’ll have the option to clear the history there in case you’re freaking out which is pretty cool. Go to Settings > Privacy > Location Services > System Services > Significant Locations > Turn Off Significant Locations #4. System Services Almost everything in System Services are unnecessary as they are constantly running in the background of your iPhone and draining its battery. So let’s just tap back to Settings and we are going to turn off everything in here and this is my recommendation except for Emergency Calls & SOS, Find my iPhone and Share My Location if that’s a feature that you use. Go to Settings > Privacy > Location Services > System Services So let’s turn off: · Compass Calibration · Location-Based Alerts · Setting Time Zone — Kind of only important if you’re traveling a lot so if you do travel just turn it on and then it fixes the time zone. Turn it back off again once you arrive, you can tell that even with it on there is a purple arrow which means that it was tracking your location to find out what time zone you were in. Wasting your battery life so if you are not traveling through time zones it doesn’t need to be checking all the time. · WiFi Calling — Turn it off if you already have good cellphone reception in your area. So I promise you that all these things are just going to save your iPhone battery life. Here’s the thing all these things make it seem like if you turn them off then it’s going to stop working. Except it doesn’t stop working if you turn it off a lot of this stuff is just data for Apple. So if you think you can live without it just Turn it off. #5. Limit Ad Tracking Limit Ad Tracking will prevent advertisers from collecting more detailed information about you. Limited Ad Tracking is off by default so this is something we are going to Turn on in order to Turn it off. All right so let’s tap back to the main Location Services menu then back to Privacy and then scroll to the bottom and tap Advertising. · Go to Settings > Privacy > Advertising > Turn On Limit Ad Tracking So yes by turning it on it’s going to turn off the tracking to prevent advertisers from tracking you, as you use your iPhone when connected to the Internet. 
You’d be amazed how well it works to help them target your location and run ads as you move around. Collecting more information about you so keep it on as it’s kind of creepy. #6. Fetch New Data Fetch New Data — Changing your Settings from Push to Fetch will help you save battery life. So let’s tap Privacy in the upper left hand corner to go back, tap Settings again and scroll down to Passwords & Accounts. · Go to Settings > Passwords & Accounts > Tap Every 15 Minutes > Turn Off Push > Change Accounts you want to Fetch So with Fetch new data is set to every 15 minutes which is good with Push turned off. I recommend always turning off Push as your iPhone will always be connected to your Mail Servers and iCloud and will always be requesting mail every second, whereas Fetch means that your iPhone checks every 15 minutes or 30 minutes or however often you decide. Whether there’s new mail so unless you’re in a situation where you need to know exactly this second when you got a new email you can save a lot of battery life, by turning off Push and changing your accounts to Fetch which you can do individually. So Turn off Push and have Fetch enabled for your main Mail Accounts and then set up the Fetch settings at the bottom and choose every 15 minutes. I should mention too that you can always open the Mail App and see if there’s a new email. So whenever you use the App it’s going to check as well, so most people won’t ever notice a difference except for improved battery life! #7. Background App Refresh Background App Refresh for certain Apps — This will prevent Apps from constantly running in the background of your iPhone and draining its battery life. · Go to Settings > General > Background App Refresh > Turn Off Apps you don’t need to be continuously updating in the background
https://medium.com/swlh/iphone-settings-you-need-to-turn-off-now-5b66b9556df1
['Hear Aboutit']
2020-12-07 21:42:02.647000+00:00
['Apple', 'Tech', 'Gadgets', 'iPhone', 'Privacy']
I Will Never Wear A Mask She Said
I came across the following message on Facebook today: I’m not complying with the govt no matter what. I don’t wear a mask, I wear a headscarf instead. If a clerk tells me I’m going the wrong way in an aisle, I ignore them and keep going. Those arrows are ludicrous and serve no purpose. I refuse to use hand sanitizer. We’re still going to go see Duke’s parents regardless of the stupid rules. The govt has absolutely no right whatsoever to keep people from seeing their own parents and family members. This country is sliding downhill into fascism and communism and its people who just sit back and accept it that is contributing to the downfall of our free society. We no longer live in a free and democratic country, it’s time people woke up and rose up against the govt. People should be mass protesting in the streets against these restrictions on their freedoms, but no, they’re just hiding in their homes, afraid of the virus, when really we should be very afraid of the govt and what it is doing to our country!!!!! I find this extremely upsetting. How can anyone be so arrogant, selfish, and careless not to wear a mask? Everyone is wearing masks, everyone is practicing social distancing, everyone is washing their hands or using hand sanitizer, so why does this woman think she is above all this. She should be charged with reckless endangerment. I had a brief conversation with her, pointing out the dangers of not wearing a mask. The gist of my message was, you might have the virus and not know it. You might infect others. Because of you, people might get sick or even die. Her response was “I don’t care. I’m not afraid of the virus. I hope you get it.” I wasn’t the only one who didn’t agree with her. James (not his real name) posted the following poster for her attention and stated “I am deeply disappointed in you Judith (not her real name). Image courtesy of North 99 To which she replied, “Too bad.” What is wrong with this woman. I’ve seen pictures of Queen Elizabeth wearing a mask, I’ve seen pictures of Hollywood celebrities wearing a mask if they can do it, why can’t she? Then there are those who claim to have a medical condition that prevents them from wearing a mask. Asthma is the number one excuse. However, Dr. David Stukus, a member of the Medical Scientific Council for the Asthma and Allergy Foundation of America (AAFA) stated “Most people with asthma, even if it’s severe, can manage to wear a face mask or covering for a short period of time.” If someone feels they absolutely cannot wear a mask, they should avoid all social contact. They should stay in their home, shop online, do curbside pickup, or arrange for delivery. Some think of being clever by obtaining an exempt badge. Most of these badges are fake. Others wear a badge that states ‘I’ve been tested and I’m safe’. This too is baloney. That person might be safe today but might not be safe tomorrow. Image courtesy of FTBA When I asked online about Covid symptoms I got the following response: How long does it take for symptoms of the coronavirus disease to appear? On average it takes 5–6 days from when someone is infected with the virus for symptoms to show, however it can take up to 14 days. https://www.who.int/health-topics/coronavirus#tab=tab_1 When will people realize that if we are ever to beat this virus we have to stand together and we all have to do our part. That means wearing masks, practice social distancing, and wash hands. Never think that it won’t happen to you.
https://medium.com/illumination/i-will-never-wear-a-mask-she-said-53889c8c6d6e
['Conny Manero']
2020-11-13 03:42:42.123000+00:00
['Pandemic', 'Masks', 'Covid 19', 'Virus', 'Coronavirus']
How Long are Meetings?
How Long are Meetings? The Mom Test by Rob Fitzpatrick Solve challenge here: https://www.bookcademy.com/home/show_daily_challenges/how-long-are-meetings Early conversations are very fast. The chats grow longer as you move from the early broad questions (“Is this a real problem?”) toward more specific product and industry issues (“Which other software do we have to integrate with to close the sale?”) For example, it only takes 5 minutes (maximum) to learn whether a problem exists and is important. A bit further along, you’ll find yourself asking questions which are answered with long stories explaining their workflow, how they spend their time, and what else they’ve tried. You can usually get what you came for in 10–15 minutes, but people love telling stories about themselves, so you can keep this conversation going indefinitely if it’s valuable for you and fun for them. At the extreme end, learning the details of an industry takes an hour or more. Thankfully, those are easier conversations to facilitate since the other person (usually some sort of industry expert) can go into a monologue once you point them in the right direction.
https://medium.com/bookcademy/the-mom-test-by-rob-fitzpatrick-how-long-are-meetings-a8db5dfc6d70
['Daniel Morales']
2019-10-10 13:59:04.220000+00:00
['Bookcademy', 'Summary', 'Books', 'The Mom Test', 'Daily Challenges']
I Had a Choice to Make: Andrew or My Eating Disorder?
I Had a Choice to Make: Andrew or My Eating Disorder? When the most important man in your life is the one telling you not to eat too much Photo by Maylies Lang Photography courtesy of the author For over a decade, the most important man in my life was the one in my head who told me not to eat too much. Many people with eating disorders find it helpful to personify the disease. Mine didn’t have a name, but I felt sure it was a dude. Our relationship was a classic case of codependency. We’d been together so long, I couldn’t tell where he ended and I began. I’ll call him ED. We met in high school. His voice spoke to me from my magazines, in the tips to stay svelte and the instructions on “how to eat.” My manual for womanhood. But he wasn’t entirely a stranger. I’d heard whispers of him before. At the dinner table, when I ate pasta with my dad and my brother while my mom ate salad. When my grandpa described the perfect female leg with three distinct spaces: crotch, calves, and ankles — the original thigh gap. It was clear that to be small was to be worthy. I wasn’t about to let my unruly body jeopardize that. At 15, we were young lovers. I was clumsy, my emotions rampant, unable to sustain restriction. I’d be “good,” preparing my magazine-prescribed meal while waiting by the phone for my older boyfriend to pick me up to go swimming. I had to wear a bikini later. When he called to cancel, I’d swerve, gorging on tortilla chips, cereal, granola bars, then purging when the discomfort and guilt became unbearable. I still remember the burn and the shame. My knuckles red and raw from the acid. But I wasn’t really skinny, so it wasn’t really a problem. By 17, though, we were going steady. I would have laughed at my earlier attempts, been embarrassed even, at the lack of self-control and the absence of effect. I went to Africa, and there, I honed my craft. I was young and scared and out of my element, but I could create a routine around what and when and how I ate. My ED kept me safe. There was no room for fear when all I thought about was food. When I came back, rail-thin, weighing what I had as a preteen, I finally reaped the rewards. After years of trying, I’d cracked the code. “I’d thinned out, I looked fabulous, I was wasting away, I was tiny,” people said. My parents were terrified, and I saw their fear as a marker of my success. Finally. I would not gain that weight back for a very long time. At 20, the relationship became truly abusive. When my blood work came back with a big, red warning, I was immediately given a blood transfusion. I was poked and prodded as doctors tried to root out the cause of my anemia. Why was my iron count zero? I had a colonoscopy, and in my drugged stupor, I remember the doctor saying, “She’s so small, we’re going to have to switch to the pediatric scope.” I faded out of consciousness, completely delighted. They were trying to see if I had celiac disease (as if I ate gluten, ha). The doctor said if I had been in a car crash and lost a lot of blood, I would have died. I didn’t have a period for over a year. I was tiny. But none of those doctors ever asked if I was starving. As my twenties went on, ED’s voice changed, a subtle shift to reflect a new trend: “Strong is the new skinny.” I had an epiphany, a reckoning with myself: “If I can’t be the smallest one, I’ll be the strongest.” Enter boot camp. Even more obsessive exercise. Long runs followed by doubleheader hot yoga classes. The yogis called it “doing a double,” and it was celebrated. 
Everyone was so impressed I could sweat for an hour after a 10 kilometer and then walk home. I leaned more into orthorexia than anorexia. I did a cleanse with the yogis. I went vegan. I ate squash for breakfast. My whole life became a cleanse, and nobody at that studio ever asked if I was starving, either. When I went to Africa the second time, for my master’s research (smart people get eating disorders, too), ED and I were on a bit of a break. But when a well-meaning Tanzanian friend gestured toward my body and said, “Ah! Our country has been treating you well! You must really like our food!” all I heard was, “You are fat now.” I blanched. The room spun. This—in a country where many face unspeakable challenges and suffer from actual starvation, not on purpose—was my worst nightmare. I ran straight back into ED’s arms, and I made a plan. I would once again return to Canada triumphant: emaciated. I already walked everywhere in the hot sun, but now I stepped it up, doing workout videos on my concrete floor beside my fan, even if it wasn’t working because the power was cut. I counted peanuts and popcorn kernels. I picked at pieces of chapati and pushed away my rice on a six-day trek up Mount Kilimanjaro. I went on a safari and ate next to nothing and found a Belgian lover who validated me when he said something fatphobic in reference to someone else. He couldn’t possibly think I’m fat because he’s fatphobic. If I were fat, he wouldn’t be sleeping with me. I was fatphobic, too. I was sick of choosing between controlling my body and living my life. At 24, I was back in Canada. Extra skinny again and in the flow of restriction. It was easy to deny myself for a while. I started dating, something I’d never really done. I felt sexy. People started looking at me with envy again. I felt special. I tried Tinder and was moderately successful. I slept with some older men and felt scandalous. But I felt empty, too. I was starving in more ways than one — and hungry for a love that would sustain me when food couldn’t, because I’d never let it.
https://humanparts.medium.com/i-had-a-choice-to-make-andrew-or-my-eating-disorder-e2db687a1c96
['Ivy Staker']
2020-05-18 16:05:18.643000+00:00
['Relationships', 'Eating Disorders', 'Mental Health', 'Love', 'Life Lessons']
Revitalized
Revitalized A Poem Photo by Jr Korpa on Unsplash After about twenty seconds of solid rubbing, twisting my hands are warm enough to be brought to work again But it’s hard to revitalize the eyes, they want so much in the way of the same but with subtler alterations Yeah, they’d like to be featured but they don’t want to burn at the sight of the sun or any one thing Another twenty seconds go by, and I didn’t write anything, don’t even know which doors to knock on right now I feel unleashed, but not in the good way; just hanging just drifting and calling out names of random beers I can’t even drink this away not that I was gonna… Well, I might have thought that way, once or twice But I had to leave your grip get on with the marching towards another stale camp site full of yesterday’s weeds Ambushed by a sensitivity that makes a wilderness out of this pseudo-civilization and all the curses got let out Another interval, 20 seconds fingers somewhat warmer now but lacking ideas for what to put down next An absent crack from the basement and I recoiled in the gaze from the mirror, hoping for direction, a plateful to take me up After all, I won’t be able to leave anything as it stands but the change has to come with and that’s a burden too
https://medium.com/illumination-curated/revitalized-c072fdfa217e
['J.D. Harms']
2020-12-17 17:07:36.007000+00:00
['Poetry', 'Pain', 'Writing', 'Musing', 'Image']
In pictures: putting soldiers to the test in Liverpool
Step 3: open the doors for everyone who wants to volunteer for a COVID-19 test. Testing will be carried out in new and existing test sites, using home kits, in hospitals and care home settings, and in schools, universities and workplaces. There is even a test site in Anfield Stadium, home to Liverpool Football Club. Liverpool residents can book online, walk up, or attend by invitation from the local authority.
https://medium.com/voices-of-the-armed-forces/in-pictures-putting-soldiers-to-the-test-in-liverpool-5789e59f6fd7
['Ministry Of Defence']
2020-11-13 16:19:39.284000+00:00
['Liverpool', 'Covid 19', 'Coronavirus', 'Military', 'Photo Essay']
Empathy Is Overrated
Our culture is obsessed with the power of empathy. Whenever a politician strips a marginalized group of their rights, the left decries the lack of empathy. When hate groups rise up and spout vitriol, their apparent lack of empathy is blamed as the root of their evil. Even critiques of capitalism somehow become conversations about empathy. It’s as if people believe the most pressing problem is lack of love in billionaires’ hearts, not the systems of power and capitalism that made them billionaires. As an Autistic person who cares about social issues, this obsession with empathy frustrates me. Feeling another person’s emotions does not innately make you a good person. Being emotionally sensitive doesn’t ensure that you’ll take the steps necessary to help someone. And those of us who struggle with empathy are not monsters or robots. We are just as capable of acting compassionately as anybody else. Empathy is overrated. It is an alluring illusion. In reality, we never know how another person feels. And we don’t have to. We don’t need intuitive, magical empath powers to uplift other people or to right society’s wrongs. Our actions and choices can matter so much more than how we feel. Empathy is an illusion You’ve probably heard empathy defined as “feeling what another person feels.” Even in psychology, we often explain empathy that way. Empathic people feel sad when other people are sad. When you witness someone getting punched, empathy might make your own brain light up with pain. It’s almost like having psychic abilities. Right? Empathy is an emotional simulation of what you believe another person might be feeling. The problem with this definition of empathy is people tend to take it literally. Self-identified empaths (as well as highly sensitive people, or HSPs) often believe they are uniquely intuitive and have a “sixth sense” for how other people feel. Nearly every popular book about empaths and HSPs feeds into this belief. They describe empathy as a “gift,” using awed, vague language that suggests it’s almost like magic. This isn’t actually the case. At best, empathy is an illusion. It’s an emotional simulation of what you believe another person might be feeling. These simulated emotions can be intense and compelling, but that doesn’t mean they are correct. If a person’s facial expressions are hard to read or if their experiences and reactions are a little out of the ordinary, empathy may fail to tell you what they’re going through. I’m an Autistic person, and empathic people read my emotions incorrectly all the time. I once had a co-worker, Lauren, who was very sensitive and kind. Lauren was absolutely convinced I was a miserably sad, lonesome soul. Every time she popped into my office to say hello, she’d notice I was frowning, so she’d frown back at me in an exaggerated way and ask in a low, concerned voice if I was doing “okay” — as if I were a scared baby bunny lying injured in the woods. When Lauren looked at me, she felt kind of sad and uncomfortable. She assumed that meant I was sad and uncomfortable, too. In reality, my resting facial expression is just flat and seems “emotionless,” especially to non-Autistic people. Research shows that neurotypical people often feel uneasy around Autistic folks, even if they can’t pin down why. Confusion over how we express emotion is often a big part of it. In her attempts to connect with me emotionally, Lauren left me feeling alienated and misunderstood. 
When we’re too confident in the intuitive magic of empathy, we risk making all manner of errors. We may assume that a person on trial for a crime is heartless and sociopathic, when really they’re frozen with panic. Those of us who are non-Black may believe a Black woman is “angry” because racism has clouded our perceptions. We may only have sympathy for people who express emotions in ways that seem normal to us based on our culture. Instead of bringing us together, misplaced empathy can drive us apart. Empathy is not perspective-taking In psychology, we sometimes draw a distinction between affective (or emotional) empathy and cognitive (or mental) empathy. Affective empathy is feeling what (we believe) another person is feeling. When the average person uses the word “empathy,” that’s the one they mean. Cognitive empathy, also known as perspective-taking, is imagining what it’s like to see through another person’s eyes and thinking about what they might be going through. Perspective-taking is distinct from empathy in many ways. For one, perspective-taking is a skill that anyone can practice. You don’t have to be naturally good at it. Perspective-taking involves thinking carefully about a person’s life and critically analyzing how they think, and we can update or refine our understanding as new information comes in. It’s not an instinct; it’s a behavior you can choose to take. Many Autistic people—as well as people with attention deficit hyperactivity disorder (ADHD), antisocial personality disorder, borderline personality disorder, and others—have a hard time with empathy. We often overcompensate by developing keen perspective-taking skills. I can’t always read someone’s emotions from their face or tone of voice, but I can pay attention to the content of what they say, think about what I know about them and their lives, and draw reasonable conclusions from all that data. I spend a lot of time thinking about the lives of other people, trying to piece together an understanding of how they might experience the world. Whenever I meet someone new, I try to think about how I can avoid accidentally hurting or alienating them. If they’re a member of a marginalized group, I keep in mind the dozens of ignorant, microaggressive things people probably say to them all the time, and I try my best to avoid doing any of that. If they share personal, sensitive information with me, I try to really listen and not respond with any undermining cliches. It always shocks me when a supposedly more empathic, non-Autistic person wanders into the exact same conversation and immediately says the most obvious thing that comes to mind or downplays another person’s emotions with treacly look-on-the-bright-side language. Such a lack of care is unfathomable to me. Yet people who are supposedly empathic behave this carelessly all the time. Some people find socializing so effortless that they never had to learn to perspective-take. As a result, many of their interactions are thoughtless, bubbly, and ultimately, pretty shallow. Empathy is overwhelming One of the other drawbacks of empathy is how overpowering it can be. When you are caught up in feeling another person’s emotions (or what you believe their emotions are), you may not be able to think clearly. You may even lose sight of the person you’re empathizing with. Autistic people are often stereotyped as lacking empathy, but one common theory of Autism is that we experience excessive, distressing levels of empathy. 
Autistic people can easily get overloaded by the anguish, rage, or even joy of other people. We may become confused by intense yet hard-to-name emotions we feel. It can cause us to have meltdowns or to dissociate. I sometimes get stressed when people are raucous and laughing too loudly; even though I want to share their happiness, it puts me on edge. On the flip side, talking deeply with a distressed person can leave me feeling drained for days afterward. When I get overwhelmed with another person’s emotions, I start to check out. I look even more detached and robotic than usual. I may be unable to make eye contact with them. I may even start falling asleep. This is an Autistic shutdown, but people mistake it for apathy and a lack of empathy. The real problem is that intense empathy sometimes inhibits helping behavior. Empathy can overwhelm non-Autistic people in damaging ways, too. Sometimes, people get so wrapped up in empathizing with another person that they forget to focus on who was actually harmed. A white person might cry about racism so loudly that it pulls focus from the people of color actually suffering, for example. Or a supposedly supportive feminist friend might be so distressed to hear about your abusive ex that you find yourself having to comfort them, instead of the other way around. A lot of people would chalk this kind of behavior up to narcissism, but narcissistic people can be caring and compassionate just like anyone else. The problem here is not that people feel intense emotions about events that don’t involve them. Those feelings are completely neutral, and they are neither evil nor good. The real problem is that intense empathy sometimes inhibits helping behavior. It’s fine to feel immense sadness on behalf of someone else, so long as you don’t mistake that for taking productive action. At the end of the day, it is how you behave that matters far more than what you’re feeling. Empathy is not compassion Empathy is an internal experience. By itself, it does nothing to remedy structural injustice or bring comfort. When progressive left-leaning people decry the lack of empathy in our culture, what they really mean is the lack of compassionate acts. Thankfully, people don’t need empathy to behave compassionately. Compassion drives us to do things like check up on older isolated relatives, donate money to unemployed people’s crowdfunding campaigns, and volunteer time driving people to the polls. Unlike empathy, which is mostly emotionally driven, compassion can be emotional, intellectual, or even philosophical in nature. I might decide to advocate for my university’s graduate student union because worker exploitation makes me sad, or I might get involved because I recognize, intellectually, that such efforts are important. It doesn’t matter whether my heart or my mind led me to behave compassionately. What matters is that I made the choice to get involved. Autistic people are often deeply compassionate, regardless of whether we feel empathy or not. People who are even more deeply demonized, such as those with antisocial personality disorder or borderline personality disorder, can also behave compassionately without empathy. You don’t have to feel someone else’s feelings in order to care about their well-being. You just have to believe that human life has worth and that suffering should be prevented and minimized as much as possible.
https://humanparts.medium.com/empathy-is-overrated-6cf4090c601e
['Devon Price']
2020-05-04 15:42:22.733000+00:00
['Personal Growth', 'Life', 'Autism', 'Psychology', 'Self Improvement']
Applying design to supply chain data.
Supply chain data are complex, multilayered, and sometimes hard to untangle. When it comes to using data to make decisions, design has a crucial role in ensuring data are efficiently understood. With the right presentation and interpretation, supply chain data can be transformed into actionable insights that lead to more sustainable, more secure, and more robust supply chains. For three years now we’ve been collaborating with the Stockholm Environment Institute (SEI) and Global Canopy Programme (GCP) to visualise supply chain data on Trase.Earth. The platform currently provides data and insights on 13 commodities from eight countries. As the number of platform users increased, the experience and needs of those people diversified too. From journalists revealing the links between burgers and deforestation to companies wanting to understand the environmental impact of their supply chains, people across the world are using Trase to understand the links between forests and the food on our plates. So, after a careful review of the google analytics and research by the Engagement Team at Global Canopy that includes interviews with people who use the platform, we embarked on a redesign that provides more guidance to first time users and offers faster access for all users. In this blog we’ll review some of the goals we wanted to achieve with the redesign, and explain how they make complex data easier to understand. Goal 1. First-time users feel comfortable with the tool. When Trase.Earth was first created, the teams at SEI and Global Canopy often gave presentations or one-on-one introductions to those who were interested in using it. While this was an amazing opportunity to get an expert’s insight into the intelligence within the dataset, it’s not a scalable approach. To make the data more accessible to those who don’t have a trade data expert on hand, our designers reimagined the entry points to Trase’s data tools. The goal our designers wanted to achieve was making first-time visitors feel welcome and spark their curiosity. To leave them feeling like they’ve been shown what they can do with the data, and what insights they can extract from it. By reducing the friction that comes with learning how to use a new tool, we are able to draw people deeper into the data while giving them context and interesting numbers to sink their teeth into. Goal 2. Navigation that’s easy as 1,2,3. Easy, fluid navigation is crucial for both first-time and returning users. For Trase, our designers created a step-by-step approach that introduces the different data options while showing users how the tool works. Upon arrival on the data tool page, the first step is to select a commodity.
https://medium.com/vizzuality-blog/applying-design-to-supply-chain-data-56f71dcedb6d
['Camellia Williams']
2020-04-21 11:58:06.903000+00:00
['Design', 'Data', 'Supply Chain', 'Data Visualization', 'UX']
Get started with Hashicorp Vault
HashiCorp Vault — This product is currently running in many big enterprise companies. I have seen a lot of people complain about the complexity of it and the pain of setting it up. I think this product is great and would like to share my experience and knowledge to help you speed up the initial set up. You will find it’s not as difficult as you think. In this tutorial, I will walk you through how to set up vault with different types of backends in your local environment, also give you the example code so you can begin with a single command. But first, let’s have a quick look at why this product is currently in high demand. What is Vault and Why? Vault is the one-stop-shop for all your sensitive data. It stores and tightly controls access to tokens, passwords, certificates, encryption keys for protecting secrets, and other sensitive data using a UI, CLI, or HTTP API. Centralized — Vault is a comprehensive solution in one box, no need to have additional products for a specific feature, it contains Key Management System Encryption System PKI System … Multi-cloud — Just like all the other HashiCorp products, Vault is multi-cloud friendly. Imagine that your company is running a multi-cloud solution, and your AWS application needs to have access to GCP projects, the AWS IAM doesn’t make sense to GCP, however, with Vault as a broker, it does. Run Anywhere — Vault is just a piece of software and can be run anywhere. To install it, use homebrew , yum and apt-get , or maybe just download directly from Hashi’s website. Vault binary can be used in many modes Server mode — this is the mode that allows Vault to host the API server, interact with your client, and persist your secret data. this is the mode that allows Vault to host the API server, interact with your client, and persist your secret data. Client mode — the same binary can be used as a client, since vault is just an API server, in the client mode vault can send a request to the server (consider it as a wrapper for curl) you may use curl or any other tool to send requests as well. the same binary can be used as a client, since vault is just an API server, in the client mode vault can send a request to the server (consider it as a wrapper for curl) you may use curl or any other tool to send requests as well. Agent mode — this is a slightly more advanced way of using vault, it will sit somewhere in your VM or Container, talk to the vault server and retrieve the secret on your behalf, and eventually injects the secrets somewhere in the VM for your application. Read my other post for more details. Let’s have a quick look at how to run it with different backend storage. Run the test server in memory Have vault added in your environment PATH and run the following command, just as simple as that, you can start to use vault in dev mode against your localhost at port 8200 . vault server -dev Or do it with docker docker run --cap-add=IPC_LOCK -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:1234' -p 8200:1234 vault IPC_LOCK memory lock is required to prevent swapping sensitive data to disk VAULT_DEV_ROOT_TOKEN_ID (optional) the root id can be easily be set in dev with the environment variable, if not, Vault will generate a random root token during initialization and output it in the log. Once up in dev mode, Vault will be initialized and unsealed, i.e. ready to use. VAULT_DEV_LISTEN_ADDRESS (optional), it defines where the API endpoint is listening to, needs to be 0.0.0.0 if in a container. 
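Before wiring up any real backend, it helps to poke the dev server once from a second terminal. The lines below are a minimal sketch, assuming the default dev address with TLS disabled and the KV v2 engine that dev mode mounts at secret/ out of the box; the myapp path and the key/value pairs are made-up examples, and the root token is whatever your own dev server printed.

# point the CLI at the dev server; dev mode listens on plain HTTP
export VAULT_ADDR='http://127.0.0.1:8200'

# confirm the server is up, initialized and unsealed
vault status

# authenticate with the root token printed in the dev server output
vault login <dev-root-token>

# write a first secret and read it back (the path and keys are illustrative)
vault kv put secret/myapp db_user=app db_password=s3cr3t
vault kv get secret/myapp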
Use the same binary as the client to log in with the following command, and the Vault server will return a token stored at ~/.vault-token , which the vault client will use to send any later commands.

vault login <your-password>

vault login with root token

File storage backend

Vault can store the secret data in a local file. However, it does not support High Availability, so it’s not ideal for production. I personally use it as my local test environment, since it lets you keep the working state from last time. In order to make it work, we will need to create a config file and add the following code block. This block tells Vault where to persist the encrypted data and to listen for requests on port 8200 ; you can also customise the storage settings and have multiple listener blocks.

storage "file" {
  path = "vault/data"
}
listener "tcp" {
  address = "127.0.0.1:8200"
  tls_disable = 1
}
ui = true

Run the following command to start the Vault server.

vault server -config=/path/to/your/config/file

When running Vault in non-dev mode, you will need to init and unseal the Vault before using it.

vault operator init
vault operator unseal <shamir-key1>
vault operator unseal <shamir-key2>
vault operator unseal <shamir-key3>
vault login <initial-root-token>

Of course, you can do it in Docker; it’s pretty much what people do these days to keep things tidy and clean. Start fresh by downloading the latest version of Vault on Alpine, or simply use the latest Vault Docker image. Since we have ui = true specified in the config file, Vault is also available at localhost:8200 , try it in the browser.

Sealed Vault

Use Consul as backend

When talking about Consul we could easily expand this into a whole new article, but for now all you need to know is that we can use Consul as the storage, and it supports HA, which is the way HashiCorp recommends running Vault for production. The ideal structure is having multiple Consul servers as the storage cluster and a few Vault servers on top. A production-ready cluster diagram would look as follows. Here I will provide an example of a docker-compose file running a single Vault and Consul together. Don’t worry if the code is not complete here; I have the repo at the very end of the page, with more operational instructions in the README.

docker-compose up -d

Clone the repo and run the above command in the consul-backend folder; after about 10 seconds, Vault and Consul should be alive.

Kubernetes

You may think using Vault on k8s is the most complex option, but on the contrary, it’s actually the easiest. Everything has been sorted out for you: you can get it installed using helm and it will work without any change to the default chart file.

Install Vault only

helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault

Or use Consul as the backend

helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul --values helm-consul-values.yml
helm install vault hashicorp/vault --values helm-vault-values.yml

That’s pretty much all you need to do to get Vault up and running to play with; see, it’s not that hard, right? Here is the repo with all the scripts you need on this page. Thanks for reading. I hope you found this page useful so far. Follow me and watch this space as I will be posting more Vault related articles.
Here is my personal blog, check it out and click the AD link if you like it. Cheers.
https://medium.com/weareservian/get-started-with-hashicorp-vault-cc132dce627d
['Phil Xu']
2020-08-31 01:27:58.462000+00:00
['Consul', 'Hashicorp', 'Hashicorp Vault', 'Secrets', 'Kubernetes']
Photo of Planet Uranus Taken With off-the-Shelf Cell Phone
Pictures of Uranus are tricky, but yes, it’s very true My goal is to photograph the trails of satellites in the night sky, with known stars in the background, to enable analysis with Python scripts to calculate things like altitude, velocity, and other parameters. My passion is using Python for creative and fun real-world applications, and this night-sky application seemed like a good challenge. Photo by Usukhbayar Gankhuyag on Unsplash Updated Camera About a week ago I traded up my phone for a Samsung Galaxy S20 Ultra model. I chose this phone for its cameras, although there are several other phones available with similar capabilities suitable for various types of simple astrophotography. I wasn’t sure if any of today’s phones would be up to the task of capturing stars and satellites, but I was in for a pleasant surprise! Photo by Jonas Leupe on Unsplash Finding a Dark Sky Location Last night was our first drive out of town to find better dark sky conditions, with the hope of catching a satellite or two. (The Python scripts are yet to be written, but stay tuned as I plan to present some code in the next few days.) The night was a huge success, as the photographs showed many more stars than we could see with the naked eye, and we caught several satellites that we were able to clearly identify with the aid of the online Stellarium program. Even as the Sun was setting we captured many stars in our photographs For example, we could clearly see the constellation Cassiopeia with the naked eye, but the photographs revealed many more stars in that part of the sky than we had expected. Here’s a comparison of a zoom in on one small part of a photo with a screen grab of Cassiopeia from Stellarium: A small part of a photo showing Cassiopeia Courtesy Stellarium online app It’s tricky to do the photograph justice here in this online article, but it is very possible in the original photo to see many faint stars that perfectly match those shown in the online Stellarium app as you zoom in on the same part of the night sky. Notice that even the M31 galaxy is visible in the photo as a faint smudge. Camera Setup To capture these photos I put my Samsung Galaxy S20 Ultra camera into Pro mode, manually set the focus to infinity, the ISO to 1600 (it can go up to 3200 on this camera), the exposure to 8 seconds, and the aspect ratio to 4:3. I used a tripod, as this is an absolute necessity to get clear 8 second exposures. Photo by Ben Collins on Unsplash — Any astrophotography requires a tripod We came home with a couple dozen photographs, several showing Russian, U.S., and even Egyptian satellite trails. Again, stay tuned for a future article showing how we verified these satellites, and how we can do interesting calculations on those photos. But the story doesn’t stop there! Fun Facts About Uranus Uranus is an ice giant planet almost 20 times further from the Sun as the Earth, and its diameter is only 4 times that of the Earth. Jupiter is almost 3 times bigger than Uranus, and it’s only about 5 times as far from the Sun as the Earth. Uranus takes 84 Earth years to orbit the Sun just once. Jupiter is easy to see in the night sky, but Uranus is smaller, much further away, and it sits far out there in much dimmer sunlight. It really is hard to see, or photograph. 
Planet Uranus fact: It’s almost exactly 4 times the diameter of the Earth Catching Uranus in a Photograph While studying the various photographs against the Stellarium display, I noticed that Uranus was marked in a position that should be in one of the photos. Sure enough, when I zoomed in on that part of the photo, the dim white dot of Uranus was there! Uranus is normally nearly impossible to see with the naked eye, or to photograph with off-the-shelf cameras without special lenses and filters, so I was totally impressed and blown away with this discovery. Courtesy Stellarium online app
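The satellite calculations promised above will come with the Python scripts, but as a rough preview of the kind of maths involved, here is a minimal sketch that is not from the original write-up: it turns a measured trail length and the known exposure time into an angular rate, and then into an approximate speed under an assumed slant range. All three input numbers are placeholders, and the small-angle shortcut only holds for a pass well above the horizon.

import math

# placeholder inputs: trail length measured against catalogued star positions,
# the exposure used for the photo, and an assumed slant range to the satellite
trail_deg = 8.8      # angular length of the trail in degrees
exposure_s = 8.0     # exposure time in seconds
range_km = 420.0     # assumed distance to a low-Earth-orbit satellite

omega_deg_s = trail_deg / exposure_s      # angular rate in degrees per second
omega_rad_s = math.radians(omega_deg_s)   # the same rate in radians per second
speed_km_s = range_km * omega_rad_s       # v is roughly R * omega for small angles

print(f"angular rate: {omega_deg_s:.2f} deg/s, approximate speed: {speed_km_s:.2f} km/s")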
https://jccraig.medium.com/photo-of-uranus-taken-with-off-the-shelf-cell-phone-11014fbfdee7
['John Clark Craig']
2020-11-08 03:32:26.730000+00:00
['Astronomy', 'Cell Phones', 'Python', 'Astrophotography', 'Amateur Astronomy']
A Guide to Deep Learning and Neural Networks
Components of Neural Networks Every neural network consists of neurons, synapses, weights, biases, and functions. Neurons A neuron or a node of a neural network is a computing unit that receives information, performs simple calculations with it, and passes it further. All neurons in a net are divided into three groups: Input neurons that receive information from the outside world Hidden neurons that process that information Output neurons that produce a conclusion In a large neural network with many neurons and connections between them, neurons are organized in layers. An input layer receives information, n hidden layers (at least three or more) process it, and an output layer provides some result. Each of the neurons inputs and outputs some data. If this is the first layer, input = output. In other cases, the information that the neurons have received from the previous layer is passed to input. Then it uses an activation function to get a new output, which is passed to the next layer of neurons in the system. Neurons only operate numbers in the range [0,1] or [-1,1]. In order to turn data into something that a neuron can work with, we need normalization. We talked about what it is in the post about regression analysis. Wait, but how do neurons communicate? Through synapses. Synapses and weights If we didn’t have synapses, we would be stuck with a bunch of inactive, useless neurons. A synapse is a connection between two neurons. Every synapse has a weight. It is the weight that changes the input information while it is transmitted from one neuron to another. The neuron with the greater weight will be dominant in the next neuron. One can say that the matrix of weights is the brain of the whole neural system. It is thanks to these weights that the input information is processed and converted into a result. During the initialization (first launch of the NN), the weights are randomly assigned. Later on, they are optimized. Bias A bias neuron allows for more variations of weights to be stored. Biases add a richer representation of the input space to the model’s weights. In the case of neural networks, a bias neuron is added to every layer. It plays a vital role by making it possible to move the activation function to the left or right on the graph. It is true that ANNs can work without bias neurons. However, they are almost always added and counted as an indispensable part of the overall model.
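To make these pieces concrete, here is a minimal sketch, not taken from the guide itself, of a single hidden layer processing one normalized input vector: the weight matrix plays the role of the synapses, the bias vector shifts the activation, and a sigmoid squashes each output into the [0,1] range mentioned above.

import numpy as np

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# raw input data, then min-max normalization so the neurons can work with it
raw = np.array([3.0, 150.0, 0.2])
x = (raw - raw.min()) / (raw.max() - raw.min())

# randomly initialized synapse weights (3 inputs -> 4 hidden neurons) and biases
rng = np.random.default_rng(seed=0)
W = rng.normal(size=(4, 3))
b = rng.normal(size=4)

# the hidden layer: weighted sum of the inputs plus bias, passed through the activation
hidden = sigmoid(W @ x + b)
print(hidden)  # four values, each between 0 and 1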
https://medium.com/better-programming/a-guide-to-deep-learning-and-neural-networks-a2379c42f59b
[]
2020-10-16 15:35:40.347000+00:00
['Machine Learning', 'Neural Networks', 'Artificial Intelligence', 'Deep Learning', 'Programming']
Mental Models For Dummies
Blind spots, big blind spots Having a variety of mental models at our disposal is particularly important when facing complex problems, as it provides us with an ability to see the world through multiple lenses. Unfortunately, however, society tends to look at reality through a single lens, dividing it into discrete topics to make it more accessible for study. Shane Parrish captures this nicely in his latest book The Great Mental Models: “Most of us study something specific and don’t get exposure to the big ideas of other disciplines. We don’t develop the multidisciplinary mindset that we need to accurately see a problem. And because we don’t have the right models to understand the situation, we overuse the models we do have, and use them even when they don’t belong.” For example, an economist will often think in terms of supply and demand. A behavioural psychologist will think in terms of reward and punishment. Through their respective disciplines, these individuals only see part of the situation, the part of the world that makes sense to them. “None of them, however, see the entire situation unless they are thinking in a multidisciplinary way. In short, they have blind spots, big blind spots.” To better navigate the complexities of life, and to help us to see our blind spots, we need what Charlie Munger — one of the most successful people in the world — calls, a “latticework of mental models” (see image below). Source: Example of a latticework — interlacing strips of material forming a lattice A latticework is a great way to conceptualise mental models because it demonstrates the interconnected nature of knowledge. Reality is not comprised of a unique set of disciplines. We only break it down that way to make it easier to digest. However, once we learn something, we need to place it back into the interconnected system from which it came. Only then can we begin to build an understanding of the whole. This, as Shane notes, “is the value of putting the knowledge contained in mental models into a latticework.”
https://medium.com/personal-growth/mental-models-for-dummies-527419014f9e
['Brian Pennie']
2020-12-19 18:51:40.269000+00:00
['Leadership', 'Decision Making', 'Life', 'Thinking', 'Psychology']
Be So Good They Can’t Ignore You
Back when I was still in high school, I was a bass clarinetist in the Minnesota All-State Band. One year, we had a conductor from Texas who told a story about his college days in California. You see, this conductor was heavily immersed in the school’s arts programs, and he told us about a classmate named Steve. Steve was an actor. Well, Steve was a wannabe actor, and he was known for “stealing the scene” in every production. Supposedly, it was a bit of a joke on campus that this dude was so over the top that he drew attention to himself even when he had non-speaking roles. Our conductor animated his story with flailing arms and overeager expressions to illustrate how Steve could always be seen in the background of any performance. I sat there on stage listening to this story over twenty years ago, so who knows what details or nuance I’m forgetting. But I believe the conductor’s point was to stress the power of perseverance, and the punchline of the whole thing was that this kid who couldn’t quit drawing attention to himself was none other than Steve Martin.
https://medium.com/honestly-yours/be-so-good-they-cant-ignore-you-e8830c61e4db
['Shannon Ashley']
2019-10-17 18:56:29.836000+00:00
['Success', 'Comedy', 'Life Lessons', 'Inspiration', 'Writing']
Stop persisting pandas data frames in CSVs
Excel and to_excel() Sometimes it’s handy to export your data into an excel. It adds the benefit of easy manipulation, at a cost of the slowest reads and writes. It also ignores many datatypes. Timezones cannot be written into excel at all. # exporting a dataframe to excel df.to_excel(excel_writer, sheet_name, many_other_parameters) Useful Parameters: excel_writer — pandas excel writer object or file path — pandas excel writer object or file path sheet_name — name of the sheet where the data will be output — name of the sheet where the data will be output float_format — excel’s native number formatting — excel’s native number formatting columns — option to alias data frames’ columns — option to alias data frames’ columns startrow — option to shift the starting cell downward — option to shift the starting cell downward engine — openpyxl or xlsxwriter — or freeze_panes — option to freeze rows and columns Advantages of excel: allow custom formating and cell freezing human-readable and editable format Disadvantages of excel: very slow reads/writes (20 times/40 times slower) limit to 1048576 rows serialization of the datetimes with timezones fails More information on Pandas IO page. Performance tests results for excel. Only 54% of columns kept the original datatype, it took 90% size of the CSV, but it took 20 times more time to write and 42 times more time to read HDF5 and to_hdf() Compressed format using an internal file-like structure suitable for huge heterogeneous data. It’s also ideal if we need to randomly access various parts of the dataset. If the data are stored as table (PyTable) you can directly query the hdf store using store.select(key,where="A>0 or B<5") # exporting a dataframe to hdf df.to_hdf(path_or_buf, key, mode, complevel, complib, append ...) Useful Parameters: path_or_buf — file path or HDFStore object — file path or HDFStore object key —Identified or the group in the store —Identified or the group in the store mode — write, append or read-append — write, append or read-append format — fixed for fast writing and reading while table allow selecting just subset of the data Advantages of HDF5: for some data structures, the size and access speed can be awesome Disadvantages of HDF5: dataframes can be very big in size (even 300 times bigger than csv) HDFStore is not thread-safe for writing fixed format cannot handle categorical values SQL and to_sql() Quite often it’s useful to persist your data into the database. Libraries like sqlalchemy are dedicated to this task. # Set up sqlalchemy engine engine = create_engine( 'mssql+pyodbc://user:pass@localhost/DB?driver=ODBC+Driver+13+for+SQL+server', isolation_level="REPEATABLE READ" ) # connect to the DB connection = engine.connect() # exporting dataframe to SQL df.to_sql(name="test", con=connection) Useful Parameters: name — name of the SQL table — name of the SQL table con — connection engine usually by sqlalchemy.engine — connection engine usually by chunksize — optionally load data in batches of the chunksize Advantages of SQL: Slower than persisting on disk (read 10 times/write 5 times, but this can be optimized) Databases are understandable by all programmers Disadvantages of SQL: Some data format are not kept — category, int, floats and timedeltas depending on the database performance can be slow you may struggle to set up a DB connection in some cases If you would like to increase the write time of .to_sql() try Kiran Kumar Chilla’s method described in Speed up Bulk inserts article. 
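The linked article describes its own approach in detail; as a hedged illustration of the most common tuning, the sketch below batches the rows and packs several of them into each INSERT statement. The connection string, table name and sample frame are placeholders, and whether method="multi" actually helps depends on your database and driver.

import numpy as np
import pandas as pd
from sqlalchemy import create_engine

# small sample frame just for illustration
df = pd.DataFrame({"id": np.arange(50_000), "value": np.random.randn(50_000)})

# placeholder connection string: swap in your own driver, credentials and database
engine = create_engine("postgresql+psycopg2://user:pass@localhost:5432/testdb")

df.to_sql(
    name="test",
    con=engine,
    if_exists="replace",  # drop and recreate the table if it already exists
    index=False,          # do not write the dataframe index as a column
    chunksize=10_000,     # send the rows in batches instead of one huge insert
    method="multi",       # pack several rows into each INSERT statement
)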
Feather and to_feather()

Feather is a lightweight format for storing data frames and Arrow tables. It’s another option for storing the data, which is relatively fast and results in a small file size. I did not include it in the measurement because the engine locks the files for quite a long time and it’s hard to do several repetitions of the performance test. If you plan to persist a data frame once, feather can be an ideal option.

Other methods

Pandas offers even more persistence and reading methods. I’ve omitted JSON and fixed-width files because they have characteristics similar to CSV. You can try to write directly to Google Big Query with .to_gbq() or to the stata format. New formats will definitely appear to address the need to communicate with a variety of cloud providers. Thanks to this article, I started to like .to_clipboard() when I copy one-liners to emails, excel, or google doc.

Performance Test

Many of the methods have benefits over the CSV, but is it worth using these unusual approaches when CSV is so readily understandable around the world? Let’s have a look at the performance. During the performance test I focus on 4 key measures:

data type preservation — how many % of columns remained the original type after reading
compression/size — how big is the file in % of csv
write_time — how long does it take to write this format as % of csv writing time
read_time — how long does it take to read this format as % of csv reading time

For this purpose, I have prepared a dataset with 50K random numbers, strings, categories, datetimes and bools. The ranges of the numerical values come from the numpy data types overview.

data = []
for i in range(1000000):
    data.append(
        [random.randint(-127,127), # int8
         random.randint(-32768,32767), # int16
         ...

Generating random samples is a skill used in almost every test. You can check the support functions generating random strings and dates in the GitHub notebook; I’ll only mention one here:

def get_random_string(length: int) -> str:
    """Generate a random string up to the specified length"""
    letters = string.ascii_letters
    result_str = ''.join([random.choice(letters) for i in range(random.randint(3,length))])
    return result_str

Full code to generate the data frame is described in this gist: Generate random data and measure the read/write speed in 7 iterations

Once we have some data, we want to process them over and over again with different algorithms. You can write each of the tests separately, but let’s squeeze the test into one line:

# performance test
performance_df = performance_test(exporting_types)
# results
performance_df.style.format("{:.2%}")

The performance_test function accepts a dictionary with the test definition, which looks like:

d = {
    ...
    "parquet_fastparquet": {
        "type": "Parquet via fastparquet",
        "extension": ".parquet.gzip",
        "write_function": pd.DataFrame.to_parquet,
        "write_params": {"engine":"fastparquet","compression":"GZIP"},
        "read_function": pd.read_parquet,
        "read_params": {"engine":"fastparquet"}
    }
    ...
}

The dictionary contains the functions which should be run, e.g. pd.DataFrame.to_parquet, and their parameters.
We iterate the dict and run one function after another:

path = "output_file"
# df is our performance test sample dataframe

# persist the df
d["write_function"](df, path, **d["write_params"])

# load the df
df_loaded = d["read_function"](path, **d["read_params"])

I store the results in a dataframe to leverage the power of Plotly Express and display the results with a few lines of code:

# display the graph with the results
fig = pe.bar(performance_df.T, barmode='group', text="value")
# format the labels
fig.update_traces(texttemplate='%{text:.2%}', textposition='auto')
# add a title
fig.update_layout(title=f"Statistics for {dataset_size} records")
fig.show()

Performance test results. Data format preservation measures the % success; size and speed are compared to the csv.

Sanity Check Testing things on random samples is useful to get a first impression of how good your application or tool is, but at some point it will have to meet reality. To avoid any surprises, you should try your code on real data. I've picked my favorite dataset — the US Securities and Exchange Commission quarterly data dump — and run it through the performance test. I have achieved very similar results, which persuaded me that my assumptions were not completely wrong.
https://towardsdatascience.com/stop-persisting-pandas-data-frames-in-csvs-f369a6440af5
['Vaclav Dekanovsky']
2020-11-03 11:09:40.803000+00:00
['Pandas', 'Python', 'Dataframes', 'Plotly Express', 'Code Performance']
FlightPredict II: The Sequel
FlightPredict II: The Sequel Predict flight delays (now with PixieDust) A couple months ago, David Taieb put together a tutorial on how to Predict Flight Delays with Apache Spark MLLib, FlightStats, and Weather Data. For the sequel, we sprinkle some PixieDust onto his original solution and the result is pure magic. PixieDust is an open source Python helper library that extends the usability of notebooks. Using PixieDust's visualization and apps features, we provide a customized, interactive, and more pleasing experience than you'll find in a regular notebook.

Pre-flight checklist Before you follow the steps in this post, run through the Predict Flight Delays with Apache Spark MLLib, FlightStats, and Weather Data tutorial. At a minimum, you must complete the following steps from that tutorial:
✓ Set up a FlightStats account (REQUIRED! In the first tutorial, you could skip this step, but you need these credentials to run this notebook.)
✓ Provision the Weather Company Data service
✓ Obtain or build the training and test data sets
Once you've done that, you can tackle this tutorial, which is a run-through of my Flight Predict with PixieDust notebook, which you can run from the IBM Data Science Experience (DSX) or from a local Jupyter Notebook environment (with Spark 1.6.x and Python 2.x).

Cleared for take-off While you can run the application from any Jupyter Notebook environment, I used IBM's Data Science Experience. The first step is to get the Flight Predict with PixieDust notebook into DSX:
Note: For best results, use the latest version of either Mozilla Firefox or Google Chrome.
Sign into DSX.
Create a new project (or select an existing project). On the upper right of the screen, click the + plus sign and choose Create project.
Add a new notebook (From URL) within the project: click add notebooks, click From URL, enter a notebook name, and enter the notebook URL: https://raw.githubusercontent.com/ibm-cds-labs/simple-data-pipe-connector-flightstats/master/notebook/Flight%20Predict%20with%20Pixiedust.ipynb
Select the Spark Service and click Create Notebook. If prompted, select a kernel for the notebook. The notebook should successfully import.

Fly through the notebook Run through each cell of the notebook in order. When you use a notebook in DSX, you can run a cell only by selecting it, then going to the toolbar and clicking on the Run Cell (▸) button. If you don't see the Jupyter toolbar showing that run button and other notebook controls, you're not in edit mode. Go to the dark blue toolbar above the notebook and click the edit (pencil) icon. Go through the notebook, running each code cell.
Install PixieDust and its flightpredict plugin. Run the first 2 cells, which install and update pixiedust and the pixiedust-flightpredict plugin.
Restart the kernel. From the menu, choose Kernel > Restart.
Run the following cell to import the python package and launch the configuration dashboard:
import pixiedust_flightpredict
pixiedust_flightpredict.configure()
The dashboard checks the current status of the app and guides you through setup.
Add credentials and update incorrect or missing info (x icon) entries. On the top right of the dashboard list, click the Edit Configuration button. Enter the credentials you got when completing the first tutorial. To save, click the Save Configuration button. The dashboard updates to show completed data.
To create a cell with code to load the training data, click on Generate Cell code to load trainingData. The new cell appears under the dashboard.
Go to the newly created cell and run the cell. The cell output is a PixieDust visualization of the training data, which you can view in various formats and also download or save into Cloudant or Object Store.
Re-run the Configuration Dashboard cell you ran in Step 3 and it updates to show you've loaded training data.
Complete configuration. Continue through the dashboard, clicking each Generate Cell code to load button, then running the new cell that appears below the dashboard. Repeat for each remaining incomplete task, except for the custom handler, which is optional. (You can use the custom handler cell to provide new classification and features. For example, you may want to include a day of departure feature.)
To confirm that you completed all steps, you can run the dashboard cell again. All entries should show None under Action required (except the custom handler, which is optional).

Train and evaluate the models Like the first flight tracker tutorial that you ran through, this notebook creates and runs four models (Logistic Regression, Naive Bayes, Decision Tree, and Random Forest) — this time using PixieDust to display data and the model evaluations.
Now that your data's loaded, go to the Train multiple classification models section and run each of the four code cells.
Run the display(testData) cell to evaluate the models. The pixiedust-flightpredict plugin generates a custom airplane dropdown menu that lets you:
Measure accuracy via an accuracy table and confusion matrices, which you read about in the first tutorial. Again, you can use this tool to judge performance and decide if more training data is needed or if the classes need to be changed.
See a histogram showing the probability distribution.
Visualize Features (results) in a scatterplot.
The airplane menu is a custom PixieDust plugin created for this notebook. PixieDust provides an API that makes it easy for anyone to contribute a new visualization plugin, like that nifty plane menu. You too can extend PixieDust with custom features that serve your needs. Stay tuned for tutorials and docs explaining how to code your own plugin.

Run the models The predictive models are now in place, and it's time to launch the flight delay prediction application.
In the Run the predictive model application section, run the cell. (You can change the initial airport code, LAS, to another city, if you want. You'll also be able to do so in the app that launches.)
import pixiedust_flightpredict
from pixiedust_flightpredict import *
pixiedust_flightpredict.flightPredict("LAS")
Enter flight information and click Continue. You'll see delay predictions from the models, the weather forecast for each airport, and a flight path map:
From here, you may Start Over to enter new flight information or Go to Notebook to return to the notebook.

What you can make out of it Run the last code cell in the notebook, which displays a map with an aggregated view of all the flights that the app has searched:
Click on an airport to see all outgoing flights
Click on a flight path to get a listing of the flights and the number of passengers who searched the specific flight
You can return to the notebook and continue to play with the data. See what you can uncover or improve upon within the flight delay predictions.

You are now free to move about the cabin Predicting flight delays based on weather using machine learning started out as a way of showcasing the flexibility of a notebook. However, with the inclusion of PixieDust, visualizing the data is now even easier.
To take it all the way, you could build a user interface and make this a full-fledged application. You can load, manipulate, and present the data all within the notebook. PixieDust is an open source project looking to improve the notebook experience. You’ll find lots of guidance in its GitHub repo wiki. All are invited to contribute and pull requests welcome! We can have a parade and serve hot hors d’oeuvres…
https://medium.com/ibm-watson-data-lab/flightpredict-ii-the-sequel-fb613afd6e91
[]
2017-02-09 17:09:32.669000+00:00
['Machine Learning', 'Cognitive Computing', 'Python', 'Apache Spark', 'Pixiedust']
The Art of Demotivation
Use negative self-talk to your advantage Memes and quotes that motivate you to greater achievement, that create the sort of behavior that will make you more successful in all that you attempt to do in life? You're not going to use any of that. In the art of demotivation, you're going to do just the opposite. Find or make up negative quotes or memes that judge behavior. The effects of negative self-talk are widely known to derail your attempts to get better at something. Sounds effective in stopping you cold, doesn't it? Why not use that to your advantage? As an example, say you want to lose weight. Rather than positive messages that your mind knows are fake, like "carbs are bad" or "my caveman ancestors didn't need it, so neither do I," try something like this: "What you eat in private you wear in public." That's pretty negative, right? But it makes you think twice about running for the pantry, doesn't it? Here's another for someone who's trying to quit smoking: "Your friends hate how you smell because you smoke all the time." That one gets even worse if your friends have actually said something like that to you. If you're single, make it instead about how kissing a person who smokes tastes like licking an ashtray. This point may sound harsh, but there's a reason negative self-talk regarding self-esteem ruins any attempts at happiness: it's effective.
https://medium.com/curious/the-art-of-demotivation-b672cca1d16e
['Ryan M. Danks']
2020-12-02 03:11:12.476000+00:00
['Self Improvement', 'Motivation', 'Self', 'Life Hacking', 'Inspiration']
Startups In The House! Innovators In Residence Join 8-Week Bootcamp
The Oakland Startup Network, Apps Without Code, and Kapor Center for Social Impact have partnered to announce a unique program for entrepreneurs building the tools of tomorrow — Innovators in Residence at the Kapor Center Innovation Lab (iLab). Our goal is to guide local founders from prototype to product and monetization via a hands-on curriculum and mentoring. A total of 6 tech startups based in Oakland were selected to receive full scholarships to the 8-week bootcamp, and to be the first incubated cohort of innovators residing in the Kapor Center iLab. The teams selected are to receive curated coaching and mentoring, as well as co-working space to effectively scale their business and generate revenue. Sonam Swati is founder of bossy, a fintech organization localizing crowd-lending spheres by encouraging small businesses to create incentives on products or services they offer. The goal of bossy is to create a connected community of grass-roots organizations and women entrepreneurs, by providing a web and mobile platform that will seamlessly feature, process, and celebrate loans given to women following their dreams. Eugene Baah is founder of Resoltz, a wellness tech platform designed for physical education instructors. The digital learning platform allows higher-education instructors to create fitness programs, track nutrition, and deliver meditation/mindfulness programs, while verifying activity using affordable wearable branded devices. Resoltz increases access to physical education resources for students on and off campus, and it allows instructors to spend less time creating course content and to monitor student progress more efficiently. Pamela Martinez is founder of Prezta, a platform that will allow borrowers and lenders to agree on a payment plan, receive reminders about payments, and track how much money has been paid back over time. This platform plans to solve the lending problem in the U.S., and to empower people to leverage relationships in times of financial need and build sound financial habits. Vicente Garcia is founder of Woke, a digital media space and entertainment network for content that is truly diverse and representative. Woke disrupts the status quo of media and entertainment by showcasing content from underrepresented identities in bold and creative ways. The main product is an online streaming service that allows users to access and pay for diverse streaming content that represents their values. Alivia Blount is founder of PreUni, a startup dedicated to eliminating the paper trail during the preschool application process. By issuing a unique form submission, PreUni enables parents to create a single profile for themselves and their child to create a streamlined application. The goal of PreUni is to eventually establish a system which tracks a student from pre-school to high school. Elon Hufana is founder of Shasel, a marketplace that delivers timeless mobile personalized travel experiences and itineraries that align with your interests, schedule, and budget.
https://medium.com/kapor-the-bridge/startups-in-the-house-innovators-in-residence-join-the-kapor-center-ilab-da73e3be98b8
['Chris Mclemore']
2017-09-25 18:12:40.368000+00:00
['Entrepreneurship']
Georgia O’Keeffe on Daily Work, Happiness & Success
Georgia O’Keeffe on Daily Work, Happiness & Success Certain people come our way that reinforce our inherent thoughts and ideas. Georgia O’Keeffe is one of those I once believed that there was someone out there with the answers — somebody somewhere who knew what I didn’t. All I needed to do to find them and extract their knowledge was to be persistent. I was sure I would discover the secret in a mentor’s words, in the lines of an old book, at a conference, or a business networking event. Whatever the magic ingredient, I didn’t have it. Instead, it was out there, and I was going to find it. I didn’t find it. Not for a moment did I consider that my pursuit would come to nought, that I would lose all material gain made over the previous fifteen years. I was swallowed by my own insatiable appetite for external validity. In some respects, you could say that I wasted fifteen years chasing ghosts. Perhaps, but I learned something valuable that the right course would never have taught me. “Whether you succeed or not is irrelevant, there is no such thing. Making your unknown known is the important thing, and keeping the unknown always beyond you.” ―Georgia O’Keeffe, Artist I learned that the first and maybe the only reason to work is for the inherent enjoyment we get from it. What other reason is there ever to do anything? Have we not learned that working this 9-to-5 thing, or whatever variation of it you want, is a waste of a life? I think we all individually need to come to this realisation on our own. Eventually, we get it that there is nobody to please, no extrinsic reward that can ever make us happy. There are no short-cuts, no hacks, no quick fixes. It’s simply about the work. In the meantime, this is one of our most significant challenges because many of us work for what we can get out of it. We need to make a living, pay bills, fulfil commitments, and subsequently work becomes a chore. Happiness eludes us. Success, or the lack thereof, becomes our dominant focus, and daily work suffers. Writer’s block, creative block, or simply a downright funk kicks in and momentum keeps it going. Until we can snap out of it, we can’t.
https://medium.com/the-reflectionist/georgia-okeeffe-on-daily-work-happiness-success-b5158d7bf4d0
['Larry G. Maguire']
2020-04-05 14:58:06.360000+00:00
['Work', 'Self', 'Art', 'Happiness', 'Creativity']
Dear web designer, let's stop breaking the affordance of scrolling
We can do better than a "Scroll arrow" Huge's research can tell us a thing or two about how some users can skip your content once you break the affordance of scrolling, and about the solutions to that problem. Even though the scrorrow had a very successful result, is it really a solution to be tested? Compare the results between "Scroll arrow" and "Short image". They're literally the same. Now compare the "Scroll arrow" with "Control image". I mean it's obvious to me that in the case of the arrow users scrolled cause the page was yelling at them. In other words, it works but it doesn't provide a good experience. If people perceive content below the image, they'll naturally scroll. Using subtle animation to communicate (not an animated arrow though) Animating the elements of the page can give great clues about the content below that huge picture. I'm not saying I have the perfect solution for every case, but I'll use animation to brainstorm other ways to handle this. In the first example, our content pops from the bottom and disappears right after. It's like saying "Hello, I'm here. If you need me, just do your thing:" If you're using a parallax effect in the main picture, take advantage of it to help give that sneak peek a less subtle effect — also to be consistent with the page's behavior. After all, if the picture zooms out when the user scrolls, it should do the same on that page load hint: In the case of multiple blocks, the content can be nicely choreographed: Don't hide the content, take control of it The Google Fit Android app uses just part of the first card from below the big circular chart to indicate that there's more content to see. This approach is intuitive and elegant cause it's using no additional elements to talk to the user. It's just them hanging out on the land of good perception, while leaving a lot of room for that main circle to shine. This isn't new. In 2006, Jared Spool was already discussing the use of the cut-off look to improve the affordance of scrolling. On the web you can achieve something like this by getting the picture section to fit around 90% of the viewport max-height, with just one line of CSS or some quick JavaScript (if you need to support old browsers). What about combining it with an animation and setting a lower opacity for the content? That way it can't take much of the user's attention from your beloved main picture: Let's just be careful about the level of opacity. If it's too low we're doing no good. Oh and let's not forget to set the opacity back to 100% when the user scrolls the page or interacts with those elements as well :-)
https://uxdesign.cc/dear-web-designer-let-s-stop-breaking-the-affordance-of-scrolling-fe8bf258df7b
['Rodrigo Muniz']
2016-03-31 15:57:58.797000+00:00
['UX Design', 'UX', 'Design']
The Problem With Maslow’s Pyramid
The Problem With Maslow’s Pyramid Abraham Maslow’s Hierarchy of Needs implies a linearity, with self-actualization crowning our enlightenment, once the rest of the pyramid’s ‘baser needs’ are met. While powerful, it is a misguided notion. In reality, we need to start “at the top”. Here’s why. Maslow’s Hierarchy of Needs © Anthony Fieldman 2020 His was a brilliant piece of reflective­­­ — if intuitive — insight that gave form to a fundamental idea: that needs precede wants. Said another way, it proposed that without first establishing a foundation of health, safety and shelter, the pursuit of personal fulfillment seems trivial — a new-age conceit, even. In truth, it’s not that simple. Abraham Maslow’s Hierarchy of Needs takes the form of a pyramid, reinforcing the central notion that like the ‘real’ pyramids that inspired it, one builds a life upward, laying one gravity-laden layer of wellbeing atop the other, in succession. Gravity is symbolic in his diagram, but the basis is the same. In principle, the theory goes, we must first satisfy our physiological needs (food, shelter, clothing). Second, once those are in hand, we can secure the psychological need for safety — both physical and fiscal. Third, sequentially, we can focus on finding intimacy — the stuff of emotional fulfillment in the form of companionship. Once all of these “needs” are met, we can attend to the relative luxury of pursuing esteem in the community: respect, status, and the like. And finally, the pyramid culminates in the highest order of (perceived) human activity: that of self-actualization — the stuff of bodhisattvas, sadhus and “luminaries” like Gandhi, King, Sinek, Schmachtenberger, Chopra and Robbins. Or at least that’s what an entire section of the bookstore tells us. You remember bookstores, right? But life isn’t linear. Some (most; all) of us need to explore, uncover and develop different parts of ourselves, to allow us to act effectively toward achieving those very basic needs, way down the pyramid. In other words, sometimes (often; always), Maslow’s categories of wellbeing, however well-founded they are, don’t exist in a hierarchy, as such. While I’m at it, nor do they unfold linearly. That’s because life circumstances like our upbringing, the societal constructs in which we are born and live, our relationships, and sheer serendipity, along with the psychological state of our own personal emotional development at any given point in time, have profound influence over — and create the context to — our lives. Accordingly, because of our distinctly complex socio-economic-emotional constructs, we are forced to continually summon the totality of our capacities — in paucity or in richness — toward the pursuit of holistic wellbeing. That is, toward (self-)love, (self-)acceptance and purpose. A common Western psychological conceit goes something like: “Once I have a good job, money in the bank and an adequate place to live, then I’ll have the time/energy/security/confidence to focus on finding a mate, having children, pursuing my passions and even the luxury of advancing my self-actualization.” Often, though, it’s the very thinness of our self-awareness that gets in the way of our success in meeting our “basic needs”, which in turn make it impossible to climb Maslow’s pyramid. That’s because the way life really works isn’t linearly; it’s iterative. We explore; we learn; we apply; we learn from that, good and bad; we adjust; we re-apply; and we repeat these steps ad infinitum, until we die. 
If we’re lucky, we get somewhere close to our highly individual vision of “a life well lived”. But this, too, is a moving target; because all of that learning, discovery and stabbing in the dark leads us to understand the world differently from when we established our fantasies. And so the whole enterprise is in a constant state of iterative evolution. A knotted-up tangle of yarn comes to mind — not a pin-strait “A to B” line — to evoke the idea of what life looks like, for most (all) of us. Somewhere in that knotted mess, we can find shelter, food, jobs, mates, baubles, and self-awareness. Life, in a manner of speaking A Typical Life Let’s say you’re the typical Westerner, with no real idea what your purpose or passions are at the tender age of 18 or 20, when many (most) of us must choose our professional paths. So we pick something, study or train in it, and stumble into Job A, once we’re done. Job A exists as often as not because it was local, or we knew someone who knew someone, or an employer who took pity on us “gave us a shot.” Once employed, we do what we must until we’re promoted to floor manager, or lead stylist, or tenured professor, or sergeant, or whatever form of executive leader to which our professions might eventually lead us, over time. At one point during our odyssey, we meet someone — at school, at work, through friends or at a bar. We have babies with them; and then plummet down the rabbit hole of economic hardship: the kids, our mortgage, vacations we can’t afford, and that Peloton bike we’ve eyed of late, and finally lost our resolve not to buy. Then, right on the tail of all that success, we obsess about where our careers can go from here, so that we can afford to get out of debt, or build “real” wealth, and security. You know, like the Joneses. Life, in another manner of speaking While we’re pumping out cortisol chasing pennies and children — the life we’ve built — some of us accidentally meet a future co-conspirator — one who has a company, a startup or an idea, and could use “a gal like you”. So we jump off of our hamster wheel (job), and start anew on another one. Sometimes, that shift gives us confidence to ditch the hubby we made an early mistake over falling for and marrying, as grateful as we are for our thankless children. We then slug it out over who gets the proceeds from our starter lives, losing a third of it all to the lawyers in the process. Regardless, now renewed in our second lives, we attack the shiny new career with gusto. We get a haircut, a new wardrobe, tell all our friends about how much better it is than our old lives, maybe even believing part of it for a hot minute, and we do all the things we couldn’t when our husband put his foot down in disagreement, but now that “I’m the boss of me”, all I have to do is decide… At some point, we figure out that our new life is just like our old life. The hot trainer we met when we were getting back into dating shape is also a douchebag, like the deadbeat dad we left; and besides, he never liked our kids anyway, but we ignored it because the sex was so good and we liked the idea of being a MILF. That was before our libidos failed us, when the luster of the honeymoon phase wore off. The new job turns out to be a slog, like the old one; it’s just that we’re just selling ad space instead of widgets now. How 2020 of us. Our bosses are also as arrogant or incompetent as the ones we left behind. 
And we are still struggling — even with our raises and our much-vaunted equity — to make ends meet, because every time we jump tax brackets we increase our spend. Life is increasingly expensive, increasingly complex, increasingly lonely and we still have no idea WTF to do with all of it, or what our “true purpose” is. And time seems to be breathing down our necks. That is, when we can ignore the pain in our joints from overdoing it at the gym, “like we did when we were young.” At some point we find ourselves in the middle of our mid-life crises. We’ve broken down, and decided our lives have no real meaning; we’ve wasted our “most energetic years” chasing our tails blindly in pursuit of “we don’t know what”, for too long. So we finally carve off enough time, money and energy to devote some of it to “finding ourselves” amid the swirl of the lives we stumbled into and lived out, decision by decision, year by year, mate by mate, app by app. Somewhere during that inner journey, we begin to meet ourselves. We learn, belatedly, to listen more deeply to our heart rates, our inner monologues and judgments, using a variety of tools to do so, and we begin — finally — to look upon our actions with just enough emotional distance to see ourselves outwardly, as others may. That is, to gain perspective. We begin to see the forest for the trees. With our newfound perspective, we gain insights — clearings in the fog. And during those brief glimpses, we are afforded clarity that escaped us in our youth. This drives us to act in a manner that aligns — for once — more closely with our internal worlds, full as they are of hopes and dreams that are as unique to each of us as are the worlds that we’ve each created, and lived. It’s just that those inner worlds required exploration, and discovery. So with vigor, we take a third pass at life-making. We realize that the shiny baubles are less valuable than the insights we now crave. So we begin focusing on seeking quality experiences as much as (or more than) material goods. We start to make lifestyle choices that match our personalities, in service of creating harmony, rather than stress. Our homes; our mates; our friendships; our self-care; even our interactions with our still-ungrateful kids, who are as young and clueless as we were, back then. We realize that we kept finding douchebag mates because we weren’t yet whole, and never paid attention to what was actually good for us in the first place. So we kept attracting the lessons we needed, without learning from them. Once we finally did, a different species of mate materialized, like magic. Said another way, they now found us as compelling as we did them. It slowly dawns on us that we’ve spent 20–30 years so focused on the lower tiers of Maslow’s pyramid that if only we’d spent some of that energy on the uppermost one — self-discovery, and the potential for self-acceptance that it unlocks for us — then our pyramid may have looked very different. 
We may have realized earlier that notoriety is fickle; that homes and cars and other trophies are just things; that our children really did deserve the parent we now are, too late to impart some of our hard-earned insights to help fuel their long lives; that we cheated ourselves out of some “good years” with that mate we finally found, in mid-life; that we really did have talent in those areas that we sat on, and which could’ve led us to a very different place — the one we dreamed of, but had convinced ourselves at the time wasn’t “responsible” or “practical” enough, however we may have interpreted that, back when. Photo by Alice Donovan Rouse on Unsplash Another Path If we’d only realized earlier that we could feel this good — that we could change our personal narratives, our relationships with ourselves, our mindset, and aim these weapons toward the things we now know matter most — we would most likely have made different moves. This isn’t in any way to suggest that life isn’t wonderful when we fumble in the dark, chasing down paths that may or may not pan out. These, too, are full of lessons and insights; and feed our awareness, of both self and others. At the same time, there are many paths to self-discovery, and the overarching point here is that none of them is linear. Certainly, self-actualization may in some way sit at the top of the pyramid, much as the “crown chakra”, or sahasrara, sits atop the human body in the Vedic tradition. Sahasrara confers foresight and clarity, leading practitioners toward samādhi, or enlightenment. But a hierarchy of states nonetheless is not linear in its attainment, or influence, either in the Vedas or — I’m arguing — in Maslow’s pyramid. With respect to self-actualization, “needs” and “wants”, once aligned by self-awareness, become indistinguishable. The more self-aware (or self-actualized) we are, the more our wants become our needs, and vice versa. Everything distills. Everything simplifies. Everything becomes less dissonant; more harmonic; stiller. If we knew in youth what we know now that we’ve invested the time in self-actualization — that the tiers of Maslow’s pyramid are in truth non-hierarchical — we’d have been able to move between them, non-linearly, allowing gains in one area to inform choices in another. We’d have been able to iterate our lives more rapidly, leading to “truer” experiences, and the possibility of developing greater clarity, sooner. The best part? We still can. Abraham Maslow—Image: Bettmann Archive / Getty Images A Man Chased by Demons Abraham Maslow was himself a conflicted man, given to hatred of his environment (the Brooklyn in which he and his immigrant parents lived), and the people in his life, including his own mother, whom he utterly abhorred. He lost himself in books, searching long and hard to find an “idealistic world based on widespread education and economic justice.” To some degree, he even hated himself; he wished to be strong and muscular — the epitome of manhood, he thought. But his studiousness and his natural body shape conspired to undermine him. He hated college, too, dropping out of no fewer than three of them, including Ivy League Cornell. And he was ashamed of the “embarrassing triviality” of his own thesis when he finally did graduate, refusing to publish it for three years, until he capitulated to external pressures. All of these things conspired to frame his view of the world, which was very dark. 
It wasn’t until his forties that he found his focus and his voice, following the conclusion of World War II. Mental health and human potential drove his foundational work in the field of humanistic psychology — no doubt fueled by his efforts to grapple with his own demons. In principle, Maslow felt — insisted — that human beings possessed “the inner resources for growth and healing” and that “the point of therapy is to help remove obstacles to individuals’ achieving them.” The work of Freud, of whom he was critical for his focus on deficiencies, and B.F. Skinner, whose deterministic behaviorism irked Maslow, drove his focus toward self-determination and the innate human capacity and drive to overcome adversity, realizing one’s capacity and creativity through self-actualization. Maslow is one of the most cited psychologists of the 20th century. His inner drive and late insights have helped countless people — professional and lay — to improve our understanding of — and relationship to — self, ever since. So these thoughts aren’t critical of the man himself, nor his work. It’s simply that each of us is different, in temperament and circumstance; and that there is nothing linear about satisfying the need for esteem — or a mate — before we can embark on a journey of self-discovery, and healing. They are not mutually exclusive pursuits; nor are they linear. Maslow came into his own when he distanced himself from everything he hated about his life, and himself, and threw himself into his “life’s work”. Who knows if he ever truly found peace, in the end. What is documented is that late in life, he “came to conclude that self-actualization was not an automatic outcome of satisfying the other human needs.” In other words, it wasn’t linear. Furthermore, in my view, one of his famed tiers — that of esteem — is a red herring. In reality, I believe, a being who has reached a certain level of enlightenment, or self-acceptance, no longer needs external esteem or recognition at all. With this high level of growth — beyond the hamster wheels of societal strictures — the notion that any external validation is not only unnecessary but a distraction, and a cancerous one at that, makes esteem a moot point. Regardless, Maslow was onto something huge, even if the notion of linearity, and some of the values, weren’t quite the way we see it now, from the perspective of nearly eighty additional years of post-publication exploration. Maslow was enamored with Henry David Thoreau — a man whom Maslow considered “self-actualized”, and whose perambulations around Walden Pond, and the book On Civil Disobedience that emerged from that time, inspired Gandhi to free India through non-violence, and fueled King’s peaceful protests in pursuit of African-American civil rights in racist America. In describing self-actualization, Maslow channeled Thoreau, fixating on the following qualities he felt were evident in enlightened beings. He called these “Being Values” — or B-Values — as an antithesis to Freud’s obsession with negative traits. 
Wikipedia provides a list of these:
Truth: honest, reality, beauty, pure, clean and unadulterated completeness
Goodness: rightness, desirability, uprightness, benevolence, honesty
Beauty: rightness, form, aliveness, simplicity, richness, wholeness, perfection, completion
Wholeness: unity, integration, tendency to oneness, interconnectedness, simplicity, organization, structure, order, not dissociated, synergy
Dichotomy: transcendence, acceptance, resolution, integration, polarities, opposites, contradictions
Aliveness: process, not-deadness, spontaneity, self-regulation, full-functioning
Uniqueness: idiosyncrasy, individuality, non-comparability, novelty
Perfection: nothing superfluous, nothing lacking, everything in its right place, just-rightness, suitability, justice
Necessity: inevitability: it must be just that way, not changed in any slightest way
Completion: ending, justice, fulfillment
Justice: fairness, suitability, disinterestedness, non-partiality
Order: lawfulness, rightness, perfectly arranged
Simplicity: nakedness, abstract, essential skeletal, bluntness
Richness: differentiation, complexity, intricacy, totality
Effortlessness: ease; lack of strain, striving, or difficulty
Playfulness: fun, joy, amusement
Self-sufficiency: autonomy, independence, self-determining

Final Thoughts The pursuit of self-awareness is one of the most noble human undertakings. It is only with a deeper understanding of self than most of us possess, that we can aim our energies effectively, whatever that may mean to us, personally. Self-awareness, which can — in the right circumstances — lead to self-acceptance, is the primary vehicle by which we are able to overcome unhelpful self-narratives, the worldviews they feed, and the actions we take as a direct result of these highly subjective perceptions. Perspective is a hard-won prize. The only path I know of for reaching it is through a deep dive into the self. When we have uncovered enough of ourselves, and have further learned that we are not only acceptable as we are — warts and all — but we are, moreover, the only cause of our successes and failures (see: the fabulous book Compassion and Self-Hate, by Dr. Rubin; and Psycho-Cybernetics, by Dr. Maltz, for two titan volumes on the subject), then we develop the capacity to act powerfully in the world, in harmony with our true selves, thus creating the world in which we actually wish to live.
Once we do these things, we can then “build a better pyramid”. Photo by Alexander Andrews on Unsplash But the pyramid itself is still problematic, as a form. I prefer to think of human development as a constellation of planets and satellites, like those of a solar system. Underpinned by unifying forces of physics and matter, each sphere in the system influences — and is in turn influenced by — the others’ gravities and orbits. In this way, the interdependency of each “world” — in our case, our “needs” and “wants” — is revealed, nearly without hierarchy. The only thing upon which all others do depend is the burning core of the system. In ours, it is the sun. In a modified Maslow diagram, the sun at the center of our “matrix of needs” would be our authentic self — our nuclear uniqueness. To complete the metaphor, critically, our solar system, too, is part of a larger order of equally interdependent constellations of solar systems and galaxies. This fact betrays the other key takeaway: that no matter the perceived distance or emptiness that may exist between us, as individuals, the human community is as important as each person, and accordingly each part of it exerts powerful gravitational influence over the others, on a scale we may not yet fully comprehend, but that exists and bonds us, nonetheless. We are social beings, to our cores. As with the Earth’s biome, when we knock any element of the human community out of alignment, the butterfly effect is profound. And so the very best thing we can do, for ourselves, for those we love, and for those we may not yet even know, but who are influenced by our acts, nonetheless, is to understand ourselves deeply. To attend to the acme of Maslow’s pyramid. Simply stated, we may want to start at the top of the pyramid, if we wish to change our worlds.
https://medium.com/curious/the-problem-with-maslows-pyramid-ee8566dd1af
['Anthony Fieldman']
2020-12-11 04:18:28.546000+00:00
['Self Improvement', 'Life Lessons', 'Life', 'Psychology', 'Philosophy']
Writer’s Block
Photo by Pedro Araújo on Unsplash I keep waiting for truffles to tumble out; perfect decadence from a pen to graze your mind and instead: doubt stale sequences squeeze their way between the lines as your brain’s narrator a slithering sine, an up-and-down elevator of thoughts, signs of homeostasis but I want for you an oasis of fire: I want you to be ablaze for my tritest words to spark electricity, for a phrase to hitch your breath and steal it away for microseconds what happens instead is your eyeballs hopskip letters laden with lead, heavy you read slowly and continue to inhale and exhale
https://medium.com/resistance-poetry/writers-block-c654c0477ad7
['Katya Davydova']
2020-03-15 17:25:38.377000+00:00
['Writing', 'Reading', 'Stuck', 'Resistance Poetry', 'Wordplay']
What Can the Blockchain do for Our Environment?
An example of this might be a town somewhere in the United States with a limited resource, say water for instance. A finite amount of "water tokens" would be issued and people would earn them for any activities conserving water. These tokens would be spendable elsewhere as tokens or traded for fiat currency. This incentive would not only change people's minds about conservation but, in time, would change the mindset of the community towards the preservation of natural capital. If we create an economy around this notion, we can incentivize all kinds of activities that need serious and immediate change. Another instance would be in the realm of carbon emissions. A token economy would allow the creation of "carbon coin(s)," and every time you emit carbon it would cost you some of your coins, and every time you sequester carbon you would get paid in these carbon coins, which could also be used for other things or turned into fiat currency. When all the carbon coins have been redeemed, we would know the world would run at net zero for carbon, and this would be a great thing for the planet. As you can see, these kinds of tokenized incentives can be applied to anything needing to be fixed, and the fastest way to get people on board would be to give them an opportunity to earn assets for their time and efforts. Those who want to do things that hurt the ecosystem, like cutting down trees, would have to purchase tokens from those who have them to make this happen, and if no coins are available, or people don't want to sell their tokens for this activity, then the activity, theoretically, wouldn't happen. The decentralized nature of a tokenized economy will exert peer, and financial, pressure on those not acting in good faith, and eventually those people would be in such hot water with the rest of the world, who will be trying to earn money by saving the environment, that they wouldn't be able to operate because their activities would be monitored. Not only by a few people working for an understaffed and under-funded centralized government agency or non-profit, but by a world government of people who, as a whole, will stop such activities. Assets, in a token economy, would have to be verified using the blockchain, which would also foil those not acting in good faith. So, as you can see, there is a lot of potential for the blockchain to change the world at its most basic levels to ensure the continuation of mankind. While saving the environment would probably be this technology's greatest achievement, it will only be one of the many changes that it would bring about, and the faster we get out of the way and let it happen, the better off we will be as humans. — Keeping up with the blockchain space is what I do for fun. The next upswing is not that far away and a few small investments today can turn into a lot of money later. For those who wish to join my community and get the latest research reports, click here.
https://medium.com/hackernoon/what-can-the-blockchain-do-for-our-environment-c6e6dc634ff0
['Chris Douthit']
2018-09-10 17:11:01.249000+00:00
['Economics', 'Environment', 'Blockchain', 'Cryptocurrency', 'Bitcoin']
The cuStreamz Series: Checkpointing Through Reference Counting in RAPIDS cuStreamz
Introduction Checkpointing is a necessary feature for production streaming data pipelines and one of the major milestones in bringing cuStreamz into reality. It saves a record of the application's position in a data stream, so it can be restarted from where it left off in case of failure. cuStreamz is built on top of the Python Streamz library, much of which operates asynchronously. We cannot know when data has been completely processed unless there is some mechanism for tracking when these asynchronous operations complete. For example, if a checkpoint is made only upon reading data from a source, it is possible that the processed result can be lost in the pipeline. The application hosting the pipeline can be terminated before the processed result of the data is written to the target. To prevent data loss, a checkpoint can only be created after each micro-batch has been successfully processed. In this post, we walk through how a technique involving metadata and reference counting can be used to determine when a checkpoint should be created to achieve a zero data-loss, high-speed pipeline with at-least-once semantics.

Summary cuStreamz has an assortment of functions that can be used to manipulate streaming data and control the flow of the pipeline. These functions can be tied together into a pipeline by defining which functions receive the output of other functions. In the pipeline, metadata is passed downstream to accompany the associated data. This metadata contains a reference counter that is incremented for each function node it enters, and decremented when the node no longer holds a reference to the associated data. When there are no longer any nodes holding a reference to the data, it is assumed that the data has exited the pipeline. For the purposes of checkpointing, we can categorize the functions in cuStreamz as either synchronous or asynchronous. Synchronous functions return only after emitting data downstream every time it is received. Asynchronous functions may cache data, delay it, or drop it. They may emit data at some time in the future, but they may return before emitting the data.

Metadata Metadata is a common feature in data pipelines. In Kafka, for example, the message key is often used to save details about the value. It proved useful for future use cases to implement metadata and use it as the container for the reference counting. This container is what is passed downstream. For most of the functions in cuStreamz, managing this container is a simple matter of forwarding it downstream without making any changes. However, the asynchronous functions require more attention due to the fact that they do not always immediately emit data downstream. For these functions, the metadata must also be retained with the associated data. In other functions, data is combined from multiple streams. We must carefully consider how to merge the metadata from each stream. Also, functions that collect data and eventually emit it as tuples need solutions for how they will emit the metadata. The main rule on how metadata is handled is that it must be emitted with the data to which it is tied, even if that data is emitted multiple times. This means that some of the nodes cache and group the metadata in order to obey the rule.

Reference Counting Reference counting is a method often used in memory management. When an object is instantiated, the number of references to that object is maintained in a counter. When the counter reaches zero, then the memory used by the object can be freed.
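As a toy illustration of that general idea (the class and method names below are invented for this sketch and are not the Streamz or cuStreamz implementation), a callback-firing reference counter can be as small as this:

class ToyRefCounter:
    """Illustrative only; the real Streamz/cuStreamz RefCounter differs."""

    def __init__(self, cb=None):
        self._count = 0
        self._cb = cb              # called once the count drops to zero

    def retain(self, n=1):
        self._count += n           # a node receives (or caches) the data

    def release(self, n=1):
        self._count -= n           # a node is finished with the data
        if self._count == 0 and self._cb is not None:
            self._cb()             # nothing references the data any more


counter = ToyRefCounter(cb=lambda: print("safe to checkpoint"))
counter.retain(2)   # e.g. two downstream nodes receive the datum
counter.release()   # first node finishes
counter.release()   # second node finishes -> "safe to checkpoint" is printed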
The same technique is used in cuStreamz to determine if any of the functions in the pipeline still hold a reference to a datum. When the reference counter associated with a datum reaches zero, we can say the datum is "done." In practice, most users will be reading data from an external source. This provided data source has reference counting built in, and it will work out of the box with Kafka's existing checkpointing mechanism. Notice in the following example that the user does not need to define anything more than the group.id parameter to specify the consumer group in Kafka.

args = {
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'my-group'
}

source.from_kafka_batched('my-topic', args, npartitions=1) \
      .map(work) \
      .sink(print)

How It Works For synchronous functions, it is simple to know when data has exited the pipeline. In cuStreamz, when the user invokes a function like .map() or .sink() the functions are not immediately executed; they are only staged. The functions only execute on the data when data is emitted into the pipeline with a call to .emit(). During execution, each stage will have an .update() function that is called from the previous stage. This causes the call stack to grow until the end of the pipeline is reached, at which point the stack is unwound back to the starting point as each function completes. If you are unfamiliar with the APIs in the following examples, please see the Streamz documentation. To illustrate, take the following example:

# Example UDF
def inc(x):
    return x + 1

# Stage the pipeline
source = Stream()
L = source.map(inc).map(inc).sink_to_list()

# Create the reference counter and pass it into
# the pipeline via the metadata
ref = RefCounter()
source.emit(1, metadata=[{'ref': ref}])

Note that the user should not have to create a RefCounter in most cases. It is already managed by the source. When .emit(1) is called, the call stack will evolve like so:

call-stack
time offset   frame
0             0 source.emit(1)
1             1 map.update(1)           # first call to map
2             2 self._emit(2)           # from map
3             3 map.update(2)           # second call to map
4             4 self._emit(3)           # from map
5             5 stream.sink_to_list(3)
6             0 # Unwind the stack

We can see that a call from the user to .emit(1) from the code above will not return until the result has been returned from the last function in the pipeline. The difficulty arises when asynchronous functions are introduced into the pipeline because they often return before calling the next function in the pipeline. The following example illustrates this using the .buffer() function.

# Example UDF
def inc(x):
    return x + 1

# Stage the pipeline
source = Stream()
source.map(inc) \
      .buffer(1000) \
      .filter(lambda x: x % 2 == 0) \
      .sink(print)

# Will be called when the data is "done"
def done_callback():
    print('Your data is done!')

# Create the reference counter and pass it into
# the pipeline via the metadata
ref = RefCounter(cb=done_callback)
source.emit(1, metadata=[{'ref': ref}])

This example will produce the following series of events:

call-stack
time offset   frame
0             0 source.emit(1)
1             1 map.update(1)
2             2 self._emit(2)           # From map
3             3 buffer.update(2)
4             0 # buffer returns. unwind stack to 0
n+0           1 self._emit(2)           # from buffer at time n
n+1           2 filter.update(2)        # "2" passes filter
n+2           3 self._emit(2)           # from filter
n+3           4 sink(print)
n+5           0 # The stack unwinds to 0

Here we can see two problems. The first being that the asynchronous node creates a disconnect in the pipeline. The .emit(1) call will return after the data is cached in the buffer.
The data will be emitted from the buffer at some unknown time in the future. This means that relying on when the original call to .emit(1) returns is not sufficient in determining when data has been completely processed. It is possible that an error can occur after .buffer() has emitted the data downstream. Secondly, the pipeline is split into two pipelines. How can we track that the data has been fully processed from both pipelines? With the use of reference counters, we can better track when data has exited the pipeline. In this technique, a counter object is created and emitted into the pipeline to accompany the data. The counter provides a callback to notify the original sender when the count is decremented to zero. The callback is an asynchronous notification to indicate that data has been completely processed. Before the data is emitted forward in the pipeline, the count is incremented by the number of downstream functions. After each function completes, the count is decremented by one. If an asynchronous function receives and caches the data, it is responsible for incrementing the counter by one, and decrementing the counter after it is no longer holding a reference to that data. Let’s revisit the previous example and add reference counters.
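The article stops here before showing the revisited example, so the following is only a sketch of how that bookkeeping plays out for the buffered pipeline above, based on the rules just described. The counting itself happens inside the library; the comments trace the behaviour we would expect, and the import path for RefCounter is an assumption that may differ between streamz versions:

from streamz import Stream
from streamz.core import RefCounter   # assumed import path; check your streamz version

def inc(x):
    return x + 1

source = Stream()
source.map(inc) \
      .buffer(1000) \
      .filter(lambda x: x % 2 == 0) \
      .sink(print)

def done_callback():
    # fires only when no node holds a reference any more,
    # i.e. this micro-batch is safe to checkpoint
    print('Your data is done!')

ref = RefCounter(cb=done_callback)

# 1. map receives the datum and forwards it downstream (count goes up, then down)
# 2. buffer caches the datum, so it takes its own reference and .emit() returns
#    even though processing has not finished
# 3. later, buffer emits downstream and releases its reference, while filter and
#    sink each take and release a reference as they run
# 4. when the count finally reaches zero, done_callback fires -> checkpoint
source.emit(1, metadata=[{'ref': ref}])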
https://medium.com/rapids-ai/checkpointing-through-reference-counting-in-rapids-custreamz-f9ded03674f5
['Jarod Maupin']
2020-11-18 00:46:30.933000+00:00
['Python', 'Data Streaming', 'Data Science', 'Gpu', 'Open Source Software']
The Giant Inside
Inukshuk, Rankin Inlet, Nunavut, image courtesy of mapio.net We each have a giant within us. Not a sleeping, scary giant as the idiom would have us believe, but a watchful giant. A giant of wisdom, patience, intelligence, knowingness, substance, grace, competence, and calm. Yet sometimes our big, beautiful giant inside seems asleep. For example, my giant can be awake but then seem to slumber like an idiot, half-woke, unable to stay conscious to the truth of what is and what it can do. I can look in mirrors held up by those I love and trust and be like: ‘I hope that the real deal I see is me?’. And that doubt leaves me punch-drunk because my decisions bear fruit of being good ones! I behave competently and do things confidently. I’m brave when afraid. Other people see it too. So why then this disconnect? Why the shock when someone sees my giant? Turns out, the real idiot is the tiny yet powerful trickster of the ego that runs amok now and again while the knowing soul of the giant waits patiently to be like: ‘are you done? I’d like to get up and have a stretch.’. Well, here’s the news AND the weather: I am done. My giant inside would like to stretch all the way out into even longer periods of consciousness and I’m here for it. Now can the ego ever be eliminated? Not to my knowledge. Will any of us vanquish all fear, self-doubt, and everything we’re scared of? Uh, no. Can the ego be sublimated so the giant of our soul can stay at the forefront of our consciousness for longer and longer? Yes. How? Working that out, but practice seems to play a role. One answer may lie in a movie about mountain climber Tommy Caldwell called “ The Dawn Wall”. In it, Tommy is trying to climb a nearly impossible section of El Capitan. He’s trying and failing; trying and failing and then right before he gets it says “I felt this flow of confidence”. He woke up to his giant inside, to the one that is the flow and the only one that could carry him up that pitch and all the way to the top. Our giant inside is never unconscious or acting a fool but even it can retreat for a bit when the monster of the ego and the gremlins of unconsciousness, fear, and shame occasionally overtake us. But our giant is wise enough to say ‘I’ll wait quietly over here until your tantrum is through’. Our job then is to remember that giant; that quiet and loving presence is just waiting for us to engage with it instead of its opposite. When we don’t, as Reshma Saujani says, “that’s the drama talking, not the knowing” and the ego is all drama, all the time. Still image from the film “The Dawn Wall” via imdb.com So, why even let that drama in? Other than being human one reason may be we are taught that to acknowledge goodness or competence would be arrogant, and, while it can be a slippery slope, the true giant is not arrogant or boastful, it just quietly knows and can accept who and what it is while knowing it is wrongheaded and unnatural to play small. I’m learning more and more that recognizing one's “ place in the family of things “ and acknowledging being a good vessel for something great and infinite is not a thing of arrogance it’s just a thing that is. Also, while we can be brave when afraid, if perfect doesn’t follow brave that cancels everything out. The truest truth though is that we can be brave, followed by imperfect, followed by learning. We can be Tommy on El Cap. It wasn’t that he’d never get to the top it’s just that he wasn’t there yet. 
Lastly: we can act competently and be confident but not be conscious of it or we can attribute it to luck, instinct, or good advice from someone we love. And all of that can be true. We can be lucky, have good instincts, and be surrounded by thoughtfulness and love. I know I am. But we have to be as conscious of our inner giant as we are of all of the giants that surround us. We have to hold everything in the balance; being conscious of our strengths and our limits; our egos and our giants as well as our fellow giants all while deciding exactly what will get the benefits of our energy and of our time. And we will not hold this balance well all of the time, but, with practice, we will do it more as we become more. Recently, a friend shook my giant and woke it nearly instantly. They did because they have been so patient and told me about my giant exactly one million times and were now telling me that they love and trust me enough to push me up the mountain to find the pass where I can stay woke to my own damn self! They are the best kind of friend and who push me and show me that I’m capital “O” okay. They will help, support, love, hear, and advise me when I’m afraid and remind me of my giant by showing me theirs which is mighty. See, the best kind of friend. In the series of mountain ranges in my life, this is a big one but I’ll keep climbing it. Up to the peak and then down again through the valley. I’ll try and fail and try and fail and the more I try the even more fully conscious I’ll become of my giant. And along the passes, I’ll look into the mirror-like streams of the ones I love who surround me and reflect back to me, sometimes without saying a word, the truest truth about my giant inside and it’ll be a quiet but awe-inspiring gift that I hope I will give to someone too. And who knows: maybe in writing this I already have? The giant in me sees and honours the giant in you.
https://medium.com/free-thinkr/the-giant-inside-1e777ae4f752
['Christine Quaglia']
2020-11-30 16:12:29.655000+00:00
['Growth', 'Self Improvement', 'Self-awareness', 'Growth Mindset', 'Self Love']
Regular Expressions Are Still Useful For Chatbots
Regular Expressions Are Still Useful For Chatbots Using Regular Expressions With LUIS & Bot Framework Composer Introduction The concept was formed during the 1950s by the American mathematician Stephen Cole Kleene, who formalized the description of a regular language. Regex Entities in LUIS A regular expression (regex) is a sequence of characters defined by a pattern. A regex is a standard textual syntax which represents a pattern we want to match in our text. A degree of flexibility can be introduced by using wildcards. The entity is a good fit when: The data are consistently formatted, with any variation that is also consistent. The regular expression does not need more than 2 levels of nesting. A good way to think of employing regex in your chatbot is where you want to extract an entity which has a limited format… The format of the number might be very constant, but with high variance in the letters and numbers. Examples of such numbers are: Reference Numbers, Flight Numbers, Ticket Reference Numbers, etc. An added advantage of regular expressions is the fact that they are a lightweight and compact way to extract specific information. You do not necessarily have to worry about training examples, contextual awareness or specific intents.
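To make this concrete, here is a minimal, hedged illustration in plain Python (not LUIS or Bot Framework Composer itself) of the kind of pattern such an entity might capture; the flight-number format and the sample utterance below are hypothetical:

import re

# Hypothetical format: two uppercase airline letters followed by 3-4 digits, e.g. "KL1234".
FLIGHT_NUMBER = re.compile(r"\b[A-Z]{2}\d{3,4}\b")

utterance = "Please check the status of flight KL1234 leaving tomorrow."
match = FLIGHT_NUMBER.search(utterance)
if match:
    print("Extracted flight number:", match.group())  # prints: Extracted flight number: KL1234

The same kind of pattern could be registered as a regex entity in LUIS or referenced from a Composer dialog; the point is that no training utterances are needed for this kind of extraction.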
https://cobusgreyling.medium.com/regular-expressions-are-still-useful-for-chatbots-5317cd863e60
['Cobus Greyling']
2020-12-04 19:58:51.104000+00:00
['Conversational UI', 'Machine Learning', 'Artificial Intelligence', 'NLP', 'Chatbots']
Ryu should say: HADOOP…ER
For some time I wanted to build and write about how a (very) basic Hadoop cluster should be built but it was you, my dear reader/follower/friend/no-better-thing-to-do-person, that made me embark on this awesome odyssey. We will fight giants, kill huge dogs with three heads, defeat pseudo-gods and sing Somewhere Over the Rainbow while doing it… Well, almost… But still, I’ll do my best to explain step-by-step how I created the clusters, help you somehow and we can both learn together. I’ll first explain a bit of the technologies involved and state the requirements to do it. After this, we start building a single slave cluster, which will be our base system, and then improve the cluster by adding more nodes. We also provide a helpful test code for you to check if the cluster is running smoothly or not. I hope you enjoy! Let’s do it! Tech List This adventure requires us to understand some core concepts. Here is the small list of techie goodies we use: Hadoop: The heart of our joint adventure. Hadoop is an open-source software framework for storing data and running applications on clusters. Widely used to process huge amounts of data in a multitude of formats; HDFS: If Hadoop was a brain, this was where its memories would be stored. HDFS stands for Hadoop Distributed File System and it’s optimized to handle distributed data in Hadoop; YARN: Yet another resource negotiator, literally. It’s used to split up the functionalities of resource management and job scheduling/monitoring into separate daemons; MapReduce: It’s a programming model used to process and generate large quantities of data in a parallel and distributed fashion. Of course we could explain the listed technologies further, but then we would lose focus on our fun task: building the Hadoop Single/Multi Node cluster. Requirements We will use virtual machines to create the cluster since we don’t have physical resources to build an actual cluster. I’ll try to explain as much as I can along the way about each development toy we use. Here is the list of what you must have on your computer: Virtual Box (VBox): the VMs manager. You can get it here; Vagrant: an awesome tool to quickly deploy VMs using Virtual Box (or actually any other manager). Grab it! Of course you’ll need even more tools but those will be installed in the VMs (AKA Guest machines) and not on your computer (AKA Host machine). Please install these tools… I’ll wait… I’m not going anywhere! Continue when you’re ready. We start by being Single… Yes… We all start by being single, as will this short adventure. So, assuming that everything is set up, we start by deploying one VM using Vagrant and VBox. Vagrant runs based on the configuration declared in its configuration file, usually called Vagrantfile. Go to the directory in which you wish to work and create the Vagrantfile with the following content: Let’s try to explain a bit of what’s happening. The main configuration entry is declared from line 1 to 23. Line 2 is a comment: when a line starts with #, Vagrant interprets it as a comment. From line 3 to 22 we start configuring a machine named “master”. Every line within this block will deploy and configure the first machine. Line 4 tells Vagrant the box we want to use (it will download it from the Hashicorp repository) and we will use the Ubuntu 14.04 Trusty Tahr 64-bit release.
Line 6 sets the machine IP over the Vagrant private network; Line 7 forwards a port from the host to the guest machine (this port will be used to monitor Hadoop); and Line 8 enables the public network bridge. The provider entries for this VM (lines 10–13) configure the physical resources to be used by the guest machine. In this case, 1024 MB from the host will be reserved to this VM (line 11) as well as a CPU core (line 12). The last set of instructions (lines 15–21) will run the guest’s shell and regular bash commands. You can now boot up your machine by running vagrant up in the working directory. Be aware that you have to select the interface you want Vagrant to bridge. In this case, select the one you’re using to access the internet. You now have my permission to grab a cup of coffee because this normally takes a bit. Why, you may ask… Because if you have never used the selected box, Vagrant will download it from the repository, install it and configure it. After finishing this step, let’s go to the following step… Getting your hands dirty We have our VM running and begging to be used. This can be done by calling ssh, and the awesome Vagrant has a helper: vagrant ssh. And now you are logged into the VM and we can start configuring the cluster. Step 1. Install Oracle Java 8 There isn’t any special reason why I use Oracle Java instead of OpenJDK or another flavor… So, you can use another version if you prefer. To install it, follow these steps: sudo add-apt-repository -y ppa:webupd8team/java sudo apt-get update && sudo apt-get -y upgrade echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections sudo apt-get -y install oracle-java8-set-default The only line that requires some explanation is the third line: it pre-accepts the Oracle Java 8 terms so that the installation does not block asking for them. You can confirm your installation by running: java -version. Step 2. Add Hadoop User “hadooper” and Group “hadoop” Hadoop will run under a specific group and a specific user. You could install using the default user but this keeps the house tidy: sudo addgroup hadoop sudo adduser --ingroup hadoop hadooper sudo adduser hadooper sudo Step 3. Setting up SSH Let’s set up SSH and the RSA keys. Hadoop uses SSH to communicate between nodes (even the master node with itself). sudo apt-get -y install ssh su hadooper cd ~ ssh-keygen -t rsa -P "" cat ./.ssh/id_rsa.pub >> ./.ssh/authorized_keys Step 4. Grabbing Hadoop The latest Hadoop version available to date is 2.7.2. We used one of the available mirrors to download the package (automatically assigned when I went to the download page): wget http://mirrors.fe.up.pt/pub/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz tar -xzvf hadoop-2.7.2.tar.gz sudo mv hadoop-2.7.2 /opt/hadoop We install the package in the /opt directory but you are free to place it anywhere else. Step 5. We need a nice Environment Now, we need our hadooper to be able to run the commands related to Hadoop anywhere. We are going to edit hadooper’s ~/.bashrc and add the following lines to the end of it: From now on, we will use these variables when needed, even during the writing of this story. Don’t forget to load the new configuration: source ~/.bashrc. Step 6. Hadoken… Errr… Hadooper This step configures the Hadoop deployment. We will edit three different files that support the Hadoop environment and execution.
Let’s start by editing $HADOOP_HOME/etc/hadoop/hadoop-env.sh and changing the JAVA_HOME entry to: Now we copy and edit the MapReduce site template. To copy: cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml And then we dig into the files $HADOOP_HOME/etc/hadoop/core-site.xml and $HADOOP_HOME/etc/hadoop/mapred-site.xml to set the configuration tag as follows: Step 7. We need storage We are now going to set up the Hadoop file storage. HDFS requires that we select a directory to place the file system in. I’ve selected /var/hadoop/hdfs. We need to declare two sub-folders: namenode, where Hadoop keeps the directory tree of all files in the file system and tracks where across the cluster the file data is kept (it does not store the data); and datanode, where the actual data is stored. The creation is done by: sudo mkdir -p /var/hadoop/hdfs/namenode sudo mkdir -p /var/hadoop/hdfs/datanode sudo chown -R hadooper /var/hadoop And the configuration by editing $HADOOP_HOME/etc/hadoop/hdfs-site.xml and changing the configuration tag to: If you check the first property, dfs.replication is set to 1. This means the data will be placed on only one node, in this case, the master/single node. The following properties define the namenode and the datanode directories. Finally, we format the file storage by running: hdfs namenode -format Step 8. Start the Single Node Cluster Watch your hard work’s results running by typing the following commands (type yes if the SSH connection asks for it): start-dfs.sh start-yarn.sh jps Where the first line boots the HDFS, the second starts the resource negotiator and the last line just gives you feedback on the services running. Congrats young Padawan! Single Node Cluster running you have now! (Ohhhh… Good ol’ Yoda). If you want to test it right now, check Annex A, where I placed an example to be compiled and executed in your cluster (single or multi, works on both). You can also check the cluster status by opening a browser on your host machine and going to: We can now jump to the next big thing: Going Multi Node! …But we grow and go Multi By now you should have a working Hadoop cluster with a single node. To transform it into a multi-node cluster, you must have more machines. We will use two more virtual machines. More than three VMs is overkill unless you have an awesome computer. Adding the VMs declaration to the Vagrantfile will be our first step. After that, we will reconfigure our first VM, master, to behave as the lead node. The following move is to set up each node. This is probably the most boring task because you’ll need to perform, in each new node, multiple steps that you already did in master. Get set! Ready! Go!!!!! Step 9. Add more machines We will now add two VMs to the Vagrantfile. Edit the Vagrantfile in your host machine and add the slave1 and slave2 machines: The two machines are declared in: Lines 24–43: slave1 machine with IP 192.168.2.101 Lines 45–64: slave2 machine with IP 192.168.2.102 Every time you want to execute a Vagrant command directed to one machine, specify the machine name after the command itself, e.g., if you want to boot only the master machine, you execute: vagrant up master. Another example is the ssh access: vagrant ssh slave1 to access the slave1 machine. If you do not define the name, you’ll run the command over all machines if the command is compliant with it, i.e., vagrant up will boot all machines but vagrant ssh will fail.
In the next steps I’ll place “Master” or “Slaves” in the title so that you don’t lose track of where we stand at that point. Step 10. Master: Stop HDFS and YARN Simple. Run the following commands: stop-dfs.sh stop-yarn.sh Step 11. Master: Declare companions We need to teach master (pun intended) which machines are its companions, AKA slaves. Edit the /etc/hosts file (requires sudo) and add the following lines after the initial localhost line (if there is such a line): 127.0.0.1 localhost # After this line 192.168.2.100 master 192.168.2.101 slave1 192.168.2.102 slave2 Master now knows the IP addresses of the two slaves. This will enable master to execute remote tasks on the slaves. Step 12. Master: Adapt configuration Edit the following according to the gist: $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HADOOP_HOME/etc/hadoop/mapred-site.xml $HADOOP_HOME/etc/hadoop/yarn-site.xml hdfs-site.xml — We now have a replication value of 2 and removed the datanode from master. We also need to clean the /var/hadoop folder to fit the configuration: sudo rm -r /var/hadoop sudo mkdir -p /var/hadoop/hdfs/namenode sudo chown -R hadooper /var/hadoop mapred-site.xml — Besides the obvious update to the job tracker address, we also define that YARN is now responsible for distributing MapReduce tasks across the entire cluster. yarn-site.xml — We declare that we want to use the resource tracker, scheduler and resource manager at the specified addresses. You can assign other ports but be careful not to assign unavailable ports. Step 13. Master: Identify slaves (nodes) We will now tell Hadoop which are its slaves. Edit the file $HADOOP_HOME/etc/hadoop/slaves and add the following two lines: slave1 slave2 And also inform it that it is the master by adding to $HADOOP_HOME/etc/hadoop/masters the line: master Step 14. Slaves: Redo steps In each slave, execute the following steps that you also did in master and keep the same password for the hadooper user: Step 1; Step 2; Step 3; Step 15. Slaves: Reuse Master’s pair of SSH keys This is really tricky but helps a bit in the future. From each slave, execute: scp hadooper@master:~/.ssh/* ~/.ssh/ You need to be the hadooper user on the slave. This will make it easier to access every node in the near future. Step 16. Slaves: Get Hadoop directly from master This will save us a lot of time. In each slave, execute: scp -r hadooper@master:/opt/hadoop/ ~/hadoop scp -r hadooper@master:~/.bashrc ~ source ~/.bashrc sudo mv ~/hadoop /opt/hadoop Step 17. Slaves: Configure HDFS Since each slave is a datanode, we must adapt the file system to that role by running: sudo mkdir -p /var/hadoop/hdfs/datanode sudo chown -R hadooper /var/hadoop/ And edit each $HADOOP_HOME/etc/hadoop/hdfs-site.xml file in each slave: Step 18. Slaves: Setting the hosts in each slave Our odyssey has almost ended. For each slave N, we must edit the /etc/hosts file and place the lines: 127.0.0.1 localhost # After this line 192.168.2.100 master 192.168.2.10<N> slave<N> Where <N> = {1, 2} (since we are only using two machines). Now the slaves are ready. Step 19. Master: Almost there… This is the final step in this amazing adventure you came across. In master, execute as hadooper: hdfs namenode -format start-dfs.sh start-yarn.sh jps If you have problems connecting to either one of the slaves, try to ssh connect directly to troubleshoot the issue.
Sometimes it may complain that one of the slaves is not safe and you’ll only need to add the key to the authorized_keys file, which connecting to that troublesome node will do for you… Also, I had to reboot all the machines once after the installation because the Hadoop master couldn’t connect to slave2, possibly because SSH wasn’t running correctly or the keys weren’t loaded. And voilà: you have your cluster… If everything went OK! :D To check the status, point the browser on your host machine to http://localhost:50070/. You can test the cluster again using the example in Annex A. The code should be executed in master. One thing that we left out is running the cluster automatically at startup. In Ubuntu you can use init.d scripts. More info on this here. Conclusion We went on an incredible adventure to build a powerful Hadoop cluster (or not)… We were introduced to a set of technologies and the requirements to follow the story. After that, we started by building a single slave cluster by following a set of steps and then went on to build a cluster with three machines, one master and two slaves, where the master handles the work distribution and the slaves handle the processing and data storing. We used virtual machines just as a test case but, in the real world, we should use physical machines or machines/services with advanced virtualization capabilities. Hadoop has a lot of companion tools and tools that improve Hadoop itself. The next step would be to integrate Hadoop with a big data database and use tools to handle the interaction between Hadoop and the selected big data solution. Also, exploring other technologies that were built over Hadoop, like Spark, is a good path to follow. Drop me a line if you find any error or get stuck anywhere in the story and I’ll try to help you as best as I can. Hope you enjoyed! More References https://medium.com/@markobonaci/the-history-of-hadoop-68984a11704#.b17jz23m0 https://medium.com/@nikantvohra/hadoop-82e96891022c#.hfbpyg7tu http://hortonworks.com/hadoop-tutorial/using-commandline-manage-files-hdfs/ https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html https://www.vagrantup.com/docs/ Annex A In any programming language or new programming framework there is a typical introductory program, which we commonly call Hello World!. The same happens in Hadoop. The most basic example used in Hadoop with MapReduce is a word counting program. You could write it in Python, R, Scala or Java but, since we at Bitmaker Software mostly use the last of these, the example we give is in Java. Also, the credit for this code is not ours. It was taken from the official Apache Hadoop documentation. You can create the file WordCount.java in hadooper’s home directory (cd ~) and paste the following code: The following lines are to be run in the command line and will compile your code using the Hadoop libraries: hadoop com.sun.tools.javac.Main WordCount.java jar cf wc.jar WordCount*.class As we explained during the tutorial, Hadoop uses HDFS as its file storage system. Every file you want to process must be included in an HDFS folder. You must first create the directory and then copy the files to be processed. This can be achieved by using Hadoop’s fs commands. To begin with the directory creation: hadoop fs -mkdir -p /user/hadooper/wordcount/input If it fails, be sure that HDFS and YARN are running by executing the jps command.
If you see something like: 4674 Jps Instead of: 5185 Jps 5075 SecondaryNameNode 4838 NameNode (… or with more information) You’re probably not running the file storage and/or the resource negotiator. Just run the next couple of lines in the command line and try again: start-dfs.sh start-yarn.sh If for some reason the namenode entered safe mode, you can disable it by running: hdfs dfsadmin -safemode leave We also need at least one test file (but you can use multiple…). I grabbed a copy of the King James Bible as a UTF-8 txt file and placed it in the folder we created to do the word counting: wget https://www.gutenberg.org/ebooks/10.txt.utf-8 -O kjb.txt hadoop fs -put kjb.txt /user/hadooper/wordcount/input Finally we run: hadoop jar wc.jar WordCount /user/hadooper/wordcount/input /user/hadooper/wordcount/output And the results can be shown by:
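As a side note, the Annex mentions that the word count could also be written in Python. Purely as a hedged sketch (not part of the original tutorial), the same job can be run with Hadoop Streaming using two small scripts; the streaming jar path below assumes a default Hadoop 2.7.2 layout and may differ on your install:

#!/usr/bin/env python
# mapper.py: emit "word<TAB>1" for every word read from stdin
import sys
for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%s" % (word, 1))

#!/usr/bin/env python
# reducer.py: sum the counts per word (Hadoop sorts the mapper output by key)
import sys
current, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word == current:
        count += int(value)
    else:
        if current is not None:
            print("%s\t%d" % (current, count))
        current, count = word, int(value)
if current is not None:
    print("%s\t%d" % (current, count))

Make both scripts executable and submit the job from master (the output goes to a new HDFS directory so it does not clash with the Java run):

chmod +x mapper.py reducer.py
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.7.2.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /user/hadooper/wordcount/input -output /user/hadooper/wordcount/output-streaming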
https://medium.com/bitmaker-software/ryu-should-say-hadoop-er-e8f89b4c3e39
['Nuno Barbosa']
2016-05-13 11:04:07.597000+00:00
['Hadoop', 'Big Data', 'Hadoop Training']
Evolution of a Crush. Why do crushes continue? And what can…
Four weeks. Give or take a day or two. That’s how long it took. Takes. For my crush to run its course. But my crush didn’t just stop. It didn’t run face first into a firmly closed door, bloody and smash its nose. I didn’t get a firm “thanks, but no thanks”. I didn’t confess my feelings only to have them stomped on and squished like an unwelcome bug in a pristine house. There was none of that. There was talk of a straying girlfriend who was studying abroad, and my heart jumped in protest until the lure of the maybe that — butterfly-light — floated in my general direction. Potentially possible, because we — my crush and I — each kept seeking the company of the other, drawn by a magnetic meant-to-be that was bigger than both of us. Possibility and potential was eventually replaced by realism. And the girlfriend who refused to disappear. And a marriage that was being planned. And realism that forced my crush to transform. My crush evolved. Changed. Into something more. Something else. Something… transcendent. Long lived. And lovely. Gentle. Kind. My crush is now my friend. A bestie friend (his words). And Lord knows, I don’t have too many of those. Don’t get me wrong: I have lovely friends. Lots of them. Friends I have collected and kept over the years. Friends who are dear to me and who have seen me prevail in times of sadness and joy and frustration and grief and anger. But a bestie friend? Someone I know I could call if I were arrested and needed to be bailed out of a Vietnamese prison, however unlikely? A friend who senses when I am not me and takes me out for sticky rice and hot chocolate even though he lives miles away and it’s after work and he’s tired and he just wants to go home? A friend who — merely by being in his company — makes me feel a whole lot better and a whole lot nicer and a whole lot more loveable? A friend I can talk to about anything and — regardless of the language and cultural obstacles that sometimes need to be scaled and traversed — who seeks to understand me. Who sees me. That is so much better than the imperfection of sex, and so much better than the emotional roller coaster of romantic love, and so much better than the selfishness of relationships. It’s cleaner and nicer because it’s about enjoyment rather than gratification. There is intimacy without complication. I can talk without awkwardness or expectation or second guessing. Because we just are. We are two some kind of wonderful people who have been brought together by time and circumstances to a place of caring and respect and admiration and humour. Of shared thoughts and feelings about how the world is, and how it should be. A meeting of minds and hearts, with a hat tip to the universe that conspired to bring us together. I have a wonderful friend. And that friend is forever.
https://medium.com/literally-literary/the-evolution-of-a-crush-4b076d45793f
['Diane Lee']
2020-08-21 06:14:48.560000+00:00
['Relationships Love Dating', 'Nonfiction', 'Love', 'Crush', 'Friendzone']
[Paper] DB-CNN: Deep Bilinear Convolutional Neural Network (Image Quality Assessment)
In this story, Blind Image Quality Assessment Using A Deep Bilinear Convolutional Neural Network (DB-CNN), by Wuhan University, New York University, and University of Waterloo, is presented. I read this because I have recently been studying IQA/VQA. In this paper: For synthetic distortions, a CNN is pre-trained to classify image distortion type and level. For authentic distortions, a pretrained CNN for image classification is adopted. The features from the two CNNs are pooled bilinearly into a unified representation for final quality prediction. The entire model is fine-tuned on target subject-rated databases. This is a paper in 2020 TCSVT, where TCSVT has a high impact factor of 4.133. (Sik-Ho Tsang @ Medium)
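To give a feel for the bilinear pooling step, here is a highly simplified NumPy sketch of the idea on two hypothetical global feature vectors; the real model pools convolutional feature maps, and the dimensions and post-processing below are illustrative assumptions rather than the authors’ implementation:

import numpy as np

# Hypothetical feature vectors from the two branches.
f_synthetic = np.random.randn(128)   # from the CNN pre-trained on distortion type/level
f_authentic = np.random.randn(256)   # from the pretrained image-classification CNN

# Bilinear pooling: outer product of the two feature vectors, flattened.
bilinear = np.outer(f_synthetic, f_authentic).ravel()        # shape: (128 * 256,)

# Signed square-root and L2 normalisation, as commonly used with bilinear features.
bilinear = np.sign(bilinear) * np.sqrt(np.abs(bilinear))
bilinear = bilinear / (np.linalg.norm(bilinear) + 1e-12)

# A final regression layer would map this unified representation to one quality score.
print(bilinear.shape)  # (32768,)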
https://sh-tsang.medium.com/paper-db-cnn-deep-bilinear-convolutional-neural-network-image-quality-assessment-2dbf96cc1bc
['Sik-Ho Tsang']
2020-11-08 06:45:32.248000+00:00
['Artificial Intelligence', 'Quality Assessment', 'Convolutional Network', 'Deep Learning', 'Iqa']
Masturbation May be The Solution to 80% of Our Problems
Masturbation May be The Solution to 80% of Our Problems Here is the science to back it up. Photo by Francesca Zama from Pexels A friend once said 9 out of 10 people masturbate, and the tenth person is a liar. She was right: masturbation is one of the first things we learn to do in the womb. This curiosity continues with us through toddlerhood, childhood and adulthood. But somewhere in our formative years we learn that self-pleasuring is wrong. Growing up, I remember the shame around masturbation and sex. I remember how adults tiptoed around the subject claiming the slightest knowledge would destroy our innocence. I remember how religious instructors preached that our bodies were temples that housed God and masturbation would threaten that union. Despite these teachings, my childish mind couldn’t comprehend how something that felt so good was so wrong. But soon guilt and shame kept me in check. This is where my self-hate began. We all have stories that follow this same narrative. Stories that taught us shame and guilt and consequently made us detached from ourselves and from healthy sexuality. If we’re to improve in any way, we have to remove the stigma around masturbation. By understanding how masturbation translates to self-love and how it impacts our overall health, our journey is made easier. So read on to find out.
https://medium.com/sexography/masturbation-may-be-the-solution-to-80-of-our-problems-456cb82cae10
['Dona Mwiria']
2020-12-30 07:28:34.492000+00:00
['Growth', 'Women', 'Men', 'Science', 'Sexuality']
How to Automate Google Sheets with Python
How to Automate Google Sheets with Python How to use the pygsheets python package to play around with Google Sheets and to automate them. Google spreadsheets are easy to maintain, edit, and share with people with the python package pygsheets. I have been using pygsheets for a long time to automate my daily work in google spreadsheets. pygsheets is a simple, intuitive python library to access google spreadsheets through the Google Sheets API v4. Automating Google Sheets with python is not as hard as climbing Mount Fuji. 😉 A picture from my Mount Fuji trekking. Everyone knows what google spreadsheets are and how to use them. In this article, we will learn how to play around with google spreadsheets with python. So, without further ado, let’s start. Installation pip install pygsheets Get client secret Obtain OAuth2 credentials from the Google Developers Console for the google spreadsheet api and drive api and save the file as client_secret.json in the same directory as the project. See the complete guide here. Authorization import pygsheets gc = pygsheets.authorize() # Use customized credentials gc = pygsheets.authorize(custom_credentials=my_credentials) # The first time, it may produce a link to authorize Open spreadsheets and worksheets Google spreadsheets can be opened by name, id, and link. Worksheets can be accessed by name or index. How to open a spreadsheet and worksheet with pygsheets. Playing around with the spreadsheet Authorize and open a spreadsheet import pygsheets gc = pygsheets.authorize() sh = gc.open('medium') # Open a spreadsheet with name 'medium'. Get spreadsheet id sh.id # Returns id of spreadsheet Get spreadsheet title sh.title # Returns title of spreadsheet Get spreadsheet url sh.url # Returns url of spreadsheet Check last update sh.updated # Returns date and time of last update Delete spreadsheet sh.delete() # Delete spreadsheet Get worksheets info sh.worksheets() # Returns information of worksheets Share spreadsheet sh.share('[email protected]', role='commenter', type='user', emailMessage='Here is the spreadsheet we talked about!') sh.share('', role='reader', type='anyone') # Make public Remove permissions sh.remove_permission('[email protected]', permission_id=None) # You can specify a permission id Add new worksheet sh.add_worksheet('sheet3', rows=250, cols=20) Delete worksheet sh.del_worksheet('sheet3') Playing around with a Worksheet Open a worksheet wk1 = sh.sheet1 or wk1 = sh[0] # Open first worksheet Get title, id, and url of worksheet wk1.title # Returns title of worksheet wk1.id # Returns id of worksheet wk1.url # Returns url of worksheet Get rows and cols count wk1.rows # Returns number of rows wk1.cols # Returns number of columns Get cell object and cell value wk1.cell((row_number,col_number)) # Returns cell object wk1.cell((row_number,col_number)).value # Returns cell value as string Get value/values/records wk1.get_value('A1') # Returns A1’s value wk1.get_values('A1', 'B2') # Returns values in the range A1 to B2 wk1.get_all_values() # Returns list of all values in worksheet wk1.get_all_records() # Returns a list of dictionaries Example of get_all_records Update value/values wk1.update_value('A8', '40') # Updates A8 with 40 or wk1.update_value('A8','=A6+A7',True) # Updates A8 with sum of A6 and A7 wk1.update_values('A8', [['G',40]]) # Updates values starting from A8 Get rows or columns wk1.get_row(row_number) # Returns a list of all values in a row wk1.get_col(col_number) # Returns a list of all values in a column Add/delete rows and columns wk1.add_rows(n) # Add n rows to worksheet at end wk1.add_cols(n) # Add n columns
to worksheet at end wk1.delete_rows(n) # Delete last n rows of worksheet wk1.delete_cols(n) # Delete last n columns of worksheet Insert rows and columns wk1.insert_rows(row =1, number = 2) # inserts 2 new rows after 1st row wk1.insert_rows(row =1, number = 1, values =['AA', 40]) # insert 1 new row and insert values in same row wk1.insert_cols(col =6, number = 2) # inserts 2 new columns after 6th column wk1.insert_cols(col=6, number = 1, values =['AA', 40]) # insert a new column and insert values in same column Update row and column wk1.update_row(row_index, values, col_offset =0) # Updates values in a row from 1st column Example: >>> wk1.update_row(9, ['H', 45, 178, 81]) wk1.update_col(col_index, values, row_offset=0) # Updates values in a column from 1st row Example: >>> wk1.update_col(9, [78, 45, 178, 81]) Adjust width of column and height of row wk1.adjust_column_width(start=0, end=3, pixel_size=50) # Updates column size to 50 pixels wk1.adjust_row_height(1,10, pixel_size=50) # Updates row height to 50 pixels Resize and clear worksheet wk1.clear('A9') # Clear all values starting from A9 wk1.clear('A9:D10') # Clear values in grid range A9 to D10 wk1.resize(num_rows, num_cols) # Resize to given dimension Add pandas dataframe to worksheet wk1.set_dataframe(df, 'A9') # Inserts df in worksheet starting from A9 # Note: set copy_head=False if you don't want to add the first row of df Get worksheet values as pandas dataframe wk1.get_as_df() # Returns a pandas dataframe of worksheet # Note: You can specify start and end to get data for a specific range Example of get_as_df() Add chart to worksheet >>> wk1.add_chart(('A1', 'A6'), [('B1', 'B6')], 'Age Chart') <Chart COLUMN 'Age Chart'> Chart added to worksheet. I guess you have learned enough to play around with google spreadsheets with python. Many more operations can be performed with the pygsheets package. Please see the documentation of pygsheets to learn more operations. Thank you for reading this article. Read my other Medium Articles here. Reach out to me on LinkedIn if you have a query. Reference: https://pygsheets.readthedocs.io/en/latest/index.html
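Putting a few of these calls together, a minimal end-to-end sketch might look like the following; the spreadsheet name, worksheet and values are only examples modelled on the snippets above:

import pygsheets
import pandas as pd

gc = pygsheets.authorize()             # assumes client_secret.json is in the project directory
sh = gc.open('medium')                 # example spreadsheet name
wk1 = sh.sheet1

# Insert a row of values and read the worksheet back as a pandas dataframe.
wk1.insert_rows(row=1, number=1, values=['I', 29, 170, 66])
df = wk1.get_as_df()
print(df.head())

# Write a dataframe to the worksheet starting at a given cell.
new_df = pd.DataFrame({'name': ['J', 'K'], 'age': [31, 27]})
wk1.set_dataframe(new_df, 'A12')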
https://medium.com/game-of-data/play-with-google-spreadsheets-with-python-301dd4ee36eb
['Dayal Chand Aichara']
2019-07-19 03:40:55.858000+00:00
['Pygsheets', 'Pandas', 'Python', 'Google Spreadsheets', 'Data Science']
Make Your Python Code Fluent
Make Your Python Code Fluent With Function and Operator Overloading Photo by Nolan Marketti on Unsplash Overloading in Python allows us to define functions and operators that behave in different ways depending on the parameters or operands used. Operator Overloading As an example, we can use the “+” operator to do arithmetic calculations on numerical values while the same “+” operator concatenates two strings when string operands are used. This is called operator overloading and it allows us to use the same operator on different object types to perform similar tasks. As shown below, we can overload the “+” operator to use it with our custom-made object types as well. # No overloading, task is performed by 'add' method cost1 = Cost(10) cost2 = Cost(24) cost_total = cost1.add(cost2) # '+' operator is overloaded to work with 'Cost' type of objects cost1 = Cost(10) cost2 = Cost(24) cost_total = cost1 + cost2 The second code block above is easier to read and understand compared to the first one. This is how overloading makes our code fluent and clean. Function Overloading Although Python does not support function overloading by default, there are ways of implementing it with some tricks. Let’s consider that we want to create a function to calculate the area of a triangle. The user can provide: the base and height of the triangle, or the lengths of the three sides of the triangle. We need to define two different functions to handle the task if we don’t consider overloading. Instead of having different function definitions for the same task, we can write only one function and overload it to increase code consistency and readability. #--------------------- # No overloading, task is performed by two similar functions #--------------------- def triangle_area_base_height(base, height): .... def triangle_area_three_sides(a_side, b_side, c_side): .... area1 = triangle_area_base_height (10,14) area2 = triangle_area_three_sides (10, 12,8) #--------------------- # Function overloading #--------------------- from multipledispatch import dispatch @dispatch(int,int) def triangle_area(base, height): ..... @dispatch(int,int,int) def triangle_area(a_side, b_side, c_side): ..... area1 = triangle_area (10,14) area2 = triangle_area (10,12,8) Overloading Operators When we use an operator, a special function associated with that operator is invoked. As an example, when we use the + operator, the special method __add__ is invoked. To overload the + operator, we need to extend the functionality of the __add__ method in a class structure. # Addition of 2D point coordinates with # + operator overloading class Point2D: def __init__(self, x, y): self.x = x self.y = y # adding two points def __add__(self, other): return self.x + other.x, self.y + other.y def __str__(self): return str((self.x, self.y)) point1 = Point2D(5, 4) point2 = Point2D(6, 1) point3 = point1 + point2 print(point3) Output: (11, 5) Instead of defining an additional function to add Point2D objects, we can overload the + operator to have more fluent and easy-to-read code. You can overload all operators, including augmented assignment, comparison and bitwise operators, by modifying the associated special methods in a class structure. Overloading Built-in Functions We can also overload built-in Python functions to modify their default actions. Consider the len() built-in function, which returns the number of objects in a sequence or collection. To use it with our custom-made object type, we need to implement overloading. To overload len(), we need to extend the functionality of the __len__ method in a class structure.
Let’s see below how we can do it in Python: class Names: def __init__(self, name, country): self.name = list(name) self.country = country def __len__(self): return len(self.name) obj1 = Names(['Amy', 'Harper', 'James'], 'UK') print(len(obj1)) Output: 3 You can similarly overload all built-in functions. Photo by Chundy Tanz on Unsplash Overloading User-Defined Functions Python does not support function overloading by default. But we can use the multipledispatch library to handle overloading. After importing multipledispatch, what we need to do is decorate our functions with the @dispatch() decorator. import math from multipledispatch import dispatch @dispatch(int,int) def triangle_area(base, height): return (base*height)/2 @dispatch(int,int,int) def triangle_area(a_side, b_side, c_side): s = (a_side + b_side + c_side) / 2 return math.sqrt(s * (s-a_side) * (s-b_side) * (s-c_side)) area1 = triangle_area (10,14) area2 = triangle_area (5,5,5) print("Area1: {}".format(area1)) print("Area2: {}".format(area2)) Output: Area1: 70.0 Area2: 10.825317547305483 Key Takeaways Overloading in Python allows us to define functions and operators that behave in different ways depending on the parameters or operands used. Operator overloading allows us to use the same operator on different object types to perform similar tasks. Instead of having different function definitions for the same task, we can write only one function and overload it to increase code consistency and readability. Conclusion In this post, I explained the basics of overloading in Python. The code in this post is available in my GitHub repository. I hope you found this post useful. Thank you for reading!
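As a small hedged addition (not from the original post), the comparison operators mentioned above can be overloaded in exactly the same way; for example, extending the Cost class from the beginning of the article with __lt__ makes its instances sortable:

class Cost:
    def __init__(self, amount):
        self.amount = amount

    # Overload the < operator so Cost objects can be compared and sorted.
    def __lt__(self, other):
        return self.amount < other.amount

    def __repr__(self):
        return "Cost({})".format(self.amount)

costs = [Cost(24), Cost(10), Cost(37)]
print(sorted(costs))  # [Cost(10), Cost(24), Cost(37)]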
https://towardsdatascience.com/make-your-python-code-fluent-7ee2dd7c9ae3
['Erdem Isbilen']
2020-07-25 22:48:05.855000+00:00
['Function Overloading', 'Operator Overloading', 'Python', 'Programming', 'Overloading']
Take Action to Combat Fear and Move Forward
How do you know if your plans will succeed? Sometimes you don’t, but imagine if you do nothing. Assess where you currently are: what resources do you have, and what people do you have on your side? How much money do you have, or don’t have? For those of us who rely only on ourselves for income, laying out plans and taking action is necessary. It’s important to first create a structure or framework. I know a financial planner who made a spreadsheet with his financial goals each month, and that includes developing leads. He gives himself points for attending a certain number of meetings and networking events. Then he gives himself additional points for conversations. It’s an area where I need to improve so I continue to have a steady number of leads coming through. Knowing your people resources includes knowing who are the most likely people to send potential clients your way. For me, individual marketing consultants needing content written are often helpful. Where can you be consistent and take action to stave off fear and uncertainty? It may not have to be in business but it could be at home. Are there issues with children and their schooling or in your relationships? Break down the specific areas where you can gently and respectfully raise issues and then talk about needs. I’m launching a fifth novel and for the first time, my writing partner and I have come up with more of a comprehensive marketing strategy. It’s not perfect by any means but it gives us a framework so sales don’t just fall flat and fall off the radar. I can look ahead to the next year and see that it’s realistic to create a novel, a novella, and a few short stories. I’ve found inexpensive services like Book Doggy to use for promotion in addition to our own social media and email list. We have a plan for taking action and evaluating, which is far superior to wondering why nothing is happening and possibly letting fear settle in and grip us. As the old saying goes, it’s easier to steer a boat that’s sailing than one moored in a harbor.
https://medium.com/live-your-life-on-purpose/take-action-to-combat-fear-and-move-forward-563fd3d63342
['Don Simkovich']
2020-10-31 17:01:04.290000+00:00
['Self Improvement', 'Productivity', 'Success', 'Entrepreneur', 'Fear']
U.S. Attorney General Barr asks Apple to unlock iPhones of Pensacola shooter (PLUS what is Apple’s defense)
U.S. Attorney General Barr asks Apple to unlock iPhones of Pensacola shooter (PLUS what is Apple’s defense) This appeared in The Millennial Source Following a deadly December shooting in Pensacola, Fla., United States Attorney General William Barr has pressed Apple Inc. to help unlock a pair of iPhones believed to belong to the gunman. In a press conference Monday, Barr stated the shooting, which resulted in four deaths, was determined to be a terrorist attack. The gunman was a Saudi national who was training with the U.S. military. The request by the U.S. Department of Justice to have the phones unlocked is similar to a request the Federal Bureau of Investigation (FBI) made in 2016. At that time, the bureau was seeking to unlock the phone of a shooter who was involved with a 2015 San Bernardino attack that left 16 dead. What happened in Pensacola On Dec. 6, a gunman identified as Mohammed Saeed Alshamrani opened fire on the U.S. naval base in Pensacola, Fla., according to NBC News. Alshamrani killed three U.S. Navy sailors and injured eight others, including two Escambia County sheriff deputies. He was shot and killed by additional sheriff deputies on the scene. Alshamrani, who was a second lieutenant in the Royal Saudi Air Force, was part of a Saudi-funded program to train with the U.S. Air Force. The training involved aviation training as well as studies in English. Motivation for the shooting In the immediate aftermath of the shooting, questions turned to the shooter’s motivations, according to USA TODAY. The shooting was initially investigated as a terrorist attack, but authorities were cautious about declaring that as the official motivation. However, on Monday, Barr announced the motivation for the attack was “jihadist ideology,” which refers to militaristic actions based on extremist Islamic beliefs, according to The Hill. No direct link to any specific terrorist group has been found for the attack, though. A month-long investigation found that Alshamrani had posted messages on social media platforms that were deemed anti-American and anti-Israeli. The most recent messages were posted just two hours before the attacks. On September 11, 2019, he posted a message that simply said, “The countdown has begun.” According to FBI Deputy Director David Bowdich, Alshamrani shot at pictures of U.S. President Donald Trump and another president during the attack, according to The Washington Post. Witnesses also said they heard the shooter making unfavorable statements about the actions of the U.S. military in other countries. Unlocking Alshamrani’s phone In order to further the investigation and understand more about Alshamrani’s radicalization, Barr has asked Apple to unlock two of the shooter’s phones, according to The New York Times. Barr also stated he believes that information on the phone could help protect against future attacks. iPhones have a security feature that can result in all the data on the phone being erased after six or 10 incorrect attempts, depending on the model, to enter the passcode, according to Business Insider. For that reason, investigators cannot simply attempt to break into a phone through repeated attempts. Apple has shared information from Alshamrani’s iCloud account, but the company has so far refused to provide a means of cracking the two phones. This is consistent with the stance the company took in 2016. 
Apple’s defense of civil liberties In December 2015, two Islamic terrorists entered a social-services center in San Bernardino, Calif., according to The Atlantic. The couple, which was married, killed 14 people and wounded 21 others before being killed by police officers. Following the shooting, an iPhone used by one of the shooters was recovered. The Department of Justice approached Apple Inc. about it opening the phone by creating a “backdoor” to bypass the passcode, according to The Guardian. The DOJ and FBI argued that they needed access to the phone to better investigate the shooting and provide justice for the victims. The backdoor that the FBI recommended would have involved creating a new operating system that could be loaded onto the phone, according to WIRED. This OS would not include a limit on incorrect passcode attempts. In that way, Apple wouldn’t be providing the code; it would simply be allowing the authorities the ability to crack the code without the possibility of data deletion. Apple’s CEO, Tim Cook, refused to comply with the US government, even going to court to assert their right to refuse. Cook cited the need to protect civil liberties as his reason for his company’s refusal to cooperate. In an internal email, he said the “data security of hundreds of millions of law-abiding people” was at stake. The U.S. government backed down and Apple was not required to create the backdoor. The U.S. expels 21 Saudis During the press conference in which he discussed Alshamrani, Barr also announced that 21 Saudi military students were being expelled from the country, The Millennial Source reported. The students are being expelled for posting Jihadi or anti-American messages on social media, as well as for having contact with child pornography.
https://themillennialsource.medium.com/u-s-attorney-general-barr-asks-apple-to-unlock-iphones-of-pensacola-shooter-plus-what-is-apples-9e641813cc56
['The Millennial Source']
2020-01-16 01:38:30.189000+00:00
['America', 'World', 'News', 'Apple', 'Government']
Predict Sales Spikes With C# and ML.NET Machine Learning
It’s a very simple CSV file with only two columns: The date for the sales record The number of shampoo bottles sold on that date I will build a machine learning model that reads in each sales record, and then identifies every anomalous spike in the sales data. Let’s get started. Here’s how to set up a new console project in .NET Core: $ dotnet new console -o Spike $ cd Spike Next, I need to install the ML.NET base package, the time series extensions, and a plotting library: $ dotnet add package Microsoft.ML $ dotnet add package Microsoft.ML.TimeSeries $ dotnet add package plplot We C# programmers unfortunately do not get access to the awesome matplotlib library that Python programmers get to use. But there’s a really nice alternative for us: PLplot. It’s an advanced scientific plotting library and it comes with full C# bindings. If you’re on Windows, you can use the Nuget package out of the box. But if you use a Mac like me, you’ll have to install plplot with homebrew first: $ brew install plplot Linux users also need to install plplot. Here’s how you do it: $ sudo apt install libplplot15 plplot-driver-cairo How cool is that? This is a multi-platform .NET Core plotting app that runs on Windows, OS X, and Linux! Now I’m ready to add some classes. I’ll need one to hold a sales record, and one to hold my model’s predictions. I will modify the Program.cs file like this: The SalesRecord class holds one single sales record. Note how each field is tagged with a Column attribute that tells the CSV data loading code which column to import data from. I’m also declaring a SalesPrediction class which will hold a single sales prediction. Note that the prediction field is tagged with a VectorType attribute that tells ML.NET that each prediction will consist of 3 numbers. Now I’m going to load the sales data in memory: This code uses the method LoadFromTextFile to load the CSV data directly into memory. The class field annotations tell the method how to store the loaded data in the SalesRecord class. The data is now stored in a data view, but I want to work with the sales records directly. So I’m calling CreateEnumerable() to convert the data view to an enumeration of SalesRecord instances. Let’s start by plotting the sales records to get an idea of what my data looks like. I’ll add the following code: You can see that plplot has an API with very short method names. I’ve added some comments to describe what each method does. The line() method draws a line from two arrays of x- and y-coordinates. My code uses two LINQ queries to provide a range of 0..35 for x, and the corresponding sales numbers for y. The final eop() method closes the plot and saves it to disk. When I run the app, I get the following plot: Looks good! That’s a nicely increasing shampoo sales trend. Now I will identify all anomalies in the data. I will use an ML.NET algorithm called IID Spike Estimator. ‘IID’ refers to Independent and Identically Distributed. It means that each sales record is independent from all other sales records, and the probability distribution of all sales records is the same. Anomalies are outliers in the data. These are points on the input time-series at which the data behaves differently from what is expected. These deviations are usually indicative of some interesting events in the problem domain that we want to focus on.
There are two types of anomalies: spikes, which are sudden yet temporary bursts in the values of the input time-series, and change points, which indicate the beginning of persistent changes in the system. Let’s start with spikes. Here’s how to perform spike estimation in ML.NET. First remove the pl.eop() call, and then add the following code: Machine learning models in ML.NET are built with pipelines, which are sequences of data-loading, transformation, and learning components. My pipeline has only one component: DetectIidSpike, which reads the sales records and estimates all anomalous spikes in the data. I have to provide the input and output column names, a confidence threshold, and the size of the sliding window used during estimation. I’m configuring my spike estimator to use a 95% confidence threshold and a window that spans 25% of my x-value range. With the pipeline fully assembled, I can train the model on the data with a call to Fit(…) and then call Transform(…) to make spike predictions for every sales record in the data set. Finally I call CreateEnumerable() to convert my transformed variable to an enumeration of SalesPrediction instances. Each sales prediction instance now holds a vector with three values: An ‘alert’ value that is equal to 1 for a sales spike (that exceeded the specified threshold) and 0 otherwise. The predicted sales value. The p-value. This is a metric between zero and one. The lower the value, the larger the probability that we’re looking at a spike. I can highlight the spikes in the plot with the following code: I use a LINQ query to select all spike predictions with the ‘alert’ value equal to 1, and call the pl.string2() method to highlight these locations in the graph with a down-arrow symbol. Here’s what that looks like. When I run the app now, I get the following plot: The algorithm has discovered four spikes in the data. Now let’s try and find the change points. Remove the final pl.eop() and add the following code: My new pipeline has only one component: DetectIidChangePoint, which reads the sales records and estimates all change points in the data. I have to provide the input and output column names, a confidence threshold, and the size of the sliding window used during estimation. I’m configuring my change point estimator with the same values I used before: a 95% confidence threshold and a window that spans 25% of my x-value range. With the pipeline fully assembled, I can train the model on the data with a call to Fit(…) and then call Transform(…) to make change point predictions for every sales record in the data set. Finally I call CreateEnumerable() to convert my transformed variable to an enumeration of SalesPrediction instances. Each sales prediction instance now holds a vector with four(!) values: An ‘alert’ value that is equal to 1 for a change point (that exceeded the specified threshold) and 0 otherwise. The predicted sales value. The p-value. This is a metric between zero and one. The lower the value, the larger the probability that we’re looking at a change point. The Martingale value. This represents the deviation of the current value from the previous trend. High values indicate a change point. I can highlight the change points in the plot with the following code: I use a LINQ query to select all change point predictions with the ‘alert’ value equal to 1, and call the pl.line() method to draw a vertical red line in these locations in the graph. Here’s what that looks like.
When I run the app now, I get the following plot: The algorithm has discovered one change point at day #8. This is where the horizontal sales trend stops, and shampoo sales start trending upward. So what do you think? Are you ready to start writing C# machine learning apps with ML.NET?
https://medium.com/machinelearningadvantage/predict-sales-spikes-with-c-and-ml-net-machine-learning-643bbc8af835
['Mark Farragher']
2019-11-19 15:02:35.305000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Data Science', 'Programming']
School Java Project Chess (1)
School Java Project Chess (1) Printing out an empty game board Let’s create a simple chess app which can be used like a non-digital chess board, meaning that we can use the mouse to move pieces around on the board. It will look like the following when finished. This is the first post of the series so we’ll kick off with Java Hello World at the same time. Our goal is to print out a simple empty chess board represented with 64 dots.
  a b c d e f g h
8 . . . . . . . . 8
7 . . . . . . . . 7
6 . . . . . . . . 6
5 . . . . . . . . 5
4 . . . . . . . . 4
3 . . . . . . . . 3
2 . . . . . . . . 2
1 . . . . . . . . 1
  a b c d e f g h
We simply follow the board style shown on this Chess Wiki page. The following is the computer I used. If you are using one other than MacOS, you may need to adapt some of your tools to follow along, although the Java source code you find here should work on your machine too (it is compatible, I mean). You can bring up the Terminal app by pressing Cmd + Space and starting to type “term…”. The Terminal window looks like the following. The default command line prompt could be “$” or “%” or something else. The next step is not necessary. But for fun let’s change the default prompt from “zhijunsheng@mbp2012 ~ % ” to “🄹 ”. Make sure you have Java installed on your machine. I won’t go into the details about how to install Java on your machine. Here is one way of doing it. If you don’t have homebrew installed you can run the following. Then run brew install java . Make sure java is installed properly: 🄹 java -version java version "1.8.0_20" Java(TM) SE Runtime Environment (build 1.8.0_20-b26) Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode) Create a chess directory, or folder, somewhere in your file system. 🄹 mkdir chess 🄹 ls -l total 0 drwxr-xr-x 2 zhijunsheng staff 64 8 Jan 23:05 chess 🄹 cd chess 🄹 ls -l 🄹 The first program we’ll write is Hello Chess. We use the vim text editor to edit the source code. Two spaces, instead of the tab key, are used for indentation. Run vim Chess.java to start typing in the source code. If you don’t know how to use vim it’s a good time to make friends with it, since most programmers can use this tiny but powerful text editing tool. 🄹 vim Chess.java The complete code has 5 lines:
class Chess {
  public static void main(String[] args) {
    System.out.println("Hello, Chess!");
  }
}
Compile Chess.java with javac and run the app using java. 🄹 javac Chess.java 🄹 java Chess Hello, Chess! Congratulations! We got our first Java program working perfectly. BTW, since programmers are lazy, the above two commands can be put together like this: 🄹 javac Chess.java && java Chess Hello, Chess! There is no need to type the same command manually every time after you update the source code, because the Up Arrow key 🔼 brings you through the command history in the terminal. If you check your current directory you’ll find a new file Chess.class, which was the output of the command javac Chess.java.
https://medium.com/analytics-vidhya/school-java-project-chess-1-85f97a2d1877
['Golden Thumb']
2020-01-15 04:42:21.745000+00:00
['Tutorial', 'Java', 'Chess', 'Kids Programming', 'One On One']
Directions in User Research
As 2019 comes to a close, I’ve been reflecting on the state of UX research and what 2020 can hold for the discipline. I’ve worn many different hats, from leading a research team to editing this channel (which, you guessed it, involves a lot of reading). Looking at the field from these perspectives, here are a few trends I’ve noticed and my predictions on where UX research is headed. Empathy: What is it good for? It’s a founding truism that UX research builds empathy for customers. But some influential voices have been challenging our perceived notion of empathy, saying it allows us to operate with blind spots around privilege, shame, and the harmful effects of a pathologically altruistic mindset. These arguments raise important questions. How can we move past a shallow rhetoric of empathy? What work should we be doing to ensure that research has a positive, connective effect among our partners and the communities we serve? We need to evolve our discourse to encompass in-depth knowledge from fields such as communications and counseling so that we come to understand the generation of empathy as an ongoing practice, not just a buzzword or something we use in a process some of the time. In the next year, we’ll be looking at the practices and processes we have in place as researchers, critically questioning how we engage with communities. We’ll be asking, should a given community’s participation require an invitation from us? Are there better means to learn and continue learning alongside our participants? The answers will reshape our field for the better. Keeping up with tech — the rise of AI and proliferation of remote testing tools Among technology trends, the rise of AI-powered experiences has the biggest ripples for UX research. Although many classical principles governing human-computer interaction design hold true with AI, new guidelines have emerged to account for differences such as the systems’ inherent adaptability. For UX pros researching AI-powered experiences, these differences imply expanded methods as well as increased emphasis on certain fundamentals of recruiting and analysis. Meanwhile, tech developments continue to affect UX research methods. Remote testing, while by no means new, is taking up an ever larger share of testing overall, and we’re seeing more tools in the wake of this sea change. Yet I can’t agree with those who predict that smaller, in-person studies will fall away. For all the benefits of remote studies, local testing still has an edge for reaching certain profiles of users, such as IT pros and children. And at the end of the day, in-person studies offer more opportunities to interact and explore. Ideally, we’ll mature into a better understanding of the strengths and weaknesses of each. Want your product partners to understand the value of user research? Bring them along for the ride For researchers, the most powerful tool to deepen our understanding of people’s needs, pain points, and context is the data we gather. But all the customer data in the world won’t help unless product teams use it. For that reason, one of our key jobs as researchers is developing our design and engineering partners’ buy-in. There are many ways to become a more effective communication partner, including learning how to convey research results as a story. But direct experience is even more memorable than a story well told. Once researchers start training product teams to interact with customers and gather insights directly, they tend to find increased enthusiasm. 
Some natural tensions are likely to arise in this exchange: for example, how can we empower our partners while also creating appropriate expectations of customer-engagement programs’ role in the broader context of research design? Despite such challenges, customer-engagement training and coaching is an approach that many are experimenting with, and I expect this trend to continue. If you build it they will come — exploring insights libraries In 2018 we saw development of the concept of timeless research. This year brought more emphasis on organizations developing the tools they need to achieve this ideal, most notably the insights library, or research repository. These repositories allow research to be accessed regularly. A handful of companies have experimented, and the global Research Ops community has taken up the study of research repositories as a formal project, so we can expect the momentum to grow in 2020. Based on the work happening with our insights library at Microsoft, I think we’ll see a constructive discussion on how to evolve some of the best attributes of atomic research. The conversation about how to curate research insights in ways that keep pace with the industry will also be a hot topic.
https://medium.com/microsoft-design/directions-in-user-research-1b6458338213
['Sheetal Agarwal']
2019-12-19 17:48:29.630000+00:00
['Research And Insight', 'Design', 'Trends', 'UX', 'Microsoft']
Faded Sense
Original Drawing by JusTee

The moments out of control frighten me
Seeming to increase with age
So much to wonder
And even to fear
Without explanation or purpose
And the consequences
Spread like dandelions in the lawn
Little tears needed for life
Or harmony through chaos
Impacting hidden branches of the mind
White coats and gloves for help
Practicing only on the weak
Glue for the breaks
Through new cracks
With reality lost like it was real
Pits in the ground caused to stumble
To remember the earth
And the salt of the ground
Only to look up
For less weight of the hold
And the tears are secret
Beyond wrinkles and lines
Time always different
To where it happened
Or when or why it was
https://justee.medium.com/faded-sense-6ddd8fcff9c9
[]
2018-04-06 18:36:26.944000+00:00
['Time', 'Poetry', 'Life', 'Mental Health']
We Optimize Ourselves by Working Through the Wilderness
We Optimize Ourselves by Working Through the Wilderness Leverage challenging circumstances for long-term growth. One of the messages that we’ve heard lots of times — and legitimately so — is that we shouldn’t be looking to just go back to normalcy. Plus, normalcy is over-rated. What is normal anyway? Things are always in flux and somehow we always still have something complain about! What’s Our Objective? We have a tendency to over-complicate things. We construct lofty goals with well-intentioned plans. Plans are good, but we cannot let them become an idol. Micah 6:8 reminds us that the Lord lays out a pretty simple command: humble yourself and walk humbly with your God. Pretty simple compared with our 10-year plan. And what does the Lord require of you but to do justly, and to love kindness and mercy, and to humble yourself and walk humbly with your God? — Micah 6:8 Humility means recognizing that we don’t have all the answers and that we’re a work in progress — constantly improving or, as Paul says in Philippians 3:14, “I press on toward the goal to win the prize of God’s heavenly calling in Christ Jesus.” When we’re proud, we put ourselves at odds with Jesus — unable to learn and listen to the Holy Spirit guide and point out areas for growth. Moreover, Jesus affirms the Old Testament prophet Micah, and what Paul would eventually utter in his letter to the Philippians, when He reminds us in Matthew 5:28: “Be perfect, therefore, as your Heavenly Father is perfect.” But, how do we “be perfect” — or strive towards it — practically speaking? Optimizing the Wilderness Let me focus on one specific dimension of this process for improvement: how we leverage our endowments, which most notably includes our time. This concept of time is rooted in scripture. Paul reminds us in Ephesians 5:15: “Be very careful, then, how you live — not as unwise but as wise, making the most of every opportunity, because the days are evil.” While not everyone has large financial resources, we all have time — and that’s a gift to be stewarded. While not everyone has large financial resources, we all have time — and that’s a gift to be stewarded. So, how are we using our time during the pandemic? As the economy begins to reopen and stay-at-home orders are lifted, we’re beginning to see a rise in economic activity and traffic. But, we’re far from business as usual — and that’s a good thing — and many companies are likely to have their employees continue working from home. To get a sense of how we might want to think about this moment we’re in, let’s look back at the Israelites when the Lord took them through the wilderness following the Exodus out of Egypt. They spent 40 years in it. The shaking that we’ve witnessed is much like the wilderness that the Israelites experienced. Wondering in the desert, it was just them and their God — Yahweh. That’s much like our experience over the past three months: whether we openly acknowledge the Lord or not, He’s here with us. Getting Ready for Revival While we’ve already had significant breakthroughs in the spiritual realm — and the prayers of the righteous church have unambiguously played a role in moving the country forward —we’re likely to see an even bigger outpouring at Pentecost. Are we ready, or what might the Lord be trying to teach us still? We might think we’re ready for the outpouring, but Jesus rewards those who search and invest. Of course, we are not made righteous through works. But, we can demonstrate our love by how we allocate our time. 
For example, Jesus reminds us in the “Parable of the Talents” that we are to invest and generate a healthy return on investment using our finances, time, and mind. Then, what actions are you taking right now to prepare yourself and reciprocate the love that He poured out for us? Are you investing your time, or just spending it? Social media, for example, has become a particularly large distraction in our day and age. While it can be used for lots of good, it is ultimately a tool, meaning that it can also do harm. Similarly, Netflix can be used to bring a family together and have some laughs around a good movie. But, it too an create a similar addiction — or simply become a black hole that squanders our time. Netflix can be used to bring a family together... But, it too an create a similar addiction — or simply become a black hole that squanders our time. Let’s put an end to these empty pursuits by asking Jesus to transform our desires, which will in turn change the way that we *want* to invest our time, specifically on activities that advance His kingdom. Time is precious and we don’t want to look back when we get to Heaven wishing that we had invested more faithfully in His kingdom. Three Steps for Course Correction How do we put this all to practice? Hardly an exhaustive source, but perhaps we can boil it all down to taking the following three steps. Take an internal inventory of just the past month — where have we allocated our time and thoughts? Ask Jesus to speak to your heart —were all these activities aligned with His will for our life, or what are the areas for improvement? Pray to Jesus to help take the next steps — while we cannot do anything on our own, what does Jesus need to help us with to move forward? We need to take a really hard look at what we’re doing and what we’re thinking. We cannot call ourselves followers of Jesus if we behave the same way that people who do not follow Jesus behave. Everything we do needs to be aligned with His purposes. We should not let this journey through wilderness end without learning the lesson that Jesus is trying to teach us. What are you going to do differently tomorrow?
https://medium.com/publishous/we-optimize-ourselves-by-working-through-the-wilderness-b3a6503772cf
['Christos Makridis']
2020-05-30 14:01:00.998000+00:00
['Self Improvement', 'Christianity', 'Productivity', 'Covid 19', 'Spirituality']
How to create a Web-Based Document Viewer in Python
Setting up web views for documents can be the source of a great many headaches for Python users. That's why today's solution will be as simple as calling a function, then using the HTML embed code that it provides. Our total setup time is going to be just a handful of minutes. This level of simplicity is achieved through use of an API, which we will install now.

pip install cloudmersive-convert-api-client

Our function is named viewer_tools_create_simple, which will also require a new API instance. Set this up in the following manner:

from __future__ import print_function
import time
import cloudmersive_convert_api_client
from cloudmersive_convert_api_client.rest import ApiException
from pprint import pprint

# Configure API key authorization: Apikey
configuration = cloudmersive_convert_api_client.Configuration()
configuration.api_key['Apikey'] = 'YOUR_API_KEY'
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# configuration.api_key_prefix['Apikey'] = 'Bearer'

# create an instance of the API class
api_instance = cloudmersive_convert_api_client.ViewerToolsApi(cloudmersive_convert_api_client.ApiClient(configuration))
input_file = '/path/to/file'  # file | Input file to perform the operation on.

try:
    # Create a web-based viewer
    api_response = api_instance.viewer_tools_create_simple(input_file)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling ViewerToolsApi->viewer_tools_create_simple: %s " % e)

And now input the file path for your document. Moments later you will have your embed code. Effortless.
https://cloudmersive.medium.com/how-to-create-a-web-based-document-viewer-in-python-d7c7232b9843
[]
2020-05-08 17:12:12.993000+00:00
['Document Viewer', 'Python', 'Create', 'Web Based']
ECOMI Is Flying to See You!
For the Months of June & July ECOMI is in the air to meet you! With new blockchain conferences appearing every month, we’re taking our project on the road so you can get familiar with ECOMI products and tech, and have the team answer any questions you have face-to-face. Catch David Yu, CEO and Daniel Crothers, COO at a number of events across Asia. The team will have the ECOMI Secure Wallet with them to show and tell, and can demo the beta version of ECOMI Collect’s augmented reality feature! To organise an interview, or if you just want to make sure you catch them for a chat, please reach out to the pair on telegram @DanECOMI or @davidECOMI. You’ll also have the chance to hear David speak about all things collectibles, and licensing, in the digital space at the 2018 Asia Blockchain Summit in Taipei, Taiwan. Dates and Events: June 2018 Tokyo — Japan 26th & 27th June 2018 Japan Blockchain Conference Venue: Tokyo International Forum www.japan-blockchain-c.com/en/ The first event of its kind in Japan, this national blockchain conference is expecting more than 10,000 attendees with a line up of more than 100 experts in cryptocurrency, blockchain companies, and organisations from all over the world, including CEO of bitcoin.com Roger Ver, and former CEO of Ethereum Charles Hoskinson. Seoul — South Korea 28th & 29th June 2018 Blockchain Open Forum Venue: Nonhyeon-ro 508, Yeoksam 1-dong, Gangnam-gu, Seoul 1st Floor AMORIS www.en.blockchainopenforum.org Open forum on the current state of blockchain, as well as an expansion of use cases, spanning from entertainment, media and payments to real estate and social impacts. July 2018 Taipei — Taiwan (Speaker Day 2 | 2:55–3:30pm) 2nd & 3rd July 2018 2018 Asia Blockchain Summit (Please see discount ticket coupon) Venue: Taipei Marriott Hotel www.abasummit.co David will be speaking about ECOMI, collectibles and licensing in the digital space alongside keynote speakers include Charlie Lee (Litecoin creator), Changpeng Zhao (Binance CEO and founder). Hong Kong (Booth & Speaker) 9th to 12th July 2018 RISE Venue: Hong Kong Convention & Exhibition Centre www.riseconf.com More than 15,000 attendees covering blockchain, new tech, robotics, AI and the expected outlook of crypto over the next 24 months. Hong Kong (Booth & Speaker) 24th to 26th July 2018 NIFTY CONF Venue: Grand Hyatt Hong Kong www.nifty.gg Discussing the ‘bleeding edge’ of non-fungible tokens (NFTs), collectibles, and blockchain gaming. Event sponsored by ECOMI. Let us know which event(s) you’ll be attending, and what you are the most excited about in the comments or on twitter! And don’t forget to reach out to the ECOMI team on Telegram for your chance to chat to David and Dan about ECOMI, collectibles and all things blockchain!
https://medium.com/ecomi/ecomi-is-flying-to-see-you-cb6b130676c5
[]
2018-08-09 01:23:08.458000+00:00
['Bitcoin', 'Blockchain', 'Startup', 'Cryptocurrency', 'Conference']
A New ‘Hero’s Journey.’
A critical post about the biggest cliche story of all time and suggestions for a new story lines towards a more united earth. (I’ve tried a writing innovation. I call it the READER EDIT. If you only read the normal font, you get a shorter more straightforward version. You can also read the Italic, which will explain some points a bit more, if the normal script is not enough or you wish for an example.) The Cliche Chart to reveal the biggest cliches of our time. Google: ‘Hero’s Journey model’ for more models. At the end of this post I offer some alternative stories, with a little different take on overcoming trouble. It’s time to end our current ideas on how stories essentially go. We all must have heard of the ‘Journey of the Hero’, the basis for every good story. It works in training. And Hollywood can’t do without Joseph Campbell’s mono myth anymore. It always repeats this set up: Some huge, often bigger than life, threats need to be overcome by a bigger than life hero. The evil must be stopped. The movie works towards the big shoot out or confrontation that will solve almost everything (unless part II is planned). And the Hero’s Journey is the basis for very good business training models too. It works for leadership, change programs, entrepreneurship, and plain motivation. You are on an adventure in this training, learning how get breakthroughs. Hey, me and a college offered them as “Breakthrough by Metaphor” They are great fun and confronting too. I still offer them, but do add new twists. Yet, it’s time to let it go off this dominant idea of the ‘Hero’s journey’ and find a brand new take on it, that will help the world going through the coming big changes or storms. For a real bigger than life ‘storm is coming’. Our climate crisis has no evil villain, nor can be stopped by one heroic being. It will need all of us to weather the storms. Literal storms from the changing weather and degrading of essential eco systems and metaphorical storms due to the messy economic and political developments, in part caused by the literal storms. Tensions over natural resources will keep rising, like those about energy, whether it’s about oil, or between oil corporations and innovative alternatives. Or just new technological breakthroughs hitting the market that really outdate your business or job. Do Superhero movies offer an escape, inspiration, distraction or false illusions about solving problems? I’d guess all of the above, but still. One wonders. Four reasons why the ‘Journey of the Hero’ needs scrutiny. 1. It boosts ‘Us vs. Them’ thinking. Hollywood loves that simple reality. Kill the bad guy and everything is right again…wrong. Many bad things happen due to broken systems, not because one person rigged it. Also in training the format is often simple: believe that finding and overcoming one big obstacle is the way to go. Where’s the sense of reality? After the training you’ll feel wow, but back at the firm you get confused. Many big and small obstacles, interests, alliances and oppositions, rules and customs are all interwoven and seem out to stop you. That’s why many revolutions end in a mess. A, it turns out it’s often just more of the same, and B, new faces and new system doesn’t always mean better. Reality is complex. And solutions often become the source of new problems. Cars, chain saws, junk food, or insect poisons all were a great idea once. Also the whole idea that there is some evil out there targeting us to destroy everything, hinders acceptance for strangers and other cultures. 
Mistrust and fear actually create more conflict and suffering than they solve, from racist ‘solutions’ to ‘big walls’. 2. You might start to believe it’s all too much. The bigger the heroes, the less we recognize ourselves in them. Standing up is dangerous. The other side is really willing to destroy you for what it wants. You’d rather hide. And you lack what heroes have: specials tools, gifts, inner conviction and especially a kind of magic call. The call you never received. You may feel rudderless and filled with a doubt, that heroes seemingly rarely have. So you wait for a real signal, not this joke: “If you’re waiting for a sign from God, this is it.” We need to start realizing doubters are actually people who care and don’t want to hurt people. It’s the ones who don’t mind and think they can get away with it, who are the problem. And it’s often the sensitive ones who seek to improve things for the victims of the guys who didn’t mind. 3. Almost no hero succeeds alone. We all need help to make it happen. People who do their thing, people who believe in you, people who give in and accept change has come. Heroes often have many people to thank for their success. We should not take them out of the light with our focussed admiration for the ‘one’, but get the others back in. Consider Nelson Mandela. His integrity and values inspired many. And many people were needed to free him from prison and make him president. On both sides of the lines. Sometimes it’s even unclear who all made something happen. Who made the tools for the fire brigade? Many many little things led to the fall of the Berlin Wall, many people wanted it down. Many efforts became an unstoppable wave. And please acknowledge the guards that fired no shots at all as important too. Consider the fight for women to vote. Consider the end of slavery. Etc. There are many heroic individual stories within the process, but how to show the collective making the difference? 4. What is the bigger picture? Firstly those who make the movie frame what you see. They may also decide what values rank on top. A movie about an American Commando saving a hostage from a group of terrorists, will often neglect that the war probably got started, let alone got worse, due to US meddling of some kind. Regime change wars anyone? The USA military has a big finger in Hollywood movies, but they can’t stop side effects. ‘Fun’ fact, many terrorists according to a former CIA agent identify with Luke Skywalker, lone desert boy, rebel, fighting an evil empire. We do have to wonder, is the movie to tell about a hero, who made a difference, or is it once more confirmation that the ‘psychopath attitude of not hesitating to kill’ is what saves the day? Too many leaders and heroes, are shown as people who stay cold blooded in any situation. In reality we can’t trust that strong decisive leaders actually have the capacity to care enough about others. Because of our ‘hero’ culture we forget to wonder, is the way of the hero the best way to solve a situation? Of course calling the cops to arrest all the bad guys is more boring, than a good shoot out of one guy vs the rest. But we should wonder, what isn’t shown, isn’t told? A New Story So what is the new story we need? Here are essential elements that I propose: A: We are all together in it. Everyone who saw the short video: “The Blessed Unrest” of Paul Hawken can feel that millions worry and do something. We all want peace, safety and positive change. Much has been done. 
There is less war, less slavery and we need to work to improve beyond where we are now. We can. “Earth is a Space Ship and We’re all Crew”. That realization becomes more apparent as we understand our planet better and the threats get bigger. It makes more and more people feel we must act. This new approach should help liberate us from out tendency to wait on the ‘leaders’ or the ‘one’ to make it happen and see how we together ‘as a positive swarm’ are the ones with power. Some people called for a WQ, We Quotient, for our capability to see the whole and our role in it. I think this capability is becoming increasingly important if we want to prevent global self destruction. In tribal ‘Us vs Them’ thinking whistle blowers like Edward Snowdon are traitors. For global citizens we see how he seeks to help safeguard the whole from misuse of power. B: There’s a lot of common people, with no gift at all, making a difference through hardship. Movies like ‘Erin Brockovich’ are way too rare. And the movie makes her plight individual, because, well, hey, you need to feel with and for her. Yet many more people than shown in the movie helped her make that difference. There’s many idealistic, sensitive, simple, strange and even victimized people who go out of their way to help others, because if feels like the right thing. This behavior needs way more light, compared to the hero willing to shoot up bad guys. We applaud Fortune 500 success even when it damages the planet and forget the ordinary volunteers, because they didn’t get rich. Where’s the Humanitarian or Green 500? Movies also love the grander scale. Heroes safe the country. Thus all the daily poverty and small stories of daily survival are pushed to the side. I admit, when in trouble, a great action or fantasy movie takes away the trouble away for a bit. C: We need to see more of the effects for normal people, when big powers clash. Next to focus on the heroes, we need to know what it means for all who live through the consequences of clashes. Stories need to show, all people are people, not just inconsequential puppets to make heroes look grand. What makes it okay for heroes to endlessly kill lots of the ‘bad’ guys standing in his way? Or have the hero safe populations, who seem to have no guts, insights or options themselves whatsoever, until ‘our’ (read mostly white male American) hero leads them. We hear the numbers, about civilian victims fallen through drones, but we don’t understand. In most media they are numbers, while the mother of a local victim of violence gets full exposure on screen. Once a Austin Powers movie showed the funeral of an unknowing guard killed as crony of Dr Evil. You know the kind of guys who die, just because they’re in the way of the hero, on his way to safe the day. Perhaps we need to distrust the fingers that point at ‘others’ as dangerous, more than these ‘others’. Reality is, most efforts need other people to succeed. The New Hero’s Journey There is a call; can be a problem or a longing of many. People answer that call. They discover each others role in the issue, during which reality starts to hit harder. Out of this the people rise. They start to connect and frame what is wrong and what might be system shifts needed. A solution emerges. Things start to fall in place, people set up actions and change begins to happen. Resistance slowly turns into new understanding and willingness. People celebrate and reward each other for their help. 
(2018 edit: small video told by Caitlin Johnstone explaining the need for this) Differences with the old model: The dragon is there right in the beginning. Or there is a longing for a better system. The trouble with the old no one’s fault, though their might be people who misuse the old system, consciously or because of ignorance. Nasty suffering is quite possible, as are nasty attacks or setbacks. Not because of a criminal, but perhaps even the FBI totally convinced it is preventing unrest, while they only help postpone corruption to be exposed. Most of all, it is a story of people with a higher purpose, willing to be human read vulnerable, grateful, trusting, willing to help during the experience. The end does not justify the means at all. The new stories shows how that can be done. There may not be a final fight, but there sure is a celebration at the end. Instead of the final fight there may be a series of shots of people changing, more green in cities or hugs between strangers. Not just a camera doing a fade out over a hero walking away form the dead body of the evil bastard. Rather images that shows the (long term) result of positive action. Around the world people are working for a positive change. And there are still a lot of people who need human dignity and solutions that work for them as well. They deserve to be in the light as well. It’s up to us to invent stories that makes us all feel part of our ‘ship’. It’s up to us to connect to dots, not just see the Hollywood ones that distract us. The hero’s journey can be replaced by the Human Experience. Happy Journeys. There’ already a ‘lot of calls for change or examples of what works and what not’ out there. And movies like this one, about what it means to be Human. Yet fictional movies that address real issues are way too few, and if they’re there, often way too safe as to not compromise the interests they address, all to prevent legal issues. (2017 addition) Here’s a very similar call for changing stories, but in very different wording and a, can I see it, deeper scope: http://www.huffingtonpost.com/maya-zuckerman/the-collective-journey-pa_b_9073346.html And here a step what this might mean for Hollywood. And if you’re up for it. Here’s some ideas for Future Scenario’s. Some stories we need to see more often: Border patrols and paper works, can be very hard for people on the run for war. Game of Bones. A story about the effects of big players on the world. What about a series of an Afghani family? They all suffer the real world ‘Game of Thrones big powers play’. The strategist big players we hardly get to see and don’t follow. Just the effects of their choices. Mother gets, by mistake, for carrying large branches for the fire that look like guns from above, killed by a drone. Daughter 1 flees to Pakistan, and works in a sweat shop for a big company seeking the cheapest labor. It’s that or prostitution to foreign diplomats. Daughter 2 marries a guy, through Taliban pressure. Her husband later becomes Isis soldier and dies. She stays behind in an Isis occupied, war zone. Son 1 gets threatened for translating for the Americans (read is seen as a traitor by fellow country men) and flees to US. And he only gets there after a long long journey and then gets arrested as illegal alien. Son 2 leaves for Australia and gets a lot of racial hate and suspicion towards him. Father stays behind and gets harassed because son 1 betrayed his country by working for the Americans. Etc. Politics, Religion and Business rule their lives from afar. 
Moral of the story: people who see the world as a chess board never bring the solution. They are part of the problem. Flash mob as form of intelligent and positive swarming for fun. Magnolias with a purpose A story of people working together to make a difference. In reality we are already swarming towards emerging solutions for big problems. With our focus on individuals we too often don’t realize that. This is a typical ensemble concept for television. In this series a group of individuals all work together on the same real world issue. The series focusses on the positive ones making the change. There is no real enemy, but there may be conflicting interests, corruption, broken systems and some nasty people. The theme doesn’t matter; whether it’s about gay rights, corporate pollution, local corruption or the fall of the Berlin Wall. It’s about how we all are helping to make a difference together. Big point is, for a long time these people feel alone, don’t know about each other. They all start with hope, care, pain or anger about what is happening. During the series each of them is serious help to several of the others and to the whole solution, while not (yet) seeing the bigger picture or knowing about each others efforts. Slowly they start to see and learn about each other, which gives them more hope, courage and purpose. In the end they have a breakthrough and solve a big threat, perhaps even fracking or celebrate the first gay marriages. In a second series they may face a new issue more together now, or a different historic change, where many made the difference together is retold, for example the end of slavery. Here’s a blogpost with an example of the style of writing fitting this: Everything Touches. Moral: We are all in it together and must trust others act on this too. By acting our care we help the whole to evolve and overcome problems. iEye may even be a sweet alien detective. iEye Our collective effort can become stronger through technology. By now we’ve all seen movies where ‘need to know’ means the hero will be betrayed. And we live in a world that gets more transparent. Transparency already has solved crimes, through people who shared their need on social media, or police who aired certain faces. So what if the detective takes you directly on the case? iEye is a science fiction animation about a small detective all wired up to social media. In many rooms people sit in with his researches, and of course sometimes the crooks as well. :) The downsides must also be shown and discussed. His most loyal followers give him advice on the spot or do online research for him. Thus he is helped by a hacker, a intuitive girl, an unemployed lazy bum mostly offering advise filtered by his bias, and some others. We see every episode his followers doing a poll who, or what, might be the culprit. Our detective may even talk to the audience or phone in to one of them, wondering about a suggestion nobody thought of before. Later there might be a media station airing his researches, which attract perhaps more dangers, but also may safe him at times, as there are always witnesses. His cases are all over the top global issues. Arms industries starting wars for profit. Robots who want to be recognized as people, getting lynched. Refugees with five sexes, who get imprisoned because they aren’t male or female. Dangerous products that kill. Banks inventing literally new money out of nothing and the dangers that come with that. Etc. The series is a bit crazy, like the Simpsons in 3d/cgi. 
My favorite style would be more funny, more humane, more Pixar. While the arena (story stage) is clearly a smaller simpler version of our planet (Earth may have been destroyed, because we learned too late you can’t eat money) many other planets with each their own issues, rules and aliens may be visited. Once again: there’s NO big evil, behind all murders that slowly becomes the arch enemy. (it may seem like this for a bit, only to have that illusion broken, the other is I too.) It’s about working together towards solutions, where all differences are accepted, as they each bring in their own valuable viewpoint. Moral: Our differences are contributing and adding to each other. Big issues need a diversity in voices and viewpoints willing to listen en investigate to be solved. Note: I used some pictures I found on the internet. I will take them down immediately when you have rights to them, or add the name of the makers, if so desired.
https://medium.com/the-gentle-revolution/a-new-heros-journey-cdd8ca77a7d3
['Floris Koot']
2020-04-10 14:05:39.690000+00:00
['Storytelling', 'Social Change', 'Scenario', 'Heroes', 'Mythology']
Clean Code, not a part of skincare routine
As a skincare lover, I know for sure that cleaning is the basic, most important thing in skincare. And not just in skincare: since the pandemic happened, cleanliness plays a big part in staying healthy. Unsurprisingly, it applies not only to skincare and health but to coding too.

Photo by Lee Campbell on Unsplash

What is clean code? Here is an explanation of what clean code is.

Clean code is simple, non-complex code. Clean code is not confusing or complicated. It is straightforward, to the point and precise.

Clean code is code that is easy to understand. Clean code means that the code can be understood with ease by other developers. Each task, and the role of each class and function, must be clear and concise. The intention of each component of the code must be revealed and comprehensible to other developers (clean code reveals the intent of the written code).

Clean code is code that is maintainable. Code is considered clean if it is easily maintainable by other developers. Other developers must be able to add new things and change functions and features easily.

Why do we need clean code? We need clean code to make sure that the application can be developed not just by the original developer who wrote the code, but also by other developers who work within the team and/or company. Since developing a product is not a stagnant process (even more so with agile development), the code must be versatile too.

Photo by AltumCode on Unsplash

How to write clean code?

Only write what is necessary. Writing code that is not necessary might confuse other people who are working on the code. It might distract them from understanding the usage and point of the code, which makes the intention of the code unclear.

Remove dead/unused things. Just like point number one, unused things get in the way of understanding the intention of the code. Because something is there, people might think it is important, but when they look for its usage within the flow they become confused, since some elements have no function at all: they are dead/unused.

Duplication means automation is required. This is the application of the DRY principle, or Don't Repeat Yourself principle. Duplication means doing the same thing, but for different things. This means it could be written as a function (a short sketch follows below).

Use clear naming for variables, functions and classes. In order to know what something is for, or what value a variable holds (for easier comprehension by other developers), we need clear naming for all the elements inside the code. This makes it easier for other people to identify the elements inside the code in no time.

Tools to help clean up the code. My team uses Sonarqube as a tool to see the bugs and the code smells that might exist in our code. We can see the percentage of code duplication, and we can also see the maintainability and code smells of the code. We try to keep these as low as possible to make the code easier for other teammates/developers to work on. Sonarqube is fairly easy to use and it is a staple tool for checking and supporting clean code!

Good luck on your journey to cleaner code!
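As a hedged illustration of the DRY point above (the function names and the totalling logic are invented for the example, not taken from the article), here is the same formatting logic duplicated in two places and then extracted into one helper:

```ts
// The same total-formatting logic copied into two functions...
function showInvoiceTotal(prices: number[]): void {
  const total = prices.reduce((sum, p) => sum + p, 0);
  console.log(`Total: $${total.toFixed(2)}`);
}

function showCartTotal(prices: number[]): void {
  const total = prices.reduce((sum, p) => sum + p, 0);
  console.log(`Total: $${total.toFixed(2)}`);
}

// ...is extracted into one clearly named helper, so a future change
// (say, a different currency symbol) only has to happen in one place.
function formatTotal(prices: number[]): string {
  const total = prices.reduce((sum, p) => sum + p, 0);
  return `Total: $${total.toFixed(2)}`;
}
```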
https://medium.com/pilar-2020/clean-code-not-a-part-of-skincare-routine-b4aa755331
['Inez Nabila']
2020-11-18 20:08:18.118000+00:00
['Development', 'Clean Code', 'Programming']
Uloom al Quran
Ever wondered what are the different Qiraat of Quran? Curious about how the Quranic text was compiled? Who gathered the whole Quran in one mushaf for the first time? Uloom al Quran by Mufti Taqi Uthmani of Karachi is a very good book for a beginner student of Quran which answers all of the above questions and many more. Although the book packs a lot of research, the simple writing style makes it easy and light to read. Originally written in Urdu, this book has been translated in English and can be found at any book store which stocks Pakistani Islamic books.
https://medium.com/from-my-bookshelf/uloom-al-quran-3576397a001d
['The Niqabi Coder Mum']
2016-08-30 14:45:44.736000+00:00
['Quran', 'Books', 'Islam']
Animated design.
Have you seen the gorgeous animations on the Global Forest Watch Topics pages? We’d like to tell you how and why we made them. It was a process that united art, science and technology to create a captivating entry point for data exploration. Opening up the path to knowledge. Forest loss is a complex issue. Trying to understand everything all at once can be overwhelming. To help first-time visitors to Global Forest Watch learn more about deforestation and guide them towards independent data discovery, we wanted to design an introduction that not only looks great but is rooted in science and viewable on any device. Animation is a great way to introduce people to new or complex topics because the moving images are more engaging and memorable than still ones. However, there are a few things you need to consider if you’re thinking about using it. I asked Estefanía Casal, the designer of the animations to tell me more about the process. Set out your objectives. When we began designing these animations, we had three things in mind. Simplicity. The animations have to be an entry point for people who aren’t forest experts. Representative: The animations need to be more than just pretty, they have to be a faithful representation of the topics. Performance: The animations have to be viewable and usable on any device, regardless of processing power or bandwidth. Simplicity. “Good design should attract, not distract, a person from the data or insights you want to share with them,” explained Estefanía. “With this in mind, we thought a minimalist style would work well for the animations. Our designers and developers have experimented before with isometric projection—a method of visually representing three-dimensional objects in two dimensions—so we decided to build upon this work. Using this approach you can add depth to your animations and make the scenarios they depict seem more realistic.” With isometric projection you can represent 3D objects in 2D. To make sure everyone was happy with the approach, Estefanía shared her preliminary sketches with the Global Forest Watch team at World Resources Institute. Although basic, these early designs prompted some great feedback that fed into the design of the moving parts. Getting feedback early on can save you lots of time in the long run, as adapting a sketch is easier than changing a multi-layer animation. Estefanía’s original sketches that she used to share her ideas and get feedback. Representative. Even though her animations are minimalist in style, Estefanía was still careful in her choice of what elements to feature in each one. Everything in the animation has to have a purpose. If it’s not essential to the story being told, it won’t be included. For example, clouds might make a drawing prettier but if they aren’t part of the narrative, you shouldn’t include them. To help her decide what features to include and what kind of forest to draw, Estefanía talked to Ben and Enrique, two of our Scientists. They showed her pictures of rainforests, mangroves and plantations, and explained what happens when forests are cleared for agriculture or urban expansion. With this information, she could sketch out the dramatic changes that happen when deforestation occurs and add some little details that add character, like a tropical red flower or buttress roots. The red flower and the shape of the tree trunks in this scene are a nod to the real flowers and trees found in rainforests. 
One of Estefanía’s favourite moments of the project was making a tree trunk fall into the river. People had told her that the animations reminded them of a video game, and since games are so engaging, she decided to build on this idea. Seeing the tree fall was the moment Estefanía realised she’d achieved her aim and found an engaging way to visualise forest loss. Estefanía wanted her illustrations to have a video game feel to them. As well as being representative of the real-life events that happen when forest loss occurs, the colours used in the animations reflect the Global Forest Watch brand. The greens match the brand’s colour palette and blend seamlessly with the overall appearance of the platform. It’s a subtle touch that goes a long way towards enforcing a brand and creating a more beautiful experience. Performance. Aesthetics shouldn’t interfere with good user interface, so animations are only good if people can view them without crashing their phones or sending their laptop fans into overdrive. To find the best approach, Estefanía paired up with Ed, one of our developers. Animations can be placed on the screen in a variety of formats, he explained, but due to the complexity of the animations, and the number of moving images (nodes), using a video or a GIF would be very large. We also wanted to be able to transition smoothly between each animation for any given topic. This means we’d have to load all four animations at once. You’d end up requesting 100Mbs of image files for each topic, which is unrealistic if you want people to have a smooth experience that doesn’t crash. So what is the solution? A popular method for animation is the use of Scalable Vector Graphics (SVGs) to dynamically draw on the screen. Instead of loading a heavy image you can draw the image instead. Using a plugin called BodyMovin for Adobe After Effects you can export the SVG to a JSON file which contains references for how to draw the animation. Best of all, if you are a designer you can do this step yourself and not rely on a developer to do it for you. Ed then used a nice library from Airbnb called Lottie to take the JSON and render dynamic looped SVGs. Normally this method is used for simple animations, such as moving icons or logos, but in our case we sometimes had millions of complex nodes to move around with high levels of detail. Estefanía had to reduce the complexity of the animations so Lottie wouldn’t get jumpy. Another reason not to include clouds if they aren’t essential to the story. Each part of the moving arrow-like shapes (which represent CO2) is an element that has to be moved. Add too many and your animation will get jumpy. The resulting JSON was 50kb and it’s tiny in comparison to a video. This allowed us to have a very light page with complete control over where we show the animations. You can check the code on our GitHub if you’re interested in doing something like this yourself. It takes a team to make an animation. Everything you see on your screen is the result of collaboration. Even this blog you are reading right now. Bringing together different skills and working together as one unified team will help you create the very best experiences for people who want to learn more about our world. At Vizzuality, we add scientists to the mix of designers, researchers and developers to ensure data leads the design. 
If we can inspire people with our designs, and spark curiosity, we’re creating an opportunity that someone will use Global Forest Watch data to make decisions that are good for our planet.
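For readers curious about the Lottie step described above, here is a minimal sketch of how a Bodymovin-exported JSON file can be rendered as a looping SVG in the browser. The container id and JSON path are placeholders, not the actual Global Forest Watch assets; their real wrapper code lives in their own GitHub repository.

```ts
import lottie from 'lottie-web';

// Placeholder container id and animation path; swap in your own exported JSON.
const container = document.getElementById('topic-animation');
if (container) {
  lottie.loadAnimation({
    container,          // DOM node the SVG is drawn into
    renderer: 'svg',    // draw vector nodes instead of a heavy video or GIF
    loop: true,
    autoplay: true,
    path: '/animations/forest-topic.json', // JSON exported from After Effects via Bodymovin
  });
}
```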
https://medium.com/vizzuality-blog/animated-design-6e007311ca46
['Camellia Williams']
2019-05-31 16:42:14.005000+00:00
['Design', 'Illustration', 'Json', 'SVG', 'Lottie']
Revisit your own work, you will be surprised…
As I was preparing to share a private piece with someone who is going to be its sole recipient, I took the time to browse my old published pieces (I've been writing regularly on Medium since 2017, and not so regularly even before that). To my surprise, I enjoyed it, and the more I scrolled, the more I liked it. Suddenly, I craved it. The more I saw, the more I wanted to see. The more I remembered, the more I felt, and the more I wanted to feel like that again. The joy of creating, the pride of publishing, and being able to go back in time to revisit my thinking, my ideas, my proposals, offerings and questions to the world. I loved it. So I did it, and here I am now, putting these words together and asking myself: if it feels so great, why did I ever stop? The answer is easy: life. Life gets in the way, and it always will; the real questions then need to be, "why did it take me so long to get back?" or "why was it so difficult to start again?", because if there's one thing I can add beyond joy and happiness, it is the ease you can feel once you let go and start again. That's all it takes. All you need is to grab an idea and turn it into a moment. Let that moment be the first one of many more. Just write one word and the rest will follow. Cheers to you, mysterious and magical recipient; if you ever happen to read this, here's another one I owe you, here's another "thanks" to you 😉
https://medium.com/thoughts-on-the-go-journal/want-a-boost-revisit-your-own-work-you-will-be-surprised-131d2a95f1d2
['Joseph Emmi']
2019-04-27 00:30:01.427000+00:00
['Wrtiting', 'Motivation', 'Life', 'Personal', 'Journal']
Design Thinking is not bullshit
No idea who told her this title would be smart. Maybe it's more of a clickbait. Natasha Jen's point, that design thinking wastes a lot of time and limits us to a less good design result, is understandable. In my opinion it's only understandable from a single perspective. Classic design is a discipline you can execute without dependencies except tools. Take your input, research the job, explore your possibilities and request critique — change. That was, for a long time, a process which worked for many designers. Until interaction design came along. Interaction design is crafted and made by more than one designer. Tech companies learned that products get more excellent and polished by including everyone in a design thinking process. Interest in and responsibility for creating design is now distributed through a whole team instead of resting with one graphic designer. Design thinking is an excellent method for distributing responsibility for design. That's why these workshops are magical at the beginning or at game-changing milestones of a project. Teamwork, responsibility and results are provided by it (you need someone experienced enough crafting these workshops — otherwise your engineers get bored and grumpy). And even though the result isn't that fancy, as she pointed out for IBM Bluemix, it works. And we all know — fancy interfaces can be the wrong answer. Invest 5 days in a workshop and you have evidence for why you invest your budget in this rather boring direction. What Natasha Jen misses in this process is critique, and maybe the pushing of visual boundaries. Today, we have more designers than just visual designers. Because design thinking is made for more than one "designer", critique would be a bummer. It's a creativity killer. Design thinking workshops produce a lot of ideas. At the end of the process we have a testing phase: critique from users. For a long time we believed we were able to perceive and imagine the factors and environment our users have. And we can't. Reality is the best critique we can ask for. And it's the hardest one. We are less vulnerable to political design decisions because a user test is neutral. I believe design thinking is the revolution of design we need. With design thinking we are now able to invite everyone into the design process without losing focus. We are all responsible for good design. That it became an awful buzzword doesn't make it less useful.
https://medium.com/mobile-lifestyle/design-thinking-is-not-bullshit-df939c60cffd
['Marie Schweiz']
2017-08-21 19:39:18.710000+00:00
['Design Thinking', 'Design']
A Metaphor, Analogy and a Simile Walk into a Bar
A metaphor, a simile and an analogy were sitting in a bar. They had been drinking heavily for hours. “You three look so alike,” said the bartender. “That’s racist!” Said the simile, pointing at the bartender. “Sorry, I didn’t mean to offend anyone,” said the bartender. “No one’s ever told me what a simile is. I can’t tell you what that was like.” There was a hungry look in the face of the bartender. You know the look. When you haven’t eaten for a while. “You’re behaving like a hyperbole, but without the humour!” Said the metaphor to the simile. “Now you’re calling me fat!” said the simile. “Oh, my God! What are you like? Stop behaving like such an idiom!” Said the metaphor, “my friends keep telling me, that my incorrect use of metaphors will get me in trouble. But you don’t see me calling them out. If they’re right, we’ll burn that bridge when we get to it.” The metaphor spoke with a misplaced sense of authority. The metaphor had been ordering everyone around all evening. The simile was beginning to get angry, “the bartender is a racist, fat-hating…” “Listen”, said the metaphor to the simile, “as a kid, I always remember my dad being a good dad. But he was no good at being a metaphor. He’d say stuff like, “you’re a f*cking idiot.” Now, you don’t see me complaining do you?” “But it’s still racist to say we all look the same,” said the simile. “I really can’t tolerate it when you similes, confuse reality with metaphors. It makes my head literally explode! You similes do that all the time,” said the metaphor. “Bartender, same again please,” said the analogy. The analogy had the kind of voice that tells its life story in a handle of words. The analogy looked straight at the simile as they spoke. “Let me ask you a question; what do you get when you cross a joke with a rhetorical question?” “You’re making fun of me!” Exclaimed the simile. “Us three have been talking bull for the last couple hours,” said the analogy. “We’ve talked about a lot of stuff. On the one hand, some of the language we used may come across as offensive to some. On the other hand, we are activists for positive social change discussing contentious issues. The bartender here is simply saying, from his perspective we all look the same. Is it the bartender's fault that the average person can’t differentiate between a simile, a metaphor and an analogy? The bartender is only expressing their worldview. I’m interested in hearing what part of the bartender’s statement is racist.” The analogy took a long puff on their cigarette. The simile hesitated before answering, “you can’t say we all look the same.” “But you’ve taken what the bartender said out of context. Without context, where would we be?” The analogy asked the simile. The analogy continued, “do you remember that time the metaphor broke into song because he couldn’t find the key? And what did you do? You called the police!” “What does context mean?” The simile nervously asked the analogy. “In what context is it being used?” The analogy asked the simile. “You’re poking fun at me again,” said the simile. “Maybe I am. But listen, and listen good. I know it is the job of similes to go around and say something is like something else. But you can’t go around saying stuff is like something else when it’s clearly not. At least when the metaphor says something is something else, it actually is.” The analogy took another puff on their cigarette. “I’ve seen you hanging out with that bunch of idioms. Those guys are not what they seem. 
The language those idioms use isn’t meant to be taken literally. I think you’re hanging out with the wrong crowd.” The simile interrupted; “don’t patronise me!” The simile was beginning to lose their composure. “You analogies are all the same. You are always saying something is like something else to make some sort of an explanatory point. You all think you’re better than us similes.” “Now who’s jumping to conclusions? Just because all similes are metaphors, it doesn’t mean all metaphors are similes.” The simile looked at the analogy blankly and confused. The bartender spoke; “It’s been great listening to you guys talk, I’ve learnt so much just from being in your company. You’ve made me feel more confident in the use of figurative language. I now feel able to have my cake and eat it.” The bartender looked grateful but in a blissfully ignorant kind of way. The metaphor was subtlely nodding their head in agreement with the analogy. The analogy was now in conversation with a synecdoche and a metonymy. The simile still looked confused. Please note: Since the above story was written, the simile has been diagnosed with hypertension. A form of ADHD (Analogy Deficit and Hyperboltivity Disorder), with traits of anxiety.
https://medium.com/muddyum/a-metaphor-analogy-and-a-simile-walk-into-a-bar-e93a53f44452
['Lee Serpa Azevado']
2019-11-11 14:41:53.485000+00:00
['Humor', 'Writing', 'Comedy', 'Short Story', 'Satire']
What is False-Consensus Effect?
What is False-Consensus Effect? When designing user experiences, keep in mind that: YOU ARE NOT YOUR USER. Photo by NESA by Makers on Unsplash. As designers, we have a tendency to assume that our users are similar to us. In fact, this is not just a problem that designers have — it is an example of a more general phenomenon called the FALSE-CONSENSUS EFFECT. When solving a design problem, we often make the mistake of believing that our users are like us. For example, when we react a certain way to a design, we may assume others will react the same way. In other words, we may assume there is agreement or consensus when there is none (thus the name false consensus). If we really want to design a user-centered website, app or any other system, if we want to know how our users will respond to it, and if we want to design something they will love, then we need to be more systematic. We need to take action that will ensure we are basing our design decisions on genuine knowledge of our users. Most often this means 'testing our designs' on real users.
https://medium.com/nyc-design/what-is-false-consensus-effect-218e10ac8277
[]
2020-06-21 06:24:06.205000+00:00
['New York', 'User Experience Design', 'Design Thinking', 'User Experience', 'User Interface']
Inclusive design improves UX for everyone
Picture this: You’re sat in the quiet section of a train, and you have a TV show you really wanted to watch. …but you forgot your headphones. Annoying, right? But then you remember, you can watch the TV show with subtitles! Suddenly, you have been enabled to continue normal usage of your device. Well, that’s what inclusive design can do. It’s not just for those with disabilities, inclusive design can improve the lives of everyone it touches, and their experiences with the products they use. Ways to design inclusively Inclusive design can come in all different forms, and it isn’t just restricted to the web. Don Norman — ex Apple VP and author of the Design of Everyday Things — puts it very nicely: Curb cuts were meant to help people who had trouble walking, but it helps anyone wheeling things: carts, baby carriages, suitcases. And it’s the same case on the web. Closed captioning helping you when you forgot your headphones, higher contrast text when you’re out in the sun, larger button sizes for when you’re using the phone with one hand (the list is endless). Vision Online, it is recommended you have a good contrast between typefaces and the background they are on. For example, the WCAG 2 AA standards require a contrast ratio of at least 4.5:1 for normal text (around 18px) and 3:1 for large text (more than 18px). However this does not only help those with poor vision. For example, how many times have you been sat in a class or lecture, and you were bombarded with slides like this: Not great is it? It’s not comfortable to read, and so mentally you’re beginning to switch off. Now imagine if we took the advice of the WCAG and made the slide look like this: That’s a bit nicer. Another aspect of legibility is down to font-sizing. Generally, 18pt and 14pt are an acceptable minimum size for title text and subtext respectively. This roughly translates to around 24px and 18.5px. For body text, something like 12pt (around 16px) usually is the minimum readable font size. Recently I was refactoring some front-end code for a project I was on, and before it was redesigned, the font size was around 14px. The UI looked nice. However, with these accessibility constraints, I worked to find a way to make the UI comply, and after the refactor it looked even better. It was way more legible, and the main information a user would need was comfortable to scan quickly. Now picture you were using this tool outdoors on a sunny day, you’d be really glad that the font-sizes were slightly larger and the colours used on the text were of a higher contrast. These accessible and inclusive design changes have really improved your experience. Physical navigation When filling out a long form online, to speed up the process I tab between input fields. It makes what can be a tiring and cumbersome process slightly quicker, and I can then move on with my life. However, tabbing between elements wasn’t just developed for you to fill out forms quicker. Some people cannot use the conventional mouse or trackpad when navigating around web pages. They rely on physical buttons to navigate. Typically the tab and enter keys. By default, web browsers are pretty good at automatically deciding what a user can tab to and what they cannot. For example, if you use a button, the web browser should automatically let you tab to it. If you use a div however, the browser will ignore it. That’s great, but sometimes a button doesn’t cut it for our design and we want to use a more custom, clickable element. 
A good rule of thumb is: if there is a click event on an element, the user should be able to tab to it. HTML makes this pretty easy too; all you have to add to your element's attributes is tabindex with the value "0". For example: <div tabindex="0" onclick="doSomething()">Click me!</div> What's also important to note is that some frameworks don't interpret the press of the enter key as a click, so you may need to do something like this in Angular (to name one): <div tabindex="0" (click)="doSomething()" (keyup.enter)="doSomething()">Click me!</div> Audio Going back to the train example, imagine if even when you can't listen to something, you could still understand what is being said. Well, unless you're a great lipreader, you're most likely going to have to rely on closed-captioning. Closed-captioning became a reality in the early 1970s, after a failed experiment to send timing data along with TV signals. Now, closed-captioning is fairly commonplace. Think about when you're watching YouTube or Netflix: you're probably only one or two clicks away from being able to enable captions. A great recent innovation with CC comes from Google. At their latest IO event, they revealed that a 'Live Caption' feature will accompany the Android Q release — meaning that closed captioning is done natively by the Android operating system, which turns the spoken word into text on the screen of the device. You won't even have to be connected to the internet. While Google noted that they worked closely with the Deaf community to develop this technology, Google's CEO Sundar Pichai said: "You can imagine all the use cases for the broader community, too, for example, the ability to watch any video if you're in a meeting or on the subway without disturbing the people around you." Now, think about when you're on a slow connection. Imagine you're looking at an article online and enjoying it, but the images aren't loading because of the weak connection. Well, it's now pretty common to have some text describing the image if it can't load for whatever reason. This is called alt-text, and it's really easy to add to your site. However, it's not only for those of us stuck on the equivalent of dial-up connection speeds. It's mainly aimed at those who can't see the image well, so their screen reader can describe it to them. Daniel Göransson has written a great piece on how to create the perfect alt-text, but the key points are: Describe the image in context. For example, if there is a group of people out in the rain, then depending on the context of where that image is, you could use alt-text of "Stormy weather in city." if you were on a weather site, or "Group of friends enjoying themselves in the rain." if the image was on some sort of social platform. Don't include unnecessary/annoying information. For example, the name of the photographer is not important to someone trying to understand what the image looks like, nor are the keywords you're trying to rank higher for on Google. Don't say it's an image. Screen readers will already be telling users this. If you start with "image of…" the screen reader will recite something like "image image of…" to the user. This is simply annoying to anyone using a screen reader. Keep it concise, and end with a period. You don't need to go overboard with your description, and if you end with a period, the screen reader will leave a short pause before continuing on to the rest of the content.
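The alt-text and keyboard advice above is largely qualitative, but the contrast guidance earlier in the article can be checked programmatically. Below is a minimal Python sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas; the function names and example colours are illustrative and not from the original article.

# Minimal sketch of the WCAG 2.x contrast-ratio calculation.
# Function names and example colours are illustrative.

def channel_to_linear(c8bit: int) -> float:
    """Convert an 8-bit sRGB channel to linear light per WCAG 2.x."""
    c = c8bit / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_colour: str) -> float:
    """Relative luminance of a colour such as '#1a1a1a'."""
    h = hex_colour.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return (0.2126 * channel_to_linear(r)
            + 0.7152 * channel_to_linear(g)
            + 0.0722 * channel_to_linear(b))

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio, always >= 1 (lighter luminance on top)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

if __name__ == "__main__":
    ratio = contrast_ratio("#767676", "#ffffff")
    print(f"{ratio:.2f}:1", "passes AA for normal text" if ratio >= 4.5 else "fails AA")

Under these formulas, #767676 on a white background comes out at roughly 4.5:1, which is right at the AA threshold for normal text; anything darker passes with room to spare.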
https://medium.com/swlh/inclusive-design-improves-ux-for-everyone-c8d137d255df
['Michael J. Fordham']
2019-06-01 14:21:03.287000+00:00
['Design', 'Accessibility', 'Technology', 'Visual Design', 'UX']
Analyzing CitiBike Data: EDA
Let’s get some more information on the data. df.info() # sum of missing values in each column df.isna().sum() We have a whopping 577,703 rows to crunch and 15 columns. There are also quite a few missing values. Let’s deal with the missing values first. Handling missing values Let’s first look at the percentage of missing values in each column, which will help us decide whether to drop them or not. We cannot afford to drop all the rows with missing ‘birth year’ values. Hence, we drop the entire ‘birth year’ column instead, and drop the rows with missing values in ‘end station id’, ‘end station name’, ‘end station latitude’, and ‘end station longitude’. Fortunately, the missing values in these four columns (end station id, end station name, end station latitude, and end station longitude) fall on the exact same rows, so dropping the NaN rows across all four columns still results in only 3% data loss. Let’s see what gender tells us about our data We can see more male riders than female riders in New York City, but due to the large number of unknown genders, we cannot reach any concrete conclusion. Filling in the unknown gender values is possible, but we are not going to do it, considering these riders chose not to disclose their gender. Subscribers vs Customers Subscribers are the users who bought the annual pass, and customers are the ones who bought either a 24-hour pass or a 3-day pass. Let’s see which one riders choose the most. We can see there are more yearly subscribers than 1–3 day customers. But the difference is not large; the company should focus on converting customers into subscribers with offers or sales. How many hours do riders typically use the bike We have a column called ‘timeduration’ which gives the duration of each trip in seconds. First, we will convert it to minutes, then create bins to group the trips into 0–30 min, 30–60 min, 60–120 min, and 120+ min ride times. Then, let’s plot a graph to see how long riders typically ride the bike. A large number of riders ride for less than half an hour per trip, and most ride for less than an hour. Same start and end location VS different start and end location We see in the data that there are some trips that start and end at the same location. Let’s see how many. Riding pattern of the month This part is where I have spent a lot of time and effort. The graph below says a lot, and technically there is a lot of coding behind it. Before looking at the code, I will give an overview of what we are doing here. Basically, we are plotting a time-series graph to see the trend in the number of rides taken per day and the trend in the total duration the bikes were in use per day. Let’s look at the code first, then I will break it down for you. You might have understood the basic idea by reading the comments, but let me explain the process step by step: The date-time is stored as a string; we will convert it into a DateTime object. Group the data by day of the month and count the number of occurrences to plot rides per day. We have only one row with information for the month of July. This is an outlier; drop it. Repeat steps 2 and 3, but this time sum the data instead of counting it, to get the total trip duration per day. Plot both series on a single graph using the twin-axis method. I have used a lot of matplotlib tweaks; make sure to go through each of them. If you have any doubts, drop a comment on the Kaggle notebook, the link to which is at the end of this article.
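A condensed, hedged sketch of the steps above is shown here. It assumes a pandas DataFrame loaded from a CSV with the column names mentioned in the article ('timeduration', 'birth year', 'end station id', and a 'starttime' timestamp); the file name and exact headers are assumptions, so adjust them to match the actual dataset.

import pandas as pd
import matplotlib.pyplot as plt

# Assumed file name and column names, following the article; adjust to the real CSV headers.
df = pd.read_csv("citibike.csv")

# Percentage of missing values per column, to decide what to drop.
print(df.isna().mean() * 100)

# Drop the sparse 'birth year' column, then drop rows missing end-station info.
df = df.drop(columns=["birth year"])
df = df.dropna(subset=["end station id"])

# Bin trip duration (seconds -> minutes) into the ranges used in the article.
minutes = df["timeduration"] / 60
df["duration_bin"] = pd.cut(minutes,
                            bins=[0, 30, 60, 120, float("inf")],
                            labels=["0-30", "30-60", "60-120", "120+"])

# Rides per day and total riding time per day on a twin-axis plot.
day = pd.to_datetime(df["starttime"]).dt.date
rides_per_day = df.groupby(day).size()
duration_per_day = minutes.groupby(day).sum()

fig, ax1 = plt.subplots()
ax1.plot(rides_per_day.index, rides_per_day.values, label="rides per day")
ax2 = ax1.twinx()
ax2.plot(duration_per_day.index, duration_per_day.values, color="orange",
         label="total minutes per day")
plt.show()

This is only an outline of the workflow; the styling tweaks (labels, ticks, legends) described in the article are left out for brevity.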
https://medium.com/towards-artificial-intelligence/analyzing-citibike-data-eda-e657409f007a
['Sujan Shirol']
2020-10-03 05:43:38.525000+00:00
['Seaborn', 'Python Matplotlib', 'Visualization', 'Data Analytics', 'Data Science']
14 Social Media Fails for Realtors
Jena Apgar is an international speaker, digital agency CEO at Warfare Marketing, co-founder and trainer at Business Growth Network with the popular 90-Day Double My Business Challenge, and co-host of the popular videocast, DoubleFunnel.tv, which takes you through what it takes to build a 6–8 figure business, to double your business, leveraging marketing strategies, funnels and traffic tactics. Join the 4,000 readers getting a dose of my best and most controversial marketing & business content. Join here.
https://medium.com/business-growth-network/14-social-media-fails-for-realtors-8393c7a26353
['Jena Apgar']
2019-07-25 17:50:28.847000+00:00
['Facebook Marketing', 'Realtor', 'Marketing', 'Social Media', 'Social Media Marketing']
What Do 220,000,000,000 Data Points Look Like?
Update (Nov. 3, 2014): The map now contains 160 million activities and 375,000,000,000 points. Recently we released a global heatmap of 77,688,848 rides and 19,660,163 runs from the Strava dataset. Creating a visualization of that size was more of an engineering challenge than anything else. But still, the map has raised many questions about how and where people run and ride. Some of these can only be answered using the raw data, which is addressed by our Metro product. The code to generate the map is the grandchild of a heatmap I built almost two years ago. Last year the code was cleaned up and became the Personal Heatmaps feature on Strava. This time it has been refactored to handle the large dataset by reading from mmapped files stored on disk. To start out, the world is broken up into regions represented by zoom level 8 tiles. Each one of these regions has a file containing a sorted set of key/value pairs, where the key is the pixel zoom and quadkey and the value is the count of GPS points. The quadkeys ensure that all the data for a tile is stored sequentially in the file. Pixels with no GPS points are excluded, and only every 3rd zoom level is stored in the file. The values for missing zooms can be found by adding the 4 to 16 values from higher zoom levels. Skipping zoom levels saves a bit of disk space, but it also preloads into memory the region of the file needed for deeper zooms. This results in about 9000 files (6300 for rides, 4700 for runs) that are all opened as memory mapped files when the server starts. When a request for a tile comes in, the server finds the corresponding file handle and does a binary search on the keys. Since the info for the tile is stored sequentially in this file, it can do a fast read and build a 2D array of the number of GPS points in each pixel of the tile. Those values then need to be normalized to a value between 0 and 1 and colorized. The normalization is very local, taking into account the 8 neighboring tiles. For each tile, the 95th percentile of the non-zero GPS counts is computed. These values are averaged into 4 corner “heatmax” values for the current tile. The count for every pixel is divided by the bilinear interpolation of those values, capped at 1. This [0,1] value is used to color each pixel using a gradient function. You can see the local effect on roads that branch away from popular routes. This is all done on the fly (minus memcache and a CDN) and takes about 200 milliseconds per tile. Why serve on the fly? Well, the ride map has 106,991,000 unique tiles, times three colors, plus the run map and both versions, and you’ve got a lot of S3 objects. Serving on the fly saves that step and lets me update parts of the map and tweak or add colors as needed. Everything is hosted on a single c1.xlarge EC2 instance which maxes out at about 150 tiles a second. Because the files are memory mapped, the OS does all the caching. Still, the process is IO bound as users are accessing the map all over the world. Using an SSD-backed instance would solve the IO issues, but they’re way more expensive. Given that the CDN serves most of the tiles, I figured small slowdowns from IO bottlenecks are okay. They only happen when we have huge traffic, like when being featured on a Belgian national news site. There have been a lot of suggestions on different types of maps to create, but I’m not really sure what’s next for this map, or the code. I think incorporating direction of travel could look really cool, but right now I’m more interested in using the map data.
If you think about it, the heatmap is just a density distribution of GPS points. A “noisy” GPS stream could be corrected using these probabilities. The Slide Tool represents some of my initial thinking in this direction.
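The per-tile normalization described above is straightforward to sketch. The snippet below is a reconstruction in Python from the description in this post (the production code is not shown here), so the function name and the fake example data are purely illustrative.

import numpy as np

def normalize_tile(counts: np.ndarray, heatmax_corners) -> np.ndarray:
    """Normalize a tile of GPS-point counts to [0, 1].

    counts          -- 2D array of per-pixel GPS point counts for one tile
    heatmax_corners -- (top_left, top_right, bottom_left, bottom_right)
                       95th-percentile non-zero counts averaged with neighbours
    Reconstruction of the approach described in the post, not the actual code.
    """
    h, w = counts.shape
    tl, tr, bl, br = heatmax_corners

    # Bilinear interpolation of the four corner "heatmax" values across the tile.
    y = np.linspace(0.0, 1.0, h)[:, None]
    x = np.linspace(0.0, 1.0, w)[None, :]
    heatmax = (tl * (1 - x) * (1 - y) + tr * x * (1 - y)
               + bl * (1 - x) * y + br * x * y)

    # Divide each pixel by the local heatmax and cap at 1.
    return np.clip(counts / np.maximum(heatmax, 1e-9), 0.0, 1.0)

# Example: a fake 256x256 tile with corner heatmax values between 40 and 80 points.
tile = np.random.poisson(5, size=(256, 256))
normalized = normalize_tile(tile, (40, 60, 50, 80))

The key property is that the divisor varies smoothly across the tile, so a quiet street branching off a popular route still shows up instead of being washed out by a single global maximum.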
https://medium.com/strava-engineering/what-do-220-000-000-000-data-points-look-like-d267107d9aa7
['Strava Engineering']
2017-05-09 17:43:42.686000+00:00
['Maps', 'Data Science', 'Big Data']
The Beginner’s Guide to the Cloud Native Landscape
The Beginner’s Guide to the Cloud Native Landscape This blog post was written by Ayrat Khayretdinov and was originally published here on CloudOps’ blog. The cloud native landscape can be complicated and confusing. Its myriad of open source projects are supported by the constant contributions of a vibrant and expansive community. The Cloud Native Computing Foundation (CNCF) has a landscape map that shows the full extent of cloud native solutions, many of which are under their umbrella. As a CNCF ambassador, I am actively engaged in promoting community efforts and cloud native education throughout Canada. At CloudOps I lead workshops on Docker and Kubernetes that provide an introduction to cloud native technologies and help DevOps teams operate their applications. I also organize Kubernetes and Cloud Native meetups that bring in speakers from around the world and represent a variety of projects. They are run quarterly in Montreal, Ottawa, Toronto, Kitchener-Waterloo, and Quebec City. Reach out to me @archyufaor email CloudOps to learn more about becoming cloud native. In the meantime, I have written a beginners guide to the cloud native landscape. I hope that it will help you understand the landscape and give you a better sense of how to navigate it. The Cloud Native Computing Foundation The History of the CNCF In 2014 Google open sourced an internal project called Borg that they had been using to orchestrate containers. Not having a place to land the project, Google partnered with the Linux Foundation to create the Cloud Native Computing Foundation (CNCF), which would encourage the development and collaboration of Kubernetes and other cloud native solutions. Borg implementation was rewritten in Go, renamed to Kubernetes and donated as the incepting project. It became clear early on that Kubernetes was just the beginning and that a swarm of new projects would join the CNCF, extending the functionality of Kubernetes. The CNCF Mission The CNCF fosters this landscape of open source projects by helping provide end-user communities with viable options for building cloud native applications. By encouraging projects to collaborate with each other, the CNCF hopes to enable fully-fledged technology stacks comprised solely of CNCF member projects. This is one way that organizations can own their destinies in the cloud. CNCF Processes A total of twenty-five projects have followed Kubernetes and been adopted by the CNCF. In order to join, projects must be selected and then elected with a supermajority by the Technical Oversight Committee (TOC). The voting process is aided by a healthy community of TOC contributors, which are representatives from CNCF member companies, including myself. Member projects will join the Sandbox, Incubation, or Graduation phase depending on their level of code maturity. Sandbox projects are in a very early stage and require significant code maturity and community involvement before being deployed in production. They are adopted because they offer unrealized potential. The CNCF’s guidelines state that the CNCF helps encourage the public visibility of sandbox projects and facilitate their alignment with existing projects. Sandbox projects receive minimal funding and marketing support from the CNCF and are subject to review and possible removal every twelve months. Projects enters the Incubation when they meet all sandbox criteria as well as demonstrate certain growth and maturity characteristics. 
They must be in production usage by at least three companies, maintain healthy team that approves and accepts a healthy flow of contributions that include new features and code from the community. Once Incubation projects have reached a tipping point in production use, they can be voted by the TOC to have reached Graduation phase. Graduated projects have to demonstrate thriving adoption rates and meet all Incubation criteria. They must also have committers from at least two organizations, have documented and structured governance processes, and meet the Linux Foundation Core Infrastructure Initiative’s Best Practices Badge. So far, only Kubernetes and Prometheus have graduated. The Projects Themselves Below I’ve grouped projects into twelve categories: orchestration, app development, monitoring, logging, tracing, container registries, storage and databases, runtimes, service discovery, service meshes, service proxy, security, and streaming and messaging. I’ve provided information that can hopefully help companies or individuals evaluate what each project does, how it’s evolved over time, and how it integrates with other CNCF projects. Orchestrations Kubernetes Kubernetes (graduated) — Kubernetes automates the deployment, scaling, and management of containerised applications, emphasising automation and declarative configuration. It means helmsman in ancient Greek. Kubernetes orchestrates containers, which are packages of portable and modular microservices. Kubernetes adds a layer of abstraction, grouping containers into pods. Kubernetes helps engineers schedule workloads and allows containers to be deployed at scale over multi-cloud environments. Having graduated, Kubernetes has reached a critical mass of adoption. In a recent CNCF survey, over 40% of respondents from enterprise companies are running Kubernetes in production. App Development Helm Helm (Incubating) — Helm is an application package manager that allows users to find, share, install, and upgrade Kubernetes applications (aka charts) with ease. It helps end users deploy existing applications (including MySQL, Jenkins, Artifactory and etc.) using KubeApps Hub, which display charts from stable and incubator repositories maintained by the Kubernetes community. With Helm you can install all other CNCF projects that run on top of Kubernetes. Helm can also let organizations create and then deploy custom applications or microservices to Kubernetes. This involves creating YAML manifests with numerical values not suitable for deployment in different environments or CI/CD pipelines. Helm creates single charts that can be versioned based on application or configuration changes, deployed in various environments, and shared across organizations. Helm originated at Deis from an attempt to create a ‘homebrew’ experience for Kubernetes users. Helm V2 consisted of the client-side of what is currently the Helm Project. The server-side ‘tiller’, or Helm V2, was added by Deis in collaboration with Google at around the same time that Kubernetes 1.2 was released. This was how Helm became the standard way of deploying applications on top of Kubernetes. Helm is currently making a series of changes and updates in preparation for the release of Helm V3, which is expected to happen by the end of the year. Companies that rely on Helm for their daily CI/CD development, including Reddit, Ubisoft, and Nike, have suggested improvements for the redesign. 
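Kubernetes itself, and everything Helm installs into it, is driven entirely through an API, which makes it easy to script against. As a small, hedged illustration, the sketch below uses the official Python client to list the pods in a cluster; it assumes the kubernetes package is installed and a valid kubeconfig is available, neither of which is described in this article.

# Hedged sketch: list pods with the official Kubernetes Python client.
# Assumes `pip install kubernetes` and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}  phase={pod.status.phase}")

This same API is what Helm, Operators, and most of the projects below build on; they differ mainly in how much lifecycle logic they layer on top of it.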
Telepresence Telepresence (Sandbox) — It can be challenging to develop containerized applications on Kubernetes. Popular tools for local development include Docker Compose and Minikube. Unfortunately, most cloud native applications today are resource intensive and involve multiple databases, services, and dependencies. Moreover, it can be complicated to mimic cloud dependencies, such as messaging systems and databases, in Compose and Minikube. An alternative approach is to use fully remote Kubernetes clusters, but this precludes you from developing with your local tools (e.g., IDE, debugger) and creates slow developer “inner loops” that make developers wait for CI to test changes. Telepresence, which was developed by Datawire, offers the best of both worlds. It allows the developer to ‘live code’ by running single microservices locally for development purposes while remaining connected to remote Kubernetes clusters that run the rest of their application. Telepresence deploys pods that contain two-way network proxies on remote Kubernetes clusters, connecting local machines to those proxies. Telepresence thus provides realistic development and test environments without giving up local tools for coding, debugging, and editing. Monitoring Prometheus Prometheus (Graduated) — Following in the footsteps of Kubernetes, Prometheus was the second project to join the CNCF and the second (and so far last) project to have graduated. It’s a monitoring solution that is suitable for dynamic cloud and container environments. It was inspired by Google’s monitoring system, Borgmon. Prometheus is a pull-based system — its configurations decide when and what to scrape. This is unlike other monitoring systems that use a push-based approach, where a monitoring agent running on each node pushes its metrics out. Prometheus stores scraped metrics in a TSDB (time-series database). Prometheus allows you to create meaningful graphs inside the Grafana dashboard with its powerful query language, PromQL. You can also generate and send alerts to various destinations, such as Slack and email, using the built-in Alertmanager. Hugely successful, Prometheus has become the de facto standard in cloud native metric monitoring. With Prometheus one can monitor VMs, Kubernetes clusters, and microservices being run anywhere, especially in dynamic systems like Kubernetes. Prometheus’ metrics also automate scaling decisions by leveraging Kubernetes’ features including HPA, VPA, and Cluster Autoscaling. Prometheus can monitor other CNCF projects such as Rook, Vitess, Envoy, Linkerd, CoreDNS, Fluentd, and NATS. Prometheus’ exporters integrate with many other applications and distributed systems. Use Prometheus’ official Helm Chart to start. OpenMetrics OpenMetrics (Sandbox) — OpenMetrics creates neutral standards for an application’s metric exposition format. Its modern metric standard enables users to transmit metrics at scale. OpenMetrics is based on the popular Prometheus exposition format, which has over 300 existing exporters and is based on operational experience from Borgmon. Borgmon enables ‘white-box monitoring’ and mass data collection with low overheads. The monitoring landscape before OpenMetrics was largely based on outdated standards and techniques (such as SNMP) that use proprietary formats and place minimal focus on metrics. OpenMetrics builds on the Prometheus exposition format, but has a tighter, cleaner, and more enhanced syntax.
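The exposition format that Prometheus scrapes, and that OpenMetrics refines, is easy to produce from application code. Here is a minimal, hedged sketch using the official Python client library; the metric names and the simulated work are illustrative only, and it assumes the prometheus-client package is installed.

# Hedged sketch: expose metrics for Prometheus (or OpenMetrics) to scrape.
# Assumes `pip install prometheus-client`; metric names are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("myapp_requests_total", "Total requests handled")
LATENCY = Histogram("myapp_request_seconds", "Request latency in seconds")

if __name__ == "__main__":
    start_http_server(8000)  # metrics are served at http://localhost:8000/metrics
    while True:
        with LATENCY.time():
            time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
        REQUESTS.inc()

A Prometheus server configured to scrape port 8000 will then pull these values on its own schedule, which is the pull-based model described above.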
While OpenMetrics is only in the Sandbox phase, it is already being used in production by companies including AppOptics, Cortex, Datadog, Google, InfluxData, OpenCensus, Prometheus, Sysdig, and Uber. Cortex Cortex (Sandbox) — Operational simplicity has always been a primary design objective of Prometheus. Consequently, Prometheus itself can only be run without clustering (as a single node or container) and can only use local storage that is not designed to be durable or long-term. Clustering and distributed storage come with additional operational complexity that Prometheus chose to forgo in favour of simplicity. Cortex is a horizontally scalable, multi-tenant, long-term storage solution that can complement Prometheus. It allows large enterprises to use Prometheus while maintaining access to HA (High Availability) and long-term storage. There are currently other projects in this space that are gaining community interest, such as Thanos, Timbala, and M3DB. However, Cortex has already been battle-tested as a SaaS offering at both Grafana Labs and Weaveworks and is also deployed on-prem by both EA and StorageOS. Logging and Tracing Fluentd Fluentd (Incubator) — Fluentd collects, interprets, and transmits application logging data. It unifies data collection and consumption so you can better use and understand your data. Fluentd structures data as JSON and brings together the collecting, filtering, buffering, and outputting of logs across multiple sources and destinations. Fluentd can collect logs from VMs and traditional applications; however, it really shines in cloud native environments that run microservices on top of Kubernetes, where applications are created in a dynamic fashion. Fluentd runs in Kubernetes as a DaemonSet (a workload that runs on each node). It collects logs from all applications being run as containers (including CNCF ones) that emit logs to STDOUT. Fluentd also parses and buffers incoming log entries and sends formatted logs to configured destinations, such as Elasticsearch, Hadoop, and Mongo, for further processing. Fluentd was initially written in Ruby and takes over 50 MB of memory at runtime, making it unsuitable for running alongside containers in sidecar patterns. Fluent Bit is being developed alongside Fluentd as a solution. Fluent Bit is written in C and only uses a few KB of memory at runtime. Fluent Bit is more efficient in CPU and memory usage, but has more limited features than Fluentd. Fluentd was originally developed by Treasure Data as an open source project. Fluentd is available as a Kubernetes plugin and can be deployed as version 0.12, an older and more stable version that is currently widely deployed in production. The new version (Version 1.X) was recently developed and has many improvements, including new plugin APIs, nanosecond resolution, and Windows support. Fluentd is becoming the standard for log collection in the cloud native space and is a solid candidate for CNCF Graduation. OpenTracing OpenTracing (Incubator) — Do not underestimate the importance of distributed tracing for building microservices at scale. Developers must be able to view each transaction and understand the behaviour of their microservices. However, distributed tracing can be challenging because the instrumentation must propagate the tracing context both within and between the processes that exist throughout services, packages, and application-specific code.
OpenTracing allows developers of application code, OSS packages, and OSS services to instrument their own code without locking into any particular tracing vendor. OpenTracing provides a distributed tracing standard for applications and OSS packages, with vendor-neutral APIs and libraries available in nine languages. These enforce distributed tracing, making OpenTracing ideal for service meshes and distributed systems. OpenTracing itself is not a tracing system that runs traces to analyze spans from within a UI. It is an API that works with application business logic, frameworks, and existing instrumentation to create, propagate, and tag spans. It integrates with both open source (e.g. Jaeger, Zipkin) and commercial (e.g. Instana, Datadog) tracing solutions, and creates traces that are either stored in a backend or displayed in a UI. Click here to try a tutorial or start instrumenting your own system with Jaeger, a compatible tracing solution. Jaeger Jaeger (Incubator) — Jaeger is a distributed tracing solution that is compatible with OpenTracing and was originally developed and battle tested by Uber. Its name is pronounced yā′gər and means hunter. It was inspired by Dapper, Google’s internal tracing system, and by Zipkin, an alternative open source tracing system written by Twitter, but Jaeger was built with the OpenTracing standard in mind. Zipkin has limited OpenTracing integration support, but Jaeger does provide backwards-compatibility with Zipkin by accepting spans in Zipkin formats over HTTP. Jaeger’s use cases cover monitoring and troubleshooting microservices-based distributed systems, providing distributed context propagation, distributed transaction monitoring, root cause analysis, service dependency analysis, and performance and latency optimization. Jaeger’s data model and instrumentation libraries are compatible with OpenTracing. Its modern web UI is built with React/JavaScript, and it supports multiple storage backends, including Cassandra, Elasticsearch, and in-memory storage. Jaeger integrates with service meshes including Istio and Linkerd, making tracing instrumentation much easier. Jaeger is itself observable because it exposes Prometheus metrics by default and integrates with Fluentd for log shipping. Start deploying Jaeger to Kubernetes using a Helm chart or the recently developed Jaeger Operator. Most contributions to the Jaeger codebase come from Uber and Red Hat, but there are hundreds of companies adopting Jaeger for cloud native, microservices-based, distributed tracing. Container Registries Harbor Harbor (Sandbox) — Harbor is an open source trusted container registry that stores, signs, and scans Docker images. It provides free-of-charge, enhanced Docker registry features and capabilities. These include a web interface with role-based access control (RBAC) and LDAP support. It integrates with Clair, an open source project developed by CoreOS, for vulnerability scanning and with Notary, a CNCF Incubation project described below, for content trust. Harbor provides activity auditing and Helm chart management, and it replicates images from one Harbor instance to another for HA and DR. Harbor was originally developed by VMware as an open source solution. It is now being used by companies of many sizes, including TrendMicro, Rancher, Pivotal, and AXA. Storage and Databases Rook Rook (Incubator) — Rook is an open source cloud native storage orchestrator for Kubernetes. With Rook, ops teams can run Software Distributed Systems (SDS) (such as Ceph) on top of Kubernetes.
Developers can then use that storage to dynamically create Persistent Volumes (PVs) in Kubernetes to deploy applications, such as Jenkins, WordPress and any other app that requires state. Ceph is a popular open-source SDS that can provide many popular types of storage, such as object, block and file system storage, and runs on top of commodity hardware. While it is possible to run Ceph clusters outside of Kubernetes and connect them to Kubernetes using the CSI plugin, deploying and then operating Ceph clusters on hardware is a challenging task, reducing the popularity of the system. Rook deploys and integrates Ceph inside Kubernetes as a first class object using Custom Resource Definitions (CRDs) and turns it into a self-managing, self-scaling, and self-healing storage service using the Operator Framework. The goal of Operators in Kubernetes is to encode human operational knowledge into software that is more easily packaged and shared with end users. In comparison to Helm, which focuses on packaging and deploying Kubernetes applications, an Operator can deploy and manage the life cycle of a complex application. In the case of Ceph, the Rook Operator automates storage administrator tasks, such as deployment, bootstrapping, configuration, provisioning, horizontal scaling, healing, upgrading, backups, disaster recovery and monitoring. Initially, the Rook Operator’s implementation supported Ceph only. As of version 0.8, Ceph support has been moved to Beta. Project Rook later announced the Rook Framework for storage providers, which extends Rook into a general purpose cloud native storage orchestrator that supports multiple storage solutions with reusable specs, logic, policies and testing. Currently Rook supports CockroachDB, Minio, and NFS, all in alpha, with Cassandra, Nexenta, and Alluxio planned for the future. The list of companies using the Rook Operator with Ceph in production is growing, especially among companies deploying on-prem, amongst them CENGN, Gini, and RPR, with many more in the evaluation stage. Vitess Vitess (Incubator) — Vitess is a middleware for databases. It employs generalized sharding to distribute data across MySQL instances. It scales horizontally and can scale indefinitely without affecting your application. When your shards reach full capacity, Vitess will reshard your underlying database with zero downtime and good observability. Vitess solves many problems associated with transactional data, which is continuing to grow. TiKV TiKV (Sandbox) — TiKV is a transactional key-value database that offers simplified scheduling and auto-balancing. It acts as a distributed storage layer that supports strong data consistency, distributed transactions, and horizontal scalability. TiKV was inspired by the design of Google Spanner and HBase, but has the advantage of not depending on a distributed file system. TiKV was developed by PingCAP and currently has contributors from Samsung, Tencent Cloud, and UCloud. Runtimes RKT RKT (Incubator) — RKT (read as Rocket) is an application container runtime that was originally developed at CoreOS. Back when Docker was the default runtime for Kubernetes and was baked into kubelet, the Kubernetes and Docker communities had challenges working with each other. Docker Inc., the company behind the development of Docker as an open source software, had its own roadmap and was adding complexity to Docker. For example, they were adding swarm-mode or changing filesystem from AUFS to overlay2 without providing notice.
These changes were generally not well coordinated with the Kubernetes community and complicated roadmap planning and release dates. At the end of the day, Kubernetes users need a simple runtime that can start and stop containers and provide functionalities for scaling, upgrading, and uptimes. With RKT, CoreOS intended to create an alternative runtime to Docker that was purposely built to run with Kubernetes. This eventually led to the SIG-Node team of Kubernetes developing a Container Runtime Interface (CRI) for Kubernetes that can connect any type of container and remove Docker code from its core. RKT can consume both OCI Images and Docker format Images. While RKT had a positive impact on the Kubernetes ecosystem, this project was never adopted by end users, specifically by developers who are used to docker cli and don’t want to learn alternatives for packaging applications. Additionally, due to the popularity of Kubernetes, there are a sprawl of container solutions competing for this niche. Projects like gvisor and cri-o (based on OCI) are gaining popularity these days while RKT is losing its position. This makes RKT a potential candidate for removal from the CNCF Incubator. Containerd Containerd (Incubator) — Containerd is a container runtime that emphasises simplicity, robustness and portability. In contrast to RKT, Containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users. Similar to RKT containerd can consume both OCI and Docker Image formats. Containerd was donated to the CNCF by the Docker project. Back in the days, Docker’s platform was a monolithic application. However, with time, it became a complex system due to the addition of features, such as swarm mode. The growing complexity made Docker increasingly hard to manage, and its complex features were redundant if you were using docker with systems like Kubernetes that required simplicity. As a result, Kubernetes started looking for alternative runtimes, such as RKT, to replace docker as the default container runtime. Docker project then decided to break itself up into loosely coupled components and adopt a more modular architecture. This was formerly known as Moby Project, where containerd was used as the core runtime functionality. Since Moby Project, Containerd was later integrated to Kubernetes via a CRI interface known as cri-containerd. However cri-containerd is not required anymore because containerd comes with a built-in CRI plugin that is enabled by default starting from Kubernetes 1.10 and can avoid any extra grpc hop. While containerd has its place in the Kubernetes ecosystem, projects like cri-o (based on OCI) and gvisor are gaining popularity these days and containerd is losing its community interest. However, it is still an integral part of the Docker Platform. Service Discovery CoreDNS CoreDNS (Incubator) — CoreDNS is a DNS server that provides service discovery in cloud native deployments. CoreDNS is a default Cluster DNS in Kubernetes starting from its version 1.12 release. Prior to that, Kubernetes used SkyDNS, which was itself a fork of Caddy and later KubeDNS. SkyDNS — a dynamic DNS-based service discovery solution — had an inflexible architecture that made it difficult to add new functionalities or extensions. Kubernetes later used KubeDNS, which was running as 3 containers (kube-dns, dnsmasq, sidecar), was prone to dnsmasq vulnerabilities, and had similar issues extending the DNS system with new functionalities. 
On the other hand, CoreDNS was re-written in Go from scratch and is a flexible plugin-based, extensible DNS solution. It runs inside Kubernetes as one container vs. KubeDNS, which runs with three. It has no issues with vulnerabilities and can update its configuration dynamically using ConfigMaps. Additionally, CoreDNS fixed a lot of KubeDNS issues that it had introduced due to its rigid design (e.g. Verified Pod Records). CoreDNS’ architecture allows you to add or remove functionalities using plugins. Currently, CoreDNS has over thirty plugins and over twenty external plugins. By chaining plugins, you can enable monitoring with Prometheus, tracing with Jaeger, logging with Fluentd, configuration with K8s’ API or etcd, as well as enable advanced dns features and integrations. Service Meshes Linkerd Linkerd (Incubator) — Linkerd is an open source network proxy designed to be deployed as a service mesh, which is a dedicated layer for managing, controlling, and monitoring service-to-service communication within an application. Linkerd helps developers run microservices at scale by improving an application’s fault tolerance via the programmable configuration of circuit braking, rate limiting, timeouts and retries without application code change. It also provides visibility into microservices via distributed tracing with Zipkin. Finally, it provides advanced traffic control instrumentation to enable Canaries, Staging, Blue-green deployments. SecOps teams will appreciate the capability of Linkerd to transparently encrypt all cross-node communication in a Kubernetes cluster via TLS. Linkerd is built on top of Twitter’s Finagle project, which has extensive production usage and attracts the interest of many companies exploring Service Meshes. Today Linkerd can be used with Kubernetes, DC/OS and AWS/ECS. The Linkerd service mesh is deployed on Kubernetes as a DaemonSet, meaning it is running one Linkerd pod on each node of the cluster. Recent changes in the service mesh ecosystem (i.e. the introduction of the Istio project which closely integrates with Kubernetes and uses the lightweight proxy Envoy as a sidecar to run side by side with each microservice) can provide more capabilities than Linkerd and have considerably slowed down its popularity. Some are even questioning the existence of Linkerd. To regain community interest and support a large base of existing customers, Buoyant (the company behind Linkerd) announced project Conduit with the idea of allowing DaemonSetts to use the sidecar approached used by Istio and rewriting dataplane in Rust and Control plane in Go. This enables many possible features that can use the sidecar approach. Not so long ago project Conduit was renamed Linkerd 2.0 and recently announced GA, signaling its readiness for production use. Service Meshes continue to evolve at a fast pace and projects like Istio and Linkerd2 will be at its core. Service Proxies Envoy Envoy (Incubator) — Envoy is a modern edge and service proxy designed for cloud native applications. It is a vendor agnostic, high performance, lightweight (written in C++) production grade proxy that was developed and battle tested at Lyft. Envoy is now a CNCF incubating project. Envoy provides fault tolerance capabilities for microservices (timeouts, security, retries, circuit breaking) without having to change any lines of existing application code. It provides automatic visibility into what’s happening between microservice via integration with Prometheus, Fluentd, Jaeger and Kiali. 
Envoy can be also used as an edge proxy (e.g. L7 Ingress Controller for Kubernetes) due to its capabilities performing traffic routing and splitting as well as zone-aware load balancing with failovers. While the service proxy landscape already has many options, Envoy is a great addition that has sparked a lot of interest and revolutionary ideas around service meshes and modern load-balancing. Heptio announced project Contour, an Ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. Contour supports dynamic configuration updates and multi-team Kubernetes clusters with the ability to limit the Namespaces that may configure virtual hosts and TLS credentials as well as provide advanced load balancing strategies. Another project that uses Envoy at its core is Datawires Ambassador — a powerful Kubernetes-native API Gateway. Since Envoy was written in C++, it is a super lightweight and perfect candidate to run in a sidecar pattern inside Kubernetes and, in combination with its API-driven config update style, has become a perfect candidate for service mesh dataplanes. First, the service mesh Istio announced Envoy to be the default service proxy for its dataplane, where envoy proxies are deployed alongside each instance inside Kubernetes using a sidecar pattern. It creates a transparent service mesh that is controlled and configured by Istio’s Control Plane. This approach compares to the DaemonSet pattern used in Linkerd v1 that provides visibility to each service as well as the ability to create a secure TLS for each service inside Kubernetes or even Hybrid Cloud scenarios. Recently Hashicorp announced that its open source project Consul Connect will use Envoy to establish secure TLS connections between microservices. Today Envoy has large and active open source community that is not driven by any vendor or commercial project behind it. If you want to start using Envoy, try Istio, Ambassador or Contour or join the Envoy community at Kubecon (Seattle, WA) on December 10th 2018 for the very first EnvoyCon. Security Falco Falco (Sandbox) — Falco is an open source runtime security tool developed by Sysdig. It was designed to detect anomalous activity and intrusions in Kubernetes-orchestrated systems. Falco is more an auditing tool than an enforcement tool (such as SecComp or AppArmor). It is run in user space with the help of a Sysdig kernel module that retrieves system calls. Falco is run inside Kubernetes as a DaemonSet with a preconfigured set of rules that define the behaviours and events to watch out for. Based on those rules, Falco detects and adds alerts to any behaviour that makes Linux system calls (such as shell runs inside containers or binaries making outbound network connections). These events can be captured at STDERR via Fluentd and then sent to ElasticSearch for filtering or Slack. This can help organizations quickly respond to security incidents, such as container exploits and breaches and minimize the financial penalties posed by such incidents. With the addition of Falco to the CNCF sandbox, we hope that there will be closer integrations with other CNCF projects in the future. To start using Falco, find an official Helm Chart. Spiffe Spiffe (Sandbox) — Spiffe provides a secure production identity framework. It enables communication between workloads by verifying identities. It’s policy-driven, API-driven, and can be entirely automated. 
It’s a cloud native solution to the complex problem of establishing trust between workloads, which becomes difficult and even dangerous as workloads scale elastically and become dynamically scheduled. Spiffe is a relatively new project, but it was designed to integrate closely with Spire. Spire Spire (Sandbox) — Spire is Spiffe’s runtime environment. It’s a set of software components that can be integrated into cloud providers and middleware layers. Spire has a modular architecture that supports a wide variety of platforms. In particular, the communities around Spiffe and Spire are growing very quickly. HashiCorp just announced support for Spiffe IDs in Vault, so it can be used for key material and rotation. Spiffe and Spire are both currently in the sandbox. Tuf Tuf (Incubator) — Tuf is short for ‘The Update Framework’. It is a framework that is used for trusted content distribution. Tuf helps solve content trust, which can be a major security problem. It helps validate the provenance of software and verify that only the latest version is being used. The TUF project plays many important roles within the Notary project, which is described below. It is also used in production by many companies, including Docker, DigitalOcean, Flynn, Cloudflare, and VMware, to build their internal tooling and products. Notary Notary (Incubator) — Notary is a secure software distribution implementation. In essence, Notary is based on TUF and ensures that all pulled Docker images are the signed, correct and untampered versions of an image. This can be done throughout all stages of a CI/CD workflow, solving one of the major security concerns for Docker-based deployments in Kubernetes systems. Notary publishes and manages trusted collections of content. It allows DevOps engineers to approve trusted data that has been published and create signed collections. This is similar to the software repository management tools present in modern Linux systems, but for Docker images. Some of Notary’s goals include guaranteeing image freshness (always having up-to-date content so vulnerabilities are avoided), trust delegation between users, and trusted distribution over untrusted mirrors or transport channels. While Tuf and Notary are generally not used by end users, their solutions integrate into various commercial products or open source projects for content signing or image signing of trusted distributions, such as Harbor, Docker Enterprise Registry, Quay Enterprise, and Aqua. Another interesting open-source project in this space is Grafeas, an open source metadata API that can be used to store “attestations” or image signatures, which can then be checked as part of admission control; it is used in products such as the Container Analysis API and binary authorization at GCP, as well as in products from JFrog and AquaSec. OPA Open Policy Agent (Sandbox) — By requiring policies to be specified declaratively, Open Policy Agent (OPA) allows different kinds of policies to be distributed across a technology stack and have updates enforced automatically without being recompiled or redeployed. Living at the application and platform layers, OPA runs by sending queries from services to inform policy decisions. It integrates well with Docker, Kubernetes, Istio, and many more. Streaming and Messaging NATS NATS (Incubator) — NATS is a messaging service that focuses on middleware, allowing infrastructures to send and receive messages between distributed systems.
Its clustering and auto-healing technologies are HA, and its log-based streaming has guaranteed delivery for replaying historical data and receiving all messages. NATS has a relatively straightforward API and supports a diversity of technical use cases, including messaging in the cloud (general messaging, microservices transport, control planes, and service discovery), and IoT messaging. Unlike the solutions for logging, monitoring, and tracing listed above, NATS works at the application layer. gRPC gRPC (Incubator) — A high-performance RPC framework, gRPC allows communication between libraries, clients and servers in multiple platforms. It can run in any environment and provide support for proxies, such as Envoy and Nginx. gRPC efficiently connects services with pluggable support for load balancing, tracing, health checking, and authentication. Connecting devices, applications, and browsers with back-end services, gRPC is an application level tool that facilitates messaging. CloudEvents CloudEvents (Sandbox) — CloudEvents provides developers with a common way to describe events that happen across multi-cloud environments. By providing a specification for describing event data, CloudEvents simplifies event declaration and delivery across services and platforms. Still in Sandbox phase, CloudEvents should greatly increases the portability and productivity of an application. What’s Next? The cloud native ecosystem is continuing to grow at a fast pace. More projects will be adopted into the Sandbox in the close future, giving them chances of gaining community interest and awareness. That said, we hope that infrastructure-related projects like Vitess, NATs, and Rook will continuously get attention and support from CNCF as they will be important enablers of Cloud Native deployments on Prem. Another area that we hope the CNCF will continue to place focus on is Cloud Native Continuous Delivery where there is currently a gap in the ecosystem. While the CNCF accepts and graduates new projects, it is also important to have a working mechanism of removal of projects that have lost community interest because they cease to have value or are replaced other, more relevant projects. While project submission process is open to anybody, I hope that the TOC committee will continue to only sponsor the best candidates, making the CNCF a diverse ecosystem of projects that work well with each other. As a CNCF ambassador, I hope to teach people how to use these technologies. At CloudOps I lead workshops on Docker and Kubernetes that provide an introduction to cloud native technologies and help DevOps teams operate their applications. I also organize Kubernetes and Cloud Native meetups that bring in speakers from around the world and represent a variety of projects. They are run quarterly in Montreal, Ottawa, Toronto, Kitchener-Waterloo, and Quebec City. I would also encourage people to join the Ambassador team at CloudNativeCon North America 2018 on December 10th. The original blog post can be read here. Sign up for CloudOps’ monthly newsletter to stay up to date with the latest DevOps and cloud native developments. Ayrat Khayretdinov Ayrat Khayretdinov is a Solutions Architect at CloudOps and a Kubernetes evangelist dedicated to driving community growth. He is both a CNCF ambassador and a member of CNCF Technical Oversight Committee (TOC). Ayrat is passionate about promoting community efforts for the cloud native ecosystem. Photo by Maximilian Weisbecker
https://medium.com/cloudops/this-blog-post-was-written-by-ayrat-khayretdinov-and-was-originally-published-on-cloudops-blog-ef91c4e884ce
[]
2020-10-16 18:45:17.458000+00:00
['Cncf', 'Kubernete', 'Cloud Native', 'Cloud Native Ecosystem', 'Cloud Computing']
A Holiday Prayer From An Atheist
I think I am an atheist, But… Every day I pray there is a God. Not a He. Not a She. God is all of us, together. I pray. It’s probably not what you think. That all people are made in God’s Image. Not that. But that stars are we, and sea-stars are We, and God is the Creation, creating We. Hydrogen and deuterium protons spark into helium And the sounds of the sun, Somehow become silent photosynthesis, That licks the morning dew off of all that becomes you. Grass heads become your cereal. Apple and orange trees juice our thirst. Birds and bees babble and bumble, Making the sweet honey music of daylight. Dawn. Making you. But you are late for work, so you don’t notice. Whitman saw the beauty in this cacophony, but you ignore it in disgust. Work stress drags ragged nails across a blackboard of sky. The belch and noxious grate of engines chant constant percussion on your headache. At the office, you expect, John and Maria steadily grind out petty grumbles. You saw that article about drowning, newborn penguins. You know how latest famines harvest the brittle bones of little children. You read the Stanford report that our execrable extinction efforts are accelerating ten times faster than the Cretaceous dinosaur die offs. You’re aware that someone we think grips power tweets gripes. You heard the searing news about drowned cities, or quiet California ash. You cannot hear the birdsong, the bees whirring a world together, Still. I think you should look up. Glory in God for one reverberating breath. Because it doesn’t matter whose God can beat up whose God. It matters that every humming moment Can exalt that we are One Earth, if we remember… We belong. We belong. We belong.
https://medium.com/thrive-global/a-holiday-prayer-from-an-atheist-1e5bfcdfa12d
['Christyl Rivers']
2018-11-26 15:06:16.017000+00:00
['Purpose', 'Nature', 'Philosophy', 'Poetry', 'Science']
A Black Hole in Our Solar System
There are only gentle signs at first. The outer planets would careen off their usual orbits and on Earth we’d notice the night sky is threaded with the metallic glint of more and more comets as the days go on. The comets have been torn out of their silent beds in the Oort cloud, sent towards the belly of the Solar System where they might strike any planet and rend gaping craters on the rocky surfaces. A black hole moving fast enough will be unaffected by the sun’s gravity and will leave without much more damage than that. But if the sun — that massive gold jewel that makes up the majority of our Solar System’s mass — attracts the black hole, a devastating story begins to unfold. Lunar and planetary orbits deform into wild, tangled paths with the approach of the black hole. Gravitational effects change the landscape all around us with the Earth quite literally parting in extreme earthquakes and the molten fire of volcanic eruptions. Seasons become extreme as our path around the sun grows closer or further away. The tides shift; we are bound to our home planet which could be sent into the sun to burn. Otherwise the Earth might be pushed into the empty, exotic darkness of the cosmos where our fate will be a stony and suffocating freeze without the heat of our star. Jupiter — larger than the black hole but less massive — will have its thick, brass-colored clouds sucked away and made swirling into a hot and luminous disk. The black hole continues onto the asteroid belt. Were life still around, we’d now be able to see the nearing monster and the light bending around it. The black hole is drenched in x-ray radiation from the disk of Jupiter’s gas. If our planet hasn’t streamed either into the sun or into space, we will be consumed by the black hole, unable to see its final meeting with the sun. Our star’s gas and light are stolen into the event horizon and this marks the final wound and death of our Solar System. A simulation shows the gravitational lensing effects of a black hole passing in front of a galaxy and distorting its light. Image by Urbane Legend. While black holes can traverse the galaxy and disrupt any star systems in their path, the chances of it happening are fortunately small. This is true especially towards the hem of the galaxy where we reside and where the number of black holes greatly diminishes. A nightmarish scenario like the one above is sensational by nature but unlikely. Instead, the black hole which may exist in our Solar System takes on a very different shape. Data from half a decade of OGLE (Optical Gravitational Lensing Experiment) revealed a large number of lensing events. During these events objects are magnified by large masses in front of them, providing astronomers with a better view of the more distant objects behind. The survey attempts to detect changes in brightness from distant stars and galaxies. It revealed a strange population of small, nearby lenses in the Milky Way with a mass of .5 to 20 times that of the Earth. This aligns perfectly with predictions of Planet Nine. New research suggests Planet Nine and all its disruptions in our Solar System might not be a planet at all, but something much rarer. The search for the elusive world known as Planet Nine is based primarily on the behavior of a dozen Trans-Neptunian bodies (objects swirling in the outer reaches of the Solar System, past Neptune). The objects have clustered orbits with similar tilts and at their closest to the sun, they pass by the same area. 
These orbits are clustered, in theory, due to the gravitational influence of a ninth as-yet-undiscovered planet. The probability that the behavior of the TNOs is pure chance is a paltry 0.007%. Neither can Neptune be the answer since the bodies aren’t close enough to the dense, blue giant. The hypothetical orbit of Planet Nine is seen here with the present day orbit of TNOs. Image by R. Hurt/JPL-Caltech. There are three ways in which Planet Nine could have entered our Solar System: it could have formed as-is, in its current orbit, or it could have formed within the inner Solar System before being ostracized to its current position as much as 120 billion km (75 billion miles) from the sun. At this distance from any starlight, it becomes difficult to see. Both of these are unlikely. The most favorable scenario involves Planet Nine forming outside the Solar System as a free-floating planet before being drawn in by the gravity of the sun. There is speculation, however, that what the sun captured was a rare kind of black hole. The chances of the sun having ensnared a free-floating planet are roughly the same as it ensnaring a primordial black hole. The gift of the primordial black holes is that not only would it explain the clustering of the TNOs, but it would also explain the lensing events of the OGLE survey. Primordial black holes are still only hypothetical, but are believed to have formed less than a second after the brazen eruption of the Big Bang. Quantum fluctuations during this early and tentative time would have caused matter to distribute unevenly with the more densely packed regions collapsing and giving way to the first ancient population of black holes. They would have ornamented the universe long before stars, long before any sparkling galaxy. To this day they would remain compact and countless, but difficult to find due to their distance and lack of light. A black hole with a mass of Planet Nine (about five times that of Earth) would measure only 9 cm (3.5 inches) across. This is thanks to the black hole’s great density, allowing it to have visible, gravitational effects even at the size of a baseball. Calculations and computer simulations can only, after all, tell us the mass and not the composition of any object responsible for the TNO anomalies. The figure above is to scale for the proposed primordial black hole. Image by Jakub Scholtz and James Unwin. To find this primordial black hole is possible but, requires a very different route than the one we’ve so far taken. Searching for a planet involves infrared and microwave surveys; searching for a black hole is a little more experimental. As dark matter and its antimatter pair met and annihilated around the black hole they would form a distinctive halo lit by flashes of gamma radiation. The halo would stretch and saunter a billion kilometers (621 million miles) out in every direction. This is about the distance between Earth and Saturn. These gamma rays might be picked up by current technology like the Fermi Gamma Ray Space Telescope, though any moving sources in x-rays and high energy cosmic rays should also be considered. There are even more exotic theories of what may be stringing the TNOs along in their peculiar paths. What lies at the edges of our Solar System could be something as strange as a Bose star — cool, collected orbs of dark matter that would provide a way to better understand our galaxy and every other. 
Whatever may be the answer to this mystery — a bizarre star or an undiscovered super-Earth — will be something new and something revolutionary. In this way the story of Planet Nine unfolds like the tale of an approaching black hole, its sweeping presence giving us a chance to be part of something tremendous.
https://medium.com/predict/a-black-hole-in-our-solar-system-cdfd5f699f4c
['Ella Alderson']
2019-11-07 02:00:29.151000+00:00
['Science', 'Space', 'Physics', 'Cosmos', 'Universe']
The Problem With Mutable Types As Default Values in Functions in Python
Mutable and Immutable Objects For those who don’t know, I will try to explain the concept of mutable and immutable objects in very simple terms. Mutable objects are those which can be changed after creation. In Python, lists and dictionaries are examples of mutable objects. In the code snippet above, we created a list and changed the item at index 2. It ran without any problems because lists are mutable in Python. Immutable objects are those which cannot be changed after creation. In Python, strings and tuples are examples of immutable objects. In the code snippet above, we created a string and tried to change the item at index 1. We got an error because strings are immutable. Now that we understand how to write functions with default values and the concept of mutable objects, we can look at the problem with mutable types as default values. For this demonstration, I will write a simple function that takes two arguments, appends the first argument to the second, and prints the second argument. Now we will test this function. As we can see from the code snippet above, when we provide a value for argument b, the function works fine. We start to see problems when we don’t provide a value for argument b, because in later function calls the default value is no longer an empty list. The problem is that each default value is evaluated when the function is defined — usually when the module is loaded — and the default values become attributes of the function object. So if a default value is a mutable object and you change it, the change will affect every future call of the function. We can see the default values by checking the __defaults__ attribute of the function. So, how can we avoid this problem? There is a very simple fix: whenever you need a mutable type as the default value for an argument, use None as the default instead and create the mutable object inside the function body. Now we can see that our function works properly. I hope this post improves your understanding of functions.
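The snippets referenced above were embedded in the original post and are not reproduced here, so the following is a minimal sketch of the behaviour being described; the function names append_to and append_to_fixed are illustrative, not taken from the original.

```python
def append_to(item, target=[]):
    """Append item to target and print target.

    The default list is created once, when the function is defined,
    and is then shared by every call that omits the argument.
    """
    target.append(item)
    print(target)


append_to(1, [10, 20])          # [10, 20, 1] -- explicit argument works fine
append_to(1)                    # [1]
append_to(2)                    # [1, 2]  <- the default list remembered the previous call
print(append_to.__defaults__)   # ([1, 2],) -- the mutated default lives on the function object


def append_to_fixed(item, target=None):
    """Same function, with None as a sentinel for 'no list was passed'."""
    if target is None:
        target = []             # a fresh list is created on every call
    target.append(item)
    print(target)


append_to_fixed(1)              # [1]
append_to_fixed(2)              # [2] -- no leakage between calls
```

None works as a sentinel here because it is immutable, so nothing that happens inside one call can leak into the next.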
https://medium.com/better-programming/the-problem-with-mutable-types-as-default-values-in-functions-in-python-81ca88ff4d91
['Diwakar Singh Parmar']
2020-02-06 20:59:20.579000+00:00
['Python Programming', 'Python3', 'Programming', 'Python', 'Devop']
I Empathize with Rachel Hollis
It’s important to let readers know that when it comes to Rachel Hollis, her divorce, and the subsequent marketing of her new book, I’m a complete outsider. I was never invested in this woman as an influencer. I never really subscribed to the idea of an “influencer” anyway. People might be famous content creators, motivational speakers, writers, Instagram models, entertainers, etc. But I think labeling successful digital creators as “influencers” kind of talks down to us, the audience. We just like what we like, and many of us are not easily influenced. I’d never even heard of Rachel Hollis. I knew less than John Snow about her — until bloggers and YouTubers started discussing her divorce from husband and business partner, Dave Hollis. I want people to know that I’m giving my opinion coming from the place of a total outsider. I wasn’t a fan, I never dropped a dime on Rachel’s books or conferences, and I was completely unbiased before I started researching the issues people have with her. And some of the stuff out there has been vicious. Bloggers labeling her as a liar and a fraud, ex-fans on social media throwing insults, and YouTubers making bank off of videos where they judge her for monetizing her divorce (as they monetize her divorce). Note — I think everyone who is upset about investing in Rachel and Dave’s relationship advice has every right to feel their feelings. They feel hurt and betrayed. Their feelings are valid. But so are mine. And so are Rachel’s and Dave’s. We’re all just humans here. I also know that Rachel’s divorce has brought to light some other criticisms, notably her instances of plagiarism and her support of multilevel marketing schemes. She’s definitely made some mistakes. And now, she’s become the picture of the very thing she tried to teach everyone not to be — one half of a broken marriage. Even so, I think we need less judgment and shaming of her during the awful situation she and Dave and their four children now find themselves in, and more support of a strong woman entrepreneur who is doing her best to keep her business going and support her kids — when her divorce could very well destroy it. The Trauma Of Divorce If you aren’t familiar with this public couple, here’s some background. Rachel Hollis is a best-selling author, podcaster, and motivational speaker. Her personal development conferences were immensely successful. She also found a demand for couples conference, which she produced with her hubby, Dave. People who invested in this part of their messaging or paid good money — $1,700 — to attend their relationship conference were pissed when they learned of the divorce. Rachel and Dave were charging money to teach couples about a healthy relationship when they themselves were having problems. They were also putting out a happy-marriage image to the public through their podcast, social media, or blog posts. I got all the facts available about the split from Rachel’s and Dave’s public announcements. There aren’t a ton of details available. All we know is that there were problems, and in the end, it was decided that the healthiest choice for them and their children would be to divorce. I’ve been there. I’ve been in that exact same spot as a woman and mother. And it was miserable. You feel guilt for separating your family. You feel like a weak person for not being able to tough it out. 
You feel like a failure as a woman and a mother, even though you believe you’re making the decision that is healthier for you, and will therefore ultimately make you a better mom. Not only is Rachel going through a divorce, but she also has to deal with a crushing professional blow at the same time. Breaking their long marriage is not only traumatic in general — it’s terrible for her career. She’s losing followers left and right, which could greatly impact how she’s able to provide for herself and her kids. I’m sure she’ll probably be alright financially, the way she’s continuing to show up and work. But girl has some major egg on her face, and she is down. So many other women (and some men) are kicking her when she’s already down there, floundering around in her mistakes. Problems With Rachel Hollis, And My Response 1. Rachel and Dave were putting out relationship advice in books and blogs — and holding pricy conferences — when they were having their own issues Okay. But — take a look at a few of the ideas that Rachel and Dave claimed would help you have a healthier relationship. These are tips I found on their blog and their podcast: Take the time to go on a date night each week Try new things together Make out a lot Communicate with each other Find solutions to resolve arguments (in this case, they hired a cleaning lady, which relieved the stress of fighting about who’d be cleaning the toilet next) Don’t let extended family manipulate your decisions — instead, do what’s best for your own relationship and your kids Don’t let sex go by the wayside — prioritize your physical relationship We can clearly see that these are valuable, healthy habits to practice. Doesn’t matter who is saying it. I think it’s key to look at the actual advice they gave — not hold them up as some sort of Infalliable Gods of Marriage. Every couple is susceptible to divorce. Every single one. Even the world’s most brilliant and educated marriage counselor might one day find her spouse has left her for any number of reasons — from finding a younger woman to a mid-life crisis. Does that mean we should bash said marriage counselor till she agrees to give up and quit her job? People paid for these conferences, and I think they got a lot of value out of them. They loved the experience, and I’m sure a lot of couples improved their relationship as a result of doing this important work. These retreats weren’t full of bullshit, unfit for an audience. They were entertaining, well-produced, big events. If they were so terrible, Rachel wouldn’t have been able to keep booking more. Additionally, Rachel and Dave never claimed to be professional therapists with PhDs — they simply became popular (because we made them popular) and they worked to inspire other couples. Even though they didn’t make it, they were able to share a lot of stories about the problems they’ve had at one point and how they resolved them. You can hear about some in this podcast episode. Even if they couldn’t resolve all their problems, they still taught things that have brought value to others. Every couple is different. Dave and Rachel didn’t make it. That doesn’t mean some of the couples who utilize this basic, sometimes obvious advice (like “communicate with each other”) won’t make it. I don’t see a reason to ask for a refund. And no, I don’t think your relationship is also doomed if you practice good communication... 2. 
Rachel’s a liar and a fraud because she ended up eventually getting divorced As a divorced woman and mother, I have to tell you that from the first time I considered divorce to actually following through with it was about three and a half years. During that time, I spent many days accepting my decision to stay in it for the long haul. I was committed, and I was going to go about the business of being a loving, happy couple no matter what conflicts came our way! Other days, the bad days, things would happen, and I’d think about how separating might be healthier. This doesn’t mean I’d made my decision to leave the marriage. This was simply me, being a human person, thinking about all my options. Just because you and your spouse have problems (as all couples do) and you, understandably, contemplate your right to leave that situation — that doesn’t mean you’ve automatically decided that your relationship is total garbage and ready to be thrown out. When you’re in your marriage, you’re in it. When Rachel and Dave were committed, they kept working their business. No one knows the exact moment they went from committing to working on their problems to deciding to end things (because I tell ya, in every relationship, none of which are perfect, you’re either doing one or the other). We don’t know when that very last straw broke and how that would impact the relationship coaching aspect of their career. All critics can do is speculate. And now that they’ve split, I don’t think they’ll be putting on relationship retreats. Maybe “How to Divorce and Co-parent” retreats, sure. And honestly, divorced women like myself have some pretty damn useful advice on what not to do in a relationship. When you’re on the other side of it, you’re able to start learning from your mistakes. You’re able to help others see what caused your marriage to end. 3. She’s monetizing her divorce in an upcoming book People don’t even know the reason behind the divorce (which could honestly be a heart-wrenching, terrible reason — who knows?), and they’ve already pegged Rachel as a selfish money-hungry whore who is angering God and profiting off the pain and tears of couples as she sits on her THRONE OF LIES! (I’m paraphrasing.) You can see an example of this kind of harsh criticism here, from a “parenting influencer” who, in part 2 of his Rachel Hollis exposé, told us that this video is one of his highest viewed (and highest-grossing) of all time. He’s getting enough content for multiple videos and making a good bit of cash off of Rachel’s divorce, as are lots of other YouTubers and bloggers — the majority of whom are women. They’re judging her for being a relationship guru exploiting her divorce for profit in a book — while they’re making money from the scandal of her divorce. This judgment I’m seeing of a writer “exploiting” her divorce for money by writing about it has to run off me like water off a duck’s back. I’m a blogger who writes explicit details about my personal life experiences. From my divorce, to my sexual adventures, to my relationship woes, to my real-life experience with sexual assault. These blogs of mine (hopefully!) make money. Only if people read them. Writing is my profession and my creative outlet — and so I write. It’s often fun and/or therapeutic for me, and I can use said money for things like buying food for my child and paying the water bill, while also, hopefully, providing value to readers. Rachel Hollis has a very fresh, real, and raw experience to write about. 
Why do we have to rake her across the coals for sharing her personal story and maybe helping someone else who is just now going through the beginning stage of a crushing divorce? And from what I’ve heard about the book, it’s not some manifesto about how to become completely healed from divorce. It’s about going through grief of all kinds, and one of those very real things she’s grieving right now is the loss of her marriage and her family as she knew it. According to her podcast, it would feel inauthentic to her not to include her divorce when she’s currently writing a book about pain and loss she has experienced in her personal life. Makes sense to this writer. Building Women Up As They Work Through Their Mistakes I understand why fans feel betrayed and duped and sold to. But, even if you subscribed to Rachel Hollis’s thoughts on marriage and her picture of perfection, you can still understand that sometimes, things in life take a turn. They go wrong. And we don’t always react in the best way. I don’t think she and Dave were behind the curtain at conferences, rubbing their hands together with menacing grins saying, “These idiots. They’re shelling out so, so much moola. Wait till we follow through with our plan to divorce in two years — how foolish they will feel! And how much they will hate us. Mwhahahha!” I think this thing happened. It was bad for her career. It was bad for her emotionally. She recognizes it as a trauma for her and her family. She picked herself up rather quickly and went back to work. Good for her. Rachel recently got Rob Lowe on her podcast—this was after the whole divorce thing got out. Girl is working it. He’s one of my very favorite people, and he and Rachel put out an amazing interview about working and careers. I’ll take Rob Lowe’s participation as an optimistic sign that Rachel’s not a total fraud. I mean, who doesn’t like Rob Lowe? But —I’m aware that it’s also important not to let celebrity status impress us so much that it clouds our reality. In the same way that we shouldn’t automatically smear Rachel as a cheat and hypocrite because she now finds herself going through a divorce — especially when we don’t yet know the whole story — we should also be careful about falling for, and giving loads of money to, self-help gurus who don’t have a single shred of counseling background. You have every right not to like Rachel Hollis. You have every right to like her. You have every right to boycott her conferences and books. But I think not buying her product is enough. We don’t also need to publicly shame her over and over again. But now that plenty of bloggers and YouTubers and fans have already done so, some of them getting a leg up in their own career as a result, I think the word is out and they can stop now. I’m not sure why all this has struck such a nerve with me and led me to produce an article. I’m pretty sure Rachel Hollis and her career will be fine, and so will Dave’s. They’ve both built a lot of their work in other self-help spaces like health, motivation, and career — and that’s something we can’t take away from them, as they are clearly fit, motivated, and successful. Check out this woman’s review on YouTube after she attended one of Rachel’s personal development conferences. It’s an example of the effect her work has had: “The Rise weekend is a personal development conference. 
It’s got different elements that cover aspects like motivation, inspiration, goal setting, life coaching, entertainment, and a lot of intense emotional and personal work that you’ll be doing. There’s also an element of just, an experience. Being able to witness the power of women coming together, and the energy and the vibe that that power brings into one huge room…I feel like every single woman should have the privilege of attending such an event.” -Momjo I’d love to see us coming together to support each other more. Especially when we’re down. Even when someone makes mistakes — we can take these as learning opportunities and reach out to see how we can help them rather than slam them publicly and hurt their attempts to put themselves and their careers back together again. Rachel Hollis isn’t perfect. She may not have handled all this in the best way. She may try to pivot it to save her career. She may continue to try to inspire others. And I think that’s all well within her right — to try to heal, recover, and learn from her experiences.
https://medium.com/fearless-she-wrote/i-empathize-with-rachel-hollis-5634c8726d1b
['Holly Bradshaw']
2020-09-08 16:31:00.915000+00:00
['Relationships', 'Culture', 'Women', 'Entrepreneurship', 'Self']
Mathematics behind Random forest and XGBoost
What are ensembles? Ensemble means a collection or group of things. Ensemble learning is a machine learning technique that combines several base models in order to produce one optimal, more powerful predictive model. Ensemble methods allow us to take a collection of decision trees into account, calculate which features to use or questions to ask at each split, and make a final predictor based on the aggregated results of the sampled decision trees. Types of ensemble methods: Bagging, or Bootstrap AGGregating (random forest); Boosting; Stacking; Cascading. Bagging, or Bootstrap Aggregation: All sampling is done with replacement, and each model is built on a different subset of the data. A model is said to have high variance if it changes a lot with changes in the training data, so bagging is a technique to reduce variance in the model without impacting bias. Bagging = DT + row sampling. Bagging is the application of the bootstrap procedure to a high-variance machine learning algorithm, typically decision trees. Because each model is trained on a different bootstrap sample, a change in the training data only affects a small subset of the models, so the aggregate model does not change much. The typical aggregation operation is the mean/median in the case of regression and a majority vote in the case of classification. The final aggregation step produces the final model (h): take a bunch of low-bias, high-variance models (h1, h2, h3, h4, …) — for example, decision trees grown to a very high depth — and combine them with bagging, and you will get a low-bias, reduced-variance model (h). When bagging with decision trees, we are less concerned about individual trees overfitting the training data. For this reason, and for efficiency, the individual decision trees are grown deep (e.g. few training samples at each leaf node) and the trees are not pruned. These trees will have both high variance and low bias, which are exactly the characteristics we want in sub-models when combining predictions using bagging. Random forest is the most popular bagging model used nowadays when the base learners have low bias and high variance. Random forest: A random forest does both row sampling and column sampling with a decision tree as the base learner. Models h1, h2, h3, h4 are more diverse than with plain bagging because of the column sampling. As you increase the number of base learners (k), the variance will decrease; when you decrease k, variance increases. But bias remains roughly constant throughout, and the optimal k can be found using cross-validation. Random forest = DT (base learner) + bagging (row sampling with replacement) + feature bagging (column sampling) + aggregation (mean/median, majority vote). Here we want our base learners to have low bias and high variance, so we train each decision tree to full depth. We are not worried about depth; we let the trees grow, because the variance decreases at the end during aggregation. For model h1, the part of the dataset not used in training (D - D’) is the out-of-bag dataset; it is used to cross-validate the h1 model. Let’s look at the steps taken to implement a random forest: 1. Suppose there are N observations and M features in the training data set. First, a sample is taken randomly from the training data set with replacement. 2. A subset of the M features is selected randomly, and whichever feature gives the best split is used to split the node; this is repeated iteratively. 3. The tree is grown to its largest size. 4. The above steps are repeated, and the prediction is given based on the aggregation of predictions from the n trees.
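The post’s original code cell is not shown here, so below is a minimal scikit-learn sketch of the recipe just described — row sampling with replacement (bootstrap), column sampling (max_features), out-of-bag validation, and cross-validation over the number of base learners k. The dataset and parameter values are illustrative, not from the original.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

rf = RandomForestClassifier(
    n_estimators=100,      # k base learners
    max_features="sqrt",   # column (feature) sampling at each split
    bootstrap=True,        # row sampling with replacement
    oob_score=True,        # evaluate on the out-of-bag rows
    random_state=42,
)
rf.fit(X, y)
print("OOB accuracy:", round(rf.oob_score_, 3))
print("Aggregated feature importances (first 5):", rf.feature_importances_[:5])

# Cross-validate k (n_estimators), as the text recommends.
search = GridSearchCV(
    RandomForestClassifier(max_features="sqrt", random_state=42),
    param_grid={"n_estimators": [50, 100, 200, 400]},
    cv=5,
)
search.fit(X, y)
print("Best k:", search.best_params_["n_estimators"])
```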
Train and run-time complexity: Training time = O(n*log(n)*d*k), Run time = O(depth*k), Space = O(store each DT * k). As the number of base models increases, training time increases, so always use cross-validation to find the optimal hyperparameters. Extremely randomized trees: Extremely randomized trees = DT (base learner) + bagging (row sampling with replacement) + feature bagging (column sampling) + aggregation (mean/median, majority vote) + randomization when selecting the split threshold. This adds one more level of randomization to decrease variance, but bias may increase slightly; in practice this method is not used much. Cases: Everything that applies to decision trees also applies to random forests, except that in RF the variance is also affected by the number of base learners (k). Feature importance in a single decision tree comes from one model, but in RF we aggregate it over all k base learners. Boosting: Boosting is another ensemble technique to create a collection of predictors. In this technique, learners are trained sequentially, with early learners fitting simple models to the data and later learners analyzing the remaining errors. In other words, we fit consecutive trees (on random samples), and at every step the goal is to correct the net error from the prior trees. This works when the base models have high bias and low variance, and we combine them additively to reduce bias. When an input is misclassified by a hypothesis, its weight is increased so that the next hypothesis is more likely to classify it correctly; combining the whole set at the end converts weak learners into a better-performing model. Note: in RF we cannot minimize an arbitrary loss because the ensemble does not optimize one explicitly, but in gradient boosting we can minimize any differentiable loss. Here L(y, F(x)) is the loss function (log loss, squared loss, hinge loss, etc.) and M is the number of base models. Everything is done sequentially, hence GBDT cannot be parallelized across trees and takes more time to train than RF. In gradient boosting the base learners are high-bias, low-variance models; here we use gradient-boosted decision trees with a very low depth. As the shrinkage parameter v decreases, each model is given less weight; as v increases, the chance of overfitting increases. Always cross-validate both the number of base models (M) and the shrinkage parameter (v). As M increases, bias certainly decreases but variance also increases, so use regularization. In shrinkage, we simply multiply the second term (the stage contribution gamma) by the constant v; as v decreases, the chance of overfitting decreases, so variance decreases. Both M and v are used for the bias-variance tradeoff. Train and run-time complexity: Training time = O(n*log(n)*d*k), Run time = O(depth*k), Space = O(store each DT * k). XGBoost: Boosting + Randomization https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html https://github.com/dmlc/xgboost/blob/master/demo/guide-python/sklearn_examples.py
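The links above point to the XGBoost and scikit-learn APIs without an inline example, so here is a hedged sketch of a gradient-boosted classifier with shallow trees, a small shrinkage parameter v (learning_rate), and row/column subsampling — the "randomization" part. The dataset and parameter values are illustrative, and a reasonably recent xgboost release with the scikit-learn wrapper is assumed.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(
    n_estimators=300,      # M base learners
    learning_rate=0.05,    # shrinkage v: smaller v -> less overfitting, needs a larger M
    max_depth=3,           # shallow trees: high-bias, low-variance base learners
    subsample=0.8,         # row sampling
    colsample_bytree=0.8,  # column sampling
    eval_metric="logloss",
)
model.fit(X_tr, y_tr)
print("Test accuracy:", round(model.score(X_te, y_te), 3))
```

As noted above, M and v trade off against each other, so cross-validate them together rather than in isolation.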
AdaBoost: Mostly used in image processing and face-detection problems. At every stage, it gives more weight to the points that were misclassified. You can tune the parameters to optimize the performance of the algorithm; the key parameters for tuning are: n_estimators: controls the number of weak learners. learning_rate: controls the contribution of the weak learners in the final combination; there is a trade-off between learning_rate and n_estimators. base_estimator: specifies which ML algorithm to use as the weak learner. Stacking models: First, train each model independently (C1, C2, C3, …); the meta-classifier that combines them can be logistic regression, a random forest, or any other model. Cascading classifiers: this approach is mainly used when the cost of an error is very high. It is widely used by credit card companies to detect fraudulent transactions, or in the medical domain to detect rare conditions and save a life. The dataset used to train the second stage is (D - D1), the part of the data not used in the first-stage model. ============ Happy ending ============ Reference: Google Images, Applied AI
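As a small addendum to the stacking description above, here is a sketch using scikit-learn's StackingClassifier; the choice of base models (C1, C2, C3) and of logistic regression as the meta-classifier is illustrative, not prescribed by the original post.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

stack = StackingClassifier(
    estimators=[                                   # C1, C2, C3 trained independently
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
        ("nb", GaussianNB()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # the meta-classifier
    cv=5,   # out-of-fold predictions from the base models feed the meta-classifier
)
print("Stacked CV accuracy:", round(cross_val_score(stack, X, y, cv=5).mean(), 3))
```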
https://medium.com/analytics-vidhya/mathematics-behind-random-forest-and-xgboost-ea8596657275
['Rana Singh']
2019-11-19 05:32:34.398000+00:00
['Data Scientist', 'Deep Learning', 'Artificial Intelligence', 'Analyst', 'Machine Learning']
5 Post-Pandemic Trends in Data Science
5 Post-Pandemic Trends in Data Science How Data Scientists Have Adapted to Meet Urgent Business Needs in the Face of the Covid-19 Crisis Photo by engin akyurt on Unsplash At the beginning of this year, I was interviewed for an article about critical trends in data science I predicted for 2020. When I said that changes were happening so rapidly that it was practically impossible to fully anticipate what would happen in the field of data science over the next year, I had no idea how true that would be for 2020. With the Covid-19 pandemic upending everything around the world this year, the major trends in data science, especially data science applied to the business world, have changed significantly. Here are five post-pandemic trends that I have seen emerge in recent months. 1. Analytics and data science have become less siloed. Cross-functional teams have become increasingly common, yet most businesses still kept data scientists and analytics teams separate from the teams making critical business decisions. Only the most data-driven companies had effectively integrated data science into regular business operations. The pandemic quickly changed this. Businesses created cross-functional crisis-response teams to create analytics-driven solutions. For many companies, this has revealed data science as a valuable tool to improve decision-making. Post-Covid, more of these teams are likely to be the norm. Data scientists must, in turn, have a better cross-functional understanding. Data scientists must understand the business context to contribute to these discussions beyond just providing the data. It’s not enough to be a coding expert anymore; data scientists need to be business scientists who can contribute actionable insights. As data experts become less siloed, they must have a more comprehensive understanding of the industries where they work and business operations. Photo by Carlos Muza on Unsplash 2. Data visualization skills are a must. Customers and investors need to feel secure that the businesses they are interacting with are committed to safe practices, so more companies are releasing more data than they had in the past. Raw data, however, is useless. That’s why data scientists skilled in data visualization have come in infinitely higher demand. Data visualization skills are no longer a bonus on a resume. They are required. At the beginning of the year, I highlighted the increasing importance of being able to explain how your algorithms work and how you came to your conclusions. Now data scientists must show, not tell. People everywhere are experiencing incredibly high levels of stress. They aren’t willing to invest a lot of time in understanding your point. You need to make your results clear in seconds; a useful graph or chart is the only way. Data scientists without data visualization skills are at a considerable disadvantage post-Covid. 3. Automated decision intelligence has become a priority. Only the most agile companies are surviving the economic devastation of the pandemic. Every decision matters, and leaders must respond to changes in the market faster than ever. Automating key decisions using real-time data helps prevent costly mistakes. In these unprecedented times, decisions based on past performance and gut-feelings no longer suffice; everything has changed. Automated decision intelligence helps fill the gaps in our knowledge by using real-time data relevant to our current economic reality. 
Automated decisions based on AI models allow companies to react more nimbly to new patterns in the data. The best intelligence comes from models that can anticipate trends long before humans could on their own. Data scientists now must prioritize models that can deliver this kind of accurate intelligence. Interestingly, the pressures of Covid-19 have also reduced employee resistance to these kinds of AI-augmented automated decisions. Accenture research indicated that one of the biggest roadblocks to implementing this kind of technology in 2019 was a lack of employee adoption and negative sentiment. That is already changing. The pressure put on the workforce during the pandemic has made it more apparent than ever that we need effective technology to help us make better decisions and survive during difficult economic times. As businesses recover, employees are likely to continue to use decision intelligence to make their work easier. Photo by Nong Vang on Unsplash 4. Businesses have focused even more on the customer, as have algorithms. Historically, businesses offered what they believed customers should want in a business-centric “push” model. Few alternatives were usually available, so there was less risk of alienating the customer. Over the past decade, we have rapidly seen businesses forced to adopt a more customer-centric approach. With the internet making consumers better informed and more demanding, the economy has shifted to a customer-driven “pull” model. The coronavirus has only accelerated this trend. Consumer spending has dropped overall, and impulse shopping in stores has virtually ended. Customers are only purchasing goods and services online that precisely serve their needs. This makes it vital for the algorithms that dictate pricing, customer service, and supply chain decisions to align with that same goal. Data scientists must pivot to increasingly customer-driven models in order to meet the needs of businesses. 5. Data scientists must be more agile in responding to data imperfections and model drift. Drastic deviations from historical patterns in consumer behaviour and economic activity have challenged every business’s models’ robustness. Data scientists found themselves getting unexpected errors and even nonsense recommendations. The disruptions have revealed fatal design flaws in far too many models. After all, a good model should use disruption as an opportunity for automated learning. If the models drift in response, then they were not properly set up for self-learning in the first place. Such a discovery is never encouraging, but it’s even direr amidst a crisis. Data scientists have had to respond quickly to ensure that even imperfect models were still delivering the most accurate information possible as they were reworked to adapt to a dynamic economic reality. This crisis requires an agile response to existing data imperfections and preventing deviations from becoming model drift. Not only are data scientists working to use imperfect data to get actionable results, but they are also forced to rework models to better learn from and adjust to continued disruptions moving forward. Since there is no precedent for this pandemic, data scientists must think creatively and quickly — something that makes many typically methodical data scientists uncomfortable. Yet data scientists must adapt to be more comfortable with meeting crises with action. 
Although we can hope that another worldwide pandemic does not happen anytime soon, smaller disasters and other problems with data are inevitable. Data scientists must prepare themselves to respond as agilely as possible to whatever problem they face next to avoid model drift — and continue to provide data-driven insights companies can depend on. This crisis has revealed the weak points in everyone’s systems; now data scientists must shore up those weaknesses to avoid breaking down come the next disruption. Photo by Fatos Bytyqi on Unsplash Covid-19 may have changed data science for good These trends are not merely interesting deviations from the past; they are likely to dictate how data scientists operate for many years to come. While it’s easy to ignore these trends in the aftermath of the crisis, we should all learn from what’s happening, so we can be prepared for how they change our profession in the long run. Special thanks to Kaitlin Goodrich.
https://medium.com/swlh/5-post-pandemic-trends-in-data-science-4f1b4bf147ad
['Tobia Tudino']
2020-10-07 14:05:30.475000+00:00
['Business Science', 'Business', 'AI', 'Covid 19', 'Data Science']
Feature prioritizing: 3 ways to reduce subjectivity and bias
Originally published in Smashing Magazine. When you’re working on a product, what is more crucial than choosing the right features to develop? However, the exercise often turns into a spectacle of team voting. As a result, decisions change many times down the road. Let’s talk about the pitfalls of popular prioritization techniques and approaches to reducing bias and disagreement. How familiar is this scenario: the team employs modern decision-making methods and performs all design-thinking rituals, but the result remains guesswork. Or this: Soon after having prioritized all features, the key stakeholders change their mind and you have to plan everything again. Both situations have happened to my team and my colleagues quite a few times. Feature prioritization succeeds or fails because of one tiny thing, and I won’t keep you in suspense until the end of this article to find out. The key factor is selection criteria. But first things first. Let’s see what can go wrong, and then we’ll talk about ways to mitigate those risks. Flaws of popular prioritizing methods Challenge 1. Non-experts and experts have the same voting power Product teams strive to make the right trade-offs and marry an infinite number of options with limited resources. Typically, a decision appears as a result of collaborative activities, such as dot voting, the value-versus-feasibility canvas, MoSCoW, the Kano model, etc. While these techniques were invented by different people, they essentially work the same way: Team members put sticky notes with all feature ideas on a board, and then they shortlist the most promising ones. Either participants rate the ideas with marks or votes or they arrange them along the axes according to how feasible, desirable, or innovative each feature is.
https://uxdesign.cc/feature-prioritization-a089fd0af08
['Slava Shestopalov']
2020-12-23 22:55:14.148000+00:00
['Product Management', 'Startup', 'Product Design', 'UX', 'User Experience']
Why Kids Will Be Fine Wearing Masks At School
Why Kids Will Be Fine Wearing Masks At School Parents are more worried about the masks then kids are. Photo by Engin Akyurt on Unsplash Parents have been sharing with me their worries that their kids will not be able to tolerate wearing a mask at school. As a pediatrician, I have some reassurance to offer. Children will definitely be able to wear their masks at school during the coronavirus pandemic. It’s going to be okay. Parents are worried enough about sending kids back to school, and I want them to feel more comfortable about masks. To unravel all the confusion about masks, we need to start at the beginning. Masks in March 2020 Let’s start with the early misunderstanding about masks, which has left a lot of people confused even now. Because at the beginning of the pandemic, people in the U.S. were hoarding masks and healthcare workers couldn’t get them, public health officials made a decision. To make sure masks got to healthcare workers, public health officials asked people to donate their masks. At that time we were just beginning to understand how coronavirus spreads, and we thought that touching surfaces with the virus on it could be an issue. It is for a lot of viruses. So the worry was that since people who don’t normally wear masks tend to touch their faces a lot and mess with their masks, it would be better for the public not to wear them. That made sense when public health officials thought that touching surfaces and then touching masks would be a big way of getting coronavirus. While it is possible to get coronavirus from surfaces it turns out that that is not a major way it’s spread. And that means that the worry about people touching their masks and touching their face is not such an issue after all. On the other hand, we learned over time that the COVID-19 virus is super contagious through the air. That changes everything about how we think about masks. Anything we can do to cut down on our chances of breathing in the virus is essential. And that’s why the public health officials and doctors like me are now urging everyone to wear masks. Masks are the best way to stop coronavirus What I want you to leave with is the understanding that it’s super important for your kids to wear a mask so that they don’t breathe in the virus. And we’re not too worried about them touching their face anymore. Remember, it’s a new virus and we had to learn over time. And we are still learning. Another reason masks are important is because just talking, or even breathing according to some studies, propels the viral particles into the air. We still don’t know who’s contagious. Since people can be highly contagious for the four days before they show symptoms, it’s really important to keep our breath to ourselves. At school, kids and teachers need to wear masks to keep their breath to themselves and avoid breathing each other’s breath. Why kids will be able to wear masks Let’s move on to why I believe your kids will be able to wear masks. The biggest reason I believe this is because my patients have been doing it for a couple months now. As daycare centers started to reopen, they required kids to come back with masks on. Many of my patients at ages four and five are being asked to wear their mask all day long. And guess what? They are doing it. Kids adapt more easily than their parents do. If adults tell them it’s the way things are, children take it in stride. When parents and teachers have a good attitude about masks and share with their kids how important it is, kids do an even better job. 
My own son hates the way his mask fogs his glasses, but he knows he needs to wear it anyway. Masks are good for us We doctors are asking everyone to wear masks because we care about your well being. Masks are perfectly safe, and the front line of defense against coronavirus. Unfortunately, there’s been a lot of really silly stuff on the internet about masks. It’s completely untrue, and it’s scared people for no good reason. Masks allow you to get air just fine, and to blow off your carbon dioxide. Doctors have been wearing them all day in operating rooms for years, so we’d know by now if they caused problems. In fact, we do know. They don’t cause any breathing or lung problems. To convince us, lots of doctors have run experiments and posted them online. One I know did a 60-minute high-intensity peloton workout with an N95 mask on, all while wearing a pulse oximeter. Her oxygen was great the whole time, and she has asthma! So why do some people say they feel like they can’t breathe? We are used to breathing with nothing in front of our mouth. And so our brain notices the mask and that it’s a little hot, and interprets it as feeling bad. We say to ourselves, “I can’t breathe.” But actually you can. What’s your feeling is hot and uncomfortable, not air deprivation. People feel the exact same way when they’re out of shape and start exercising. But nobody tells themselves that they can’t breathe and they have to stop exercising then. They know it’s that they need to get better at exercise and then the breathing will get easier. It’s the same with masks. We just need to practice. We all do uncomfortable things in our lives to keep ourselves healthy. Just think of all the vaccines that protect your child. They’re uncomfortable, but they’re necessary, and they’re much much less uncomfortable than the diseases we’re trying to prevent. Medical exceptions for school-aged kids What about medical exceptions to wearing a mask? There’s actually very few of those. What about asthma? Masks are even more important for kids with asthma. If a child with asthma can’t breathe comfortably in their mask, then that could mean their asthma is not well controlled. And that’s a problem. If asthma is not well controlled, that means you need to make an appointment with your doctor as soon as possible. We are rarely unable to control asthma when families work with us and use their medicine consistently. Rather than asking your doctor for a note to excuse them from wearing a mask at school, make an appointment if they are having trouble with asthma. Parents of children with severe developmental disabilities may find that their child can’t wear a mask, and that’s something they can discuss individually with their doctor. However, many kids with developmental delays are actually able to wear their masks with a little practice. The same is true for kids with anxiety. After all, a treatment for anxiety is exposure to the thing that makes us anxious, so we can build our strength. Practicing mask-wearing can help a lot. How parents can help their kids with masks We can do this and we can set a positive tone for our kids. And just think of the impact we can have on our world if we all wear masks. We can really slow this virus down. This is our chance to help our kids feel like they’re contributing to their communities by protecting themselves and each other. It’s a way they can be good citizens. I’ve been telling my patients that we’re all superheroes now. Superheroes put on masks and protect people. 
Now, we put on masks to protect people. That makes us superheroes. Kids love this idea. A note to the parents of adolescents — I know you are worried that they won’t listen or wear their masks. And I know that you can’t be at school to watch them, so we’ll have to hope that your school enforces masks and social distancing. But I’ve also noticed that many parents of adolescents don’t know about their power. Do you know your power? Hint: it has to do with that thing that has been surgically attached to your child. Their phone. Their electronic devices. If they don’t follow your family’s expectations for masks and social distancing, take their phone. It only takes a day! It’s magic how their behavior changes after one day without their electronics.
https://medium.com/home-sweet-home/why-kids-will-be-fine-wearing-masks-at-school-99d03a71f99e
['Alison Escalante Md']
2020-08-02 22:11:14.327000+00:00
['Schools', 'Education', 'Parenting', 'Coronavirus', 'Masks']
Accelerating Growth Cycles with Analytics, Part 4: User Retention
In the previous post, we discussed various ways to evaluate and improve user engagement. Retention goes hand in hand with engagement. Engaged users are more likely to be retained. A well-retained user base amplifies the network effect and generates more engagement from individuals, which then leads to even better retention. What is a retained user and how do you measure retention? As always, we will discuss step by step. Retention Definition When measuring retention, the first consideration is your definition of a retained user. Here are some common approaches ranked by complexity from low to high. Any user who returned to any of your platforms, aka active users. Anyone who performed a key action, e.g. reading an online article or swiping on a profile or making a purchase. Anyone who crossed a threshold of activity levels, e.g. spent more than 30 seconds reading an article. It could also be a combination of different types of activities. Active User Retention At the beginning of your product journey, the active-user approach is easy to implement and a good indicator of product market fit. The downside is that there are ways to artificially inflate active user figures with no underlying positive or sometimes detrimental changes in your service. For example, you switched on an overly aggressive email or push campaign. A proportion of users are bound to click and land on your app, which leads to an increase of active users hence retention. After realising nothing has improved in the app, these users will disappear again. Some will unsubscribe and you lose that communication channel with them forever. Key Action Retention As you scale, consider moving to an action-driven approach, which makes your retention indicator more meaningful. For example, if you are an online streaming service, qualify retention on users who have played a video. That way you avoid vanity measures that include active users who browse your catalog forever without finding anything interesting to watch. If you prefer to keep active-user retention for historical consistency, then monitor both retention and engagement. If you observe a jump in retention, ensure usage levels have moved accordingly. Threshold Retention The next level up is to improve on a binary qualification based on a single action to a threshold driven by the intensity of an action or a collection of actions. For example, instead of users who clicked open a film, qualify on users who watched at least 5 mins or more of content. If you are a social app such as Instagram, you could define retention as users who performed any of the core actions: post a picture or story, like or comment on others posts, start a live streaming etc. Not all types of core actions are equal. For example, a live streaming event is likely to generate more network effect than someone liking a post. How do you balance it? You can assign weights to different types of actions then add them up as the daily activity score per user, e.g. a live streaming event is worth 10 points, a photo post 5 and a like is 1. If a user liked 5 posts and live streamed once and posted 3 pictures in one day, his or her activity score of the day is 1*5 + 5* 3 + 10*1 = 30. You can then apply an activity score threshold to define retention. How do you decide on the weights of each action and the overall threshold level for retention? This topic requires a post on its own but on a high level, they are the results of modelling towards customer lifetime value (LTV). 
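As a concrete illustration of the activity-score qualification described above, here is a minimal pandas sketch. The event table, the weights (matching the example: like = 1, photo post = 5, live stream = 10) and the threshold value are all assumptions for the sake of the example.

```python
import pandas as pd

# One row per user action on a given day (assumed schema).
events = pd.DataFrame({
    "user_id": [1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2],
    "date":    pd.to_datetime(["2020-07-01"] * 11),
    "action":  ["like"] * 5 + ["post"] * 3 + ["live_stream"] + ["like"] * 2,
})

weights = {"like": 1, "post": 5, "live_stream": 10}
events["score"] = events["action"].map(weights)

# Daily activity score per user: user 1 scores 5*1 + 3*5 + 1*10 = 30.
daily_score = events.groupby(["user_id", "date"])["score"].sum()

RETENTION_THRESHOLD = 10          # assumed value; tune it via LTV modelling
qualified = daily_score >= RETENTION_THRESHOLD
print(daily_score)
print(qualified)
```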
The weights and threshold should reflect the level of activity intensity that correlates highly with desired user behaviour and with your power users, as discussed in previous posts. Retention Window The next concept to define is the time period and frequency over which to measure retention. Closed-window retention is the percentage of users returning on the day, or within the week or month. Open-window retention is the percentage of users returning either during or after the period, i.e. the inverse of churn. The frequency will depend on the usage pattern of your service. For example, if you have a monthly subscription model, you might want to monitor monthly subscription retention; for a social app such as messaging, daily retention might be more appropriate. Retention Cohort Once you have your retention metric, visualise it by registration cohort to monitor and detect trends. A registration cohort is a group of users who registered in the same time period. You can aggregate it on different time intervals, e.g. daily or monthly. Make sure to count distinct users to avoid duplicates. Heatmap Below is a heatmap of monthly retention figures by monthly registration cohort. Heatmaps are useful when examined in three ways. Horizontally, a row shows how a particular cohort is retained over time. Vertically, and most importantly, a column shows how your app's retention is evolving across cohorts: ideally it gets darker going down, i.e. retention is improving over time. The diagonal view can act as an alert if you spot a different colour pattern. In the example below, the lighter diagonal strip suggests something went wrong in Nov 2019, i.e. month 1 of the 2019–10 cohort. Retention Curve The other view that is useful for monitoring trends is retention curves by cohort. The example below shows retention over time for each monthly cohort; you should hope to see newer cohorts moving upwards. Another way is to visualise the progression of key retention metrics over registration cohorts. These key retention metrics should be the points where you see major drop-offs in retention rate — usually at the very beginning of the user journey or after certain user actions or product cycles. Below is an example of the day 1, week 1 and month 1 retention trend over monthly registration cohorts. Feature Retention In addition to overall product retention, you can also measure the performance of each component of your product, i.e. feature retention by first-usage cohort. Instead of time-based intervals, you can consider using action frequencies. For example, when Uber released the Spotify integration, what percentage of users came back to this feature in their subsequent rides? If you operate a subscription model, it is important to track and analyse the subscription retention rate by cohort of first subscription date. Taking It Up A Notch Retention And Churn Propensity Prediction So far we have discussed retention as an aggregate at the cohort level. As your user base grows and collects quality data for three to six months, you can start training data science models to predict a retention score at the individual level. The model will also give other meaningful outputs, such as feature importance, which can guide your Aha! Moment detection. In the past, we have experimented with models that predict future retention at different time intervals, e.g. day-1 retention from a user's first 15 minutes of activity data, or day-14 retention from the first hour of data. These models can be integrated into your product and marketing strategy and play a crucial role in improving customer experience.
For example, you can design tailored in-app flows based on retention score buckets, send personalised offers when a user’s score falls below a certain threshold and they are about to churn, or prompt new users with a how-to guide if they fail to reach your Aha! Moment. LTV Prediction At the beginning of this post, we discussed the positive cycle between user engagement and retention. It should come as no surprise that retention also correlates highly with LTV. You can easily retrain or adapt your retention model to predict LTV instead. If your monetisation pattern is fairly uniform, such as Netflix’s monthly subscription model, you can estimate aggregated LTV by demographics without complex data science algorithms.
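Circling back to the registration-cohort heatmap described earlier, here is a minimal pandas sketch of how such a monthly retention matrix can be computed; the column names and toy data are assumptions, the rows would normally be restricted to users who meet your retention definition, and the result would then be rendered as a heatmap (e.g. with seaborn).

```python
import pandas as pd

# One row per retained user-month (assumed schema): when the user registered
# and a month in which they qualified as retained.
activity = pd.DataFrame({
    "user_id":           [1, 1, 2, 2, 3, 3],
    "registration_date": pd.to_datetime(["2020-01-05", "2020-01-05", "2020-01-20",
                                         "2020-01-20", "2020-02-11", "2020-02-11"]),
    "activity_date":     pd.to_datetime(["2020-01-06", "2020-02-10", "2020-01-25",
                                         "2020-03-02", "2020-02-12", "2020-04-01"]),
})

activity["cohort"] = activity["registration_date"].dt.to_period("M")
# Months elapsed between the activity month and the registration month.
activity["period"] = (activity["activity_date"].dt.to_period("M")
                      - activity["cohort"]).apply(lambda offset: offset.n)

cohort_size = activity.groupby("cohort")["user_id"].nunique()
retained = (activity.groupby(["cohort", "period"])["user_id"]
            .nunique()
            .unstack(fill_value=0))
retention_matrix = retained.divide(cohort_size, axis=0)  # rows: cohorts, cols: month offset
print(retention_matrix.round(2))
```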
https://medium.com/173tech/accelerating-growth-cycles-with-analytics-part-4-user-retention-198ec93c5771
['Candice Ren']
2020-07-09 11:44:37.354000+00:00
['Analytics', 'Growth', 'Startup', 'Data Science', 'Retention']
Development Update October 24, 2018 | The Force is Here
The FAB Foundation provides technical updates related to the project every week. The Force: Scheduled Fork The moment we have all been waiting for is hear. FAB’s development team has worked long and hard and is now ready for phase 1 of “The Force”. (Read in depth details of “The Force” here.) The FAB team began deploying smart contracts and upgrades in the test network in real time in early August. This new upgrade also adjusted the mining mechanism for self-propelled mining. This version is the original Force Basic Edition. The upgraded main new features include: upgrade mining algorithm to EquihashFAB, support for smart contracts and compatibility with Ethereum contracts; 150 second block time, and anti-Asic attacks. Fast Access Blockchain will be hard-forked at a height of 235,000 blocks (tentatively October 28, 2018). To properly run the new forked network, you will need to download the new software. Visit http://fabcoin.pro/runtime.html to download your new full node or wallet. The old wallet program is still available, but be sure to update the latest version by the 28th. In order to facilitate your mining transactions you will need to upgrade. Mining Details - The block time has been adjusted to 150 seconds. - anti-ASIC attack technology added - Currently only NVIDIA graphics card mining is supported, and AMD graphics cards are not supported at this time. More Details The FAB team will actively deal with the various questions of the fork. For any questions and inquiries, please email [email protected] or [email protected]. Alternatively, reach out to the team on Twitter and Telegram which you can find below. Mobile Wallet We are making significant improvements in the mobile wallet to allow for tokens. These include: Details in the confirmation dialog for tokens Unnecessary dependencies have been removed from the JS file. Unused features have been removed from the wallet management. Improve the sorting optimization of token receiving for wallet management. We are also continuously improving the User Interface and Experience. Kanban Kanban PBFT Consensus The PBFT configuration of the Kanban (Universal State Layer) is almost complete. Currently, the configuration allows for thousands of transactions per second during testing. We are working to set up the sharding process for shard groups. Kanban Tangle Version Our second formulation of Kanban is through the Tangle version. This past week we have changed the communication protocol on the Kanban diagram, deleted the Navigator completion between nodes and changed the Tangle communication component based on the real p2p network to immediately perform real communication and broadcasting. The team has also localized voting and consensus components based on the new protocol. Next week: Verify that new instances/tips of the API are embedded in any single node and execute on other nodes according to the new protocol. Set the node registration based on the node public key and displays a list of neighboring nodes, node groups, and all graphs. Improve the Tangle network and support three different concepts, including vertices, edges and network paths. Simulate some network paths and display them on the Tangle viewer, which will display the network based on the new front-end tools (whole network, group/shard network, and node graph). Join a new instance and pass the workflow to the network based on the new p2p protocol. Update the embedded public API to get all the business rules that have been defined. 
Embed the Redis version in the Kanban map and run it before the node. Create a network verification path when any new node wants to join the graph. Smart Contract Completed Work: - V16.2.3 Fasc — Branch: Testing and problem solving in progress - V16 — fabcoin-dev sm01 — Branch: Equihash parameters and difficulty adjustment — completed - testnet / mainnet test — bug fix This week, the smart contract fabcoin-dev / sm01 backend was released to complete the Equihash difficulty parameters. We are building the Ubuntu (NVIDIA, no mining) / Windows / iOS / CentOS; AWS configuration node.
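For readers watching the countdown, the fork timing can be estimated from the figures above. The Python sketch below is only an illustration: the current block height is a made-up placeholder (query your own full node for the real value), and it assumes blocks keep arriving at roughly the 150 second target interval.

```python
# Rough fork ETA from this update: fork height 235,000 and a 150 second block time.
from datetime import datetime, timedelta, timezone

FORK_HEIGHT = 235_000
BLOCK_TIME_SECONDS = 150

current_height = 233_800  # hypothetical placeholder; read this from your node
blocks_remaining = FORK_HEIGHT - current_height
eta = datetime.now(timezone.utc) + timedelta(seconds=blocks_remaining * BLOCK_TIME_SECONDS)

print(f"{blocks_remaining} blocks to go; estimated fork time ~{eta:%Y-%m-%d %H:%M} UTC")
```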
https://medium.com/fast-access-blockchain/development-update-october-24-2018-the-force-is-here-e97a9f2827eb
['Fab Info']
2018-10-25 15:51:46.505000+00:00
['Decentralization', 'Cryptocurrency', 'Development', 'Smart Contracts', 'Updates']
How I Cook, Eat and Learn from Crisis
That episode in my personal history repeated itself in March of this year. Clients didn’t need my advice all of a sudden. They just needed to stay afloat. Out of work once more, I started to fret. I couldn’t sleep. But there is now that familiarity with loss, a belief steeled by experience that anything can be survived because I have. Fate feels more forgiving. I am also not alone in my sorrow — the whole world is in mourning. And I am home, in Manila, Philippines, in the house where I grew up, in the company of people I love. Having learned from overcoming crises before, I knew exactly what I needed to do to feel much better. I needed to eat. But there is now that familiarity with loss, a belief steeled by experience that anything can be survived because I have. Cooking for survival To eat well in a pandemic, one must know how to cook — a skill I luckily learned 22 years ago. Cooking is never a chore. It is a form of meditation and relaxation. It is a personal passion that has tremendous benefits. It has given me not just my health, but strong relationships with my family and friends, who generously share some of themselves while enjoying the food I prepare. Cooking has taught me to create a safe space for my loved ones, to value their health that they are briefly entrusting to me, to reciprocate their trust with thought and abundance, and to expect nothing in return but their company. We may have lost some of our liberties in these coronatimes, but we can still make our own food, entertain safely, and share the bounty with our friends and family. That is what I set out to do as it becomes clear to me, the more I spend time in the kitchen, that food will ultimately help me survive this crisis. This survival plan requires a kitchen retrofitted for disaster. It should have a working stove, oven, blender or food processor, freezer, and a fridge (ideally defrosted to optimize storage). It should be able to churn out meals with ingredients that comply with the minimum safety standards of house arrest: nutritious, delicious and don’t spoil easily. It should have root crops that have a long shelf life such as potatoes, radishes, and turnips. It should have canned beans and vegetables. Asian pantry staples such as rice, noodles, sesame oil, soy sauce, hoisin, turmeric, sesame seeds, gochujang, rice vinegar, and fish sauce must always be replenished. We don’t have to suffer from hunger while living in a bunker. In fact, we can live a bit on the edge using some of the extra free time we now have. Cooking has taught me to create a safe space for my loved ones, to value their health that they are briefly entrusting to me, to reciprocate their trust with thought and abundance, and to expect nothing in return but their company. Eat healthy but heartily Before the pandemic, I would reserve Sundays to prepare my meals for the week to save time. In ongoing quarantine, I suddenly have all the time to experiment, to create something impromptu based on fridge inventory. But one thing remains the same: the meals that I make are healthy and hearty. No scrimping necessary. The lingering uncertainty forced me to be more intentional about the menu. I have to create dishes that suit both my palate and my parents’ to avoid waste. I have to inoculate them from COVID-19 by feeding them foods rich in antioxidants such as fruits and vegetables. At the same time, I have to preserve the joy of eating comfort food by offsetting their usually high sugar and salt content with fresh ingredients. 
There will be no junk food in my home. To err on the side of overcaution and visit the grocery less frequently, I decided to switch to a predominantly vegan diet. Let your tastebuds travel I have gone vegan before on a month-long experiment so I know how doable it is. This wasn’t a big deal for me, but it is for my parents who are used to their traditionally pork-based Filipino fare. I love the food I grew up with, but ironically for an archipelago surrounded by water, much of the popular Philippine dishes are meat-based. Take for example my favorite local dish — sinigang, a satisfying pork stew soured with tamarind and mixed with radish, eggplant, spinach, and taro. The secret of its majesty is in this step: sauteeing the pork belly in garlic and letting the fat simmer long enough to absorb the acid from the tamarind. The soup is ultra savory with a few drops of fish sauce. It is perfect with rice and perfect for rainy days. I have too much history with culinary perfection that to mess with it by using substitutes seems a travesty. Tofu sinigang cannot exist. Someone else can boldly make the attempt because I won’t. As a result, I have not had sinigang for six months. We have other dishes that could be veganized easily but old habits die hard. Then I caught myself: why settle with local cuisine during the quarantine? Why resign to tradition? We are going vegan for health reasons, to take full advantage of the pure nutritional benefits of fruits and vegetables. That is the bar, and it is not the end of the world if the bar is not met, like when there is leftover shrimp, which my mom loves. I am not throwing away that shrimp. I am going to use that shrimp to make scampi with loads of garlic. Now is not the time to throw food away. Now is the time to at least try and go vegan on most days. With that as the goal, we will still end up healthier than most people and better equipped to protect ourselves and others from the coronavirus. There are many plant-based options everywhere, including popular dishes in other countries that remind us of our travels. Or exotic dishes and spices that can take us to a place we have never been. In India alone, there are a variety of meatless curries, a lesson in spices in every bite. Coriander, cumin, and cinnamon — add the three Cs to your vegetable dish for gastro magic and it will taste incredible enough to forego and forget meat. Cumin and coriander are also essential in Mexican cooking. Cinnamon is vital to baking. Why did the Europeans expand their empire in the East? Spices. How did the Chinese influence Filipino cuisine? Spices. Conquistadors needed an armada to have what now sits abundantly in our cupboards. Perhaps we can pay tribute to the spice trade with just a little bit of creativity? Or perhaps, we should make that the mission of our quarantine adventures — to let our palate do the exploring. To travel despite the travel ban. To get lost in the euphoria of flavors right in our own homes. Why settle with local cuisine during the quarantine? Why resign to tradition? We are going vegan for health reasons, to take full advantage of the pure nutritional benefits of fruits and vegetables. That is the bar, and it is not the end of the world if the bar is not met. Create then elevate The first dish I cooked that was out of the ordinary was aloo gobi — an Indian vegan dish made with potatoes and cauliflower. 
It was a dish that I first had in Rajasthan during my last foreign travel before the lockdown and that I had promised to make for myself when I get home. My second creation was vegan bread. I had never baked anything in my life before March of this year, but because I love artisanal bread and not the long lines just to get one, I knew I had to make my own. Making my first-ever bread was a pivotal event in my kitchen journey because the recipe called for the traditional way of making bread — kneading by hand. It was an homage to the early Egyptians who, around 8000 BC, crushed grain and, by accident, combined the resulting flour with water. The strange mixture rose to become bread which would feed mankind for generations to come. Bread is inexpensive, yet nutritious and filling. Learning to make it increased my chances of surviving this crisis tenfold. My homemade vegan dinner rolls I have since created vegan spreads and soups to pair with my homemade bread, such as hummus, cashew cream cheese, and cauliflower chowder. I would try to remember standout dining experiences and the dishes I wanted to eat again: baba ganoush, tzatziki, tom yum, pumpkin pie, and so on and try to improvise using local ingredients to make them vegan/vegetarian. Since March, I have “traveled” to Italy, Japan, Thailand, China, and will soon “visit” Israel by making shakshuka. It is no longer unusual in our household these days to be “traveling” to three countries at the same time. Yesterday, I made sushi rolls using leftover brown rice, pesto rosso — a vegan pesto sauce made of sundried tomatoes, and our local banana fritter called turon. They don’t really pair well, but who cares? We are traveling without moving! Bread is inexpensive, yet nutritious and filling. Learning to make it increased my chances of surviving a crisis tenfold. Fake it ‘til you taste it The downside of experimenting is that some of the ingredients that are critical to successful rendering are not that easy to find. For example, I have been to several groceries, including one that is 20 kilometers away, and not one had cardamom in the spices section. I did find harissa from Tunisia, but I still need cardamom so I can make my own garam masala. It is tough to cheat your way through Indian mastery in these parts, but authentic Chinese, Korean, and Japanese ingredients are much more abundant so I do end up making those dishes more. The best time to fake it is when preparing vegan ingredients, which are devoid of eggs and dairy. This can be pretty challenging for folks who love cheese as I do. I have yet to create my own vegan mozzarella cheese, but I have had some success with other cheese substitutes. My favorite recreation is vegan parmesan cheese, which is a combination of nutritional yeast, ground cashews, and garlic powder. If you sprinkle some of that on warm pesto, you can abandon Parmigiano-Reggiano, at least for a while. Another vegan essential is tofu, one of my favorite fake meats, which is affordable and surprisingly versatile. The easiest dish that I make is Korean tofu bites, which are just diced extra-firm tofu cooked in the air-fryer for 10 mins at 190C and then tossed in sesame oil, gochujang and brown sugar. I top that with green onions and sesame seeds. It’s a fun fiery red appetizer to start off a meal with just the right kick. 
Korean tofu bites, topped with parsley because I was out of leeks Let me have cake Like I’ve said, I have never baked anything in my life before March, but things got way out of control in no time after family and friends tried my vegan bread and asked for more. Time to turbocharge my inner Christina Tosi and bid adieu processed desserts. I am going to make my own and I’m going to make a lot of them. I didn’t use the mixer right away and started with the easy beginner stuff: no-bake peanut butter cups. This is just like Reese’s, but without palm oil (which I abstain from to save orangutan habitats), dextrose, emulsifier, and all these synthetic preservatives that should be banned from entering the human body in the first place. I make them muffin-size big and top with sea salt because I love that salty-sweet contradiction banging against each other in my mouth. I continue to experiment on this since my friends started ordering after I gave them away as presents. My homemade peanut butter cups I love that salty-sweet contradiction banging against each other in my mouth. While there are many vegan desserts that are incredible (my improvised mango torte with coconut cream was a Sunday family lunch star), I want to make all kinds of desserts. My favorite sweets have eggs. They really are essential for structure. While I did try to make flourless chocolate cake with the flaxseed meal egg substitute, that had to be thrown out. So for my sweet tooth, I have decided that there should be no inhibitions. I will follow Sally’s recipes to a T. Weren’t the lockdowns restriction enough? Time to go all-in during the quarantine. Learning the strange art of acceptance It is not always easy to keep the spirits up every time jobless zombies like myself enter the kitchen to make something. Believe me, I spend an hour as soon as I wake up meditating and silently affirming the many reasons I have to be grateful. “I am healthy, alive, and strong enough to care for my family.” Or the standard picker-upper “this too shall pass.” But we don’t know when. I don’t know when I can dig my heels in the sand again. I don’t know when I can perform live again. I don’t know when I will work again. In the absence of answers, I have chosen to accept being in limbo as my new normal. It is just less stressful that way. You can say I am lucky to have that option, to be buoyed by parents who are secure in their pensions and happen to need the help of their daughter who failed to launch. I know not everyone is as fortunate. And with this luck, this lifesaver that is keeping me afloat in these testiest of times, I am slowly adding important life skills to my growing arsenal: kneading dough, making a sushi roll with a bamboo mat, properly storing produce to extend their shelf life, knife skills, turning chickpeas into hummus and fermenting radish. These skills, I now know, ensure my survival. They can’t be bad for the world. They might even lead to a new career. Nutritionist? Food researcher? Farmer? Who really knows, but I do know it would be amazing to be any of these three. The kitchen “pro” With this new life, it helps to apply whatever bankable skills we already have as a professional. In my case, I had once worked as a presidential speechwriter and chief of staff for a Cabinet minister. I have years of training in managing a political communications office, anticipating public crises, and peacefully handling inter-department skirmishes. 
These skills may initially seem out of place in the kitchen, but as I spend more time there, I realize that I am learning my way around food and baking and progressing rather quickly because I am organized. I clean the kitchen before and after I work. I cook only when the mise en place is set. The ability to track multiple moving parts while managing egos in my past life originated from a genetic predisposition to pay very close attention to the tiniest things. I just used the same programming which is already lodged in my brain anyway in my cooking — helpful when making pasta al dente while the sauce is stewing.
https://medium.com/the-innovation/how-i-cook-eat-and-learn-from-crisis-f49e459df305
['Mai Mislang']
2020-09-21 20:18:44.819000+00:00
['Love', 'Food', 'Covid 19 Crisis', 'Vegan', 'Cooking']
Predict Customer Churn by Building and Deploying Models Using Watson Studio Flows
1. Create a project All analytical assets within Watson Studio are organized into ‘Projects’. Projects are workspaces which can include data assets, Notebooks, Flows and models (among other items). Projects can be personal or shared with a team of collaborators. 1. From the ‘Welcome’ page of Watson Studio click on ‘New project’. 2. Enter a project name, and select your target IBM Cloud Object Storage Instance (we’ll help you create a storage instance if you don’t have one already). Press Create. 2. Add data assets to project Data assets are added to a project to make them available for any of the tools included in Watson Studio. Data assets added to a project are also available for any of the collaborators sharing that project to use. Examples of data assets are files, such as .csv or .json, or database tables added through a data connection. 1. Click on the ‘Community’ tab in the top bar. From here you can see content in the Watson Studio Community, which includes data sets. 2. Navigate to the data set ‘Calls by customers of a Telco company’, then click on the add (+) icon on the community card for this data set, select your project and click ‘Add’. Once this completes it will display ‘Added’. This may take up to a minute. 3. From the Community add the data set ‘Customers of a Telco including services used’, and once it displays ‘Added’ click on the ‘View Project’ link. 4. Within the project’s ‘Assets’ tab you will see the ‘Data assets’ section, which should now have the two added above. Click on the ‘Calls by customers of a Telco company.csv’ name to preview the data. From here you will see the description and name on the right. Click on the name and rename it to ‘calls’, then click ‘Apply’. 5. Return to the project by clicking on the project name in the navigation breadcrumb, then perform similar steps to rename ‘Customers of a Telco including services used.csv’ to ‘customers’. 3. Building a flow in Modeler 1. From the navigation bar select ‘Add to project’ then ‘Modeler flow’. 2. Provide a name and use the default runtime ‘IBM SPSS Modeler’, and click ‘Create’. Adding and merging data using the flow Open the node palette on the left side of the screen using the button in the toolbar. Add a ‘Data Asset’ node from the Import category in the palette. Double click the node to open the editor then click the ‘Change data asset’ button. This will open the asset browser allowing you to select from the available data assets in your project. Select the ‘calls’ dataset and click OK. This will close the asset browser. Then click ‘Save’ in the node editor to apply your changes. This dataset contains the call records for the Telco customers. Each customer may have made multiple calls, some of which may have been dropped — meaning the call ended abnormally (perhaps because the phone lost service). 4. Right click on the ‘calls’ node, or click on the three dots within the node, and select Preview to examine this data set. As you can see this dataset contains five columns: from, to, dt, duration and dropped. However there are only two columns in this dataset which will be useful for building the predictive model. The ‘from’ column contains the customer’s phone number, and as this is unique to each customer (and is common with the ‘number’ field in the ‘customers’ dataset) it can be used as an ID. The column labelled ‘DROPPED’ is a flag where 1 is yes (meaning the call ended abnormally) and 0 is no (meaning the call ended normally).
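The tutorial itself stays inside the Modeler canvas, but if it helps to see the same preview outside the GUI, here is a minimal pandas sketch; the file name is an assumption standing in for wherever you saved the ‘calls’ data asset.

```python
# Quick look at the calls data outside Modeler. "calls.csv" is a placeholder path.
import pandas as pd

calls = pd.read_csv("calls.csv")        # columns: from, to, dt, duration, dropped
print(calls.head())
print(calls["dropped"].value_counts())  # 1 = ended abnormally, 0 = ended normally
```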
This column could be an important input to the predictive model as a reasonable hypothesis to test is: If a customer has a large number of their calls end abnormally, are they more likely to churn? The ‘to’ column is the number dialed for that call and the ‘dt’ column is the date of the call. 5. Add the ‘customers’ dataset to the canvas and preview it. Take a look at the Profile and Visualization tabs to get a better idea of the shape of your data. The ‘customers’ dataset contains a row for each customer and various columns with data about the customer. This includes things like the customer’s age and gender as well as which deals and offers provided by the Telco company that customer makes use of. It also includes the number of times the customer has called the company service desk. The ‘number’ column contains the customer’s phone number and this maps directly to the ‘from’ column in the ‘calls’ dataset we just looked at. In order to work with these datasets it will be useful to read the data and instantiate the fields within the flow. To do this use a Type node, which traverses the data and determines the field types and metadata. 6. Add a Type node from the node palette on the left hand side. The Type node is a ‘Field Operations’ node (or you can use the search bar). 7. Connect the ‘calls’ and Type nodes together by dragging a link from the output port of the source node to the input port of the Type node. 8. Double click the Type node and click on ‘Configure Types’ then click ‘Read Values’. Once this operation completes click OK to close the table dialog, then click ‘Save’. 9. Add another Type node to the flow and connect the customers node to it, open its settings and click on ‘Configure Types’ then click ‘Read Values’, and then ‘Save’. The ‘calls’ dataset potentially contains multiple rows for each customer as multiple calls may have been made and will be logged as separate rows in the table. However, the ‘customers’ dataset contains only a single row for each customer. It is therefore necessary to take the multiple rows for each customer’s calls and aggregate them together, taking an average for the dropped column to get a ratio of the calls for each customer that were dropped. 10. To do this, add an Aggregate node from the node palette on the left hand side. The Aggregate node is a ‘Record Operations’ node (or you can use the search bar). 11. Connect the Type node for the calls dataset and the Aggregate node together by dragging a link from the output port of the Type node to the input port of the Aggregate node. 12. Open the Aggregate node and click on ‘Add Columns’ for the ‘Group by’ section. Select the ‘from’ field in the field picker and click ‘OK’. As we are grouping the records by the ‘from’ column which, as discussed earlier, can be considered an ID, each unique value will have a single row coming out of the Aggregate node. 13. Click on the ‘Aggregations’ link to open the Aggregations panel. Add the ‘dropped’ field to the aggregation table using the field picker. For this field, we need to determine what aggregation function will be used to combine the multiple values for each customer into a single value. In this case mean makes sense so click the edit button and ensure that only the ‘Mean’ checkbox is checked, then ‘Save’ the node settings. 14. Preview the Aggregate node to ensure that the data is as expected. There should now be three columns in the dataset: from, dropped_Mean and Record_Count.
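As a rough pandas analogue of the Aggregate node (not part of the flow itself), grouping by the caller’s number and averaging the dropped flag produces the same three columns; the CSV path is an assumption.

```python
# Pandas sketch of the Aggregate node: one row per customer with the mean of
# 'dropped' and a record count. "calls.csv" is a placeholder path.
import pandas as pd

calls = pd.read_csv("calls.csv")
calls_agg = (
    calls.groupby("from", as_index=False)
         .agg(dropped_Mean=("dropped", "mean"),
              Record_Count=("dropped", "size"))
)
print(calls_agg.head())  # columns: from, dropped_Mean, Record_Count
```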
Click the ‘Visualizations’ tab and start typing dropped_Mean into the Columns text box; it will autocomplete, allowing you to select the field. Click ‘Visualize Data’. A histogram will be generated showing the distribution and frequency of values for the dropped_Mean field. 15. Return to the flow by clicking the flow name in the navigation breadcrumb. 16. Add a Filter node from the node palette and connect it to the Aggregate node. Use the Filter node to rename the ‘dropped_Mean’ field to ‘dropped_ratio’, and also rename the ‘from’ field to ‘number’, then click ‘Save’. The next step is to merge the ‘calls’ and ‘customers’ datasets together and in order to do that a key field that has common values for each customer in both datasets is necessary. As discussed earlier the customers’ phone numbers can be used as this ID. As we have renamed the ‘from’ field to ‘number’ it is common with the equivalent field in the ‘customers’ dataset. 17. Add a Merge node to the canvas and connect the ‘customers’ dataset and then the ‘calls’ dataset to the Merge input port, by connecting first the Type node from customers to the Merge node, then the Filter node from calls to the Merge node. The order in which the inputs are connected to the Merge node is important, as the setting we will use, ‘Partial outer join’, takes the first input as the primary. 18. In the Merge node settings tab change the ‘Merge method’ to ‘Keys’ and select the ‘number’ field as the key. Here we are stating that the ‘number’ field is in both datasets and has unique values for each row which match in both datasets (i.e. customer A in dataset 1 is the same person as customer A in dataset 2). 19. Select ‘Partial outer join’ as the join type. This means that incomplete records from either dataset (where there is no merge to be performed) will be retained in the output dataset. Click the ‘Select Dataset for Outer Join’ link and make sure the ‘customers’ dataset is checked as the primary. Preparing the data for modelling A partial outer join will include all records from both datasets. If there are any customers that only appear in the ‘customers’ dataset then they will still be included in the output dataset but for the columns added from the ‘calls’ dataset the value will be null. A null value is not useful for modeling and in this situation the fact a user has made zero calls may be useful information for the model. We therefore need to convert the null values to ‘0’. 1. Add a Filler node and connect the Merge node to it. 2. In the Filler node add the ‘dropped_ratio’ field to the ‘Fill in fields’ list and select ‘Null values’ in the ‘Replace’ dropdown, and click ‘Save’. The default value for the ‘Replace with’ control is 0 so this is all we need to do here. Next we need to divide the data into training and testing sets for the model. By doing this the model will use the training partition to build and train and the testing partition to test the accuracy of the model on data it has never seen before. 3. Add a Partition node to the canvas and connect it up. Open the node editor and ensure that the training and testing partitions use the default value of 50%. This means half of the data will be used to train and half used to test. 4. To make model building repeatable using the same partitions (meaning the same sets of customers will be used to train and test the model if it is rebuilt) check the ‘Use unique field to assign partitions’ field and select ‘customer_id’ in the dropdown that appears.
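Outside the canvas, the Filter, Merge, Filler and Partition steps map onto a few lines of pandas and scikit-learn. This is only a stand-in sketch: the CSV paths are assumptions, and a left join on the ‘customers’ table approximates the partial outer join with ‘customers’ as the primary dataset.

```python
# Sketch of rename -> merge -> fill nulls -> 50/50 partition.
import pandas as pd
from sklearn.model_selection import train_test_split

calls = pd.read_csv("calls.csv")            # assumed paths for the two data assets
customers = pd.read_csv("customers.csv")

calls_agg = (calls.groupby("from", as_index=False)["dropped"].mean()
                  .rename(columns={"from": "number", "dropped": "dropped_ratio"}))

# Keep every customer even if they made no calls; their missing ratio becomes 0.
merged = customers.merge(calls_agg, on="number", how="left")
merged["dropped_ratio"] = merged["dropped_ratio"].fillna(0)

# Seeded split so the same customers land in train/test if this is re-run.
train, test = train_test_split(merged, test_size=0.5, random_state=42)
```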
The model won’t need to use all fields in the data and by default all fields are given the ‘input’ role, meaning the model will use that field and try to find relationships between it and the target. For fields like ‘number’ or ‘customer_id’, where the data has no relationship to whether the customer churns, using these fields as inputs makes no sense, so we will use the Type node to stop the model from using them. 5. Add a Type node from the palette and connect it up. 6. Open the node editor and click the ‘Configure Types’ link to open the wider editor. 7. As mentioned all fields will use the ‘input’ role by default so we only need to add fields we want to change to the type table. Add the following fields: number customer_id first_name last_name twitter_handle location churned 8. Using the dropdown in the table set the role of churned to ‘Target’. 9. For all the other fields set the role to ‘None’. This means the model will ignore them. Building the model 1. Add a CHAID node from the palette and connect it up. The CHAID node is a tree model which iteratively splits the data into groups based on significant differences in field values. The model finds the most significant split first and divides the data, for example it may split the data by gender. Then for each subgroup it repeats the process, forming a tree-like diagram. When a new row of data is scored with the model the row effectively travels from the top of the tree down through this tree diagram based on the values for each field that split the data. When the row reaches the final level of the tree diagram, called the leaf node, a prediction is made based on the majority target value for training records contained in that node. The bigger the majority the higher the confidence in that prediction. 2. With the Type node determining the target field the CHAID node label should change to ‘churned’. We will run the node with the default settings but feel free to open the node editor to take a look. 3. Click the ‘Run’ button in the canvas toolbar. Once execution has completed a new model node will appear on the canvas. 4. Preview the new model node and confirm that the two new fields from the model are included ($R-CHURNED — the prediction, $RC-CHURNED — the confidence in that prediction as a value between 0 and 1). Your model viewer for the CHAID model! 5. Open the context menu on the model (by right-clicking or clicking the little options glyph on the node) and click ‘View Model’. This opens the model viewer which contains information, statistics and charts about the model. Use the tabs in the navigation pane to view the different pages about the model. 6. In particular the ‘Predictor Importance’ chart shows useful information about which input fields are most useful for making the prediction (including the dropped_ratio field we derived earlier). 7. Also look at the tree diagram and hover on the different nodes to see how the training records were split. Evaluating the model Now that a model has been built we can evaluate it to determine how accurate it is at predicting customer churn. 1. Add an Analysis node from the outputs section of the node palette and connect the generated model node to it. 2. Open the Analysis node editor and select the ‘Coincidence matrices’ checkbox then save the change. 3. Using the node context menu run the Analysis node. When complete it will generate an output object and the flyout panel containing your outputs will open. Double click on the new analysis object to open it.
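For intuition, here is what the modeling and analysis steps look like as a scikit-learn sketch. scikit-learn has no CHAID implementation, so an ordinary decision tree stands in for it; the feature names are assumptions, the ‘churned’ label is assumed to be encoded 0/1, and ‘train’/‘test’ continue from the partition sketch above.

```python
# Stand-in for the CHAID model node plus the Analysis node's coincidence matrix.
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

feature_cols = ["dropped_ratio", "age", "gender_code", "service_desk_calls"]  # illustrative names
X_train, y_train = train[feature_cols], train["churned"]
X_test, y_test = test[feature_cols], test["churned"]

model = DecisionTreeClassifier(max_depth=5, random_state=42).fit(X_train, y_train)

pred = model.predict(X_test)                    # rough analogue of $R-CHURNED
conf = model.predict_proba(X_test).max(axis=1)  # rough analogue of $RC-CHURNED
print("Test accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))           # the coincidence matrix
```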
The analysis output contains the accuracy figures for the model as well as a coincidence matrix showing the correct and incorrect predictions made by the model. The accuracy and coincidence matrix items are divided into the training and testing partitions created earlier in the flow. It is expected that the accuracy for the testing partition is lower than the training as this is data that the model did not see as part of it being built. The coincidence matrix is useful for determining if the model has a problem with predicting one category over another. For example, the model may be worse at predicting a customer staying than it is at predicting a customer churning. 4. From within the Analysis output click on the flow name in the header bar to return to your flow. 5. Build an evaluation chart to get a graphical representation of the quality of our model — to do this add an Evaluation node from the graphs section of the node palette and connect it to your model. 6. We will run this node with default settings to build a ‘Gains’ chart, but feel free to open the editor to see what other settings are available. Run the Evaluation node using the context menu. 7. A new evaluation chart output item should be added to the flyout panel. Double click this to open the chart in a new breadcrumb. The SPSS Modeler Knowledge Center description of Gains charts is: Gains charts. Cumulative gains charts always start at 0% and end at 100% as you go from left to right. For a good model, the gains chart will rise steeply toward 100% and then level off. A model that provides no information will follow the diagonal from lower left to upper right (shown in the chart if Include baseline is selected). Build an Auto Model to compare classifiers The SPSS Modeler runtime includes auto models which build multiple different examples of the same type of model (either classifier or numeric) and then ensemble them together to provide a prediction. The auto modeler can also be useful for comparing different model types and their accuracy. 1. Add an Auto Classifier from the node palette and connect the Type node to it. 2. The Auto Classifier builds a default selection of classification models and keeps the top few models based on accuracy. Run the node. 3. When the Auto Classifier model is added to the canvas view it using the context menu. 4. The Auto Model is a special case in that it contains multiple individual models. These can be seen in the ‘Models’ table in the viewer along with information about each model. You can view the individual models using the action menu in the table row. 5. The ‘Use’ checkbox allows you to deselect one or more models from this list so that they are not included in the ensemble when the overall Auto Modeler is scored. Keep all the models selected and return to the flow using the breadcrumb in the header bar. Evaluating the Auto Classifier and comparing with the CHAID model To evaluate and compare the Auto Classifier and the CHAID model, we will need to connect them together before passing the predictions to the evaluation graph node. 1. Using the context menu on the generated Auto Classifier disconnect it from the Type node (note the dashed ‘model refresh’ link will remain). 2. Connect the CHAID model output port to the input port of the Auto Classifier model node and then disconnect the Evaluation node and connect the Auto Classifier to it so that the flow looks like this: 3. Run the Evaluation node to build a combined ‘Gains’ chart, then view it. 
As both model prediction fields are being passed to the Evaluation node it will plot both on the same graph allowing a comparison. The models are likely to be similar in performance but note whichever has the generally higher line — this is the model with better performance at predicting customer churn. $R-CHURNED is the CHAID model, and $XF-CHURNED is the Auto Classifier. You could also use an analysis node to compare the performance of the two models. Congratulations! You’ve built two models predicting customer churn and evaluated them to compare their performance. Summary In this tutorial we have created a project in Watson Studio, added some data from the community and then used the Modeler Flows product to build a model that answers the question: Will a customer churn? With the variety of other model types available in Modeler Flows you can build a model to answer practically any question and get real value from your data!
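As a closing aside, if you want to reproduce the spirit of the gains comparison outside Modeler, a cumulative gains curve is just "sort customers by predicted churn score and count how quickly the real churners are captured". The sketch below continues from the earlier scikit-learn stand-ins and adds a second model purely for the comparison; it again assumes a 0/1 ‘churned’ label.

```python
# Cumulative gains for two classifiers: the higher curve finds churners sooner.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rival = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

def cumulative_gains(y_true, churn_score):
    order = np.argsort(-churn_score)                  # highest scores first
    hits = np.cumsum(np.asarray(y_true)[order] == 1)  # churners captured so far
    return hits / hits[-1]                            # fraction of all churners found

gains_tree = cumulative_gains(y_test, model.predict_proba(X_test)[:, 1])
gains_forest = cumulative_gains(y_test, rival.predict_proba(X_test)[:, 1])
# Plotting both arrays against the fraction of customers contacted gives the
# same picture as the Evaluation node's Gains chart.
```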
https://medium.com/ibm-watson/predict-customer-churn-by-building-and-deploying-models-using-watson-studio-flows-7626b9fb5ada
['Joseph Kent']
2018-07-26 08:54:03.797000+00:00
['Tutorial', 'Machine Learning', 'Watson Studio', 'Data Science', 'Big Data']
Design Guidelines: Efficient, Consistent Design
Worried about a messy or inconsistent design? This is where design guidelines step in to save the day. Design guidelines help you have efficient and consistent design throughout your application. They help guide the overall style and flow so that you don’t repeatedly think about things like, what colour should this button be? or what size font should I set this text? What is it exactly? As defined by interaction-design.org, design guidelines are guidelines on how to implement design principles such as learnability, efficiency and consistency in order to let your application have designs that meet and exceed user needs. These guidelines fall into these categories: Style Refers to matters such as the colour palette, logos, etc. Layout Refers to the placing or structure of components, such as using a grid structure. UI Components Refers to the components of the design, such as buttons, icons, etc. Text Refers to all text aspects of the design, such as font, text content, etc. Accessibility Refers to the ease of access of a feature or service provided, such as how a page can be navigated to. Design Patterns Refers to the use of recurring components to enhance consistency in the design, such as the use of the breadcrumbs design pattern in forms. Design guidelines are different from design principles and design rules. Design principles are more general directions. Design guidelines talk about how to approach those directions and use them to decide the design rules. Then, design rules are straightforward, direct instructions. Design guidelines can be viewed subjectively and are used differently when moving to the rules. Different designers might have different interpretations of the guidelines. Here is an example taken from interaction-design.org: Design principle: Provide plain-language error messages to pinpoint problems and likely solutions. Design Guideline: Write large-lettered, jargon-free text in web-safe font. Use short sentences and draw users’ attention to causes and remedies. Design Rule: Use 20-pt, black Georgia on lavender background (#e6e6fa Hex). Put instructions in bold. Why is it helpful? Consistency By following a standard design guideline, the design of the application will remain consistent. This consistency leads to other benefits. Usability Consistency not only adds aesthetic appeal, but improves flow. This is because users will be able to become familiar with the application faster and recognize elements such as buttons faster. This ultimately improves usability.
Efficiency The development team will be able to have a faster workflow since having standard guidelines means having to spend less time thinking about the style and more time implementing the code. Productivity Since more time is spent on code implementation, the amount of work and product progress will be faster. This means the workflow becomes more productive since more work is done, instead of going back and forth in confirming the design specifications. Implementation in bisaGo application Since this topic is about following design guidelines, it is best that we dive into how I follow my team’s application’s design guidelines. When doing front-end tasks from each Sprint PBI, I always refer to existing designs from already established components or follow the provided application’s UI design kit, which is made using Figma. Our existing design guidelines and rules include (but are not limited to): We use Comfortaa and Muli as our fonts, with a defined colour palette and designed icons. Our components such as the logo, button designs and form designs are also there. Other ways I follow the design guidelines when implementing the design and features of the application are by using and/or referencing existing components in the codebase. Using the colours and spacing that have been defined already. (Screenshots: defined styles; code snippet of defined styles usage.) Using existing form components. On the left, there are already created widgets that act as form components. When making a form page, instead of creating new widgets inline, those form components are used in order to keep forms consistently designed. (Screenshot, right: code snippet of form components usage.)
https://medium.com/ppl-c/design-guidelines-efficient-consistent-design-c14dc532b401
['Wilson Hadi']
2020-12-22 17:08:10.458000+00:00
['Design Rule', 'Design Patterns', 'Design Guideline']
Census Cut Population By 570,000 People in 2017
Census Cut Population By 570,000 People in 2017 And Other Interesting Stories From the New Population Estimates The Census Bureau just released its new population estimates. They’ll be in the news. You’ll hear about the fastest and slowest growing places. That’s cool beans. Let’s talk nerd. What revisions did Census make? Let’s start nationally. Here’s Census’ revisions to national population each year versus the previous year, expressed as a percent of the previous vintage’s estimate.
https://medium.com/migration-issues/census-cut-population-by-570-000-people-in-2017-43cb41c447ca
['Lyman Stone']
2018-12-20 06:44:01.036000+00:00
['Migration', 'Demographics', 'Census', 'New York', 'Culture']
5 Design Principles For Blockchain
Blockchain is often pitched as the next big thing. However, when it comes to design, it’s a totally new realm of challenges. Blockchain acts as a thick layer of complexity on top of traditional products. If you’re a designer, blockchain is a space that needs your help! Here are the basics to get you up to speed, and what you should be thinking about as a designer. 1. 🚫 No Jargon Blockchain and cryptocurrency is a formidable space to get involved in. The result is a core group who are passionately involved. But to the average person or designer, outside of the hype bubble, it’s really hard to get excited. There are so many new and abstract concepts. There’s no easy way to get involved. The industry has a bad reputation as being a get rich quick scheme. Looking from the outside, you’ll see terms like DLT, Dapp, and altcoins being used. They’re overcomplicated jargon! As a designer, my mission is to make blockchain technology accessible in the mainstream. The first step in this is removing jargon. I encourage a no nonsense, no jargon, approach to everything. That means ruthlessly reviewing and simplifying copy (unexplained acronyms are enemy #1!). Nobody cares what software Netflix runs on. Users only care about what a product lets them do. Focus on value, not jargon. We want to get more people involved, so we need to make products that are really simple to use and understand, in layman’s terms. 2. ✂️ Ruthlessly Break Down Barriers to Entry When I tell my friends and family about cryptocurrency or blockchain, it’s often a blank look staring back. The market is full of people inside the bubble, people who understand. But to outsiders, it’s an unwelcoming, impenetrable bubble. Unfortunately, if you want to get involved, you really need to be determined. You’ll probably have to battle through terrible UX, and a complete black hole of knowledge. There’s nobody to easily explain core concepts, or walk you through the daunting process. It’s like the first generation of the internet. Where it’s technically there, but not very usable. Products like Coinbase are really focusing on great and simple user experiences. The next wave of blockchain will be to make it usable in the mainstream. Be ruthless. Radically simplify at every point. Make it so that your parents can understand and use it. 3. 🔒 Added Security or Added Friction? There’s a lot of risk when you’re dealing with cryptocurrency. Funds often live in a digital wallet. If those funds get stolen or hacked, there’s no way to get them back. No bank or central body to file a claim with. So security is crucial, particularly in making users feel safe and trust your product. In my work on Etherparty’s product, Rocket, we offer 2 factor authentication (2FA) to keep accounts secure. Enabling 2 factor authentication on Rocket, by Etherparty We strive for a balance of security and a seamless experience. We encourage users to activate 2FA, but we don’t force it for new users. That’s a lot of friction up front to ask of a new user. Instead, we designed the product to require 2FA only for important, or security-sensitive, activities, such as moving funds. This gives our users peace of mind when needed. Security is always a priority with blockchain and digital currency. 4. ✨ Be Transparent With Users When you deploy something to the blockchain, it takes some time before it is finalised. As an example, you might deploy a Smart Contract. (A smart contract is a piece of code that can execute when certain conditions are met.)
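To make that parenthetical concrete for non-developers, here is a toy, off-chain sketch of "code that executes when conditions are met". It is plain Python, not Solidity, and it is not how Rocket or any real chain works; it only shows the shape of the logic.

```python
# Toy escrow-style "contract": the payout only happens once its condition holds.
from dataclasses import dataclass

@dataclass
class ToyContract:
    seller: str
    amount: float
    delivered: bool = False

    def confirm_delivery(self) -> None:
        self.delivered = True

    def release_payment(self) -> str:
        if not self.delivered:
            return "Condition not met yet: funds stay locked."
        return f"{self.amount} released to {self.seller}."

deal = ToyContract(seller="bob", amount=2.5)
print(deal.release_payment())   # condition not met, nothing moves
deal.confirm_delivery()
print(deal.release_payment())   # condition met, funds released
```

On a real network, deploying or executing something like this is not instant, which is exactly the design problem discussed next.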
The length of time it takes to make this change public depends on how busy the network is. As designers, we can’t be sure if processing that action will take two minutes or two hours. To compensate for this, you should always be transparent. Users are accustomed to near-instant actions, so we have to set and manage their expectations. Don’t leave users waiting around. Tell them to come back in a few minutes, and communicate the current status as best you can. Better yet, send them an email when it’s complete. Keeping users up to date on the current status is key. 5. 🎯 Design Thinking For the Win Blockchain has been touted as the saviour for every industry. Sometimes it’s an industry looking for a problem to solve. It has a lot of potential. As designers, our role is so important in defining problems, and making valuable products that solve real problems. Educate your company, advocate for research and user testing. Fight past the hype, and get real user insights. Strive to make products that make people’s lives better. 👋 Should you get involved? Yes! Blockchain presents many tough challenges for a designer. As a Product Designer, I often describe my real job title as Problem Solver. These challenges are hard, but they’re really exciting problems to solve. For me, they’re some of the most interesting in my career. Until now, blockchain has been largely development driven. The UX on so many blockchain websites and exchanges is frustratingly bad. If blockchain is to become mainstream, we need an army of designers to work on radically simplifying everything. Designers are key in bringing usable and valuable blockchain products to real people and organizations.
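As a sketch of that transparency advice in practice, the loop below polls a pending action and keeps the wording honest about progress. get_confirmations() and notify_user() are hypothetical helpers standing in for whatever your backend and UI actually expose, and the confirmation threshold is an assumption.

```python
# Keep the user informed while a slow, unpredictable blockchain action settles.
import time

REQUIRED_CONFIRMATIONS = 6  # assumed policy; pick what your product needs

def notify_user(message: str) -> None:
    print(message)  # stand-in for a toast, status banner, or email

def wait_and_report(tx_id: str, get_confirmations) -> None:
    while True:
        seen = get_confirmations(tx_id)  # hypothetical helper querying your backend
        if seen >= REQUIRED_CONFIRMATIONS:
            notify_user(f"Done! Transaction {tx_id} is confirmed.")
            return
        notify_user(f"Still processing ({seen}/{REQUIRED_CONFIRMATIONS} confirmations). "
                    "You can close this page; we'll email you when it's final.")
        time.sleep(30)
```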
https://uxplanet.org/5-design-principles-for-blockchain-14f9745fa61d
['Eamonn Burke']
2018-09-26 23:09:24.613000+00:00
['Design Thinking', 'Design', 'Blockchain', 'Ethereum', 'Bitcoin']
Liderar la innovación con estrategias hacia la sostenibilidad — 7 de Febrero en Barcelona
https://medium.com/el-blog-de-imfusio/liderar-la-innovaci%C3%B3n-con-estrategias-hacia-la-sostenibilidad-7-de-febrero-en-barcelona-e5f97e8ccf1a
[]
2018-02-19 20:42:39.876000+00:00
['Leadership', 'Sustainability', 'Fssd The Natural Step', 'Formación', 'Sostenibilidad']
Marker Genes and Gene Prediction of Bacteria
Marker Genes and Gene Prediction of Bacteria What are marker genes, their usage and how to predict them? When we think of the word marker, the first thing that comes to our minds is something that is used to indicate a place. For example, it can be your current location on Google Maps or it can be the place where you planted some seeds in your garden. Similarly, in genomics studies, we can find marker genes in bacterial genomes. In this article, I will introduce you to marker genes used in metagenomics analysis, explain how they are used, and walk you through an example of a commonly used gene prediction tool. Image by Mahmoud Ahmed from Pixabay What are Marker Genes/Genetic Markers? According to Wikipedia, A genetic marker is a gene or DNA sequence with a known location on a chromosome that can be used to identify individuals or species. As a result of mutations and alterations within the genome, these genes can vary depending on their composition and location. What are Single-Copy Marker Genes? In bacterial cells, single-copy marker genes are expected to occur once. In other words, each bacterial cell contains only one copy of each of these single-copy marker genes. These genes are essential for life functions and can be found in the majority of bacterial species. Previous efforts have been made to identify marker genes that can resolve closely related organisms. Protein-coding marker genes which are rarely horizontally transferred and exist in single copies within genomes have been identified [1]. These include a set of 40 marker genes [2,3] and 107 marker genes [3]. Usage of Marker Genes Marker genes are commonly used in taxonomic profiling of environmental samples to identify gene families. These genes are also used in phylogenetic inference to reconstruct the evolutionary history of organisms. Recently, reference-free binning tools such as MaxBin and SolidBin have used single-copy marker genes to identify the number of species in a given sample. Moreover, tools such as MyCC use single-copy marker genes to refine resulting clusters. Gene Predictors Gene predictors can be used to extract marker genes. Several popular gene prediction tools are available, including FragGeneScan, which is used in the example below. Example Usage of FragGeneScan Let us see how we can use FragGeneScan to predict genes. Firstly, you can download FragGeneScan from its source repository. You can follow the instructions provided in the README file to compile and run FragGeneScan. You can see the following parameters and options of FragGeneScan.
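Since the original options listing is not reproduced here, the snippet below only shows a typical invocation pattern driven from Python; the flag names follow the run_FragGeneScan.pl wrapper's documented style, and the input file, output prefix and training model are assumptions you should adjust after checking the README of the version you installed.

```python
# Drive FragGeneScan from Python and count the predicted genes.
import subprocess

subprocess.run(
    [
        "./run_FragGeneScan.pl",
        "-genome=contigs.fasta",  # assumed input: assembled contigs or reads
        "-out=predicted",         # output prefix
        "-complete=0",            # 0 for short/fragmented sequences, 1 for complete genomes
        "-train=illumina_5",      # error model; pick one shipped in the train/ directory
    ],
    check=True,
)

# The predicted protein sequences land in <prefix>.faa; each FASTA header is one gene.
with open("predicted.faa") as fasta:
    n_genes = sum(1 for line in fasta if line.startswith(">"))
print(f"{n_genes} genes predicted")
```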
https://medium.com/computational-biology/marker-genes-and-gene-prediction-of-bacteria-5fa4cb7802f3
['Vijini Mallawaarachchi']
2020-11-10 00:59:33.147000+00:00
['Genomics', 'Software', 'Bioinformatics', 'Biotech', 'Science']
Top 10 or So Notes on ‘On Writing’ by Stephen King
Top 10 or So Notes on ‘On Writing’ by Stephen King Secrets to Stephen King’s writing process were revealed in his book, ‘On Writing,’ released in 2000. This is my review of all the important aspects I took away after re-reading it almost twenty years later. This article was originally published at Brain Bank 1. Understand the Basics Know proper grammar. Expand your vocabulary. Know the basics of the English language. Avoid passive sentences. Never use -ly words. Read and digest Strunk and White’s ‘The Elements of Style.’ Understand that you can write simply and still get across complicated story ideas that resonate with a reader. Do not overwrite or overindulge; this may come off as magniloquent or pompous and could easily lose a reader. 2. Read Of course, in order to write well, one must read obsessively. That is a no-brainer. But one thing Stephen King points out that never occurred to me is that it is okay, and important, to read bad stories and bad books. I have always read authors I admire. I read stories and novels as if I am reading letters from my gods, and I stand back in awe thinking to myself, ‘I can never do what they do.’ According to Stephen King, this is the kiss of death to an aspiring writer. Who can write if all they ever do is compare themselves to the greatest writers who ever lived? King’s advice is to read as much as possible and read as many authors as possible. Don’t just stick to your favorites. He claims that every good writer ultimately read a book where they thought, ‘I can do better than this.’ And that is the key. You must realize there are authors being published, even praised, that you can best. On making time for reading, King points out that you can and must read everywhere: in the bathroom, on line at a store, in the car (listen to audio books), on the treadmill, as well as in the obvious places like a coffee shop, your bed, the couch, your office. 3. Find Your Place & Close the Door A concept taught to me in creative writing classes, and now again in Stephen King’s book, that still eludes me is finding a time and place to write. In ‘On Writing’ Stephen King explains that you cannot wait for your muse to stimulate you into writing. Instead, you must go to the basement and close the door. He means this literally in his case, but also figuratively. In order to write, you must have a place you can go to block out the rest of the world, to keep out any distraction or intrusion, and your brain must know this place is only to write. Ambient music is okay, but TV and the internet are the enemies. Your muse lives in the basement, he says. He will not come find you, you must go find him. And he will not be sexy and beautiful, he may be grumpy, lazy and full of warts. So don’t wait for your muse, you must find a place and go there every day, then your muse will know where you go and hopefully someday, if you’re lucky, he will sprinkle some of his magic dust over your pen, but you should not wait for him to do so. Just get to work writing. In an intense workshop back in college, Robert Olen Butler once explained that he wrote his Pulitzer Prize-winning masterpiece, ‘A Good Scent from a Strange Mountain,’ while riding on the subway to and from work every day.
He was on the train for at least an hour each way and the people in and out of the doors created a din that allowed Butler to enter a zone aspiring writers can only hope they ever encounter. So he started writing on the train. And sometimes, he would be so in the zone that he would skip his stop and continue on the subway until it circled back to his destination. I got it. Another excellent point King makes related to writing, but probably also related to finding your place, and probably how he would answer me when I say I cannot find a place to ‘get in the zone’ and write on a consistent basis, is this: ‘if you want it bad enough, you will find your place.’ 4. On Plot Forget about plot. Plot is the enemy. Let the story develop naturally. Uncover the story as if you were excavating a fossil. The fossil could turn out to be a little bird (a short story), or it could turn out to be a huge dinosaur (a long novel). You must use delicate tools to excavate the bones. Your job is to reveal the fossil to the reader as you uncover it. The reader should discover the fossil as the writer does. The tools used to excavate the fossil must be gentle in order to not break a thing — but understand that even under such careful attention, bones still will break. That is okay and expected. However, ‘plot’ should be thought of as a huge jackhammer, not a delicate tool. Try excavating bones with such a monstrosity and the entire fossil is likely to break. Instead of plot, King says he often starts with a scenario and asks himself, ‘what if?’ What if, an obsessed fan kidnaps a popular author and demands he pen his next novel just for her? What if, a family was stuck in a car because a rabid dog would not let them out? Then, bone by bone, uncover how the characters react in those circumstances and watch how the story, and the characters are revealed to you. 5. On Narration (Description) Don’t let description distract from the story. Don’t use hackneyed similes or metaphors. Don’t use similes or metaphors that are lazy or make you as the writer seem as dull or as a plain as a dry turkey sandwich. Use similes and metaphors that add to the ambience of the setting, or that reinforce the mood of the scene, or that bring to life a character’s trait. In future drafts, feel free to cut great description if it does not add to the story. Just because you wrote something great, he says, doesn’t mean it belongs in the story. You must be able to cut your ‘darlings.’ 6. On Character Development Write honestly about your characters. Example… if they curse, let them curse, don’t avoid cursing just because it goes against your values. Be true to your characters. Johnny from the streets might say ‘oh fuck,’ when he makes a mistake, but Aunt Jennie who goes to church and lives in the south might really say, ‘oh darn.’ Be honest about your characters. Most importantly, a story might start with a scenario, but it should always evolve into revealing truth about the character(s). The story should never get lost, however, and should remain the main attraction. A character study alone, after all, would simply be a biography. “The best stories always end up being about the people, not the event,” King says. But the story, he emphasizes, should always be boss. Avoid one-dimensional characters. These are characters that simply play a role in the story, and don’t diverge from that role. A bad guy, in other words, is not all bad. And the hero is not all good. Tell the truth about your characters. 7. 
First and Second Drafts The first draft should be written as freely and as fast as you can keep up with. Get the story out. Try not to look back other than to check names or pieces of backstory. Get to the end of the story without seeking advice or second opinions. Write the first draft with no fear of negative reception, but with a deep fear that if you don’t finish quickly you will lose your creative vision. In the first draft, create the trees, draw the branches and watch the leaves fall. Keep creating the trees until the story is over. Then, take a walk. Go on a vacation. Separate yourself from your story. Do not dive back in hoping you will see how brilliant you are, or wishing you will find every little mistake before moving on. Go back maybe six weeks later and look at your manuscript. If it feels completely foreign to you, you have waited long enough. Then, take a look at the forest you have created. Before you start the second draft read the first draft and take a look at your forest. Read until you understand your theme. In other words, figure out what the story is really about. When you have it figured out, start the second draft with the intention of making very clear what the story is about, removing meandering passages and scenes that distract from the story. This could lead to major changes, deletions or edits, but it is imperative to follow through for the betterment of the manuscript. At this point you can also let other people you trust read and give you their notes on your work. If everyone agrees something is wrong, fix it. If one person thinks its wrong, another thinks its good, it’s a wash, leave it. Tie goes to the writer! Further Drafts After the first and second drafts, King will often go over a manuscript so many times, making small corrections, or major changes, that he has passages memorized. But this only happens after the first draft. He makes very clear that the first draft should be written as fast and freely as possible, do not reread at this stage. Don’t worry about stupid mistakes yet, because if you do it will only make you feel bad about your ability as a writer, which will hinder the development of the story. 8. Write for Your Ideal Reader While you are writing you should be constantly thinking of your ideal writer. For Stephen King, it is his wife Tabitha. He respects her opinion and feels if he can please her, the book will be pleasing to a larger audience. He also respects her notes and reactions so therefore considers them valuable enough to consider seriously on draft number two. Some writers might imagine someone dead as their ideal reader. It doesn’t matter, but constantly think, ‘would my ideal reader like the pace of the story, a particular dialogue, or narration, or chapter?’ Thinking in this way elevates your writing from the start, helping to eliminate most nonsense and garbage. It is like having an editor before ever letting another person read your work. 9. On Writing Classes & Workshops Don’t go to writing classes, writing retreats, writing seminars or any other such place where you ‘have to’ write and are going to get quasi-interesting feedback on your stories. Most feedback from random people is so vague that it mostly serves as a hindrance to your creative process. You only need keep your Ideal Reader in mind when writing and write your first draft with the door completely closed. Also, write because you want to write, not because you have to write. 
Having to write inhibits the free flow of your first draft and makes you feel pressure about deadlines and being judged by people you don't know or respect. Technically, you shouldn't even be reading this book, King says (and thus, this article). The key to learning how to write is to read and write often. The more you read, the more you write, the more you will learn from yourself as your mental database of read material grows.
10. Random Notes
Write a story as if you were a reporter. Your job is to reveal to your audience the life of the story in a way that captures the reader's attention until the very end. The details you include must create the ambience of the story and it always must be honest. Exclude details that are irrelevant. Only add details that enhance the understanding of the world you are creating for the reader.
Paragraph structures. The rhythms of a story are like the rhythms of speech. You must seduce the reader as if in a conversation with a beautiful woman, and language is the tool you use to seduce. (Learning rhythms of speech and writing comes mainly from reading and subconsciously studying and taking in all that was written until you are filled with a sea of examples, of rhythms that have worked, and those that have not.)
People love reading about other people's jobs. Include what people do in your stories. If you are a plumber, your characters can be plumbers, thus they will know in-depth details about the trade. Write about what you know, but understand you must also write about what you want to know, or that you don't know — after all, that is why you are writing, to uncover things about something you and the reader didn't know. You don't know about outer space, or life after death, no one does, but you can imagine those things. Fiction would be lost without imagination, or if people only wrote what they knew. It is when you mix the two that truth is revealed. So, a plumber in space might be a great premise for a novel.
Go where your mind takes you. Don't try and force your mind to go in one place or another. This is what a preconceived plot or theme does to a writer. In the first draft the story is telling itself to you. Let it. Don't interrupt it. In the second draft you are perfecting it so other people can enjoy it.
On character backstories… Every character has a backstory, but most of it is uninteresting and irrelevant to the story. Only tell backstory that is interesting and that adds to the story being told. And tell it in a manner of stealth, not in a matter-of-fact report.
Write 2,000 words every day and don't stop until you reach your goal. But start with 500 words per day for a week, then 750, then 1,000, until you get to 2,000 words per day (this is an example; set a goal that is right for you, but be relentlessly consistent with it).
Don't ever spend more than one season writing a story or book. It gets stale and the reader will notice. Only ever take one day off per week while in the heat of developing a story. Two days off and by the time you return the following week the story will no longer be fresh in your mind. You will have to spend too much time reacquainting yourself with your story, getting back into your subconscious, and that will either be a waste of time, or too overwhelming, and you will give up.
Writing is hard work. Who cares if you have a good idea. If you aren't going to put in the work, all the great ideas in the world mean shit.
Conclusion
Write a story that resonates.
King emphasizes that his ultimate goal is for the story to linger in the reader’s brain long after it is over.
https://olinations.medium.com/top-10-or-so-notes-on-on-writing-by-stephen-king-1dc87642b922
['Jamie Uttariello']
2018-12-27 19:21:05.925000+00:00
['Stephen King', 'Writing', 'Writing Tips', 'Fiction', 'Creative Writing']
Deploying Django App to Heroku: Full Guide
You made an app with Django. Great! And you are excited to show it to everyone on the planet. For that, you need to put it somewhere online so that others can access it. Okay, you did some research and selected Heroku. Great choice!👍 Now you started to deploy your app, did your research online, and found many different resources to follow which suggest millions of ways to do it. So you are confused, frustrated and stuck, and somehow managed to host it, but after that, to your surprise, you found that your CSS files didn't show up.🤦‍♂️ OK, now let's solve your problems. This article covers almost everything that you will need.
Roadmap
Install Required Tools
Create Required Files
Create a Heroku app
Edit settings.py
Make changes for static files
Install the required tools
For deploying to Heroku you need to have the Heroku CLI (Command Line Interface) installed. You can do this by going here: https://devcenter.heroku.com/articles/heroku-cli The CLI is required because it will enable us to use features like logging in, running migrations, etc.
Create Files required by Heroku
After installing the CLI, let's create all the files that Heroku needs. The files are:
requirements.txt
requirements.txt is the simplest to make. Just run the command
pip freeze > requirements.txt
This command will make a .txt file which will contain all the packages that are required by your current Django application. Note: if you add any package later, run this command again; this way the file will be updated with the new packages.
What is the use of requirements.txt? As you can see, it contains all the dependencies that your app requires. So when you put your app on Heroku, it will tell Heroku which packages to install.
Procfile
After this, make a new file named Procfile and do not put any extension on it. It is a file required by Heroku. According to Heroku:
Heroku apps include a Procfile that specifies the commands that are executed by the app on startup. You can use a Procfile to declare a variety of process types, including:
Your app's web server
Multiple types of worker processes
A singleton process, such as a clock
Tasks to run before a new release is deployed
For our app we can write the following command
web: gunicorn name_of_your_app.wsgi --log-file -
If you are confused about your app name, then just go to the wsgi.py file in your project and you will find your app name there. For this, you should have gunicorn installed and added to your requirements.txt file. Installing is super simple. You must have guessed it!
pip install gunicorn
runtime.txt
After this, make a new text file called runtime.txt and inside it write the Python version you are using in the following format
python-3.8.1
That's all the files we require. Now we have to start editing our settings.py file.
Create a Heroku App
This is a simple step and can be done in 2 ways, either by command line or through the Heroku website. Let's use the Heroku website for now. After making a Heroku account you will see an option to create a new app. It will ask you for a name; the name should be unique. After a few tries, you will be redirected to your app dashboard. There are many options to play with here, but let's go to the settings tab and click on Reveal Config Vars. In the KEY field write SECRET_KEY and in the VALUE field paste the secret key from the settings file. You can change it, because only this key will be used. That's all for now. We will revisit it soon.
Edit settings.py
There are quite a few changes that should be made in this file.
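Before we start editing settings.py, here is a quick recap of the three Heroku files created above, collected in one place. This is only a sketch: mysite stands in for your own project (use the module name from your wsgi.py), and the package versions are examples rather than the exact output of pip freeze on your machine.
Procfile
web: gunicorn mysite.wsgi --log-file -
runtime.txt
python-3.8.1
requirements.txt (excerpt)
Django==3.1.3
gunicorn==20.0.4
If your files roughly match this shape, you are ready for the settings changes below.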
Let's start. First, change
DEBUG = False
In the allowed hosts, enter the domain of your Heroku app
ALLOWED_HOSTS = ["your_app_name.herokuapp.com", "127.0.0.1"]
Replace the SECRET_KEY variable with the following (assuming that you have set up the secret key in Heroku in the previous step)
SECRET_KEY = os.environ.get('SECRET_KEY')
What this does is get the SECRET_KEY from the environment. In our case, we set the secret key in Heroku and it will provide the key here through environment variables.
Setup Static Files
In the settings file, you will find
STATIC_URL = '/static/'
Replace this with the following code
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATIC_URL = '/static/'
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),
)
Basically, this tells Django where to look for your static files, such as CSS files (the static folder), and where to collect them for serving (the staticfiles folder). If your app contains images that you have stored on it, or the user has the ability to store them, then add the following lines
MEDIA_URL = "/media/"
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
This is pretty much the same as the above. There is one more thing you need to do. If you have media files, then to allow Django to serve them you have to add a line to the urls.py file of the project (the top-level urls file)
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
    # ... the rest of your URLconf goes here ...
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
I highly recommend you have a look at this documentation. https://docs.djangoproject.com/en/3.1/howto/static-files/
The last thing you need to serve your static files in production is WhiteNoise.
With a couple of lines of config WhiteNoise allows your web app to serve its own static files, making it a self-contained unit that can be deployed anywhere without relying on nginx, Amazon S3 or any other external service. (Especially useful on Heroku, OpenShift and other PaaS providers.) -WhiteNoise Documentation
Install WhiteNoise
pip install whitenoise
Add it to MIDDLEWARE in the settings.py file
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ...
]
After this, don't forget to run the command which creates the requirements.txt file. Remember? Have a look at the documentation http://whitenoise.evans.io/en/stable/
So finally we have completed the 2 most important steps for deploying.
Adding Code To GitHub
Make a new GitHub repo and add all of your code to it. Use this post as a reference. After that, go to Heroku and under the Deploy tab you will see an option to connect GitHub. Connect your repo and you can hit the deploy button to deploy your app.
Using Heroku Postgres
What is the need? I am using SQLite already! The problem is that:
The Heroku filesystem is ephemeral — that means that any changes to the filesystem whilst the dyno is running only last until that dyno is shut down or restarted. Each dyno boots with a clean copy of the filesystem from the most recent deploy. This is similar to how many container-based systems, such as Docker, operate. In addition, under normal operations, dynos will restart every day in a process known as "Cycling". https://help.heroku.com/K1PPS2WM/why-are-my-file-uploads-missing-deleted
Basically, all the data you store will get deleted every 24 hours. To solve this, Heroku suggests using either AWS or Postgres. Heroku has made it very simple to use Postgres. Let's do this. Go to your app dashboard and in the Resources section search for Postgres.
Select it and you will get something like this
Postgres Add-on in Heroku
Now go to the settings tab and reveal the config vars. You will see a DATABASE_URL key there. It means that Heroku has added the database, and now we have to tell our app to use this database. For this, we will need another package called dj_database_url. Install it through pip and import it at the top of the settings.py file. Now paste the following code below DATABASES in the settings file
db_from_env = dj_database_url.config(conn_max_age=600)
DATABASES['default'].update(db_from_env)
That's it. Now your database is set up. Currently, your database is empty and you might want to fill it. Open a terminal and type
heroku login
After the login, run the following commands
heroku run python manage.py makemigrations
heroku run python manage.py migrate
heroku run python manage.py createsuperuser
Now your app is ready to be deployed: either use git push heroku master (after committing the changes) or push it through GitHub.
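To keep everything in one place, here is a minimal sketch of how the settings.py changes from this guide fit together. It is not a drop-in file: BASE_DIR is the path variable Django already defines at the top of settings.py, your_app_name.herokuapp.com must be replaced with your own domain, and the SQLite entry under DATABASES is an assumed local fallback rather than something Heroku requires.
# settings.py (excerpt)
import os
import dj_database_url

SECRET_KEY = os.environ.get('SECRET_KEY')
DEBUG = False
ALLOWED_HOSTS = ["your_app_name.herokuapp.com", "127.0.0.1"]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # serves static files in production
    # ... the rest of your middleware ...
]

# Static and media files
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),)
MEDIA_URL = "/media/"
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')

# Use Heroku Postgres when DATABASE_URL is set, otherwise keep the local database
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
db_from_env = dj_database_url.config(conn_max_age=600)
DATABASES['default'].update(db_from_env)
If DATABASE_URL is not set (for example when running locally), dj_database_url.config() returns an empty dict and the SQLite settings above are left untouched, so the same file works in both environments.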
https://medium.com/quick-code/deploying-django-app-to-heroku-full-guide-6ff7252578d7
['Shubh Agrawal']
2020-11-06 16:46:29.707000+00:00
['Heroku', 'Hosting', 'Python', 'Django', 'Web']
Docker container as an executable to process images using Go (golang)
Image Processing Program
In this lesson, we are going to write a Go program that takes a source image on the file system (using the path of the image) and crops the image into a square shape. It also tweaks the contrast and brightness. Then it finally saves the grayscale version of the image on disk. The results look like below. We won't be writing a program that has complicated logic to manipulate every aspect of the source image to achieve this functionality. Instead, we are going to use the sweet little imaging library written in Go that provides us functions to manipulate the image and save it to the disk.
docker-image-processor/
├── .dockerignore
├── .gitignore
├── Dockerfile
├── bin/
|   └── avatar
├── go.mod
├── go.sum
├── main.go
├── process.go
├── process_test.go
├── shared/
├── test.jpg
├── tmp/
|   ├── out/
|   |   ├── cmd_out.jpg
|   |   ├── model-1.jpg
|   |   └── model-2.jpg
|   └── src/
|   |   ├── model-1.jpg
|   |   └── model-2.jpg
└── utils.go
Our project structure will look something like the above. Do not worry about creating these files just now; some of them will be generated afterward. First of all, let's convert this project to a Go module so that we can install and track dependencies.
$ go mod init docker-image-processor
This command will create the go.mod file in the project directory. We are not concerned about publishing this Go module, so the docker-image-processor module name will be alright for us. Now, let's install the imaging module.
$ go get -u github.com/disintegration/imaging
This command will download the imaging module from GitHub and register it as a dependency inside the go.mod file. It will also generate a go.sum file to hold cryptographic hashes of the installed modules and their own dependencies.
The process.go file contains the actual logic to transform the source image and save the output image to disk. It has the makeAvatar function that does the job of producing the image.
func makeAvatar(srcPath string, outPath string) error
This function accepts two arguments. The first argument srcPath is the path of the source image on the disk and the second argument outPath is the path at which the output image will be saved. The package main declaration suggests that this Go project will be compiled into a binary executable file. We will discuss this more while working on the Dockerfile and the Docker image. The utils.go file contains some utility functions, such as fileExists, to aid other programs in the project.
Now that we have the makeAvatar() function ready, let's test it out. For this, the best approach would be to create a unit test and execute the makeAvatar() function to produce some sample outputs. A unit test file in Go ends with the _test suffix and the unit test function must start with the Test prefix. In the process_test.go test file, we have written the TestMakeAvatar function that tests the makeAvatar function for sample images located in the ./tmp/src directory and saves the output inside the ./tmp/out directory. It then checks if the output files exist to conclude the results of the test. Let's see the result of this test.
$ go test -v
The go test -v command instructs Go to look for test files ( *_test.go ) in the module directory and execute the test functions. From the result above, the TestMakeAvatar test has passed. After running this test, you will be able to see the output images in the ./tmp/out directory, which will look something like below (right). Now let's work on the main.go file, which will be the entrypoint of the application.
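Before turning to main.go, here is a minimal sketch of what a makeAvatar implementation could look like with the imaging package. The crop logic and the contrast/brightness values below are assumptions chosen for illustration, not the exact values used in the original process.go.
// process.go (sketch)
package main

import "github.com/disintegration/imaging"

// makeAvatar reads the image at srcPath, crops it to a centered square,
// tweaks contrast and brightness, converts it to grayscale and writes it to outPath.
func makeAvatar(srcPath string, outPath string) error {
	src, err := imaging.Open(srcPath)
	if err != nil {
		return err
	}

	// Use the shorter side of the source image as the square size.
	side := src.Bounds().Dx()
	if h := src.Bounds().Dy(); h < side {
		side = h
	}

	img := imaging.CropAnchor(src, side, side, imaging.Center)
	img = imaging.AdjustContrast(img, 20)   // illustrative value
	img = imaging.AdjustBrightness(img, 10) // illustrative value
	img = imaging.Grayscale(img)

	return imaging.Save(img, outPath)
}
The point of the sketch is simply to show how little code the imaging library needs for this kind of transformation; every step maps to a single function call.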
The application we are trying to build is a binary executable that accepts two arguments as shown below.
$ ./avatar <srcPath> <outPath>
The ./avatar file is the executable we will build from this Go project. The srcPath and outPath arguments are the paths to the source image and output image on the disk. When we execute this command, the main() function inside the main.go file will be executed with these arguments. But before building an executable file, we need to work on the main.go file and make a call to the makeAvatar() function with the srcPath and outPath received in the command. We can use go run . to run the module, which executes the main() function just like an executable would.
$ go run . <srcPath> <outPath>
We can access the arguments passed to the command that executed the main() function using the os.Args variable. Args is a slice (array) of strings and the first element of this slice contains the path of the executable.
// main.go
package main

import "fmt"
import "os"

func main() {
    fmt.Println(os.Args)
}
This simple program produces the following result when executed with the command $ go run . ./test.jpg ./test_out.jpg .
[ /var/folders/xx/..../b001/exe/docker-image-processor ./test.jpg ./test_out.jpg ]
When we run a Go program, or an entire module in this case, Go first creates a binary executable file on the fly, stores it at a temporary location, and then executes that binary executable file. The binary executable execution starts by executing the main() function. The command-line arguments received from os.Args that we are interested in are the 2nd and 3rd ones. So let's modify the main() function and make a call to the makeAvatar() function with these two values. In the program above, first, we are checking if the length of os.Args is less than 3 and exiting the program with an error if that's the case. Then we are extracting the 2nd and 3rd arguments, which would be the srcPath and outPath for the makeAvatar() function.
$ go build -o ./bin/avatar .
The command above will build a binary executable file avatar from this module and put it inside the ./bin directory of the project. Then we can execute this binary executable using the ./bin/avatar <args> command.
$ ./bin/avatar ./tmp/src/model-1.jpg ./tmp/out/cmd_out.jpg
Success! Image has been generated at ./tmp/out/cmd_out.jpg.
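For reference, here is a minimal sketch of the finished main.go described above. The structure (checking the length of os.Args, then calling makeAvatar with the two paths) follows the steps in this lesson; the exact messages printed are placeholders.
// main.go (sketch)
package main

import (
	"fmt"
	"os"
)

func main() {
	// os.Args[0] is the path of the executable, so we need at least two more arguments.
	if len(os.Args) < 3 {
		fmt.Println("usage: avatar <srcPath> <outPath>")
		os.Exit(1)
	}

	srcPath := os.Args[1]
	outPath := os.Args[2]

	if err := makeAvatar(srcPath, outPath); err != nil {
		fmt.Println("could not generate the image:", err)
		os.Exit(1)
	}

	fmt.Printf("Success! Image has been generated at %s.\n", outPath)
}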
https://medium.com/sysf/docker-container-as-an-executable-to-process-images-using-go-golang-5233f9bd3bf7
['Uday Hiwarale']
2020-12-18 08:14:16.825000+00:00
['Docker Image', 'Go', 'Docker', 'Image Processing', 'Dockerfiles']
Lessons from Faktograf’s cross-border collaboration to reduce COVID-19 misinformation
The organisation relies solely on grant and project funding and has no membership or subscription programme for readers, nor does it host any advertising. In 2019, it partnered with Facebook to review and rate the accuracy of content on the platform as part of its third-party International Fact-Checking Network. Faktograf has a newsroom of seven people plus three freelancers for covering additional work during COVID-19. They also have some occasional contributors who write on specific topics. The team primarily fact-check the statements of top government officials, members of parliament and other elected officials; however, they also debunk common untruths or misconceptions which circulate widely on social media. These fact-checks are published on Faktograf's website and distributed via its social media accounts. Prior to the pandemic, Faktograf's output was around three articles per day in 2019. Since the European elections in May 2019, the team has begun receiving regular requests from readers via Facebook, email or Twitter to check out claims or statements. Faktograf aims to reach an audience of engaged Croatian citizens who are keen to take part in public discourse. Its ideal readers have a high dose of scepticism and curiosity. In 2019, the Faktograf site received a million unique views, but in the first six months of 2020, it recorded more than two million uniques. Around half of its readers are aged 25–44 years old, while 54% are female and 46% are male. Its readers are predominantly based in Croatia, with some in neighbouring countries, mostly Bosnia and Herzegovina and Serbia. In recent years Croatia's press freedom track record has become a cause for concern. According to Reporters Without Borders, Croatian investigative journalists, especially those who cover corruption, organised crime or war crimes, are often subjected to harassment. Media law is particularly strict in the country; for example, individuals and organisations can launch criminal proceedings for insult and defamation or start civil proceedings with claims for compensation. This happened in 2018 when the national broadcaster Croatian Radio and Television (HRT) filed a lawsuit against the president of the Croatian Journalists' Association following a critical press release about the organisation. As of May 2020, there were 905 current lawsuits against journalists and the media in Croatia. To compound matters, Croatia's media is also highly concentrated, with most print, TV and radio owned by two companies — Austrian Styria Media Group and Hanza Media. Perhaps unsurprisingly, citizens' trust in the media — like many public institutions in the country — is low. The 2019 Reuters Institute Digital News Report shows that, out of 38 countries surveyed around the world, Croatia has the highest percentage of people (56%) actively avoiding the news, followed by Turkey (55%) and Greece (54%). The European countries in the survey with the lowest avoidance of news are Sweden (22%), Norway (21%), Finland (17%) and Denmark (15%).
How did Faktograf handle the COVID-19 crisis?
In January 2020, when the spread of the virus was restricted to China but misinformation was already rife, Faktograf worked with the International Fact-Checking Network to create the #CoronaVirusFacts Alliance Database. The alliance brings together more than 100 fact-checkers around the world to publish, share and translate fact-checks about COVID-19.
The database — which now has over 7,000 entries in over 40 different languages — helped Faktograf to better understand what mis- and disinformation was being widely shared on a global scale. This made it easier in the early stages of the pandemic to fact-check claims that originated someplace else and found its way to Croatia. Article on face masks on Faktograf’s live blog When cases started to appear in Croatia, Faktograf started a liveblog to debunk COVID-19 mis- and disinformation. The team prioritised topics that endangered public health and that were likely to reach a large number of people. This included information about face masks, fake cures and vaccines. Collaboration with readers intensified during this period and the team regularly received messages via Facebook and email with suggestions of stories and posts to fact-check. (The liveblog does not allow for comments but readers can post or message the team on Facebook or Twitter.) In total, the team has published more than 130 COVID-19 related articles since March from a total of 300 pieces. At the start of March, Faktograf co-founded a new fact-checking network to foster collaboration between fact-checking journalists in the region and to promote media accountability. Called SEE Check, it is made up of seven fact-checking organisations in six countries in Southeastern Europe (SEE) including Slovenia (Razkrinkavanje.si), Montenegro (Raskrinkavanje.me), North Macedonia (F2n2.mk), Serbia (Fakenews.rs & Raskrikavanje.rs), Bosnia & Herzegovina (Raskrinkavanje.ba). The network created a Twitter account and Facebook page as well as a Viber community, and hosted a roundtable discussion in May 2020 to share tips on how journalists across the region could better work together. Prior to forming the network, Faktograf had been in touch with those organisations on an informal basis; however, the spread of COVID-19 led them to formalise the relationship. A concrete example of SEE Check’s work is the debunking of the 26-minute “film” Plandemic, which went viral online in April and was removed by YouTube in early May. Journalists from Faktograf and Raskrinkavanje.ba worked together to produce a detailed explainer in Croatian about why the film’s claims were baseless. The article was published at the same time on both Faktograf and Raskrinkavanje.ba’s platforms, as well as all SEE Check’s social media channels. Screenshot of Faktograf’s Viber group That network also set up the COVID-19 Provjereno (meaning ‘checked’) group on Viber, an instant messaging platform popular in the Southeastern Europe region, after realising that disinformation was spreading primarily via messaging apps. The aim of the group was to debunk false information and to share factual information with as large an audience as possible across all six countries. All seven SEE Check members post fact-checked articles in the group as well as short videos and memes that counteract dis and misinformation. So far almost 5000 people have joined the group. On March 22, Zagreb was hit by one of the largest earthquakes in 140 years. At 5.3 magnitude, the earthquake partially damaged Faktograf’s city-centre offices. At that point, all of its journalists were working remotely from home. But the chances of returning to the office building remains unlikely at this point due to safety. Working remotely proved challenging during lockdown. The team mainly found themselves collaborating over email and Signal. How has COVID-19 changed the future of Faktograf? 
In the future, SEE Check plans to expand the collaboration among its seven co-founding organisations. For example, the group is currently applying for an International Fact-Checking Network grant to create a fact-checking podcast related to the Southeastern Europe region. Each podcast episode would be produced by a different newsroom and would be filmed to create versions with English and regional language subtitles. The pandemic helped the team realise the importance of expanding their efforts to new platforms, such as WhatsApp and Viber. Creating the COVID-19 Provjereno group seemed to help check the flow of mis- and disinformation on these mobile platforms although it is difficult to gauge its impact. Nonetheless, the team is examining how it will also set up a similar fact-checking community on WhatsApp. Following the success of the Viber community, Faktograf also plans to start a newsletter to help build a stronger relationship with its readers. The team also wants to show that there is a reason to come back to Faktograf after the pandemic subsides. While there is no membership or crowdfunding planned, they believe the newsletter is the first step to more deeply engaging its audience. There are also plans to redesign the Faktograf website with a mobile responsive design given 78% of Croatians use smartphones to consume news on a weekly basis. COVID-19 highlighted to the Faktograf team how important it is to expand their network of experts to help answer reader’s questions. For example, for a recent article about the effects of 5G masts, they spoke to medical experts and telephone network specialists to debunk misinformation about the effect of the masts on people’s health. Going forward, this means creating a database of scientists, academics, researchers and others working in public institutions who are amenable to working with the Faktograf team on debunks and other articles. Media literacy will become a bigger priority for Faktograf in the future as a result of the coronavirus. According to Medijskapismenost.hr — a Croatian media literacy portal set up in 2016 by the Agency for Electronic Media and UNICEF — only 11% of Croatian citizens have the opportunity to learn how to critically evaluate media content. Most of these people are from a young generation (15–30 years old) and tend to be highly educated. One way the team intends to help increase media literacy is by creating shareable memes and videos on social media and messaging platforms to make people aware that what they share has wider societal consequences. What have they learned? “From COVID-19 to the anti-vax movement, we’ve witnessed how easy disinformation or misinformation is to consume and share. This is not the case with factual information and that’s a big problem. That means there is an even greater need to put out truthful and factual information in large numbers in order to combat this. But this responsibility isn’t solely on fact-checkers or journalists. It has to include not only media literacy — it has to come from the bottom as well as the top. In terms of positive outcomes, we learned we can adapt quickly and create new ways to collaborate across borders and within our team. Our readership skyrocketed during this time of uncertainty. Our hope is to serve these readers beyond COVID-19.” Ana Brakus, journalist, Faktograf.hr
https://medium.com/we-are-the-european-journalism-centre/lessons-from-faktografs-cross-border-collaboration-against-covid-19-misinformation-5eafd5156fc
['Tara Kelly']
2020-09-07 09:51:59.332000+00:00
['Journalism', 'Misinformation', 'Case Study', 'Croatia', 'Media']
EOS Proxy Voting: Basics by Attic Lab
Today, Attic Lab is going to tell you about Proxy Voting: an overview, how to set somebody as your proxy, and how to create your own proxy account. For all proxy actions, we will show you how using cleos and eostoolkit.io.
What is proxy voting
Proxy accounts represent a person/community with a list of Block Producers for which it is willing to vote. Commonly, they support their vision with personal preferences, EOS Constitution compliance, regproducer agreement, etc. For regular users, it is a handy tool to follow if they want to contribute to the EOS ecosystem's life. EOS token holders don't need to research every Block Producer to see if it is compliant, how transparent it is, how much it contributes to the community, etc. Proxies usually just provide the list of chosen Block Producers and a checklist of "why to vote for this certain Block Producer". To finalize, proxies are your "representatives" in terms of voting. Just find a proxy that matches your views and deserves your trust. If this has happened, let's go to the next step.
How to set somebody as your proxy
If you are using cleos, this step is pretty straightforward. It is the easiest way, from our point of view. Let's assume that account "atticlab1" wants to set "123proxy" as its proxy. There is only 1 command to use.
$ cleos system voteproducer proxy atticlab1 123proxy
Now, "atticlab1" has proxied its votes and "123proxy" will vote for its chosen list of Block Producers on behalf of Attic Lab. There is another option to proxy your vote — a toolkit. We will give an example using eostoolkit.io. First, you should install the Scatter browser extension, create or import your wallet and go to eostoolkit.io. After you're done with this step, you should go to Manage Voting -> Set Proxy. Here, it is pretty straightforward. Under "1", as you can see on the screenshot, you provide the Proxy Account Name, and under "2" you give your Account Name. Let's assume that you decided to dive into the EOS ecosystem and create your own proxy account. Then, we will show you the next step.
How to create your own proxy account
Attic Lab will show you this method using cleos and eostoolkit.io. To register as a proxy using cleos, you should run only 1 command.
$ cleos system regproxy atticlabproxy
After running this command, we've created a new proxy account — "atticlabproxy". If you want to use eostoolkit.io, you should download the Scatter browser extension, create/link your account and go to https://eostoolkit.io . Then go to Manage Voting -> Create Proxy and fill out the form as shown on the screenshot below. So, today we discussed the basics of proxy votes, how to proxy your votes, and how to create your own proxy account. Attic Lab is here to support the EOS ecosystem and the community. We hope that you find this educational content useful and helpful. Share your thoughts about it in the comments section below! Support Attic Lab as a Block Producer if you want to receive more important content about EOS.
Follow us!
Website: https://atticlab.net/eos/
Twitter: https://twitter.com/atticlab_it
Facebook: https://www.facebook.com/atticlab/
Reddit: https://www.reddit.com/r/atticlabeosb/
Steemit: https://steemit.com/eos/@attic-lab
Medium: https://medium.com/eosatticlab
Golos: https://golos.io/@atticlab
Telegram Chat: https://t.me/atticlabeosb
Telegram channel: https://t.me/eos_atticlab
https://medium.com/eosatticlab/eos-proxy-voting-basics-by-attic-lab-68742cab3faa
['Attic Lab']
2018-08-19 17:54:05.523000+00:00
['Blockchain', 'Eos', 'Development', 'Cryptocurrency', 'Bitcoin']
3 key takeaways from HPE Labs
One of my most recent projects entailed spending eight months as the lead designer at HPE Labs, a security and management company in Bristol, to bring the user interface (UI) of their management dashboard right up to date. As a rich and varied role, it seems only fitting that I share some key aspects of the experience to provide some insight into the way I work. The contract itself was split into two projects: Loom and the Executive Dashboard. Whilst the former required an improvement to an existing UI, the latter was a from-the-ground-up task that required the development of a brand-new dashboard in line with the modern brand guidelines. HPE’s developers and engineers had been working on the project for four years prior to my arrival and they hadn’t collaborated with a designer until that point. Enter the onboarding process. A project seldom starts with you — understand its past It was important for me to get to grips with the project and its past as quickly as possible — after all, I was the one who was furthest away from the inception of the idea at this stage. If a project has been on-going for four years, it’s important to ask as many questions as you can to get up to speed with the team and its thinking. It transpired that serious limitations with the existing UI were at the core of the project, so I had to ensure that I could bring its focus back onto the users and their high expectations of UIs in 2017 and beyond. In the end, it was only by understanding the project’s past and the limitations that held it up that I could work to develop a revolutionary new user experience (UX) with a completely new and intuitive UI made for touch and large-screen displays. Collaborate, collaborate, collaborate None of that would have been possible if either I didn’t integrate with the project team or the team weren’t receptive to my questions. They brought a designer on for a reason, so that could only be conducive to a positive outcome. Collaboration for designers works two ways and the creative process must be flowing. This doesn’t just mean with developers and engineers; it means close collaboration with the brand police as well when it comes to brand guidelines. If we don’t stick to them from the off, we’re sent back to the drawing board when we send our work for approval. The importance of taking your time Of course, deadlines normally dictate our approach to ‘taking our time’, but there is a lot to be said for genuine, on-the-spot testing with real users. This is exactly what we did with the Loom and Executive Dashboard UIs for HPE. I was lucky enough to be working in the same building as the people who would be using the UI I was designing after I’d left, so we had the privilege of immediate testing and feedback to inform our decisions. It’s not always possible to take your time with such an iterative approach, but when you do get the chance, make the most of it. Ultimately, my eight months at HPE Labs gave me a great opportunity to grow and develop as a designer outside of the usual agency environment. I was humbled by the belief and trust that the team put in me from the off and was impressed by the fact that they create the sort of environment in which like-minded creatives can thrive. It really allows for a more desirable outcome if everyone works together, but it starts with the first steps you take into the office as a consultant if everyone is going to be on and remain on the same side.
https://uxdesign.cc/3-key-takeaways-from-hpe-labs-cdbecb345e2a
['Simon Mccade']
2019-03-29 10:49:30.238000+00:00
['Research', 'UI', 'Design', 'Collaboration']