Why Dental Floss Is a Marketing Scam
How Do You Market a Product That Nobody Needs? First, you need to present your product as the solution to a problem. The problem here is rotten teeth. Who wants that? This is an easy sell in a dentist's office. The patient already has a need; their teeth ache, or it's a check-up and they've arrived at your business. It's now a question of when to present the worst-case scenario that foretells the loss of teeth.

And it's not just the loss of teeth. The social impact has a major psychological grip on my sense of worth. With bad teeth I can't get a date. Without a date I'll be lonely. Being lonely will lead me to depression. Being depressed… well, you get the picture. A spiraling decline in my mental health, all because of bad teeth.

It gets worse. You're seeking employment. You have bad breath caused by food stuck in your teeth. You have an interview coming up. Without flossing you could be doomed. Without a job, you can't get a date. Without a date, you'll be lonely. Yeah, I know, it all leads back to sex.

All this runs through my mind as I sit sprawled in the dentist's chair, unable to answer back coherently.

"You wouldn't want to have bad breath, would you, Reuben?" asks the Gentle Dental.

"Murhhmughhhh muuuummmmrrrgrhlll mmooorrrrhhhh" I would reply. What? Didn't I say I was in a vulnerable position?

"Tell me, Reuben… do you date?" He doesn't wait for an answer. "Because you'll not be wanting to kiss anyone with food stuck in your teeth, would you now?"

He's a pessimist, my dentist. Probably because he's made the choice of staring at people's bad teeth every single day of his life. The nightmare scenario rapidly plays through my head, and within ten seconds I have died, alone, somewhere in Palmerston North (formerly the worst place to live on Earth).

Need. Check. Prevention message. Check. Now all he needs to do is close the sale.
https://medium.com/better-marketing/why-dental-floss-is-a-marketing-scam-31cb666f1a00
['Reuben Salsa']
2019-11-22 00:13:55.504000+00:00
['Ideas', 'Marketing', 'Medical', 'Opinion', 'Dentistry']
Building to learn: the role of prototyping in Design
Taking Time to Prototype

The prototyping phase of product development is often sacrificed because design or product teams don't think they have time in their schedules. But the reality is that you probably don't have time not to prototype. One thing that over twenty years in the software industry has taught me is that everybody prototypes, but not everyone plans for it. In other words, no matter how experienced you are as a designer or a product owner, the chances that a new product or feature will resonate perfectly with customers exactly as it was originally conceived are pretty low — especially when you're trying to deliver entirely new kinds of user experiences across new kinds of devices and platforms. If you skip the design prototyping phase, that means you are prototyping with your customers, which is by far the least efficient and most costly way to learn.

The more learning you do before launching a new product or feature, the more confidence you can have that it will resonate with end users right away, and the better the chances are that you can devote the next release to innovating rather than addressing customer complaints — or worse, customer indifference. Although it may feel counterintuitive, sometimes the best way to move faster is to first slow down.

The global design company IDEO has a great saying about prototyping: "If a picture is worth 1,000 words, a prototype is worth 1,000 meetings." If you've ever experienced how a high-fidelity, interactive prototype can facilitate decision making and drive alignment across organizational silos, it's hard to imagine shipping anything without taking time to prototype it first. Prototyping doesn't just save time during short- and long-term product development lifecycles; it can also dramatically reduce friction around innovation.
Throughout my career, I have seen countless good ideas — ideas that could have been significant revenue generators — collapse beneath the weight of debate or indecision, only to be resurrected by future unsuspecting resident entrepreneurs. Prototyping can disrupt these cycles by giving stakeholders something concrete to evaluate. The faster you can bring an idea to life, the sooner you can make a deliberate and well-informed decision to either take it to market or put it to rest.

The last point I want to make around taking the time to prototype has to do with misconceptions around product schedules. Most people — even experienced product owners — assume that the most difficult aspect of building a successful product is the implementation. That hasn't been my experience. While anything that adds value to customers' lives certainly does take time to build, if you always knew exactly what customers wanted, and exactly the user experience they would ultimately find most intuitive and engaging, figuring out how to give it to them probably wouldn't stop you from being successful. It is usually far more difficult to get users' attention in the first place, hold that attention long enough that they begin developing habits around your product, and then consistently delight them.

Until you are confident that you know what you should build, don't waste time trying to build it. Prototype it instead. Stop building to ship until you've first built to learn.
https://uxdesign.cc/building-to-learn-977a8cd88ced
['Christian Cantrell']
2019-04-10 00:51:10.058000+00:00
['Design Thinking', 'Design', 'User Experience', 'Prototyping', 'UX']
Grief, Denial and the Will to Rise Again.
An old friend scheduled a c-section after enduring a complicated, high risk pregnancy. The doctors were taking no chances. “It feels so strange to be scheduling a birth like a dentist appointment,” she told me. “Monday at 1p I’m going to walk into the hospital and have my baby.” I am thinking of this now as I call the vet for the third time in two days to reschedule Bruno’s euthanasia appointment. I am scheduling his death. It feels grotesque and surreal. I am asking his death to accommodate my life. I want to do it on a Friday so we have the privacy and down time of a work free weekend to let our grief loose, like a let-go-of kite. But the kids aren’t ready. They want more time. And he’s not in pain. He’s just already so far over the threshold of being gone that he’s not really here. We’re down to bodily functions. The practical stuff. His soul, his magic, is a dim light at the end of its wick. Bruno is the same age as my youngest child. We joke that they’re litter mates. He’s grown up in the thick of our family, always under foot, on a lap, sitting shotgun. Always included. Even in the awful stuff like the wrecking ball of divorce and the displacement it brought and the ceaseless rebuilding that feels sometimes like the drip drip dripping of a lopsided castle at the edge of a hungry sea. Dogs are like houses in the way they contain the record of our days spent living. The big moments but also the quieter ones that no one else sees. Opening mail and closing cabinets. Stacking clean shirts into a drawer and peeling carrots. Turning on lights as the night crawls in. Running a bath or running the dishwasher or filling a vase of tulips with tap water. They are the keepers of our stories, grand and insignificant. They know our rituals and our truths. They were at our side, in their eternal loyalty, feeling it with us all along. I am holding Bruno in my arms tonight. His shaggy blonde hair looks like dried sea grass in winter as I move my hand through it. 
His pudgy pot belly has diminished noticeably. His bones are the branches of a tree who has let her leaves go to reveal the bareness of her being. At his full weight Bruno was 17 lbs. The same size as my daughter when she was 6 months old. That was my favorite age. Toothless and smiley and filled with awe for the little things in life. Like the wind chimes by our window or a button-eyed sock puppet. It was so easy to keep her happy then and to keep her with me.

It only recently occurred to me that Bruno has been my surrogate all these years. That when I hold him on my hip like a koala bear and dance with him in the kitchen or spoon him in bed, sharing the same pillow, I'm reliving the perfection of that stage in the motherhood journey. That 6-month-old baby girl is a teenager now, about to get her driver's license. She's grown like a sunflower while Bruno has remained a tiny violet at her feet. He's been a space holder. I haven't had to mourn the loss of the innocence, the snuggle-bear-bundled-up-baby-ness that consumed every last molecule of bandwidth in my heart. I've had a permanent baby with me for the last 14 years who never outgrew my lap or my arms. Is that why the mourning feels supernaturally hard now?

Nearly a year ago Bruno suffered a stroke. I rushed him to the vet and his prognosis was bleak. For the next three days he barely ate or drank a thing. The subcutaneous water pack on his back was the only thing hydrating him and keeping him alive. I cried in a way that felt violent. Like being punched in the face over and over again. My eyes were so swollen I couldn't fully open them. My stomach convulsed against my will as the reality of losing him took hold. Bruno was completely indifferent to my unhinged falling apart. I wrapped him in a fleece blanket and rocked him and played Enya's Shepherd Moons. I only put him down to pee in the front yard.
I was willing him to live, hating myself for making it hard for him to go and gripped by the metal teeth of grief, digging mercilessly into my heart. It hurt to breathe. On the fourth day I woke up to the sound of Bruno lapping up water at the foot of the bed like a fragile kitten. Later that morning he was willing to eat vanilla ice cream from a small spoon, his favorite food on the planet. Then he ate some tiny bits of boiled chicken and he walked with me in super slow motion at the park. He turned a corner dramatically and I turned it with him as the days turned into weeks and then months. I started to feel like he was immortal. And my horror at having lived through the dress rehearsal of his death shifted into a kind of deep rooted denial that assured me I would never have to. What we lose when we lose a being we love is so much more than can be measured. It’s more than the raw hollowness of sorting through toys that will never be played with again, a bed that will never be filled or a food dish left empty, as we round up the reminders so as not to be caught off guard and propelled unexpectedly into a wave of unpredicted grief. What we lose most acutely, in surviving our beloveds, is the wholeness and depth of our connection with them. When we love someone and they love us back, in a way that is pure and uncomplicated and real: it’s a miracle. It’s incredibly simple yet incredibly hard to come by. That kind of love can change your whole life. To be witnessed. To be wanted. To be accepted without reservation. To be depended on. To come through. Over and over again, even when it’s not easy, is to experience ourselves as deeply worthy and deeply capable. These are the ingredients of a purposeful life. I am walking with Bruno now as the sun sets over the river. I walk with him in a baby carrier across my shoulders; he’s too weak to make the trek across the rocks and broken branches at the shore. My phone is filled to the brim with pictures of this scene. 
Every time I see a thing of beauty I’m compelled to capture it despite the fact that not one of those shots comes close to doing justice to the actual feeling of standing at the edge of the water, looking out at the mountains on the other side, watching the sky turn pink-streaked and majestic. Not one of them has ever been able to trap down the ineffable feeling of what it was like to be there, with a dog at my side, watching another day toss its glitter around and sing its finale. Trusting, over and over, that the sun, as it sinks silently, gracefully, knows how to rise again.
https://medium.com/indian-thoughts/grief-denial-and-the-will-to-rise-again-835d4eb70161
['Mary Welch Official']
2020-12-18 01:03:29.367000+00:00
['Spirituality', 'Loss', 'Dog Lover', 'Love', 'Psychology']
Opening Jupyter Notebook From Any Desired Location
Introduction

Jupyter notebooks have become the preferred workspace for the majority of Python data scientists. Jupyter is an open-source project that supports interactive data science and scientific computing across programming languages. The very name Jupyter is derived from the three core programming languages it supports, viz. Julia, Python, and R. The name is also a homage to Galileo's notebooks that record the discovery of the moons of Jupiter.

Jupyter Notebook can be installed using either pip (the Python package installer) or Anaconda Individual Edition. Once installed, you can run the Jupyter Notebook via the Terminal (Linux/Mac), Command Prompt (Windows), or Anaconda Prompt by typing:

jupyter notebook

The Jupyter Notebook runs from a start-up location that depends on the operating system, so by default the user will only be able to save notebooks in that start-up location. This is a great drawback. Don't get disheartened, as there are several ways to overcome this flaw of the Jupyter Notebook. Most of the tweaks are a bit complicated; rest assured, we will be discussing the simplest solution. The start-up location differs for Linux (Ubuntu 20.04), macOS 10.14 Mojave, and Windows 10.

The Simple Solution to Open from the Desired Location

Linux (Ubuntu): Go to the desired directory and right-click to select 'Open in Terminal'. In the Terminal, type 'jupyter notebook'.

macOS: Open the desired location using Finder and right-click to select 'New Terminal at Folder'. In the Terminal, type 'jupyter notebook' (as in Linux).

Windows: The solution in Windows is a bit different from Linux and macOS.
Open the desired location in Windows File Explorer and copy it from the address bar (Alt + D selects the address bar and Ctrl + C copies the location). Now open the Anaconda Prompt and type the following command:

cd D:\desired location

If the prompt appears to stay where it was, that is because in the Windows command shell 'cd' alone does not switch drives. Enter 'd:' and the prompt will reach your desired location. Note that you must enter the drive letter of your desired location (C: for the C:\ drive, the primary partition). Afterward, type 'jupyter notebook' and the Jupyter Notebook will open from that folder.

Note that the Jupyter Notebook's home page does not list anything if the folder is empty. Once a Python 3 notebook is created, the home page will list the files. To install Jupyter Notebook for different environments, refer to my blog here:

Hope this will help you in getting things done using Jupyter Notebook in a location of your choice. Happy Coding!!!
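The drive hop above can also be avoided entirely, and the start-up folder can even be made permanent, using standard Jupyter options. A minimal sketch, assuming the classic Jupyter Notebook and a purely illustrative path of D:\projects\notebooks:

```shell
# One-off: switch drive and directory in a single step (cmd.exe's /d flag),
# then launch Jupyter from there
cd /d D:\projects\notebooks
jupyter notebook

# One-off alternative: pass the folder directly, no cd needed
jupyter notebook --notebook-dir="D:/projects/notebooks"

# Permanent: generate ~/.jupyter/jupyter_notebook_config.py once...
jupyter notebook --generate-config
# ...then uncomment and set this line inside it
# (newer Jupyter releases use c.ServerApp.root_dir instead):
#   c.NotebookApp.notebook_dir = 'D:/projects/notebooks'
```

After the config change, every plain 'jupyter notebook' launch starts in that folder, whichever shell or shortcut you use.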
https://iambipin.medium.com/opening-jupyter-notebook-from-any-location-7d2c66fdd940
['Bipin P.']
2020-11-24 10:44:55.126000+00:00
['Tips And Tricks', 'Jupyter', 'Data Science', 'Jupyter Notebook']
IT Staff Augmentation: How It Can Benefit Your Tech Startup?
Imagine you already have an outstanding idea and a perfect plan, ready to take shape as a startup. But you don't have an outstanding team to turn your idea into reality, so that outstanding idea will only remain a concept that could have had a massive impact. This is one of the major challenges startups face: they don't know how to get started with hiring the right talent. This is where the need for IT staff augmentation arises, both for productivity and for keeping your software development metrics on track. In most successful tech startups, people are the factor that can make or break your business. Thus, IT staff augmentation plays an important role when handled wisely. Let us understand it briefly before looking at how IT staff augmentation can benefit your tech startup.

What is staff augmentation?

Staff augmentation is a popular outsourcing strategy wherein you hire highly skilled and experienced talent on a contract rather than permanently. Simply put, you get highly skilled and qualified employees from software development companies offering IT staff augmentation services, who help you meet all your project objectives. IT staff augmentation is emerging as a popular outsourcing strategy, especially in the Covid-19 pandemic, for employing a dedicated workforce. Many startups are looking to hire expert talent through IT staff augmentation to meet their deadlines and aggressive project requirements.

Now that you understand IT staff augmentation, let's see how it can benefit your tech startup. Here are some fantastic benefits of staff augmentation that will help boost your tech startup:

1. Highly Flexible and Cost-Effective
2. Simplifies IT Recruitment
3. Top-notch quality with exceptional skills
4. Faster time to hit the market
5. Increased Output and Team Size

1. Highly Flexible and Cost-Effective

Probably the most obvious and the biggest advantage of IT staff augmentation is cost savings. You might be wondering how. Startups get the flexibility of dispersed teams in terms of flexible pay and flexible hours, so you have the freedom to scale your staff up or down based on your requirements. You pay your augmented IT staff only for the time they have worked, on either an hourly or monthly basis. As your augmented team works remotely, you won't be required to bear the everyday overhead expenses of hiring a new employee. Moreover, you won't have to worry about the costs of benefits offered in a salary package, as you are working with temporary augmented staff. All of this reduces your overall cost.

2. Simplifies IT Recruitment

If you think of hiring independent contractors on your own, it can work, but there are many risks involved. It is also time-consuming and expensive, as you would need to hire new HR staff for dedicated team placement. This is where an IT staff augmentation company with a team of experts can remove all your headaches and simplify the whole recruitment procedure by finding experts who are the right fit. These firms have extensive experience in HR and offer you the right talent in an organized and systematic way. You just need to brief your partner on your staffing requirements, and the IT staff augmentation firm will do the rest for you.

3. Hire Top-notch Quality with Exceptional Skills

Hire skilled expertise instead of hiring randomly. Many times, startup owners hire irrelevant people in a hurry without understanding whether or not they will serve the right purpose. Suppose one aspect of your startup's development needs a very specific coding language. Would you hire a permanent employee with expertise that you'll probably only use one time? No, would be your answer! As a tech startup, you will be looking for people with expert technical skills, and finding and hiring such talent on your own would be a daunting task. Therefore, IT staff augmentation is a reasonable solution, as it offers you the exact technical skill set for the exact amount of time you require it. No less and no more! One more perk you get with IT staff augmentation is that you can hire a whole team of developers with varied expertise. It will drastically speed up your development time and ultimately get you to market faster.

4. Faster Time to Hit the Market

With IT staff augmentation services, you save the time you would otherwise spend recruiting and hiring talent for your startup. As discussed above, since your development time is reduced with staff augmentation, you reach the market faster. Ultimately, you will be ahead of your competitors in designing and developing your minimum viable product as well as releasing your final product to market. As a startup, what more would you wish for?

5. Increased Output and Team Size

As you know, with IT staff augmentation you have lower development costs. With that lower cost, you can hire more people and increase your output. How? Let me explain. For the same money it would cost to employ 10 US-based developers, you could hire 15 Indian developers with similar skills and experience. That is a great way to speed up your development process and increase your output.

Wrapping Up

When you choose IT staff augmentation, you enjoy benefits like cost-effectiveness, flexibility, faster time to market, and many more that boost your startup. Moreover, along with these benefits, you keep complete control over your project's security, quality, and resources. So, if you are the owner of a tech startup, using an IT staff augmentation service is a great choice that can help your startup reach for the sky!
https://medium.com/devtechtoday/it-staff-augmentation-how-it-can-benefit-your-tech-startup-721ec9d2ad9b
['Bharti Purohit']
2020-08-10 10:38:13.688000+00:00
['Startup', 'It Staff Augmentation', 'It Outsourcing', 'Tech Start Ups', 'Staff Augmentation']
Meeting The Shepardess: (La Pastora) Salvia Divinorum
During the late 1990s, my partner and I grew salvia divinorum. A friend brought us a cutting, which we easily cloned. Salvia thrives in Seattle weather, and soon our rear windows were lined with shelves of salvia plants. Later, we grew a big patch behind the house. Salvia was legal and purportedly psychedelic. I tended it carefully and read everything I could about the supposed effects. During this time many people in the Pacific Northwest were interested in psychedelics. We considered ourselves urban shamans and psychonauts. I undertook my relationship with salvia divinorum, The Shepardess, during this time of curiosity and permissiveness.

The primary active component, salvinorin A, is found in the leaves. The Mazatec indigenous tribes of Mexico use it for divination. During the 1990s researchers asserted that salvinorin A was the strongest natural hallucinogen known to man. I wanted to approach the salvia with care. Seasoned psychedelic users know that some psychoactive substances have more of a presence than others. LSD feels somewhat clinical; users experience themselves as the author of their trip, which seems to come from within them. Psilocybin mushrooms, on the other hand, have more of a "personality" or presence. The mushrooms can feel like witnesses, or guides, during the trip. Everything I read attested to the striking presence of the salvia spirit, which made me a little nervous. Salvia isn't known for being a pleasure or recreational drug. I hoped she would like me.

What follows is a series of trip reports I made during these explorations.

Salvia Trip Report #1 ~ August 14, 1997 (Three large bong hits in succession)

It began with a physical sense of motion, of unidirectional rotation. This turning was not just in my body superficially, but in every molecule and even in the air surrounding my form. I was in awe, overwhelmed by this feeling that everything, all of creation, was turning on an axis of celestial proportions.
The persistence and profundity of the rotation were mildly disturbing because it also produced an additional effect: Waves of electrical energy passed through my body at rhythmic intervals, like an alien force scanning me with invisible lasers. The field of electricity passed through me like a pulse, moving from left to right, just like the rotation. I was convinced that these energies were impinging on me at a sub-atomic level. The rotation was constant, pervasive, overarching — something unchanging, as if it had always been there. What had changed was my ability to sense it. Looking around I saw that the geometry of the room was wrong, warped, two-dimensional. It was still a room, I just wasn’t seeing it as I normally did. The spatial oddities didn’t alarm me because I remembered that I was in a safe, familiar place. The presence of the salvia was not overt, it didn’t anthropomorphize itself for me, or “speak.” If anything I felt its disinterest or at least the lack of an emotional component. Despite the strangeness, I found myself fascinated by the effects. After a few moments, I noticed that the shapes and colors before my eyes did not change regardless of whether or not my eyes were open. I experimented with this for a while, opening and closing my eyes. The images persisted either way. How had my eyelids become superfluous? What “eye” was I looking through? But, the rotation was ceaseless and I could not ignore it. The waves of quantum bombardment were relentless, I felt my form being sliced through a billion trillion times, always in one direction. The asymmetry of it made me both physically and psychologically uncomfortable. Left to right, without end! I couldn’t stand it anymore, so I shifted a quarter turn to the left, facing directly into the rotation, so that my body received each collapsing waveform head on. 
Relieved by the symmetry I'd achieved from re-positioning my body, I stretched my arms upward in a gesture of worship and awe for this inexplicable source of motion and energy. I wondered if it was the literal spin of the earth on its axis. About 15 minutes later, a pinpoint appeared in the air before my eyes. It grew slowly, spinning and opening to the size of a small hole. Like the rotation of my trip, the pinpoint had a centrifugal force. It seemed to be a drain-hole in the fabric of space-time for this swampy glow my body had begun to emit. The glow left my body and brain, moving like an ethereal liquid through the air, toward the hole in front of my eyes. It swirled a little as it approached the exit point, then disappeared into the tiny vortex. This spot held its position just a little to the left in my field of vision, until all the glowing ether was gone. I laughed and my lover looked at me quizzically. "I can see myself coming down," I told him. The sense I had of the earth's rotation affecting my physical form lingered for another 20–30 minutes and then finally faded. I was grateful when it was gone, and I'm not sure I ever want to feel it again.

Salvia Trip Report #2 ~ September 21, 1997

I'd traveled with a friend to the Okanogan Barter Faire in central Washington; he'd given me a small tent to use as my own. On a relatively quiet and cloudy afternoon, I crawled inside it alone and decided to try smoking salvia again. I drew three large bong hits and lay on my back. The colors and shapes comprising the roof of the tent began shifting into a beautiful kaleidoscope pattern. I closed my eyes and the trip changed. My body became the earth and a field of salvia plants grew up from my body. Their roots plunged down into my flesh, and their leaves rose above me. They grew taller and taller, then fell over into my form, sending fresh roots down and rising up again. I witnessed the cycles of their growth.
The life cycle sped up and I experienced generation upon generation of salvia plants living, taking nourishment from my fertile body, growing, falling, dying, and being reborn as new shoots burst up through the soil, my body. The salvia’s mood this time felt happy, joyous, and I did too. The kaleidoscope of the tent’s interior resolved into recognizable patterns again, the breeze made a beautiful song in the trees that faded as I came down. The effects were gone within 30 minutes. (In December of the year 2000, my partner and I went to the 3rd World Conference on Salvia Divinorum at the Breitenbush Hot Spring Resort in Oregon.) It was there that I learned the proper way to ingest salvia, according to the Mesoamerican tribes that grow and use it. They take it orally, chewing it as a quid, and they experience it in total darkness and silence, concentrating on the visions “La Pastora” provides them. When we returned home I decided to follow the instructions as closely as I could to see what would happen. Salvia Trip Report #3 ~ December 18th, 2000 The day before I’d taken eight fresh leaves from our salvia plants, carefully washed each one, and matched it with a similar-sized leaf. Face to face, I stacked them in pairs, couplings that are meant to symbolize intercourse. I carefully wrapped them and placed them in the refrigerator. At 5:30am the alarm woke me and I readied the room for my ceremony. I selected three items of importance from my altar and placed them on the bedside table. I quieted the environment and meditated on my intentions: “healing and being a healer.” I would take the salvia in the way of the Mazatec people who cultivate it and use it for divination. I took the first two pairs, 4 leaves, and rolled them into a cigar shape. I began to gnaw on the cylinder of leafy greens until it became a slurry in my mouth. It tasted disgusting and bitter, but I held onto the solution for approximately 5 minutes then spat it out. 
Rolling the next four leaves in the same manner, I chewed and swished that solution around for an additional 5 minutes, then spat it into a cup. Afterward I turned off the light, lay on my back in silent darkness, and closed my eyes. A turning spiral feeling began on all sides of my body, my hands sank into my thighs, and the planes that define the perimeters of my form changed. I felt like a bird or flying creature moving in swirling, circular patterns swooping over a desert landscape. I was watching the bird while also experiencing myself as a bird. I seemed to be omnipresent. Suddenly, it turned upward and headed straight for the sky. I was viewing its ascension from an extra-terrestrial perspective, looking down as it sped toward me. The vision faded and some judgmental part of myself began to wonder, had I chewed the quid long enough? This irritating thought made me sit up, find the cup, and reintroduce its bitter contents into my mouth. I lay for another 10 minutes or so with the quid lodged between my teeth and gums, then I swallowed it all in one big gulp. Yuck. Closing my eyes, I sank deeper into salvia space. The bird was still there but was now moving about in the vast inner space of my mind. I sensed it wondering about the nature of its confinement. It flew off to my left in the space of my inner eye, as far as it could go, seeking the edges of my psychic perimeter. Having found the limits, it swooped in front of me and shot off to the right as far as it could go. I felt it plumbing the boundaries of my conscious mind. Those boundaries slowly came to my awareness as the strange bird scouted. It helped me get a sense of this odd psychic dimension, and interestingly there was no hard edge. Rather, this "space" was cloud-like, fading gradually at the perimeter; it was also quite large, much larger than my proprioception of physical mass and volume. The bird flew swiftly on its reconnaissance mission, and I was with it — both bird and witness.
Almost immediately after the bird's flight resolved, I saw a clear image of a plant. The image was simplified, more like a drawing or animation of a plant. It reminded me of 'Seymour' in "Little Shop of Horrors" and then, without hesitation, it ate me. I was chewed up and consumed by the plant. Surprisingly, I found the experience quite pleasant. The nature of this reflection was clear to me. I ate the salvia, and now it had eaten me. This made me happy and more comfortable. I felt that now the salvia and I were better friends. The plant multiplied itself, dividing into four smaller plants. They reminded me of fresh soybean pods with fuzzy teeth, but they moved like caterpillars. There was a ledge alongside a window, and the four plants seated themselves there, nodding and smiling at me with cartoon-style expressions. They were reminding me that they had been cloned and then lived as potted plants in our back window. I felt their acknowledgement, if not quite approval, for the way we'd raised them. The salvia enjoyed being my body. It was enjoying the playground of my human form. Their movements gave me these funny itches and my mind attended to them, chasing them as they scampered along the surfaces of my skin. I tried to catch them, pursuing one until it lodged on my right nostril. Another scurried to the outer side of my labia. I had to scratch… …The salvia found my erogenous center, and I suddenly became aroused. Even pressing into the bed sent little bubbles of pleasure all throughout my body. It seemed that the impetus came not just from me, but also from the salvia. My body sought its satisfaction without my conscious consent, and within a few seconds, I climaxed. No psychedelic experience I'd ever had before provoked a spontaneous orgasm. I lay in afterglow, one with the salvia as subtle information passed between us. This was the teaching I'd sought.
I had asked salvia a question about healing, but the plant does not answer in words: It plays a movie for the mind’s eye. It reflected the parts of myself that are insecure, my soft spots, which somehow helped me see myself better. There is nothing mammalian about the interface between salvia and human consciousness, so interpretation of the experience can be hard to put into words. My inability to describe the nature of these salvia insights was obvious even before I came down. Trying to capture what was most sublime about this experience would prove to be a difficult and somewhat futile exercise. Afterthoughts: The plant did not seem overly concerned about my question, or “healing” — although the orgasm might be considered a gift along those lines. Eating the salvia in the traditional way was better. As salvia is neither human nor animal, communication with it feels awkward and alien — as plant-to-human interaction would be. (In the year 2001 my partner died of pancreatic cancer. I moved out of our home and downsized into a one-bedroom city apartment. It was a dark time for me. I barely worked and stayed home alone, grieving in solitude.) April 15, 2002 ~ Salvia Trip Report #4 (Smoked three successive bong hits.) I decided to try it again, home alone. I took three hits and then expectantly looked around. My room appeared normal. Nothing changed, and I wondered what was happening with the salvia; a tiny bit of disappointment entered my mind. In my stream of consciousness a thought rose to the surface, mutely asking, “Is that all?” Compared to my trip a couple of years ago, I guess it didn’t seem like much. “Is that all?” Suddenly, the salvia seized upon the thought and turned it against me, like a giant club. It struck my brain; the words became a weapon, a heavy bat that slammed back into my consciousness again and again and again: “IS THAT ALL! IS THAT ALL! IS THAT ALL! IS THAT ALL! IS THAT ALL! IS THAT ALL! IS THAT ALL! IS THAT ALL! IS THAT ALL! 
IS THAT ALL! . . .” it shouted back. The salvia hit me with the words over and over; the sensation was physical, not auditory. It went on and on, with no sign of stopping or slowing. It was both physically and emotionally painful. I was grateful when, after about 10 minutes, it became quieter; 10 minutes after that it was only a murmur, and eventually it faded away. That was the last time I did salvia. “Set and setting” are a big part of our psychedelic experiences, and I was depressed when I took it this final time. Also, I smoked it, rather than carrying out a traditional ceremony as I had before. Whatever the reason, it was traumatic enough to permanently extinguish my curiosity. I have a deep respect for The Shepardess, but I don’t need to meet her again.
https://medium.com/an-idea/meeting-the-shepardess-la-pastora-salvia-divinorum-58d0aac97ac4
['Andrea Juillerat-Olvera']
2020-12-10 15:22:31.116000+00:00
['An Idea', 'Nonfiction', 'Experience', 'Psychedelics', 'Journal']
Depth Estimation
Conventional displays are two-dimensional. A picture or a video of the three-dimensional world is encoded to be stored in two dimensions. Needless to say, we lose the information corresponding to the third dimension: depth. A 2D representation is good enough for most applications. However, there are applications that require information in three dimensions. An important one is robotics, where three-dimensional information is required to accurately move the actuators. Clearly, some provisions have to be made to recover the lost depth information, and this blog explores such concepts. How do we estimate depth? Our eyes estimate depth by comparing the images obtained by the left and right eye. The minor displacement between the two viewpoints is enough to calculate an approximate depth map. We call the pair of images obtained by our eyes a stereo pair. This, combined with lenses of variable focal length and general experience of “seeing things”, allows us to have seamless 3D vision. Stereo image pair formed due to the different viewpoints of the left eye and the right eye. (Source) Engineers and researchers have taken this concept and tried to emulate it to extract depth information from the environment. There are numerous approaches to the same outcome. We will explore the hardware and software approaches separately. Hardware: 1. Dual camera technology Some devices have two cameras separated by a small distance (usually a few millimeters) to capture images from different viewpoints. These two images form a stereo pair and are used to compute depth information. Dual camera separated by a small distance on a mobile phone. (Source) 2. Dual pixel technology An alternative to dual camera technology is Dual Pixel Autofocus (DPAF). Calculation of depth using DPAF on the Google Pixel 2. 
(Source) Each pixel is composed of two photodiodes separated by a very small distance (less than a millimeter). Each photodiode captures the image signal separately, and the two signals are then analyzed. This distance of separation is surprisingly sufficient for the images produced by the photodiodes to be treated as a stereo image pair. Notably, the Google Pixel 2 uses this technology to calculate depth information. 3. Sensors A good alternative to multiple cameras is to use sensors that can infer distance. For instance, the first version of the Kinect used an infra-red (IR) projector to achieve this. A pattern of IR dots is projected onto the environment, and a monochrome CMOS sensor (placed a few centimeters apart) receives the reflected rays. The difference between the expected and received IR dot positions is used to produce the depth information. Kinect sensor in action. (Source) LIDAR systems fire laser pulses at objects in the environment and measure the time it takes for these pulses to be reflected back (also known as time of flight). They additionally measure the change in wavelength of these laser pulses. This can give accurate depth information. LIDAR in action. (Source) An alternative and inexpensive solution is to use ultrasonic sensors. These sensors usually include a transmitter that projects ultrasonic sound waves toward the target. The waves are reflected by the target back to the sensor. By measuring the time the waves take to return to the sensor, we can measure the distance to the target. However, sound-based sensors may perform poorly in noisy environments. A typical low-cost ultrasonic sensor. (Source) Software: Using additional hardware not only increases the cost of production, but also makes the depth estimation methods incompatible with other devices. Fortunately, software-only methods to estimate depth do exist, and they remain an active research topic. 
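The hardware approaches above reduce to two simple geometric relations: stereo triangulation (dual camera, dual pixel) and time-of-flight ranging (LIDAR, ultrasonic). A minimal sketch in plain Python — the numbers in the usage comments are made-up examples, not the specs of any particular device:

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Stereo triangulation: a point's depth is Z = f * B / d, where
    d is the horizontal pixel shift of the point between the two views,
    f the focal length in pixels, and B the baseline (distance between
    the two viewpoints)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px


def distance_from_time_of_flight(round_trip_s, wave_speed_m_s):
    """Time-of-flight ranging: the pulse travels to the target and back,
    so the one-way distance is speed * round-trip time / 2."""
    return wave_speed_m_s * round_trip_s / 2.0


# A feature shifted 50 px between two cameras 10 cm apart,
# with a 1000 px focal length, lies 2 m away.
print(depth_from_disparity(50, 1000, 0.1))        # 2.0 (meters)

# An ultrasonic echo (speed of sound ~343 m/s in air) returning
# after 10 ms corresponds to a target roughly 1.7 m away.
print(distance_from_time_of_flight(0.01, 343.0))  # ≈ 1.715 (meters)
```

The same `distance_from_time_of_flight` applies to LIDAR with the speed of light in place of the speed of sound; notice how a larger disparity or a shorter echo both mean a closer object.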
Below are some of the popular methods to estimate depth using software: 1. Multiple image methods The easiest way to calculate depth information without additional hardware is to take multiple images of the same scene with slight displacements. By matching keypoints common across the images, we can reconstruct a 3D model of the scene. Algorithms such as the Scale-Invariant Feature Transform (SIFT) are excellent at this task. To make this method more robust, we can measure the change in orientation of the device to calculate the physical distance between the two shots. This can be done using the accelerometer and gyroscope data of the device. For instance, Visual-Inertial Odometry is used in Apple’s ARKit to calculate the depth and other attributes of the scene. User experience is refined, as even slight motions of the device are enough to create stereo image information. 2. Single image methods There are several single-image depth estimation methods as well. These methods usually involve a neural network trained on pairs of images and their depth maps. Such methods are easy to interpret and construct, and provide decent accuracy. Below are examples of some popular learning-based methods. A. Supervised learning based methods Supervised methods require some sort of labels to be trained. Usually, the labels are pixel-wise RGB-D depth maps. In such cases, the trained model can directly output the depth map. Commonly used depth datasets include the NYUv2 dataset, which contains RGB-D depth maps for indoor images, and the Make3D dataset, which contains RGB-D depth maps for outdoor images. You can check out this GitHub repo for information on more datasets. Sample image (left) and its depth annotation in RGB-D (right). (Source) Target labels need not be pure depth maps; they can also be a function of depth maps, such as hazy images. 
Hence, we can use hazy and haze-free image pairs for training the model, and then the depth can be extracted using a function that relates a hazy image with its depth value. For this discussion, we will only concentrate on methods that use depth maps as target labels. Autoencoders are among the simplest types of networks used to extract depth information. Popular variants involve U-Nets, which are convolutional autoencoders with skip connections linking feature maps from the downsampling arm (outputs of convolutions) to the upsampling arm (outputs of transposed convolutions). Standard U-Net architecture. (Source) Improvements can be made over the basic structure. For instance, in the paper “Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture”, multiple neural networks are used, with each network operating on input at a different scale. The parameters of each network, such as kernel size and stride, are different. The authors claim that extracting information from multiple scales yields higher quality depth than single-scale extraction. An improvement over the above method is presented in “Structured Attention Guided Convolutional Neural Fields for Monocular Depth Estimation”. Here the authors use a single end-to-end trainable model, but fuse feature maps of different scales using structured attention guided Conditional Random Fields (CRFs) before feeding them as input to the last convolution operation. Other methods treat depth extraction as an image-to-image translation problem. Conventional image translation methods are based on the pix2pix paper. These methods directly extract the depth map given an input image. Image translation in action. (Source) Similarly, improvements can be made over this structure as well. Performance can be enhanced by improving GAN stability and output quality, using methods like gradient penalty, self-attention and perceptual loss. B. 
Unsupervised learning based methods It is hard to obtain high-quality depth datasets that account for all possible background conditions. Unsurprisingly, enhancing the performance of supervised methods beyond some point is difficult due to the lack of accurate data. Semi-supervised and unsupervised methods remove the requirement of a target depth image, and hence are not limited by this constraint. The method introduced in “Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue” involves generating the right image for a given left image in a stereo image pair (or vice versa). This can be performed by training an autoencoder as in the supervised scenario. Our trained model can then output a right-side image for any left-side image. Now, we calculate the disparity between the two images, which in our case is the displacement of a pixel (or block) in the right image with respect to its location in the left image. Using the value of disparity, we can calculate the depth, given the focal length of the camera and the distance between the two cameras. Calculation of depth using disparity. Baseline is the distance between the two cameras (right and left images). (Source) The above method is considered truly unsupervised when the algorithm can adapt to non-stereo image pairs as well. This can be done by keeping track of the distance between the two viewpoints using the sensor data on the device. Improvements can be made over this method, as done in “Unsupervised Monocular Depth Estimation with Left-Right Consistency”, where the disparity is calculated with respect to both the left image and the right image, and the depth is calculated by considering both values. Limitations A limitation of learning-based methods, especially supervised ones, is that they may not generalize well to all use-cases. Analytical methods may not have enough information to create a robust depth map from a single image. 
However, incorporating domain knowledge can aid extraction of depth information in some cases. For instance, consider Dark Channel Prior based haze removal. The authors observed that most local patches of hazy images have low-intensity pixels in at least one channel. Using this observation, they created an analytical haze removal method. Since haze is a function of depth, by comparing the dehazed image with the original, the depth can be easily recovered. A clear limitation of unsupervised methods is that they require additional domain information, such as camera focal length and sensor data, to measure image displacement. However, they do offer better generalization than supervised methods, at least in theory. Applications of depth estimation 1. Augmented reality One of the key applications of depth estimation is Augmented Reality (AR). A fundamental problem in AR is to place an object in 3D space such that its orientation, scale and perspective are properly calibrated. Depth information is vital for such processes. An AR app that can measure the dimensions of objects. (Source) One impressive application is IKEA’s demo, where you can visualize products in your home using an AR module before actually purchasing them. Using this method, we can visualize a product’s dimensions, as well as view it from multiple angles. 2. Robotics and object trajectory estimation Objects in real life move in 3D space. However, since our displays are limited to two dimensions, we cannot accurately calculate motion along the third dimension. With depth information, we can estimate the trajectory along the third dimension. Moreover, knowing the scale values, we can calculate the distance, velocity and acceleration of the object with reasonable accuracy. This is especially useful for robots to reach or track objects in 3D space. 3. Haze and fog removal Haze and fog are natural phenomena that are a function of depth: distant objects are obscured to a greater extent. Example of haze removal. 
(Source) Hence, image processing methods that aim to remove haze must estimate the depth information first. Haze removal is an active research topic, and there are several analytical and learning-based solutions. 4. Portrait mode Portrait mode on certain smartphones involves focusing on objects of interest and blurring other regions. Blur applied as a function of depth creates a much more appealing image than uniform blur. Blurred image (right) created using portrait mode. (Source) Conclusion Depth estimation is a challenging problem with numerous applications. Through the efforts of the research community, powerful and inexpensive solutions using machine learning are becoming more commonplace. These and many other related solutions will help pave the way for innovative applications of depth estimation in many domains.
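As a concrete footnote to the haze-removal discussion, the dark channel prior mentioned earlier can be sketched in a few lines. This is a rough NumPy illustration, not the authors' full pipeline; the patch size and the test image are illustrative assumptions:

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel prior: for each pixel, take the minimum over the
    RGB channels, then the minimum over a local patch. For haze-free
    outdoor images this value tends toward zero; haze lifts it roughly
    in proportion to depth."""
    min_rgb = image.min(axis=2)            # per-pixel minimum over channels
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty_like(min_rgb)
    h, w = min_rgb.shape
    for i in range(h):                     # local minimum filter
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# A scene where every patch has at least one dark channel yields a
# near-zero dark channel map, matching the observation for haze-free images.
img = np.random.rand(32, 32, 3)
img[..., 2] *= 0.05                        # blue channel kept nearly dark
print(dark_channel(img, patch=5).max())    # small (close to zero)
```

In the full method, this map is used to estimate the atmospheric light and transmission, and transmission in turn gives the relative depth mentioned in the article.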
https://medium.com/beyondminds/depth-estimation-cad24b0099f
['Bharath Raj']
2019-02-17 11:14:49.731000+00:00
['Machine Learning', 'Artificial Intelligence', 'Data Science', 'Neural Networks', 'Deep Learning']
How Do You Help a Friend Through the Loss of Her Husband?
Literally Literary and The Writing Cooperative prompt He spoke of the death as if I already knew. Earlier this week, Matt, my colleague, had told me about a friend of his who had delayed his retirement, working well into his late 60s. “He finally retired in July, then discovered he has pancreatic cancer,” Matt told me then. “He likely won’t see Christmas.” Now when Matt referred to a local man who died after being hit by a tree limb, it was yet another piece of mounting evidence that he should retire sooner rather than later. Matt was 65, working wonders in the industry, and he found his company wanting to expand — by expanding his role — more and more. “I should be slowing down at my age,” he told me, “not accelerating.” “Wait. Who died?” I had to ask. It was a city official’s husband. He’d been killed, leaving Jaclyn widowed with four children, ages 4 to 14. I had met her, but I didn’t know her personally. Because I was sympathizing with Matt, I had filtered Jaclyn’s horrific news through his lens — that Matt had better quit soon so he could enjoy retirement before he died. I didn’t connect Jaclyn’s grief with my story in any way. An Uncommon Networking Conversation That evening, I attended a networking event and met Dana. We exchanged the perfunctory “what do you do” lines, talked about where we were from and how we got to our mutual university employer. Dana had grown up in our town but went away to college and never planned to return. Yet here she was. Her daughter, now 21, also planned to leave, but she had chosen to attend our university and remained here. “Do you have any children?” Dana asked me. Her subsequent questions launched me into my own story of death and hope. 
The Story I Love to Tell Widowed in my 20s and writing for the local newspaper, I published a column about the sudden loss of my husband Bill, titled “No Greater Loss to Be Feared.” Steve, who would lose his wife to melanoma a month later, read that column and understood that “it is better to have loved and lost than never to have loved at all.” He would follow my articles in The Sun. Steve’s situation was different. His wife’s death was anticipated. His grieving would be both stifled and magnified by parenting four children, ages 15 months to 7 years. I had never met his wife, but she and I had a mutual friend, Becky, who had told me about the situation and asked me to pray for this man and his four children. I did. Two years later, Becky ran into Steve and the kids at the YMCA pool. “Are you dating anyone?” she had asked. “No, I haven’t found the right person,” he had responded. “Do you have anyone in mind?” “Do you know Sara Olson?” (Olson was my first married name.) “Does she work for The Sun?” Later Becky called me to ask if I would be willing to date a widower with four children. Not just “a widower with four children” but the very ones I had covered in prayer! We met on a blind date and married with some form of rapid romance in between. Nothing could have prepared me for instant motherhood and marriage to a grieving man. Nothing about the union was easy, but it was and is a treasure 25 years later. Dana stood enthralled as I shared the story I love to tell. “This sounds like a movie!” she told me. Then Dana sobered. “Your story gives me hope for a friend of mine, Jaclyn. She lost her husband yesterday,” she said. “She has four children. We used to work together.” Ah, Jaclyn. The same Jaclyn Matt had mentioned. Widowed. With four children, now fatherless. I embraced her pain through the lens of what Steve and I individually had experienced instead of the lens of Matt’s retirement dilemma. My heart broke for this new widow. 
“What can I do for her?” Dana asked me. The Deeper Story Tucked away deep in my heart were the memories exulansis had muted during all those hours, days, weeks, and months of overwhelming loss. Who could have understood my pain? Many of my thoughts secreted in my journal and told to God in prayer long ago I’d never expressed to another human. I knew I couldn’t and wouldn’t ever say I understood Jaclyn’s loss; I remembered how I’d resented those who tried to tell me they “knew just how I felt” those many years ago. The pain of losing a spouse is so individual. I couldn’t imagine her vast grief at losing a spouse with the burden of helping young children grieve the loss of their father. The bulk of the story I had shared with Dana — the clearly God-paved path Steve and I walked toward each other as we put grief behind us and dared to love again — merely touched the surface of the pain of my widowhood and centered on the magical provision of new love after loss. It hadn’t captured the pain or my journey through grief. I suddenly was aware that remembering and sharing those details might help Jaclyn. “Be there for her in three months when everyone else will think she’s gotten on with her life,” I began, pulling from my forgotten sea of memories. “Grief for me was like an ocean of waves. Instead of getting smaller, the troughs of loneliness, loss, and despair got deeper over time. I had highs wherein I felt almost normal, as if I’d put grief behind me, but they made the lows seem even worse.” My arm mimicked the sea’s undulations as I spoke, and Dana nodded. Obviously, time had healed my wounds, but healing takes more time than most realize. Yet some friends stayed the course. My friend Kim came to mind, the knack she had for calling just when I was having an emotional breakdown. I remember her interruptions, her voice saying through the phone, “I’ll be right there.” And she was. 
She would appear on my doorstep 15 minutes later and steal me from my isolation to her family-warmed home. I would spend many days and evenings with Kim and Gary and their children. Bill and I had taught their 4-year-old son Joel in children’s church. Joel was the reason I chose a funeral plot under an oak tree in the local cemetery. “He’d like that the squirrels will play near him,” he had said. Joel felt comfortable asking questions that forced me to feel and face grief. “Where is Bill?” Joel would ask. “Is Bill with Jesus?” Kim handled me more gently, simply handing me a hot cup of tea laced with fresh lemon and walking with me to her wooden swing, where we’d dangle and sip and talk and be silent, sometimes crying, sometimes laughing. “I think I would be OK if I could just talk to Bill about this,” I had told her through my tears. Another friend, Esther, invited me to live with her family until a room opened in a friend’s apartment. Esther was pregnant with child number four, and because I worked evenings, I spent a lot of time with Esther’s children, doing crafts, going to the park, and cooking. Being part of a family’s daily life was healing, even though it also made me acutely aware that I had lost not just Bill but also the children and family life we had hoped to have. I likened my loss to having a “Bill-shaped vacuum” that nothing else could fill. I missed him; I also missed being known by him. “You don’t just lose a ‘loved one’,” I told Dana, “You lose a life. The life you had together. Your hopes, your dreams.” That awful night when Bill died, a very pregnant Judy and Ben, our friends, appeared at my door at 2 in the morning. (The doctor, who had called me to tell me “his heart expired” and handed me off to his nurse to handle the details, had called them, as I had no family in town.) Ben and Judy took me to their house, where Judy stayed awake with me, letting me talk and cry. 
In these and other ways, my friends Kim, Esther, and Judy were among the many heroes who helped sustain me as I walked through the grieving process. My heart hurt for Jaclyn, and I could only hope that what I’d shared with Dana might somehow benefit the newly bereft family. “I’ll be there for Jaclyn,” Dana assured me as she thanked me for sharing my experience. With that, we said goodbye, and I left, sensing the value of sharing what I had left unspoken.
https://medium.com/literally-literary/how-do-you-help-a-friend-through-the-loss-of-her-husband-6d232cb0d63a
['Sara Dagen']
2020-01-09 01:50:58.142000+00:00
['Nonfiction', 'Prompt', 'Loss', 'Exulansis', 'Grief']
Tech Trends: Mixing Water and Electricity
Waterjet cutters use a focused stream of liquid and abrasive particles to slice through metal, stone, or nearly any other material with precision. Wazer’s elegant desktop machine will be the first to open up the creative possibilities offered by this industrial technology to designers, artists, and makers. A different kind of waterjet, Bixpy is a modular propulsion system that adds an electric boost to kayaking, paddleboarding, and even swimming. Aside from making you feel like a secret agent, it will let you go farther and deeper than you could on your own steam. Whether competing in a triathlon or just getting through a long workday, hydration is essential to performing your best. This elegant wearable device will help you track activity, sleep, and whether you’re drinking enough water (spoiler: you probably aren’t). The ubiquity of flying drones has made it easy to shoot video from the sky, but what about seeing the world that lies beneath the water’s surface? This beginner-friendly, camera-equipped swimming robot will let you explore new depths on your next outdoor adventure. Its compact, modular design means that it even fits in a backpack. Perhaps the most important thing we can do with water is use less of it. This conservation-oriented faucet attachment creates a gentle mist for those times when a gushing stream isn’t necessary, reducing water consumption by up to 98%. ___________________________________________________________________ This collection was first featured in our Invent newsletter, a twice-monthly look at the most interesting new projects and trends in Design & Technology on Kickstarter. Sign up to stay in the loop:
https://medium.com/kickstarter/tech-trends-mixing-water-and-electricity-67123de8295f
[]
2018-11-28 19:00:43.017000+00:00
['Kickstarter', 'Technology', 'Design', 'Tech', 'Innovation']
Build High-Performance Services With gRPC and .NET 5
Photo Credit: Markusspiske .NET 5 has been released, and it comes with a lot of exciting features, new technologies and performance improvements. It unifies the .NET environment and replaces .NET Core. In this blog, we’ll focus on building high-performance services using gRPC and .NET 5. Why gRPC? gRPC is not just another buzzword being thrown around. It’s a popular open-source RPC framework. It has been around for a while, but it’s built on modern technologies like HTTP/2 and Protobuf. It’s platform-independent, as it offers a language-neutral contract language, and it’s designed for high-performance modern apps. How does it compare with WCF and REST? WCF is also an RPC framework and achieves similar goals, but there are some key differences: gRPC uses HTTP/2 (you can learn more about HTTP/2 in detail here). It uses a faster binary protocol, which makes it more efficient for computers to parse. It supports multiplexing over a single connection (meaning multiple requests can be sent without requests blocking each other). It uses Protobuf, which provides faster serialization/deserialization and also uses less bandwidth than text-based formats. There’s much better tooling in .NET 5 to automatically generate boilerplate code that hides the remoting complexity, so you may focus on business logic. Streaming allows multiple responses to be sent to the client, as well as client-to-server and bi-directional streaming. It’s designed for low latency and high throughput, so it’s great for lightweight microservices where performance is critical. Deadlines/timeouts and cancellation allow the client to specify how long it is willing to wait for an RPC to complete. Inter-Process Communication gRPC calls are usually sent over TCP sockets. However, if the client and server are on the same machine, gRPC can use custom transports like Unix domain sockets and named pipes in IPC scenarios. 
Getting Started Install the .NET 5.0 runtime and SDK. Update Visual Studio 2019 to 16.8 or later (there’s a C# extension that supports .NET 5.0 and C# 9 for Visual Studio Code). Create your first gRPC service: 1. Open Visual Studio (16.8) and create a new project. 2. Select the gRPC project template. 3. Select ASP.NET Core gRPC Service (you can see “.NET 5.0” in the framework dropdown if it’s installed correctly). 4. Enable Docker support if you want to containerize this service (to run it as a Docker container). This will create an ASP.NET Core app with a gRPC service. Let’s explore the solution folder — Protos -> greet.proto. What is a proto file? Since gRPC is a contract-first RPC framework, the contract is defined in the proto file — which is the heart of gRPC. It’s a language-agnostic way of defining your APIs and messages. This proto file contains the service definition — which in our case is Greeter. SayHello is the method that takes a request and returns a response. HelloRequest and HelloReply are declared as messages; they can have properties, similar to classes, and simply define the strongly typed data that will be transmitted. Let’s explore the gRPC service (GreeterService.cs in our case). The service implements the method defined in the proto file above: it takes a HelloRequest object as a parameter and returns a HelloReply in response. (Advanced: it also receives a ServerCallContext object — a context for server-side calls, used for authenticating and authorizing gRPC calls.) Code Generation — Where the Magic Happens You might wonder where the GreeterBase, HelloRequest, and HelloReply classes are. Well, that’s where the magic happens: they are automatically generated, hiding all the routing and remoting complexities.
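For reference, the greet.proto contract discussed above, as generated by the default template, looks roughly like this (the namespace and comments may vary slightly between SDK versions):

```proto
syntax = "proto3";

option csharp_namespace = "GrpcService";

package greet;

// The greeting service definition.
service Greeter {
  // Sends a greeting.
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greeting.
message HelloReply {
  string message = 1;
}
```

The numbers after each field (`name = 1`, `message = 1`) are Protobuf field tags used in the binary wire format, which is part of why serialization is so compact.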
https://medium.com/swlh/build-high-performance-services-with-grpc-and-net-5-7605ffe9b2a2
['Hammad Abbasi']
2020-11-25 15:58:57.354000+00:00
['Grpc', 'Microservices', 'Dotnet', 'Rest Api', 'Net5']
Collective Works for June 2020
Here you will find all the works — poetry, prose poetry, fiction, articles, personal essays and creative non-fiction — for the month of June 2020. Note: Any works that have been curated will also include the Friend’s Link so you can read them for free.
https://medium.com/the-rattling-bones/collective-works-for-june-2020-716cb3668fe
['Ravyne Hawke']
2020-09-17 21:01:01.742000+00:00
['Poetry', 'Collection', 'Writing', 'Articles', 'Fiction']
Fat Protocols vs Fat Dapps vs Fat Wallets
Which crypto thesis would you pick? The crypto space was a lot less complicated 18 months ago. Back then, risk was systemic. The macro outcomes were binary: the crypto industry either survived or it didn’t. The most likely outcome was that the industry would fall victim to a fatal technical fault, a hack, or a government crackdown. Even if one of these didn’t kill crypto, widespread adoption seemed unlikely. Lightspeed and a handful of other VCs made some targeted investments in crypto before 2016, but only a few people had the conviction to bet the house on it. Source: Giphy Now, everything is different. Crypto has mass mainstream awareness, a diverse multi-asset and multi-region ecosystem and an increasingly clear global regulatory environment. The risk profile has totally changed. It’s now less about making a macro binary bet and more about identifying the winners. For this, you need to understand the ecosystem and where the value will be created. It hasn’t been a straight line to get here. The crypto market has grown through cycles, each a few years long. And each wave has been an order of magnitude greater than the last: In 2011, the crypto market cap reached hundreds of millions of dollars and leading platforms saw thousands of weekly new users. In 2013, the market cap grew to tens of billions of dollars with tens of thousands of weekly new users. In 2017, the market cap grew to hundreds of billions of dollars with millions of new users. The next wave could see a market cap in the trillions and millions of weekly new users. That would make it the defining wave that takes the crypto industry into the mainstream. When this wave comes, where will the value be created? 
One way to frame the question comes from Richard Burton, who in an insightful tweet broke value creation in the crypto ecosystem into three layers: 1) Base protocols (like Ethereum) 2) Decentralized applications (like CryptoKitties) 3) Companies building products (like wallets) BASE PROTOCOLS Many people, including Joel Monegro, Chris Dixon, Fred Wilson and Albert Wenger, have drawn parallels between the current state of blockchain technology and the early days of the internet. In the case of the internet, the protocol layers (TCP/IP, HTTP, etc.) produced huge value, but it was companies like Google and Facebook who actually captured the vast majority of this value. These folks argue that with decentralized, blockchain-based networks the reverse could be true — that protocol layers (such as Bitcoin and Ethereum) could capture most of the value, outstripping the value captured by applications built on top. So far they’ve been right. The market cap of cryptocurrencies is many times larger than the total value of blockchain and crypto-based companies today. This line of thought, first articulated by Joel Monegro, has become known as the “fat protocol thesis”. Some people who challenge the fat protocol thesis note that the crypto ecosystem already has hundreds of different protocols, with many more coming. If the ecosystem is not interoperable, this creates a ton of complexity. We’re already seeing technology move towards enabling cross-chain transactions, for instance between Bitcoin and Ethereum. Cross-chain interop would mean low switching costs between different protocols. Would this limit the amount of value any single protocol will capture in the long run? Or could a small number of protocols emerge to dominate specific, high-value parts of the total base stack? (Probably transactions, data storage, computing and messaging, to name a few.)
Today the fat protocol thesis holds true, and the future may continue in this way, or see value diffuse or concentrate into only some protocols. DAPPS All the value accretion in protocols is based on the assumption that these blockchains will ultimately end up being useful for more than just speculation. This is where Dapps (decentralized applications) come in. Dapps are software that allow a service to be created outside the control of any one entity. This could look like a decentralized version of an existing application (e.g. file storage) or an entirely new use case (e.g. self-sovereign identity). Lightspeed believes that Dapps have a bright future and will create a lot of value. But perhaps more importantly, they offer the potential to codify incentive structures that drive usage, and they reward those who contribute to their creation and development. This is the “fat Dapp thesis”. Today (July 2018) the top Dapps have less than a few thousand daily users, so it’s still early days. But Dapps hold great promise, and some really exciting things are being built. Source: DappRadar COMPANIES (WALLETS) Moving out one more layer from Dapps, we come to the end user and the third of the proposed theses. If crypto is to really hit the mainstream, users will want generalized access to ‘the network’ with products that make using both protocols and Dapps accessible, simple and safe. The best analogies would be products like AOL and Netscape in the early days of the internet, or Google and Tencent today. As noted above, the bet here is that crypto plays out just like the internet did, with protocols creating value but companies capturing it. Let’s look at online music and video as a very concrete example. In the early days, people were willing to use peer-to-peer “sharing” software like Napster and Kazaa to download pirated music and movies. They placed their trust in providers of dubious legal status, and they risked viruses.
When Apple introduced the iTunes store, and later Spotify introduced streaming, consumers were drawn to intuitive and seamless product experiences that they grew to love. There are a lot of parallels here with the crypto market. The “wallet” is the best example. Crypto is complicated. Most users don’t want to worry about managing all of their public and private keys, just as they didn’t want to worry about managing all manner of audio files in the early days of digital music. Instead they want the same high-quality UX they have come to expect from the internet and their other apps. Ensuring minimal friction means the familiar experience of “login with password”. From their wallet they will be able to easily interact with multiple protocols and Dapps. Another consideration is the impact of protocol forks. We’ve already seen forks in both Bitcoin and Ethereum, and many more attempted forks. In a future with infinite forks, protocol-level value could get diluted across many different forks. The only thing that prevents forks is community. Will protocols have the biggest communities to prevent forks, or will wallets be better positioned by aggregating communities across protocols and Dapps? Users are unlikely to want to use many different wallets, just as they don’t want to use many different music stores or browsers. The majority of customers and assets will gravitate towards the most useful and trusted tools. This is already a concentrated market, with Blockchain (a Lightspeed portfolio company), Coinbase, Xapo and a few others holding the lion’s share of wallets and coins between them. This is the “fat wallet thesis”. Readers of Niall Ferguson’s “The Square and the Tower” will see this as another instantiation of the pattern of disruptive new networks eventually exhibiting a new hierarchy. Such a centralized outcome may seem at odds with the decentralized ethos of the crypto world.
But success of this “fat wallet thesis” rests on the success of the others. Base protocols will only create value over time to the extent that they generate actual economic value and can support real utility. This will only be the case if Dapps prove useful. If protocols and Dapps are not used then no one is going to need a wallet at all.
https://medium.com/lightspeed-venture-partners/fat-protocols-vs-fat-dapps-vs-fat-wallets-4d33ead29130
['Jeremy Liew']
2018-07-25 15:38:24.797000+00:00
['Venture Capital', 'Entrepreneurship', 'Blockchain', 'Cryptocurrency', 'Bitcoin']
7 Routine Activities To Boost Your Self-Worth
7 Routine Activities To Boost Your Self-Worth Your daily micro shifts help you rise and shine Photo by Taylor Smith on Unsplash You may have heard this story. Once upon a time, there were four people named Everybody, Somebody, Anybody, and Nobody. There was an important task to be done. Everybody was sure that Somebody would do it. Anybody could’ve done it. But you know, Nobody did it. Somebody got angry about that because it was Everybody’s job. Everybody thought Anybody could do it. Nobody realized that Everybody wouldn’t do it. Everybody blamed Somebody when Nobody did what Anybody could’ve done. When I hear that story, I think the important task is to go vote, a topic we hear discussed in our daily lives. No one person’s vote makes the final decision, but each person can cast their voice or vote, and that matters. A person with low self-worth can think their voice doesn’t matter. A person with high self-esteem thinks their opinion is important and can make a difference. The prideful person can wrongfully think their opinion is the only one that is right, or the only one that matters. If you have low self-worth, you want to change your opinion of yourself, both to live your best life and for your overall well-being. Low self-worth can be situational, when life isn’t going as you planned. If you question or doubt your self-worth, you may just need a tune-up to start loving yourself again. That is the opposite of selfish: you can’t authentically come to the world whole and loving if you have a small tank of love for yourself. Thoughts of low self-worth come from your past and present situations or relationships. The signs: sulking in self-pity, lazily moping around, or being unproductive. If you feel less than worthy, this can be exacerbated in a relationship, where you may be confronted with a mirror showing you who you are. There’s a disconnect when you are expected to show consideration for others and you can’t authentically show up for yourself.
The good news is you can boost your self-worth with daily routines for you and those around you. 7 Routine Activities To Boost Your Self-Worth 1. Give yourself love (self-love). Look in the mirror, into your own eyes, and repeat “I love you.” This may seem silly or narcissistic, but your brain and inner self need to hear these words. Make every cell in your mind and body aware. You are talented and worthy of love. A person or an opportunity will appreciate your qualities. Say aloud the names of people you’ve helped, and how. If you feel guilty about any situation, recent or past, forgive yourself. If you did wrong, free yourself from beating yourself up by replaying what happened over and over in your mind. Practice apologizing if someone else is involved. Tell yourself you didn’t know better at the time, and now you know, and will do better. If after humble reflection you can’t think of what you did wrong, tell yourself you did nothing wrong, let go of the self-imposed guilt, and move forward positively. 2. Take care of yourself (self-care). Look in the mirror at a different time from your self-love routine. What physical features do you like? Zero in and appreciate those. Do something to accentuate those features, like flexing a muscle or putting on a big smile. Feel powerful. Conquer fear and stress. Take care of your body and appearance. If you don’t feel good about your outside, that feeds into your mind and can lower your feelings about yourself. And when you have stress, you can develop skin disorders and small maladies that don’t help your psyche feel good. In yoga studios, there are often full-length mirrors in the room to help you witness and adjust your pose. If you don’t like to look, that’s a mirror into how you feel about yourself, your body and appearance. In the same way, when you’re working out, put a full-length mirror in front of you.
If you feel you have to suck in your gut, hide a part of yourself, or you lose your smile at what you see, then that’s an area you can work on so you feel good about yourself again. 3. Focus on your inside character. If you focus on material stuff and satisfying wants to make you happy and give you identity, you’ll just be chasing fleeting things to feed your fix, and still be unsatisfied and unfulfilled. Authentic, growing happiness inside you and your thoughts can complete you. I went from a 4,000-square-foot home to a 200-square-foot space. In the bigger home I had to think of how to fill the space, and then the opposite for the smaller place. I was forced to look at my situation instead of my stuff. Looking back, I realized I had everything I needed inside me all along to feel whole. 4. Lose the self-deprecating humor or attitude. Think before you speak. Don’t make fun of yourself to make a situation less heavy. Words matter, and what you feed your mind matters most. Words bring reality to light. You become what you say or feed your mind. In a relationship, especially, don’t make fun of an unhappy relationship, no matter how cathartic it may feel in the moment, because you could find yourself alone soon. After the dust has settled, lessons learned are a better channel for your healed thoughts, and they could help others avoid the same mistakes. 5. Accomplish something daily. When you get your adrenaline going or blood flowing, you have more energy to stand on top of the world. Meet a goal and your self-esteem increases. Your self-worth is positively impacted as a result. When you work on the same daily project, over time the details become something significant you can see and marvel over, like a Seurat pointillism painting. 6. Help someone who is less fortunate or appreciative of your help. When you help someone who is grateful for your time and effort, you feel better about yourself. Your doing a good deed can have lasting effects.
Volunteering could be a regular way to keep feeling good. 7. Find a calming hobby or activity. When you’re at peace, you don’t feel the anxiety that can make you feel less than. Activities such as reading may seem relaxing but could make you more anxious if your mind isn’t settled. An activity where you use your instincts is better. Your mind can safely go on auto-pilot, resting from thinking for several moments, while swimming laps or knitting. Free divers know how to calmly dive to the bottom of the water without equipment or oxygen tanks. They conserve their air by being calm. Like those divers, without fear, your calm state can increase your productivity and the success that boosts your self-worth. You and the people around you may already know your potential, but you need to know your self-worth to unleash your power and authentic happiness. Discovering your self-worth, and how important you are right now, can be one of the most empowering breakthroughs you make. You’ll never look in the mirror the same way again.
https://medium.com/age-of-awareness/7-routine-activities-to-boost-your-self-worth-eaa37611ba8d
['La Dolce Vita Diary']
2020-07-04 00:36:24.167000+00:00
['Self', 'Identity', 'Productivity', 'Success', 'Personal Growth']
The Perils of “Follow Your Heart”
The Perils of “Follow Your Heart” A review of Trevin Wax’s new book, Rethink Your Self Think back to your high school graduation. What were the major messages there? “You can be anything you want to be!” “Stay true to yourself!” The Oh, the Places You’ll Go! message. If it was anything more substantial than that, you can count yourself blessed. As Trevin Wax mentions in his new book, Rethink Your Self, the message behind high school graduation ceremonies might be the epitome of individualistic American culture, pushing the narrative of “You do you” to its limit. Wax calls this cultural ritual “looking in”, and it pervades all aspects of our society. We are told to “look in” to see who we are, what we should do, how we should act. And no one else is to question our definition of ourselves, because they don’t have the authority to tell us we are doing anything wrong. Trevin Wax skillfully broadens his audience in Rethink Your Self to include anyone who may have been let down by this self-reliant and self-focused culture. Later, he directs the reader to a biblical structure where one “looks up” before “looking in”: look to God for your purpose and identity, then let that shape your life. I loved that Wax chose to write to such an audience, because it is highly effective, and I pray that B&H Publishing can get this book in the hands of non-Christians who desperately need God and His purpose for their lives. As I kept reading, however, I began to see that many in the church need this book as well. “Self-help” books have dominated the Christian book publishing industry for years, and these books specialize in the “look in” approach, then make sure to tell you to “look up” to get divine approval for your life plans. Don’t believe me? I’ll look up the best-selling Christian books of the last 5 years. (Real-time internet search afoot.)
2019 — Girl, Wash Your Face, Rachel Hollis
2018 — Girl, Wash Your Face, Rachel Hollis
2017 — The 5 Love Languages, Gary Chapman
2016 — The Magnolia Story, Chip and Joanna Gaines
2015 — Jesus Calling, Sarah Young
OK, that took longer than I thought, mainly because the Evangelical Christian Publishers Association (ECPA) website is not that great and doesn’t allow you to change the year on the bestseller list. Despite the trouble, I think this list is accurate. And let me start by saying I have nothing against a couple of these books. I think The 5 Love Languages is a helpful framework for couples as long as it is kept in perspective. And The Magnolia Story is perfectly good and should not be included as a “look in”-focused book. The other three absolutely are self-focused. The 5 Love Languages, even though, as I said, I think it’s pretty helpful, is absolutely a “look in” book: learn more about yourself to better your life. I mean, in this case, it’s so that your partner can know about you, but only sometimes does it point up to how God sees you and defines you. Girl, Wash Your Face is the peak of self-focused Christian fare, and I have many more thoughts that I’ll keep to myself. And Jesus Calling, despite the title, focuses on one’s individual life and definitions, relying too much on personal revelation and not nearly enough on biblical revelation. It’s “look in” masquerading as “look up”. But there is one more book that has not been the top seller in any year, yet has been among the top sellers for a few years in a row, has had an even bigger influence on Christian culture than its bestselling status implies, and encapsulates the “look in” approach to self-discovery: Ian Morgan Cron’s The Road Back to You: An Enneagram Journey to Self-Discovery. I have not read The Road Back to You, but I do know a lot about the Enneagram and have some second-hand knowledge of the themes of the book.
The book description itself is helpful in gleaning its views: What you don’t know about yourself can hurt you and your relationships — and even keep you in the shallows with God. Do you want help figuring out who you are and why you’re stuck in the same ruts? The Enneagram is an ancient personality typing system with an uncanny accuracy in describing how human beings are wired, both positively and negatively. The message is simple, and it tracks with American culture: “look in” to learn who you are, then you can more easily “look up”. Trevin Wax never mentions the Enneagram in his book, and I may be misrepresenting his views here. And, let me be clear, I don’t think the Enneagram by itself is a bad thing. (As a student and teacher of psychology, I have serious doubts about all personality tests. Not all psychologists would agree with my criticisms of them, but a lot would.) I think that if you are using the Enneagram to learn more about your personality and trying to improve where that may be causing you to fall short of God’s design, it can be useful. But there is something that people miss with all personality tests: they are based on your answers and your attempts to define yourself. What if you are wrong about who you are? Wax is clear that God defines us, shapes us, guides us, and redeems us. He changes our desires into His desires. Wax writes: What does the “look up” approach say about our desires? Here is where the project of rethinking our selves becomes challenging. Our hearts are full of conflicting desires, which is why the notion that you should just “follow your heart” doesn’t make sense. But the Bible doesn’t say we should never follow our hearts or pursue our desires; in fact, one of the psalms says that when we find our delight and joy in God, he will give us the desires of our heart. But notice how that promise connects delight and desire.
You may be thinking that the point of religion is to repress your desires, to stifle your feelings, and to ignore your deepest longings. Unfortunately, some churches and religious institutions have given that impression. But rightly understood, following Jesus is not the destruction of desire, but the development of better desire. You won’t change your life merely by repressing a desire, but by replacing it. So why would I want to learn more about myself? God is in the business of changing the self. Yes, I know, “personalities are static” and all that. And to some extent that can be true. But I am proof that God changes personalities and shapes them to reflect who he wants you to be. Yes, there is room for self-discovery in some sense of the word, but most important is the task of God-discovery. “Looking in” can come much, much later. The Christian bestseller lists don’t reflect that, and I pray that God will use the words of Trevin Wax and others to point our brothers and sisters in Christ “up” instead of “in”. I received a review copy of Rethink Your Self courtesy of B&H Publishing with a special thanks to Jenaye White, but my opinions are my own.
https://medium.com/park-recommendations/the-perils-of-follow-your-heart-9b56049f859e
['Jason Park']
2020-11-02 12:03:25.865000+00:00
['Christianity', 'Religion And Spirituality', 'Books', 'Self Improvement', 'Reading']
Asking the right questions during user research, interviews and testing
Formulating the questions Interviewing users requires a lot of effort and planning. Depending on how extensive the research is, you might spend several weeks preparing for the sessions, several days talking to your users, and several hours capturing and organizing your notes. You want to make sure all that effort isn’t thrown away because you didn’t take the time to properly plan your questions. Start by defining broader themes This may sound a bit obvious, but the first step is to really think through what you are trying to get out of the interviews. At this point, think about the themes you are trying to uncover, not specific questions just yet. Make sure you are aligned with the rest of the team that those are the topics you want to touch upon when talking to users. A few examples of what these themes may look like: “Why do people shop online?” “How do people shop online?” “For your customers, what is the difference between online and offline shopping?” Break down your questions to make them answerable The themes above sound similar, but there are fundamental differences between the topics each one is trying to uncover. Make sure you align with your team on the broader goal of the research; this can save everyone tons of time later in the project. The examples above are themes, not the actual questions you would ask your users — if you did, you would get answers that are just too generic or vague. The next step is to break down, for each theme, the specific questions you want to ask your users: From: “Why do people shop online?” To: “What types of product do you buy online?” “What types of product do you avoid buying online? Why?” “What do you like the most and the least about the checkout process?” Don’t ask questions that will influence the answer A common mistake when framing questions for the interview is to rush things and try to get to the expected answers as quickly as possible.
When you walk into the room for an interview, there is a good chance you already have an idea about the answers users will give you — but don’t let that intuition get in the way of extracting impartial, unbiased results. From: “How anxious do you feel when an online purchase can’t be completed successfully?” To: “Try to remember the last time an online purchase couldn’t be completed for some reason. How did you feel then?” Ask about specific moments in the past Answers become less generic and more accurate when users are thinking about a specific time in the past when that situation happened. They are more likely to give you genuine, detailed answers — and they will try pretty hard to remember that specific occasion. Make sure your question prompts for that moment in the past: From: “What goes through your head when an online purchase fails?” To: “Tell me what went through your head the last time you tried to buy something online and the purchase failed.” Prioritize open-ended questions Some users feel very comfortable in interviews, and will give you thorough and complete answers, even without much prompting. But in some cases, users will answer only what is being asked. Not because they’re lazy or mean, but just because different people have different personalities. To avoid unproductive interview sessions (or sessions that end too soon), make sure your questions are open-ended. Give users some room to elaborate on their answers, as opposed to asking yes/no questions.
https://uxdesign.cc/asking-the-right-questions-on-user-research-interviews-and-testing-427261742a67
['Fabricio Teixeira']
2017-03-28 22:44:09.799000+00:00
['User Research', 'User Experience', 'Design', 'UX Design', 'UX']
(Robot) data scientists as a service
A primer on symbolic regression “94.7% of all statistics are made up.” — Anonymous Data Scientist We will spend no more than three minutes introducing the intuition behind symbolic regression with a simple example (the reader familiar with it — or just not interested in the nerdy details — can safely skip to the next section). Consider the following X-Y plot: The familiar image of a scatterplot: what is the relation between X and Y? Looking at the data, we can take out a pencil and paper and start making some reasonable guesses about the relation between X and Y (even just limiting ourselves to simple polynomial options): Y = bX + a (linear) Y = cX^2 + bX + a (quadratic) We measure which one fits best and use what we have learned to produce even better estimates: Comparing two hypotheses: R-squared is 0.65 and 0.74 respectively. It seems that we can try an even higher-degree polynomial to achieve a better fit: R-squared for a third-degree polynomial is 0.99 (it looks like overfitting but we swear it’s not). It sounds like a reasonable strategy, doesn’t it? In a nutshell, symbolic regression is the automated version of what we did manually with a few functions and two “generations”. That is: start with a family of functions that could fit the dataset at hand; measure how well they are doing; take the best-performing ones and change them to see if you can make them even better; repeat for N generations until satisfied. Even with this toy example, it’s clear that fitting data patterns by intelligently exploring the space of possible mathematical functions has interesting upsides: we don’t have to specify many assumptions to start with, as the process will evolve better and better candidates; the results are readily interpretable (as we can produce insights such as “an increase of aX will lead to an increase of bY”), which means new knowledge is shareable across all business units.
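The manual loop just described (guess a family of functions, fit, score with R-squared, keep the best) can be sketched in a few lines of Python. This is an illustration only: the cubic generator and the noise level are assumptions, not the article's actual dataset.

```python
import numpy as np

# Synthetic data standing in for the scatterplot (assumed generator/noise)
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 50)
y = 0.5 * x**3 - x + 1 + rng.normal(scale=0.1, size=x.size)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# One "generation": fit candidate polynomial degrees, compare fits
for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    print(f"degree {degree}: R^2 = {r_squared(y, np.polyval(coeffs, x)):.3f}")
```

On this synthetic data the third-degree fit scores markedly higher than the linear one, mirroring the 0.65 / 0.74 / 0.99 progression in the charts above.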
As a downside, evaluating large populations of mathematical expressions can be time-consuming — but that is not a problem for us: our robots can work at night and serve us predictions the next day (that’s what robots are for, right?). The crucial observation for our purposes is that there is a fundamental trade-off between model expressivity, intelligent exploration and data fitting: the space of mathematical relations that could potentially explain the data is infinite — while complex models are more powerful, they are also prone to overfitting and, as such, should be considered only after simpler ones fail. Since relations are expressed in the language of math, why don’t we exploit the natural compositionality and expressivity of formal grammars to navigate this trade-off (yes, at Tooso we do love languages)? This is where we combine the intuition of symbolic regression — automatically evolving models to get better explanations — with the generative power of probabilistic programming. Since models can be expressed as domain-specific languages, our regression task can be thought of as a special instance of “Bayesian program synthesis”: how can a general program write specific “programs” (i.e. mathematical expressions) to satisfactorily analyze unseen datasets? In the next section we will build a minimal formal language to express functions and show how operations on language structures translate to models that efficiently explore the infinite space of mathematical hypotheses (the faithful reader may recall that we solved the “sequence game” introduced in a previous post in a similar fashion). In other words, it’s now time to build our army of robots. [Bonus technical point: symbolic regression is usually done with genetic programming as the main optimization technique; a population of functions is randomly initialized, and then algorithmic fitness dictates the evolution of the group towards expressions well suited for the problem at hand.
We picked a probabilistic programming approach for this post as it nicely fits with some recent work on concept learning and lets us share directly in the browser some working code (a thorough comparison is beyond the scope of this article; for more comparisons and colored plots, see the Appendix at the end; while proofreading the article, we also discovered this very recent and pretty interesting “neural-guided” approach). The non-lazy and Pythonic reader interested in genetic programming will find gplearn delightful: a good starting point is Jan Krepl’s data science-y tutorial.] Building a robot scientist “Besides black art, there is only automation and mechanization.” — F. G. Lorca As we have seen in the previous section, the challenge of symbolic regression is the vast space of possibilities we need to consider to make sure we are doing a good job of fitting the target dataset. The key intuition behind our robot scientist is that we can impose a familiar, “linguistic” structure on this infinite hypothesis space, and let this prior knowledge guide the automated exploration of candidate models. We first create a small language L for our automated regression tasks, starting from some atomic operations we may support: unary predicates = [log, round, sqrt] binary predicates = [add, sub, mul, div] Assuming we can pick variables (x0, x1, …, xn), integers and floats as our “nouns”, L can generate an expression such as: add(1, mul(x0, 2.5)) fully equivalent to the more familiar: Y = X * 2.5 + 1 Plotting the familiar mathematical expression “Y = X * 2.5 + 1” [We skip over the language generation code as we discussed generative language models elsewhere. For an overview of scientific problems through the lenses of probabilistic programming, start from the fantastic ProbMods site.] Since we can’t directly place a prior over an infinite set of hypotheses, we will exploit the language structure to do it for us.
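The article's generation code is in WebPPL; as a rough stand-in, here is a hypothetical Python sketch of sampling expressions from the little language L. The stopping probability and depth cap are assumptions; note how each recursion costs probability mass, so deep (complex) expressions are sampled exponentially less often.

```python
import random

# Atomic operations of the minimal language L described above
UNARY = ["log", "round", "sqrt"]
BINARY = ["add", "sub", "mul", "div"]

def gen_expr(depth=0, p_stop=0.5):
    """Recursively sample an expression string from L; deeper trees
    require more random choices and are therefore less likely a priori."""
    if random.random() < p_stop or depth > 4:
        # Leaf: a variable or a float constant
        return random.choice(["x0", str(round(random.uniform(-5, 5), 1))])
    if random.random() < 0.3:
        return f"{random.choice(UNARY)}({gen_expr(depth + 1)})"
    op = random.choice(BINARY)
    return f"{op}({gen_expr(depth + 1)}, {gen_expr(depth + 1)})"

random.seed(7)
for _ in range(3):
    print(gen_expr())
```

Sampling repeatedly from `gen_expr` yields mostly small expressions like `add(x0, 1.5)`, with the occasional deeply nested one, which is exactly the simplicity-favoring behavior we want from a prior.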
Since fewer (probabilistic) choices are needed to generate the linear expression: add(1, mul(x0, 2.5)) compared to the quadratic expression: add(add(1, mul(x0, 2.5)), mul(mul(x0, x0), 1.0)) the first is the more likely hypothesis before observation (i.e. we obtain a prior favoring simplicity, in the spirit of Occam’s razor). A simple WebPPL snippet generating mathematical expressions probabilistically. The final detail we need is how to measure the performance of our candidate expressions: sure, a linear expression is more likely than a quadratic one before seeing data points, but what do we learn through observation? Since we framed our task as Bayesian inference, Bayes’ theorem suggests that we need to define a likelihood function that will tell us the probability of obtaining our data points if the underlying hypothesis is true (posterior ~= prior + likelihood). As an example, consider the three datasets below: Three synthetic datasets to test likelihood without informative prior beliefs. They have been generated by adding noise to the following functions: f(x) = 4 + 0 * x (constant) f(x) = x * 2.5 (linear) f(x) = 2^x (exp) We can exploit the observe pattern in WebPPL to explore (without informative priors) how likelihood influences inference, knowing in advance which mathematical expression generated the data. A simple WebPPL snippet to test the likelihood of some generating functions against synthetic data. As is clear from the charts below, with as little as 25 data points the probability distribution over possible mathematical expressions is pretty concentrated on the correct value (also note that the constant parameter is narrowly distributed over the true value, 4, and the same holds true for the exponential example). Our final robot scientist is then assembled by combining (language-based) priors with likelihood (if you’re interested in a small-and-hacky program that puts everything together, don’t forget to run the snippets here).
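Putting prior and likelihood together, a minimal Python analogue of the WebPPL observe pattern might score a handful of candidate generators against the noisy linear dataset. The hypothesis set, the expression "sizes" used for the prior, and the noise sigma are all assumptions made for this sketch.

```python
import math
import random

# Candidate generators from the synthetic example: (function, size).
# "size" stands in for expression complexity and drives the prior.
hypotheses = {
    "constant: 4": (lambda x: 4.0, 1),
    "linear: 2.5 * x": (lambda x: 2.5 * x, 2),
    "exp: 2 ** x": (lambda x: 2.0 ** x, 2),
}

# 25 noisy points drawn from the linear generator (sigma is assumed)
random.seed(1)
xs = [i / 5 for i in range(25)]
ys = [2.5 * x + random.gauss(0, 0.5) for x in xs]

def log_posterior(f, size, sigma=0.5):
    # posterior ~= prior + likelihood, computed in log space
    log_prior = -size  # one nat per node: simpler is a priori likelier
    log_lik = sum(
        -0.5 * ((y - f(x)) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
        for x, y in zip(xs, ys)
    )
    return log_prior + log_lik

scores = {name: log_posterior(f, s) for name, (f, s) in hypotheses.items()}
print(max(scores, key=scores.get))
```

With 25 observations the likelihood term dominates and the linear hypothesis wins by a wide margin, matching the behavior described for the WebPPL version.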
Let’s now see what our robots can do.

Putting our robot scientist to work

“Humans turn me on.” — Anonymous Robot

Now that we can create robot scientists, it’s time to see what they can do on some interesting data patterns. The chart below represents datasets built out of a simple language for mathematical expressions (such as the one described above), showing, for each case:

a scatterplot with the target data points;
the generating mathematical expression (i.e. the truth);
the expression selected by the robot scientist as the most likely to explain the data (note that when running the code, you may get several entries for different but extensionally equivalent expressions, such as x * 4 and 4 * x).

Four synthetic datasets (left), the underlying generator function (center, in red), and the best candidate according to the robot scientist (right, in blue).

The results are pretty encouraging: the robot scientist always made a very reasonable guess about the underlying mathematical function relating X and Y in the test datasets. As a finishing touch, it takes just a few more lines of code and some labelling to add a nice summary of the findings in plain English, so that the following data analysis:

From data analysis to an English summary: we report model predictions at different percentiles since the underlying function may be (as in this case) non-linear.

gets automatically summarized as:

According to the model '(4 ** x)':
At perc. 0.25, an increase of 1 in cloud expenditure leads to an increase of 735.6 in revenues
At perc. 0.5, an increase of 1 in cloud expenditure leads to an increase of 9984.8 in revenues
At perc. 0.75, an increase of 1 in cloud expenditure leads to an increase of 79410.5 in revenues

Going from model selection to explanations in plain English is fairly straightforward (original here). Not bad, huh? It seems our data science team can finally take a break and go on that well-deserved vacation while the robots work for them!
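The percentile-based English summary can be approximated with a short helper. A hedged Python sketch: the function, wording, and percentile scheme are illustrative assumptions, and the toy 4 ** x model below produces its own numbers on its own toy data, not the figures reported by the original analysis:

```python
# Illustrative sketch of the plain-English summary step; the percentile
# scheme, wording, and toy data are assumptions, not the original code.
def summarize(model_name, f, xs, percentiles=(0.25, 0.5, 0.75)):
    lines = ["According to the model '{}':".format(model_name)]
    xs = sorted(xs)
    for p in percentiles:
        x = xs[int(p * (len(xs) - 1))]   # crude empirical percentile of X
        delta = f(x + 1) - f(x)          # effect of a unit increase in X
        lines.append(
            "At perc. {}, an increase of 1 in cloud expenditure "
            "leads to an increase of {:.1f} in revenues".format(p, delta))
    return "\n".join(lines)

# Reporting per percentile matters because a non-linear model implies a
# different marginal effect at each point of the X distribution:
print(summarize("(4 ** x)", lambda x: 4 ** x, list(range(1, 8))))
```

For a linear model the three reported deltas would coincide; the spread between them is itself a readable signal of non-linearity.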
While the non-lazy reader plays around some more with the code snippets and discovers all sorts of things that can go wrong with these robots v1.0, we shall go back to our enterprise use cases and make some parting notes on how to leverage these tools in the real world.

What’s next: scaling prediction across enterprise data

“The simple truth is that companies can achieve the largest boosts in performance when humans and machines work together as allies, not adversaries, in order to take advantage of each other’s complementary strengths.” — P. R. Daugherty

Let’s go back to our prediction problem: we had data on how cloud services impact the revenues of our company, and we wanted to learn something useful from it.

Our X-Y chart: what can we learn from it?

Sure, we could try to use a machine learning tool designed for this problem; if we buy into the deep learning hype, that has obvious downsides in terms of integration, generalization to unseen datasets, and interpretation. We could try to deploy internal resources, such as data scientists, with downsides in terms of time-to-ROI and opportunity costs. Finally, we could prioritize speed and run a simple one-size-fits-all model, sacrificing accuracy and predictive power. In this post, we outlined a very different path to address the challenge: by mixing statistical tools with probabilistic programming, we obtain a tool general enough to produce interpretable and accurate models in a variety of settings. We get the best out of automated A.I. while keeping the good parts of data science done right: explainable results and modeling flexibility. Science-wise, the above is obviously just a preliminary sketch of how to think outside the box: when moving from a POC to a full-fledged product, a natural extension is to include Gaussian processes in the domain-specific language (and, generally, to exploit all the nice things we know about Bayesian program synthesis, in the spirit of excellent recent work in this area).
Product-wise, our experience deploying these solutions with billion-dollar companies has been both challenging and rewarding (as enterprise things often are). Some of them were skeptical at first, after being burned by the pre-made solutions heavily marketed today by big cloud providers as “automated AI”: as it turns out, those tools can’t solve anything but the simplest problems, and still require non-trivial resources in time, learning, deployment, and so on. But in the end, all of them embraced both the process and the results of our “program synthesis”: from automated prediction to data re-structuring, customers love the “interactive” experience of teaching machines and working with them through human-like concepts; results are easily interpretable, and massive automation is achieved at scale through serverless micro-services (for our serverless template for WebPPL, see our dedicated post with code). At Tooso, we believe the near-term A.I. market belongs to products that enable collaboration between humans and machines, so that each party gets to do what it does best: machines can do the quantitative legwork on data lakes and surface the most promising paths for further analysis; humans can do high-level reasoning on selected problems and give feedback to the algorithms, in a virtuous loop generating ever more insights and data awareness. All in all, as fun as it is to dream of evil robot armies (yes Elon, it’s you again), there is still plenty of future that definitely needs us.
https://medium.com/tooso/robot-data-scientists-as-a-service-eea4a6f9a
['Jacopo Tagliabue']
2019-05-21 18:58:01.086000+00:00
['Machine Learning', 'Statistics', 'Artificial Intelligence', 'Data Science', 'Automl']
Why You Should Write Like You’re Dying
Why You Should Write Like You’re Dying

What we can learn about life from those who are “writing while dying”

Photo by Aron Visuals on Unsplash

As a writing genre, memoirs from the dying are alive and well. These elegantly written last testaments are enjoying tremendous popularity on a rather equal-opportunity playing field. You can read memoirs from people you’ve never heard of who share the distinction of “writing while dying” alongside illustrious authors and brilliant physicians. The memoirs follow a predictable chronology that tracks the author’s pre-illness life, then an unexpected diagnosis, the beginning of treatment, failed treatment, death, and then an epilogue lovingly penned by a spouse or family member. Within each memoir, there are exquisite moments and profound realizations that move us. But we also brace ourselves for the inevitable conclusion we know is coming. Because we know how the story ends. So why do we read them? Is it merely a personal obsession with death? Is it because we need a salutary lesson in how to die from those who can offer the most salient education? To be certain, we can learn a lot from how these individuals lived their lives. They offer some powerful lessons about how we should live our “writing life,” as well.

Be brutally honest

Memoirs from the dying offer a raw and candid account of their journey. These writers generously provide moments of grace and dignity, but they are also honest about the full scope of their circumstances. They write of their anger at being dealt a shitty hand and infuse a rawness that rightfully conveys a middle finger to the unfairness of it all. Keep this fidelity to the truth in mind with your own writing. Write in a way that is faithfully honest to you and your experience. We oftentimes rob ourselves of the opportunity to be completely candid by hiding behind window dressing, glossing over important details, or omitting essential elements of our narrative.
We’re afraid of what it may say about us and the implications of putting ourselves out there. But if you’re doing it right, writing should make you feel completely exposed. Writing honestly is in service to your story. You, your readers, and the story you’re telling deserve to be free of any inhibitions to be nothing but honest with your work.

Create a permanent, prolific record

Writing outlasts us. We’re only here for a finite time. But what we write lives on. It can offer a general record of our lives and deeper insight into our fears. And it should all be written down. The permanence of writing can be intimidating. But start thinking of how you can leave a lasting imprint. Many of us have weightier, more personal reasons for doing so. We may be parents who want to leave a textual legacy for our kids that may help them learn more about us even after we’re gone. We may simply want our writing to live on as teachable touch points for others. In whatever genre or capacity, the more we write, the more we can leave a lasting record of our lives.

Don’t let criticism derail you

Don’t let the fear of criticism dictate (or delay) your pursuit of writing. We all respond differently to other people’s opinions; some writers are unmoved by criticism while others can feel unmoored by it. For me, the fear of criticism was one of the admittedly lame excuses that kept me from writing for a long time. But the simple fact is that you’re not going to please everyone. It’s impossible. And a necessary lesson in writing — and life in general — is to know what you can and can’t control. So write what you want and how you want without being deterred or encumbered by what people might have to say about it. The truth is that we should all act like we won’t be around for the negativity; we should act like we don’t have the time, energy, or inclination to care what other people have to say about our writing.
Adopting this kind of perspective isn’t to be dismissive or cavalier about any opinions our work may invite. Rather, it’s about not allowing that criticism to be debilitating and derail your pursuit of writing.

Write to understand

At its core, writing helps bring clarity and meaning. This applies in any context: whether we’re in our last days, reeling from any number of major life events, or even in a daily, existential sense. Writing doesn’t diminish the cruelty of disease or help us to change our circumstances. But we write so that we can understand. We learn and process as we write. Why is this happening? What does it mean? What will it mean for my future? Yes, we write to convey a story, impart a lesson, communicate and connect with others. But don’t forget that for all the depth and meaning it can offer us, we write for ourselves first.

Write with urgency

Write like you’re running out of time… because you are. There should be a pressing urgency to your writing, whether it’s getting started, finishing a piece, or finally putting it out there to share. What are you waiting for? Put away the excuses. Reading and writing function as a reciprocal round table. When we write, we hope our experiences can offer some wisdom, perspective, or empathy to those who read our work. And in return, our readers validate the bravery it took to share our writing. Be generous with sharing the insight of your life experiences. Annie Dillard put it best (doesn’t she always?): “One of the things I know about writing is this: spend it all, shoot it, play it, lose it, all, right away, every time. Do not hoard what seems good for a later place in the book or for another book; give it, give it all, give it now. The impulse to save something good for a better place later is the signal to spend it now. Something more will arise for later, something better. These things fill from behind, from beneath, like well water.
Similarly, the impulse to keep to yourself what you have learned is not only shameful, it is destructive. Anything you do not give freely and abundantly becomes lost to you. You open your safe and find ashes.”
https://medium.com/the-ascent/why-you-should-write-like-youre-dying-b4046eec8679
['M Gleeson']
2019-08-16 21:33:13.872000+00:00
['Writing', 'Death', 'Writers On Writing', 'Writer', 'Writing Tips']
Welcome to Modern Sex
Introducing Modern Sex

We are looking for intriguing articles, unexplored narratives, personal essays, alternative advice, and investigations into how you see sex and sexuality evolving in the modern day. (We are aware of the NYT column “Modern Love” and chose this title anyway for one simple reason: “Contemporary Sex” sounds like a sex-ed book written by someone who has never had sex.)

What has changed, what needs to change, what have you seen that you want everyone else to see? Who can you be there for that wasn’t there for you when the world was less open? How has modern society impacted sex and sexuality, and vice versa? Western culture, in particular, is experiencing a sexual revolution. People are fighting for the world to be a safer place for others to come forward and expose the undercurrent of sexual violence and aggression that has been made possible through the shame, repression, and silence surrounding sex.

What do we want to see?

Experiences and advice from underrepresented groups

Whole new stratospheres of sexual expression are opening up for groups who have been historically shunned into silence by hateful and antiquated ideologies and systemic repression. We want to be a place where all people of all orientations and identities can find advice, explore new ideas, and share their experiences. This means LGBTQ+ communities, as well as those who feel their sexual orientations or preferences have been underrepresented in the world of sex.

The emergence of new types of sexual relationships

More and more couples are exploring alternative relationships, stepping outside of their marriages and into new ones, open ones, partnered non-monogamy, polyamory, polygamy. Are you currently in an alternative relationship? Do you want to express something that you haven’t because it feels like people won’t accept it? We are looking for your story.

New frontiers of sex

Kids are swimming through endless digital oceans of porn, right in their pockets.
We have consent apps and soon (hopefully) male birth control. Brothels with robots and dolls are popping up around the world; deep-fakes, virtual reality, augmented reality, and much, much more are going to impact the world of sex in ways we can’t predict. Where do you see it going? Are you currently interacting with or investigating this marriage of technology and sex? We want to know more. These are only a few of the topics we are looking for, but what we love to be, more than impressed, is surprised. If your idea doesn’t fall into any category but you believe in it, submit it! The only requirement is that it relates to sex and sexuality as we are experiencing it in the modern day.
https://medium.com/sexography/welcome-to-modern-sex-647b46371f00
['Sexography Editorial']
2020-01-13 16:46:00.621000+00:00
['Equality', 'Writing', 'Newsletter', 'LGBTQ', 'Sexuality']
The Biggest Lessons I Learned From Making a Major Career Change
The Biggest Lessons I Learned From Making a Major Career Change

Steven Hopper · Nov 7 · 6 min read

I spent nearly a decade as a high school teacher and loved every minute of the job itself. What I struggled with was poor leadership, annoying politics, low pay and recognition, and the rigid day-to-day schedule. Not surprisingly, these align with the top three reasons from a recent Indeed.com survey on why people make a career switch: unhappiness with the sector, desire for more flexibility, and desire for more pay. The tipping point for me came when health concerns made doing my job as a teacher too difficult and I decided it was time for a change. And I’m not alone. According to CNBC, nearly half of workers make a major career switch in their lifetimes. But it’s not easy to do, and it takes planning and patience to pull off. What I didn’t know at the time is that finding a new job was only the beginning of the hard work it would take to switch careers. It took over a year for me, but I finally feel comfortable in my new role and am grateful for all that I learned along the way. Here are the major lessons I learned from this journey, for anyone else looking to make a major career switch:

Confidence is key.

“Whether you think you can or think you can’t, you are right.” — Henry Ford

The first step to making big life changes is going all in. Everyone who switches careers will experience doubts — both internally and from the people around them. When I started looking for new career opportunities, I remember feeling overwhelmed and nervous that I wouldn’t be able to find another job that matched my level of education and experience. I panicked, feeling like I would either be forced to stay in my current position forever or have to take a significant pay cut to get the requisite training and experience for a better job. Nevertheless, I proceeded with the job search and applied for anything and everything I felt aligned with my passions and strengths.
Confidence is what allowed me to put myself out there despite not being qualified on paper. And when I started my new job, confidence enabled me to lean into the challenge and prevail over the doubters I faced. The easiest way to overcome doubters is by exuding confidence.

The ability to learn trumps all other qualifications.

“We now accept the fact that learning is a lifelong process of keeping abreast of change. And the most pressing task is to teach people how to learn.” — Peter Drucker

Making a major career switch means overcoming a steep learning curve to onboard to a completely new industry and role. This means finding a balance between exuding confidence and knowing when you’re not skilled in something and need help. The best thing I did was ask a lot of questions, jot down notes when I didn’t understand something, and spend time researching outside of work so I could prepare myself for the next conversation. Lucky for me, I had previous education and experience that helped me catch up quickly. In fact, I would suggest that having a different set of skills and knowledge became one of my biggest advantages. What I had to learn was how to apply these to my new career in a way that highlighted my unique perspective. Whether you switch careers or not, those who embrace continual learning as a part of their work realize many benefits. A recent LinkedIn survey found “that employees who spend time at work learning are 47% less likely to be stressed, 39% more likely to feel productive and successful, 23% more ready to take on additional responsibilities, and 21% more likely to feel confident and happy. And the more you learn, the happier you become.” For me, the learning challenge revived my passion and interest in work. It allowed me to grow in ways that I would not have if I had stayed in the same field my entire career.

Failure is expected. Embrace it.
“Success is not final, failure is not fatal: it is the courage to continue that counts.” — Winston Churchill

I won’t lie: switching careers is not easy, and it does not happen quickly. For me, it took months of countless networking events, several job application submissions, many “thank you, but at this time we’re going in a different direction” responses, and only a few in-person interviews. Just when I was about to give up and prepare myself for the next year of teaching, I found the right opportunity and got a job offer. Now on the job, there have been many times I’ve underperformed. Instead of accepting these moments as failure, I’ve embraced them as learning opportunities. I’m reminded of Carol Dweck’s research on growth mindset. In her book Mindset: The New Psychology of Success she writes, “In the fixed mindset, everything is about the outcome. If you fail — or if you’re not the best — it’s all been wasted. The growth mindset allows people to value what they’re doing regardless of the outcome. They’re tackling problems, charting new courses, working on important issues. Maybe they haven’t found the cure for cancer, but the search was deeply meaningful.” So remember that making a career switch is like all aspects of life — it’s a journey, not a destination. Celebrate the highs and embrace the lows as growth opportunities.

Everyone is an imposter.

“Owning our story and loving ourselves through that process is the bravest thing we’ll ever do.” — Brené Brown

I faced an uphill climb in my new role as a business consultant to prove myself to my new colleagues and clients. This made me doubt whether I had made the right decision — I went from being viewed as an expert in my field to being viewed as an unqualified imposter. And it made me question myself and doubt my ability to switch careers. What I’ve learned, however, is that this is natural for everyone — whether you have years of experience or are new to a role.
At some point we all face this feeling of imposter syndrome. In fact, Rita Clifton writes about this in her book Love Your Imposter: Be Your Best Self, Flaws and All and says that about 70% of people have experienced imposter syndrome at some point in their career. But there are ways to overcome this natural doubt. Instead of letting the fear of not fitting in paralyze you, you can use these feelings to your advantage by maximizing your strengths and transforming that fear into excitement. While this sounds simple enough, it’s definitely not. It takes courage to stay true to ourselves in the face of doubt. What helped me immensely was sharing these feelings with others and hearing similar experiences from them. This led me to discover that people across jobs and industries have a lot more in common than we’re led to believe when looking through job postings. Sure we may have different education, experiences, and backgrounds, but we’re still connected as people. Ultimately, realizing this helped me reframe my outlook and open up to others instead of feeling like I had to defend my position. In turn, I was able to find great mentors who have helped me see my own potential in my new role.
https://medium.com/swlh/the-biggest-lessons-i-learned-from-making-a-major-career-change-d64ac3bd9948
['Steven Hopper']
2020-11-11 05:46:51.105000+00:00
['Motivation', 'Life Lessons', 'Work', 'Life', 'Inspiration']
NO NEW WRITERS WILL BE ACCEPTED UNTIL FURTHER NOTICE
Writing for Resistance Poetry

Submission Guidelines 2020.12.05

Resistance Poetry is dedicated to “Verse as Commentary.” We are interested in your poetic response — humorous, deep, whimsical, bold, etc. — to breaking news, social phenomena, or whatever current thing you feel could use a little push-back. Poetry can cut and polish language into a gem which can be examined by readers in various lights and from multiple angles. It gives them time to contemplate, question, look more deeply, perhaps see something anew. The goal of RP is to tap into this power to encourage people to question and broaden perspectives. But we also like to make people laugh. Many a truth is said in jest.

BASIC GUIDELINES

Original poetry using verse as commentary is the only genre we accept. Poetry is broadly interpreted, but, as Justice Potter Stewart said of porn, we know it when we see it. If you include quotes from other sources, reference them. Medium publication channels must be used to submit work. You must have a Medium account, format your work on Medium, and submit through the Medium publication channels. Illustrate your piece with an image appropriate to your work. Images over 1000 px wide are best for formatting. Credit your illustration! It’s not just polite, it’s right. Need help understanding image credits? Here’s a decent explanation: Tag your piece appropriately. Of the five tags you are allowed, one tag must be Resistance Poetry.

WHAT WE WON’T ACCEPT

Libel will not be published, for your protection and ours. If you make a specific accusation in your work, please provide proof of its veracity. We will not accept racist, sexist, or otherwise broadly denigrating works, nor will we publish pieces designed to harm or silence private persons.

SUBMISSION MECHANICS

Ask to be a writer by responding to this post. Include a link to a poem you have written which you believe belongs in RP. If we think your work is a good fit, we will add you as a writer and ask to publish your piece.
Once you are a writer, DO NOT PUBLISH YOUR PIECE FIRST. Create your piece as a draft. Tag your piece but DO NOT PRESS PUBLISH. Click on the three dots to the right of the publish button for the drop-down menu. Click on “add to a publication” and click on Resistance Poetry. Your piece will wind up in our inbox and will be reviewed.

WHY NOT PUBLISH THEN SUBMIT?

Because RP’s homepage is designed to feature the most recently published works. The feed is chronological. For maximum exposure, submitting as a draft is ideal. Otherwise, depending on the time that passes between personally publishing a piece and its acceptance by RP, your piece could wind up “below the fold.” Medium now allows us to publish “members only” pieces. Our understanding is that the royalties from the piece will accrue entirely to you, the writer.

EDITING

Editing poetry is tricky. If we notice what we believe are unintentional misspellings or grammar errors, we will correct them. If we are in doubt as to the intentionality of a break from the standard, we will ask you about it in a private message. We will not change your message. We attempt to respond to all submissions in a timely manner. However, delays can occur due to the editors’ schedules.

YOUR PART

If you write for Resistance Poetry, please support the publication by following it and engaging with other RP poets and their work. Also, if you choose not to submit a piece within three months of being made a writer, you will be removed from the list of writers.
https://medium.com/resistance-poetry/writing-for-resistance-poetry-4cf81f3b5493
[]
2020-12-05 10:14:00.376000+00:00
['Poetry', 'Explanation', 'Writing', 'Resistance Poetry', 'Resistance']
Iowa Is What Happens When Government Does Nothing
Warnings from doctors like Perencevich are what prompted my visit to Iowa City, a college town in eastern Iowa that serves as a sort of liberal sanctuary in a mostly red state. The city is home to the University of Iowa, and also to its public teaching hospital, which employs 7,000 people and has more adult ICU beds than most other state hospitals. I spent two days there just before Thanksgiving, interviewing doctors and nurses outside the brick walls of the hospital in the frigid November weather, standing six feet apart in the front garden or, when it rained, near a vent shooting out warm air on the building’s south side. Through the glass windows of the lobby, I watched as nurses in face shields pushed sick people around in wheelchairs. Once, I stepped inside to thaw and was startled by how quiet it was, and how the silence belied the suffering going on just a few floors above. The first cases of the coronavirus in Iowa were recorded here in early March, when a group of infected locals returned home from an Egyptian cruise. As cases rose, Reynolds closed schools for the rest of the school year and most businesses for about two months. But by May 15, she’d allowed gyms, bars, and restaurants in all of Iowa’s 99 counties to open up again. She did not require Iowans to wear a mask in public, ignoring requests from local public-health officials and the White House Coronavirus Task Force and arguing that the state shouldn’t make that choice for its people. “The more information that we give them, then personally they can make the decision to wear a mask or not,” Reynolds said in June. She also wouldn’t require face coverings in public schools, where she ordered that students spend at least 50 percent of their instructional time in classrooms. When Iowa City and other towns began to issue their own mask requirements, Reynolds countered that they were not enforceable, undermining their authority. 
(The governor’s office did not respond to multiple requests for comment for this story.) The rest of the summer and early fall brought a mix of business closings and reopenings in counties around the state. (Complicating the picture, a data glitch at the Iowa Department of Public Health deflated case numbers in late summer.) Infections exploded in meatpacking plants, where managers were allegedly taking bets on how many workers would get sick. After students returned to schools and universities in the early fall, Iowa had the highest rate of COVID-19 infections in the country. In October, when Iowa was in the thick of community spread, Reynolds showed up, maskless and smiling, at a campaign rally for Trump at the Des Moines airport. (Her let-them-get-sick attitude toward the pandemic hasn’t been unusual among Republican governors, though there have been exceptions, including Mike DeWine of Ohio and Larry Hogan of Maryland.) By late November, the number of new COVID-19 cases in Iowa was higher than at any other point in the pandemic, and as many as 45 Iowans were dying of the disease every 24 hours in a state of just 3 million people. Outbreaks were reported in 156 nursing homes and assisted-living facilities in Iowa, and the virus ran rampant in the state’s prisons. Doctors have been warning for weeks that the state’s health-care system is close to its breaking point. The University of Iowa hospital reached a peak of 37 COVID-19 inpatients in April, but by Thanksgiving, it had 90. That number may not seem overwhelming until you consider that COVID-19 patients require dozens of staff and that many spend weeks or months in hospital care. To meet the demand, administrators have had to reschedule hundreds of nonessential surgeries and convert multiple wards into COVID-19 units. Doctors told me that they’re already short on ICU beds, and are having to decide which critically ill patients receive one.
There are not enough specialists to oversee common life-support techniques, such as extracorporeal membrane oxygenation, or ECMO, for people with severe cases of COVID-19. And the University of Iowa hospital is actually in a better position than many others in the state. Smaller institutions, which have fewer specialized doctors and fewer staff overall, are being overwhelmed across Iowa, and many face bankruptcy, in part because they’ve been forced to cancel elective procedures. Worst of all, health-care workers are sapped. They are used to death. But patients don’t usually die at this pace. They don’t usually die in this way, with tubes sticking out of their throats and sucking machines clearing the mucus from their lungs. They don’t usually die all alone. Joe English, a 37-year-old respiratory therapist, spends every day traveling between hospital units, hooking up seriously ill COVID-19 patients to ventilators or ECMO machines. When there’s nothing left to be done, English is the one who turns off those machines; he’s done so at least 50 times in the past few months. “What I’m seeing [among health-care workers] is just frustration, desperation,” English told me. “People have been acting like we’ve been fighting a war for months.” There is a name for this feeling, says Kevin Doerschug, the director of the hospital’s medical ICU: moral distress, or the sense of loss and helplessness associated with health-care workers navigating limitations in space, treatment, and personnel. Just a few weeks ago, a man in his 30s with no medical problems arrived in Doerschug’s unit with a severe case of COVID-19. After a week on a ventilator, the man’s health had greatly improved. Nurses removed his breathing tube, and his vitals were stable. But just a few hours later, the man was dead. “Our whole team just sat down on the ground and cried,” Doerschug told me outside the hospital, his voice muffled by his mask and the sound of the heating vent. 
Trauma like that compounds when a hospital fills up with critically ill patients. “The sheer enormity of it — it’s just endless,” Doerschug said. What makes all of this suffering and death exponentially more painful is the simple fact that much of it was preventable. A recent New York Times analysis clearly showed that states with the tightest COVID-19 restrictions have managed to keep cases per capita lower than states with few restrictions. Reynolds is in an admittedly complicated situation. She, like other governors, is facing enormous pressure to protect people’s livelihoods as well as their health. But a mask mandate is free. And failing to control the virus is, unsurprisingly, very bad for business. “We want to take care of people … It shouldn’t be this hard, and that makes us mad,” Dana Jones, a nurse practitioner in Iowa City, told me. “There are people to blame, and it’s not the patients.” When Reynolds finally announced a spate of new COVID-19 regulations on November 17, the rules limited indoor gatherings to 25 people, and required that Iowans wear masks inside public places only under a very specific set of conditions. Four of the doctors and nurses I interviewed laughed — actually laughed — when I asked what they thought of the new regulations. The policies will do basically nothing to prevent the spread of the virus, they told me. State lawmakers’ response to Reynolds’s handling of the pandemic breaks down along partisan lines. “She’s done a good job balancing people’s constitutional rights with a few restrictions that have been commonsense,” Representative Dave Deyoe, a Republican from central Iowa, told me, arguing that tighter restrictions in more liberal states haven’t led to lower death rates. Although this is a common argument among Iowa Republicans, it’s an unfair one. Many Northeast and West Coast states have had more total deaths because they were badly hit by the virus early in the pandemic, before strong measures were put in place. 
In the past seven days, Iowa’s death rate has been at least twice as high as that of New York, New Jersey, and California. Democrats in Iowa believe that Reynolds’s inaction has always been about politics. Early on, she’d assumed an important role making sure that Trump would win Iowa in the November election, State Senator Joe Bolkcom, who represents Iowa City, told me. “She did that by making people feel comfortable” about going out to eat, going to bars, and going back to school. “She mimicked Trump’s posture” to get him elected. Ultimately, Reynolds was successful in her efforts: Trump won Iowa by 8 points. But Iowans lost much more.
https://medium.com/the-atlantic/iowa-is-what-happens-when-government-does-nothing-bdf940803e1c
['The Atlantic']
2020-12-03 19:13:56.466000+00:00
['Politics', 'Iowa', 'Covid 19', 'Coronavirus', 'Covid 19 Crisis']
How you can design end to end on a Chromebook
When Chromebooks first came out, it would’ve been a pipe dream to believe they could replace Macs for designers. They were limited by power, software, and screen quality. Even today, few would think to trade out their Apple computer for a Chrome device. I’m here to tell you it’s possible, and I’ll even share the tips that made it so. For the past year I’ve been working as a professional, paid (yes, paid!) designer using a Chromebook as my primary computer. Why, you ask? Well, Square, my previous company, persuaded me to make the move. They decided to transition some people over for a range of reasons — from the security of Chromebooks (they’re way safe) to their affordability to the fact that files are always backed up and synced. As a designer, I could’ve chosen to stick with Macs, but because I worked with people inside the company on internal presentations, I decided I needed the true Chromebook experience to empathize with my “customers.” I was wary at first…of course. Would this Chrome computer sit on my desk, collecting dust after a day of usage due to my preference for macOS? I’ve been designing with Figma for the previous three years (I was lucky enough to get an alpha invite), so my primary design software was already in the cloud. But I had no idea how Chromebooks would handle performance, or how I’d go about meeting my other design needs. It’s been an adjustment, but overall, I’ve gone Chromebook and I’ll never go back. Below is what you need to know if you’re thinking of making the switch as a designer.
https://medium.com/figma-design/how-you-can-design-end-to-end-on-a-chromebook-ebd1e9bb1b90
['Zach Grosser']
2018-05-24 15:35:20.292000+00:00
['Pixelbook', 'Design', 'Chromebook', 'Editorial', 'Figma']
Movie Data by the US State it’s Set In
Web Scraping and Data Analysis via Python

I recently completed a project analyzing movie revenues against factors such as genre, release date, and viewer rating. It was a good project for a beginner data scientist, and I enjoyed it. But as I was gathering the data, I grew more curious, and that curiosity developed into a project of its own. I learned a ton, especially about web scraping, as I did this, so I’m hoping somebody else may learn by reading about it. What other factors could affect a movie’s reception? The problem with answering such a vague question is the need to gather data. Even just coming up with a list of what data to use is a daunting task, much less actually gathering the data itself. After spending too long reading about movie data — and pausing this idea to complete the original project — I hit upon something interesting: https://en.wikipedia.org/wiki/Category:Films_set_in_the_United_States_by_state. It turns out Wikipedia has organized thousands of movies by the state they are set in. A movie’s setting could certainly be a factor in how it is received, and it’s not the type of information usually found in the standard movie dataset. I decided to focus on this data and set about satisfying my curiosity.

Web Scraping

The first step was to gather the data from Wikipedia. The requests and BeautifulSoup libraries are key here, and Selenium will help with some of the trickier bits. Pandas, of course, is needed to store and process the data.

```python
from bs4 import BeautifulSoup
import pandas as pd
import requests
from selenium import webdriver
from time import sleep
```

Looking at the links on that Wikipedia page, I noticed that many of them contain yet more subcategories. Instead of reading each page for all the subcategories, and then all those pages for yet more subcategories, they can all be read from this one page.
This is where we need Selenium: the links are not in the html by default, they only get inserted after the arrow icon next to the link is pressed. After some trial and error, I found this method for clicking every expandable arrow on the webpage.

```python
driver_options = webdriver.ChromeOptions()
driver_options.add_argument('headless')
driver = webdriver.Chrome(options=driver_options)
driver.get('https://en.wikipedia.org/wiki/Category:Films_set_in_the_United_States_by_state')
sleep(2)
to_click = driver.find_elements_by_xpath('//*[@title="expand"]')
sleep(2)
while len(to_click) > 1:
    for element in to_click[1:]:
        element.click()
        sleep(2)
    to_click = driver.find_elements_by_xpath('//*[@title="expand"]')
    sleep(2)
soup = BeautifulSoup(driver.execute_script('return document.body.innerHTML'), 'html.parser')
```

When I first attempted this code, without the sleep calls, I got inconsistent results. I wouldn’t find any arrows one run, then find a hundred the next. Sometimes I would get errors from not being able to find any, sometimes they’d just skip. Adding the sleep calls increases the time it takes to run, but I only needed to run it once, so it was worth it. The driver options allow Selenium to run without actually opening the window on my screen. The execute_script method at the end returns the html of the webpage. This is different from calling page_source, as that will only return the base html, not the changed html after all the button clicks. I skipped the first item on the webpage because it was only documentaries about the states: not exactly what I’m looking for. Next I passed the html to BeautifulSoup to search for the links I need.
But just grabbing the links wasn’t enough, as I had to associate each link with a state. Luckily the subcategories are all listed under a category that has the state name, so it didn’t end up too difficult. After removing duplicate links (some subsubcategories were listed under multiple subcategories), I ended up with 234 webpages full of movie titles to scrape.

```python
state_links = []
for group in soup.find_all('div', class_='mw-category-group')[1:]:
    for state in group.find_all('li'):
        item = state.find('a')
        name = item.get_text(strip=True).split('in ')[-1].split(' (')[0]
        state_links.append((name, f"https://en.wikipedia.org{item['href']}"))
        for item in state.find('div', class_='CategoryTreeChildren').find_all('a'):
            state_links.append((name, f"https://en.wikipedia.org{item['href']}"))
state_links = [i for n, i in enumerate(state_links) if i not in state_links[:n]]
```

Now to pull the actual movie links from each page. I already grabbed every subcategory, so I only need the movies themselves. Unfortunately, some of the categories have over 200 movies, leading to them being displayed over multiple pages. This required me to loop the scraping if I found a next page link. Not all of the pages used the same div wrappers, which led to a few conditionals scattered throughout the code.
```python
movie_list = []
for entry in state_links:
    state_name = entry[0]
    url = entry[1]
    next_page = True
    while next_page:
        r = requests.get(url)
        soup = BeautifulSoup(r.content, 'html.parser')
        page_wrapper = soup.find('div', id='mw-pages')
        if not page_wrapper:
            next_page = False
            continue
        content_wrapper = page_wrapper.find('div', class_='mw-content-ltr')
        if not content_wrapper:
            next_page = False
            continue
        for movie in content_wrapper.find_all('a'):
            movie_dict = {}
            movie_dict['title'] = movie.get_text(strip=True)
            movie_dict['state'] = state_name
            movie_dict['wiki_link'] = f'https://en.wikipedia.org{movie["href"]}'
            movie_list.append(movie_dict)
        next_page_link = content_wrapper.find_previous_sibling('a')
        if next_page_link and (next_page_link.get_text(strip=True) == 'next page'):
            url = f"https://en.wikipedia.org{next_page_link['href']}"
        else:
            next_page = False
wiki_df = pd.DataFrame(movie_list)
```

After getting the data, I inserted it into a Pandas dataframe. I ended up with the name, Wikipedia link, and state the movie was set in for 14,868 movies. This is an absolutely massive amount of data, but I know it will be cut down as I get more information on each movie.
Since the only information I have for each movie is its Wikipedia link, I’ll have to pull information from there. Most of the movies have an IMDB link in the external links section of their page. The IMDB url contains the IMDB id within it, making it easy to match up to other databases. The code to get this information is simple, but for nearly 15,000 pages, even after removing duplicates, it took over 2 hours to run.

```python
def get_imdb_link(url):
    print(url)
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html.parser')
    try:
        ext_links = soup.find('span', id='External_links').find_parent().find_next_sibling('ul')
        return ext_links.find('a', text='IMDb').find_previous_sibling()['href']
    except:
        return 'n/a'

wiki_df.drop_duplicates(inplace=True)
wiki_df['imdb_link'] = wiki_df['wiki_link'].map(get_imdb_link)
```

I then removed any movies that had no IMDB link, cleaned up the movie titles, and extracted the IMDB id from the url. I was left with a dataset of 13,254 movies, which I then saved to a csv file.

```python
wiki_df = wiki_df[wiki_df['imdb_link'] != 'n/a']
wiki_df['title'] = wiki_df['title'].map(lambda x: x.split(' (')[0])
wiki_df['imdb_id'] = wiki_df['imdb_link'].map(lambda x: x[x.find('title/tt')+6:-1])
wiki_df.to_csv('Data/wiki_data.csv', index=False)
```

Data Merging and Cleaning

Getting the data I want from IMDB is actually very simple. Their daily updated datasets are freely available to anyone at https://www.imdb.com/interfaces/. I decided to collect genres, release year, and ratings information, as it seemed the most interesting to me. Pandas makes importing the tsv files simple. Merging the two takes only seconds and yields 1,086,028 movies in a dataframe, much quicker than the 15,000 movies I got from Wikipedia. I also renamed the columns to make them more intuitive.
```python
import pandas as pd

imdb_basics_df = pd.read_csv('Data/title-basics.tsv', sep='\t', usecols=['tconst', 'startYear', 'genres'])
imdb_ratings_df = pd.read_csv('Data/title-ratings.tsv', sep='\t')
imdb_df = imdb_basics_df.merge(imdb_ratings_df, on='tconst')
imdb_df.rename(columns={'tconst': 'imdb_id', 'startYear': 'year', 'averageRating': 'rating', 'numVotes': 'votes'}, inplace=True)
```

Finally I merged the IMDB data with the Wikipedia data, read from the saved csv. Many of the Wikipedia pages did not have IMDB links, and some of the IMDB ids I got from Wikipedia did not match up with anything in the IMDB data. Most Wikipedia data is user generated and edited, so I’d guess that human error is the cause.

```python
wiki_df = pd.read_csv('Data/wiki_data.csv')
compiled_df = wiki_df.merge(imdb_df, on='imdb_id')
```

The genre information is all put into a single string. To make this more useful, I need to break it out into multiple columns: a column for each genre, with a boolean value in each row. IMDB is kind enough to provide a list of their genres at https://help.imdb.com/article/contribution/titles/genres/GZDRMS6R742JRGAG#. After turning that into a list, I simply looped over the genres, adding a new column for each. The new column is mapped against the existing genres column. Both values are strings in the same format, so the evaluation is simple.

```python
genres = 'Action | Adult | Adventure | Animation | Biography | Comedy | Crime | Documentary | Drama | Family | Fantasy | Film-Noir | Game-Show | History | Horror | Musical | Music | Mystery | News | Reality-TV | Romance | Sci-Fi | Short | Sport | Talk-Show | Thriller | War | Western'.split(' | ')
for genre in genres:
    compiled_df[genre] = compiled_df['genres'].map(lambda x: genre in x)
```

Time to do a little cleaning. I turned the year column into integers. Some of the genres, namely Adult, Game-Show, News, Reality-TV, Short, and Talk-Show, are not part of the movies I’m looking for.
Most of these probably only apply to television shows in IMDB, and not movies, but I still removed any movies that had those genres and deleted those columns. Finally, after removing duplicates, I’m left with a clean dataset of 12,793 movies to analyze, saved once again to a csv file.

```python
compiled_df['year'] = pd.to_numeric(compiled_df['year'])
genres_to_remove = ['Adult', 'Game-Show', 'News', 'Reality-TV', 'Short', 'Talk-Show']
for genre in genres_to_remove:
    compiled_df = compiled_df[~compiled_df[genre]]
compiled_df.drop(columns=genres_to_remove, inplace=True)
compiled_df.drop_duplicates(inplace=True)
compiled_df.to_csv('Data/compiled_data.csv', index=False)
```

Analysis

Analyzing the data is actually the easiest part. The data is once again read in using Pandas’ read_csv function.

```python
import pandas as pd

df = pd.read_csv('Data/compiled_data.csv')
```

I used Pandas’ groupby function to get collective data for each state: the mean year, rating, and vote count. Years are limited to the era in which movies have existed, ratings are limited to 0–10, and votes are limited by the user base of IMDB. Because of that, outliers should be rare and not very far off, so the mean is a good measure of this data.
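That groupby step looks roughly like the sketch below. The three rows here are made up purely for illustration; the real csv holds 12,793 movies in the same shape.

```python
import pandas as pd

# Hypothetical rows in the same shape as compiled_data.csv
# (the real file holds 12,793 movies).
df = pd.DataFrame({
    'state': ['Iowa', 'Iowa', 'Texas'],
    'year': [1995, 2005, 2010],
    'rating': [6.5, 7.5, 8.0],
    'votes': [1000, 3000, 5000],
})

# One row per state, with the mean of each numeric column.
state_means = df.groupby('state')[['year', 'rating', 'votes']].mean()
print(state_means)
```

The result is one row per state, which is exactly the shape needed for comparing states against each other.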
https://andrew-muller.medium.com/movie-data-by-the-us-state-its-set-in-c60d3f25296c
['Andrew Muller']
2020-10-26 16:09:10.776000+00:00
['Movies', 'Python', 'Data Science']
9 Quotes to Help You Unleash Your Creativity
By Heike Young Some people feel innately creative; some don’t. Many view creativity as a you’ve-got-it-or-you-don’t trait. As marketers, we’re often more comfortable labeling ourselves as effective, efficient, or data-driven, instead of creative. At the same time, companies put a lot of pressure on marketers to nail every campaign — every time. Because of this, marketers have grown skeptical of veering away from best practices and using their creative intuitions. We choose to do what’s tried, tested, and true in lieu of that new, crazy experiment that may not succeed. But what if being different is the key to your content — and marketing — success? On this week’s episode of the Marketing Cloudcast — the marketing podcast from Salesforce — we’re discussing the ways we can tap into our creativity to be exceptional at marketing. To dive in, we went to the expert: Jay Acunzo, host of the Unthinkable podcast and savant of all things creativity. If you’re not yet a subscriber, check out the Marketing Cloudcast on iTunes, Google Play Music, or Stitcher. You should subscribe for the full episode, but our conversation with Jay was jam-packed with insights about unleashing creativity and approaching creative projects in a new way. Here are nine quotes to get you thinking. 1. “Noise is not the problem. Similar noise is.” The internet is bloated with content that promises hacks, secrets, and fail-proof tips from gurus. But as Jay points out, “There are no secrets. People who follow that stuff create a lot of sameness and noise.” And all this noise has started to sound very much the same. 2. “People and organizations have within them what it takes to do exceptional work, but they’re just not executing against it.” Jay is passionate about encouraging marketers to act on their creative intuitions instead of tired best practices. How can you be the bass drop in a room full of elevator music? 3. 
“Creativity doesn’t mean big.” Thinking more creatively about your marketing doesn’t mean you have to craft the next award-winning Super Bowl commercial. But it does mean discovering what differentiates your brand from competitors, and speaking to that in a new way. One parallel Jay has noticed when interviewing entrepreneurs for his podcast is that “they all start with an aspirational anchor: your desire, your intent to be exceptional that you can articulate.” What’s your anchor? 4. “Creativity is a work ethic.” Jay says it’s time to ask, “What are you trying to accomplish? Where are you trying to go? What is your aspiration as an individual, a team, and a business, and can you describe it in a way that’s specific and concrete?” If you think that way every day, creativity will become your way of life and your work ethic. 5. “Everyone has access to the same listicles, but no one has access to your people.” “I’m really interested in helping brands figure out how they’re different and exceptional from everybody else, and [helping] them deploy that in their marketing,” explains Jay. In this quest, people are your #1 resource. Use them. “Everyone has access to the same listicles, but no one has access to your people. You should execute against that.” 6. “We’re in an industry and an era where the ground is moving under our feet. The people that are going to fall away are the people who think it’s ever going to get stable again.” To differentiate your marketing, you need to access and flex your creative muscle. How can companies accomplish this? Jay says it all comes down to adopting the mentality of a life-long learner and accepting the new status quo. “When you have that mentality, you’re never pausing to ask: am I average? Should I keep going? Should I keep pushing myself? You’re always thinking: I’m going to keep learning,” says Jay. 7. 
“Ask yourself: What can I do to be better than I am today?” Jay says marketers shouldn’t obsess over whether the content they’re putting out is average or compare themselves to others. Instead, continue growing every day. 8. “Being exceptional is about being different. The best way to be an exception is to put all of yourself into your work.” He continues, “There is no situation that is entirely identical to someone else’s situation. You as an individual, your team, your brand, the way you’re marketing, your product — all of these factors roll together to make your situation different than anyone else’s.” Give 100% of your efforts every day if you truly want to grow your career and be an exceptional marketer. 9. “Find the framework, but then you have to break the framework.” When asked what he feels that content marketers are getting wrong about marketing in 2017, Jay says, “We play it so safe that it borders on pointless. Ten new websites launch every 1.7 seconds today. Brands cannot afford to blend in and do what’s tried and true forever. You find the framework, but then you have to break the framework.” He continues, “Content marketers are in the mode of being the bottom feeders in the ocean that is the internet.” When it comes to content marketing, it’s not about surviving in that ocean; it’s about figuring out ways to thrive. “The idea that standing out or being different takes guts has to die. This is just the state of the world. The safe bet is always being different.” These quotes are just the beginning of what we discussed with Jay (@jayacunzo). Get more details and inspiration on how to unleash your creativity in this episode of the Marketing Cloudcast. Join the thousands of smart marketers who already subscribe on iTunes, Google Play Music, and Stitcher. New to podcast subscriptions in iTunes? Search for “Marketing Cloudcast” in the iTunes Store and hit Subscribe. 
Tweet @youngheike with marketing questions or topics you’d like to see covered next on the Marketing Cloudcast.
https://medium.com/marketing-cloudcast/9-quotes-to-help-you-unleash-your-creativity-d56841d53d35
[]
2017-03-01 21:36:10.189000+00:00
['Marketing', 'Digital Marketing']
Hyperplanes and You: Support Vector Machines
A core data science task is classification: grouping data points into various groups based on certain shared qualities. In a sense, it’s an exercise as old as life itself: as soon as the first protozoan developed sensory organs, it (accidentally) started to act differently based on various sensory stimuli. On a higher biological level, it’s a monkey looking at an object hanging from a branch and deciding “food” or “not food”. On a machine level, it’s your ML model combing through credit transactions and deciding “fraud” or “not fraud”. You’ve probably heard of clustering as a technique for classification; it’s easy enough to visualize on a two-dimensional graph, or even with a Z axis added in. It’s intuitive, since we move about in three, maybe four dimensions. But your data may be a little more complex than that (as far as axes are concerned), and the moment you have 4 columns in your table, you’re in high-dimensional space. How do you draw balanced class distinctions in data with 70 features? One clever way is support vector machines, a geometric classification technique involving hyperplanes — which can be thought of as “decision-making boundaries”. Multiplanar Thinking In short, SVMs classify data points by drawing hyperplanes to maximize the overall distance between classes. Hyperplanes are much simpler than they sound: a “subspace whose dimension is one less than that of its ambient space”. In our previous 2D examples a hyperplane is a 1D line. In a graph with a Z axis, we’d have a 2D plane. Let’s start with two dimensions. There are plenty of lines you could draw to separate these points into two classes: But some lines sort of… feel better than others, don’t they? That’s your brain performing a bunch of visual-distance estimations & subconscious calculations. A lot of neurons are firing in some very complex ways to “balance” things intuitively. SVMs are a way of mathematically formalizing this balancing. 
A hyperplane (in this case, a line) that feels good to you is likely one that maximizes the margins (overall distance) between itself and the closest data points of each class. The reason we search for balanced classifiers is that the real world doesn’t always look like our training data, so we want our model to generalize well — it should learn enough from the dataset without overfitting on the unique minutiae. To do so, we draw some support vectors to figure out where the optimal hyperplane lies, and maximize the margins between the vectors. Getting Tricky But how can we deal with messier data that doesn’t seem to fit neatly into a linear classification? The simple answer is “take your data to the next dimension” in a very literal sense. Welcome to the kernel trick.
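To make the kernel trick concrete, here is a small sketch using scikit-learn's SVC (my own illustration, not from the original post; the ring-shaped data is invented for the example). One class sits in a cluster at the origin, the other forms a ring around it: no straight line can separate them, but lifting the data into a higher-dimensional space with an RBF kernel can.

```python
import numpy as np
from sklearn.svm import SVC

# Invented example data: one class clustered at the origin, the other
# forming a noisy ring around it.
rng = np.random.default_rng(0)
inner = rng.normal(0.0, 0.5, size=(100, 2))
angles = rng.uniform(0.0, 2 * np.pi, size=100)
ring = np.column_stack([np.cos(angles), np.sin(angles)]) * 3.0
outer = ring + rng.normal(0.0, 0.2, size=(100, 2))

X = np.vstack([inner, outer])
y = np.array([0] * 100 + [1] * 100)

# A linear kernel searches for a flat hyperplane and fails here; the
# RBF kernel implicitly maps the points into a higher-dimensional
# space where a separating hyperplane does exist.
linear_svm = SVC(kernel='linear').fit(X, y)
rbf_svm = SVC(kernel='rbf').fit(X, y)

print('linear accuracy:', linear_svm.score(X, y))
print('rbf accuracy:', rbf_svm.score(X, y))
```

Running this, the linear classifier hovers well below perfect accuracy while the RBF classifier separates the two classes almost exactly.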
https://mark-s-cleverley.medium.com/hyperplanes-and-you-support-vector-machines-7dcd406e2f1a
['Mark Cleverley']
2020-11-14 22:33:07.545000+00:00
['Machine Learning', 'Python', 'Computer Science', 'Data Science', 'Support Vector Machine']
Um, Like, You Just Need to Be Smarter With Money
Given the times we’re living through it’s difficult to comprehend why a songwriter would choose to effectively add to the divisiveness. Why wouldn’t they ask themselves the question I pose at the outset of this article? And if they did, how in the world did they decide it was still a good idea to release the music they released — in 2020? With such a massive platform, you’d think you’d choose empowerment over heaping more anxiety on an already stressed-out audience. Maybe they had good intentions. Either way, the result was neither helpful nor productive. Words matter. Timing matters. Emotions matter. Self-awareness matters. Tact, grace, empathy, and compassion matter. These things matter beyond songwriting and interpersonal communication. They matter when you’re talking about money, whether it’s in the present forum or one-on-one with a friend. I read articles about money. I hear people say stuff to people who are experiencing money trouble. I marvel at how unhelpful and unproductive some of these things — even if stated with good intentions — can be. For example, inflation. It’s a concept lots of personal finance types like to latch onto. Most people don’t think much about it. Others have a difficult time wrapping their head around the idea. With so much apathy and ignorance, it’s the ideal subject matter for the money guru: It’s even more difficult to talk a friend down when they’re freaked out about having enough money to pay their bills. You can help your friend better situate their finances. However, if the first thing you explain to your friend is inflation, you’re a bleeping know-it-all who is ultimately doing them a disservice. A manifesto about inflation — it’s the go-to line for so many people whenever the conversation turns to money. The classic, hey, look what I know! Fantastic, except schooling somebody who’s trying to create a personal financial plan, particularly when they’re hurting, does little, if any, good. 
Riffing about the purchasing power of money (“in twenty years, it will cost you $1.20 to buy what you can buy today with $1.00”) is the stuff of blowhards. Inflation is simply one more thing we need to adapt to. We have no control over it. So you’re stating the obvious when you say you want the money you save and invest to outpace inflation. Or at least some of it. Because if you freak out about inflation, you’re bound to allocate your money too aggressively, leaving you potentially screwed. You need pots of money. Some that don’t stand a chance against inflation. Others that will beat it (even though you can’t comfortably predict the exact number you have to beat and when you have to beat it). I’d go so far as to say you’re just fine to ignore inflation. Why introduce a potentially complicated (and ultimately meaningless) concept to somebody who has essentially told you they don’t have an emergency fund because they don’t even know what one is? Explain and empower them on the basics. This will result in a well thought out and offensively defensive budgeting, saving, and investing plan that will weather every unknown (even a global pandemic, even inflation!) without stressing over things we can’t predict and have little control over even if we could. By the opposite token, don’t tell your friend expressing money problems that “um, like, you just need to be smarter with money.” Also, not helpful. In fact, next time a friend stresses out to you about money, don’t say anything. Let it breathe. Let them get it out. Then buy them a drink. Change the subject. Proceed to make them have a good time. Then, on day two, give them a call or shoot them a text. Remind them of the conversation. Lead with similar money tumult you’ve experienced. Include what you did to better situate yourself and adopt sound personal finance habits. Provide a resource or two to lead your friend in the right direction. Obviously, they need to be smarter with money.
https://medium.com/makingofamillionaire/um-like-you-just-need-to-be-smarter-with-money-80274c7eb093
['Rocco Pendola']
2020-12-24 14:02:27.880000+00:00
['Personal Finance', 'Self-awareness', 'Self', 'Money', 'Life Lessons']
AI in Government: Part 3 — Signal Processing
With the explosion of data around us today, it can be difficult to pick out the important information from the noise. We often mistakenly deem information irrelevant, but upon further inspection, we find that the signal is clearly advising a course of action. Signals are composed of many components, and if a person is only exposed to part of the signal instead of the entire signal, there is a high risk of drawing incorrect conclusions. For example, imagine that a lifelong fan has season tickets to the local university’s basketball team. She attends every home game, and her team wins them all. Because she only sees the home games but misses the away games, she incorrectly concludes that the team is undefeated. She is unaware that while her team won 20 home games, they lost 5 away games. Unwittingly, observing a filtered signal like this can lead to false conclusions. A key part of RS21’s mission is to make meaningful signals more apparent. We connect disparate data sources, analyze them, and visualize findings through intuitive interfaces to enrich understandings and provide actionable information our clients can use to inform decisions. So how does signal processing support better data analytics and better decision-making? Human Perception We receive a constant stream of information from the external environment and react to information we judge to be important. A motorist, for example, will coast to a stop when a traffic light changes from green to red. Drivers are accustomed to this signal and agree to stop as a means to avoid accidents. But not all information is as easily perceived as the traffic light. In fact, it is estimated that the human brain processes 400 billion bits of information per second, but our conscious minds are only aware of 2,000 of those bits. We miss the vast amount of information raining down on our five senses (i.e., sight, hearing, taste, smell, and touch). 
Furthermore, our brains often incorrectly process the information that we do catch. “Rotating Snakes” optical illusion by Akiyoshi Kitaoka. Consider the static image above. Most people incorrectly perceive the wheels as moving. But the wheels are, in fact, static. You can download it or print a hardcopy and still perceive it to be moving. Why? Because the information encoded in the pixels — the color and intensity — varies across the image, causing asymmetric luminance. Such optical illusions are commonplace in much of our thinking and how we naively interpret the billions of signals around us. In data science, we make an intentional effort to systematically oppose our inherent cognitive illusions. The Monty Hall Problem A signal can be defined as any information that varies in time or space, or both time and space. The traffic light switching from green to red encodes the instructions “STOP”. It is both a signal and an agreed upon set of instructions for motorists to stop and wait for the light to turn green again. While the traffic signal encodes a social norm, many signals are not codified in this way and can create competitive advantages for those who know how to interpret them. For example, the Monty Hall problem, from the TV game show Let’s Make a Deal, illustrates the mismatch between helpful information signaling us to action and our human intuition that runs counter to its message. Photo by Marco Bianchetti / Unsplash In this problem, a contestant is shown three doors and asked to pick the door he or she believes contains a prize. The host then proceeds to open one of the other doors, revealing that there is no prize behind it. The contestant is given the option of either staying with his or her original choice or switching to the door that the host didn’t open. The question is, does the reveal provide enough information for the contestant to know whether to stay with the original choice or switch? Most people assume it doesn’t matter. 
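One way to check is to simulate the game many times; the sketch below (my own illustration, not from the article) plays both strategies.

```python
import random

# A quick simulation of the Monty Hall game: three doors, one prize.
# The host always opens a losing door the contestant didn't pick.
def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        opened = random.choice([d for d in doors if d != pick and d != prize])
        if switch:
            # Move to the one remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print('stay:  ', play(switch=False))   # close to 1/3
print('switch:', play(switch=True))    # close to 2/3
```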
But if you play this game repeatedly, the contestants that switch win the prize twice as often as those who don’t. While the extra information doesn’t always motivate contestants to switch doors, in data science we can use such counter-intuitive insights to better predict patterns and outcomes in various circumstances.

Signal Processing Applications in Government

National Oceanic and Atmospheric Administration (NOAA)

A familiar signal is time series data, such as temperatures recorded at a set of geographically distinct weather stations. Such data can show the passage of Earth orbiting the sun, as represented by periodic high annual temperatures in summer months and low annual temperatures in the winter. Similarly, daily and nightly highs and lows indicate the Earth’s 24-hour rotation. While this is a simple use of signal processing, there are larger implications for the use of climate data and modeling catastrophic weather events that might necessitate a disaster preparedness response. Signal and image processing techniques and improved algorithms can help NOAA and other weather forecasting entities predict and assess the impact of natural disasters: hurricanes, flooding, tsunamis, and severe weather events. More accurate predictions coupled with a better understanding of cascading effects and the geographic area affected will allow institutions and disaster response organizations to prepare, provide communications and outreach, and efficiently deploy resources as needed.

Historical storm event frequency to support predictive analysis of natural disaster events and impact. © RS21

National Institutes of Health (NIH)

As the primary U.S. federal agency responsible for biomedical and public health research, the National Institutes of Health (NIH) is a leader in medical research, treatments, and cures. With the power of genomic signal processing (GSP), genetic diseases can be detected and diagnosed early on.
This research is advancing precision medicine, an emerging model that tailors therapeutic tools and medicine for individual patients based on their specific genetics. As this field evolves, the reality of precision medicine will help lead to better treatments and improved outcomes for patients.

Department of Defense (DoD)

Signal processing has tremendous implications for the Department of Defense and the intelligence community dealing with military response in complex environments and real-time situations. Imagine, for example, if military analysts could quickly cut through the noise by converting signals from analog to digital and then analyze these signals to detect and respond to adversarial acts. Furthermore, imagine the enhanced functions of drones and electronic devices built with better visual, physical, and audio signal processing capabilities. Such devices could collect and process intelligence data, react to voice commands, and automatically respond to precise movements and sounds in the environment. This would ultimately result in better support for military personnel.

Economic + Financial Institutions

Cultural and economic behaviors generate signals. Public equity markets were transformed in the early 1980s by statistical analysts who recognized an enormous advantage for those who could couple company data with stock prices. For example, two companies in the same business sector, such as Coca-Cola and Pepsi, should have similar stock prices. Any large observed differences permit a profitable trade by betting on the under-performing stock and betting against the over-performing stock. This technique has proven to be very profitable to those who collect and appropriately analyze this augmented signal. Today, financial analysis depends on signal processing techniques, allowing our economic and financial institutions to more accurately evaluate and respond to short-term and long-term forecasts.
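The pairs-trading idea above can be sketched as a simple statistical signal: track the spread between two co-moving price series and flag a trade when the spread's z-score drifts far from its mean. The snippet below is a toy illustration on synthetic prices (the `spread_zscore` helper and the threshold of 2 are illustrative assumptions, not a production trading strategy):

```python
import numpy as np

def spread_zscore(prices_a, prices_b):
    """Z-score of the price spread between two series that should co-move."""
    spread = np.asarray(prices_a, dtype=float) - np.asarray(prices_b, dtype=float)
    return (spread - spread.mean()) / spread.std()

# Two synthetic "same sector" stocks sharing a common trend...
rng = np.random.default_rng(0)
common = 100 + np.cumsum(rng.normal(0, 1, 500))
a = common + rng.normal(0, 0.5, 500)
b = common + rng.normal(0, 0.5, 500)
b[-1] += 10  # ...until B temporarily runs ahead of A

z = spread_zscore(a, b)
# A large negative z-score flags A as under-performing relative to B;
# the statistical-arbitrage bet is long A / short B, expecting reversion.
if z[-1] < -2:
    print("signal: long A / short B")
```

In practice analysts model the relationship more carefully (e.g., cointegration tests) and account for trading costs, but the core signal is exactly this kind of standardized divergence.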
Parting Thoughts

Signal processing has proven to be enlightening in many fields, revealing new insights and providing novel actionable information. RS21 consults with clients and proactively collects a variety of highly relevant data sources. We ensure the information works in concert to address a specific problem so that a clear melody of meaning can be distilled. Sign up for our newsletter to learn more about RS21’s work and how data science can help you.
https://medium.com/rs21/ai-in-government-part-3-signal-processing-1159cc056f1d
[]
2020-07-08 19:25:38.481000+00:00
['Machine Learning', 'Data Analytics', 'Artificial Intelligence', 'Data Science', 'Information Technology']
8 Ways to Make Accents the Background, Not the Story
Sasin Tipchai for Pixabay

I get it. You have a character who is French but speaking in English. You have a character with a Scottish accent. You have a character from Russia. And you want to give the flavor of their accents to your reader, right? Well… maybe. Here’s the thing. Making your reader focus on a character’s accent can cause you to commit the worst writing sin of all: lifting your reader out of the story. Giving them a reason to stop reading. I’m half-French, and several of my mysteries take place in Montréal, so I’m acutely aware of the need to give a flavor of the environment and the characters’ speech patterns. I want them to feel the “Frenchness” of the venue. But how do I do it? Probably not like this: “I ‘ad to put zee ticket een zee machine.” You probably had to read that out loud, right? To see what it sounded like? But… did you focus on the content of what my protagonist Martine might have been saying? Of course not. You were figuring out her accent. Imagine, now, that every time Martine opens her mouth, that phonetic spelling is what comes out. Will you ever be able to follow the mystery? Seriously, I don’t think I’d be able to follow the mystery, and I’m the one writing it! There’s another problem with this kind of dialogue: it’s not real. No matter how hard you try, unless you are French, you’re not going to sound French when you try Martine’s lines out loud. You’re going to sound like a caricature of a French-speaker. And caricature slips pretty seamlessly into stereotypical thinking and perception, into the worst kind of cliché. This is Martine LeDuc, not Pepe LePew. There has to be a better way. Fortunately, there is. There are ways to give your characters that je ne sais quoi and make them real and vibrant to your readers without taking the reader out of the story and without resorting to caricature.

Focus on what’s interesting

I spent a lot of time creating Martine.
There are a lot of things about her that are interesting: she holds an important and unusual job, she copes with living in a city that is culturally and linguistically divided, she is a stepmother. The way she speaks — her accent — is not the most important or interesting thing about her. I don’t want readers coming away from her stories thinking about her accent, but about her life, her opinions, her adventures. What’s the solution? Keep the accent off the page, and keep the rest on it.

Give foreign phrases, not accents

One of the solutions I’ve used is to sprinkle snippets of French throughout Martine’s speech. (I can’t claim credit for this method: Agatha Christie’s Hercule Poirot is constantly peppering his speech with little French phrases.) The trick here is twofold:

- to provide just enough consistently to give a sense of the character’s culture and language without overwhelming the reader with too many foreign words.
- to not immediately translate what you’ve written. (There’s nothing more tiresome than reading, “Je ne sais pas — I don’t know.”) Instead, use very short phrases that are immediately understandable by their context.

Narrate the accents

Another character in your story might notice someone’s accent and remark on it, either out loud or in their head. “His English was excellent, but deeply accented; it was clear which side of the city he was from.” Someone might notice a character tripping over a difficult pronunciation — this is one of the few moments where “show, don’t tell” should be reversed. “I was struggling a little with his thick Scottish accent, and that slowed my reaction.”

Drop contractions

One of the clearest giveaways that English isn’t a person’s first language, or that they’re new to it, is a lack of contractions in their speech. Simply replacing “I wouldn’t go,” with “I would not go” allows you to keep the flow of dialogue moving while still rendering the character’s essential differences of speech.
Check out grammatical structures

In French, auxiliary verbs do a lot of the heavy lifting, but they translate poorly. “Aller” means “to go,” so a French speaker might assume that the “to” and “go” are inseparable. “I must to go” might be the result. Russian works the opposite way: it infers auxiliary verbs from the sentence’s context, so that same “to” might disappear completely, leaving a character saying, “I go work tomorrow.” Before you go this route, however, make sure that you get the original language right; don’t rely on clichés about it. There’s nothing more insulting than seeing oneself caricatured on a page.

Explore alternate idioms

Idioms are subtle tells that indicate one’s background, both in time and geography. An older character will use different slang than will a twenty-something, and figuring out what those differences are can make your dialogue come alive and feel authentic. People often think that idioms can be translated (they can’t) and you can have some fun with this, having characters translate their idioms into English. In American English, one wishes to be a fly on the wall to eavesdrop; in French, it’s a spider under the table that gets the scoop.

Take advantage of commonly used words

When you’re creating a character, study where they’re from. A Yorkshire farmer says “aye” rather than yes; a Scottish doctor might note you have a wee temperature; an Irish painter might add “sure” at the end of a sentence. But I was serious when I said “study”: echoing what I said above, the worst thing you can do is make assumptions about language patterns based on others’ clichéd perceptions of that language. Make sure you get it right; make sure you don’t overdo it; make sure the expression doesn’t dominate the dialogue.

Show other traits

Honestly, relying on accents to show a character’s “otherness” is sheer laziness on the writer’s part.
There are many ways to root a character in whatever it is you want to show — their backstory, their culture, their environment. Martine, for example, likes her food, and she’s forever finding excuses to eat poutine, which puts her squarely in Québec’s gastronomic culture. Showing the reader how much she likes this dish roots her in Montréal, but it also tells a lot about the character herself. Perhaps your character wears something that’s important to their home or work. American politicians tend to wear an American flag on their lapels. Other cultures find that curious. Finding that uniqueness about your character’s background and weaving it into their daily life will help capture who they are better than any weak accent ever will.

Kevin Gardner for Pixabay

In conclusion…

These are ways you can avoid the tedious phonetic spellings that mess up otherwise good writing and encourage readers to put the book down. And looking at it from a wider perspective, the truth is, everyone has an accent. We’re not aware of our own most of the time, but there are many differences in the ways people speak — Californians don’t sound like residents of Georgia any more than a Brit sounds like they come from Sri Lanka. Calling out just one accent and not the others makes no sense. I am a tremendous fan of Tana French’s books. They’re all firmly rooted in a sense of place — in Ireland. Sometimes this comes through in characters’ speech, but only in subtle ways: expressions, word order in a sentence, that sort of thing. When I read French’s books, though, I’m not hearing the characters the same way she did when she wrote them. My English is of the American variety, and any word I read in English is going to come through my brain sounding American.
I was particularly intrigued by one of her stories in which the Irish narrator is noting the accent of another Irish character, one who came from a less-privileged background, who grew up in the projects; his accent was somehow “dangerous.” I didn’t hear what that accent actually sounded like—but I got the idea clearly. (As a side note, though, I was anxious to listen to the audiobook version of that particular novel, to listen to an Irish reader differentiate those accents. It was fascinating, but had nothing to do with my appreciation of the story.) The bottom line: making accents the background and not the story itself is what will make for better stories, fresher characters, and a reader who won’t be able to put the book down. Jeannette de Beauvoir helps writers develop their voices and those of their characters at JeannettedeBeauvoir.com.
https://jeannettedebeauvoir.medium.com/8-ways-to-make-accents-the-background-not-the-story-37145ad0171c
[]
2019-04-11 14:16:22.415000+00:00
['Accents', 'Writing', 'Character Development', 'Fiction Writing', 'Writing Dialogue']
Independent Component Analysis (ICA) In Python
Suppose that you’re at a house party and you’re talking to some cute girl. As you listen, your ears are being bombarded by the sound coming from the conversations going on between different groups of people throughout the house and from the music that’s playing rather loudly in the background. Yet, none of this prevents you from focusing in on what the girl is saying since human beings possess the innate ability to differentiate between sounds. If, however, this were taking place as part of a scene in a movie, the microphone which we’d use to record the conversation would lack the necessary capacity to differentiate between all the sounds going on in the room. This is where Independent Component Analysis, or ICA for short, comes into play. ICA is a computational method for separating a multivariate signal into its underlying components. Using ICA, we can extract the desired component (i.e. conversation between you and the girl) from the amalgamation of multiple signals.

Independent Component Analysis (ICA) Algorithm

At a high level, ICA can be broken down into the following steps.

1. Center x by subtracting the mean
2. Whiten x
3. Choose a random initial value for the de-mixing matrix w
4. Calculate the new value for w
5. Normalize w
6. Check whether the algorithm has converged and if it hasn’t, return to step 4
7. Take the dot product of w and x to get the independent source signals

Whitening

Before applying the ICA algorithm, we must first “whiten” our signal. To “whiten” a given signal means that we transform it in such a way that potential correlations between its components are removed (covariance equal to 0) and the variance of each component is equal to 1. Another way of looking at it is that the covariance matrix of the whitened signal will be equal to the identity matrix.

The actual way we set about whitening a signal involves the eigenvalue decomposition of its covariance matrix. The corresponding mathematical equation can be described as follows.
x_whitened = E D^(-1/2) E^T x

where D is a diagonal matrix of eigenvalues (every lambda is an eigenvalue of the covariance matrix) and E is an orthogonal matrix of eigenvectors.

Once we’ve finished preprocessing the signal, for each component, we update the values of the de-mixing matrix w until the algorithm has converged or the maximum number of iterations has been reached. Convergence is considered attained when the dot product of w and its transpose is roughly equal to 1. The update is

w_new = mean(x g(w^T x)) - mean(g'(w^T x)) w

where g is a nonlinearity (tanh here) and g’ is its derivative, after which w_new is normalized to unit length.

Python Code

Let’s see how we can go about implementing ICA from scratch in Python using NumPy. To start, we import the following libraries.

import numpy as np
np.random.seed(0)
from scipy import signal
from scipy.io import wavfile
from matplotlib import pyplot as plt
import seaborn as sns
sns.set(rc={'figure.figsize': (11.7, 8.27)})

Next, we define g and g’ which we’ll use to determine the new value for w.

def g(x):
    return np.tanh(x)

def g_der(x):
    return 1 - g(x) * g(x)

We create a function to center the signal by subtracting the mean.

def center(X):
    X = np.array(X)
    mean = X.mean(axis=1, keepdims=True)
    return X - mean

We define a function to whiten the signal using the method described above.

def whitening(X):
    cov = np.cov(X)
    d, E = np.linalg.eigh(cov)
    D = np.diag(d)
    D_inv = np.sqrt(np.linalg.inv(D))
    X_whiten = np.dot(E, np.dot(D_inv, np.dot(E.T, X)))
    return X_whiten

We define a function to update the de-mixing matrix w.

def calculate_new_w(w, X):
    w_new = (X * g(np.dot(w.T, X))).mean(axis=1) - g_der(np.dot(w.T, X)).mean() * w
    w_new /= np.sqrt((w_new ** 2).sum())
    return w_new

Finally, we define the main method which calls the preprocessing functions, initializes w to some random set of values and iteratively updates w. Again, convergence can be judged by the fact that an ideal w would be orthogonal, and hence w multiplied by its transpose would be approximately equal to 1. After computing the optimal value of w for each component, we take the dot product of the resulting matrix and the signal x to get the sources.
def ica(X, iterations, tolerance=1e-5):
    X = center(X)
    X = whitening(X)
    components_nr = X.shape[0]
    W = np.zeros((components_nr, components_nr), dtype=X.dtype)
    for i in range(components_nr):
        w = np.random.rand(components_nr)
        for j in range(iterations):
            w_new = calculate_new_w(w, X)
            if i >= 1:
                w_new -= np.dot(np.dot(w_new, W[:i].T), W[:i])
            distance = np.abs(np.abs((w * w_new).sum()) - 1)
            w = w_new
            if distance < tolerance:
                break
        W[i, :] = w
    S = np.dot(W, X)
    return S

We define a function to plot and compare the original, mixed and predicted signals.

def plot_mixture_sources_predictions(X, original_sources, S):
    fig = plt.figure()
    plt.subplot(3, 1, 1)
    for x in X:
        plt.plot(x)
    plt.title("mixtures")
    plt.subplot(3, 1, 2)
    for s in original_sources:
        plt.plot(s)
    plt.title("real sources")
    plt.subplot(3, 1, 3)
    for s in S:
        plt.plot(s)
    plt.title("predicted sources")
    fig.tight_layout()
    plt.show()

For the sake of the following example, we create a method to artificially mix different source signals, rescaling any signal that falls outside the [-1, 1] range.

def mix_sources(mixtures, apply_noise=False):
    for i in range(len(mixtures)):
        max_val = np.max(mixtures[i])
        if max_val > 1 or np.min(mixtures[i]) < -1:
            mixtures[i] = mixtures[i] / (max_val / 2) - 0.5
    X = np.c_[[mix for mix in mixtures]]
    if apply_noise:
        X += 0.02 * np.random.normal(size=X.shape)
    return X

Then, we create 3 signals, each with its own distinct pattern.

n_samples = 2000
time = np.linspace(0, 8, n_samples)
s1 = np.sin(2 * time)                   # sinusoidal
s2 = np.sign(np.sin(3 * time))          # square signal
s3 = signal.sawtooth(2 * np.pi * time)  # saw tooth signal

In the next example, we compute the dot product of the matrix A and the signals to obtain a combination of all three. We then use Independent Component Analysis to separate the mixed signal into the original source signals.
X = np.c_[s1, s2, s3]
A = np.array([[1, 1, 1], [0.5, 2, 1.0], [1.5, 1.0, 2.0]])
X = np.dot(X, A.T)
X = X.T
S = ica(X, iterations=1000)
plot_mixture_sources_predictions(X, [s1, s2, s3], S)

Next, we use ICA to decompose a mixture of actual audio tracks and plot the result. If you’d like to try it yourself, you can get the audio samples here. I encourage you to actually try listening to the different audio tracks.

sampling_rate, mix1 = wavfile.read('mix1.wav')
sampling_rate, mix2 = wavfile.read('mix2.wav')
sampling_rate, source1 = wavfile.read('source1.wav')
sampling_rate, source2 = wavfile.read('source2.wav')
X = mix_sources([mix1, mix2])
S = ica(X, iterations=1000)
plot_mixture_sources_predictions(X, [source1, source2], S)
wavfile.write('out1.wav', sampling_rate, S[0])
wavfile.write('out2.wav', sampling_rate, S[1])

Sklearn

Finally, we take a look at how we could go about achieving the same result using the scikit-learn implementation of ICA.

from sklearn.decomposition import FastICA

np.random.seed(0)
n_samples = 2000
time = np.linspace(0, 8, n_samples)
s1 = np.sin(2 * time)
s2 = np.sign(np.sin(3 * time))
s3 = signal.sawtooth(2 * np.pi * time)
S = np.c_[s1, s2, s3]
S += 0.2 * np.random.normal(size=S.shape)
S /= S.std(axis=0)
A = np.array([[1, 1, 1], [0.5, 2, 1.0], [1.5, 1.0, 2.0]])
X = np.dot(S, A.T)
ica = FastICA(n_components=3)
S_ = ica.fit_transform(X)

fig = plt.figure()
models = [X, S, S_]
names = ['mixtures', 'real sources', 'predicted sources']
colors = ['red', 'blue', 'orange']
for i, (name, model) in enumerate(zip(names, models)):
    plt.subplot(4, 1, i + 1)
    plt.title(name)
    for sig, color in zip(model.T, colors):
        plt.plot(sig, color=color)
fig.tight_layout()
plt.show()

The accompanying Jupyter Notebook can be found here.
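As a closing sanity check on the whitening step described earlier, the covariance matrix of a whitened signal should come out numerically equal to the identity matrix. The snippet below restates the `center` and `whitening` functions so it runs standalone, then verifies the property on a synthetic mixed signal:

```python
import numpy as np

def center(X):
    X = np.array(X)
    return X - X.mean(axis=1, keepdims=True)

def whitening(X):
    # Eigendecomposition of the covariance matrix: X_white = E D^(-1/2) E^T X
    cov = np.cov(X)
    d, E = np.linalg.eigh(cov)
    D_inv = np.sqrt(np.linalg.inv(np.diag(d)))
    return np.dot(E, np.dot(D_inv, np.dot(E.T, X)))

# Mix three independent random sources with a full-rank matrix so the
# observed channels are correlated.
rng = np.random.default_rng(0)
S = rng.normal(size=(3, 2000))
A = np.array([[1.0, 1.0, 1.0], [0.5, 2.0, 1.0], [1.5, 1.0, 2.0]])
X = A @ S

X_white = whitening(center(X))
print(np.round(np.cov(X_white), 6))  # ~ the 3x3 identity matrix
```

Because `np.cov` was used both inside `whitening` and in the check, the result is the identity up to floating-point error, confirming the components are uncorrelated with unit variance.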
https://towardsdatascience.com/independent-component-analysis-ica-in-python-a0ef0db0955e
['Cory Maklin']
2019-08-22 14:32:53.200000+00:00
['Machine Learning', 'Technology', 'Artificial Intelligence', 'Data Science', 'Programming']
Why You Need to Start Talking To Your Anxiety Now
Why You Need to Start Talking To Your Anxiety Now

It might just change your life forever

It’s exhausting, isn’t it? I know. I’ve been there. As if the constant stress and fear and helplessness weren’t enough to deal with. As if you don’t already want to curl up under a blanket and let the world disappear. Isn’t it enough just to make it through the day? Isn’t it alright to just scrape through? To navigate the troubled waters of panic attacks and dodgy bowels, of aches and sweats and sleepless nights? Don’t you do enough? Well, no. Because on top of all that, there’s the endless, well-meaning but overbearing advice. It comes from all sides. From the doctor who wants to medicate you, the therapist who wants to explore your past, the counselor who wants you to meditate. It comes from your family, from your friends, from your colleagues. It comes in all forms. It comes in response to the questions you’ve asked. It comes, unsolicited, from questions you haven’t. It comes when it’s all that you want and when it’s the last thing you need. You seek it out too. You read the books, the articles, the how-to guides. And every time you do, you’re told something new. Try this! Try that! Breathe… Breathe slower… Breathe mindfully! It just never stops. It’s the burden of all who suffer from issues of mental health to fix themselves. To be their own advocate. To heal. And it’s exhausting.

Overloaded and Underwhelmed

Of course, each time you see a new tip, or trick, or piece of advice, you welcome it with open arms. Of course you do, you’re desperate, you’ll try anything once! But that’s the problem. Many of the strategies and techniques we hear about work. They are proven to work. But they only work if you put the time in. If you practise religiously day after day after day. And most of us don’t. It’s too hard. We’re too distracted. There’s another thing we’ve been told to try so we’ll give that a go instead. Sound familiar? You are not alone. So many of us have been there.
Many of us never leave.

It’s time to simplify

So, with that in mind, I want to make a suggestion. Let’s simplify, let’s pick a single strategy and stick with it. Honestly, I think if you can do that, it may not matter too much which one it is. But here’s what’s working for me. I’ve tried it all, because, after all, my last few years have demanded it of me — I’ve had a pretty tough ride. But I’ve had great results from one thing in particular, especially since I decided to simplify and double down.

Talking to your anxiety

First off, this is science. It’s not something I or anyone else has just made up. It’s been researched and tested. It’s widely adopted in the psychotherapy arena. So you don’t have to just take my word for it. The idea is a simple one. By talking to your anxiety in the third person, you are doing a number of things:

- Establishing distance: a key principle here is to establish that you and your anxiety are not the same thing. You can be unraveled and separated, forever. Many of us, consciously or otherwise, struggle to really believe this.

- Creating empathy: most people are much better at caring for others than themselves. We don’t give ourselves the same love that we give others when we need it. Creating a character out of your anxiety is a great way to develop that empathy and trick the brain into being a little kinder.

- Keeping perspective: although we’re generally not very good at caring for ourselves, we’re brilliant at protecting ourselves. It’s a basic evolutionary mechanism to keep ourselves alive at all costs. So when we see a threat, the fight or flight mechanism kicks in and we lose all perspective. We just want to get the hell out of there. When we think about our anxiety as another person, we’re able to assess the situation more calmly, and ultimately, avoid causing more stress than is required.

There’s a lot of neuro-chemistry going on here as well, which, if you’re interested, is worth a closer look.
But that’s not for everyone, and it’s not strictly necessary either. In fact, sometimes, understanding the mechanism too well can be counterproductive to trusting it.

What you need to do

I’ve gone through a lot of advice on ways to put this into practise. But I think it’s best to keep it simple. Here’s what I found works:

Step 1: Give it a name. This is so simple, but really helps establish that third-person effect we’re going for. A therapist of mine once suggested the use of the name Amy (as short for Amygdala, the part of the brain responsible for ’fight or flight’ and, generally stressing us out) and it kind of stuck for me. But I think a better idea is to pick a name you have a positive attachment to. Words, even when spoken internally, can have a great impact on our mental state. Negative words wind us up, positive calm us down. Pick a name that brings you a microcosm of joy each time you hear it.

Step 2: Prepare some statements. Anyone who suffers from mental health conditions knows that, when the proverbial hits the fan, rational thought goes out the window. So be prepared. Work out what you’d want to say to your anxiety in advance. Write it down. Say it out loud. Learn it by heart. Here’s one of mine you may find useful:

”Ok Amy, I hear you. I know you’re not having a great time right now. I’ve heard what you have to say. But I want you to know that I’ve got this. I’m going to look after you and keep you safe. I love you, and won’t let anything happen. Now, calm down so I can concentrate on doing just that.”

That’s it. Keep it simple. Keep it straight. Avoid using those negative words. Put yourself in command, firmly but gently.

Step 3: Practise, practise, practise. Most mental health issues are strengthened by repetition. They reflect patterns of thought and behaviour which are hard to break. You have to treat any attempt at recovery in a similar way. Talk to your anxiety, a lot.
Talk to it in the morning when you wake up, when you’re on the way to work, while you’re sat in boring meetings or in the car, talk to it until it is a habit and then talk to it some more. You can change up what you say, but keep the tone constant. Keep it firm but gentle. Keep it caring but commanding. You are in control.
https://medium.com/invisible-illness/why-you-need-to-start-talking-to-your-anxiety-now-1a02711bc332
['Daniel Skomer']
2019-08-23 19:04:33.251000+00:00
['Self Improvement', 'Anxiety', 'Life Lessons', 'Mental Health', 'Life']
The Power of Design Culture
What makes a Microsoft product Microsoft? What is that product ethos, guiding our design ideals? These are the types of existential questions I recently sat with, flying at 30,000 feet across 10 time zones. Destination: Tel Aviv, to take part in Microsoft Israel Development Center’s Design Day. An enlightening opportunity to connect our design teams and try to get at the heart of these questions. As designers, what’s our purpose, our responsibility to people all over the world who trust Microsoft? As the CVP of the Windows Design team, it’s essential to be thoughtful about the role of design in technology. That means continually collaborating with cross-company leaders and partners to cultivate the greater Microsoft Design identity. Recently, Microsoft announced an important investment in intelligent cloud and intelligent edge. It’s a thrilling opportunity to connect our teams even more and create a synchronous voice that informs our products and reaches our customers. In our continued journey as One Microsoft, our product ethos is essential, and design’s influence will matter more than ever here and across the industry. As we define that future, I want to invite you along and ask for your insights. This will be a series of conversations and discovery as we continue our always-exhilarating journey.

Design Culture, Our Guiding Light

At Design Day in Tel Aviv, I was asked to present on the history of Microsoft Design — more than a little daunting. How do you catalog all the amazing design work that’s happened over 30 years? I came back to this article by one of our designers, a collection of our history. I reached out to folks all over the company and found a trove of product work, inspired by a legacy of breakthroughs and innovations, challenges and learnings. We design experiences that take advantage of the technology around us, to serve new or unmet needs.
There’s a certain amount of tech trendiness that we can predict, but in this moment, those trends are moving quickly and it’s the role of design to see the need, adapt, and keep delivering coherent, usable experiences to customers. Think about the relatively speedy evolution from PC command line, to GUI with keyboard and mouse, to touch, to ink & pen on tablets, mobile, ubiquitous devices, large screens, mixed reality, to zero UI. We’re at the intersection of these technologies, with multiple inputs and shifting contexts. The challenge now is to simplify using ambient computing, conversational UI, and intelligence embedded on the “edge,” always connected through the cloud. This is the future we’re investing in. But what does that really mean, in our product design work at Microsoft? How do you evoke a feeling? What values do our products express? Trust? Empowerment? Inclusivity? To answer that, we usually look to brand, identity, design language — constructs that let us create a shared set of values. But the key part that influences identity is the sense of culture around how we design together. How do people at Microsoft work and behave? What do they believe in, and how do those beliefs manifest in the experiences we bring to the world? In tech, the tendency is to look to the individual — the myth of the lone designer, creator, or visionary — one person with a creative spark to drive a product design forward. And while there are inspiring creatives to be celebrated, the risk is in reinforcing the individual hero, the alpha, the power of singular genius to create their vision for the world. The principles of design thinking — empathy, learning, iteration, co-creation, collaboration, open-mindedness, exploration — don’t work in a hero culture. In practice, product design is the collective work of many disciplines coming together to create better solutions for our customers, not ourselves.
And this is why Satya has challenged us at Microsoft to be the elixir to Conway’s Law. We need to have a holistic product view that looks at the end-to-end customer experience, removing seams and boundaries along the way. Any design system we create should harness the creative power of the collective to incubate, iterate, and co-create inside Microsoft and alongside our external ecosystem, with openness. We need to bring in a broad range of perspectives, designers with different backgrounds, genders, race, expertise, and worldviews. To truly empower everyone, we need an inclusive culture that fosters inclusive products. Our product design culture needs to be that of strong beliefs loosely held — seeking to learn through each other’s work and every customer’s journey. We need trust, candor, openness, and the ability to challenge our work with respect and dignity. Space for creativity. An environment that reflects our cultural values. It all matters as we move into a new paradigm of computational design — keeping our distinct humanness as part of our Microsoft product ethos. What are your thoughts on compelling design culture? Share your feedback in the comments. Or, come talk to me on Twitter. To stay in-the-know with Microsoft Design, follow us on Dribbble, Twitter and Facebook, or join our Windows Insider program. And if you are interested in joining our team, head over to aka.ms/DesignCareers.
https://medium.com/microsoft-design/the-power-of-design-culture-d065ccd4eb4f
['Albert Shum']
2019-08-27 17:28:50.606000+00:00
['Technology', 'Design', 'UX Design', 'Inclusive Design', 'Microsoft']
How Hot is a Lightsaber?
How Hot is a Lightsaber? And other lightsaber physics Welp. Star Wars Episode IX came and went, bringing with it a stark and definitive closure to the paths of our newest heroes and heroines in the beloved saga. Though the acting, cinematography, and special effects are largely praised as some of the greatest to yet graze the franchise, the plot of the third installment of the sequel trilogy has no doubt ignited heavy debates among fans of all ages, perhaps polarizing the Star Wars fandom more than any film before it. It’s hard to peruse even a handful of movie reviews on Medium without encountering someone’s unique take on the multi-billion-dollar series that has ushered in an entirely new generation of fans. Despite the massive array of differing opinions on the eleven renowned films, there is one thing that all fans of Star Wars can agree on: lightsabers are really, really cool. Or should I have said “hot”? Extremely hot, to be precise! Hot enough, in fact, to slice right through Darth Maul in a fraction of a second, and cauterize the bisection completely. Hot enough to carve through solid metal blast doors, stronger than steel and thicker than trees. Hot enough to chop stone pillars, metal coolant pipes, and enormous boulders with as much ease as sliding a hot knife through warm butter. Pretty much anything that a lightsaber touches is immediately converted to a molten liquid or a charred crisp, and when one lightsaber clashes with another, a flash ensues that is so bright that it momentarily saturates one’s entire field of vision. So yes, in addition, lightsabers are really, really hot! Qui Gon slicing through a blast door in Star Wars Episode I: The Phantom Menace. Star Wars may have taken place in a galaxy far, far away, but we are still led to presume this galaxy does indeed exist somewhere in our universe. This means that all of the technology depicted in the many films should still abide by our known laws of science and physics. 
So, keeping all that in mind, how hot would the lightsabers depicted in the films actually need to be to do what they do? To answer this question, let’s first take a look at how the Star Wars canon describes the functionality of a lightsaber. From Wookiepedia: Lightsabers consisted of a plasma blade, powered by a kyber crystal, that was emitted from a usually metal hilt that could be shut off at will. There is some other pseudo-scientific discussion about power cells, modulation circuits, and energy gates, but this basic description tells us two important things: 1) That a lightsaber blade is made of plasma, i.e. superheated atomic material, and 2) That they have a single power source — a “kyber crystal” — which presumably fits snugly within the lightsaber’s hilt. Unless the kyber crystal is somehow harnessing power from an external source (such as Casimir or Zero-Point energy), a kyber crystal can be thought of as an extremely advanced and powerful battery, with enough juice to power the lightsaber whenever active. In order to gauge a rough order-of-magnitude estimate for the temperature of a lightsaber, we need an example of one slicing through something pretty thick that we also know the material properties of. This isn’t as easy as it sounds — probably 99% of all saber slices in the franchise are of crates, pipes, or doors with ambiguous compositions, or people’s limbs, none of which make a very good test of determining a lightsaber’s power output. However, there is one fantastic scene in Star Wars Ep. VIII where Rey slices through a stone while training with Luke Skywalker on Ahch-To, providing us a pristine example of the true power of a Star Wars lightsaber. Rey slices through a stone while training on Ahch-To; one of the few instances where a lightsaber is used in the franchise to cut something whose physical properties are known! 
In the scene, Rey is testing her skills with the Jedi weapon, swinging it within centimeters of the tall, weathered stone, only to pull it back to reset the exercise. She repeats the drill several times before finally losing her restraint and driving the lightsaber clean through the solid stone, sending the top half careening down the mountainside. Beautiful and symbolic as the scene was, it’s also a gold mine for calculating a lightsaber’s power. Not only is the stone made of a material with predictable physical traits, but we also gain some valuable insight on the size of the cross section that is sliced, as well as the amount of time that it took to completely liquify the rock within the cross section. These tidbits will turn out to be valuable assets in determining the lightsaber’s power output, and thus its temperature. Most silicate stones have melting points between 700⁰ C and 1300⁰ C. For the sake of simplicity, let’s assume that Rey’s lightsaber must heat up and liquify the material along the cross section to a temperature of 1000⁰ C (the same as the melting point of basalt). The energy required to melt a substance, however, is more than just increasing its temperature to its melting point; additional energy (called latent heat of fusion) is required to actually turn that 1000⁰ C solid material into 1000⁰ C liquid material. For example, picture an ice cube sitting on your counter. The temperature of solid ice can never exceed 0⁰ C, yet it still takes several minutes of sitting on your counter absorbing heat in your ~21⁰ C kitchen in order to melt it into its liquid state. This means that we will need two additional factors to calculate our lightsaber temperature — the specific heat of stone (as the solid stone heats up to 1000⁰ C), and the latent heat of fusion of stone (as the stone changes to a liquid state). A phase change diagram for water. 
Note that the temperature of the water does not change during the melting/freezing phase, nor during the boiling/condensing phase, due to the added energy going towards changing the state of the water instead of increasing its temperature. Clearly, heating 1 gram of rock from room temperature to a molten liquid requires less energy than doing the same for 1 kilogram of rock. In order to estimate a lightsaber’s heat output, we will need to determine the volume (and thus the mass) of rock which is being liquified. We can probably roughly estimate the cross section of Rey’s slice from the scene in the movie, but this does not determine the volume of material that is liquified. To obtain this, we must be able to estimate the height of Rey’s cut. Logic would tell us that the height of the cross section is probably about the same as the thickness of the lightsaber itself, so what is the thickness of a lightsaber blade? Lightsabers have been depicted in many thicknesses throughout the franchise. In the originals, they were extremely thin, sometimes completely disappearing when viewed edge-on. By the time of the prequels and beyond, lightsabers took on a thicker, more sword-like appearance. Because our example takes root in the sequel trilogy, I will use an estimated thickness of perhaps 3 centimeters. A diagram I made in MS Word (lol) showing the volume of material liquified by the lightsaber as a function of cross-sectional area and height of the slice, the latter of which is roughly the same as the thickness of a lightsaber’s blade itself. We now have enough information to determine how much energy it takes to liquify the cross sectional slice from Rey’s lightsaber, but we still need one more factor to determine how much power Rey’s lightsaber is radiating. If the lightsaber itself were 1000⁰ C, it could theoretically liquify the entire cross section, but it would take it an infinite amount of time to do so. 
The hotter the blade is, the faster it will be able to slice through the rock. By carefully timing the scene from the beginning of slice to the end, we can determine the amount of time Rey spends bisecting the stone, which will then allow us to reverse-calculate the amount of thermal power radiating from the lightsaber’s blade. By assuming that this power is uniformly radiated from the surface area of the lightsaber, we can finally tabulate the temperature of the weapon. For those interested in all the math: Hahahahaha damn. Boasting a power output greater than some nuclear power plants, the nearly 1-Gigawatt lightsaber would flare at a temperature hotter than the surfaces of most of the stars in our universe. To keep the lightsaber ignited for a mere 20-minute duel, the kyber crystal within the lightsaber’s hilt would have to yield an energy density of about 1,670 Gigajoules per liter — akin to the volume energy density of a plutonium fission reactor! Peaking at emission wavelengths in the UV spectrum, an object of this temperature would likely appear bright purple to the naked eye, not dissimilar to the lightsaber of Mace Windu. Is Mace Windu’s lightsaber the most visually accurate lightsaber? However, anyone holding such a lightsaber would actually burst into flames. This is because the autoignition temperature of biological material would be in the range of combustion even out to a distance of more than a football field away just from the radiative heat of the lightsaber’s blade! Now clearly Jedi and Sith aren’t bursting into flames while dueling, so something else must be at play here. My theory: the pseudo-scientific “energy gates” touched on in Wookiepedia are actually extremely advanced fields of some sort, designed to contain the superheated plasma and *most* (I’ll come back to this) of the radiation from escaping the blade while not in contact with anything solid. 
The energy from a lightsaber is only released when that field is interrupted by a solid object or another blade, and only the surface area of the lightsaber that is actually slicing or colliding is releasing energy. The rest of the saber’s energy gate field remains intact to protect the user from succumbing to third-degree burns. This theory would also explain why a lightsaber collision generates a momentary blinding flash, since a split second of Gigawatt-level power is released during each clash. Since this hypothetical field would block *most* of the radiation from getting through, the energy storage demands of the kyber crystal are actually much less stringent than what I previously estimated. It would still require Gigawatt levels of peak power in order to slice rock and metal, but since this power is only required during short bursts, energy from the kyber crystal would only be draining sporadically, instead of continuously. With the same energy density listed previously, a lightsaber could last weeks or even months of duels before a change of kyber crystal would be required — much more believable to be done offscreen than every 20 minutes. Finally (I said I would come back to it), a Jedi or Sith could perhaps adjust the energy gate to tweak the amount and frequency of allowed radiation that gets through the field in order to customize the color of their lightsaber blade, allowing for any saber color under the sun(s). So yeah, in conclusion, I guess lightsabers are pretty cool… erm, well… hot! Thanks for reading!
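For readers who want to replay the back-of-the-envelope estimate described above, here is a rough stdlib-only sketch. All numeric inputs (sliced cross-section, slice time, blade length, basalt-like material properties) are my own assumptions chosen to land near the article’s ~1 GW figure, not values from the film or the author’s worksheet:

```python
import math

# --- Assumed inputs (not from the article's actual worksheet) ---
cross_section_m2 = 1.0       # guessed area of Rey's cut through the stone, m^2
blade_thickness_m = 0.03     # ~3 cm blade -> height of the melted slab
rock_density = 2900          # kg/m^3, basalt-like
specific_heat = 840          # J/(kg*K), typical silicate rock
latent_heat_fusion = 4.0e5   # J/kg, order of magnitude for basalt
delta_T = 1000 - 20          # K, room temperature up to the ~1000 C melting point
slice_time_s = 0.1           # guessed time the blade spends passing through

# Energy to heat the slab to its melting point, plus latent heat to liquify it
mass = cross_section_m2 * blade_thickness_m * rock_density
energy = mass * (specific_heat * delta_T + latent_heat_fusion)
power = energy / slice_time_s  # watts radiated into the rock

# Treat the blade as a blackbody cylinder and invert the Stefan-Boltzmann law
blade_length = 0.9                                   # m, assumed
area = 2 * math.pi * (blade_thickness_m / 2) * blade_length
sigma = 5.670e-8                                     # W m^-2 K^-4
temperature = (power / (sigma * area)) ** 0.25       # kelvin

print(f"Power ~ {power / 1e9:.2f} GW, blade temperature ~ {temperature:.0f} K")
```

With these inputs the sketch lands at roughly a gigawatt and a blade in the ~20,000 K range, i.e. hotter than most stellar surfaces and peaking in the UV, consistent with the article’s conclusion; changing the guessed cross-section or slice time shifts the answer, which is why this is an order-of-magnitude exercise.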
https://medium.com/our-space/how-hot-is-a-lightsaber-8a5db6499fa9
['Brandon Weigel']
2020-02-25 15:49:48.075000+00:00
['Science Fiction', 'Star Wars', 'Physics', 'Space', 'Science']
Bacteria producing energy… also from your guts
Now let’s move to the ecosystem in our guts. Guts are fascinating human structures with incredible connections to the nervous system, and with relevant roles for the immune system. Unfortunately, we probably don’t have butterflies in there as the common saying suggests, but we certainly have “electrifying things”. Shewanella have been found in the intestine of some animals but not specifically in humans. Nevertheless, other exoelectrogenic bacteria — like Listeria monocytogenes — have been found in humans. To be fully clear, L. monocytogenes is not part of the natural gut ecosystem in humans. It can be introduced while eating, and can cause food poisoning. Indeed, we call this infection listeriosis. The mechanism of electricity generation in bacteria is similar to our breathing process, in the sense that they do it to remove electrons produced during metabolism and support the production of energy. This might be particularly pronounced in the gut, as this electricity-generating process is probably a “back-up system” used in low-oxygen conditions (the gut). It appears that our guts might provide sufficient scaffolding to them, much like the synthetic ones described above. It is unlikely that we will harness electrical power from the gut of human beings, unless we imagine a dystopian, Matrix-style future. Nevertheless, those bacteria allow us to study their genes and therefore to understand better which protein is responsible for exoelectrogenesis, opening up new ways to future technologies. We can design bacteria-based energy-generating technologies, or organic battery cells where those bacteria live, resolving the waste of toxic materials from currently used batteries. Lastly, it is fascinating to imagine that we literally have an entire ecosystem producing energy in our belly.
https://medium.com/illumination/bacteria-producing-energy-also-from-your-guts-4759625fa9d
['Dr. Alessandro Crimi']
2020-11-18 11:32:45.210000+00:00
['Technology', 'Energy', 'Biology', 'Electricity', 'Science']
Create a map of Budapest districts colored by income using folium in Python
A vector map of Budapest. https://www.shutterstock.com/image-vector/black-white-vector-city-map-budapest-1035519106 Ever wondered how to draw a map of less common geographical areas? And color them based on some data? This pair of tutorials shows how to build this from scratch! First, you need to construct the border of your polygons — Part 1 is about this task. After that you need to create a map, and color those polygons according to some value of your interest. That will be shown in Part 2. Part 1 of this tutorial is available here. There are many tutorials on the internet for drawing maps in Python, even more sophisticated maps like heatmaps (where heat is basically the density of points in an area) or choropleth maps (where polygons are colored according to some arbitrary value). However, these tutorials are mainly done on states of the United States. View some great ones with plot.ly or another superb one with folium. For the US, these packages have some convenience methods, but for the rest of the world, they’re of little use. Alternatively, they are based on some accidentally available json file like this one with Altair. Obviously, these are not general solutions to the problem of creating a map of some areas of your choice. I aim to give a general solution in these articles. In this second part you will: learn how to create a GeoJSON file, the basic way to plot polygons on a map understand geodataframes, by which you can add additional data to polygons create a beautiful choropleth map using folium. The code is also available on GitHub: What is a GeoJson? Python function creating geojson from a list of coordinates. A GeoJson is actually a JSON file with some predetermined structure. The function above generates a GeoJSON file from a list of points by JSON dumping a dictionary formatted like a GeoJSON file. In these dictionaries, there are only 3 keys: lat, lon and name. 
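The gist captioned “Python function creating geojson from a list of coordinates” didn’t survive extraction. As a stand-in, here is a minimal stdlib-only sketch of such a function; the function name and the exact output structure are my assumptions based on the description (point dicts with lat, lon and name keys, grouped into one polygon per name):

```python
import json

def points_to_geojson(points):
    """Build a GeoJSON FeatureCollection of polygons from a list of
    point dicts with keys 'lat', 'lon' and 'name' (the shape each
    point belongs to). The order of points within a shape matters."""
    shapes, order = {}, []
    for p in points:
        if p["name"] not in shapes:
            shapes[p["name"]] = []
            order.append(p["name"])
        # GeoJSON stores coordinates in [lon, lat] order
        shapes[p["name"]].append([p["lon"], p["lat"]])
    features = []
    for name in order:
        ring = shapes[name]
        if ring[0] != ring[-1]:
            ring.append(ring[0])  # a polygon ring must be closed
        features.append({
            "type": "Feature",
            "properties": {"name": name},
            "geometry": {"type": "Polygon", "coordinates": [ring]},
        })
    return json.dumps({"type": "FeatureCollection", "features": features})
```

The resulting string can be written to a .geojson file and fed straight to folium’s choropleth layer or loaded into a GeoDataFrame.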
Lat and lon stand for the coordinates, while name identifies the shape that the point belongs to. The order of these dictionaries in the list is also critical; you can read about this in the first part of this tutorial. What is a GeoDataFrame? GeoDataFrames are a subclass of classic `pandas` DataFrames with a special column. This special column, geometry, contains all the information that makes this DataFrame geo-aware. A GeoDataFrame object can be easily created from a GeoJSON file, you can add additional data to polygons easily, and just as easily convert it back to a GeoJSON file for visualization purposes. GeoDataFrames are easy to work with! In this snippet to the left centroids are added to each polygon. By this, we can add a nice marker to the centroid’s location depicting information about the polygon - in this case, the district. Just as easily as this, other data can be added using a plain pandas merge. Here, income tax per capita data is added to each polygon! Creating the map The full code creating the beautiful map! The creation of the map consists of 4 important steps: Creating the base map Creating the choropleth Adding markers as a FeatureGroup Enabling LayerControl Firstly, map creation is just as easy as it seems. You should provide a starting position, a starting zoom, and a tiles argument which is responsible for the basic design of your map. Be aware that folium expects coordinates in latitude, longitude order! Secondly, creating the choropleth is also easy, but it requires a neatly structured GeoJSON file. The parameters of the choropleth method speak for themselves, but you can refer to the documentation if something is not clear. This creates a layer where polygons are colored according to income tax per capita data — the higher, the greener; the lower, the yellower. Thirdly, a FeatureGroup object is created. 
This object consists of Marker objects signalling each district’s name and the respective income tax per capita value, giving the reader exact amounts in addition to the color coding from the choropleth. Lastly, LayerControl is added to the map in order to make it possible to show or hide layers, such as the choropleth or the FeatureGroup layer. Finally, the map can be saved and the created .html file opened using your browser. You can find it on my github, in the outputs folder, but here is a snippet of it: Takeaways The main takeaway here is that in creating a beautiful choropleth map — just like in any data science project — data preparation takes around 90% of the effort. We got to know OpenStreetMap data structures and the Overpass API, and solved an interesting problem in Part 1. In Part 2, using GeoDataFrames made it easier to add information to our GeoJSON files and maps. It is also clear to me that if you are creating maps in Python, folium is the way to go. As you can see, creating a map can be a one-liner, and creating choropleth maps or showing markers with html popups are really easy with folium.
https://medium.com/starschema-blog/create-a-map-of-budapest-districts-colored-by-income-using-folium-in-python-8ab0becf4491
['Mor Kapronczay']
2019-11-11 14:54:24.936000+00:00
['Python', 'Folium', 'Geospatial', 'Data Science', 'Maps']
Building Docker Images inside Kubernetes
The Problem Let’s say that we’re a new team who has been testing and building locally, using docker build, and pushing the images to our Docker registry. We want to put this into a CI/CD system, and so deploy Jenkins on a new Kubernetes cluster (using the Jenkins Kubernetes plugin). We add a step to run our tests, which works flawlessly. Then we add a step to build a Docker image, but when we run it, we find that there’s no docker daemon accessible from the Kubernetes container, which means that we can’t build docker images! Let’s investigate some ways that we can fix this problem, and start building Docker images using our CI/CD pipeline. How image building works First, it’s important to understand what goes on under the hood when you run docker build using a Dockerfile. Docker will start a container with the base image defined in the FROM directive of the Dockerfile. Docker will then execute everything inside the Dockerfile on that container, and at the end will take a snapshot of that container. That snapshot is the resulting docker image. The only thing that we need from this process is the docker image, so as long as the result is the same a tool can build the docker image in any way it wants, and the implementation doesn’t really matter to us as long as we get the same docker image in the end. Technically we don’t even need a Dockerfile, we can just run commands on a running Docker container and snapshot the results. The rest of this article is going to explore different ways to generate a Docker image. Docker out of Docker With Docker out of Docker we’re essentially connecting the Docker inside of our container to the Docker daemon that the Kubernetes worker uses. The only good reason to use this method is because it’s the easiest to set up. The downsides are that we’re potentially breaking Kubernetes scheduling since we’re running things on Kubernetes but outside of its control. 
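The “sample config for a k8s pod with a mounted docker socket” mentioned in this section was an embedded snippet that didn’t survive extraction. A minimal hypothetical sketch (pod, container, and volume names are placeholders) might look like this:

```yaml
# Hypothetical sketch: a pod whose container talks to the worker
# node's Docker daemon via a mounted socket (Docker out of Docker).
apiVersion: v1
kind: Pod
metadata:
  name: dood-builder
spec:
  containers:
  - name: my-container
    image: docker:stable
    command: ["cat"]
    tty: true
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock   # the worker node's own daemon socket
```

With the socket mounted, a docker build run inside my-container is executed by the worker node’s daemon, which is exactly the scheduling and security trade-off discussed here.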
Another downside is that this is a security vulnerability because we need to run our container as privileged and our Jenkins slaves will have access to anything that’s running on the same worker node (which could be a production service if you don’t have a separate cluster for Jenkins). Visualizing the vulnerabilities caused by mounting the worker’s docker socket Here’s a basic configuration for Docker out of Docker: A sample config for a k8s pod with a mounted docker socket Once you launch the pod, my-container will have access to the host’s docker daemon and images can now be built on it with docker build. Docker in Docker Docker in Docker means that we run a Docker container which runs its own Docker daemon, thus the name Docker in Docker. This is the approach that we’re currently using at Hootsuite. The advantages of this approach are that it’s still pretty easy to set up and it’s backwards compatible with Jenkins jobs that are already building Docker images using docker build. The security is also better than Docker out of Docker since our Pod doesn’t need to be privileged (although the Docker in Docker container does still need to be privileged) and our Jenkins slaves can’t access the other containers running on the same Kubernetes worker. Here’s a diagram that explains the inner workings of Docker in Docker Here’s a YAML snippet with a basic configuration for Docker in Docker: Once you launch the pod, my-container will have access to the docker daemon running in the dind container and images can now be built on it with docker build. Kaniko This is an open source solution created by Google, who originally created Kubernetes. It allows you to build Docker images without access to a Docker daemon. This means that you can run container builds securely, in isolation, and easily inside your Kubernetes cluster. Kaniko has a few problems at the time of writing. The first problem is that it’s hard to set up. 
You need to configure a Kubernetes pod and send it a build context, then migrate your CI/CD solution to start using this new set-up to build images. Another problem is that it’s not a very mature solution, and is missing features (such as caching). You can find information about setting up Kaniko in the README found on GitHub: https://github.com/GoogleContainerTools/kaniko A visual overview of the solution Kaniko offers
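Stepping back to the Docker-in-Docker approach: the YAML snippet it references was also lost in extraction. A rough sketch under the same assumptions (placeholder names; docker image tags as commonly published) could be:

```yaml
# Hypothetical sketch: build container plus a privileged dind sidecar
# running its own Docker daemon (Docker in Docker).
apiVersion: v1
kind: Pod
metadata:
  name: dind-builder
spec:
  containers:
  - name: my-container
    image: docker:stable
    command: ["cat"]
    tty: true
    env:
    - name: DOCKER_HOST          # point the docker client at the sidecar
      value: tcp://localhost:2375
  - name: dind
    image: docker:stable-dind    # runs its own daemon; must be privileged
    securityContext:
      privileged: true
    env:
    - name: DOCKER_TLS_CERTDIR   # on newer dind images, disable TLS so
      value: ""                  # plain tcp://localhost:2375 works
```

Only the dind sidecar is privileged; my-container itself is not, which is the security improvement over Docker out of Docker described above.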
https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25
['Vadym Martsynovskyy']
2018-08-27 21:51:46.260000+00:00
['Jenkins', 'Co Op', 'Docker', 'Kubernetes']
How to craft a kickass filtering UX
Filtering best practices Provide category-specific filters Universal filters narrow down results by common characteristics such as price, colour, or popularity. However, we should also include filters that vary according to the category. This could mean ‘Fit’ or ‘Waist Height’ under ‘Pants’ when shopping for clothes, or ‘Happy Hours’ or ‘Good for Dancing’ under ‘Nightlife’ when browsing for restaurants. One rule of thumb to uncover category-specific parameters: anything that is mentioned on the product description is worth including. Myntra offers very detailed category-specific filters. This is particularly helpful given the sheer number of items. Category-specific filters also have the advantage of being an education point for people who have less experience with the content being displayed. Allow multiple selections Decisions might not be taken based on a single data point. Whether we know exactly what we want (we‘re interested in a restaurant that has tables outside, and allows pets, and has vegetarian options), or we only know what we don’t want (we’re not interested in a restaurant with loud music), it makes sense to allow for multiple filters to be selected at once. This will allow the user to narrow down their search quickly and more comfortably. Allowing multiple selections also works better for slower connections. It is frustrating to have the intention of selecting two or more filters, only to see the page reload painstakingly slow with each selection. Use real world language Filters should be modelled after the actual behaviour of users. Think of the way someone would ask a shop assistant for a dress, or how a group of friends chooses a vacation rental home. The language used to model these choices should be the language in the filter. For instance, when going to a clothes shop, it’s unlikely we would ask for a ‘Fit & Flare’, ‘Shift’, or ‘Bodycon’ dress. Yet these are the filters on Chumbak’s online store. 
We’d be better off browsing their entire category of dresses and mentally noting what is relevant and what is not (this one is too short, this one is long-sleeved…). In this case, there are only 35 items to browse, but it wouldn’t be manageable with more results. Be redundant when displaying applied filters The convention is either showing applied filters in their original position, or in a separate ‘Applied Filters’ section. Research tells us we’re better off with both. When users want to deselect a filter, they will look in its original position first. Not finding the filter there makes for a disconcerting experience. On the other hand, keeping selected filters under a separate section gives users a quick way to check currently applied filters, and an easy way to unselect multiple filters at once. Kulture Shop conveniently adds a ‘Currently shopping by’ section in addition to keeping the filters in their original position on the dropdown. Kulture Shop displays filters in their original position as well as at the top Kulture Shop has room for improvement, though. ‘Search by mood/styles/topics’ filters are a good option, but separate filters for ‘Rebellious’ and ‘Satirical’ is going a bit too much into minutiae. And please don’t punish my lexical limitations by reloading the page every time I select one box! I want to see what’s in both without having to load the page twice on my poor internet connection. Make sure filter changes are separate events in the browser Since the content of the page changes, the perception when using filters on desktop web is that there are multiple “pages” instead of a single page with different filters. The browser behaviour should match this perception — if a user wants to go back to see a previous filter selection, clicking on the back arrow shouldn’t take them to a completely different page. Filtering and sorting patterns In UX, it always depends — filters are no exception. 
Design decisions for filters will depend on the context, on the number of parameters, on the type of user. Filtering or sorting? In theory, they are different: sorting organises the content according to a certain parameter, filtering removes it from view. However, research suggests that for the user, the outcome is roughly equivalent: both surface the most relevant content according to their criteria. During user sessions, Baymard even reports some people to use “sorting” and “filtering” interchangeably. Sorting makes sense with filters of the same “type” (for instance, ‘Price’). This allows users to go through an entire range (by sorting price low to high) rather than forcing them to choose a specific bracket (by selecting $0–20, $20–40…). Filtering is a sounder option for filters that might be mutually exclusive. Even if we can sort pants by fit, seeing jeggings on top and those early 2000s extra baggy jeans at the bottom wouldn’t be particularly helpful. It’s common to see a combination of both: a filtering sidebar paired with a sorting tool on the top. Which leads us to the next question… Sidebars and toolbars (desktop web) The sidebar filtering interface on the left is the gold standard on desktop websites. It’s easier to skim and it can accommodate a larger number of filters, since it’s not limited to the page’s width. Facebook uses a good old-fashioned sidebar in their search results page It does have its problems, though. Banner blindness may cause the sidebar to be ignored altogether, or lead the users into noticing only the sorting options at the top and thinking those are the only filtering options. Combining sorting and filtering tools in a horizontal toolbar at the top can give them more visibility. Airbnb identified the few filters people use the most and displayed them comfortably in the horizontal toolbar. Anything else is tucked under the ‘More’ dropdown, which expands to a full screen. 
However, this option might not be as suitable for cases where there are a lot of search parameters with no clear hierarchy, since it would become cumbersome to navigate through a lot of dropdowns. Filtering patterns for mobile devices Thierry Meier has already done a great job discussing the merits and the use case of each type of mobile filtering, so I’ll be brief in enumerating them: Slideover onscreen filtering — A filter view that partially overlays the search results. It’s useful to keep context of what is being filtered, but it isn’t recommended for longer sets of filters. Fullscreen onscreen filtering — Full screen filter views are convenient when displaying longer sets of filters, since they make for a more focused experience and allow for more screen real estate. Search result filtering — The results depend on the user’s input. If the query is too broad, a set of categories is offered in order to narrow down the results. If, on the other hand, the query is more specific, deliver the search results right away. Sorting — Again, opting for sorting or filtering is a design decision that should be informed by the context. Meier also points out that the more limited space on mobile devices might play in favour of sorting. 
Address different choices separately — Top-level decisions that need to be made by all users should be separated from the actual filtering. ‘Filterless filters’ While we were learning more about filters, Aiswarya Kolisetty and I also concurred that in some cases it might be appropriate to surface relevant results without resorting to “traditional” filtering. Preselection during onboarding Some services prompt their new users to make certain choices that will filter the content they will see down the line. Medium, for instance, lets users select the topics of their interest. The newsfeed will reflect this selection. It’s a reasonable choice, since Medium users’ topics of interest are not likely to radically shift every time they open the website. Surface most used parameters first Repetitive search tasks can be avoided by identifying and bringing up the most common search parameters. We can enter any destination on the Uber app. But the saved and most used destinations are conveniently kept handy, and are pushed first in the search results. Provide different views for mental filtering Different UI choices have different affordances. A map view helps users to mentally filter by location, whereas a list view lets users skim through other characteristics. Take Google Maps, for instance. Searching for cafes in the map makes sense when the geographical location is the main concern. The list view is more relevant when the decision also depends on other factors (like rating, type of food searched…). Predict the next movement AI and machine learning can be used to reduce effort by anticipating the user’s intention.
https://uxdesign.cc/crafting-a-kickass-filtering-ux-beea1798d64b
['Laura Cunha']
2018-12-19 06:51:22.288000+00:00
['Usability', 'Design', 'UX', 'UX Design', 'Filters']
Gay is Good: An Astronomer’s Pre-Stonewall Fight Against Homophobia
A League of His Own Kameny did something incredible when he petitioned the Army Map Service and Civil Service Commission courts that dismissed him on the basis of alleged homosexual behavior. He did not deny his homosexuality. Instead, he criticized the federal government's program of sexual conformity and argued that there was no connection between reliability and homosexuality. This would become the foundation of Kameny’s later messages. While continuing to petition these courts for reinstatement of clearance, Kameny lived his gay life in the 50s and 60s. He started the Washington chapter of the Mattachine Society, an organization centered around discussions of morality, homosexuality, and public perception of homosexuals. Defining Homosexuality Source: The Lavender Scare, PBS. When Jean White was writing her five-part series on homosexuals for The Washington Post in 1965, she asked Kameny an important question, also posed to psychiatrists she had interviewed: were homosexuals sick? Psychiatrists at the time were in agreement that homosexuality was deviance, a sickness. Kameny, on the other hand, stated that it was not a sickness or maladjustment, but a preference, not dissimilar to heterosexuality. When Congressman John Vernard Dowdy questioned whether the Mattachine Society promoted the idea that homosexuals could enter into an equivalent of heterosexual marriage, Kameny was forced to think on his feet. In 1965, his organization had no official stance on gay marriage. In that House District Subcommittee hearing, while under intense scrutiny from the Texan Congressman, Kameny held “If two individuals wish to enter into such a relationship it is certainly their right to do so as they choose; yes sir.” Intersectional Activism Kameny worked with lesbian activist Barbara Gittings and Gay Liberation leader Randy Wicker to present a unified front to the public, with varying levels of success. 
While the groups disagreed on protest tactics, they recognized similar struggles in the Black Freedom Movement and in trans resistance movements. In 1963, Bayard Rustin (the Black, gay, socialist pacifist credited with informing MLK’s non-violent approach) organized the March on Washington. Senator Thurmond attempted to weaken support for the march by declaring Rustin guilty of “sex perversion.” No fewer than two hundred thousand people showed up to the march, including Frank Kameny and seven other Mattachine members. From Homosexual to Gay to Good After hearing the chants of boycotting Black students in Newark in 1968, Kameny recognized the power of phrasing. The students chanted simple yet forceful messages, like “Black is beautiful.” On an August day, Kameny came up with his version: “Gay is Good.” Source: “Gay is Good” by Phillip Potter 1971, printed 2014 Digital C type print on Kodak Endura Matte © Phillip Potter This broad term was revolutionary not just in its positivity, but in the connotation of moral goodness. After so long in the shadows, in the closet, in therapy, finally a positive affirmation of goodness. Reading this in the year 2020, even with examples of queer representation in media, I felt a lightness in me. Never before had I seen or read the caveat that not only was gay OK but more than that, it’s good: it’s good to be who you are. The world indeed is a better place for it.
https://medium.com/age-of-awareness/gay-is-good-an-astronomers-pre-stonewall-fight-against-homophobia-5ff82dbf02ea
['Anahit Moumjian']
2020-08-06 20:40:26.116000+00:00
['Equality', 'Books', 'History', 'LGBTQ', 'Intersectionality']
Agile Design Project Framework
Agile Design Project Framework Why you should use Invision, JIRA, Slack, Sketch, and Zeplin.io to save you from 100-hour work weeks Modern digital design work entails a multitude of steps that must occur in synchronous harmony. Yet at most design agencies this is not the case. Workers are under constant pressure to deliver, often working 80–100-hour weeks. These designers and developers are very creative and inspired, but if my experience thus far is any indication, they are also incredibly disorganized. A lot of the time it comes down to a team not leveraging all they can out of a particular tool. And it’s not that they’re lazy; it’s that they’re uninformed. Design tools are debuting new features every month, and keeping up with them can be a task in itself. I attempt to keep up by reading Flipboard and Twitter and by listening to podcasts. Team Dynamics Understanding what the team needs from each other directly informs the tools used. Let’s break down the roles and their needs. Developers require accurate and up-to-date designs and notifications of when changes are made. Designers are the fulcrum point within the framework. They need to create designs, receive feedback, create prototypes, and plan/track all work. Design Managers are focused primarily on time. They want to make sure work is getting done and their client is happy. Stakeholders need to be involved in the process but not in a way that interrupts the overall design process.
https://uxdesign.cc/agile-design-project-framework-9c4715ecaeb1
['Matthew Voshell']
2017-01-12 02:26:23.817000+00:00
['Agile', 'Design', 'User Experience', 'Project Management']
Using Apache Airflow to Create Data Infrastructure in the Public Sector
I spent the better part of a decade orchestrating data on Wall Street during the financial crisis and became skilled in the efficient movement of financial data. What I realized towards the end of my career on Wall Street was that back in the early 90s, big banks got together and agreed to speak a common transactional language. This was called the Financial Information Exchange, or FIX, protocol. The FIX protocol is the glue that allows major financial institutions to share data quickly. These data standards, I later found out, are not unique to Wall Street but exist in almost every sphere of trade and commerce (e.g. air traffic control, cell phone number portability, etc.). More intriguingly, many of these protocols are managed by unbiased, neutral and non-profit organizations. (I write about this in detail in “Bringing Wall Street and the US Army to bear on public service delivery.”) My shift to the public sector was motivated by a personal desire to repurpose my data engineering skills towards positive impact, but also by a realization that vast swathes of public services lack standardized protocols to communicate in the digital realm. Why did you start with water reliability in California? A California state of mind and the willingness of a few bold public water utilities to try something new were what helped us get off the ground. We owe our existence to these public leaders. Patrick Atwater, part of the core ARGO crew and project manager of the CaDC, is a fifth-generation Californian and was already deeply invested in the effects of drought and water management before arriving at NYU CUSP. He also co-authored a children’s book about how the drought impacts everyday life in California. When you went to California, did you know you’d use Apache Airflow? Or when/how did you land on that technology? Absolutely not! We were just getting started with Amazon Web Services, standing up databases and data parsing infrastructure on the cloud.
We were very fortunate to get early support from the inspiring data organization Enigma, whose Chief Strategy Officer and NYC’s first Chief Analytics Officer saw what we wanted to do and supported our mission by offering Enigma’s data and infrastructure Jedis. This was critical to initially scoping the challenge ahead of us. While evaluating Enigma’s ParseKit product for our needs, we stumbled upon Apache Airflow via Maxime Beauchemin’s Medium post “The Rise of the Data Engineer.” It was then that I realized the potential of Airflow. While on Wall Street, I spent many years using Autosys, an enterprise “workload automation” product developed by Computer Associates, aka CA Technologies, that was used by many big banks at the time. What was different about Airflow, and clear from Maxime’s excellent description, was that it was developed by data engineers for data engineering (i.e. the frictionless movement of data). Maxime led data engineering at Airbnb, and if it worked for a multi-billion-dollar company, why couldn’t it work for us!? The fact that it was open source was the cherry on top. I also want to take this opportunity to thank Maxime and the entire deployment team responsible for Airflow 1.8.0, which came just in time for us. What was the biggest challenge you faced when getting the water coalition up and running? In addition to creating the necessary technology and delivering results quickly, we needed to manufacture and preserve trust across our local utility partners. This can be especially challenging when most of the value that’s being generated goes unnoticed, so the burden was on us to find creative ways to message the heavy data piping. Moreover, many of our utility partners were consciously going against political convenience in supporting our effort. Preserving goodwill and trust across this complex landscape was challenging for our 4.5-person-strong outfit (0.5 because we relied heavily on heroic volunteer researchers to help us deliver).
In meeting these challenges, we ended up creating a special community of purposeful water data warriors who are truly committed to seeing that water systems are better managed. We presented our trials and tribulations at the 2016 Bloomberg Data for Good Exchange conference in a talk titled “Transforming how Water is managed in the West.” Tell us about the current data infrastructure; what happens via Airflow? We call it the Kraken, because ETL has been this mythical beast of a problem in the public sector, as portrayed by Dave Guarino, Senior Software Engineer for Code for America. His ETL for America post really shed light on the intractability of implementing cross-platform ETL in the public sector. Apache Airflow allowed us to ingest, parse and refine water use data from any water utility in any (tabular) format using any language we wanted. In addition to using PythonOperators to handle most of our data parsing, we use BashOperators and SSHExecuteOperators (to move data between machines), PostgresOperators, SlackOperators and CheckOperators. We are also conscious of operating in an “eTl” environment where we are light on the “e” and “l”, as they do not involve time-sensitive ingestion or loading, and instead emphasize the “T”, as our value lies in parsing data from different shapes into a single “shape” to help power analytics. An overview of the Kraken Data Infrastructure to power a shared data infrastructure for water utilities. A capital E and L implies ingesting real-time streaming data and loading it into highly available, possibly NoSQL, databases. We are not there yet, and understanding this current need has helped us build consciously and deliver our core suite of analytics. These include the State efficiency explorer and Neighborhood efficiency explorer, which reflect the diversity of local conditions while evaluating statewide conservation policies and programs.
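The “many shapes into a single shape” transform at the heart of this eTl approach can be sketched in plain Python. The field names and utility formats below are hypothetical, invented only to illustrate the idea; the actual CaDC schemas are not described in the interview.

```python
# Hypothetical sketch of the "T" in eTl: two utilities report water use
# in different tabular shapes, and we normalize both into one schema.

def from_utility_a(row):
    # Utility A reports gallons directly: {"acct", "gallons", "period"}.
    return {"account_id": row["acct"],
            "usage_gal": float(row["gallons"]),
            "month": row["period"]}

def from_utility_b(row):
    # Utility B reports hundred cubic feet (CCF); 1 CCF = 748 gallons.
    return {"account_id": row["customer"],
            "usage_gal": float(row["ccf"]) * 748.0,
            "month": row["billing_month"]}

def transform(rows, parser):
    """Parse rows from any source shape into the shared schema."""
    return [parser(r) for r in rows]

a_rows = [{"acct": "A-1", "gallons": "1496", "period": "2017-06"}]
b_rows = [{"customer": "B-9", "ccf": "2", "billing_month": "2017-06"}]

unified = transform(a_rows, from_utility_a) + transform(b_rows, from_utility_b)
```

Once every record shares one shape, downstream analytics (such as the efficiency explorers) can query a single table regardless of which utility supplied the data; in a real deployment each `from_utility_*` parser would live inside its own Airflow task.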
Our data infrastructure also powers a rate modeling tool to illustrate how shifts in water rates impact customers’ bills and utilities’ revenue. This helps water utility managers navigate an environment of uncertain water futures and implement conservation and planning programs that can adapt to the on-the-ground reality. A recent and significant benefit was realized by one of our leading utility partners, Moulton Niguel Water District (MNWD), which was able to save up to $20 million in recycled water investments. The forecasting tools and the ability to access accurate data in a timely manner were key to realizing this. This Airflow-powered data infrastructure provides key planning benefits, which are mission-critical so California can adapt to changing water conditions. Anything else? Last summer, we implemented Apache Airflow with another open source data collection app called Open Street Cam to manage our Street Quality Identification-Bike (SQUID-Bike) project (a key analytics capability towards establishing the Streets Data Collaborative) and co-create, with public works and transit agencies, a shared streets data infrastructure! The SQUID project was conceived in Fall 2015 and involves collecting and integrating street surface imagery and ride quality data, applying computer vision and image processing techniques towards rapidly measuring the overall quality of a city’s streets and bike lane infrastructure. A frequent digital survey of all city streets enables cities to answer a simple yet powerful question: “Which streets in my city are deteriorating faster than others?” Answering this question, we believe, is key to preparing cities for a future that, amongst other things, includes autonomous vehicles. To end, I’ll give you a sneak peek of how we’re addressing city streets.
The SQUID Bike Data workflow, which started with riding 75 miles of NYC bike lanes and then used computer vision to automatically measure bike lane quality: Our computer vision algorithm at work detecting cracks on the street imagery we collected
https://medium.com/the-astronomer-journey/using-apache-airflow-to-create-data-infrastructure-in-the-public-sector-7be8974773df
['Varun Adibhatla']
2017-10-11 13:20:02.322000+00:00
['Data Infrastructure', 'Smart Cities', 'Public Data', 'Govops', 'Astronomer']
Introduction to Statistics (Part-III)
Why Should You Perform Statistical Hypothesis Testing? Hypothesis testing is a form of inferential statistics that allows us to draw conclusions about an entire population based on a representative sample. You gain tremendous benefits by working with a sample. In most cases, it is simply impossible to observe the entire population to understand its properties. The only alternative is to collect a random sample and then use statistics to analyze it. What is a Hypothesis Statement? If you are going to propose a hypothesis, it’s customary to write a statement. Your statement will look like this: “If I…(do this to an independent variable)…then (this will happen to the dependent variable).” For example: If I (decrease the amount of water given to herbs) then (the herbs will increase in size). If I (give patients counseling in addition to medication) then (their overall depression scale will decrease). If I (give exams at noon instead of 7) then (student test scores will improve). If I (look in this certain location) then (I am more likely to find new species). Hypothesis Testing Hypothesis testing in statistics is a way for you to test the results of a survey or experiment to see if you have meaningful results. You’re basically testing whether your results are valid by figuring out the odds that your results have happened by chance. If your results may have happened by chance, the experiment won’t be repeatable and so has little use. Null Hypothesis The null hypothesis, denoted by H0, assumes that there is no difference between two sets of values. The null hypothesis states there is no relationship between the measured phenomenon (the dependent variable) and the independent variable. Alternative Hypothesis The inverse of the null hypothesis.
States that there is a statistical significance between two variables. Holds true if the null hypothesis is rejected. Usually what the researcher thinks is true and is testing. Denoted by H1 or Ha. Null hypothesis: If one plant is fed lemonade for one month and another is fed plain water, there will be no difference in growth between the two plants. Alternative hypothesis: If one plant is fed lemonade for one month and another is fed plain water, the plant that is fed lemonade will grow more than the plant that is fed plain water. Definition of One-tailed Test A one-tailed test refers to a significance test in which the region of rejection appears on one end of the sampling distribution. It represents that the estimated test parameter is greater or less than the critical value. When the sample tested falls in the region of rejection, i.e. on either the left or the right side as the case may be, it leads to the acceptance of the alternative hypothesis rather than the null hypothesis. It is primarily applied in the chi-square distribution, which ascertains goodness of fit. In this statistical hypothesis test, all of the critical region, corresponding to α, is placed in one of the two tails. The one-tailed test can be: Left-tailed test: When the population parameter is believed to be lower than the assumed one, the hypothesis test carried out is the left-tailed test. Right-tailed test: When the population parameter is supposed to be greater than the assumed one, the statistical test conducted is a right-tailed test. Definition of Two-tailed Test The two-tailed test is described as a hypothesis test in which the region of rejection, or the critical area, is on both ends of the normal distribution. It determines whether the sample tested falls within or outside a certain range of values. Therefore, an alternative hypothesis is accepted in place of the null hypothesis if the calculated value falls in either of the two tails of the probability distribution.
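The left-, right-, and two-tailed variants differ only in where the rejection region sits, which shows up directly in how the p-value is computed. A minimal sketch, assuming a one-sample z-test with known population standard deviation and invented illustrative numbers (not from the article), using only the Python standard library:

```python
from statistics import NormalDist

def z_test(sample_mean, pop_mean, pop_sd, n, tail="two"):
    """One-sample z-test; tail is 'left', 'right', or 'two'."""
    z = (sample_mean - pop_mean) / (pop_sd / n ** 0.5)
    cdf = NormalDist().cdf(z)
    if tail == "left":
        return z, cdf                                 # P(Z <= z)
    if tail == "right":
        return z, 1 - cdf                             # P(Z >= z)
    return z, 2 * (1 - NormalDist().cdf(abs(z)))      # both tails

# H0: mean = 100. Sample of n = 25 with mean 104, known sd 10.
z, p_two = z_test(104, 100, 10, 25, tail="two")
_, p_right = z_test(104, 100, 10, 25, tail="right")
```

With these numbers z = 2.0, so the right-tailed p-value is about 0.023 and the two-tailed p-value about 0.046: the same sample can be significant at α = 0.05 in both framings, but the two-tailed test is the stricter of the two because α is split across both tails.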
In this test, α is bifurcated into two equal parts, placing half on each side, i.e. it considers the possibility of both positive and negative effects. It is performed to see whether the estimated parameter is either above or below the assumed parameter, so the extreme values work as evidence against the null hypothesis. Confusion Matrix In predictive analytics, a table of confusion (sometimes also called a confusion matrix) is a table with two rows and two columns that reports the number of false positives, false negatives, true positives, and true negatives. This allows more detailed analysis than the mere proportion of correct classifications (accuracy). Accuracy will yield misleading results if the data set is unbalanced; that is, when the numbers of observations in different classes vary greatly. Given a sample of 13 pictures, 8 of cats and 5 of dogs, where cats belong to class 1 and dogs belong to class 0, actual = [1,1,1,1,1,1,1,1,0,0,0,0,0], assume that a classifier that distinguishes between cats and dogs is trained, and we take the 13 pictures and run them through the classifier. The classifier makes 8 accurate predictions and misses 5: 3 cats wrongly predicted as dogs (first 3 predictions) and 2 dogs wrongly predicted as cats (last 2 predictions). prediction = [0,0,0,1,1,1,1,1,0,0,0,1,1] With these two labelled sets (actual and predictions) we can create a confusion matrix that will summarize the results of testing the classifier: In this confusion matrix, of the 8 cat pictures, the system judged that 3 were dogs, and of the 5 dog pictures, it predicted that 2 were cats. All correct predictions are located on the diagonal of the table (highlighted in bold), so it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal. In abstract terms, the confusion matrix is as follows: P = Positive; N = Negative; TP = True Positive; FP = False Positive; TN = True Negative; FN = False Negative.
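The cat/dog example above can be reproduced in a few lines of Python, using the exact label lists from the text; the resulting cell counts match the figures described (5 true positives, 3 false negatives, 3 true negatives, 2 false positives):

```python
# Cats are the positive class (1), dogs the negative class (0).
actual     = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
prediction = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1]

# Count each cell of the 2x2 confusion matrix.
tp = sum(1 for a, p in zip(actual, prediction) if a == 1 and p == 1)
fn = sum(1 for a, p in zip(actual, prediction) if a == 1 and p == 0)
tn = sum(1 for a, p in zip(actual, prediction) if a == 0 and p == 0)
fp = sum(1 for a, p in zip(actual, prediction) if a == 0 and p == 1)

# Accuracy is the diagonal (TP + TN) over all predictions: 8 of 13.
accuracy = (tp + tn) / len(actual)
```

Here `tp + tn` is the diagonal of the table, and `fn + fp` are the off-diagonal errors, which is why the matrix is so easy to inspect visually.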
Happy learning :)
https://medium.com/analytics-vidhya/introduction-to-statistics-part-iii-84039fd43a1e
['Antony Christopher']
2020-12-03 16:30:53.643000+00:00
['Confusion Matrix', 'Statistics', 'Analysis', 'Hypothesis Testing']
Dating while Depressed
When we first start dating someone new, we all try to put our best foot forward and present the most attractive version of ourselves, while still staying authentic to who we are. It is natural to try to seem as smart, funny, and nice as possible early on in dating someone. As someone with depression, anxiety, and an eating disorder (a real triple threat of mental health problems), the question of how to connect with a potential partner but also remain true to ourselves is even more complex. Dating can be challenging for everyone, but here I seek to illuminate the hurdles people with mental health issues face and the best ways to handle them. Obviously, I’m not going to convey all of my trauma on the first date, but to hide these parts of myself for extended periods of time while dating a man can feel deceitful. At what point do I disclose these parts of myself to him? I am not ashamed of the state of my mental health, but I only want to reveal these vulnerabilities to someone I believe I can trust. Not every man I meet on Bumble or Hinge deserves to hear all of my story. I still do not navigate this situation perfectly, but I tend to enter into this conversation when the time comes to “define the relationship.” After a few months, I will eventually bring up the idea of us being exclusive; in concert with this possibility, I explain what my life can be like: one of joy and love and laughter, but also one of panic attacks and days of doing nothing but sleeping. While I am in therapy and on medication, I tell him, that does not mean I will be OK 100% of the time. Dating me exclusively will mean dating someone who occasionally will shut off, who has a deeply complex relationship with her body, and who has deep-seated issues with men. I tend to be a blunt communicator; I can be honest to a fault. If my potential boyfriend cannot handle even hearing about these issues, I would rather know before I enter into a committed relationship with him.
Looking for the right partner to walk beside you can be a long journey. Even if my now boyfriend assures me that he still wants to be with me and does not care about these problems, my relationships in the past have been met with a variety of results. Some men get utterly uncomfortable when I show any depressive symptoms; they cannot handle any sort of emotional display of sadness. I have had friends in the past who have defended these boyfriends, saying, “He’s a guy, come on. They don’t deal with emotion well. Give him a break.” While I recognize that men’s upbringings and the current harmful standards of masculinity might influence their ability to express or deal with emotion, their gender is not a get-out-of-jail-free card for being a supportive, caring partner. If a man cannot deal with the fact that if I am depressed on a Saturday night, that might mean staying in and watching a movie instead of going on a big night out, what does that say about him? If they cannot handle a slight inconvenience, how will they react in our relationship if something terrible, like a job loss or life-threatening accident, happens? When I first started dating regularly in my 20s, I accommodated every aspect of myself to make my partner more comfortable. This often meant hiding my symptoms or engaging in behaviors that my partner wanted, but that would ultimately aggravate my symptoms. Eventually, I learned the difficult lesson that in my relationships, I need to advocate for myself and my mental health. It is perfectly fine for me to set boundaries to be kind to myself; it can be as simple as communicating that I need to go to bed early every night and avoid late nights out, since sleep deprivation worsens my anxiety. I, of course, do not expect my romantic partners to cure my mental illness or act as my therapist. Rather, I expect my partner to be an empathetic, understanding, and supportive person.
If I’m feeling depressed, sometimes I just need him to give me a hug or talk to me about funny work stories from his day to take my mind off things. Of course, I do not use my mental illness as a trump card; it is never, “we have to do things my way because I’m sick.” My approach is to recognize and honor my needs, and communicate them to my partner. Hopefully he will respect them, and I will do the same with his needs. There were many men who could not handle any aspect of dating a complex person. If I cried, some men shut down entirely; they were not able to leave my apartment fast enough. If I told them that we needed to cook at home more because eating out frequently aggravated my eating disorder, they became frustrated with me. People always say relationships are about compromise, but in the past few years, I have met so many men who are unwilling to compromise about the smallest of things. Apparently making dinner instead of grabbing take-out can be too great a burden. In some ways, championing my mental health when it comes to dating has made things easier. If a man I am dating freaks out when I mention I suffer from depression, then that’s an easy goodbye that saves me a good amount of time and energy. Revealing this damaged, vulnerable side of myself can be hard; it is not my job to explain how mental health and therapy work to a man who knows nothing about them. It also can mean reopening past wounds and expending a great deal of emotional bandwidth. Yet, having this conversation can also be freeing; there is something liberating in presenting your entire self to another human being and asking them to accept you. If they reject me, I have the perspective to know that it is due to a problem with them, not with me. I know what patterns, routines, and behaviors are important in my life to maintain and improve my mental health; I want a partner who also recognizes these things as important.
In self-disclosing to a potential partner, the benefits always have outweighed the cost. While people with mental health problems can suffer a variety of negative symptoms, we also are supportive, generous, funny, and kind partners. We deserve romantic partners who accept all facets of us. I will keep advocating for myself and what is best for my mental health until I find a compassionate, understanding partner.
https://andinomc.medium.com/dating-while-depressed-f62ae542abc3
['Mary Andino']
2020-12-04 23:57:18.942000+00:00
['Depression', 'Relationships', 'Mental Health', 'Dating', 'Anxiety']
Come to Edith
Come to the pale blue river that runs through her crevices Come to the paper-thin skin laced with spicules all over She waits for you with a faint smile her onyx heart wrapped in green gauze Her trellises bear dead climbers — from a time, she was just her eyes from a time, he was just her mirage “Come to Edith”, a voice covered in moss travels in circles, around her swollen throat She is planted in rainwater, since then her tears are nothing but mist — She is looking for you Grace, her child — in the forest of delusion, she has become ~ From the series Edith. Previous story —
https://medium.com/poetry-place/come-to-edith-af764100aeea
['Shringi Kumari']
2019-08-18 12:01:01.151000+00:00
['Storytelling', 'Poetry', 'Womanhood', 'Self', 'Life']
Change
Email Refrigerator :: 05 “Then and Now” (Berlin 1938/2008) Photograph by Peter Perry Hey friend, Earlier this month, I took a trip to my childhood home outside Chicago. Sleeping in my childhood bedroom alongside my wife and daughter was a surreal experience. I lay in bed (ok, the pull-out couch now in that room) thinking to myself that so much has changed since I called this bedroom mine. But then something else happened. The next morning at the breakfast table, I got a flashback to being 10. My dad was reading the paper, my brother was waking up an hour later than everyone else. My mom was pouring her coffee and asking us all questions too intense for that time of morning. Nothing has changed in 20 years. We’re all the same. I’m curious about that paradox. So this month, I’m exploring the spectrum of change. From “Everything Changes” to “Nothing Changes.” Happy snacking. Frog Lifecycle Animation by Anna Taberko I. Everything Changes Recently, I had coffee with a friend I hadn’t seen in a while. Although we’ve drifted apart over the last few years, we still make time to connect. This time, I thought it was worth addressing our distance. As we started to think about the history of our relationship, we agreed it wasn’t possible to simply go back. Our definitions of “fun” have changed. Our day-to-day lives now include our marriages, and we don’t require the same things from our friends that we did 15 years ago. Our experiences and choices have changed who we are. We cannot make our relationship what it was without undoing all the change that makes us who we are today. We cannot make our relationship what it was without acknowledging the choices that have defined our divergent paths. We cannot make our relationship great again. (We cannot make anything great again. It was what it was, and will be something else.) Rather than try to recreate some version of our past, we can just accept that what WAS, happened. That retreat you went on that changed your life?
That was great. And it’s not going to happen again, no matter how many reunions you plan. 6 summers of camp that shaped who you are? That was great too. And you don’t have to work there or send your kids there or still wear that bracelet for that impact to be real. That old relationship that brought meaning into your life but has since drifted? It was amazing. And what it was doesn’t also have to dictate what it will be. There’s a synchronicity that happens between people. Timing matters. Sometimes it’s there but not always. When it is– in that retreat, in that summer, in that relationship, meal, marriage, or even in that moment… Take advantage. Let it happen. Be in it. And be grateful. Because it’s not permanent. Nothing is. “All of 2010” Photograph Montage by Erik So II. Making Change Humans are terrible at understanding time. At Caveday, we teach that there are two ways of looking at time– chronos, which is the hours-minutes-seconds with which we’re used to measuring time. Chronos is quantitative. The 45 excruciating minutes until the next rest stop on a road trip– can I hold it that long? The 12 more minutes I have in my workout, pushing my limit. The 3 hungry minutes I have to wait for my coworker’s lunch to heat up before I can use the office microwave. Counting. Every. Second. And then there’s kairos, which is the experience of being IN time. Moments. Kairos is qualitative. It’s that moment of looking down at Golda in her crib first thing in the morning and she looks back at me and smiles. The time I spend each morning writing and clearing out my brain. Painting and playing music. A dinner party with my closest friends. I’m not looking at the clock. It doesn’t matter how long. I’m lost in time, experiencing without counting. We understand the passage of time through kairos, not chronos. Over long periods, we don’t measure time, we only experience time. It might feel like a little or a lot of time has passed.
And that depends on the amount of change that happens. The more change, the more time it feels has passed. But this is not just something we experience passively. We can create change for ourselves in order to create distance. After a breakup, it’s totally normal to cut or dye hair, start playing guitar, or hang out with new friends. After a big career rejection, a lot of people will learn a new skill, take new headshots, or switch industries. After a death, it’s common to clean out the house, rearrange furniture, or even move. We create change to protect ourselves and feel like more time has passed. It’s particularly relevant to think about when and where we want to actively create distance in our lives. Career transition. Post-breakup. Crisis of purpose. Sometimes we need distance, and we can create that feeling by creating change. The more we change, the more we’re distancing ourselves not only from a person or event, but also from our former self. And transformation can only happen when we’re ready to distance ourselves from that former version of ourselves. Everything Is a Remix III. Nothing Changes Betty White has a career in television almost as long as the medium itself. Her first credit appears 80 years ago, in 1939. Her big break came on The Mary Tyler Moore Show, where she played the cheerful yet spicy homemaker Sue Ann Nivens. 10 years later, she played the naive and long-winded widow Rose Nylund in The Golden Girls. 20 years later, she’s Elka in Hot in Cleveland– an outspoken and caring octogenarian, who has been known to smell like Snoop Dogg. Although she may be described differently over her career, each character is essentially the same. “Cheerful” at 35 might look like naive at 60 or easygoing pothead at 75. “Spicy” might mean sexually voracious at 35. Snarky at 60. And stubborn at 75. She’s not changing, but the context and the age make a difference in how we perceive her.
You don’t have to be an actor to see the same pattern in our own lives. We are born with a set personality that might be colored by our age or who we’re with. But largely, we’re the same people our entire lives. I have always been sentimental. As a kid, it meant needing a souvenir of everything I did–ticket stubs and photo albums. As a teen, I started journaling. As a twenty-something, I would make lists and art projects out of my relationships. In my 30s it means documenting my life in more serious ways, including the Email Refrigerator: documenting and sharing my thoughts here. Can you map the traits and personality you’ve had through your life? How does your 8-year-old personality show up today? How are you like your teenage self? When we can acknowledge and accept who we are, we can stop trying to do the Sisyphean work of changing ourselves, and just resolve to be ourselves.

IV. Never Change

Signing yearbooks in middle school and high school, it was such a common thing to write “never change.” I know it was always intended to be loving, with the implied “you’re perfect the way you are.” But as an upholder and a pleaser, I would sometimes feel accountable to that. To actually, never change. As if that were even possible. Instead, I’m going to sign off here with the opposite message. CHANGE FOREVER! “Change” is letting the world guide you, being influenced by people in your life, allowing learning to open and change your mind. I hope you got something new from this visit to the refrigerator. Change forever. Jake
https://medium.com/email-refrigerator/change-1f15b6c9c679
['Jake Kahana']
2020-12-27 18:40:56.233000+00:00
['Self-awareness', 'Evolution', 'Change Management', 'Change', 'Chaos Monkey']
10 questions that reveal whether your team trusts you as a leader
Trust makes your job as a leader easier in just about every way possible. Your teams make decisions faster (and revisit them less often). People pro-actively admit to and learn from mistakes instead of scrambling to hide them. It’s easy enough to know your own level of trust in the people you lead. But gauging how they feel about you is a different story. You can’t ask point-blank — not if you hope to get reliable, authentic responses. Fortunately, you don’t have to. Ask yourself these proxy questions instead.

1. Do people say “no” to you?

If everyone always says “yes,” you’ve got a high-compliance environment — which is not the same as high-trust. People who trust you will offer respectful dissent. They’ll engage in discussion and, ultimately, rally behind decisions (even when they disagree).

2. Do you use high-trust language?

This includes saying “we” instead of “I” or “you” when talking about successes or failures. Does the company generally prefer the term “team members” or “teammates” over “employees” or “co-workers”? “Leaders” over “managers”? The choice of words provides subtle clues as to how your staff view their place in the company — and yours.

3. Are failures and lessons learned publicized across the company?

Tolerance for honest mistakes encourages creative thinking and calculated risks. When you share your failures and what you learned from them, it sends a strong signal to others that the tolerance is real. The last thing you need is people covering up their mistakes and (worse) unknowingly repeating someone else’s.

4. Do people live the company values?

I don’t just mean the executives and other taste-makers around the company (although it’s obviously extra-important that they embody the values). If you see examples of people from front-line customer service reps to accounting to your top brass using your values to guide decisions and behavior, that’s a sign you’re working with true believers.
P.S.: If they’re not living the values, consider whether you have values worth living.

5. Is information open and easy to find?

Transparency demonstrates trust in your people, which pays dividends of their trust in you. Opening up sounds scary at first, but in truth, there’s very little that needs to be locked down (salary and other personal data come to mind). At Atlassian, even information on revenue, expenditures, and customer count is discoverable by anyone on staff who cares to look for it. And in 15 years, we’ve yet to experience any leaks. Of course, you don’t have to take it to that extreme. Making network drives, shareable documents, wiki pages, and chat rooms “open by default” goes a long way.

6. Does everyone know what the business is focusing on and how it’s performing?

Being cagey about focus areas and strategies not only encourages distrust from your rank-and-file, it’s downright foolish. Nobody can think about how their work contributes to the bigger picture if they don’t know what that bigger picture is. Similarly, sharing performance data only with a blessed few guarantees everyone else will be making decisions about their work based on a combination of rumor and assumptions — most of which will be false.

7. Do team members share company news on their social channels?

When a major publication mentions your company in a favorable light, people working in a high-trust environment will be sharing the heck out of it — and not just the PR team. Social sharing is a sign that team members are confident in the direction the company is headed, proud of their contribution, and want to incorporate that into their personal brand. On average, you might see 7–10% of team members sharing company content on social. In uber-engaged workplaces, it might rise as high as 30%.

8. Is it easy to give and invite feedback at any time?

Professional development and personal growth thrive on feedback throughout the year (not just at annual review time).
When trust is high, you’ll notice a steady flow of high-fives and respectful-but-challenging questions coming your way. You’ll also notice feedback being given informally amongst teammates. Peer reviews of work in progress, design sparring, code reviews, project retrospectives… if these are baked into your business-as-usual, that’s a Good Sign™.

9. Do you crowdsource strategy and major initiatives?

Too often, the C-suite gets seduced by the notion that they’re the only ones with the vision (and in some cases, intellect) to know where to steer the company. But a new breed of executives understands that the people on the front lines working with customers and/or making the product have the perfect vantage point from which to see opportunities. These execs make a habit of soliciting ideas for major initiatives and strategic focus areas. By asking for input, you demonstrate confidence in your staff. In return, staff tend to rally behind whichever ideas are selected for action, even if they would’ve chosen differently.

10. Is it easy to connect with you?

In companies with high-trust cultures, top brass typically have open-door policies. Anyone in the company can schedule a few minutes to talk about product direction, career development, internal operations, etc.

If you’ve answered “yes” to all 10 (and done so with a straight face), congratulations! You’ve got the key ingredients for leading a company that is collaborative, creative, and generally crushing it. If fewer than three questions got an affirmative, you’ve got some work to do — but take heart. Building a culture of trust takes time, but it starts with you. Even if you’re not in a position to influence big things like opening up tools and information, there are lots of things you can do as a leader. Remember: leadership is personal, not positional. Anyone from the CEO down to the intern who started last week can lead by example. I’ll leave you with 17 ways to be the change you seek.
https://medium.com/smells-like-team-spirit/10-ways-to-tell-if-your-team-trusts-you-4c265144d869
['Dom Price']
2018-01-24 16:42:27.365000+00:00
['Leadership', 'Company Culture', 'Work', 'Trust', 'Teamwork']
Android Data-Binding Made Simple
Learning Android Development

Android Data-Binding Made Simple

Assigning Data and Getting the UI Updated Automatically

Photo by Tamanna Rumee on Unsplash

Data Binding in Android has been around since 2015. However, due to its boilerplate, it is not as commonly used as one might expect. Nonetheless, it is still good to know how it works. To see how Data Binding differs from the conventional approach, let’s first look at the conventional way.

Conventional View Update

We have an XML view:

<TextView
    android:id="@+id/text_id"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

And in the code, we’ll have to access text_id and update it:

val myTextModal = getTextFromLogic()
val textView = findViewById(R.id.text_id)
textView.text = myTextModal

Note that here myTextModal is instantiated in the code instead of the XML.

Data Binding View Update

To enable Data Binding, first we’ll need to turn it on in the app’s build.gradle file:

buildFeatures {
    dataBinding = true
}

The XML side

Instead of instantiating the modal in the code, the modal is instantiated in the XML:

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <variable name="myTextModal" type="String" />
    </data>
    <TextView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:text="@{myTextModal}" />
</layout>

To do so, we’ll have to:
- Define an outer layout element over the entire XML code.
- Wrap the modal in a data element.
- Update the TextView by referring to the modal using @{myTextModal}.

The Code Side

Here, the code is simplified: it no longer needs to access the View item. But it does need to access the modal in the XML, and it inflates the layout differently.

private lateinit var binding: ActivityMainBinding
override fun onCreate(savedInstanceState: Bundle?)
{
    super.onCreate(savedInstanceState)
    binding = DataBindingUtil.setContentView(this, R.layout.activity_main)
    binding.myTextModal = getTextFromLogic()
}

In short, the modal moves into the XML, which means the code no longer needs to access the View item.

XML Communicating Back to Code

Sometimes, besides assigning variables from the code to the XML, we need the XML to respond back. To demonstrate this, I’m reusing the same design here. The design is to provide some image URL to be loaded into the ImageView.

In the XML

Here we define the usual modal and the View:

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <variable name="imageUrl" type="com.elyeproj.demoglide.ImageUrl" />
    </data>
    <ImageView
        android:id="@+id/my_image_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:src="@{imageUrl}" />
</layout>

Notice we are sending the ImageUrl over to the ImageView. So what is ImageUrl? It is just:

data class ImageUrl(
    val fastLoadUrl: String,
    val fullImageUrl: String,
    val listener: MyImageRequestListener.Callback
)

So how does that get an image loaded?

In the code

Before we look into that, let’s look at how it is set up:

class MainActivity : AppCompatActivity(), MyImageRequestListener.Callback {
    private lateinit var binding: ActivityMainBinding

    override fun onFailure(message: String?) {
        // Do something on failure
    }

    override fun onSuccess(dataSource: String) {
        // Do something on success
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        binding = DataBindingUtil.setContentView(this, R.layout.activity_main)
        binding.imageUrl = ImageUrl(
            "https://theFastLoadUrl~",
            "https://theFullImageUrl~",
            this
        )
    }
}

That’s the part that sends ImageUrl over to the XML.
Upon receiving the ImageUrl, the XML will be able to connect back to load the image using the code below:

@BindingAdapter("android:src")
fun setImageUrl(view: ImageView, imageUrl: ImageUrl?) {
    imageUrl?.let {
        val requestOption = RequestOptions()
            .placeholder(R.drawable.placeholder).centerCrop()
        Glide.with(view.context).load(it.fullImageUrl)
            .transition(DrawableTransitionOptions.withCrossFade())
            .thumbnail(Glide.with(view.context)
                .load(it.fastLoadUrl)
                .apply(requestOption))
            .apply(requestOption)
            .listener(MyImageRequestListener(it.listener))
            .into(view)
    }
}

Notice the @BindingAdapter on android:src: when the ImageView gets data in android:src, it will call this function and provide the view for use. Do note that setImageUrl needs to be a static function or a global function accessible by the XML directly. You can get the code here.
https://medium.com/mobile-app-development-publication/android-data-binding-made-simple-e857ca70f92c
[]
2020-12-19 06:22:19.782000+00:00
['Mobile App Development', 'App Development', 'Android', 'AndroidDev', 'Android App Development']
Flutter Layout Cheat Sheet
IntrinsicWidth and IntrinsicHeight

Want all the widgets inside Row or Column to be as tall/wide as the tallest/widest widget? Search no more! In case you have this kind of layout:

Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(title: Text('IntrinsicWidth')),
    body: Center(
      child: Column(
        children: <Widget>[
          RaisedButton(
            onPressed: () {},
            child: Text('Short'),
          ),
          RaisedButton(
            onPressed: () {},
            child: Text('A bit Longer'),
          ),
          RaisedButton(
            onPressed: () {},
            child: Text('The Longest text button'),
          ),
        ],
      ),
    ),
  );
}

But you would like all buttons to be as wide as the widest, just use IntrinsicWidth:

Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(title: Text('IntrinsicWidth')),
    body: Center(
      child: IntrinsicWidth(
        child: Column(
          crossAxisAlignment: CrossAxisAlignment.stretch,
          children: <Widget>[
            RaisedButton(
              onPressed: () {},
              child: Text('Short'),
            ),
            RaisedButton(
              onPressed: () {},
              child: Text('A bit Longer'),
            ),
            RaisedButton(
              onPressed: () {},
              child: Text('The Longest text button'),
            ),
          ],
        ),
      ),
    ),
  );
}
https://medium.com/flutter-community/flutter-layout-cheat-sheet-5363348d037e
['Tomek Polański']
2020-05-13 19:53:40.360000+00:00
['Mobile App Development', 'Android App Development', 'AndroidDev', 'Flutter', 'iOS App Development']
In Corona Benignitas [Poem]
Nature, it is said, is red, in canines and toughened nails, in microbe and protein crowns, in meteors and killer hails. . Nature, it is said, heals herself, like a bedridden goddess riddled with sores, slowly turning and swatting the squatting sapiens upon her. . But she (or he) is dead, (Nietzsche should have said) Gaia has no clue, She has no bad or good intent for you. . Unsent we sit upon this rock and watch the unconsented tyrant’s coronation and hope against, and ruminate on, the mathematics of our decimation. . Here we sit upon this rock squeezed together by the fires of suns long stopped. Time compresses us to think again of our beginnings, and journey’s endings, and all our in-between rememberings. . Here we huddle upon this rock and remember the kind of creatures we ought. The kind forged by math, and bound at hearths. So we, the undersigned, are unresigned, and do hereby consent, to warm and brighten each other’s paths. . Human Nature it will be said, is in rejuvenation, in sit-down, do-nowt, protestations. With good words in notes and songs, we will usher this idiot along. . And with these teeth, and with these claws, We shall dis-infect, and disconnect this little Caesar from his neck! And then warmly turn, one to the other, in a bloody murder of applause!
https://medium.com/daily-connect/in-corona-benignitas-poem-6eaa1d5f462d
['Rick Johnston']
2020-05-13 14:50:01.366000+00:00
['Poetry', 'Nature', 'Poem', 'Poetry On Medium', 'Coronavirus']
Speaking Up About Assault is Not a ‘Trend’ — It is a Chain Reaction
Speaking Up About Assault is Not a ‘Trend’ — It is a Chain Reaction

Each woman that tells her story gives strength to others.

When the #MeToo movement happened, I spent days glued to my computer, reading one heartwrenching post after another. I didn’t eat and barely slept. It felt like I was being carried away by a devastating tsunami, and every story I read felt like an echo of pain I had felt and was experiencing all over again. It hurt like hell, but it felt good at the same time because finally, I was not alone. Other people had lived these terrible things and survived them and were just as angry about them. It set something free inside of me. If those women hadn’t spoken up in their millions, I would never have had the courage to share my experiences: abusers’ attempts at the annihilation of my being. And I certainly wouldn’t have had the strength to deal with the horrible reactions you get when you do speak up. Mostly the reactions are as cliché as can be; they come straight from the How-To guide to being a sexist idiot. They cast doubt on what happened, they tell you it was your fault, or they tell you it isn’t that bad and that you should just suck it up. They mansplain your feelings or attempt to justify the assailant’s behaviour. Or they say you are attention-seeking and just following the “trend” of telling your rape story. These comments all share the same aim: to prevent women from speaking up by creating a hostile environment when they do. They are written by men who are unhappy to hear about the assault, who would rather it happen in silence because they don’t care enough about women to want things to change, and they don’t see us as human enough to empathise. I’d like to unpack this last argument — the idea that speaking up about assault is a ‘trend’, that we are doing it to be cool and to get attention. This reaction is upsetting in so many ways, and it is also fundamentally false.
As you’ll have surmised already from this article, the reactions to our stories aren’t exactly pleasant. You’re vulnerable enough when you are sharing such experiences, and to have them met with hostility and victim-blaming hurts. And to be honest, even if we were attention-seeking, there would be nothing wrong with that. Especially when what you are drawing attention to is a societal problem that is causing trauma for millions of survivors. That sheer number is why it may feel like a ‘trend’ — it isn’t that people are jumping on the bandwagon, there are just that many cases. There are so many stories to be told. What should be surprising us isn’t how many of these stories we are hearing, but what that says about the number of cases left untold. The reason that voices are speaking up now isn’t that it’s ‘in fashion’, but that each woman who tells her story makes it easier for those who come after her. When we read such stories we know that we are allowed to speak up. We know that there are words for our experiences and that they are worthy of being told. It shows that no experiences of assault are black and white, contrary to what we have learned, and so it makes it easier to see our grey-shrouded stories for the assault that they are. We may also feel a duty of reciprocity. If someone else is brave enough to tell their story to help others, it can make us feel like we too can make that gesture for our sisters. Speaking up about rape stories isn’t a trend — it’s a chain reaction, one woman empowering the next. Giving us strength — strength enough to deal with the vile reactions. It’s worth it all, just for one comment from a woman who has had a similar experience, who has been moved by our words.
https://medium.com/an-injustice/speaking-up-about-assault-is-not-a-trend-it-is-a-chain-reaction-73432d62c027
['Stark Raving']
2020-12-03 23:12:28.991000+00:00
['Equality', 'Women', 'Writing', 'This Happened To Me', 'Feminism']
Metadata is Useless — Unless You Have a Use Case
Opinion

Metadata is Useless — Unless You Have a Use Case

Here’s why context is key when it comes to unlocking the value of your metadata. Image courtesy of Shutterstock.

Last week, I participated in a panel at the Coalesce conference, led by the Fishtown Analytics team (creators of dbt), to discuss the role of metadata in the modern data stack. One of the points we discussed was: metadata is useless. In this blog post, I’ll explain why. Over the last decade, data teams have become increasingly proficient at collecting large quantities of data. While this has the potential to drive digital innovation and more intelligent decision making, it has also inundated companies with data they don’t understand or can’t use. All too often, organizations hungering to become data-driven can’t see the forest for the trees: data without a clear application or use case is nothing more than a file in a database or a column in a spreadsheet. In recent years, we’ve seen the rise of metadata: companies are collecting more and more data about their data. By and large, this enthusiasm around metadata is a huge win for the industry. ETL solutions like dbt make it easy to track and use metadata, while cloud providers make interoperability of metadata more seamless between data solutions in your stack. Still, as we become more and more metadata-dependent, it’s important to remember not to repeat these same mistakes.

More metadata, more problems

Just as data without context is nothing more than a bunch of numbers, metadata by itself is useless — it’s just more information about other information. Collect it all you want, but without a practical use case, metadata is largely meaningless. Take, for example, lineage, a type of metadata that traces relationships between upstream and downstream dependencies in your data pipelines. While impressive (neon colors! nodes!
sharp lines!), lineage without context is just eye candy, great for a demo with your executives — but, let’s be honest, not much else. Lineage without a business use case is just an empty March Madness bracket. Image courtesy of Barr Moses. The value of lineage doesn’t come from the simple act of having it, but instead lies in its relevance to a particular use case or business application. Where can lineage actually be useful? Aside from looking nice in a fancy demo or PowerPoint presentation, data lineage can be a powerful tool for understanding:

How to understand data changes that will impact consumers and determine the best course of action

Say, for example, you want to make a change to a particular field. Without lineage, you’re likely making that change blindly — hoping there are no downstream repercussions (you: “fingers crossed that no downstream consumers are going to be surprised by this change!”). By using field and table-level lineage, you can see which specific tables, reports, and most importantly — users consuming those assets — are going to be impacted by this change.

How to troubleshoot the root cause of an issue when data assets break

In another scenario, you may be paged in the middle of the night about a broken dashboard your team is supposed to present to execs the next morning. You need a quick way to understand what broke upstream to render your Tableau graphs completely useless. But what exactly is the root cause of this problem? And which of the 100,000 tables you have in your data warehouse will you need to fix? With lineage, you can immediately identify the upstream assets contributing to this data downtime and pinpoint the root cause.

How to communicate the impact of broken data to consumers

And finally, let’s say data breaks (as it often does) — specifically, an ETL job was completed, but the data in this column is now 80% null — essentially, a silent failure.
And now you need to highlight how this silent failure affects the users of this data. How do you know who will be impacted and should be notified? Lineage provides a quick and easy way to communicate what happened and where, so that you can keep stakeholders in the know while you resolve the issue. At the end of the day, lineage and metadata have the potential to be immensely valuable to data teams and companies at large — but only when they’re applied directly to your business. When captured holistically and in the context of business applications, metadata has the potential to serve as a force multiplier for your entire company. Image courtesy of Barr Moses. At the end of the day, your metadata (including but not limited to lineage) should answer more than the basic “who, what, where, when, why?” about your data. It should equip your customers (be they internal or external) with up-to-date and accurate answers to questions that relate back to your customers’ pain points and use cases, including: Does this data matter? What does this data represent? Is this data relevant and important to my stakeholders? Can I use this data in a secure and compliant way? Where does the answer to this question come from? Who is relying on this asset when I’m making a change to it? Can we trust this data? Many data teams are trying to answer these questions through a variety of solutions, including APIs that hook into modeling and pipeline transformation tools, data catalogs, documentation, and lineage. All four provide rich insights about your data, but they’re missing one critical piece: its application to your business.

Application is everything

Metadata without a use case is like an elephant riding a bicycle.
Interesting and impressive, but not very useful (unless you’re running a circus). The true power of metadata lies in where, when, and how we use it — specifically, how we apply it to a specific, timely problem we are trying to solve. In addition to collecting metadata and building metadata solutions, data teams also need to ask themselves: what purpose is this metadata serving? How can I apply it to solve real and relevant customer pain points? Personally, I couldn’t be more excited for the future of metadata. With the right approach, applied metadata can be a powerful tool for data observability, data governance, and data discovery, three critical components of having accurate, reliable, and trustworthy data that can move the needle for your organization. Want to derive more value from your metadata? Reach out to Barr Moses and the Monte Carlo team.
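To make the impact-analysis use case above concrete, downstream lineage can be modeled as a directed graph and walked to find every affected asset. This is a minimal sketch, not any particular tool’s API; the table names and edges below are invented for illustration:

```python
from collections import deque

# Lineage as a directed graph: edges point from an upstream
# asset to the assets built from it. (Hypothetical tables.)
lineage = {
    "raw_orders": ["stg_orders"],
    "stg_orders": ["fct_revenue", "fct_churn"],
    "fct_revenue": ["exec_dashboard"],
    "fct_churn": [],
    "exec_dashboard": [],
}

def downstream_impact(graph, changed_asset):
    """Breadth-first walk returning every asset downstream of a change."""
    impacted, queue = set(), deque([changed_asset])
    while queue:
        node = queue.popleft()
        for child in graph.get(node, []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return impacted

print(sorted(downstream_impact(lineage, "stg_orders")))
# → ['exec_dashboard', 'fct_churn', 'fct_revenue']
```

Reversing the edge direction turns the same walk into the root-cause use case: starting from the broken dashboard and traversing upstream instead.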
https://towardsdatascience.com/metadata-is-useless-535e43311cd8
['Barr Moses']
2020-12-21 22:32:07.090000+00:00
['Metadata', 'Data Engineering', 'Data', 'Data Science', 'Data Governance']
Analyzing Employee Reviews: Google vs Amazon vs Apple vs Microsoft
Analyzing Employee Reviews: Google vs Amazon vs Apple vs Microsoft

Which company is it worth working for?

Overview

Whether it is for their ability to offer high salaries, extravagant perks, or their exciting mission statements, it is clear that top companies like Google and Microsoft have become talent magnets. To put it into perspective, Google alone receives more than two million job applications each year. Working for a top tech company is many people’s dream; it was certainly mine for a long time. But shouldn’t we be asking ourselves, “Is it really worth working for one of these companies?” Well, who better to help us answer this question than their own employees. In this article, I will walk you through my analysis of employee reviews for Google, Microsoft, Amazon and Apple and try to uncover some meaningful information that will hopefully illuminate us when deciding which company it’s worth working for. I will start by describing how I cleaned and processed the data, and then talk about my analysis with the help of some visualizations. Let’s get started!!

Data

Data Collection

The employee reviews data used for this analysis was downloaded from Kaggle Datasets and was sourced from Glassdoor — a website where current and former employees anonymously review companies and their management. The dataset contains over 67k employee reviews for Google, Amazon, Facebook, Apple and Microsoft. The reviews are separated into the following categories:

Index: index
Company: Company name
Location: This dataset is global; as such, it may include the country’s name in parentheses [e.g. “Toronto, ON (Canada)”].
However, if the location is in the USA then it will only include the city and state [e.g. “Los Angeles, CA”]
Date Posted: in the following format MM DD, YYYY
Job-Title: This string will also include whether the reviewer is a ‘Current’ or ‘Former’ Employee at the time of the review
Summary: Short summary of employee review
Pros: Pros
Cons: Cons
Overall Rating: 1–5
Work/Life Balance Rating: 1–5
Culture and Values Rating: 1–5
Career Opportunities Rating: 1–5
Comp & Benefits Rating: 1–5
Senior Management Rating: 1–5
Helpful Review Count: A count of how many people found the review to be helpful
Link to Review: This will provide you with a direct link to the page that contains the review. However, it is likely that this link will be outdated.

Here is what the data looks like in tabular form: Preview of data in tabular form (not showing the last two columns)

Data Cleaning

After doing some basic data exploration, I decided to do the following to get the data ready for my analysis:

- Only include employee reviews for Google, Amazon, Microsoft and Apple. Although Facebook and Netflix had a good number of reviews, combined, they represented less than 4% of the dataset, so I decided to exclude them from this analysis for simplicity purposes.
- The “Link” and “Advice to Management” columns were dropped since I didn’t think they would be as insightful as the other columns.
- Rows with missing values in the “Date” column were dropped.
- A new column named “Year” was created containing the different years when the reviews were made.
- Rows with missing values in the following columns were dropped: “company”, “year”, “overall-ratings”, and “job-title”.
- Rows with missing values in all columns were dropped.
- Columns containing numeric values were converted to the appropriate data type.

Insights & Analysis

Which company has the most reviews?

I began my analysis by visualizing the distribution of employee reviews for each of the 4 companies I selected.
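The cleaning and counting steps described above can be sketched in pandas. The toy rows below mirror the dataset description but are invented for illustration, and the column names are assumptions:

```python
import pandas as pd

# Toy stand-in for the Glassdoor reviews file (hypothetical rows).
df = pd.DataFrame({
    "company": ["amazon", "google", "facebook", "microsoft", "amazon"],
    "dates": ["Jun 1, 2017", "Mar 12, 2016", "Jan 5, 2018", None, "Feb 9, 2015"],
    "overall-ratings": [3.0, 5.0, 4.0, 4.0, 2.0],
    "job-title": ["Current Employee", "Former Employee",
                  "Current Employee", "Current Employee", "Former Employee"],
})

# 1. Keep only the four companies under analysis.
df = df[df["company"].isin(["google", "amazon", "microsoft", "apple"])].copy()

# 2. Drop rows with a missing date, then derive a "year" column.
df = df.dropna(subset=["dates"])
df["year"] = pd.to_datetime(df["dates"]).dt.year

# 3. Count reviews per company (the basis of the first chart).
counts = df["company"].value_counts()
print(counts.to_dict())  # → {'amazon': 2, 'google': 1}
```

On the real dataset, `counts.plot(kind="bar")` would produce the distribution chart discussed below.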
Interpretation: We can clearly see that Amazon has the most employee reviews (over 25,000). This is great since it probably means that we’ll see a good mix of opinions. Although Google has the least amount of employee reviews, it is still large enough to be significant and be able to compare it to the other companies. Let’s take a look at how these reviews are distributed throughout the years for each company. Interpretation: As we can see, there is a decade’s worth of employee reviews available, but they only go up to 2018.

Microsoft: Most reviews are from the past 4–7 years
Google: Most reviews are from the past 4 years
Amazon: Most reviews are from the past 3–4 years
Apple: Most reviews are from the past 2–4 years

Based on these observations and considering how fast these companies are growing and changing every year, I decided to continue my analysis with employee reviews from the last 4 years available (2015 to 2018), since I believe they will be the most relevant.

Who is reviewing?

Now that we know how many reviews we are dealing with, let’s figure out who is writing them. This question can be answered in many different ways, and my first approach was to figure out the job title of the reviewers; here is what the top 5 looks like:

Anonymous Employee 21910
Software Engineer 930
Specialist 648
Software Development Engineer 618
Warehouse Associate 585

Unfortunately, most of the job titles are labeled “Anonymous Employee”. Considering that companies often have slightly different titles for the same job, I decided not to dig any deeper.
Instead, let’s take a look at how many of the reviewers are current and former employees. As we can see, most of the reviews come from current employees, but to get some more insight let’s see what this distribution looks like for each company: Interpretation: Once again, we see that most of the reviews for each company are from current employees. These are a few thoughts that came to mind when trying to interpret the data: Is having a large number of reviews from current employees a good thing or does it mean more bias? Perhaps having more reviews from former employees could give us the type of insights that we don’t often read about these companies. Let’s continue…

Which company has the highest overall rating?

Let’s take a look at how the average overall rating for each company has changed over the past few years (2015–2018). Interpretation: We can see that the average overall rating for every company, except Apple, has not decreased since 2015. Google holds the highest average overall rating among the 4 and it has remained that way for the past couple of years. Let’s talk about the trends for each company:

Google: Seems to have started decreasing slightly since 2016.
Microsoft: Increasing slowly since 2015.
Apple: Seems to be decreasing slowly.
Amazon: Has increased dramatically from 2015 to 2017.

Which company offers better Work-Life Balance?

Let’s find out how good these companies are at allowing their employees to have a life outside of work: Interpretation: Google has the highest work-life balance rating (over 4 stars) and Microsoft comes as a close second. Amazon seems to fall short when it comes to providing good work-life balance.

Which company has better Culture Values?
Let’s find out how employees rate the core principles and ideals of their company: Interpretation: Google has the highest rating for culture values and Apple places second (over 4 stars). Amazon has the lowest rating of the 4, but with just over 3.5 stars. Which company has better Career Opportunities? How good are they at helping you advance your career? Interpretation: Google has the highest rating for career opportunities (over 4 stars). This shouldn’t come as a surprise considering how big the company is and how many different types of technologies they are working with. Apple has the lowest rating at just below 3.5 stars. Which company offers better Benefits? Let’s find out how well these companies are doing in terms of benefits/perks for their employees. Interpretation: Google has the highest rating for benefits/perks with over 4.5 stars. Apple and Microsoft also seem to offer good benefits, but Amazon falls a bit short. Which company has better Senior Management? Leadership is an important function of management; let’s see how Senior Management’s leadership is rated at these companies: Interpretation: Google has the highest rating for Senior Management, but at just below 4 stars, which is its lowest when compared to its other ratings. Amazon has the lowest rating for senior management. What are the pros of each company? Let’s explore the pros comments using word clouds:
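A word cloud is essentially a drawing of word frequencies, so the counting step underneath it can be sketched with the standard library. The pros comments below are invented examples, not quotes from the dataset:

```python
from collections import Counter
import re

# Invented pros comments standing in for the real review text.
pros = [
    "Great benefits and smart people",
    "Smart coworkers, great work-life balance",
    "Free food and great benefits",
]

# Lowercase, tokenize, and count; a word-cloud library such as
# `wordcloud` would then scale each word's font size by its count.
word_counts = Counter(
    word for text in pros for word in re.findall(r"[a-z]+", text.lower())
)
print(word_counts.most_common(1))  # [('great', 3)]
```

In practice you would also drop stopwords like "and" before rendering, otherwise they dominate the cloud.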
https://towardsdatascience.com/analyzing-employee-reviews-google-vs-amazon-vs-apple-vs-microsoft-4dc3c036666b
['Andres Vourakis']
2019-04-26 23:10:48.541000+00:00
['Data Science', 'Data Analysis', 'Data Visualization', 'Towards Data Science', 'Python']
How to become the person people come to for advice
Everyone wants to be someone’s Yoda. Right? Everyone wants to be of value to other people. It feels amazing to solve someone’s problem. It fills your life with meaning. So… how can you become better at helping? How can you be the person people come to for advice? Good question. I can’t give you the answer… But I can help you form your own.
https://medium.com/the-innovation/how-to-become-the-person-people-come-to-for-advice-e471d94a04ad
['Christiaan Van Eijk']
2020-11-11 16:49:01.174000+00:00
['Personal Development', 'Entrepreneurship', 'Family', 'Personal Growth', 'Helping Others']
What the Chinese Received From Coronavirus
What the Chinese Received From Coronavirus A question was posted on Zhihu, the Chinese Quora. The answers were heart-rending and unexpected, and even Trump was a surprise. Wuhan citizens queuing to buy masks. Source: Wikimedia Amidst the gloom and doom, a Chinese netizen asked the following question on Zhihu, the Chinese Quora: “What have you received from this coronavirus epidemic?” At the time of writing, the question had received 15m views, 24k followers and 11k responses. The following are highlights of some of the answers given by the Chinese, many of whom are locked down in their homes and quarantined cities. Doctors and nurses barred from home by neighbors, children ostracized Notice found on door of a residential compound “Medical Staff Not Allowed”. Source: Wechat Right at the front line of the epidemic battle are the tens of thousands of doctors and nurses treating infected patients. But what some of them received in return was discrimination from neighbors and friends. One particular doctor shared a phenomenon that was experienced by many colleagues across China. They were barred from going back home by their own residential compound’s estate management and neighbors. At first, when the stories began circulating on social and mainstream media, many thought it was fake news. But one doctor asked his contacts from the hospitals cited in the news and verified this to be true in his Wechat post. He also shared a post from a nurse at his own hospital who faced the same situation. The first story broke from a nurse working in Nanyang city in Henan Province. She was refused entry into the estate where her home was after coming back from her shift one day. Despite police, hospital management and government officials arriving on the scene, after four hours of negotiations with her neighbors, she was still refused entry and ended up spending the night in a motel nearby. The ostracizing didn’t stop at the medical personnel themselves.
Stories also broke of parents telling their children not to play with the children of doctors and nurses, for fear of infection. Don’t watch this video if you’re easily moved. The scene of this Chinese nurse ‘air hugging’ her pleading daughter is heart-rending. A lifetime of mundane becomes a heroic story for life But an even more heart-rending story concerning another doctor was discussed in one response. On February 7th, 2020, a Chinese doctor in Wuhan called Li Wenliang died. He was one of the earliest to treat infected patients. Realizing this could be an epidemic in the making, he raised a warning by posting in a WeChat group of his medical school alumni about the new coronavirus. But for that, the Wuhan police issued him with a letter for disrupting social order and threatened him with criminal charges, unless he signed the letter and promised to “stop such illegal behavior”. That was in early January 2020. He started coughing soon after, having contracted the virus from a patient. A month later he died in the hospital. Dr. Li was a pretty ordinary person, according to the netizen who wrote the response about him. Based on his online activities, he indulged in mundane stuff like online lotteries and Marvel movie promotions. On social media, he posted pictures of himself holidaying in Guangzhou and eating Texas Fried Chicken. Dr Li Wenliang. Source: Weibo In an interview with The New York Times before he died, he said that he became a doctor because he “thought it was a very stable job”. He had a four-year-old child and an unborn one due in June... From his death, China received an ordinary hero. Chinese netizens poured out their anger and grief, and demanded reform and accountability from the authorities — despite attempts by the authorities to censor the social media barrage. “I started coughing on Jan. 10. It will take me another 15 days or so to recover. I will join medical workers in fighting the epidemic.
That’s where my responsibilities lie.” — Dr Li Wenliang, from a New York Times article Dr. Li was just 34 years old. But perhaps from his early demise, China will finally receive some much-needed reform on whistleblowing. According to Reuters, China’s top anti-corruption body said it would send investigators to Wuhan to probe “issues raised by the people in connection with Dr. Li Wenliang”. The heart returns home Not all the responses were full of grief and heartaches. The author of the most-liked response lamented that it was the epidemic that finally brought him home and closer to his parents. Like many others, having returned to his hometown for the Chinese New Year, he was now stuck there as companies all over China extended the holidays due to travel restrictions and fear of contagion. “Without this epidemic, I would not have been home to spend the 15th day of the Lunar New Year for seven years now. The fragrance of mum and pop’s cooking, the sunshine of my hometown — how nice.” He went on later in the article to share… “… I’ve hardly ever spent some quiet time at home with my parents. To be honest I really wouldn’t dare to quarrel with my folks now. With the epidemic so serious I would have nowhere else to go if I skipped home. Hence I find myself getting along with my parents for a record period of time. I’m going to use this precious two weeks to keep my folks company, and let myself slow down…” This netizen also noted — with a tinge of irony — that during last year’s Chinese New Year, local cinemas released a blockbuster called “The Wandering Earth”, about a post-apocalypse global effort to save the Earth from total destruction. In it there was such a line: “In the beginning, no one cared about this calamity. It was just another fire, another drought, another extinction of a species, another city disappearing. Until the disaster hit everybody…” A not so gentle reminder But movies are movies.
We watch, we laugh, we cry, and then we go home and quickly forget about it. Right now, the streets of China, and Wuhan in particular, are stark reminders that fiction can become reality. In the face of calamity and death, the human spirit unites. Adversaries put aside their differences and work together. Even Trump is no exception, despite having led two years of an aggressive US-China trade war. I believe this epidemic has given all of us something precious. A reminder that we all live on this same earth, nourished and destroyed by the same Mother Nature; that in the face of a common threat we should all remember, there is no you or I — there is only us. Spread the word (not the disease)
https://medium.com/behind-the-great-wall/what-the-chinese-received-from-coronavirus-42e7bed04296
['Lance Ng']
2020-02-16 13:45:15.941000+00:00
['Humanity', 'China', 'Epidemic', 'Coronavirus', 'Doctors And Nurses']
There’s No Justice In Killing Part III
My name is Rebecca Wu. And I’m Henry James, and we’re writers for Dark Sides of the Truth magazine. Part I, Part II By seven-thirty, we were camped out in a booth at Johnson’s enjoying one of the best artery cloggers in the city. Who knew we both liked our burgers with mustard and mayonnaise? “Pass me the ketchup, will you?” “You put ketchup on curly fries?” “You have a problem with that?” “Not at all. That’s just how I like them.” For a few moments, we enjoyed several bites of our burgers, dipped our curly fries in mounds of ketchup with our fingers, and washed them down with our carbonated beverages. “What?” “I don’t know. It’s just you don’t seem, well, uh…” “Korean?” “Well, I mean, yeah.” “That’s another thing Sunny told me about you, Henry. You like to fit people in tight little stereotypical boxes. Take me, for example. You probably expected me to suggest a local Asian diner where we could have some fried rice and something tossed in a Wok, right?” “Uh.” “And you think I’m some dainty lotus flower who has never gotten her hands dirty.” “Hang on Becky, I never said that.” “You didn’t have to. I can see it in your expression. I can tell by the questions you ask. That’s the point, Henry. You can’t put people on your shelf, dust them off and expect them to behave exactly like you want them to. It just doesn’t work that way.” “Look, it’s not like I haven’t been around the block a time or two. I’ve gotten pretty good at reading people.” “I hope so for your sake Henry because you’re going to need to bring your A-game to find Dwayne Macy.” “Right, okay Becky, what do you know about all this, besides what you’ve read?” “I know her brother was dead before Macy buried him. And I know how he killed him.” “Whoa, my little Korean princess. Back up. How do you know this? There are only three people on this planet who know.” “Four. Do you want the rest of those fries?” “Help yourself. So spill it, Becky. 
If we’re going to be partners, we need to learn to start sharing our info.” “You know the coroner at County Medical where they took Dante?” “Not personally, no. Manny said…” “Ah, yes. Manny Hermanos, special director of the FBI’s criminal investigative unit in Quantico.” “Uh, yeah. Go on.” “Well, it seems as if the coroner at County Medical has a penchant for Asian women with perky little breasts. All it took was a low cut blouse, a few deep breaths and a couple of jiggles and the man gave me everything I needed to know.” “When?” “Beg pardon?” “When did you do all this, and more importantly, why?” “Sunny called me last week after the board made their decision. She wanted me to have an interview with her and Rick. After I was hired on, and after reading the story and what had happened, I kind of guessed you weren’t going to let this one go. So I paid the coroner a visit.” “You guessed right. But you’re telling me something I already know.” After flagging one of the waiters, we waited until he brought us another round of sodas. “Henry, are you willing to split an apple strudel with ice cream with me?” “Never had it.” “Oh, sir. It’s to die for. We’ll have the strudel with two spoons, please.” After the dessert was delivered we both set about making short work of the sweet apple strudel slightly chilled by three dollops of ice cream. When we finished we leaned against the booth backs and sized each other up, silently wondering how this latest reunion was going to work. “Good idea Becky. That was pretty good stuff. But I think I’m going to call it a night. Thanks for the dinner and the conversation. How about we keep this between us girls, okay?” “Henry, sit down.” “Do what?” “I said, sit down. You haven’t heard the best part yet.” “This better be good, Wu.” “It is. Trust me. Let me ask you something. Do you even bother to read your story after it goes to print?” “What the hell is that supposed to mean?” “Just that. Do you?” “No.” “Of course not. Neither does Sunny. 
And that’s how you both missed it.” “Missed what? What the hell are you talking about, Becky?” “The clue. You and Sunny were both there. He gave you a clue, and you both missed it.” “I know all about the four clues Dwayne Macy gave us. I was there, remember?” “I’m not talking about Dwayne. I’m talking about what his father told you and Sunny when you visited him.” “Dammit, I already told you we knew Becky. His father gave us a second clue.” “I’m trying to be patient with you Henry, but you’re really starting to frustrate me to no end here.” “Then tell me what you’re talking about and quit pissing around.” “The phone call Henry. Charles Macy told you two his son called him about three months before you came to see him.” “Yes, and?” “Okay, from what I understand, this guy Dwayne is a ghost. I mean, he just rolled up his magic carpet and disappeared. Not a single trace of him can be found, right?” “Yeap.” “What if he made the call to his father on a regular line or a personal cell phone he was using before he canceled the service? What if he didn’t use a burner phone to make the call?” The reality of the possibility hung in the air, frozen between us, daring the both of us to grasp hold and run with it. “Damn. So far, this guy has been invisible. Always covering his tracks so well, none of us can find him. What if…” “He made a mistake?” “We need to make a little trip, Becky. I need to have a little chat with somebody who can help.” “You mean Robert, your brother who works for the NSA?” “How the hell do you know my brother?” “I was asked to be one of the bridesmaids at the wedding. We’ve all met several times over lunch.” “How come I wasn’t invited?” “Uh, because you’re, well, uh, you’re you.” “Figures. Okay, fine, whatever. Your car or mine?” “Well, I haven’t had a chance to clean mine lately. I’m sorry, but I’ve got a ton of potato chip bags and junk food wrappers tossed in the back seat. It’s really a mess.” “Wu, that’s music to my ears. 
Wait until you get a load of my car. You should feel right at home.” READ ON — THERE’S NO JUSTICE IN KILLING PART IV Let’s keep in touch: [email protected] © P.G. Barnett, 2019. All Rights Reserved.
https://medium.com/dark-sides-of-the-truth/theres-no-justice-in-killing-part-iii-6001f1e0861e
['P.G. Barnett']
2019-11-26 13:31:01.252000+00:00
['Storytelling', 'Henry And Rebecca', 'Stories', 'Fiction', 'Short Story']
Designing for a hackathon
Written by: Annie Xu Each year, students from around the world are eager to participate in hackathons and look forward to an exciting weekend filled with learning experiences and a healthy dose of competition. Hackers can have very different impressions of each event, starting from the very first social media post. No matter what we were told growing up, people do judge books by their covers. Design helps represent the identity of the organization and its values, as well as what hackers can expect from the event experience. All of this is done through ✨ branding ✨. Defining the brand identity is just one of the many aspects the Design team is responsible for on Hack the North. Designing visual assets, user experiences, and even wayfinding signage are just some of what falls under our purview. Throughout all of these projects, there are 3 pillars our design team carries into the design process: Designing with intention 🔬 Designing collaboratively and efficiently 🤝 Designing for inclusion and accessibility 🌎 Designing with intention 🔬 How can we make our website more accessible? What narrative should we build with our brand? What information goes first on the sponsorship package? When designing for a big event, we often have to make difficult decisions and hope that the decisions we make are objective. In reality, we sometimes get caught up designing in a bubble where we let our biases influence our decisions. As a design team, we make a conscious effort to be intentional with our design decisions and be well informed about our community. When it comes to each design project, there are two things to always keep in mind: 1) What’s the desired outcome? and 2) Who’s our audience? When branding Hack the North 2020++, our goal was to simultaneously encompass the event’s direction while staying identifiable as the event our hackers all know and love. Specifically for 2020, we wanted to emphasize our main mission: To make it easy for anyone to dream big and build.
Our audience includes passionate and innovative students from around the world. Each year we reach out to our hackers to ask for feedback on the event and their experiences, which helps us better understand the community we’re designing for. What do our hackers value? What truly makes their experience special? When designing the brand, we must consider what makes our hackathon unique to our audience. In order to push for inclusivity last year, we designed big and bold empowerment posters for the event. Since our hackers value these unique experiences, we wanted to showcase them on our landing page for 2020.
https://hackthenorth.medium.com/designing-for-a-hackathon-f3025c8aa4df
['Hack The North']
2020-08-07 16:10:14.296000+00:00
['Hackathons', 'Accessibility', 'Branding', 'Technology', 'Design']
How to Design Interruptions
How to Design Interruptions What my ADHD taught me about living in a world of notifications and how to better design them “The scarce resource of the 21st century will not be technology; it will be attention.” — Mark Weiser Imagine you’re having a positive, fully immersive conversation with someone in a restaurant. Your concentration is focused on the person across from you. In this context, how would a server complement this experience? Detract from it? Some servers ignore you and need to be tracked down, which negatively affects the overall experience. Others organically know when and how to interrupt you. More often than not, servers adjust their approach based on your verbal and non-verbal cues. This example is a reminder that humans are the true experts in adapting. We can apply this analogy to technology: a person may desire real-time pop-ups or in-text communications when we share tips, updates, or alerts. Or, those may be distracting and disrupt their flow. Achieving focus When technology communicates and behaves well, it enables you to do what you want to, on your terms. It communicates in ways that allow you to focus and achieve the level of concentration you need to accomplish a task. Interruptions in our lives We’re alerted hundreds of times per day, and not all alerts are harmful. Interruptions can be helpful when they’re urgent, when they can progress a current task, or when they contain information a person cares about. Some are useful and non-invasive, like an oven burner turning orange when it’s hot. Some are needed, like a critical security update, while others are just generally helpful, like a feature suggesting something new. But when they appear at inopportune moments, even the most useful notifications often have detrimental results like anxiety, frustration, and reduced productivity. While a pop-up might be nearly invisible to one person, to another it might stop a critical task completely for hours. We must examine when our communications are helpful vs.
harmful. We’re notified hundreds of times a day. Some are so ambient, we may not even notice. The cost of interruption Harmful interruptions take a large toll and lead to net-negative experiences with products. It takes an average of 25–45 minutes to get back into your task once jolted out. In many industries, there can be safety concerns. A changing landscape Sounds simple, but enabling focus is an increasingly difficult challenge. The points of contact between people and products are undergoing massive evolution. Experiences have moved beyond screens to engage and immerse multiple senses. Each of these new interactions presents new potential points of friction and interruption. Inclusive design is about reducing errors and creating seamless transitions as people move from one moment to the next. Learning from people I wanted to learn how to design interruptions more respectfully from people working in multiple industries, cognitive science experts in and outside of Microsoft, and those with heightened sensory sensitivity. This manifested in interviews with chefs, emergency room doctors, pilots, cognitive psychologists, people who spoke English as a second language, and people who have disabilities — seen and unseen. Personally, I struggle in flow because my computer makes assumptions about what I need and want. For some, notifications are slightly annoying. To me, they can be crippling and ruin my entire day. I need to work out for two hours, drink several cups of chamomile tea, and meditate as a prerequisite to working in certain programs. When we design technology, we’re teaching it how to behave as it interacts with people. To do this, we need to understand people and their motivations. Technology should respond to the unique way people think, feel, and behave. We need to put people, not technology, in the lead of interactions. From this research, Doug Kim and I landed on a framework of Design Considerations to keep in mind. 
The framework will help you start addressing questions like: How can technology act respectfully, understanding when interruptions are helpful vs. harmful? How can technology serve the right information at the right time, while cutting back unwanted information, distractions, and extra steps? How can we identify when, where, and how it’s appropriate for a system to communicate with people? How can information blend in the background so people can focus on their task, not the tool? Design Considerations 1. Understand urgency and medium There are many ways technology communicates: a visual pop-up, orange blinking light, a sound, a vibration. Are all modes needed to capture someone’s full attention for one low-urgency communication? Consider: When designing any form of communication, determine how much attention it needs and when: full attention, partial attention, little attention. Determine ways to align the delivery form with the urgency of the message. An important message may warrant taking full attention from the person. A non-urgent software update may not. To help make better decisions about timing and delivery, think about how to balance the benefit of the interruption with the cost of interrupting the task. Ask yourself: Do you have a range of alert types that convey various levels of importance? How do you use visual, aural, and haptic modes of communication today in your experience? What’s the cost to the customer if they miss your interruption? How can you be more respectful? How can you make use of the periphery to keep the person focused on their primary task? 2. Adapt to the customer’s behavior How a customer interacts with each feature or part of your experience will change over time. Consider: Think of every experience we build as a conversation between customer and technology. How good or helpful is a conversation when only one side is listening, and the other side just speaks whenever it wants to? 
What does it take for technology to understand when it’s appropriate to communicate? Humans have the capability to understand that what’s appropriate in one context might not be appropriate in another. Let’s learn from human-to-human interaction to create experiences with the lowest mental cost. Ask yourself: What are all the alert types that your customer could encounter while in your experience? When can you speak with a diverse range of people to learn about how they experience your systems’ communications? How can you learn from human interactions in the physical world to judge how and when it is most appropriate to interrupt someone? Are you listening to your customer behavior? Can your system learn from how customers interact and modify behavior? 3. Adapt to context We all focus, filter, and consume information in unique ways. We have capabilities and limitations for tuning in and out information. These preferences and capabilities can rapidly change based on context. Because of that, how a person interacts with each feature or part of an experience will change. Can your system learn from how people interact to modify the way it communicates? Consider: Alone or in a crowded room. No WIFI or full speed internet. On the go or in a conference room. Limited use of sight due to a permanent disability. Glare on a sunny day. There are many contexts to consider when designing communications systems: cognitive, environmental, social, physical, cultural. Understanding your customers’ primary motivation can help you design an experience that contributes to vs. takes away from their goal. Build experiences that respect and adapt to context. Ask yourself: Can your system learn from how people interact and shift contexts to modify the way it communicates? What experiences could be competing for your customer’s attention? What contexts have you currently built for and which should you take into consideration moving forward? What is your customers’ main goal? 
What could interrupt that goal? Can your system learn from customer behavior? How could it improve? 4. Enable the customer to adapt Personal experiences are tailored to an individual. Customizable features help customers feel empowered and in control of their devices. Many alerts on computers today are difficult to tune out or turn off. With multiple applications running at once, we can be inundated with communications. Better systems have ways for users to control the type and timing of notifications. Consider: Allow personalization of the type, time, and mode of communication. Design an entire experience mindful of feedback, triggers, and alerts from the system and how a person can make the experience their own. Ask yourself: When and where can you allow deep personalization of the type, time, and mode of communication? What control does the customer have over alerts in your experience? Does your customer know if they have control over their alerts? Do your systems’ communications take your customer’s current task into account? When do you and when could you allow deep personalization in communication type and frequency? 5. Reduce mental cost Experiences are moving beyond screens to engage and immerse simultaneous human senses. Each of these new interactions presents new potential points of friction. Consider: There’s plenty of evidence that people are overwhelmed by the sheer amounts of information that they receive through technology. But this is also a function of how many steps people need to take to interact with technology. And how many distractions they encounter along the way, visual, audio, organizational, etc. What is nearly invisible or even helpful to one person may be disruptive to another. Ask yourself: How can you better understand the mental cost of customers within each step of their journey? How can you build intelligence to know when a customer is the most interruptible? 
How can you identify what’s worth interrupting a person for and what isn’t? How can you better understand what’s important to the customer and not make assumptions on their behalf? We’re piloting a series of tools with universities and internal teams and have made some strides here, like Focus Assist in Windows 10, but there’s a lot more that’s needed. Many thanks to Amber Case, Doug Kim, Jutta Treviranus, Mary Czerwinski, and the extended Microsoft Design Community for their contributions and research. Want more? Check out our website for a series of short films and the inclusive design toolkit. To stay in the know with Microsoft Design, follow us on Dribbble, Twitter and Facebook, or join our Windows Insider program. And if you are interested in joining our team, head over to aka.ms/DesignCareers.
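To make the first consideration concrete, the idea of matching delivery form to message urgency can be sketched as a small policy table. Everything here is a hypothetical illustration of the principle: the urgency levels, channel names, and do-not-disturb flag are my own, not an actual Windows or Focus Assist API.

```python
from enum import Enum

class Urgency(Enum):
    LOW = 1     # e.g. a non-urgent software update
    MEDIUM = 2  # e.g. a suggestion relevant to the current task
    HIGH = 3    # e.g. a critical security alert

# More urgent messages may claim more attention channels
# (visual periphery -> full visual -> sound and haptics).
POLICY = {
    Urgency.LOW: ["badge"],
    Urgency.MEDIUM: ["badge", "banner"],
    Urgency.HIGH: ["banner", "sound", "haptic"],
}

def channels_for(urgency, do_not_disturb=False):
    """Pick delivery channels for a message, honoring the user's
    focus setting for everything below high urgency."""
    if do_not_disturb and urgency is not Urgency.HIGH:
        return []  # defer until the user is interruptible
    return POLICY[urgency]

print(channels_for(Urgency.LOW))                          # ['badge']
print(channels_for(Urgency.MEDIUM, do_not_disturb=True))  # []
```

The point of the sketch is that urgency, not the sender's convenience, decides how many senses a message is allowed to engage, and that the user's context can veto everything except the truly critical.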
https://medium.com/microsoft-design/how-to-design-interruptions-b93c0c667e6f
['Margaret P']
2020-07-17 18:28:23.104000+00:00
['Interaction Design', 'Design', 'Inclusive Design', 'Microsoft', 'UX']
Design review and gamification: how to make a behavior change in your team
Design review and gamification: how to make a behavior change in your team By Santi Martinez, Apr 27 We all remember that when we were kids our parents only let us play after we had done our homework: studying and having fun were two different things. As adults, we follow the same pattern. Unconsciously, we end up classifying our daily activities as “serious” and “fun”. Serious ones always come first: organizing tasks, answering emails, or planning your daily agenda are a priority, and only later do we go out and hang out with our friends. The concept of gamification comes to bridge the gap between fun and work. Turn it into a game Gamification is the use of game-design concepts for a purpose beyond entertainment alone. Until recently, our design reviews were a 60-minute meeting where designers showed, on low or high fidelity wireframes, what had been designed so far in order to get feedback. This no longer works for us because we feel it is wearisome and pointless. However, we can’t completely avoid this process because it’s very important for us to meet our high-quality standards and foster fluid communication and feedback exchange between the different areas that work on the same project (design, development, and Quality Assurance). We decided to take our design review process to the next level by leveraging gamification techniques. Step by step, like an arcade 1. Designers choose up to three flows in progress so that a group of reviewers can give feedback and suggestions. To be faster and more efficient, we count on a deck of cards with UX and UI topics. Are you wondering how you can get that deck? Well, don’t! Here, we share our lovely deck with you. 2. Participants are split into two groups: designers and team managers on one side of the room and reviewers on the other. A moderator times and guides the 60-minute meeting: 7 minutes to speak about the main points of the project (scope, user persona, benchmark, etc).
13 minutes to discuss and analyze the reviewers’ statements about each flow. Some final minutes to fill in a feedback form with comments and suggestions. It’s important to pay attention to how we express our feedback. We need to take care with the words we choose and always value our colleagues’ work. A design review is intended to enhance the project, never to offend the designer. 3. We create a follow-up form with stickers and a system of “prizes” rewarding the teams that ask for a product design review session. 4. As a result of this playful process, product design reviews are a great chance to discover new opportunities for our digital products and to explain and justify our improvements to our clients. Behavior change Changes in our working methodologies may be perceived as a setback or misstep. We tend to judge and focus on what is “lost” rather than valuing the skills we learned along the way. If we always see a half-empty glass, perhaps it is time to look for a smaller glass or more specific objectives. Can we seek a behavior change without pressure? Is this possible? The answer is yes. Gamification opens the possibility for an autonomous change, not a compulsory one. These techniques offer us an incredible number of resources to foster behavior changes across different projects: completing steps in a fitness app, earning crowns when we finish a course in a language-learning app, or making our coworkers much more motivated to carry out a daily process. Now it is up to us to think about when we can put it into practice and achieve better results.
https://medium.com/wolox/design-review-and-gamification-how-to-make-a-behavior-change-in-your-team-56cd82debe74
['Santi Martinez']
2020-11-06 14:14:30.577000+00:00
['Product Design', 'Design Review', 'Gamification', 'Design', 'UX']
Poppies -Chapter 31-
Chapter -31-

Pauline paced outside of the shed, waiting to be invited inside by her sisters. She twisted her light-brown hair streaked with gold around her finger. She put the ends into her mouth and sucked. She liked to suck on her hair even if mean old Mara-Joy would yell at her and say she was disgusting. That just made Pauline want to suck on her hair more. She would have thought Mara-Joy would be nice to her, considering she couldn't have any little kids of her own. On account of the "miscarriage" two years ago that wasn't allowed to be mentioned. But that didn't matter. Mara-Joy continued to be mean to Pauline. She would always find something to hit her for, whether it was for slurping her milk or dragging her feet, which Pauline did all the time when Mara-Joy came over to the house. She liked getting Mara-Joy angry. The only problem was that whenever she got Mara-Joy angry, Mama got furious and would start declaring that Mara-Joy shouldn't be upset in her own home. Her own home? Pauline thought she lived with Chad in their home. Mara-Joy only came over to get Alan-Michael and take him on special trips to the zoo and the movies. She never took Pauline on these special outings. She never even asked if Pauline wanted to go. Maybe it was because Alan-Michael was younger and it made Mara-Joy feel like she had a child to care for. Alan-Michael was only two years younger than her. Not that much younger. Besides, ten years old wasn't all that grown-up. Pauline sighed. It didn't really bother her. What bothered her was that Mama never told Mara-Joy to include Pauline as well. Mama insisted Joanna and Constance take Alan-Michael with them. Why didn't Mama uphold the same rules with Mara-Joy? It didn't matter. Joanna and Constance liked taking Pauline places, and not Mikey. A year ago Joanna and Constance had found Pauline hiding in their secret place. They weren't angry with their little sister.
They let the shed be her special place too. They always let her tag along. Pauline kicked a few stones with the toe of her shoe. The pebbles flew up and made a pinging noise as they hit the side of the shed. Joanna and Constance both turned around from their crouched positions, startled. Relief spread across their faces when they realized it was only Pauline. "What are you doing lurking over there? Come on, get out of the shadows and sit with us," Joanna said, waving Pauline over. Pauline smiled, releasing the sticky strand of hair from her mouth. She ran over toward her sisters huddled together and crouched between the two girls. All three looked alike except for a slight difference in hair color. Joanna's long hair was light brown and poker-straight, very much like their mother's. Constance's hair was a wavy blonde that fell below her chin. Pauline's was a mix between the two: brown with lots of gold highlights sprinkled throughout her shoulder-length hair, not wavy like Constance's, but not as straight as Joanna's either. All three had their father's eyes: large, slanting green eyes. All the girls had tan complexions, as did their brother, who, unlike the girls, had inherited their father's looks. The only one who was fair-skinned was Mara-Joy. She didn't resemble any of them, not even Alan-Michael, who looked a little like his other sisters. Joanna turned to Constance, bent on continuing her conversation where she had left off. "How can I get him to notice me?" Joanna asked, pressing her lips together, strained. At almost sixteen years old, Joanna was no longer a plain, gangly girl. She was long, lean, and graceful, with an attractive oval-shaped face and uniquely shaped eyes. "I have it all planned," Constance said, waving her hands in the air, a habit of hers when trying to explain herself. Pauline lifted her hands, mimicking Constance. Joanna stopped Pauline's waving hands mid-air, her eyes still focused on Constance. "Well, don't keep me in the dark.
Tell me. How can I get Chad to notice me?" Her eyes twinkled. The time had finally come. It was the opportunity to get back at Mara-Joy for all her years of torment, to teach her a lesson and give her a dose of her own medicine. "Mara-Joy comes over every Friday to take Alan-Michael out to God knows where. When she comes back, she always stays and visits with Mama and Pappy. When Mara-Joy leaves with Alan-Michael, tell Ma and Pa you have a date." She looked at both Pauline and Joanna, making sure she had their full attention. "Go over to Chad's. He will be there," she said before Joanna could interrupt. "He goes over the bills on Friday. You know, trying to figure out some way to pay for the stuff Mara-Joy buys." Joanna began to get excited. It did make sense. Mara-Joy and Chad had the same routine every Friday. She couldn't remember the last time it had not been so. "But what if she comes home before I have left?" Joanna stuttered, trying to find a hitch in Constance's plan. Proceeding with the plans they had made two years ago both excited and frightened her. "I'll call when she leaves home," Constance said matter-of-factly. "You are so smart, Connie," Joanna squealed, grabbing hold of Pauline and hugging her. Pauline had stopped listening to the two girls, lost in a daydream. When Joanna squealed in delight, she squealed back, unaware of what she was squealing for. Constance shrugged but laughed. She loved Joanna dearly, and only she had permission to call her Connie. Everyone else, including Pauline, called her Constance. Connie was a term of endearment meant only for Joanna. Constance was a firm believer that the name Connie did not evoke an image of intelligence. She knew Joanna thought she was smart, but to everyone else she had to prove her mind worked. With a no-nonsense name like Constance, people had to see that she was more than just a girl: an astute person. "Then it is set." Constance beamed, folding her arms across her swelling chest.
Her plan was foolproof. "I suppose it is." Joanna covered her mouth with her hands, not knowing what to do. She didn't know if she should burst out laughing or throw up. "You can do it, sis. I have faith in you," Constance said, reaching out and placing a firm hand on Joanna's knee. Joanna smiled, unsure, placing her hands on top of Constance's. They felt warm and smooth, familiar and safe, and at that moment she needed to feel safe. What they planned to do was uncharted territory. There would be no turning back after she succeeded in her plan. "I hope so, Constance, and I hope I know what I am getting myself into."
https://medium.com/illumination/poppies-chapter-31-8f91512d3de0
['Deena Thomson']
2020-12-18 14:48:53.053000+00:00
['Novel', 'Writing', 'Fiction Series', 'Fiction', 'Fiction Writing']
Black Scholes Model in Python for Predicting Options Premiums
Black Scholes Model

The Black Scholes model is considered one of the best ways of determining the fair price of an option. It requires five variables: the strike price of the option, the current stock price, the time to expiration, the risk-free rate, and the volatility.

Black Scholes Formula

C = St·N(d1) − K·e^(−rt)·N(d2)

where

- C = call option price
- N = CDF of the standard normal distribution
- St = spot price of the asset
- K = strike price
- r = risk-free interest rate
- t = time to maturity
- σ = volatility of the asset

and

d1 = [ln(St/K) + (r + σ²/2)t] / (σ√t), d2 = d1 − σ√t

Assumptions Made for this Calculator

- It works on European options that can only be exercised at expiration.
- No dividends are paid out during the option's life.
- There are no transaction or commission costs in buying the option.
- The returns on the underlying are normally distributed.

Black Scholes in Python

For the Black Scholes formula, we need to calculate the probability of receiving the stock at the expiration of the option, as well as the risk-adjusted probability that the option will be exercised. To do that, we need a function for calculating d1 and d2. We can then feed them into the normal distribution's cumulative distribution function using scipy, which gives us the probabilities. Each function requires five inputs: S, K, T, r, and sigma. Now we can implement the Black Scholes formula in Python using the values calculated from the above functions. Note that there have to be two different formulas: one for calls and one for puts.

Collecting the Data

To test the formula on a stock, we need the historical data for that specific stock and the other inputs related to it. We will use Yahoo Finance and the pandas library to get this data. We can calculate sigma, the volatility of the stock, by multiplying the standard deviation of the stock's daily returns over the past year by the square root of 252 (the number of days the market is open in a year). As for the current price, I will use the last close price in our dataset.
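The d1/d2 helpers and the two pricing functions described above can be sketched as follows. This is a minimal illustration; the function names (d1, d2, bs_call, bs_put) are my own, since the article's original snippets are not reproduced here:

```python
# Black Scholes pricing sketch: d1/d2 plus the call and put formulas.
import numpy as np
from scipy.stats import norm

def d1(S, K, T, r, sigma):
    # d1 = [ln(S/K) + (r + sigma^2/2) T] / (sigma sqrt(T))
    return (np.log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * np.sqrt(T))

def d2(S, K, T, r, sigma):
    # d2 = d1 - sigma sqrt(T)
    return d1(S, K, T, r, sigma) - sigma * np.sqrt(T)

def bs_call(S, K, T, r, sigma):
    # C = S * N(d1) - K * exp(-rT) * N(d2)
    return (S * norm.cdf(d1(S, K, T, r, sigma))
            - K * np.exp(-r * T) * norm.cdf(d2(S, K, T, r, sigma)))

def bs_put(S, K, T, r, sigma):
    # P = K * exp(-rT) * N(-d2) - S * N(-d1)
    return (K * np.exp(-r * T) * norm.cdf(-d2(S, K, T, r, sigma))
            - S * norm.cdf(-d1(S, K, T, r, sigma)))

# Textbook sanity check: S=100, K=100, T=1 year, r=5%, sigma=20%.
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 4))  # → 10.4506
```

A quick consistency check is put-call parity: for any sigma, C − P should equal S − K·e^(−rT).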
I will also input r, the risk-free rate, as the 10-year U.S. Treasury yield, which you can get from the ^TNX ticker.

Utilizing the Function

We can now output the price of the option using the functions created before. For this example, I used a call option on 'SPY' with expiry '12-18-2020' and a strike price of '370'. If all is done correctly, we should get a value of approximately 16.169.

Implied Volatility and the Greeks

A useful number to look at when dealing with options is the implied volatility. It is defined as the expected future volatility of the stock over the life of the option. It is directly influenced by the supply and demand for the underlying option and by the market's expectation of the direction of the stock price. It can be calculated by solving the Black Scholes equation backwards for the volatility, starting with the option's trading price. Other useful numbers people look at when dealing with options are the Greeks. The following are the main five:

- Delta: the sensitivity of an option's price to changes in the underlying asset's price.
- Gamma: the change in delta relative to changes in the price of the underlying asset.
- Vega: the sensitivity of an option's price to the volatility of the underlying asset.
- Theta: the sensitivity of an option's price to the option's time to maturity.
- Rho: the sensitivity of an option's price to interest rates.

Options Pricing

I mentioned above that options prices are mainly affected by the underlying prices, but it is also important to know the other factors used in their pricing. The three main shifting parameters are the price of the underlying security, the time, and the volatility. Implied volatility is key when measuring whether options are cheap or expensive. It allows traders to determine what they think the future volatility is likely to be.
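Solving the Black Scholes equation "backwards" for the volatility, as described above, can be sketched with a bracketing root-finder. This is an illustrative approach (the article does not show its own solver), and the names implied_vol and bs_call are my own:

```python
# Implied volatility sketch: find the sigma that reproduces an observed option price.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    # Standard Black Scholes call price.
    d1 = (np.log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0):
    # The call price increases monotonically with sigma, so a bracketing
    # root-finder such as brentq converges reliably on [lo, hi].
    return brentq(lambda sigma: bs_call(S, K, T, r, sigma) - price, lo, hi)

# Round trip: price an option at 20% volatility, then recover that volatility.
price = bs_call(100, 100, 1.0, 0.05, 0.20)
print(round(implied_vol(price, 100, 100, 1.0, 0.05), 4))  # → 0.2
```

In practice you would feed in the option's observed market price rather than a model price; the same round trip works for puts with an analogous bs_put function.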
It is often recommended to buy when implied volatility is at its lowest, as that generally means option prices are discounted: options with high levels of implied volatility carry high-priced premiums. On the opposite side, when implied volatility is low, the market's expectations and demand for the option are decreasing, causing prices to decrease.
https://medium.com/datadriveninvestor/calculating-option-premiums-using-the-black-scholes-model-in-python-e9ed227afbee
['Suhail Saqan']
2020-12-03 19:04:26.374000+00:00
['Math', 'Options', 'Black Scholes Model', 'Python', 'Stocks']
4-Flutter Adding Points on a Map — The web coder path to Flutter
Everything is nicer when you can add points on a Flutter map. The complete Flutter code so far is at the end of the article. Now that I can display points on a map, I will change my app and Web API to add new points from my mobile app. This article is a follow-up on:

FloatingActionButton

To get started, I will add a simple "FloatingActionButton". These are circular buttons that appear at the bottom of the screen, and they are very convenient and easy to use. First I will change my build method to accommodate my new button: adding a floatingActionButton. My FloatingActionButton will be returned from a function, which will return one of two buttons: one to set the new marker, and another to submit after the new marker has been set: return floatingActionButton. This function will also call two other functions depending on the state of the "newMarker" variable. First, when it's null and we need to set the marker's location: setting newMarker position.

Initial FloatingActionButton

When I first set my "newMarker", I will call the function "setNewMarker" to set its starting location at the center: setting newMarker position.

Submit new Point FloatingActionButton

After I've moved the marker to the location I want, I will have another button on the "floatingActionButton" to submit the new entry: open submit newMarker form. The button for submitting the new point will use "Navigator", which is used to navigate to other screens I created; this will open a new screen with a form to complete the point's information. Note that I'm passing the "newMarker" I set earlier to the "NewEntry" screen/widget. Later, I will show more about the "NewEntry" screen I navigate to.

Code improvements

I've slightly restructured my original "getMap" function: new getMap and newMarker. It now checks whether "newMarker" is null: when it is, it displays only the markers already submitted; when it's not, it also displays the marker I'm setting.
My original "getMyData" also has a small change: new getMyData. To make the code more readable, I've created a couple of functions to deal with filling my "_markers", because I'm going to update this info in more than one place: fill my _markers.

Creating a new Class

Now, going back to that "NewEntry" I mentioned earlier: I'm going to create a file named "newEntry.dart" and add my first Stateless widget: NewEntry Stateless Widget. A Stateless widget is a widget (which can be made of many other widgets) that never changes state. Unlike with a Stateful widget, we cannot use "setState", and it cannot be changed after it is created. It is said that we should always start with a Stateless widget, and only switch to a Stateful one when it is strictly necessary. I've also added a parameter that is required to initiate this class, "newPoint"; this could be compared to a "QueryString" in a URL, since I'm more used to the web. This parameter will be used to pass the latitude and longitude of my marker to my Web API.

Form Validation

New Entry Form. I'm going to create a Form with a GlobalKey to use validation, so that it is only submitted after Title and Description are filled in. Both fields will have a validator. Note: I have to pass "BuildContext context" to my function because, since we are using a Stateless widget, we need to pass it in when the widget is first built; we can't access it directly after that. As an example, for the Title I use a TextFormField with an associated controller (controllers give you access to what has been filled in on that field), and in the validator I just check whether it's empty.
If it's not empty, the validator returns null and we can continue: Title Field. To make sure all the fields are filled in before proceeding, all I have to do is call "_formKey.currentState.validate()"; it will run each field's validator, and if nothing is wrong, it will continue: Save Button. With MediaQuery, you can get information about the size and layout of a device's screen. In the previous code, I have a call to my Web API, "List result = await requestNewPoint();". The result can be sent back to "main.dart" with a simple "Navigator.pop(context, result);". This simply closes the current screen and returns a value (result). In "main.dart" I'm waiting to receive this data: it's like opening a popup and waiting for a result from it. Validation
https://medium.com/flutter-community/4-flutter-adding-points-on-a-map-the-web-coder-path-to-flutter-f469d2183274
['Emanuel Luís']
2019-12-30 16:46:20.989000+00:00
['Csharp', 'Flutter', 'Maps', 'Sql']
Native Lazy Loading in the Browser
1. What Is Lazy Loading?

When an img or iframe tag is in the HTML and parsed by the browser, the browser immediately loads its source. This delays the whole page from finishing loading, so it takes longer before the user can interact with the page. We want the user to be able to interact with the page as fast as possible, so smart people came up with lazy loading: a technique that loads images and iframes only when they become visible to the user. This ensures the user only downloads what is visible.

History

Back in the day, we did lazy loading via a scroll event listener. On every event, we checked an array of images and iframes to see if any of them had become visible. If one had, we changed its attribute data-src="source URL" to src="source URL", which loads the image or page at that moment. But a scroll event is very inefficient, because it fires countless events in a very small timeframe. We also had to calculate whether the element was visible with getBoundingClientRect(), and the downside was that every call forced the browser to lay out the whole page again. It was not ideal, but it was better than loading all the images at page load and making the user wait until the page was fully loaded.
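The legacy approach described above can be sketched roughly like this. This is an assumed reconstruction, not code from the article; the visibility check is factored into a pure helper so the geometry logic is easy to follow:

```javascript
// Legacy lazy loading: a scroll listener that swaps data-src into src
// once an element scrolls into the viewport.

// Pure helper mirroring a getBoundingClientRect() result: an element is
// visible when its rect overlaps the vertical viewport range [0, viewportHeight].
function isInViewport(rect, viewportHeight) {
  return rect.bottom > 0 && rect.top < viewportHeight;
}

function lazyLoad() {
  document.querySelectorAll('img[data-src]').forEach(function (img) {
    // getBoundingClientRect() forces a layout pass, which is exactly
    // the inefficiency discussed above.
    if (isInViewport(img.getBoundingClientRect(), window.innerHeight)) {
      img.src = img.dataset.src;        // start the actual download
      img.removeAttribute('data-src');  // never process this element again
    }
  });
}

// Only wire up the listener in a browser environment.
if (typeof window !== 'undefined') {
  window.addEventListener('scroll', lazyLoad); // fires many times per scroll
  lazyLoad(); // handle elements already visible on initial load
}
```

Modern code would use an IntersectionObserver (or the native loading="lazy" attribute) instead of the scroll listener, precisely to avoid the repeated layout work.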
https://medium.com/better-programming/native-lazy-loading-in-the-browser-85dabe6653ed
['Dev Rayray']
2020-05-20 14:32:52.072000+00:00
['Software Engineering', 'Programming', 'Software Development', 'Asynchronous', 'JavaScript']
Amazon and Walmart are hiring hundreds of thousands of warehouse workers, as the invisible retail underclass form the ‘backbone of our economy’ this holiday season
By Kate Taylor

Every Saturday at 6:30 p.m., a worker at a Walmart distribution center starts his first of three nearly 12-hour shifts at what he calls "a COVID box" in Shelby, North Carolina. Many coworkers fail to follow mask requirements, the worker said. Social distancing is difficult, especially as groups crowd together at the beginnings and ends of shifts. Rumors of COVID cases permeate the distribution center, with little communication from higher-ups when coworkers stop showing up to work. And, with the holiday shopping season in full swing, more and more workers are being hired.

"Fear overcasts our atmosphere at work," he told Business Insider. "People are afraid — legitimately afraid — about their wellbeing."

The worker spoke with Business Insider on the condition of anonymity to avoid repercussions. But, he is far from the only warehouse worker concerned about safety during the pandemic, especially as the ranks of employees in distribution and fulfillment centers swell during the holidays. This employee and 1.2 million others working in warehouses across America are the invisible underclass as retailers head into the peak shopping season. Traditional images of holiday shopping are filled with frantic cashiers and bustling retail workers. But, in 2020, the success or failure of the season rests on warehouse employees more than ever before. From April to October, the US added 146,700 warehouse and storage jobs, as people stuck inside turned to online shopping. The end of the year is already peak time for warehouses, when companies hire thousands of seasonal workers to make sure presents arrive at stores and shoppers' homes before Christmas Day. This holiday season, even more workers will be needed in warehouses — just as COVID cases surge across the US.

"The retail worker is no longer the guy that helps you check out," Juan Arias, a senior consultant at real estate data firm CoStar Group, said.
"You probably don't need help checking out anymore." Instead, warehouse workers have become "the backbone of our economy."

"Let's hope that nothing happens with these workers," Arias said. "Because right now … our economy is basically working on the backs of these people."

Warehouse work has exploded in recent years

Amazon and other retailers have hired hundreds of thousands of warehouse workers. Photo: Ina Fassbender/AFP via Getty Images

Retailers rely on a web of warehouses across America to keep shelves stocked and deliver packages to online shoppers. Amazon has roughly 290 million square feet of warehouse space across the country, according to CoStar Group. Walmart and Sam's Club have roughly 143 million square feet, about 30% of which is used for e-commerce fulfillment. Third-party logistics (3PL) companies — most with names few Americans would recognize — have 800 million more square feet. As companies buy up warehouse real estate, they are also hiring hundreds of thousands of workers. The number of warehouse and storage jobs has doubled since 2010, reaching 1.27 million this October, up from 629,000. Photo: Yuqing Liu/Business Insider

"Significant acceleration of warehouse worker hiring started in around late 2014, early 2015," Arias said. "You can really just time that exactly when Amazon came out and started to say that they were going to do same-day delivery."

In late March, the number of warehouse workers plummeted, as employers laid off workers in the face of an uncertain future. But, companies were soon hiring once again, as it became clear that more manpower was needed during the e-commerce boom. Amazon announced in mid-March that it planned to hire 100,000 warehouse workers, boosting pay to win over potential employees. A few days later, Walmart said it would hire 150,000 people to work in stores, distribution centers, and fulfillment centers.
By October, warehouse and storage work had become a rare type of job that had not only recovered but beat pre-pandemic employment numbers. Photo: Yuqing Liu/Business Insider

Working in warehouses came with risks even before the pandemic. Warehouse workers are injured at a significantly higher rate than employees in other industries, whether struck by stacked boxes, caught in forklift turnovers, or worn down by repetitive motion injuries. Twenty-eight warehouse workers died on the job in 2018, the most recent year for which the Bureau of Labor Statistics has recorded data on industry deaths. Now, some workers say the pandemic has made a sometimes risky situation even more toxic.

Warehouse workers face COVID risks and no clear answers

Workers worry about social distancing in 2020. Photo: Suzanne Kreiter/The Boston Globe via Getty Images

Monica Moody said she realized her job at an Amazon fulfillment center in Charlotte, North Carolina was unsafe soon after she started in October 2019. Amazon's "time off task" policies that track workers' speed and can discourage them from taking breaks were a red flag, according to Moody. "At the beginning of the pandemic, it became obvious pretty quickly that Amazon still put productivity ahead of worker safety," Moody said on Monday, on a call with reporters coordinated by retail worker rights group United for Respect. "They didn't make any adjustments in our rates to account for the extra time that we should be taking to protect ourselves from this virus," added Moody, who no longer works at the fulfillment center. "We didn't even have masks in our facility for quite a while." Amazon was fined $1,870 in California in October after workers filed a pair of complaints saying the company did not notify workers of a COVID case, failed to enforce physical distancing, and prevented them from taking handwashing breaks. The company said in early October that more than 19,000 of its 1,372,000 front-line workers had contracted COVID, with at least 10 dying.
The Walmart distribution center worker said he fears catching COVID because of people not following mask rules, as well as a lack of social distancing in shoulder-to-shoulder meetings and crowded lunchtimes. Complicating matters is the lack of communication from management about coworkers who may have been exposed or sick, something Moody said she also encountered at Amazon. "It's like Russian roulette, when you're trying to protect yourself and put food on the table," Moody said of uncertainty around COVID exposure. Earlier this year, the Walmart worker said, he called the health department to ask if the company was required to alert workers or close the warehouse if someone caught COVID. He was told that these decisions were up to Walmart. "No one seems to really care. No one knows who to go to, who to speak with, if anything will even be done," the worker said. "That's the most scary part — knowing that all of this is happening and nothing seems able to be done."

'We're seeing a lot of COVID cases'

Tommy Carden, an organizer with Warehouse Workers for Justice who works primarily with people in the crucial distribution hub of Will County, Illinois, said that issues such as lack of sanitizing, hazard pay, and communication around positive cases have been central problems during the pandemic. "We're seeing a lot of COVID cases," Carden said. "It's a little tricky because a lot of the employers aren't communicating to the workers when someone does have COVID. So, numbers aren't always very clear. … There's a lot of hush-hush around that." Employers have rolled out new precautions during the pandemic, such as distributing face masks to workers, announcing social distancing policies, and conducting temperature checks. Amazon did not respond to Business Insider's request for comment. The company has previously said that its count of 19,816 workers catching COVID is 42% lower than the number it expected.
According to Amazon, it has "introduced or changed over 150 processes to ensure the health and safety of our teams," including rolling out COVID testing and cleaning every 90 minutes. Walmart declined to comment on any specific allegations, but pointed Business Insider to an April video showing safety precautions the company is taking in distribution centers. The video shows executive vice president of supply chain Greg Smith walking through a distribution center, highlighting features such as social distancing stickers, increased sanitizing supplies, and new online worker resources. "It's a great time to join the company," Smith says. "Our volumes are heavy, and hiring helps us reinforce and support our associates, and support our business — making sure we get products to our customers, to be able to get products on the shelf." But, some workers say that enforcement of these new rules is uneven and has failed to stop the spread of the virus. Photo: Yuqing Liu/Business Insider

Data backs their concerns. In Illinois, the category of factories and manufacturers (which includes warehouses) is the largest site of COVID clusters, with the exception of long-term care facilities. Making up 12.8% of outbreaks, these factories, warehouses, and other manufacturing sites account for more cases than restaurants, bars, or colleges.

Holiday shopping season is bringing new problems

Booming e-commerce sales, skyrocketing warehouse employment, and COVID anxiety are the backdrop for an unprecedented holiday shopping season. Adobe Analytics predicts US online holiday sales will reach $189 billion in 2020, up 33% from the prior year. "Traditionally, retailers hired their holiday help for their store," Logistics TI founder Cathy Roberson told Business Insider. "However, within the past couple of years, there's been a shift." This shift to warehouse hiring has been exacerbated by the pandemic, as companies investing in crucial last-mile delivery go on hiring sprees.
Transportation and warehousing industries added 108,200 jobs in October, a whopping 198% more than in October 2019, according to recruiting firm Challenger, Gray & Christmas. Data analyzing 14,000 injuries in Amazon's distribution centers from 2016 to 2019 revealed that injuries typically spike around the holidays and Prime Day, Reveal, an outlet of the Center for Investigative Reporting, reports. Some warehouse employees worry that more work and more hires could also increase the chances of catching COVID during peak season, just as the virus surges across the US. "When there are already concerns about social distancing, already concerns about sanitation — more workers in a facility does mean more risk," Carden said. Carden said it is often easier for unsafe practices to go overlooked in warehouses because, unlike in a store, most customers do not see what is actually happening. In a year when many retailers are closing on Thanksgiving, with Walmart encouraging workers to spend time with loved ones, it is a particularly stark division.

"For a lot of people, warehouse workers are sort of like an invisible part of the supply chain," Carden said. "They're not the one behind the cash register. They're not the ones you see in the stores. But they're the ones that make it all possible."

Companies need warehouse workers to guarantee the delivery of holiday packages amid the e-commerce boom. But, hiring more could make warehouse workers' jobs riskier — and, if that risk escalates into an outbreak, send the entire system crashing down. "It's going to keep popping up and it will impact the supply chain," Roberson said of COVID outbreaks. "The biggest impact you'll see is going to be in those warehouses." If companies are not careful, Roberson said, "there won't be anybody to pick and pack those items." From warehouse workers to retail giants to industry experts, the holiday shopping season remains full of unknowns.
So far, massive companies’ e-commerce bets have paid off, with retailers and their CEOs making billions of dollars in recent months. Workers, however, are facing a holiday season in which hazard pay has dried up, but COVID cases are once again on the rise. “Walmart being Walmart, you’d hope that they will take a lead, take a stance, come to bat for the very people who are making them these billions,” the Walmart distribution center worker said. If you’re a worker with a story to share, email [email protected]. For more great stories, visit Business Insider’s homepage.
https://medium.com/business-insider/inside-the-invisible-retail-underclass-working-through-the-holidays-58cec04c6192
['Business Insider']
2020-11-30 21:51:04.574000+00:00
['Shopping', 'Ecommerce', 'Covid 19', 'Amazon', 'Walmart']
A Pop Cultural History of the Amazon
The Amazon is on fire. As we mourn the loss of these unique habitats and attempt to understand the complicated legal and economic ramifications of this devastation, it's important to acknowledge the importance of this ecosystem, both ecologically and culturally. The Amazon has inspired a wave of pop culture moments that illuminate why it matters.

A Big Snake and A Little Fish

Of the Amazon's many fascinating creatures, few seem to have gripped the popular imagination as much as two of its allegedly ferocious denizens: the anaconda and the piranha. Both have inspired B movies that took liberties with the zoological facts. Anaconda (1997) depicts the misadventures of a film crew who encounter an absurdly large anaconda with a taste for people. Although incidents in which boas have eaten humans have occurred, real-life anacondas are not the extremely aggressive monster shown in the movie. To be fair, the creature is supposed to be a Giant Anaconda, a cryptid whose existence has yet to be proven. Even so, in local mythology, the Giant Anaconda is a guardian of the Amazon, and so this cinematic representation could be interpreted as either insensitive appropriation or a high-level satire depicting the Amazon defending itself against obnoxious filmmakers. Real-life anacondas max out at about 17 feet, not 30, and do not regurgitate food in order to eat again (although all boas may regurgitate food when frightened). Typically, they do not consume large prey, instead targeting fish and birds. Anacondas are not endangered, but may soon be threatened by the loss of their habitat. Piranha (1978) is the ultimate B movie about killer fish. In the film and its exceptionally gory reboot, schools of piranhas devour people in seconds. This extreme feeding behavior is based on Theodore Roosevelt's observation of a school of piranhas skeletonizing a cow in mere minutes.
What he didn’t know was that (a) piranha group feeding typically occurs when the fish are stressed and (b) local fishermen had starved the school for days before offering the cow to them. In real life, piranha are often fished and prepared in Brazilian restaurants, making them an important part of the local economy. The fish themselves are largely scavengers and help clean up the river bed. The Most Frustrating Video Game In 1993, educational software company MECC decided to take all the frustrations of the popular Oregon Trail game and make them prettier, fancier, and all the more likely to end with your death from some horrible illness. In fairness, The Amazon Trail was quite addictive and had much more to offer in terms of gameplay and graphics than Oregon Trail. Players were much more likely to learn about different native species, indigenous culture, and historical figures than in Oregon Trail. A Certain Book Website When Amazon.com launched in 1994, it was a no-nonsense online marketplace for books and videos. Its appeal was that users could purchase items that weren’t available in their local stores — imagine that. It quickly became a book lover’s paradise, and as the site expanded to include everything under the sun, became the definitive online store. Now, Amazon has morphed into a global force and a purveyor of any product you can imagine. It solidified the expectations that we have for ecommerce: extreme ease of purchase and broad availability of products. CEO Jeff Bezos initially named the site Cadabra, but wisely chose the name of the world’s largest and most diverse river system instead. Amazon has lived up to its name and now offers 12 million products for sale. Despite its questionable business practices, Amazon likely isn’t going anywhere any time soon. Unfortunately, the same can’t be said for its namesake. Learn what you can do to help the Amazon so that it can live beyond the pop culture moments it inspires.
https://rachelwayne.medium.com/a-pop-cultural-history-of-the-amazon-ec19fce0613b
['Rachel Wayne']
2019-09-02 13:58:01.895000+00:00
['History', 'Culture', 'Amazon', 'Ecology', 'Pop Culture']
Thou Shalt Love Without False Dichotomies: A Letter to President Oaks
Two Commandments or One? After quoting this scripture, you said that “our zeal to keep this second commandment must not cause us to forget the first, to love God with all our heart, soul, and mind. We show that love by ‘keep[ing] [His] commandments.’” This statement establishes the first false dichotomy in your talk. Jesus made it clear that these commandments are effectively one. He taught this parable: “When the Son of Man comes in his glory, and all the angels with him, then he will sit on the throne of his glory. All the nations will be gathered before him, and he will separate people one from another as a shepherd separates the sheep from the goats, and he will put the sheep at his right hand and the goats at the left. Then the king will say to those at his right hand, ‘Come, you that are blessed by my Father, inherit the kingdom prepared for you from the foundation of the world; for I was hungry and you gave me food, I was thirsty and you gave me something to drink, I was a stranger and you welcomed me, I was naked and you gave me clothing, I was sick and you took care of me, I was in prison and you visited me.’ Then the righteous will answer him, ‘Lord, when was it that we saw you hungry and gave you food, or thirsty and gave you something to drink? And when was it that we saw you a stranger and welcomed you, or naked and gave you clothing? And when was it that we saw you sick or in prison and visited you?’ And the king will answer them, ‘Truly I tell you, just as you did it to one of the least of these who are members of my family, you did it to me’” (Matthew 25:31–40). The Book of Mormon also teaches this principle. 
King Benjamin taught that “when ye are in the service of your fellow being ye are only in the service of your God.” In other words, we love God by loving our neighbor, and as you mentioned, “everyone is our neighbor.” There are no exceptions to this commandment — “Truly I tell you, just as you did not do it to one of the least of these, you did not do it to me” (Matthew 25:45). If we ever fail to love one of our fellow beings, we are failing to love God.
https://medium.com/faithfully-doubting/thou-shalt-love-without-false-dichotomies-a-letter-to-president-oaks-58195ddddcde
[]
2019-10-10 19:17:33.181000+00:00
['LGBTQ', 'Queer', 'Lds', 'Mormon', 'Christianity']
You are Focusing On The Wrong Thing
When I started writing, I was constantly looking at the stats. I wanted to know how each post was doing — how many reads? How many views? How much had it earned? It became a stressful and constantly disappointing obsession. This anxiety bled all over my mental state. While I was regularly checking the stats, I realised I was consistently having writer's block. Words and ideas have always come naturally to me. I've always lived in my head, which means I've always created alternate realities and practised the act of persuasion with refined words and thoughts in my head. Add to that the fact that I've journaled nearly all my life, and putting thoughts and emotions into words is something I live and breathe each day. How, then, was I getting this friction in the flow of my mind? That was when it occurred to me. You see, when I wrote, I relied on Medium to meet certain expectations: that my posts would go viral and get curated. And because I was focused on the statistical results, my work didn't seem to connect, because it was coming from my head, not my heart. I was not putting enough emotional investment into my work. My work wasn't written with the readers in mind; it was written with the Medium editors in mind. I was missing the goal — the most important reason I was doing it in the first place. This wore me out for quite a while until I had to retrace my steps. I got exhausted trying to write the next big Medium article. I mean, such perfection is hard to keep up with. Let's face the truth: you are not writing for Medium. You are writing for people through Medium. Your goal should be to touch the heart of a poor boy from Wisconsin, trying to understand how to deal with peer pressure. Or a young lady from Malawi who wants to understand her role in society. Or the woman who doesn't know how she can get out of an abusive relationship. We write because there is chaos in the world, and we want to fix it. “When you treat a disease, you either win or lose. 
But when you treat a person, you win no matter the outcome.” —Robin Williams We should put a face at the end of our writing, not a number. The primary purpose of your talent is to make a difference. The monetary aspect comes second, and trust me, it will come. When I started focusing on the impact my writing would make rather than the stats, everything changed for me. It took me to a deeper level within myself. If I’m going to write for an 18-year-old who is going through depression and anxiety, it requires me to dig deep within myself to connect with her on a deep emotional level. This is the kind of leverage you don't get when your sole focus is to make money.
https://medium.com/illumination/you-are-focusing-on-the-wrong-thing-59bc7e72b1ba
['George Blue Kelly']
2020-12-26 23:20:11.689000+00:00
['Writing Tips', 'Writing']
The Importance of the Beginner Mind
The Importance of the Beginner Mind Be curious and never stop learning Photo by Justin Peterson on Unsplash Most people would probably think of writers and other industry experts as insufferable know-it-alls. You do see a lot of marketing gurus and social media experts flaunting their business success, giving tips and guides, and teaching other people some tricks of the trade. What most do not know is that a lot of these so-called experts have learned what they know along the way. And the good ones, at least, are perpetually in a state of having the beginner mind. I came across the term while reading past articles, particularly this one from Escape from Cubicle Nation. Beginner mind is a state of being where you approach learning with no judgment, censoring, editing or preconceived expectations. To me, basically, having that beginner mindset is being curious, and being a learner. Learning is important, after all. Even if you have amassed a huge amount of knowledge in your years of academic study and even more years of actual experience in the field, there are still inevitably some things you will learn along the way. You can learn new things from the simplest and most mundane of things to the most complicated of concepts. You can learn from theories, and you will learn more from applying these theories in real life (whether they succeed or fail in real-life events is part of the learning process). The beginner mind is not only important in business and entrepreneurship. In anything that you do, adopting the opposite mindset–the expert mind, in which you think you know it all–is sure to invite trouble. If you think you know it all, you’re no longer welcoming new ideas and change. You become closed-minded. You learn nothing. You stagnate. In terms of the arts–writing and literature included–having the beginner mind also means welcoming new ideas and welcoming critique. 
If you think of yourself as the best writer in the world, with the best writing style and flair, with zero possibility of committing grammatical and logical mistakes, then you’re in for trouble. If you’re sitting atop your pedestal, wearing your plumes in your hat for all to see, telling everyone you’re at the top of your game and you would rather not be elsewhere, then that’s probably not the best place you can be. If you think you’ve achieved your best and you’re at the peak, then there’s no way to go but down. Always look for challenges. Always look for opportunities to improve and do better. Then, even if you come down from your peak, you can always just bounce back up. Be open to new ideas. Be open to new things. Have that infinite curiosity of a child, and try to learn as much as you can from your environment. Learning is fun and exciting. I still relish the thought of learning new things every time, even if it’s just the simplest of things that I’ve overlooked through the years. I’ve often found myself in the trap of being insecure because I lack expertise in a lot of things. But then again, perhaps keeping my mind wide open is better than knowing it all.
https://medium.com/the-daily-500/the-importance-of-the-beginner-mind-4f4539831b22
['J. Angelo Racoma']
2020-08-21 22:50:30.328000+00:00
['Education', 'Learning', 'Entrepreneurship']
6 Things You MUST Do to Make Writing Your Career
6 Things You MUST Do to Make Writing Your Career How to Make Your Writing More Than a Hobby Photo by rawpixel.com from Pexels When someone decides to become a doctor, they undertake a grueling course of education and practice. When someone decides to be a hairdresser there are classes and certifications demanded. When you decide to become a writer, you get a lot of thinly veiled skepticism and a notebook. Dedicating yourself to making a career out of your ability to string words together can feel daunting. Writing as a hobby is easy if you’re creative. It’s just words to the page. That’s it. Writing as a career requires a lot more than good craft. Here are six things you must master if you are taking your writing from hobby to career. 1. Get Dressed Writers like to joke that we get to do our work in our pajamas. Neither our characters nor our readers care what we look like as long as we produce the content. To some extent this is true. You can write as well in Scooby Doo lounge pants as you can in your Sunday best. Every day, every one of us sets the stage for our sentiment, our confidence, and our success by getting dressed. When you feel great, when you feel your best, it opens up a world of possibility. Feeling confident and self-assured are important inputs into good days, successful days, and happy days. — Katrina Lake Of course, a person like Katrina Lake would say something like that. She is the CEO of StitchFix, the clothing subscription service. It is her job to make people want to get dressed and it is in her best interest that they do. Nevertheless, she’s not wrong. Photo unattributed from Pexels You’ve heard the idea that you dress for the job you want. Well, if writers can write in pajamas and you want to be a writer, fuzzy pants are fair game. Except, professional writers don’t only dress in sweats and quirky tees. 
They are out meeting with agents and editors, going to meet and greet events, pushing their books, and making connections, all of them wearing real, out-of-the-house appropriate clothing. This doesn’t mean you need to get out the pants suits and neckties every day. Oftentimes I change out of PJs into clothes that look a lot like PJs. This is especially true when there are no errands on my to-do list. It matters less what you don, and more that you acknowledge the start of the day and begin it with the intentional act of putting on a fresh set of clothes. 2. Pick a Time You’ve heard it over and over again: pick a time to write and make yourself write in that time. It’s not just lip service. It is a real requirement to make writing your career. Many writers and artists hide behind the excuse “I can only create when inspiration strikes.” Inspiration, however, is not some fickle muse, flitting about sprinkling fairy dust made of story bits and unicorn horn. Inspiration is knowing how to look at your world and craft with what is already in it. Everybody walks past a thousand story ideas every day. The good writers are the ones who see five or six of them. Most people don’t see any. — Orson Scott Card I’m willing to bet you’ve never walked into a doctor’s office when sick only to be told the doctor is feeling uninspired and will not be treating patients. Lawyers do not skip court on days they feel their muse has abandoned them. They have jobs and they do them when it is time to work. (And sometimes, even when it is NOT time to work but they are needed.) My time to write begins at 6:30 am. I am not a morning person by a long shot. My creative juices are most bubbly late at night. However, I have small children and they get up, insisting on food sometime around 8 in the morning. Every morning. This recovering night owl has had to adapt. I get up at 6 each morning, put on my writer clothes (see above), make coffee, and I’m at the keyboard for my writing time. 
I log at least an hour and a half of writerly miles each morning. The rest of my writing time is broken up between naps and playtime, meal preparation and laundry. It is there and intentional but those first precious moments of the day are critical to the rest of the day’s writing happening. Writers wanting to take their craft from hobby to career must make a habit of writing. Same time, every day, sit down and do something about your writing. Jeff Goins has a wonderful piece that outlines one of the best suggestions I’ve seen as to how to best use that time you’ve picked. I’ll link it at the bottom of this article. The unavoidable truth is you must learn to make your time available for writing. 3. Use the Tools I hate writing on my phone. It is inefficient, autocorrect makes editing a bigger pain than it already is, and I am far more prone to inconsistencies when working on a tiny screen. Yet, here I am, pecking this out on my mobile device. I am unable to be at my computer at this moment but this is writing time. (Dogs do not care that it is writing time when they need to pee.) So it is a necessary evil to write from my phone in this instance. I’m using a tool, a phenomenal if annoying one, to accomplish my writing today. Photo by Fancycrave.com from Pexels None of us are born knowing how to use blogging platforms. We learn and promptly forget the bulk of our grammar training. Luckily for us, there are endless tools at our disposal to remedy this. Download, install, and religiously use the tools to make your job easier. Grammarly has a keyboard for mobile phones. Use it. YouTube has a host of how-to guides on any blogging platform you choose, not to mention the resources those platforms have created for their users. Even here on Medium (a platform you need to be using if you are not already), articles abound with invaluable tips, tricks, and hacks on how to make the best of the site. Again, I’ll link my favorites in the reading list below. 
Stop trying to reinvent the wheel and use the tools. You do not need to have perfect grammar or the prose-power of the gods to write a good piece. You need the tools that can keep your less-than-perfect tendencies in check. 4. Set Goals “Set a goal” sounds like the advice a high school career counselor hands out like candy. It might sound hokey but it is excellent advice. Set a goal and then set another. Once you are done with those, set one more. No one can tell you specifically what sort of goals to set. Some may work well reaching for a word count goal. Others respond well to a self-imposed deadline on the calendar. Perhaps you do well setting a timer and working until it dings. Whatever goal will hold your focus and keep you motivated is the first one to go with. Ultimately you need more than one goal. You need a big goal — the big dreams and pot of gold goals. You also need smaller goals, tiered underneath the big guys. I create a vision board every year with my various “Big Dream” goals and supporting accomplishments I need to happen to get me there. It is a visual reminder of what I am doing and why. Some goals change and fade (that cooking blog of mine is definitely not happening). Some are increased or decreased as the reality of July hits January’s optimistic view. Ultimately they serve as a reminder when I’m feeling unfocused. Gwenna’s actual vision board and writing space, dust included. Maybe a vision board isn’t your thing. Maybe simply setting a daily word count goal is enough to drive you forward. Whatever it is, you need a goal, something more immediately attainable than “be the next JK Rowling.” That can be a goal, sure. Just be sure to add some step-by-step goals underneath it. 5. Get Over Your Fear of Rejection Writing is one part creativity, one part dedication, and eight parts getting dumped on. It is easy to take those rejections personally. Writers come pre-equipped with a heavy dose of self-doubt. 
We are constantly questioning the worth of our work and the weight of our talent. Each time we suffer the slings and arrows of rejection it can feel like the world is confirming our own suspicions: we are not cut out to be writers. You need to let that go. Mean comments are going to be a part of your life as a writer. They will try to cut to the bone and some of them will. Rejection letters will pile up. Perhaps the worst wound of all, your work will be utterly ignored. It may feel like all those blog posts and Medium articles are just you shouting into the void. You’re not. At least, you’re not if you keep at it. The trick to getting accepted is not to avoid the rejections. It is to lean into those rejections and glean any lessons available from them. Hobbyist writers seek the instant gratification of having their work adored. They share it with their spouses, family, and network of built-in supporters, reveling in the compliments and appreciation. My mom has read 98% of my published body of work. She thinks I’m a great writer. And I am. But not because my mom thinks so. She is understandably biased. I am a great writer because I have a body of published work and carry an audience (albeit a small one here on Medium). People pay me for my unique method of pulling together thoughts and ideas on the page. People also have no qualms telling me I am most certainly not a great writer. The trick is to absorb it, sift through the vitriol for anything you can learn, internalize that, and ignore the rest. Then, continue writing. This is my profession and rejection is an unavoidable part of it. Adapt to the valid critiques, adjust to the demands of your audience, and keep writing. Submit again. Be rejected again. Rinse and repeat. 6. Remember That Some Days It Is Still a Job When I first admitted I was a full-time, professional writer I had these foggy, glittering visions of sitting down each morning with my cup of tea and a fresh page before me. 
I envisioned words pouring out, directly from the fount of my artist’s soul. I knew it would be work. I knew even with my new job title, agents would not come pouring out of the woodwork begging to represent me and pitch my unfinished manuscripts to HarperCollins and Penguin. But I did have this glorified sense that now that I was doing what I love as a career, every day was going to be a good day. Choose a job you love and you’ll never have to work a day in your life. — Unknown Spoiler Alert: That is not true. Some days you sit down to write and nothing good comes out. You use up your writing time and have almost nothing to show for it. It feels like a grind. Some days your precious “Do what you love” career will feel like a job. This isn’t every day for me. Luckily it isn’t even most of them. But occasionally I get up in the morning, get dressed (see above), sit down to write, and realize today is not going to be a fantastic day in the life of a writer. It won’t be one of the days included in my memoir once I achieve international fame. It’s going to be a day of prying each and every syllable out of my brain, a self-lobotomy for an ounce of content. Those days are just as necessary as the good days when the words just won’t stop coming. The days you write when nothing wants to piece together make you a better writer. They force you to use those tools, coax inspiration out of nothing, and check that you are still working toward your goals. Reading List
https://medium.com/swlh/6-things-you-must-do-to-make-writing-your-career-619a4d15e084
['Gwenna Laithland']
2019-07-06 12:25:16.008000+00:00
['Authors', 'Writing', 'Work', 'Careers', 'Writer']
Splitting your data to fit any machine learning model
Introduction After you have performed data cleaning, created data visualizations, and learned the details of your data, it is time to fit your first machine learning model to it. Today I want to share with you a few very simple lines of code that will divide any data set into variables that you can pass to any machine learning model and start training it. It is trivially simple, but an understanding of how the split function for training and test data sets works is crucial for any Data Scientist. Let’s dive into it.
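The excerpt stops before showing the code, so here is a minimal sketch of the kind of split the article describes, assuming scikit-learn's train_test_split (the standard tool for this) and a toy dataset; the names X and y are just conventions, not anything from the article:

```python
# A sketch of the standard train/test split, assuming scikit-learn.
# X is a toy feature matrix (10 samples, 1 feature each); y holds binary labels.
from sklearn.model_selection import train_test_split

X = [[i] for i in range(10)]
y = [0, 1] * 5

# Hold out 20% of the rows for testing. random_state makes the split
# reproducible; stratify=y keeps the class balance the same in both halves.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

print(len(X_train), len(X_test))  # 8 2
```

The four resulting variables are exactly what any scikit-learn estimator expects: fit on X_train/y_train, then evaluate on the held-out X_test/y_test.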
https://towardsdatascience.com/splitting-your-data-to-fit-any-machine-learning-model-5774473cbed2
['Magdalena Konkiewicz']
2019-11-13 18:08:01.244000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Data Analysis', 'Data Visualization']
The Importance of Zoning Out
The Importance of Zoning Out What if thoughtfulness is the new mindfulness? Photo by Nathan Dumlao on Unsplash When I ask friends or family to repeat what they just said to me, the reply is almost always: “Did you zone out again?” I usually like to answer that being zoned out is my natural state; I just zone back in once in a while. Although I mean it as a joke, there is some truth in my line — one that hints at the possibility that inviting diverse thoughts into our minds instead of rejecting them in the name of presence can sometimes be good for us. The first time I caught myself spacing out, I was in fourth grade. It was during the award ceremony of a local writing contest I had won. That meant I had to stand on a stage in front of about 60 schoolmates while a lady read my story and a guy behind me made a live drawing out of what she was reading. Some children would have felt proud to be on that stage — but I was the farthest thing from those children. If I could have given an arm not to be there, I would have. So I started mind-wandering about being someplace else. Physically I was standing on that stage but mentally I was occupied with other tasks. So where was I, really? Mind-Wandering: Laziness or a Creative Habit? In a culture obsessed with efficiency, some may interpret my tendency to zone out as infantile thinking, a lazy habit, or a sign of procrastination. But is it possible that spacing out — when done right — is a form of productive creativity? In The Secret Life of Walter Mitty — adapted from a short story by James Thurber — Ben Stiller plays a clumsy 40-something who spends day after monotonous day developing photos for Life magazine. To escape the tedium he often zones out in a world of exciting daydreams in which he is the undeniable hero. Only when he faces unemployment does he decide to stand up for himself and become an agent of his life, instead of a witness. 
The movie quietly celebrates the overwhelming beauty of the present moment, the pronounced joy and healing nature of being fully alert. But what if we have become too alert? Is there such a thing as too much focus? According to psychologist Rebecca L. McMillan, who co-authored a paper titled “Ode To Positive Constructive Daydreaming,” mind-wandering can aid in the process of “creative incubation.” Many of us know from experience that our best ideas come seemingly out of the blue when our minds are elsewhere. Being too alert or focused on a task may actually kill the opportunity for a creative revelation. The practice of zoning out may seem mindless but there’s evidence that it could involve a highly engaged brain state. Spacing out once in a while can lead to sudden connections and insights because it’s related to our ability to recall information in the face of distractions. Nevertheless, we have elected focus, presence, and attention to the mundane as the gatekeepers of happiness and success. Thanks to smartphones, laptops, and apps to keep us updated on the go, there’s no second left to wonder, no moment left unexamined, no opportunity left for spacing out. Yet, this system seems to fail us. The Forgotten Art of Thoughtfulness Anxiety is now more common than ever, especially among young adults. In the United States alone, it affects over 30% of the population. In 2018, approximately 16.2 million adults suffered from at least one depressive episode and around 15% of men and 22% of women reported some mental illness in the past year. As a millennial, I had my fair share of anxiety disorders too. I understand why young people in particular may feel anxious. I know that most millennials are afraid of the future — their own, and that of the world. Mainly, we are afraid not to live up to our belief of what life should and shouldn’t be. I know that some — myself included — are afraid of their own thoughts. So often, they seem too many and too distracting. 
We often treat our thoughts as something to dismiss, to hide from others and ourselves, like involuntary, bodily reflexes one has to almost feel sorry for, when in fact they are the very essence of life. We tell ourselves that too much thinking impairs action. Too much thinking cannot be good for us, so we sign up for yoga classes, cooking lessons, online courses. We take on more work than we can handle just to keep ourselves busy. Anything but thinking. Ideally, we start meditating, which is the absence of thoughts — or at least of any attachment to thoughts. So we download a meditation app and spend at least 15 minutes scrolling through the plethora of available talks and guided meditations and nature sounds, only to find ourselves mindful of the fact that we are, in fact, more anxious than before. For some reason, we forgot about the art of children, the art of Walter Mitty: the art of zoning out. The value of staring into space and getting lost in thoughts, until they sprout into ideas. We know a good deal about mindfulness, which is the state of being aware of sensation, but we don’t know much about thoughtfulness, which is the state of being full of thoughts. Thinking is a process. Just like life, it doesn’t happen smoothly. But that doesn’t mean we should reject it. Being absorbed in thoughts is how I started writing this piece. And staying zoned out is how I arrived at the end of it. Zoning Out as a Sense-Making Mechanism These moments where I am zoned out, jumping from one train of thought to another, are, ironically, the moments when I am the most zoned-in: when I am the most connected and alive. That’s when the world feels more manageable. 
Not because it becomes safer or more distant — but because there’s more nothingness, more in-betweens, more opportunities, more naturally occurring blank spaces to glue it together. Getting lost in thought is how I get ideas for my writing. It’s how I get inspiration for my next projects. And it’s how I make sense of life. It may not sound as dreamy as a 20-minute guided meditation but, often, it’s a lot more rewarding than that.
https://medium.com/big-self-society/the-importance-of-zoning-out-1de93e0a29ab
['Marianna Saver']
2020-11-17 15:04:44.307000+00:00
['Self Improvement', 'Life Lessons', 'Inspiration', 'Psychology', 'Culture']
Reflections on a failed startup
Nearly a year ago, we decided to shut down Ansaro. In the immediate aftermath, I wrote a long list of contributing reasons. Then I put it in a drawer for a year; it was too painful to think about. I recently came back to that list to draw out some lessons, which I hope will be useful for myself and others. Let’s set the stage… Ansaro’s mission was to improve hiring, via data science-driven SaaS. After 2 years, we had a team of half a dozen, $3M in funding, and a handful of Fortune 500 customers. Nonetheless, if product-market fit was a heartbeat, we never had a pulse. How come it took us 2 years to figure that out? How could we have avoided it? My learnings fall into 3 buckets: Team beats problem beats solution Learning in a wicked environment People stuff Team beats problem beats solution (aka: a good team is most important, then identifying a good problem, then finding a solution for the problem) Build a team that isn’t afraid to disagree Cofounders unafraid to disagree with each other have an advantage. Willingness to disagree, even about core beliefs, increases the speed/likelihood a team will identify a failing product (misguided faith that a product should work can quickly become a core belief). Faster identification of failure means faster iteration, and more chances of survival. I built a team around a specific solution I had conceived (and to which I was emotionally attached). Since the team was recruited to build that solution, there was a feeling that declaring “this solution isn’t working” meant “I don’t want to be on the team anymore.” Our team members took contrarian views in their areas of expertise, but the way I had constructed the team made it hard for them to push back against the solution I’d been promoting since Day 1. That made us slow to pivot, and slow = dead for startups. 
Rather than recruiting cofounders who bought into my idea for a solution, I should have built a team based on more fundamental elements: (i) deep mutual respect, as demonstrated by the ability of team members to change each other’s minds, and (ii) interest in the same general problem space (in our case: hiring), but without attachments to specific solutions. Go after a problem that is acute and quickly measurable The problem we set out to tackle, reducing turnover and improving new hire performance, was felt acutely by our buyers (CHROs / Heads of Talent). It was NOT felt acutely by our users (recruiters). Recruiters are measured against average time-to-hire, not new hire retention or performance. We were trying to improve outcomes to which users would pay lip service, but which didn’t impact their paychecks or promotions. Moreover, we knew it would take at least 6 months for our customers to observe improvements in retention or new hire performance. While that timeline worked on paper — we had runway for at least 18 months — it left us waiting far too long for market feedback. When the ultimate feedback was bad, it was too late to do another pivot. Control your solution’s own destiny Our first solution was joining records from our customers’ Applicant Tracking System with records from their Performance Management System, to identify predictors of successful hires. This meant integrating with two enterprise systems holding sensitive data. Most of our customers’ systems were more than a decade old. We could only dream of modern APIs.[Note 1] Instead we pleaded for nightly file-based data dumps with “partnership” representatives from these ATS/PMS software vendors, representatives who had zero interest in partnering with us. When we finally realized we couldn’t solve these technical dependencies quickly, we moved on to another solution: software for conducting better job interviews by using natural language processing to parse interview audio recordings. 
We built this system to avoid dependencies on any other systems. What we didn’t appreciate was that we’d traded away technical dependencies for process change dependencies. Conducting better interviews means convincing interviewers to interview differently! That’s a change management project, not a software pilot. We needed an MVP that had both minimal technical and process change dependencies. Only then would it have been truly easy to trial. Learning in a Wicked Environment In a wicked learning environment, feedback is poor, misleading, or missing.[2] The early days of a startup are a classic wicked learning environment; relying on intuition is dangerous, but data is sparse. What to do? Limit user research Initially, I believed extensive user research was a badge of honor. In our first few months, I did 80+ prospective user interviews. I collected hundreds of pages of feedback on our prospective solution. And I found ~20 prospective users who told me they were interested in purchasing our soon-to-be product; “Call us back as soon as it’s live!” they told me. Statisticians call this “overfitting to the data”. If you have the world’s worst product idea and talk to 100 potential customers, 10 of them will be nice enough to tell you they’re interested. With so much data, I semi-subconsciously parsed it selectively to convince myself and others of the story we wanted to hear. We should have done no more than 5 interviews of potential users.[3] I would have heard “no” four times and “meh” once. And if that was all the data we had, we would have been forced to concede there wasn’t demand for our idea → come up with a new idea → do 5 more interviews … Don’t confuse the user and the buyer We spent a lot of time talking with our buyers: CHROs and Heads of Talent Acquisition. It felt good to have “big picture” conversations with senior executives. We knew they controlled the purse strings. And these developing relationships excited our team members and investors. 
So we built a product for these buyers, not the recruiters who had to use it on a daily basis. The product actually slowed our users down, while all the upside (the metrics we promised to improve) mattered only to their bosses. They were extremely unhappy. Ultimately, the CHROs, with whom we’d had so many “feel-good” conversations, had to listen to their own people, who were begging them to stop using Ansaro.[4] Give yourself (and your metrics) the benefit of the doubt An MVP is meant to test the fundamental viability of a product, not whether a very rough v0.0 can achieve the metrics needed for a sustainable business. If the MVP works, metrics can be improved through product refinement. Because we didn’t know how much improvement could come from future product refinement, we worried about declaring a “false negative” on our MVP and thus avoided a precise metric goal. A way to avoid this trap is to “give yourself the benefit of the doubt”: assume that product refinements can always triple the v0 KPI. Let’s put numbers on this. For Ansaro’s business model to be sustainable, we believed we needed to convert 10% of qualified leads (in-person sales pitches). After 100 pitches, we’d converted 2 to paying customers. So even with the benefit of the doubt (2 * 3 < 10), our business, post-product refinements, wasn’t going to be sustainable. People Stuff Accountability for Outputs, Not Inputs When our CTO started, we failed to set clear near-term milestones. We gave him a big, nebulous project and a month to work on it. But without any interim milestones during that first month, we started focusing on facetime, even though that ran totally contrary to our stated company values. This was enormously demotivating to our CTO, who turned out to be incredibly hard-working — he just preferred working from home in the evenings (while my co-founder and I tended to stay later in the office). 
An idea that seemed crazy to me then, but which resonates now, is limiting the team to a 40-hour work week during the MVP phase.[5] I estimate that of 100 MVPs, 90 will fail regardless of whether the team works 40- or 80-hour weeks; 8 will achieve Product-Market Fit (“PMF”) regardless of 40- or 80-hour weeks, and 2 will succeed at 80 hours but fail at 40 hours. If you work 40 hours and fail, you have to accept a 20% chance it’s a false negative (i.e. “if we’d worked more hours, we would have succeeded”). I think that’s a good trade-off because (a) it forces you to give up the fantasy you’ll achieve PMF if you just work a bit harder, and (b) you’ll have happier, more motivated teammates. Choose a mature tech stack Why is this under people? Because choosing a stack should be based more on human considerations than pure technical considerations. We built mainly in Go, because it was lightning fast, massively concurrent, and had many other awesome technical attributes. But for every developer who is good at Go, there are 50 who are as good at Java or Python. When you look up a question on Stack Overflow, you’ll get 3 answers for Go and 300 for Java or Python. We never had the scale to realize the technical benefits of our stack. But the extra time required to hire people who knew our tech stack was significant.[6] So was the reduced velocity from developing with newer tools that lacked the community support of older tools. Raise less money, later We raised too much money, too early, which reduced our openness to changing paths. We didn’t set out to raise $3M; our target was $500K from only family and friends. But conversations with family and friends led to introductions to VCs. As investors flattered us, our confidence soared, and I started envisioning the awesome TechCrunch article about our seed round. And then, one by one, those VCs told us they weren’t going to invest. 
Without ever explicitly discussing it, we’d moved our goal from “raise $500K from family and friends” to “raise many millions from blue-chip VCs”. We felt that new goal slipping away, and with that came a feeling of desperation. So when two VC firms relatively unknown to us offered us a term sheet, we quickly said yes. Big mistake. We never really understood these investors, and they never really understood us. It was like getting married right after your first date. We should have stuck with our original plan. Raising a smaller amount of money would have forced a shorter timeline for testing our MVP (more iteration → higher likelihood of survival.) And raising from friends and family — who were investing in us as individuals, rather than in a specific business model or product — would have also enabled us to pivot more quickly.
https://medium.com/semi-random-thoughts/reflections-on-a-failed-startup-6029586bf681
['Sam Stone']
2019-08-17 04:08:53.373000+00:00
['Failure', 'Data Science', 'Startup']
Data Visualization in the Age of Communism
As the 20th century began, Russia was divided into three groups: the czars, who held a monopoly on political power, the intelligentsia, a group of highly educated citizens who were completely shut out of the government, and the peasants. This last group was almost completely rural, made up roughly 80% of the entire population, and had increasingly refused to acknowledge the rule of law or principle of private property. ‘The USSR is the Crack Brigade of the World Proletariat,’ 1931, by Gustav Klutsis. Photo: Fine Art Images/Heritage Images/Getty Images To radically understate it, the Revolution of 1917 changed all that. A faction of the intelligentsia known as the Bolsheviks overthrew and killed Czar Nicholas II and his family. The leader of the Bolsheviks, Vladimir Lenin, outflanked all political opponents and forced the country into complete Communist rule. The message of Communism, however, still had to be sold to the general populace. Before the 1917 revolution, the Constructivists were loosely affiliated with other modernist art movements in Western Europe. They sought to develop a new visual language based on reducing decoration and streamlining shape and type in order to “construct” art. But after the revolution, they found themselves at the center of the Russian cultural overhaul. Since many of the Constructivists were also Communists, they eagerly participated in the cultural transformation that played out on an unprecedented scale. The resources under their control were profound, and many of the Constructivists were installed as leaders in the art academies or put in charge of media or industrial design institutions. Many Constructivists rejected painting in favor of graphic design and photography. Some, like Alexander Rodchenko (pictured above), turned to political propaganda. It was a successful pairing at first, as the visual sophistication of the Constructivists amplified the Communists’ reach and influence. 
Since the country needed nearly everything produced in Russia to be remade in the new image of Communism, the Constructivists had an opportunity to play a large role in creating that image. The first five-year plan (1928–1932) The posters in this story come from the Woodburn Collection at the National Library of Scotland and focus mostly on the economic and social issues of the first two of Russia’s five-year plans, making an appeal to the proletariat. The scope of this poster series was representative of the massive change occurring in the country. As the political structure of Russia was rewritten, so was its national identity. Bolshevik Russia evolved into the Union of Soviet Socialist Republics (USSR), and the Communist Party shifted its focus to transforming the country into an economic powerhouse. This coincided with the transfer of power to Joseph Stalin, who announced the first five-year plan — a device the government used for planning economic growth — on October 1, 1928.
https://modus.medium.com/data-visualization-in-the-age-of-communism-ab7789c1c848
['Jason Forrest']
2019-12-08 03:40:52.190000+00:00
['Ideas', 'Design', 'Media', 'Data Science', 'History']
Web Scraping News in 4 Lines of Code Using Python
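The title promises news scraping in just four lines of Python. A minimal, self-contained sketch of that idea, using only the standard library on a hardcoded sample page — an illustrative assumption, not the original article's code, which more likely used a purpose-built library such as newspaper3k or BeautifulSoup:

```python
import re

# A hardcoded sample page standing in for a fetched article, so the sketch
# runs offline. In a real scraper the fetch itself is one line:
#   html = urllib.request.urlopen(url).read().decode()
html = (
    "<html><head><title>Berita Teknologi Hari Ini</title></head>"
    "<body><p>Paragraf pertama.</p><p>Paragraf kedua.</p></body></html>"
)

# The core extraction fits in a couple of lines: grab the page title
# and join the paragraph contents into the article body.
title = re.search(r"<title>(.*?)</title>", html).group(1)
body = " ".join(re.findall(r"<p>(.*?)</p>", html))

print(title)  # Berita Teknologi Hari Ini
print(body)   # Paragraf pertama. Paragraf kedua.
```

Regular expressions are fine for a controlled demo like this, but real news pages have nested, irregular markup; that gap is exactly what libraries like newspaper3k close by wrapping download, parse, and text extraction behind a three-to-four-line API.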
https://medium.com/milooproject/web-scraping-berita-dengan-4-baris-kode-menggunakan-python-7fb6a8113a67
['Fahmi Salman']
2020-06-12 02:47:17.409000+00:00
['Python', 'News', 'Data Science', 'Hacker', 'Web Scraping']
I Read “The Apology” So You Don’t Have To
I Read “The Apology” So You Don’t Have To Eve Ensler’s new book is horrific and profound Finishing The Apology this morning felt a little like getting my last chemotherapy treatment years ago. Going through it was important to my survival, but I was tremendously relieved when it was done. In this slim tome of 112 pages, Ensler brings her dead father to life so he can apologize for destroying her childhood (and poisoning her adulthood) with sexual and physical abuse. She writes the book in his voice, which is the tricky part. He does some explaining, which sounds a bit like justifying. So instead of delivering a simple and satisfying story of retribution, smashing evil forces, the book almost gently uncovers the human flaws and failings and twisted attempts to feel love that perpetuate molestation. This is nauseating. This needs to be done. You’ll excuse me if I don’t want to list the statistics you’ve already read a thousand times. Women are constantly being sexually and physically abused by men the world over. It has to stop. And imagining some superhero swinging in to crush evil is silly and childish and ineffective. The roots of the problem must be examined and understood. Like scientists, we have to look closely to figure out what’s causing the plague in order to put an end to it. We have to look. I have secrets I would never write down or whisper, let alone publish in a book. I’m guessing most people are the same. But Ensler tells her shameful secrets bravely. Perhaps that’s why she was named one of the 150 Women Who Changed the World by Newsweek and one of the 100 Most Influential Women by the Guardian. Ensler’s influence exploded after writing The Vagina Monologues in 1996. 
The content of that play is ever changing, as diverse women from around the world enlist to talk about “consensual and nonconsensual sexual experiences, body image, genital mutilation, direct and indirect encounters with reproduction, vaginal care, menstrual periods, sex work, and several other topics,” according to the Wikipedia entry on VM. Ensler has been helping women ever since. Buoyed by the profits from the play, she founded V-Day, a global movement to end violence against women and girls; One Billion Rising, a non-profit whose goals are the same; and co-founded the City of Joy, a center for women survivors of violence in the Democratic Republic of the Congo. I first heard about The Apology on this podcast in which she is interviewed by bookseller Mitchell Kaplan and makes the connection between the sexual and physical abuse of women and the toxic patriarchal system which is about to kill our world. In the podcast, Ensler posits that if men could learn to apologize — learn to value the vulnerable, and emotional, and loving sides of themselves — we could perhaps save the planet and begin to thrive. In the book, her father speaks from the gray void of limbo, where he’s been spinning since his death 31 years before. “My father, Hyman, is here and his father and his and on and on. Fathers who wreaked their merciless havoc on the world,” he tells his daughter, Eve. “A chain of generals, conquerors, CEOs, con men, tyrants, thieves, exploiters of every kind and fools. They die and die here again for all eternity. These are my fathers. These are the men. Allegiance our highest calling. Obedience outweighs logic, morality, or sense.” The historic men try to call Ensler’s father away from his task. They scorn his apology, seeing it as a threat to male dominance in the world. “Each admission here defies a blood vow determined long before my birth. An apologist is a traitor of the highest order. How many men, how many fathers ever admit to failures or offenses? 
The act itself is a betrayal of the basic code…Our silence is our bond,” Ensler’s father explains. Yet the status quo that people fight so hard to preserve scars us all. In training boys to be patriarchs, feckless parents and reckless society rob them of their capacity for joy, love, tenderness, wonder... “And what, you ask, is a life without wonder? It is drab and dreary. It is one of imposed certainty and compulsory routine. It is devoid of splendor and excitement with a bolted doorway to astonishment,” Ensler’s father tells her in the book. This book, the #MeToo Movement, and all the women running for president suggest we are in the midst of change. It’s about time — time to stop training boys to be patriarchs. For humanity’s sake and for the sake of our planet, let’s train all children to be human beings instead.
https://medium.com/fourth-wave/i-read-the-apology-so-you-dont-have-to-e0f301e1d9f7
['Patsy Fergusson']
2020-02-17 05:01:12.755000+00:00
['Sexual Abuse', 'Patriarchy', 'Books', 'Feminism', 'Culture']
The Single Philosophy Book That Changed My Life
The Single Philosophy Book That Changed My Life Lasting inspiration from a 20th century classic Photo by hannah grace on Unsplash I believe there comes a point in all of our lives when we begin to wonder exactly what we’re supposed to be doing here. I remember being about fifteen years old and walking around the suburban neighbourhood where I grew up, looking at the waxed cars on the driveways and the neatly mowed lawns with neatly trimmed hedges. And I remember being overcome with a deep sense of confusion — as well as hostility. Was this it — my fifteen-year-old self asked? Was this my life’s purpose? To own a shiny car and cut my grass at the weekends? Not long after, I ventured into the philosophy section of my school library for the first time. I say “section” but it really only consisted of a single shelf of books. Not knowing anything about philosophy, I chose something from the Greek portion of the shelf, something about logic and morality. I felt pleased with myself for opening the pages of a genuine philosophy book, but if truth be told, the exact meaning of the text completely eluded me. I had to wait at least another three or four years before I came upon a philosophy book that actually spoke to me, that actually made me think differently about the world and my place in it, that — and I don’t mean to sound like I’m exaggerating — actually changed my life. Recognition of absurdity The reason that Albert Camus’ The Myth of Sisyphus changed my life was because it allowed me to see my senses and my imagination — my conscious mind — as being at the very heart of the possibility of meaning behind my life. I first came across The Myth of Sisyphus when I was at university. I’d chosen to study philosophy and was midway through a module on Kantian Ethics when, a little bored with the distinctions between analytic and synthetic propositions, I went roaming into the Existentialist section of the department’s book collection. 
The first thing that impressed me about The Myth of Sisyphus was the way it captured my own vaguely-observed sense of confusion at life. It didn’t try to tell me how to be less confused, but rather it described what the confusion felt like. Camus’ word for this was absurdity. “At any street corner the feeling of absurdity can strike any man in the face.” “It happens that the stage-sets collapse. Rising, tram, four hours in the office or factory, meal, tram, four hours of work, meal, sleep and Monday, Tuesday, Wednesday, Thursday, Friday and Saturday, according to the same rhythm — this path is easily followed most of the time. But one day the ‘why’ arises and everything begins in that weariness tinged with amazement.” The fulcrum of The Myth of Sisyphus tilts on the idea that human existence has an absurd quality at its core: that we try our best to gain a foothold on the world, working hard, educating ourselves, having families, adopting personas, falling in love and building our lives, and yet despite our best efforts we can’t be sure that any of it has any meaning. One day we wonder ‘Why?’ Camus compared this predicament to that of the mythological character Sisyphus, who was sentenced by the gods to roll a rock to the top of a mountain, only for the rock to slip back down to the foot of the mountain ready for the next day’s labour. “If this myth is tragic,” Camus wrote, “that is because its hero is conscious. Where would his torture be, indeed, if at every step the hope of succeeding upheld him?” Is life worth it? Camus begins his book with the question that any sensible person might ask of Sisyphus, namely, Why does he carry on? If his life is so absurd, why not kill himself? 
Camus writes, “Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.” Yet the wider point of the essay is not to tell us whether our own lives are in fact worth living, or to flatter the reader into feeling better about themselves, but instead to get inside the particular qualities of “the absurd”. In our daily lives we find ourselves in a perpetual panic over money and time, status, friends, train connections, haircuts, rent, essential oils, ripe avocados, phone signal loss, fashion, bad mirrors, good photos, old age, white teeth, bad weather, gluten. We panic over all these things. Then at a certain moment, perhaps exhausted by the effort, we wonder what the hell we’re doing. It took me some time to understand that what makes life absurd is not the fact that everything is meaningless, but the chronic inability to know one way or the other. Camus wrote, “Allowance must be made for those who, without concluding, continue questioning.” A bright conclusion The ultimate purpose of The Myth of Sisyphus is to ponder the conditions by which a life, however absurd, can be made tolerable… perhaps even happy. For this, Camus dwells on the figure of Sisyphus pushing his rock. “Nothing is told to us about Sisyphus in the underworld. [..] One sees merely the whole effort of a body straining to raise the huge stone, to roll it and push it up a slope a hundred times over; one sees the face screwed up, the cheek tight against the stone, the shoulder bracing the clay-covered mass.” Camus suggests that it is through the direct contemplation of his predicament that Sisyphus can find some contentment. His fate may be torturous and life-numbing, but if he can assert himself as fully conscious of his effort, then he finds he can carry on. “All Sisyphus’ silent joy is contained therein. His fate belongs to him. 
His rock is his thing. Likewise, the absurd man, when he contemplates his torture, silences all the idols.” We are the authors of our own lives I always appreciated how Camus offered meaningfulness as something that can be created from one’s circumstances rather than suffocated by them. In other words, that significance in one’s life can be self-generated despite hardship and routine, and that we can all be authors of our own sense of purpose. Like many self-help books, The Myth of Sisyphus describes contentment as something that can be attained from within. There is, however, a simple and crucial difference. For Camus, contentment is a uniquely personal sensation. As such, it can’t be thought of in terms of outward status or signs, which by definition are judged and measured by public consent. It is not to be achieved as such; rather it is to be created, imagined and enriched by personal (and largely private) invention. In these circumstances, according to Existentialist philosophers like Camus, we must look to ourselves for our values, for they are in us and are expressed directly in our actions, even in our sense of play. “Like great works,” wrote Camus, “deep feelings always mean more than they are conscious of saying. […] A man defines himself by his make-believe as well as by his sincere impulses. There is thus a lower key of feelings, inaccessible in the heart but partially disclosed by the acts they imply and the attitudes of mind they assume.” In short, we are never trapped by our conditions. To pose an objective account of a person’s circumstances will only capture the outward truths; it will not describe how things are for the person themselves, inwardly. The subjective experience is always unfinished — always emerging, as it were, in the light of new stimulus, of memory and imagination. Afterword Camus wrote his essay The Myth of Sisyphus when he was a young man — it was published in 1942 when he was just 29 years old. 
It was also the same year he published his novel The Stranger. I suspect that some people reading the text from a 21st century point of view may find the style obscure, written in a prose that is at times florid and poetic — perhaps even verging on the pretentious. It may strike some readers as evidence of youthful exuberance or else a self-conscious effort to impress. None of this worries me. In fact, Camus’ high ideals as a writer and a philosopher were an inspiration to me. The complex prose and the range of literary references throughout the book, from Dostoevsky to Kafka, added to the character of the text — which above all else, broke me free from the habits of daily routine and asked me to think afresh about the deep fabric of my life. Christopher P Jones is a writer and artist. He blogs about culture, art and life at his website.
https://christopherpjones.medium.com/the-single-philosophy-book-that-changed-my-life-9a7c9d34560b
['Christopher P Jones']
2020-03-09 15:34:50.546000+00:00
['Authors', 'Books', 'Reading', 'Writer', 'Philosophy']
Why Quizlet chose Apache Airflow for executing data workflows
Why Quizlet chose Apache Airflow for executing data workflows Part Two of a Four-part Series In Part I of this series on Quizlet’s Hunt for the Best Workflow Management System Around, we described and motivated the need for workflow management systems (WMS) as a natural step beyond task scheduling frameworks like CRON. However, we mostly pointed out the shortcomings of CRON for handling complex workflows and provided few guidelines for identifying what a great WMS would look like. It turns out that the landscape of available workflow managers is vast. As we evaluated candidates, we came up with the following wish list of features that a dream workflow manager would include (ordered roughly by importance): Figure 2.1: An example data processing workflow. 1. Smart Scheduling. Scheduling task execution is obviously a minimal criterion for a WMS. However, we also wanted task execution to be more “data-aware.” Because some tasks can take longer to finish than their execution schedule (imagine an hourly-scheduled job where each task takes 3 hours to execute), we want to ensure that any framework we use can take these incongruent time periods into account when scheduling dependent tasks. 2. Dependency Management. This is where we fully diverge from CRON. We wanted a simple, concise interface for defining dependencies amongst tasks. Dependency management should handle not only the dependency of task execution, but also failures and retries. We also wanted a system that could take advantage of any lack of dependencies amongst tasks to increase workflow efficiency. For example, in our workflow example introduced in Part I of the series, the five leftmost tasks in Figure 2.1 (outlined by dotted line) are all independent of one another and can be executed in parallel. Additionally, there can be a single task on whose completion many other child tasks depend. In this case, we’d prefer that the parent task be executed as early as possible. 
We also wanted a system that could consider the priority of individual tasks. 3. Resilience. As mentioned above, workflows will always act unexpectedly and tasks will fail. We wanted a framework that would be able to retry failed tasks and offer a concise interface for configuring retry behavior. In addition, we wanted the ability to gracefully handle timeouts and alert the team when failures occur or when tasks are taking longer-than-normal to execute (i.e. when service level agreement — or SLA — conditions have been violated). 4. Scalability. As Quizlet expands its user base and continues to develop more data-driven product features, the number and complexity of workflows will grow. Generally, this type of scaling necessitates more complex resource management, where particular types of tasks are executed on specially allocated resources. We wanted a framework that would not only address our current data processing needs, but would also scale with our future growth without requiring substantial engineering time and infrastructure changes. 5. Flexibility. Our dream framework would be able to execute a diverse (ok, limitless!) range of tasks. Furthermore, we wanted the system to be “hackable”, allowing us to implement different types of tasks as they are needed. We also wanted to avoid pegging our workflows to a particular type of file system (e.g. Oozie or Azkaban) or pre-defined set of operations (e.g. only map-reduce-type operations). 6. Monitoring & Interaction. Once we got our WMS up and running, we wanted it to offer centralized access to information regarding task statuses, landing times, execution durations, logs, etc. Not only should this diagnostic information be observable, it should potentially be actionable: from the same interface, we wanted the possibility to make decisions that affect the state and execution of the pipeline (e.g. ordering retries of specific tasks, manually setting task statuses, etc.). 
We also wanted configurable access to diagnostic information and workflow interactivity to be available to all data stakeholders. 7. Programmatic Pipeline Definition. A large number of workflow managers use static configuration files (e.g. XML, YAML) to define workflow orchestration (e.g. Jenkins, Dart, and Fireworks). In general, this approach is not a problem, as the structure of most workflows is fairly static. However, the sources and targets (e.g. file names, database records) of many workflows are often dynamic. Thus, being able to programmatically generate workflows on the fly given the current state of our data warehouse would be an advantage over static configuration-based approaches. 8. Organization & Documentation. For obvious reasons, we gave preference to projects that had a solid future road map, adequate documentation and examples. 9. Open Source / Python. The data science team at Quizlet takes a large part in the ownership and execution of data processing and workflows. We also take great pride in sharing and contributing to open projects that everyone can use and learn from (ourselves included!). It would be a huge plus if we were able to adopt a framework that was also rooted in the Python / open source community. 10. Batteries Included. Quizlet needs its data yesterday! Today’s business world is fast-moving, and business intelligence is better served sooner than later. These insights cannot be delivered if the data science team is busy implementing basic workflow functionality. We wanted to adopt a framework that is fairly plug-and-play, with a majority of the above features already baked in. Why We Chose Airflow Using the above wish list as a guide when evaluating WMS projects, we were able to select three main candidates from the pack: Pinterest’s Pinball, Spotify’s Luigi, and Apache Airflow. All three projects are open source and implemented in Python, so far so good! 
However, after looking into Pinball, we were unconvinced by the focus of their road map and the momentum behind their community. At the time of writing this post, the Pinball GitHub project had 713 stars, but only 107 total commits, 12 contributors, and just a handful of commits in the past year. In contrast, the breakdown of GitHub stats for the Luigi/Airflow projects, respectively, is as follows: Stars: 6,735/4,901; Commits: 3,410/3,867; Contributors: 289/258. Thus, both Luigi and Airflow have the active developers and vibrant open-source community that we were looking for. Our primary decision then became to choose between either Luigi or Airflow. Luigi and Airflow are similar in a lot of ways, both checking a number of the boxes off our wish list (Figure 2.2). Both projects allow the developer to define complex dependencies amongst tasks and configure execution priorities. Both allow for parallelism of task execution, retrying failed tasks, and both support historical backfills. Both projects support an array of data stores and interfaces including S3, RDBs, Hadoop, and Hive. Both can be installed easily using pip and are, in general, fairly feature-rich. That said, there are a number of distinct differences between the two that motivated us to go with Airflow over Luigi. Figure 2.2: Side-by-side Comparison of Airflow and Luigi. Each row ranks Airflow and Luigi on their implementation of various features and functionality. Ranking ranges from no checkmarks (worst) to three checkmarks (best). First, Airflow’s future road map appears to be more focused and the momentum of the development community currently appears to be stronger than Luigi’s. Though originally developed by Airbnb, the Airflow project has become an Apache incubator project, which increases its probability of future success and maintenance. We also preferred a lot of design conventions chosen by Airflow over Luigi. 
For example, we preferred Airflow’s schedule-based task execution strategy, which is a little more “set-it-and-forget-it”, as compared to Luigi’s source/target-based approach, which can require heavier user interaction with workflow execution. We also felt that extending Airflow was more straightforward through the definition of new Operators (we’ll get to Operators shortly), rather than having to inherit from a particular set of base classes as is done in Luigi. Airflow also offered a number of features missing from Luigi. In particular, Airflow’s UI provides a wide range of functionality, allowing one to monitor multiple sources of metadata including execution logs, task states, landing times, and task durations, to name a few. A huge plus was that the same UI can also be used to manage the states of live workflows (e.g. manually forcing retries, success states, etc.). This is quite a powerful paradigm, as it not only makes diagnostic data easily available, but also allows the user to directly take action based on insights from the data. In contrast, Luigi offers a simple UI as well, but it’s not nearly as feature-rich as Airflow’s, and offers no interaction capabilities with live processes. Some other features offered by Airflow but missing from Luigi include the ability to define multiple, specialized workflows, the ability to share information between workflows, dynamic programmatic workflow definition, resource pools (referred to as “queues” in Airflow), and SLA emails. Thus, Luigi was able to check boxes 1–5 and 9–10 off of our wish list, but Airflow was able to check the remaining boxes as well. So we decided to go full-steam-ahead with Airflow! The rest of the series of blog posts details what we learned getting Airflow up and running here at Quizlet. In particular, Part III demonstrates some of Airflow’s key concepts and components by implementing the workflow example introduced in Part I of the series. 
Part IV documents some of the practicalities associated with Quizlet’s deployment of Airflow.
https://towardsdatascience.com/why-quizlet-chose-apache-airflow-for-executing-data-workflows-3f97d40e9571
['Dustin Stansbury']
2017-05-03 18:59:42.685000+00:00
['Airflow', 'Infrastructure', 'Data Science', 'Big Data', 'Analytics']
Making C# More Welcoming
This article is part of the C# Advent Series; check it out for more articles from others in the community.

I love C#. I've been working with the language since 2001 and still view C# as my favorite and primary programming language, despite growing to love many other languages since then. However, this year has been eye-opening for me as I've gotten a glimpse into how others learn programming and the problems C# poses for new developers. This year I left software engineering and became an instructor at Tech Elevator, a full-stack C# and Java bootcamp. I am now responsible for teaching others programming through C#, SQL, and JavaScript, among other technologies.

Learning programming is hard, but learning a language that was introduced 20 years ago and continues to radically grow and change year after year is monumentally harder. In this article I'll lay out some of the current problems I see with C# for new developers and talk about my hopes for the future of the language, including some early information on changes Microsoft is looking at including with .NET 6 and a feature request I'm hoping Microsoft will implement in Visual Studio. It is my hope that this article helps others empathize more with new developers and take them into account when designing their own libraries, languages, and tooling. I would also love it if Microsoft entertained some of the suggestions in this article, but my primary desire is for the community to have a bit more empathy for new developers learning in a broad and deep ecosystem.

Panic! In the IDE

While it's normal for experienced developers to feel waves of impostor syndrome throughout their careers, it can be harder to remember the feeling of being a brand-new developer writing your first few pieces of code. As a beginner, your primary focus is on learning the syntax and learning to think like a machine, but there are so many ways things can fail and a vast array of things you've not yet encountered.
“Not All Code Paths Return a Value”

The "Not all code paths return a value" compiler error in C# is one that almost all of our students run into as they start writing their first methods and playing with conditional logic. Let's say you need to write a method that returns a string based on whether a number is above, below, or inside a specified range. A seasoned programmer would have no problem with this, but new programmers often try an approach that checks each case with its own if statement.

The core logic of that approach is correct: there are three checks with appropriate boundary conditions, each returning the right result, but the compiler isn't happy. A beginning programmer looks at the CS0161 identifier and it scares them, because it's an alien string from a computer. They see "not all code paths return a value" and think, "Did I miss a return statement inside my if statements? Is there an edge case I forgot?" and they'll verify that any number going into the method should meet one of those three scenarios. They think about the logic, not about the compiler's perspective.

Beginner programmers like repetition. They like thinking about things in terms of if statements, and they'll repeat the same types of condition checks with slight edits. It's very normal for people to do things like this as they get used to programming. Only later do they think "I don't need an if here if I'm returning on the prior lines for the other conditions," because it takes time to become truly familiar with the return statement and its implications. I have a fix in mind for this error in particular, which I'll get to in a bit.

Unassigned Local Variables

Just to illustrate that I'm not picking on a single bad compiler error, let's take a look at another case folks are likely to hit at some point: forgetting to assign favoriteNumbers a new list of integers before using it.
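Both snippets appeared as screenshots in the original post; a minimal reconstruction might look like the following (the method names and range labels are my own illustration, only favoriteNumbers comes from the article, and neither method compiles, by design):

```
using System.Collections.Generic;

public class RangeReporter
{
    // Triggers CS0161 ("not all code paths return a value"): the three
    // checks cover every possible int, but the compiler cannot prove it.
    public string DescribeNumber(int number, int min, int max)
    {
        if (number < min)
        {
            return "Below the range";
        }
        if (number > max)
        {
            return "Above the range";
        }
        if (number >= min && number <= max)
        {
            return "Inside the range";
        }
        // No unconditional return here, so compilation fails even though
        // the logic above is complete.
    }

    // Triggers CS0165 ("use of unassigned local variable"): the list is
    // declared but never assigned before it is used.
    public int CountFavorites()
    {
        List<int> favoriteNumbers;
        favoriteNumbers.Add(13);
        return favoriteNumbers.Count;
    }
}
```

Turning the third if into a bare return statement satisfies CS0161, and initializing the list with new List&lt;int&gt;() satisfies CS0165, but neither fix is obvious from the error text alone.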
Most programmers, confronted with these two lines in close proximity, will spot the issue and correct it. The problem is that such bugs are often far less obvious in longer methods with additional variables, loops, if statements, and so on. So let's take a look at the error message Visual Studio offers. The message once again tells us what's wrong, but most experienced developers forget that reading error messages is an acquired skill. To a novice, "unassigned" might not suggest "I need to create a new list and set that list into my variable." The message has all the right words, but it could be clearer with one more sentence asking if they meant to set the variable to a value when declaring it.

Two Decades of Language Features

Let's shift gears a bit and talk about how C# and .NET are awesome. As I mentioned in the opening, I've been working with C# since beta 2 back in 2001. You don't stick with a language that doesn't adapt and change over time. .NET has made some phenomenal changes over the past two decades with major new platforms and technologies such as Razor, .NET Core, MVC, WPF, LINQ, Entity Framework, and Microsoft Azure, among other things. As technology continues to change, .NET remains relevant entirely through these efforts. However, with two decades of progress and history there is a price to pay, and unfortunately it is new developers who bear the brunt of it.

Options Overload

As C# has grown over the years, it has gained a number of incremental improvements in new language features. As new capabilities and keywords are added, older ways of doing things remain supported to provide easy migration paths. This means that newcomers are likely to see things done a number of different ways when looking over existing code and documentation. This problem is perhaps at its most prevalent with properties. Let's take a look at all of the different ways of handling properties in C#.
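The one-class overview that followed was originally a screenshot; a hedged reconstruction covering the common forms (class and member names are my own) might be:

```
public class PropertyStyles
{
    // 1. Classic field-backed property with explicit get and set bodies.
    private string _name;
    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }

    // 2. Auto-implemented property (C# 3).
    public int Age { get; set; }

    // 3. Auto-property with a private setter.
    public int Id { get; private set; }

    // 4. Get-only auto-property with an initializer (C# 6).
    public string Category { get; } = "Default";

    // 5. Expression-bodied, read-only property (C# 6).
    public string DisplayName => $"{Name} ({Age})";

    // 6. Expression-bodied get and set accessors (C# 7).
    private int _score;
    public int Score
    {
        get => _score;
        set => _score = value;
    }

    // 7. Init-only setter (C# 9).
    public string Region { get; init; }
}
```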
Here we have all the currently supported ways of working with C# properties in one class. As you can see, there's a lot of valid but different syntax. Some of these pieces of syntax are matters of preference (expression-bodied members), and some are necessary to support certain scenarios. A novice programmer needs to be familiar with reading and understanding most of these. Visual Studio will even automatically insert properties as expression-bodied members when overriding a class, which forces that arrow syntax on new devs who may not be ready for it yet.

Guiding People Towards Cliffs

C# has a growing number of keywords, particularly when dealing with method parameters and with the concept of a class vs. a struct vs. an interface vs. a tuple vs. a record vs. an anonymous object. There are simply a lot of things that a newcomer needs to be able to read and understand in online help, starter templates, articles, books, and existing code. Not only does a newcomer need to know how to read these things, but they must also internalize guidelines and rules on when to use each kind of thing as part of the process of understanding it. Unfortunately, the tooling makes it easy to discover some of the newer and more advanced language features at moments when you don't necessarily want people to. Take the error our students might encounter when running a slightly older version of C# and working with interfaces for the first time: the student has tried, out of habit, to define an interface containing a method body. The compiler message is more encouraging towards upgrading C# versions to take advantage of default interface method implementations than towards double-checking whether they wanted to add a method body to an interface in the first place.
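The snippet the students stumble over was shown as a screenshot; a minimal version (the interface and method names are my own) might be:

```
public interface IShape
{
    // Under C# 7.3 and earlier, this body is a compile error; the fix a
    // beginner actually needs is to delete the body, not to upgrade the
    // language version as the compiler message suggests.
    double GetArea()
    {
        return 0.0;
    }
}
```

Under C# 8 with a supporting runtime, the same code compiles silently as a default interface implementation, which sets up the note that follows.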
Note: Interestingly, newer versions of C# don't error on this code at all, making it harder for a student to understand the difference between an abstract class and an interface and what the intent of either really is. This is an example of a new language feature making it harder for newcomers to learn core aspects of the language first, which is unfortunately a common trend.

Making C# More Accessible

At this point you hopefully have a somewhat broader understanding of what concerns me about people learning C# two decades into its existence. Let's talk about remedies. Here are a few things that I think would help people have an easier time getting into C# and feeling confident and competent with the language:

Beginner-Friendly Compiler Errors

The number one thing I would encourage Microsoft to change is the way compiler errors appear when the user hovers over the "red squiggly" in the editor. Consider a simple mock-up of what might appear for the "Not all code paths return a value" error I cited earlier. It would make a few changes: First, the red squiggly occurs at the end of the method instead of at the method signature, which puts it closer to the spot of the missing or incorrect code. Second, the error tooltip is vastly different: it includes basic contextual information at the top, a short beginner-friendly paragraph describing the main cases of the error, and examples of bad and fixed code. Finally, the error code is still included in the footer, along with a hyperlink to the official documentation on that error code. This type of experience would be optimal for a new developer; it would take a certain degree of fear out of the experience while steering them toward a proper fix.

Note: This feature has been officially registered as a feature request.
If you believe this provides value, please upvote it. These tooltips would be hard to write for every error, but I'm certain Microsoft has some idea of which compiler errors are the most common and could prioritize providing help for those scenarios. Here are a few I'd prioritize myself:

CS0029 — Cannot implicitly convert X to Y
CS0103 — X does not exist in the current context
CS0161 — Not all code paths return a value
CS0165 — Use of unassigned local variable
CS1002 — Semicolon expected
CS1513 — } expected (missing closing scope)
CS1525 — Invalid expression term, specifically when using == instead of = for assignment (e.g. int i == 4)
CS7036 — No argument given that corresponds to the required formal parameter (incorrect method call)

Opt-In to Advanced Language Features

It's healthy and expected to add language features that support developer productivity and keep up with the changing nature of programming. The recent rise in popularity of functional programming is a great example, as it has pulled a lot more functional-style syntax into newer versions of C#. The problem comes when more complex language features are pushed at new developers still trying to learn the basics. For example, when a new developer implements an interface for the first time, C# will helpfully offer to generate the members for them. That's an awesome feature. However, the implementation is maybe not the best for new users: C# generates a property getter and setter, but prefers the expression-bodied member syntax of the language. That's actually my preferred syntax for writing a simple property, but it's distracting to show someone brand new to C# and programming in general, because there are so many other fundamentals to focus on before you get to the point where you can learn arrow syntax. One of the most surprising things for me as a new instructor was seeing how much trouble new developers have understanding properties.
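For reference, the stub Visual Studio scaffolds in that situation looks roughly like this (the IPerson interface is my own illustration):

```
using System;

public interface IPerson
{
    string Name { get; set; }
}

public class Person : IPerson
{
    // Roughly what the "Implement interface" quick action generates:
    // expression-bodied accessors that throw until you replace them.
    public string Name
    {
        get => throw new NotImplementedException();
        set => throw new NotImplementedException();
    }
}
```

A beginner who has only ever seen get { ... } and set { ... } bodies may not even recognize this as a property at all.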
New programmers are juggling so many different concepts, and the idea of a class as a reusable piece of code with little properties that can be customized takes time to soak in. During this "soaking" period, properties don't make a lot of sense, and unfortunately there are far too many ways of writing them at present. What's worse, when Visual Studio pushes expression-bodied properties at a new developer who has only seen the older ways of writing a property (and has potentially not yet seen arrow syntax), the beginner may not even realize they're looking at a property.

Beginner Mode

I think what we need is a "Beginner Mode" that simplifies the language of error messages and sets auto-generated code to prefer older-style syntax. Just as Visual Studio setup used to ask whether you had a C#, VB.NET, or web development background and customized its menus accordingly, it would be nice to indicate that you're new to programming or new to C# and have Visual Studio minimize the number of things that could distract you from that early learning path. To be fair to the wonderful team at Microsoft, there are definitely new features that you now must opt in to receive. A notable recent example is the way C# 8 adjusts how null values are handled, requiring a project-level setting to enable that advanced behavior. More features should be handled in similar ways going forward.

Deprecate and Remove Old Syntax

We need to think seriously about the growing baggage in the number of different keywords and the different ways those keywords can be arranged. When we introduce new things, we need to seriously ask ourselves what, if anything, we can take out or deprecate from older versions of the language.
For example, if expression-bodied members are truly designed to replace standard field-backed properties with explicit gets and sets, we should say that you should stop using the older way of doing things and provide compiler warnings to guide people away from it. (Note: I'm not saying this change should be made; I'm using it as an example of adding complexity without removing anything.) Even better, we should provide handy quick-fixes that automatically convert old code to the new recommended ways of doing things, and let you apply them at the file, project, or solution level. By making best practices explicit in our tooling, we remove things that new developers need to care about. Sure, they need to understand the old and new ways, but they don't need to worry about deciding when to use one over the other. Beginners like and need guide rails, because guide rails reduce mental strain and improve comfort as people learn the ropes by focusing on one thing at a time.

What's Ahead in .NET 6?

The good news is that making .NET accessible to new developers is a central theme currently under consideration for the .NET 6 release. Looking at the current theme, Microsoft appears to be aiming for a number of improvements in this area. Overall, I'm elated at this, but I'd still love to see more progress on improving the compiler and runtime errors users see and gearing them towards those brand new to programming. I strongly support and embrace Microsoft's approach of prioritizing new developers coming into the language as the language and its tooling continue to grow and evolve, and I'd love to see additional effort on simplifying legacy ways of writing C# code, or guiding people towards more modern approaches, in a way that's as friendly as possible to new developers while respectful of existing code.
Closing

This article may read like a wish list or an airing of grievances about things that make my job harder as an instructor, but it's not actually my students I'm worried about. A skilled instructor can serve as a tour guide through an existing language. My fear is for the folks who want to learn C# in greater depth than their schools teach it, or who are trying to improve themselves on the side. My fear is also that students may struggle and give up early on, daunted by the learning curve, when they could have succeeded with a bit more help. We need a wider diversity of people coming into tech, and the learning curve is one of the many factors stifling that growth. More people should have a fair shot at learning these technologies, and that means revisiting our tools and documentation to welcome the next generation of developers of all ages, genders, and backgrounds. I am incredibly proud of .NET and all of its supported languages and associated tools. Microsoft has done a phenomenal job over the past 20 years, and I cannot overstate that, despite the improvements I want to see.
https://medium.com/swlh/making-c-more-welcoming-4c5a76a7497e
['Matt Eland']
2020-12-13 06:55:09.151000+00:00
['Visual Studio', 'Csharp', 'Dotnet', 'Learn To Code', 'Programming']
How to Heal Through Disillusionment
Before Grace enters there is always a test. Mine took the form of breaking the compulsive pattern of forcing attachment by entering into five years of self-imposed celibacy. To my surprise, this was not as difficult as I feared. The volcanic hatred erupting from a lifetime of relational traumas encouraged solitude and allowed for the space to create therapeutic workshops dramatizing the plight of the Orphaned Child. The objective of these arts-driven psycho-spiritual laboratories was to release illusions of safety and rescue so that possibilities rooted in reality could be realized. Guided by the Archetypal Fierce Mother, the repressed life force of the Rejected Child surfaced. Honoring the victimized child meant embracing the torment of her darkness. Through the workshop process, I revisited the little girl who carried the pain of rejection and loneliness throughout her life. I came to admire her immense capacity to endure the endless longing for love denied her, while she tenaciously held on to the hope of rescue and acceptance. Eager for some sort of definition in her aimless world, she clutched desperately at anything that might offer her a sense of who she was supposed to be, trying to anchor herself in some sort of constancy and security. I remembered her courage in the face of rootlessness, despair, and deprivation, and her resilient efforts to remain hopeful that love would find her. It was she who was willing to sacrifice everything to belong; to reclaim her birthright to be loved, to be safe, to do whatever it took to experience tribal union. I came to appreciate her efforts to seek connection, thwarting the stigma of the outcast: the one who was unwanted, the one who didn't matter, the one who was separate from the rest. I mourned her illusions and embraced the reality of her powerlessness over her circumstances, with full acceptance that the loss of unconditional love, safety, admiration, and care could never be compensated for.
Through my connection to her, the magnitude of my inconceivable plight became clear. Individuation was my calling at a time when survival meant belonging. A tempestuous path of exile finally brought me home to myself, where integrity and self-respect live. At long last I grieved over her scars. I came to love her. As I laid her illusions to rest, I moved into adulthood. Suddenly I was ready to say an emphatic "No" to old ways and attachments that needed to be excised. I was no longer deluded by false hope and deceptive ideologies. Emerging with a more integrated sense of self fostered a deeper understanding of the collective human struggle, and it left me intelligently guarded and discerning. With these realizations came a dramatic shift in my worldview.
https://medium.com/publishous/healing-through-disillusionment-2e10c5fd6373
['Rev. Sheri Heller']
2020-06-27 13:08:48.100000+00:00
['Growth', 'Life', 'Mental Health', 'Self', 'Suffering']
A third batch of Slowdown Papers
Afternoon walk, Enskede, 15 May 2020

Dear readers — Many thanks for subscribing to the Slowdown Papers. A few days ago I published a third batch of Papers, numbers 19–41. This third set was written over the last few months, starting in the long Swedish summer holiday through July. I then took a couple of months of refining, editing, organising and researching, knocking them into some sort of shape. As I note in the first paper, which reflects a little on the writing itself, at least some of this set is partly built out of the many essays and articles I've written elsewhere during this time, as well as the many speeches I've given (and thanks to all of you who asked me to write and speak—such things also force me to organise my thoughts. I quote Zadie Smith in the first paper, who said that "Writing is control", and there is no doubt some of that happening here.) I also discuss how difficult it is to write about something that changes in real time, and in ways that are not at all obvious. That also meant a few months was required, to sift the thoughts through a series of different frames and to find a way of capturing things as they are happening. But my work, and therefore my writing, often concerns things that are ambiguous, incomplete, uncertain and subjective—so this is all of my own doing. Still. Three months.

This third batch makes more explicit the link between the pandemic and the intertwined grand challenges of the climate crisis, public health and social justice, making clear that COVID-19 is merely an expression of all three. I describe how this entangled interconnectedness, woven through our various infrastructures of everyday life, also presents an opportunity for action.
That means this batch covers the way we make decisions about fundamental shared concerns, like policing and hospitals, and our broader questions of value, as well as the patterns, dynamics and qualities of streets, housing, infrastructure, neighbourhoods, and cities themselves, in which those values are articulated. In that, the batch explores the form, dynamics and tangible environment, or indeed mood, of what I call these emerging Slowdown Landscapes, as well as implications for the kind of design we do (for the few of you who are designers) and for the broader patterns of policymaking and action.

The broad arc of this batch includes a few scene-setting pieces to start with. 19. The waters draw back, only to return and 20. Wait, what? are both overviews of what happened this year, with the former homing in on the apparent failure of the Anglo-American model as well as the idea of getting used to living with the virus, while the latter records events related to the pandemic, touching on bushfires and wildfires, as well as Black Lives Matter. 21. Clear skies, full parks, can't lose (a punning title that will only make sense to Americans) is a long-form 'casebook' recording the emerging observations and research, whereas 22. Revisiting the Slowdown, and the end of the Great Acceleration, unpacks Danny Dorling's idea of the Slowdown, which I discovered after writing the first batch for this series. The next few cover the potential end of city centres, work, and the office as we knew it, followed by the linked imperatives to retrofit the suburbs and to explore different models for housing and neighbourhoods that engender care and culture, as well as pursuing form and function.
https://medium.com/slowdown-papers/a-third-batch-of-slowdown-papers-17d37c26c144
['Dan Hill']
2020-09-27 15:31:06.473000+00:00
['Cities', 'Technology', 'Design', 'Covid-19', 'Policy']
12 hours in Yosemite
Last November, as the world around me was starting to freeze, my wife, our two children, and I headed west to the sunny California coast. I hadn't been out to visit my brother and his family for years, and with American Thanksgiving coming up, we thought it was an ideal time to go. While there, we also wanted to get out and witness some of the amazing natural beauty around the Bay Area. Not being all that familiar with the local geography, I hadn't made the connection that Yosemite National Park was only a few hours away by car. I've wanted to visit Yosemite for as long as I can remember, and to finally have the chance was something I had to make happen.

Considering we didn't have many days to spare on our trip, the journey to Yosemite was going to have to happen in one day. With the 3+ hour drive each way, was a day trip even going to be worth it? There was only one way to find out.

We awoke around 4:30am the day of our trip. We packed as little as possible: some food, books and toys for the kids. We filled up my brother's Sienna and we were ready to roll. The kids were still asleep and we got them into the van without waking them up; it was a good omen. We hit the road just before 6am, and it was still pitch black. Google Maps said the journey was going to be 3 hours and 34 minutes. I had my fingers crossed that the kids would sleep most of the way and that we might make it there in under 4 hours, given the coffee and bathroom breaks.

We headed east through the yellow hills of the California countryside. There wasn't a cloud in the sky and the sun was just about to peek over the horizon. The day was turning out to be absolutely stunning, and our excitement grew as we wheeled onward. Driving through inland California was a different perspective on a state I had known only for its coastline. It was incredibly flat, dry and full of farm fields, fruit trees, pickup trucks and rodeos. Not what you would typically picture when thinking about California.
It was nice to see the other side. After about two and a half hours we hit the Sierra Nevada, and our route to Yosemite took us up some insane roadways that appeared to be carved out of the mountainsides. We climbed upward about 5,000 ft. After a short while we entered the Stanislaus National Forest. As we got closer to Yosemite, we noticed that the bases of most trees were scorched black from fire. Everything bore scars, and some areas had been completely wiped out. After some Googling we learned that there was a massive wildfire here in 2013, in fact the largest fire on record for the Sierra Nevada. It was clear to the eye how much damage it had caused: large swaths of land completely burnt to the ground. It was called the 'Rim Fire' because of its vicinity to the 'Rim of the World' lookout point. The landscape was still beautiful, just perhaps not as it had been in previous years. After reading about the fire we came upon the lookout and had to stop and take in the view. It was immediately apparent why it was called 'Rim of the World.'
https://medium.com/get-outside/12-hours-in-yosemite-58ae876e855c
['Matt Quinn']
2015-03-07 16:00:40.871000+00:00
['Storytelling', 'Photography', 'Travel']
Men and Interracial Friendships: Uncommon, But Certainly Not Impossible.
Upon Clarence Clemons's death in June 2011, much was made of the relationship between Bruce Springsteen and his late, great saxophone player. Many media outlets, from highbrow publications to run-of-the-mill and relatively obscure ones, ran fawning stories highlighting the genuine, affectionate and loyal relationship the two men had with one another.

During the summer of 2015, various segments of the media engaged in deja vu as they affectionately touted the gripping story of three young American citizens — Alex Skarlatos, Spencer Stone and Anthony Sadler. While on vacation in Paris, these three heroic men sprang into action and dramatically and decisively took down the heavily armed terrorist Ayoub El-Khazzani, successfully preventing an unimaginable terror plot. The brave and capable trio was awarded the Légion d'honneur, the most prestigious award given by the nation of France. Each young man was honored with medals by then U.S. Defense Secretary Ash Carter at the White House and had a private meeting with former President Obama. U.S. Airman Stone, National Guardsman Skarlatos and then college student Sadler were the recipients of hometown parades, talk show and radio interviews, and became the darlings of both national and international media across the political spectrum. Skarlatos gained further celebrity as a contestant on ABC's Dancing With The Stars.

While the public became privy to many details of the lives and backgrounds of these young men, and fondly reminisced on the seemingly brother-like relationship of Springsteen and Clemons, one notable factor was largely evaded: the fact that, in both situations, these men were part of an interracial friendship. While this does not make them a novelty, it is in fact somewhat noteworthy.
There are those who would argue that cross-racial friendships are not that unusual, particularly for the latter trio, given that these men are members of the millennial generation. For the better part of a decade we had been led to believe that this group of young men and women is the most racially and socially progressive in American history, though a number of studies over the past several years, coupled with a plethora of recent incidents on college campuses and in other venues, have amply refuted this widely held belief. Despite such incidents, it is probably safe to say that, on average, millennials (those born between 1981 and 1998) are likely to be more open-minded and accepting of certain mores and customs ― for example, same-sex marriage, interracial marriage, open drug use ― that have been less well received by previous generations. That being said, being more tolerant does not necessarily translate into full support. Tolerance and acceptance are two different things!

The truth is that interracial friendships among men are neither novel nor non-existent. Additionally, outside of real-life scenarios, we have seen many cases on the silver screen and in entertainment where White men and non-White men, particularly Black men, have formed close bonds with one another: from escaped convicts Tony Curtis and Sidney Poitier in the 1958 Oscar-nominated movie The Defiant Ones, to Bill Cosby and Robert Culp in the 1960s NBC espionage series I Spy (1965–68), to Don Johnson and Philip Michael Thomas in the flashy, flamboyant and racy mid-1980s television program Miami Vice (1984–1990), to Mel Gibson and Danny Glover in the multi-sequel Lethal Weapon series, to Samuel L. Jackson and John Travolta in the 1994 Oscar-nominated film Pulp Fiction.
To Scott Bakula, Ray Romano and Andre Braugher in the woefully underrated, ridiculously unappreciated and fantastic TNT series Men of a Certain Age (2009–11). In short, the entertainment industry does a great job of providing fictional versions of White/non-White pairings, particularly Black-White fellowship, that are often at odds with reality.

To be sure, there are interracial relationships between White and non-White men in real life, especially among athletes and the most progressive men. The famous, in some cases infamous, Hollywood Rat Pack members (Frank Sinatra, Dean Martin, Sammy Davis Jr., Peter Lawford and Joey Bishop) were a notable, high-profile interracial group. Indeed, many men (including a number of their famous male cohorts in the entertainment industry) admired and were in awe of them. The aforementioned examples aside, the question is: outside of the jock/athletic realm, how commonplace and frequent are such friendships? Yes, it is true that many men have problems making friends with anyone, whether their male neighbor or their co-workers, let alone a man of a different racial background.

Historically speaking, relationships between White men and Black men have often been fragile and fraught with tension, paranoia and ample amounts of intense suspicion and mistrust, for a number of historical, economic and psychological reasons. Fear of supposedly unrestrained sexual prowess (fear of the Black male penis), coupled with fear of supposedly "malevolent and wanton" Black male violence, and the sexually rapacious behavior and brutally untoward dispositions toward Black women by segments of White men, were just a few of the many factors that created a divide between men of both groups. Throughout most of the 20th century, lower-income males of both groups (particularly Irish, Polish and Italian White men) were often competing with Black men for low-wage jobs. Yes! It was about money and sex! Other factors were at play as well.
Even among supposedly more liberal White men this divide has been commonplace. To be sure, there are non-White men who harbor racial prejudice and dangerously misguided views about others with different skin pigmentation, and they should be justly condemned and challenged for their attitudes. On the other hand, there are many men across racial lines who do not use racial slurs or harbor rabid hatred toward others who are physically and culturally different from themselves. Nonetheless, they are often far too indifferent, uninterested or unwilling to take the next step of crossing the racial divide and befriending or learning more about their brothers of another color or culture. These are the men (most often White self-identified liberal men) who like the idea of racial diversity (as well as cultural pluralism in general) in theory, from a safe, non-threatening distance. As the old saying goes, they are “talking the talk, as opposed to truly and sincerely walking the walk.” Does not having any friends of other races automatically make you a bad or racist person? No. Does it make you a socially and culturally limited person? To a large degree, yes it does! Those of us, such as Bruce Springsteen and Clarence Clemons, or Spencer Stone, Alex Skarlatos and Anthony Sadler, who do have male friendships across the racial divide can safely say that these relationships have been very valuable and rewarding for a multitude of reasons, and that we are likely to be more socially, emotionally, psychologically and possibly even physically healthier in a variety of ways because of them. Elwood Watson, Ph.D., is a professor, author and public speaker. His forthcoming book, Keepin’ It Real: Essays on Race in Contemporary America, will be published by the University of Chicago Press.
https://elwoodwatson890.medium.com/men-and-interracial-friendships-uncommon-but-certainly-not-impossible-505e7ea355e7
['Elwood Watson']
2020-10-01 09:45:38.847000+00:00
['Interracial Relationships', 'Male Sexuality', 'Pop Culture', 'Men', 'Psychology']
RAPIDS cuGraph adds NetworkX and DiGraph Compatibility
RAPIDS cuGraph is happy to announce that NetworkX Graph and DiGraph objects are now valid input data types for graph algorithms. Furthermore, when a NetworkX Graph is passed in, the returned data type will match the corresponding NetworkX algorithm’s return type — with some exceptions that this blog will cover. Existing analytics using NetworkX can be accelerated by simply replacing the module name. answer = nx.pagerank(G) ==> answer = cugraph.pagerank(G) # same G Example Using Betweenness Centrality Betweenness Centrality (BC) is a measure of the relative importance of a node based on the number of shortest paths that cross through the node. This is under the assumption that the more information flows through a node, the more important it is. Since a single source shortest path (SSSP) algorithm needs to be run for each node, BC can be slow. Consider the following code sample that computes BC using NetworkX and the same code with the new cuGraph feature. It’s that easy. Replace the module name, and you have access to RAPIDS accelerated algorithms. Running the above code on a range of random graphs produces the results in Table 1. The random graphs are created using the preferential attachment model, but other models could be used, and NetworkX contains a wide assortment of graph generators. For this test, the number of nodes started at 100 and was doubled each iteration. Also, the average degree (the M argument) was set to 16. You can find the code in the cuGraph notebook folder. Table 1: cuGraph runtimes for BC vs. NetworkX Compare 47,763 seconds, which is a little over 13 hours, to the cuGraph time of 145.6 seconds, which is under 3 minutes, and you get a sense of the potential performance improvement that can be achieved by making a simple code change (a 328x speedup). The example does use Betweenness Centrality, which is known to be slow. To improve performance, estimation techniques can be employed to use a sample of nodes rather than all of them. 
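The blog's actual benchmark code lives in the cuGraph notebook folder; as a rough, CPU-only illustration of what Betweenness Centrality computes (an SSSP pass per source node, then accumulating how often each node sits on shortest paths), here is a minimal pure-Python sketch of Brandes' algorithm for unweighted, undirected graphs. The adjacency-dict input format and the function name are illustrative assumptions, not the cuGraph or NetworkX API:

```python
from collections import deque

def betweenness_centrality(adj):
    """Brandes' algorithm for an unweighted, undirected graph.
    adj: dict mapping node -> list of neighbors.
    Returns unnormalized betweenness scores per node."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        # BFS from s, recording shortest-path counts (sigma)
        # and predecessors along shortest paths.
        sigma = dict.fromkeys(adj, 0)
        dist = dict.fromkeys(adj, -1)
        preds = {v: [] for v in adj}
        sigma[s], dist[s] = 1, 0
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate path dependencies in reverse BFS order.
        delta = dict.fromkeys(adj, 0.0)
        while order:
            w = order.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Undirected graphs count each path from both endpoints.
    return {v: c / 2.0 for v, c in bc.items()}

# Path graph a-b-c: only b lies on a shortest path between other nodes.
print(betweenness_centrality({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))
# → {'a': 0.0, 'b': 1.0, 'c': 0.0}
```

Note that NetworkX's `betweenness_centrality` normalizes scores by default, so its absolute values will differ from this unnormalized sketch even when the ranking of nodes is identical.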
Setting the k argument to 25% of nodes (k = N // 4) will reduce the runtime of both NetworkX and cuGraph by about 75%, but will also reduce accuracy. The performance speedups listed above are typical. However, if you are working with a small dataset, like Moreno’s seventh-grade friend network or Zachary’s Karate Club, where the number of nodes is less than 50, then GPU acceleration really won’t help. Once you get to a few hundred nodes, the benefits become noticeable. And once you are into the tens-of-thousands-plus range, acceleration is a must. The Betweenness Centrality performance numbers were generated on a randomly created preferential attachment style graph. Let’s look at a sample of other large public datasets across a range of algorithms. Figure 1: Speedup of cuGraph over NetworkX for typical graph algorithms Table 2: Dataset structures used in the generation of Figure 1 An additional benefit of cuGraph now accepting NetworkX Graph objects is that it also gives NetworkX users access to algorithms that are not in the current NetworkX release, for example, Louvain, Ensemble Clustering for Graphs (ECG), and Leiden, to name a few. More algorithms are coming. Current List of Algorithms and Areas of Difference There’s an important difference to note. Currently, cuGraph does not support a rich property set on nodes or edges. When a NetworkX graph is imported, only the source, target, and a single weight column are copied over. That means that algorithms that return a graph, like k_truss, will not have any additional attributes. The solution would be to run a NetworkX subgraph extraction on the returned graph. Algorithms that exactly match, those that exactly match but do not copy over additional attributes, and those that are not available in NetworkX (Table 3). Table 3: cuGraph algorithms that exactly match their NetworkX equivalents Algorithms where not all arguments are supported in cuGraph (Table 4). 
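For readers curious what the preferential-attachment graphs used in these benchmarks look like under the hood, here is a rough pure-Python sketch of a Barabási–Albert-style generator. The function name and edge-set representation are mine; NetworkX's own `barabasi_albert_graph` is the real generator and attaches exactly m distinct targets per new node, which this sketch only approximates:

```python
import random

def preferential_attachment_graph(n, m, seed=42):
    """Grow a graph node by node; each new node attaches up to m edges,
    preferring high-degree targets (the 'rich get richer' effect)."""
    rng = random.Random(seed)
    edges = set()
    # Each node appears in `repeated` once per incident edge, so a
    # uniform draw from it picks targets proportionally to degree.
    repeated = []
    targets = list(range(m))       # first new node links to m seed nodes
    for new in range(m, n):
        for t in set(targets):     # dedupe: rng.choice may repeat a target
            edges.add((min(new, t), max(new, t)))
            repeated += [new, t]
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

# Mirror the benchmark's shape: average degree m = 16.
g = preferential_attachment_graph(1000, 16)
degree = {}
for u, v in g:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print(len(g), max(degree.values()))  # a handful of hubs end up very high degree
```

The heavy-tailed degree distribution this produces is exactly why Betweenness Centrality benchmarks on such graphs are punishing: a few hub nodes sit on an enormous number of shortest paths.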
Table 4: cuGraph algorithms that do not support all NetworkX arguments Algorithms where the results are different (Table 5). For example, the NetworkX traversal algorithms typically return a generator rather than a dictionary. Table 5: Algorithms that produce different results on cuGraph vs. NetworkX Conclusion: RAPIDS cuGraph now provides an accelerated graph analytics library that integrates the RAPIDS ecosystem with NetworkX. Providing a friendly, easy-to-use experience is almost as important as awesome performance. The RAPIDS cuGraph team will continue to expand the list of available algorithms, with scaling to hundreds of billions of edges, but, more importantly, will also focus on expanding and enhancing interoperability with NetworkX and other Python frameworks. Your feedback and comments are most welcome. We are available on our Google Group, or you can file a GitHub issue with suggested enhancements. Please consider giving cuGraph a star on GitHub.
https://medium.com/rapids-ai/rapids-cugraph-networkx-compatibility-d119e417557c
['Brad Rees']
2020-10-02 18:59:22.426000+00:00
['Graph Analytics', 'Networkx', 'Python', 'Data Science', 'Open Source Software']
When You’re not in Reno
Our bedroom is nothing like any of the seven different rooms we’ve stayed in at the Sands Regency in Reno since 2011 but I woke up certain that that’s where I was this morning. It was confusing. It’s been a very confusing year. Although the Burning Man Organization officially canceled the 2020 event in April, somehow that hadn’t gotten real to me until now. In past years, a crew was out on the enormous flatness of the Black Rock Desert at the end of July to drive the gold spike which indicated where The Man would stand. By now, preparations would be in high gear out there and all around the world as 70,000 giddy ticket-holders hauled out the dusty gear and got serious about that art installation or painting or whatever gift was to be made and brought this year. The theme this year had already been selected before the cancelation: the Multiverse. From the Burning Man website: The 2020 Black Rock City event theme explores the quantum kaleidoscope of possibility, the infinite realities of the multiverse, and our own superpositioning as actors and observers in the cosmic Cacophony of resonant strings. It’s an invitation to ponder the real, the surreal and the pataphysical, and a chance to encounter our alternate selves who may have followed, or are following, or will follow different decision-paths to divergent Black Rock City realities. Welcome to the Multiverse! Naturally, there will be a Virtual Burning Man online and I suppose there are a lot of people who will log in for that. Meh. I checked it out and all those computer graphics and virtual environments were phony-looking and boring. And what about that smell? The Black Rock Desert isn’t really a desert. It’s a dried-out lakebed. In the Pleistocene era, it was a real lake. Now it’s an enormous alkaline flat and the minute your vehicle leaves the road and that white, powdery dust rises there’s an unmistakable smell. 
If I were to go unpack some of the gear stowed in our suitcases there would be some of that dust lingering. And the smell. Oh, man, that smell! See, now that it’s the middle of August everything sends me into weird Burning Man vibes. The dreams are back, the ones where I know I’m at the Burn but it doesn’t make any sense because there are cabins and trees and streets. But I know this is Burning Man, I just need to get to where the “real” Burning Man things are happening. Ask me if I ever manage to do that. And now I’m waking up thinking I’m in Reno. We store our tent, bikes, and all the other major gear in Sacramento — which is actually closer to the Black Rock Desert than San Francisco, go figure — and pick up our U-Haul there. Then we drive to Reno and that’s when the charge begins to build. For about a month now my partner and I have both been goosed by these intense little vibes associated with Burning Man. I’ll experience phantom smells or just have a sense of something Big and Wonderful approaching. Or, conversely, the old panic arises when I realize that in less than two weeks we leave for Sacramento! Oh shit!! I will always cop to having a complicated relationship with Burning Man. There is so much hype around the event, a chorus of voices proclaiming massive life realignments in four-part harmony. That has not been my experience. AleXander to me / Temple wall / 2013 — the year we married in Reno Moreover, Burning Man isn’t what you’d call a relaxing summer vacation. The litany of frustrations and pains and things that just suck is long and boring. Who cares about nasty porta-potties or freezing nights in a tent or sitting in completely stopped traffic miles from the event? The people who are determined to go are going to roll with this crap and everyone else is going to count themselves to be intelligent, well-adjusted people for not going. My annual cycle regarding this madness. During the usual 4 to 8 hours of Exodus, I hate everything about Burning Man. 
I’m probably sunburned, I’ve got dust in places where dust really should never be, I’m certainly dehydrated and seven days of serious carb-gorging has my system in an uproar. I think I’ll feel better if we ever get back onto the actual road and out of this dusty, rutted wilderness. I do, a little, but then the drive back to Reno seems to take days. I don’t start to come out of my dull misery until I get into the shower at the Sands Regency in Reno. Of course, then there’s everything that has to be done before we can get on an eastbound jet. Gear has to be cleaned and repacked. Bedding and towels have to be laundered and repacked. The truck has to be emptied and cleaned. All that gear needs to be wedged back into the shared storage space (the world’s biggest game of Tetris anyone?). The truck has to be returned as we hold our breath, hoping they don’t notice those new dents from flying gravel on the passenger-side door. Then up at o’dark thirty just 2 days off-playa to submit to the security screening that precedes getting on the damned jet for home. At this point, my partner wisely refrains from asking if I’m up for going next year. The process at this end of going back through everything in two giant suitcases and two carry-ons, cleaning it all, sorting it all, packing it all away again — as well as doing eight to ten loads of laundry — masks the inevitable let-down of returning to a wholly non-transformed world. From September until, say, January I’m not particularly interested in thinking about Burning Man. Then in the winter, strangely enough, I begin to thaw. I’ll find myself going back through the hundreds of photos. My partner always gets the thankless job of sorting, organizing, labeling, and saving all those images. It can take weeks. Then he goes the extra mile and weaves the various videos into one highlight reel with music and captions. 
In years past, ticket sales could start as early as February and, because we camp with a long-established theme camp with a great reputation for cleaning up after itself and being super inclusive and fun, we’re included in the Direct to Camp sales. Meaning we’re with the cool kids and don’t have to worry about whether we’ll get tickets or not. We’re in. And, generally speaking, by the time my partner hits the “buy” selection for our two tickets and the vehicle pass required to get our U-Haul onto the playa, I’m beginning to come around. The Artist and his Art / Center Camp / 2013 Ambivalent, yes. Sad? That, too. By this time each year, I’m starting to ping-pong between dread and excitement. My partner will have gotten his annual photomontage printed and shipped out to Sacramento, ready to go up in Center Camp the day we arrive. I’ll have completed whatever little story I’ve written and we will have collated, folded, and stapled about 500 copies of it to be gifted at the event. Outfits will have been tried on and either stacked for packing or discarded, usually because they’re finally too shredded to be worn again. And this year, none of that is happening. We may know that in the front of our heads, but our lizard brains are just starting to kick into gear. This means that at any moment of the day or night one of us will have an inexplicable moment of time/location travel. We’ll smell things that aren’t there. We’ll feel things that aren’t happening. We’ll reach for something that doesn’t exist. These are just going to get more intense and frequent in the coming weeks. What the hell are we going to do with ourselves during the week of the non-existent Burn? It’s not as if we can build something to blow up out at the intersection of Lenox and West 112th Street (although some of the neighbors would be on board for that). I suppose we’ll make our own double-malt chocolate milkshakes and pretend we’re at Mel’s in Reno. 
We’ll Zoom with friends we only see out west during this event. We’ll watch our old videos and slide shows of the now-thousands of photos we’ve taken over the years. Other substances might be called for. And we’ll save up our money in hopes of being out there with our friends in 2021. Given that my cycle has been so thoroughly disrupted, I can only imagine what levels of out-of-my head anticipation/dread/excitement I’ll be experiencing next year at this time. For now, I guess I’ll just have to settle for hot showers, a comfortable bed, and a flush toilet. I can do that. © Remington Write 2020. All Rights Reserved. Something my partner, AleXander, wrote last year around this time:
https://medium.com/the-partnered-pen/when-youre-not-in-reno-df62a7b59123
['Remington Write']
2020-08-13 19:59:24.585000+00:00
['Life', 'Art', 'Relationships', 'Creativity', 'Travel']
If I had to start learning Data Science again, how would I do it?
A couple of days ago I started thinking: if I had to start learning machine learning and data science all over again, where would I start? The funny thing was that the path I imagined was completely different from the one I actually took when I was starting. I’m aware that we all learn in different ways. Some prefer videos, others are ok with just books, and a lot of people need to pay for a course to feel more pressure. And that’s ok, the important thing is to learn and enjoy it. So, talking from my own perspective and knowing how I learn best, I designed this path for if I had to start learning Data Science again. As you will see, my favorite way to learn is going from simple to complex gradually. This means starting with practical examples and then moving to more abstract concepts. Kaggle micro-courses I know it may be weird to start here, many would prefer to start with the heaviest foundations and math videos to fully understand what is happening behind each ML model. But from my perspective, starting with something practical and concrete helps to build a better view of the whole picture. In addition, these micro-courses take around 4 hours each to complete, so meeting those little goals up front adds an extra motivational boost. Kaggle micro-course: Python If you are familiar with Python you can skip this part. Here you’ll learn basic Python concepts that will help you start learning data science. There will be a lot of things about Python that are still going to be a mystery. But as we advance, you will learn them with practice. Link: https://www.kaggle.com/learn/python Price: Free Kaggle micro-course: Pandas Pandas is going to give us the skills to start manipulating data in Python. I consider that a 4-hour micro-course and practical examples are enough to have a notion of the things that can be done. 
Link: https://www.kaggle.com/learn/pandas Price: Free Kaggle micro-course: Data Visualization Data visualization is perhaps one of the most underrated skills, but it is one of the most important to have. It will allow you to fully understand the data with which you will be working. Link: https://www.kaggle.com/learn/data-visualization Price: Free Kaggle micro-course: Intro to Machine Learning This is where the exciting part starts. You are going to learn basic but very important concepts to start training machine learning models. Concepts that later will be essential to have very clear. Link: https://www.kaggle.com/learn/intro-to-machine-learning Price: Free Kaggle micro-course: Intermediate Machine Learning This is complementary to the previous one, but here you are going to work with categorical variables for the first time and deal with null fields in your data. Link: https://www.kaggle.com/learn/intermediate-machine-learning Price: Free Let’s stop here for a moment. It should be clear that these 5 micro-courses are not going to be a linear process; you are probably going to have to come and go between them to refresh concepts. When you are working on the Pandas one you may have to go back to the Python course to remember some of the things you learned, or go to the pandas documentation to understand new functions that you saw in the Introduction to Machine Learning course. And all of this is fine; right here is where the real learning is going to happen. Now, if you notice, these first 5 courses will give you the necessary skills to do exploratory data analysis (EDA) and create baseline models that you will later be able to improve. So now is the right time to start with simple Kaggle competitions and put into practice what you’ve learned. Kaggle Playground Competition: Titanic Here you’ll put into practice what you learned in the introductory courses. 
Maybe it will be a little intimidating at first, but it doesn’t matter; it’s not about being first on the leaderboard, it’s about learning. In this competition, you will learn about classification and relevant metrics for these types of problems, such as precision, recall and accuracy. Link: https://www.kaggle.com/c/titanic Kaggle Playground Competition: Housing Prices In this competition, you are going to apply regression models and learn about relevant metrics such as RMSE. Link: https://www.kaggle.com/c/home-data-for-ml-course By this point, you already have a lot of practical experience and you’ll feel that you can solve a lot of problems, buuut chances are that you don’t fully understand what is happening behind each classification and regression algorithm that you have used. So this is where we have to study the foundations of what we are learning. Many courses start here, but at least for me, I absorb this information better once I have worked on something practical first. Book: Data Science from Scratch At this point we will momentarily separate ourselves from pandas, scikit-learn and other Python libraries to learn in a practical way what is happening “behind” these algorithms. This book is quite friendly to read; it brings Python examples of each of the topics and it doesn’t have much heavy math, which is fundamental for this stage. We want to understand the principle of the algorithms but with a practical perspective; we don’t want to be demotivated by reading a lot of dense mathematical notation. Link: Amazon Price: approx. $26 If you got this far I would say that you are quite capable of working in data science and understanding the fundamental principles behind the solutions. So here I invite you to continue participating in more complex Kaggle competitions, engage in the forums and explore new methods that you find in other participants’ solutions. 
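The metrics mentioned above are simple enough to compute by hand, which is a good way to make sure you really understand them before leaning on scikit-learn. Here is a small pure-Python sketch; the toy label lists are made up for illustration, not actual Titanic or housing data:

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred):
    """Of everything predicted positive, how much really was positive?"""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fp)

def recall(y_true, y_pred):
    """Of all true positives, how many did we actually catch?"""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

def rmse(y_true, y_pred):
    """Root mean squared error, the regression metric used for Housing Prices."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Toy "survived?" labels vs. predictions:
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))    # → 0.75
print(precision(y_true, y_pred))   # 3 TP / (3 TP + 1 FP) → 0.75
print(recall(y_true, y_pred))      # 3 TP / (3 TP + 1 FN) → 0.75

# Toy regression targets vs. predictions:
print(round(rmse([3, -0.5, 2], [2.5, 0.0, 2]), 3))  # → 0.408
```

Once these feel obvious, the scikit-learn equivalents (`accuracy_score`, `precision_score`, `recall_score`) are just faster, better-tested versions of the same arithmetic.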
Online Course: Machine Learning by Andrew Ng Here we are going to see many of the things that we have already learned, but explained by one of the leaders in the field, and his approach is going to be more mathematical, so it will be an excellent way to understand our models even more. Link: https://www.coursera.org/learn/machine-learning Price: Free without the certificate — $79 with the certificate Book: The Elements of Statistical Learning Now the heavy math part starts. Imagine if we had started from here; it would have been an uphill road all along and we probably would have given up more easily. Link: Amazon Price: $60, and there is an official free version on the Stanford page. Online Course: Deep Learning by Andrew Ng By then you have probably already read about deep learning and played with some models. But here we are going to learn the foundations of what neural networks are, how they work, and learn to implement and apply the different architectures that exist. Link: https://www.deeplearning.ai/deep-learning-specialization/ Price: $49/month At this point it depends a lot on your own interests: you can focus on regression and time series problems or maybe go deeper into deep learning.
https://towardsdatascience.com/if-i-had-to-start-learning-data-science-again-how-would-i-do-it-78a72b80fd93
['Santiago Víquez Segura']
2020-05-31 19:38:01.968000+00:00
['Machine Learning', 'Python', 'Data Science', 'Deep Learning', 'Towards Data Science']
Stop Operating On Autopilot
I’ve been there. After graduating high school I thought I needed a major that would guarantee I’d always have employment and never have to struggle for money. This was all drilled into me by my family, teachers and the society I grew up in. Being a doctor meant I’d follow a noble pursuit that would secure my social standing and afford me everything I wanted. I struggled in University, battling a generalized anxiety disorder, seasonal depression and PTSD from unchecked childhood trauma, and I realized that I would never make it through medical school. More importantly, I knew I didn’t want to try. Changing course didn’t seem like a good idea, so once I was done I went home and became a teacher. Not because I wanted to educate and inspire youth but because I had no idea what else I could do with a science degree. Sure, there were options, but none that appealed to me. I was sold on becoming a teacher by a family friend who owned a school, and so I did. I unconsciously decided that teaching was just as noble. I hated my job. Thankless doesn’t begin to scratch the surface of what teaching high school is really like, and it created a bitterness in me that took me into the darkest of despairs. To cope with my discontent I drowned myself in substance abuse, excessive partying and reckless sexual behavior, and avoided dealing with the underlying issues. I flipped the autopilot switch and carried on. The routine formed without much effort: work until 4 pm, survive until happy hour at 5 pm, and then I could party and pretend my life was better than it was. Truth be told I was playing a dangerous game, Russian Roulette, the trigger being fired at the nesting doll I hid inside of. I’d lost sight of who I was, and those dreams seemed to belong to someone else. The destruction had taken a toll on my mind, body, and spirit. Nothing changed until I decided to take control of my life. 
There was no magical moment that made me snap out of the zombie zone, only a conversation: the words of a woman a lot wiser than I was, about food. At the time I was working at a private school that was grounded in traditional African values and classical learning. I knew the Dean, and his wife was a vegan chef. One day she came by and we had a long conversation about the relationship we have with our bodies and the fuel we put into them. Veganism was not new to me; I grew up around Seventh-Day Adventists and so I’d eaten plant-based before. Her approach was more holistic. She took orders on Wednesdays from staff members, so I decided to try. FOMO made me do it, and it changed everything. I started consciously eating, and before I knew it I gave up meat completely. This isn’t a sales pitch for veganism, I am currently not vegan, but this mindful approach to eating had a domino effect in my life. Every meal was thought out and planned, and so I consciously thought about what went into my body. As a result, I lost weight, I finally freed myself of religion, and I began doing activities I used to love before I let life fuck me: dance, yoga, and art. I stopped abusing substances at the frequency that I had been. Overall, I nurtured my mind, body, and spirit.
https://medium.com/candour/stop-operating-on-autopilot-5525fab34c84
['Nicole Bedford']
2019-11-01 12:42:28.927000+00:00
['Mindfulness', 'Life', 'Self-awareness', 'Self', 'Self Improvement']
🦈 Surf’s Up
C A R T O O N 🦈 Surf’s Up Yum yum yum Cartoon by Rolli Cartoonist’s Note №1 I’ve never surfed, and can say with confidence that I never will. Cartoonist’s Note №2 This cartoon will earn no money (Medium recently all-but-demonetized poetry, cartoons, flash fiction and other short articles). Please consider buying me a coffee. More coffee=more cartoons for you to enjoy. Cartoonist’s Note №3 Like this cartoon? Get it on a coffee mug! Cartoonist’s Note №4 This cartoon is brought to you by the letter “D.” “D” is for Dr. Franklin’s Staticy Cat and Other Outrageous Tales, my collection of humorous stories and drawings for children. Cartoonist’s Note №5 My new one-man Medium magazine is called — Rolli. Subscribe today. Cartoonist’s Note №6 From now on, I’m letting my readers determine how often I post new material. When this post reaches 1000 claps — but not before — I’ll post something new. Cartoonist’s Note №7 You might like these cartoons, too:
https://medium.com/pillowmint/surfs-up-6ffe5d0c4a7
['Rolli', 'Https', 'Ko-Fi.Com Rolliwrites']
2020-03-14 16:07:16.445000+00:00
['Humor', 'Covid 19', 'Comics', 'Cartoon', 'Coronavirus']
It Was Not To Be But I Dared For Once
It was not to be but I dared for once; Into the eyes of fear I stared for once. I stole the stars from my neighbor's sky; His nights are now dark and he's scared for once. I dreamt of ironing the wrinkles on my forehead; And woke to find my fate repaired for once. I tasted my thoughts to note what they lacked; Go check my Facebook where I shared for once. Gods do what they wish and get away with it; It is us who are never spared for once.
https://medium.com/the-coffeelicious/it-was-not-to-be-but-i-dared-for-once-c851c82ae65e
['Maitreya Thakur']
2019-11-01 10:00:06.252000+00:00
['Poetry', 'Poem', 'Writing', 'Fiction', 'Thoughts']
I Tricked A Sex Abuser Into Confessing And He Was Convicted
If you are like me, you are sick and tired of rapists like Brock Turner getting away with their crimes. Perhaps you’ve imagined capturing a sexual deviant and putting him in a cage. I had the chance to do just that by tricking a child sex abuser into confessing on video. How it Began It all started on a cool summer morning in late August. My 13-year-old niece, Jaz, and my sister, Kim, woke me early to tell me a secret. Jaz’s father, Ted, had molested her — most recently, only two nights before she told her mom. Half an hour later, Kim, Jaz, and I were in a room speaking to a police officer and Child Protective Services. Before we left the station, we found out that it would be Ted’s word against Jaz’s in court. I felt certain that I could convince Ted to confess, so I asked the police officer if I could legally record Ted without his knowledge. I learned that our state, South Carolina, was a one-party consent state. This meant that as long as one other person in the room knew about the recording, the video was admissible in court. Since Kim would be aware I was recording, we could move forward with our plan to get a confession. A Hidden Camera Several hours later, Kim and I were alone at home. We prepared for Ted’s arrival from work. I positioned my phone on the side table to be aimed at Ted’s favorite spot on our couch. I pressed record when I heard his vehicle pull into the drive. Kim paced the front porch chain-smoking. She and Ted had celebrated their 15th wedding anniversary only two weeks before, and she had to act naturally. She flicked her last cigarette butt into the gravel driveway, ground it flat under her heel, and forced a welcoming smile. Kim and Ted chatted casually on the porch for the longest three minutes of my life. When Ted finally came inside, he looked nervous. Kim walked across the living room to the couch. I glanced at the phone to be sure he was in the shot — he was. Kim gave me a questioning look, and I nodded. 
Kim sat in a chair opposite Ted and cleared her throat. She decided to get it all out at once. “Jaz told us that two nights ago, you came into her room when you thought she was sleeping. You pulled off her pajama bottoms and put your fingers in her vagina.” still from confession video by the author Denial “EXCUSE ME?!! I did not do that!” Ted’s face went three shades of scarlet, and Kim burst into tears. “We believe Jaz. You thought she was sleeping, but she was awake. Don’t you dare call her a liar! You’ve done enough to hurt her.” I piped in, “Look, Ted. We can either take care of this within the family, or we can go to the police. Either way, we will take care of this. This is your moment not to be a total jackass and at least admit you did it. If you don’t, we are leaving right now, and the next person who asks you will be the police.” Confession Ted was quiet for a moment. His face was red, and he was tearing up. He took off his glasses and polished them on his shirt. “Did you do it?” He took a breath, “Yes.” Kim almost lost it then. We knew it was true, but everything had happened so fast it felt like a dream. Ted’s one-word response brought us crashing back to reality. We talked to Ted for 30 minutes. He blamed alcohol for his actions and claimed he didn’t remember raping Jaz. The most chilling part of the confession came when I asked him when he first felt sexual attraction to Jaz. I was hoping that if he admitted to several sexual assault instances, it would lead to more jail time. “When she stopped looking like a baby and more like a girl.” I had to remain calm, which was difficult at this point. “Do you mean around 4 or 5-years-old?” He responded, “More like 6 or 7.” Kim stood up then. She walked to him and handed him her wedding ring. He broke into loud sobs. still from confession video by the author In the end, Ted admitted on video that he sexually assaulted Jaz so many times that he couldn’t give us a number. 
He admitted to fondling her and putting his fingers in her. Ted never admitted to brutally raping Jaz when she was 8 years old. He claimed that he might have been so traumatized by his own actions that he blocked it from memory. Ted stuck to this story when he confessed again to Child Protective Services.

After the Confession

It was 6 months before he was arrested. Another 6 months passed before he was tried for his crimes. The video confession clearly proved that Ted had regularly committed the crime of sexual assault for 8 years. However, Ted was tried only for the most recent sexual assault. The lawyers offered a plea for a lesser crime, but Jaz said she’d rather go to trial than accept the deal. Ted’s lawyer reminded the judge that Ted had never been arrested and had devoted twenty years of service as an EMT and five as a police officer. He blamed Ted’s traumatic childhood and stress at work. The judge promised that he would keep all of that in mind while he decided the sentence. Ted received twelve years out of a possible twenty-year sentence. With good behavior, he’d serve eight.

The Last Word

Jaz got her say in court. She looked tiny, tucked into the witness box in the big courtroom. But her voice was strong, and she looked directly at Ted while she spoke. “You won’t admit you raped me, but you and I know you did. You have ruined my childhood. I have nightmares every night, and I tried to kill myself at Christmas. How dare you sit there crying about what you did! You deserve life in jail. You are a coward.” Jaz marched away from the witness stand, and she didn’t look back.

Note from the Author About Jazmin

Every piece I write concerning my niece, Jazmin, is written with her express permission. She feels no need to be ashamed and wants everyone to know her story. She hopes, more than anything, that her story will help other survivors.
https://medium.com/survivors/i-tricked-a-sex-abuser-into-confessing-and-he-was-convicted-801ee4e09f8e
['Toni Tails']
2020-10-21 12:44:03.234000+00:00
['Justice', 'Sexual Assault', 'Mental Health', 'Family', 'Parenting']
Ethical Bias In AI-Based Security Systems: The Big Data Disconnect
It’s a question that has surfaced at conference discussion tables and social chats everywhere: “Can machines turn on humans?” It’s a question that often accompanies scenes and visuals from movies like The Terminator. But what we know, and what we’ve seen from the use of AI in big data, is that certain uncertainties and biases have to be considered when designing systems for larger scales and more complex environments.

[Image ref: http://www.fullai.org/ethical-issues-ai-top-mind-data-scientists/]

What is it that machines feel? What is it that makes them behave the way they do, other than the code that’s inserted into their mainframe? Do Isaac Asimov’s three laws still hold ground today in defining the standards for how machines should behave in a convoluted environment? The answers to these questions lie in the way we choose to define the rules of the game and how the machine responds to sudden changes. Ethical biases are a special zone of uncertainty in Artificial Intelligence studies, concerning the trinkets and levers that pull machines to behave in ways that may seem strange or even detrimental at times. With the rise of autonomous vehicles and AI-driven production methods set to take over the world, an unanswered question demands an answer once again: what do we do about the machines?

Introduction To Biases

Bias and variance, from a data perspective, are linked to how close the measured values are to the actual values. Variance is a measure of how far the measured values differ from each other, while bias refers to how much the measured values differ from the actual values. A model that fits its data very closely can show both low bias and low variance, yet this may mask how poorly the model will perform on new data. Achieving low bias and low variance at the same time is difficult, and the trade-off between the two is the bane of data analysts everywhere.
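These definitions can be made concrete with a small sketch. The numbers below are hypothetical, invented purely for illustration; they compute bias and variance of a set of repeated measurements against a known true value:

```python
# Bias and variance of repeated measurements against a known true value.
# All numbers here are hypothetical, for illustration only.

true_value = 10.0
measurements = [9.8, 10.4, 9.9, 10.3, 10.1]  # five noisy readings

mean = sum(measurements) / len(measurements)

# Bias: how far the average measurement sits from the actual value.
bias = mean - true_value

# Variance: how much the measurements scatter around their own mean.
variance = sum((m - mean) ** 2 for m in measurements) / len(measurements)

print(f"bias = {bias:+.3f}, variance = {variance:.3f}")
```

Here the readings scatter little (low variance) and sit close to the truth (low bias); a biased sensor would shift the whole list, raising the bias term without necessarily changing the variance.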
Biases are particularly difficult to treat in use cases where decision making can’t be reduced to simple binary computations. One is tempted to ask: why do biases find their way into the system in the first place? And if a machine at a critical decision point fails no less often than a human, why use machines at all? To answer these questions, one has to look at the general methodology of how models are built in the big data services realm.

Data is first collected and cleaned from actuators and sensors that provide raw numbers for analysts to work on. These values then undergo a preprocessing step where they are normalized, standardized, or converted to a form in which dimensions and units are removed. Once the data is converted into a suitable tabular or comma-separated format, it is fed into a network of layers or functional equations. If the model uses a series of hidden layers, rest assured they will have activation functions that introduce a bias at every step of the way. However, biases can also enter the system through the many pitfalls of collection methods. Maybe the data wasn’t balanced across certain groups or classes of outputs; maybe the data was incomplete or erroneous; or maybe there wasn’t any data to begin with. As datasets grow larger with more incomplete records, it becomes near certain that the system will fill those gaps with predefined values. This results in another kind of assumptive bias.

The Black Box Conundrum

Many scholars would also argue that numbers may not mean the same thing without proper context. In the controversial book ‘The Bell Curve’, for example, the claim made by the authors about IQ variations among racial groups was challenged with the notion of environmental constraints and differences. But if a human can arrive at such resolutions, could a machine remove such judgemental lapses from its own logic? The chances are minimal.
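The assumptive bias introduced by gap-filling can be seen directly. The sketch below (hypothetical sensor readings, plain Python) imputes missing records with the column mean, a common predefined fill value, and shows how it distorts the statistics a downstream model would learn from:

```python
# Mean imputation on an incomplete column (hypothetical sensor readings).
# Filling gaps with a predefined value changes the distribution the
# downstream model sees, which is one source of assumptive bias.

raw = [12.0, None, 11.5, None, 12.5, 48.0]  # None marks a missing record

observed = [x for x in raw if x is not None]
fill = sum(observed) / len(observed)        # predefined fill value: the mean

imputed = [x if x is not None else fill for x in raw]

mean_observed = sum(observed) / len(observed)
mean_imputed = sum(imputed) / len(imputed)
var_observed = sum((x - mean_observed) ** 2 for x in observed) / len(observed)
var_imputed = sum((x - mean_imputed) ** 2 for x in imputed) / len(imputed)

# The mean is preserved, but the spread is artificially deflated:
print(f"variance before: {var_observed:.1f}, after imputation: {var_imputed:.1f}")
```

Mean imputation keeps the average intact but shrinks the variance, so a model trained on the imputed column underestimates how noisy the real measurements are.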
If the machine has been fed erroneous or faulty data, it will output faulty values. The problem arises from the ambiguity of how the AI model is built. These are usually black box models that exist as data sinks and sources, with no explanation of what goes on inside. To the user, such black-box models cannot be interrogated or questioned as to how they arrive at a result. Furthermore, there are additional problems to be tackled with result variations. Because of the lack of understanding of how the black box operates, analysts may arrive at different results even with the same inputs. Such variations may not make a huge difference where precision isn’t key, but the data realm is seldom so generous. Industrial manufacturers, for example, would be at a loss if AI systems failed to predict highly specific parameters such as pH, temperature or pressure to enough decimal places. And when the objective is to provide answers to problems like loan eligibility, criminal recidivism or applicability for college admissions, AI’s lack of crisp, explainable values is a real disadvantage.

The onus, however, is on AI enthusiasts to tackle the issue from another angle. Simply put, the methods and rules of the interactions between layers must be resolved in order to interpret what every line of code and every coefficient represents. The black boxes thus have to be uprooted and dissected to find out what makes the machines tick, which is easier said than done. A look at even the simplest of neural networks is enough to show how complicated such systems are. Nodes and layers stack up, with individual weights that interact with the weights of other layers. It may look magnificent to the trained eye, but it leaves little room for interpreting and understanding the machine. Can this be simply due to the difference in language levels between humans and machines? Can there be a way to break down the logic of machine languages into a format that the layman can understand?
Types of Biases

Looking back at the history of biases in data analysis, several biases can be introduced as a result of improper techniques or the preconceptions of the entity responsible for the analysis. Misclassification and presumptive biases can be produced when models are tilted away from balanced results by the inclinations and interests of the programmer. It’s an all too common mistake that certain marketing analysts make when dealing with leads. Collection software provides great data both on people who have converted and on those who haven’t. Instead of building models that consider both classes of people, most may be tempted to build models just for the unconverted leads. In doing so, they end up blinding themselves to the richness of the available data for those that have become customers.

Another issue that plagues AI models is misclassification, which can culminate in disaster for analysts. In the production industry, such errors fall under the Type I and Type II categories: the former occurs when a record is assigned to a class it doesn’t belong to, and the latter when the model fails to assign a record to the class it does belong to. In the context of a production lot, quality control engineers are quick to stamp the accuracy of goods by testing only a small portion of them. It saves time as well as money, but it can be the perfect environment for such biases to occur. A similar example has been observed in image detection software, where neural networks scan through broken portions of pictures to reconstruct logical shapes. Similarities in the orientation of objects in images can cause the model to give out strikingly contentious results. Modern convolutional neural networks are capable of factoring in such intricacies, but they require large amounts of training and testing data to produce reasonable results.
Certain biases are a consequence of a lack of proper data, which makes the use of complex models unwarranted and even unnecessary. It is a commonly held belief that certain models and neural network programming should only be applied to datasets once they reach a statistically significant number of records. This also means that algorithms have to be designed to repeatedly check the quality of the data on a timely basis.

Fighting AI With AI

Is the solution to the problem of AI biases hidden within AI itself? Researchers believe it is important to improve the tuning methods by which analysts collect and demarcate information, taking into account that not all information is necessary. That being said, there should be an increased emphasis on removing and eradicating inputs and values that skew the models in untoward directions. Data auditing is another means by which biases can be checked and removed in time. Like any standard auditing procedure, this method involves a thorough cleanup and check of the processed data as well as the raw input data. Auditors track changes and note down possible improvements that can be made to the data, while ensuring that the data has complete transparency for all stakeholders. Specialized explainable AI (XAI) models have been under discussion as well, and can be brought to the table under the right circumstances. These models involve much more detailed parametric model development, where every step and change is recorded, allowing analysts to pinpoint likely issues and trigger instances. AI has also become a frontier for validating the accuracy and confusion matrices of models, instead of relying on simpler tools like ROC curves and AUC plots. These models perform repeated quality checks before deployment and attempt to cover the data over all classes, regardless of distribution or shape.
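A data audit of the kind described above can start very simply. The sketch below is hypothetical (invented records, column names and thresholds); it reports two checks an auditor would track over time, missing-value rates and class balance:

```python
# Minimal data-audit sketch: flag columns with too many missing values
# and check whether the label classes are badly imbalanced.
# Records, column names and thresholds are hypothetical.

records = [
    {"age": 34,   "income": 52000, "label": 0},
    {"age": None, "income": 48000, "label": 0},
    {"age": 29,   "income": None,  "label": 0},
    {"age": 41,   "income": 61000, "label": 1},
    {"age": 37,   "income": None,  "label": 0},
]

def audit(rows, missing_threshold=0.3, balance_threshold=0.25):
    findings = []
    for col in rows[0]:
        # Share of missing entries per column.
        missing = sum(1 for r in rows if r[col] is None) / len(rows)
        if missing > missing_threshold:
            findings.append(f"{col}: {missing:.0%} missing")
    # Share of the minority label class.
    labels = [r["label"] for r in rows]
    minority_share = min(labels.count(0), labels.count(1)) / len(labels)
    if minority_share < balance_threshold:
        findings.append(f"label imbalance: minority class is {minority_share:.0%}")
    return findings

for finding in audit(records):
    print("AUDIT:", finding)
```

Run on these records, the audit flags the income column (40% missing) and the label imbalance (20% minority class), exactly the kinds of issues the article suggests tracking before models are retrained.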
The nature of such pretesting is made more difficult by datasets where units and ranges vary significantly across the inputs. Likewise, for media-related data, the time taken to break down and condense content into numeric formats can itself lead to biases. However, thanks to a new slew of changes in the fundamentals of data transparency and third-party checks, companies are at least acknowledging that something is going wrong. New explainer loops are being inserted between the models as well, intended to open up the black boxes that sit at the heart of most AI models. These are again driven by AI models that are fine-tuned systematically to look for inconsistencies and errors.

A Few Case Examples In AI Ethical Failures

Data analysts will be familiar with the concepts of false negatives and false positives. These discrepancies in identifying outputs can result in special cases of errors with detrimental effects on people. A false negative, simply put, is when the system incorrectly recognizes a positive class as negative. Similarly, a false positive occurs when a negative class is incorrectly recognized as positive. The severity of such false cases can be better understood in the context of actual big data studies. In the famous case of CHD (coronary heart disease) being modeled using logistic regression, confusion matrices yielded large numbers of false positives and false negatives despite a high overall accuracy. To the average person, an accurate model may seem like the only ‘make or break’ check that matters. But even in the early days of data analysis, it was clear that such models would fall flat and even misdiagnose new patients. The trade-off was addressed by collecting more data streams and cleaning the columns to induce better data normalization, a step that is becoming a staple of the industry these days. And Uber’s autonomous vehicles suffering crashes in testing phases aren’t the only red flags that industry professionals are concerned about.
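The CHD example illustrates what is often called the accuracy paradox: on imbalanced data, a model can score high accuracy while producing nothing but false negatives. A hypothetical sketch (invented labels, not the actual CHD study) makes the point:

```python
# Accuracy paradox on imbalanced data (hypothetical screening labels).
# A degenerate "model" that always predicts the negative class looks
# highly accurate while missing every positive case.

# 1 = has the disease, 0 = healthy; positives are rare.
actual = [1] * 5 + [0] * 95
predicted = [0] * 100            # always predicts "healthy"

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)

accuracy = (tp + tn) / len(actual)
recall = tp / (tp + fn)

print(f"accuracy = {accuracy:.0%}")  # looks great
print(f"recall   = {recall:.0%}")    # every sick patient missed
```

The model scores 95% accuracy yet catches zero positive cases, which is why the full confusion matrix, not accuracy alone, has to be inspected for problems like medical diagnosis.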
These fears extend to other spheres, such as identification and machine perception, as well. Tech giant Amazon came under media scrutiny after its recruiting model had learned to develop what the media called a ‘gender bias’ towards women. In a shocking case of applicant bias (seen previously with applicants in tech companies), the model rated female applicants lower than male applicants for the same jobs. Problems at the other end of the spectrum have been observed at tech giants like Apple, where the much-hyped Face ID allowed different users to access the same locked phone. One may argue that the models used to identify facial cues might generate similar results even for different people. It was only a matter of time before engineers, ironing out the faults, concluded that assumptive biases had been produced from questionable inputs.

AI’s big leap into the medical world has been set back quite a notch by the failure to integrate ethical values into systems that would have replaced nurses and staff on the go. This is mainly dealt with by constructing all the possible case examples in which a machine can properly replace a human and take the very same decisions, although philosophy majors may argue that even humans don’t operate under a single set of guidelines. There are various schools of ethics: Kantian, egalitarian, utilitarian and so on. How these schools of thought respond to various ethical conundrums is left to each person and his or her interests. In the famous trolley case, a person’s inclination to pull or not pull the lever is dictated purely by the ethical framework in which that person operates. The question of accountability becomes fuzzy when machines take the place of the decision-maker.

Final Words: How To Make AI More Ethical

The eternal question of where we draw the line on our tolerance of these systems determines how far machines are included in our day-to-day activities.
AI has been the building block of life-saving and life-supporting frameworks in transportation, predictive studies, financial investments, security, communication, and production. It has seeped into all significant aspects of human life without raising many naysayers. The line is drawn when AI fails to embody the very philosophies that the humans who created it operate under. We are far from the days of Yevgeny Zamyatin and Alan Turing, when machines were regarded as impartial. Breathing new life into machines by teaching AI to be ethical is a challenge that comes down to the fundamental question of what it means to be ‘human’. We now know that to construct a proper ethical framework, AI has to be stripped down to its bare essentials and driven by a context-aware approach that emphasizes the quality of the results. As with the fundamentals of diversity in the workplace, the steps are simple:

- Keep a close watch on the data. Keep it varied but normalized.
- Have a team monitor the preprocessing steps from time to time.
- Eradicate exclusions of any form in the output.
- Remove junk values that may be erroneous or useless to the model.
- Refine, audit, share and recollect results, incorporating them back into the model.
- Eliminate unwanted interactions and data silos, and always have sanity checks for what the objective ultimately is.
- Knock down data silos and teach the AI to think, rather than merely modeling it to think.
- Keep the Johari window of awareness in check. Cover the unknown knowns and the known unknowns. As for the unknown unknowns, such biases will always remain, unfortunately.

Originally posted on the Cuelogic Blog.
https://medium.com/cuelogic-technologies/ethical-bias-in-ai-based-security-systems-the-big-data-disconnect-a4e4c806f349
['Harsh Binani']
2019-09-27 11:59:51.857000+00:00
['Machine Learning', 'Artificial Intelligence', 'Big Data Services India', 'Data Security', 'Big Data Analytics']
The IxDA Student Challenge Winner is…
Judges Commentary

The judges scored each team on several criteria: whether they accomplished the task, how they outlined their goals, the fidelity of their proposed solution, their use of the inclusive design process, the extensibility of their solution to other areas, and their storytelling.

“Project keyHue tackled a challenge most people knew nothing about,” judge Haiyan Zhang, Director of Innovation at Microsoft Research, said. “They were able to bring us into the experience of the user, through their video and their research, and by talking to someone with dyscalculia and speaking to some of the emotional challenges they face. Their idea was very innovative, and I liked the combination of an on-screen app and the physicality of still relying on the piano, so a person can, over time, learn how to play a real piano and learn the muscle memory.”

For Neil Churcher, one of our other judges (and Head of Design at Orange), Project keyHue’s ability to connect their research to the development of the design is what set them apart. “They had an idea to solve some of the challenges their research subject had, and you could see how it was extensible. They also had a believability to it that came through in their presentation, which gave them a little edge.”

For the winning team, it was all shock and excitement as they heard their names announced from the stage. “Disbelief,” Kevin Ong said. “Just being here was an amazing experience. We took our time and wanted to be sensitive — we didn’t know anything about dyscalculia, so we spent extra time getting to know what we were jumping in to. We’re so grateful to Neil and his daughter Ingrid for letting us call them in New York even with the time difference and helping us build a good case and story.”

None of the winners had previously entered a design competition, and it was actually Mélodie Jacob’s first time designing a project in English (she primarily speaks French).
“I’m glad we got to explore a learning disability and learn more about it — in three days we made a whole project. It was nice to learn how to be efficient, and to work with people from other teams, schools and languages.”

“I think what I’ve been learning is that our networks are so important. We reached out and didn’t just rely on the conference attendees to do extra research. That had a really big impact on our project and guided us forward. Without speaking with Neil, we would have stayed stuck.” — Katarina Yee

Answering the Challenge

The four-day challenge began with an inclusive design workshop led by Margaret Price, a principal design strategist at Microsoft, and Cassie Klingler of Microsoft’s Hacking STEM team. Ana Domb (a UX consultant with a Master’s from MIT) and Ahmed Riaz (who won the first year of the competition and is head of UX strategy at Logitech) co-chaired the IxDA Student Design Challenge this year.

“It’s such an incredible opportunity to understand how students apply inclusive design,” Margaret Price said. “It’s incredible to see them in their space. One of the greatest benefits of inclusive design is being able to change people’s lives, and hearing from students about how inclusive design has shifted their world view reinforces its importance. I hope they walk away with a deeper understanding of inclusive design and how they can take it, pay it forward, and make an impact.”

“I hope the students walk away feeling empowered,” Albert Shum, corporate vice president of design at Microsoft, said. “Design can inspire and create change. There’s tremendous need to make sure that humanity is at the center of everything we make, and I’m excited to see the next generation of design thinkers and makers shape our future.”

But at the end of the day, why does design matter? Why does getting involved in students’ design education matter?
“To create inclusive experiences, I fundamentally believe we need to have an inclusive culture that extends the table to bring in different perspectives. How can we shape our world and the products we make to be more inclusive for everyone? This should be design’s agenda for the future we want.” — Albert Shum “It’s a magical feeling when you see the smile on someone’s face once they’ve been empowered to accomplish something that they were previously unable to do,” Tim Allen, a partner of design at Microsoft, said. “I want more designers to feel that magic as often and as consistently as possible. To me, it’s one of the best aspects of being a designer!” Neil Churcher added, “In places like this [Interaction 18] you have a collective interest, knowledge, and a desire to pursue what’s important [in design] to project onto students. These challenges are doing good, by helping students take responsibility as designers for what you deliver and generating insights for real problems. It’s a good way to shape education.” Congratulations again to our winning team, and fantastic effort by all nine students who participated in the challenge! Interaction 19 is coming to Seattle in 2019, and we’re looking forward to another year of design innovation ahead of us. Thanks for following along for Interaction 18 — hope to see you next year!
https://medium.com/microsoft-design/the-ixda-student-challenge-winner-is-7464d928428
['Ashley Walls']
2019-08-27 17:24:07.790000+00:00
['Design', 'UX Design', 'Ixda', 'Inclusive Design', 'Microsoft']
How to Build a Matrix Module from Scratch
How to Build a Matrix Module from Scratch

If you have been importing Numpy for matrix operations but don’t know how the module is built, this article will show you how to build your own matrix module.

Motivation

Numpy is a useful library that enables you to create a matrix and perform matrix operations with ease. If you want to know about tricks you could use to create a matrix with Numpy, check out my blog here. But what if you want to create a matrix class with features that are not included in the Numpy library? To be able to do that, we should start by understanding how to build a matrix class that supports the basic functions of a matrix, such as printing, matrix addition, scalar, element-wise, or matrix multiplication, and getting and setting entries. By the end of this tutorial, you should have the building blocks to create your own matrix module.

[Photo by Joshua Sortino on Unsplash]

Why Class?

Creating a class allows new instances of a type of object to be made. Each class instance can have different attributes and methods. Thus, using a class will enable us to create an instance that has the attributes and multiple functions of a matrix. For example, if A = [[2,1],[2,3]] and B = [[0,1],[2,1]], A + B should give us the matrix [[2,2],[4,4]]. Methods named like __method__ are special (“dunder”) methods. You don’t normally call them directly; these built-in methods let the interpreter know which one to invoke when you perform a specific function or operation. You just need to implement the right method for your goal.

Build a Matrix Class

I will start from what we want to create and then find the way to build the class according to that goal. I recommend you test your class as you add more methods, to see if the class acts like what you want.
Create and print a Matrix object

What we want to achieve with our class is shown below:

>>> A = Matrix(dims=(3,3), fill=1.0)
>>> print(A)
------------- output -------------
|   1.000,    1.000,    1.000|
|   1.000,    1.000,    1.000|
|   1.000,    1.000,    1.000|
----------------------------------

Thus, we want to create a Matrix object with the parameters dims and fill.

class Matrix:

    def __init__(self, dims, fill):
        self.rows = dims[0]
        self.cols = dims[1]
        self.A = [[fill] * self.cols for i in range(self.rows)]

We use __init__ as a constructor to initialize the attributes of our class (rows, cols, and the matrix A). rows and cols are assigned from the first and second dimensions of the matrix. The matrix A is created with fill as the values and self.rows and self.cols as the shape.

We should also create a __str__ method that enables us to print a readable format like the one above.

    def __str__(self):
        m = len(self.A)  # get the first dimension
        mtxStr = ''
        mtxStr += '------------- output -------------\n'
        for i in range(m):
            mtxStr += ('|' + ', '.join(map(lambda x: '{0:8.3f}'.format(x), self.A[i])) + '|\n')
        mtxStr += '----------------------------------'
        return mtxStr

Scalar and Matrix Addition

Goal: standard matrix-matrix addition

>>> A = Matrix(dims=(3,3), fill=1.0)
>>> B = Matrix(dims=(3,3), fill=2.0)
>>> C = A + B
>>> print(C)
------------- output -------------
|   3.000,    3.000,    3.000|
|   3.000,    3.000,    3.000|
|   3.000,    3.000,    3.000|
----------------------------------

Scalar-matrix addition (pointwise)

>>> A = Matrix(dims=(3,3), fill=1.0)
>>> C = A + 2.0
>>> print(C)
------------- output -------------
|   3.000,    3.000,    3.000|
|   3.000,    3.000,    3.000|
|   3.000,    3.000,    3.000|
----------------------------------

We use the __add__ method to perform the addition. Since addition is commutative, we also want to be able to add from the right-hand side of the matrix. This can easily be done by calling the left addition.
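The body of __add__ itself isn’t reproduced here, so the sketch below is one possible implementation, written to be consistent with the Matrix constructor shown earlier; it handles both matrix-matrix and matrix-scalar addition before delegating the right-hand case to __radd__:

```python
# A sketch of Matrix.__add__: supports both matrix-matrix and
# matrix-scalar (pointwise) addition. Minimal constructor included
# so the example is self-contained.

class Matrix:
    def __init__(self, dims, fill):
        self.rows = dims[0]
        self.cols = dims[1]
        self.A = [[fill] * self.cols for _ in range(self.rows)]

    def __add__(self, other):
        result = Matrix(dims=(self.rows, self.cols), fill=0.0)
        for i in range(self.rows):
            for j in range(self.cols):
                if isinstance(other, Matrix):
                    # matrix + matrix: add entries pairwise
                    result.A[i][j] = self.A[i][j] + other.A[i][j]
                else:
                    # matrix + scalar: add the scalar to every entry
                    result.A[i][j] = self.A[i][j] + other
        return result

    def __radd__(self, other):
        # 2.0 + A delegates to A + 2.0, since addition is commutative
        return self.__add__(other)

A = Matrix(dims=(2, 2), fill=1.0)
print((A + 2.0).A)   # [[3.0, 3.0], [3.0, 3.0]]
print((A + A).A)     # [[2.0, 2.0], [2.0, 2.0]]
```

The isinstance check is what lets one method serve both the pointwise-scalar and matrix-matrix goals shown above.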
    def __radd__(self, other):
        return self.__add__(other)

Pointwise Multiplication

Goal: matrix-matrix pointwise multiplication

>>> A = Matrix(dims=(3,3), fill=1.0)
>>> B = Matrix(dims=(3,3), fill=2.0)
>>> C = A * B
>>> print(C)
------------- output -------------
|   2.000,    2.000,    2.000|
|   2.000,    2.000,    2.000|
|   2.000,    2.000,    2.000|
----------------------------------

Scalar-matrix pointwise multiplication

>>> A = Matrix(dims=(3,3), fill=1.0)
>>> C = 2.0 * A
>>> C = A * 2.0
>>> print(C)
------------- output -------------
|   2.000,    2.000,    2.000|
|   2.000,    2.000,    2.000|
|   2.000,    2.000,    2.000|
----------------------------------

Use the __mul__ and __rmul__ methods to perform left and right pointwise multiplication.

Standard Matrix-Matrix Multiplication

Goal:

>>> A = Matrix(dims=(3,3), fill=1.0)
>>> B = Matrix(dims=(3,3), fill=2.0)
>>> C = A @ B
>>> print(C)
------------- output -------------
|   6.000,    6.000,    6.000|
|   6.000,    6.000,    6.000|
|   6.000,    6.000,    6.000|
----------------------------------

Matrix multiplication is achieved with the __matmul__ method, which Python reserves specifically for the @ operator.

Access and Set Entries

Goal:

>>> A = Matrix(dims=(3,3), fill=1.0)
>>> A[i,j]
>>> A[i,j] = 1.0

Use the __setitem__ method to set a value at given matrix indices, and the __getitem__ method to get the value at given indices.

Put everything together

Create and Use the Module

After creating the class Matrix, it is time to turn it into a module:

- Rename the file that contains the class to __init__.py.
- Create a folder called Matrix.
- Put main.py and another folder called linearAlgebra inside this folder.
- Put the __init__.py file inside the linearAlgebra folder.

The folder Matrix contains main.py and linearAlgebra; the folder linearAlgebra contains __init__.py. Use main.py to import and use our Matrix class.

Conclusion

Awesome! You have learned how to create a matrix class from scratch. There are other methods in Python classes that would enable you to add more features to your matrix.
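The bodies of __mul__, __rmul__, __matmul__, __getitem__ and __setitem__ discussed above are not reproduced in the article either, so the following is one possible sketch consistent with the class as described (a minimal constructor is repeated to keep it self-contained):

```python
# Sketches of the remaining methods: pointwise multiplication,
# standard matrix multiplication (@), and indexed get/set.

class Matrix:
    def __init__(self, dims, fill):
        self.rows, self.cols = dims
        self.A = [[fill] * self.cols for _ in range(self.rows)]

    def __mul__(self, other):
        # pointwise: works for Matrix * Matrix and Matrix * scalar
        result = Matrix((self.rows, self.cols), 0.0)
        for i in range(self.rows):
            for j in range(self.cols):
                factor = other.A[i][j] if isinstance(other, Matrix) else other
                result.A[i][j] = self.A[i][j] * factor
        return result

    def __rmul__(self, other):
        # scalar * Matrix delegates to Matrix * scalar
        return self.__mul__(other)

    def __matmul__(self, other):
        # standard matrix multiplication: (rows x cols) @ (cols x p)
        result = Matrix((self.rows, other.cols), 0.0)
        for i in range(self.rows):
            for j in range(other.cols):
                result.A[i][j] = sum(self.A[i][k] * other.A[k][j]
                                     for k in range(self.cols))
        return result

    def __getitem__(self, idx):
        i, j = idx              # supports the A[i, j] syntax
        return self.A[i][j]

    def __setitem__(self, idx, value):
        i, j = idx
        self.A[i][j] = value

A = Matrix((3, 3), 1.0)
B = Matrix((3, 3), 2.0)
print((A * B).A[0])   # pointwise row: [2.0, 2.0, 2.0]
print((A @ B).A[0])   # matmul row:    [6.0, 6.0, 6.0]
A[0, 0] = 5.0
print(A[0, 0])        # 5.0
```

Note that A[i, j] works because Python passes the tuple (i, j) as the single index argument to __getitem__ and __setitem__, which the sketch unpacks.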
As you have the basic knowledge of creating a class, you can create your own version of Matrix that fits your interests. Feel free to fork and play with the code for this article in this GitHub repo. I like to write about basic data science concepts and play with different algorithms and data science tools. You can connect with me on LinkedIn and Twitter. Star this repo if you want to check out the code for all of the articles I have written. Follow me on Medium to stay informed with my latest data science articles like these.
https://towardsdatascience.com/how-to-build-a-matrix-module-from-scratch-a4f35ec28b56
['Khuyen Tran']
2020-11-27 21:26:04.140000+00:00
['Numpy Array', 'Python', 'Modules', 'Data Science', 'Matrix']
Why are you not designing your day-to-day experience?
Why are you not designing your day-to-day experience?

When designing everything around you becomes second nature.

The other day a coworker and I were chatting about our phones’ home screens and the way we organize our app icons. We spent a good amount of time describing to one another our implicit rules on how we prioritize homepage icons, how we choose apps that will sit on the edge vs. the middle of the screen, how we thoughtfully select the ones that will stay fixed on the bottom dock. We were verbally documenting our own unspoken rules on how we had “designed” our phone home screen experience. Here’s where I stand as of today:

- I try to limit the number of icons displayed on my homepage to the ones I use every day. The rule is simple: if I haven’t used an app for more than 3 days, I move it to another page. The fact that I have less information to look at helps reduce cognitive load every time I unlock my phone.
- I removed all red badges from the homepage icons (except Slack and work email) to reduce anxiety and the feeling that I need to click on certain icons in the first place, because I really don’t. I’ve noticed my phone usage has dropped drastically since I changed that setting.
- Since I realized I’m often holding my phone with my right hand, I moved icons that I need to access quickly to the right side of the screen, so they’re more easily accessible with my thumb.
I don’t really need to open Twitter, LinkedIn, or Instagram when I’m walking outside holding my phone with one hand and trying to cross a busy street. The list goes on forever. I have rules around how my phone’s second screen is organized as well. I could write a whole book on the little things I do to optimize my day-to-day experience — not only with my phone, but with my laptop, my wardrobe, office desk, chargers, fridge, backpack. But in all honesty, that would be an incredibly boring book.
https://uxdesign.cc/why-are-you-not-designing-your-day-to-day-experience-269ec91d7d7
['Fabricio Teixeira']
2019-03-09 15:22:44.892000+00:00
['Product Design', 'Ts', 'Design Thinking', 'Design', 'UX']
An Inconvenient Masterpiece: When Art Took on Fascism
The bull and minotaur are dark and monstrous. The horse, often depicted as white, is almost always the victim of a bull attack. The light-bearing child — always a girl — never seems in harm’s way despite being close to the danger she witnesses. The combination of the mythological and the photo-like black, white and grey colour scheme holds a peculiar power over the viewer. Had the painting been in full colour it perhaps would have been viewed as too aloof, too removed from the atrocity. The black and white has the effect of grounding the mythmemes in a newsprint-like reality that is actually underscored, and not betrayed, by the twisting and fracture of their forms. Atrocity Exhibition In July 1937 the painting was put on display in the Spanish Pavilion at the International Exhibition. The theme of the exhibition — technology — was completely ignored by the Spanish government, whose exhibits focused on the civil war and the attack on democracy by Franco’s nationalists. Ferrer de Morgado’s Madrid 1937 (Black Aeroplanes) was also on display and preferred by government officials for its more realistic portrayal of civilian victims of the Civil War. (Fair use. Source: Wikipedia) For many, Picasso’s avant-gardist treatment of the subject was unpalatable. The painting was criticised by leftists as not really revealing much of the truth of the Civil War. Some of the Spanish delegation criticised its modern and child-like forms; they preferred another painting on display, Ferrer de Morgado’s Madrid 1937 (Black Aeroplanes), which depicted a similar scene with more naturalism. Le Corbusier commented that Guernica “only saw the backs of visitors.” The German guide to the World’s Fair supposedly echoed the often-used criticism of Picasso’s art that “any four-year-old could have painted [it].” But Picasso was always dismissive of traditionalist criticisms, later declaring: “I am only concerned with putting as much humanity as possible into my paintings. 
Too bad if this offends a few worshipers of the traditional human figure […] What is a face in the end? Its photo. Its makeup. What is in front? Inside? Behind? And the rest? Doesn’t everyone see it in their own way? I have always painted what I’ve seen, felt.” (Picasso in Arts de France number 6, 1947) “Felt” is the key word here. The art historian John Berger praised the way Picasso’s neo-cubist style allowed him to convey pain and suffering in the very forms of the painting. We “see” pain in the twisted and deformed limbs of the victims, their dagger-like tongues, their flat, wide eyes. Moreover, shard-like waves convulse through the painting’s forms. These can only be described as traces, since they intersect the bodies and the scene, complicating the relationship between figures and ground. In an earlier post I describe how Picasso further developed cubism to convey that which cannot be shown. These are the traces of immense suffering. Picasso intended to gift the mural to the Basque people upon the closure of the International Exhibition, but to his dismay it was rejected by officials. The painting instead went on a tour of Scandinavia and England in 1938, arranged by Picasso’s dealer, Paul Rosenberg, where it was exhibited to raise awareness of (and money for) the struggle in Spain. In 1939, when Franco had won the civil war, the mural traveled to America, where it toured until it was finally entrusted to the Museum of Modern Art in New York. A victorious General Franco arrives in San Sebastian, a Basque city, to fascist salutes in 1939 (Source: Wikipedia). Picasso entrusted Guernica to the Museum of Modern Art in New York for safekeeping until liberty and democracy were returned to Spain. (Public domain. Source: Wikipedia) 
An Inconvenient Masterpiece It was during its safekeeping on display at the Museum of Modern Art that Guernica took on a profound significance as the definitive anti-war image of our time. It was of course a document of history and a recognised masterpiece of modern art, but the war in Vietnam gave the painting a new kind of meaning. Members of the Art Workers Coalition protest in front of Guernica holding up images of women and children massacred by American troops in Vietnam (Source: Wikipedia) In early 1970, the Art Workers Coalition staged an anti-war protest in front of Guernica. The participants held up pictures of women and children massacred at the village of My Lai in 1968 to embarrass MoMA’s board. Nelson Rockefeller and William S. Paley, two trustees of the museum, were staunch supporters of the war in Vietnam, and board members allegedly profiteered from the conflict. The protestors’ point was simple, but it activated a new life in Guernica as a kind of scabbed wound — half dead, half living — that could be picked at. In the subsequent decades the mural became a frequent site of protest. Tony Shafrazi, an Iranian-American artist, daubed “Kill Lies All” over the mural in red paint, claiming his vandalism was an act of creative collaboration. His action was likely inspired by the release of Lieutenant William Calley, who had admitted to murdering women and children at My Lai. “I wanted to bring the art absolutely up to date, to retrieve it from art history and give it life,” he told a magazine in 1980. As famous as the Mona Lisa, Guernica has seen copies, quotations and parodies appear across the globe. (Fair use. Source: Wikipedia) He didn’t need to. Guernica is alive in all the copies, quotations, allusions and even parodies of its terrible, suffocating grandeur across murals, banners, postage stamps, placards, badges and artworks. 
Guernica is as instantly recognisable as Barber’s Adagio for Strings (written a year before Guernica), and its life is in the use made of it. Like Barber’s Adagio, Guernica is “performed” over and over again. Nelson Rockefeller had wanted to buy the painting but Picasso refused to sell, insisting instead that, like him, the mural was an exile. Guernica would return to Spain, where it belonged, Picasso had further stipulated, when the Republic was once again established. Franco died in November 1975 and his autocratic state crumbled. In 1978 Spain emerged as a democracy, albeit one with a constitutional monarchy. On the basis of the latter fact, the Museum of Modern Art disputed the painting’s return to Spain. The painting did return, originally protected by a blast-proof screen. It turns out the screen was never needed; Guernica had survived the trauma of the Vietnam War and found peace in Spain. Having been denied the painting, Rockefeller commissioned a life-size tapestry, which he eventually donated to the United Nations in New York. The tapestry hangs in a corridor outside the Security Council’s meeting room, where it is intended to inspire the conscience of those entering the chamber. On 5 February 2003 Colin Powell and John Negroponte addressed the UN outside the Security Council chamber to legitimise military action against Saddam Hussein’s Iraq. The argument was made on the basis that the dictator had weapons of mass destruction and was a financial backer of Al Qaeda. Guernica was covered for the event by a blue curtain, apparently to make the television pictures of the briefing clearer. Whether it was an act of censorship or not, the shrouding of the mural caused people to make the connection between the bombing of Guernica and the attack on Iraq. So famous is the painting’s imagery, so implanted in the imagination, that its very covering made it all the more visible. 
Had the US delegation left Guernica as it was, the painting perhaps would not have been associated with the coming attack on Iraq; its covering revealed an awkward truth.
https://medium.com/the-sophist/an-inconvenient-masterpiece-when-art-took-on-fascism-8a1ffe9f4fe6
['Steven Gambardella']
2020-07-19 13:34:56.688000+00:00
['Art', 'Culture', 'History', 'Creativity']
Stop Optimizing Dumb Stuff
I have a friend. She’s brilliant at arts and crafts. Every time I enter her place, she’s tinkering. Decorating. Customizing a birthday gift. Preparing a surprise package. And it all looks amazing. Bar none. But when she tells me the story of how her current project came together, I always die a little bit inside. Last time, she was dressing up a gift box. The insides were lined with holiday napkins, like carpet in a living room. And into this soft bed, she placed little trinkets and treats. Three of them were tiny, slim bottles: two filled with a reddish, transparent liqueur, one with a tan-colored one. As it turned out, those hadn’t been easy to get. She told me that, first, she saw the walnut liqueur at the farmer’s market. She wanted to get two bottles, but they were expensive. I think it was $8 apiece. So she scoured the town. Eventually, she found the same walnut liqueur in a bookstore. $5 per bottle. Score! But they only had one. Ugh! After deciding to cough up the full price, she went back to the farmer’s market. Except now, their little stash of walnut liqueur was gone. That’s how she ended up with two red and one tan bottle — and a big dilemma over which friend gets what. After all, there were multiple gift boxes to send. This is just one of many stories and she is just one of many examples, but it goes to show: people are awesome. Humans have an amazing ability to focus on details, to obsess over the microscopic until a beautiful, big picture comes together. But we’re wasting this ability by applying it to dumb shit. Like cents instead of dollars. Let’s say the whole “I’ll find it elsewhere cheaper” detour took my friend three hours. Ideally, she’d have saved $3 apiece for two bottles — a grand total of $6 — making her time worth $2/hour. In reality, she only saved $3 and missed her goal of getting two bottles, creating extra stress and losing even more time, on top of “being paid” poorly. I see this all the time. 
People spend days deliberating a $50 purchase when they make that money in an hour or two. They obsess about coupons instead of asking for $1/hour more. And they’re afraid to drop $20 on the wrong book but spend the same money on two more drinks when they’re already buzzed. I wish I could grab all these people — including my friend — by the shoulders, shake them, and yell: “Stop optimizing dumb shit!” Stop flicking through 250 TV shows only to realize it’s now too late to watch even a single episode. Stop tapping “next” in your playlist to find the perfect workout track if it ruins your cadence of sets. Stop looking for that Instagram post “you know you saw just yesterday” and tell your friend the story instead. Stop fixing details and finally start asking the big questions: How can I earn more? What really deserves my time? Who’s actually worthy of my love? Stop saving ten cents on chocolate and start cutting your cable. Stop running for the bus and start making enough to skip entire workdays. Stop squeezing every sorta-ok yobbo into your calendar and start enjoying longer-than-planned breaks with true friends. Stop haggling over 5% more pay for your first job offer and start applying to every company you’d actually want to work for. Stop worrying about the cost of every broken door handle and start looking for a place that’s not owned by a scrooge and run by a lazy super. Stop exerting yourself with niceties people won’t appreciate or have learned to take for granted and start finding folks who value your time. At the end of the day, it all comes down to this: If your life feels like a blurry, run-down, second-hand version of what it should be, it’s ‘cause you’re optimizing dumb shit. You're wasting your potential obsessing over the wrong parts. Like my friend’s gift box and the story I told about it, your life is littered with decorations. It might look gorgeous from the outside, but on the inside, it doesn’t function. 
We pour our hearts and souls into details that, ultimately, don’t matter, yet we degrade our most important choices — where we live, who we date, what we work on — to go-with-the-flow gut decisions. We settle for what’s there. What good is having all your ducks in a row? What purpose does that really serve? Forget presenting a consistent picture to society. Make sure you’re painting. Doing what matters to you. Spending time on the important things. But here’s what I find most fascinating about all this: the line between dumb shit and true productivity is often incredibly thin. I keep telling my friend to document her decor extravaganzas. To post her creations on Instagram. She could have thousands of followers by now. Make the money to buy $8 liqueur without blinking. But that, she is afraid of. Too personal, she says. Is it, really? Nowadays, a lot of our dumb shit wouldn’t be so dumb if we shared it. Because others obsess about the same things. Finally! Someone who loves Magic cards as much as me. Who wastes as much time playing Fortnite. Someone who stepped up and told me my dumb shit is worth it. One minute, one picture, one tweet can turn dumb shit into the bedrock of your future career. The foundation of something wonderful. Even if it becomes just a small part of your life, it’ll now be a part that contributes. That doesn’t just take. That won’t drag you down. But if you don’t show up, we’ll never know. You’ll rob us of your gifts and yourself of your happiness. You’ll let your fear of looking stupid conquer your fear of living with regret. You’ll still be great at a great deal of things, but they’ll matter a lot less. To a lot fewer people. Because that part is up to you. So please, share your contributions. Not all of them, but all those you care about. Those you’d gladly dedicate a whole afternoon to. Where wasting time won’t turn into memories of time wasted, but memories of time enjoyed. Stop optimizing dumb shit. And start caring about what matters.
https://ngoeke.medium.com/stop-optimizing-dumb-sh-t-add1d67d9f99
['Niklas Göke']
2019-03-01 08:28:30.078000+00:00
['Life Lessons', 'Self Improvement', 'Happiness', 'Art', 'Creativity']
Big Data versus Teenage Sex
After implementing your Big Data stack: “How was I?” Big Data and Analytics: terms that frequently pop up in newspapers, magazines, airports, or even during pub chats to spice up a conversation. These days, everybody talks about it, nobody knows how to tackle it, everybody thinks the others are doing it and hence claims to do it as well, but, as the title suggests, only the fortunate have (positive) experience(s) with it. But what is big data, actually? Let’s start by narrowing the perspective down to size only and consider more than 1 terabyte of data. Do we know of any successful business cases that create added value by storing, analysing and managing more than 1 TB of data? Of course we do, but they are too often limited to domains such as astronomy and bio-informatics (e.g. genomics), and only rarely found in business applications such as risk management, fraud detection, marketing, or supply chain management. In this contribution, we would like to share some of our experiences originating from various research partnerships which we recently initiated with a diverse set of firms and institutions operating in sectors such as banking, retail, and government. A first issue concerns the organizational aspect. How can this new technology be successfully embedded into a company’s DNA? A first option would be to set up a company-wide Analytical Centre of Excellence and staff it with data scientists handling all Big Data & Analytics requests from the various departments. It is our experience that such a centralized approach oftentimes simply doesn’t work. Fully leveraging and competing on analytics requires business knowledge, implying that the data scientists should be close to the business. 
In an earlier column, we described the ideal skill mix of a data scientist as follows: quantitative skills, ICT skills (e.g. programming), business knowledge, communication and presentation skills, and creativity. In other words, a data scientist is a multidisciplinary profile, and to fully exploit this unique skill set another organizational approach is needed, based upon the principle of subsidiarity. The main idea is that a centralized unit should only manage the issues that cannot be successfully managed by the local business units, such as managing the ICT environment (both hardware and software), privacy rules, model governance, documentation, etc. A substantial number of data scientists should be directly embedded into the individual business units, so that the analytics projects can be well-focused and fed with the right business knowledge. Business ownership of every analytical project is essential, but the cross-fertilization between data scientists across business units is also important. A well-focused, centralized analytics unit can play a key role in evangelizing, stimulating and communicating good practices and lessons learned (and, vice versa, preventing repeated rookie mistakes). A closely related point of attention concerns the sharing of (different types of) data across business units, since this is precisely where the added value is to be situated! Hence, we prefer not to consider only size when talking about Big Data, but also to take into account the new insights that can originate from coalescing different data sources, both structured (e.g. transactional data) and unstructured (e.g. server logs, click streams or social media feeds). A second point of attention concerns the economic value of a Big Data & Analytics investment. Firms only invest in a new technology when a positive return is anticipated. Although the costs of an analytical project are fairly easy to grasp (e.g. 
acquisition and (post-)ownership costs), this is far less evident for the benefits. Our experience indicates that firms primarily invest in Big Data & Analytics under competitive pressure, rather than based upon a firm belief in its positive return. The latter is, however, clearly warranted. Just think about new strategic opportunities from better targeting customer segments, identifying new product needs, or anticipating customer behavior. These benefits are, however, hard to quantify precisely upfront, and it’s our belief that the fruits of the investment are harvested about 3 to 5 years after the initial investment, although reaping some low-hanging fruit in the initial stages of a project is also an explicit concern, if only for the sake of management buy-in. At our university (and undoubtedly many others as well), we teach our students to adopt a long-term perspective when making investments. Unfortunately, due to both internal and external (not seldom stock-market) pressure for immediate results, companies are far too often short-sighted, thereby impeding the adoption of new technologies (e.g. Big Data & Analytics) that foster sustainable growth.
https://medium.com/dataminingapps-articles/big-data-versus-teenage-sex-788fcc3f5ab1
['Seppe Vanden Broucke']
2016-02-28 11:15:32.041000+00:00
['Entrepreneurship', 'Data Science']
The Life-Altering Decision-Making Technique I Learned This Week
On a wild whim, I set up a call with a blogger I’ve been following when I realized I was going to be in her city last week. She’s a color analyst. I had my colors done. I’ll write more about that later. Count on it. It was a lot of fun. But there was something about the process that struck me, and that’s what I want to talk about today. Getting your colors done involves sitting in a chair with controlled lights pointed at you, in front of a gray wall, wearing a gray smock. The analyst drapes pieces of fabric over your shoulders, under your chin, and the two of you look at your face and decide how your skin reacts to each color. What was most interesting to me was that she didn’t just go color-by-color through the rainbow. She used colors that were very close to each other and compared them to each other. Is a cool, icy blue better on me, or a warm, peacock blue? Sometimes the colors were so close to each other that it was difficult to tell them apart. Cool white or warm white? But when she put them against my face, there was a subtle, but obvious, winner. She told me that it was all about perspective and that the human eye isn’t calibrated to see perspective very well. It needs something to compare. Later, I thought: what a great lesson. Life is all about perception. And perception is nearly impossible to see with the naked eye. You need something to compare it to. We all need something to compare everything with, if we ever want to make good decisions. Cool, icy blue was okay on me until two seconds later, when I saw that peacock blue literally made me look ten years younger. When you’re trying to figure your life out, you can use the same technique. When you have a decision to make, put it next to a couple of other choices. And then hold them up to your life and see what happens. Let’s try it with my life. I’ve been on the freak-out lately about buying a house. I want to. I don’t want to. I’m scared to. I don’t want to be trapped. 
The housing market, though! But I love this house so much. I want to be able to paint walls and relax into a home without worrying about the rent going up or the owner selling it out from under us. Looking at buying a house by itself, in isolation, just has me going around in circles. I could do this forever. Eventually, I’ll make a decision, but I might never feel good about it. But what if I use the technique that the color analyst used on me? What if I compare buying a house with renting one? Hold them both up to my life. My actual life, the way it is now — against a gray wall, in a gray smock, as it were. So no guessing. No idolizing or wishing. My actual, current life. Wrinkles, bags-under-the-eyes, perimenopause, and all. What I see is that each option offers a subtle difference. We’re likely to want to at least have the option to move to a city in four or five years, after Ruby (our youngest daughter) graduates from high school and is off to college. But we’re committed in the meantime to staying in our current town for four or five years. Financially, we’re in a good position to buy the kind of house we want right now, because we live in this tiny town where the cost of housing is almost ridiculously low. If we lived in a city now, where houses were more expensive, that would be different, because I’m self-employed and I have large student loans, which complicate the mortgage process. We have a large, complicated family and for at least the next few years having a big house with a lot of space is essential. It’s difficult to find an appropriate house to rent and almost constantly on my mind that if something happens with our current rental situation, we might not be able to find another house big enough. If stability and the ability to pick up and move at the drop of a hat are a balance, right now I think my husband and I are slightly favoring the stability side. (We haven’t always.) 
If I imagine buying and renting as drapes I hold up to the face of my life — I can see that neither is terrible, but one lights things up a little more. It feels semi-dangerous, which might just be because it’s new and not my status quo. You (and I) can do the same thing with any decision. Just choose a second option. What would you do if you didn’t do the thing you’re considering? If you don’t go back to school or you don’t move across the country or you don’t say yes or you don’t say no or . . . whatever. What’s the alternative? Or an alternative. Because you can do this over and over. I could pull out another drape, right? I could compare buying a house in my current little town with pulling up stakes and moving to a city now. Or moving back west this summer. And on and on, if I feel the need. So find your alternative. Then get honest about your life the way it is right now. The color analyst put me in front of a gray wall and in a gray smock so that she could control as many factors of the process as possible. Put your life in the same controlled atmosphere. No sugar coating. It is what it is, right now in this moment. No judgement. No shame. Hold your choices up. See what happens. If you go one way — what does that do to your life? How about the other way? When the analyst held a color that had any orange in it up to me, my skin turned gray. It was the weirdest thing. I don’t want gray skin! Orange is not my color. I literally had no idea. I don’t actually wear orange much, so instinctively, I must have figured it out. But it was startling to see the response my skin had to the color. If I held ‘buy a house’ up to my life and saw that it would almost certainly cause financial panic and be a massive burden that would lead to anxiety and stress and possible (or probable) bankruptcy — that would be the equivalent of gray skin. If you hold your decision up to your life and get some kind of ‘gray skin’ response — well, the choice will still be up to you. 
I can wear an orange dress if I want to. You can do what you want to. But at least you’ll go in with your eyes open. But first, hold your other choices up. When the analyst put that peacock blue color against my face, it was like someone had turned on the lights. My skin brightened. It highlighted the green in my eyes in an interesting way. It even made my cheeks rosy. If I held ‘keep renting’ up to my life and it just showed me a path toward getting out of debt and building the kind of life I want — that’s the equivalent of a ‘light up’ response. At the very least, before you make a ‘gray skin’ choice, you want to know if one of your other options is going to light your life up, right? So, one more time: Find at least one alternative to the decision you’re trying to make, so you have something to compare your choice with, for perspective’s sake. If you don’t do what you’re considering, what would you do instead? (Even if that something is nothing.) Be honest about your life — put it against a gray wall, in a gray smock. Hold your choice up to your life. What happens? Again — honesty matters here. If you have a friend or partner you trust to talk this out with, that might help. Now hold the alternative up to your life. What happens? Compare the two. Do the same with any other alternatives. Does one choice light up your life and the other give it gray skin? Or is one just a little bit brighter, a little bit more, than the other? The difference can be subtle. Once you have the information, you can make an informed decision. And that’s the best kind, don’t you think?
https://medium.com/the-write-brain/the-life-altering-decision-making-technique-i-learned-this-week-da2e7c2d79c6
['Shaunta Grimes']
2020-01-08 16:46:04.099000+00:00
['Life Lessons', 'Self', 'Productivity', 'Goals', 'Life']
Writing Synchronous Code in Swift
Photo by Anton Darius on Unsplash

When you write mobile apps in Swift, you usually have a lot of background work. I’ve been working as a mobile developer for almost 10 years and I can hardly remember a single project without Internet requests. Each Internet request takes time to be processed, usually an unknown amount of time, possibly endless. If you do such work in the main (or UI, which is the same) thread, your UI will get stuck. That’s why asynchronous tasks in Swift are designed so that they never do this. The most common way to avoid it is to use a callback or delegate. For example:

    API.getUserInfo(userId: userId) { (userInfo, error) in }

Here the app flow stops only for a small fraction of a second to prepare and send a request, but it doesn’t wait for the result. We get the result in a closure: userInfo contains information about the user (or nil if an error happened), and error is an optional error. Let’s see a more complicated example. We need to load information about the user; if it succeeds, we also need a list of their purchases. If it fails, we can try again, but not more than 3 times.

    func loadUser(attempt: Int = 0) {
        API.getUserInfo(userId: userId) { (userInfo, error) in
            if let error = error {
                if attempt < 3 {
                    self.loadUser(attempt: attempt + 1)
                } else {
                    self.showError(error)
                }
            } else {
                API.loadPurchasesForUser(userId: userId) { (purchases, error) in
                    // ...
                }
            }
        }
    }

It looks much more complicated, but is still readable. I call such things fir-trees or Christmas trees. Bad-looking code. Thanks https://www.clipartmax.com/ for the tree picture. Now imagine that you also need to make 3 attempts to load purchases. And you need to load payment methods, lists of friends with their statuses, lists of available restaurants and their menus. This function will split into several smaller functions. Flutter or JavaScript programmers will say “why not add async / await to this code?” — and they will be right. It will make the code much simpler. 
But how do we do it in Swift? We’d like to get something like this (note that the code below is not valid Swift code):

    func loadUser() {
        for i in 0..<3 {
            (userInfo, error) = await API.getUserInfo(userId: userId)
            if let error = error {
                if i == 2 {
                    self.showError(error)
                    return
                }
            } else {
                self.userInfo = userInfo
                break
            }
        }
        for i in 0..<3 {
            (purchases, error) = await API.getPurchases(userId: userId)
            if error == nil {
                self.purchases = purchases
                break
            }
        }
        // ...
    }

Yes, it also has a tree-looking structure, but it will never go deeper. And we don’t use any recursion. Everything is in one function, and the flow is much clearer. A short note on threads One of the purposes of modern multi-tasking operating systems is to manage processes and threads. Old systems (like DOS) didn’t do this, which is why a single “hung” app forced you to reboot your computer. iOS is not so fragile: it can run many processes at any given time, and each of them can have one or more threads. The first thread, also called the main thread or UI thread, handles the main app flow and changes in the UI. You can’t change the text of a UILabel or hide a UIImageView in any other thread, or it will lead to a crash. If you need to do background work, you need another thread. In the case of a single-core CPU (Central Processing Unit), these threads will be executed one after another on the same core. In multi-core CPUs, threads can run on different cores, which potentially makes your app faster. If a thread is “stuck”, it won’t hang the whole app. Even better, you can detect it and fix it from the main thread. Yet it’s not good practice to create many threads which just wait in the background. Each thread has its own context and uses memory and computational power of your device. In simple cases, a thread runs a sequence of statements, the programming code that you as a developer provide it with. In more complicated scenarios, as in the case of the main thread, it uses a dispatcher. 
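As it happens, the wished-for syntax is no longer hypothetical: Swift 5.5 introduced native async/await, and a callback API can be bridged into it with withCheckedThrowingContinuation. Below is a minimal sketch of the retry loop in real Swift; getUserInfo is a stub standing in for the article’s hypothetical API, not a real library call:

```swift
import Foundation

struct UserInfo { let firstName: String; let lastName: String }

enum APIError: Error { case failed }

// A callback-style API like the one in the article (hypothetical stub).
func getUserInfo(userId: Int, completion: @escaping (UserInfo?, Error?) -> Void) {
    DispatchQueue.global().async {
        completion(UserInfo(firstName: "John", lastName: "Doe"), nil)
    }
}

// Bridge the callback API into async/await with a continuation.
func getUserInfo(userId: Int) async throws -> UserInfo {
    try await withCheckedThrowingContinuation { continuation in
        getUserInfo(userId: userId) { userInfo, error in
            if let userInfo = userInfo {
                continuation.resume(returning: userInfo)
            } else {
                continuation.resume(throwing: error ?? APIError.failed)
            }
        }
    }
}

// The retry logic becomes a plain loop: no recursion, no pyramid.
func loadUser(userId: Int) async throws -> UserInfo {
    var lastError: Error = APIError.failed
    for _ in 0..<3 {
        do { return try await getUserInfo(userId: userId) }
        catch { lastError = error }
    }
    throw lastError
}
```

Calling it is then a one-liner inside any async context: let user = try await loadUser(userId: 42).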
The dispatcher has a queue of small functions which it runs one by one. The main thread has a loop like this: run app code, update the UI, listen to user events, repeat. Of course, this is simplified, but it’s important to understand the core concept. You need to remember two important rules: never, under any circumstances, “hang” the main thread; and don’t change the UI from any background thread. To add code to the queue of the main thread, use this code:

    DispatchQueue.main.async {
        // ...
    }

To create a new thread and run code there, do this:

    DispatchQueue.global().async {
        // ...
    }

Besides the async method, you can use asyncAfter, which schedules code to run after some time:

    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        // ...
    }

In this example, .now() + 1 means that the code will run in one second. Warning! Don’t use it as a timer. Due to the specifics of the dispatch queue, it doesn’t guarantee that the code will run exactly as scheduled. Converting asynchronous functions into synchronous In most libraries and Apple APIs we get asynchronous functions as functions with callbacks (delegates). So our first step will be turning one into the other. Let’s define a couple of asynchronous functions. This example will output:

    Optional(__lldb_expr_3.UserInfo(firstName: "John", lastName: "Doe"))
    ["Purchase 1", "Purchase 2"]

The first line will appear in 3 seconds, the second one in 2 more seconds. Now let’s get rid of the fir-tree. We can wait while the function execution ends using a semaphore. Not only will this make us wait until the function execution ends, it will also make it thread-safe. The execution will end in the same function where it started. Let’s see how it simplifies our code: if we remove the print statements, we’ll have only 3 lines of code! And we can add any logic we want. But there’s one downside, so let’s discuss it in detail. What is a semaphore? If you didn’t understand the idea of a semaphore from the previous example, let’s talk about it in more detail. 
When we have more than one thread, they need synchronisation. There are many different mechanisms, but the simplest one is called a semaphore. It’s a thread-safe counter. Thread-safe means that you can change it in one thread and read it from another without any negative consequences. It will be perfectly synchronised between threads. You can create a semaphore, change its value, and wait on it. This code creates a new instance:

let semaphore = DispatchSemaphore(value: 0)

value is the initial value of the semaphore. Zero means that there will be two threads which need synchronisation.

Passing zero for the value is useful for when two threads need to reconcile the completion of a particular event. — Apple Developer Documentation

This code starts waiting. The code execution will be paused until another thread gives a signal:

semaphore.wait()

Be careful about where you run it: wait() checks whether the semaphore already has a signal, and if so it returns control immediately. Otherwise it blocks the calling thread until a signal arrives, so waiting on the main thread will freeze the UI.

And this function gives a signal:

semaphore.signal()

It’s your responsibility to make sure that semaphore.signal() will be called, otherwise semaphore.wait() will never return.

Running functions in different threads

The output of the original code is the following:

Start
Optional(__lldb_expr_21.UserInfo(firstName: "John", lastName: "Doe"))
Middle
["Purchase 1", "Purchase 2"]
End

We now have synchronous functions, and they “block” the flow until the execution is finished. Running such functions from the main thread is totally unacceptable.
Let’s go over two different examples. Here’s the output of the first:

Start
Middle
End
Optional(__lldb_expr_23.UserInfo(firstName: "John", lastName: "Doe"))
["Purchase 1", "Purchase 2"]

When we use DispatchQueue....async , it schedules the execution of a closure on a specified thread, but it starts only when the thread is free. Each thread has a queue of code blocks to run, and we add our closure to the end of this queue. What’s interesting here is that we see userInfo first, and only later the list of purchases. Why does this happen? The list of purchases should come first: it takes only 2 seconds, while fetching userInfo takes 3 seconds. But the second code block stays in the queue until the execution of the first one is finished. And not only does this block stay in the queue, but so do all the other UI-related code blocks.

Let’s have a look at another example. In this case, instead of using the main thread, we create two background threads and run the code there. The output looks different:

Start
Middle
End
["Purchase 1", "Purchase 2"]
Optional(__lldb_expr_27.UserInfo(firstName: "John", lastName: "Doe"))

It happens because we start these functions simultaneously, and, as expected, getting the purchase list ends faster.

What’s the correct use?

The correct use of blocking synchronous functions is this:

1. We create a background thread.
2. We implement all the logic inside the background thread without touching the UI.
3. If we need to update the UI, we jump onto the UI thread.

This code waits for 5 seconds, then outputs both lines together:

Optional(__lldb_expr_29.UserInfo(firstName: "John", lastName: "Doe"))
["Purchase 1", "Purchase 2"]

Synchronous frameworks

Turning each function into a synchronous one can take a lot of time. Can it be done more easily? Well, there are people out there who have already done some work in that regard.
For example, there is a synchronous version of the popular framework Alamofire. The Parse framework, which was popular years ago (when it was officially supported by Facebook), has synchronous versions of its calls. But such cases are the exception. Firebase, for instance, directly asks you not to do it, explaining that the nature of its calls is asynchronous and should stay that way.

Conclusion

Swift is a constantly evolving language, but it still doesn’t have Dart-like or JavaScript-like async / await constructions. The developer community sends proposals to the Swift developer group asking to add them, but those haven’t been incorporated yet. Still, there’s a good solution involving background threads and semaphores. It may not be so easy, but if you master it and design your API calls and other long-running processes synchronously from the beginning, your project can benefit from it a lot. Happy coding and see you next time!
https://medium.com/swlh/writing-synchronous-code-in-swift-3c0ccf2904b2
['Alex Nekrasov']
2020-09-28 14:32:31.393000+00:00
['Mobile App Development', 'Synchronicity', 'Asynchronous', 'Swift', 'iOS App Development']
Covid-19 Vaccine: The Most Important Questions to Ask
It’s Okay to Have Questions About a Covid-19 Vaccine. Here’s What to Ask. The approval process, interpreting clinical trial results, and how to be confident in your choice to get the vaccine This story is part of “Six Months In,” a special weeklong Elemental series reflecting on where we’ve been, what we’ve learned, and what the future holds for the Covid-19 pandemic. Depending on whom you ask, anywhere from half to 70% of Americans plan to get the Covid-19 vaccine when it’s available. But that means a lot of folks likely have questions before they’ll decide to line up for it. It’s entirely reasonable for people to be skeptical about a new vaccine. In fact, concern about a potential Covid-19 vaccine is healthy, particularly given the speed of its development, and is shared by many scientists and public health experts. “We tell people all the time to get involved in your health and ask questions, and then we act surprised when people ask questions about vaccines,” says Holly Witteman, PhD, an associate professor of medicine at Laval University in Quebec City, Canada, who studies vaccine hesitancy. So, what should you be asking? Ahead, recommendations from the experts. Did the vaccine successfully go through all appropriate regulatory channels? Above everything else, a Covid-19 vaccine must make it through the gauntlet of approvals at the FDA and CDC that any other vaccine, at any other time, for any other diseases, would be expected to pass through. “I do think if a vaccine is approved through a science-based process, the public can have a lot of trust that the vaccine is safe and effective,” Witteman says. But most people aren’t familiar with all those processes or which ones might involve shortcuts that could make them uneasy. Generally speaking, a vaccine goes through three clinical trials. Then the vaccine must receive FDA approval based on the clinical trial data. After the FDA licenses the vaccine, the CDC makes recommendations on who should get it and when. 
The FDA could also issue an Emergency Use Authorization, but only if the data strongly shows high efficacy and no safety issues. And there is an inherent expectation that public health and regulatory officials and manufacturers will ensure a vaccine goes through the right processes. “I do think if a vaccine is approved through a science-based process, the public can have a lot of trust that the vaccine is safe and effective.” “The promise is that all the safe measures will still be taken, but it’s okay to have questions about how that is done,” says Maya Goldenberg, PhD, an associate professor of philosophy at the University of Guelph in Ontario, Canada. Goldenberg specializes in vaccine hesitancy research and has a book on it coming out in spring 2021. “The extent to which people trust vaccines is going to depend on the extent to which they trust the system that supports vaccines.” Questions about regulatory processes are especially valid when evidence of political interference already exists, such as attempts by the administration to edit the CDC’s Morbidity and Mortality Weekly Report. “We are seeing significant damage to precisely those U.S. institutions and government agencies that are supposed to be protecting public health. If the system doesn’t work, there’s no reason to trust the product,” Goldenberg says. Take, for example, the influence the executive branch had on FDA decisions about hydroxychloroquine and convalescent plasma, notes Paul Offit, MD, director of the Vaccine Education Center and an infectious disease pediatrician at Children’s Hospital of Philadelphia, who co-developed the rotavirus vaccine. “There’s evidence that this administration [influences federal regulatory agency processes], so you wonder if that would be true here also,” Offit says. Fortunately, several aspects of the approval processes are transparent and immune to political influence.
Data safety monitoring boards, composed of academics independent from the government and pharmaceutical companies, closely observe the clinical trials to watch for possible safety problems and assess the evidence for efficacy. Then, two committees within the FDA and CDC, both composed of independent experts, meet publicly to make recommendations to the parent agency. The FDA’s Vaccines and Related Biological Products Advisory Committee (VRBPAC) meets in open, publicly broadcast meetings where anyone can hear the experts discuss the data. The next one for the Covid-19 vaccine will be October 22. “If that group says they don’t think it’s ready, and then it gets approved, that’s worrisome, or if they’re skipped,” Offit says. “I don’t think they should be skipped.” So far, FDA Commissioner Stephen M. Hahn has vowed that Covid-19 vaccine candidates “will be reviewed according to the established legal and regulatory standards for medical products,” including VRBPAC. The other committee is the CDC’s Advisory Committee on Immunization Practices (ACIP), which created a Covid-19 Vaccines Work Group in April that includes 41 experts from a wide range of different fields. At its open June meeting, the work group established its guiding principles for decision-making, starting with safety being “of paramount importance,” including across different populations. Following safety were “the importance of diversity in clinical trials” and the “efficient and equitable distribution of vaccines.” Again, open meetings of ACIP will allow the public to hear what these independent experts think about the vaccine’s safety and efficacy based on the data. If the committee expresses any concerns, those concerns will be immediately public in real time. What was found during the clinical trials? While many people will feel comfortable relying on those committees, even if they feel the larger agencies are suspect, others will want to study the data themselves. Here’s how to interpret it. 
Where do you find the data? All clinical trial data submitted to the FDA is publicly available, without a paywall. You should be able to find this data by searching online for “FDA Covid-19 vaccine approval” and clicking on the result in the FDA’s “Vaccines, Blood & Biologics” section. The approval page will include links to all the supporting documents in the vaccine’s approval application. For example, the HPV vaccine’s page includes the clinical reviews that contain all the evidence on the Gardasil 9 trials. If you have trouble finding it this way, you may need to do some digging on ClinicalTrials.gov, but news articles will likely link to the data as well. “Look at the tables, where the data are summarized,” Witteman advises. “Also, there will be supplemental material and data with extra tables, and sometimes that’s where you get those details you’re interested in.” How large were the trials? It’s important that the later (phase 3) trials have enough people to detect rarer adverse events that might not show up in only a couple thousand people. The FDA requires a minimum of 3,000 people in these trials, but more than 10,000 is ideal. So far, most trials for Covid-19 vaccines have 15,000 to 20,000 people. Who was in the trials? Thousands of people in a trial doesn’t mean much if all the participants are demographically similar. Participants need to be diverse in terms of age (including older adults), race, ethnicity, comorbidities, and sex. Additional trials may be needed to establish safety and efficacy for children and pregnant people. Witteman said she would especially want to look at differences in safety and efficacy between males and females and among different racial and ethnic groups. How effective is the vaccine? How many people received the vaccine and how many received a placebo (a “fake” vaccine)? How many people who received the vaccine got sick with Covid-19? How serious were those infections? 
How many people who received the placebo got sick with Covid-19? These numbers will be summarized in tables, and the researchers use them to calculate the efficacy of the vaccine. Ideally, they will also calculate the efficacy for different subgroups, such as by age, race, and sex. The FDA has issued detailed guidance to vaccine manufacturers on development of a Covid-19 vaccine, including a required minimum efficacy of 50%. That means the FDA won’t approve a vaccine unless it prevented Covid-19 infections in at least 50% of vaccinated people in the clinical trials. (Efficacy refers to how well the vaccine works in the clinical trials; effectiveness refers to how well it works in the general population after licensure.) For comparison, the measles vaccine is about 97% effective, and the annual influenza vaccine is usually 40% to 60% effective. Another helpful question is how severe the illness is in people who got the vaccine and still got sick. Did the vaccine reduce the severity of disease compared to those in the placebo group? (In other words, were there more mild illnesses overall in the vaccine group?) A harder question to answer, but one that the researchers should hopefully try to address, is whether people who get the vaccine and test positive for Covid-19 without symptoms are still contagious. What are the side effects? The biggest issue for most people will be the vaccine’s safety profile. “We need to know what the benefit/risk trade-off is,” Goldenberg says. “Of course, the lower the risk, the better, and the higher the efficacy, the better.” To learn about side effects, first look at the list of all adverse events that were reported and how common each was. An adverse event isn’t always an actual side effect: Any negative health event occurring during a trial is an adverse event, even if clearly unrelated to the vaccine (such as getting hit by a car). Side effects are the adverse events that evidence shows were caused by the vaccine. 
If the vaccine is an injection, common side effects will almost certainly include soreness, redness, and swelling at the injection site, and possibly fainting (listed as syncope), because these are common with any vaccine injection. The frequency of other side effects will give you an idea of the vaccine’s overall safety and whether there are any substantial risks. What systems are in place to find side effects after approval? After FDA licensure (meaning the agency has approved the vaccine for use) and after people begin getting the Covid-19 vaccine, safety surveillance doesn’t stop. Several programs specifically look for adverse events as the vaccine is distributed. Physicians and even individuals can (and should) report adverse events to the Vaccine Adverse Event Reporting System (VAERS), which enables researchers to watch for any upticks in certain types of reports. (A VAERS report does not mean the vaccine caused the problem, but if a problem is reported over and over, it’s a red flag for researchers to investigate.) The Vaccine Safety Datalink is a collaborative research project that involves studies to investigate possible links between a vaccine and negative effects. Finally, the Post-licensure Rapid Immunization Safety Monitoring System analyzes health insurance claims data to look for possible vaccine safety concerns. “It’s never a matter of when you know everything because you never know everything. The question is when do you know enough?” How should I interpret the trial results as a layperson? It’s okay if you dig up the PDF of a clinical trial and have no idea where to begin. Most people have little experience interpreting clinical trial data on their own, so Witteman recommends watching for explainers in the news and listening to experts who offer their assessments. How do you know if someone is actually an expert? Look them up on their institution website or on PubMed. 
They should have publications or other experience in vaccine trials, vaccine safety, or evaluating clinical trials. Many epidemiologists who specialize in infectious disease will have valuable perspectives as well. If you’re reading a news article, search for other articles by that journalist. Have they covered vaccines and clinical trials before? “Ideally, everyone would have access to a well-informed health professional who can help them with that decision, but I know that is not the reality, especially not in the U.S.,” Witteman says. If you don’t have a doctor you trust to answer your questions about the vaccine, Witteman recommends seeking out experts in the news or on social media who acknowledge the limits of the data and their own knowledge. Ideally, they’ll have experience in vaccine development or safety. “You’re not looking at people who appear 100% certain about everything,” she says. “You’re looking for people who are saying, ‘These are the limits of what we know.’ That’s an indication of someone who has the confidence to be really honest.” The bottom line One of the challenges of making decisions based on scientific evidence is that the evidence is never complete — but you still have to make a decision. “You never eliminate uncertainty, you just reduce it,” Offit says. “So, when people ask the question, ‘Is it absolutely safe?’ No, nothing is absolutely safe.” For example, it’s impossible to have data on long-term side effects from a brand-new vaccine. No vaccines licensed in the United States have ever shown long-term effects that weren’t discovered during clinical trials or within a year after licensure, but it’s still not possible to guarantee that will never happen. “It’s never a matter of when you know everything, because you never know everything,” Offit says. “The question is when do you know enough? 
With the information we have now, do the benefits outweigh the risks?” If the vaccine makes it through VRBPAC, FDA approval, and ACIP and is recommended by the CDC, that means the experts believe the benefits outweigh the risks. Being able to ask these questions and find the answers can help you feel confident about agreeing with them.
https://elemental.medium.com/its-okay-to-have-questions-about-a-covid-19-vaccine-here-s-what-to-ask-e8196cb8f222
['Tara Haelle']
2020-09-18 12:46:38.407000+00:00
['Pandemic', 'Six Months In', 'Covid 19', 'Coronavirus', 'Vaccines']
Understanding Reference Counting in Python
In this article, we will go through one of the memory management techniques in Python called reference counting. In Python, all objects and data structures are stored in the private heap and are managed by the Python memory manager internally. The goal of the memory manager is to ensure that enough space is available in the private heap for memory allocation. This is done by deallocating objects that are not currently being referenced (used). As developers, we don’t have to worry about it, as it is handled automatically by Python.

Reference Counting

Reference counting is a memory management technique in which an object is deallocated when there are no references to it left in a program. Let’s try to understand it with examples. Variables in Python are just references to objects in memory. In the example below, when Python executes var1 = [10, 20] , the list object [10, 20] is stored at some memory location 0x20bfa819cc8 , and var1 is only a reference to that object in memory. This means var1 doesn’t contain the value [10, 20] but refers to the address 0x20bfa819cc8 in memory.

Image by Author

In the above example, there is only one variable referencing 0x20bfa819cc8 . So, the reference count of that object is 1. So, how do we check the reference count of the object a variable is referencing? There are 2 ways to get the reference count of an object:

1. Using getrefcount from the sys module. In Python, objects are passed by reference. Hence, when we run sys.getrefcount(var1) to get the reference count of var1, the call itself creates another reference to var1 . So, keep in mind that it will always return one extra reference count. In this case, it will return 2.

2. Using c_long.from_address from the ctypes module. In this method, we pass the memory address of the object. So, ctypes.c_long.from_address(id(var1)) returns the value 1, as there is only one reference to that object in memory.
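The two methods described above are shown as screenshots in the original article. A minimal sketch of the same checks (note that both tricks are CPython-specific: id() returning the object’s memory address is an implementation detail):

```python
import sys
import ctypes

var1 = [10, 20]

# Way 1: sys.getrefcount. The call itself creates a temporary reference,
# so it always reports one more than the "true" count.
print(sys.getrefcount(var1))                       # 2

# Way 2: read the ob_refcnt field directly at the object's address.
print(ctypes.c_long.from_address(id(var1)).value)  # 1
```

The two numbers always differ by exactly one, for the reason explained above.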
As you can see from the code below, getrefcount() and from_address return reference counts of 2 and 1, as expected.

Image by Author

Let’s say we execute var2 = var1; var3 = var1 . What will the reference count of the object be? You guessed it right!! It’s three, as there are now 3 variables referencing the same object in memory. Once the variable var3 is set to None, or if it gets deleted during the execution of the program, the reference count drops to 2, as you can see from the code below. Likewise, when we set var2 to None, or if it gets deleted during the execution of the program, the reference count drops to 1. Finally, when var1 also goes away during the execution of the program, reference counting plays its role and releases the memory back to the heap, as there are no more references to the object var1 was referring to earlier.

Conclusion

In this article, you have seen how reference counting works in Python. As mentioned earlier, as developers we don’t have to worry about this, as the Python memory manager takes care of it behind the scenes. However, understanding reference counting may help you when you are debugging memory leaks in Python.
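For completeness, the whole sequence shown in the screenshots above can be reproduced as one short script. Reading the count via ctypes is CPython-specific, and we keep the raw address (not another variable) so the check itself doesn’t add a reference:

```python
import ctypes

def true_refcount(address):
    # CPython-specific: read ob_refcnt directly from memory.
    return ctypes.c_long.from_address(address).value

var1 = [10, 20]
address = id(var1)             # a plain int, not an extra reference

var2 = var1
var3 = var1
print(true_refcount(address))  # 3

var3 = None
print(true_refcount(address))  # 2

del var2
print(true_refcount(address))  # 1
```

After `del var1` the count would hit zero and the object would be deallocated, so the address must not be read after that point.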
https://towardsdatascience.com/understanding-reference-counting-in-python-3894b71b5611
['Chetan Ambi']
2020-12-23 08:45:09.274000+00:00
['Python', 'Data Science', 'Programming']
Why Giving Free Value Is Still the Best Brand-Building Strategy
There is strong competition

Competition is everywhere. You compete with hundreds or even thousands of people who want to build their presence in your niche. It’s a fact: accept it, or you won’t stand a chance. People are trying different strategies. Some of them are common and some are unique. To be honest, it doesn’t change a lot. You just need to have a plan and be patient and persistent. You won’t build your audience quickly. You won’t have thousands of customers by taking shortcuts. Why? Precisely because the competition is strong. You have to show up, and most importantly, you have to stand out from the crowd. Without that, you’ll lose. You must find and understand your competition. After that, you’ll be able to prepare a plan that will help you win. You must turn your name, your person, into a strong brand. People recognize brands. They believe them and trust them. It’s the same thing with products. We’re more focused on the brand than on the product itself. So take an example from big companies and take care of your personal brand.

Don’t give content, give value… for free

What can I say? That’s the reality. People are giving an enormous amount of value away for free. Without a strong audience and a well-established group of customers, you won’t sell a lot. The only way to get those two things is by giving them VALUE. To do that you must understand that content is not value. You can’t just push worthless babble onto the internet. It’s not about generating a huge amount of articles, films, or Insta stories. It’s about providing value that your customers can benefit from. People want to grow, they want to develop their skills. They want to become better. Use that to your advantage. There’s no better time for that than today. The internet, YouTube, social media, and platforms like Medium are mines of knowledge. You can learn everything for free. Your job is to deliver value using those platforms and build your brand. Your task is to build recognition of your name.
If you want to sell something, people must trust you. They have to believe that what they pay for is worth it. How do you do that? The recipe is very simple. Show yourself to people. Remove the obstacles to reaching your knowledge and experience. Show that you’re not just another entity that wants to become rich and famous. Give them more than you ask of them. But you must know one thing: it’s a hard job. It requires a lot of effort. Not everybody can do it. If you don’t persevere, you’ll only lose precious moments of your life. So you have to think about whether you want to get involved. If you don’t do it, you won’t build a strong brand and you won’t achieve anything.

Adapt the strategy to your possibilities

If you use Google to find the best strategy for publishing your content, you won’t learn a lot. You’ll find articles about the need to publish daily. But you’ll also find information that the best way is to publish something every other day, or three times per week, or daily but without weekends. If there is one thing I’m 100% sure of, it’s that it doesn’t matter. Why? Because you’re targeting thousands or even hundreds of thousands of potential clients. You’ll never find a strategy that will be good for every one of them. So the best way is to prepare a strategy that will allow you to prepare and deliver high-quality content. Valuable content is much more important than a large amount of content. The equation is simple:

Value > Amount

There are a couple of things you need to consider:

- Where (on which platform) will you find your customers?
- How are your clients using this platform?
- What kind of content must you deliver there?
- How much content can you deliver without losing quality?

Think about those things, prepare a plan and strategy, and start building your brand.
https://medium.com/better-marketing/why-giving-free-value-is-still-the-best-brand-building-strategy-835c2744b744
['Dawid Pacholczyk']
2020-01-04 05:34:19.474000+00:00
['Marketing', 'Self Improvement', 'Branding', 'Daily Manager', 'Social Media']
Install and configure OpenCV-4.2.0 in Windows 10 — Python
This post will guide you through all the steps for installing and configuring OpenCV-4.2.0 in Windows 10 (64-bit) for Python use inside the Anaconda environment.

OpenCV with Anaconda, for Python 3.6.0+ development

I will focus here on OpenCV for Python 3.6.0+; my previous post on VC++ integration can be found here. Note: To follow along with the tutorial, I will assume that you already have Anaconda and Python 3.6.0 installed. If not, please feel free to install these tools before continuing to read further.

OpenCV-4.2.0 for Python

The steps for installing OpenCV through Anaconda are pretty easy and straightforward. Don’t forget to add Anaconda to your path, so you can easily access the conda command from the prompt.

Step 1: Create a conda virtual environment for OpenCV

conda create --name opencv-env python=3.6

opencv-env refers to the virtual environment name; you can name it as you like, but remember to pick a meaningful name. Create a folder for your project where you will put your Python scripts. Head to your folder through the command line with cd C:\Users\<username>\my_folder , and activate the virtual environment you just created with the following command:

conda activate opencv-env

Note: if you use bash as your default terminal in Windows, conda activate opencv-env might not work as expected. The reason is that bash is not, by default, properly configured to run Anaconda scripts, so you may need a workaround:

1. Edit your .bashrc file c:\Users\<username>\.bash_profile adding the following line of code: Edited with Visual Studio Code

2. Whenever you want to launch your bash terminal, add the following arguments: --login -i . You will then launch your custom bash profile, which has been granted access to conda scripts.

Now your conda environment is activated and perfectly available.
Step 2: Install OpenCV and the required packages

To use OpenCV you have to install some important packages that go alongside it:

pip install numpy scipy matplotlib scikit-learn
pip install opencv-contrib-python
pip install dlib

Step 3: Test your installation

You should get the latest OpenCV version available in the Python repo. And that’s all. Have fun with OpenCV.
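The Step 3 check appears as an image in the original post; it is typically just printing the installed version. This assumes the opencv-contrib-python install above succeeded, and must be run inside the activated opencv-env environment:

```python
# Quick sanity check: import OpenCV and print its version string.
import cv2

print(cv2.__version__)  # e.g. "4.2.0"
```

If the import fails, double-check that opencv-env is activated and that pip installed into that environment rather than the base one.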
https://towardsdatascience.com/install-and-configure-opencv-4-2-0-in-windows-10-python-7a7386ae024
['Aymane Hachcham']
2020-04-21 22:03:33.426000+00:00
['Opencv', 'Python', 'Anaconda']
The Proofredder
The Proofredder We all have our gifts. Hers was not between the ears! photo by author No, there’s no typo in the title. It’s spelled that way for a reason! Allow me to explain. Several years ago when I was a significant contributor at SCREW Magazine, the editor of the publication and I used to hustle sessions in exchange for giving individuals or houses free-of-charge guide listings. The tradition had been a perk for the editorial staff for as long as the paper existed. Al would fire anybody who sold the listing and put the money in his pocket. But a little “fun” in exchange for a word ad? That was fine. After all, the editors deserved a little time off from dealing with the boss’s insanity. I myself entered the privileged circle when a guy named Steve became editor. Once he realized I knew and sold advertising to a lot of people in the escort business, he assigned me a weekly column and gave me the office phone to hustle my clients for sessions in exchange for guide listings. Almost everybody was receptive. The listings were known to be effective. An excellent deal for both parties was virtually ensured. Steve and I would switch off with the rewards for my salesmanship. He’d get one — then I’d get the next — and on and on like that! So anyway…I had this client who was half-owner of a successful escort agency as well as a partner in GHETTOGAGGERS, an abhorrent streaming website whose program is pretty much self-explanatory. Bernie knew about the guide listing program…and knew Steve and I were running it. So he called one day to offer one of the new just-videod gaggers for an hour in exchange for the listing he knew would bring him business. And it was my turn! Later that night, the girl called to get directions to my place. And shortly thereafter, she arrived at my apartment building looking as physically attractive as she turned out to be mentally dim. And that’s saying something! But who cared? I wasn’t looking for Albert Einstein’s sister. 
I was looking for a pretty girl with a beautiful body. And that…I definitely got. After our “business meeting,” the girl cracked on me for a job as an escort. Bernie’s customers weren’t really into black girls and thus, he welcomed me getting her work as the girl was pestering him for customers he had difficulty providing given that most of his client base preferred less colorful women. His loop girl wasn’t that fantastic in the room. But she did have an awesome body and would surely be a welcome addition for a particular madam who ran ads with me. Or at least, that’s what I was hoping for the girl. But when she started working at the house, the clients commenced to bitching about her performance in the clinch. And it wasn’t long before the boss was calling to say “this girl ain’t gonna make it. I’m getting complaints from the guys.” Owner #1 fired the girl summarily. But because of her physique, Bernie's friend moved on to gain employment with another client of mine who welcomed almost anybody if she looked good. But once again…the complaints about her performance began. Disappointed that she wasn’t cutting the mustard, the girl with the million dollar body but no acumen in the room picked up a New York Post and began looking for work in the mainstream to escape all the criticism. And when she hit the proofreading ads, the girl just knew she’d found her new calling. Excited about the impending adventure into a new line of work, she called me enthusiastically to froth “Billy! I’ve been reading The Post and I want to become a ‘proofredder.’” Now, you know it was all I could do to not bust out laughing. I mean…come on, girl. I think you need to know how to pronounce the word correctly before you march into the office and tell them you’re looking for the “proofredder” job! But I took the high road…encouraging her to become the world’s greatest “proofredder” if that’s what she wanted to do. 
I even oriented her for a second explaining that the best way to “proofred” a piece of text was to read it backwards so she could concentrate on every word individually rather than risk passing over an error. (This is actually true.) The girl really was a decent human being. And I was hoping for her to succeed — even if all the odds seemed stacked against her. She wasn’t going to make it as an escort. And realistically, proofreading was out of the question. Well anyway…all’s well that ends well. The last I heard, the girl found a boyfriend (not that difficult…she was built like crazy and pretty enough) and faded away from the business. But her memory lingers as a prime example of one of the most sonorous dumbbells I’ve ever met on the escort trail. Thankfully, God gave her a few gifts with which to navigate this life. It’s just that gray matter wasn’t one of them. What more can I say? I’d bet there’s more than one girl who’d swap her brain for this girl’s body! May we all count our blessings — whatever they are! Look above! The picture accompanying this piece is actually the girl.
https://medium.com/everything-you-wanted-to-know-about-escorts-but/the-proofredder-bf12d614058e
['William', 'Dollar Bill']
2020-11-17 11:47:40.300000+00:00
['Relationships', 'Culture', 'Escorts', 'Sex', 'Psychology']