I Thought I Had Coronavirus, But it Was Really a Panic Attack
My daughter and I were at the clinic first thing in the morning, getting tested for COVID-19. I felt much better than the night before, when I was afraid I was going to die, when I couldn’t speak without gasping for breath. “I know it’s hard, but you need to find ways to stay calm.” The doctor looked me straight in the eyes. “Anxiety just makes breathing difficulties worse.” I liked the doctor, was thankful she was testing us, but I also felt offended. Sure I was a little anxious right now, but who wouldn’t be? The sickness came first, then the anxiety, not the other way around. How dare she suggest my wheezing, my gasping, was just anxiety. Just anxiety. Just. The doctor didn’t say that. That was my own mind, my own shame, belittling my experience, needing a test result to tell me this was the coronavirus. Because if my difficulty breathing was not COVID-19, if it was a mental health issue, it felt shameful, embarrassing, pathetic. It felt like my fault. I’ve been medicated, gone through lots of fantastic therapy, for my depression and anxiety in the past, and yet part of me still carries the societal stigma. I worry I’m supposed to be able to heal my mental health issues through the power of positive thinking, that if I do enough yoga I’ll be fixed. That when I lapse into obsessive, negative thinking, when panic takes me over until I can’t breathe, that means I’m a failure. And that feeling of failure just piles on another layer of anxiety. Layers upon layers, smothering me. It’d been, what, maybe 4 days, since my husband and I smugly talked about the people freaking out? As if we weren’t them, as if we were somehow above the drama, the panic. The doctor took my blood pressure. 147/83, shockingly different than my usual uber-low 100/60. Anxiety can raise our blood pressure. Anxiety can make it hard to breathe. “I need you to try to stay calm,” the doctor said again.
https://medium.com/an-injustice/i-thought-i-had-coronavirus-but-i-really-had-a-panic-attack-92988e845c90
['Darcy Reeder']
2020-03-29 08:05:29.794000+00:00
['Covid Diaries', 'Covid 19', 'Mental Health', 'Anxiety', 'Coronavirus']
The Word You Need To Stop Using To Be More Productive and Happy
This word is common but unhelpful. I’ve always placed expectations upon myself, usually ones enforced by society. I feel a need to look a certain way, to act a certain way, to feel a certain way, and more. Instead of feeling motivated or encouraged, I feel inadequate. Unhelpful thoughts run through my head on repeat, and they can sound like: “You should eat fewer calories and lose weight so you’ll look better and fit into your clothes.” “You should hurry up and find a ‘normal’ job so people don’t judge the freelance path you’re taking.” “You should stop sharing your anecdotes about mental illness, because people think less of you.” “You shouldn’t feel so upset over something so trivial.” I know these thoughts are not only unhelpful but also harmful and inaccurate; however, I can’t always stop thinking of them or refute them easily. While they’re problematic in several ways, they all share one word I need to work on cutting out of my vocabulary to feel happier and be more productive: Should. My therapist once told me that the word “should” implies an unnecessary sense of shame. It makes us feel a guilt-induced need to do something that we don’t actually need to do. Instead of using the word “should,” she suggested phrases like “it may be helpful if” or “I feel a need/want to” or “it would benefit me.” She also encouraged getting curious about why I feel like I “should” do something. From whom or where did that thought come? What might that person or group’s reasoning be, and is it helpful or unhelpful? And I have to admit, the phrases she suggested quickly felt much more comforting and appealing. Hearing them reminded me I don’t have to do what I don’t want to do. It reminded me that my mental illness voice, Herbert, is hurtful and wrong. It reminded me I’m a human who’s allowed to mess up and be imperfect, making mistakes and going against the grain. 
Whether or not we actively intend it, the word “should” radiates negativity. It’s a word we often use when we’re criticizing ourselves or others. And with that negativity, we feel anxious and shameful, which can lead to an inability to change for the better and meet our goals. In addition, when we use the word “should,” we often are telling ourselves we need to do something that we don’t really want or need to do. We put pressure on ourselves to do or feel differently rather than validating ourselves empathetically. We’re trying to be perfect when perfection is unattainable, and we’re expecting something from ourselves that we can’t authentically achieve, at least healthily and happily. Even though I know I should — oops, may want to — stop using the word “should,” completely canceling a part of my vocabulary is a hard process that takes time. If you’re also struggling with this, I encourage you to be gentle and understanding with yourself. You’ve likely grown up using and hearing the word, so it popping up in your mind may feel like second nature — and that’s okay. As far as techniques that may help, clinical psychologist Dr. Soph suggests noticing when you say “should,” questioning why, asking yourself if it feels good or helps you, and considering using “could” or “I want to” instead. After, note how using one of those new terms makes you feel, and if you’re able to achieve your goals with it. Clinical psychologist Dr. Susan Heitler helped a woman named Betty stop using the word “should” as well. Betty shared that the word invited guilt, stress, and depression rather than learning. However, when she started saying “could” instead, she felt motivated and gained the ability to be more productive. She completed her tasks effectively. Once she changed “I should’ve done the taxes yesterday,” to “Yes, I could have done that yesterday, and at the same time, I enjoyed going outside to enjoy the sunshine instead,” she eased her negative feelings. 
She didn’t guilt herself for having a thought containing the word “should,” but simply fixed it. She likely realized that multiple tasks and wants have value, and sometimes, we simply don’t have the time to do it all. Cognitive behavioral therapy is all about rethinking negative thoughts, and I’m thankful that my therapist and I have engaged in it. Some tools it includes are questioning your assumptions for validity and gathering evidence for and against your thought. Through practicing these tools, you’ll likely realize you don’t have to do what you think you “should,” and you’ll still feel just fine. For example, I realized I could live in my natural body without dieting and be much happier and still loved. I realized there’s no need to pursue a society-approved career when I’m doing much better emotionally and financially with my freelance work. Realizing when I’m saying “should,” recognizing that it’s not helpful, and finding ways to reframe it have increased my levels of self-compassion and decreased my levels of depression and anxiety. I’m more gentle with myself and more stable, meaning I can more easily move forward effectively and productively. I make more money, I cry less, I put less unnecessary pressure on myself, and I live my truth. When I have unhelpful thoughts like those mentioned at the beginning, I’m able to pause and realize this: I’m human. I’m beautiful, loved, worthy, and accepted just as I am. I’m doing what works for me and I’m helping others. My feelings are valid, and judging them won’t make me feel any better. I feel less guilt about who I am and how I act because I know I have no need or reason to feel that guilt. Because I criticize myself less, I’m happier and feel more self-love. I trust my gut and respect my needs, and I stop wasting time on negativity. I encourage you to try using more helpful words and phrases than “should” so you can live a happier, more productive life. 
Love yourself, be gentle with yourself, and know you were never meant to be perfect.
https://medium.com/change-becomes-you/the-word-you-need-to-stop-using-to-be-more-productive-and-happy-1401ddbaf506
['Ashley Broadwater']
2020-12-26 14:32:19.947000+00:00
['Life Lessons', 'Self', 'Mental Health', 'Advice', 'Productivity']
How we respond to stress can teach us everything we need to know about ourselves.
How we respond to stress, especially the life-or-death kind (like coronavirus), can teach us everything we need to know about ourselves. When a crisis hits, when death seems imminent, we react with old, outdated survival strategies. These ancient coping strategies don’t come from the grown-up parts of our brains. Even if we are competent, mature, psychologically sane adults today, we can react to fear with old, illogical coping strategies. These weird, sometimes self-destructive responses can teach us a lot about ourselves. None of us respond to real hardcore fear, the life-and-death kind, with our prefrontal cortex, the grown-up, developed part of our brain, our logical brain. Instead, when we are scared, we slide back to old stuff, to our brain stem response, to a primitive primal response, the fight, flight, or freeze response. When the coronavirus hit, I heard about it first in reports from China. It wasn’t COVID-19 then. It was something that people on the news were suffering from. We all saw it; lots of people in surgical masks running around looking scared. But we had seen this before: SARS, MERS, and other scary bird-flu-like illnesses that seemed to run through the population, but always in Asia and other places. Not here. Five days later and the stores are out of toilet paper, schools are closed, bars are shut down, concerts canceled, and the world is shut down, on quarantine. And this time the quarantine is on television but it’s here. It’s everywhere. All day long on the news, on television, on my computer. It even comes to my phone unbidden in scary ‘Alerts.’ The alerts say: The President has it. The President doesn’t have it. The Canadian prime minister’s wife has it. Tom and Rita Hanks have it. On social media, I watch TikTok videos of Italian singers crooning from their balconies in dark, empty streets in Milan, and the news continues. 
Reports of normal healthy people in the hospital, the Intensive Care is full, they are running out of respirators. Just four days ago I went to yoga. At the time it seemed like a mature way to deal with stress. I was managing my anxiety well at the time. The class was packed. I mean, wall to wall, move your mat over an inch please so I can squeeze in here, packed. We passed props to each other, tried to be polite, touched each other’s blankets and blocks, touched each other, breathed each other’s air. We Om’d and chanted and filled the room with our nervous exhales. Someone in the class sneezed and then coughed. There was a moment of silence. Everyone froze in their downward dogs. The instructor quickly warned people not to panic, “It’s allergy season, people. Take a deep breath, exhale.” But no one did. No one exhaled. They held their breath, considering. Was it true? Was it allergies? The poor woman who had sneezed looked around, terrified. The class slowly shifted. A silent mob, turning as one. We all stared at her, our heads hanging upside down between outstretched hands. She fled the class before she was Lord-of-the-Flies-pig-stabbed. Now, just a few days later, the yoga studio is shut down till further notice. The props need to be bleached and disinfected. Memberships are suspended. A friend told me about another yoga studio where she goes, still open, but classes are limited. No more than ten people allowed in a class, space in between mats. But no exhaling. That’s what I was doing, going to yoga, taking care of myself. I was trying to exhale. My brain was trying to take in information (clearly, this is a crisis) and, okay, process it all logically, like an adult, with my prefrontal cortex, the front of my brain. My brain said, okay, we are in trouble. Obviously. There’s no food on the shelves at the grocery stores. All work events are canceled. People are quarantined. Do what you know will relax your brain, I thought. I called my husband. 
I am in Los Angeles, he is in Connecticut. “I’m worried,” I said. “I think I’m starting to get worried. Like really worried.” “Come home,” my husband said. “Get on a plane. Come.” I looked online. Flights canceled all across the country. TSA workers testing positive. A flight to Denver had to be grounded when a guy sitting next to a young woman with a cold started freaking out and demanded to be moved. The plane had to make an emergency landing and let the guy off with a security escort. I did the next thing I knew to do when things get scary. I went for a walk. I am lucky, I can walk on the beach. I walked. It rained. It never rains in Southern California. It’s been raining all week. Good for California. They need the rain. I don’t. I moved here to be bi-coastal, for the damn sunshine. So I did the next thing I do to survive under dire circumstances. I meditated. I downloaded Insight Timer guided meditations and for hours I breathed in and out and relaxed my nervous system, listening to calming voices, imagining floating on clouds. But I couldn’t sleep. I was falling apart. I stared at the ceiling, my hands shook, my heart pounded. I imagined all the horrible things that were going to happen. I thought of my children. I got up and ate chocolate. Lots of it. Boxes of it. The sugar rushed to the parts of my brain that had been deprived of sugar and carbs for weeks and weeks and it said, “YES, finally, you remembered. This is what you need. This is how you do it. You’re in survival mode, girl.” I felt better for like five minutes. Then my brain said, “Now give me more. MORE. NOW.” Clearly, this was not the grown-up part of my brain talking. It was cranky and demanding. This self-destructive coping strategy lasted two days. Like a toddler, my thoughts took over, demanding, whining, unreasonable, out of control, obsessive. My compulsions didn’t make sense. I was full, I felt sick, I didn’t want to get off the couch and get more damn chocolate. 
“Get it yourself,” I wanted to say, but I was the one demanding; it was my own damn brain bullying me into eating more chocolate. That binge lasted about 36 hours. Afterward, I felt bad about myself. Really bad. I realize now what happened. I was in survival mode. I was not thinking clearly. I was not in my prefrontal cortex. I had let an old coping skill take over. Eating sugar was a way of handling life that I used in my childhood when things sucked and I felt I had no other choices. “This is what your brain needs,” it told me, when clearly it is not what the grown-up part of me needs today. But that inner child part of me was scared. Let me be clear, there is nothing, absolutely nothing, wrong with chocolate. Or with tequila or with wine or whatever you are doing right at this moment to survive the stress and anxiety of a global pandemic. This is serious stuff. This is life and death stress. This is not your average everyday anxiety. Don’t let anyone tell you “don’t panic, calm down, it’s no big deal.” It’s a big deal. How you react to this big deal pandemic can teach you a lot about yourself. It can reflect back to you some important information. How you handle life and death stress will tell you more about how you handled your childhood when things got tough. Because that’s where we learn our survival strategies. That’s where we learn to cope. That’s where we go when things get bad. Growing up, I spent my childhood mostly alone, by myself, or with a younger brother and sister to take care of. I had a single mother who worked the late shift as a nurse in the hospital intensive care unit. I remember waiting up late nights trying to catch a glimpse of her when she came home, well after midnight. She was usually exhausted, burnt out, and suffering from migraines and post-traumatic stress. In the mornings she was asleep when I got up to go to school and was gone by the time I got home. Food was a much more constant companion. It was a comfort to eat. 
No one was there to tell me what to eat, and so I didn’t make healthy choices. I was a kid. I was scared and lonely. I had big fears and no one to talk to. Eventually, I grew up. I made better choices. I got healthy. I learned to care for myself and for my children. Yet that little girl, alone and scared, is still inside of me. Now, today, there are some big scary things going on. I am alone in LA, about to get on a plane to go to NYC. On the news it says “don’t travel, it could kill you.” Or at the very least, “you could kill the people you love.” Of course, this triggers my old brain stem response. It makes sense I would regress to the old way of coping, even while my grown-up self knows it may not be the best way of dealing with things. I think about getting on that plane. It’s scary. The whole damn world is scary. But I am a grown-up. I can face the fear, act like an adult. Yes, this is serious. But I have some great coping skills today. I can take care of myself and I can take care of people who need me. And apparently, unlike a lot of the country, I have plenty of toilet paper. I think I also have a bottle of tequila somewhere. Tammy Nelson, Ph.D. is a sex and relationship therapist, a TEDx speaker, and the host of the podcast The Trouble with Sex. She is the author of five books, including the new release Integrative Sex & Couples Therapy. She can be found at www.drtammynelson.com
https://drtammynelson2.medium.com/how-we-respond-to-stress-can-teach-us-everything-we-need-to-know-about-ourselves-9d1f8a681cb3
['Tammy Nelson']
2020-03-15 01:59:15.249000+00:00
['Stress', 'Coping', 'Coronavirus', 'Co Vid 19', 'Psychology']
How I Beat Social Anxiety
Well, Maybe There’s Hope After All What ultimately helped was a combination of two things. One was a technique (more like a program), and the other the single worst experience I’ve ever had. They meshed to help me get rid of my fear of people. Will You Commit to This Program? 2017 was also the year I started therapy. My therapist and I discussed a plethora of issues, but after the college incident, we focused our attention on my social anxiety. A “staircase” program was proposed. Of course, I refused at first. But eventually I had to agree for my own good. It was simple. We laid out a number of social situations in increasing order of trepidation, and I had to complete one each week. The one at the very top was going to a movie theater and watching a movie alone. The ones at the bottom were much easier — talk to a friend on the phone for half an hour, visit a friend in person and have a day out with them, be more active on social media, talk to a relative, and so on. A cakewalk to me now, insurmountable demons to me then. I recommend drawing something like this up with the consultation of a mental health professional. Put the least daunting challenges at the bottom, and increase the level of difficulty as you progress. But don’t do one every day. Give yourself a reasonable amount of time to recover from each challenge. I’d recommend completing one challenge every four to seven days. If need be, do the same challenge twice or thrice before proceeding. This is a more long-term process intended to desensitize you to social situations. The more you do it, the less afraid you’ll get. As to the number of steps, spend some time and come up with every social situation that frightens you. Each and every last one of them. Then, rank them in increasing order of difficulty. There. Now you have your own staircase program. The Scariest Experience I’ve Had It happened in 2018. 
I’d gotten admission once again into the same law school, and this time I’d made some friends (courtesy of the confidence I gained from the program). But things were still incredibly bad. For a whole week, my blood pressure had spiked. The college infirmary told me it was something like 140/100, both levels twenty points above normal. I had fever and cold, had panic attacks every day, slept only three to four hours every night, and was still very, very alone. It all came to a head when, one evening, I began feeling something was off. Something primal and fundamental. What followed was the longest, scariest, and most vicious panic attack I’ve ever had. It lasted three whole hours, and only ended when I took strong sleeping pills. Immediately upon waking the next morning, it began again. By the time it ultimately stopped, my mind and body were frayed and severely weakened. That night, I was convinced I’d die. It wasn’t a thought, nor a possibility. Just like you know the sun will come up tomorrow morning, I knew I was going to die. I even hugged my mother and told her I loved her. Which is, trust me, rarer than a dancing unicorn. How did this experience help me, you ask? Well, it gave me perspective. I now knew what it was like to be afraid for your life. I’d faced something I shudder to think of even today, and I’d come out alive and (in another week) well. Compared to that, social situations are nothing, laughably benign. I now knew what to be afraid of and what to disregard. That experience drove away quite a few of my fears, actually. I was no longer afraid of social gatherings or strict professors, no longer bothered by examinations or low grades; I no longer cared what people thought of me, and I no longer doubted my capabilities. Once you face something truly traumatic and genuinely believe you’re going to die, and then you survive somehow, it gives you a fresh perspective on life. You now have a yardstick to compare dangers to. 
And social interactions aren’t even a blip on that scale. I’m still terrified of lizards, though. Wonder what’ll fix that.
https://medium.com/invisible-illness/how-i-beat-social-anxiety-ce7f853a8c6f
['Chandrayan Gupta']
2020-12-18 01:13:44.052000+00:00
['Self Improvement', 'Advice', 'Mental Health', 'This Happened To Me', 'Psychology']
It’s cool to see how Product People played a small part in people like Ryan Hoover and Hiten Shah…
It’s cool to see how Product People played a small part in people like Ryan Hoover and Hiten Shah meeting up. When I started the podcast, that was my dream: that listeners would see the humans behind the product. Personally, I’m interested in how products connect people. For years I’ve used Path to interact with my family back home. I’ve met many friends through Slack chats. I watched a documentary by LEVI on Vimeo, and we ended up hanging out in person a few months later. The things we make can create a connection. It’s a good reminder to keep making things. Whether we’re writing code, publishing blog posts, or recording podcasts, our work can foster meaningful relationships. Friendships forged on a platform can outlast the platform itself. To me, that’s worth investing in. Cheers, Justin Jackson @mijustin
https://medium.com/product-people/its-cool-to-see-how-product-people-played-a-small-part-in-people-like-ryan-hoover-and-hiten-shah-362cf3cab3c3
['Justin Jackson']
2016-12-14 01:13:54.195000+00:00
['Product Design', 'Startup', 'Podcast', 'Makers', 'Entrepreneurship']
Let Git Aliases Boost Your Productivity
Useful Aliases There is an unlimited number of possibilities for aliases. What’s important is that you define the ones that are valuable for your specific day-to-day workflow. Below are a number of aliases that I have defined for my personal workflow. I find that they simplify my Git usage quite a bit. You may or may not feel the same way. Checkout branch This alias is as simple as it gets. It is a keystroke-shortener for checking out a new branch. Alias: co = checkout Example Usage: git co my-fancy-branch Stage and commit changes Most of the time, when I want to commit changes, I just want to stage and commit everything that I have done. This alias lets me do so. Alias: ac = !git add -A && git commit Example Usage: git ac -m "Some Commit Message Goes Here" Create new branch Creating a local branch, and then later a remote branch to push to, almost always stops me in my tracks: I try to push my changes and realize that I can’t remember how to create the remote version of my new branch. For this reason, I find it best to just do both in a single line. Thus, this alias creates a new local branch off of the current working branch, and then immediately connects it to a new remote branch. Alias: create-branch = !git checkout -b $1 && git push -u origin Example Usage: git create-branch "my_new_branch" Notice the $1 in this alias. This interpolates the first parameter — in this case, the desired branch name — that was passed to the alias into the command that gets executed. Sync Branch w/Dev I try to sync up my feature branches with the dev branch that other developers on my team are pushing to regularly. This helps me avoid accidentally breaking other features, tests, and so on, without noticing — and vice-versa. This alias quickly syncs up my current working branch for me. 
Alias: sync = !git pull origin dev Example Usage: git sync List branch’s changed files When I am wrapping up work on a feature branch and about to create a pull request to get it into dev, I like to take a final look at all of the files that have been changed. This helps me ensure that, for example, I’m not introducing new code smells into the codebase. This alias will list out the names of all files that have been changed in my current working branch. Alias: get-changes = !git diff --name-only origin/dev Example Usage: git get-changes Delete old local branches On my team, whenever we merge a pull request into dev, we delete the relevant feature branch. This helps to keep our activity clean — thus making it easy to see what’s in progress at any given moment. Because we do this, I can very quickly clean up local clones of old branches in my environment by deleting the ones that have already been deleted on our remote. I actually find that doing this regularly speeds up some other Git operations I perform, though I do not know why. Alias: clean-branches = !git remote prune origin && (git branch -vv | grep 'origin/.*: gone]' | awk '{print $1}' | xargs git branch -D) Example Usage: git clean-branches This alias is probably my favorite of the bunch — it simplifies a fairly complex pipeline of commands, and helps keep my environment pristine.
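To actually use any of these, each alias has to be registered in your Git config. Here is a minimal sketch of wiring up every alias from this article; the throwaway `mktemp` repo and repo-local `git config` calls are just so the snippet is safe to experiment with anywhere — in practice you would run the same commands with `--global` so the aliases land in your `~/.gitconfig`:

```shell
# Create a scratch repository so nothing in the real environment is touched.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"

# Register each alias from the article (repo-local; add --global for real use).
git config alias.co checkout
git config alias.ac '!git add -A && git commit'
git config alias.create-branch '!git checkout -b $1 && git push -u origin'
git config alias.sync '!git pull origin dev'
git config alias.get-changes '!git diff --name-only origin/dev'
# The awk $1 must reach Git unexpanded, hence the \$1 escape inside double quotes.
git config alias.clean-branches "!git remote prune origin && (git branch -vv | grep 'origin/.*: gone]' | awk '{print \$1}' | xargs git branch -D)"

# Read a couple back to confirm they were stored.
git config --get alias.co      # prints: checkout
git config --get alias.sync    # prints: !git pull origin dev
```

Note the quoting: single quotes keep `$1` literal in the `create-branch` alias, while the `clean-branches` body needs double quotes (it contains single quotes itself), so its `$1` is escaped as `\$1`.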
https://medium.com/better-programming/let-git-aliases-boost-your-productivity-ced0fcdd65b5
['Luke Hollenback']
2019-12-05 14:53:28.139000+00:00
['Tips', 'Development', 'Productivity', 'Programming', 'Git']
The Problems With Conservation: Scenic Clickbait, External Costs, and Community
Consumerism and conservation are dependent on “out of sight, out of mind.” American consumerism is rooted in the violent extraction of natural resources, culture, and human labor. The ways and means of extractive consumerism must remain as far from the consumer as possible, otherwise risking the upheaval of a populace finding the real cost of production untenable. These practices are removed from civilized society because, let’s face it, if belching factories, sweatshops, and strip mines were in plain sight, no one would stand for it. The wealthiest business people don’t live where their companies’ goods are produced, for a reason: It eliminates moral culpability. “At the end of this transaction [modern consumerism] it’s easy to not eat responsibly because we see nothing, hear nothing, know nothing. We bear no responsibility for our decisions because they’re out there somewhere. We don’t internalize our decisions because we’ve externalized our living.” — Joel Salatin Because the problems caused by consumerism are “out of sight, out of mind,” so too are the available “solutions.” Like buying away anxiety on Amazon, we alleviate eco-anxiety by throwing dollars at people we’ve never met to fix places we’ve never visited. While well-meaning, this further pushes our responsibility into the foggy margins and blasphemes earth-healing by relegating it to a mere transaction. Here’s Berry again: “The dilemma of our private economic responsibility … is that we have allowed our suppliers to enlarge our economic boundaries so far that we cannot be responsible for our effects on the world. The only remedy for this that I can see is to draw in our economic boundaries, shorten our supply lines, so as to permit us literally to know where we are economically.” We’ve removed ourselves from production, so the only marketed recourse to solving its woes is to throw dollars at an external organization. 
Because we prioritize “scenic” places like national parks, the systems of production in our communities are free to fall into extractive practices at their leisure, because we pay no attention to the “unscenic” places where their production occurs. Conservation as a tragedy of the commons: public vs. personal solutions In our current system, external costs are usually not passed down to the consumer at the store. The public eventually pays for them by way of healthcare costs, farm subsidies, and the tragedy of the commons. Likewise, we don’t “pay” to conserve our iconic landmarks directly. But we do pay in the form of taxes, entrance fees, and occasional donations to our favorite environmental charities. The necessary cost of effective conservation is disproportionately redirected to these external, publicly diffused solutions, which nearly eliminate personal responsibility. Public law is mostly unconcerned with individual morality. And unlike morality, public law is enforced by violence and compartmentalized force. “Public” spaces, as beautiful as they are, are the least common denominator in conservation. The public preserves them because it deems them so, and they can be unpreserved just as quickly, often with political and physical violence. Moral law is enacted by community consciousness and accountability. Community spaces prioritize the well-being of those who live within them and are more resilient to trauma and change. And yet we sacrifice the lion’s share of our nation’s natural capital to less-than-scrupulous marketers eager to sweep the soul of our lands away from an unwary public, loading their RVs for a trip to Yosemite. A fundamental distinction between public and community spaces is the prerogative to engage the land. A public space delineates recreation and work, meaning all human activities therein must adhere to strict understandings of recreational behavior, barring things like farming. 
For a sustainable community, work and pleasure are not separated because the land’s redemptive work is a joy and a vocation. Private life is where change happens, where we find a sense of place, or space attachment, and discover our power to preserve and protect our communities and environment. As Masterson et al. write in an article for Ecology and Society, “[R]esearch on place attachment has shown that place attachment can indeed contribute to protective and restorative stewardship actions in dynamic SES [socio-ecological systems]. [S]trong attachment is associated with care and action … Attachment is based on meanings: We become attached to a landscape as embodying a certain set of meanings, and it is those meanings we seek to preserve.” To conserve the vast majority of land now overlooked, we must rewrite apathetic and hostile meanings we attach to them. But it’ll take more than changing narratives to establish a sense of place: We must create meaning that is intrinsic, palpable, real. To create local spaces worth fighting for, we need to add value to our communities, making them beautiful places to live and worthy of stewarding. For rural areas, this could be beautiful forests and land easements intertwined between arable land. For urban areas, this could be community gardens, green space, and biophilic buildings. “Sustainability is about defining and working toward creating a tenable place for humanity to live. Whether place refers to one’s backyard or the planet as a whole, understanding how people relate to places is key for sustainable development” (Masterson et al.). Rightful placement of the conservation imperative on overlooked rural and urban areas is the linchpin for creating a sustainable world driven not only by top-down policy but also by grassroots, vested interest in the places we work and live. 
By caring for, and conserving, the overlooked places we live and work, we will add massive conservation value to the efforts already assigned to scenic public spaces. Space attachment exists anywhere someone finds meaning in the community. But space attachment does more than just indicate a person’s sense of belonging or loyalty to a locale: It also correlates with how likely the person is to engage in conservation and sustainable practices. To paraphrase Aristotle, people take better care of things when they own them. But when it comes to real conservation and sustainability, I believe people are more likely to care for something if they feel connected to a place, based on three pillars:

Practical connection: The person derives direct and traceable benefit from her/his community, including but not limited to local production of food, shelter, materials, transportation, clothing, energy, etc.

Aesthetic connection: Attachment drawn from the visual appeal of surroundings, including natural and artificial landscapes.

Spiritual/moral connection: This whopper includes everything that binds a person to a place apart from the practical and aesthetic (though it can, and often does, draw from them). This can include (not at all exhaustively) faith/religion, familial ties, tradition/heritage, promises, bequests, etc.

The greater these three attachments, the more likely a person is to care deeply about where they live. Imagine each of the attachments as a leg of a three-legged stool. If they’re proportionately applied, the stool sits evenly. If the legs are uneven or non-existent, the stool will tilt towards its weakest leg, or cease to be a stool. The latter two scenarios are where most of us find ourselves concerning our own communities. The answer to building a sustainable conservation ethic is to increase the three legs of place attachment. But how does this happen? We’ll get to that.
A note on public lands

The National Park system is built on compartmentalization: “This place is suitable for exploitation and profits; you can recreate over there.” But that mindset fails to recognize the immeasurable value of belonging to the locale from which we draw inspiration. We work, exploit, and pinch pennies with eager hopes to visit the Grand Canyon next year, see the glaciers before they melt, or buy a retirement home with a life of cloistered, protected scenery. I know that when I advocate for or donate to conservation projects, the act is tainted with pessimistic defeatism. As my dollars and efforts are whisked away to help save the Amazon or protect the Tongass National Forest, they’re accompanied by the winds of a faint, disheartened whisper: “Quick! Stay beautiful. Don’t become like the place the rest of us live.” But this scenery will disappear as the communities we flee from are ravaged unrestrained. We are creatures of place, and if we stop being so, we will cease to create, and exist only to destroy. Don’t throw the baby out with the bathwater. “If conservation is to have a hope of succeeding, then conservationists, while continuing their efforts to change public life, are going to have to begin the effort to change private life as well.” — Berry. We shouldn’t abandon public solutions. God save us from stripping the Department of the Interior of funding all to save a few bucks for a community garden. If anything, the DOI is in dire need of some financial lovin’. But, theoretically, we could have our cake and eat it, too, by reducing externalities and pumping the savings into national conservation projects and local conservation-production. For the U.S., these savings would be more than enough to go around, somewhere to the tune of $214.5 billion annually (this estimate is in no way exhaustive, so give an extra several billion here or there, but likely little take). Put in perspective, the total 2021 budget for the Department of the Interior is $12.8 billion.
The DOI comprises the federal agencies responsible for conservation, including the National Park Service, the Bureau of Land Management, and the Fish and Wildlife Service, along with 16 other agencies and departments. But the hole dug by dollars can’t be escaped by filling it with dollars. Similarly, our souls’ destruction can’t be ameliorated by looking to “scenic” places as our only remaining Eden. We can find solutions (and salvation) by establishing a sense of place. Solutions entail getting to know what your locale is all about, what it can produce that it currently isn’t, what’s hurting there, and what abhorrent practices you’d never want in your community are going on in the places where our food and goods come from. By embedding external costs into the community, we know how and why we produce our products and become personally vested in change when we aren’t happy with the process. This is impossible with our current system of production and conservation.

What do sustainable communities look like?

Conservation will reach its fullest potential when we value the dynamic health of both the “scenic” and the “unscenic.” The key will be to redeem public diffusion of responsibility back to where it can genuinely make a difference: responsibility for the communities in which we live, and their means of production and consumption. Localizing the actual cost of production forces us to address anything unsustainable. NIMBY is a decisive factor at work here: Few think twice about a KFC dinner bucket’s social and environmental costs, but many would take to the streets if a poultry farm and processing plant set up shop in the neighborhood. We must build a strong sense of place by building lovable, livable communities. We must reduce the externality costs of production and invest those savings into community and public conservation projects.
We must produce as much as we can in our communities through meaningful, space-based work, and strip away the hidden externalities of production, incorporating them locally so the community can see them for what they are. Communities need the power and efficacy to change their production and conservation systems when production costs become too high, thus spurring grassroots sustainability owned by the community. This is real conservation: communities producing as much as they can, as sustainably as they can, as close to home as they can, with the willingness to be honest with themselves when equilibrium goes out of balance. This concept is impossible with extractive production that relies on external costs being distributed unfairly throughout society and ecology. When local/community lands are degraded, the community is culpable. The land, the community that belongs to it, and the resulting damage all are close at hand and mutually shared. Conversely, damage to public space is absorbed without feeling by the public. But loss to a community space is felt immediately among its members and avoided at all costs. Internalizing the social and environmental costs of production strengthens community resiliency and safeguards against unsustainable external costs of production. In the case of conservation, prevention is the best medicine. And degradation is best prevented when you live in the place where you produce what you consume. Local conservation-based production is self-regulating because it accounts for all external costs — environmental, social, and economic — generated by creating a product. In this way, a conservation-production community stays within its means. Declaring public and scenic spaces alone as worthy of conservation stymies real conservation work and diffuses the blame for mismanagement. In such a case, conservation is, at best, a statistic and, at worst, diffused responsibility.
If we’re serious about redeeming the conservation potential of all lands, we need to invest in local projects that build place attachment, create opportunities to engage in productive work with embedded externalities, and continue to allocate funds to meaningful protection of public spaces. A call to action here would be too banal. Multi-faceted problems require multi-faceted solutions. The tools for building place-based conservation are as wildly varied as the communities they need to be built in. Anything and everything you can do to bring what you consume closer to home, fall in love with where you live, and find reasons to protect it all is worth its weight in gold. “[Environmental] protections are left to the community, for they can be protected only by affection and by intimate knowledge, which are beyond the capacities of the public and beyond the power of the private citizen.” — Berry. There’s enough ingenuity, love, and money to go around. We just need to fall in love with where we live. I guess my call to action is this: Go love your place.
https://medium.com/climate-conscious/the-problems-with-conservation-scenic-clickbait-external-costs-and-community-60334de05a40
['Christian Wayne Yonkers']
2020-07-26 19:21:40.146000+00:00
['Vision', 'Conservation', 'Sustainability', 'Community', 'Environment']
The Future is Location Powered: Interview with Aditi Sinha, Founder, Locale.ai
A deep-dive into the Locale.ai journey, straight from the horse’s mouth

Aditi Sinha

This is the first in a series of question-answer style articles with innovators and industry leaders, meant to provide budding entrepreneurs with answers to pressing questions about ideating, starting up, receiving funding, and taking their products to market, in addition to offering unique insights into the tech industry. It was originally hosted on FreeLunch. The $8.1 billion (and rapidly expanding) geoinformatics industry has transformed businesses through the acquisition, monitoring, classification, and analysis of spatial data, driving decision-making and providing operational insights. Consequently, it has become a hotbed for technical professionals and new innovators alike. Locale.ai is one such ambitious startup, making waves in the location analytics space. Founded in 2019 by Aditi Sinha, a BITS Pilani alumna (2014–2018), and Rishabh Jain (2013–2017), Locale endeavours to make location-powered operational efficiency an achievable goal for businesses across the world. It was after meeting at SocialCops that the duo conceptualized Locale, and they have never looked back since. We reached out to Locale.ai co-founder Aditi Sinha, who helped shed light on the ins and outs of the business and how her company is adding value to its customers and the world.

The Locale.ai team

What exactly does Locale.ai do? What is its target market?

Locale is a location analytics platform built for city and business teams. Our product takes the location data of demand (users), supply (vehicles, riders), and static locations (stations, warehouses) and converts it into meaningful insights. The inspiration comes from web analytics tools like Google Analytics, Mixpanel, or CleverTap that help marketers increase conversion and retention on their web products.
Similarly, we help companies improve unit economics, increase user conversions, and reduce cost per delivery by showcasing how their business performs on the ground and pinpointing where the problems lie. Our target market is any company with moving assets: vehicles, users, delivery partners, salespeople. This includes companies in sectors like mobility, on-demand delivery, and last-mile delivery, as well as workforce companies.

Aditi Sinha

How did you arrive at this idea? What motivated you to build Locale?

After graduating from BITS, I started working at a data consultancy startup called SocialCops. It was there that I met my now co-founder, Rishabh Jain. He had been working on different location projects with governments, FMCGs, startups, etc. You could say that Locale started with a personal problem. We had to build our own internal tools because there weren’t any suitable analytics tools for geolocation data. This is when we realized that companies were collecting a huge amount of location data but didn’t have the right tools. They had to build internal products which were extremely painful to use. The idea of Locale dawned on us when we found out that multiple companies were looking to build geospatial teams internally. Could we build a product to empower these teams to get the right insights and spend time working on their core problems instead of writing queries?

The Locale.ai Team

What functionalities does it provide its customers? Which areas can it most significantly improve for its customers?

Locale provides companies with insights about their demand, supply, and operations on the ground. For example:

Churn: Where do people search but don’t book?

Events: Where do cancellations, frauds, or accidents happen the most?

Supply-Demand Gap: Where are we not able to service orders, and where is the supply idle?

Journeys: How do the most valuable users move on the ground? Where do they go?

Anomalies: Where do important KPIs shoot up suddenly?
SLAs & Delays: Where are they not able to meet SLAs, and for which reasons?

We focus on metrics such as asset utilization, user conversion, user acquisition, cost per delivery, and unit economics.

Snapshot of the Locale.ai Platform

What core technologies does Locale leverage in its product? How does it maintain the edge over its competitors?

Locale.ai uses a wide range of powerful open-source tools to handle large-scale datasets in the frontend and backend. The frontend is powered by Uber’s deck.gl for high-performance visualizations, nebula.gl for additional editing capabilities, and Mapbox GL for rendering maps. Unlike other platforms, Locale provides the ability to ingest a large amount of data both in real time and on demand to analyze and gain insights on the fly. The backend is powered by Python, PostgreSQL, and PostGIS for powerful data processing and geospatial operations. So, a company would choose Locale for the following reasons:

A simple and intuitive user interface to carry out analyses, especially for business users

Scalable geospatial visualizations with actionability

An ETL pipeline robust enough to handle streaming data as well as historical analysis to go back in time

This means that whenever business users or decision-makers need the right insights about their ground operations, they can turn to Locale without depending on their teams of analysts or engineers. Locale gets all their location data together and acts as one source of truth for all location-based decisions. I have written in detail about this in a piece here.

What other applications of geospatial analytics have you worked on?

Although we have worked on different kinds of geospatial problems, I would like to mention the following three:

The Bangalore COVID-19 tracker

We built this dashboard using the available data released by the Karnataka government.
The dashboard shows the number of people quarantined, with their country of arrival, at very granular levels in different cities of Karnataka.

Shark-movement tracker

We’ve been working with a research group called AI on the beach to analyze the movement of sharks in the sea and how it is affected by shipping vessels. The aim is to study the impact of humans on the behaviour of sharks and other marine creatures.

Locale.ai Shark Movement Tracker

Mobility analysis

We have recently started a project which involves analyzing the movement of people inside cities. More details coming soon!

You recently raised a funding round. Can you walk us through the challenges you faced and how you overcame them?

My biggest challenge was that I had never done this before, but one thing that helped us was that before pitching to VCs, we pitched to angel investors. Angel investors are comparatively more founder-friendly and invest earlier in the journey. What angels judge you on is how well you know the problem you are solving in your target market. Since my co-founder and I had already been working in the geospatial space and had experienced the problem ourselves, we were pretty confident about the problem. When we started pitching to VCs, the biggest hindrance was that we still did not know concrete answers to a lot of their questions, as we were still early in the game. However, we were confident that we would find those answers as we moved forward. So we were looking for an investor who would believe in the problem we were solving and the large market opportunity, as well as a mentor who could guide us. The challenge in creating a new category of products is that not everyone is going to believe in you, but you need to find the select few who do. We were extremely fortunate to have found Better Capital as our lead investor. More details here:

What part did your undergraduate experience at BITS play in the ideation and development of Locale?
BITS has been instrumental in starting and, most importantly, growing Locale. I had a lot of free time on hand because I didn’t take up a dual degree alongside my Economics degree. I used this time to join various clubs, take up several projects with J-PAL from MIT and Vision India Foundation, and author research papers in Economics that got published in international journals. All these experiences helped me realize what I am passionate about and have shaped me in so many different ways. “Sponz” taught me how to convince people to give you money; TEDx taught me how to manage people. The research experience helped me get my first job out of college at a startup called SocialCops (now Atlan). The BITS network has been wonderful even after starting up. If you want to start up, being from BITS works in your favour in so many different ways, as the community is close-knit and always ready to help. In case you are looking to start up and need some help, you can hit me up!

BITS Pilani Hyperloop Team

What critical skill-sets should young undergraduates equip themselves with in 2020?

The most important skill you need to run a startup is “getting things done,” or in other words, finding creative solutions to problems without many resources. I would recommend undergrads get startup experience first-hand by joining an early-stage startup and learning how to start a company from scratch. Apart from this, start projects or initiatives on your own. College is an extremely good time to experiment because there are fewer downsides. These experiences teach you how to be frugal as well as develop leadership skills. In other words, get up and do things. There are so many problems all around us. Try to solve some of them. Optimize for learning. P.S. We are always looking for the smartest people to join us. Check out our careers page for any openings and reach out in case you are interested.
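To make the kind of analysis described in this interview concrete, here is a minimal, hypothetical sketch of a supply-demand gap aggregation over grid cells. Everything in it (function names, grid size, coordinates) is illustrative only; it is not Locale.ai’s actual implementation.

```python
from collections import defaultdict

def grid_cell(lat, lon, size=0.01):
    """Snap a coordinate to an integer grid-cell index (~1 km per cell at this size)."""
    return (int(lat // size), int(lon // size))

def supply_demand_gap(demand_points, supply_points, size=0.01):
    """Count demand minus supply per grid cell.
    Positive values: unmet demand. Negative values: idle supply."""
    gap = defaultdict(int)
    for lat, lon in demand_points:
        gap[grid_cell(lat, lon, size)] += 1
    for lat, lon in supply_points:
        gap[grid_cell(lat, lon, size)] -= 1
    return dict(gap)
```

On real data, the positive cells would point at neighborhoods where orders go unserviced and the negative cells at where vehicles sit idle; a production system would do this with streaming ingestion and geospatial tooling such as PostGIS rather than a Python dictionary.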
https://medium.com/locale-ai/location-powered-futures-for-young-entrepreneurs-the-locale-ai-journey-308eb2152a57
['Aalaap Nair']
2020-05-20 21:07:56.463000+00:00
['Startup', 'GIS', 'Interview', 'Entrepreneurship', 'Founders']
Product/Market Fit: What it really means, How to Measure it, and Where to find it
Product/Market Fit is a common concept in the startup world. While widely applied in conversations around new high-growth companies, it doesn’t seem to have caught on in the rest of the business world yet. It deserves to be more widely understood, because it’s a useful mental model for the interplay between a business, its products, and its customers. Learning about product/market fit will help you see the world differently and inspire new ways to create value for your customers and growth for your business.

What is Product-Market Fit?

Because it’s such a new concept, there are a few overlapping definitions of product-market fit. We should start with the definition from Marc Andreessen, who originally coined “product/market fit” in his post “The Only Thing That Matters”: Product/market fit means being in a good market with a product that can satisfy that market. This is a rather vague definition, but it’s a start. What Andreessen says next gives us a more vivid illustration of what product/market fit really feels like: You can always feel when product/market fit isn’t happening. The customers aren’t quite getting value out of the product, word of mouth isn’t spreading, usage isn’t growing that fast, press reviews are kind of “blah”, the sales cycle takes too long, and lots of deals never close. And you can always feel product/market fit when it’s happening. The customers are buying the product just as fast as you can make it — or usage is growing just as fast as you can add more servers. Money from customers is piling up in your company checking account. You’re hiring sales and customer support staff as fast as you can. Reporters are calling because they’ve heard about your hot new thing and they want to talk to you about it. Marc Andreessen’s post on product/market fit is the most-recommended resource Evergreen has ever received. It was sent in separately by six people, including two Gregs: Greg Meyer, Greg Drach, Nitya Nambisan, Aaron Wolfson, and Jason Evanish.
When your customers spread your product

A complementary definition is found in Principles of Product Design by Josh Porter, suggested by Tim Harsch. In Josh’s view, the level of dedication and excitement among customers is an indicator of product/market fit: Product/market fit is when people sell for you. Product market fit is a funny term, but here’s a concrete way to think about it. When people understand and use your product enough to recognize its value, that’s a huge win. But when they begin to share their positive experience with others, when you can replicate the experience with every new user who your existing users tell, then you have product market fit on your hands. And when this occurs something magical happens. All of a sudden your customers become your salespeople.

Validation of the Value Hypothesis

The definition that feels the most precise and helpful I found in this post by Andy Rachleff, the CEO of Wealthfront, called “Why You Should Find Product-Market Fit Before Sniffing Around for Venture Money.” He paraphrases work by Eric Ries and Steve Blank to create this explanation: A value hypothesis is an attempt to articulate the key assumption that underlies why a customer is likely to use your product. A growth hypothesis represents your best thinking about how you can scale the number of customers attracted to your product or service. Identifying a compelling value hypothesis is what I call finding product/market fit. A value hypothesis addresses both the features and the business model required to entice a customer to buy your product. Worth noting on this definition: there are likely multiple key assumptions to be validated, across product, pricing, and business models. Thanks to Joe Bayley for recommending this post by Andy Rachleff.

Myths about Product-Market Fit

This post by Ben Horowitz, called “The Revenge of the Fat Guy” (in reference to a debate with Fred Wilson), has insight that radically improves understanding of product/market fit.
In it, he outlines four common myths:

Myth #1: Product market fit is always a discrete, big bang event

Myth #2: It’s patently obvious when you have product market fit

Myth #3: Once you achieve product market fit, you can’t lose it

Myth #4: Once you have product-market fit, you don’t have to sweat the competition

If the definitions above left room for these myths to take hold, read this post and dispel them before they cause you and your business harm.

Finding Resonance

Itamar Goldminz, a reader and frequent contributor to Evergreen, wrote this piece that uses the metaphor of resonance from physics to describe product-market fit: A good analogy for finding PMF comes from Physics: finding resonance with your customers and getting on the same wavelength as them. Note that this can be accomplished both by changing your product and by changing your customers (market pivot). Changing your wavelength is a gradual, continuous process (anti-myth #1), you know when you’re close to being on the same wavelength but it’s hard to tell if you’re exactly there (anti-myth #2). Since both your product and your customers constantly change (wavelength), it’s easy to get out of sync again (anti-myth #3) and it’s clear that your actions don’t prevent others from getting on the same wavelength (anti-myth #4).

How to get Product-Market Fit

Knowing that we need to get to product-market fit, and what it means to do so, the obvious question is “How?” There is a unique path for every company (or there is failure), and these ways of looking at the problem will help you find your way.

Everything is on the Table

Here is another idea from the godfather of product/market fit, Marc Andreessen. It explains that everything is a possible lever to move you toward product/market fit, and anything might be changed in that pursuit.
Do whatever is required to get to product/market fit. Including changing out people, rewriting your product, moving into a different market, telling customers no when you don’t want to, telling customers yes when you don’t want to, raising that fourth round of highly dilutive venture capital — whatever is required. When you get right down to it, you can ignore almost everything else. Changing teams, markets, products, names, and visions are all reasonable in pursuit of product-market fit. That’s the story of many companies: Instagram, Soylent, AnyPerk, and Twitter all radically changed course from their original plans to find product-market fit.

Talk to your customers

Customer development is a core skill for developing product-market fit. We’ve created a whole edition of Evergreen on it, called “How to Failure-proof Your Business with Customer Development.”

Product-Market Fit is Everyone’s Job

Every employee in the company should understand that they’re hunting for product-market fit, and expect that it’s going to be a tough journey. It’s not a matter of linear progress — it’s a maze where you spend most of your time lost, never sure if you’re making progress or just eliminating an idea through invalidation. Ryan Holiday has a great comment on this: Product Market Fit is not some mythical status that happens accidentally. Companies work for it; they crawl toward it. They’re ready to throw out weeks or months of work because the evidence supports that decision. The services as their customers know them now are fundamentally different from what they were at launch — before they had Product Market Fit. Every member of the team has a role to play in finding it, from those who are building products to those who make strategic decisions or interact with customers. Today, it is the marketer’s job as much as anyone else’s to make sure Product Market Fit happens.
[…] But rather than waiting for it to happen magically or assuming that this is some other department’s job, marketers need to contribute to this process. Isolating who your customers are, figuring out their needs, designing a product that will blow their minds — these are marketing decisions, not just development and design choices. These excellent excerpts come from Ryan Holiday’s course on Growth Hacker Marketing, suggested by Vinish Garg.

All Markets are Not Created Equal

Andrew Chen has an underrated post about how to know when a consumer startup has hit product/market fit. In it, he outlines what makes a “good market”:

- A large number of potential users
- High growth in the number of potential users
- Ease of user acquisition

and what the benefits are of targeting a good market: Leading with a great market helps you execute your product design in a simpler and cleaner way. The reason is that once you’ve picked a big market, you can take the time to figure out some user-centric attributes upon which to compete. The important part here is that you can usually pick some key things in which your product is different, but then default the rest of the product decisions. Product-market fit means that you’ve found a product and a market that wants it — but if that market is small, cheap, or shrinking… you still won’t have much of a company. Don’t just find a market — find a great market.

How to Measure Product/Market Fit

As management legend Peter Drucker said, “What gets measured gets managed.” So how do we measure this concept of product/market fit? How do we know if we’re getting closer, or if we have it? This isn’t an easy question, and there are no perfect answers, but there are three approximations that can guide your journey to product/market fit.

Do your Customers Recommend you to Friends?
Net Promoter Score (NPS) is a simple survey asking customers to rate, on a scale from 0–10, “How likely are you to recommend _____ to a friend or colleague?” Here’s a basic explanation of the Net Promoter Score metric and how it is calculated.

A screenshot from Delighted’s Demo.

Services like Delighted automate the process of collecting and analyzing the data for you. I talked with my friend Caleb Elston and set you up with a $100 credit to get you started. If you plan to try it, email [email protected] with the subject line “Evergreen sent me” and they’ll take great care of you.

Do customers care if your company died tomorrow?

A complementary question to NPS, which measures how many customers love you, is this approach, which measures how many customers would be distraught if they couldn’t have your product/service anymore. This is an effective way to measure your value to them and approximate the price you could extract, or the leverage you have to push growth by asking your users to share or invite their friends.

How would you feel if you could no longer use [product]?
- Very disappointed
- Somewhat disappointed
- Not disappointed (it isn’t really that useful)
- N/A — I no longer use [product]

If you find that over 40% of your users are saying that they would be “very disappointed” without your product, there is a great chance you can build sustainable, scalable customer acquisition growth on this “must have” product. This post from growth hacker Sean Ellis was suggested by Tyler Hayes, and introduces a tool called survey.io to send out this question.

How Many Customers Leave & How Soon?
Alex Schultz, in his talk in the lecture series “How to Start a Startup” at Stanford, gives his definition of product-market fit, which is based on churn and user retention: Look at this curve, “percent monthly active” versus “number of days from acquisition.” If you end up with a retention curve that is asymptotic to a line parallel to the X-axis, you have a viable business and you have product market fit for some subset of the market. This may require some context (and some research and math to figure out for your business), so check out the full talk to get the whole story. This talk was also suggested by Tyler Hayes, who dominates resources for learning how to measure product/market fit. You rock, Tyler.

Extra Bonus Presentations

These two presentations, one from Andrew Chen and one from Jason Evanish, get into the process of arriving at product-market fit, and are worth clicking through to learn more.

Andrew Chen: Zero to Product/Market Fit Presentation
Jason Evanish: Getting to Product/Market Fit
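The three approximations above can be sketched in a few lines of code. This is a minimal, illustrative example: the survey data is made up, and the 0.02 plateau tolerance is an assumption, not a standard threshold; real analyses would use survey tooling and cohort data.

```python
def nps(scores):
    """Net Promoter Score: % promoters (scores 9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

def sean_ellis_share(answers):
    """Fraction answering 'very disappointed'; over 0.40 suggests a must-have product."""
    return sum(1 for a in answers if a == "very disappointed") / len(answers)

def retention_flattens(curve, tol=0.02):
    """True if the tail of a retention curve is roughly flat and above zero,
    i.e. asymptotic to a line parallel to the X-axis."""
    tail = curve[-3:]
    return (max(tail) - min(tail)) < tol and min(tail) > 0

# Illustrative data only
print(nps([10, 9, 9, 8, 7, 6, 3, 10]))  # 4 promoters, 2 detractors -> 25.0
```

A retention curve like [1.0, 0.6, 0.45, 0.40, 0.39, 0.39] flattens well above zero, which is the pattern Schultz describes; one that keeps sliding toward zero does not.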
https://medium.com/evergreen-business-weekly/product-market-fit-what-it-really-means-how-to-measure-it-and-where-to-find-it-70e746be907b
['Eric Jorgenson']
2020-05-06 23:42:25.664000+00:00
['Startup', 'Silicon Valley', 'Product Market Fit', 'Entrepreneurship', 'Business Building']
You’re a Hypocrite and It’s Destroying Democracy. Here’s Why.
I am, too.

Photo by Adi Goldstein on Unsplash

Imagine you attended the first session of a psychological experiment on “cognitive training” several weeks ago. Today is the follow-up session. You’ve just arrived and are ushered into a small waiting room. In it, you find two people already seated. There is one empty chair and you sit down. A minute later, someone on crutches and wearing a boot on her foot enters the room. She notices there are no open seats, lets out a big sigh, and then leans up against the wall, clearly in discomfort. You quickly glance at the other two people. They both ignore her. What do you do? Do you give up your chair to this person who’s in obvious need? Or, do you copy what the others do and pretend the problem doesn’t exist? If you think you’d offer her your chair, you’re probably wrong. This study found that only 14% of participants offered their chair to someone who clearly needed it. Just so we’re clear, that’s only one in seven people! Of course, the scenario was a ruse. The other two people seated were actors, as was the person “in distress.” The test also wasn’t quite fair. There’s something called the bystander effect, which makes it less likely for you to help someone if there are other people present. But, from the outside looking in, 86% of people kind of look like assholes, don’t they? You might be tempted to say, “Yeah, most humans are assholes!”, but let’s try not to jump to conclusions. After all, you probably don’t consider yourself an asshole. Maybe you’ve done asshole-ish things, but that doesn’t mean you’ve got “asshole” branded on your soul, right? But, isn’t it interesting that we tend to label other people as “assholes” far more often than we do ourselves or people we feel similar to? Don’t we seem to love to rage about the moral hypocrisy of others while simultaneously turning a blind eye when it occurs in ourselves or people we identify with? What does this say about us?
Why does each of us think we’re exempt from moral failings? And, most importantly, what do we do about it? We’re all moral hypocrites Imagine you’re involved in another study. You’re taken into a room and seated at a computer. The researcher tells you there are two tasks. The “green” task will take about 10 minutes and consists of a photo hunt. The “red” task will take about 45 minutes and consists of logic problems. You can either assign yourself one of the tasks based on preference or use the computer to randomly assign one for you. The catch? If you perform the “green” task, the next participant will perform the “red” task, or vice versa. The researcher then leaves the room. What would you do? More importantly, what do you think is fair? The researchers had some participants make the decision and other participants watch actors choose “green” for themselves. Participants then rated either how fairly they behaved or how fairly the actors behaved on a scale from 1 (“extremely unfairly”) to 7 (“extremely fairly”). The participants who selected “red” or had the computer assign the task to them were excluded from the results. When rating their own actions — that is, choosing the “green” task — participants rated their fairness at about 4. However, when rating the actions of someone else — that is, actors choosing the “green” task — participants rated their fairness at about 3. Interestingly, if participants were made to feel similar to the actor they were observing, they rated the actor’s actions as a little higher than 4. On the flip side, if participants were made to feel dissimilar to the actor, they rated the actor’s actions as a little less than 3. As a result, the authors concluded: At a basic level, preservation of a positive self-image appears to trump the use of more objective moral principles. It is equally disconcerting, however, that the stain of hypocrisy actively spreads to group-level social identities, and in so doing may inflame intergroup discord. 
What this means is that how we assess moral actions depends on our relationship with the person performing them. We tend to think of ourselves as having moral principles that stay constant. We’re wrong. Why would this be the case? The same authors performed a follow-up study to answer this very question. What they suspected was that people rationalized — that is, made excuses for — their behavior, but not the behavior of others. So, when the participants were again asked to rate the fairness of the same actions, the researchers kept the participants’ rational minds preoccupied with what they called a cognitive load. In this case, it meant the participants were asked to remember and report on a series of 7-digit numbers as they were answering the questions. What the researchers found was that, under cognitive load, the participants didn’t rate their actions any differently than the actions of others. They concluded: Hypocrisy readily emerged under normal processing conditions, but disappeared under conditions of cognitive constraint. Isn’t that interesting? Our intuitions about morality and fairness are consistent — but it is our rational minds that twist and distort the circumstances to turn us into hypocrites. Combatting moral hypocrisy in ourselves We see hypocrites everywhere we look. The news and social media abound with stories of people self-righteously claiming one thing while doing the opposite. It’s amazing how sensitive we are to morality, except where it matters most — in ourselves. If you take anything from this article, I hope it’s that you’re not exempt from this uncomfortable truth. Don’t believe me? Simply watch your mind the next time you judge your actions or the actions of another. What you’ll see is your mind scrambling to interpret the information in a way that supports what you already believe — which is that you and the people you agree with are the “good guys” and that all others are the “bad guys”. 
The trouble is, this isn’t something that will miraculously fix itself. The polarization of society stems, at least in part, from our collective failure to recognize our own hypocrisy. There’s a lot at stake here. Perhaps even democracy itself. So, the next time you see someone acting immorally, stop yourself from rushing to an immediate judgment. Take a step back and ask yourself whether you haven’t acted similarly in the past. You just might realize that they’re not so different from you.
https://medium.com/datadriveninvestor/youre-a-hypocrite-here-s-why-7f92a28cd9b0
['Jeff Valdivia']
2020-12-27 16:03:09.249000+00:00
['Self-awareness', 'Politics', 'Mindfulness', 'Psychology', 'Morality']
Cleaner Data Analysis with Pandas Using Pipes
Practical guide on Pandas pipes

Pandas is a widely used data analysis and manipulation library for Python. It offers numerous functions and methods for a robust and efficient data analysis process. In a typical data analysis or cleaning process, we are likely to perform many operations. As the number of operations increases, the code starts to look messy and becomes harder to maintain. One way to overcome this issue is the pipe function of Pandas, which allows us to combine many operations in a chain-like fashion. In this article, we will go over examples to understand how the pipe function can be used to produce cleaner and more maintainable code. We will first do some data cleaning and manipulation on a sample dataframe in separate steps. After that, we will combine these steps using the pipe function. Let’s start by importing the libraries and creating the dataframe.

```python
import numpy as np
import pandas as pd

marketing = pd.read_csv("/content/DirectMarketing.csv")
marketing.head()
```

The dataset contains information about a marketing campaign. It is available here on Kaggle. The first operation I want to do is to drop the columns that have lots of missing values.

```python
thresh = len(marketing) * 0.6
marketing.dropna(axis=1, thresh=thresh, inplace=True)
```

The code above drops the columns with more than 40 percent missing values. The value we pass to the thresh parameter of the dropna function indicates the minimum number of required non-missing values. I also want to remove some outliers. In the Salary column, I want to keep the values between the 5th and 95th quantiles.

```python
low = np.quantile(marketing.Salary, 0.05)
high = np.quantile(marketing.Salary, 0.95)
marketing = marketing[marketing.Salary.between(low, high)]
```

We find the lower and upper limits of the desired range using the quantile function of NumPy. These values are then used to filter the dataframe.
It is important to note that there are many different ways to detect outliers; the one we have used here is, in fact, rather superficial, and more realistic alternatives exist. However, the focus here is the pipe function, so you can implement whichever operation fits your task best. The dataframe contains many categorical variables. If the number of categories is small compared to the total number of values, it is better to use the category data type instead of object, as it saves a great amount of memory depending on the data size. The following code will go over the columns with the object data type. If the number of categories is less than 5 percent of the total number of values, the data type of the column will be changed to category.
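The article is cut off before that code appears. Below is a minimal sketch of what the conversion might look like, with the earlier cleaning steps wrapped into functions so they can be chained with pipe. The function names (drop_missing, remove_outliers, to_category) and the 5 percent threshold parameter are my own illustration, not the author's exact code:

```python
import numpy as np
import pandas as pd

def drop_missing(df, ratio=0.6):
    # Drop columns whose non-missing count is below ratio * len(df).
    return df.dropna(axis=1, thresh=int(len(df) * ratio))

def remove_outliers(df, column, low_q=0.05, high_q=0.95):
    # Keep only rows whose value in `column` lies between the
    # low_q and high_q quantiles.
    low, high = np.quantile(df[column], [low_q, high_q])
    return df[df[column].between(low, high)]

def to_category(df, threshold=0.05):
    # Convert object columns to the category dtype when the number of
    # distinct values is less than `threshold` of the column length.
    df = df.copy()
    for col in df.select_dtypes(include="object").columns:
        if df[col].nunique() / len(df) < threshold:
            df[col] = df[col].astype("category")
    return df

# The separate cleaning steps then read as one chain:
# marketing = (marketing
#              .pipe(drop_missing)
#              .pipe(remove_outliers, "Salary")
#              .pipe(to_category))
```

Because each step is an ordinary function that takes and returns a dataframe, pipe passes extra arguments through (such as the column name "Salary"), and the chain reads top to bottom in the order the operations are applied.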
https://towardsdatascience.com/cleaner-data-analysis-with-pandas-using-pipes-4d73770fbf3c
['Soner Yıldırım']
2020-12-24 18:36:53.853000+00:00
['Machine Learning', 'Python', 'Artificial Intelligence', 'Data Science', 'Pandas']
To Be a Good Writer, Join a Writing Critique Group.
The Structure and How it Works We are a group of five writers, each with their own genre and style. We have one business writer, a fiction writer, a memoirist, a historical fiction writer and a poet. We meet weekly, and critique one member’s work each week over Zoom, with candour, tough love, and giggles. We also have a Discord channel where we chat about things unrelated to writing and share resources that may help each other. I was the first to go under the knife. I had submitted two poems, both of which looked complete to me, until…you can guess! After an hour-long discussion around length, use of words, imagery and details big and small, I had my work cut out for the rest of the month. Little did I expect that so much room for improvement would be revealed in pieces that looked proper and polished to my own eyes. That first meeting was enough for me to see clearly that this was going to be the best action I’d taken to hone my craft so far. Three Main Take-aways I took away three key lessons from joining this writing critique group on why this is something every writer should consider. 1. A willing and high-quality “focus group” A writing critique group can become your tribe. They will offer you candid, useful feedback on your prototype novel or poem before you launch it in the wider market. In a way, your peers are your test audience, and they have every reason to help you make your work better. 2. Accountability, every week There are so many articles on Medium about how important it is to write every day. Developing a regular writing practice has the power to transform our inner self, to clarify blocks in our power to express, to help us articulate complex ideas, and to improve our abilities to write well. For me, the greatest benefit of being part of a writing critique group is that it creates the right amount of pressure for me to keep writing every week. Positive peer pressure from this group also led me to join the NaNoWriMo challenge this year.
One of the writers in the group participates every year, and this year, she managed to convince all five of us to join. To my own surprise, I wrote 10,000 words during the month! 3. An insider view of other writers’ minds When I joined this group, I was expecting some feedback on my work but I was also keen to learn from other people’s writing and thought processes. Yet, I had totally underestimated how much insight I would get into the minds of other writers, who write a totally different genre. This is especially true of the fiction writers. The way they create worlds and build characters is simply fascinating. For example, one of my fellow writers wrote an entire chapter describing the childhood of her protagonist, only to share later that the novel only takes place during the character’s adulthood. Writing up her childhood was her technique for extrapolating an authentic character into adulthood. Such insight has changed the way I read. Moreover, it is making me a much better storyteller.
https://medium.com/the-brave-writer/to-be-a-great-writer-join-a-writing-critique-group-80fc04644aca
['Prajakta Unknown']
2020-12-30 01:00:42.741000+00:00
['Writing', 'Advice', 'Productivity', 'Writing Tips', 'Freelancing']
Five Steps to Nail Your Pitch Deck — Part II
Five Steps to Nail Your Pitch Deck — Part II What you need to know to score a second meeting DevOps Storytelling 3 minutes and 44 seconds. That’s the average amount of time a VC spends reading a 20-page early-stage pitch deck. 12 seconds a page — if the investors read it at all. Having worked as a VC for many years and reviewed thousands of pitch decks, I can tell you there’s only one way to get investors’ attention: Namely, your pitch deck has got to tell a compelling story. As discussed in Part I of this article — ”Find your story” — it’s a common misconception that you need to cram all the details about what makes your idea unique into your pitch deck. Instead, you’ve got to build your pitch deck with one goal in mind: Tell a great story, fast. VCs meet roughly 50% of the companies they see, and if you’ve targeted the right investor, your chances at a first meeting are as good as 90%. Scoring a second meeting is the real challenge: VCs invite fewer than 20% of the companies they see for a first meeting to the second meeting (where, by the way, investors really start engaging with the nitty gritty of your business proposal). In Part I of this article, we looked at how to find your story. Now that you’ve got it, let’s get into the weeds of how to communicate that story effectively. Drawing on my extensive experience on both sides of the table, I’ve developed a process that will help you turn your company’s story into a winning pitch deck. I call it “ConDes” and in this article I’ll explain how it works. From DevOps to ConDes Remember the days when software developers wrote and deployed code and then forgot about it, while operations had to tackle all the issues? As we all know, that process didn’t work well. So about a decade ago, software experts came up with a new system. 
They understood how much better it is to integrate “Development,” the department creating the code, and “Operations,” the department using that code, and “DevOps” was born.¹ The idea was to make the two teams sync up in order to work more holistically. The result? Better, faster, more efficient work products. DevOps is more than a process or a set of tools — it’s a culture and a philosophy. My clients would never question the superiority of DevOps when it comes to software development. But when it comes to their pitch decks, many of them — especially engineers — act as if the Dev and the Ops are still separated. Meaning, they first prepare the content of the pitch deck and then add some random design to make it look “good”. The result? Pretty “meh.” Based on a decade of experience, I’ve seen how much pitch deck storytelling can benefit from a DevOps approach. Meaning, merging the two elements of pitch deck crafting — storytelling (“Content”) and visual (“Design”) — from the very beginning. That’s why I developed “ConDes”, a process that makes it easy to achieve such a merge. In this article I’ll explain the basic principles I use to get an optimal pitch deck. How does ConDes work? Remember, our goal is to generate emotions and keep the investor attentive across all of our slides. Therefore, when you think about WHAT you want to say, you have to think also about HOW you want to say it. As a rule of thumb: a beautifully designed slide with part of the data is much more effective than an ugly slide with all of the data. ConDes has 5 steps and is similar to the production of a Hollywood movie. There is pre-production, production, and post-production. Some steps focus more on content, the others more on design, but they go hand-in-hand all along. ConDes — five steps to get a winning pitch deck Step I — Find Your Story In Part I of this article we explained how to find your story. Next, you create your storyboard. 
A storyboard is a sequence of drawings that represent the slide deck you plan to create. The purpose of the storyboard is not to go into all the details of each slide, but rather set the flow of the story. That means you should keep each slide simple. (You can add more detailed notes along the side, if it makes it easier for you to keep the slide minimal). Here’s an example: The storyboard Step II — give your story colors Every company needs a corporate design. The corporate design takes your company story (also known as corporate identity) and reflects it visually in a logo, color palette, typography, webpage, business card, swag and more. Corporate design is a major building block in building your brand. And as barriers to entry continue to fall driven by cloud technologies, competition among startups will increase and your brand may be the only differentiator between you and your competitors.² Should your company already have a corporate design, then you should verify it fits to the story you decided to tell in Step I. I’ve run into cases where the design was created early on in the life of the company and no longer matches the story you want to tell (for example, you want to tell a bold story but you’re using only cool colors). If you don’t have a corporate design, now is the time to create one. You can use internal (e.g. your UI/UX designer) or external resources (e.g. agencies) to help you find your design. Usually, the process involves a brand workshop that will help the designer understand your story and how you imagine it to look and feel in reality. The outcome of this step should be a slide template together with a color palette and font guidelines. Extract from the Brand Identity book of Daedalean AG Step III — production Now it’s time to create slides. But before you use the shiny new design, first translate the storyboard into real slides. The slides should have a white background and be populated with text (and maybe some rudimentary graphs). 
First put ALL your thoughts on the slide. Later, you’ll boil it down to the key messages, so don’t worry if there’s a lot of text. The goal is to make sure you describe in detail what the goal of each slide is (i.e. why you believe that if investors see this information they are more likely to invest). See the slide below to track development. Once the slides are ready and you feel the story flows well, it is time to match content with design. When I think about a slide and the content I want to deliver, I don’t think about it in textual form. I imagine the slide and think about how the content is visually represented. For example, should the market data be represented by text, or by a picture, table, line chart, bar chart or creative infographic? You have to choose the way your content will best be remembered. Use images (proprietary images are much better than free stock pictures — it’s time for your kids and pets to become stars). The outcome of this step is a deck with all your content presented in a basic design format. Step IV — make your slides unique At this stage, a designer takes your slides and makes them one of a kind. Investors see hundreds of presentations every year. They know by heart all the built-in shapes and SmartArt that PowerPoint and the like have to offer. My advice: if you want to impress an investor, use original graphic design. New fonts, illustrations or design elements. This is the magic that makes design beautiful and appealing to our limbic brain. When your designer is ready with the results, your work will be transformed. Get ready for a “Wow” moment and a case of goosebumps. If you don’t experience that, the design isn’t ready yet! Step V — Finalize and dry-run Once the design work is done, it’s time to check that the story still works and flows. Sometimes we over-design a slide and omit too much data. It is a fine balance between too much and too little data and you need to make sure you nail it. Now’s the time!
It’s also a good time to start working on your delivery by doing a dry-run with other members of the company, family, or friendly investors you have a close relationship with. Once you start using the deck, you’ll notice there are still a few hiccups. You’ll see where things should be shorter or longer. A big warning: at this stage, when you make final content touches, there is a temptation to skip involving the designer (because you want to save time or money). This is a big mistake. The design was built with the content in mind. If you add or omit even a single sentence, this can destroy the design of the slide and harm your ability to deliver a crisp message. Therefore, you should engage the designer in any change you make to the deck. After a few runs, the number of changes will be minimal. The outcome of this step is an amazing pitch deck that will wow investors and help get you more meaningful meetings! And there’s a bonus — you can use the same deck as the basis for any other communications need you have, from general company presentations to sales pitches. Over 50% of the slides will be the same. How much time does the process take? A well thought-out pitch deck will take a couple of months to prepare. Think of all the workshops (strategy, brand), work and new iterations that are needed. This isn’t a process that can be done overnight. Of course, you can take shortcuts if there’s no time, but then you have to give up some parts of the process (e.g. you can buy a presentation template for $20 instead of coming up with an original design). So, when is a good time to start? NOW! And at the latest a year before you want to raise money. Why a year? Because raising money is a six-month-long process and you want to have another 4–6 months for the ConDes process. Real pitch deck example I’ve recently gone through the ConDes process with Daedalean, a Swiss startup developing autonomous piloting systems for aircraft.
Daedalean was preparing its next financing round and needed a new pitch deck. The following are real examples of a couple of the slides as we made progress through the process. The Problem slide The Product slide Summary — content and design go hand-in-hand If you want to grab investors’ attention, your fundraising pitch deck has to tell a compelling story. Good stories are those where the content, design, and delivery (the way you pitch) are all in sync. While you can wait until your deck is ready to work on your delivery, the content and design elements of the deck MUST be developed simultaneously. The “ConDes” process can help you turn your story into a winning pitch deck. By following the five steps, you will make sure that the next time you think about giving a presentation, you won’t just think about the messages you want to deliver, but also about how they should be visually represented so you can tell a truly compelling story. One note: I find it very helpful to have an experienced sparring partner to guide you through the process. It is usually easier for outsiders to come up with a short and crisp story, as well as to provide honest feedback on whether the slides are strong enough. Now you’ve got all the tools to build the perfect pitch deck. I am curious to hear about your experience using the ConDes process and look forward to seeing your beautiful pitch decks. If you have any questions, feedback or need any help, feel free to ping me on LinkedIn. In future articles I will elaborate on specific slides in the pitch deck and share more practical tips like the ones here. Follow me for more! Big thanks go out to Sally and to Nicolai for providing feedback on earlier versions of this article. Many thanks to Alessandro for the colorful illustrations.
https://medium.com/swlh/five-steps-to-nail-your-pitch-deck-part-ii-5484d005b870
['Yair Reem']
2020-07-15 16:00:16.336000+00:00
['Storytelling', 'Fundraising', 'Startup', 'Pitch Deck', 'Venture Capital']
Substack Writing Tips
4. V A L U E When you sign up for Blogging Guide as a paying subscriber, you gain access to my back catalogue of premium posts and to any future subscriber-only posts I write. However, this is true for any Substack publication and is frankly a dubious value proposition for readers (i.e., sure, I’ve read Casey’s free posts, but will I actually get enough value from a relatively new newsletter to justify the purchase?). To make this decision easier, I provide my readers with instant and exclusive access to several digital products that I have created. If you click on the following locked post (which is also referenced in my featured post, pinned to the top of my Substack homepage), you will see instructions explaining how to download several of my digital products for free! Locked Post Containing Downloads for Subscribers: There are many ways you can gate (control the access to) content. Since I was already selling these products through Gumroad, I continued to utilize their platform, but you could accomplish the same thing through Etsy or any other digital eCommerce store. Paying subscribers now have instant access to my bonus content. Better yet, because this content is listed for sale on Gumroad, it helps potential subscribers realize the value of becoming a subscriber. The products I am offering complimentary access to are worth $250 (and I plan on adding more). Since the current price of my newsletter is only $50 per year, the value proposition becomes much clearer: Obviously, what you offer your readers will depend upon your niche. And some niche topics may not lend themselves toward digital downloads. But I am sure there are many other creative ways to provide bonus content. Still, providing clear and instant value to potential subscribers is a valuable tactic on Substack (or pretty much in marketing/sales in general).
https://medium.com/substack-writing/substack-writing-tips-a8e72d86a39b
['Substack Writing']
2020-08-05 02:21:33.659000+00:00
['Technology', 'Writing', 'Blogging', 'Substack', 'Journalism']
The GPT-3 won’t get you laid off. Unless you’re a fulltime autoregressive…
This, by all means, is hugely impressive and *really* cool, if you ask me. The ability of GPT-3 to generate layouts, Python code, or creative content is shocking to most people. And it should be. It’s really strange to see a machine learning model that can comprehend and act upon requests in a reasonably correct fashion. So with that in mind, let’s get to the main topic. GPT-3 won’t get you laid off Well, at least not in the near future. I won’t dare to give it a timeline, as I’m a firm believer in the following: “It’s tough to make predictions, especially about the future” — Danish Proverb And, of course, I’m not referring to the folks that are working 9 to 5 as an autoregressive language model API. Feel sorry for you. When innovation comes to disrupt tasks performed by people, the discussion of the technology’s effects on society always gets heated. When such innovation is seemingly disrupting the output of a workforce known to produce things that look like magic, like software engineers in general, things definitely get more heated than before. We, as tech people, are having some mixed feelings that are affecting our judgment. I’m afraid that the notion that GPT-3 is a truly groundbreaking achievement and, at the same time, won’t hurt us, is not being brought up as frequently as it should be. Yes, it can generate code. Just because of that, I wouldn’t advise you to switch careers or fire your engineering team. Yes, maybe the code that is generated is not as fancy as the code you write with 5, 10, or 20+ years of experience, tricking the compiler or the framework to achieve maximum performance. Just because of that, I wouldn’t advise you to ignore it completely, like a super-hyped technology that will be forgotten on a dusty shelf. “Deep Learning is a superpower” — Andrew Ng We need to learn how to use it to strengthen our craft, before anything else.
The tools that can be created with GPT-3 and other ML models to help us build, maintain, and run better technology are in a league of their own. They could help us, as professionals, reach a new level of productivity, quality, and overall expertise. In our little tech bubble, we can think of the new possibilities that GPT-3 brings to the table. We can reimagine and build, from the ground up, the tools that we use daily (for coding, designing, managing work, etc.) in order to become better professionals. For example, we can use it to build better IDEs. Smart IDEs. Some companies have already started work in this field and have developed great products. However, I feel there’s a lot more to evolve, and even to reimagine, in what an IDE can look like. We should work on those tools so that we can be better at our craft, not so that they replace us altogether. Professionals in all areas have had to reinvent themselves through the years because of the code that we’ve written; are we ourselves, all of a sudden, unable to do the very thing we’ve been advocating for decades? It’s up to us to learn how to use our new superpower.
https://towardsdatascience.com/the-gpt-3-wont-get-you-laid-off-8a5cb5f1dcb8
['Thiago Candido']
2020-07-24 16:22:07.338000+00:00
['Machine Learning', 'Startup', 'Artificial Intelligence', 'Programming', 'Deep Learning']
Big Data, AI & IoT Part Two: Driving Industry 4.0 One Step At A Time
Looking Into the Future of AI, Big Data and IoT | Towards AI Factories, refineries, utilities and all manner of industrial environments will benefit from AI, Big Data, and IoT, but what will it take to get there? What comes to mind when you think of a factory? A dark Victorian mill? A Charlie Chaplin skit? Or a robotized production line? Industrial manufacturing has consistently been a marker of progress, acting as a sign of the times as each generation has developed increasingly advanced technologies. But although industrial companies have used digital technology for years to improve their processes, the power of connected sensors, data and artificial intelligence (AI) at all stages of an operation has yet to be realized on a grand scale. This article will focus on the intersection of Big Data, AI, and IoT devices in an industrial setting, continuing the series on these technologies and the ecosystem they support. Processing power The industrial sector is an ideal proving ground for automation and optimization, due to the huge number of specific processes in any operation. These organizations also tend to be spread across various locations and have a huge chain of suppliers, distributors, and users which their product or service may touch. This means that the processes followed at each stage — whether during production, maintenance, or distribution — can end up being fragmented between operations. Technology, therefore, plays a huge part in industrial environments, helping to log material usage, measure flow in utilities and oil pipelines, and bring operations together under one system. Industry 4.0 refers to the next step in industrial technology, with robotics, computers and equipment becoming connected to the Internet of Things (IoT), and enhanced by machine learning algorithms.
Advances in sensor technology and connectivity modules have allowed more equipment to be measured, monitored, and tracked between sites, and orchestrated from a central, remote location. With this accessibility, managers, executives and even data scientists can use that insight to improve the efficiency and productivity of the whole operation. Thanks to the rise of cloud computing and the consequent falling costs of data storage, a huge amount of data can now also be stored and fed into machine learning algorithms to help automate specific processes within an organization. A holistic view Bringing AI into industrial processes is not as easy as buying a new piece of equipment, however. Due to the complex and interlinked nature of industrial processes, companies must have a solid understanding of what they want from AI in the first place. ‘Whether it comes from sensors along the production floor or connected devices out in the wild, ultimately you can’t do anything with that data without having a structured thought-process’ says Shekhar Vemuri, CTO at Clairvoyant. With a ‘strong foundational data strategy’ in place, companies can then look at the whole system end-to-end as data flows through an enterprise, as long as data itself is the focus. ‘If you still think of data as a secondary product of your operations, then your organization will keep struggling’ says Vemuri, ‘with data as the primary asset it becomes part of your business processes, and you can see how each bit of data relates to the other.’ With a holistic view of the production line, companies can use AI to get further value from that data or gain more insight into the equipment. 
‘People are now looking at how to leverage industrial IoT sensor data to project things that may happen — whether predictive maintenance, line management or quality control,’ says Vemuri, ‘and things will start snowballing over the next 12–18 months because there’s growing knowledge there to make it happen.’ Vemuri is clear to point out however that technology is not everything: ‘while I would love to say it’s purely a technical problem, it’s a people and an organizational problem too’, so while setting out a solid goal of what you want to achieve is important, ‘you have to think big but act small, because otherwise it’s too abstract to go and grasp the value — set your goal but take baby steps towards it.’ Connecting the dots Collecting all this data, however, relies entirely on a reliable connection. Various connectivity standards exist that cater to the industrial environment, optimized for sending data packets over long distances (LPWANs, or low-power wide-area networks), keeping a device in the field for as long as possible (embedded SIMs or eUICC), or maximizing the potential of existing network infrastructure (cellular LPWANs that use the LTE spectrum). Ethernet, or wired internet connection, is also widely used in local, stationary IoT applications, but is not suitable in many cases that need wireless connectivity. All of these connectivity types have their benefits and drawbacks, but many industrial companies run on ‘legacy technology which is simply not built for the new world,’ says Venkat Viswanathan, co-founder of LatentView Analytics. Because of this, uprooting an organization to adopt a new connectivity standard would be completely unfeasible in many cases. Industrial companies may well choose cellular multi-network connectivity for the meantime then, and work on getting more of their processes automated before next-gen network technology becomes available. 
Edge computing alongside machine learning may provide part of the solution, as this allows more data to be qualified, and more automated tasks assigned, at the source before transmission to the cloud for analysis, reducing the amount of data sent over the airwaves. Whilst in some cases this replicates legacy equipment from a communications perspective, edge computing delivers the latency and efficiency that true automation requires, and will remain compatible with newer systems as legacy systems become outdated. This also allows companies to evaluate their existing systems and processes using the reliability of cellular connectivity, and bring about the incremental change needed to achieve complete automation. Piece by piece Incremental change is the name of the game, as many industrial organizations are too spread out and fragmented to perform a complete overhaul and immediately benefit from cutting-edge technology. In fact, ‘many of these processes are still completely manual’, according to Viswanathan. ‘If you can imagine a refinery and the various equipment there, they actually have people eyeballing the equipment, looking for a problem and making a note with paper and pen.’ While this is, of course, an extreme example, both Viswanathan and Vemuri agree that ‘the number one thing that companies need to focus on is sponsorship from top management.’ Bringing about a new wave of industrial progress with AI, Big Data and IoT will not happen overnight. Taking advantage of the opportunity that these technologies bring requires a holistic strategy, strong leadership, and an understanding of how data flows through an organization. Many industrial companies rely on equipment that functions perfectly well but will not fit with new technologies, and the same is true of some of the executives at the top — they are perfectly happy with how things are, but will not be able to adopt new technologies without a change in mindset. 
However, the increasing number of leaders who do appreciate the benefits of Industry 4.0 need to remember that meaningful change on such a huge scale can only come in baby steps.
https://medium.com/towards-artificial-intelligence/big-data-ai-iot-part-two-driving-industry-4-0-one-step-at-a-time-5f76b1ad6df6
['Charles Towers-Clark']
2019-07-18 15:41:52.230000+00:00
['Artificial Intelligence', 'Automation', 'IoT', 'Internet of Things', 'Big Data']
Natural Language Processing (NLP) with Python — Tutorial
Tutorial on the basics of natural language processing (NLP) with sample coding implementations in Python. Author(s): Pratik Shukla, Roberto Iriondo Last updated: July 26, 2020 In this article, we explore the basics of natural language processing (NLP) with code examples. We dive into the natural language toolkit (NLTK) library to present how it can be useful for natural language processing-related tasks. Afterward, we will discuss the basics of other Natural Language Processing libraries and other essential methods for NLP, along with their respective coding sample implementations in Python. This tutorial’s code is available on Github, and its full implementation is also available on Google Colab. 📚 Check out our sentiment analysis tutorial with Python. 📚 Table of Contents: What is Natural Language Processing (NLP)? Computers and machines are great at working with tabular data or spreadsheets. However, human beings generally communicate in words and sentences, not in the form of tables, and much of the information that humans speak or write is unstructured, so it is not easy for computers to interpret it. In natural language processing (NLP), the goal is to make computers understand unstructured text and retrieve meaningful pieces of information from it. Natural Language Processing (NLP) is a subfield of artificial intelligence concerned with the interactions between computers and human language. Applications of NLP: Machine Translation. Speech Recognition. Sentiment Analysis. Question Answering. Summarization of Text. Chatbots. Intelligent Systems. Text Classification. Character Recognition. Spell Checking. Spam Detection. Autocomplete. Named Entity Recognition. Predictive Typing. Understanding Natural Language Processing (NLP): Figure 1: Revealing, listening, and understanding. We, as humans, perform natural language processing (NLP) considerably well, but even then, we are not perfect. 
We often misunderstand one thing for another, and we often interpret the same sentences or words differently. For instance, consider the following sentence, and let us try to understand its interpretation in many different ways: Example 1: Figure 2: NLP example sentence with the text: “I saw a man on a hill with a telescope.” These are some interpretations of the sentence shown above. There is a man on the hill, and I watched him with my telescope. There is a man on the hill, and he has a telescope. I’m on a hill, and I saw a man using my telescope. I’m on a hill, and I saw a man who has a telescope. There is a man, and he is on a hill that has a telescope on it. Example 2: Figure 3: NLP example sentence with the text: “Can you help me with the can?” In the sentence above, we can see that there are two “can” words, but each of them has a different meaning. Here, the first “can” is used for question formation. The second “can,” at the end of the sentence, is used to represent a container that holds food or liquid. Hence, from the examples above, we can see that language processing is not “deterministic” (the same language does not always carry the same interpretation), and something suitable to one person might not be suitable to another. Therefore, Natural Language Processing (NLP) takes a non-deterministic approach. In other words, Natural Language Processing can be used to create intelligent systems that can understand how humans understand and interpret language in different situations. 📚 Check out our tutorial on the Bernoulli distribution with code examples in Python. 📚 Rule-based NLP vs. Statistical NLP: Natural Language Processing is separated into two different approaches: Rule-based Natural Language Processing: It uses common sense reasoning for processing tasks. For instance, the freezing temperature can lead to death, or hot coffee can burn people’s skin, along with other common sense reasoning tasks. 
However, this process can take a long time, and it requires manual effort. Statistical Natural Language Processing: It uses large amounts of data and tries to derive conclusions from it. Statistical NLP uses machine learning algorithms to train NLP models. After successful training on large amounts of data, the trained model will have positive outcomes with deduction. Comparison: Figure 4: Rule-Based NLP vs. Statistical NLP. Components of Natural Language Processing (NLP): Figure 5: Components of Natural Language Processing (NLP). a. Lexical Analysis: With lexical analysis, we divide a whole chunk of text into paragraphs, sentences, and words. It involves identifying and analyzing the structure of words. b. Syntactic Analysis: Syntactic analysis involves the analysis of words in a sentence for grammar and arranging words in a manner that shows the relationship among the words. For instance, the sentence “The shop goes to the house” does not pass. c. Semantic Analysis: Semantic analysis draws the exact meaning of the words, and it analyzes the meaningfulness of the text. Sentences such as “hot ice-cream” do not pass. d. Discourse Integration: Discourse integration takes into account the context of the text. It considers the meaning of the sentences that come before a given sentence. For example: “He works at Google.” In this sentence, “he” must be referenced in a preceding sentence. e. Pragmatic Analysis: Pragmatic analysis deals with overall communication and interpretation of language. It deals with deriving meaningful use of language in various situations. 📚 Check out an overview of machine learning algorithms for beginners with code examples in Python. 📚 Current challenges in NLP: Breaking sentences into tokens. Tagging parts of speech (POS). Building an appropriate vocabulary. Linking the components of a created vocabulary. Understanding the context. Extracting semantic meaning. Named Entity Recognition (NER). Transforming unstructured data into structured data. Ambiguity in speech. 
Easy-to-use NLP libraries: The NLTK Python framework is generally used as an education and research tool. It’s not usually used in production applications. However, it can be used to build exciting programs due to its ease of use. Features: Tokenization. Part Of Speech tagging (POS). Named Entity Recognition (NER). Classification. Sentiment analysis. Chatbot packages. Use-cases: Recommendation systems. Sentiment analysis. Building chatbots. Figure 6: Pros and cons of using the NLTK framework. spaCy is an open-source natural language processing Python library designed to be fast and production-ready. spaCy focuses on providing software for production usage. Features: Tokenization. Part Of Speech tagging (POS). Named Entity Recognition (NER). Classification. Sentiment analysis. Dependency parsing. Word vectors. Use-cases: Autocomplete and autocorrect. Analyzing reviews. Summarization. Figure 7: Pros and cons of the spaCy framework. Gensim is an NLP Python framework generally used in topic modeling and similarity detection. It is not a general-purpose NLP library, but it handles the tasks assigned to it very well. Features: Latent semantic analysis. Non-negative matrix factorization. TF-IDF. Use-cases: Converting documents to vectors. Finding text similarity. Text summarization. Figure 8: Pros and cons of the Gensim framework. Pattern is an NLP Python framework with straightforward syntax. It’s a powerful tool for scientific and non-scientific tasks, and it is highly valuable to students. Features: Tokenization. Part of Speech tagging. Named entity recognition. Parsing. Sentiment analysis. Use-cases: Spelling correction. Search engine optimization. Sentiment analysis. Figure 9: Pros and cons of the Pattern framework. TextBlob is a Python library designed for processing textual data. Features: Part-of-Speech tagging. Noun phrase extraction. Sentiment analysis. Classification. Language translation. Parsing. Wordnet integration. Use-cases: Sentiment Analysis. 
Spelling Correction. Translation and Language Detection. Figure 10: Pros and cons of the TextBlob library. For this tutorial, we are going to focus more on the NLTK library. Let’s dig deeper into natural language processing by working through some examples. Exploring Features of NLTK: a. Open the text file for processing: First, we are going to open and read the file which we want to analyze. Figure 11: Small code snippet to open and read the text file and analyze it. Figure 12: Text string file. Next, notice that the data type of the text file read is a String. The number of characters in our text file is 675. b. Import required libraries: For various data processing cases in NLP, we need to import some libraries. In this case, we are going to use NLTK for natural language processing, and we will use it to perform various operations on the text. Figure 13: Importing the required libraries. c. Sentence tokenizing: By tokenizing the text with sent_tokenize(), we can get the text as sentences. Figure 14: Using sent_tokenize() to tokenize the text as sentences. Figure 15: Text sample data. In the example above, we can see that the entire text of our data is represented as sentences, and also notice that the total number of sentences here is 9. d. Word tokenizing: By tokenizing the text with word_tokenize(), we can get the text as words. Figure 16: Using word_tokenize() to tokenize the text as words. Figure 17: Text sample data. Next, we can see that the entire text of our data is represented as words, and also notice that the total number of words here is 144. e. Find the frequency distribution: Let’s find out the frequency of words in our text. Figure 18: Using FreqDist() to find the frequency of words in our sample text. Figure 19: Printing the ten most common words from the sample text. Notice that the most used words are punctuation marks and stopwords. We will have to remove such words to analyze the actual text. f. Plot the frequency graph: Let’s plot a graph to visualize the word distribution in our text. Figure 20: Plotting a graph to visualize the text distribution. In the graph above, notice that a period “.” is used nine times in our text. Analytically speaking, punctuation marks are not that important for natural language processing, so in the next step, we will remove them. g. Remove punctuation marks: Next, we are going to remove the punctuation marks, as they are not very useful for us. We are going to use the isalpha() method to separate the punctuation marks from the actual text. Also, we are going to make a new list called words_no_punc, which will store the words in lower case but exclude the punctuation marks. Figure 21: Using the isalpha() method to separate the punctuation marks, along with creating a list under words_no_punc to separate words with no punctuation marks. Figure 22: Text sample data. As shown above, all the punctuation marks from our text are excluded. This can also be cross-checked with the number of words. h. Plotting graph without punctuation marks: Figure 23: Printing the ten most common words from the sample text. Figure 24: Plotting the graph without punctuation marks. Notice that we still have many words that are not very useful in the analysis of our text file sample, such as “and,” “but,” “so,” and others. Next, we need to remove such stopwords, including these coordinating conjunctions. i. List of stopwords: Figure 25: Importing the list of stopwords. Figure 26: Text sample data. j. Removing stopwords: Figure 27: Cleaning the text sample data. Figure 28: Cleaned data. k. Final frequency distribution: Figure 29: Displaying the final frequency distribution of the most common words found. Figure 30: Visualization of the most common words found in the group. As shown above, the final graph has many useful words that help us understand what our sample data is about, showing how essential it is to perform data cleaning in NLP. 
Next, we will cover various topics in NLP with coding examples. Word Cloud: A word cloud is a data visualization technique in which the words from a given text are displayed on a chart. In this technique, more frequent or essential words are displayed in a larger and bolder font, while less frequent or essential words are displayed in a smaller or thinner font. It is a beneficial technique in NLP that gives us a glance at what text should be analyzed. Properties: font_path: It specifies the path for the fonts we want to use. width: It specifies the width of the canvas. height: It specifies the height of the canvas. min_font_size: It specifies the smallest font size to use. max_font_size: It specifies the largest font size to use. font_step: It specifies the step size for the font. max_words: It specifies the maximum number of words on the word cloud. stopwords: Our program will eliminate these words. background_color: It specifies the background color for the canvas. normalize_plurals: It removes the trailing “s” from words. Read the full documentation on WordCloud. Word Cloud Python Implementation: Figure 31: Python code implementation of the word cloud. Figure 32: Word cloud example. As shown in the graph above, the most frequent words display in larger fonts. The word cloud can be displayed in any shape or image. For instance: In this case, we are going to use the following circle image, but we can use any shape or any image. Figure 33: Circle image shape for our word cloud. Word Cloud Python Implementation: Figure 34: Python code implementation of the word cloud. Figure 35: Word cloud with the circle shape. As shown above, the word cloud is in the shape of a circle. As we mentioned before, we can use any shape or image to form a word cloud. Word Cloud Advantages: They are fast. They are engaging. They are simple to understand. They are casual and visually appealing. Word Cloud Disadvantages: They do not work well with unclean data. They lack the context of words. 
Stemming: We use stemming to normalize words. In English and many other languages, a single word can take multiple forms depending upon the context in which it is used. For instance, the verb “study” can take many forms, like “studies,” “studying,” “studied,” and others, depending on its context. When we tokenize words, an interpreter considers these input words as different words even though their underlying meaning is the same. Moreover, since NLP is about analyzing the meaning of content, we use stemming to resolve this problem. Stemming normalizes a word by truncating it to its stem. For example, the words “studies,” “studied,” and “studying” will be reduced to “studi,” making all these word forms refer to only one token. Notice that stemming may not give us a grammatical, dictionary word for a particular set of words. Let’s take an example: a. Porter’s Stemmer Example 1: In the code snippet below, we show that all the words truncate to their stem words. However, notice that the stemmed word is not a dictionary word. Figure 36: Code snippet showing a stemming example. b. Porter’s Stemmer Example 2: In the code snippet below, many of the words after stemming did not end up being recognizable dictionary words. Figure 37: Code snippet showing a stemming example. c. SnowballStemmer: SnowballStemmer generates the same output as the Porter stemmer, but it supports many more languages. Figure 38: Code snippet showing an NLP stemming example. d. Languages supported by the snowball stemmer: Figure 39: Code snippet showing an NLP stemming example. Various Stemming Algorithms: a. Porter’s Stemmer: Figure 40: Porter’s Stemmer NLP algorithm, pros, and cons. b. Lovins Stemmer: Figure 41: Lovins Stemmer NLP algorithm, pros, and cons. c. Dawson’s Stemmer: Figure 42: Dawson’s Stemmer NLP algorithm, pros, and cons. d. Krovetz Stemmer: Figure 43: Krovetz Stemmer NLP algorithm, pros, and cons. e. Xerox Stemmer: Figure 44: Xerox Stemmer NLP algorithm, pros, and cons. f. 
Snowball Stemmer: Figure 45: Snowball Stemmer NLP algorithm, pros, and cons. 📚 Check out our tutorial on neural networks from scratch with Python code and math in detail. 📚 Lemmatization: Lemmatization tries to achieve a similar base “stem” for a word. However, what makes it different is that it finds the dictionary word instead of truncating the original word. Stemming does not consider the context of the word. That is why it generates results faster, but it is less accurate than lemmatization. If accuracy is not the project’s final goal, then stemming is an appropriate approach. If higher accuracy is crucial and the project is not on a tight deadline, then the best option is lemmatization (lemmatization has a lower processing speed compared to stemming). Lemmatization takes into account the Part Of Speech (POS) value. Also, lemmatization may generate different outputs for different values of POS. We generally have four choices for POS: Figure 46: Part of Speech (POS) values in lemmatization. Difference between Stemmer and Lemmatizer: a. Stemming: Notice how on stemming, the word “studies” gets truncated to “studi.” Figure 47: Using stemming with the NLTK Python framework. b. Lemmatizing: During lemmatization, the word “studies” displays its dictionary word “study.” Figure 48: Using lemmatization with the NLTK Python framework. Python Implementation: a. A basic example demonstrating how a lemmatizer works: In the following example, we are taking the PoS tag as “verb,” and when we apply the lemmatization rules, it gives us dictionary words instead of truncating the original word: Figure 49: Simple lemmatization example with the NLTK framework. b. Lemmatizer with default PoS value: The default value of PoS in lemmatization is a noun (n). In the following example, we can see that it’s generating dictionary words: Figure 50: Using lemmatization to generate default values. c. 
Another example demonstrating the power of the lemmatizer: Figure 51: Lemmatization of the words: “am”, “are”, “is”, “was”, “were”. d. Lemmatizer with different POS values: Figure 52: Lemmatization with different Part-of-Speech values. Part of Speech Tagging (PoS tagging): Why do we need Part of Speech (POS)? Figure 53: Sentence example, “can you help me with the can?” Parts of speech (PoS) tagging is crucial for syntactic and semantic analysis. Therefore, for something like the sentence above, the word “can” has several semantic meanings. The first “can” is used for question formation. The second “can,” at the end of the sentence, is used to represent a container. The first “can” is a verb, and the second “can” is a noun. Giving the word a specific meaning allows the program to handle it correctly in both semantic and syntactic analysis. Below, please find a list of Part of Speech (PoS) tags with their respective examples: 1. CC: Coordinating Conjunction Figure 54: Coordinating conjunction example. 2. CD: Cardinal Digit Figure 55: Cardinal digit example. 3. DT: Determiner Figure 56: A determiner example. 4. EX: Existential There Figure 57: Existential “there” example. 5. FW: Foreign Word Figure 58: Foreign word example. 6. IN: Preposition / Subordinating Conjunction Figure 59: Preposition/Subordinating conjunction example. 7. JJ: Adjective Figure 60: Adjective example. 8. JJR: Adjective, Comparative Figure 61: Adjective, comparative example. 9. JJS: Adjective, Superlative Figure 62: Adjective, superlative example. 10. LS: List Marker Figure 63: List marker example. 11. MD: Modal Figure 64: Modal example. 12. NN: Noun, Singular Figure 65: Noun, singular example. 13. NNS: Noun, Plural Figure 66: Noun, plural example. 14. NNP: Proper Noun, Singular Figure 67: Proper noun, singular example. 15. NNPS: Proper Noun, Plural Figure 68: Proper noun, plural example. 16. PDT: Predeterminer Figure 69: Predeterminer example. 17. POS: Possessive Endings Figure 70: Possessive endings example. 18. 
PRP: Personal Pronoun Figure 71: Personal pronoun example. 19. PRP$: Possessive Pronoun Figure 72: Possessive pronoun example. 20. RB: Adverb Figure 73: Adverb example. 21. RBR: Adverb, Comparative Figure 74: Adverb, comparative example. 22. RBS: Adverb, Superlative Figure 75: Adverb, superlative example. 23. RP: Particle Figure 76: Particle example. 24. TO: To Figure 77: To example. 25. UH: Interjection Figure 78: Interjection example. 26. VB: Verb, Base Form Figure 79: Verb, base form example. 27. VBD: Verb, Past Tense Figure 80: Verb, past tense example. 28. VBG: Verb, Present Participle Figure 81: Verb, present participle example. 29. VBN: Verb, Past Participle Figure 82: Verb, past participle example. 30. VBP: Verb, Present Tense, Not Third Person Singular Figure 83: Verb, present tense, not third-person singular example. 31. VBZ: Verb, Present Tense, Third Person Singular Figure 84: Verb, present tense, third-person singular example. 32. WDT: Wh — Determiner Figure 85: Determiner example. 33. WP: Wh — Pronoun Figure 86: Pronoun example. 34. WP$: Possessive Wh — Pronoun Figure 87: Possessive pronoun example. 35. WRB: Wh — Adverb Figure 88: Adverb example. Python Implementation: a. A simple example demonstrating PoS tagging. Figure 89: PoS tagging example. b. A full example demonstrating the use of PoS tagging. Figure 90: Full Python sample demonstrating PoS tagging. Chunking: Chunking means extracting meaningful phrases from unstructured text. If we tokenize a book straight into words, it is sometimes hard to infer meaningful information. Chunking works on top of Part of Speech (PoS) tagging: it takes PoS tags as input and provides chunks as output. Chunking literally means a group of words; it breaks simple text into phrases that are more meaningful than individual words. Figure 91: The chunking process in NLP. Before working with an example, we need to know what a phrase is. Meaningful groups of words are called phrases. There are five significant categories of phrases. Noun Phrases (NP). 
Verb Phrases (VP). Adjective Phrases (ADJP). Adverb Phrases (ADVP). Prepositional Phrases (PP). Phrase structure rules: S (Sentence) → NP VP. NP → {Determiner, Noun, Pronoun, Proper name}. VP → V (NP)(PP)(Adverb). PP → Pronoun (NP). AP → Adjective (PP). Example: Figure 92: A chunking example in NLP. Python Implementation: In the following example, we will extract a noun phrase from the text. Before extracting it, we need to define what kind of noun phrase we are looking for, or in other words, we have to set the grammar for a noun phrase. In this case, we define a noun phrase as an optional determiner followed by adjectives and a noun. Then we can define other rules to extract some other phrases. Next, we are going to use RegexpParser() to parse the grammar. Notice that we can also visualize the text with the .draw() function. Figure 93: Code snippet to extract noun phrases from a text file. In this example, we can see that we have successfully extracted the noun phrase from the text. Figure 94: Successful extraction of the noun phrase from the input text. Chinking: Chinking excludes a part from our chunk. There are certain situations where we need to exclude a part of the text from the whole text or chunk. In complex extractions, it is possible that chunking can output unhelpful data. In such scenarios, we can use chinking to exclude some parts from the chunked text. In the following example, we are going to take the whole string as a chunk, and then we are going to exclude adjectives from it by using chinking. We generally use chinking when we still have a lot of unhelpful data even after chunking. Hence, by using this method, we can easily set that apart. Also, note that to write chinking grammar, we have to use inverted curly braces, i.e.: } write chinking grammar here { Python Implementation: Figure 95: Chinking implementation with Python. From the example above, we can see that the adjectives are separated from the other text. 
Figure 96: In this example, adjectives are excluded by using chinking. Named Entity Recognition (NER): Named entity recognition can automatically scan entire articles and pull out fundamental entities, such as the people, organizations, places, dates, times, money, and GPEs discussed in them. Use-Cases: Content classification for news channels. Summarizing resumes. Optimizing search engine algorithms. Recommendation systems. Customer support. Commonly used types of named entity: Figure 97: An example of commonly used types of named entity recognition (NER). Python Implementation: There are two options: 1. binary = True When the binary value is True, it will only show whether a particular entity is a named entity or not; it will not show any further details. Figure 98: Python implementation when the binary value is True. Our graph does not show what type of named entity it is. It only shows whether a particular word is a named entity or not. Figure 99: Graph example of when the binary value is True. 2. binary = False When the binary value equals False, it shows the type of each named entity in detail. Figure 100: Python implementation when the binary value is False. Our graph now shows what type of named entity it is. Figure 101: Graph showing the type of named entities when the binary value equals False. WordNet: WordNet is a lexical database for the English language, and it is part of the NLTK corpus. We can use WordNet to find the meanings of words, synonyms, antonyms, and more. a. We can check how many different definitions of a word are available in WordNet. Figure 102: Checking word definitions with WordNet using the NLTK framework. b. We can also check the meaning of those different definitions. Figure 103: Gathering the meaning of the different definitions by using WordNet. c. All details for a word. Figure 104: Finding all the details for a specific word. d. All details for all meanings of a word. 
Figure 105: Finding all details for all the meanings of a specific word. e. Hypernyms: Hypernyms give us a more abstract term for a word. Figure 106: Using WordNet to find a hypernym. f. Hyponyms: Hyponyms give us a more specific term for a word. Figure 107: Using WordNet to find a hyponym. g. Get a name only. Figure 108: Finding only a name with WordNet. h. Synonyms. Figure 109: Finding synonyms with WordNet. i. Antonyms. Figure 110: Finding antonyms with WordNet. j. Synonyms and antonyms. Figure 111: Finding synonyms and antonyms code snippet with WordNet. k. Finding the similarity between words. Figure 112: Finding the similarity ratio between words using WordNet. Figure 113: Finding the similarity ratio between words using WordNet. Bag of Words: Figure 114: A representation of a bag of words. What is the Bag-of-Words method? It is a method of extracting essential features from raw text so that we can use them for machine learning models. We call it a “Bag” of words because we discard the order of occurrence of words. A bag-of-words model converts the raw text into words, and it also counts the frequency of the words in the text. In summary, a bag of words is a collection of words that represent a sentence along with the word count, where the order of occurrence is not relevant. Figure 115: Structure of a bag of words. Raw Text: This is the original text on which we want to perform analysis. Clean Text: Since our raw text contains some unnecessary data like punctuation marks and stopwords, we need to clean up our text. Clean text is the text after removing such words. Tokenize: Tokenization represents the sentence as a group of tokens or words. Building Vocab: It contains the total set of words used in the text after removing unnecessary data. Generate Vocab: It contains the words along with their frequencies in the sentences. For instance: Sentences: Jim and Pam traveled by bus. The train was late. The flight was full. Traveling by flight is expensive. a. 
Creating a basic structure: Figure 116: Example of a basic structure for a bag of words. b. Words with frequencies: Figure 117: Example of a basic structure for words with frequencies. c. Combining all the words: Figure 118: Combination of all the input words. d. Final model: Figure 119: The final model of our bag of words. Python Implementation: Figure 120: Python implementation code snippet of our bag of words. Figure 121: Output of our bag of words. Figure 122: Output of our bag of words. Applications: Natural language processing. Information retrieval from documents. Classification of documents. Limitations: Semantic meaning: It does not consider the semantic meaning of a word, and it ignores the context in which the word is used. Vector size: For large documents, the vector size increases, which may result in higher computational time. Preprocessing: In preprocessing, we need to perform data cleansing before using it. TF-IDF: TF-IDF stands for Term Frequency — Inverse Document Frequency, which is a scoring measure generally used in information retrieval (IR) and summarization. The TF-IDF score shows how important or relevant a term is in a given document. The intuition behind TF and IDF: If a particular word appears multiple times in a document, then it might have higher importance than the other words that appear fewer times (TF). At the same time, if a particular word appears many times in a document but is also present many times in some other documents, then maybe that word is simply frequent, so we cannot assign much importance to it (IDF). For instance, suppose we have a database of thousands of dog descriptions, and the user wants to search for “a cute dog” in our database. The job of our search engine would be to display the closest response to the user query. How would a search engine do that? 
The search engine will possibly use TF-IDF to calculate the score for all of our descriptions, and the result with the higher score will be displayed as a response to the user. Now, this is the case when there is no exact match for the user’s query. If there is an exact match for the user query, then that result will be displayed first. Let’s suppose there are four descriptions available in our database. The furry dog. A cute doggo. A big dog. The lovely doggo. Notice that the first description contains 2 out of 3 words from our user query, and the second description contains 1 word from the query. The third description also contains 1 word, and the fourth description contains no words from the user query. We can sense that the closest answer to our query will be description number two, as it contains the essential word “cute” from the user’s query; this is how TF-IDF calculates the value. Notice that the term frequency values are the same for all of the sentences, since none of the words in any sentence repeats within the same sentence. So, in this case, the value of TF will not be instrumental. Next, we are going to use IDF values to get the closest answer to the query. Notice that the words dog and doggo can appear in many documents. Therefore, the IDF value is going to be very low. Eventually, the TF-IDF value will also be lower. However, if we check the word “cute” in the dog descriptions, then it will come up relatively fewer times, so it increases the TF-IDF value. So the word “cute” has more discriminative power than “dog” or “doggo.” Then, our search engine will find the descriptions that have the word “cute” in them, and in the end, that is what the user was looking for. Simply put, the higher the TF*IDF score, the rarer, more unique, or more valuable the term, and vice versa. Now we are going to take a straightforward example and understand TF-IDF in more detail. Example: Sentence 1: This is the first document. Sentence 2: This document is the second document. 
TF: Term Frequency Figure 123: Calculation for the term frequency in TF-IDF. a. Represent the words of the sentences in a table. Figure 124: Table representation of the sentences. b. Display the frequency of the words. Figure 125: Table showing the frequency of words. c. Calculate TF using the formula. Figure 126: Calculating TF. Figure 127: Resulting TF. IDF: Inverse Document Frequency Figure 128: Calculating the IDF. d. Calculate IDF values from the formula. Figure 129: Calculating IDF values from the formula. e. Calculate TF-IDF. TF-IDF is the multiplication TF*IDF. Figure 130: The resulting multiplication of TF-IDF. In this case, notice that the important words that discriminate between the two sentences are “first” in sentence 1 and “second” in sentence 2; as we can see, those words have a relatively higher value than the other words. However, there are many variations for smoothing out the values for large documents. The most common variation is to use a log value in TF-IDF. Let’s calculate the TF-IDF value again by using the new IDF value. Figure 131: Using a log value for TF-IDF by using the new IDF value. f. Calculate the IDF value using a log. Figure 132: Calculating the IDF value using a log. g. Calculate TF-IDF. Figure 133: Calculating TF-IDF using a log. As seen above, “first” and “second” are important words that help us distinguish between the two sentences. Now that we have seen the basics of TF-IDF, we are going to use the sklearn library to implement it in Python. The actual output from our program is calculated with a slightly different formula. First, we will see an overview of the calculations and formulas, and then we will implement them in Python. Actual Calculations: a. Term Frequency (TF): Figure 134: Actual calculation of TF. b. Inverse Document Frequency (IDF): Figure 135: Formula for the IDF. Figure 136: Applying a log to the IDF values. c. Calculating final TF-IDF values: Figure 137: Calculating the final TF-IDF values. 
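The calculation above can be sketched in a few lines of Python. This is a minimal illustration rather than the exact code behind the figures: it assumes naive lowercase whitespace tokenization, TF as word count divided by sentence length, and the base-10 log variation of IDF discussed above.

```python
import math

# The two sentences from the example, tokenized naively (assumption:
# lowercase, whitespace split, punctuation removed).
docs = [
    "this is the first document".split(),
    "this document is the second document".split(),
]
vocab = sorted({word for doc in docs for word in doc})

def tf(word, doc):
    # Term frequency: occurrences of the word divided by sentence length.
    return doc.count(word) / len(doc)

def idf(word, corpus):
    # Log variation of IDF: log10(total documents / documents containing word).
    df = sum(1 for doc in corpus if word in doc)
    return math.log10(len(corpus) / df)

# TF-IDF is simply the product of the two quantities, per word per sentence.
tfidf = [{word: tf(word, doc) * idf(word, docs) for word in vocab} for doc in docs]

for i, scores in enumerate(tfidf, start=1):
    print(f"Sentence {i}:", {w: round(s, 3) for w, s in scores.items()})
```

Running this shows that “first” and “second” are the only terms with non-zero scores in their respective sentences: every other word appears in both documents, so its IDF, and therefore its TF-IDF, is zero, which is exactly the discriminative behavior described above.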
Figure 138: Final TF-IDF values. Python Implementation: Figure 139: Python implementation of TF-IDF code snippet. Figure 140: Final output. Conclusion: These are some of the basics of the exciting field of natural language processing (NLP). We hope you enjoyed reading this article and learned something new. Any suggestions or feedback are crucial for us to continue to improve, so please let us know in the comments if you have any.
https://medium.com/towards-artificial-intelligence/natural-language-processing-nlp-with-python-tutorial-for-beginners-1f54e610a1a0
['Towards Ai Team']
2020-12-11 23:17:16.749000+00:00
['Artificial Intelligence', 'Innovation', 'Technology', 'Education', 'Science']
What I Learned About Customer Service From the Dollar Shave Club
What I Learned About Customer Service From the Dollar Shave Club These seem like such simple lessons, but the simplest lessons are usually the most powerful. Photo by Austin Distel on Unsplash A couple of months ago I joined the Dollar Shave Club, which offers inexpensive shaving products for men. I liked DSC’s funny and irreverent marketing and assumed I would enjoy their products. I was smack dab in the middle of their target market (basically, guys who hate shaving and don’t like to spend money). When I placed my first order, I chose the middle-tier 4x blades and purchased a tube of shave butter as well. The package arrived a few days later, and I was looking forward to a great shave the next morning. To be honest, I was a little disappointed. I didn’t care for the shave butter at all. My friend who first recommended DSC raved about the shave butter, so to each his own. I also wasn’t that impressed with the blades, which didn’t seem any better than what I was already using (Gillette Fusion blades, if you’re interested). But I stuck with it, wanting to give myself a chance to really like the products. A few weeks after I joined, I received an email from DSC with a link to a survey. I sent in my thoughts on their products, still deciding whether I would continue my subscription. Then, a week or two after completing the survey, I received this email from DSC: Hey Joe, Thanks for taking our survey! You mentioned in your comments that the 4x blades do not meet your shaving needs, so sorry to hear you aren’t loving your blades. Everyone’s shaving needs are different and I’d love to help you find a razor that is a better fit for you. I’d be happy to send you out a free trial of our Executive razor if you’d like to give it a try. It is a 6 blade razor and has a trimmer blade on the back for precision areas. Please let me know if you are interested, and thanks again for the feedback! 
Shave On, Carlos I was surprised and happy to receive this personal message from someone at DSC. I mean, how many times do you get a personal response back from a company whose products you don’t initially love? But DSC has decided to do things differently, which is fantastic. I replied to Carlos’ message, thanking him for contacting me and taking him up on the offer for a sample of the upgraded blades. Based on this experience, I’m taking away three lessons about customer service that apply to anyone who has an audience, customers, a platform, or fans of any type. (Because let’s face it: anyone who reads, enjoys, hears, or otherwise consumes your creative work is a customer on some level.) 1. Communicate The simple fact that Carlos from DSC took a few minutes to send me a personal message spoke volumes. DSC is a growing company with lots of customers, so why bother to communicate with one customer who wasn’t totally satisfied? The reason is that they wanted to make sure I was satisfied with their product and to see what they could do to improve my experience. There are many ways you can communicate with your audience. One simple way is by replying to your messages, whether via email, Twitter, Facebook, or other means. Most people don’t expect replies to their messages, so it’s an easy customer-service win simply to be responsive. Action step: Think about the ways your audience likes to interact with you and determine to show that you care by responding to messages. 2. Be thankful Did you notice that Carlos thanked me twice in his message? He expressed gratitude for my taking the survey and for offering feedback. It might seem strange to thank someone for negative feedback, but that’s a sign of a company (or person) that’s interested in growing and being the best. Leaders and influencers aren’t just interested in good feedback. They also want to know what’s not working so they can fix it. In 2019, I served on the planning team for a regional conference. 
Whenever the secretary of the organization would send out meeting minutes via email, he would always add, “Remember, I’m not emotionally tied to these notes. Please let me know about any corrections or updates.” I loved that sentiment because it’s a reminder that if you want to be the best, you must welcome both good and bad feedback. And you not only need to welcome feedback; you also need to be thankful for it. When you receive negative feedback about your creative work, how do you respond? Does it make you irritated and angry? Or do you sincerely thank the person for taking the time to point out ways in which you could improve? Action step: The next time you receive negative feedback, do your best to have a positive attitude and be thankful for the opportunity to improve. 3. Be generous What are the possible ways DSC could have responded to my survey feedback? They could have simply not responded, and I wouldn’t have known any differently, since you don’t normally get responses from surveys. They could have also sent an email suggesting that I upgrade to the Executive blades. This would have been a nice touch, and I would probably have considered doing just that. But instead, Carlos offered to send a free sample of the Executive blades. Was this a move designed to keep me happy as a customer, and possibly get me to upgrade to the more expensive blades? Of course. But did I mind the upsell? Not at all, because I liked being offered a free sample of something I might purchase and would probably enjoy. I’ve heard it said that you should try to “surprise and delight” your audience. One of the best ways to do this is by being generous. What’s generosity all about? It’s about the way you think about your creative work and how you can share it with others. It’s about giving without expecting anything in return. It’s about an inward attitude of the mind and heart that can express itself in a thousand outward ways. Generosity is not about money; it’s about a way of life. 
Action step: Generosity begins with a “core orientation” of helping others. Is your first impulse to help people, or to get something from people? If you want to build an audience, you must first offer something of value before you can expect anything in return. These seem like such simple lessons: Communicate. Be thankful. Be generous. But the simplest lessons are usually the most powerful.
https://medium.com/swlh/what-i-learned-about-customer-service-from-the-dollar-shave-club-1167aa5f8f83
['Josef Cruz']
2020-11-12 03:08:39.227000+00:00
['Communication', 'Life Lessons', 'Entrepreneurship', 'Startup', 'Small Business']
The ideal design to dev workflow
Joe Alterio A scorpion asks a frog to carry it across a river. The frog hesitates, afraid of being stung by the scorpion, but the scorpion argues that if it did that, they would both drown. The frog considers this argument sensible and agrees to transport the scorpion. The scorpion climbs onto the frog’s back and the frog begins to swim, but midway across the river, the scorpion stings the frog, dooming them both. The dying frog asks the scorpion why it stung the frog, to which the scorpion replies, “That code doesn’t reflect my designs.” - Products that require both design and engineering, from cathedrals to automobiles, reach an inflection point in their lifecycle that can cause confusion, delay production, and add multiple headaches further down the road. Specifically stated, that point is when the thing that has been designed must now be made. The transition from concept to realization is tough. It’s no wonder the hand-off meeting can be a dreaded occasion for both developers and designers. Airy concepts must come crashing down to reality. Bad news is often broken in these meetings, sometimes resulting in disagreements that complicate a working relationship and immediately start the build off on the wrong foot. How can we, as both designers and developers, mitigate these risks and foster a more inclusive workflow that ensures the final designs are not only beautiful, but functional and developmentally sound? Tip 1: Priorities “‘That’s hard to build’ versus ‘That’s not my design’ is immaterial when you’re focused on ‘Let’s ship this thing.’” Our respective work can seem like the most essential work in the world, but we shouldn’t let that cloud our judgement about who we are making these products for. For the agency, it is our clients, but in a grander sense, it is the end users. Each decision we make needs to lead with the end user in mind. 
The work we undertake has the potential to impact millions of people, and this responsibility informs the hierarchy of importance we assign to each decision. The lifecycle of creating a digital product has many decision points along the way. In each of these moments, we should be asking ourselves how it affects the user. Any other type of consideration should be fundamentally subservient to this. It is the nature of our business. Tip 2: A Cohesive Team “As a developer, understanding what the problem is we’re trying to solve, helps answer some of the questions about why I’m building the thing I’m building, which is very motivating for me.” Segmenting teams based on deliverables is not ideal. Even if development is not involved in the early day-to-day project activities, keeping all teams informed during the product development cycle will head off rude awakenings about technical impracticalities. Conversely, even for projects in which a client has assured the team the internal designs are “code-ready,” it’s good to let your internal design team validate the assumptions. Not only is there value in having a second team’s eyes on the designs, but often the design team will have a good idea of development’s capabilities and make design arguments that support development’s stance. We’re firm believers in the enmeshed design/development team hybrid model, and it’s been successful for us specifically for this reason. Tip 3: Specs, Specs, Specs “There has been jobs I have been on where I’ve had to take rasterized bitmapped images to make vector assets.” It is a great time to be making digital products. The support afforded by modern digital tools such as Zeplin and Avocode makes digital product velocity lightning fast and allows quicker and cleaner file hand-off. However, that efficiency has come at a cost. As recently as five years ago, design hand-offs included not only actual redlines, but spec sheets. They seem to have gone the way of design briefs, which is a shame. 
Cloud-based links are now the norm, but even tools that include room for detailed notes and design systems are rarely used. In the interest of keeping everything unified and speedy, we’ve sacrificed some of the less obvious yet essential practices that ensure clearer communication. If you’re a designer who is preparing to hand off files, make sure you include not only basic redlines, but also the following: Content Matrix This may not be the designer’s responsibility, but making sure content is in usable forms (copy-and-pasteable text, usable file formats sized correctly) and can be tracked and validated by multiple parties is an essential part of a build. You’re not ready for development yet if this step is not complete. Technical Expectations Spec sheets should, at a minimum, contain a broad overview of the technical expectations for the product’s features. Ostensibly, the development team will have a good idea as to expected functions, but not necessarily the location of function triggers, cross-functional actions, and features that are story- or case-dependent. Laying these out in clear language will be a boon not only to the developers, but to the entire product team, and will serve as a good reference document going forward. Full Responsive Versioning, Including Device Expectations I’ve been shocked to learn that some designers do not give their developers fully responsive comp sizes. At a bare minimum, designers should provide complete desktop, tablet, and mobile comps, as well as error states, form states, and empty states. Bonus points for flex fulcrums, such as where the padding increases, and by how much. One Formidable developer’s request was even more on point: “Most useful is to tell the developer what you are basing these sizes on.” Knowing the make and model of each intended device the screen size is being pulled from can inform the developer about expected optimizing and scaling. 
User Stories/User Flow Map Including a (hopefully) previously generated user-flow map will align the development team with the expected architecture of the product. Even more importantly, if the naming conventions are adhered to, this helps development understand the screens design has provided. Context for error states, empty states, and the like will save many headaches. The aggregate time spent figuring out where a design fits into the grand scheme of things can be hefty. Palette and Color Story A complete color palette, for both standard colors and situational colors (like error states), must be provided; don’t expect the developer to infer from existing elements what colors mean and where they should be applied. Part of a designer’s job is to give the developer tools to solve small problems on his/her own. Explaining which colors are used, and why and when, is a perfect toolbox for this. Complete Typography The typographic support provided should not only outline (and provide files for) typographic choices; it should also outline type hierarchy and use cases for any specialty formatting. Examples of Motion and Animations Most modern digital products have some sense of motion or animation endemic to them. Motion storyboards, GIFs of the animation, or, ideally, prototypes of the movement are essential for the developers’ success. Our team prefers to use After Effects with Lottie from Sketch files for our animations, but there are multiple excellent tools available these days. The use of spec sheets should not be a last resort of validation and argument resolution, but a source of agreed-upon truth for the entire product team. Tip 4: Get Yourself a Remarkable Hybrid “Part of your role as a developer is to make adjustments along the way.” Ok, you don’t have to call it a Remarkable Hybrid or a “Unicorn,” but this person is a unique type of digital product expert who both knows code AND truly understands the fundamentals of design. 
Even in the most planned-for products there will be gaps — it’s the nature of making things from scratch. By understanding the designs, and being able to make educated guesses about designer intent, as well as understanding the technology enough to implement changes on the fly, hybrids can bring the two sides of the rope together and tie a neat knot. These individuals are a valuable component in any digital product team. If you don’t have one (or many), hire some now. Tip 5: Use the Right Tools “The ideal tool would be something that creates real usable assets, but we’re not there yet.” The most prosaic advice offered is also the most obvious, but it must still be said: Use the right tools for the job. The host of lightweight, development-ready design tools is legion, and while some are better than others at generating actual usable CSS objects, they are all angled toward digital development. Make sure your assets are as lightweight as possible, and scalable. There’s no reason to be scaling bitmapped images at 1x, 2x, and 3x anymore, except for the occasional graphic, and if you have to, current design trends, from gradient overlays to image patterns, all work towards the goal of not making a digital product overly weighty. In Conclusion There is no magical solution to a flawless and error-free design-to-development hand-off. Errors and miscommunications will still happen, resulting in pushed deadlines and compromises. However, using the above protocols will minimize your risk, and you can get back to doing what you are good at. All insights and quotes provided by Paula Lavalle, Carlos Kelly, David Earl Duncan, and Ryan Ray, members of the Formidable Design and Development teams.
https://uxdesign.cc/the-ideal-design-to-dev-workflow-f02b71797c74
['Joe Alterio']
2019-06-14 22:53:30.985000+00:00
['Design', 'Design Process', 'Development', 'Designer', 'Interaction Design']
The Sexual Overtones in All Relationships
Sex is involved in every relationship. Bi-sexual, or not. Everybody’s bi-sexual. Multi-sexual. (Photo by Raw Pixel) There is a hidden sexuality in every relationship. Bi-sexual, or not. This is because there is a male hidden in every female, and a female hidden in every male. This explains male relationships. Female relationships. Male and female relationships. It, also, explains sexual relationships. The hidden sexuality in all relationships. The sexual overtones in all relationships. Zero and one is one and two. Conservation of the circle is the core dynamic in nature.
https://medium.com/the-circular-theory/the-sexual-overtones-in-all-relationships-65f76531e92a
['Ilexa Yardley']
2017-07-27 12:18:30.192000+00:00
['Relationships', 'Sexuality', 'Artificial Intelligence', 'Entrepreneurship', 'Data Science']
Branding for startups: How to build trust before product
Branding case studies often read as somewhere between obvious and grandiose. You’re either looking at established companies with 20/20 hindsight, or reading a digital agency’s portfolio sales pitch. Just like business strategy, real-world branding is in fact neither superfluous nor inaccessible to mere mortals. Below, I’ll explain why every founder should consider their brand from day one, and share some of the thinking that went into the identity of Pona, my home-cooked food marketplace startup. What is a brand and why does it matter? Branding is all the ways you convey to current and prospective customers what they can expect from your company. It is a promise made through consistent visuals, sounds, language, and actions. The objective of any brand is to create a trusted relationship with the customer. This means a successful brand must be: Reliable — Even the most rebellious companies send their customers a consistent message: You can trust us to remain rebels. Differentiated — As in personal life, the best way to get into and stay in a relationship is to make clear how you’re different from all the others. Some suggest startups ignore branding until they reach the legendary product-market fit. This advice strikes me as bizarre. After all, building trusted relationships with early adopters is crucial for any new business, and the brand is often all you’ve got while you’re still patching up your Minimum Viable Product. Moreover, reliability is often the first concern among users considering a little-known product, and differentiation is the only reason they might actually bother to give it a try. Unless you’re in D2C or Fast-Moving Consumer Goods, you probably shouldn’t spend money on a marketing agency to develop a fancy identity for your project. But thinking about what you stand for, and how you communicate this to your customers, should form part of your every decision. 
How startup branding is different Remember how a brand is made of visuals, sounds, language, and actions? Well, it turns out that the importance each of these plays in the perception of the brand is radically different between an established business and a new startup. You can read all the case studies in the world about the meaning of the McDonald’s logo, the psychology of its colour choices, and how its taglines have changed over the years. The truth is — 90% of their brand value today comes from the consistent, quality experiences people have had with Maccy D’s over the years. Your startup’s brand, on the other hand, consists almost entirely of the story you tell — not your past actions. You simply haven’t sold enough burgers for the world to notice, or kept making them long enough for your existing customers to trust you won’t pivot into a fried chicken shop tomorrow. 1. Words and visuals are all you’ve got When Apple launches a new product, the reason people line up in front of their stores is not the witty headlines set in Myriad Pro, or the geometry of the logo. They line up because they trust Apple will yet again deliver on their expectations, as it has done every year since Jobs revived the company in ’97. As a small startup, you don’t yet have the luxury of such strong trust and expectations. Instead, you have to use storytelling to persuade your audience of who you are, or will be one day. Buy Me A Coffee built trust through a design and microcopy that spoke to its target audience 2. You can piggyback off a bigger company’s brand If red paint had been sold out the day the McDonald brothers moved their first restaurant to San Bernardino, copycat fast food chains around the world would all be coloured blue, or green, or turquoise today. Successful founders treat their ventures as machines, and aim to only reinvent one or two cogs at a time. 
Most early-stage startups will do best with a traditional org structure, and most MVPs are better off if they rely on Google’s Material Design. Similarly, for many startups, it makes sense to base their branding on established industry players, or like-minded disruptors in other fields. Humans are conditioned to look for patterns, and will gladly give a new entrant the benefit of the doubt if it reminds them of another beloved brand. Tesla revolutionized the car industry without veering off too far from the existing design language 3. You can afford to take much bigger risks With great value comes great responsibility. Established companies are stewards of brands built over generations. A few missteps could wipe out billions in consumer trust. As a startup, you don’t have the benefit of such expectations, but this also allows you to make much bigger bets. Take Lyft and its pink moustache, for example. It caused a lot of controversy, and possibly even physical damage to cars, but it also attracted attention like nothing else could in the competitive ride-hailing space. Lyft’s iconic pink moustache certainly got people talking Case study: Branding home-cooked food At Pona, we coach home cooks into entrepreneurs, then market their one-of-a-kind cuisine. Buyers have instinctive concerns about the safety of home-cooked food, so it was crucial for us to establish trust early on. As a new entrant in a crowded food-delivery market, we also had to differentiate ourselves clearly from other platforms. Here’s how we went about it: We focused on one overarching story A good brand should remind people of the vision behind your project through every word, sound, image, and action. Of course, this means you first must gain clarity about that vision. At Pona, we explored several angles, such as authentic ethnic cuisine, and the promise of re-experiencing your favourite childhood meals. 
Based on the market feedback, we finally focused on the connection between buyers and sellers, the care with which home cooks prepare their one-of-a-kind cuisine, and our quest to always find ways to make both sides feel special. This message permeates not just our visuals, but also our marketing copy… …and of course, the product itself. Whereas other delivery platforms play down the provenance of the food, we focus on the person and story behind each meal as a core part of the experience. You can learn what goes into the food through interviews on My Signature Dish, video episodes of Home Cooks of the World, and Instagram stories direct from our home kitchens. Every package is signed personally by our home cooks, and when you leave a review, you know it’ll be read by a real human, and often lead to immediate tweaks based on your feedback. Left: Pona home cook signing personal stickers. Right: Stories of the process behind her signature dish. We considered our strategy, and every stakeholder Whether you like it or not, your branding will colour your every message and interaction. You also shouldn’t forget that the brand does not exist in a vacuum, but rather forms an integral part of your overall business strategy. At Pona, our brand directly impacts relationships with: Buyers (demand side of the marketplace) Home cooks (supply side) Catering clients (corporates, co-working spaces) Local government (legislators, food safety officials) Each group forms an integral part of our business model, so it was crucial to develop a brand that would speak to all four. For example, there’s certainly a category of foodies who would appreciate a more rebellious brand. But such positioning may not fit the typical profile of a micro-food entrepreneur, nor set us off on the right foot when we push for new food safety legislation. 
A brazen Pona brand iteration which worked with buyers but failed us with other stakeholders. We iterated until we found the right balance Early on, we did not have the resources to take professional photos of the meals prepared by our home cooks, and commissioned cute little food icons for use in the app instead. In an industry where government relations are crucial, we quickly realized we’d have to tone the cuteness down a little bit, which also helped reassure our corporate catering clients. As we ran more experiments and came to better understand our buyers, home cooks, and other stakeholders, we found a similar balance in all other aspects of our brand. For example, we started to take more pride in the social impact our project is having on our underprivileged sellers, and admitted, against our original expectations, that extra revenue is the central reason home cooks apply to sell on the platform. Take control of your brand I hope I managed to convince you to take branding seriously from the earliest days of your startup, and that the story of Pona gave you some ideas on how to think about your own brand. Whether you want it or not, your customers, suppliers, potential hires, and even investors will perceive your company in a certain way. You now have a choice — tell a story that fits and reinforces your business, or let someone else write one for you, and hope for the best.
https://medium.com/swlh/branding-for-startups-and-why-it-should-form-part-of-your-every-decision-8ea7e4807bf1
['Philip Seifi']
2020-04-16 22:24:07.011000+00:00
['Branding', 'Business Strategy', 'Startup', 'Marketing', 'Branding Strategy']
Why Scripted Is The Ultimate Platform for Skilled Freelancers
Why Scripted Is The Ultimate Platform for Skilled Freelancers Working for content mills is usually not a good idea. Scripted is the exception confirming that rule. Photo by Burst from Pexels At the very beginning of my freelancing career, I did some writing for low-paying content mills. It was a stepping stone, and in many ways, it was a great training ground. It taught me to write fast, and it taught me to pitch daily. And I actually liked the concept, where you could find all your work in one place. Alas, the pay was painfully low. More than ten years have passed, and today I’m a seasoned writer with expertise in several niches. I usually avoid working with third parties, and prefer pitching my services directly to editors so that no one takes a cut of my pay. Then I came across Scripted. Scripted is an online matchmaking service helping businesses find qualified freelance writers to strengthen their content marketing. This platform is entirely different from the content marketplaces I recall from my early days. Scripted is a competitive platform with a strong focus on quality, and with a rigorous vetting process where only 2% of applicants are accepted as writers. Applying to become a writer means taking a pretty extensive test, but trust me, it’s worth the effort. And yes, it’s also worth the one-time fee you’re charged to apply, because once accepted, this platform offers tremendous opportunities for skilled writers. How does it work? The process is simple and straightforward. You can browse what client projects are available at any given moment, the content briefs are standardized and of high quality, and it’s easy to submit your pitches. The UI is intuitive, and the dashboard gives an excellent overview of your assignments. As soon as the client has accepted your proposal, you can start writing. Then you submit your draft, which is either accepted, rejected, or returned for edits. 
You only get paid after the article is approved, and the client can reject it if it’s not good enough. But the clients I’ve worked with have been a pleasure to cooperate with, providing brilliant and constructive feedback when necessary. So much so that it has helped me up my writing and SEO skills even more. Is This For Me? Scripted is probably not the place to go if you’re just starting out as a writer. Many of the projects are quite specialized and require some experience. But if you’re a versatile writer who knows your SEO, your grammar, and your niche, there will be plenty of business for you here. I’ve only been active on Scripted for a few weeks, but I already love it. It’s an easy and time-efficient way to land new work, and so far every transaction has been super smooth. The clients are friendly, and the projects are fun. I quickly got several well-paying assignments, and many of them have turned into recurring revenue, as clients have the option to “invite” their favorite writers to new gigs. Are you ready to give it a go? Here are some pointers to help you get started! #1 Keep an eye on the dashboard regularly New projects are posted all the time, and the sooner you pitch, the better your chances of closing a deal. I’d recommend that you check the available jobs at least a few times per week. This way, you can build a pipeline of assignments. #2 Don’t underprice your work One of the beautiful things about Scripted is that they don’t allow price-dumping. There’s a minimum rate per word, and it’s not possible to pitch below it. But I would even advise against going in at the minimum level. Most clients I’ve dealt with on Scripted are corporate people. They’re not looking to save that last penny; they will pick the freelancer who provides the most value. #3 Send several pitches As always in sales, it’s a numbers game. There are plenty of highly qualified writers on Scripted, and no matter how good you are, you’re up against some serious competition. 
But don’t get discouraged if a client picks another writer — focus on pitching regularly, and you will eventually be rewarded.

Conclusion

My experience with Scripted has so far been nothing but positive. This is the only content platform I’m using, and I will definitely stick around. The clients are top-notch, the rates are good, and the support team is incredibly quick to answer any questions. Scripted is a trustworthy and very efficient platform for professional freelancers to complement the work they already have, or perhaps even to build a whole business upon. In short: 11/10, would highly recommend!
https://medium.com/freelancers-pharmacy/why-scripted-is-the-ultimate-platform-for-skilled-freelancers-ef33cf29b486
['Nina Quist']
2020-06-02 06:22:07.828000+00:00
['Freelancing', 'Entrepreneurship', 'Writing', 'Freelance Writing', 'Content Marketing']
How to Run Jupyter Notebook on Android?
Recently, I was searching for a way to run Jupyter Notebook on my Android phone, but there was no straightforward way to install it, so I thought I’d write about it. We will divide this into two parts: the first is the application installation, and the second is the Android setup.

1. Installation

There are two things you need to install from the Google Play Store to get started:

1. Install Pydroid 3 — IDE for Python 3 from the Google Play Store.
2. Install the Pydroid repository plugin from the Google Play Store.

After installing these two applications, we are ready to install Jupyter Notebook.

2. Android Setup

Open Pydroid 3 — IDE for Python 3, tap the pip icon, search for jupyter-notebook, and install it. Then tap the terminal icon and type jupyter-notebook in the terminal. A new page will launch on localhost; create a new Python file, and your Jupyter Notebook is ready. Thanks for reading.
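As a quick sanity check after the pip step, a small Python snippet (runnable inside Pydroid's interpreter, or on any machine) can tell you whether the notebook package and its launcher are visible. This snippet is my own addition, not part of the original walkthrough:

```python
import importlib.util
import shutil

# True if the "notebook" package installed via pip is importable
installed = importlib.util.find_spec("notebook") is not None

# Path to the jupyter-notebook launcher, or None if it is not on PATH yet
launcher = shutil.which("jupyter-notebook")

print("notebook package installed:", installed)
print("jupyter-notebook launcher:", launcher)
```

If the first line prints False, rerun the pip install step; if the launcher is None, the command may need a restart of the Pydroid terminal to appear on PATH.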
https://medium.com/analytics-vidhya/how-to-run-jupyter-notebook-on-andriod-dfc5bccca7ca
['Namratesh Shrivastav']
2020-12-28 16:16:31.579000+00:00
['Python', 'Andriod', 'Jupyter Notebook', 'Data Science', 'Machine Learning']
Disconnection Syndrome: What Is It and How to Help It.
As a society, we sometimes find ourselves lonely, anxious, and depressed. We are becoming separated from sustainable joy. Unfortunately, many aspects of the modern world can keep us locked into a state of disconnection syndrome.

What are the drivers of Disconnection Syndrome?

MINDLESS ACTIVITY — Constant social media scrolling or a generally disconnected mental state. To improve this, decide to be intentional with your time. Set your plan for the day and follow the schedule you’ve defined. Set timers to limit social media scrolling and be more intentional.

LONELINESS — In the midst of a pandemic, loneliness is at an all-time high. To improve this, prioritize calling family and friends or scheduling a Zoom call. When socially distanced and safe in-person opportunities arise, take them when and if you feel comfortable.

CHRONIC INFLAMMATION — Constant consumption of negative and fear-inducing news can lead to chronic stress. High-sugar and refined-carbohydrate foods can create chronic inflammation in our bodies and brains. To improve this, incorporate meditation and breathwork into your day. Also, ensure you are fueling your body with clean and nutrient-dense foods to boost your mood and performance. I always suggest limiting your sugar intake.

INSTANT GRATIFICATION — In a world of likes, we can quickly get caught up in the immediate and lose sight of the long term. It feels great and boosts your dopamine when your latest post is well received, but it is essential to remember that not all greatness is immediately recognized. To improve this, set a plan for your next week, month, and year. Remind yourself that your daily efforts are bringing you closer to the ultimate end goal.

NARCISSISM — Science backs the finding that when we think more of others and bring joy to others, it leads to improved happiness within ourselves. To improve this, make a conscious effort to do something nice for someone every day.
Think about how you treat others and how you can positively impact the lives around you.

POOR RELATIONSHIPS — As they say, you become who you surround yourself with. If you surround yourself with negative people, you will become negative. On the contrary, if you surround yourself with positive or ambitious people, you will become more positive and ambitious. To improve this, make it a point to evaluate your relationships thoroughly and often. Ask yourself if those around you are making you a better or worse person.

CHRONIC STRESS — Media and the stressors of the unknown can induce chronic stress. Chronic stress leads to inflammation and a state of fight or flight in the body. It’s essential to lower stress levels for a healthier and happier life. To improve this, again, meditation and breathwork are essential, but reflecting on what you are grateful for can also help lower stress. Take time to care for your body in every way available to you, from alone time to exercise — anything that makes you feel good.

IMPULSIVITY — From impulsive purchases to decision-making, impulsivity can bring fleeting happiness but does not always equal sustainable joy. To improve this, ensure you are intentional about your decisions and that your choices are aligned with long-term happiness rather than short-term gratification.

To achieve long-term happiness, it’s essential that we look inward, rewire our brains with gratitude, and enjoy the journey, not just the destination. Make small and intentional changes daily for a joy-filled life. Thank you, Dr. Perlmutter, for coining this use of Disconnection Syndrome!
https://medium.com/joincurio/disconnection-syndrome-what-is-it-and-how-to-help-it-e868140e5c31
['Kayla Barnes']
2020-12-07 15:01:13.869000+00:00
['Relationships', 'Mental Health', 'Wellness', 'Mental Well Being', 'Brain Health']
No Fear. No Limits. No Regrets in 2020
No Fear. No Limits. No Regrets in 2020

If you want to view paradise, simply look around and view it.

Wall mural at Adelaide Central Market, Australia ©Lucy King

At Mindset Matters, we believe you can change your life depending on the type of stories you tell yourself. It’s been a whirlwind building up this publication. Thank you to everyone for your support over the past six months. To close out 2019, here’s a collection of our top-performing, curated stories. Each story explores a dimension of mind, spirit, and creativity and has over 100 likes or 500 reads. All of the listed stories have been distributed in topics by Medium. Several of them are even double and triple curated! Looking forward to writing and sharing many more stories to uplift you in 2020. You can find inspiration everywhere — even in the most unlikely places, such as an airport!

A positive sign spied recently at Adelaide Airport ©Lucy King

No prizes for guessing where the editor of Mindset Matters is spending her New Year’s holiday. For those of you embarking on New Year’s resolutions, one last reminder: mindset matters more than you may think! Enjoy!

Mind — A deep dive into the psychology of our everyday behavior.
Spirit — Stories to inspire and motivate you to reach your full potential.
Creativity — On art, design, writing and innovation.

Thank you for reading!
https://medium.com/mindset-matters/have-no-fear-no-limits-no-regrets-in-2020-affd73e49fd
['Lucy King']
2020-01-11 13:44:14.424000+00:00
['Inspiration', 'Writing', 'Self Improvement', 'Lifestyle', 'Storytelling']
Can a Planner Obsession Make You Less Productive?
Can a Planner Obsession Make You Less Productive?

I never thought I’d be a 30-something woman obsessing over stickers

As a young girl, I kept a diary to write my deepest, darkest secrets. Thirty years later, I’ve moved from diaries to planners. If you think there isn’t a difference between diaries and planners, then you haven’t stepped into the all-consuming Planner World. Welcome to Washi Village, Sticker Mountain, and Pen Paradise, where all your planning dreams come true. Planners are meant to help you be productive, plan your life, make it easier to schedule events, and be more efficient.
https://medium.com/live-your-life-on-purpose/can-a-planner-obsession-make-you-less-productive-34b6750568b9
['Lana Graham']
2019-11-13 01:01:01.550000+00:00
['Productivity', 'Planner', 'Writing', 'Ideas', 'Bullet Journaling']
The Intimacy Paradox During Christmas Holidays
The Intimacy Paradox

During times like these, people still crave intimacy. We are social beings, and it’s natural to feel certain things, but it’s also a pretty bad time for them, considering everything that’s happening. I ended up in one of my notorious Google rabbit holes and stumbled upon the intimacy paradox. It’s a very curious phenomenon, especially in 2020. You see, we struggle to establish autonomy as adults, especially as we have to balance the need to make sacrifices for our families and friends. For example, there are already comprehensive research studies on the state of intimate relationships during the first wave, such as how couples were forced to cohabit 24 hours a day, which can be pretty unhealthy, while single people are stuck with no prospects.

Photo by Randy Jacob on Unsplash — It’s okay to be single in 2020.

If these were normal times and we were children, we would feel safest in the loving arms of our parents. Being rocked back and forth, we would know our parents would protect us from all the scary monsters in the world. Perhaps you even went on a holy pilgrimage many years ago. If you’ve never been to the land you’re making a pilgrimage to and are unfamiliar with its norms and customs, it’s much more comforting to rely on the support of others to help you get from Point A to Point B. However, these social supports are now severely limited. During times like these, we remain uncertain, even as we head closer and closer to a more ideal future. While I have no answers or solutions, the best we can do is stay optimistic. I mean, no matter what has happened in history, society has found ways to keep moving forward. Society survived the World Wars, polio, and so much more. If society endured in the past, it can do so again in the future. As Roy T. Bennett once wrote,
https://medium.com/accompanied-by-enervation/the-intimacy-paradox-during-christmas-holidays-dfb0309734ed
['Synthia Satkuna', 'Ma Candidate']
2020-12-23 20:21:15.202000+00:00
['Relationships', 'Intimacy', 'Life Lessons', 'Mental Health', 'Psychology']
Animated Explainers and How to Create One
Let’s imagine you want to create an animated explainer. What steps are in the process?

1. Research and strategy

During this stage, it’s essential to get to know your audience better. What do they know about your offering? What stage are you at with your product or service, and what do you want to communicate? Are you building a new, innovative solution and want to gauge the public’s reaction? Or maybe you’re redesigning your current product or just adding a new feature? Several factors need to be taken into consideration. The company you choose will ask you a lot of questions to understand your goals better.

2. Pre-production

This stage is all about creating a script accompanied by a storyboard. It is a crucial step in the process because its output provides the basis for the entire work. Good writing tells the story with clear goals and maintains interest throughout the video. Bad storytelling can’t be fixed even by state-of-the-art illustrations or motion. Sometimes, if the product or service you are trying to communicate is too complex even for the design team you are working with, a good approach is to visualize your thoughts. Some of those visualizations can be used to build the narration or even pasted straight into the storyboard. The rule of thumb is to aim for 1–1:30 minutes of narrative. The storyboard’s primary function is to convey the narrative in the form of images and act as a guide for the motion picture. These are selected frames that are particularly significant in the whole story.

A storyboard created by our team

It is recommended not to rush the process, because it is all about iteration. Some scenes can even alter the script slightly. The pre-production stage is also the time to establish some visual assumptions. If you are still shaping your product or just want to refresh your brand’s image, a useful tool to communicate the idea to designers is a bipolar chart.
It helps to align expectations because, with the use of some references, everybody in the room has a similar perception of what will be built. So as the person responsible for the company’s impression, make sure you know how the brand should be perceived.

Example of a bipolar chart

To achieve the desired look and feel, you can use different techniques. Among the most notable are 3D or 2D animation, stop motion, and even the classic hand-drawn method.

3D animation by Guillaume Kurkdjian
2D animation by Seth Eckert
Stop-motion animation by Ueno

3. Production

Now it is all up to your design team. With the final script, storyboard, and visual direction, work on the video can start. You should expect some samples of work in progress from your agency. Each studio has its own internal process for creating “things,” but it is always good to know the project’s direction.

Character evolution during the process by Clim Studio

4. Post-production

At this stage, the explainer needs some editing and polishing. Usually, the voice-over and additional sounds are added at this point. People gain a better understanding of a product or service once they see and hear someone explain it. That’s why it is recommended to invest some time in finding a good voice actor. There are a lot of companies recording voices for various purposes, with a wide range of voice tones available — your team should take care of this and propose some samples, but you can do it yourself too. The tone of voice and its character should mirror the output of the previous steps to build a coherent experience. If an explainer needs to be very dynamic, make sure to include the voice actor and sound editor earlier in the process, at least by the production stage. It will help to synchronize motion with sound and can make the explainer more attractive.

And here we are. At this stage, we should have a great-looking explainer that achieves our business goals. All that's left is to share it with the world.
https://medium.com/pragma-design/animated-explainers-and-how-to-create-one-469721e82055
['Tomasz Dyrks']
2020-07-02 08:48:14.723000+00:00
['Design', 'Process', 'Tips', 'Explainer Videos', 'Marketing']
The Perfect Design Tool
One of the following statements by influential designers was made this month and the other was made five years ago. Can you guess which statement was made when?

“The web and its related disciplines have grown organically. And those of us who work here should have sophisticated, native tools to do our jobs.”

“Design tools are stuck in the stone age but we’re expected to build spaceships. Imagine the glorious things we’ll design once tools catch up.”

It’s very common for us to lament and discuss the state of our tools. I’m not here to point out that it’s wrong to do this. I certainly get caught up in it myself. Whether it’s an improvement to Sketch or InVision, or even a completely new prototyping app like Principle for Mac, I’m all over it. The main problem with design tools — I will refer to them just as tools from here on in — is not just that they don’t match our personal or team workflow, or that they lack the ability to help us create what is in our heads: it’s that they are simply… well, tools.

“But we need good tools to do our jobs.” — You

I hear you. We do need good tools. The issue is that tools can only help us do certain kinds of things, which is actually limiting in the long run. Let me explain.

A Tool Is A Reflection Of The Outcome

Our opinion of better tools depends on what our end goal is. In other words: a tool is a direct reflection of its intended outcome; it turns intention into a specific action. Currently, that reflection (in UX design) is something under glass. It could be a website, an app, or an interface for a car dashboard. Either way, it’s a thing under glass. I really like Bret Victor’s definition of a great tool: “A tool addresses human needs by amplifying human capabilities. That is, a tool converts what we can do into what we want to do.
A great tool is designed to fit both sides.” He goes on in the article to talk about how many visions of the future are just “pictures under glass.” Sliding our fingers across glass all day is very limiting in comparison to what our senses allow us to do and how we interact with most of the other things in our life. I can’t say it better than Bret (you really should read his article): “Are we really going to accept an Interface Of The Future that is less expressive than a sandwich?” The tools that I mentioned in the intro are both highly utilized (currently the most popular) in the UX design industry and are made specifically to design things under glass. If you want to make the best-designed-interface-solution-thingy-under-glass, then those are the tools you should have. If you want to do anything else, you are mostly out of luck. Even many of the new design tools from large companies like Adobe are focused on things under glass. Current devices only tap into a small facet of the range of ways that humans can interact with things. And current tools only allow us to create for that limited scope of interaction. But this is what we have asked for and continue to request.

The Tooler’s Addiction

In the article Don’t Be a Tooler (which fortunately is still available via the Wayback Machine and worth a read), the author talks about how in the early phase of his design career (pre-computer), people would ask him what he did for a living, have some appreciation for the craft, and say something like: “You get paid to draw? I can’t even draw a stick figure!” However, at the time he was writing the article (post-computer), the same response to being a designer was something like: “That’s cool. I have a computer too. I made X…” followed by talk of using some templates, their computer, etc. The point being that the tools — in this case, the computer and software — had replaced the acknowledgement of the skill and craft once associated with his ability.
I have interviewed a lot of designers, and I ask them to take me through their process or how they start a project. Many answers include things like looking through WordPress templates to find which one will work for the client, or searching Dribbble and Behance to find shots that can work for their current project. Let me state that I’ve done the above, too. Many times. It’s part of learning skills and who you are as a designer. However, there is a cycle of inbred design solutions out there that will continue to be exactly the same until more designers break their addiction to their tools.

Via Zeldman

We’re always in search of ways to be more efficient. It makes it easier to quote a project or estimate your time if you can compare it to something else, and starting from scratch doesn’t make sense if you are always doing similar work. It’s true that there are similarities between problems and many people solving similar problems, but it’s rarely the exact same problem. My colleague RJ Owen recently put it this way: “every project is an anomaly.” Every one of our projects has a unique set of concerns, people, politics, problems, and customers — that’s the reason why custom design agencies can still exist and templates can’t solve every problem. By choosing to solve the variety of customer problems with the same tools — and by proxy, the same output — we end up solving other people’s problems rather than those of our customers and their customers. The counterarguments here are that everything is a remix of something else, and that certain patterns just work and therefore we shouldn’t deviate. I agree with some of this. But mimicking another solution is different from making one and understanding the intentions behind it.
I’m not sure how many designers can say with a straight face that they understand the problem and wholeheartedly believe that the solution they came up with using their preferred tool, Dribbble, or a WordPress template (yes, they are both tools) is right for the end user. I’m sure Photoshop filters were a hit with designers when they were first released. But soon after, every design solution resulted in some kind of filter being applied. The more powerful the tool, the more we become dependent on it to solve problems it wasn’t designed to solve.

Serve People, Not Supercomputers

In the book The Best Interface Is No Interface, Golden Krishna poses a compelling argument that our default solution — the beloved digital interface — is wrong. When you have a hammer, everything looks like a nail. Need to open your car door? There’s an app for that. Need to design a better fridge? Put an interface on it. The supercomputer in our pocket doesn’t serve us; we serve it. We walk around all day with our faces stuck to these addictive interfaces. I dare you to look outside right now and tell me you don’t see someone walking around playing Pokémon GO… Although I don’t agree with every part of Krishna’s book, I do agree with his sentiment: that we as designers (and everyone around us) are stuck using our default solutions: “No matter the client, the industry, or the problem we’re trying to solve, the creative process for developing technological solutions repeats the lazy act of drawing a screen. […] We learn fascinating things, then we draw a lazy rectangle.” Tools will always follow the solutions. As long as we continue to create the same kinds of solutions, we will get the same kinds of tools. If we want to create better things, then we need to break the cycle. As Bret Victor states, we have a choice: “The most important thing to realize about the future is that it’s a choice.
People choose which visions to pursue, people choose which research gets funded, people choose how they will spend their careers.” We are seeing other types of needs emerge that will require new kinds of experiences designed for them. I’m not just talking about fitness wearables, IoT refrigerators, or Amazon’s Alexa — I’m talking about the future of services that companies will offer to make our lives better. It’s about services, not consumption of products and interfaces. So how do you articulate and design for experiences that aren’t under glass?

Human-centered Storytelling

One method that I have used, with great success, is human-centered storytelling. Ok, waiting for some eye rolls… Alright, ready? It really works. I first wrote about the power of storytelling in the article Better User Experience Through Storytelling, six years ago. There has been a trend in creative industries like advertising (or even among rollercoaster designers) to call themselves “storytellers.” This was heavily criticized by designers like Stefan Sagmeister. This is not that. At all. This is using well-documented design principles and ideation methodologies, and then generating a narrative based on those findings. In companies today, most proposals for new products, or new features for existing ones, are presented with stats, bullet points, tech specs, unrealistic screen designs, or even a press release for that future project. And with a vision, it is very important that everyone has the same understanding of the experience — especially in a large corporation. However, this typical proposal method places the burden on the listeners to assemble the vision from an assortment of disparate elements.
https://medium.com/user-defenders/the-perfect-design-tool-54e64428f2b
['Francisco Inchauste']
2016-07-21 17:50:33.597000+00:00
['Design', 'Storytelling', 'UX']
3 Things to Keep in Mind When Choosing a Web Font
3 Things to Keep in Mind When Choosing a Web Font

The @font-face rule has changed the face of web design, allowing designers to move away from the system default fonts and into a new realm of diverse, rich, and interesting fonts. Here are 3 things to keep in mind when choosing a web font to ensure that your users are getting the design experience you intended.

Watch Out for Poor Anti-aliasing

Anti-aliasing refers to smoothing the edges of fonts, rather than displaying them with pixelated (jagged) edges. The font rendering on Windows does not always do smoothing by default. So while you can enable it on your own computer, there is no way to ensure it is enabled for visitors to your site. Here are two ways to get around that:

Choose another font — This will look better or worse depending on which font you choose, what font type, as well as what font sizes you will be using. Some fonts look better large, and vice versa.

Font replacement — This means displaying the font with a completely different method from the browser’s default text renderer. This way your font will always render the same, assuming that whatever technology you’re using (e.g., JavaScript, Flash) is enabled and supported.

You can also check out “A Closer Look at Font Rendering” by Tim Ahrens in Smashing Magazine for a breakdown of how font rendering works.

Don’t Forget to Set a Default Font

The cold hard web-font truth is that the @font-face rule is fairly new and only supported in newer versions of browsers. You will definitely want to take both audiences into account. First, make sure that you specify a default font-family after setting your cool new font.
Depending on what font you’re using, you will want to use a default font that looks similar to your new font:

h1 { font-family: 'myCoolNewFont', arial, helvetica }

“Common Fonts to all Versions of Windows & Mac Equivalents” is a helpful list of potential default browser fonts. Second, double-check your font weight. Header tags are bold by default (font-weight: bold). When using a font that does not have a separate bold version, the browser will apply a “fake bold” to the font. That results in certain parts of the font being thicker than intended, and it can look out of proportion and just plain weird. Simply setting { font-weight: normal } does not take into account viewers who will see your page with a default font. Their headers will also be set to { font-weight: normal }, which hinders their visual effectiveness as headers. Here are two potential ways to get around this:

Font selection — Again, some web font packages come with secondary bold versions of the font, so you can use one of those instead. When using a font replacement, you will also need to specify a secondary bold version of the font. Now { font-weight: bold } will render as expected.

Modernizr.js — This is something I recently found that checks CSS3 compatibility and lets you assign CSS styling for either scenario. For example:

h1 { font-family: arial, helvetica; font-weight: bold }
.font-face-enabled h1 { font-family: 'myCoolNewFont'; font-weight: normal; }

Pretty sweet, eh!? Download modernizr.js here: http://modernizr.com

Google Font Bug!

Last but not least, Google has a great Web Fonts API, and you can load fonts directly from their server.
I noticed that the vertical spacing of a Google-hosted font can be significantly different when viewed in Chrome on Windows. The best solution I was able to find is to manually upload your own version of the font kit from Font Squirrel and use that instead. Visit endertechnology.com to learn more!
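Pulling the advice above together, here is a minimal sketch of a stylesheet that loads a custom font with a real bold face and sensible fallbacks. The font name and file paths are placeholders, not from the original article:

```css
/* Regular and bold faces of a hypothetical custom font.
   'MyCoolNewFont' and the file paths are placeholders. */
@font-face {
  font-family: 'MyCoolNewFont';
  src: url('/fonts/mycoolnewfont.woff2') format('woff2');
  font-weight: normal;
}

@font-face {
  font-family: 'MyCoolNewFont';
  src: url('/fonts/mycoolnewfont-bold.woff2') format('woff2');
  font-weight: bold;
}

/* Headers fall back to bold Arial/Helvetica if @font-face is unsupported,
   and use the dedicated bold file (no "fake bold") when it is. */
h1 {
  font-family: 'MyCoolNewFont', Arial, Helvetica, sans-serif;
  font-weight: bold;
}
```

Because both @font-face blocks share one family name and differ only in the font-weight descriptor, the browser picks the correct file for bold text instead of synthesizing a fake bold.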
https://medium.com/endertech-insights/3-things-to-keep-in-mind-when-choosing-a-web-font-af185ac8663a
[]
2017-08-22 22:03:06.440000+00:00
['Design', 'Marketing', 'Web Design', 'Tech', 'Typography']
How to Market Your Business When Nothing Is Working
I started my entrepreneurial journey when an interesting blog post was enough to get you attention…

…a well-crafted tweet could start a long-term working relationship with a partner, and a podcast — well, people still didn’t quite know what to do with those. Needless to say, social media, blogging, and podcasting have changed a lot in the eight and a half years I’ve been in business. Email marketing has changed a lot too. Sending a weekly email used to feel like a lot, but now, if you’re not sending several times a week, your emails are probably getting lost in the sea of other emails your subscribers receive. The fact is, “what works” has changed. In fact, for many business owners I know, it seems nothing works anymore. If you’re feeling like you’re working hard to share your business with people who care but you just aren’t getting anywhere, it can be debilitating. After all, how can you put any effort toward the work you really want to do if you can’t even get people to pay attention? After months or years of perceived invisibility, you’re likely to give up altogether. This summer, I was speaking with Lisa Robbin Young, founder of Ark Entertainment Media, about this, and she said, “When people say nothing is working, what they’re really talking about is tactics.” I couldn’t agree more. Tactically, the marketing landscape is going through a sea change. Power is consolidating among a few key platforms. Those platforms — your distribution channels for marketing — control how content is shared, who you can connect with, and what gets seen. Without a reliable way to spread the word about what you’re putting out into the world, even the media you own has less value to you as a business owner. It might seem, at this point, that the situation is dire. It’s not. There is still a huge opportunity today for independent businesses like yours.
People are hungry to buy from companies that represent their values, make them feel things other than rage or fear, and create meaningful or innovative offers. Yours is one of those companies. So then how do you reach the people who are so hungry to buy from you? It’s certainly not by trying to make an end run around the Facebook algorithm, outsmart the bots on Twitter, or plaster your marketing messages all over the latest copy of Snapchat. Forget looking for the magic marketing tactic that’s going to turn your business around and start getting real about what your marketing needs to do for you. 1.) Market one person at a time. Maybe you got the idea that social media was going to help you reach hundreds or thousands of people at once. Maybe you thought that you could create “some content” and suddenly the masses would see, like, and share it so that you wouldn’t have to actually talk to anyone about your business. I’ll admit, there was a time when this was partially true. That time is not now. Marketing is, and has always been, the pursuit of reaching one person at a time with something they desire or need — be it content, a product, or a conversation. Forget figuring out how to broadcast to more people (yes, one day your Facebook reach will become zero) and start figuring out how to connect with the exact right person who needs what you offer next. You may very well still find them on Facebook or Instagram or Twitter — but you won’t find them by shouting out into the ether. Next step: make a list. I have my clients make lists of 10, 20, or 30 people they want to buy what they’re selling. Then they find ways of reaching those people one at a time. They might email, they might message them on Facebook, they might even — perish the thought — pick up the phone. This genuine, personal attempt at connection almost always results in thousands of dollars in sales, fans for life, and a huge sigh of relief when they realize they never have to “launch” again. 2.) 
Focus on what you’re most enthusiastic about. Your level of enthusiasm is a huge indicator of your likelihood of success when it comes to marketing. Of course, since many business owners tell themselves they hate marketing, this is a big problem. Those folks aren’t going to be very enthusiastic. They’re the ones constantly trying to find that magic marketing tactic, and their enthusiasm dips every time the latest fad falls flat. Luckily for you, you can turn your enthusiasm into an unfair advantage. Dacher Keltner, author of The Power Paradox, wrote, “Groups give us power when we are enthusiastic, speak up, make bold assertions, and express an interest in others.” He also says that enthusiasm was the strongest predictor of sustained social power in the groups he studied. Feel free to substitute “group” with “market” or “community” here. If you want to earn attention and build power for your business, you’re going to have to show some enthusiasm! Next step: make another list. This time, list out 10–20 ideas or topics you vehemently disagree with in your market. Then, list out 5–10 aspects or features of your product or service that you’re incredibly passionate about. Finally, list out 5–10 misconceptions your potential customers hold and how your offer turns them around. You now have a huge list of things you can speak or write enthusiastically about. Try creating emails, blog posts, podcast episodes, or videos from this list. Try speaking to local groups about something on the list. Try bringing up list items in your next sales call. Your enthusiasm will go a long way toward connecting you and your company to the right people. 3.) Show, don’t tell. Your potential customers are more skeptical than they’ve ever been — and with good reason. They’re really not interested in reading, hearing, or watching something that explains something they’ve heard a million times before — but never seen results from. 
Nor do they want to hear how your product is the best, most innovative, or most fun ever created. They need to see it. They want a demonstration. They want to be shown what’s on the inside, how it works, and why it works. If your product is really different, they want to see that difference in fine detail — right alongside the things they’ve tried before. And yes, “show, don’t tell” applies to service-based businesses as well as product-based businesses. Next step: Get creative, get transparent, and be willing to put your offer side-by-side with other offers to show off the differences. Create a video, slideshow, infographic, or checklist that actually shows what’s truly unique and special about what you’re offering. What’s working is nothing new. The strategy and tactics you’ll need to successfully market your business are not being developed in a posh office in Silicon Valley right now. They’re grassroots, person-to-person, authentic, transparent actions that have always worked to grow businesses. Those actions, like picking up the phone or speaking in front of a group of local community members, might make your heart race the way posting 10 times a day to Facebook does not, but they’re infinitely more effective. Whether you’re just getting started and wondering how to find your first 100 email subscribers or you have thousands of people in your audience and no clue how to re-engage them in this brave new world, the answer lies in these 3 keys: market one person at a time, focus on what you’re most enthusiastic about, and show, don’t tell. This post was originally published on CreativeLive.
https://medium.com/help-yourself/how-to-marketing-your-business-when-nothing-is-working-48f7443cafe0
['Tara Mcmullin']
2017-10-03 21:01:33.403000+00:00
['Marketing', 'Digital Marketing', 'Small Business', 'Growth Hacking', 'Entrepreneurship']
Are You a Natural-Born Storyteller?
Are You a Natural-Born Storyteller? Please, be weird and tell us about it! Image licensed from Canva A friend of mine always laughs at the fact that talking to me is like watching a soap opera, in a good way. Something is always going on, something interesting always happens to me, people coming and going, tension, friction, love and drama. In his eyes, my stories are the result of me being a magnet for drama. He’s convinced that my life consists of huge events, something dramatic, funny or very emotive all the time. If I am the one telling it, the simplest episode of an online dating failure turns into half an hour of laughing and deep thoughts about life and the world. He thinks I have a very interesting life that is anything but boring. I am that kind of person who always has something going on. I don’t think I have an especially interesting life. I am working, writing, raising my kids, I go to kickboxing and running. I am reading and thinking a lot. I meet friends when I can, which happens less frequently than I’d like. I travel whenever I can — which is even more infrequent. I am dating sometimes, but that’s all. One thing is sure: It’s never boring. But… Nothing too important or mind-blowing. Yet, when I talk about it, and when I think about it… I see and tell it as if something was always happening to me. I realized that the most interesting people in my life have one thing in common: They are natural-born storytellers. Life is never boring, but you need to have an eye for a story Everything is a story, everyone is a character, boring or exciting, but they are a character in your show. If you notice it. There is no such thing as an event too small to tell. I read stories about extremely mundane things, very simple thoughts — the best are usually the ones that could have been told in a single sentence. But then I wouldn’t have cared. 
Once you open your eyes to the things that can serve as a springboard for a story, it is very hard to close them again. It can become an addiction. But it’s not an addiction to drama; it’s an addiction to telling stories. Storytelling starts with telling your story to yourself It doesn’t matter if you are a writer, a designer, any kind of creative person — or a “boring” housewife, as I sometimes think I am. If you notice the little things and you tell yourself about the events happening to you in a way that makes them a story, you are a storyteller. If you think life is pretty exciting with its ups and downs, and you notice the drama in the trivial parts of life, then your narrative has you as the hero of your own story, with a villain (sometimes more than one), with your helpers, with the necessary props. The storyline is given, your choices and life shape it, but how you process it depends on you. Overthinking is not always bad There are so many articles about how not to overthink your life, how to let go of unnecessary fuss, how not to get lost in your own thoughts. Yet this “bad habit” actually helps the storytelling ability. You are looking at your life, trying to make sense of it, writing your story in your head and editing and rewriting it, as events start to make more or less sense. I noticed that all my interesting friends have a tendency to notice drama, to overthink, overanalyze, overprocess their lives — to finally be able to tell their story as a whole narrative. And it doesn’t matter what happens in reality; great stories and conversation starters can come from anything, from the most insignificant grocery run to the biggest life events. Being weird is great When we are kids, we are taught in school to fit in, and the best students are invisible. If you are unlucky, your family background holds the same belief. The system rewards the norm. Yet creativity is way more interesting, and being weird is so much more fun. 
Being different can be a burden, especially if you don’t yet have the means to process it and use it to your own benefit. But it can also become your superpower, your single most valuable trait. The greatest stories, the best angles and the most mesmerising executions come from a weird point of view. Do us a service: please be weird and stay weird. Have a strong opinion, think a bit differently, love the things that others don’t even notice, stand out from the crowd. It’s amazing and interesting, and the stories are worth it! Writing as an outlet of storytelling Meeting so many brilliant writers, I came to realize that the ones I find most interesting are the overthinkers, the over-analysers, the ones with exceptional curiosity, an eye for detail, and the ability to spot the seed of a story. The best stories I’ve read usually start from a very simple core thought, based on personal experience, a snippet from the past, a current happening, a simple musing, a spark of frustration or a burning question. I immensely enjoy following others’ trains of thought, marveling at how their brains work, how they draw their conclusions, how they live what life hands them, how they phrase their stories.
https://zitafontaine.medium.com/are-you-a-natural-born-storyteller-4f6c2d32e6af
['Zita Fontaine']
2019-08-31 07:06:15.662000+00:00
['Writing', 'Personal Grow', 'Self', 'Drama', 'Storytelling']
Fourth Wave Has a New Look
Our new masthead was designed by artist Leila Register, with inspiration from and gratitude to regular contributor Peter Pruyn. So much has happened since we sent out the last communication to Fourth Wave subscribers in mid-May, including Medium’s change in labeling for this type of communication from “letter” to “newsletter.” The most obvious change, though, is our new redesign. Some of you might have been fans of the previous masthead, which featured a woman holding her arms up while looking out at the ocean — a literal translation of the feminist term “fourth wave.” I know I was. Because I designed it! :p But about the time I was receiving $1,200 from the U.S. government, I was realizing that image was subpar, so I decided to spend some of that money hiring actual artists to come up with a new vision for the site. You can read about that thought process, and see a few of the options it generated, in two stories: How I Spent My $1,200 and Why Beauty is a Drawback.
https://medium.com/fourth-wave/fourth-wave-has-a-new-look-5cabc13633c4
['Patsy Fergusson']
2020-07-20 18:43:35.502000+00:00
['Design', 'Feminism', 'Writing', 'Equality', 'BlackLivesMatter']
How to Change a Trump Supporter’s Mind (in 2021)
Humphrey Bogart and the Mandela Effect Collage by author/Original photos: Wikipedia Commons Sometimes it feels impossible to change a Trump supporter’s mind. It feels like facts don’t even matter to them. How can we possibly find common ground with them? “The election was rigged! Hillary is a pedophile!” they scream. It brings to mind the Mandela Effect. This is a phenomenon named after the widespread belief that South African civil rights leader Nelson Mandela died in prison. He didn’t, of course. Mandela was released from prison and actually served as President of South Africa before he died. This memory of his prison death is completely false, yet many people believe that it happened. This effect is used to describe memories that large numbers of people swear they have despite all proof to the contrary. Some even say that memories like this are evidence of a parallel universe. Remember the classic black and white movie “Casablanca”? In one iconic scene set at a nightclub, a heartbroken and drunk Rick, portrayed by Humphrey Bogart, demands that the pianist named Sam play his favorite song. “Play it again, Sam,” you may remember him saying. Only, he doesn’t actually say that in the movie. Rick just says “Play it”. He never utters the other two words. This is an example of the Mandela Effect. You’ve known this movie for your entire life, and now you find out, all of a sudden, that the most well-known line in it was actually never said. You’re not the only one who misremembers the line, by the way. Popular culture is littered with references to the non-existent phrase. There’s even a used sports equipment store in my own neighborhood that alludes to the line in the name of its business. They call themselves “Play it Again Sports”. A likely explanation is that you, and others, had a lapse in memory with respect to that movie. Rick never said that famous line and Mandela didn’t die in prison. And we haven’t been bumped into a parallel universe. 
It doesn’t matter how many people believe in something or how passionate they are about it. That doesn’t make it true. With those who refuse to accept the truth, we have to agree to disagree. This, by the way, is how to handle a Trump supporter.
https://medium.com/indian-thoughts/how-to-change-a-trump-supporters-mind-in-2021-b8f58d58e6d2
['Keith Dias']
2020-12-09 00:16:53.632000+00:00
['Politics', 'Political Science', 'Psychology', 'Science', 'Culture']
UX & Psychology go hand in hand— How Gestalt theory appears in UX design
In the age of AI and “Human Centered Machine Learning”, it’s essential that we understand the needs and behaviour of our users. This is doubly true for a UX designer. In order to create work that better serves the needs of our users, it’s important to understand some basic psychological principles. Which is why I want to share Gestalt theory with you. With this toolkit under our belt, we can consciously design user experiences that truly fit the users. Introduction of Gestalt psychology Gestalt theory was founded by Max Wertheimer in the early 20th century. This psychological philosophy deals with perception, perceptual experiences, and related patterns of stimulation. The motto of the Gestalt philosophy is: “The whole is other than the sum of the parts.” — Kurt Koffka When human perception meets complex elements, we recognise the whole before we see the individual parts. If we as designers understand these psychological principles, we can be more conscious during the design phase. One of the foundational documents of Gestalt psychology, Max Wertheimer’s 1923 paper Laws of Organization in Perceptual Forms, defined some basic principles (laws) that show how the mind tends to perceive visual stimuli. Law of Proximity The “Law of Proximity” states that when objects are close to each other, they tend to be perceived together as one group. Basically, proximity is closeness. If we use clear structure and visual hierarchy, we place less of a burden on users’ limited cognitive resources, so they will be able to quickly recognise and react.
https://uxdesign.cc/ux-psychology-go-hand-in-hand-how-gestalt-theory-appears-in-ux-design-18b727343da8
['Norbi Gaal']
2018-04-24 04:45:44.880000+00:00
['Design', 'Gestalt', 'User Experience', 'Psychology', 'UX']
Once Again
Photo by Şahin Yeşilyaprak on Unsplash I crawl out of the dream and shake the predator off my consciousness. I’m much too cold to will myself out of bed, but not so scared anymore. I’m burning up under all the layers, but I’m cold all the time these days. I’m fumbling through the maze of all the places I went wrong, afraid.
https://medium.com/makata-collections/once-again-aff4d601b5a
['Brianna R Duffin']
2019-03-05 02:32:49.004000+00:00
['Sleep', 'Mental Health', 'Fear', 'Psychology', 'Poetry']
Short-form stories are a great way to reach scrolling writers fast.
Short-form stories are a great way to reach scrolling writers fast. If you keep these little posts within the guidelines, not only do you get a 100% read-rate, you also have a quick way to promote your latest story to readers who may not have noticed it otherwise. With the new changes in distribution (see above), we writers need to hone every promotional tool in the shed. I believe these short-form stories are a great way to grab your reader’s attention in under 150 words (so the entire text is visible in preview). Enroll in my Free Email Masterclass. Get Your First 1,000 Subscribers
https://medium.com/the-book-mechanic/short-form-stories-are-a-great-way-to-reach-scrolling-writers-fast-25c098d2eab6
['August Birch']
2020-12-10 18:30:46.379000+00:00
['Medium', 'Writing', 'Marketing', 'Money', 'Life Lessons']
Why buy equity shares in BABB?
When you invest in BABB, you’re funding the banking license and you’re also buying a chunk of the private company, meaning you become a direct shareholder. Being a shareholder comes with various perks. We are selling 2.91% equity based on a valuation of £50 million. All equity sold will be in the form of ordinary (A) shares in the company, which entitles you to:

- Ownership rights (two shares in BABB per £1 invested)
- Voting rights on key strategic decisions such as appointing a chairman of the board (one vote per share)
- Access to possible future dividend payments based on company profits (please note that we have not committed to pay dividends)
- Return on investment in the future (more details below)

Why invest in BABB? There are two main reasons why you would invest and buy equity in BABB:

- You like the project and want to see it succeed
- You’re optimistic about the future value of the company, and therefore the future value of your investment

Equity versus BAX Equity investing is different to buying tokens, but they are both integral to the future of the company. Rather than choosing between them, it’s best to have both! 
The main points of difference between BAX tokens and shares are:

- Shares are a legally recognised investment in a regulated entity (BABB Group Ltd)
- Shares entitle you to a return if and when BABB goes public or is bought by another company, but they’re not fundamental to the operation of the platform
- BAX is a utility token which underpins the operation of the blockchain platform, but owning BAX tokens doesn’t guarantee returns or profits
- Shares are not publicly tradeable because BABB is a private company, meaning you can only swap or exchange your shares privately
- BAX tokens are liquid and fully tradeable at all times

BAX and shares are both equally important ways for the community to have a stake in the BABB project and ensure its success. We are happy to see BAX holding its own in this bearish market, and delighted to have welcomed thousands of new token holders since the token sale. We want to reward all our early backers with a bit of extra BAX, and we also think it’s important for all equity investors to own BAX ahead of the product launch. That’s why we’re offering:

- A BAX bonus for the private round — get 40 BAX tokens for each £1 you invest on CrowdCube (minimum investment £100); and
- A BAX airdrop for everyone who participated in the token sale, and all equity crowdsale investors.

For some equity investors, BAX might be their first encounter with crypto-assets. We’re very excited about this opportunity to introduce our project to a whole new audience. 
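The arithmetic of the offer above is simple enough to sketch. This is an illustrative snippet, not official BABB code; it only encodes the terms stated in this post (two ordinary shares per £1 invested, a 40 BAX private-round bonus per £1, and a £100 minimum investment):

```javascript
// Illustrative only: encodes the investment terms described above.
function investmentBreakdown(pounds) {
  if (pounds < 100) throw new Error('Minimum investment is £100');
  return {
    shares: pounds * 2, // two shares in BABB per £1 invested
    bax: pounds * 40,   // 40 BAX bonus per £1 in the private round
  };
}
```

So, for example, a £250 investment buys 500 shares and earns a 10,000 BAX bonus.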
Getting your money out Making an equity investment is a commitment. Unlike buying shares in a public company, which you can trade any time, buying equity in a startup means you can’t access your money until the company exits or your shares are bought by another investor or the company itself. There are a couple of ways you might get a return from your investment in BABB:

- We take investment from a VC firm who offers to buy out some (or all) earlier investors
- We are acquired by (bought by) a bigger company
- We become a public company via an Initial Public Offering and list our shares on the stock exchange
- A company buyback, in which BABB gives you an attractive offer to purchase your shares in the company

If any of these things happen, you’ll make a profit from your initial investment. Various companies have raised investment through CrowdCube and gone on to exit the business and generate a healthy return for their investors. For example, Camden Town Brewery was acquired just eight months after its CrowdCube raise, and this sale delivered ‘a multiple return’ to its 2,173 investors. At this stage, we can’t tell you which exit will happen, so we can’t tell you exactly when or how you will get a return on your investment. However, as with every other aspect of the BABB story, we will keep you fully informed of our progress at all times. Register to invest That’s the end of the fine print. It’s decision time. If you want to invest in BABB and own equity in the company, you know what to do. Questions? You can join our equity crowdsale Telegram group or email us at [email protected].
https://medium.com/babb/why-buy-equity-shares-in-babb-d9605bd85d16
[]
2018-09-25 09:34:50.051000+00:00
['Blockchain', 'Investing', 'Entrepreneurship', 'Startup', 'Thoughts']
A Recipe for Designing Animations—Without Sacrificing Performance
Step 1: Sketch I started by designing a set of illustrations in Sketch, to set up each animation. Like many, I typically use Illustrator for graphic design, and Sketch for UI design. But because Sketch is so well-suited for flat, geometric vectors, I gave it a shot for these illustrations. I used Illustrator very sparingly—only for certain Pathfinder operations. I also copied each artboard and created a set of static fallbacks, which I exported as PNGs. Fallbacks are useful for older browsers and in cases of animation failure. Fallbacks should also be served, for accessibility reasons, to users who request reduced motion. Step 2: Sketch2AE (now AEUX) I then used Sketch2AE, the precursor to AEUX, to import all of my Sketch layers into After Effects. In AE, I use a lot of precomps and null objects. Rather than copying all of my layers into AE at once, I used Sketch2AE to copy groups of layers at a time. This made it easy to isolate the components I wanted to animate without manually sorting through a long layer list. Step 3: After Effects In After Effects, I animated each 5-second composition to tell a story about Chrome. I leveraged internal Material motion guidelines to ensure smoothness and consistency. I also used trim paths to create a trail of the core brand colors, and incorporated this trail into every animation for narrative continuity. Step 4: Bodymovin I then used the Bodymovin extension in After Effects to generate a JSON file for each animation. Each JSON file contains coded instructions for the entire composition. For safekeeping, I also rendered an MOV for each comp—more on this later. Step 5: Lottie I handed off the JSON files to our creative developers, who used the JavaScript library Lottie to generate an SVG for each animation. SVGs themselves are another form of coded instructions for animations, and can be written inline in HTML code. 
This means that no network requests are needed to render each animation—no huge GIFs or video files. Our developers also wrote functions to trigger animation playback on scroll, and restart playback on click. This makes for a more intentional user experience than endless looping would. If you don’t have programming experience, you can use an online platform such as LottieFiles to upload a JSON file and preview its animation. Step 6: Photoshop For presentation purposes, I like to create a GIF for each animation. This makes it easy to share work internally for feedback throughout the design process. To do so, I imported each rendered MOV from After Effects into Photoshop using File > Import > Video Frames to Layers. I then used the Photoshop Timeline panel to create frame animations, and exported each animation as a GIF using Save for Web (Legacy). If you have a long animation with thousands of frames, Photoshop may be slow to convert your frames to layers. In this case, I would use an online MOV-to-GIF converter. Keep in mind that these online converters are sometimes lossy, so your GIF may not have the same quality as your MOV. Step 7: Optimization I have a habit of optimizing all my exports for good file hygiene. I always start with a lossless compression tool, such as ImageOptim. Lossless compression will reduce file size without impacting quality. If the file size is still too large post-compression, I typically use a lossy compression tool such as ezGif on the lowest compression setting, to preserve quality. Done! At the end of this process, you’ll have a lightweight SVG animation supported by a JSON file. You’ll also have an MOV and an optimized GIF for easy sharing, as well as a static PNG fallback. These steps allowed us to successfully incorporate animations into the Chrome homepage without impacting performance. 
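The Lottie hand-off in Step 5 can be sketched roughly as follows. The option fields (container, renderer, loop, autoplay, path) and the loadAnimation/goToAndPlay calls in the comments are lottie-web’s documented API, but the element id, file name, and wiring here are hypothetical, not the team’s actual code:

```javascript
// Minimal sketch of wiring a Bodymovin JSON export to lottie-web.
// 'hero' and 'chrome-story.json' are invented names for illustration.
function lottieOptions(containerId, jsonPath) {
  return {
    // In the browser: the DOM node to render into; guarded so this
    // sketch also runs outside a DOM environment.
    container: typeof document !== 'undefined' ? document.getElementById(containerId) : null,
    renderer: 'svg',  // render as inline SVG, as described above
    loop: false,      // play once; restart on click instead of endless looping
    autoplay: false,  // wait for the scroll trigger
    path: jsonPath,   // the Bodymovin JSON export
  };
}
// In the browser you would then do something like:
//   const el = document.getElementById('hero');
//   const anim = lottie.loadAnimation(lottieOptions('hero', 'chrome-story.json'));
//   el.addEventListener('click', () => anim.goToAndPlay(0, true)); // restart on click
//   new IntersectionObserver(([entry]) => {
//     if (entry.isIntersecting) anim.play();                       // play on scroll into view
//   }).observe(el);
```

Disabling loop and autoplay is what makes the scroll- and click-triggered playback described above possible.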
I used the same workflow to create an animation for the download confirmation page, and a variant for a landing page experiment. Plus, I’ve also shared this workflow with other teams at Google, who have since incorporated animations into their websites—hopefully you and your team find it useful, and if you have tips and tricks of your own, please share in the comments. ❧ Neil Shankar is a designer on Creative Engineering at Google, embedded through Left Field Labs. Visit tallneil.io.
https://medium.com/google-design/a-streamlined-workflow-for-performative-animations-be0a6ff3df7a
['Neil Shankar']
2019-06-14 19:33:00.329000+00:00
['Animation', 'Design', 'Tools', 'UX', 'Productivity']
Could the future of farming be vertical?
Could the future of farming be vertical? Vertical farming is greener and more efficient than traditional agriculture, writes Natalie Mouyal Photo: BrightAgrotech, Pixabay Vertical farming promises a more sustainable future for growing fruit and vegetables. Instead of planting a single layer of crops over a large land area, stacks of crops grow without soil or sunlight. The nascent technology enables farmers to grow more food on less land. Among the benefits, it reduces the environmental impact of transportation by moving production from the countryside to the cities, where most people live. Dilapidated warehouses and factories around the globe are being transformed into urban farms to grow salads and other leafy greens at a rate that surpasses traditional farming techniques. LEDs provide the lighting plants need to grow, while sensors measure temperature and humidity levels. Robots harvest and package produce. At one vertical farm in Japan, lettuce can be harvested within 40 days of seed being sown. And within two towers measuring 900 m² each (actual cultivation area of 10 800 m² and 14 400 m²), the factory can produce 21 000 heads of lettuce each day. Indoor farming is not a new concept, as greenhouses have long demonstrated. It has existed since Roman times and can be found in various parts of the world. Greenhouses are described in a historic Korean text on husbandry dating from the 15th century and were popular in Europe during the 17th century. In modern times they have enabled the Netherlands to become the world’s second largest food exporter. Vertical farming offers a new take on indoor farming. Popularized by the academic Dickson Despommier, its proponents believe that vertical farming can feed millions of people while reducing some of the negative aspects associated with current agricultural practices: carbon-emitting transportation, deforestation and an over-reliance on chemical fertilizers. 
Vertical farming is defined as the production of food in vertically stacked layers within a building, such as a skyscraper or warehouse in a city, without using any natural light or soil. Produce is grown in a controlled environment where elements including light, humidity, and temperature are carefully monitored. The result provides urban dwellers with year-round access to fresh vegetables, since crops can be grown regardless of weather conditions, without the need for pesticides, and have only a short distance to cover from farm to plate. Initially conceived by Despommier with his graduate students as a solution to the challenge of feeding the residents of New York City, vertical farming has since taken off around the world, most notably in the United States and Japan. According to the research company Statista, the vertical farming market is expected to be worth USD 6.4 billion by 2023. High-tech farming According to the UN Food and Agriculture Organization, food production worldwide will need to increase by 70% by 2050 to feed a projected global population of 9.1 billion. Vertical farming seeks to address the dual challenges of feeding a growing population that, increasingly, will live in urban centres. By repurposing warehouses and skyscrapers, these ‘high-tech’ greenhouses reuse existing infrastructure to maximize plant density and production. One vertical farm in the United States claims that it can achieve yields up to 350 times greater than open fields, while using just one percent of the water traditional techniques require. In general, two methods are used for vertical farming: aeroponics and hydroponics. Both are water-based, with plants either sprayed with water and nutrients (aeroponics) or grown in a nutrient-rich basin of water (hydroponics). Both rely on advanced technology to ensure that growing conditions are ideal for maximizing production. 
So as to produce a harvest every month, vertical farms need to control the elements that affect plant growth. These include temperature, requisite nutrients, humidity, oxygen levels, airflow and water. The intensity and frequency of the LED lights can be adjusted according to the needs of the plant. A network of sensors and cameras collects data with detailed information about the plants at specific points in their lifecycle as well as the environment in which they grow. This data is not only monitored but also analyzed to enable decisions to be taken that will improve plant health, growth and yield. Data sets sent to scientists in charge of the growing environment enable decisions to be made in real-time, whether they are onsite or at a remote location. Automation can take care of tasks such as raising seedlings, replanting and harvesting. It can also be used to provide real-time adjustments to plant care. One factory plans to automate its analytical process with machine learning algorithms so that real-time quality control can take into account a diverse range of data sets. While each of these farms will implement varying levels of technology, it can be expected that as these technologies become more widespread, their adoption will increase. The use of artificial intelligence and cloud computing is not yet extensive but is likely to become increasingly important to ensure production yields remain high. Growing pains Despite the enthusiasm for vertical farming, its business model is not yet proven. The initial investment needed to launch a vertical farm and the electricity required to power the 24-hour lights, sensors and other technologies can be costly. Depending on the source of the electricity used to run the equipment, it may not necessarily prove environmentally cleaner than traditional farming techniques. For this reason, a shift towards renewable energy sources could support the claim that these farms have a positive environmental impact. 
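As a loose illustration of the kind of decision such a monitoring system automates, here is a toy control-loop check. Every threshold, field name, and action label is invented for the example; real vertical farms tune these per crop and per growth stage:

```javascript
// Toy sketch of a sensor-driven adjustment decision (illustrative only).
// Compares one sensor reading against target conditions and returns the
// corrective actions a grow controller might take.
function adjustEnvironment(reading, target) {
  const actions = [];
  if (reading.temperatureC > target.temperatureC + 1) actions.push('cool');
  if (reading.temperatureC < target.temperatureC - 1) actions.push('heat');
  if (reading.humidityPct < target.humidityPct) actions.push('mist');
  if (reading.lightHours < target.lightHours) actions.push('extend LED photoperiod');
  return actions; // an empty array means conditions are within range
}
```

A reading of 25 °C, 60% humidity, and 16 light-hours against a 22 °C / 70% / 18-hour target would trigger cooling, misting, and a longer LED photoperiod.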
At this stage, vertical farms are used primarily for growing crops that attract high market prices, such as herbs, medicinal plants and baby greens. They have not been used to grow the wheat, beans, corn or rice which feed much of the world. Its scale is not yet sufficient to meet food demands. Vertical farming is still in its infancy. No large scale studies have yet been completed to allow a full comparison with traditional farming techniques. Despite this, it has generated much enthusiasm and, more recently, significant financial support, which may enable vertical farming to create a niche market for the supply of fresh produce to city dwellers.
https://medium.com/e-tech/could-the-future-of-farming-be-vertical-ee109aa895af
[]
2018-10-08 14:35:10.080000+00:00
['Agriculture', 'Sustainability', 'Artificial Intelligence', 'Environmental Issues']
The VC Formula for Success in Life
Image: Harvard Business School. Source: Flickr Israel has been under its second lockdown for three weeks now, so I took the opportunity to sort through my closet. Underneath all the clothes and memorabilia, I found a box with my application to Harvard Business School (back then we still printed things out). Brushing off the dust to read my essays, it felt like I had discovered a time capsule of my thoughts from 2005. One essay question stood out to me; it was #4: “How do you define success?” Within my 400-word limit I rambled on about “finding the equilibrium between professional, personal, and spiritual goals”. My answer was fluffy and clearly written to appeal to the audience. To be honest, it is not how I viewed success then, and it is definitely not how I view it now. The truth is that life starts off simple and gets more complicated. Friends come and go. We make money and we lose it. Some businesses thrive and others don’t. The race to success early on often gives way to the need to make sense of it all, especially these days. Though I know there are no Harvard professors on the other side waiting to analyze what I have to say this time around, I figure for myself and whoever else may be interested, it is worth taking the time for a re-do. So here I go, again. How do I define success? The way I see it now, success is not a destination; it’s a state of mind. As a partner at F2 Venture Capital, I have come across hundreds of overachievers: CEOs, CTOs, data scientists, developers, and other investors. What sets the winners apart is not their achievements, but the way they carry themselves. Take Dekel Valtzer. He had the same confidence selling sunscreen and diapers as he draws on now at the helm of Avo, one of our fastest-growing portfolio companies. Or Assaf Wand. Back in early 2000 he sold do-it-yourself sushi courses and MBA application consulting services with the same passion that propels him now as CEO of Hippo, the $1.5 billion insurtech unicorn. 
These visionaries were always a success; it just took some time to reveal it. Success is also how we manage failure. As an investor in a dense ecosystem, it’s very easy to get bogged down in comparisons to others, or “misses”. It’s impossible to win every deal or get it right every time. When a startup called Moon Active announced a recent funding round at a valuation of $1.25 billion, I cringed. Back in 2012 I met the CEO and signed an agreement to invest $50,000 at a valuation of $2 million that would have netted $20 million today. Sadly, after our handshake and 20 follow-up emails, the deal didn’t close. After the news came out, I struggled with this loss. It took me time to realize it is not a failure unless I quit then and there. But I found another way to handle it: take the anger and frustration and use it as fuel (rocket fuel, given the size of the miss) to chase the next deal that much harder. The Measure of Success One thing we always tell our founders is “if you are not measuring it, it doesn’t count”. In the startup ecosystem, we define success in terms of key performance indicators (KPIs), numbers that roll up into one simple equation called ROI, or return on investment. Image: ROI equation. Source: F2VC For example, a company that spends $1 in marketing to acquire a customer who spends $2 over her lifetime generates an ROI of 2x. We can compare this number within a company and across companies over time, to measure success and failure. If it works for startups, could this measure be applied to life? I believe the formula would look something like this: Return on Life (ROL) = Impact / Time. The sudden loss of my parents at a relatively young age made clear to me just how precious time is in life. Suddenly I found myself in the front row without a safety net. This was a stark wake-up call to prioritize my time for positive impact, relative to where I was yesterday, and relative to the people who bring meaning to my life. 
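The ROI example and the proposed ROL formula above can be written out as a toy sketch in a few lines of Python; the function names and the "impact"/"time" units are illustrative placeholders of mine, not the author's, and the $1/$2 figures are the article's own example:

```python
# Toy sketch of the two ratios discussed above. The $1 acquisition cost and
# $2 lifetime value come from the article's example; Return on Life is the
# author's analogy, with "impact" and "time" left as abstract units.
def roi(lifetime_value, acquisition_cost):
    """Return on investment expressed as a multiple (2.0 means 2x)."""
    return lifetime_value / acquisition_cost

def return_on_life(impact, time):
    """The proposed ROL = Impact / Time."""
    return impact / time

print(roi(2.0, 1.0))  # the $1-marketing, $2-lifetime example: prints 2.0
```

The point of the formula is comparability: the same ratio can be tracked for one company (or one life) over time, or compared across companies.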
On our podcast series Founder Stories, my wife Anouk and I interview professional authors, chefs, athletes, singers and other pioneers at the top of their fields in Israel. Because we live in a small country, these visionaries don’t command celebrity salaries. The common thread of their success is the enormous impact they make with their limited time and singular talent. Image: Founder Stories Podcast. Source: F2VC While Return on Life does not define success, it can certainly point us in the right direction. In venture capital, ROI is our north star. Out of every 100 companies that reach out, we invest in 1. These are the founders who are not seeking our validation; they already have conviction in their eyes. This is how we decide to commit millions of dollars well before any real product or business model goes online. The relationship with the founders we back is personal, so every time we serve as a sounding board or open a useful door, it yields tremendous satisfaction. Even as the lockdown persists and the office remains closed, I lean into this job because the ROL is sky high. Here in Tel Aviv, the VC world is full of firms competing for the best founders. But we can’t lose sight of reality: this is just a bubble of time and space, here and now. There are startup hubs all over the world — Silicon Valley, New York, London, Berlin, Shanghai. And what happens when the next unicorn rolls around after our time is up? It’s simply impossible to catch them all. Success is not about being the best. It’s about making the most impact with the time that we have. As they say, “The race is long and, in the end, it’s only with yourself.”
https://medium.com/f2-capital/the-vc-formula-for-success-in-life-3f1df5679cf1
['Barak Rabinowitz']
2020-10-14 07:00:35.020000+00:00
['Entrepreneurship', 'Israeli Startups', 'Venture Capital', 'Coronavirus', 'Harvard Business School']
Practical Machine Learning with Scikit-Learn
Practical Machine Learning with Scikit-Learn EDA, feature engineering and preprocessing, pipelines Photo by Joshua Hoehne on Unsplash Customer churn is an important issue for every business. While looking for ways to expand their customer portfolio, businesses also focus on keeping existing customers. Thus, it is crucial to learn the reasons why existing customers churn (i.e. leave). Churn prediction is a common task in predictive analytics. In this article, we will try to predict whether a customer will leave the credit card services of a bank. The dataset is available on Kaggle. We will first try to understand the dataset and explore the relationships among variables. After that, we will create pipelines to transform features into an appropriate format for model training. In the final step, we will combine the feature transformation pipeline and a machine learning model in a new pipeline. The first step is to read the dataset into a pandas dataframe. import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set(style='darkgrid') churn = pd.read_csv("/content/BankChurners.csv", usecols=list(np.arange(1,21))) print(churn.shape) (10127, 20) I have excluded the redundant columns in the dataset using the usecols parameter. (image by author) There are 20 columns. The screenshot above only includes 7 columns for demonstration purposes. We can view the entire list of columns with the columns attribute. We should always check for missing values in the dataframe. churn.isna().sum().sum() 0 The isna function of pandas returns True if a value is missing. We can chain sum calls to count the number of missing values in each column or in the entire dataframe. Since there are no missing values, we can move on.
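The article's later pipeline steps are not shown in this excerpt. As a minimal sketch of the workflow it describes (combining feature transformation and a model in one pipeline), the shape might look like the following; the tiny synthetic dataframe and its column names are illustrative stand-ins, not the real Kaggle BankChurners schema:

```python
# Sketch of the described workflow: check for missing values, then combine
# preprocessing and a classifier in a single scikit-learn Pipeline.
# The dataframe below is synthetic; the real article uses BankChurners.csv.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
churn = pd.DataFrame({
    "Customer_Age": rng.integers(20, 70, 200),
    "Credit_Limit": rng.uniform(1000, 20000, 200),
    "Gender": rng.choice(["M", "F"], 200),
    "Attrition_Flag": rng.choice([0, 1], 200),  # target
})

# Count missing values across the whole frame, as in the article.
n_missing = churn.isna().sum().sum()

numeric = ["Customer_Age", "Credit_Limit"]
categorical = ["Gender"]

# Scale numeric columns, one-hot encode categorical ones.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

# Feature transformation and model combined in one pipeline.
model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(churn[numeric + categorical], churn["Attrition_Flag"])
preds = model.predict(churn[numeric + categorical])
```

Keeping the transformations inside the Pipeline means the exact same preprocessing is applied automatically at prediction time, which is the main point of the approach the article outlines.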
https://towardsdatascience.com/practical-machine-learning-with-scikit-learn-de014bd9d4e5
['Soner Yıldırım']
2020-12-24 07:05:30.756000+00:00
['Data Science', 'Machine Learning', 'Artificial Intelligence', 'Python', 'Programming']
This Just In: Choice In Video Games
This Just In: Choice In Video Games Have video games lost their appeal? Last week I watched High Score on Netflix. It’s a six-part docuseries on the history of video games. For most of the episodes, it was a walk straight down Memory Lane. The first video game I remember playing is Duck Hunt on the original Nintendo Entertainment System. Pulling the trigger on the plastic gun came with a very loud click, followed a few milliseconds later by the screen flash. Then, after three shots, the dog would pop up and laugh at you. It was revolutionary. After that, I was hooked on video games. I’ve owned just about every video game console released in the last three decades and played everything from sports simulators to role-playing games. I enjoy any game with a good storyline. Over the last few years, my gaming considerably slowed (unless you count playing Sudoku on my phone, which happens almost daily). I tend to play games three or four years after their release, once they are significantly cheaper and any downloadable content is included. Right now, I’m playing Shadow of the Tomb Raider, the third game in the latest Tomb Raider reboot. When I am running around, climbing cliffs, and exploring tombs, the game is a lot of fun. However, there’s this whole other side of the game where the only option is ruthlessly killing people at every turn. This part of the game is not only not fun, it makes no sense. The premise of the rebooted Tomb Raider series is centered on, of course, Lara Croft. In her early twenties, Lara is adventure-seeking and a bit reckless, trying to solve her deceased father’s life work: preventing an evil secret society (Trinity) from destroying the world. Throughout the course of the games, Lara goes from running away from bears to killing Trinity henchmen (or zombie-like South American jungle-tribesmen) on sight and without mercy. 
Maybe my tastes have evolved, maybe it’s the pandemic and the ever-present death toll, but I don’t really want to finish Tomb Raider. And it’s not just this game (which, aside from the killing, actually has a compelling storyline); I don’t want to play any game where killing is the only way to move the story forward. I’ve lately been drawn back to No Man’s Sky, a game released in 2016 that has almost no real story. It’s essentially a game where you explore various galaxies, mine resources, and trade with other species, all while trying to survive hostile planetary climates. On the surface, the game appears boring. A few hours in, becoming an intergalactic trading mogul is incredibly satisfying. My issue isn’t about violence in games, which I don’t have a problem with as long as it makes sense in the context of the story. I guess what I’m looking for is a game where I actually have choices. No Man’s Sky provides those choices since you can practically do what you want. Avoid people? Never get into a fight? Just mine resources and build your habitat? All those choices are perfectly fine in the game. Tomb Raider provides zero choices. I’d prefer Lara have the choice to sneak past Trinity guards, avoiding conflict like any sane twenty-something in over her head would. Instead, the choices are removed and the only option is for Lara to become a ruthless serial killer. It’s just not fun. While there are games where your choices affect the outcome, they are rarely open-ended. The truth is, I’d love a version of Tomb Raider without the Trinity storyline. A game where Lara explores South American jungle ruins, collecting artifacts and clues to find the next one. When you complete Shadow of the Tomb Raider’s main story, you unlock all the “challenge tombs”, which is essentially the game I want to play. I just don’t want to go through the Trinity mess and senseless killing to get there. One of the best games I’ve played in years is Horizon: Zero Dawn. 
You play as Aloy, one of the last humans in a world destroyed by technology. The majority of the conflict in the game comes from avoiding artificially intelligent dinosaurs. Plus, the story is fantastic. Horizon is Tomb Raider without the people and I am here for it. Unfortunately, games like Horizon are few and far between. With the next generation of consoles releasing later this year, I fear games will continue to leave players without choices, now with better graphics! Maybe someday the industry will evolve and players will have actual choices.
https://justincox.medium.com/this-just-in-choice-in-video-games-6784460baff4
['Justin Cox']
2020-09-15 13:46:01.160000+00:00
['Mental Health', 'Nonfiction', 'Life', 'Games', 'Entertainment']
Clickbait Subject Lines: You Won’t Believe the Results
“Email on the Move” by rawpixel on Unsplash Clickbait is a pejorative term for a headline-writing technique that online publishers have been using for years to drive traffic to websites, more often than not featuring paper-thin content, in the hope of driving advertising impressions and revenues. You only need to click on a clickbait-type headline a couple of times to realize you’ve been duped — like a fish finding itself on a hook after biting down on a juicy worm. The thing about clickbait is that it nearly always leaves a bad taste in your mouth. The content is rarely as sensational, awe-inspiring or informative as the headline promises, and quite often, it simply wastes your time. Note: As an ex-newspaperman, I’m constantly perplexed as to why any editor would allow their “quality” content to be sullied by clickbait-style advertising partnerships (see below). In an age when ad-blocking software robs publishers of much-needed revenue, I guess many are just desperate for the cash they generate, although I cannot help but feel it’s a rather short-term plan for a very uncertain future. Not All Traffic Is Good Traffic The basic premise of clickbait is that all traffic is good traffic. This doesn’t work in the world of email marketing, where long-term relationships are far more valuable than quick wins. For an email marketing subject line to drive real success, it has to tell the subscriber absolutely everything about the email they are about to open. It’s actually very similar to a good old-fashioned (honest) newspaper headline, which tells the full story and encourages the reader to pick up a newspaper and carry on reading. I doubt very much that you would pick up a newspaper again if its headlines had little connection to the articles or were just a front for ill-targeted advertising. It’s exactly the same with email. The moment an email subject line tricks or misinforms a subscriber to open an email is the moment you lose that subscriber’s engagement forever. 
They will either unsubscribe or (worse still) never open another email from you again. In the perfect world, your subscribers will never receive an untargeted message. However, as we don’t live in a perfect world, I would rather a subscriber not open the occasional email because the subject didn’t quite pique their interest than waste their time on misleading and highly frustrating content. Instead of attempting to hoodwink your subscribers into opening your emails, wouldn’t it be better to just give them what they want? This abridged post first appeared on the iContact Email Marketing Blog.
https://john-w-hayes.medium.com/clickbait-subject-lines-you-wont-believe-the-results-d07b580b81d4
['John W Hayes']
2018-07-20 09:18:14.293000+00:00
['Email Marketing', 'Marketing', 'Entrepreneurship']
10 Normality Tests in Python (Step-By-Step Guide 2020)
10 Normality Tests in Python (Step-By-Step Guide 2020) A normality test is used to check whether a variable or sample has a normal distribution. Image of Author Before talking about normality tests, let's first discuss the normal distribution and why it is so important. Normal distribution The normal distribution, also known as the Gaussian distribution, is a probability function that describes how the values of a variable are distributed. It is a symmetric distribution in which most of the observations cluster around the central peak and the probabilities for values further from the mean taper off equally in both directions, with fewer outliers on the high and low ends of the data range. The name "Gaussian distribution" honors the German mathematician Carl Friedrich Gauss. A normal distribution has some important properties: the mean, median, and mode all represent the center of the distribution; the distribution is bell-shaped; and ≈68% of the data falls within 1 standard deviation of the mean, ≈95% within 2 standard deviations, and ≈99.7% within 3 standard deviations. Image from Christian Hubicki. Why is it so important? The normal distribution is the most important probability distribution in statistics because: many processes in nature follow it (age, height, weight, and blood pressure are common examples); linear regression assumes that errors (residuals) follow a normal distribution; and some ML algorithms, such as Linear Discriminant Analysis and Quadratic Discriminant Analysis, are derived under the assumption of normality. Normality tests In statistics, normality tests are used to check whether data is drawn from a Gaussian distribution, in simple terms, whether a variable or sample has a normal distribution. There are two ways to test normality: graphs and statistical tests. 1. Graphs for Normality test Various graphs can be used to test the normality of a variable. 
Using graphs/plots we can visually inspect normality, but graphs are not as accurate as statistical methods. 1. Q-Q or Quantile-Quantile Plot It plots two sets of quantiles against one another, i.e. theoretical quantiles against the actual quantiles of the variable. Image from Author If our data comes from a normal distribution, we should see all the points sitting on a straight line. 2. Box Plot A box plot, also known as a box-and-whisker plot, is another way to visualize the normality of a variable. It displays the distribution of data based on a five-number summary: minimum, first quartile (Q1), median (Q2), third quartile (Q3), and maximum. Image from Author If the variable has a normal distribution, we should see the mean and median in the center of the box. 3. Histogram One of the most popular and commonly used plots to visualize the distribution of data is the histogram. It also allows us to inspect the data for outliers, skewness, etc. It divides the data into bins of equal width; each bin is plotted as a bar whose height depends on the number of data points that fall in that bin. Image from Author If the variable has a normal distribution, we should see a bell curve. 2. Statistical Tests for Normality On the other hand, there are many statistical tests to check whether the distribution of a variable is normal/Gaussian. In this section, I am not going to talk about the math behind them, but I will show you the Python code for each test. 1. Shapiro-Wilk Test We should start with the Shapiro-Wilk test. It is one of the most powerful tests for checking the normality of a variable. It was proposed in 1965 by Samuel Sanford Shapiro and Martin Wilk. Image from Author If the p-value ≤ 0.05, we reject the null hypothesis, i.e. we assume the distribution of our variable is not normal/Gaussian. If the p-value > 0.05, we fail to reject the null hypothesis, i.e. we assume the distribution of our variable is normal/Gaussian. 2. D'Agostino's K-squared Test D'Agostino's K-squared test checks the normality of a variable based on skewness and kurtosis. It is named after Ralph D'Agostino. Skewness is a measure of symmetry; kurtosis is a measure of whether the data are heavy-tailed or light-tailed relative to a normal distribution. Image from Author The p-value is interpreted the same way: ≤ 0.05 means we reject normality, > 0.05 means we fail to reject it. 3. Anderson-Darling Normality Test The Anderson-Darling normality test is another general test designed to determine whether the data comes from a specified distribution, in our case the normal distribution. It was developed in 1952 by Theodore Anderson and Donald Darling. Image from Author It gives a range of critical values; we fail to reject the null hypothesis at a given significance level if the calculated statistic is less than the corresponding critical value. In our case, at each significance level, the data is consistent with a Gaussian distribution. 4. Chi-Square Normality Test Another way of checking the normality of a variable is the chi-square normality test. It is not as popular as the other methods. Image from Author Again, a p-value ≤ 0.05 means we reject normality, and a p-value > 0.05 means we fail to reject it. 5. Lilliefors Test for Normality The Lilliefors test is a normality test based on the Kolmogorov–Smirnov test. 
Like all the methods above, this test is used to check whether the data comes from a normal distribution. It is named after Hubert Lilliefors, professor of statistics at George Washington University. Image from Author If the p-value ≤ 0.05, we reject the null hypothesis, i.e. we assume the distribution of our variable is not normal/Gaussian; if the p-value > 0.05, we fail to reject it and assume the distribution is normal/Gaussian. 6. Jarque–Bera Test for Normality The Jarque-Bera test checks whether the sample data has skewness and kurtosis matching a normal distribution. NOTE: This test is only reliable for a large enough number of data samples (>2000). Image from Author The p-value is interpreted as before: ≤ 0.05 rejects normality, > 0.05 fails to reject it. 7. Kolmogorov-Smirnov Test for Normality This performs the (one-sample or two-sample) Kolmogorov-Smirnov test for goodness of fit. The one-sample version tests the distribution F(x) of an observed random variable against a given distribution G(x) (e.g. a normal distribution). Image from Author
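The "Image from Author" placeholders above stood for code screenshots that did not survive extraction. As a sketch of what that code might look like, several of the listed tests are available in scipy.stats (the sample below is synthetic, and these are standard scipy function names rather than the author's exact snippets):

```python
# Sketch of the statistical tests listed above, run on a synthetic
# normally distributed sample. scipy.stats provides all four tests shown.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(loc=50, scale=5, size=500)

# 1. Shapiro-Wilk test
stat, p = stats.shapiro(data)

# 2. D'Agostino's K-squared test (skewness + kurtosis)
stat2, p2 = stats.normaltest(data)

# 3. Anderson-Darling: returns a statistic plus a table of critical values
result = stats.anderson(data, dist='norm')

# 6. Jarque-Bera test (intended for large samples)
jb_stat, jb_p = stats.jarque_bera(data)

# 7. One-sample Kolmogorov-Smirnov test against a fitted normal
ks_stat, ks_p = stats.kstest(data, 'norm', args=(data.mean(), data.std()))

for name, pval in [("Shapiro-Wilk", p), ("D'Agostino", p2),
                   ("Jarque-Bera", jb_p), ("Kolmogorov-Smirnov", ks_p)]:
    verdict = "fail to reject normality" if pval > 0.05 else "reject normality"
    print(f"{name}: p={pval:.3f} -> {verdict}")
```

The Lilliefors test is not in scipy; it lives in statsmodels as statsmodels.stats.diagnostic.lilliefors. The chi-square normality test requires binning the data first, so it is omitted from this sketch.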
https://towardsdatascience.com/normality-tests-in-python-31e04aa4f411
['Sivasai Yadav Mudugandla']
2020-09-26 07:32:11.454000+00:00
['Statistics', 'Artificial Intelligence', 'Python', 'Data Science', 'Machine Learning']
I Broke My Lifelong Habit of Chronic-Lateness and You Can Too
Being chronically late affects everyone around us — more than we think I broke my habit of chronic lateness Exactly 60 days ago I made a pledge to myself (and my son) to stop being chronically late. For most of my life I’ve been late to everything. It’s embarrassing, selfish, and depressing. I was early for 60 days straight — the first time in my life! Being late didn’t make me more productive. I got worse. I did everything at the last minute. I showed up to meetings as the guy who tip-toed in, opening and closing the door behind me, with the one-handed queen wave and the mouthed ‘sorry.’ But none of the work stuff pushed me over the edge. I was just rude. My reputation at work wasn’t as big a deal to me as my integrity at home. My chronic lateness didn’t shift until my son broke down crying after I made him miss the bus for the umpteenth time. I felt so low. I mean, the lowest — rock bottom. I was a piece of crap. I was addicted to lateness. I tried to pack 100 pounds of activity into a three-pound bag. And now my six-year-old son had to be the one to tell me to get over myself. I was humbled, embarrassed, and ashamed. I still am, but I’m getting better. You can read more about the beginning of my journey here:
https://augustbirch.medium.com/i-broke-my-lifelong-habit-of-chronic-lateness-and-you-can-too-d8846111044b
['August Birch']
2019-04-22 16:28:48.122000+00:00
['Life Lessons', 'Entrepreneurship', 'Habit Building', 'Life', 'Psychology']
Trust Me?
Trust Me? Should I? ©Paula via Canva “Do you trust me?” he asked, offering me his hand. I could barely hear him over the whirl of the helicopter blades but I’m good at reading lips. “Not at all.” I replied as I took his hand. “But hey, this night has been full of surprises.” Duncan helped me up into the helicopter as it lifted off the ground, just avoiding the bloodthirsty creatures that dived for us. I watched them gather underneath us even as the helicopter gathered altitude. The last thing I ever expected when I agreed to this blind date was for the restaurant to be crashed by blood-thirsty former human beings, attacking anything and everything in their sight. For some reason they seemed to be zeroing in on me. What I didn’t realize at the time was that my ‘date’ had known what was going to happen. He’d been assigned to keep me safe as apparently, I was the key to saving the world. We buckled into the helicopter as his friend steered us away from my former home. We’d heard rumors that there was a safe zone in Texas of all places that was creature free. If we could get there and prove we were uninjured, we’d be safe. Supposedly. It hadn’t been long since the creatures had begun attacking but to my knowledge, the world had been overrun already. But with the winds gathering and the numerous creatures we just left behind, I didn’t know if we could make it to Texas from New York. I wasn’t even sure we could make it out of New York safely. If there were rumors about a safe zone, no doubt there would be more of these things and I didn’t like the looks of this wind. Just then, the helicopter bucked in the wind. “Hang on!” I grasped the seat in front of me and the hook above my head, “Already hanging!” I felt him wrap his arm around me, other hand wrapping around the hook above his head. The helicopter rolled in mid-air, shuddering and there was a loud snap above our heads. “What was that?!” Before he could answer, our pilot spoke, “One of our rotors just snapped! 
We’re going down!!” The helicopter took a nosedive and I felt his arm tighten as the pilot yelled, “Hold on!” I felt the hook above my head be ripped from my hand as the top of the helicopter ripped off, leaving nothing but open air and jagged walls above us. There was no way the seatbelts were going to hold even if there was any way this helicopter was going to land safely. He pulled my hand off the seat before it could be broken, even as the seatbelts snapped from the winds, pulling us out of the helicopter. “Hold on!” I wrapped my arms around his waist, holding on tightly, hoping like hell he had a plan that I couldn’t see beyond freefalling to our deaths. Just then, I felt something hit me in the back of the head as we cleared the helicopter remains. As I blacked out, I swear I saw wings above him, coming out of his back. But that’s not possible, is it?
https://medium.com/sunday-fabricated-stories/trust-me-9e196e26830
['Paula Crofoot']
2019-08-04 17:48:39.230000+00:00
['Short Story', 'Fiction', 'Escape', 'Writing', 'Trust']
Medium is a fantastic source for a writer’s multi-income streams.
Medium is a fantastic source for a writer’s multi-income streams. By using Medium as the beginning — the gateway — you can bring new readers inside your proverbial fence and into your world. Medium isn’t the end, but the beginning. You can use your Medium stories to earn your readers’ trust, showcase your expertise in your niche, and encourage readers to join your email list for a deeper dive. While our reads and partner income might’ve taken a dive, Medium is still one of the best places to gather your tribe. Enroll in my Free Email Masterclass. Get Your First 1,000 Subscribers
https://medium.com/the-book-mechanic/medium-is-a-fantastic-source-for-a-writers-multi-income-streams-68677d88316a
['August Birch']
2020-12-08 17:50:00.628000+00:00
['Medium', 'Money', 'Writing', 'Entrepreneurship', 'Life Lessons']
Meet the Founders — Jad Meouchy, CTO
Jad Meouchy, CTO and co-founder of BadVR! 😎 Like all companies, BadVR is comprised of a group of dedicated, passionate people, all working on solving the problem of how to view, analyze, and share data insights at scale. As modern datasets continue to increase exponentially in size and complexity, it’s become increasingly difficult for technical and non-technical users alike to engage with these huge datasets. That’s where BadVR comes in — with the addition of immersive (VR, AR) technology, the process of working with data becomes much easier and more accessible to everyone! But, where did the idea of bringing together data visualization and immersive technology come from? Our CEO and co-founder, Suzanne Borders, was interviewed back in 2018 for an article of her own. She shared her journey and got us all up to speed about her experience founding BadVR. But her co-founder and CTO Jad Meouchy hasn’t spoken about his experience — until today. We sat down with Jad and asked him about everything from data to entrepreneurship — getting the inside scoop on the technical side of BadVR! A little background: Jad has been building software and companies for 15 years, with three exits. His expertise in software architecture and data analysis provides a solid platform for bridging data into immersive tech. Jad is originally from Northern Virginia and strives to maintain a bi-coastal balance. Let’s start with the basics. Tell me a little about what inspired you to start BadVR? Spatial computing is the future and I am dedicated to realizing its full potential. There’s a tremendous opportunity right now to build that magical world that we all imagined as children, inspired by science fiction movies. Hollywood special effects are becoming real, and this inspires me every day to make once-fantastical concepts into everyday reality. Making the vision of the future today’s reality! 😎 We are constantly surrounded by information everywhere we go, but it’s either invisible or elusive. 
Imagine if you had an extrasensory ability to see all that data. How could it improve your life? What kind of strategic advantage would it bring? How would you share it? The way information is presented changes the way it is understood. Thus, the BadVR vision is not to make a 2x or 3x improvement in how people work with data, but 1000x. That’s the magnitude of impact that inspires us to tackle a challenge as big as BadVR. Can you explain your role as CTO at BadVR? What does it entail? On a high level, I transform my team’s vision into reality. I take abstract visions and convert them into a tangible reality, driven by the architecture and the code that our engineering team creates. More than anything, my role is to support and empower others to dream big and then to then find unique and innovative ways to make those dreams into tangible, marketable products. What issue — or issues — are you solving with BadVR’s technology? I seek to disrupt the clinical sterility of modern data tools. Using traditional analytics software is like trying to cut steak with a spoon. No matter how beautiful or rich your data may be, blunt instruments mangle it into a mess. If someone asks you to describe the smell of walking through a field of fresh flowers, would you really show a bar graph of median odor values? Is that the best, most effective, way to convey that information? No. Jad visualizing data with AR and VR headsets at our booth at MWCA LA! 😎📈 People have become accustomed to reductive displays of raw information, always serializing data into tables of numbers and math equations. But they don’t understand that the color of the story is tinted by the storyteller. When information passes through people and their opinions, it picks up biases along the way based upon ‘facts’ that may not actually be factual. Like a game of telephone, every level of analysis that happens between you and your raw dataset invites confusion and inaccuracies. 
This is why our technology is relevant — we allow EVERYONE to see and work with the raw data that holds the absolute truth. What motivates you to tackle these problems? “Data” is a difficult, high-stakes problem that has the potential to change everyone’s lives. I believe these grand challenges are what spark world-changing innovation and inspire teams to deliver their best work. Big problems with big impacts are immensely attractive. These big problems cannot be solved just with desire; passion and creativity are absolute necessities. The intricate challenges of modern data can only be tackled through comprehensive efforts involving a variety of talented and motivated individuals. Which are exactly the kinds of people that I want to be around. It’s fun to work with people who break the mold. Team picture! Funny faces everyone! 😆🤪 Let’s switch focus away from BadVR. You’ve been coding for over 25 years — can you share some tips for writing great code? Coding is both a science and an art. There are techniques that you learn from instinct, from inspiration, and from experience. It’s an organized arrangement of people and components that are labelled appropriately and well documented. Good code is short, simple, and easy to understand, like a great story. The most powerful, most effective stories (and apps) are the ones that resonate with the broadest audience. Some people think that code has to be really complicated to be good and I would say the exact opposite is true. Speaking to science, code is not about the syntax as much as it is about solving small problems with succinct statements. Languages come and go, but good techniques remain constant. Code is an organization of ideas that are clearly expressed, and consistent with the other expressions regardless of which format may be “correct.” It’s not about each person leaving their mark — it’s about the group coming to a consensus and a shared understanding of how to solve a specific problem. 
Coding is a collaborative process, so it’s really important that everybody agree on rules and follow them. Tell me a little about how you discovered coding and how that discovery led you to your position today? I was always tinkering with mechanical things as a kid, because that’s what was accessible at the time. It was easy to take apart a broken alarm clock because I could get a broken alarm clock. Opening it up was like a fun puzzle, a story of how all the pieces came together to make a useful little machine. And you could always see the marks that the builder left behind. Jad hard at work as “The Thinker” 🤔🧐 However, these little electronic worlds I could explore weren’t always exciting, and I desired more intricate, complicated, and sophisticated machinery. I wanted to take apart the quintessential Swiss watch and see if I could learn something about the person who designed and assembled it. Of course, the last thing anyone would do was donate an expensive watch to a child to smash to pieces for their own amusement. Software was an interesting twist. Even the most expensive, exclusive, and elaborate programs were accessible and deconstructable. This was a concept called “reverse engineering” and it was fascinating, exposing me to new ideas and puzzle pieces. I could observe techniques from the finest engineers in the world without ever leaving home. I reverse engineered games, programs, even operating systems. And I learned more from that than any formal education! What pivotal life experience made you who you are today? How did it change your perspective of the world? One day, at the beginning of my career, a friend of mine bought a fancy new sports car for a small fortune. He found this obscure car racing event in the middle of the Everglades and drove down 12 hours in the hope that it would be a fun adventure. When we arrived at the privately rented military airstrip, there were maybe 50 people total. 
Of course, our vehicle was the cheapest and slowest of any in sight, as everyone else had flown in on their private jets while their entire garages of exotic cars were transported by covered trailers owned by another participant.
https://medium.com/badvr/meet-the-founders-jad-meouchy-cto-2eabc74f17b6
['Jason Tam']
2020-05-25 20:46:36.607000+00:00
['Big Data', 'Life Lessons', 'Analytics', 'Startup', 'VR']
How 2020 Reaffirms My Resolution Writing Method
Between putting on a face mask and removing it, constantly washing our hands or rubbing them with sanitiser, and disinfecting clothes and shoes upon reaching home or the office, time has seemed to take on the speed of light. We have stepped into September, past even the height of summer. The heat of summer was especially welcome this year, with the hope that it would help to slow down or even kill the coronavirus. Unfortunately, that has not been the case. The coronavirus is still spiking in several countries such as the U.S., where it tops the chart with over 5 million infected cases and over 160,000 deaths at the point of writing. And the dreaded winter that is approaching carries a sense of foreboding. The “Year Twenty-Twenty” could have been spelt “Twilight-Twilight” and not many would have disagreed. And many would concur that the year had thrown a spanner into the works of their new year resolution. Your 2020 Resolution Earlier this year, I wrote about what I have learned from 10 years of making New Year’s resolutions and how I intend to stick to the method that I found to be working in those 10 years. The key was to turn my “why” into my “who”. Instead of asking myself why I am making this resolution, I asked myself who I am making this resolution for, effectively turning it into a promise. My 2020 New Year’s resolution is a promise to my wife, my family and myself. How about yours? Was it a “why”-led or a “who”-led resolution? If you have been fortunate enough to write it from a “who” perspective, do allow yourself a pat on the back, as your resolution should still be valid for consideration at this point. However, if you had made a less fortuitous choice, then you might need to change your method in crafting your 2021 resolution. Do allow me to convince you why you should pivot to this new direction of resolution writing. 
Why a “Who” 2020 Resolution Would Still be Valid If your resolution this year was to travel to several countries, it is likely to be an unachievable feat no matter how desperately you work on it. Most countries are still in lockdown and if they are not, they are still closed to tourists. However, if you had crafted that travel resolution by answering the “who” question, in essence making a promise to that someone, I am pretty sure you are still able to fulfil that resolution. Let me illustrate. Say, hypothetically, one of your 2020 resolutions is to travel to Thailand. I would like to travel to Thailand in the year 2020 as a reward for graduating from my bachelor program. Here, you have made a promise to yourself as a reward for graduating, so you would be intrinsically motivated to see it come to pass, coronavirus or not. What you would probably have done is to take a virtual field trip to Thailand with Google Earth and walk on the streets of Bangkok with your fellow virtual travellers. You would also probably have connected with Thais on social media and basked in the warmth and friendly faces of ordinary Thais. Although this pales in comparison to actually getting on a plane to “Amazing Thailand”, you would still have kept your word to visit, for it is a vow you have made to someone: you. On the contrary, if your resolution is a “why”-led one, you would probably have given up on the idea without even attempting it. The Promise Theory In 2004, Mark Burgess, a computer scientist and former professor at Oslo University College, proposed the “Promise Theory”. This theory was designed in the context of computer science, with the sole purpose of solving problems inherent in the current computer management model, which is based on obligation. In layman’s terms, the current computer system is based on an obligation logic, whereby “agents” in the system (say, a computer or a printer) are obligated to behave in a certain way when sent a command. 
However, Mark believes that all agents in the system should have autonomy of control and can only be responsible for their behaviour by keeping to their “promises”. A ‘promise’ is a declaration of intent whose purpose is to increase the recipient’s certainty about a claim of past, present or future behaviour. (Wikipedia, 2020) In other words, it is about building trust on the part of the recipient, whether previous promises have been kept or not, with trust increasing proportionately to promises kept. In parallel, this is no different for us humans. By making promises and keeping them, we earn trust, and if we are the recipient, we will trust the promisor more. The Promise Theory sufficiently explains how we increase trust in others by doing what is within our means: saying what we do and doing what we say. After all, a promise made is a promise kept.
https://medium.com/illumination/how-2020-reaffirms-my-resolution-writing-method-71c33a94864f
['Ivan Yong Wei Kit']
2020-09-18 02:34:50.138000+00:00
['Writing', 'Resolutions', 'Productivity', 'Philosophy', 'Self']
Our Indispensable Pollinating Bees
by Jackie Swift Bees are big news these days. The internet is full of stories about colony collapse, where hives of domesticated honey bees die off en masse. If the honey bee disappears, the current wisdom goes, crops won’t get pollinated. Is this really true? There are 20,000 bee species in the world — 4,000 of them in the United States and an estimated 420 in New York. Amongst all the bees out there, who is really doing the pollinating? “My perspective as a wild bee specialist is that there are so many wild bees, maybe we should be trying to understand their role in agricultural crop pollination,” says Bryan N. Danforth, professor of Entomology at Cornell University. New York State’s Bee Diversity The Danforth lab recently finished a 10-year study on bee diversity in New York State apple orchards. Contrary to the belief that honey bees are essential to pollinating crops, they discovered that wild bees vastly outperform honey bees on a per-bee basis. Danforth’s collaborator Mia Park, PhD’14 Entomology (now at the University of North Dakota) found that wild bees deposit four times more pollen grains per visit than honey bees do. “Honey bees are terrible pollinators,” Danforth says. By videotaping the bees, the researchers discovered that most honey bees visiting a blossom suck up nectar at the base of the anthers, the pods that hold the pollen in a flower’s stamen, rather than actively collecting pollen as wild bees do. As a result, honey bees don’t pick up much pollen, and they don’t come in contact with the stigma, the female part of the flower, to pollinate it. “We did a lot of outreach to apple growers as a result of this project,” Danforth says. “Our message was, ‘You probably don’t need to rent honey bees unless you have a very large orchard. You can probably rely on the native bees for much of your apple pollination needs. 
Conserve your natural habitat around the orchard where the wild bees live because it’s going to benefit your bottom line.’” Social Bees versus Solitary Bees We may think of honey bees as the quintessential bee, but Danforth is quick to point out that they aren’t typical. For one thing, honey bees are floral generalists. “Honey bees will go to almost any host-plant for pollen and nectar,” Danforth says. “Many other bees are very highly specialized. They visit a single host plant species for their pollen and nectar. If their preferred host-plant disappears, these pollen-specialist bees will likely go extinct, as well.” Photo Credit: Dave Burbank Of all the bee species in the world, only about 10 percent are social like the honey bee, living in colonies with many workers and offspring. Most (75 percent) are solitary, where a single female gathers pollen and nectar for her offspring, occupies the nest, and lays her eggs — often as few as a dozen. Another 15 percent of bees are brood parasitic. They lay their eggs in the nest of another bee species, and their larvae then kill the host larvae and consume the provisions meant for them. “Biologically, bees are fascinating and diverse,” Danforth says. “They arose from wasps, which are carnivores, but sometime in the Cretaceous Period the ancestors of bees switched from a carnivorous diet to an herbivorous one. That was a key innovation. It probably drove the rapid diversification of bees in the mid-Cretaceous.” Bee Phylogenetics A big part of the Danforth lab’s research focuses on the phylogeny of bees, using large data sets with thousands of genes to try to understand the evolutionary relationships among bees and between bees and other organisms. In particular, the researchers are interested in how bee phylogeny relates to that of their closest wasp relatives. “We can use phylogenetics to understand lots of things about bee biology, like the evolution of social behavior, plant use, and parasitism,” Danforth says. 
“We can go back in time and reconstruct what an early bee might have looked like within any particular lineage of bees.” The researchers construct phylogenetic trees, then use them for further study, such as tracking how host plant associations of certain specialist bees changed over time. “We map onto a phylogeny of bees the host plant each bee species is visiting,” Danforth explains. “Then we ask, ‘What was the ancestral host plant that the ancestor of those bees was visiting?’ We can also reconstruct ancestral states. In a sense, we can go back in time and reconstruct what an early bee might have looked like within any particular lineage of bees.” At the Museum of the Earth Danforth’s phylogenetic work is getting air time at a new exhibit on bees, which opened in October 2019 at the Museum of the Earth in Ithaca, New York, Cornell University’s hometown. The exhibit was funded as part of a research grant by the National Science Foundation (NSF). “The focus is to highlight unfamiliar aspects of bee biology,” Danforth says. “We don’t talk about honey bees or the bumble bees in your backyard. We tell these crazy stories of bees that build amazing nests, bees that have unusual lifecycles, bees that forage only on a particular host plant — all the really interesting stuff most people don’t think of when they think of bees.” Photo Credit: Dave Burbank Documenting New York State Bees, and Other Bee Projects Danforth is also collaborating with the New York Natural Heritage Program on the Empire State Native Pollinator Survey. Funded by the State of New York, the project will document all 420 species of bees in the state to determine the status of their populations. A main focus of the project is to understand which species are threatened or in decline and identify the likely causes. 
Recently Danforth has begun a new project, funded by the United States Department of Agriculture and the NSF, to look into another startling discovery made by Mia Park during the apple orchard project. Park found that fungicides have a bigger impact on the community of orchard pollinators than insecticides do, even though fungicides have low direct toxicity for adult bees. Danforth and his collaborators theorize that fungicides might be impacting the microbial communities in the pollen provisions collected by females for their offspring. “There’s an enormous diversity of microbes in these pollen provisions,” he says. “We’re just starting to appreciate how important they are to larval development.” In all his research, Danforth is driven by his admiration for the under-appreciated solitary bees. “They’re fascinating and many of them are beautiful,” he says. “And I think we’ve completely underestimated how much they contribute to our crop production. We haven’t quantified the value of these unmanaged bees. Let’s stop giving all the credit for crop pollination to the honey bee, and let’s try to find out how much pollination is provided — for free — by wild bees.”
https://medium.com/cornell-university/our-indispensable-pollinating-bees-b983ec243020
['Cornell Research']
2020-02-10 20:01:01.317000+00:00
['Nature', 'Cornell University', 'Agriculture', 'Environment', 'Science']
Are you a Pantser or a Plotter
Are you a Pantser or a Plotter Pros and cons of both Photo by Dan Dimmock on Unsplash To plot or not to plot is the question. But the answer is not the same for everyone. Plotter A plotter writes down every detail of every chapter from beginning to end. They know how the story will start, what each character will do and how the story will end. Their outline is specific, and there is no deviating from it. Pros — This always leads to a more organized writing experience as you, the writer, already know what is going to happen. It allows you to sit down and type that 80,000-word novel in no time, getting you to the editing part quicker which, in turn, gets you to the publishing phase sooner. Cons — Plotting leaves no room for deviation. Meaning that if you want to kill a character off mid-way through and it is not in the outline, your whole system gets messed up. Say you want to change a particular aspect of an incident: you can’t change it without having to redo the remaining outline. Pantser A pantser literally flies by the seat of their pants when writing. They have no idea how it will begin, how it will end, or anything in between. There is no outline to follow, no path already chosen. Just the writer, a keyboard and a blank screen awaiting the arrival of the next best-seller. You want to kill your character off? Go ahead, there are no rules. Pros — A literal free-for-all. There is no telling what your fingers will type on the screen. The anticipation might even be the highlight of your day. You, the writer, have the possibility to surprise yourself with the events in the story. Cons — Two words: writer’s block. These two words can wreak havoc on your ability to finish the story. Without a path to follow you are pretty much winging it and that can cause writer’s block in the middle of the story. Stuck on a chapter? Well you can’t move forward because you don’t know what will happen next until you write this one. Are these the only two options? 
Absolutely not! Imagine a line: on one end is the word Plotter, and on the other end, Pantser. In the middle is a wide array of levels to the writing process. Some would say that somewhere in the middle is good, an equal balance. But really it is up to you, the writer, as to where you sit on that line. I am closer to a pantser; outlines bore me and seem tedious. But I do create lists for each of my characters. A bio for each one saved in a folder on my computer. I also tend to know the ending, the conflict, and why the conflict is happening. But I don’t write the end first, although it is something we can do as writers. Knowing the end could help write the beginning and all that stuff in the middle. In the end, it doesn’t matter whether you are a Plotter or a Pantser. What does matter is that you do what feels comfortable to you. Want to write a six-page outline, go ahead! Want to sit in front of your computer waiting for the words to make it to the screen, go ahead! You do you!
https://medium.com/the-writers-bookcase/are-you-a-panster-or-a-plotter-b7b8e53f14a2
['Tammi Brownlee']
2019-11-06 14:48:50.076000+00:00
['Novel', 'Advice', 'Writing', 'Writing Tips', 'Storytelling']
Does protecting your ideas matter?
A patent gives its owner a legally enforceable monopoly over their invention. The invention could be any new and inventive solution to a technical problem, from the humble paper clip to a new pharmaceutical to developments in AI or quantum computing. Patents play an important role in incentivising and rewarding innovation. They reward the skill and ingenuity that go into developing any invention by preventing others from using the idea covered by the patent. This is a valuable tool for anyone in the business of innovating. For example, it is because of the potential rewards offered by a monopoly that pharmaceutical companies invest millions into research and development, including going down numerous blind alleys, in the hope of finding life-saving treatments. The patent system also increases competition and helps others to innovate. In return for the monopoly, inventors applying for a patent have to publish information about their invention, including sufficient detail for a reader to be able to make the invention for themselves. Once a patent has expired (typically after 20 years), competitors can then use the information in the patent, building on and refining the invention, to create even better products and processes. At the same time, patents force companies in the same field to try and find different ways of doing things if they want to compete. Instead of just one advance, you may end up with two or three people trying to do something in an entirely new way. Writing strong, water-tight patents requires a huge amount of skill and expertise. It is crucial that the patent specification makes clear where the boundaries of the inventive concept lie so as to provide a strong, enforceable monopoly. A great amount of care must be taken to get the wording absolutely right. Small differences in wording or nuance can mean the difference between real success and total failure. 
Five of history’s most interesting patents: Exoskeleton Date: 2014 Exoskeletons date back to an “apparatus for facilitating walking” invented by Nicholas Yagin in 1890. ReWalk was granted a patent in 2014 for their exoskeleton, which was designed to let people suffering from paralysis relearn to walk and even climb stairs. The technology is already being used by some construction workers, soldiers, and even astronauts. Actual Patent Name: “Locomotion assisting device and method” Sony Walkman Date: 1983 Sony created its Walkman audio cassette player in 1979 but didn’t apply for a patent as they believed it was inimitable. But Andreas Pavel, an inventor, had already patented his “High fidelity stereophonic reproduction system” in 1977 and sued Sony for breach of his intellectual property. Sony eventually had to pay Pavel more than $10 million out of court. Actual Patent Name: “High fidelity stereophonic reproduction system” Drones Date: 1962 The drone may seem like a relatively recent invention but it was actually patented back in 1962. An engineer called Edward G. Vanderlip created a way to prevent helicopter instruments from systemically failing. He then used the new technology to make a small, remotely-operated rotary aircraft: today referred to as a drone. Still, his patent for an “omni-directional, vertical-lift, helicopter drone” took a while to get off the ground. Actual Patent Name: “Omni-directional, vertical-lift, helicopter drone” Brain Implant Date: 1993 At the end of the 19th century, doctors realised that they could make human and animal muscles move involuntarily simply by passing an electrical current through the brain. In 1993 the University of Utah patented an “implantable, integrated apparatus that contacts the brain with a plurality of metal needles to detect electrical signals or to transmit signals to the brain.” Today, brain implants can actually move robotic prosthetics or type out text on a computer by thought alone. 
Actual Patent Name: “Three-dimensional electrode device” GPS Date: 1974 The underlying technology in GPS satellites was invented by Roger L. Easton and was developed in the 1950s as a way to track orbiting satellites. Easton flipped his idea on its head by making a Global Positioning System designed to track objects on the ground from space. The first GPS data was transmitted by the Navigation Technology Satellite 2 in 1977. Actual Patent Name: “Navigation system using satellites and passive ranging techniques”
https://medium.com/dyson-on/patents-502049e91c2e
['Henry Tobias Jones']
2019-04-29 10:51:00.760000+00:00
['Design', 'Law', 'Inventions', 'Startup', 'Technology']
NFL All-Time Series: Top 5 Quarterbacks in Dallas Cowboys History
1. Troy Aikman (1989 — 2000): The man of the hour. The best Quarterback to ever put on a Cowboys uniform and one of the 10 best Quarterbacks of All-Time. Many criticize Aikman for not having gaudy numbers, even for the ’90s. However, he played in a run-first scheme that didn’t ask him to throw much. When he was asked to throw, the results were more than satisfactory. Aikman is one of the most accurate Quarterbacks to ever play. In his prime, he would put the ball wherever he wanted to. With six Pro Bowls, three Super Bowls, and a Super Bowl MVP, it’s hard to say anyone else did it better at the Quarterback position than Aikman did for the Cowboys.
https://medium.com/top-level-sports/nfl-all-time-series-top-5-quarterbacks-in-dallas-cowboys-history-e22c3f5c27b0
['Jeffrey Genao']
2020-10-10 18:38:25.100000+00:00
['Sports', 'History', 'Writing', 'NFL', 'Productivity']
Jesus Don’t Like Ugly
Jesus Don’t Like Ugly There’s No Place for Hate in Religion Photo by Raychan on Unsplash It’s greatly perplexing to me that many card-carrying religious enthusiasts seem to have centered their fervor around hate rather than love. Because I live in the southern portion of the United States, I notice this often with Christianity. It’s like they’re ignoring everything we know about the life of Jesus. Loving our neighbor has gone right out the window, and the Sermon on the Mount has been shoved aside to make way for a doctrine that focuses on judgment, condemnation, and anger. Love, joy, peace, patience, kindness, goodness, faithfulness, gentleness, and self-control… Those were the lyrics of a song I learned as a child about the “fruits of the spirit”, referencing the Biblical verses Galatians 5:22–23. These are the kind of ideals that are espoused by Christianity. Whether or not they are practiced remains to be seen. Instead, the modern religious right seems to be practicing anything but love or joy or peace. Kindness and self-control also seem to have been thrown out the window. Like a street-corner preacher screaming hate at passersby, we seem to have a disconnect between the message of Jesus and the practice of the people who do things in his name. If Jesus hadn’t left his grave already, as Christians believe, he might surely be turning in it. I should say that I know a great many people who identify as Christian who represent the life and message of Jesus with all of those spiritual traits. They’re the ones who feed the hungry without condemning them first or requiring that they ingest a religious service before eating. They’re the ones who are trying to understand others, not judge their life experiences. They’re kind and loving, and they don’t go around telling people that they’re going to hell. Some people seem to believe that their behavior is permissible because they hold the Jesus card. What, you may rightly ask, is a Jesus card? 
It’s when someone is so self-righteous that they believe they get a pass on being, frankly, an asshole. When someone identifies as Christian but then acts, consistently, in an aggressive and even bullying manner with no remorse for his/her actions, this person is playing the Jesus card. But here’s the thing: There. Is. No. Jesus. Card. Just like there’s no Buddha card. Or Allah card. Or Universe-Get-Out-of-Karma-Free Card. We don’t ever have a free pass to be an asshole because we think we’re more spiritual or more religious or simply more right than other people. Sure, it might feel good to unload for a minute and to write the other person off as ignorant, brain-washed, or somehow inferior to ourselves. Just thinking that way can, in our own minds, absolve us of our treatment of other human beings. But it is actually dehumanizing them, and that’s not okay. We’re not given the master pass to treat other people poorly because we think we have God/Allah/Krishna or any other deity, or set of beliefs on our side. When we’re horrible to someone else, no religion or spiritual orientation sides with us. Christians: “Therefore all things whatsoever ye would that men should do to you, do ye even so to them.” Matthew 7:12, King James Version. 
Buddhists: “Hurt not others in ways that you yourself would find hurtful.” ~ Udana-Varga 5:18 “One should seek for others the happiness one desires for oneself.” Confucianists: “Do not do to others what you do not want them to do to you.” ~ Doctrine of the Mean Hinduism states: “This is the sum of duty: do not do to others what would cause pain if done to you.” ~ Mahabharata 5:1517 In Islamic tradition: “None of you truly believes until he wishes for his brother what he wishes for himself.” ~ Number 13 of Imam “Al-Nawawi’s Forty Hadiths” (writing by Muhammad) In Judaism: “Thou shalt love thy neighbor as thyself.” ~ Leviticus 19:18 In Native American Spirituality: “Respect for all life is the foundation.” ~ The Great Law of Peace And a Pima proverb: “Do not wrong or hate your neighbor. For it is not he who you wrong, but yourself.” None of these practices give out a free pass for worshippers to treat others poorly. Even the Satanists — yes, the devil-worshippers much feared in Christianity — have standards of behavior around treating others with compassion. So, no, there’s no spiritual pass for shitty behavior toward others. Not for me or for anyone else, and I’ve had moments of being guilty of this as well. What we need to remember when we’re tempted to let “the other side” of any particular argument feel our wrath is that every religion and spiritual tradition does share one common thread: they all urge us to practice the same kindness and courtesy to others as we would like to receive ourselves. None of us, not one person among us, wants to be on the receiving end of vitriolic words or actions. It’s not complicated. People get passionate about the things they love — and the things they hate. Our passionate nature can be the driving force that helps to create change or save lives, but it can also be the force that leads us to destruction and takes lives. It’s all in how we use it. 
If we can focus all that passion on loving others, being kind, practicing compassion, and living a life that honors our values, we really won’t have time to be monitoring or judging others for their life choices.
https://medium.com/publishous/jesus-dont-like-ugly-3d70a9ecc125
['Crystal Jackson']
2019-10-08 14:01:02.432000+00:00
['Religion', 'Mental Health', 'Self-awareness', 'Humanity', 'Love']
Data Science in Production — Advanced Python Best Practices
Ten years ago, any quantitative PhD with a few statistics courses under their belt and a knack for writing code could call themselves a data scientist — and not be entirely wrong. I should know — I was one of them. But as the field of data science has evolved, this is no longer the case. It’s not enough to be able to put together a few models in a notebook. Today, data scientists work iteratively in teams to build stable, scalable, and extendable data science products which drive core revenue-generating activities for their companies. The following guide fills a gap in the existing literature by focusing on data science software engineering practices required to build effective data products. As an introduction, I suggest reading Andrew Fowler’s excellent post that outlines three key data science best practices: command line executability, YAML configuration files, and model testing. In this guide, I extend this foundation to a full suite of Python data science best practices: 1) Configure effectively Box up your config Make all parameters configurable Ensure portability by using environment variables 2) Use non-code to improve development Write docstrings before coding Use typehints and mypy Keep code and documentation together Make it replicable using virtual environments 3) Manage complexity Decouple, decouple, decouple Organize project directory structure Use a single entry point Create a walking skeleton 4) Minimize bugs Log, don’t print Unit test, a lot Assert to validate data Track provenance 1) Configure effectively Box up your config Save time and sanity by accessing hierarchical YAML config files using Box. This package allows for dot-based access to YAML parameters, rather than Python’s traditional dictionary syntax; e.g., you will write cfg.base.path.data instead of cfg["base"]["path"]["data"]. Here’s an example YAML config. pip install python-box and Box up this config. 
Then, in the rest of your project, use “.” notation for easy access:

Best practice: analogously to defaultdict, Box can act as a default Box (pass default_box=True when creating it). Here’s an idiomatic way to use it with config files to facilitate reuse and modularity of functions/methods. The above function looks for n_estimators in params. If it is not present, it checks cfg.base.rf_params.n_estimators, and if that’s missing as well, sets a default value (200).

Make all parameters configurable

Making all parameters in your package configurable has three advantages:
You can quickly adjust the ETL, model, and other pipeline components by changing a YAML file rather than editing code.
Your collaborators and users can make substantial modifications without changing (or even understanding) the underlying code.
Your package can be readily integrated into automated pipelines that dynamically modify configuration parameters before launching your package.
For example, here’s a section of the config defining training parameters: Now we can use importlib to dynamically import and instantiate the specified sampler and classifier models. Then, the Sampler and Classifier can be changed simply by modifying the config file, without changing any code.

Ensure portability by using environment variables

In the above config file, we have different DB paths depending on the environment: dev or prod. Use environment variables to determine where you are, and parse the config accordingly. For example, let’s say the variable ENVIRONMENT is either dev or prod, depending on where your package is run. This has the additional advantage of removing the need for .base (and .dev or .prod) when accessing config parameters. Now you can just call cfg.path.data instead of cfg.base.path.data.
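Putting the two ideas above together, here is a minimal sketch of environment-aware config parsing plus dynamic class loading. Everything here is hypothetical and dependency-free: a plain dict stands in for the Box'ed YAML config, the parse_config helper is invented for illustration, and a stdlib class (collections.Counter) stands in for a configurable model class:

```python
# Sketch: parse the config according to an ENVIRONMENT variable, then
# dynamically instantiate the class named in the config via importlib.
# The config contents and parse_config helper are illustrative only.
import importlib
import os

raw_cfg = {
    "base": {"path": {"data": "/shared/data"}},
    "dev": {"db_path": "/dev/analytics.db"},
    "prod": {"db_path": "/prod/analytics.db"},
    "model": {"module": "collections", "class": "Counter"},
}

def parse_config(raw, environment):
    """Flatten base plus the section for this environment into one config."""
    cfg = {**raw["base"], **raw[environment], "model": raw["model"]}
    cfg["environment"] = environment  # save the env var into the config too
    return cfg

env = os.environ.get("ENVIRONMENT", "dev")  # e.g. set by your deployment
cfg = parse_config(raw_cfg, env)

# Dynamically import and instantiate the class named in the config:
model_cls = getattr(importlib.import_module(cfg["model"]["module"]),
                    cfg["model"]["class"])
model = model_cls()
print(type(model).__name__)  # -> Counter
```

Swapping the model then means editing the config's module/class strings, not the code.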
Here are three related best practices: (1) save your environment variables to the config (line 10 above) so that you can easily access them through the config without having to import os each time; (2) never save plaintext passwords or credentials in the config or in your code — use environment variables for that; and (3) save the environment variables in a .env file (add it to .gitignore) and load them using dotenv.

2) Use non-code to improve development

Write docstrings before coding

Poor design is the original sin from which subsequent difficulties in extending, modifying, and understanding your program arise. Learn clean design — it’s one of the highest-value time investments for a data scientist. Plan the design and align with teammates by writing the docstrings for your key modules, classes, and functions prior to coding them. These can be reviewed (PR them!) and revised with the full team, so that everyone provides input and gains understanding of the whole package architecture, not just the pieces they are responsible for coding. Of course a lot will change as you and your team develop. Nevertheless, initial planning will set you on a smoother development and refactoring trajectory. As Eisenhower said, “plans are useless, but planning is essential.”

Use typehints and mypy

Unlike statically typed languages, such as Java and Scala, Python does not require type declarations. Since Python 3.6, however, they can be added to your code using typehints. Typehints serve two purposes: documentation and static checking using mypy. Using typehints obviates the need for documenting types inside docstrings. Additionally, the mypy package reads typehints and, if you run it prior to execution, it’ll reveal elusive type errors that crop up intermittently and are hard to detect without static checking. Think of mypy as a fast, “free” way to detect many issues (or at the very least, inconsistent typehints) without having to perform manual testing.
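As a small illustration (the scale_features helper is hypothetical), typehints document the interface, and mypy can then check call sites statically:

```python
# A typehinted helper: the annotations double as documentation and give
# mypy enough information to catch type errors before the code runs.
from typing import Dict, List

def scale_features(features: Dict[str, List[float]],
                   factor: float = 1.0) -> Dict[str, List[float]]:
    """Scale every value of every feature column by `factor`."""
    return {name: [x * factor for x in values]
            for name, values in features.items()}

print(scale_features({"age": [20.0, 35.0]}, factor=0.5))
# -> {'age': [10.0, 17.5]}

# Running `mypy` on this file would flag a call like
#     scale_features({"age": [20.0]}, factor="2")
# with an incompatible-type error, without ever executing the code.
```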
Some IDEs, including Pycharm and VS Code, also check typehints statically and highlight inconsistencies as you code.

Keep code and documentation together

Keep documentation version-controlled together with code. For docs to be useful, they must be updated together with the code they’re documenting. Write documentation in markdown and render it as clean, navigable HTML using Sphinx¹. And, remember those comprehensive, Google-style formatted docstrings you’ve been assiduously writing before coding? Add them automatically to the documentation using Sphinx autodoc.

Make it replicable using virtual environments

Managing dependencies while developing multiple packages in Python can be a chore. Ensure you and your collaborators have the same development environment by using virtual environments, which encapsulate and isolate both the Python interpreter and the dependencies of your package. Two common virtual environment options are conda and venv. Pick one — it doesn’t matter which. I prefer venv, since it’s built into Python 3.3 and above. Create the virtual environment using venv and install dependencies as follows: Above, we create an environment with a Python 3.6 interpreter, add three dependencies (more precisely, at least 3 — they each may have other dependencies pip will install), and then write the list of current dependencies into a requirements.txt file. This file, and not the env directory itself, is added to git. Then your collaborators can recreate the same environment by initializing a new virtual environment as in line 2, activating it as in line 3, and executing pip install -r requirements.txt in order to install the required dependencies. Using conda is analogous, except that instead of requirements.txt, an environment.yml file is normally used to save dependency information. You can find the details in the official documentation.

3) Manage complexity

Decouple, decouple, decouple

Modular components are easier to reuse, debug, and extend.
Ensure that key parts of your pipeline — including data sourcing, preprocessing, training, evaluation, and prediction — are standalone.

Source: Pipe data from multiple original sources into a staging DB (e.g., for simple applications, sqlite), separate files (e.g., images for training a classifier), or denormalized tables saved as pickle, Parquet, feather, or other formats.
Preprocess: Read from staging DB/files, use asserts to validate key staged data properties, engineer features, transform, and save to new DB table(s) or files.
Train: Read from files, use asserts to validate key preprocessed data properties, define the model and training scheme (e.g., cross-validation), train the model, and save it (e.g., as a pickle, or h5 for TF/Keras models).
Evaluate: Load the model, apply it to test data, save metrics.
Predict: Load the model, apply it to new data, save predictions.

Each step should be governed by configuration parameters. If only a few configuration parameters are needed (e.g., paths to a few source files and an output file), provide them as arguments to a function wrapped with a click CLI (see Andrew’s post for details). If many parameters are needed, use a separate YAML config, with its path given as an argument to the click-CLI-wrapped function.

Organize project directory structure

A clean, logical directory structure will reduce the cognitive load involved in finding, understanding, and changing functionality — both for you and for your collaborators and users. Here’s a directory structure I recommend. This structure is for a project named supermodel. The files are for the most part examples — yours may be different.

supermodel/                  Root project directory
├── docs/                    Documentation (.md/.rst files)
├── configs/                 Config files (.yml)
├── logs/                    Logfiles
├── notebooks/               EDA and validation (.ipynb)
├── scripts/                 Deployment, Dockerfile, etc.
├── env/                     Virtual env, add to .gitignore
├── supermodel/              Top level package dir
│   ├── source/              Data sourcing (.py, not shown)
│   ├── preprocess/          Preprocessing (.py, not shown)
│   ├── model/               Modeling (.py)
│   │   ├── __init__.py      Designates as (sub)package
│   │   ├── train_eval.py
│   │   └── predict.py
│   ├── utils/               Util functions used in source,
│   │   ├── __init__.py      preprocess, model.
│   │   ├── spark.py
│   │   ├── date_time.py
│   │   └── io.py
│   ├── tests/               Unit tests
│   │   ├── source/
│   │   ├── preprocess/
│   │   ├── model/
│   │   │   ├── __init__.py
│   │   │   ├── test_train_eval.py
│   │   │   └── test_predict.py
│   │   ├── utils/
│   │   │   ├── __init__.py
│   │   │   ├── test_spark.py
│   │   │   ├── test_date_time.py
│   │   │   └── test_io.py
│   │   └── __init__.py
│   ├── __main__.py          Package execution entry point:
│   ├── __init__.py          "python -m supermodel"
│   ├── config.py            Loads and Boxes configuration
│   └── run.py               Called from __main__.py
├── README.md                Intro to package
├── setup.py                 Installing the package
├── requirements.txt         Lists dependencies
├── .gitignore               Files/dirs to git ignore
└── LICENSE.md               License

A few things to note:
The root directory is not a Python package (it has no __init__.py), but the same-named directory supermodel, and its subdirectories, are Python packages.
Unit tests (more on those below, in the “Unit test, a lot” best practice) are in supermodel/tests (from the root) and are organized in directories and files mirroring the structure of supermodel. For instance, functions of supermodel/utils/spark.py are tested in supermodel/tests/utils/test_spark.py.
setup.py contains a call to the setup function from setuptools, with install_requires loading requirements from requirements.txt, like this. To simplify your life, avoid duplication of requirements in setup.py and requirements.txt.
config.py should load and Box your config file(s) and add in any environment variables, as described in the sections above. Then, in all other modules, you would import the config Box as from supermodel.config import cfg.
__main__.py calls the main() function in run.py , which is the single entry point for all functionality (see below “Use a single entry point” best practice). It enables the user to execute the package in a standard way: python -m supermodel (with optional arguments following). Do you find it too confusing that the root directory and top-level package directory have the same name? If so, change one of the names (e.g., scikit-learn has root directory “scikit-learn” and top level package “sklearn”. In Pandas, on the other hand, both are named “pandas”). Personally, I avoid multiplicity of names, so I use the same name for both. This structure is not meant to be exhaustive, or set in stone. Depending on the nature of the project and the approach, you may have multiple top-level packages (e.g., a separate, extensive ETL or reporting process), interfaces/ directory for abstract classes if taking an object-oriented approach to the pipeline, a separate directory for integration and end-to-end tests, etc. Use a single entry point A key principle of good design is to wrap complex business logic in separate components connected with simple APIs. Having a single entry point is an application of this principle to your whole package: it’ll make it simple for users and collaborators to access package functionality and use it in their workflows². The entry point should be the package itself: python -m supermodel (to use the name of the example package above). To allow this, create a __main__.py like this: Then in your run.py you would define two functions: main_cli which is decorated with click to process command line arguments, and main , which is called from main_cli and defines the execution of your package. (Why separate main_cli and main ? The former can’t be imported, due to the click decorations; the latter can.) 
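A dependency-free sketch of that split is below; argparse stands in for click only so the example runs as-is, and the task functions are hypothetical stubs:

```python
# run.py sketch: main_cli parses the command line, main dispatches tasks.
# argparse replaces click here only to keep the sketch dependency-free;
# the task functions are placeholder stubs.
import argparse

def source(cfg=None):
    return "sourced"

def preprocess(cfg=None):
    return "preprocessed"

def train(cfg=None):
    return "trained"

TASKS = {"source": source, "preprocess": preprocess, "train": train}

def main(task, cfg=None):
    """Single entry point: run the requested pipeline task."""
    return TASKS[task](cfg)

def main_cli(argv=None):
    """Thin CLI wrapper around main(); __main__.py would call this."""
    parser = argparse.ArgumentParser(prog="supermodel")
    parser.add_argument("task", choices=sorted(TASKS))
    args = parser.parse_args(argv)
    return main(args.task)

print(main_cli(["train"]))  # -> trained
```

With a __main__.py that calls main_cli(), running `python -m supermodel train` reaches the same code path.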
Here’s an example main from supermodel/run.py: The imports in lines 3–6 and the tasks dictionary can be refactored using the config file and importlib (see the “Make all parameters configurable” best practice section above). Remember not to include if __name__ == "__main__": in any of the other files in the package. After all, you want __main__.py to be the single entry point to all package functionality.

Create a walking skeleton

Waiting until the end of the project to connect all the components of your pipeline — from source data ingestion to delivering the resulting predictions — is a recipe for disaster. By creating a complete dummy pipeline you will kill two birds with one stone: first, de-risk the external system integration challenges, and second, create a “walking skeleton” on top of which you’ll develop the real functionality piece by piece. The walking skeleton will allow you to test that the pipeline executes successfully while you are developing. What does the initial version of the walking skeleton entail? It varies by pipeline step (see the “Decouple, decouple, decouple” section above):

Source: For each data store from which data is sourced, query a small sample of the data (not saved anywhere). Instead, write a dummy data table/file to the designated staging DB/bucket/directory for preprocess to pick up.
Preprocess: Read the dummy table/file written by source. Don’t do anything with it. Write dummy preprocessed data for predict.
Predict: Read the dummy table/file from preprocess. Write a dummy prediction in the format you expect the real prediction to have to the destination where it’ll be read by downstream systems (e.g., a Pandas dataframe with the expected columns saved as a csv to the agreed-upon S3 bucket).

Notify downstream users of your model to verify that they can read the dummy predictions and that the format of the predictions is suitable. As you develop, rerun the pipeline to incrementally test each additional piece you implement.
In fact, schedule it to run in the same way the final product will typically be run, such as using a daily Airflow DAG run, cron job, or event triggers. By doing so, you’ll also be able to verify that all required external systems — data sources and predictions destination — are reliably accessible over time. 4) Minimize bugs Log, don’t print Python provides extensive logging facilities. By logging, instead of using print statements, you can specify (1) a dynamic log format, indicating where in your package the statement originated, and (2) different levels of severity, from debug (lowest) to error (highest). You’ll also be able to flexibly channel your logs to files, standard out and other streams, and even email. While you don’t need to know Python’s full logging functionality, you should master the basics and use them consistently in your packages. Configure your logger using a YAML configuration file, like this one. Then, in your config.py (after you load the cfg YAML), set up the logger based on this configuration: In each module, set the logger name to the module name in the beginning of the file, following the imports (line 5 below): Yes, you have to do it in every file. But it’s a small price to pay for clear logs that indicate which module each log message is coming from. Unit test, a lot Strive to write pure functions — functions that take arguments, process them, and return the result without changing state³. Look askance at all functions which return None; 90% or more of those should be I/O write functions. Pure functions are easier to reason about, reuse in your pipeline, and unit test. Write unit tests for all functions that encode business logic and perform data transformations. Isolate in small functions — and do not unit test — I/O, graphing, and orchestration (i.e., functions or scripts that execute your pipeline in the intended order). 
Also, do not unit test functions which thinly wrap external (e.g., Pandas or Scikit-learn) functionality — those packages have their own suites of unit tests. If the wrapper contains substantial logic, consider separating it. For example, split a wrapper that sets up model architecture and then trains into two functions: model setup (test this) and model fitting (do not test — sklearn/TF function). The thicker the wrapper, the greater the benefit of testing.

Assert to validate data

Use assert statements to validate necessary assumptions about data and key calculation results: if the assertion is false, your program will halt and display the assertion message. This will save the user time and frustration tracing an error raised later in the program due to, for instance, faulty input data⁴. Below are examples of three types of assertions:
Assertions of necessary properties of data inputs (line 8) — columns, value constraints, dtypes, etc.
Assertions about the environment (lines 11, 12) — environment variables, key package versions, availability of external resources.
Assertions about results (lines 19, 20) — prior to returning a result, check that it makes sense.

Track provenance

Especially during rapid, iterative model development, you want to know which preprocessed data, saved model, or results were generated by what code. There are a few ways to accomplish this:
1. No versioning: This is a valid option if the modeling project is simple and running the pipeline is very quick (i.e., you can always check out a previous commit and quickly rerun to generate all outputs).
2. Manual versioning: Back up important “milestone” outputs with descriptive filenames.
3. Git hash versioning: In your saving functions (for saving preprocessed data, models, results), append the git hash of the current commit using GitPython. For example:
4. Full-featured data versioning: Use a system such as DVC.
For many purposes, options 2 and 3 are sufficient.
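A minimal sketch of option 3 follows. The versioned_name helper is hypothetical, and the sketch shells out to the git CLI instead of using GitPython so that it needs no extra dependencies; it also falls back gracefully when run outside a repository:

```python
# Append the current git commit hash to output filenames for provenance.
# Uses the git CLI via subprocess rather than GitPython; "nogit" is a
# made-up fallback for when no repository (or git itself) is available.
import subprocess

def git_hash(short=True):
    """Return the current commit hash, or 'nogit' outside a repository."""
    cmd = ["git", "rev-parse"] + (["--short"] if short else []) + ["HEAD"]
    try:
        return subprocess.check_output(
            cmd, stderr=subprocess.DEVNULL, text=True
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "nogit"

def versioned_name(stem, ext):
    """e.g. versioned_name('model', 'pkl') -> something like 'model_3fa2b1c.pkl'."""
    return f"{stem}_{git_hash()}.{ext}"

print(versioned_name("model", "pkl"))
```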
Sip from the Firehose

The above practices are a lot to take in all at once. The good news is that you don’t have to. Add them one at a time to your repertoire until you develop familiarity and comfort. They will serve you well on your journey from Jupyter-notebook slinging data science cow(boy|girl) to a clean-coding, product-developing professional.

Thanks to Andrew Fowler, Brad Allen, Cloves Almeida, and Akos Furton for providing feedback on an early draft of this guide.

Footnotes:

[1] Markdown is simpler to write than reStructuredText, but it doesn’t capture the full functionality of Sphinx. For more complex documentation, use reStructuredText.

[2] In this guide, I’m using “entry point” in the general sense of a method for executing your package. I’m not referring to the entry_points argument in setup.py. You would not normally use those entry points in a data science project.

[3] Do you need to keep state in an organized way (as opposed to having a bunch of dicts and DataFrames floating around)? Use classes/objects. Within classes, use the @staticmethod decorator to designate utility methods that don’t need to use class/object attributes, and make those methods pure.

[4] Caveat: if you do not have control of how your package is deployed, and it may be deployed by an inexperienced devops engineer who thinks that Python is C++ and decides to use “optimization” flags, then using assertions in your code is dangerous because they are skipped by the optimization. In this case, use conditionals and raise exceptions instead of using assertions. Or you can provide explicit instructions on how to correctly deploy the package. (Explanatory note: unlike in C++, the optimization flags in Python provide close to zero improvement in execution speed. Don’t use them, don’t let friends use them.)
https://medium.com/bcggamma/data-science-python-best-practices-fdb16fdedf82
['Victor Kostyuk']
2020-05-19 13:35:31.698000+00:00
['Software Engineering', 'Python', 'Data Science', 'Product Development', 'Clean Code']
Deep Learning Explainability: Hints from Physics
Deep Neural Networks from a Physics Viewpoint

Nowadays, artificial intelligence is present in almost every part of our lives. Smartphones, social media feeds, recommendation engines, online ad networks, and navigation tools are some examples of AI-based applications that already affect us every day. Deep learning in areas such as speech recognition, autonomous driving, machine translation, and visual object recognition has been systematically improving the state of the art for a while now. However, the reasons that make deep neural networks (DNN) so powerful are only heuristically understood, i.e. we know only from experience that we can achieve excellent results by using large datasets and following specific training protocols. Recently, one possible explanation was proposed, based on a remarkable analogy between a physics-based conceptual framework called the renormalization group (RG) and a type of neural network known as a restricted Boltzmann machine (RBM).

RG and RBMs as Coarse-Graining Processes

Renormalization is a technique used to investigate the behavior of physical systems when information about their microscopic parts is unavailable. It is a “coarse-graining” method which shows how physical laws change as we zoom out and examine objects at different length scales, “putting on blurry glasses”. When we change the length scale with which we observe a physical system (when we “zoom in”), our theories “navigate the space” of all possible theories (source). The great importance of RG theory comes from the fact that it provides a robust framework that essentially explains why physics itself is possible. To describe the motion of complex structures such as satellites, one does not need to take into account the motions of all their constituents. Picture by 3Dsculptor/Shutterstock.com. RG theory provides a robust framework that explains why physics itself is possible.
For example, to compute the trajectory of a satellite orbiting the Earth we merely need to apply Newton’s laws of motion. We don’t need to take into account the overwhelmingly complex behavior of the satellite’s microscopic constituents to explain its motion. What we do in practice is a sort of “averaging” of the detailed behavior of the fundamental components of the system (in this case the satellite). RG theory explains why this procedure works so remarkably well. Furthermore, RG theory seems to suggest that all our current theories of the physical world are just approximations to some yet unknown “true theory” (in more technical terms, this true theory “lives” in the neighborhood of what physicists call fixed points of the scale transformations). RG theory seems to suggest that all our current theories of the physical world are just approximations to some yet unknown “true theory”. RG works well when the system under investigation is at a critical point and displays self-similarity. A self-similar system is “exactly or approximately similar to a part of itself” at whatever length scale it is observed. Examples of systems displaying self-similarity are fractals. Wikipedia animation showing the Mandelbrot set as we zoom in (source). Systems at critical points display strong correlations between parts that are extremely far apart from each other. All subparts influence the whole system, and the physical properties of the system become fully independent of its microscopic structure. Artificial neural networks can also be viewed as an iterative coarse-graining process. ANNs are composed of several layers, and as illustrated below, earlier layers learn only lower-level features from the input data (such as edges and colors) while deeper layers combine these lower-level features (fed by the earlier ones) into higher-level ones.
In the words of Geoffrey Hinton, one of the leading figures in the deep learning community: “You first learn simple features and then based on those you learn more complicated features, and it goes in stages.” Furthermore, as in the case of the RG process, deeper layers keep only features that are considered relevant, deemphasizing irrelevant ones. Convolutional neural network (CNN). The complexity level of the forms recognized by the CNN is higher in later layers (source).

An Exact Connection

Both physics and machine learning deal with systems with many constituents. Physics investigates systems containing many (interacting) bodies. Machine learning studies complex data comprising a large number of dimensions. Furthermore, similarly to RG in physics, neural networks manage to categorize data (e.g., pictures of animals) regardless of attributes such as size and color. In an article published in 2014, two physicists, Pankaj Mehta and David Schwab, provided an explanation for the performance of deep learning based on renormalization group theory. They showed that DNNs are such powerful feature extractors because they can effectively “mimic” the process of coarse-graining that characterizes the RG process. In their words, “DNN architectures […] can be viewed as an iterative coarse-graining scheme, where each new high-level layer of the NN learns increasingly abstract higher-level features from the data”. In fact, in their paper, they manage to prove that there is indeed an exact map between RG and restricted Boltzmann machines (RBM), the two-layered neural networks that constitute the building blocks of DNNs. From the 2014 paper by Mehta and Schwab where they introduced the map between RG and DNNs built by stacking RBMs. More details are provided in the remaining sections of the present article (source). There are many other works in the literature connecting renormalization and deep learning, following different strategies and having distinct goals.
In particular, the work of Naftali Tishby and collaborators based on the information bottleneck method is fascinating. Also, Mehta and Schwab explained the map for only one type of neural network, and subsequent work already exists. However, for conciseness, I will focus here on their original paper, since their insight was responsible for giving rise to a large volume of relevant subsequent work on the topic. Before giving a relatively detailed description (see this article for a great, though much less technical, description) of this relationship, I will provide some of the nitty-gritty of both RG theory and RBMs.

Renormalization Group Theory: A Bird’s-eye View

As mentioned above, renormalization involves the application of coarse-graining techniques to physical systems. RG theory is a general conceptual framework, so one needs methods to operationalize those concepts. The variational renormalization group (VRG) is one such scheme, proposed by Kadanoff, Houghton and Yalabik in 1976. For clarity of exposition, I chose to focus on one specific type of system to illustrate how RG works, namely quantum spin systems, instead of proceeding in full generality. But before delving into the mathematical machinery, I will give a “hand-waving” explanation of the meaning of spin in physics.

The Concept of Spin in Physics

In physics, spin can be defined as “an intrinsic form of angular momentum carried by elementary particles, composite particles, and atomic nuclei.” Though spin is by definition a quantum mechanical concept having no classical counterpart, particles with spin are often (though incorrectly) depicted as small tops rotating around their own axis. Spins are closely related to the phenomenon of magnetism. The particle spin (black arrow) and its associated magnetic field lines (source).

The Mathematics of Renormalization

Let us consider a system or ensemble of N spins.
For visualization purposes suppose they can be put on a lattice, as illustrated in the figure below. A 2-dimensional lattice of spins (represented by the little arrows). The spheres are charged atoms (source). Since spins can be up or down, they are associated with binary variables v_i = ±1, where the index i labels the position of the spin in the lattice. For convenience, I will represent a configuration of spins by a vector v. For systems in thermal equilibrium, the probability distribution associated with a spin configuration v has the following form: P(v) = exp(−H(v))/Z. This is the ubiquitous Boltzmann distribution (with the temperature set to 1 for convenience). The object H(v) is the so-called Hamiltonian of the system, which can be defined as “an operator corresponding to the sum of the kinetic [and] potential energies for all the particles in the system”. The denominator Z is a normalization factor known as the partition function, Z = tr_v exp(−H(v)). The Hamiltonian of the system can be expressed as a sum of terms corresponding to interactions between spins: H(v) = −Σ_i B_i v_i − Σ_ij J_ij v_i v_j − … The set of parameters {B_i, J_ij, …} are called coupling constants, and they determine the strength of the interactions between spins (second term) or between spins and external magnetic fields (first term). Another important quantity we will need to consider is the free energy. Free energy is a concept originally from thermodynamics, where it is defined as “the energy in a physical system that can be converted to do work”. Mathematically, it is given in our case by F = −ln Z = −ln tr_v exp(−H(v)). The symbol “tr” stands for trace (from linear algebra); in the present context, it represents the sum over all possible configurations of visible spins v. At each step of the renormalization procedure, the behavior of the system at small length scales is averaged out. The Hamiltonian of the coarse-grained system is expressed in terms of new coupling constants, and new, coarse-grained variables are obtained.
In our case, the latter are block spins h, and the new Hamiltonian is H'(h) = −Σ_i B'_i h_i − Σ_ij J'_ij h_i h_j − … To better understand what block spins are, consider the two-dimensional lattice below. Each arrow represents a spin. Now divide the lattice into square blocks each containing 2×2 spins. The block spins are the average spins corresponding to each of these blocks. In block spin RG, the system is coarse-grained into new block variables describing the effective behavior of spin blocks (source). Note that the new Hamiltonian has the same structure as the original one, only with configurations of blocks of spins in place of physical spins. Both Hamiltonians have the same structure but with different variables and couplings. In other words, the form of the model does not change, but as we zoom out the parameters of the model change. The full renormalization of the theory is obtained by systematically repeating these steps. After several RG iterations, some of the parameters will drop out and some will remain. The ones that remain are called relevant operators. A connection between these Hamiltonians is obtained by the requirement that the free energy (described a few lines above) does not change after an RG transformation.

Variational Renormalization Group (VRG)

As mentioned above, to implement the RG mappings one can use the variational renormalization group (VRG) scheme. In this scheme, the mappings are implemented by an operator T_λ(v, h), where λ is a set of parameters. This operator encodes the couplings between hidden and input (visible) spins and satisfies the relation exp(−H'_λ(h)) = tr_v exp(T_λ(v, h) − H(v)), which defines the new Hamiltonian given above. In an exact RG transformation, the coarse-grained system would have exactly the same free energy as the original system, i.e. ΔF = F_h − F_v = 0,
which is equivalent to the condition tr_h exp(T_λ(v, h)) = 1. In practice, this condition cannot be satisfied exactly, and variational schemes are used to find the λ that minimizes the difference between the free energies or, equivalently, best approximates the exact RG transformation.

A Quick Summary of RBMs

I have described the internal workings of restricted Boltzmann machines in some detail in a previous article. Here I will provide a more condensed explanation. Restricted Boltzmann machines (RBMs) are generative, energy-based models used for nonlinear unsupervised feature learning. Their simplest version consists of two layers only:
One layer of visible units, denoted by v
One hidden layer with units denoted by h
Illustration of a simple Restricted Boltzmann Machine (source). Again I will consider a binary visible dataset v with n elements extracted from some probability distribution P(v), the probability distribution of the input, or visible, data. The hidden units in the RBM (represented by the vector h) are coupled to the visible units with an interaction energy E_λ(v, h). The sub-index λ represents the set of variational parameters {c, b, W}, where the first two elements are vectors and the third one is a matrix. The goal of RBMs is to output a λ-dependent probability distribution that is as close as possible to the distribution of the input data P(v). The probability associated with a configuration (v, h) and parameters λ is a Boltzmann function of this energy functional. From this joint probability, one can easily obtain the variational (marginalized) distribution of visible units by summing over the hidden units. Likewise, the marginalized distribution of hidden units is obtained by summing over the visible units. From these, we can define an RBM Hamiltonian on the visible units. The λ parameters can be chosen to optimize the so-called Kullback-Leibler (KL) divergence, or relative entropy, which measures how different two probability distributions are.
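For reference, the RBM quantities described above can be written out explicitly. Sign conventions vary across the literature; the forms below are one common choice, consistent with the setup in Mehta and Schwab's paper:

```latex
% Interaction energy between visible units v and hidden units h,
% with variational parameters \lambda = \{b, c, W\}:
E_\lambda(\mathbf{v}, \mathbf{h}) =
    -\sum_j c_j v_j - \sum_i b_i h_i - \sum_{i,j} W_{ij} h_i v_j

% Joint distribution and its marginals:
p_\lambda(\mathbf{v}, \mathbf{h}) =
    \frac{e^{-E_\lambda(\mathbf{v}, \mathbf{h})}}{Z_\lambda}, \qquad
p_\lambda(\mathbf{v}) = \sum_{\mathbf{h}} p_\lambda(\mathbf{v}, \mathbf{h}), \qquad
p_\lambda(\mathbf{h}) = \sum_{\mathbf{v}} p_\lambda(\mathbf{v}, \mathbf{h})

% RBM Hamiltonian on the visible units:
H_\lambda(\mathbf{v}) = -\ln \sum_{\mathbf{h}} e^{-E_\lambda(\mathbf{v}, \mathbf{h})}

% Kullback-Leibler divergence between the data distribution P and p_\lambda,
% which vanishes exactly when the two distributions coincide:
D_{\mathrm{KL}}\!\left(P(\mathbf{v}) \,\|\, p_\lambda(\mathbf{v})\right) =
    \sum_{\mathbf{v}} P(\mathbf{v}) \ln \frac{P(\mathbf{v})}{p_\lambda(\mathbf{v})}
```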
In the present case, we are interested in the KL divergence between the true data distribution and the variational distribution of the visible units produced by the RBM. More specifically: When both distributions are identical: Exactly mapping RG and RBM Mehta and Schwab showed that to establish the exact mapping between RG and RBMs, one can choose the following expression for the variational operator: Recall that the Hamiltonian H(v) encodes the probability distribution of the input data. With this choice of variational operator, one can quickly show that the RG Hamiltonian and the RBM Hamiltonian on the hidden layer are the same: Also, when an exact RG transformation can be implemented, the true and variational Hamiltonians are identical: Hence we see that one step of the renormalization group with spins v and block spins h can be exactly mapped onto a two-layered RBM made of visible units v and hidden units h. As we stack more and more layers of RBMs, we are in effect performing more and more rounds of the RG transformation. Application to the Ising Model Following this rationale, we conclude that RBMs, a type of unsupervised deep learning algorithm, implement the variational RG process. This is a remarkable correspondence, and Mehta and Schwab demonstrated their idea by applying stacked RBMs to a well-understood Ising spin model. They fed spin configurations sampled from an Ising model into the DNN as input data. Their results show that, remarkably, DNNs seem to be performing (Kadanoff) block spin renormalization. In the authors’ words: “Surprisingly, this local block spin structure emerges from the training process, suggesting the DNN is self-organizing to implement block spin renormalization… It was astounding to us that you don’t put that in by hand, and it learns”. In the figure below from their paper, A shows the architecture of the DNN.
In B the learning parameters W are plotted to show the interaction between hidden and visible units. In D we see the gradual formation of block spins (the blob in the picture) as we move along the layers of the DNN. In E the RBM reconstructions reproducing the macroscopic structure of three data samples are shown. Deep neural networks applied to the 2D Ising model. See the main text for a detailed description of each of the figures (source). Conclusions and Outlook In 2014, Mehta and Schwab showed that a Restricted Boltzmann Machine (RBM), a type of neural network, is connected to the renormalization group, a concept originally from physics. In the present article, I reviewed part of their analysis. As previously recognized, both RG and deep neural networks bear a remarkable “philosophical resemblance”: both distill complex systems into their relevant parts. This RG-RBM mapping is a kind of formalization of this similarity. Since deep learning and biological learning processes have many similarities, it is not too much of a stretch to hypothesize that our brains may also use some kind of “renormalization on steroids” to make sense of our perceived reality. As one of the authors suggested, “Maybe there is some universal logic to how you can pick out relevant features from data, I would say this is a hint that maybe something like that exists.” The problem with this is that, in contrast to self-similar systems (with fractal-like behavior), where RG works well, systems in nature generally are not self-similar. A possible way out of this limitation, as pointed out by the neuroscientist Terrence Sejnowski, would be if our brains somehow operated at critical points, with all neurons influencing the whole network. But that is a topic for another article!
https://towardsdatascience.com/deep-learning-explainability-hints-from-physics-2f316dc07727
['Marco Tavora Ph.D.']
2020-08-30 15:44:24.176000+00:00
['Artificial Intelligence', 'Inside Ai', 'Data Science', 'Science', 'Machine Learning']
Crypto During Coronavirus: Make The Right Decisions Now
It goes without saying that things have been a bit chaotic over the past few weeks. At dlab HQ in New York, our office is one of thousands that are closed amidst a city-wide shutdown the likes of which were difficult to imagine as little as two months ago. As the longer-term reality of the new normal settles in, we’ve been working diligently with our staff and our portfolio companies to figure out how to best navigate this unfamiliar environment. We understand that this is a challenging time for everyone and sincerely hope that all of you are staying safe and healthy. We’ll get through this, and we will be stronger for it. But first we have to survive, and surviving means reprioritizing and adapting. How long will this situation last? Is COVID-19 a 6-month problem or a two-year problem? How long will recovery take, both operationally and economically? What does it mean for startups in our ecosystem attempting to raise financings this year? It’s honestly hard to know, as we simply don’t have sufficient data; but it’s clear that the effects will be material. This is truly an unprecedented situation and we need to be mindful with our next steps, both as a fund and as a community. It’s important in times like this to slow down before you speed up. The pandemic’s effects on cryptocurrency and the entire blockchain ecosystem are even harder to pin down. What does life in a post COVID-19 world look like, and what kind of a role will crypto play? We’re going to find out. Here at dlab we continue to be excited about the growth of blockchain technology, its potential for societal change, and its applicability to the challenges that we find ourselves facing in a post-pandemic world. Although we’re certainly concerned about the current seed funding climate, we remain optimistic that great founders, as always, find ways to defy the odds and flourish. 
Now is as great a time as ever to work on technology that solves real problems the world is facing; problems that will become even more pronounced in the weeks and months ahead. We’re proud that our portfolio includes a number of companies addressing needs in disaster response, supply chain traceability, remote user security, worker credentialing, and employee compensation, among others. We’ve had conversations with dozens of companies for our latest batch who are doing fascinating work applying this technology stack to the healthcare, research, food and agriculture, and supply chain industries. This is work that matters now. So how do we, as a program, respond to an event like this? At SOSV, we’ve been telling our startups that, first and foremost, they have to survive. This usually means making some difficult decisions: downsizing, focusing, and eliminating distractions. Often it means taking a step back from ambitious new projects in order to support your core. This is true of the companies we back, but it’s also true of us as an investor and as a studio. Our primary responsibility is to our founders, so we’ll be putting a lot of other efforts to the side for the next few months in order to provide additional resources, virtual programming, and hands-on support to them. We’ll be working hard to ensure the best possible outcomes for their businesses, and by extension, for the world around us. In order to focus our energy here, and because of the uncertainty of the next few months, it’s clear that the most responsible thing for us to do is to delay the start of our upcoming accelerator program. Another change precipitating this delay is the exit of our partner, EMURGO. In this uncertain global situation, they’ve had to make the difficult decision to refocus on their own core business, which means they will no longer be co-investing in new dlab startups at the accelerator stage. 
They remain fully committed to the success of the ten portfolio companies we have co-invested in together, and we look forward to continuing to work with the EMURGO team as mentors and advisors for future dlab prospects. The last few weeks have certainly been a time of change and priority re-assessment for everyone. However, it’s hard not to be optimistic when you’re surrounded by the kind of founders, partners, and mentors that we’re lucky to have in our network. Even if we’re all physically isolated for the moment, it’s always a pleasure to see you on a Zoom call or in a Slack room. The hangouts and virtual coffee conversations have been great so far, even if the actual coffee isn’t as good as the stuff downstairs at Culture Espresso on 36th. Finally, if you’re a startup that has applied to our upcoming dlab cohort: Please know that we appreciate your patience and understanding during these uncertain times. We’ve been blown away by the quality of the applications we’ve received for this program and we’re truly excited to continue our conversations over the coming months. Please don’t be bashful if we can help in any way; you know where to find us via email or Telegram. Likewise, if you’re a pre-seed or seed-stage startup in the space that we haven’t yet spoken with, we’d love to talk with you. We’re currently accepting applications and our current plan (accepting that there are maybe one or two major things outside of our control in these crazy times!) is to make selections in the Fall for a program kickoff in Q4. Thank you. Please stay healthy, happy, and safe. Make the right decisions. And don’t stop building stuff that matters.
https://medium.com/dlabvc/crypto-vs-coronavirus-make-the-right-decisions-now-fd972e87d4a7
['Nick Plante']
2020-11-29 15:43:05.611000+00:00
['Blockchain', 'Startup', 'Accelerator', 'Investing', 'Coronavirus']
7 Insanely Good Reasons Why You Should Write On Medium
1. Establish a New Audience I’ve been around the block in the blogging world for years now and I know for a fact that it’s hard to establish a following from scratch. A writer could have top-notch quality content but still fall short of readers. But when I discovered Medium for the first time, I realized that there was so much potential here to build an engaging audience. And ever since then, I’ve been able to gain 3000 followers in a short period of time. On other websites, you may have to do twice the work: notify your email list, share with your followers, or maybe even invest in paid ads to promote your posts. 2. Develop Your Writing Voice The beauty of Medium is that it gives you the freedom to publish about anything you want. And whether you’re here to work on your craft as a writer or simply journal your thoughts at the end of the day, there is always value in what you do on Medium. Being part of Medium has allowed me to polish my writing skills. As I continue to write here every month, I’ve seen a vast improvement in the way I’ve been able to express myself. Another thing I love about Medium is the potential of being curated every time you publish. It’s something that motivates me to continuously push myself to become a better storyteller. When curators choose your article for distribution, it reaches a targeted audience in specific topics such as Writing, Business or Life Lessons. This can potentially earn you more money in the long run. 3. Connect With a Thriving Community There are so many writers on this platform that you are bound to meet someone that you can relate to. I know that it always gives me a sense of satisfaction when I come across an article and say, “I was just thinking the exact same thing!” Blogging isn’t the same without a community. And it’s a big reason why I decided to keep writing on Medium. The people I’ve been able to connect with provide honest, valuable feedback on my posts and consistently leave thoughtful comments.
This encourages me to create more every day. 4. Gain a New Perspective on Life I honestly think that I’ve become more knowledgeable ever since I signed up for a membership on Medium. That is because I read articles on so many different topics now — ones that I would have normally passed up on before. In real life, there are also things I don’t necessarily converse about with people. And it’s fascinating to read about them through someone else’s lens. Sometimes we think we know everything until we read another writer’s perspective on the subject matter. 5. Make Extra Income Per Month For me, blogging is a side hustle. It’s something I really enjoy doing outside of my career and my commitment to it has been able to bring in extra money each month. What’s great about it is that I can write literally anywhere and still earn cash. Sometimes I write while I’m commuting or even while I’m waiting in a long line. And a cool feature on this platform is that I can jot down a few sentences in the drafts section and it will save automatically. Ever since I’ve been publishing on Medium, I’ve been able to make anywhere between $100-$500 a month — however, it’s taken some time for me to get there. And while it’s not a liveable wage and also nowhere near the amount I make in my full-time job, I still get paid for it. But as I keep dedicating myself to writing, I see the numbers grow and grow each day. And I intend on doubling my income someday. 6. Measure Your Metrics Daily Medium has valuable post analytics that allow you to see how your posts are performing every day. That means that you can see how many views your content has generated or the average time each reader spent on a story. This helps determine which posts are the most popular for you. And it can be helpful if you are trying to earn a steady income by blogging. You can use the data as a tool to analyze the overall performance of a post and get a good idea of how people reacted to it.
It can even pinpoint which articles need more improvement. This is a great way to figure out where your weaknesses are and give you a second chance to polish up your writing. 7. Get Discovered By Publishers While being on Medium doesn’t guarantee that you’ll get discovered, you can definitely increase the likelihood of it. After all, there are big publishers on the platform that could potentially stumble across a story of yours. With Medium’s ever-growing audience, you never know who your post is going to reach. After all, there are people who started their writing careers on Medium. And it’s simply because their posts were distributed enough that it eventually sparked the interest of a publisher. Medium is great for exposure. And whether you want to write as a hobby or advance your career as an author, publishing on here will open new doors for you.
https://medium.com/narrative/7-insanely-good-reasons-why-you-should-write-on-medium-552336dca7c3
['Katy Velvet']
2020-12-15 01:04:05.784000+00:00
['Writing', 'Entrepreneurship', 'Work', 'Life Lessons', 'Medium']
A Giant Volcano Lurks Beneath Santorini, Greece’s Most Touristy Island
On a recent November morning, Giannis Bellonias was sipping tsipouro on the veranda of his cave house in Santorini, Greece. “Visibility is poor today. Hope Kolumbo won’t give scientists a hard time,” Bellonias said to his wife, looking out onto the Aegean sea. The alabaster paths leading down to Santorini’s famous sun-bleached, blue-shuttered cave houses spread out below. Kolumbo is a gigantic, active submarine volcano located roughly four miles to the northeast of Santorini and 550 yards underwater. When it last erupted in 1650 A.D., it formed a crater 1.5 miles wide, triggered a tsunami that smashed into the eastern and southern coast of Santorini, and killed over 50 people. Bellonias, who has lived on and off the island for almost 60 years and owns a cultural center and library called the Bellonio Foundation, is keenly aware of the threat Kolumbo poses, and believes it is greater than that of the two volcanoes that sit in the middle of the Santorini caldera, Palea Kameni (“Old Burnt,” inactive) and Nea Kameni (“New Burnt,” still active). Experts think he might be onto something. As Bellionas looked out onto the sea, scientists dropped new seismographs into Kolumbo’s crater in hopes of keeping a closer eye on the mysterious beast. “I come from a generation of people who have made peace with the volcanic life.” “Trust me, I’ve dived into the Kolumbo crater with a submersible and have seen the active hydrothermal field of the volcano with my bare eyes,” says Paraskevi Nomikou, a geologist-oceanographer at the National and Kapodistrian University of Athens who dove into the submarine volcano in October. “We need to monitor this active submarine volcano with the same urgency we grant to on-land volcanoes.” Sulfur rises from the floor of Nea Kameni, an active volcano in the center of the Santorini caldera.
A Santorini native, Nomikou gives credence to local “folk wisdom”: Nea Kameni can cover the island in ash and destroy crops, but it’s Kolumbo that they should fear. This month, the sudden eruption of New Zealand’s White Island volcano created a 3.7-kilometer-high column of ash and killed at least eight people, raising fresh concerns about the safety of volcano tourism. The 1.5 million tourists who wash up on Santorini’s shores each year may not realize it, but they, like the island’s roughly 15,000 permanent residents, dance with a constant volcanic threat. In fact, the five islands that make up Santorini were formed over 2 million years of volcanic eruptions. The Late-Bronze Age eruption of 1627 B.C., which lasted for 27 years, sent up seven cubic miles of rhyodacite magma and collapsed 200-foot-thick landslides into the sea, creating tsunami waves nine meters high and forming Santorini’s early modern caldera. Eons later, volcanic activity outside the Minoan caldera created Kolumbo during the historic eruption of 1650 A.D. Palea Kameni and Nea Kameni formed over the course of six violent eruptions between 1570 and 1950 A.D. To many scientists, Nea Kameni is considered the most likely source of a future eruption, but Kolumbo’s silence weighs heavy. “Santorini is not for the faint of heart.” For 60 years, all seemed peaceful in Santorini. Then, between 2011 and 2012, GPS instruments on the island registered renewed activity. Data suggested a “rise of the island of Santorini and a parallel decline in sea levels, an unusual and gradually increasing tectonic activity inside the caldera, and a change in water temperatures,” says Christos Pikridas, a scientific council member at the Institute for the Study and Monitoring of the Santorini Volcano (ISMOSAV), a nonprofit organization formed under an E.U. research program in 1995. The observation alarmed locals and volcanologists alike. 
Nea Kameni has the potential to eject 3.5 million cubic meters of magma and blanket large areas of Europe with ash. In 2012, the Greek government formed a two-year committee, the Special Scientific Committee for the Monitoring of Santorini Volcano, to monitor the new activity and predicted two potential outcomes: An extreme, “sub-plinian” eruption with a high discharge rate of magma lasting minutes to hours, or, more likely, the reactivation of the Kameni islands’ volcanic centers. They filed a report describing the two scenarios, their potential impact on the local population, and a mitigation plan to the Greek General Secretariat for Civil Protection in 2013. But today, there is still no emergency plan in case of a major eruption. For now, it is up to scientists to keep tabs on the region’s volcanic activity. Predicting Nea Kameni’s next eruption, says Pikridas, is “a dangerous game.” For 14 years, ISMOSAV monitored Nea Kameni 24/7 with a network of GPS, seismographs, and tide gauges. This October, the organization added four global navigation satellite systems (GNSS) around the volcano to increase its observational power, and next year, ISMOSAV plans to set up two more. Scientists know that any eruptions will be preceded by earthquakes six or seven months in advance, and Pikridas, who led the GNSS installation, hopes the new satellites will pick up these warning signs. “If the points where we have installed the antennae of the GNSSs move upward, this might reveal signs of volcanic unrest with an accuracy of a few millimeters.” But nearby, the sleeping giant Kolumbo remains “a geological mystery,” says Nomikou. The subterranean volcano is seven times more active than Nea Kameni and has its own magma chamber, separate from the one that feeds the on-land volcano. In fact, it’s considered by some to be the most active submarine volcano in Europe. But because it is underwater, Kolumbo is extremely hard to monitor. 
Scientists rely on earthquakes in the area to monitor it, says Nomikou. In October, Nomikou spent 17 days on the research vessel Poseidon alongside scientists from the Helmholtz Center for Ocean Research Kiel and the Universities of Hamburg and Bologna, investigating Kolumbo. They anchored five ocean-bottom seismographs around the volcano and one directly in its crater. It was an ambitious expedition — one that perhaps should have been done many years ago. “Greek authorities have not paid importance to the subject due to budget issues,” says Nomikou. “Of course, they know about the danger because we inform them right away after each and every one of our oceanographic expeditions.” The team blasted the Poseidon’s air gun at the volcano seabed to penetrate it with sound waves, which were measured by 16 sonar detectors borne on thick, 300-meter-long cables as they rebounded. A detailed picture of the volcano’s structure and a complete representation of its seismic activity emerged. “We basically X-rayed Kolumbo,” says Nomikou. The microquakes caused by the air gun also confirmed that the ocean-bottom seismographs worked. “We recorded small earthquakes, which, of course, show again that the submarine volcano is active,” she says. Nomikou believes Kolumbo should be the primary subject of scientific focus and innovation, especially as it has already “showed its teeth” by taking so many lives during the great eruption of 1650. The active volcano of Nea Kameni, meanwhile, has gone through several eruptive periods over the past 300 years, all without claiming any victims. “There’s nothing new on Nea and Palea Kameni. These volcanoes have been studied well by previous volcanologists,” Nomikou says. The scant data on Kolumbo’s activity makes predicting its potentially explosive behavior difficult, but scientists know how to look for early warning signs.
They are watching, for example, for earthquakes and discoloration of the seawater into shades of translucent green and white, caused by dissolved volcanic gases like sulfur dioxide, carbon dioxide, and hydrogen sulfide. Nomikou is particularly concerned that there is no permanent monitoring system on Kolumbo to watch for these signs, even though its grave risks to Santorini are clear. The island has an evacuation plan, but only for earthquakes. The Municipality of Santorini provides online maps, listing places where locals can take refuge or stay for a few days in case one strikes. Pikridas agrees that the lack of emergency planning for an eruption poses a serious threat to Santorini’s population — as well as the crowds of tourists who come to the island each year. To a local like Bellonias, the endless stream of tourists scrambling up Santorini’s hills to get the perfect Instagram photo appears blissfully oblivious. “We could have an eruption any time now,” says Bellonias from the yard of his 17th-century home, which has outlasted several eruptions and earthquakes. “I am actually situated right above the fault line,” Bellonias says with pride. “I come from a generation of people who have made peace with the volcanic life,” he says. “Santorini is not for the faint of heart.”
https://onezero.medium.com/a-giant-volcano-lurks-beneath-santorini-greeces-most-touristy-island-e3893590094a
['Stav Dimitropoulos']
2019-12-13 17:05:12.363000+00:00
['Tourism', 'Volcano', 'World', 'Environment', 'Science']
Embracing Tsundoku — How a Library of Unread Books Can Expand Your Mind
The Antilibrary — A Boundless Fountain of Curiosity Understanding that read books are far less valuable than unread ones might appear very counter-intuitive and confusing to you. That’s okay. Knowledge itself seems to have been put on a pedestal these days, where people tend to form their identity around the number of books that they’ve read, their educational institution, and everything that they do know. The problem with this is perfectly illuminated by Lincoln Steffens’ quote from his 1925 essay, Radiant Fatherhood: “It is our knowledge — the things we are sure of — that makes the world go wrong and keeps us from seeing and learning.” Certainty in a world clouded in mystery has become mainstream. Marcelo Gleiser highlights in his book, The Island of Knowledge, that we have a tendency to constantly strive towards more and more knowledge, but that it is essential to understand that we are, and will remain, surrounded by mystery. Science, after all, is driven by the unknown answers to calibrated questions of nature. That’s why the antilibrary is such an important tool — the intentional act of pursuing Tsundoku for personal growth. Nassim Taleb coined the term in his book, The Black Swan, where he describes the antilibrary and its role in battling ignorant certainty, using legendary writer Umberto Eco’s relationship with books to show what a fruitful relationship with knowledge really looks like. Umberto Eco had a very large personal library amounting to thousands of books, so whenever he had visitors, he’d often separate them into two categories. There were those who were fascinated by his library and would ask how much of it he had read, whereas the other very small minority would actually understand that a library was not meant to be an ego-booster, but rather a very powerful research tool.
The library, to him, was not a mere ornament of knowledge, but rather a means to constantly remind himself that there exists a sea of knowledge out there that has yet to be explored — that he will never be able to explore. Therefore it’s clear that the antilibrary serves the purpose of being an intellectual reminder of how much there is to explore in the world, to stay curious, and that you don’t know as much as you think you know — there’s always more to learn, and knowledge will never be complete. It allows you to be comfortably conscious of your ignorance, thus sparking a fire in your belly to keep on learning. As Stuart Firestein points out in his TED Talk and book, Ignorance: How It Drives Science, being consciously ignorant is to have faith in uncertainty, finding pleasure in mystery, and learning to cultivate doubt, as there is no surer way to screw up an experiment than to be certain of its outcome. Though I may find a little discomfort in knowing that I won’t finish all of the books that I have in my library, I have cultivated this constant drive to learn more and more partly due to that discomfort. Every time I look, I get reminded of the pressing reality that I will never know all that I want to know, but I can at least try and bring light to the shadows of my ignorant mind.
https://medium.com/illumination-curated/embracing-tsundoku-how-a-library-of-unread-books-can-expand-your-mind-and-bolster-productivity-103ccc6d4d44
['Zachary Minott']
2020-12-02 21:03:14.645000+00:00
['Books', 'Philosophy', 'Life Lessons', 'Advice', 'Psychology']
Learn Kubernetes in Under 3 Hours: A Detailed Guide to Orchestrating Containers
Introduction to Kubernetes I promise, and I am not exaggerating, that by the end of the article you will ask yourself “Why don’t we call it Supernetes?”. Fig. 11. Supernetes If you followed this article from the beginning, we have covered a lot of ground and a lot of knowledge. You might worry that this will be the hardest part, but it is the simplest. The only reason learning Kubernetes is daunting is because of the “everything else”, and we covered that so well. What is Kubernetes After we started our microservices from containers, we had some questions. Let’s elaborate on them in a Q&A format: Q: How do we scale containers? A: We spin up another one. Q: How do we share the load between them? What if the server is already used to the maximum and we need another server for our container? How do we calculate the best hardware utilization? A: Ahm… Ermm… (Let me google). Q: How do we roll out updates without breaking anything? And if we do break something, how can we go back to the working version? Kubernetes solves all these questions (and more!). My attempt to reduce Kubernetes to one sentence would be: “Kubernetes is a Container Orchestrator that abstracts the underlying infrastructure (where the containers are run).” We have a faint idea about container orchestration, and we will see it in practice in the continuation of this article, but it’s the first time we are reading about “abstracting the underlying infrastructure”, so let’s take a close-up shot at this one. Abstracting the underlying infrastructure Kubernetes abstracts the underlying infrastructure by providing us with a simple API to which we can send requests. Those requests prompt Kubernetes to meet them to the best of its capabilities. For example, it is as simple as requesting “Kubernetes, spin up 4 containers of the image x”. Then Kubernetes will find under-utilized nodes on which it will spin up the new containers (see Fig. 12). Fig. 12. Request to the API Server What does this mean for the developer?
That he doesn’t have to care about the number of nodes, where containers are started, and how they communicate. He doesn’t deal with hardware optimization or worry about nodes going down (and they will go down, Murphy’s Law), because new nodes can be added to the Kubernetes cluster, and in the meantime Kubernetes will spin up the containers on the other nodes that are still running. It does this to the best of its capabilities. In figure 12 we can see a couple of new things: API Server: our only way to interact with the cluster, be it starting or stopping another container (err, *pods) or checking current state, logs, etc. Kubelet: monitors the containers (err, *pods) inside a node and communicates with the master node. *Pods: initially, just think of pods as containers. And we will stop here, as diving deeper would just loosen our focus, and we can always do that later; there are useful resources to learn from, like the official documentation (the hard way) or the amazing book Kubernetes in Action, by Marko Lukša. Standardizing the Cloud Service Providers Another strong point that Kubernetes drives home is that it standardizes the Cloud Service Providers (CSPs). This is a bold statement, but let’s elaborate with an example: an expert in Azure, Google Cloud Platform, or some other CSP ends up working on a project in an entirely new CSP, with no experience working with it. This can have many consequences, to name a few: he can miss the deadline; the company might need to hire more resources; and so on. In contrast, with Kubernetes this isn’t a problem at all, because you would be executing the same commands against the API Server no matter the CSP. You request from the API Server, in a declarative manner, what you want.
Kubernetes abstracts away and implements the how for the CSP in question. Give it a second to sink in: this is an extremely powerful feature. For the company it means that they are not tied to a CSP. They calculate their expenses on another CSP, and they move on. They still have the expertise, they still have the resources, and they can do it for cheaper! All that said, in the next section we will put Kubernetes into practice. Kubernetes in Practice — Pods We set up the microservices to run in containers, and it was a cumbersome process, but it worked. We also mentioned that this solution is not scalable or resilient and that Kubernetes resolves these issues. In the continuation of this article, we will migrate our services toward the end result shown in figure 13, where the containers are orchestrated by Kubernetes. Fig. 13. Microservices running in a Kubernetes Managed Cluster In this article, we will use Minikube for debugging locally, though everything that will be presented works as well in Azure and in Google Cloud Platform. Installing and Starting Minikube Follow the official documentation for installing Minikube. During the Minikube installation, you will also install Kubectl. This is a client for making requests to the Kubernetes API Server. To start Minikube, execute the command minikube start, and after it has completed, execute kubectl get nodes. You should get the following output:

kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    11m       v1.9.0

Minikube provides us with a Kubernetes cluster that has only one node, but remember, we do not care how many nodes there are; Kubernetes abstracts that away, and for learning Kubernetes that is of no importance. In the next section, we will start with our first Kubernetes resource: [DRUM ROLLS] the Pod. Pods I love containers, and by now you love containers too. So why did Kubernetes decide to give us Pods as the smallest deployable compute unit? What does a pod do?
A pod can be composed of one container, or even a group of containers, that share the same execution environment. But do we really need to run two containers in one pod? Erm… Usually, you run only one container, and that’s what we will do in our examples. But for cases when, e.g., two containers need to share volumes, communicate with each other using inter-process communication, or are otherwise tightly coupled, Pods make that possible. Another feature of Pods is that we are not tied to Docker containers; if desired, we can use other container technologies, e.g. Rkt.

Fig. 14. Pod properties

To summarize, the main properties of Pods are (also shown in figure 14):

Each pod has a unique IP address in the Kubernetes cluster.

A pod can have multiple containers. The containers share the same port space, so they can communicate via localhost (understandably, they cannot use the same port), while communicating with containers of other pods has to be done using the pod IP.

Containers in a pod share the same volumes*, the same IP, port space, and IPC namespace.

*Containers have their own isolated filesystems, though they are able to share data using the Kubernetes resource Volumes.

This is more than enough information for us to continue, but to satisfy your curiosity check out the official documentation.

Pod definition

Below we have the manifest file for our first pod, sa-frontend; below it we explain all the points.

apiVersion: v1
kind: Pod                                              # 1
metadata:
  name: sa-frontend                                    # 2
spec:                                                  # 3
  containers:
    - image: rinormaloku/sentiment-analysis-frontend   # 4
      name: sa-frontend                                # 5
      ports:
        - containerPort: 80                            # 6

Kind: specifies the kind of Kubernetes resource that we want to create. In our case, a Pod.

Name: defines the name of the resource. We named it sa-frontend.

Spec: the object that defines the desired state of the resource. The most important property of a Pod’s Spec is the array of containers.
Image: the container image we want to start in this pod.

Name: the unique name for a container in a pod.

ContainerPort: the port on which the container is listening. This is just an indicator for the reader (dropping the port doesn’t restrict access).

Creating the SA Frontend pod

You can find the file for the above pod definition in resource-manifests/sa-frontend-pod.yaml. Either navigate in your terminal to that folder, or provide the full path in the command line. Then execute the command:

kubectl create -f sa-frontend-pod.yaml
pod "sa-frontend" created

To check if the Pod is running, execute the following command:

kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
sa-frontend   1/1       Running   0          7s

If it is still in ContainerCreating, you can execute the above command with the argument --watch to update the information when the Pod transitions to the Running state.

Accessing the application externally

To access the application externally, we create a Kubernetes resource of type Service. That will be the subject of a later section, and it is the proper implementation, but for quick debugging we have another option: port-forwarding.

kubectl port-forward sa-frontend 88:80
Forwarding from 127.0.0.1:88 -> 80

Open your browser at 127.0.0.1:88 and you will get to the React application.

The wrong way to scale up

We said that one of Kubernetes’ main features is scalability; to prove this, let’s get another pod running.
To do so, create another pod resource with the following definition:

apiVersion: v1
kind: Pod
metadata:
  name: sa-frontend2   # The only change
spec:
  containers:
    - image: rinormaloku/sentiment-analysis-frontend
      name: sa-frontend
      ports:
        - containerPort: 80

Create the new pod by executing the following command:

kubectl create -f sa-frontend-pod2.yaml
pod "sa-frontend2" created

Verify that the second pod is running by executing:

kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
sa-frontend    1/1       Running   0          7s
sa-frontend2   1/1       Running   0          7s

Now we have two pods running!

Attention: this is not the final solution, and it has many flaws. We will improve it in the section on another Kubernetes resource: Deployments.

Pod Summary

The Nginx web server with the static files is running inside two different pods. Now we have two questions: how do we expose the application externally to make it accessible via a URL, and how do we load balance between the pods?

Fig. 15. Load balancing between pods

Kubernetes provides us with the Service resource. Let’s jump right into it in the next section.

Kubernetes in Practice — Services

The Kubernetes Service resource acts as the entry point to a set of pods that provide the same functional service. This resource does the heavy lifting of discovering services and load balancing between them, as shown in figure 16.

Fig. 16. Kubernetes Service maintaining IP addresses

In our Kubernetes cluster, we will have pods that provide different functional services (the frontend, the Spring WebApp, and the Flask Python application). So the question arises: how does a service know which pods to target? I.e., how does it generate the list of endpoints for the pods? This is done using Labels, and it is a two-step process:

Applying a label to all the pods that we want our Service to target, and

Applying a “selector” to our Service that defines which labeled pods to target.

This is much simpler visually: Fig. 17.
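As a sketch of what figure 17 describes, a labeled pod manifest could look like the following (this assumes the sa-frontend pod definition from earlier; the labels block under metadata is the only addition, and the exact layout in figure 17 may differ):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-frontend
  labels:
    app: sa-frontend   # the label a Service selector can match
spec:
  containers:
    - image: rinormaloku/sentiment-analysis-frontend
      name: sa-frontend
      ports:
        - containerPort: 80
```

The second pod’s manifest would get the same labels block, differing only in its name.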
Pods with labels and their manifests

We can see that the pods are labeled with “app: sa-frontend” and that the service targets pods with that label.

Labels

Labels provide a simple method for organizing your Kubernetes resources. They represent a key-value pair and can be applied to every resource. Modify the manifests for the pods to match the example shown earlier in figure 17. Save the files after completing the changes, and apply them with the following commands:

kubectl apply -f sa-frontend-pod.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod "sa-frontend" configured

kubectl apply -f sa-frontend-pod2.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod "sa-frontend2" configured

We got a warning (apply instead of create, roger that). In the second line of each output, we see that the pods “sa-frontend” and “sa-frontend2” are configured. We can verify that the pods were labeled by filtering the pods that we want to display:

kubectl get pod -l app=sa-frontend
NAME           READY     STATUS    RESTARTS   AGE
sa-frontend    1/1       Running   0          2h
sa-frontend2   1/1       Running   0          2h

Another way to verify that our pods are labeled is by appending the flag --show-labels to the above command, which will display all the labels for each pod. Great! Our pods are labeled, and we are ready to target them with our Service. Let’s get started defining the Service of type LoadBalancer, shown in Fig. 18.

Fig. 18. Load balancing with the LoadBalancer Service

Service definition

The YAML definition of the LoadBalancer Service is shown below:

apiVersion: v1
kind: Service             # 1
metadata:
  name: sa-frontend-lb
spec:
  type: LoadBalancer      # 2
  ports:
    - port: 80            # 3
      protocol: TCP       # 4
      targetPort: 80      # 5
  selector:               # 6
    app: sa-frontend      # 7

Kind: a Service.

Type: the specification type. We choose LoadBalancer because we want to balance the load between the pods.
Port: specifies the port on which the service receives requests.

Protocol: defines the communication protocol.

TargetPort: the port to which incoming requests are forwarded.

Selector: object that contains properties for selecting pods.

app: sa-frontend: defines which pods to target; only pods labeled with “app: sa-frontend” will be targeted.

To create the service, execute the following command:

kubectl create -f service-sa-frontend-lb.yaml
service "sa-frontend-lb" created

You can check the state of the service by executing the following command:

kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
sa-frontend-lb   LoadBalancer   10.101.244.40   <pending>     80:30708/TCP   7m

The External-IP is in the pending state (and don’t wait, as it’s not going to change). This is only because we are using Minikube. If we had executed this on a cloud provider like Azure or GCP, we would get a public IP, which would make our services accessible worldwide. Despite that, Minikube doesn’t leave us hanging; it provides a useful command for local debugging. Execute the following:

minikube service sa-frontend-lb
Opening kubernetes service default/sa-frontend-lb in default browser...

This opens your browser pointing to the service’s IP. After the Service receives a request, it forwards the call to one of the pods (which one doesn’t matter). This abstraction enables us to see and interact with the numerous pods as one unit, using the Service as an entry point.

Service Summary

In this section, we covered labeling resources, using labels as selectors in Services, and defining and creating a LoadBalancer Service. This fulfills our requirements to scale the application (just add new labeled pods) and to load balance between the pods, using the Service as an entry point.

Kubernetes in Practice — Deployments

Kubernetes Deployments help us with one constant in the life of every application: change.
Moreover, the only applications that do not change are the ones that are already dead; while an application is alive, new requirements will come in, more code will be shipped, and it will be packaged and deployed. At each step of this process, mistakes can be made. The Deployment resource automates the process of moving from one version of the application to the next with zero downtime, and, in case of failures, it enables us to quickly roll back to the previous version.

Deployments in Practice

Currently, we have two pods and a service exposing them and load balancing between them (see Fig. 19). We mentioned that deploying the pods separately is far from perfect: it requires managing each one separately (creating, updating, deleting, and monitoring its health). Quick updates and fast rollbacks are out of the question! This is not acceptable, and the Deployment Kubernetes resource solves each of these issues.

Fig. 19. Current state

Before we continue, let’s state what we want to achieve, as this will give us the overview we need to understand the manifest definition for the Deployment resource. What we want is:

Two pods of the image rinormaloku/sentiment-analysis-frontend,

Zero-downtime deployments,

Pods labeled with app: sa-frontend, so that the pods get discovered by the Service sa-frontend-lb.

In the next section, we will translate these requirements into a Deployment definition.

Deployment definition

The YAML resource definition that achieves all the above-mentioned points:

apiVersion: extensions/v1beta1
kind: Deployment                   # 1
metadata:
  name: sa-frontend
spec:
  selector:                        # 2
    matchLabels:
      app: sa-frontend
  replicas: 2                      # 3
  minReadySeconds: 15
  strategy:
    type: RollingUpdate            # 4
    rollingUpdate:
      maxUnavailable: 1            # 5
      maxSurge: 1                  # 6
  template:                        # 7
    metadata:
      labels:
        app: sa-frontend           # 8
    spec:
      containers:
        - image: rinormaloku/sentiment-analysis-frontend
          imagePullPolicy: Always  # 9
          name: sa-frontend
          ports:
            - containerPort: 80

Kind: a Deployment.
Selector: pods matching the selector will be taken under the management of this Deployment.

Replicas: a property of the Deployment’s Spec object that defines how many pods we want to run. So, only 2.

Type: specifies the strategy used by this deployment when moving from the current version to the next. The RollingUpdate strategy ensures zero-downtime deployments.

MaxUnavailable: a property of the RollingUpdate object that specifies the maximum number of unavailable pods allowed (compared to the desired state) when doing a rolling update. For our deployment, which has 2 replicas, this means that after terminating one pod we would still have one pod running, keeping our application accessible.

MaxSurge: another property of the RollingUpdate object that defines the maximum number of pods added to a deployment (compared to the desired state). For our deployment, this means that when moving to a new version we can add one pod, which adds up to 3 pods at the same time.

Template: specifies the pod template that the Deployment will use to create new pods. Most likely the resemblance to Pods struck you immediately.

app: sa-frontend: the label to use for the pods created by this template.

ImagePullPolicy: when set to Always, the container images will be pulled on each redeployment.

Honestly, that wall of text got even me confused; let’s just get started with the example:

kubectl apply -f sa-frontend-deployment.yaml
deployment "sa-frontend" created

As always, let’s verify that everything went as planned:

kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
sa-frontend                    1/1       Running   0          2d
sa-frontend-5d5987746c-ml6m4   1/1       Running   0          1m
sa-frontend-5d5987746c-mzsgg   1/1       Running   0          1m
sa-frontend2                   1/1       Running   0          2d

We got four running pods: two created by the Deployment and the two we created manually. Delete the ones we created manually using the command kubectl delete pod <pod-name>.

Exercise: delete one of the Deployment’s pods as well and see what happens.
Think of the reason before reading the explanation below.

Explanation: deleting one pod made the Deployment notice that the current state (1 pod running) differed from the desired state (2 pods running), so it started another pod.

So what is so good about Deployments, besides keeping the desired state? Let’s get started with the benefits.

Benefit #1: Rolling a zero-downtime deployment

Our product manager came to us with a new requirement: our clients want a green button in the frontend. The developers shipped their code and provided us with the only thing we need, the container image rinormaloku/sentiment-analysis-frontend:green. Now it’s our turn; we, the DevOps, have to roll a zero-downtime deployment. Will the hard work pay off? Let’s see!

Edit the file sa-frontend-deployment.yaml by changing the container image to refer to the new image: rinormaloku/sentiment-analysis-frontend:green. Save the changes as sa-frontend-deployment-green.yaml and execute the following command:

kubectl apply -f sa-frontend-deployment-green.yaml --record
deployment "sa-frontend" configured

We can check the status of the rollout using the following command:

kubectl rollout status deployment sa-frontend
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 of 2 updated replicas are available...
deployment "sa-frontend" successfully rolled out

According to the output, the deployment was rolled out. It was done in such a fashion that the replicas were replaced one by one, meaning that our application was always on. Before we move on, let’s verify that the update is live.
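To recap the edit we just applied: the green manifest differs from the original deployment definition only in the image line of the pod template. A sketch of the changed fragment, assuming the rest of the manifest stays exactly as before:

```yaml
    spec:
      containers:
        - image: rinormaloku/sentiment-analysis-frontend:green   # the :green tag is the only change
          imagePullPolicy: Always
          name: sa-frontend
          ports:
            - containerPort: 80
```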
Verifying the deployment

Let’s see the update live in our browsers. Execute the same command that we used before, minikube service sa-frontend-lb, which opens up the browser. We can see that the button has been updated.

Fig. 20. The Green button

Behind the scenes of “The RollingUpdate”

After we applied the new deployment, Kubernetes compared the new state with the old one. In our case, the new state requests two pods with the image rinormaloku/sentiment-analysis-frontend:green. This differs from the currently running state, so Kubernetes kicks off the RollingUpdate.

Fig. 21. RollingUpdate replacing pods

The RollingUpdate acts according to the rules we specified, namely “maxUnavailable: 1” and “maxSurge: 1”. This means that the deployment can terminate only one pod at a time and can start only one new pod at a time. The process is repeated until all pods are replaced (see Fig. 21). Let’s continue with benefit number 2.

Disclaimer: for entertainment purposes, the next part is written as a novella.

Benefit #2: Rolling back to a previous state

The Product Manager runs into your office, and he is having a crisis! “The application has a critical bug, in PRODUCTION!! Revert to the previous version immediately!” yells the product manager. He sees the coolness in you, not twitching an eye. You turn to your beloved terminal and type:

kubectl rollout history deployment sa-frontend
deployments "sa-frontend"
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl.exe apply --filename=sa-frontend-deployment-green.yaml --record=true

You take a short look at the previous deployments. “The last version is buggy, while the previous version worked perfectly?” you ask the Product Manager. “Yes, are you even listening to me?!” screams the product manager. You ignore him; you know what you have to do. You start typing:

kubectl rollout undo deployment sa-frontend --to-revision=1
deployment "sa-frontend" rolled back

You refresh the page and the change is undone! The product manager’s jaw drops open.
You saved the day! The end!
https://medium.com/free-code-camp/learn-kubernetes-in-under-3-hours-a-detailed-guide-to-orchestrating-containers-114ff420e882
['Rinor Maloku']
2019-10-03 14:19:30.986000+00:00
['Java', 'Kubernetes', 'Web Development', 'DevOps', 'Docker']
Main perspectives and forecasts The workforce of the future
In addition to the facts and statistics, the study stresses that employers need to realize, more than ever, how their teams and employees need to be helped. It is apparent that teams whose members can work remotely and take care of their lives will produce great results. There have been some horrible examples of organizations doing this wrong — for instance, the manager who told his employees he needed them to keep a video call open all day long so he could track what they were doing. There have also been some excellent examples of businesses willing to invest in and assist their employees with online equipment and training, as well as daily social and non-curricular activities. I spoke about the publication of this study with Wendy Mars (Twitter, LinkedIn), President of Cisco for Europe, the Middle East, Africa, and Russia. She claims, in particular, that it underlines the continued value of digital transformation and the need for businesses to respond rapidly to it. She said, “We have to be able to ensure that workers remain active in businesses and operations and to be able to operate in this very fast and efficiently through the development of workforce around the globe if we look at the time and pace at which the corporations have reacted to this because they have no genuine option … And we have seen the speed that companies have done so. “At the present time, the dynamics of working remotely or from home and how people see it, have also changed considerably … people saw workers becoming unbelievably productive from home … it would radically change the nature of work throughout the future.” It is evident from talking to our friends and families that many of us have objectively found a much healthier work-life balance. In addition, the results of this study indicate that, as a result of improvements to work-life balance, both our emotional and physical wellbeing are better taken care of.
In several ways, businesses and organizations today have a duty to repay their workers efficiently and to adjust to their needs, while ensuring that they still have positive working environments. Mars says, “To be able to have a degree of business capability, be that they are in a bureau or remote, there would be a great dependence. The user’s standards are very high.” In addition, challenges will arise in ensuring that workers retain their know-how and acquire new skills that are worthwhile for themselves and their businesses in the future. Organizational environments are also likely to be greatly affected by workplace decentralization. With personal and after-work drinks not on the table for the near future, both managers and workers need to be creative in terms of upholding mutual values and the spirit of team building. Technology could once again provide a long-term solution. With 5G networking and the advancement of technologies such as virtual and augmented reality, stuttering video conferencing will be a thing of the past, and there will be new possibilities for interaction. Organizations that lead in this direction are likely to see a more satisfied and efficient workforce of the future in return for their investments. You can see my full conversation with Zhejian Peng here:
https://shaiksameeruddin.medium.com/main-perspectives-and-forecasts-the-workforce-of-the-future-70ac27ac1bbb
['Shaik Sameeruddin']
2020-10-20 07:30:00.465000+00:00
['Big Data', 'Data Science', 'Technology', 'Artificial Intelligence', 'Machine Learning']
How Buying Social Proof Can Make You Invisible
“Buying Instagram followers” is a hot combination of keywords right now. That’s surprising. I thought that trend was long gone. But after looking at trending topics in social media, I realized that buying followers was still one of the top searches on Google. Here, let me explain why buying followers won’t fix any of your problems, and why it can harm your account long term.

Buying followers will destroy your engagement rate.

The engagement rate is the ratio between the average number of likes and comments you get on your posts and the number of followers you have. Instagram values content with a high engagement rate. That means that Instagram naturally pushes content that drives lots of likes and comments. If you buy Instagram followers, you will drastically increase your number of followers while your number of likes remains the same. The followers that you’ll buy aren’t active accounts, and these people will not like your content. So, what Instagram sees is a giant spike in your number of followers and an equal decrease in your engagement rate. This is enough of a signal for Instagram to know you bought followers. Naturally, Instagram won’t push your content anymore, as a punishment for going against the rules. Your account may grow after buying followers, but that won’t last for long. Sooner rather than later, your content won’t be shown on your followers’ feeds.

It looks suspicious.

Maybe you’re thinking about buying a few thousand followers to improve your business’s social proof. Well, this is also a terrible idea. How does it look to you to see an account followed by 100k people that gets an average of 85 likes per post? This is a massive red flag. Lots of restaurants and trendy brands buy followers to make themselves seem more popular than they are. This is a hard pass for me.
If a restaurant tries to trick you into thinking they’re incredibly popular, what are they even making you eat? It says a lot about someone’s ethics when they decide to buy followers to kickstart their business.

What if Instagram changes the rules?

Instagram is highly unpredictable. They change the rules all the time. What if they randomly decide to get rid of every single account that has bought followers? They could do that. They did it a few years ago with accounts that had logged into some specific automation software. Then your entire business account is gone. Your years of hard work are wiped out, and all of that because of 3,000 followers that you decided to buy three years ago.

It’s fixing the symptom, not the cause.

There is a reason why you’re not getting new followers. You’re fixing the consequences of a lousy content strategy, poor-quality content, bad targeting, or some other cause. By ignoring these reasons, you’re preventing your account from growing at all. The thing is, identifying the cause of slow growth is relatively easy once you have the right method. But most importantly: don’t buy followers, it’s not worth it.
https://medium.com/datadriveninvestor/how-buying-social-proof-can-make-you-invisible-ca0f02b2bc1f
['Charles Tumiotto Jackson']
2020-10-15 14:09:46.484000+00:00
['Instagram', 'Growth', 'Startup', 'Marketing', 'Social Media']
Unboxing of Mind
I believe my mind has become that box which has too many wires tangled inside, each day I untangle few but the next day a new nest is there which I need to dismantle. It’s a love and hate kind of relationship with my mind and every time my mind wins, there is a poem waiting like a gift which I need to unbox. A good reason for me to search that paper and scribble it all before it gets tangled in the mesh. Whatever I write here or at another place is that piece of the moon which keeps burning me. Have you ever imagined standing at the edge of a cliff and thinking to jump or to walk away, we become those abandoned words stuck in some story. The choice is ours whether to ink it or leave it empty and when you are a writer you know to write is easier. That doesn’t mean each silly thought which speaks to you has to be on the page, some are those stars, which need to stay hidden behind the dark clouds because the time is not right for them to speak to the page. Look for that moon which is illuminating the clouds above, look for it and pull it down so that it can brighten the page and you can let go that anxiety. Maybe as a writer when you read this you may feel it’s not the same for me, words don’t burn me, the sky doesn’t come closer at night. The reason you are made of different stars but that can’t stop you from mapping your journey. I wrote here and unboxed my mind, laid it bare for you on the page, you can love it or hate it, you can adore it or push it, this is me on the page. A ticking bomb which never stops, a tale which is forever looking for new words.
https://medium.com/scribe/unboxing-of-mind-4c372f90a5e4
['Priyanka Srivastava']
2020-06-08 09:51:01.520000+00:00
['Prose', 'Writers On Writing', 'Writing', 'Nonfiction', 'Writers Life']
5 Psychological Habits That Can Prepare You For 2021
Plan small sparks of joy to look forward to

Living through this pandemic is tough. In 2020, many people felt scared, frustrated, trapped, and unsure about the future. However tough the past year has been, you can do something different in 2021 to reduce the psychological burden you experienced in 2020. You can create new routines, be kind to yourself, and see opportunities for connection to live a better life despite the uncertainties. Everyone reacts differently during stressful situations, but you can adopt certain habits, behaviours, and mindsets to rise above your personal problems and emotional stressors. Uncertainty can feel daunting and frightening, but taking control of your actions, reactions, and responses can guide your steps in the right direction. If you’re wondering how to emotionally prepare for the months ahead, here’s some advice.

Focus on the things to celebrate a little more

We’ve come a long way. You made it. Take some time to let that sink in. Despite the stressors, uncertainties, emotional burdens, and months of isolation, you are still here. “Small wins are exactly what they sound like, and are part of how keystone habits create widespread changes,” says Charles Duhigg, the author of The Power of Habit: Why We Do What We Do, and How to Change. Now is a great time to review the experiences of the year that helped you cope or thrive. Acknowledge all that you have already adjusted to. You may not have responded well to the events of the year, but you kept moving. You showed your courage. You built new routines to survive. It was a crazy year, but you made it through, and you will do great at whatever life throws your way! Right now is the best opportunity to be kind to yourself and celebrate your wins, no matter how small. Remind yourself to go easy on yourself every day.
Your daily activities may not seem like they used to be, especially when you compare them to your normal life, but it pays to acknowledge how far you’ve come. Lift your mood by celebrating your successes. If you’re still standing, you have something to celebrate. Give yourself credit for the things you managed to do and overcome during one of the toughest years in history. Celebrating the small wins is a great way to build confidence and self-esteem and to start feeling better about yourself.

Plan small sparks of joy to look forward to

Life isn’t made up of big moments — those events are rare. It’s made up of many small moments brought together. Happiness is enjoying the little things in life. “Perfect happiness is a beautiful sunset, the giggle of a grandchild, the first snowfall. It’s the little things that make happy moments, not the grand events. Joy comes in sips, not gulps,” says Sharon Draper. In 2021, don’t stress about missing out on vacations, travelling to a foreign country, birthday celebrations, or that big event you are used to every year. To survive the year without psychological burdens, plan anything that can guarantee sparks of joy as frequently as you can: practice gratitude, notice the sunrise and sunset, take a walk in nature, enjoy the company of people close to you, practice mindfulness, start a passion project, and enjoy your creative process. All too often we take the little things in our life for granted. Life is this very moment. Seize it and enjoy it while you can. Sometimes the little opportunities we hardly notice can have the biggest impact. Take things one day at a time.

Be realistic, lower your expectations

If all goes according to plan, by the end of 2021 most of the world should be relatively back to normal. Good events, such as news about covid-19 vaccines, can change everything, but it pays to be realistic. It’s not realistic to think that everything will be back to normal.
Instead, build better habits into your schedule that make it easy to do more of what makes you come alive. “Accepting the uncertainties of the future, while at the same time identifying areas in your life you can control, is a good place to start,” says Stephen Khan, the editor of The Conversation. Take time out for self-care. Remind yourself not to worry if you miss personal deadlines and complete some tasks later than expected. Focus on your high-priority items and don’t expect too much of yourself.

Choose to stay in the moment

Far too often, many people are stuck waiting for future happiness rather than staying in the present moment or making time to enjoy the time they have now. Anxiety and worry come from casting yourself into the future and staying there longer than normal. Choose to live in the present. “If you keep your energy in the present moment, and you’re not contemplating how many more miles you have, it can feel easy at times,” says Jo Daniels, a senior lecturer in clinical psychology at the University of Bath. How do you stay in the moment? By reminding yourself consistently to observe your body, your reactions, and the environment more. By enjoying where you are at any point in time and slowing down to notice more. And by writing about how you feel: your pain, your frustrations, your unmet goals, and all your failures and successes. You can also stay in the moment by paying attention to the full experience of anything you are doing. For example, whilst walking, observe how objects seem to move past you, the temperature, the wind, etc. “Realise deeply that the present moment is all you ever have,” argues Eckhart Tolle. Neither the past nor the future is of relevance if you can’t enjoy this very moment right now.

Strengthen your connections

Humans thrive on social connections — set aside time to spend quality time with your family. If you can’t connect with colleagues and friends physically yet, plan a video call with them regularly.
Positive interactions and conversations can boost your oxytocin levels, also known as the feel-good hormone, which will in turn reduce any stress you may be feeling. Regardless of where you are psychologically, making quality connections with the people in our lives can improve our mood. In 2021, improve your connections — make them stronger. Call parents, siblings and colleagues and talk voice-to-voice. Give them your full attention when you ask, “How are you?” Be particularly invested in the answer. Be more interested in listening than talking. Take walks with your family or those in your safe bubble. Recommend books, movies, podcasts, soothing playlists to your friends and family. Build engaging activities into your routine. “…embrace the ordinary — to play board games, cook meals, watch entire TV seasons, read books, take walks, do puzzles, get those art supplies out of the back of the closet, catch up with people we “meant to call” weeks or months ago and make one another laugh — precisely because our busy routines have been disrupted,” writes Lori Gottlieb, a contributing writer at The Atlantic and a psychotherapist. Uncertainty tolerance is also something you can improve. You can train yourself to be psychologically resilient. It’s uncomfortable but it’s within reach and it’s not impossible. Starting today, choose to celebrate your small wins, plan small sparks of joy into your routine, be realistic about your expectations, practice mindfulness and strengthen your connections. Your emotional health depends on it.
https://thomas-oppong.medium.com/5-psychological-habits-that-can-prepare-you-for-2021-d0a2041a731b
['Thomas Oppong']
2020-12-29 14:40:07.521000+00:00
['Psychology', 'Self Improvement', 'Relationships', 'Mental Health', 'Self']
Numpy HandBook For Beginners
Numpy HandBook For Beginners A handbook that will definitely come in handy In this blog, we are going to discuss the NumPy library in Python, and later we will also prepare a notebook that can serve as a handbook in the future. What is NumPy? NumPy stands for Numerical Python and is a Python library for all kinds of scientific computation. It provides a powerful multidimensional array object and a collection of routines to process it, adding support for matrices and large multidimensional arrays through a large collection of high-level mathematical functions. It was created in 2005 by Travis Oliphant. It is an open-source project and you can use it freely in your code. In Python, we have lists that can serve the purpose of an array, but the problem with lists is that they are slow to process. NumPy arrays have an advantage here: they are much faster than lists because their elements are stored in one continuous block of memory, and their operations run in optimized, pre-compiled C code. You Can Find Pandas Handbook For Beginners Here. Concepts to be covered in this blog
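As a rough illustration of the speed claim above, here is a minimal timing sketch (my own example, not taken from the handbook; exact timings will vary by machine):

```python
import timeit

import numpy as np

# A Python list and a NumPy array holding the same million integers
py_list = list(range(1_000_000))
np_array = np.arange(1_000_000)

# Sum of squares: pure-Python generator loop vs a single vectorized NumPy call
list_time = timeit.timeit(lambda: sum(x * x for x in py_list), number=10)
array_time = timeit.timeit(lambda: np.dot(np_array, np_array), number=10)

print(f"list loop:  {list_time:.4f}s for 10 runs")
print(f"numpy dot:  {array_time:.4f}s for 10 runs")
```

On a typical machine the NumPy version is orders of magnitude faster, precisely because the data sits in one contiguous buffer and the loop happens in compiled code rather than the Python interpreter.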
https://medium.com/pythoneers/numpy-handbook-fca4aea5ddfb
['Abhay Parashar']
2020-12-15 16:12:10.349000+00:00
['Numpy', 'Data Science', 'Machine Learning', 'Python', 'Artificial Intelligence']
The frozen middle
Bite-sized leadership advice Leadership is only effective for two levels, a reach known as Boss Squared. We are motivated to please our Boss; in return they keep us, and the tribe, ‘safe’. A simple concept. To please my Boss I adopt their priorities, copy their behaviour and respond to what is important to them. Simple stuff. But I am not naive. Like a weather forecast, I look beyond my Boss to their Boss, the Boss Squared. Think of this as an early warning sign that change is coming. My Boss will be influenced by their Boss. The influence goes no further. That is it. Two levels. When your CEO stands on the stage and pronounces change, expecting it to magically happen, it doesn’t. The best they can hope for is awareness. Behavioural change only happens when your Boss, or your Boss’s Boss, starts doing things differently. The failure to cascade and engage the middle leadership layers is called the ‘frozen middle’: a group of leaders who are often seen as blockers to change. They aren’t; they just aren’t connected to the change. The CEO just stood up in front of everyone, so what do they need to do? If you are a change agent, watch out for the frozen middle by ensuring that every second layer of leadership consists of effective leaders who are clear on what they need to do differently. Some practical advice When you take on an assignment, start with a role map. Map the change targets, then work your way up the structure, noting each two layers, until you get to the sponsor of the change. Have a leadership plan, with explicit actions, that cascades every two layers. Coach (senior) leaders on the limit of their effectiveness. Recognise that all-hands or town hall meetings will *only* achieve awareness. They are not the magic touch, just the opening ‘hello’. This is part of our Leadership Wizdom series, bite-sized leadership advice for leaders who wish to improve their leadership but don’t have much time. For more in-depth articles check out The Change Wizard.
We coach leaders and help their organisations become more adaptable at www.thechangewizard.com
https://medium.com/leadership-wizdom/the-frozen-middle-184b15ddd130
['Ed Pike']
2020-07-27 11:01:01.009000+00:00
['Emotional Intelligence', 'Leadership', 'Change', 'Startup', 'Entrepreneurship']
Problem solvers always are the winners
“The reward for being a good problem solver is to be heaped with more and more difficult problems to solve.” — R. Buckminster Fuller Constancy, dedication and commitment. It is not easy to do what we want; there are obstacles everywhere, but the main one is oneself. As I write this I am also trying to solve many problems in Kchin, from simple bugs to big logistical and financial problems, and I know that when I solve some of them, others will appear. And yes, that sucks; but I have learned something in this time: at the end of the day, those who resist and persist are the ones who reach the goal. It is very easy to conceive ideas; what is complicated is to carry them out. Someone once told me that the best project is simply the one that gets going; the others are just notions. The important thing is to work steadily. Ideas will improve; you do not have to have a complete idea to start working on it, and you do not have to waste time developing a master plan. You just need to start, work and focus, and the rest will come; the conditions will fall into place. So stop focusing on what you do not have; pay attention instead to all the resources you can count on, and use them. Whatever is missing will come at the right time, not by magic but by simple logic. There is no greater reward for the entrepreneur than seeing his ideas and work come to life, seeing what was only air and notes become something real and functional. And above all, remember to enjoy the trip…
https://medium.com/on-startups-and-such/problem-solvers-always-are-the-winners-e57b8510c9cf
['Luis Acerv']
2017-09-02 14:09:26.945000+00:00
['Startup Lessons', 'Entrepreneur', 'Ceo Skills', 'Startup', 'Design']
Why you should choose Go and abandon Python in 2020
Why choose Go programming? 1. Compile to a single binary Golang is a compiled language, and the developers at Google put a lot of effort into it. It uses static linking to combine all dependent libraries and modules into a single binary for a given operating system and architecture. This means that if you compile your back-end application for Linux on an x86 CPU, you only need to copy the compiled binary to the server and the back-end application will work, with no dependency files required. 2. Static type system Type systems are very important for large-scale applications. Python is a great and fun language, but sometimes you will see unusual exceptions because a variable you tried to use as an integer turns out to be a string. Django will crash the process because of code like this: def some_view(request): user_id = request.POST.get('id', 0). Go compiles and tells you that this is a compiler error, and this is where it saves you time on silly issues. 3. Performance Surprisingly, Go is faster than Python (version 2 or 3) in most application scenarios. The results of the comparison can be seen in the Computer Language Benchmarks Game, which of course is not entirely fair; it depends on the type of application and the use case. In our case, Go got better performance thanks to its built-in concurrency support and CPU scalability. Whenever we need to execute some internal requests, we can run them in separate goroutines, each of which is more than ten times less resource-intensive than a thread in Python. Thanks to these built-in language features, we can save a lot of resources (memory and CPU). 4. Go no longer requires a web framework This is a very cool thing about the language. The creators and community of Go maintain many tools built into the core language, and in most cases you no longer need any third-party libraries.
For example, it has built-in HTTP, JSON, and HTML templating support. You can even build very complex API services without having to hunt for third-party libraries on GitHub. Of course, Go also has many libraries and frameworks for building web projects, but I would recommend that you avoid third-party libraries for your web projects or API services, because in most cases the native packages will make your life easier. 5. Better IDE support and debugging IDE support is one of the most important considerations when you try to change your programming language. A friendly IDE can save you, on average, 80% of your programming time. The Go plugin for JetBrains IDEA is also available in the other JetBrains IDEs (WebStorm, PhpStorm, etc.). It provides all the services you need for project development, and the powerful JetBrains IDEA makes your development even more productive. Choose Go, or just go home? Mozilla is internally switching its massive underlying logging architecture to Go, partly because of its powerful goroutines. The Go language was designed by people at Google, and support for concurrency was a top priority from the beginning of the design, rather than being bolted on after the fact as with the various Python solutions. So we set out to switch from Python to Go. Although the Go code is not yet an officially launched product, the results are very encouraging. We can now process a thousand documents per second, use less memory, and no longer need to debug the problems you hit in Python: ugly multi-process / event code and “why can’t Control-C kill the process?”. Why we like Go Anyone who has a little understanding of how programming languages work (interpreted vs. compiled, dynamic vs. static) will say, “Of course Go is faster.” Yes, we could also rewrite everything in Java and see similar improvements, but that’s not why Go wins. The code you write in Go simply seems to be correct.
I don’t know exactly why, but once the code compiles (and compilation is fast), you feel that the code will work: not only run without errors, but even be logically correct. I know this doesn’t sound very rigorous, but it is how it feels. Go is similar to Python in terms of (lack of) redundancy. It treats functions as first-class values, so a functional style of programming is easy to adopt. And of course, goroutines and channels make your life easier; you get a big performance boost from static typing, and you can control memory allocation more finely, without paying too high a cost in language expressiveness. Based on our code statistics, we wrote 64% less code after rewriting the project in Go. You don’t need to debug code that doesn’t exist, and the less code there is, the fewer errors there are! In conclusion Go gives us great flexibility: one language can be used in all our user scenarios, and it works well in all of them. In our back-end and API services we got a 30% performance improvement. And now I can process logs in real time, write them to a database, and serve one or more services over WebSocket! This is the power that Go’s language features provide.
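The runtime type error the article describes in point 2 is easy to reproduce in plain Python. The sketch below is my own, with an ordinary dict standing in for Django's request.POST (whose form values always arrive as strings), not the article's actual code:

```python
# Simulates request.POST.get('id', 0): HTML form values are always strings,
# so the "default 0" masks the fact that a submitted id arrives as "42".
post_data = {"id": "42"}

user_id = post_data.get("id", 0)  # the *string* "42", not the int 42

# This is the kind of line that only fails at runtime in Python,
# whereas Go's compiler would reject the equivalent type mismatch at build time.
try:
    next_id = user_id + 1
except TypeError as exc:
    print(f"runtime crash: {exc}")

# The defensive fix in Python is an explicit conversion:
next_id = int(user_id) + 1
print(next_id)  # 43
```

The point is not that Python can't handle this, but that the mistake surfaces only when the faulty code path actually runs, which is exactly the class of bug a static type system eliminates before deployment.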
https://medium.com/datadriveninvestor/why-you-should-choose-go-lang-and-abandon-python-2020-123d6030b584
['Shiv Bajpai']
2020-02-20 05:19:20.024000+00:00
['Programming', 'Golang', 'Python', 'Machine Learning', 'Cloud Computing']
How To Tell Human Stories That Make Your Brand Look Good
How To Tell Human Stories That Make Your Brand Look Good Cut the buzzwords about connection and community Photo by Josh Hild on Unsplash You’re sitting in meeting room three, the whiteboards are covered in marketing ideas, and everyone is trying to answer one question. How do we create a compelling brand story? Everyone is on their second cup of coffee, and frankly, your team ran out of useful ideas seven minutes into the meeting. Suddenly, a light bulb goes off in your head, and you blurt out: “Why don’t we put the people in front of our product?” The meeting erupts with applause and cheers! You immediately receive a promotion and a corner office, going down in history for singlehandedly saving the company with your brilliant idea.
https://medium.com/better-marketing/how-to-tell-human-stories-that-make-your-brand-look-good-4bdb6affc139
['Thom Gallet']
2019-12-20 18:19:39.557000+00:00
['Social Media', 'Branding', 'Marketing', 'Storytelling', 'Freelancing']
The Case for Starting a Business With Your Friend
If you have a moment like this, do something about it Alex and Brett both remember the exact moment they decided to go into business together. Not to be melodramatic, but it reminded me of asking someone about the moment they held their baby for the first time or met the love of their life. It was a moment etched into their memories with such acuity that I felt as if I was there with them when they described it to me. Alex called Brett one day with an idea, confident he needed Brett to help him bring it to life. Brett and Alex didn’t assume they understood someone else’s problem. Sure, they were student-athletes themselves, but Brett had spent three years working at a teacher recruitment company, and Alex had first-hand experience as a coach. This meant they spent their 20s deepening their understanding of the industry they were about to break into. The two were on the same page before they had even talked about it, as Brett recalls: “We’d actually just happened to both, without speaking with each other, have something similar in mind at the same time. So it was like this unspoken bug was out in the universe that told him to call me that day. And that was the spark.” They didn’t ignore the spark; they used it as their call to do something. They not only trusted the moment where they both felt called to go into business together, but they trusted each other enough to know they could do it as a team.
https://medium.com/swlh/the-case-for-starting-a-business-with-your-friend-b0401fa2d939
['Casey A.']
2020-11-30 15:02:49.256000+00:00
['Startup Lessons', 'Startup Life', 'Startup', 'New Business', 'Entrepreneurship']
Asshole Astrology: Week of 7 December 2020
Asshole Astrology: Week of 7 December 2020 Horoscopes for horrible people Image by Gerd Altmann from Pixabay Here is next week’s horoscope for your sign. It doesn’t matter when you read it, or which sign you are, as horoscopes are all made up. What does the universe have in store for you? Let’s find out. Aquarius: God is a name for whatever came before the big bang. God created the universe and it unfolded according to the laws of science. That’s what god does. My toaster makes toast. That’s its job. As much as I love toast, I don’t worship my toaster or kill in its name. God is a toaster. This is what I’m saying. Cliff Richard is a Christian. Give me the Devil’s music over that any day of the week. When I fly I usually listen to Highway to Hell during take-off so that if we die in a crash there’s no confusion about where I belong. The party, and the best music, is downstairs. Pisces: Ruthlessly cut out distractions. All those productivity articles that tell you to make your phone’s screen greyscale are for amateurs. I changed my laptop screen to greyscale. And turned on dark mode. It reduces eye strain, distraction and procrastination. It also sucks the joy out of every activity other than writing. Which was the whole point. I’d love to design an app that made everything greyscale except when you use certain apps (e.g. you want to write, so every time you use Scrivener it’s full colour, but do anything else and you’re back to the joyless grey). Aries: What deep dark secrets do you have? Will people really think less of you if you tell them? Every time I see a picture of a dog or cat on the internet I secretly boop its nose. Tonight in the bath I sang ‘Run to the Hills’ by Iron Maiden in the voice of Donald Duck. How was your day? I like to sing Iron Maiden songs in the bath in the voice of Donald Duck. That’s everything you need to know about me.
Sadly I also snore like Donald Duck and yawn like a Wookiee. Nobody said I’m perfect. Do you think less of me now you know? The world would be a much better place if more people sang like Donald Duck. Taurus: I don’t know how to break this to you but you probably are a super-villain. Extroverts count ‘necessary time for yourself’ in hours; introverts in days. I can go months without needing company. My phone is permanently set to Do Not Disturb. It never rings and always goes straight to voicemail. Why? Because I hate people calling me unannounced. If you’re an introvert at heart then until you can save up enough for your very own Fortress of Solitude you need to try a much cheaper creative solution: earplugs. Failing that, perhaps you can find better friends. Gemini: How are you? I’m not in the best mood today but when I’m angry I look ridiculous because I’m usually so adorable. Someone asked me: “How do you hug someone who doesn’t want to be hugged?” Here are some detailed instructions on how to hug me: Don’t. I love you but I use the terms I, love, and you extremely loosely. Instead of showering people with unwanted affection why not try random acts of kindness instead? That sounds much more appealing. Within reason. Just so long as you don’t end up with a rota for your randomness. To do is to be. To be is to do. Doobie doobie doo. Cancer: Freedom of speech means nothing without freedom of thought. I read it on a t-shirt so it must be true. That’s the smartest and funniest thing you will see all day. Until someone shows you the next thing. Please don’t ask me to not swear. There’s no such thing as bad language — only bad grammar, spelling and punctuation. You’re a profanity enabler. I mean that in a good way. Don’t tell me to shut up or make rude gestures with your hands. These are not the mimes you are looking for. Mime kink?
That definitely belongs on your list of things not to confess out loud. Leo: The only way to have an informed critical opinion on a book or film, regardless of your opinion, is to read it or watch it. There is no such thing as a book you shouldn’t read. Full-stop. Never ever ever. If you don’t read a book then you don’t have the right to complain about it. To destroy a physical book is a mortal sin. Deleting an ebook on the other hand? Not so much. “Don’t argue with idiots. They’ll drag you down to their level and beat you with experience.” You can’t argue with that. Let other people have the last word… Yeah, but! Virgo: Only the good die young. Prepare for the long haul. Everybody dies, some of us live, and few of us get to decide who tells our story. Making art is like living with a woman. Nothing is ever as it seems but you still need to try. Sit with your novel. Just look at it. When you can’t stand it any longer begin writing. Time passes no matter what you do so you may as well spend it in the company of a good book. Or in your case the terrible book that you’re trying to give birth to. Don’t worry; it will all be over soon enough. As in art so in life. Libra: It’s a digital world but it belongs to the storytellers. Storytelling is sacred. That’s why you should write. Netflix and the like don’t understand this of course. To them it’s all about money and metrics. Lorrie Moore said: “A short story is a love affair, a novel is a marriage. A short story is a photograph; a novel is a film.” Netflix say a short story is a limited series and a novel is canceled after two seasons because they ran the numbers. There should be a three-season or three-film limit on every TV show — beginning, middle, end. If a story is worth telling then it’s worth telling all the way through. Scorpio: Laptops don’t drink tea. Apparently.
Don’t spill tea on your computer. This is what I’m saying. You could yell “Buy my book! Buy my course! Give me your money!” at me even though we’ve never met. Or you could offer me a cup of tea and talk to me about cartoons like Adventure Time and Bojack Horseman. Which strategy do you think works best for getting me to read your work? Being understanding is probably the best approach. Try to talk with people rather than market at them. I’ll stick to drinking tea and cursing. *drinks tea and tries to think happy thoughts* Sagittarius: You’re wasting your life. Stop that. I’m not telling you how to live. But when you’re eating breakfast cereal from a cup, using chopsticks, it’s probably time to do the washing-up. When you’re listening to Rob Zombie as ‘happy-making music’ you know you’re in trouble. Drinking red wine whilst completing job applications? You really didn’t think this through now did you? Nobody wants to read a tear-stained CV. What’s that you say? ‘What do you think I should do? Ok, thanks, I’ll try to do the exact opposite of that.’ You can’t teach an old dog new tricks. Capricorn: What should you do on your first day back from vacation? Book your next vacation. There’s no such thing as too much travel. Except, you know, when you’re not allowed to travel. Adopt a travel mindset even in lockdown. If I had all the money in the world I’d buy tea and books. *looks around at my existing stash of tea and books* And travel. That’s about it. What are you missing? Where do you want to go? What can you do to bring some of that into your life right now? “You can’t go back and you can’t stand still. If the thunder don’t get you then the lightning will.”
https://medium.com/the-partnered-pen/asshole-astrology-week-of-7-december-2020-5d62726b4661
['James Garside']
2020-12-06 12:27:25.448000+00:00
['Life Lessons', 'Psychology', 'Writing', 'Relationships', 'Self']
Teaching Perspective and Resiliency to Students Will Help Them Succeed, Not Fail
Teaching Perspective and Resiliency to Students Will Help Them Succeed, Not Fail Ashley Broadwater Jul 17 · 4 min read Photo by Tim Gouw on Unsplash “My first year of college, I had a 1.7 GPA,” my professor told my class my sophomore year at UNC-Chapel Hill. I sighed in relief. My GPA was much higher than that, so I wasn’t worried, per se. But seeing how someone can be in such a low place and still succeed really encouraged me. As a perfectionist and type three who worries too much about success, this is the message I needed. We can fail and still succeed: that’s what we call resiliency. My whole childhood, an emphasis was placed not only on good grades, but on As. This wasn’t hard for me — I usually got As — but the pressure was a lot sometimes. I never really learned about the more crucial importance of maintaining my mental health. I’ve never been someone who needed mentors or coaches to be tough on me. I’m a people-pleaser and I want to succeed and do good in the world. I don’t need an extra push. In fact, extra pushes — especially in the form of a firm tone or yelling — only make me cry. As a middle school and high school student, I would cry and stress over bad grades. But thankfully, my dad taught me about the importance of perspective. In other words: yes, do your best, but know that one bad grade won’t matter next year or in five years. It’s just one grade and one class — I’ll survive it. I’ll have other good grades, and I’m doing my best. Further, what “my best” is may change due to life events that are out of my control. That’s all okay, normal and understandable, and it doesn’t decrease my worth. We can and will bounce back. While success stories can be inspiring and are worthy of praise, we need to talk more about “failure” stories too. Students need to see that we all fail sometimes, and that our lives and careers still work out. We can’t let students’ mental health continue to drastically decline over a number, a letter.
We can’t let them continue to think that one bad grade will ruin them or indicate that they’re stupid. Those are the thoughts they’re having. According to a Princeton Review study, over 50 percent of students said they felt stressed and 25 percent said homework was the cause. Additionally, students spend over a third of their time feeling overwhelmed. According to the American Psychological Association, 28 percent of college students went to counseling because of the anxiety they felt over their academic performance. Additionally, according to Education Week, when students are asked to do more than they can, their bodies release cortisol, a stress hormone. In turn, they can’t form memories as well and don’t do as well in school. However, research shows that learning about failure helps students succeed. According to a 2016 study in the Journal of Educational Psychology, students who learned about the struggles and failures of successful scientists — such as Albert Einstein — improved academically afterwards. If we want their stress to lessen, we can’t be scared to share our personal failures. Through sharing these anecdotes, we’re really depicting resiliency. Vulnerability is a strength that brings us connection. Through being vulnerable, we realize we aren’t as alone as we think, and that we have no reason to be ashamed. “When we ignore fear and deny vulnerability, fear grows and metastasizes,” said Brené Brown, shame researcher and storyteller. “We move away from a belief in common humanity.” Brown continued to discuss how we can connect with others and humanity as a whole through pushing past that fear and being vulnerable with others. Through that vulnerability, we help others feel safer and more comfortable doing so themselves. “If leaders really want people to show up, speak out, take chances and innovate, we have to create cultures where people feel safe — where their belonging isn’t threatened,” Brown said.
For students to have and share their voices, for students to think critically and succeed, they can’t be afraid of judgement or failure. One of my professors senior year showed my class this in a beautiful way: he showed us a stack of rejection letters he’d received. Again, the message I got from him was not that I shouldn’t try to succeed, but that we all fail, and that our lives and careers still work out. He also comforted me by sharing that our first jobs may not be our favorite jobs. Despite all of the positive posts we see on social media — especially LinkedIn — many people aren’t in love with their first job out of college. “Your first job is all about looking for your second job,” he said. His honesty was refreshing. So often we talk about our successes and how perfect our lives are. So often we hear about people’s accolades and awards. And then we think that we’re failures because we aren’t succeeding in those ways. But sometimes, those successes we hear about are not as perfect as we think. People get internships through nepotism. People have internships that are unpaid or in which they have to deal with unkind people. And even the most successful people have still had their downfalls, and still feel insecure sometimes. I remind myself to remember Pastor Steve Furtick’s words: “The reason we struggle with insecurity is because we compare our behind-the-scenes with everyone else’s highlight reel.” Failure can be inspiring; failure can unify us. Failure doesn’t mean we aren’t talented or smart. Hearing perspective about failure can help us realize that everything will be okay, not that we shouldn’t try. And when we share that perspective and take all of that pressure off of students, they’ll be healthier and more likely to succeed and feel good about themselves. And isn’t that what we really want?
https://medium.com/age-of-awareness/teaching-perspective-and-resiliency-to-students-will-help-them-succeed-not-fail-d4935fcba4c2
['Ashley Broadwater']
2020-07-17 23:31:01.228000+00:00
['Education', 'Leadership', 'Psychology', 'Parenting', 'Mental Health']
Turns out, coffee has an agenda.
If you purge your soul onto paper and are doing it for the love of writing, then this publication is for you. We love gritty, raw, emotional, thought-provoking, rebellious, sexual, spiritual, nature-related writings and comic strips.
https://medium.com/the-rebel-poets-society/turns-out-coffee-has-an-agenda-7a16e2366108
[]
2020-06-18 13:15:52.791000+00:00
['Coffee', 'Productivity', 'Comics', 'Storytelling', 'Humor']
Why You Should be Positive on Medium, not Negative
There’s certainly a desire in life sometimes to be negative. I just felt it on the drive home from the gym yesterday, getting stuck behind somebody going 28 MPH in a 35 MPH zone, having to wait at three traffic signals in a row for what felt like forever. This drive did not fill me with joy and positivity. You can feel negative about yourself too, of course, about all sorts of things. Something I really try to steer away from, but that I feel about myself almost every day, is the sense that my writing could be so much better. My writing on Medium. My novel writing. My short story writing. I’m currently revising my twentieth novel, and I get frustrated here and there when I feel like I’m producing mediocre work, when it should be great, when I should be able to produce an incredible story by now after years and years of practice. And then of course there’s the desire to feel negative toward others on Medium, too. You might feel negative toward somebody doing far better than you on here. Somebody whose stories get more claps, more views, more responses, more curation. You might feel jealous. Might feel anxious. You might even say something nasty to another Medium writer, which I talked about this weekend… Your negative feelings could potentially manifest themselves in being mean to another Medium writer, and such a direction is never one you want to go in. Such a direction isn’t going to get you very far on here, let me tell you. Instead, you should always embrace positivity on Medium. You want to think of this site as a community, where we are all doing our best work each and every day and doing whatever we can to support each other. I was called out recently for not clapping enough for other writers on Medium. It was hard to hear, but I needed to hear it. I was spending too much time only writing, writing, writing on Medium, and I wasn’t engaging enough with the community as a whole. I wasn’t being quite as positive as I should have been.
Now I’m clicking over to this site every morning feeling super positive, not just about my own work, but about the incredible work so many other writers are creating on Medium. Stories that entertain, inform, and inspire. Amazing pieces I would never even think about writing. Compelling pieces that allow authors to open themselves up and share a little bit of themselves with us. So don’t be competitive. Don’t be negative and nasty. Embrace positivity! Sometimes I feel negative about things. It happens every day, and you know what? It’s part of being human. You can’t feel positive all the time. Things big and small will bother you, and you just have to deal with them. But I’m really taking the viewpoint of being only positive when it comes to Medium. There’s so much great work being done here. The community is so supportive. Writers and readers on Medium make my heart happy every day. Feel free to vent here and there. Share a criticism about Medium if you have one. But stay positive as best you can. Especially when it comes to your own Medium journey, and the journeys of others. We’re all in this together, remember that. Let’s be as supportive of each other as we possibly can!
https://medium.com/med-daily/why-you-should-be-positive-on-medium-not-negative-517fceec5d06
['Brian Rowe']
2019-10-21 16:07:58.502000+00:00
['Medium', 'Community', 'Positive Thinking', 'Writing', 'Entrepreneurship']
10 Python Skills for Beginners
#7 — Apply a condition to multiple columns Let’s say we want to identify which Bach-loving plants also need full sun, so we can arrange them together in the greenhouse. First, we create a function by using the def keyword and giving it a name with underscores between words (e.g. sunny_shelf). Appropriately, this naming convention is called snake case 🐍 The function sunny_shelf takes in two parameters as its inputs — the column to check for “full sun” and the column to check for “bach.” The function outputs whether both these conditions are true. On line 4, we .apply() this function to the DataFrame and specify which columns should be passed in as parameters. axis=1 tells pandas to pass the function one row at a time, with each row arriving as a Series of column values; axis=0 would instead pass it one column at a time. We assign the output of the .apply() call to a new DataFrame column called ‘new_shelf.’ Alternatively, we could use the np.where() function for the same purpose: this function from the numpy library checks the two conditions specified above (i.e., that the plant is a lover of full sun and German classical music) and assigns the output to the ‘new_shelf’ column. For these tips on .apply(), np.where(), and other incredibly useful code snippets, check out Chris Albon’s blog.
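The snippet the section describes is not embedded here, so the following is a reconstruction under assumed column names (light, music) and invented data; only the sunny_shelf function and the .apply()/np.where() pattern come from the text:

```python
import numpy as np
import pandas as pd

# Hypothetical plant data standing in for the article's DataFrame
df = pd.DataFrame({
    "light": ["full sun", "shade", "full sun"],
    "music": ["bach", "bach", "metal"],
})

# True only when both conditions hold for a given plant
def sunny_shelf(light_value, music_value):
    return (light_value == "full sun") and (music_value == "bach")

# axis=1 passes each row to the lambda as a Series of column values
df["new_shelf"] = df.apply(
    lambda row: sunny_shelf(row["light"], row["music"]), axis=1
)

# Equivalent vectorized version with np.where()
df["new_shelf_np"] = np.where(
    (df["light"] == "full sun") & (df["music"] == "bach"), True, False
)

print(df)
```

Both columns come out identical; the np.where() version avoids a Python-level function call per row, which matters on large DataFrames.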
https://towardsdatascience.com/10-python-skills-beginners-3066305f0d3c
['Nicole Janeway Bills']
2020-11-23 14:06:04.934000+00:00
['Artificial Intelligence', 'Machine Learning', 'Python', 'Technology', 'Data Science']
Replacing Your VC? 5 Principles For Entrepreneurs
At Tau Ventures we advise all entrepreneurs to look upon the diligence process during a fundraising round as a two-way street. We believe it is not only an opportunity but a duty of the CEO to develop conviction that the VC they are bringing on board will indeed be the right partner moving forward. A wrong choice is akin to a bad marriage — terrible for all parties. In fact we subscribe to the maxim that hiring and firing your employees is hard, hiring and firing your investors is harder. That said, there are many situations in which a startup will invariably have to reconfigure a relationship with a VC. It matters significantly because your champion within the firm is the one who will work the hardest for you, including advocating for funding in future rounds and being your ambassador with other investors. Here are five key situations to keep in mind. 1) Transition: VC Who Championed You Is Changing Firms — This is probably the most common case of having to change an investor relationship. If there is advance knowledge, another investor in the partnership usually starts attending board meetings a few months in advance. Some funds will have a board seat and observership and may elevate the observer to the seat. Note a new deal owner will typically have less incentive to manage the investment and do follow-on rounds because they will receive less credit for having sourced the deal initially. As an entrepreneur you should ensure enough rapport with your key investors to be aware of such developments in enough time and voice an opinion as makes sense. After all, a transition of ownership is much more impactful for the startup than for the investor, who is typically managing multiple companies. 2) Retirement: VC Who Championed You Is Leaving Venture Capital Indefinitely — It’s a common practice for the fund to request the exiting VC to continue with a board seat, oftentimes compensating them.
Once again it’s in your interest as an entrepreneur to read the situation correctly and apply the same principles from #1 Transition in terms of getting a successor. You essentially want someone who is good to work with and carries respect within the firm — both things are important. If your champion is leaving for personal or professional reasons that would reflect on the startup then obviously it’s a different matter. But if your champion is exiting voluntarily then it’s convenient to maintain the relationship, with an eye towards finding the best possible successor. 3) Outspent: VC Fund Is Out Of Money — This is perhaps the toughest of situations. In some rare cases funds have been known to default on their capital commitments, but more often they simply don’t have unallocated funds to support you in future rounds. Remember most VCs do not have their full fund available to them upfront; they call it incrementally from their LPs over time. Most firms do leave a significant portion in reserve, often close to 50%, but cultures vary enormously on how they deploy this capital, and having a champion who both understands and influences the internal decision-making process is key. How to avoid a surprise? In this day and age it’s very easy to spend a few minutes on the web to check a fund’s size, estimate the average check size, and learn the typical follow-on strategy. In the US, SEC filings, TechCrunch, Crunchbase and Pitchbook are some of the tools startups can use to triangulate towards those answers. And then obviously ask the major investors themselves during the diligence process. If you do run into an unexpected situation then brainstorm with your champion whether syndicating with their LPs, reaching out to other VCs, or doing an SPV with third parties are options. 4) Conflict: There Are Opposing Interests — Most VCs will not invest in competing companies, at least to start with.
But it sometimes happens anyway when different companies in a portfolio start overlapping too much. The typical practice is for different partners to then take each board seat to minimize cross-information. Some firms will go as far as recusing themselves from the seat or even actively selling their position to another investor. Strategic VCs will typically abstain from those specific discussions where they will be inherently conflicted, such as the startup considering competing acquisitions including from the corporate. At the end of the day, the key thing for a CEO is to develop enough trust with the investors, because no amount of language in a term sheet can address all possible conflicts. 5) Clash: You And The Investor Are Fighting — Some investors will work on replacing the exact person on the board. Others will advocate for doing a secondary, i.e., selling the ownership to another investor. In either case it is ultimately the CEO’s responsibility to try to manage a conflict, including leveraging the rest of the board as needed. Oftentimes a CEO is also the Chairman of the Board, which should help in setting an agenda towards a resolution. Disagreements getting out of hand are actually rarer than it seems, from our experience at Tau Ventures. But when it happens it causes tremendous damage — look no further than Uber or WeWork as recent high-profile examples. Originally published on “Data Driven Investor,” am happy to syndicate on other platforms. I am the Managing Partner and Cofounder of Tau Ventures with 20 years in Silicon Valley across corporates, own startup, and VC funds. These are purposely short articles focused on practical insights (I call it gl;dr — good length; did read). Many of my writings are at https://www.linkedin.com/in/amgarg/detail/recent-activity/posts and I would be stoked if they get people interested enough in a topic to explore in further depth.
If this article had useful insights for you comment away and/or give a like on the article and on the Tau Ventures’ LinkedIn page, with due thanks for supporting our work. All opinions expressed here are my own. Gain Access to Expert View — Subscribe to DDI Intel
https://medium.com/datadriveninvestor/replacing-your-vc-5-principles-for-entrepreneurs-d3f79991d2fa
['Amit Garg']
2020-12-03 15:44:21.569000+00:00
['Entrepreneur', 'Board Of Directors', 'Startup', 'Venture Capital', 'Entrepreneurship']
How to install Square’s beta SDKs
How to install Square’s beta SDKs We recently released a big update to our SDKs — here’s how to upgrade. Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog [Update: the beta has completed, download the latest version of our SDKs here] As a reminder, if you want help with installation, to give us feedback on the new design and features, or simply to learn more about the APIs, request an invite to our new Slack community here. The changes in version 2.1.0 of our new SDKs are the same for each language (of course, these changes look a little different in each language). The main differences are: Support for v1 endpoints including items and inventory management, employee management, and the v1 version of transaction reporting (the new V1EmployeesApi, V1LocationsApi, V1ItemsApi, and V1TransactionsApi). Renaming and reorganizing of some of the generated APIs (<=2.0.2 → 2.1.0+):
CheckoutApi → CheckoutApi
LocationApi → LocationsApi
TransactionApi → TransactionsApi
RefundApi → TransactionsApi
CustomerApi → CustomersApi
CustomerCardApi → CustomersApi
We’ve also removed the per-request Authorization parameter. With our new design you’ll only need to set your access token once in the beginning, and not in every request. To visualize this difference in php, our current model looks like this:
<?php
require_once(__DIR__ . '/vendor/autoload.php');
$api_instance = new SquareConnect\Api\CheckoutApi();
$result = $api_instance->createCheckout($authorization, $location_id, $body);
?>
and with version 2.1.0 and forward, that same request would look like:
<?php
require_once(__DIR__ . '/vendor/autoload.php');
// Configure OAuth2 access token for authorization: oauth2
SquareConnect\Configuration::getDefaultConfiguration()->setAccessToken('YOUR_ACCESS_TOKEN');
$api_instance = new SquareConnect\Api\CheckoutApi();
$result = $api_instance->createCheckout($location_id, $body);
?>
Installation Guides
PHP — You can install the beta 2.1.0 version of our SDK with composer from the command line: composer require square/connect:dev-release/2.1.0 or by adding the following to your project's composer.json file:
{ "require": { "square/connect": "dev-release/2.1.0" } }
You can also install from GitHub with:
git clone https://github.com/square/connect-php-sdk.git
cd connect-php-sdk
git checkout release/2.1.0
and use the familiar require('connect-php-sdk/autoload.php'); within your application.
Java — You’ll need to get the source from GitHub and:
git clone https://github.com/square/connect-java-sdk.git
cd connect-java-sdk
git checkout release/2.1.0
mvn install -DskipTests
You should now have a new .jar file to include with your project.
Ruby — With Ruby, you can specify the 2.1.0.beta version of the square-connect gem to install by running: gem install square_connect -v 2.1.0.beta
Python — You can install the Python package straight from GitHub with:
git clone https://github.com/square/connect-python-sdk.git
cd connect-python-sdk
git checkout release/2.1.0
sudo python setup.py install
C# — You’ll need to download the files from GitHub and switch to the 2.1.0 version:
git clone https://github.com/square/connect-csharp-sdk.git
cd connect-csharp-sdk
git checkout release/2.1.0
You can then build the .dll by running ./build.sh on Unix-based systems or build.bat on Windows. You will likely need to download the dependencies, but you should be prompted to do so during the build.
To learn more about how our SDKs are made, as well as the release of our Java SDK, check out our earlier post announcing new versions of our client SDKs.
https://medium.com/square-corner-blog/how-to-install-the-beta-sdks-b746503515d9
['Tristan Sokol']
2019-04-18 22:37:40.641000+00:00
['API', 'PHP', 'Java', 'Ruby', 'Python']
CONNECTY, THE SOLUTION TO ALL NEEDS IN KNOWLEDGE
Connecty.io aims to bring together in one place all the actors of the two worlds targeted by the project: innovators and creators of knowledge. The platform intends (see development roadmap) to quickly cover the entire universe of science with the goal of representing all the specialties provided by a great diversity of stakeholders (researchers, laboratories…) of all nationalities: starting with 5,000 experts in France as of June 2018 and a widespread international opening in 2019. Connecty.io intends to boost the development of different poles of knowledge and innovation nodes via the creation of a vast network of knowledge creators and innovators. • A knowledge hub consists of a group of experts in the same field of knowledge. • An innovation node is a place where knowledge is transformed into innovation, located across the various areas of knowledge. This facilitated circulation of knowledge will liberate all the creative potential of modern economies, which, as seen previously, already exists but is insufficiently exploited. This «Map of Science» illustrates the online behavior of scientists accessing different scientific journals, publications, aggregators, etc. Colors represent the scientific discipline of each journal, based on disciplines classified by the Getty Research Institute’s Art and Architecture Thesaurus, while lines reflect the navigation of users from one journal to another when interacting with scholarly web portals. Image credit: Los Alamos National Laboratory. To summarize, the emergence of an ecosystem of knowledge is reflected in the emergence of flows and exchanges between the different actors of knowledge and those of innovation: knowledge circulates, generates new knowledge and transforms itself into innovation. A virtuous circle is then put in place. This is the natural appearance of a circular economy.
Connecty.io intends to bring a universal vision of knowledge and science by bringing together creators of knowledge from all fields of research on the platform: Life and Earth sciences, Social sciences, Physical sciences and Chemistry, Engineering and systems, Economic sciences. In another illustration of the global vision carried by Connecty.io, the knowledge proposed on the platform takes three forms: • Fundamental knowledge: from fundamental research, whose main objectives are the understanding of natural phenomena and the establishment of theories or explanatory models. In the short term, the purpose is not to have an economic application but to create knowledge, to explain, and to elaborate theories. • Knowledge: resulting from applied research, knowledge results from a work of transformation of fundamental knowledge into concrete applications. • Know-how: the know-how lies in the capacity to draw the quintessence of a technology, an innovation, an application. An example of progress from fundamental knowledge to innovation is fully illustrated by the late-1980s work of Albert Fert (2007 Nobel Prize in Physics, shared with Peter Grünberg, for the discovery of giant magnetoresistance), which led to the first concrete applications developed by IBM in 1997. Thanks to the diversity of the actors it brings together, the Connecty.io platform illustrates the fact that these different forms of knowledge are everywhere, in every place (university campuses, laboratories, factories…), and in each of us (moral persons, physical persons: researchers, experts, craftsmen…). For fundamental knowledge: • In public laboratories. For basic knowledge from applied research: • In those same laboratories. • In the CRS (Contract Research Structures): private structures with high scientific level and technical skills in one or several fields. The CRS’s core business is to provide research and technological development services for SMEs, mid-cap or large companies.
• At the heart of innovative companies. For the know-how: • In companies built around these experts and specialists, which translate the innovations of all these actors into products distributed to everyone. The team looks forward to seeing you on our social networks: Telegram Discord Linkedin Twitter Facebook Youtube Connecty.io Read the Connecty white paper: White paper
https://medium.com/connecty/connecty-the-solution-to-all-needs-in-knowledge-b8825e760561
[]
2018-10-08 11:34:38.613000+00:00
['Artificial Intelligence', 'Blockchain', 'ICO', 'Science', 'Bitcoin']
Docker 101: Fundamentals and Practice
Docker 101: Fundamentals and Practice Start using docker today after these 4 "Hello, World!" examples If you're tired of hearing your coworkers praise Docker and its benefits at every chance they get, or you're tired of nodding your head and walking away every time you find yourself in one of these conversations, you've come to the right place. Also, if you are looking for a new excuse to wander off without getting fired, keep reading and you'll thank me later. Docker Here's Docker's definition, according to Wikipedia: Docker is a computer program that performs operating-system-level virtualization. Pretty simple, right? Well, not exactly. Alright, here's my definition of what docker is: Docker is a platform for creating and running containers from images. Still lost? No worries, that's because you probably don't know what containers or images are. Images are single files containing all the dependencies and configurations required to run a program, while containers are the instances of those images. Let's go ahead and see an example of that in practice to make things clearer. Important note: Before you continue, make sure you install docker using the recommended steps for your operating system. Part 1. "Hello, World!" from a Python image Let's say you don't have Python installed in your machine — or at least not the latest version - and you need python to print "Hello, World!" in your terminal. What do you do? You use docker! Go ahead and run the following command: docker run --rm -it python:3 python Don't worry, I'll explain that command in a second, but right now you are probably seeing something like this: It might take a few moments for this command to run for the first time That means we are currently inside a docker container created from a python 3 docker image, running the python command. To finish off the example, type print("Hello, World!") and watch as the magic happens. A "Hello, World!". Much wow! 
Alright, you did it, but before you start patting yourself on the back, let's take a step back and understand how that worked. Breaking it down Let's start from the beginning. The docker run command is docker's standard tool to help you start and run your containers. The --rm flag is there to tell the Docker Daemon to clean up the container and remove the file system after the container exits. This helps you save disk space after running short-lived containers like this one, which we only started to print "Hello, World!". The -t (or --tty) flag tells Docker to allocate a virtual terminal session within the container. This is commonly used with the -i (or --interactive) option, which keeps STDIN open even if running in detached mode (more about that later). Note: Don't worry too much about these definitions right now. Just know that you will use the -it flag anytime you want to type some commands on your container. Lastly, python:3 is the base image we used for this container. Right now, this image comes with python version 3.7.3 installed, among other things. Now, you might be wondering where this image came from, and what's inside of it. You can find the answers to both of these questions right here, along with all the other python images we could have used for this example. Last but not least, python was the command we told Docker to execute inside our python:3 image, which started a python shell and allowed our print("Hello, World!") call to work. One more thing To exit python and terminate our container, you can use CTRL/CMD + D or exit() . Go ahead and do that right now. After that, try to execute our docker run command again and you'll see something a little bit different, and a lot faster. Much faster. Wow! That's because we already downloaded the python:3 image, so our container starts a lot faster now. Part 2. Automated "Hello World!" from a Python image What's better than writing "Hello, World!" in your terminal once? You got it, writing it twice! 
Since we cannot wait to see "Hello, World!" printed in our terminal again, and we don't want to go through the hassle of opening up python and typing print again, let's go ahead and automate that process a little bit. Start by creating a hello.py file anywhere you'd like. # hello.py print("Hello, World!") Next, go ahead and run the following command from that same folder. docker run --rm -it -v $(pwd):/src python:3 python /src/hello.py This is the result we are looking for: Great! YAHW (Yet Another "Hello World!") Note: I used ls before the command to show you that I was in the same folder that I created the hello.py file in. As we did earlier, let's take a step back and understand how that worked. Breaking it down We are pretty much running the same command we ran in the last section, apart from two things. The -v $(pwd):/src option tells the Docker Daemon to start up a volume in our container. Volumes are the best way to persist data in Docker. In this example, we are telling Docker that we want the current directory - retrieved from $(pwd) - to be added to our container in the folder /src . Note: You can use any other name or folder that you want, not only /src If you want to check that /src/hello.py actually exists inside our container, you can change the end of our command from python /src/hello.py to bash . This will open an interactive shell inside our container, and you can use it just like you would expect. Isn't that crazy? Note: We can only use bash here because it comes pre-installed in the python:3 image. Some images are so simple that they don't even have bash . That doesn't mean you can't use it, but you'll have to install it yourself if you want it. The last bit of our command is the python /src/hello.py instruction. By running it, we are telling our container to look inside its /src folder and execute the hello.py file using python . Maybe you can already see the wonders you can do with this power, but I'll highlight it for you anyway. 
Using what we just learned, we can pretty much run any code from any language inside any computer without having to install any dependencies at the host machine - except for Docker, of course. That's a lot of bold text for one sentence, so make sure you read that twice! Part 3. Easiest "Hello, World!" possible from a Python image using Dockerfile Are you tired of saying hello to our beautiful planet, yet? That's a shame, cause we are gonna do it again! The last command we learned was a little bit verbose, and I can already see myself getting tired of typing all of that code every time I wanna say "Hello, World!" Let's automate things a little bit further now. Create a file named Dockerfile and add the following content to it: # Dockerfile FROM python:3 WORKDIR /src/app COPY . . CMD [ "python", "./hello.py" ] Now run this command in the same folder you created the Dockerfile : docker build -t hello . All that's left to do now is to go crazy using this code: docker run hello Note that you don’t even need to be in the same folder anymore You already know how it is. Let's take a moment to understand how a Dockerfile works now. Breaking it down Starting with our Dockerfile, the first line FROM python:3 is telling Docker to start everything with the base image we are already familiar with, python:3 . The second line, WORKDIR /src/app , sets the working directory inside our container. This is for some instructions that we'll execute later, like CMD or COPY . You can see the rest of the supported instructions for WORKDIR right here. The third line, COPY . . is basically telling Docker to copy everything from our current folder (first . ), and paste it on /src/app (second . ). The paste location was set with the WORKDIR command right above it. Note: We could achieve the same results by removing the WORKDIR instruction and replacing the COPY . . instruction with COPY . /src/app . 
In that case, we would also need to change the last instruction, CMD ["python", "./hello.py"] to CMD ["python", "/src/app/hello.py"] . Finally, the last line CMD ["python", "./hello.py"] is providing the default command for our container. It's essentially saying that every time we run a container from this configuration, it should run python ./hello.py . Keep in mind that we are implicitly running /src/app/hello.py instead of only hello.py , since that's where we pointed our WORKDIR . Note: The CMD command can be overwritten at runtime. For instance, if you want to run bash instead, you would do docker run hello bash after building the container. With our Dockerfile finished, we go ahead and start our build process. The docker build -t hello . command reads all the configuration we added to our Dockerfile and creates a docker image from it. That's right, just like the python:3 image we've been using for this entire article. The . at the end tells Docker that we want to run a Dockerfile at our current location, and the -t hello option gives this image the name hello , so we can easily reference it at runtime. After all of that, all we need to do is run the usual docker run instruction, but this time with the hello image name at the end of the line. That will start a container from the image we recently built and finally print the good ol' "Hello, World!" in our terminal. Extending our base image What do we do if we need some dependency to run our code that does not come pre-installed with our base image? To solve that problem, docker has the RUN instruction. Following our python example, if we needed the numpy library to run our code, we could add the RUN instruction right after our FROM command. # Dockerfile FROM python:3 # NEW LINE RUN pip3 install numpy WORKDIR /src/app COPY . . CMD [ "python", "./hello.py" ] The RUN instruction basically gives a command to be executed by the container's terminal. 
That way, since our base image already comes with pip3 installed, we can use pip3 install numpy . Note: For a real python app, you would probably add all the dependencies you need to a requirements.txt file, copy it over to the container, and then update the RUN instruction to RUN pip3 install -r requirements.txt . Part 4. "Hello, World!" from a Nginx image using a long-lived detached container I know you are probably tired of hearing me say it, but I have one more "Hello" to say before I go. Let's go ahead and use our newly acquired docker power to create a simple long-lived container, instead of these short-lived ones we've been using so far. Create an index.html file in a new folder with the following content. # index.html <h1>Hello, World!</h1> Now, let's create a new Dockerfile in the same folder. # Dockerfile FROM nginx:alpine WORKDIR /usr/share/nginx/html COPY . . Build the image and give it the name simple_nginx , like we previously did. docker build -t simple_nginx . Lastly, let's run our newly created image with the following command: docker run --rm -d -p 8080:80 simple_nginx You might be thinking that nothing happened because you are back to your terminal, but let's take a closer look with the docker ps command. I had to crop the output, but you’ll see a few other columns there The docker ps command shows all the running containers in your machine. As you can see in the image above, I have a container named simple_nginx running in my machine right now. Let's open up a web browser and see if nginx is doing its job by accessing localhost:8080 . Hurray! (this is the last time, I promise) Everything seems to be working as expected, and we are serving a static page through the nginx running inside our container. Let's take a moment to understand how we accomplished that. Breaking it down I'm going to skip the Dockerfile explanation because we already learned those commands in the last section. 
The only "new" thing in that configuration is the nginx:alpine image, which you can read more about here. Apart from what is new, this configuration works because nginx uses the /usr/share/nginx/html folder to search for an index.html file and start serving it, so since we named our file index.html and configured the WORKDIR to be /usr/share/nginx/html , this setup will work right out of the box. The build command is exactly like the one we used in the last section as well; we are only using the Dockerfile configuration to build an image with a certain name. Now for the fun part, the docker run --rm -d -p 8080:80 simple_nginx instruction. Here we have two new flags. The first one is the detached ( -d ) flag, which means that we want to run this container in the background; that's why we are back at our terminal right after using the docker run command, even though our container is still running. The second new flag is the -p 8080:80 option. As you might have guessed, this is the port flag, and it's basically mapping the port 8080 from our local machine to the port 80 inside our container. You could have used any other port instead of 8080 , but you cannot change the port 80 without adding an additional setting to the nginx image, since 80 is the standard port the nginx image exposes. Note: If you want to stop a detached container like this one, you can use the docker ps command to get the container's name (not image), and then use the docker stop instruction with the desired container's name at the end of the line. Part 5. The end That's it! If you are still reading this, you have all the basics to start using Docker today on your personal projects or daily work. Let me know what you thought about this article in the comments, and I'll make sure to write a follow-up article covering more advanced topics like docker-compose somewhere in the near future. If you have any questions, please let me know. Cheers!
https://medium.com/free-code-camp/docker-101-fundamentals-and-practice-edb047b71a51
['Guilherme Pejon']
2019-04-29 19:02:13.834000+00:00
['Nginx', 'Python', 'Productivity', 'Docker', 'Tech']
Dismantling Neural Networks to Understand the Inner Workings with Math and Pytorch
Motivation As a child, you might have dismantled a toy in a moment of frenetic curiosity. You were drawn perhaps towards the source of the sound it made. Or perhaps it was a tempting colorful light from a diode that called you forth, moved your hands into cracking the plastic open. Sometimes you may have felt deceived that the inside was nowhere close to what the shiny outside led you to imagine. I hope you have been lucky enough to open the right toys. Those filled with enough intricacies to make breaking them open worthwhile. Maybe you found a futuristic looking DC-motor. Or maybe a curious looking speaker with a strong magnet on its back that you tried on your fridge. I am sure it felt just right when you discovered what made your controller vibrate. We are going to do exactly the same. We are dismantling a neural network with math and with Pytorch. It will be worthwhile, and our toy won’t even break. Maybe you feel discouraged. That’s understandable. There are so many different and complex parts in a neural network. It is overwhelming. It is the rite of passage to a wiser state. So to help ourselves we will need a reference, some kind of Polaris to ensure we are on the right course. The pre-built functionalities of Pytorch will be our Polaris. They will tell us the output we must get. And it will fall upon us to find the logic that will lead us to the correct output. If differentiations sound like forgotten strangers that you once might have been acquainted with, fret not! We will make introductions again and it will all be mighty jovial. I hope you will enjoy. Linearity The value of a neuron depends on its inputs, weights, and bias. To compute this value for all neurons in a layer, we calculate the dot product of the matrix of inputs with the matrix of weights, and we add the bias vector. We represent this concisely when we write: The values of all neurons in one layer. 
Conciseness in mathematical equations however, is achieved with abstraction of the inner workings. The price we pay for conciseness is making it harder to understand and mentally visualize the steps involved. And to be able to code and debug such intricate structures as Neural Networks we need both deep understanding and clear mental visualization. To that end, we favor verbosity: The value of one neuron with three inputs, three weights, and a bias. Now the equation is grounded with constraints imposed by a specific case: one neuron, three inputs, three weights, and a bias. We have moved away from abstraction to something more concrete, something we can easily implement:
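The verbose single-neuron equation above (three inputs, three weights, and a bias) can be implemented directly in plain Python. The numeric values below are made-up illustration values, not taken from the article:

```python
# One neuron with three inputs, three weights, and a bias -- the verbose form above.
# The numbers here are arbitrary illustration values.
inputs = [1.0, 2.0, 3.0]
weights = [0.2, 0.8, -0.5]
bias = 2.0

# z = x1*w1 + x2*w2 + x3*w3 + b
z = sum(x * w for x, w in zip(inputs, weights)) + bias
print(z)  # 0.2 + 1.6 - 1.5 + 2.0, i.e. about 2.3
```

The concise matrix form the article mentions is then just this dot product written for a whole layer at once, e.g. torch.matmul(x, w) + b in PyTorch.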
https://towardsdatascience.com/dismantling-neural-networks-to-understand-the-inner-workings-with-math-and-pytorch-beac8760b595
['Mehdi Amine']
2020-06-05 23:56:03.440000+00:00
['Artificial Intelligence', 'Machine Learning', 'Python', 'Data Science', 'Deep Learning']
A Letter To My Future Self
Dear Rozemarijn, Today is November 19th, 2020. The transition from 2019 to 2020 was harder than you expected, and 2020 didn’t have a lot of good moments either. Let’s look at the past year. September 2019, you weren’t able to get financial support for your study anymore, as you had quit. Thus, you needed to find another way to make ends meet. You applied for a few positions, but never got invited. Eventually, you got the opportunity to earn a bit on the side by writing. Little did you know, it was only the beginning of a much bigger process. Around October 2019, your therapist told you she thought you were autistic. She wanted to send you to a specialized institute with a long waiting list. Fortunately, you were able to get in around November/December. In the meantime, the writing work kept growing, and you suffered more and more from stress. By coincidence, you discovered that a certain drug helped you to write. As another positive side-effect, it also helped to keep your IBS in check. With small doses each time, and a long break every time, you were able to use it to get through the hard days. In February, you heard that you weren’t getting the autism diagnosis you hoped for. The people at the institute told you it would close too many doors, even when you checked all the boxes. You felt discouraged and got in contact with your doctor and the practice’s psychologist. Eventually, you contacted your therapist as well. You talked about the drug, and how it had helped you so much. However, you also told them that you didn't want to rely on drugs to function, so you asked for a prescription for medicine with the same properties. Unfortunately, this was to no avail. In April 2020, you learned about PDA, pathological demand avoidance. It’s a characteristic some autistic people have which causes you to be unable to complete even the simplest tasks. 
It’s as if your brain tells you “no, I won’t do it.” It took a while before your therapist understood what you meant, but she soon discovered that this could be caused by the disproportion between your intelligence and the processing speed at which your brain functions. You’re still thankful for that explanation. Your therapist encouraged you to get in contact again with the doctor and the practice’s psychologist to ask about the use of medicine. Your doctor only wanted to prescribe anything if you had a diagnosis for ADD or ADHD, which you knew would be useless. The practice’s psychologist did want to help you, but could only refer you to a psychiatrist. You put all your hope in her, but again, to no avail. She told you she had no experience with the combination of autism and Ritalin, and therefore only prescribed you a far more dangerous medicine: an antipsychotic. The medicine made you sleepy and it was as if you were living in a weird dream, so you stopped taking it. The therapist then recommended you get in touch with an old classmate who did have a Ritalin prescription. He was so kind as to give you a part of his leftover medicine. It was as if a new world opened up to you. You could work again, your IBS didn’t act up as much, and you could handle sudden changes with fewer issues. The only downside was that the medicine he gave you was only enough for one week, and you still couldn’t proceed. Fortunately, he had a lot left over, as he didn’t use it that often, and he shared it with you. In the meantime, you got in contact with your childhood doctor, as he might be able to help you. It took a while, but eventually, you were able to come by the office. He heard your story and, while hesitant, trusted you and prescribed you Ritalin for a 3-month period. However, you had to promise him that you’d be back after 3 months to get your blood and heart checked, as the medicine can influence your health. We’ve just finished month one. 
In the midst of all this, you experienced a higher workload, having to raise a puppy, a fight with your in-laws that resulted in having to pay rent soon, a sick grandpa who probably won’t see the end of the coming year, depression, the pandemic and its issues, issues with a governmental covid support payment, an existential crisis, and having to keep up with all your relationships and friendships. At the moment, your profession seems to be doing well. You have two great regulars and a few smaller regulars. You quit Fiverr, and you probably earn enough to get around, though you still need to keep an eye out for the coming year. Still, there are days where you suffer from depression, meltdowns about the smallest of things, and other mental issues. According to your therapist, this is because you became an entirely different person within one year. Within one year, you went from being a student to being a full-time adult who not only has their own household, but also their own business, a pet to raise, and a relationship to keep. According to your therapist, you’re doing amazing considering the circumstances. She said your stress and the meltdowns that come with it are normal. I know you’re not entirely sure of this yourself, but you hope it just takes some time to get used to it, as your therapist predicted. You also hope that the less stress you experience, the more you’ll be able to lose weight, because that is one big issue right now. Besides these experiences this year, you’ve also started to discover yourself again. You came out as non-binary and you’ve started to explore that part of your life. You’ve also changed your style and are currently reevaluating your interests, choices, and the like. It’s a turbulent period. Your therapist calls it “the second puberty”. Fortunately, your family is here to support you, including your partner and your friends. You’re lucky to have them. Twitter and TikTok also offer a community with like-minded people. 
And you write through Medium, talking about your experiences, feelings, and problems. It’s a great place to write. When you’re reading this, a few months will have passed, and I’m curious to know how you’re doing. I hope you’ve found yourself in less stressful situations, and I hope your depression has lessened as well. I also hope you’ve had more time to develop yourself as a non-binary person. Do you still want a beard and microdoses of testosterone? In that case, you should probably follow up on that. I wish you all the best. With love, Rozemarijn
https://medium.com/artfullyautistic/a-letter-to-my-future-self-d0a89ba60cea
['Rozemarijn Van Kampen']
2020-11-20 05:13:37.675000+00:00
['Autism', 'Psychology', 'Neurodivergent', 'Mental Health', 'Depression']
How I improved a Class Imbalance problem using sklearn’s LinearSVC
By Tracyrenee. Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection. The advantages of support vector machines are: Effective in high dimensional spaces. Effective in cases where the number of dimensions, or features, is greater than the number of samples. Uses a subset of training points in the decision function, called support vectors, so it is also memory efficient. Versatile: different kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels. SVMs can be used on datasets where there is a class imbalance by setting the class_weight parameter when the model is created. Below is a diagram of a dataset that has weighted and unweighted boundaries:- For large datasets, LinearSVC is the most appropriate model to use, and is the model that was implemented in this post for training, fitting, and predicting on the large dataset used, which was an Analytics Vidhya Hackathon question concerning the likelihood of having a stroke, the link being found here:- McKinsey Analytics Online Hackathon — Healthcare Analytics (analyticsvidhya.com) The problem statement for this competition question reads as follows:- “Your Client, a chain of hospitals aiming to create the next generation of healthcare for its patients, has retained McKinsey to help achieve its vision. The company brings the best doctors and enables them to provide proactive health care for its patients. One such investment is a Center of Data Science Excellence. In this case, your client wants to have study around one of the critical disease “Stroke”. Stroke is a disease that affects the arteries leading to and within the brain. A stroke occurs when a blood vessel that carries oxygen and nutrients to the brain is either blocked by a clot or bursts (or ruptures). 
When that happens, part of the brain cannot get the blood (and oxygen) it needs, so it and brain cells die. Over the last few years, the Client has captured several health, demographic and lifestyle details about its patients. This includes details such as age and gender, along with several health parameters (e.g. hypertension, body mass index) and lifestyle related variables (e.g. smoking status, occupation type). The Client wants you to predict the probability of stroke happening to their patients. This will help doctors take proactive health measures for these patients.” In order to solve this problem I created an .ipynb file using Google Colab, a great, free online Jupyter Notebook that I can use from any computer that has internet access. Many libraries are already installed on Google Colab, so I only needed to import them into the program, namely pandas, numpy and seaborn. I downloaded the train and test files from Analytics Vidhya’s competition and saved them into my personal GitHub account. Once saved, I loaded them and read them into the .ipynb file I had created:- I checked for any null values and found that the columns “bmi” and “smoking_status” had many missing values that needed to be imputed:- I imputed the missing values by replacing any missing values in the “bmi” column with the median value. I replaced any missing values in the “smoking_status” column with the most common value, the mode(). I then put the target, or “stroke”, column on a graph using seaborn’s distplot() function. 
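The imputation step described above can be sketched as follows. This is a minimal, hypothetical example: the column names "bmi" and "smoking_status" match the post, but the data here is made up.

```python
import pandas as pd

def impute(df: pd.DataFrame) -> pd.DataFrame:
    """Fill missing values: median for numeric 'bmi', mode for 'smoking_status'."""
    df = df.copy()
    # Numeric column: replace NaNs with the median value
    df["bmi"] = df["bmi"].fillna(df["bmi"].median())
    # Categorical column: replace NaNs with the most common value (the mode)
    df["smoking_status"] = df["smoking_status"].fillna(df["smoking_status"].mode()[0])
    return df

df = pd.DataFrame({"bmi": [20.0, None, 30.0],
                   "smoking_status": ["never", None, "never"]})
clean = impute(df)
```

The same pattern extends to any mix of numeric and categorical columns; filling with the median rather than the mean keeps the numeric fill robust to outliers.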
When the target, being a dependent variable, is put on a graph it is clearly evident that there is a large class imbalance in this competition question:- I counted the number of examples and found only 1.8% of the values were class 1, revealing special measures need to be taken to address this class imbalance:- I used a preprocessing measure by checking to see which columns were categorical and put them through an ordinal encoder, thereby transforming them into numeric values:- I defined the dependent and independent variables, which will be used to make predictions. The target variable, train.stroke, is defined as y. The independent variables are X and X_test. The independent variables are composed of the train and test files with “id” and “stroke” dropped from the train file and “id” dropped from the test file:- I put the values in the X and X_test dataframes on a scaler, using sklearn’s StandardScaler() function. Scaling is necessary to put the data in the same range to optimise the predictions that will be made on it:- It is always a good idea to graphically view the data as to how it appears in the computer’s memory, so I created a three dimensional graph of this information. As can be seen, the data represents a huge mass of zeros and the small fraction of ones are scattered within the mass. There are no clear lines of demarcation separating these two classes, which makes it difficult to make accurate predictions:- After I created the three dimensional graph of the target variable, I split the X dataframe into training and validating sets using sklearn’s train_test_split() function. I accounted for the class imbalance by stratifying the split (note that the stratify parameter takes the label array itself, i.e. stratify=y, rather than True). I put y_val, being the validation set target variable, on a graph and it illustrated the class imbalance that will be present when the validation set is predicted on:- I defined the model and chose LinearSVC because it is suitable for large datasets. 
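Putting the preprocessing pieces above together (ordinal encoding, scaling, and a stratified split), a rough sketch might look like this. The data below is synthetic, not the competition's:

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder, StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_cat = rng.choice(["Male", "Female"], size=(200, 1))  # a categorical feature
X_num = rng.normal(size=(200, 2))                      # two numeric features
y = np.zeros(200, dtype=int)
y[:20] = 1                                             # imbalanced target: 10% ones

# Categorical -> numeric via ordinal encoding, then scale everything together
X = np.hstack([OrdinalEncoder().fit_transform(X_cat), X_num])
X = StandardScaler().fit_transform(X)

# stratify=y preserves the class ratio in both splits
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```

With stratification, the validation set keeps roughly the same 10% positive rate as the training set, so the validation metrics reflect the imbalance the model will actually face.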
I set the parameters up to attain maximum efficiency: class_weight='balanced', dual=True, max_iter=680, and C=10. I achieved an accuracy of 96.59% using this parameter tuning:- I predicted on the validation dataset and also achieved an accuracy of 96.59%! I put the prediction, y_pred, on a graph and was pleased to see that the class imbalance had been addressed. The model had picked up 80 ones as opposed to y_val actually having 78:- I decided to plot a graph of the correct examples versus the incorrect ones. The correct examples are portrayed as the purple dots while the incorrect ones are yellow:- I predicted on the X_test dataset and was pleased that the class imbalance had been addressed when I put the prediction on a graph:- When I checked the predictions on Analytics Vidhya’s solution checker I found that I scored 54.99, which is an improvement on the previous model I had used, CatBoost. If I had adjusted the parameters on the LinearSVC() model I could have achieved a higher accuracy, but the class imbalance would not have been accurately addressed:- The code for this program can be found in its entirety in my personal GitHub account, the link being found here:- AV-Stroke/AV_Stroke_LinearSV.ipynb at main · TracyRenee61/AV-Stroke (github.com)
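As a hedged sketch of the final modeling step, here is how a LinearSVC with the parameters quoted above could be fit. The data is synthetic, so the accuracy figures from the post will not reproduce:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = np.zeros(500, dtype=int)
y[:25] = 1     # ~5% positive class, mimicking a strong imbalance
X[:25] += 2.0  # shift the rare class so there is some signal to learn

# class_weight='balanced' reweights each class inversely to its frequency,
# so the optimizer cannot simply ignore the rare class
model = LinearSVC(class_weight="balanced", C=10, max_iter=680, dual=True)
model.fit(X, y)
pred = model.predict(X)
```

Without class_weight='balanced', a model on data this skewed can reach roughly 95% accuracy by predicting all zeros; the reweighting trades some raw accuracy for actually detecting the minority class.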
https://medium.com/ai-in-plain-english/how-i-improved-a-class-imbalance-using-sklearns-linearsvc-9f291d89804b
[]
2020-12-20 09:53:06.734000+00:00
['Linearsvc', 'Python', 'Artificial Intelligence', 'Class Imbalance', 'Data Science']
Why Jupyter Is Not My Ideal Notebook
Jupyter’s main features are: inline code execution, easy idea structuring, and nice displays of pictures and dataframes. This overall flexibility has made it a preferred tool compared to the more rustic IPython command line. However, it should not be forgotten that it is no more than a REPL where you can navigate efficiently throughout the history. Thus it is not a production tool. However, tons of machine learning developers have experienced the deep pain of refactoring a deep learning notebook into a real algorithm in production (also reddit or stackoverflow). Keeping a lean mindset, we should strive to reduce waste as much as possible. Introduction At Sicara, we build machine learning based products for our customers: machine learning, meaning the customer comes with a business need and we have to deliver a satisfying algorithm as fast as possible; and products, meaning we need to develop in a production-ready mindset. Algorithms are deployed in the cloud, served and updated with APIs, etc. First of all, you definitely need a versioning tool, which is a pain with Jupyter (also reddit, reddit again, quora). Not only for your code, but also for your experiments. You need to be able to reproduce any result obtained so far with 100% confidence. How often do data scientists come up with results they cannot reproduce? Furthermore, when using notebooks, people often tend to mix three kinds of usage: development: defining methods and tools to actually do something; debugging/applying: running the piece of code with real data to see what is going on; visualization: presenting the results in a clean and reproducible output. In order to reduce waste, these steps should be clearly defined and separated so as to be able to change one without the other and vice versa. 
I have come to the conclusion that: to produce high-quality, tested code, it is better to use a first-class IDE; to debug code, there are visual debugging tools; to write down reports, I am more comfortable with an expressive markup language (Markdown, reST, LaTeX). Fortunately, a well-configured IDE can do all of these things. For instance, if you come from the R community you certainly use RStudio, which allows you to do so: native code completion, auto-fixes, etc.; direct visual debugging; and Rmarkdown/knitr/Sweave to generate dynamic and beautiful reports. Develop production-ready code As soon as you want to make an experiment, i.e. write a method to do something to your data, you should think about its usage, limit cases, etc. Do it in a separate file, document and unit-test it. Doing so you make sure that: your method actually does what you want; your code can be safely used somewhere else in your project. Because you will have to organize your tools, it makes you think about the structure of your pipeline, the things you need, what you are likely to change, etc. … Read the full article here.
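The "separate file, document and unit-test it" workflow above can be sketched like this (the file and function names are hypothetical, and the test runs under pytest):

```python
# preprocessing.py -- the method lives outside the notebook, documented
# and importable anywhere in the project
def normalize(values):
    """Scale a sequence of numbers to the [0, 1] range.

    Thinking about limit cases up front: an empty or constant sequence
    cannot be normalized, so we fail loudly instead of returning garbage.
    """
    if not values:
        raise ValueError("cannot normalize an empty sequence")
    lo, hi = min(values), max(values)
    if lo == hi:
        raise ValueError("cannot normalize a constant sequence")
    return [(v - lo) / (hi - lo) for v in values]


# test_preprocessing.py -- the matching unit test
def test_normalize_maps_to_unit_interval():
    assert normalize([2, 4, 6]) == [0.0, 0.5, 1.0]
```

Once the method is tested in isolation, the notebook only imports and applies it, which keeps the development, debugging and visualization usages cleanly separated.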
https://medium.com/sicara/jupyter-notebook-analysis-production-b2d585204520
['Clément Walter']
2020-01-30 13:48:53.856000+00:00
['Jupyter Notebook', 'Machine Learning', 'Programming', 'Data Science', 'Productivity']
Secure, Efficient Docker-in-Docker with Nestybox
Docker containers are great at running application micro-services. But can you run Docker itself inside a Docker container? And can you do so securely? This article describes Docker-in-Docker, the use cases for it, pros & cons of existing solutions, and how Nestybox has developed a new solution that allows you to run Docker-in-Docker securely and efficiently, without using privileged containers. Docker users (e.g., app developers, QA engineers, and DevOps) will find this article useful. TL;DR If you want to see how easy it is to deploy Docker-in-Docker securely using a Nestybox system container, check this screencast (best viewed on a big screen): In the rest of the article, we explain what Docker-in-Docker is, when it’s useful, some current problems with it, and how Nestybox has developed a solution that solves these problems. If you want a quick summary, go to the end of this article. What is Docker-in-Docker? Docker-in-Docker is just what it says: running Docker inside a Docker container. It implies that the Docker instance inside the container would be able to build and run containers. Use Cases So when would running Docker-in-Docker be useful? Turns out there are several valid scenarios. DinD in CI pipelines is the most common use case. It shows up when a Docker container is tasked with building or running Docker containers. For example, in a Jenkins pipeline, the Jenkins agent may be a Docker container tasked with running other Docker containers. This requires Docker-in-Docker. But CI is not the only use case. Another common use case is software developers that want to play around with Docker containers in a sandbox environment, isolated from their host environment where they do real work. Yet another use case is a system admin on a shared host that wants to allow users on the host to deploy Docker containers. 
Currently, this requires giving users the equivalent of “root” privileges on the system (e.g., by adding users to the “docker” group), which is not acceptable from a security perspective. In this case, giving each user an isolated environment inside of which they can deploy their own Docker containers in total isolation from the rest of the host would be ideal. For all of the above, Docker-in-Docker is a great solution as it provides a lighter-weight, easier-to-use alternative to a virtual machine (VM). DinD and DooD Currently, there are two well-known options to run Docker inside a container: Running the Docker daemon inside a container (DinD). Running only the Docker CLI in a container, and connecting it to the Docker daemon on the host. This approach has been nicknamed Docker-out-of-Docker (DooD). I’ll briefly describe each of these approaches and their respective benefits and drawbacks. I will then describe how Nestybox offers a solution that overcomes the current shortcomings of both of these. DinD In the DinD approach, the Docker daemon runs inside a container and any containers it creates exist inside said container (i.e., inner containers are “nested” inside the outer container). The figure below illustrates this. DinD has gotten a bad rap in the past, not because the use cases for it are invalid but rather due to technical problems in getting it to work. This blog article by Jérôme Petazzoni (until recently a developer at Docker) describes some of these problems and even recommends that Docker-in-Docker be avoided. But things have improved since that blog was written (back in 2015). In fact, Docker (the company) officially supports DinD and maintains a DinD container image. But there’s a catch, however: running Docker’s DinD image requires that the outer container be configured as a “privileged” container, as shown in the figure above. Running a privileged container is risky at best. 
It’s equivalent to giving the container root access to your machine (i.e., it has full privileges, access to all host devices, access to all kernel settings, etc.). For example, from within a privileged container, you can easily reboot the host (!) with: $ echo 1 > /proc/sys/kernel/sysrq && echo b > /proc/sysrq-trigger Because of this, running privileged containers should be avoided in general (for the same reason you wouldn’t log in as root on your host for your daily work). It’s a non-starter in systems where the workloads running inside the container are untrusted. Another problem with this solution is that it leads to Docker “volume sprawl”. Each time a DinD container is created, Docker implicitly creates a volume on the host to store the inner Docker images. When the container is destroyed, the volume remains, wasting storage on the host. There is plenty of pain out there with Docker’s DinD solution, especially in CI/CD use cases. The need for privileged containers is causing heartburn. As explained later, however, Nestybox has now developed a solution that allows running DinD efficiently and without using privileged containers, and one that overcomes the inner Docker image cache problems as well as others. 
One key benefit is that it bypasses the complexities of running the Docker daemon inside a container and does not require a privileged container. It also avoids having multiple Docker image caches in the system (since there is only one Docker daemon on the host), which may be good if your system is constrained on storage space. But it has important drawbacks too. The main drawback is that it results in poor context isolation because the Docker CLI runs within a different context than the Docker daemon. The former runs within the container’s context; the latter runs within the host’s context. This leads to problems such as: Permission problems: the user in the Docker CLI container may not have sufficient permissions to access the Docker daemon on the host via the socket. This is a common problem causing headaches, in particular in CI/CD scenarios such as Jenkins + Docker. Container naming collisions: if the container running the Docker CLI creates a container named some_cont, the creation will fail if some_cont already exists on the host. Avoiding such naming collisions may not always be trivial depending on the use case. Mount paths: if the container running the Docker CLI creates a container with a bind mount, the mount path must be relative to the host (as otherwise, the Docker daemon on the host won’t be able to perform the mount correctly). Port mappings: if the container running the Docker CLI creates a container with a port mapping, the port mapping occurs at the host level, potentially colliding with other port mappings. This approach is also not a good idea if the containerized Docker is orchestrated by Kubernetes. In this case, any containers created by the containerized Docker CLI will not be encapsulated within the associated Kubernetes pod, and will thus be outside of Kubernetes’ visibility and control. 
Finally, there are security concerns too: the container running the Docker CLI can manipulate any containers running on the host. It can remove containers created by other entities on the host, or even create insecure privileged containers and put the host at risk. Depending on your use case and environment, these drawbacks may void the use of this approach. Solution: DinD with Nestybox System Containers As described above, both the Docker DinD image and DooD approaches have some important drawbacks. Nestybox offers an alternative solution that overcomes these drawbacks: run Docker-in-Docker using “system containers”. In other words, use Docker to deploy a system container, and run Docker inside the system container. A Nestybox system container is a container designed to run system-level software in it (like systemd and Docker) as well as applications. You deploy it with Docker, just like any other Docker container. You only need to point Docker to the Nestybox container runtime “Sysbox”, which you need to download and install on your machine. For example: $ docker run --runtime=sysbox-runc -it my-dind-image More info on system containers can be found in this Nestybox blog post. Within a Nestybox system container, you are able to run Docker inside the container easily and securely, with total isolation between the Docker inside the system container and the Docker on the host. No need for insecure privileged containers anymore, as shown below: The Sysbox container runtime takes care of setting up the system container such that Docker can run inside the container as if it were running on a physical host or VM (e.g., with a dedicated image cache, using its fast storage drivers, etc.). This solution avoids the issues with DooD and enables use of DinD securely. And it’s efficient: the Docker inside the system container uses its fast image storage driver, and the volume sprawl problem described earlier is solved. 
The screencast video at the beginning of this article shows the solution at work. There are written instructions for it in the Sysbox Quickstart Guide, as well as on the Nestybox blog site. The system container image that you deploy is fully configurable by you. For example, you can choose to use Docker’s official DinD image and deploy it using Docker’s official instructions, except that you simply replace the --privileged flag with the --runtime=sysbox-runc flag in the docker run command: $ docker run --runtime=sysbox-runc --name some-docker -d \ --network some-network --network-alias docker \ -e DOCKER_TLS_CERTDIR=/certs \ -v some-docker-certs-ca:/certs/ca \ -v some-docker-certs-client:/certs/client \ docker:dind Alternatively, you can create a system container image that works as a Docker sandbox, inside of which you can run Docker (both the CLI and the daemon) as well as any other programs you want (e.g., systemd, sshd, etc.). This Nestybox blog article has examples. Fundamentally, this solution allows you to run one or more Docker instances on the same machine, securely and totally isolated from each other, thus enabling the use cases we mentioned earlier in this article. And without resorting to heavier VMs for the same purpose. In a Nutshell There are valid use cases for running Docker-in-Docker (DinD). Docker’s officially supported DinD solution requires a privileged container. It’s not ideal. It may be fine in trusted scenarios, but it’s risky otherwise. There is an alternative that consists of running only the Docker CLI in a container and connecting it with the Docker daemon on the host. It’s nicknamed Docker-out-of-Docker (DooD). While it has some benefits, it also has several drawbacks which may void its use depending on your environment. Nestybox system containers offer a new alternative. 
They support running Docker-in-Docker securely, without using privileged containers and with total isolation between the Docker in the system container and the Docker on the host. It’s very easy to use as shown above. Nestybox is looking for early adopters to try our system containers. Download the software for free. Give it a shot, we think you’ll find it very useful. Some useful links:
https://medium.com/nerd-for-tech/secure-docker-in-docker-with-nestybox-529c5c419582
['Cesar Talledo']
2020-11-02 06:39:27.011000+00:00
['Microservices', 'Kubernetes', 'Containers', 'DevOps', 'Docker']
Drawing The Tonys
Tony day began early for me when I took the #1 train downtown to Rockefeller Center. I was scheduled to attend the dress rehearsal at 10:00, but I wanted to get there early to draw outside. It was fun walking around midtown Manhattan early on a Sunday — the streets were relatively empty, street vendors were putting up their umbrellas. Near Rockefeller Center, NYPD officers were gathering in different groups. Man on the subway as I headed downtown. Once inside Rockefeller Center, the dress rehearsal lobby was full of invited guests — they were nothing if not enthusiastic theater people. The added benefit of attending a dress rehearsal is the off-script quips from actors, particularly from James Corden. Plus, you get to see what some celebs wear when they are not all dressed up and in tons of makeup. The downside is not all celebrities come to the dress rehearsal.
https://lizadonnelly.medium.com/drawing-the-tonys-90cb88ec7710
['Liza Donnelly']
2019-06-11 19:14:40.585000+00:00
['New York', 'Visual Journalism', 'Theater', 'Tony Awards', 'Storytelling']
Write in 2016
Three years ago, sitting on the beach over empty champagne glasses and the smoldering remains of late-night firecrackers, I made a New Year’s resolution to start this blog and publish once a week. Write, I told myself, attempting to channel Nike: Just. Do. It. Write about the lessons now so familiar they can be recited in your sleep. Write about the insights not yet sighted, their silhouettes blurry like the edges of a distant shore. Write about the job, the joy and chaos of designing and building. Write at least once a week. Write to learn how to write, and write to understand, the process itself like a looking glass through which you may yet discover a strange new world. Write so something meaningful can be said to others. Write to be accountable, write with honesty. Above all, write to preserve the scrap of an age, a voice; write so you won’t forget. That first year, through many late nights and plenty of teeth gnashing, I published 52 articles. The next year, I published once every two weeks. Last year, it became once every three weeks. I’m delighted to discover that this three-week cadence seems sustainable, so in 2016, that will once again be my resolution. It’s probably not an exaggeration to say that these writing goals have changed my life. When I started in 2012, I did it purely for myself — to untangle the knots in my head, to find and make peace with my voice, to lay my inner wall of confidence brick by brick. And through the habit of writing, my thinking sharpened. I became a more curious and humble reader. I spent more time on reflecting and giving thanks. So I continued. I could not have predicted how over the years, my words would fly across wires and oceans to light up the screens of many faraway strangers. In the past year, my articles received 1.5 million views. I published 16 new pieces in 2015, and they averaged 58K views each. Logically, I recognize that this means I am known in certain circles. 
But it never fails to surprise me when people I don’t know stop to say that they know me from my work. It never fails to warm my day when somebody tells me a particular piece of advice or passage resonated deeply with them. Sometimes we’ll get to talking. And when there is a particular kind of pause in the conversation — a sigh, a wistful look — I can guess what is coming next. The person says, “You know, I’d love to start writing as well, but…” Here’s the thing. I know all about the but’s. Through the years they’ve swarmed me like mosquitos, determined to suck my willpower dry. But I don’t know what to write about. But I don’t have the time. But who’d be interested in what I have to say? But I’m a perfectionist. But I’m not original. But I’m not a good writer. There is a poster above my desk, a Dostoyevsky quote, which serves as a reminder that lack of topic should never be a problem. “But how could you live and have no story to tell?” No matter who you are, I know this to be a fact: that you have interests. That there is something you go to bed thinking about. That there is some experience you’ve had that not everybody has had. That there are lessons you’ve learned in your road less taken. That there is some version of the world you’d like tomorrow to be. If you wanted to write, these topics, like San Francisco in 1849, are rich for the mining. As for whether or not you have the time to write, well, we humans tend to have time for the things we prioritize, and not have time for the things we don’t. Last I checked, the average person in the U.S. has five hours of leisure time every day. It’s possible you wouldn’t really want to spend that time hunched over a notebook or keyboard. But then saying, “I don’t have the time” is kind of a cop-out. Really, what you are saying is While the idea of writing is interesting, I don’t choose to prioritize it over other things in my life. Which is perfectly fine. 
After all, we should all be doing the things we intentionally choose to do. So then, assuming that you do indeed have things to write about and that you do want to prioritize writing, in my experience, the biggest barrier that prevents people from actually doing it is the expectation that what they write should meet a certain criteria of success. In all the times before that I have failed to get something on paper, it was because I had thoughts like the following: geez, what if I hit publish and nobody reads this? That’d be embarrassing and pointless. Or I only want to publish something if it’s really good and makes me seem smart, witty, and knowledgeable. Or What if I say this and somebody disagrees and tells me I’m wrong? Or Hmm, I should only write when inspiration hits me, and right now I don’t feel inspired. In every creative endeavor — not just writing — this train of thought paralyzes. I have experienced it enough times to know that holding yourself to some lofty standard when you are just starting out is like blowing a deathkiss to your chances of success. Instead, if you’d like to write, I offer the following tips:
https://medium.com/the-year-of-the-looking-glass/write-in-2016-938f569b535e
['Julie Zhuo']
2016-01-13 01:15:20.042000+00:00
['Design', 'Resolutions', 'Writing']
Are You Wasting Precious Time? 5 Software Engineering Productivity Tips
Today we are going to explore 5 ways you can improve your daily workflow as a software engineer. ⚠️ Warning: once you start using these tips, you may not be able to live without them. 1. Use Pre-Commit Git Hooks 🎣 Enabling pre-commit hooks is like buying insurance for your codebase. You can utilize pre-commit hooks to maintain a clean codebase free from console.logs, missing semicolons, and more. This simple strategy can ensure all members across your team adhere to the same code standards and that no “dirty” code ever gets committed to your git history. You can also utilize hooks to catch type errors when you compile your code before a commit, assuming you are using TypeScript or another strongly typed language. The easiest way to get up and running with pre-commit git hooks in your projects is to use the open-source library Husky. Download Husky here: https://github.com/typicode/husky/ 2. Have Better Window Management 💻 As an engineer, I always have a code editor and browser open. After many years of seriously mismanaging my windows, I was introduced, by my colleague Sekhar Paladugu, to an app that made it incredibly simple to stay organized. Once I downloaded the free software and learned a few new keyboard shortcuts, I was quickly able to arrange my view in just seconds. My favorite tool for window management on my Mac is Spectacle. It’s free and has very simple keyboard shortcuts for arranging windows quickly. Download Spectacle: http://www.spectacleapp.com/ 3. Invest In a Password Manager 🔒 My daily work as an engineer requires me to integrate multiple services and APIs all day long. That means I log in and out of different accounts for various tools a lot. My day would quickly turn into a disaster if I had to spend 1–2 minutes looking up or resetting passwords for my 8–10 accounts. I needed some way to safely manage all my passwords and be able to access them with the click of a button. I’ve used many different password managers, but none are quite as good as Dashlane.
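The hook mechanism behind this tip is easy to sketch. Husky’s real configuration lives in your package.json / .husky scripts, so the following is only a hypothetical, framework-free illustration of what a pre-commit check can do: a Python script (which could be saved as `.git/hooks/pre-commit`) that rejects any staged change adding a console.log.

```python
# Hypothetical pre-commit hook sketch -- NOT Husky's actual configuration.
# It scans the staged diff and flags any added line containing console.log.
import subprocess

FORBIDDEN = "console.log"

def find_violations(diff_text: str) -> list:
    """Return the added lines of a unified diff that contain FORBIDDEN."""
    return [
        line
        for line in diff_text.splitlines()
        # '+' marks an added line; '+++' is the file header, not content.
        if line.startswith("+") and not line.startswith("+++") and FORBIDDEN in line
    ]

def staged_diff() -> str:
    """Diff of what is currently staged for commit."""
    return subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout

# As a real hook you would end the script with:
#     raise SystemExit(1 if find_violations(staged_diff()) else 0)
# because a non-zero exit status aborts the commit.
```

Husky wires the same idea into your repo automatically so teammates don’t have to install hooks by hand.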
Dashlane has apps for almost every single device, which allows you to use its autofill login feature from anywhere. It’s one of the more expensive products on the market, but it has some of the most advanced and user-friendly technology I’ve encountered in a password manager. Learn more about Dashlane here: https://www.dashlane.com/features/ 4. Become a Command Line Wizard 🧙‍♂️ If you are coding every day, chances are you spend a great deal of time in your terminal. If you have not installed oh my zsh as a default, then you need to. With autocomplete, cool themes, and a plethora of plugins available, you can become the command line wizard of your dreams. There are many other options to choose from to customize your command line shell if you don’t use oh my zsh, but I’ve found this shell to be the most well supported in the open source community and the best all-around choice. [Image: an example oh my zsh custom theme.] The most heavily used plugins in my daily workflow include gcloud CLI commands, docker, ruby, and more. Download oh my zsh: https://ohmyz.sh/ 5. Network Debugging and Testing With ngrok 🌐 If you’ve ever needed to debug or test an integration with another service using something like a webhook, you’ll quickly find that it’s a cumbersome and costly process. Enter ngrok. Ngrok allows you to publish a local endpoint to the public web so you can live-test and debug an integration of two separate systems through your local machine. This is a very common use case, but the possibilities are nearly endless with ngrok. You can get quite creative with it once you learn how it works. Download ngrok here: https://ngrok.com/download/ Bonus: Follow Coding Patterns 📚 Studying coding patterns (and implementing them correctly) will, by far, outweigh any other investment you make into improving your daily workflow and productivity as a software engineer.
Utilizing object-oriented programming, dependency injection, higher-order components, continuous deployment, and test-driven development in your daily workflow is a great place to start.
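Of the patterns listed above, dependency injection is the quickest to illustrate. Here is a minimal Python sketch (all class names are my own, hypothetical examples): instead of constructing its collaborators internally, an object receives them, so a test can swap in a fake without touching production code.

```python
# Minimal dependency-injection sketch; every name here is hypothetical.

class Mailer:
    """Production collaborator: pretends to send real mail."""
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}: {body}"

class FakeMailer(Mailer):
    """Test double: records messages instead of sending them."""
    def __init__(self):
        self.outbox = []
    def send(self, to: str, body: str) -> str:
        self.outbox.append((to, body))
        return "queued"

class ReportService:
    def __init__(self, mailer: Mailer):
        self.mailer = mailer  # injected, not hard-coded inside the class

    def email_report(self, to: str) -> str:
        return self.mailer.send(to, "weekly report")

# Production wiring:
live = ReportService(Mailer())
# Test wiring -- no real mail goes anywhere:
fake = FakeMailer()
ReportService(fake).email_report("dev@example.com")
```

Because `ReportService` never names a concrete mailer internally, swapping implementations requires no changes to the class itself.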
https://medium.com/broadlume-product/are-you-wasting-precious-time-5-software-engineering-productivity-hacks-5a04f091f576
['Stephen Michael Grable']
2020-09-16 22:13:12.919000+00:00
['Workflow', 'Software Testing', 'Software Engineering', 'Productivity Hacks', 'Productivity']
How to Get Your First Article Published as an Online Writer
When I went from casually blogging to consistently writing online, the biggest hurdle in front of me was getting published. One year later, I have permanently conquered my fear of submitting, and now my articles are published on a weekly basis. It feels awesome to have this confidence, but it was a long trek to get here, and it all started with getting over that first hurdle. So if you’re struggling right now in the online writing game, I want you to understand that getting published is totally within your reach. You’re not a bad writer; you just need a quick reminder of what’s important. The main reason you’re getting rejected right now is that your articles are probably missing two things: simplicity and/or originality. That’s it. Sure, you have to worry about other things, like proper formatting and grammar, but I’m going to assume you’re smart enough to have those figured out. In this article, don’t expect me to waste three paragraphs explaining the importance of proper spelling. This isn’t grad school. Instead, let me break down simplicity and originality, and how you can effectively apply them to your everyday writing.
https://medium.com/the-brave-writer/how-to-get-your-first-article-published-as-an-online-writer-18b4a3d8ea9b
['Thom Gallet']
2020-12-10 13:02:03.564000+00:00
['Work', 'Publishing', 'Freelancing', 'Writing', 'Marketing']
Is ‘The News’ making us paranoid?
“Entertainment has superseded the provision of information; human interest has supplanted the public interest; measured judgment has succumbed to sensationalism.” - Franklin, B., ‘Newszak and News Media’ (1997) If you regularly read or watch the news, it’s easy to feel like there’s danger around every corner. Every time you step on the London Underground you’re looking around, trying to identify any suspicious backpacks that could contain bombs. When you walk the streets in the dark of night, you quicken your step and plan your escape route if you so much as sense another human close by. Of course, there is danger around us. But the media’s reporting of it seems distorted, with the majority of news reports and articles being negative and focused on crime, making society seem a much more dangerous place than it actually is. A 2018 study by University of Amsterdam researchers attempted to discover whether there was truth in this: whether the news did disproportionately represent crime, danger, and negativity compared to reality. The study looked specifically at plane crashes over the previous 24 years and found that the total number of crashes had fallen over the years due to technological advances. However, they also found that media attention for plane crashes increased significantly over the same time period. So although air travel was actually becoming safer and safer, if you tuned into the news regularly you would be excused for believing that plane crashes were common and deadly. The researchers concluded that: “News develops a life of its own and that the complex process of news selection and production is partly guided by other factors than reality.” Terrorism, and particularly Islamic extremism, is extremely prevalent in the media. A Chapman University survey in 2016 found that terrorism was the second-highest fear of most Americans (‘corruption of government officials’ was the first).
However, the New America Foundation reported that Islamic extremists had actually killed just 94 people between 2005 and 2015. To put that into perspective, 301,797 people were killed by shooting during the same decade. That means that an average American citizen was 3210 times more likely to die by being shot than in a terrorist attack during this period. And yet, that average American citizen is far more fearful of terrorist attacks than shootings. The blame doesn’t just lie with the journalists. Today’s media companies are focused on generating profit, and they do this either through subscription fees or by in-platform advertising. Either way, they need their readers to keep clicking through to new articles, and reading them. This means that if media companies see a trend in which articles their readers are clicking on, they’ll simply keep creating more of them. And we love those negative stories. Psychologists have long been aware of our ‘negativity bias’: we humans pay more attention to negative events and happenings than positive ones. We also tend to remember negative events more than positive ones, meaning that they stick in our memories and have more of an influence on our ongoing lives. This is (theoretically) with good reason, evolving from a primal need to keep ourselves out of danger. Our brains simply won’t allow us to disregard danger, in the hope that we will respond to it and keep out of harm’s way. ‘If it bleeds, it leads’ is a common phrase in journalism. And that’s why: those stories of negativity, danger, and crime are guaranteed ways to pull in readers. So those stories have become more ‘newsworthy’ than other, more positive, stories. Take this screenshot from The Guardian website, for instance, which shows the most viewed news stories that day (23 September 2019). [Screenshot from The Guardian, 23 September 2019.] There are clear themes in terms of what people are reading, and they are stories of sexual assault, abuse, and stabbings.
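The ratio quoted above follows directly from the two figures in the text; a quick arithmetic sanity check:

```python
# Sanity-check the "3210 times more likely" figure using the
# numbers given in the text (US, 2005-2015).
terrorism_deaths = 94
shooting_deaths = 301_797

ratio = shooting_deaths / terrorism_deaths  # about 3210.6
print(int(ratio))  # truncates to 3210, the figure the article quotes
```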
These are the stories that we’re most attracted to and that we want to read, even though they also make us anxious, sad, and fearful (Johnston & Davey, 2011). So what can we do about it? Psychologist Steven Pinker convincingly argued in his 2011 book The Better Angels of Our Nature that, despite what the media tells us, violence has actually hugely declined and that now is the most peaceful time in the whole of history. Wars are less common, capital and corporal punishment are no longer the norm, and medical advances mean we live much, much longer and healthier lives. And that’s just the tip of the iceberg. With that said, I think that if you are a regular reader or watcher of The News, it’s worth viewing articles with a critical mind. Be aware of the research outlined here, and know that the portrayal of violence and crime on news outlets is not representative of reality. Be aware that media companies are vying for your click, and that headlines are likely to be sensationalised to get that click. And if you do feel anxiety or fear after viewing negativity in the news, then try taking a step back and reducing your consumption of the news. I recently deleted The Guardian’s app from my phone and unsubscribed from their email alerts. I can still access news and information if I want to, but it isn’t a constant feature in my day.
https://tabitha-whiting.medium.com/is-the-news-making-us-paranoid-c74717cf9c0f
['Tabitha Whiting']
2019-09-28 08:36:01.582000+00:00
['Terrorism', 'Media', 'Journalism', 'News', 'Psychology']
Why You Should Give Your Team Lots Of Equity
“Would you mind telling me your stock option plan at Maxim?” Carlos, the VP HR at Micrel, asked me. “I want to do some benchmarking.” I had just joined the company as General Manager of one of its three divisions, so I was glad to help. I walked Carlos through the grants I received every year I was at Maxim. There were 11 years of stock option grants. After I finished telling Carlos the grants I received, he said to me, “Is this really true?” “Yes, it is,” I said. “That’s very generous,” Carlos said to me. “Well, I’m hoping Micrel will be equally generous with granting options,” I said. Carlos was dead silent. I had a lot of issues I needed to solve at that point in time because the division I inherited was a complete disaster. Fighting for stock options for my team, quite frankly, was not at the top of my list of issues. If you want to retain your team, then you’ll grant them generous amounts of stock options. As I said, I had a lot of issues I had to deal with to get this division turned around. Morale sucked. The product strategy was a disaster. The marketing strategy was even worse. The good news was these issues were easy to fix. Once I got my bearings, I was able to focus on the personnel issues the division had. It was clear to me that most of the people I inherited would have to go. But some of the people were keepers. There was Steve, whom I had worked with previously at Maxim. Steve had done great work with me before, and I knew I could count on him. I wanted to promote Steve from manager to director. I told Carlos I wanted to increase Steve’s options. “We have an evergreen plan,” Carlos said to me. “That’s great,” I said. “How can we make it work for Steve?” “Here’s how it works,” he said. “We look at the percentage ownership of each employee for the level they are at.
For Steve, he should be getting 500 shares a year at his new level after his initial grant expires.” “What was Steve’s initial grant?” I asked Carlos. “It was 2,500 shares per year for four years.” I started laughing. “And you think this is a good deal?” “It’s a great deal,” Carlos repeated. “He’s getting a larger percentage ownership than he would at Maxim.” You need to understand that dilution is a good thing, not a bad thing. “Who cares!” I said, raising my voice. “Maxim and Micrel are public companies. Maxim is 10 times the size of Micrel. Percentage ownership is irrelevant. It’s the value of the stock that matters!” Carlos just stared at me, dumbfounded. “He’s (Steve) gonna leave,” I said. “Let me ask you this. How long does the average employee stay at Micrel?” “Four years,” Carlos answered quickly. “I wonder why?” I said sarcastically. “You don’t think there’s a correlation between employees’ stock falling off a cliff and when they leave?” “He (the CEO) won’t change how he does things,” Carlos answered. “He thinks this is a great deal for employees.” “He’s a fool,” I said. “There’s no financial incentive to stick around here. Do you want to know how long employees stick around at Maxim?” Before Carlos could answer, I answered for him. “The average employee works at Maxim for at least ten years!” Again Carlos looked dumbfounded. “I told you this when I joined the company! Here’s the difference between what a Maxim employee and a Micrel employee would get!” Then I drew something like this: “That can’t be true!” Carlos said. I sighed very deeply, and I tried to calm myself down. I stood up, and I said as softly as I could, “Carlos, I want Micrel to win. If you guys don’t want to face reality, that’s your problem. But I guarantee you the reason employees are leaving after four years is because there’s no financial incentive to stay.” Then I walked out of Carlos’ office. I mentally prepared myself to lose my team after four years.
And I realized that I likely wouldn’t be at Micrel for more than four years either. You either get it or you don’t when it comes to stock options. I’ve never met him, but ex-Benchmark Capital founder Andy Rachleff seems like a pretty smart person to me. He noticed a common thread among his successful startup portfolio companies: the startups that had a generous evergreen plan increased employee retention from an average of 2 to 3 years to significantly longer. Rachleff wrote: “Offering a transparent, consistent and fair program of equity grants that employees can build into their long-term expectations. As a result, not only do you avoid cliffs, but you also tie both long-term tenure and contribution to their ownership stake. The best part is that, as your company grows, you always grant stock in proportion to what is fair today rather than in proportion to their original grant.” Rachleff’s experience with a large number of startup portfolio companies matches my Maxim experience. Stock options matter. And, if you’re an evolved CEO, then you realize the tremendous power a well-thought-out stock option plan can have. Maxim was not an easy place to work. The expectations were sky-high, and Jack Gifford, Maxim’s CEO, was no shrinking violet. He was tough to please. One of Gifford’s favorite sayings was, “Everyone is motivated by fear and greed.” From personal experience, I know Gifford certainly understood how to put the fear of god into you. However, Gifford equally understood why it was important for everyone to share in the company’s success. The evergreen program Maxim had (similar to Rachleff’s portfolio companies) kept his team in place. Compare that to a similar company, Micrel, that didn’t have a generous evergreen program. The team left as soon as their options expired. The choice is yours as to which type of company, and stock option plan, you want to have.
For more, read: https://www.brettjfox.com/how-much-equity-do-your-employees-deserve
https://brett-j-fox.medium.com/why-you-should-give-your-team-lots-of-equity-b41c05be52b1
['Brett Fox']
2020-06-25 22:30:11.065000+00:00
['Entrepreneurship', 'Startup', 'Leadership', 'Management', 'Venture Capital']
How to Write Perfect Python Command-line Interfaces
Let’s take a simple example. Let’s try to apply these rules to a concrete, simple example: a script to encrypt and decrypt messages using the Caesar cipher. Imagine that you have an already-written encrypt function (implemented as below), and you want to create a simple script which allows the user to encrypt and decrypt messages. We want to let the user choose between encryption (by default) and decryption, and choose the key (1 by default), with command line arguments. The first thing our script needs to do is get the values of the command line arguments. And when I google “python command line arguments”, the first result I get is about sys.argv. So let’s try to use this method… The “beginners” method. sys.argv is a list containing all the arguments typed by the user when running your script (including the script name itself). For example, if I type: > python caesar_script.py --key 23 --decrypt my secret message pb vhfuhw phvvdjh the list contains: ['caesar_script.py', '--key', '23', '--decrypt', 'my', 'secret', 'message'] So we would loop over this argument list, looking for a '--key' (or '-k') to know the key value, and looking for a '--decrypt' to use decryption mode (actually by simply using the opposite of the key as the key). Our script would finally look like this piece of code: This script more or less respects the recommendations stated above: there is a default key value and a default mode; basic error cases are handled (no input text provided or unknown arguments); succinct documentation is printed in these error cases, and when calling the script with no argument: > python caesar_script_using_sys_argv.py Usage: python caesar.py [ --key <key> ] [ --encrypt|decrypt ] <text> However, this version of the Caesar script is quite long (39 lines, which doesn’t even include the logic of the encryption itself) and ugly. There has to be a better way to parse command line arguments… What about argparse?
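The script the article refers to did not survive into this copy of the text. Below is a hedged reconstruction of what the sys.argv version might look like, matching the behavior described above (default key of 1, encryption by default, --key/-k and --encrypt/--decrypt flags, a usage message for error cases); the original 39-line script certainly differs in its details.

```python
import sys

def encrypt(plaintext: str, key: int) -> str:
    """Caesar cipher: shift each letter by `key`, leaving other characters alone."""
    out = []
    for char in plaintext:
        if char.isalpha():
            base = ord("a") if char.islower() else ord("A")
            out.append(chr((ord(char) - base + key) % 26 + base))
        else:
            out.append(char)
    return "".join(out)

def usage():
    print("Usage: python caesar.py [ --key <key> ] [ --encrypt|decrypt ] <text>")
    raise SystemExit(1)

def main(args):
    key, decrypt, words = 1, False, []   # defaults: key 1, encryption mode
    i = 0
    while i < len(args):
        if args[i] in ("--key", "-k"):
            i += 1
            key = int(args[i])
        elif args[i] == "--decrypt":
            decrypt = True
        elif args[i] == "--encrypt":
            decrypt = False
        elif args[i].startswith("-"):
            usage()                      # unknown argument
        else:
            words.append(args[i])
        i += 1
    if not words:
        usage()                          # no input text provided
    # Decryption is simply encryption with the opposite key.
    return encrypt(" ".join(words), -key if decrypt else key)

# Run as a script:  print(main(sys.argv[1:]))
```

Even in sketch form, most of the lines go to hand-rolled argument parsing rather than to the cipher itself, which is exactly the complaint the article makes.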
argparse is the Python standard library module for parsing command-line arguments. Let us see how our Caesar script would look using argparse: … Read the full article on Sicara’s blog here.
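The argparse version is only linked, not inlined, so here is a hedged sketch of what it plausibly looks like. The encrypt helper is redefined so the snippet runs on its own, and all names are my own rather than the article’s.

```python
import argparse

def encrypt(plaintext: str, key: int) -> str:
    """Caesar cipher: shift each letter by `key`, leaving other characters alone."""
    out = []
    for char in plaintext:
        if char.isalpha():
            base = ord("a") if char.islower() else ord("A")
            out.append(chr((ord(char) - base + key) % 26 + base))
        else:
            out.append(char)
    return "".join(out)

def run(argv=None) -> str:
    parser = argparse.ArgumentParser(
        description="Encrypt or decrypt a message with a Caesar cipher."
    )
    parser.add_argument("text", nargs="+", help="the text to process")
    parser.add_argument("-k", "--key", type=int, default=1,
                        help="shift key (default: 1)")
    parser.add_argument("--decrypt", action="store_true",
                        help="decrypt instead of encrypt")
    args = parser.parse_args(argv)
    # Decryption is encryption with the opposite key.
    return encrypt(" ".join(args.text), -args.key if args.decrypt else args.key)

# Command-line use:  print(run())
#   e.g.  python caesar_argparse.py --key 23 --decrypt my secret message
```

Note how argparse generates the usage and help text automatically, which is most of what the hand-rolled sys.argv version spends its lines on.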
https://medium.com/sicara/perfect-python-command-line-interfaces-7d5d4efad6a2
['Yannick Wolff']
2020-01-30 13:55:10.965000+00:00
['Python', 'Command Line', 'Interfaces', 'Machine Learning', 'Productivity']
The Strategic Product Launch Playbook
Step 1: Considerations Before Launching a Product. Use problem statements as a guiding star. Every successful product aims to solve a problem, and each product will have a unique problem statement. The problem statement is a clear and concise description of the problem; it captures the gap between what should be happening (the to-be situation) and what is actually happening (the as-is situation). The problem statement is something that you can refer to throughout a product’s journey, starting with design and development all the way to introducing it to the market. Make sure you spend time with all your team members to outline the best problem statements that everyone is committed to. This will help your team focus, stay on track, and ground you and your team in goal setting, strategies, planning, execution, and assessment of the product launch plan. A simple formula for a problem statement could be the following: ‘X (target users)’ need a way to ‘Y (need)’, shown by ‘Z (insight)’. For Uber, that may look something like this: ‘San Francisco commuters’ need a way to ‘get from point A to point B’ because ‘taxi cabs are too scarce to hail and too slow to respond’. Target the right audience. For successful product development and launch, you need to discover your target customers, understand their needs, and know how to communicate with them. Go beyond run-of-the-mill demographics and personas and use the jobs-to-be-done framework by Clayton Christensen as a compass to home in on your target market and their needs. Follow these steps to define your target market, craft the right messaging, and connect with customers on a deeper level: Identify a job-to-be-done as an action verb followed by the object of the action and the clarifying context. For the iPod, that may be: listen to music while working out. Ask your customers questions to gain insights. With the iPod, you might ask: when listening to music while working out, how do you struggle to get the job done?
For example, they may have difficulty running while carrying a bulky Walkman or battle boredom with the same 12 songs playing on the CD. Go beyond the functional jobs-to-be-done and determine if the customer has emotional or social jobs-to-be-done as well. Ask questions like, how do you want to achieve your workouts? The customer may respond with an insight like they want to feel motivated to push themselves harder. Focus on how to message your product as solving the customer’s job. To do this, it’s important you don’t focus messaging on the features of your product and instead use the jobs-to-be-done lens to focus on your product’s position in the customers’ minds. When Apple first released the iPod, they used the catchphrase, “1,000 songs in your pocket.” They did not focus messaging around a feature, such as the device’s ability to store five gigabytes of music. Instead, they translated the storage size into the number of songs stored. A message that resonated with the customers’ job described earlier. Validate your product market fit. Product market fit will drive the success of your offering. Test your customers’ motivations, desires, and needs, and figure out how to communicate with them before launching a product. A great way to validate your product market fit prior to launch is to create a landing page. Develop a page with a single and focused call to action.
Your call to action may be a web form to collect data, like names and email addresses, or it may be a simple button you press to purchase something. Test multiple variations of the landing page and see what sticks with users (A/B testing) while gathering data and customer feedback along the way. Don’t wait until the product launch date to validate your product/market fit. This will help you and your team go beyond opinions and identify what strategies and messages resonate with your customers because, in the end, data always beats opinions. Track performance with metrics. A product leader can get fixated on daily tasks and lose sight of what’s really important — achieving business goals. If product teams do not identify goals early on and set metrics for what success looks like, product launches may fail. Product teams need to develop product launch goals. Below are some guidelines: Create a product launch goal statement. The statement could take the form of: we are launching this product to X, Y, and Z. For example, “we’re launching the widget product to increase awareness, engagement, and conversion of new customers.” Try to gather input from team members to ensure you strike the right balance of high-level details and simplicity. Identify goals associated with why you are launching the product. Cross-check goals with the executives and other functional teams. An example of goals could be: “an increase in revenue or margin, establishing a foothold in a new market segment to land new clients.”
Translate goals into traceable metrics. Use SMART criteria (specific, measurable, attainable, realistic, and timely) in developing targets for your goals, for example, “to reach $3.2 million in revenue.” Using this method as your guideline, you and your team will later be able to judge whether or not the goals were accomplished. It will also help you to identify where resources are needed and help you and your team deliver on the expectations. The goal-setting phase may require a lot of strategic analysis and planning, but it will keep your team focused on the purpose of the product launch, align activities with success metrics, and keep everyone motivated to really impress your customers. Monitor and benchmark the competition. Always be conducting a competitive analysis. Only then will you understand what makes your product unique and useful to your target market. Let’s take a look at the three steps you can take to conduct your own analysis and create a strategy to take on your competitors. Brainstorm a full list of your competitors. Be comprehensive and think of direct and indirect (substitute) competition. So if you are launching a new fitness watch, search for related keywords, such as activity tracker, exercise monitor band, or pedometer. Or if you are selling an electric toothbrush, go beyond just researching other electric toothbrushes and think about regular toothbrushes, or even floss. Understand competitors’ offerings.
Evaluate their offerings, comparing them to your own. Assess their website, follow them on social media, and sign up for their newsletters. Create a document, such as this, to capture your research. SWOT it out! Identify the strengths and weaknesses of each competitor and the potential opportunities and threats you might have against them. Your competitive advantage should be unique and appeal to your target customer. Take a step back and talk out your research with all of your team. It will take time, but landing on your competitive advantage will help drive your product’s messaging by relying on what makes you different (USP).
https://medium.com/better-marketing/foundations-to-a-successful-product-launch-15a332dd943c
['Nima Torabi']
2020-06-03 12:49:31.853000+00:00
['Launch', 'Growth', 'Startup', 'Marketing', 'Product Management']
Why We All Need Design-led Content Marketing
Ever since marketing became a buzzword and a key to winning customers, human psychology has been dissected and carefully laid out so that brands can understand how to hook onto its no-longer-complex behaviour. As such, certain things, like how visuals can attract people and get stamped into memories better than text alone can, are no longer something to gasp at. A 2019 study by Nielsen and Taboola highlighted that the human attention span has dropped from 12 seconds to 8 seconds. While 4 seconds might sound pretty insignificant, if we try to recall how many feeds we see in a continuous scroll in just 4 seconds, it may not sound small at all. Right? Imagine your brand message, or say your marketing ad, is one of these many feeds that your customer is scrolling through. What are the chances that they will resist the urge of their thumb to push the feeds upward, and will stop at your ad and read it wide-eyed? WordStream says that the average CTR (click-through rate) for all display networks is 0.46%. Yes, so that puts the odds of a customer interacting with your ad at less than 0.5%. Wow! That’s really tiny. So, does that mean that your ad will lie in some dark corner of the marketing world and go totally unnoticed? Not if you don’t want it to. The sad part is that even though many people have, over the years, painstakingly researched the intricacies of customer behaviour, written thousands of pages of results and guidelines, and suggested the optimum ways to earn customer loyalty, most brands fail to even acknowledge these efforts. As a result, they miss out on the tricks that would make their ads stand out in the crowd. We have listed 5 reasons why your, my, and everybody’s brand should adopt design as a tactic to enhance marketing and hence consumer engagement. Let’s get started!
https://uxplanet.org/why-we-all-need-design-led-content-marketing-f6b7ddc04487
['Design Studio']
2020-11-16 08:37:08.283000+00:00
['Design Led Marketing', 'Design', 'Brand Strategy', 'Content Marketing', 'Marketing']
The Literally Literary Weekly Update #8
Sit with Me by Agnes Louis “Sit with me, love,” Like a whisper of the wind, so soft it made me wonder if I had dreamt it all. A Love Letter To Numb Women by Jayne Stevenson (curated in Poetry) “It hurts that we rarely see you anymore. Whenever we do, you say — Got to run, no time to talk. Subtext: I can’t feel my life anymore.” Early Days by Elizabeth Williamson “even though they feel so buried in the past that you feel like another person walking through those memories” White Mirror by Scott Leonardi “We all project our deepest and most desperate selves onto this still water, red-eyed and impatient to see an accurate reflection” In the Dark by Bryony L’eau “I did a stint in the dark So I could recognize The edges of light So I could see the light that exists In the darkest places.” The Most Dangerous by Amy Nicolai “Ready for the swift cut Directly through the lifeblood of dreams The gushing of wasted away ideas” Protrusions by J.D. Harms “Like running a gauntlet pressed up against heavy thorns & weeping”
https://medium.com/literally-literary/the-literally-literary-weekly-update-8-c2bf4325d23f
['Jonathan Greene']
2020-02-12 14:07:29.379000+00:00
['Ll Letters', 'Writing', 'Fiction', 'Poetry', 'Nonfiction']
You Should(n’t) Be Writing
You should be writing. As far as memes go, it’s the one that probably pops up the most in writers’ groups. “If you’re reading this, you should be writing,” “Stop scrolling Facebook and start writing,” and “If you’re not writing right now, you should be” are common variations on the same theme. These memes can be found everywhere, from the #WritingCommunity hashtag on Twitter to the Facebook pages of literary journals and writing schools. Image: a meme of David Tennant pointing towards the camera, with the text “You should be writing.” Via writingonpoint.com They seem to balance just the right amount of relatability (we know you’re procrastinating because we procrastinate too!) and motivation (no, really, get working!), which is probably why they’re so popular in the writing community. If you spend as much time in these writing communities as I do, you’ll know that these images tie into a bigger theme: procrastination. According to social media, procrastination is a massive part of being a writer. Jokes about procrastination seem to be part of the job. “Writers: you’re either writing right now, or you should be” is another variation on the meme that keeps popping up. To be a writer, you’re expected to relate to procrastination. I know it’s a joke, but I can’t help but wonder: is this really a message we should be absorbing so uncritically? For those of us who tend towards perfectionism, “You should be writing” culture can be toxic to our creativity and state of mind. A few years ago, when my career was just starting, I wrote YOU SHOULD BE WRITING in capital letters on a piece of paper and stuck it on my wall. I thought it would be motivating. In some ways, it was. But it also made me feel incredibly guilty and anxious. Ironically, this anxiety made it even more difficult for me to write. I glanced at the paper when I watched Netflix, when I sent emails, when I texted my friends, when I ate, and when I was about to go on a walk. 
“Could I cut back on the time I spend watching Netflix or eating or exercising?” I wondered. And then I thought, no — those are all essential parts of my life. Recreation and food and exercise are important to me. The more I stared at that piece of paper, the more I thought to myself, “No. I shouldn’t necessarily be writing. I should be doing whatever it is I need to do.” I threw the paper away, and I started thinking about burnout culture. Yes — if you want to be a writer, you should simply write. Getting those words down is essential. Sometimes, putting pen to paper (or fingers to keyboard) is difficult, and these memes can be pretty motivating. But our culture’s obsession with procrastination is worrying. We’ve moved from procrastination being undesirable, to it being something we all admit to doing with shame, to sorta romanticizing it. Writers — who tend to romanticize writing a great deal, if I may add — romanticize procrastination to the point where we insinuate all non-writing activities are forms of procrastination. We shouldn’t shame people for procrastinating, because it’s not helpful. But we also shouldn’t insinuate that procrastination is a necessary part of the writing process for everybody, nor should we insinuate that all non-writing activities are procrastination. When we assume all time spent away from work is procrastination, we assume we should always be working. But productivity isn’t the only purpose of our lives. Sometimes, we shouldn’t be writing. Sometimes, we should be spending time with our families or eating or sleeping or traveling or reading — or, yes, scrolling through Facebook. The glorification of productivity — a phenomenon often referred to as ‘burnout culture’ — is incredibly toxic. And it’s everywhere. Millennials have a tendency to boast about their ‘hustle’, and sometimes that means we proudly admit to engaging in unhealthy behavior by putting work first and ourselves second. 
We boast about running on fumes and try to out-busy one another. “Sleep when you’re dead” is a phrase that comes to mind. Needless to say, this messes us up mentally, physically, socially, and even professionally. What happens when burnout culture seeps into creative fields, like writing? Well, we produce shitty work. We may even give up on our work. More worryingly, we damage ourselves. Romanticizing procrastination is not good for humans or art. So, what’s the antidote to “You should be writing” culture? Nothing. Literally, do nothing. It helps. A few months ago, I wanted to write a particular personal essay. I found it very difficult. I kept working on it every day. I felt guilty when I didn’t work on it. And you know what? After working on it for a week, I had to admit it was awful. My solution was to do nothing. Yes, nothing. I set an alarm for exactly a week in the future that said, “You can write now.” I promised myself I wouldn’t touch the essay until then. I wrote “You shouldn’t be writing” on a Post-It note and stuck it to my computer. (Since I write for a living, I obviously had to do some writing, but I was referring to that essay in particular.) When I came back to the essay a week later, I was excited to do so. My week without touching it gave me the mental space to rediscover my passion for it. I returned to the essay replenished, inspired, and motivated. Now, I often schedule time for doing nothing. I block time off in my Passion Planner by writing the word NONSENSE in green capital letters. I mess around in the kitchen and clean my bathroom. I scroll through social media and happily ignore “You should be writing” memes. I go to my grandmother’s for lunch. “Should you be working?” she asks. “No, I should be here,” I reply. Because I am a writer, but also a granddaughter, and a bunch of other things — writing is not the entirety of my identity, so it shouldn’t take up the entirety of my time. 
Yes, those memes can be motivating for some people, and I’m not policing anyone who shares them. But I am suggesting you take them with a pinch of salt. Next time you see one, remember that it’s totally okay to take a break from writing — in fact, breaks are essential.
https://medium.com/swlh/you-should-nt-be-writing-8a3da5c44794
['Sian Ferguson']
2019-06-14 06:25:24.599000+00:00
['Procrastination', 'Writing', 'Burnout', 'Writing Tips', 'Productivity']
5 Types of Content for When You’re Uninspired
What if you lack the time and energy to write a long blog post? Photo by bruce mars on Unsplash Content creation is hard. You have to be consistent, you have to deliver a ton of value, you have to publish content multiple times a day, you have to come up with new ideas all the time, you have to make sure your content is good, and you have to put out a lot of content in order to get noticed. Content marketing is only effective if you keep practicing and stay consistent, following a content plan. In an ideal world, you would wake up, publish 50 pieces of content per day, deliver the best value possible, and then go back to sleep. But we all know that in reality it’s not that easy, and that publishing even one piece of content is already hard, demanding, and daunting. You may say that skipping a day is fine and that you’ll be back on track the next day with even better content. And I agree with you — it’s not the end of the world to skip a day. But skipping one day makes it tempting to skip the one after that. And then the entire week. And then the month. Here’s the truth: It’s a lot more comfortable not to put out content when you’re uninspired. But that won’t make you grow, and that won’t make you learn. The good news is, you don’t need to publish three YouTube videos and five long blog posts per day to have an effective content marketing strategy. As long as you deliver value, and as long as you post consistently and regularly, your content strategy will pay off and you’ll quickly see results and traffic. Think of it this way: Every single piece of content that you post is an entry point to your profile, your website, your sales funnel, your contact page, or whatever it is that you’re trying to drive traffic toward. So don’t skip a day, and use these other forms of content for when you’re uninspired or way too tired to write that article or film that YouTube video.
https://medium.com/better-marketing/5-types-of-content-for-when-youre-uninspired-7c01525f8974
['Charles Tumiotto Jackson']
2020-10-15 14:48:46.609000+00:00
['Content Marketing', 'Startup', 'Marketing', 'Digital Marketing', 'Social Media']
15 Lessons Game of Thrones Can Teach Us About Branding
Image courtesy of Night Sky Creative … And now our watch has ended. No matter how you feel about the final season of Game of Thrones, there’s no doubt that it’s produced some of the best television of the last decade. It’s redefined the medium in terms of scale and visual spectacle. As avid fans of GoT, and because we take our branding and marketing inspiration from everywhere, we’ve spent hours discussing storylines, character development, and the various themes and metaphors. Looking back over the past eight seasons, we noticed, buried beneath the rubble of King’s Landing, something interesting and unexpected… Lessons in branding. We know it sounds like a stretch — but we swear on the old gods and the new, there are some awesome lessons to be learned. So we’ve assembled our bannermen, and we’re rallying together to share these branding lessons with you, the people of the Seven Kingdoms… and the internet. This article contains spoilers from all seasons of Game of Thrones. Consider this your warning! Image courtesy of HBO Branding is key to the success of every business endeavour, but it can be hard to get right. So, without further ado, here are our favourite lessons from Game of Thrones to help you build your business: Image courtesy of HBO It’s not what you say, it’s what others say about you Over the eight seasons, how many times have we heard the phrase ‘A Lannister always pays his debts’? A lot. It’s the Lannister house motto, right? Wrong. If you’re a Game of Thrones superfan, like us, you will know that the real house motto is ‘Hear me roar’. Branding is all about perception: how your audience and potential consumers perceive your brand. You can roar as loud as a dragon, but you’ll never match the mass whispers of your audience. 
This can be a powerful force for both positive and negative — if you listen carefully and lean into those whispers, you can hear deep truths about your brand directly from your consumers’ perspective, which you can work into your brand’s messaging and designs. If you can master this, you can have a brand as rich as the Lannisters! Image courtesy of HBO Know your audience When exploring a new audience, do your research and lay your foundations before fully launching your brand. Just like Dany going to Westeros to claim the Iron Throne, it’s generally not a good idea to throw your brand out into a new territory and a new audience without much thought, armed with nothing but the feeling that you’re ‘entitled’ to that audience. Dany built up her brand gradually from nothing in Essos, where her message was clear and resonated with her audience. But the people of Westeros didn’t know Dany’s brand back story. They had not seen what she had achieved in Essos. The Breaker of Chains became yet another power-hungry leader fighting for the throne. How Dany built her brand in Essos was pretty much a how-to guide for any brand — having a clear goal and a core audience, with every action as a brand moving towards that goal. A goal that serves others, not just itself. Don’t make the same mistake as Dany. When your business is exploring a new audience, new territory or new product range, be sure to conduct thorough market research. Image courtesy of HBO Don’t put all your eggs in one basket For about three seasons, we were all hyped up about the power and might of the Golden Company. Then, in the penultimate episode, they were completely obliterated by Drogon in just a few fiery breaths. Cersei put a lot of faith in the Golden Company, only to have them destroyed as soon as the battle started. As a business, you need to make sure that your offering is well balanced, not reliant on just one product range or one aspect of your service. 
If anything were to happen to that product range or that element of your offering, like an unexpected price jump from a supplier, or a key member of staff leaving, you need to make sure that other products or other elements of your service can (even temporarily) pick up the slack. Image courtesy of HBO Be the Master of Whispers Varys’ job as Master of Whispers was fundamental to a lot of the goings-on in Westeros throughout the series. His ‘little birds’ fed tidbits of information from throughout the kingdom back to him, giving him the knowledge he needed to help guide the various councils and rulers of Westeros. He used this information in pursuit of his ultimate goal: to do what’s best for the people of Westeros. Data is gold dust to your business. And it’s never been easier to gather at least some insightful information and metrics so you can tailor your product or service to suit your audience. Find your own ‘little birds’ — be they customer feedback surveys, social media metrics, email clicks/opens, or simply conversations with your customers. Image courtesy of HBO Go big or go home What are the moments from Game of Thrones that stick with you? Daenerys walking out of the fire with her dragons? The Mountain crushing the Viper’s head? Cersei and Tyrion’s wildfire explosions? Those moments were huge for GoT — visually striking, explosive (literally) snapshots that everyone remembers. It isn’t possible to ‘go big’ all the time. It’s often expensive, and if you do it too often, it’ll be less effective. Create key points in your branding and marketing strategies where you truly wow your audience, and give them something to remember. Just try not to kill anyone, like GoT always did… Image courtesy of HBO Visual identity Brand logos are like house sigils — they’re not just recognisable, they also carry real meaning for their audience. Many house sigils represent a family’s values, produce or the geography of their home region. 
House Bolton, for example, haven’t quite thought through the brand values they portray through their visual identity — their ‘Flayed Man’ banner. The ‘flayed man’ invokes awful emotional responses from the other houses. While House Bolton might intend to gain the Iron Throne through fear tactics, as shown in their choice of banner, it doesn’t exactly inspire trust or leadership qualities, as one would expect from someone fighting to rule the Seven Kingdoms. Another House that hasn’t quite got it right would be House Davos — how they want their house (or brand) to be perceived shifts throughout the series. Although it’s identifiable to many in Westeros, the onion on their banner doesn’t allude to their bravery and heroic exploits, nor does it refer to their seafaring expertise. All it does is give the Onion Knight (Ser Davos) a cute name. An example of a House that has nailed their visual identity is House Frey. Their bridge-over-water banner sums up perfectly the aspects of their brand they want to show their audience. It promotes their greatest asset to Westeros while leaving out their somewhat shady moral compass. The brick bridge is also a brilliant symbol of stability, connectivity and trust. Another House that’s captured the best elements of their brand in their banner is the fan favourite: the Starks. The direwolf showcases their loyal nature, their leadership qualities, their fierceness in protecting their own, and their pack mentality. Your visual identity is fundamental to your brand, and it’s important to develop each element of it around your core values to ensure they are both consistent and ever-present in all of your messaging. The visual aspect of your brand is the first thing most people see, and it should evoke an emotional response from your customers at first glance, clearly summing up your brand values. 
It can also become a powerful means of differentiating your brand, inspiring loyalty from your audience, and maybe a little fear in your competition. Image courtesy of HBO Hodor, hodor hodor Too often, brands use their promotional material to overload their clients with features and facts about what they do. This can be a huge turn-off for potential customers — an overload of information. When looking for information, people want it simple and easy to digest. This lesson was beautifully illustrated, if a little drawn out, by none other than everyone’s favourite big, bearded bloke, Hodor. Hodor’s message is straightforward and clear, his purpose and reason for being inspirational and engaging. Simplify and focus your messaging to achieve your goal. If you have several messages, try to consolidate them into something simple and straightforward — keep answering the ‘why’ until you get to your hodor. Why do we want customers to visit our website? Hodor hodor, hodor, hodor hodor hodor. Why do they perform that action on our website? Hodor hodor hodor, hodor. So why do they want to do that? Hodor. Image courtesy of HBO Tone of voice Too many brands make the mistake of crafting their tone of voice around themselves and their own achievements, then asking the client to do something for them. Joffrey constantly talked about himself, his own glory and how everyone must serve his every wild whim and desire, no matter how terrible. Joffrey was hardly revered in the Seven Kingdoms. Simply put, he was seen as an absolute knob. When creating your brand tone of voice, remember that you’re speaking to your customer. Consider what their challenges and expectations are, and how you can serve their needs and help them fulfil their goals. Be more Jon Snow, and less Joffrey. No one likes Joffrey. 
Here are some handy stats to back up this lesson: 57% of consumers will actively avoid brands that bombard them with poorly targeted promotional messages (Forbes Online survey, 2018). 4 out of 5 people have left a web page because of a pop-up or auto-play advert (Hubspot, 2016). Image courtesy of HBO Offer a service customers want Your brand must add value for your customers, offering a service or product that people actually want. This might seem like an obvious suggestion, but too often it gets lost in the pursuit of business success (i.e. money). As a brand, you can’t afford to be totally selfish and act only in your own best interests. It’s your job to think about your client. What value can you offer them? You need your customers, otherwise your brand will never succeed. Brands need to be about making the world a better place, not just about making money. For example, Daenerys’ main goal throughout the series was to be Queen of the Seven Kingdoms. She built her brand, and convinced thousands of people to stand with her. She offered them freedom, the chance to break their chains. There are many examples of her being inspirational and aspirational, and acting in her audience’s best interests. But when things don’t go her way, she gets mad. Super mad. At the end of the series, she’s isolated, two of her dragons are dead, Ser Jorah’s dead, and even Jon Snow has betrayed her trust. Her brand starts to crumble, but instead of continuing to think of her audience — her subjects — she rules through fear, and chooses her right to the throne as her new goal, instead of actually breaking the wheel. She wasn’t thinking of her audience, and she suffered for it. Give the people what they want, and they’ll keep coming back. It really is as simple as that! Be the brand people want to be a part of, finding solutions and tackling challenges rather than adding to them. Image courtesy of HBO Don’t trust everyone As any Game of Thrones fan knows, The Faceless Men are not to be trusted. 
A man is no one, but at the same time, everyone. As the (co-dependent) relationship between business and the internet grows, it seems as though communication and accessibility are simpler than ever. However: you can’t trust everyone, or everything. There are currently around 270 million fake Facebook accounts active right now (Mashable, 2018), and this is just the tip of the iceberg when it comes to online trust. So those accounts ‘liking’ your post and following your page might not be all that they seem. Fake accounts like these can be the bane of the marketer — there’s not much worse than talking to people who you know aren’t actually real. It’s wasted breath, and it can be potentially damaging to your brand. If potential customers see that you’re on Facebook with a lot of followers, and then, after a vicious crackdown on fake accounts and bots, Facebook removes a big chunk of those followers, those potential customers might think that your brand has done something wrong. Keep track of who your followers are, run regular reports, and do your best to make sure they are who they say they are. Don’t take everything at face value — the Faceless Men might be using their wall of faces on your Facebook wall. Image courtesy of HBO Lead by example “The man who passes the sentence should swing the sword.” - Eddard “Ned” Stark Ned Stark is famous within the show’s lore as the most honourable man in Westeros — a trait he passed on to Jon Snow. Honesty and integrity are invaluable in branding: they take time and effort to build, but they can be destroyed within seconds. As David Brier says in his book, Brand Intervention: “Every brand must stand for something inspired, something good, something worth saving or worth resurrecting. That thing is our audience’s hero.” To do this effectively in branding, you need to establish an effective ‘good versus evil’ story. 
As the ‘hero’ in your brand story, you get the chance to show exactly how good your brand is for your audience, with honesty and sincerity. In Game of Thrones, this has been achieved by putting characters like Ned Stark and Jon Snow right in the middle of all the lies and deceit. Think about who the heroes and villains are in your brand story. It can help focus your brand’s positioning and communication. Image courtesy of HBO Disruptions (dragons) are the best equaliser Think about this: without her dragons, Daenerys would have been just another head of a major house with newly restored wealth and power (arguably, she might not even have gained this wealth or power without her dragons). The introduction of dragons truly ‘breaks the wheel’. In branding terms, this sets her apart within her market. It ensures her survival, and it can ensure yours too. However, as Daenerys learnt in the later seasons, you need to make sure that your disruption continuously evolves over time. You never know when your competition will develop a scorpion. Take time to think about how you can cause disruption amongst your competition. Disruption can mean many different things: a unique feature or benefit, a different aesthetic, or a distinctive tone of voice, to name a few. Find your own dragons and set them free. Image courtesy of HBO Reputation rules! How many times in the show does Jaime Lannister get referred to as the Kingslayer? Too many to count. Yet the deed itself happened long before the show even began, and still the whole of Westeros knows exactly who the Kingslayer is. Your reputation is fragile and easily broken, and once broken, it is nearly impossible to recover. As a brand, you need to ensure a good reputation always precedes you: it is often consumers’ first impression of you, and it can have a lasting effect on their perception. Your reputation is dictated by your actions as a brand. 
For example, look at the difference in how Margaery was treated when she stepped out in public (revered as a beloved queen) compared to when Joffrey stepped out in public (and was pelted with dung). Image courtesy of HBO Complacency kills Remember that epic fight between the Mountain and the Viper, way back in season 4? It’s definitely stuck with us; we can still remember the squelch of the Viper’s head as it was crushed in the Mountain’s hands. Gross. All because the Viper got too cocky and complacent, underestimating his enemy and relishing his apparent ‘victory’ far too soon. Confidence is great in branding, but overconfidence can lead to the death of a brand through complacency. Common signs of complacency are a lack of innovation within your category, over-reliance on past victories, and a growing focus on competing on price instead of value (check out our previous blog on value and price here). If you become complacent, that’s when your competition will strike, leaving you in a compromised position — one from which you may never recover. Your confidence should be rooted in your ability to drive innovation and differentiate yourself from your competition. We should all take a leaf out of Bran the Broken’s book and become a branding version of the three-eyed raven: one eye on our present, one on our past, and one spying on the competition. Image courtesy of HBO Winter is coming Everything changes. Your brand needs to change, too. When winter comes, adapt to your environment. Be like the Starks — embrace the winter as well as the summer. Every season in Westeros has its own opportunities as well as its own challenges. Like the freezing temperatures of winter and the balmy weather of summer, you need to set your measures for change, so you can prepare for it proactively. If your metrics detect changes in the market, adapt your brand accordingly. 
So, those are the 15 branding lessons we found in Game of Thrones. We hope you found them useful and that they help you continue to develop your own brand. We still can’t believe it’s all over, but what is dead may never die, right? If you think of any other Game of Thrones branding lessons, pop them in the comments; we’d love to hear them.
https://medium.com/swlh/15-lessons-game-of-thrones-can-teach-us-about-branding-8866fc191388
['Tassia Agatowski']
2019-12-03 16:08:23.296000+00:00
['Branding', 'Game of Thrones', 'Entrepreneurship', 'Marketing', 'Branding Strategy']
What movie is playing in your head?
My middle school son was auditioning this week for roles in two shows at a local theatre company. He’s a talented young actor, but behind that talent, he was wrestling with a cluster of nerves and anxiety. A normal thirteen-year-old with aspirations to do great things, he was battling the kind of fear that likes to hang out in the cheap seats. To calm some of those jitters, I spent some time with him each morning helping him take control of the movie playing in his head. “Close your eyes, and imagine walking into the audition. Look around, make eye contact, smile and say hello to everyone. Take a deep breath, and sing your audition number for them. Imagine singing each word, and pay attention to what it feels like when you hit each note just right. When you’re done, confidently thank them. See what it looks like in your head when they smile back and tell you that you did a great job. Feel it.” I’ve done the same thing recently with my daughter (his twin sister) who has taken up lacrosse. We’re visualizing what it feels like making the perfect pass, cradling the ball, and running downfield to take the perfect shot. As much as we’re spending time training physically, we’re also taking time to train mentally. From the stage to the playing field, strengthening the mental game is critical. There was a time when someone else would have told this story and I would have thought they were nuts. For a long time, the concept of visualization didn’t fit my “push harder and apply yourself” mindset. Why visualize when you could just bulldoze through the challenges and grind it out? And then it hit me… Whether I like the idea of visualization or not, I’m doing it every day without knowing it. With each challenge I face, I’m watching stories in my head that are impacting how I handle myself. For a while, those stories revolved around all of the reasons I could not and would not be successful. 
I was visualizing what it looked like to fall short and to miss the mark, because I didn’t feel like I was enough or that I belonged at the table. There was no way “those people” would want to talk to me. As an entrepreneur, this was paralyzing. I would attend networking breakfasts and strategically walk in late so that I missed the networking portion and could find a table that looked safe to me. Every conversation was scary, and I was intentionally sidestepping opportunities because of the movie playing in my head. It affected my meetings and pitches with new clients. I was watching the wrong movie. I was visualizing failure. And then, I came across a research project from 1996 at the University of Chicago. A random group of students was selected to shoot free throws. On the first day, their starting shooting percentages were recorded. Then, over the next thirty days, each group was given specific instructions. One group was told not to touch a basketball for thirty days — to do nothing. The next group was told to practice shooting free throws each day for thirty minutes. The final group was asked to come to the gym each day for thirty minutes, closing their eyes and visualizing shooting free throws, never touching a ball. Thirty days passed, the groups came back, and they shot free throws again to be tallied. The group that did not practice at all showed no improvement. The group who shot free throws every day improved their percentage by 24%. Where did the group who only visualized shooting their free throws come in? They improved by 23%, without ever touching a basketball, just behind the group who practiced daily. Simply visualizing success helped them to BE successful! It’s time for a new movie! And so, if we are naturally wired to play movies in our heads anyway, why not create some Academy Award winners for ourselves? Why not take control and start to watch success stories rather than letting the negativity direct what’s on our big screen? 
Whether it’s at work or in our relationships, on a lacrosse field or in an audition, the simple act of slowing down to visualize success can be the key that unlocks it for you. The Takeaway Direct your own movie! Close your eyes right now, pick a part of your life that you’re motivated to excel at, and visualize what that success looks like. Spend some time quietly and watch each detail. Imagine each move. Feel the emotion of doing it perfectly. See it and experience it in vivid detail. Then, tomorrow, repeat it, until your imagination becomes reality!
https://johngamades.medium.com/what-movie-is-playing-in-your-head-6533aa0e8d17
['John Gamades']
2019-05-13 17:33:32.587000+00:00
['Entrepreneurship', 'Self Improvement', 'Motivation', 'Self', 'Success']