Unlocking Slack’s Full Potential 💫
Search Modifiers

in:[#channel] - searches messages in a channel
in:[@name] - searches messages in DM conversations with a particular user
to:me - searches messages sent to you
from:[@name] - searches messages from a particular person
from:me - searches messages you’ve sent
has:link - searches messages that contain a URL
has:reaction - searches only messages that have received a reaction (maybe not the most practical of features, but a fun one, for sure!)
before:[date], after:[date], on:[date/month/year], during:[month/year] - searches within a specific time frame

Search modifiers can even be combined to narrow down your results even further! You can also exclude channels from being indexed in your searches to prevent certain results from appearing. This could be useful for channels you don’t participate in, or channels that consist of an accumulation of data posted by a third-party integration. To exclude channels, add them under Preferences -> Search. And finally, you can search from within any conversation by starting your query with /s followed by your modifiers and keywords.

Unread Messages & Channels 📖

To mark a message in a channel or conversation as unread, hold down ⌥ and click on the message’s timestamp. This technique is helpful when you want to revisit a spot in a conversation at a later date.

Starring Messages ⭐️

You can “star” messages in any channel and then review those items in the “Starred Items” panel. This can be handy when you want to address a message, but need to come back to it later.

Quick Switching 🏎

Use ⌘ + K to open the Quick Switcher, which allows you to navigate to any channel, direct message, or team.
This is a great way to quickly navigate around Slack, especially when combined with other keyboard shortcuts. The less time you spend switching between your keyboard and mouse/trackpad, the faster you can get things done.

View All Unreads In One Place 📚

Read up on all your unread messages in one place by activating All Unreads in Preferences -> Sidebar -> Show All Unreads. This prevents you from having to navigate between a bunch of different channels or direct message conversations when you want to get caught up on things in Slack.

Uncluttered Channels ⛏

If you’re someone who likes to keep things uncluttered, you can use the /collapse command to hide all inline image previews. This keeps things nice and compact, especially when people are posting lots of links to articles that are accompanied by thumbnail images. If you want to see the pretty pictures again, you can use /expand to reveal them.

Reference Past Messages 🔗

Slack lets you reference another message by copying the message’s link and using it as the basis for a new message. This proves useful when you want to provide more context to your message.

Conduct Polls With Emoji Reactions 🙋‍

There is no official polling system in Slack, but you can easily leverage reactions to create a poll. Just ask a question and add a reaction for each of the options, which will allow people to click on a reaction to cast their votes. We use this all the time when we’re ordering food to the office. Each employee simply reacts with an emoji to indicate which lunch option he or she prefers. You can number your options in a message and have people vote with reactions; the reaction counts show you how many people have voted for each option.

Minimize Distractions 💤

Need some quiet time?
Why not set up Slack’s automatic Do Not Disturb mode to disable notifications between certain hours of the day? Users can override this if they want, but they’ll have to press an additional confirmation button for their message to trigger a notification on your end. You can also enable Do Not Disturb mode from any Slack message box by typing /dnd followed by an amount of time. For example, you could use /dnd until 12 PM, /dnd for one hour, /dnd until tomorrow, or a similar variation to enable Do Not Disturb. To turn your notifications back on, simply type the command /dnd off.

Reminders ⏰

Have Slack remind you about things using the /remind command. You can use reminders to remind other users, channels, or even yourself. For example, you could set up /remind @justin.trudeau Buy bread on the way home at 5 PM or /remind me Send weekly client update Friday at 10 AM. I often use reminders to prompt a channel to carry out a recurring task, or to remind me to tell someone something during working hours if I think of it after work (I don’t like messaging people on Slack after work if I can avoid it).

Third Party App Integrations

There are many third-party applications that integrate well with Slack. For example, there’s a way to manage Trello cards using Slack. You can view a list of all available Slack integrations on their website. There are also tools provided by IFTTT and Zapier that allow you to automate some actions. A popular integration we use is GIPHY, which allows users to add GIFs to their messages in Slack!

Keyboard Shortcuts ⌨️

Slack offers quite a few keyboard shortcuts, all of which are listed when you press ⌘ + /. Some useful shortcuts include ⇧ + Esc to mark all messages in Slack as read and ⌘ + ⇧ + \ to add a reaction to the last message in a channel.
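To tie the search tips together, here is a purely illustrative Python helper that composes a query from the modifiers described earlier. The function, its name, and the channel/user names in the example are my own inventions, not anything Slack provides; Slack itself simply accepts the finished string in its search box.

```python
# Purely illustrative: compose a Slack search query string from the
# modifiers described in this article. This helper is not part of Slack;
# Slack just takes the resulting text in its search box.
def slack_search_query(keywords, **modifiers):
    """Build a query like 'in:#general from:@robert standup notes'."""
    parts = [f"{name}:{value}" for name, value in modifiers.items()]
    parts.append(keywords)
    return " ".join(parts)

# Combine modifiers to narrow results, as described above
# (the channel and user here are hypothetical):
query = slack_search_query("standup notes", **{"in": "#general", "from": "@robert"})
print(query)  # in:#general from:@robert standup notes
```

The `**{...}` spelling is only needed because `in` and `from` are Python keywords; the point is simply that modifiers stack up in front of your keywords.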
https://medium.com/osedea/unlocking-slacks-full-potential-9a1841742bcc
['Robert Cooper']
2019-05-23 19:47:03.497000+00:00
['Slack', 'Communication', 'Tips And Tricks', 'Productivity', 'Keyboard Shortcuts']
I Hear The Song From Russian Doll Every Time I Wake Up Please Help
Nothing spoiled, except me. It isn’t cute anymore. Two weeks ago I followed my instincts and Netflix algorithms and watched Russian Doll, the streaming service’s latest darling and quite honestly the best show I’ve seen in a year. For context, it distracted me from both the fact that we don’t have so much as a release date for The Crown and the fact that I don’t yet have access to season three of Versailles. As I am typically not drawn to television set in the present day unless it also somehow contains witches and/or Benedict Cumberbatch, Russian Doll was a bit of a departure for me. There’s a Groundhog Day-esque incident repeated in the show that is marked by Natasha Lyonne’s character staring at herself in a bathroom mirror while a song plays each time, always from the exact same point in the song. These few bars of music are now where I live, they are my prison, my isolated Nordic island inaccessible by boat. The song is Harry Nilsson’s “Gotta Get Up” and it’s eating me alive. The song itself is a very good song! There is nothing wrong with the song. There’s nothing wrong with all sorts of amazing songs either, but if you played Paloma Faith’s cover of “Never Tear Us Apart” enough times I’d want to throw it through a mulcher, too. And now every single morning, the instant I open my hazel eyes, I have to hear, “Gotta get up, gotta get out, gotta get home before the morning comes,” and I’m going to need a pill for it soon. It isn’t surprising, the lyrics “gotta get up” naturally pop into one’s head when that is precisely the thing you need to do in the moment, but in this tone, at this bouncy pace, over and over, every time I wake—it’s inhumane. Would you like to show me a bunch of puppies I can’t pet next? A pizza that electrocutes me every time I touch it? Non-alcoholic wine?! Bring your freshest hells, trust me, I can take them.
Part of me thinks this is the point, that the song currently treadmilling through my mind was exactly what the (all female, by the way) creators of this show intended. I’m a real life continuation of the story line, starting each day over just as the one before it, not entirely out of sync with the phenomenon experienced by our main character. Is it a call to action, perhaps? A reminder that I should shake things up, live more presently in the moment, find my own way to break free from a seemingly fruitless cycle? Is it a cruel joke? Did they select the song specifically for its ability to stick itself into one’s head the way peanut butter gets stuck in hair? That’s it, that’s what it is, the show was too good, so they had to leave us with one thing to be angry about or they’d ruin us for all future programming. This is Netflix mischief, that’s what this is. Sure, we’ll give you the most innovative plot line and refreshing character personalities you’ve seen since the last time you actually owned a printer, but we’re going to leave you with this little morsel playing over and over again every time you open your eyes until you bid this world adieu. Fair trade, no take-backsies. Whatever the reasoning, I am peeved. Perturbed. In need of a cleansing, a brain-loofa. I’d even settle for replacing it with something else annoying at this point! Throw “Unbelievable” by EMF in there, I don’t give a shit! We’re working on 19 days of misery here people, I’ve got to get up, gotta get out, gotta get home before the mooooorning, FUUUUUUUCK!!!!
https://shanisilver.medium.com/i-hear-the-song-from-russian-doll-every-time-i-wake-up-please-help-e0567dd823a4
['Shani Silver']
2019-02-22 12:21:10.289000+00:00
['Humor', 'Television', 'Life', 'TV Series', 'Writing']
OUR STORIES
“The longer I live, the more beautiful life becomes.” (Frank Lloyd Wright) Non-fiction pieces, personal essays, occasional poems and short fiction that explore how we feel about how we age and offer tips for getting the most out of life.
https://medium.com/crows-feet/our-stories-4e9f1b2f2ecd
['Nancy Peckenham']
2020-11-28 17:50:25.657000+00:00
['Longevity', 'Aging', 'Health And Wellness', 'Wellness', 'Retirement']
Healing is a Trap
I don’t need to heal because I was never broken. I didn’t believe this a year ago, a month ago, a week ago, yesterday. I didn’t believe this a minute ago or the second that just passed. It’s easy to forget a truth that wasn’t such for many years. I spent the last ten years searching for answers for my anxiety, my lack of trust in my intimate relationships, my relentless doubt and indecisiveness that felt as normal as air to lungs. I spent the last ten years nose deep in books and palm in pain from writing notes the size of the book I was reading. I spent the last ten years watching and rewatching the same YouTube videos because that brought relief movies and shows couldn’t. I said no to invitations to the bar, to parties, to job opportunities, to beautiful girls that genuinely liked me, and to hardwired dreams because I didn’t feel comfortable revealing myself until the emotional stitches had fallen out and the scars were as noticeable as the sun at night. I yearned for a path, a freeway that would lead to a destination of ease and pain-free success. I honestly craved the direction more than the destination because a promise of a better tomorrow intoxicated the suffering of today. But healing is that small cubical block of sharp cheddar cheese atop the mousetrap. Once the apparatus is triggered, there is no freeing your neck from its clamps. Put the cheese back in the air-sealed Kraft bag. And return the spine-snapping rig to Jeff Bezos. Because your name tag doesn’t read “Broken.” It says “a regular-normal-and-spectacular-human-being-that’s-taken-a-few-nicks-because-life-sometimes-sucks-but-is-as-tough-as-nails-yet-soft-as-a-down-pillow-and-as-content-as-a-Buddhist-monk-meditating-while-as-joyous-as-a-kid-ripping-their-presents-open-on-Christmas.”
https://medium.com/afwp/healing-is-a-trap-ef263bf87f40
['Bryce Godfrey']
2020-12-28 15:32:36.564000+00:00
['Personal Development', 'Life', 'Mental Health', 'Self Improvement', 'Life Lessons']
Your Entire Life Is on Gmail. It’s Time to Clean That Up.
Don’t suffer like I’ve suffered

Wednesday, 2 p.m. Eastern Time: I am on hour 9,000 of deleting emails from my overloaded Gmail account. My eyeballs are leaking out of my head and pooling in a sticky puddle on my laptop’s mouse pad. Or it feels like it, anyway. In recent weeks, Google has sent me repeated warnings that I’m approaching my Google Drive storage limit. I had rarely considered that Google would limit my storage capacity; the company’s potential for data collection seems infinite, incapable of being incapacitated or overburdened. And yet my email account, which I’ve had since 2014, is finally tired of holding my endless stream of newsletters and press releases; the 15 gigabytes that Google allots free users is nearly full. I’d hoped to write this story with a title along the lines of “One Simple Trick To Clearing Out Your Google Drive Storage.” Deleting every last email in your inbox and starting fresh would be ideal, but there’s such a wild mix of correspondence stored there — chain emails from your second cousin, newsletters from companies you bought a sweater from once, love letters from the early days of a current relationship — that the nuke-and-run method isn’t recommended or possible for anyone but those who lack even a drop of sentimentality. For everyone else, there’s no one trick to clearing out your Google Suite. There are strategies, which I will get into below, but the best thing you can do, I have unfortunately discovered, is to keep things tidy as you go. That means deleting the emails you don’t need as you receive them so you don’t get caught up in the mess I spent the last several days cleaning up. Over the course of the past few days, I’ve been experimenting with various methods of deleting my emails and documents.
It was a painful experience: Reading old emails that detailed difficult situations and brought up bitter memories was not how I would prefer to spend my workday (or my time off), and it wasn’t easy to figure out what I could justify deleting (emails with former bosses about banal topics) and what should stay (contracts, old emails from friends). My years of resistance toward deleting any of these emails — out of a sense of nostalgia or concerns that I might need them for whatever reason someday — likely fall into the bucket of what researchers consider “digital hoarding.” Digital hoarding, which I’ve written about before, might sound somewhat hyperbolic — after all, it’s not like someone who compulsively splurges on Steam sales must then literally wade through piles of games on their way to the bathroom. Yet studies show digital hoarding can be stressful and even upsetting to those who experience it. In 2018, researchers interviewed 45 of these so-called digital hoarders and found that the impression that digital space is endless contributed significantly to peoples’ tendency to hoard. They were surprised at the volume of digital stuff they’d accumulated, but still struggled to come to terms with deleting much of it. Many people simply didn’t care about the pileup of documents, emails, photos, and music. “I can’t be bothered going through it all, there are too many,” said one 30-year-old participant. “I’ve left it so long now that going back and sorting through is not something I can be bothered with,” said another, age 25. For many participants, the prospect of going through and deleting their digital crap was anxiety-inducing and stressful. This certainly aligns with my own experience: When I first faced the mess that was my Google Drive on the day I took this assignment, I felt panicked and overwhelmed by the sheer magnitude of the task in front of me.
Similar to the terror that might precede sitting down to scrub the baseboards with a stack of sponges, I was resistant and angry before I got started on my Google Suite cleaning. But once I got started, I kind of… couldn’t stop. Seeing that little percentage tick down as I went through my emails was immensely satisfying, like creating vacuum lines on a carpet. So, yes, there’s no real trick to this, and my guess is that this is intentional: If it’s a pain in the butt to organize and delete your emails and files, your only options are either to a) get a new email address (I legitimately considered this) or b) pay Google for more storage, which starts at $1.99 a month for 100 gigabytes and goes up to $10 a month for 2 terabytes. And the more emails you keep and accumulate, the more delicious, juicy data is available for Google to collect. What you need to do first is figure out where you’re storing most of the junk taking up the precious 15 gigabytes Google allots you. To do this, go to One.Google.com and, on the left sidebar, click “Storage.” There, you’ll find a breakdown of how much space Google Drive, Gmail, and Google Photos are taking up, respectively. Unless you’re a photographer or a big fan of huge PDFs and weighty spreadsheets, most of the action is probably happening in your email. The best way to delete mass emails without regret — or sadness, as I found as I thumbed through those depressing messages from five years ago — is to temporarily sign up for Mailstrom.co. Mailstrom analyzes your email and, theoretically, allows for quick and easy unsubscribe and delete options. (For those worried — with good reason — about their privacy, Mailstrom says it won’t share or sell your data for advertising purposes, and it deletes your data three months after you’ve canceled your account.) But after you’ve deleted 2,500 emails through Mailstrom, your free trial ends, and you need to pay between $9 and $30 a month to continue using the service.
If I were going to pay, I’d probably just buy the Google storage, which is cheaper than paying for a streamlined method of deletion. Instead, I recommend using Mailstrom to show you which senders are pummeling you with the most garbage: I am my second-worst offender! Once you’ve determined the worst sender offender, you can go into Gmail, search from:[email protected] (or whoever is sending you tons of email), and mass delete from there. Once you’re looking at your search results, click the “Select All” box on the left side of your inbox, and then be sure to click “Select all conversations that match this search.” That way, you’ll delete all the messages sent by that particular sender, not just the most recent 50. That should get rid of a pretty decent chunk of your email clutter in a minimal amount of time, but if you want to keep going, I’d recommend a few more courses of action: Delete everything in your Promotions and Social folders. Just go nuclear on those folders. Check the past few pages to ensure nothing important lands in there, and then select all and delete. That will likely remove thousands of the remaining marketing and newsletter stragglers you didn’t catch when you used the Mailstrom method. If you only want to delete Promotions or Social emails older than a specific date, use the following search term: category:promotions , older_than:2y. (That space between “promotions” and the comma is intentional — the search won’t work without it.) I did this first and then decided, whatever, I don’t need any of that junk. Bye! You can do the same with newsletters with category:updates , older_than:2y. Search for all your emails larger than a specified size, then go through and manually delete (or be bold and “select all” — “delete”). Enter larger:10mb for this option. This takes more time, so I only recommend this if you’ve already nuked your marketing emails and still have a good chunk of space to free up.
It makes more sense to sort out and delete emails larger than a specific size than attachments larger than a specific size because plenty of emails are ginormous without the additional help of an attachment. But if you want to focus on attachments, type has:attachments larger:10MB (or however big) into the search bar and go wild. You can find more helpful Gmail search terms here, though the above are the terms I found most effective for deleting swathes of email at once. Because Google considers your Trash to be part of your allotted 15 gigabytes, you need to empty it before you know how much you’ve truly managed to delete. Once you empty your trash, those emails are gone forever, but I don’t really recommend clicking through and looking at what’s in there because you’ll find yourself three hours later red-eyed and extremely bored. Just hit delete — and admire your clean baseboards/organized closet/sparkling tile or whatever household cleaning analogy would give you the most satisfaction. Now, hopefully, you are down to the point where you can begin to accumulate email again without worrying about running out of space for the next couple of years. It took me several days to get here, but only because I took the time to try every conceivable method in order to deliver to you, dear reader, those which I’ve found most effective. Still, you probably need to put at least an hour or two of work into this, I’m sorry to say. But you never have to do this again if you follow two very simple rules from here on out: Mark any and all press releases, marketing emails, et cetera that you don’t want to see as spam. If you want to be nice, you can email the marketing company or PR reps and respectfully ask to be removed from their mailing list, but if you’re lazy or simply receive too many of them to do this, just file them to spam. Gmail automatically clears out your spam folder after 30 days, so they’re as good as gone after a month.
Do this consistently, as soon as you see them. Delete any other emails you don’t care about immediately. Don’t leave them there because maybe you’ll read them later. Don’t let them collect dust. Delete. Be ruthless, seriously. No time like the present to shed your indecisive, sentimental nature when it comes to messages from your alumni association or the rescue where you got your dog. Bye! With this, you should stay ahead of the game, with minimal effort, for the next several years at least. Once you’ve cleared out your Gmail, you don’t need to get complicated with folders and labels and such, which get cumbersome if you have too many. (I only have two: one for random thoughts I send myself in the middle of the night about my novel, and one for press releases from book publicists since I often respond to those en masse at a later date). By making the spam and delete buttons your best friends, you will avoid the disaster you have wrought upon yourself for a very, very long time.
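The search operators above can also be strung together. As a sketch, this tiny Python helper (my own invention, not a Gmail feature) joins operators into a single query string you could paste into Gmail’s search bar; the newsletter address in the example is hypothetical.

```python
# Illustrative only: join Gmail search operators (as covered in this
# article) into one query string for the Gmail search bar. The helper
# and the example sender address are hypothetical, not part of Gmail.
def gmail_query(*operators):
    """Combine operators like 'category:promotions' and 'older_than:2y'."""
    return " ".join(operators)

print(gmail_query("category:promotions", "older_than:2y"))  # stale promos
print(gmail_query("larger:10mb"))  # big emails, with or without attachments
print(gmail_query("from:newsletter@example.com", "older_than:1y"))
```

Each printed string is just what you would type into the search bar by hand; composing them in code only makes the stacking of operators explicit.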
https://debugger.medium.com/your-entire-life-is-on-gmail-its-time-to-clean-that-up-c81106e099b4
['Angela Lashbrook']
2020-10-16 05:32:59.829000+00:00
['Technology', 'Gmail', 'Productivity', 'Digital Life', 'Email']
Some stuff I’d like to say to new developers
Especially if you’re coming out of a boot camp

I took a very traditional route into Software Engineering. I got interested in computers and programming in high school, then got my B.S. in C.S. in 2012 and my M.S. in C.S. in 2018. I’ve been in the workforce for over 8 years now. With the rise of different programming boot camps and programs (like Lambda School), I’m seeing more and more people look to make the jump into the software engineering field. And with good reason! The labor statistics website claims that software developers make an average of $86,000, which is higher than the median household income in much of the US. Add to that sites like levels.fyi claiming astronomical salaries at some big-name tech companies, and it’s easy to see why people are at least curious to try out the field. With all of that at play, I have a pretty steady stream of people reaching out, curious about software engineering and what it takes to get a job as a software engineer. Drawing on common questions I’ve heard, I wanted to lay out a few thoughts that might help a wider audience.

Can I get a job as a software engineer without a degree?

Short answer — yes! Longer answer — probably, but it’s not going to be easy. Take a look at the Lambda School outcomes report for 2019. It clearly states that they are placing students in real jobs after they graduate from the school. Lots of big-name tech companies have employee advancement programs that train existing employees to be software engineers. So yes, it is totally possible to get a job as a software engineer without a degree. With that being said, you should know that it’s not going to be easy. Even though there is a shortage of software engineers, job postings for them tend to get an excessive number of applicants. Even small companies are often overwhelmed by the number of resumes, and plenty of them are fake or inflated (more on that below).
So with all of those resumes flooding in, lots of them with great buzzwords, getting yours noticed and read without a degree will be tricky. Some quick tips:

- Do show evidence of work — a GitHub profile with actual projects, open source contributions, stuff you’re proud of.
- Do show what education you’ve got and the initiative you’ve taken to try and get more experience (boot camps, classes, projects, internships, etc.).
- Do not inflate your resume — I’ve interviewed people who lied on their resume, and it shows in a hurry; I mark those applicants as don’t call back. Present your strengths honestly, don’t inflate.
- Be realistic — if you’re graduating from a 9-month boot camp with no work experience, shoot for junior dev positions; don’t apply to senior roles when you don’t have the experience or knowledge. You’re starting off from scratch and that’s fine! Just be ready for the level you’ll come in at.
- Don’t get discouraged when you don’t hear back right away — keep trying. There are many, many reasons why someone might skip over a resume from a boot camp grad (maybe their team is overworked and they can’t take on a junior, maybe they tried a grad from another boot camp and it didn’t work out, maybe they just have too many resumes and that’s an easy filter). But not getting the first job, or the 500th you’ve applied to, doesn’t mean you won’t get the 501st. Keep expanding your experience, keep trying!

Ask yourself if you like programming

Hollywood likes to show programming as Tony Stark manipulating 3D diagrams and building flying suits, when in reality it’s slow, detailed, often painstaking work. There will be times you’ll feel like a hero when you fix a bug that’s been plaguing your team, but there will be lots and lots of other times you are frustrated by a problem for days before you make any progress. As you’re working through your boot camp or trying some free programming tutorials, ask yourself if you like what you’re doing.
If you feel happy, curious, and enthusiastic when you solve a complex bug and get it to work, you’re probably on the right track, but if you feel frustrated, tired, or discouraged, you might want to take a step back and evaluate whether this field is for you. Not just because you should do something you enjoy, but because if you hate what you’re doing you won’t be a good software engineer. Unlike some fields where the problem is visible in front of you and you can work through it whether you feel like it or not, computer science is almost all in your head. You need to be ready to apply critical thinking, form connections between abstract constructs, and problem solve. If you hate what you’re doing you won’t want to do any of that, and it will be much harder to succeed. Don’t get dazzled by the money potential you see online; take your time to see if this field is right for you.

Your interview will probably involve data structures and algorithms, even if your job does not

A lot of programming is data structures and algorithms, and knowing when to use which of them. A crucial part of being a programmer is being able to reason about different data structures and algorithms, grasp the tradeoffs between them, and apply them to build software. “But I’m a React front end developer!” I hear you say. “I do design and user workflow, I focus on user experience, why would I need to know anything about data structures?” And you’re probably right. There are probably jobs out there that don’t require any DS/algo knowledge. You can probably write a lot of React apps just using hash tables or lists.
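For a sense of where coding screens often start, fizz buzz is a classic warm-up: print the numbers from 1 to n, but say “Fizz” for multiples of 3, “Buzz” for multiples of 5, and “FizzBuzz” for multiples of both. A minimal Python sketch:

```python
def fizz_buzz(n):
    """Return the fizz buzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # multiple of both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizz_buzz(15))
# ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz']
```

It looks trivial, but it exercises loops, conditionals, and the ordering of checks, which is exactly why interviewers use it to confirm an applicant can write code at all.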
In my experience most interviews still involve some level of DS/algo question, for two reasons:

- They’re a relatively simple form of problem that tells me whether you can reason about abstract concepts and problem solve.
- They’re a great way to force you to write code.

I’ve been in plenty of interviews where the applicant talks a great game, seems amazing, has all the right buzzwords on their resume, and then we hand them a marker (or virtual coding space), ask them to implement fizz buzz, and they fall apart. If I ask you an algorithms question, I get to see you actually write code, rather than just quote blog posts or tweets you’ve read.

All Software Engineers Feel Dumb Sometimes

Several times a week (if not once a day or more) I think to myself, “I have no clue what is causing this problem.” The severity of the “no clue” part varies pretty widely — it could be 30 minutes of figuring out why a build broke, days of debugging a dependency that’s failing, or weeks of tracking down transient issues. And I feel like an absolute idiot the whole time. It’s only human — you wonder if someone else on your team could figure this out quicker, you wonder if someone else would know the problem better. Imposter syndrome is real. So you’re not alone when you’re confused by a new language or framework. All software engineers feel dumb sometimes. Don’t get discouraged or down; remember that:

- Software engineering is a great field to feel confused in — if you deep dive the code and read the manual, you can understand any framework or problem.
- If you feel dumb, it could just be because this field is really, really difficult. If you find yourself struggling with a concept, it could just be because it’s complicated.
- Ask! If you’re confused, chances are someone else in the room is confused too. It may be a conversation that should be taken offline, but don’t be afraid to ask for clarification!

Be the deep dive guy!
If you want to show that you can contribute as a software engineer right off the bat, be the bug-squashing, deep-diving, code-reading volunteer. Don’t wait for someone to show you the problem line, don’t ask for help before you’ve looked: clone that repo, grab the line number from that exception, and start deep diving! That does a couple of things for you right off the bat:

- Shows you’re not afraid of code, and that you’re willing to go look and solve problems.
- Gets you reading your team’s code base, which is incredibly critical.

Here are a couple of tips for getting started doing deep dives:

- Take notes — lots, and lots of notes! This will help you show evidence of your work, could help document some functionality that’s misbehaving, and lets you walk someone else through your rationale for a suggested fix.
- Assume nothing! That function labeled multiplyNumbers() might have a side effect that’s not obvious. Question all of your assumptions and go a little way down each path until you’re comfortable with what it does.
- If you’re spending a lot of time, do regular check-ins with a mentor or team lead to make sure they don’t think you’re too far off in the weeds (this is where your notes come in). Tell them what you’re thinking, ask if they want to redirect you. That shows that you’re a team player and gives them a chance to pull you back if you’ve gone too far down the rabbit hole.

We are so excited you are looking into software engineering!

Oh my word, I can’t even explain how excited we are that you’re here and looking into this! I love programming! It makes me excited to get up and go to work, it motivates me to blog about it, it keeps my brain cooking and trying new issues. Software engineering is a blast! And you know what I need more of? Qualified coworkers who can do quality engineering! And that could be you! Just because I took a very traditional route into this field doesn’t mean you have to. You’ll bring in a different perspective and experience, and that’s great!
Dive in, start learning, let’s build stuff!
https://rollingwebsphere.medium.com/some-stuff-id-like-to-say-to-new-developers-4b629d9ef121
['Brian Olson']
2020-12-06 03:01:28.629000+00:00
['Bootcamp', 'Software Engineering', 'Advice and Opinion', 'Developer', 'Programming']
Logistic Regression (Complete Theory and Python Implementation)
In this post, you will discover the logistic regression algorithm for machine learning.

Contents:
- Overview
- Real-World Examples
- The Logistic Equation
- Assumptions
- Logistic Regression (Python Implementation)
  - Data (input variables, predict variable / desired target)
  - Data Exploration (observations, visualizations)
  - Feature Selection
  - Implementing the Model
  - Logistic Regression Model Fitting
  - Predicting the test set results and calculating the accuracy
  - Cross-Validation
  - Confusion Matrix
  - ROC Curve

After reading this post you will know:
- The many names and terms used when describing logistic regression (like log odds and logit)
- The representation used for a logistic regression model
- Techniques used to learn the coefficients of a logistic regression model from data
- How to make predictions using a learned logistic regression model
- Where to go for more information if you want to dig a little deeper

Logistic regression is a predictive modeling algorithm that is used when the Y variable is binary categorical. That is, it can take only two values, like 1 or 0. The goal is to determine a mathematical equation that can be used to predict the probability of event 1 happening. Once the equation is established, it can be used to predict Y when only the X's are known.

In linear regression, the Y variable is always continuous. If the Y variable were categorical, you could not use the linear regression model. So what would you do when Y is a categorical variable with 2 classes? Logistic regression can be used to model and solve such problems, also called binary classification problems. A key point to note here is that Y can have 2 classes only and not more than that. Logistic regression is a classic predictive modeling technique and remains a popular choice for modeling binary categorical variables. Another advantage of logistic regression is that it computes a prediction probability score for an event. More on that when you start building the models.
Real-World Examples

Some real-world examples of binary classification problems (logistic regression):

- Spam Detection: Predicting if an email is spam or not
- Credit Card Fraud: Predicting if a given credit card transaction is fraudulent or not
- Health: Predicting if a given mass of tissue is benign or malignant
- Marketing: Predicting if a given user will buy an insurance product or not
- Banking: Predicting if a customer will default on a loan

Why not Linear Regression?

When the response variable has only 2 possible values, it is desirable to have a model that predicts the value either as 0 or 1, or as a probability score that ranges between 0 and 1. Linear regression does not have this capability: if you use linear regression to model a binary response variable, the resulting model may not restrict the predicted Y values within 0 and 1. This is where logistic regression comes into play. In logistic regression, you get a probability score that reflects the probability of the occurrence of the event.

The Logistic Equation

Logistic regression achieves this by modeling the log odds of the event, ln(P/(1-P)), where P is the probability of the event, so P always lies between 0 and 1. Taking the exponent on both sides of the equation gives the logistic function.

Logistic Function

The logistic function, also called the sigmoid function, was developed by statisticians to describe properties of population growth in ecology: rising quickly and maxing out at the carrying capacity of the environment. It's an S-shaped curve that can take any real-valued number and map it into a value between 0 and 1, but never exactly at those limits:

1 / (1 + e^-value)

Assumptions

Binary logistic regression requires the dependent variable to be binary.
For a binary regression, the factor level 1 of the dependent variable should represent the desired outcome. Only meaningful variables should be included. The independent variables should be independent of each other; that is, the model should have little or no multicollinearity. The independent variables are linearly related to the log odds. Logistic regression requires quite large sample sizes.

Logistic Regression (Python Implementation)

Data

The dataset comes from the UCI Machine Learning Repository, and it is related to direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict whether the client will subscribe (1/0) to a term deposit (variable y).

Dataset Link: https://drive.google.com/file/d/1o4RjXH5Vu1ZL3cJEk4D-wXWGxzbGexaB/view?usp=sharing

import pandas as pd
import numpy as np
from sklearn import preprocessing
import matplotlib.pyplot as plt
plt.rc("font", size=14)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed; use model_selection
import seaborn as sns
sns.set(style="white")
sns.set(style="whitegrid", color_codes=True)

The dataset provides the bank customers' information. It includes 41,188 records and 21 fields.

Input Variables:
- age (numeric)
- job: type of job (categorical: "admin", "blue-collar", "entrepreneur", "housemaid", "management", "retired", "self-employed", "services", "student", "technician", "unemployed", "unknown")
- marital: marital status (categorical: "divorced", "married", "single", "unknown")
- education (categorical: "basic.4y", "basic.6y", "basic.9y", "high.school", "illiterate", "professional.course", "university.degree", "unknown")
- default: has credit in default? (categorical: "no", "yes", "unknown")
- housing: has a housing loan? (categorical: "no", "yes", "unknown")
- loan: has a personal loan?
(categorical: "no", "yes", "unknown")
- contact: contact communication type (categorical: "cellular", "telephone")
- month: last contact month of the year (categorical: "jan", "feb", "mar", ..., "nov", "dec")
- day_of_week: last contact day of the week (categorical: "mon", "tue", "wed", "thu", "fri")
- duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y="no"). The duration is not known before a call is performed; also, after the end of the call, y is known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model.
- campaign: number of contacts performed during this campaign and for this client (numeric, includes the last contact)
- pdays: number of days that passed after the client was last contacted in a previous campaign (numeric; 999 means the client was not previously contacted)
- previous: number of contacts performed before this campaign and for this client (numeric)
- poutcome: outcome of the previous marketing campaign (categorical: "failure", "nonexistent", "success")
- emp.var.rate: employment variation rate (numeric)
- cons.price.idx: consumer price index (numeric)
- cons.conf.idx: consumer confidence index (numeric)
- euribor3m: euribor 3 month rate (numeric)
- nr.employed: number of employees (numeric)

Predict Variable (desired target):
- y: has the client subscribed to a term deposit? (binary: "1" means "Yes", "0" means "No")

The education column of the dataset has many categories, and we need to reduce them for better modeling. Let us group "basic.4y", "basic.9y" and "basic.6y" together and call them "basic".
data['education'] = np.where(data['education'] == 'basic.9y', 'Basic', data['education'])
data['education'] = np.where(data['education'] == 'basic.6y', 'Basic', data['education'])
data['education'] = np.where(data['education'] == 'basic.4y', 'Basic', data['education'])

After grouping, these are the columns.

Data Exploration

There are 36,548 no's and 4,640 yes's in the outcome variable. Let's get a sense of the numbers across the two classes.

Observations
- The average age of customers who bought the term deposit is higher than that of the customers who didn't.
- The pdays (days since the customer was last contacted) are understandably lower for the customers who bought it. The lower the pdays, the better the memory of the last call and hence the better chances of a sale.
- Surprisingly, campaigns (number of contacts or calls made during the current campaign) are lower for customers who bought the term deposit.

We can calculate categorical means for other categorical variables such as education and marital status to get a more detailed sense of our data.

Visualizations

The frequency of purchase of the deposit depends a great deal on the job title. Thus, the job title can be a good predictor of the outcome variable.

%matplotlib inline
pd.crosstab(data.job, data.y).plot(kind='bar')
plt.title('Purchase Frequency for Job Title')
plt.xlabel('Job')
plt.ylabel('Frequency of Purchase')
plt.savefig('purchase_fre_job')

The marital status does not seem a strong predictor of the outcome variable.

table = pd.crosstab(data.marital, data.y)
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', stacked=True)
plt.title('Stacked Bar Chart of Marital Status vs Purchase')
plt.xlabel('Marital Status')
plt.ylabel('Proportion of Customers')
plt.savefig('marital_vs_pur_stack')

Education seems a good predictor of the outcome variable.
table = pd.crosstab(data.education, data.y)
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', stacked=True)
plt.title('Stacked Bar Chart of Education vs Purchase')
plt.xlabel('Education')
plt.ylabel('Proportion of Customers')
plt.savefig('edu_vs_pur_stack')

The day of the week may not be a good predictor of the outcome.

pd.crosstab(data.day_of_week, data.y).plot(kind='bar')
plt.title('Purchase Frequency for Day of Week')
plt.xlabel('Day of Week')
plt.ylabel('Frequency of Purchase')
plt.savefig('pur_dayofweek_bar')

The month might be a good predictor of the outcome variable.

pd.crosstab(data.month, data.y).plot(kind='bar')
plt.title('Purchase Frequency for Month')
plt.xlabel('Month')
plt.ylabel('Frequency of Purchase')
plt.savefig('pur_fre_month_bar')

Most of the customers of the bank in this dataset are in the age range of 30-40.

data.age.hist()
plt.title('Histogram of Age')
plt.xlabel('Age')
plt.ylabel('Frequency')
plt.savefig('hist_age')

Poutcome seems to be a good predictor of the outcome variable.

pd.crosstab(data.poutcome, data.y).plot(kind='bar')
plt.title('Purchase Frequency for Poutcome')
plt.xlabel('Poutcome')
plt.ylabel('Frequency of Purchase')
plt.savefig('pur_fre_pout_bar')

Create dummy variables, that is, variables with only two values: zero and one.
cat_vars = ['job','marital','education','default','housing','loan','contact','month','day_of_week','poutcome']
for var in cat_vars:
    cat_list = 'var' + '_' + var
    cat_list = pd.get_dummies(data[var], prefix=var)
    data1 = data.join(cat_list)
    data = data1

cat_vars = ['job','marital','education','default','housing','loan','contact','month','day_of_week','poutcome']
data_vars = data.columns.values.tolist()
to_keep = [i for i in data_vars if i not in cat_vars]

Our final data columns will be:

data_final = data[to_keep]
data_final.columns.values

data_final_vars = data_final.columns.values.tolist()
y = ['y']
X = [i for i in data_final_vars if i not in y]

Feature Selection

Recursive Feature Elimination (RFE) is based on the idea of repeatedly constructing a model and choosing either the best or worst performing feature, setting that feature aside, and then repeating the process with the rest of the features. This process is applied until all features in the dataset are exhausted. The goal of RFE is to select features by recursively considering smaller and smaller sets of features.

from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression()
rfe = RFE(logreg, n_features_to_select=18)  # newer sklearn versions require the keyword argument
rfe = rfe.fit(data_final[X], data_final[y])
print(rfe.support_)
print(rfe.ranking_)

The RFE has helped us select the following features: "previous", "euribor3m", "job_blue-collar", "job_retired", "job_services", "job_student", "default_no", "contact_telephone", "month_apr", "month_aug", "month_mar", "month_may", "month_nov", "day_of_week_mon", "day_of_week_wed", "poutcome_failure", "poutcome_nonexistent", "poutcome_success".
pref_indexes = list(np.where(rfe.ranking_ == 1)[0])
cols = list(np.asarray(X)[pref_indexes])
X = data_final[cols]
y = data_final['y']

Implementing the Model

import statsmodels.api as sm

logit_model = sm.Logit(y, X)
result = logit_model.fit()
print(result.summary())

The p-values for most of the variables are smaller than 0.05; therefore, most of them are significant to the model.

Logistic Regression Model Fitting

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

from sklearn.linear_model import LogisticRegression
from sklearn import metrics

logreg = LogisticRegression()
logreg.fit(X_train, y_train)

LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
    intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
    penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
    verbose=0, warm_start=False)

Predicting the test set results and calculating the accuracy

y_pred = logreg.predict(X_test)
print('Accuracy of logistic regression classifier on test set: {:.2f}'.format(logreg.score(X_test, y_test)))

Accuracy of logistic regression classifier on test set: 0.90

Cross-Validation

Cross-validation attempts to avoid overfitting while still producing a prediction for each observation in the dataset. We are using 10-fold cross-validation to train our logistic regression model.

from sklearn import model_selection
from sklearn.model_selection import cross_val_score

kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=7)  # shuffle=True is required with random_state in newer sklearn
modelCV = LogisticRegression()
scoring = 'accuracy'
results = model_selection.cross_val_score(modelCV, X_train, y_train, cv=kfold, scoring=scoring)
print("10-fold cross validation average accuracy: %.3f" % (results.mean()))

10-fold cross-validation average accuracy: 0.898

The average accuracy remains very close to the logistic regression model accuracy; hence, we can conclude that our model generalizes well.
Confusion Matrix

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)  # renamed to avoid shadowing the imported function
print(cm)

[[10873   108]
 [ 1104   272]]

The result is telling us that we have 10873+272 (=11,145) correct predictions and 1104+108 (=1,212) incorrect predictions.

ROC Curve

from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve

logit_roc_auc = roc_auc_score(y_test, logreg.predict(X_test))
fpr, tpr, thresholds = roc_curve(y_test, logreg.predict_proba(X_test)[:, 1])
plt.figure()
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.savefig('Log_ROC')
plt.show()

The receiver operating characteristic (ROC) curve is another common tool used with binary classifiers. The dotted line represents the ROC curve of a purely random classifier; a good classifier stays as far away from that line as possible (toward the top-left corner).

References

[1] http://www.holehouse.org/mlclass/06_Logistic_Regression.html
[2] http://machinelearningmastery.com/logistic-regression-tutorial-for-machine-learning
[3] https://scilab.io/machine-learning-logistic-regression-tutorial/
[4] https://github.com/perborgen/LogisticRegression/blob/master/logistic.py
[5] http://neuralnetworksanddeeplearning.com/chap3.html
[6] http://math.stackexchange.com/questions/78575/derivative-of-sigmoid-function-sigma-x-frac11e-x
[7] https://en.wikipedia.org/wiki/Monotonic_function
[8] http://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
[9] https://en.wikipedia.org/wiki/Softmax_function

That's all about logistic regression theory and Python implementation.
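One caveat worth adding to the numbers above: with 36,548 no's and only 4,640 yes's, accuracy alone is a weak summary. Precision and recall can be read straight off the confusion matrix reported above (a quick sketch in plain Python; rows are true classes, columns are predicted classes, per sklearn's convention):

```python
# Counts taken from the confusion matrix printed above:
# [[10873   108]
#  [ 1104   272]]
tn, fp = 10873, 108   # true class 0 (did not subscribe)
fn, tp = 1104, 272    # true class 1 (subscribed)

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)  # of predicted subscribers, how many really subscribed
recall = tp / (tp + fn)     # of actual subscribers, how many the model caught

print(f"accuracy={accuracy:.3f}, precision={precision:.3f}, recall={recall:.3f}")
```

So the model is about 90% accurate overall but recovers only around 20% of the actual subscribers, which is exactly the kind of detail the ROC curve is there to expose.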
https://medium.com/analytics-vidhya/logistic-regression-complete-theory-and-python-implementation-a6ee3f9d7171
['Sameer Bairwa']
2020-09-23 17:25:10.419000+00:00
['Machine Learning', 'Python', 'Data Science', 'Statistics', 'Logistic Regression']
We Can Disagree Without Disrespecting One Another
There’s no reason to feel jealous when you’re fulfilled in your life and secure in yourself. I really, really like being me. And I know a lot of women who feel the exact same way. We do not want to be someone else, including Rachel Hollis. We don’t want to have her life, or be the mother of her four kids. We don’t want to tell people about a glamourized story of how our husbands used to treat us like total crap when we dated, but our love for him was so strong that it ‘fixed him’. Success is a subjective term, and it changes in definition from person to person. To the above-mentioned influencer, success is jumping on planes multiple times a week, posting on social media several times a day with millions of likes and comments, having millions of dollars and speaking with the likes of Tony Robbins on his tours. For many of us, none of that is appealing. It’s all a matter of taste and choice. My definition of success is valuing my freedom and flexibility over all else (which one cannot do if they want to be a millionaire — I truly am not willing to work hard enough in my career to make that a reality for myself), making enough income to be comfortable but also be able to travel once or twice a year, and being able to raise a family quietly and happily while respecting others and the Earth… yeah, that’s my humble yet dream life. Being a public figure in the limelight? That’s a nightmare to me — anyone else feel the same?
https://medium.com/fearless-she-wrote/we-can-disagree-without-disrespecting-one-another-2be05df6d0d7
['Gillian Sisley']
2020-04-08 16:33:08.951000+00:00
['Relationships', 'Love', 'Feminism', 'Women', 'Writing']
Drawing The 92nd Academy Awards
It's always an enormous treat to go to Los Angeles and draw the Academy Awards red carpet and show. This year was no different: it was my fifth time attending and drawing, and I never take my luck for granted. I arrived two days in advance of the Sunday broadcast so that I could draw behind the scenes and walk around the neighborhood. After arriving on site, I got my credentials and headed down to the red carpet area to see what was going on. There were the usual suspects: journalists dressed up and practicing on camera, and men in dark suits who seemed to be guarding things. The carpet was still covered in plastic.
https://lizadonnelly.medium.com/drawing-the-92nd-academy-awards-7c8ea0f38d0e
['Liza Donnelly']
2020-02-15 19:33:44.997000+00:00
['Movies', 'Academy Awards', 'Storytelling', 'Oscars', 'Film']
Building + deploying a Neural Style Transfer app with pre-trained models
Building + deploying a Neural Style Transfer app with pre-trained models

With Flask, Python3, OpenCV, Heroku (and a bunch of other stuff)

Styled image and the uploaded image

Hello internet friends! I feel like one of the issues I've faced with regards to learning software development is that many tutorials are often too technical (and not fun). This is probably because of the nature of software development, but I feel like grasping technical concepts is as important as building fun mini-projects that will continue to encourage and spur us on. ༼ つ ◕_◕ ༽つ

I thought it'd be really cool to build something simple that involves image processing and deploy it online to show off to our friends and family. So today's tutorial will be a diagrammatically-explained walkthrough of how to build a web app that:

- Allows users to upload an image and choose a style
- Applies Neural Style Transfer on that image using pre-trained .t7 models
- Displays the uploaded image and the styled image

Directory structure of this project

A quick explanation of the directory structure above: all the files are needed except test.py, which I only used to locally check whether my neuralStyleProcess.py script was working correctly. I have 11 .t7 models in my project but you can decide how many you want (downloaded from ProGamerGov and jcjohns). Lastly, you can just ignore __pycache__ since it's auto-generated.

A visual explanation of this web app

In the diagram above, I've tried to only include the most important bits that'll hopefully help you guys understand how the web app works (I called mine pyFlaskOpenCV, bad naming, I know >.<). Also, it's missing chunks of code, parts of it are pseudocode (I've labeled them), and the '…'s are also NOT actual code. Let's start with the app.py script. Think of this as the app's root: we use it to set up the routes (something like the 'control' that tells things how to interact). This article gives a pretty good explanation of Flask.
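Before walking through the diagram, here is roughly what that app.py skeleton could look like. This is a sketch, not the article's actual code: the route and function names follow the walkthrough, but the templates and the OpenCV step are stubbed out with plain strings and comments so the snippet stays self-contained (a real app should also sanitize filenames, e.g. with werkzeug's secure_filename):

```python
import os
import tempfile

from flask import Flask, request, send_from_directory, url_for

UPLOAD_FOLDER = tempfile.mkdtemp()  # stand-in for the app's upload directory
app = Flask(__name__)

@app.route('/')
def index():
    # Sketch only -- the real app does: return render_template('upload.html')
    return 'upload form goes here'

@app.route('/upload', methods=['POST'])
def upload():
    f = request.files['image']
    f.save(os.path.join(UPLOAD_FOLDER, f.filename))
    # Sketch only -- the real app runs neuralStyleProcess.py here, then
    # renders complete.html, which links to the result via url_for(...).
    return url_for('send_processed_image', filename=f.filename)

@app.route('/processed/<filename>')
def send_processed_image(filename):
    # Serves the (processed) image back to the browser.
    return send_from_directory(UPLOAD_FOLDER, filename)
```

In the real app, neuralStyleProcess.py writes the styled image to disk first, so send_processed_image can serve it to complete.html.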
As we can see in the diagram above, render_template() will render our upload.html template. This means that our upload.html page will be displayed to us, and as we can see from the diagram, it contains a form with a POST method. With action="{{url_for('upload')}}", we call def upload(), and in that function we let users upload image(s) and render complete.html. From complete.html, we again use "url_for('send_processed_image', parameter1, parameter2)" to invoke the send_processed_image function, which calls our Python OpenCV image processing script neuralStyleProcess.py. neuralStyleProcess.py writes the processed image to file with imwrite and returns the processed image filename, so the send_processed_image function knows what to send from the directory to complete.html.

With regard to the neuralStyleProcess.py script, my implementation is largely adapted from pyImageSearch's tutorial. His article gives a thorough and detailed explanation of how it works. The only parts I've changed are: instead of showing the image, I did an imwrite, and I removed the scaling part from the post-processing of the image.

Deploying online with Heroku

You don't have to use Heroku, there are other options out there, but because our models take up quite a bit of space (their sizes can add up), remember to choose a server that will give you a little more storage space (e.g. Heroku's free tier worked fine for me but PythonAnywhere's free tier didn't). Continuing from our previous diagram, we still need to provide Heroku with three additional files so Heroku "knows" what to do with our web app. These files are Procfile, requirements.txt, and Aptfile. Again, here's a diagram to better explain things:

Neon colours, much cool

A Procfile is a text file in the root directory of your application that explicitly declares what command should be executed to start your app. In my case, here's mine:

web: gunicorn app:app

Okay, please let me rant. BECAUSE OF THIS I SPENT HOURS DEBUGGING!!! Why?
Coz they apparently care a lot about spaces. So if you wrote web:gunicorn app:app, or web:gunicorn app: app, or any other configuration (okay, I haven't tried them all, but I know it's (very) space sensitive ಥ_ಥ)... Sigh, moving on…

We use Aptfile to include additional software that we need installed on the dyno. What's a dyno? According to Heroku, "The containers used at Heroku are called 'dynos.' Dynos are isolated, virtualized Linux containers that are designed to execute code based on a user-specified command." Here is what's in mine (OpenCV requires the following libraries to run correctly):

libsm6
libxrender1
libfontconfig1
libice6

Finally, I did a $ pip freeze > requirements.txt in my terminal to generate my requirements.txt. If you're interested in what's in it:

click==7.1.1
cycler==0.10.0
Flask==1.1.1
gunicorn==20.0.4
imutils==0.5.3
itsdangerous==1.1.0
Jinja2==2.11.1
kiwisolver==1.1.0
MarkupSafe==1.1.1
matplotlib==3.2.1
numpy==1.18.2
opencv-python==4.2.0.32
pyparsing==2.4.6
python-dateutil==2.8.1
six==1.14.0
Werkzeug==1.0.0

One of the trickiest things about working with libraries like opencv-python and NumPy is that it can be really tedious to figure out which versions work together and which don't. (Sometimes version 4.2.0.32 works but 4.1.0 might not.) These are the versions that work for me.

Next steps (some suggestions)

- Image extension checking for the files uploaded
- Resizing the uploaded image
- Saving uploaded/processed image(s) to a database so data persists
- Other image processing scripts
- Better styling of our html pages
- Adding in some indicator (e.g. loading animation) on first page load
- Building something interactive on top of it (e.g. drawing with javascript etc)
- Instead of choosing a pre-trained model, generating art with a Generative Adversarial Network (GAN)

Aight, hope that helped someone gain a better understanding of how things work.
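On the first of those next steps, the extension check could start as small as this (a hypothetical helper, not something the app above already has):

```python
# Hypothetical helper for validating uploaded filenames before processing.
ALLOWED_EXTENSIONS = {"png", "jpg", "jpeg"}

def allowed_file(filename):
    """Return True only if the filename ends in an allowed image extension."""
    return "." in filename and \
        filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS
```

The upload route could then reject anything where allowed_file(f.filename) is False before handing the file to the OpenCV script.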
I'm not sure if this is a better way of going through code than breaking it into chunks and explaining each line. I personally find that most tutorials lack diagrammatical explanations, so I thought I'd present things in a way that feels most intuitive to myself. Do lemme know what you guys think. Happy creating! \ (•◡•) /
https://towardsdatascience.com/building-deploying-a-neural-style-transfer-app-with-pre-trained-models-661bbefc74cd
['Ran', 'Reine']
2020-05-10 13:22:33.325000+00:00
['Python', 'Web Development', 'Neural Style Transfer', 'Heroku', 'Flask']
Introduction to Deep Learning for Self Driving Cars (Part — 1)
Measuring Performance

After we have trained our first model, there is something very important to discuss. We have a training set, as well as a validation set, and a test set. What is that all about? Don't skip this part. It has to do with measuring how well we're doing without accidentally shooting ourselves in the foot, and it is a lot more subtle than we might initially think. It's also very important because, as we will discover later, once we know how to measure our performance on a problem, we've already solved half of it.

Let me explain why measuring performance is subtle. Let's go back to our classification task. We've got a whole lot of images with labels. We could say: okay, I'm going to run my classifier on those images and see how many I got right. That's my error measure. And then we go out and use our classifier on new images, images that we've never seen in the past, and we measure how many we get right, and our performance gets worse. The classifier doesn't do as well. So what happened?

Well, imagine I construct a classifier that simply compares the new image to any of the other images that I've already seen in my training set, and just returns the label. By the measure we defined earlier, it's a great classifier: it would get 100% accuracy on the training set. But as soon as it sees a new image, it's lost. It has no idea what to do. It's not a great classifier. The problem is that our classifier has memorized the training set, and it fails to generalize to new examples. It's not just a theoretical problem. Every classifier that we will build will tend to try and memorize the training set. And it will usually do that very, very well. Our job, though, is to help it generalize to new data instead.

So, how do we measure generalization instead of measuring how well the classifier memorized the data? The simplest way is to take a small subset of the training set, not use it in training, and measure the error on that test data.
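The memorization trap described above is easy to see in a toy sketch (made-up "images" as tuples, purely for illustration):

```python
# A "classifier" that just memorizes its training set.
train = {(0, 0): 'cat', (0, 1): 'dog', (1, 0): 'cat', (1, 1): 'dog'}

def memorizing_classifier(image):
    # Return the memorized label, or a blind guess for unseen images.
    return train.get(image, 'cat')

# Perfect score on the data it has memorized...
train_acc = sum(memorizing_classifier(x) == y for x, y in train.items()) / len(train)

# ...but held-out images it never trained on expose the failure to generalize.
held_out = {(2, 2): 'dog', (3, 3): 'dog', (4, 4): 'cat'}
held_out_acc = sum(memorizing_classifier(x) == y for x, y in held_out.items()) / len(held_out)

print(train_acc, held_out_acc)
```

Measuring only train_acc would declare this a great classifier; the held-out score tells the real story.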
Problem solved: now our classifier cannot cheat, because it never sees the test data, so it can't memorize it. But there is still a problem, because training a classifier is usually a process of trial and error. We try a classifier, we measure its performance, and then we try another one and measure again. And another, and another; we tweak the model, we explore the parameters, we measure, and finally, we have what we think is the perfect classifier. And then, after all the care we've taken to separate our test data from our training data and to only measure our performance on the test data, we deploy our system in a real production environment.

Image by Mika Baumeister on Unsplash

And we get more data, and we score our performance on that new data, and it doesn't do nearly as well. What could possibly have happened? What happened is that our classifier has seen our test data, indirectly, through our own eyes. Every time we made a decision about which classifier to use or which parameter to tune, we actually gave information to our classifier about the test set. Just a tiny bit, but it adds up. So over time, as we run many, many experiments, our test data bleeds into our training data. So what can you do?
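The section ends on that question, but the shape of the usual answer is already hinted at in its first paragraph: keep three disjoint sets, tune only against the validation set, and touch the test set exactly once at the very end. A minimal sketch with stand-in data:

```python
import random

random.seed(0)
examples = list(range(1000))  # stand-ins for labeled images
random.shuffle(examples)

train = examples[:700]           # fit the model here
validation = examples[700:850]   # compare models and tune parameters here
test = examples[850:]            # final, one-shot performance estimate
```

Because all the trial-and-error leaks into the validation set instead, the test score stays an honest estimate of performance on genuinely new data.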
https://towardsdatascience.com/introduction-to-deep-learning-for-self-driving-cars-f70b5f04aa16
['Prateek Sawhney']
2020-11-10 07:23:10.549000+00:00
['Machine Learning', 'Self Driving Cars', 'Artificial Intelligence', 'Deep Learning', 'Data Science']
I did write today, and a lot, yet if I’m not publishing I feel like I’m braking my chain anyways…
I did write today, and a lot, yet if I'm not publishing I feel like I'm breaking my chain anyway, because I guess I defined it around publishing instead of only writing. In a sense I guess it's good, as it keeps me on my toes and looking for moments to squeeze in my writing no matter what, to avoid excuses but also to keep it real and simple, to keep it personal and not solely focused on the technological and at times deep subjects I like to explore. There's a reason why they say "simple is good". Well, it also feels good, especially when hitting publish.
https://medium.com/thoughts-on-the-go-journal/i-did-write-today-and-a-lot-yet-if-im-not-publishing-i-feel-like-i-m-braking-my-chain-anyways-46ab77ee6eb0
['Joseph Emmi']
2018-10-29 23:33:03.675000+00:00
['Commitment', 'Goals', 'Journal', 'Self Improvement', 'Writing']
Liz’s Bio and Featured Stories
Liz's Bio and Featured Stories

If you like true stories with a twist and a life lesson takeaway, read on!

Photo Courtesy of the Author

Dear Reader,

Welcome to my little corner of the world! I don't often lie on the lawn in a white dress laughing, but this day I did because of a stiff rum and coke before the photoshoot to calm my nerves. I'm here to inspire, empower and entertain you :)

My life has been a bumpy one, but I'm determined to use my difficult life lessons to not only become the best version of myself but to empower you as well. I don't want you to feel alone, as I did most of my life. I want you to get lost in my words, relate to my stories and know on a fundamental level that we're all the same, no matter where we live, what colour we are or what we've been through in our lives.

There is always hope, and I plan to inspire you towards it. You are stronger than you know, and I plan to empower you with that knowledge. Learning to laugh at our mistakes is priceless. I plan to entertain you with all the stupid crap I've been through, so you realize you're not alone. Along the way, my biggest hope is to instill lasting change in you.

Writing my truth is challenging, but healing in a way that's tough to describe. The more vulnerable I am, the lighter and more joyful I become. In this, I learned the importance of forgiving my abuser. Once the words are written, I'm graced with a new perspective. I can see where I went wrong, forgive myself for the mistakes I made along the way, and take one more step forward to a happier, healthier me.

Photo Courtesy of the Author

The Nitty Gritty Stuff

I'm Canadian, eh! Meaning you'll notice different spellings for certain words in my writing; an example would be colour vs color. I've been working since I was fifteen but spent most of my career in the corporate world as a cubicle slave. Blah! The economic crash in 2008 proved to be a pivotal time in my life.
A lot of material things were lost, but in hindsight, much more was gained. Getting laid off at age 48 started a surge of soul searching. It led to a deeper dive into my entrepreneurial spirit — a side of me that lay dormant due to a lack of self-esteem and self-confidence. I found the Network Marketing industry to be a great alternative to the 9 to 5 grind and spent seven years honing a new skill set and travelling. The side-effects of working with all those new like-minded people were again life-changing and positive. I knew I was on the right path. A new mentor pointed me towards God, which led to writing a memoir. It’s about my repressed ten-year marriage to an abuser, in a story format. The process was cathartic, and I emerged a better version of myself. The experience of self-publishing led me here to Medium, where I’ve found my true passion. Unfortunately, a second marriage to a cop went wrong after nearly 20 years because he couldn’t keep his zipper zipped. My father’s death opened my eyes further to how fragile our lives are. I sometimes wonder if the tough stuff I endured early in life was preparation for the second half of my journey. I’m sure my real purpose is to help you along with yours, and my pen is merely the tool to make it happen. Photo Courtesy of the Author The Wrap Up Now I’m able to work from home, build my freelance writing business and continue to heal. I’m driven by the need to do good things and pay what I’ve learned forward with Gratitude. I published my book — A Memoir: Silent Fright in Jan 2020. I published my first story on Medium in March of 2020, and boy, does it need editing! lol I’ve had stories declined, but rather than be disheartened, I use them as fuel to do better and improve my skills. I’ve been curated, but not until I started following a system and making myself these checklists. This story “His Girlfriends, Girlfriend Wrote Me a Letter” went viral in August. 
I received top-writer status in three categories: Life Lessons, Self-Improvement and This Happened to Me, and I plan to keep them! I’ve been accepted as a writer in nine publications and this week decided to start my own — The Best Version. I’d be thrilled if you joined our growing community! Sign up for my Empowered Plus+ Bi-weekly Newsletter and get instant access to my fun but informative FREE Inspirational Guide — 7 Days to a Better You! One day I’ll start taking submissions, then we can write together, but first, my goal is to hone my skills and get 100 stories circulating on Medium. I dislike writing bios :) But… This idea was presented to give my followers a better sense of who I am as a person and an author, and why I write what I write. I found my passion and my voice late in life, but it won’t stop me from taking this seriously and building my business one step at a time. I don’t write about my life to vent or complain, but for its healing power and to continue working on being the best version of Liz. I write to connect, inspire, empower and entertain. Follow along, and let’s see where our journeys take us! Paying it forward with Gratitude, Liz I’m Liz, the self-empowered, red wine & coffee lovin’, personal growth fanatic behind this article. I’ve stopped shrinking into places I’ve outgrown, and I’m a fan of straight talk and practical solutions. That’s why I’m here to Empower, Educate and Entertain.
https://medium.com/illumination/im-liz-and-i-write-to-inspire-ae10b2e6085b
['Liz Porter']
2020-12-16 16:50:59.356000+00:00
['This Happened To Me', 'Self Improvement', 'Bio', 'Life Lessons', 'Writing']
Michael, Marriage, and Me
I’ve read a single Michael Ondaatje book. And that’s a tribute to the dominance the Booker prize has had over my life as a reader, for without it I might never have bought The English Patient as an undergraduate. At the time I bought books recommended by a list of prize winners in a book intended for quiz lovers. I recall giving my copy of the novel, which, two days ago, was named the best book to win the Booker, to my friend Mary, the way you do when a book impresses you so. I must have also given her Michael Cunningham’s The Hours. (Only now do I see that the authors share first names.) When she returned, she remarked that they were both good but she thought that if she put them in water the Ondaatje would sink. It was so good an image I’ve never quite forgotten it. One of the other indications of the book’s quality came from a fictional character created by another novelist. In John Irving’s The Fourth Hand, the protagonist talks about the idea of a perfect book, and the novel he singles out is The English Patient. Never a huge fan of Irving, I still took him seriously enough to seek out the passages he quoted. A friend had one — “I have spent weeks in the desert, forgetting to look at the moon…as a married man may spend days never looking into the face of his wife. These are not sins of omission but signs of preoccupation” — that he quoted frequently. But Irving’s character was right to highlight another: “So the books for the Englishman, as he listened intently or not, had gaps of plot like sections of a road washed out by storms, missing incidents as if locusts had consumed a section of tapestry, as if plaster loosened by the bombing had fallen away from a mural at night.” II Some time ago Julian Barnes made a comment about writers and their work: “People are very interested in writers. Successful ones. More interested in the writers than the writing. 
In the writer’s lives.” I’m not sure if Barnes’s idea is entirely true but it is true enough for me to recount two anecdotes about Ondaatje. The first comes from a story in the New Yorker written around the time Martin Amis and Barnes had a feud upon the former leaving his agent for Andrew Wylie. (Amis’s former agent was Barnes’s wife; the ruined business ruined the writers’ relationship.) Jonathan Wilson, the New Yorker reporter, got two women writers, I think one was AS Byatt, to talk about the feud between the male British novelists. Somehow the conversation turned into a consideration of the beauty of male writers. If I recall correctly, both women seemed to think Ondaatje very good looking. You might think this does not matter but it does. As Adam Gopnik has written: “Of all the gifts that can grace a literary career, good looks are the most easily overlooked and not the least important: though we may read blind, we don’t befriend blind.” The other anecdote is from Petina Gappah, a fan of Ondaatje’s work. When Gappah met Ondaatje at some festival, she exclaimed, “My God!” “Just call me Michael,” he replied. A funny, good looking guy then. III Since reading The English Patient, I have seen the film adaptation which I like about as much, knowing Anthony Minghella could hardly do a thing about transferring Ondaatje’s deft prose. It might be unfortunate but I don’t think you could read the book without seeing Ralph Fiennes in the film’s dust-and-dune glory as the patient of the title, at least in the parts where he’s not disfigured; you might also see Kristin Scott Thomas as the adulterous Katharine and Juliette Binoche as Hana. (My own copy of the book has as cover Fiennes kissing Scott Thomas as the sun sets behind them, a poignant scene from the film.) 
To illustrate how these things work for a certain kind of person, I’ll admit that there has been at least one moment in my life when I have told a lover a thing about Ondaatje’s book as gleaned from Irving, whose protagonist, like me, is prone to literary talk. Irving’s hero tells his lover about the fact that Ondaatje called the depression below the neck the vascular sizood, but the Minghella film identifies the same spot by a different name, the suprasternal notch, which is in fact the correct name. As a younger man I might have relayed this bit of trivia to a love interest. Picture this: A character is in bed talking to his lover in Ondaatje; then in Irving, another man is talking to his lover about that initial Ondaatje man and his lover; and then there’s me talking to a lover about Irving’s lover talking to a lover about the difference between Ondaatje’s lover and Minghella’s lover. You’d have a point if you asked why all this talking in bed, but you’d also not be a reader. IV The quality and memorable nature of the film is one reason I think asking the public to vote for the Golden Booker was dubious. How many people were voting for the book? Which ones were voting for the well-made film, a film so widely seen it ended up as a plot point on the massively popular sitcom Seinfeld? Ever aware, Ondaatje thanked Minghella in his acceptance speech, saying he suspected the late director “has something to do with the result of this vote”. In any case, a book of wondrous prose has won and there will be no bickering. But I recall having some issues with the book. First, by the end it seemed as though Ondaatje was tired of his characters—he had been with some of them in an earlier book—and wanted to air their private concerns by connecting them to the larger world, breaking their world into the larger world where the US has dropped a bomb on Japan. The way Ondaatje does this felt too cut-and-paste from a news bulletin. 
It is so unsatisfactory it has lingered these many years later, so much so that the Guardian, after Ondaatje’s win of the Golden Booker, asked the author if writing the end that way wasn’t a “failure of nerve”. And back in 1993, Hilary Mantel called it “a crude polemic” in her scathing NYRB review. I was also unsure about the book’s depiction of the adulterous affair at the centre, a thing blown up to intense romantic proportions by the film. Growing up in a religious country like Nigeria means the manner in which some western artists appear to elevate adultery to some subversive art makes for some discomfort. The blame for this view of marriage and fidelity, perhaps, should be directed at Nathaniel Hawthorne’s Scarlet Letter and Flaubert’s Madame Bovary and Tolstoy’s Anna Karenina, all featuring sympathetic adulterers and more or less disappointing husbands. Or maybe one has to go back to ancient accounts of the Paris and Helen of Troy affair. (In a writerly conceit, Ondaatje has his title character read Tolstoy’s novel.) Some commentators might point out that the adulterers get punished in Ondaatje’s book, but even that comes to be treated as a romantic dying-for-love device since the action that brings their downfall, a murder-suicide, is caused by the cuckolded husband, whose only initial success is the suicide. His wife is injured; his rival is unscathed. Besides, who would not empathise with Fiennes’s character, seeing the pain in his eyes as he cradles the body of his beloved in the movie? Even before this there is no real sense of anguish at the betrayal from either the married miss or her paramour. Ondaatje excludes the anguish of adultery by creating a hermetic atmosphere around the lovers to the exclusion of the book’s other characters. And if time spent between lovers is in pursuit of pleasure, there can be little space for anguish. 
What real anguish is in the book comes about when Katharine tries to get away from her non-husband, not really out of guilt or loyalty but out of a fear that her husband would be mad. Instead it’s her lover who goes mad when she tries to leave him. There was at least some anguish in the case of Graham Greene’s Scobie from The Heart of the Matter. The difference is instructive: Like Greene, Scobie was a Christian, a Catholic, with all of the guilt a believer of that kind entertains. On the other hand, God, as worshipped by religious people, is not a very significant factor in the lives of Ondaatje’s characters. The English Patient himself is an atheist and declares at some point that God doesn’t exist. It is of course possible to argue that in all of the aforementioned novels, adultery is beside the point. Sin isn’t exactly a literary device, even for books by non-western authors. For example: Stay With Me, by Nigerian author Ayobami Adebayo, soon dispenses with the somewhat incestuous adultery at its centre, even as it parades characters we are nudged to think are, at the very least, nominally religious. If, as they once said, the marriage plot is dead, perhaps the religious plot suffers a worse fate: it has never really taken off. Fortunately, The English Patient’s tricky morality and the story that bears it are ferried by some of the best prose in English letters. The earnest believer need not agree with the carnal politics of the novel if she’s also a believer in the powers of good prose; there might also be a chance to understand those unlike her in the process, if only because portions of the section titled The Cave of Swimmers are especially good in presenting a lover’s paranoia when with a partner whom he believes can’t be trusted sexually. It is clearly meant to be ironic that this paranoia is mostly from the man who isn’t married to the woman he’s sleeping with. 
V As I said, I have read just the one Ondaatje novel, but if Martin Amis is correct in saying we love authors only in parts, then it appears reading The English Patient means reading Ondaatje’s best part. Nonetheless, since it is on my shelf, I might pick up Ondaatje’s slim book The Collected Works of Billy the Kid. But it is just as likely that I find my copy of The English Patient and reread those intoxicating sentences that so long ago captured the heart of a Nigerian undergrad.
https://catchoris.medium.com/michael-marriage-and-me-c570e58f23aa
['Oris Aigbokhaevbolo']
2018-07-14 17:47:15.028000+00:00
['Ayobami Adebayo', 'Books', 'Literary Criticism', 'The English Patient', 'Michael Ondaatje']
My Flatulent Empire Is Crumbling
THE SALSA DIARIES #3 My Flatulent Empire Is Crumbling The broken wind of a Monday morning Photo by Jon Tyson on Unsplash I first noticed the cracks on a Monday morning. My gimp was unable to get hard and I needed plugging. In a rage, I struck it several times. It grunted but the appendix refused to rise. What more could I do? I needed entertainment. It isn’t easy living in a grand chateau and not having anything to do. The remote was lost last week, shoved up the arse of some beast, and the pool was flooded with some weird aquatic lurky as a result of a heavy dose of fecal matter. What a shit Monday I thought. What’s the point of being filthy rich if I can’t get any relief from the boredom of life? I speed-dialed Jesus. The second son of god was always up for a laugh. “Yo, JC! Whassup bruh?” He likes to think he’s a little gangsta since the resurrection. He tells me he’s the only son of god and I should stop disrespecting him. Fucking Jesus was certainly better than humping day-old pizza but he ain’t no saint. Jesus was in a slump too. Turns out, on his day off from being worshipped, he had an altercation at a local 7–11. He was pissed. The cops mistook him for a crazed robber on account of his black skin and dodgy hoodie. JC’s beatific smile couldn’t save him and they shot several rounds into his already-dead and twice risen corpse. It’s best not to ask too many questions from the force. Meanwhile, Michael Bauble was crooning about Christmas as JC’s blood dripped a festive red on the sludged-up snow. “Fuck…here we go again,” I thought. JC loved drama. I could see him licking his lips as he changed gears in retelling his tall tale. There’s nothing he likes better than rising from the dead and scaring the good folk into believing in miracles. That’s how his mom conceived he would say with a wink. “Jesus. JESUS! Just fucking stop. I don’t need to hear about another fucking resurrection on a Monday morning. 
I thought you had some nose candy that you scored off Magdalene. Now crack that shit open and let’s get a fucking party started! I’ve got a marketing meeting in ten and your shit is getting old.” Nobody puts Jesus in the corner. Offended, he bitch-slaps my gimp before storming off in a huff. I’m now suffering from withdrawal. It’s still fucking Monday. My gimp isn’t giving out any pleasure and worst of all? It’s time for Secret Santa. Our yearly celebration where dirty Macca sweats his way through a Santa suit making every female staff member awkwardly sit on his lap. The bloody perv will already be half-drunk on cheap scotch. His hands wandering over limbs while readjusting his pants. I’ve got no game. I’m out of ideas. At least it’s Intern Torture week. Perhaps that’ll bring the gimp back to life. I sure love the fucking holiday season.
https://medium.com/title-and-picture-gag/my-flatulent-empire-is-crumbling-f25d56cac64
['Reuben Salsa']
2020-12-19 23:51:11.120000+00:00
['Humor', 'Diary', 'Satire', 'Salsa', 'Writing']
Facebook PyText is an Open Source Framework for Rapid NLP Experimentation
Facebook PyText is an Open Source Framework for Rapid NLP Experimentation The new PyTorch-based framework enables the rapid creation and testing of NLP models. I recently started a new newsletter focused on AI education. TheSequence is a no-BS (no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below: Natural language processing (NLP) has become the best-known discipline in the deep learning space in recent years. That popularity has brought an explosion of tools and frameworks such as Google Cloud, Azure LUIS, AWS Lex and Watson Assistant that enable the implementation of simple NLP applications without requiring any deep learning knowledge. These platforms, however, are only applicable to relatively basic NLP scenarios. For the rest of the use cases, building NLP applications at scale remains incredibly challenging, often surfacing strong friction between the possibilities of research/experimentation and the realities of model serving/deployment. As one of the biggest conversational environments in the market, Facebook has been facing the challenges of building NLP applications at scale for years. Recently, the Facebook engineering team open sourced the first version of PyText, a PyTorch-based framework for building faster and more efficient NLP solutions. The ultimate goal of PyText is to provide a simpler experience for the end-to-end implementation of NLP workflows. To achieve that, PyText needs to address some of the existing friction points in NLP workflows. Among those friction points, the most troublesome is the mismatch between the experimentation and model-serving stages of the lifecycle of an NLP application. 
Solving the Tradeoff Between NLP Experimentation and Production The implementation of modern NLP solutions typically includes very heavy experimentation phases in which data scientists rapidly test new ideas and models, many of them extracted from the research literature, in order to achieve certain levels of performance. During experimentation, data scientists tend to favor frameworks that provide easy-to-use, eager-execution interfaces that facilitate writing advanced and dynamic models quickly. Frameworks such as PyTorch or TensorFlow Eager are great examples of this category. When deployment time comes, the limitations of dynamic-graph models become a challenge and deep learning technologists gravitate towards frameworks that use static computation graphs and are optimized for operating at scale. TensorFlow, Caffe2 or MXNet are well-known members of this type of stack. The end result is that large data science teams often end up using different stacks for experimentation than for model serving and production deployment. PyTorch was one of the first deep learning frameworks that addressed the often-conflicting gap between rapid experimentation and model serving at scale. PyText builds on the PyTorch foundation to optimize some of those principles for the NLP space. Understanding PyText From the conceptual standpoint, PyText was designed to achieve four fundamental goals: 1. Make experimentation with new modeling ideas as easy and as fast as possible. 2. Make it easy to use pre-built models on new data with minimal extra work. 3. Define a clear workflow for both researchers and engineers to build, evaluate, and ship their models to production with minimal overhead. 4. Ensure high performance (low latency and high throughput) on deployed models at inference. The capabilities of PyText result in a modeling framework that helps researchers and engineers build end-to-end pipelines for training or inference. 
The current implementation of PyText covers the fundamental stages of the lifecycle of an NLP workflow, providing interfaces for rapid experimentation, raw data processing, reporting of metrics, and training and serving of trained models. A high-level view of the architecture of PyText reveals how those stages are encapsulated by native components of the framework. As illustrated in the previous figure, the architecture of PyText includes the following building blocks: · Task: Combines the various components required for a training or inference task into a pipeline. · Data Handler: Processes raw input data and prepares batches of tensors to feed to the model. · Model: Defines the neural network architecture. · Optimizer: Encapsulates model parameter optimization using the loss from the forward pass of the model. · Metric Reporter: Implements the relevant metric computation and reporting for the models. · Trainer: Uses the data handler, model, loss and optimizer to train a model and perform model selection by validating against a holdout set. · Predictor: Uses the data handler and model for inference given a test dataset. · Exporter: Exports a trained PyTorch model to a Caffe2 graph using ONNX. As you can see, PyText leverages the Open Neural Network Exchange Format (ONNX) to transition models from experimentation-friendly PyTorch to production-robust Caffe2 runtimes. PyText includes a large portfolio of NLP tasks such as text classification, word tagging, semantic parsing, and language modeling, which streamlines the implementation of NLP workflows. Similarly, PyText ventures into the area of language understanding by using contextual models such as a SeqNN model for intent labeling tasks and a Contextual Intent Slot model for joint training on multiple tasks. From an NLP workflow standpoint, PyText streamlines the process of transitioning an idea from experimentation to production. The typical workflow of a PyText application includes the following steps: 1. 
Implement the model in PyText, and make sure offline metrics on the test set look good. 2. Publish the model to the bundled PyTorch-based inference service, and do a real-time small-scale evaluation on a live traffic sample. 3. Export it automatically to a Caffe2 net. In some cases, e.g. when using complex control flow logic and custom data structures, this might not yet be supported via PyTorch 1.0. 4. If the procedure in 3 isn’t supported, use the PyTorch C++ API to rewrite the model (only the torch.nn.Module subclass) and wrap it in a Caffe2 operator. 5. Publish the model to the production-grade Caffe2 prediction service and start serving live traffic. Using PyText Getting started with PyText is relatively simple. The framework can be installed as a typical Python package. $ pip install pytext-nlp After that, we can train an NLP model using a task configuration. (pytext) $ cat demo/configs/docnn.json { "task": { "DocClassificationTask": { "data_handler": { "train_path": "tests/data/train_data_tiny.tsv", "eval_path": "tests/data/test_data_tiny.tsv", "test_path": "tests/data/test_data_tiny.tsv" } } } } $ pytext train < demo/configs/docnn.json A Task is the central artifact for defining the model components in a PyText application. Every task has an embedded config that defines the relationships between the different components, as shown in the following code. 
from word_tagging import ModelInputConfig, TargetConfig class WordTaggingTask(Task): class Config(Task.Config): features: ModelInputConfig = ModelInputConfig() targets: TargetConfig = TargetConfig() data_handler: WordTaggingDataHandler.Config = WordTaggingDataHandler.Config() model: WordTaggingModel.Config = WordTaggingModel.Config() trainer: Trainer.Config = Trainer.Config() optimizer: OptimizerParams = OptimizerParams() scheduler: Optional[SchedulerParams] = SchedulerParams() metric_reporter: WordTaggingMetricReporter.Config = WordTaggingMetricReporter.Config() exporter: Optional[TextModelExporter.Config] = TextModelExporter.Config() After a model has been trained, we can evaluate it and also export it to Caffe2. (pytext) $ pytext test < "$CONFIG" (pytext) $ pytext export --output-path exported_model.c2 < "$CONFIG" It is important to note that PyText provides a very extensible architecture, and each one of its key building blocks can be customized and extended. PyText represents an important milestone in NLP development as one of the first frameworks that addresses the often-conflicting tradeoff between experimentation and production. With the support of Facebook and the PyTorch community, PyText might have the opportunity to become one of the most important NLP stacks in the deep learning ecosystem.
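To make the division of responsibilities among those building blocks concrete, here is a toy, framework-free sketch of the component roles (data handler, model, trainer, and the Task that wires them together). All class names below are hypothetical illustrations and not the real PyText API; the "model" is a trivial keyword matcher standing in for a neural network.

```python
# Hypothetical sketch of PyText-style component roles; NOT the real PyText API.

class DataHandler:
    """Turns raw labeled text into (features, label) pairs."""
    def batches(self, raw):
        return [(text.lower().split(), label) for text, label in raw]

class KeywordModel:
    """Stand-in 'model': predicts 1 if any learned keyword appears."""
    def __init__(self):
        self.keywords = set()
    def predict(self, tokens):
        return 1 if self.keywords & set(tokens) else 0

class Trainer:
    """Fits the model by keeping tokens seen only in positive examples."""
    def train(self, model, batches):
        pos, neg = set(), set()
        for tokens, label in batches:
            (pos if label == 1 else neg).update(tokens)
        model.keywords = pos - neg
        return model

class Task:
    """Wires the components into one pipeline, as a PyText Task config does."""
    def __init__(self):
        self.data_handler, self.model, self.trainer = DataHandler(), KeywordModel(), Trainer()
    def train(self, raw):
        return self.trainer.train(self.model, self.data_handler.batches(raw))

task = Task()
model = task.train([("great product", 1), ("terrible product", 0)])
print(model.predict("a great buy".split()))  # -> 1
```

The point of the sketch is the wiring, not the model: swapping in a different data handler or model only touches the Task's configuration, which is the extensibility the article describes.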
https://medium.com/dataseries/facebook-pytext-is-an-open-source-framework-for-rapid-nlp-experimentation-eba67fa61858
['Jesus Rodriguez']
2020-09-28 11:58:30.511000+00:00
['Deep Learning', 'Thesequence', 'Machine Learning', 'Artificial Intelligence', 'Data Science']
Symbol of Pride
In the northeast corner of the Democratic Republic of Congo, at the heart of the Congolese rainforest, sits the Okapi Wildlife Reserve. The reserve is more than one-and-a-half times the size of Yellowstone National Park and it harbors the single largest remaining population of okapis anywhere. An okapi is a shy, solitary forest animal related to a giraffe. In reality, it looks more like the product of a vivid imagination — the hindquarters of a zebra stitched together with the head of a giraffe and the body of a horse. Considering the okapi’s size, it’s incredible that the species went scientifically undescribed until only about 100 years ago when 19th century explorers heard rumors of a ‘striped donkey.’ Today, it’s found only in these forests of northeastern DRC. It remains a symbol of national pride for the country, found on the bank notes of the local currency, on the badges of park ranger uniforms and proudly stamped over my six-month Congo entry visa. A wild okapi caught on a camera trap. ©Okapi Conservation Project Okapis share this reserve with many other species — forest elephants, chimpanzees, the highest diversity of monkeys found anywhere in Africa. This is also the home of the Efe and Mbuti, one of the oldest forest peoples on the continent, and the Mbuti still practice their traditional hunter gatherer culture in and around the reserve. All is not well in the Okapi Wildlife Reserve, though. Its wildlife now suffers from persistent poaching and trafficking for food and ivory. Its natural resources and vast forests face increasing pressure from the illegal exploitation of minerals. This activity thrives on the weak governance and rule of law that exists across many of DRC’s remote border areas. In this, the reserve is not alone. DRC as a whole has astonishing biodiversity and rich natural and cultural heritages. 
As a nation, it boasts the highest animal biodiversity of any country in Africa, with many of its charismatic species — okapis, bonobos, Grauer’s gorillas — found nowhere else on Earth. Unfortunately, many of these high biodiversity areas are also centers of conflict and insecurity today. As a result, much of the country’s network of protected areas and World Heritage sites is in danger. That narrative could be starting to shift, though. Just recently, the Wildlife Conservation Society (WCS) and the DRC Government’s Nature Conservation Agency (ICCN) signed an agreement to delegate management of Okapi Wildlife Reserve to WCS. With this agreement comes hope for a brighter future, where good stewardship of biodiversity and natural resources actively restores local governance, creates greater stability, drives local economic growth, and reduces conflict. At the official ceremony to inaugurate the Okapi Wildlife Reserve agreement, the Director of ICCN highlighted its importance in bringing new expertise and financing to the management of the reserve and improving the welfare and operations of its rangers. Restoring the reserve — a World Heritage Site in peril — to its former world-class status will preserve the area as a haven for wildlife and a source of revenue for its local communities. The Director General of ICCN Pasteur Cosma Wilungula presided over the ceremony for the new management of Okapi Wildlife Reserve. ©Emma Stokes/WCS A day prior to the agreement ceremony, Tom Muller, the newly installed director of the reserve, went for an early morning jog. Along the path, he was stopped in his tracks by the sight of an adult male okapi, which stood silently watching him from less than 30 meters away. After a few seconds, the okapi slipped into the forest. It was the first time Tom had ever seen the species in the wild and he remained, awed, for a few moments before moving on. I take this as a good sign. 
I am filled with hope that the reserve can one day serve as a model for all of DRC’s protected areas, as a place where stability is restored and wildlife flourishes, where good governance and local development provide opportunities for communities, and where local people can live off their ancestral lands free from exploitation by outside interests. I believe that together we can make the Okapi Wildlife Reserve a symbol of pride for the Democratic Republic of Congo once again. Emma Stokes is the WCS Regional Director for Central Africa & the Gulf of Guinea.
https://wildlifeconservationsociety.medium.com/symbol-of-pride-eb673f3a0eca
['Wildlife Conservation Society']
2019-11-07 16:45:03.485000+00:00
['Environment', 'Wildlife', 'Africa', 'Wildlife Conservation', 'Conservation']
How To Share State Between React Components
React is a really great platform for writing applications. With the introduction of React Hooks and JavaScript improvements, writing more sophisticated React applications can be easier. I recently found a nice pattern for sharing state between React components and thought I’d share :) Let’s start with a simple application with a child component: import Child from './Child'; export default function App(props) { return <Child />; } The Child component will just expose a button to increment a counter, with state kept in the Child component: import React, { useState } from 'react'; export default function Child1(props) { const [counter, setCounter] = useState(0); return <button onClick={() => setCounter(counter + 1)}> Child Counter: {counter} </button>; } This app is pretty simple, just a button that increments a counter: Click on the button and the counter increments: Let’s say the parent wants to access the counter. This can be accomplished using global state managed by the Child component. The reactn NPM module provides setGlobal and useGlobal methods to make data global. (React allows this via Context, but it's pretty complex to use, so let's use reactn ). Let’s change Child to store the counter as a global. import React, { useEffect } from 'react'; import { useGlobal, setGlobal } from 'reactn'; setGlobal({}); export default function Child1(props) { const [child, setChild] = useGlobal('child'); useEffect(() => (child ? true : setChild({ ...child, counter: 0 })), [child, setChild]); return <button onClick={() => setChild({ ...child, counter: child?.counter + 1 })}>Child Counter: {child?.counter}</button>; } This code has the same output as before, except the counter is now available as a global. Note that useEffect is needed to initialize the child global object. The notation I'm using is pretty terse to keep it compact; it's just initialization code, so let's minimize it. Note the use of child?.counter . The ?. operator is a new JavaScript operator that is very handy. 
The first time Child1 is rendered child will be undefined . The ?. short circuits evaluation so the counter member isn't accessed to prevent a run time error... caused by attempting to access undefined.counter . Now, let’s use this counter in another component. For simplicity we’ll just access it from the parent component. It is just as easy to access from components that are peers or children of the Child component. import { useGlobal } from 'reactn'; import Child from './Child'; export default function App(props) { const [child] = useGlobal('child'); return ( <div> <div>Parent Value: {child?.counter ? child.counter : 'NA'}</div> <Child /> </div> ); } child will be undefined until the button is clicked, so the initial screen looks like this: Click on the button to increment the counter in the child and update it in the parent:
https://medium.com/javascript-in-plain-english/sharing-state-between-react-components-8c138c505e36
['Joe Bologna']
2020-11-24 19:36:06.252000+00:00
['JavaScript', 'Reactjs', 'React', 'Web Development', 'Coding']
The World of Hadoop
The World of Hadoop Interconnecting Different Pieces of the Hadoop Ecosystem When learning Hadoop, one of the biggest challenges I had was putting the different components of the Hadoop ecosystem together to form a bigger picture. It is a huge system comprising many components, which can contrast with as well as complement each other. Understanding how these different components are interconnected is must-have knowledge for anyone who wants to use Hadoop-based technologies in a production-level big data application. The Hadoop ecosystem occupies a huge place in the big data technology stack, and it is a must-have skill for data engineers. So, let’s dig a little deeper into the world of Hadoop and try to untangle the pieces this world is made of. Starting with a formal definition of Hadoop can help us get an idea of the overall intention of the Hadoop ecosystem: Hadoop is an open-source software platform for distributed storage and distributed processing of very large data sets on computer clusters. As the definition indicates, the heart of Hadoop is its ability to handle data storage and processing in a distributed manner. To achieve this, the overall architecture of Hadoop and its distributed file system were inspired by the Google File System (GFS) and Google MapReduce. The distributed nature of the Hadoop architecture makes it suitable for very large data, especially by removing any single point of failure when processing these large amounts of data. Now let’s walk through the different building blocks of the Hadoop ecosystem and understand how these pieces are interconnected. The Hadoop Ecosystem (Image Source) I tried to organize the entire Hadoop ecosystem into three categories.
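Since MapReduce is central to Hadoop’s processing model, here is a minimal word-count sketch of the map, shuffle, and reduce phases in plain Python. This is only an illustration of the programming model, not actual Hadoop code; the function names and sample documents are invented for the example.

```python
from itertools import groupby
from operator import itemgetter

# A toy illustration of the MapReduce model that Hadoop implements.

def map_phase(documents):
    # Emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Group pairs by key, as the framework does between map and reduce.
    for key, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield key, [count for _, count in group]

def reduce_phase(grouped):
    # Sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped}

docs = ["big data is big", "hadoop processes big data"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts["big"])  # 3
```

In real Hadoop the map and reduce functions run on different machines and the shuffle happens over the network, which is what removes the single point of failure the article mentions.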
https://towardsdatascience.com/the-world-of-hadoop-d1e5f5eb98d
['Prasadi Abeywardana']
2020-05-24 19:37:00.579000+00:00
['Apache', 'Hadoop', 'Big Data', 'Mapreduce', 'Hdfs']
The best Design Events in Europe (2018)
Awwwards Conference - An Event for UX / UI Designers and Web Developers. Two exciting days with some of the most influential speakers of the industry, who inspire, teach, and guide us as we…
https://medium.com/dsgnrs/the-best-design-events-in-europe-2018-6aa63119a62e
['.Dsgnrs. Team']
2018-01-18 10:13:01.701000+00:00
['UI', 'Design', 'Design Thinking', 'Product Design', 'UX']
3 Books About Asexuality to Close Out the Summer
August is well underway, and much of the world is still quarantined. Our usual summer break has given us little time to rest or recharge — vacations were cancelled, beach days were put off, and stress about jobs and health is at an all-time high. But books have the ability to take us away, if only for a moment, and to allow us to step outside ourselves. They have the power to change our perspective and influence our beliefs even while we’re cooped up at home. These books in my latest roundup explore a wide range of topics, from sisterhood to polyamory and body positivity. But each has two things in common: a focus on asexuality and greater understanding. Sea Foam & Silence by Lynn E. O’Connacht Sea Foam & Silence, by Lynn E. O’Connacht. Kraken Collective Books, 2018. Lynn E. O’Connacht’s Sea Foam & Silence is a retelling of The Little Mermaid. Certain aspects of the tale harken back to the original, but this version adds depth and a modern spin. Written as a poem-prose hybrid, the book uses line breaks, nonstandard capitalization, and several emojis. It also grapples with questions of love, home, and identity, all of which are informed by the protagonist’s asexuality and non-specified romantic orientation. O’Connacht’s story begins with Maris, a curious, red-headed mermaid who wants nothing more than to swim with her sisters and hunt tall-crabs. Quickly, we learn that tall-crabs are actually humans, and their species is viewed as violent and non-sentient. However, as the narrative unfolds, Maris begins to question her understanding of tall-crabs and her part in hunting them. She also reflects upon her loneliness, which refuses to go away, no matter where she is or what she’s doing. If I can catch a tall-crab on my own, Just me, me alone, My sisters will have to let me join them. I won’t be too small then. I won’t be lonely then (p. 
23) Ultimately, Maris’s quest to understand tall-crabs leads her to seek out “The Witch,” a magical being who grants Maris the ability to become a tall-crab herself. In exchange, Maris gives up her voice, her sisters, and most of her mermaid identity. She’s also warned that she’ll turn into sea foam if she can’t find love within a year. Originally unfazed, Maris soon realizes that she doesn’t understand what it means to love — and if she can’t understand the nature of love, how can she possibly obtain it? Maris’s asexuality becomes a focal point for her confusion. She explains that she has never had the desire to mate, and that among her sisters, this lack of desire wasn’t uncommon. In fact, asexuality was built into their family structure, and those who didn’t desire sex were primarily tasked with caring for and teaching the group’s youngest members (121–122). But while living among the tall-crabs, Maris is taught that love and sex are one and the same. Furthermore, she develops a close relationship with Prince Bernhard, who expresses no desire for marriage or sex himself. This relationship with the prince, coupled with their shared lack of desire, causes Maris to feel even less certain about love, and ever finding it. Love is when two people marry. I remember that from the stories. Tall-crabs are always telling stories about love. But love cannot be so easy to find. I think if love was only people together Then I would have had it already. I have always lived with my sisters. I have laughed with my sisters, Shared meals with them, been taught by them. But they say this is not love. This is family and, true, it is a kind of love, But it is not love. I do not understand >< (135). Sea Foam & Silence is part of a series, and I’m excited to see where Maris’s journey takes her next.
Between the book’s exploration of queerness and polyamory, cultural differences, able-bodiedness and sign language, and conceptual handling of love and relationships, there are so many reasons to pick up this book. Sea Foam and Silence has a loud, important message about being true to yourself; while that truth may not always be easy to understand, embracing it is perhaps the best way to confront our loneliness and avoid becoming sea foam ourselves. Summer Bird Blue by Akemi Dawn Bowman Summer Bird Blue, by Akemi Dawn Bowman. Simon Pulse, 2020. Akemi Dawn Bowman’s Summer Bird Blue is a tale of loss, healing, and above all else, friends and family. The story begins with Rumi, a teenaged musician who loves nothing more than her piano, her mom, and her sister, Lea. But when Lea is killed in a car crash, Rumi’s mother is overcome by grief, and Rumi is sent to live with her aunt in Hawaii. Rumi tries to make amends with her sister by finishing their song, “Summer Bird Blue.” But she is repeatedly overcome by anger toward her mother’s abandonment and guilt toward her sister’s death. Simultaneously, she struggles with emerging aspects of her identity — as a sister without a sister, a daughter without a mother, and a girl without the right label to express what she wants and feels. While trying to work on her song and come to terms with her grief, Rumi meets Mr. Watanabe, a man who loves music as much as she does, and Kai, a boy her age with a friendly disposition and difficult home life. Slowly, she bonds with them and begins to work past her sadness. But each time she finds herself smiling, she feels even guiltier, retreats internally, and misses Lea. As the book continues, Rumi realizes that she’s not just mourning — she’s jealous. And lonely. “It’s not fair for people to grieve alone. The people who are left behind should stick together. They should want to stick together. 
I try to think about lyrics for ‘Summer Bird Blue,’ but it feels unnatural, the same way losing Lea and being in Hawaii feel unnatural” (91). Additionally, she begins to develop confusing feelings toward Kai, and she can’t tell if those feelings are friendship, romance, or something else. As she says: “Lea was always the romantic one, not me. Mom says I might be a late bloomer, but I’m not so sure. Late implies there’s something that’s still going to happen — something I don’t fully understand yet… I don’t know what I’m looking for in love. I don’t even think I’m looking for love at all. I don’t see people and feel that rush of excitement Lea always described when she had a crush — the kind of excitement that leads to touching and kissing and whatever else. I just see people that might make good friends, and I’ve always been okay with that. Which is why Kai’s ridiculous jawline is bugging the crap out of me. I don’t know what it means” (210–211) The book is broken into three parts — Summer, Bird, and Blue — with smaller sections labeled “A Memory.” Each of these sections explores Rumi’s relationship with her mom, sister, and at one point, father. Through these memories, we also realize that Rumi is asexual and possibly aromantic, though she’s not sure how she feels about these labels. With her sister’s help, she was trying to better understand herself. But now that Lea isn’t here, she’s even less certain about who she is. As she puts it, “Knowing myself should be the easiest thing I do in life, but somehow it feels like the hardest” (p. 107). Summer Bird Blue delivers a laudable exploration of love in all its forms, grief in all its shades, and identity in all its complexities. It also examines Hawaiian culture, the healing power of music, and the people who come into our lives, if only for a moment, who change us for the better. 
Summer Bird Blue shows that the journey toward healing and self-acceptance isn’t easy, but it is possible — and with help from our loved ones, both here and gone, that journey becomes a little more doable. Water Runs Red by Jenna Clare Water Runs Red, by Jenna Clare. 2019. Friendship can be just as meaningful as — or more meaningful than — sex or romance. And when a close friendship ends, it’s just as devastating as any other breakup. This sentiment informs the heart of Jenna Clare’s Water Runs Red, which blends poetry, photos, and illustrations to examine the author’s changing relationships with both herself and former friends. While friendship makes up the bulk of this collection, Clare also explores topics related to oppression, body positivity, and asexuality — creating a poetry book that is part memoir and part rallying cry for systemic and long-term change. Clare’s book primarily examines her relationship with two former “fairy tale friends.” This designation imbues them with power (over the speaker), hopes (for the friends’ happily ever after), and a sense of unrealness or instability (concerning who they are and what their friendships morph into). Of course, that part of the fairy tale comes later. At first, the friendships are “fairy tale” in a more magical sense. They seem unstoppable, unquantifiable, and fully real in a way that winds up being illusory. i cemented myself to you and your make-believe games and your childhood dreams, and your smile became my second home i counted you as my first - friendships mean just as much as romance (28) Along with trying to unpack her former friendships, the speaker turns to poetry to work through her sexuality. As she says, “just because/i treat you like/ a partner,/ it doesn’t mean/ i want to kiss you./- i fall in love all the time,/ just not like that” (125).
But at the same time, she acknowledges the difficulties with ever fully accepting yourself, especially in terms of body image: the war against myself began without fanfare, without declaration. one day all was peace and the next day i was fighting for nothing. - me vs. myself vs. i (135) The book balances these discussions within a more general framework of oppression. Several times, the author reclaims the word “witches,” using it to symbolize the power and mistreatment of women, non-binary people, and other marginalized groups. She also expresses the need for all “witches,” including herself, to come together and support one another: i became so caught up in the happenings of my own head that i didn’t notice my fellow colonists burning all the witches even though I could avoid the stake, I did not stop to think of all the others who begged for my help and watched me push them aside - white feminism (170) The desire to be included and loved is central to Clare’s collection. However, the book suggests that it is ultimately self-love that will help us weather the loneliness (“i have been learning/ how to love/ myself// even as i try/ to devour/ myself/- single (224)”), and true friends certainly make a difference in that battle. Toward the middle of the book, Clare introduces us to a third fairy-tale friend, who proves herself loyal, loving, and true. Thus, the speaker’s newfound relationship with herself and her third friend propel her activism forward: and so the witches have become soldiers, fighting the beasts within themselves while also battling the hatred that pollutes their world (232) As summer comes to a close, it’s important that we continue practicing self-love, activism, and friendship. Any one of these books about asexuality would be a wonderful choice for your next summer read. Of course, they’d also be a great choice for fall, winter, or spring, because their messages ring true throughout the year. 
It is not enough to simply live our truths; we must also strive to love them.
https://medium.com/anomalyblog/3-books-about-asexuality-to-close-out-the-summer-b87d97dfca46
['Marisa Manuel']
2020-08-12 18:00:10.333000+00:00
['LGBTQ', 'Books', 'Ace', 'Review', 'Asexuality']
Big Data — your missing link to God?
If God knows everything that I am going to do, then why is it my fault? This is the most popular question among the intellectually religious. More so among those who want to run away from the burden of their choices. We prefer wasting time over building rhetoric rather than spending time on disposing of a narrative. Why reject a choice when you can get validation in countering it? One thing big data has proven is that we as people are predictable and malleable along with vile and manipulative. We can be beaten into any form or shape and we wouldn’t know. We genuinely wouldn’t. It’s even more worrisome that we don’t have the capacity to find out unless and until it is spelled out for us. The quality of the argument has progressed without responsibility. Every few hundred years we find a way to not talk about what is right, but find a way to protect what is wrong. We can turn everything into a matter of personal sovereignty even if it is spewing evil in the world. The world is at the mercy of the poison chosen by the select few. If anything, this should be a wake-up call for all of us. The negative implications of using data to cause destruction cannot be ignored, but this entire fiasco has multiple perspectives. The above is just one. Let the Governments, lawyers, and activists do their jobs. You do yours. It’s time you reflect. What has made you so predictable? What choices are you defending? What all have you done in the name of privacy? We are not just concerned about our data; we are also ashamed of it. One major reason why we want to hide it is that it is covered in filth, among other things. Photo by Larm Rmah on Unsplash Once upon a time we had closed doors, now we have the ‘incognito’ mode. If the thought of your browser history being exposed is worrying you more than the kind of thought pattern you have fallen into, there is a bigger problem at hand. Not only are you pliable, but you are also breakable. 
You are like a clay pot waiting to be dried so you can be smashed to the ground. The questions on data privacy and protection are new and have only begun to form, but this is not the first time the human race is facing the brunt of its choices. It has been happening for thousands of years. Each era has its way to play God. Our obsession is with superlatives. Great pyramids, Great empires, Great wars, Great corporations. It’s not long before the ‘big’ in the ‘big data’ turns to great. I’d say ‘only God knows what will happen then’, but I guess we already have an algorithm to predict that. Don’t be angry at the technology, it was only created to pick up on a pattern. That pattern happened to be ours, and it happened to be pathetic. It was turned into destructive later. All of it has and will have the same end because we don’t learn from our mistakes, we just find different ways to repeat them. How is it that some are using this technology and some are getting used by it? This is what God meant when He said He knows what you are going to do. You are a product of your choices and it can be guessed based on your ‘data’ what your likely next step is going to be. Did data take that next step for you? No. Did God? No. Did you? Yes. You have tuned your mind in a certain way that it has started to follow a pattern. It responds to a few things you claim to be your choices. You have allowed it to fall prey to perceptions. This should be a realization of the insane capacity of human intelligence. Imagine what we can build if we put our minds to good use. But you can’t build anything on the outside if you are falling apart from the inside. Everything you do is a reflection of your mind, and those who exploited the dynamite to those who are now exploiting data have committed the same crime. They have killed their conscience. It all boils down to choices. 
Whether you put everything online or take it offline, you mine data or burn data, you live in a cave or live in castles, your life will not cease to have a predictive end until your choices are not governed by values and your mind is not free of perceptions. But, you don’t know who you are, what you are doing, why you are doing it, and worst of all — you think you have no choice in the matter. Maybe we don’t know how to be creative with goodness like we do with evil. The difference between righteous and self-righteous is one — you. If anything, data should revive your faith. Let’s see them predict that.
https://medium.com/alt-op/big-data-your-missing-link-to-god-927a0395b0aa
[]
2019-09-18 05:51:53.546000+00:00
['Self', 'Big Data', 'God', 'Faith', 'Religion']
Predicting Apartment Rental Prices in Germany
Through this article, I have attempted to predict rental prices using the apartment rental dataset containing rental prices in Germany. The dataset consists of data scraped from one of Germany’s biggest real-estate platforms. The main objective here is to study and understand the data and use that knowledge to construct a basic predictive model for the base rental price (popularly known in Germany as ‘Kaltmiete’). The original dataset consists of 268850 apartments (rows) * 49 features (columns), but a combination of missing and imbalanced values means that only 14 features were useful to me in the current context. The columns contain the following information:

regio1: The federal state in which the apartment is located.
heatingType: The type of heating system used in the apartment.
balcony: Indicates whether the apartment has a balcony.
yearConstructed: The year in which the apartment was constructed.
hasKitchen: Indicates whether the apartment has a kitchen.
cellar: Indicates whether the apartment has a cellar.
baseRent: The rent excluding electricity and heating.
livingSpace: The living space in square meters.
condition: The condition of the flat.
lift: Indicates whether the apartment has a lift.
typeOfFlat: The type of apartment.
noRooms: Total number of rooms in the apartment.
garden: Indicates whether the apartment has a garden.
regio2: The city where the apartment is located.

Dataframe at a glance (1/2) Dataframe at a glance (2/2) Exploratory Data Analysis: Since this dataset was created by web scraping, it was far from clean, so one of the challenges was getting rid of extreme and null values. For example, there were several rows with a living space of more than 60 sq. m. and a rent in the range of €10–€30, which are clear outliers. There were also several apartments listed with more than 100 rooms. A post-cleanup study yielded the following: 1.)
The city in Germany with the highest average rent prices is Munich, which is not surprising as the city is well known for its notoriously high living costs. The districts in and around Munich (denoted by München_Kreis) also figure in the Top 10. Interestingly, Starnberg, which follows Frankfurt at number three, is popularly known as the wealthiest town in Germany. I was quite surprised not to find Düsseldorf and Bonn on the list. Average Rents by Cities 2.) Hamburg is the federal state with the highest average rental prices, followed by Berlin and Bayern (Bavaria). The north-eastern states of Sachsen-Anhalt and Thüringen have the lowest prices. Average Rents by State 3.) As expected, the size of the living space and the base rental price are more or less positively correlated. The small (in size) apartments with high prices that look like outliers are apartments located in the city center (central Berlin, Frankfurt), which explains the cost. Total Rent vs Living Space 4.) The most common base rental price in Germany hovers around the €300–€400 range. Rent Distribution in Germany 5.) Central heating is, by a fair margin, the most common type of heating system employed in German homes. 6.) Most of the apartments on offer have 3 rooms and come under the type ‘apartment’. Count of Rooms per apartment Type of Rentals Model Development: Before a prototype model can be created, it is important to transform the dataset. The following pre-processing steps were performed on the dataset before it was fed into the model. The ten unique values in the ‘condition’ column (‘well_kept’, ‘refurbished’, ‘first_time_use’, ‘fully_renovated’, ‘mint_condition’, ‘first_time_use_after_refurbishment’, ‘modernized’, ‘negotiable’, ‘need_of_renovation’, ‘ripe_for_demolition’) were divided into 3 classes: ‘new’, ‘old’ and ‘middle’ apartments. This was done to get a clearer demarcation.
The data in the columns ‘balcony’, ‘kitchen’, ‘cellar’, ‘lift’ and ‘garden’ were encoded to 1 or 0 depending on whether the column value was True or False. The continuous-valued variables ‘baseRent’, ‘livingSpace’ and ‘noRooms’ were normalized using sklearn’s MinMaxScaler(). While this is not a necessary step when dealing with tree-based algorithms, it eases interpretability and makes performance comparison with other models easier. The categorical variables were encoded using one-hot encoding, made easy by pandas’ get_dummies. The column ‘baseRent’ was selected as the target variable and the rest as predictor variables. After applying the above steps, the dataset was split into a training set (70% of the data) and a testing set (30% of the data) using sklearn’s train_test_split. I chose Random Forest to develop the prototype as it is a powerful model that works well with both continuous and categorical variables. The model was fit using RandomForestRegressor() on the training set. Using the test set to check the performance of the model, the following results were obtained. Model Results The model explains around 83.4% of the variance in the data, i.e. its R² score is 0.83 (best is 1.0), which is quite good considering that the model was run using the default parameters and not tuned. The MSE and MAE are also on the lower side. Conclusion: Can this model be improved? Yes, of course! The model is very far from being the best model. My first approach would be to perform hyperparameter tuning and cross-validation using grid-search techniques to find the ideal parameters. Secondly, the original dataset contained a lot of columns with missing data which I chose to exclude from the model, which meant quite some information was lost. Smart imputation techniques using domain knowledge could help tackle this problem. The original dataset also contained columns describing apartment facilities and descriptions.
By employing NLP techniques, more information can be gained from these columns. Heavier feature engineering would also be helpful. Needless to say, I will be using these steps as cues to better the performance of the model. I would love to hear your feedback on this. If you have any questions or suggestions you can reach out to me. Thank you for reading.
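For readers who want to try the pipeline themselves, here is a minimal sketch of the preprocessing and modelling steps described above, using scikit-learn. The column names follow the article, but the tiny synthetic sample and all values below are invented purely for illustration; the real dataset has hundreds of thousands of rows and many more columns.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Tiny synthetic stand-in for the cleaned rental dataset.
df = pd.DataFrame({
    "regio1": ["Bayern", "Berlin", "Hamburg", "Bayern", "Berlin", "Hamburg"] * 20,
    "balcony": [True, False, True, True, False, False] * 20,
    "livingSpace": [55.0, 80.0, 42.0, 95.0, 60.0, 70.0] * 20,
    "noRooms": [2, 3, 1, 4, 2, 3] * 20,
    "baseRent": [700.0, 950.0, 600.0, 1200.0, 800.0, 900.0] * 20,
})

# Boolean columns -> 0/1; categorical columns -> one-hot dummies.
df["balcony"] = df["balcony"].astype(int)
df = pd.get_dummies(df, columns=["regio1"])

# Normalize continuous columns (optional for trees, aids comparability).
num_cols = ["livingSpace", "noRooms", "baseRent"]
df[num_cols] = MinMaxScaler().fit_transform(df[num_cols])

# 70/30 train/test split, then fit a default Random Forest.
X = df.drop(columns="baseRent")
y = df["baseRent"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print(round(model.score(X_test, y_test), 3))  # R^2 on the held-out set
```

On the synthetic sample the score is near-perfect because the rows repeat; on the real data this pipeline is where the article's 0.83 R² comes from.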
https://towardsdatascience.com/predicting-apartment-rental-prices-in-germany-d5635197ab00
['Vineeth Antony']
2020-05-06 19:03:07.814000+00:00
['Exploratory Data Analysis', 'Germany', 'Machine Learning', 'Housing Prices', 'Data Science']
Understanding Factors in R
The factor() command is used to create and modify factors in R. Factors have a special representation compared to ordinary vectors. To create a factor in R, use it as in the following example:

factor(c("..."))

So, how do factors work?
⏩ Factors are a special type of vector used to represent categorical data.
⏩ They serve to store categorical or ordered data.
⏩ Factors store each value as a reference to a set of labelled levels.
⏩ The levels follow a defined order, which is what makes ordered comparisons possible.

Now let’s make this concrete with some examples.

👍 A factor prints a little differently from a character vector: it prints the values without quotes, and it carries a separate attribute called the levels. For example, with 2 different levels, “a” and “b”, the factor lists them after the values. By default, the order of the levels in a factor is alphabetical. What if I don’t want to follow alphabetical order? Passing the levels argument (or adjusting levels() afterwards) provides a useful shortcut for controlling that order. Then I can do this:

> x <- factor(c("a", "b", "c", "d"), levels = c("b", "a", "c", "d"))
> x
[1] a b c d
Levels: b a c d

👍 With table() you can tabulate the contents of the factor, which gives the count for each level. For example, we have a factor like the following:

> x <- factor(c("adenin", "guanin", "adenin", "timin"))
> x
[1] adenin guanin adenin timin
Levels: adenin guanin timin

Now when we call table(x):

> table(x)
x
adenin guanin  timin
     2      1      1

👍 With unclass(), I can show the underlying integer codes. The factor sorts the levels alphabetically and stores each value as the index of its level. If we do this for the same factor:

> unclass(x)
[1] 1 2 1 3
attr(,"levels")
[1] "adenin" "guanin" "timin"

👍 Be careful with as.numeric(): called directly on a factor, it returns the underlying level codes, not the original values. To recover the original numeric values, convert to character first with as.numeric(as.character(y)).

> y <- factor(c(2.5, 1.5, 3.5))
> as.numeric(y)
[1] 2 1 3
> as.numeric(as.character(y))
[1] 2.5 1.5 3.5

👍 summary() is a great way to spot missing data. Let’s look again at x:

> summary(x)
adenin guanin  timin
     2      1      1

👍 Factors come up in even very basic situations, so understanding them pays off in any project with R. Note that stringsAsFactors = TRUE was the default behaviour in R before version 4.0 (since R 4.0 the default is FALSE), so older code often relied on character columns being converted to factors automatically when opening a file. For example, the “r_test” file:
https://medium.com/think-make/understanding-factors-on-r-afc39a00db1b
['Kurt F.']
2020-09-22 19:46:51.991000+00:00
['Development', 'Programming', 'Statistics', 'Data Science', 'R']
When Love Decides To Leave
Photo By: Stephanie Love Remember that time you were in love? It was so real until that love changed. Until love said, “This is no longer enough.” You were so in love, but somewhere along the way, things started to move too fast. Or maybe things didn’t move at all. Things became a little too complicated, and you outgrew one another. It’s just that you wanted the moon and stars and a sea escape. They wanted none of that. Or maybe you wanted a family, and they wanted ‘freedom’, whatever that is. Or perhaps you went back to school, and they left to go backpacking through Asia and didn’t want to be a part of that journey with you. Maybe love said, “This isn’t it anymore. I don’t feel the same as I once did.” And so it left. Maybe love followed you through your darkest moments, and when the light finally came, it vanished. Perhaps love came to challenge you, to bring you out of your comfort zone, only to say, “This is when I must go now.” And you watched it go. Maybe for a moment, or many, you began to feel that love was never real and that it never actually cared. Perhaps you began to question it and what it meant when it said, “Forever.” Regardless of how love unfolded, trust me when I say it was real, even if it was just for a moment. Or maybe it lasted many years; perhaps it’s still there growing with you. Perhaps it’s not. The thing is that sometimes we outgrow a certain love, but sometimes love outgrows us. Sometimes love comes to help us heal, and once we’re healed, love leaves. Sometimes love comes to teach us a lesson, and once the lesson is learned, love has served its purpose, and so it goes. Sometimes love comes to save us, to destroy us, or just to be. But when love leaves, it always teaches us. Love always helps us grow. It always makes room for a new love you can now hold space for. It’s easy to look at past relationships negatively and blame love for how things ended.
The truth is that somewhere along the way, you and another body, mind, and soul were so in love, and that love was real. Even when it dwindled. For a moment in time, you were the lucky ones. Sometimes it stays, sometimes it goes, but the lessons are always there. As long as you hold love within yourself, know that love is infinite.
https://medium.com/social-jogi/when-love-decides-to-leave-33b1f6c42613
['Stephanie Love']
2020-12-27 03:47:16.387000+00:00
['Relationships', 'Life', 'Love', 'Development', 'Life Lessons']
The Uncertain Fate of Pandemic Relief Funds
The Uncertain Fate of Pandemic Relief Funds The fight between fiscal and monetary policies continues, with the economy caught in the crossfire. The Battle Over Pandemic Relief Last month, Treasury Secretary Mnuchin requested that $455 billion of coronavirus emergency lending funds be returned to Congress. These key programs were intended to help Americans weather the pandemic, providing credit to employers, small businesses, and state and local governments. Though the majority of funds remain largely unused, the emergency loans helped to shore up market confidence during the economic freefall in March. In his justification, Mnuchin argued that the funds had served their purpose, citing sufficient lending capacity to meet the borrowing needs of consumers and businesses. But in a rare dissent, the Federal Reserve issued a statement arguing that it “would prefer that the full suite of emergency facilities established during the coronavirus pandemic continue to serve their important role as a backstop for our still-strained and vulnerable economy.” The rare split between the Fed and Treasury sparked another debate in Washington, with Democrats decrying it as another political move by Republicans to block the incoming Biden administration and its Treasury appointee, Janet Yellen, from having the toolkit necessary to prevent further economic downturns. Sen. Pat Toomey (R-Pa.), on the other hand, countered that an extension of these funds would not only go against the original intentions of the CARES Act — which was for these facilities to be temporary and to cease operations by the end of 2020 — but would also create a form of moral hazard. If extended, monetary policies would substitute for fiscal policies, putting the onus on the Federal Reserve rather than Congress to bear the brunt of economic recovery efforts.
That is a fair argument, but given the resurgence of the virus and the return to lockdown in places like New York and California, taking the foot off the gas on pandemic relief is precisely the opposite of what should be done. Despite Secretary Mnuchin’s assessment that the recovery is in full swing, one doesn’t have to look very hard to find deeply troubling signs of tough times ahead.
https://medium.com/discourse/the-uncertain-fate-of-pandemic-relief-funds-60e44eadf042
['Vy Nguyen']
2020-12-14 00:25:27.789000+00:00
['Politics', 'Federal Reserve', 'Coronavirus', 'Economics', 'Government']
Deep dive into multi-label classification..! (With detailed Case Study)
With the continuous increase in available data, there is a pressing need to organize it, and modern classification problems often involve predicting multiple labels simultaneously associated with a single instance. Known as multi-label classification, this task is omnipresent in many real-world problems. In this project, using a Kaggle problem as an example, we explore different aspects of multi-label classification. DISCLAIMER FROM THE DATA SOURCE: the dataset contains text that may be considered profane, vulgar, or offensive.
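As a concrete sketch of the problem setup (a hedged toy example, not the project's actual pipeline; the texts and label names below are invented), a one-vs-rest scheme in scikit-learn trains one binary classifier per label, so a single instance can receive several labels at once:

```python
# Toy multi-label classification: one binary classifier per label column.
# Texts and labels are invented for illustration, echoing the toxic-comment
# style of task described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

texts = [
    "you are a wonderful person",
    "I will find you and hurt you",
    "what a stupid, ugly comment",
    "thanks for the helpful answer",
]
# One row per text, one column per label ("toxic", "obscene", "threat");
# a single text can have several 1s at once -- that is what makes it multi-label.
labels = [
    [0, 0, 0],  # clean
    [1, 0, 1],  # toxic + threat
    [1, 1, 0],  # toxic + obscene
    [0, 0, 0],  # clean
]

# OneVsRest fits one independent LogisticRegression per label column.
model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression()),
)
model.fit(texts, labels)

pred = model.predict(["I will hurt you, stupid"])
print(pred.shape)  # one row, three label columns
```

The prediction comes back as an indicator matrix: each column is an independent yes/no decision, so zero, one, or several labels can fire for the same input.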
https://towardsdatascience.com/journey-to-the-center-of-multi-label-classification-384c40229bff
['Kartik Nooney']
2019-02-12 08:49:14.226000+00:00
['Machine Learning', 'Python', 'Nltk', 'Scikit Learn', 'NLP']
Looking for UX work? BIM is waiting for you
BIM is just beginning, and it’s here to stay. You can be a part of its development as a UX designer. You see, BIM is all about data, enormous amounts of it. All this data is populated as the building is designed in 3D and is available to the stakeholders in the project: owner, architect, contractor, and user. However, data by itself is useless: we require the right interfaces to make this data understandable and usable. I have witnessed how the robust data inside a model can often get ignored due to bad user experience design in the software that works with it. It would be such a different story if only we understood better who the users are and what they require in each scenario. The opportunity is there for you as a UX designer: the current applications that operate BIM technology are far behind what we would call a user-friendly experience. They are well known for their complicated interfaces, confusingly feature-rich environments, and steep learning curves. Look for big names in the industry like Autodesk and Graphisoft to get an idea of what I mean. I recently wrote an article about a BIM platform I use every day in my work as an architect; it may give you some insights. Have a look: There are many more up-and-coming developers trying to harness the future of BIM. Most of them haven’t invested in UX design, and believe me, they urgently need it! BIM is a field ripe for UX designers, from VR applications and iPhone apps to full-blown desktop apps for both Mac and Windows, and even city management software. Have a look at this project from the UNSENSE Studio in Amsterdam on what the future of BIM could look like: Hyperform software concept film developed by Squint As an architect and UX designer, I am very passionate about this topic and intend to be part of the solution. I am busy compiling aspects of the future of the BIM industry that could use UX design.
If you are interested in this field, please drop me a line; we could work together. For now, do some research on the BIM industry and let your creativity flow! What the future brings in the BIM industry could be shaped by people like you.
https://medium.com/the-innovation/looking-for-ux-work-bim-is-waiting-for-you-432c93478a58
['Bruno Martorelli']
2020-07-26 17:01:40.565000+00:00
['Architecture', 'Bim', 'Design', 'UX Design', 'UX']
How TikTok users are projecting their realities, with joy
When you first open the app, TikTok is madness. What greets you is an unfiltered stream of videos that the system decides people like you, the user, are interested in — based on your location and other videos with which you’ve interacted. There is no warning. You’re pulled into a bizarre stream — rakhi videos with a Dr. Dre soundtrack; two guys playing cops and robbers to a song from the Dilwale Dulhaniya Le Jayenge soundtrack; a girl playing with mehndi. TikTok has an estimated 200 million users in India, of whom 120 million are active every month. This means a line-up of unexpected acts. Creators put an impressive degree of inventive thinking and physical effort into creating their content — jumping over fences; doing wheelies on bikes; bringing in grandmothers and other family members to join the performance; offering a self-deprecating comment on their own bodies and rituals. The videos on the platform offer a poignant mix of candour and artifice. Often, content creators don’t try to hide context signals, like their neighbourhood or the décor in their home. This contrasts with many other video-driven social media platforms, on which details of everyday life are usually concealed. Listen in on TikTok conversations in research forums, and you’ll hear commentators deliberate on this content barrage. There is a moral panic surrounding the platform. The concerns — pornography, hate speech, catfishing, fake news, hackers — aren’t new. Every platform has been accused of them, but with TikTok, there is a layer of disgust and resentment, which perhaps reveals more about the commentator than the content creator. Whether it’s a comment like “Isn’t it interesting that you can’t tell she’s uneducated or from a poor background?” or “What does it mean sociologically when boys cry on TikTok?”, there is a level of condescension to the analysis.
The remarks on the simple-minded gaucheness of the creators hint at a demarcation of in-group and out-group identities, of identifying an ‘intrusion’ of Otherness into their social feeds. But if you manage to break out of the careful bubble that TikTok curates for you, you realise a whole different world of videos exists. A world of merriment and carefree, light humour that offers you glimpses into farmers’ kids dancing next to crops, girls watching the sun set over a mosque, boys going to salons and setting their hair artfully on fire — an infinite cabinet of curiosities. TikTok captures a beautiful moment in time. The dopamine hits of views and likes are now freely available to newer population segments. While there is darkness in TikTok (as many news reports show us), there is also tremendous joy. We had a machine look at these TikTok videos, focusing on elements such as images, objects, and colours within the frame. We then looked for these patterns across all the other cultural models that we have for India. Scanning through videos — such as one that shows a man flipping naan set to a soundtrack of lively pop music — the machine classified these videos as “celebrations”, grouped together with occasions like weddings, festivals, and graduations. These weddings, festivals and graduations were not taking place on TikTok. To put it simply, the machine “thought” that the TikTok video was closer to an actual celebration than it was to anything else that exists in culture. Remove the human bias that evaluates these elements as “other”, and most of the videos essentially communicate one thing — festivity. Characters in the most everyday of backdrops (such as the roadside) are classified as being in celebrations of sorts because of their physical movements, colourful clothes, and high-spirited background music connoting jubilation. This conversion of mundane, everyday life into a performance that is full of joy is something worth acknowledging.
While commentators speak about abuse and cruelty as a consequence of TikTok, we need to also consider this: about 120 million individuals trying to craft, in some shape or form, a shift in the repetition and dryness of everyday life. They are creating and consuming content that celebrates, laughs and connects. It’s a magical time to be a new user of the Internet, especially an Internet that infuses the everyday with something unexpected. The next time you walk by people filming TikTok videos, pause and watch what they’re doing. They are trying to shift reality. Why shouldn’t we all dance with our grandmothers? If you would like to find out more about what we do, please contact [email protected] This article was first published in the Hindustan Times on Aug 21, 2019: Read more stories on Quilt.AI.
https://quilt--ai.medium.com/how-tiktok-users-are-projecting-their-realities-with-joy-8feac1a692eb
[]
2020-09-18 08:33:52.419000+00:00
['Machine Learning', 'Artificial Intelligence', 'Tiktok', 'Social Media', 'Data Science']
Deep Learning Tool That Selects High-Quality Embryos for IVF
The deep learning tool performed better than trained clinicians. Photo by Luma Pimentel on Unsplash “There is so much at stake for our patients with each IVF cycle. Embryologists make dozens of critical decisions that impact the success of a patient cycle. With assistance from our AI system, embryologists will be able to select the embryo that will result in a successful pregnancy better than ever before,” said co-lead author Charles Bormann, PhD, MGH IVF Laboratory director. How Accurate Is the Deep Learning Tool? According to a study published in eLife, the deep learning system was able to choose the highest-quality embryos for in-vitro fertilization (IVF) with 90 percent accuracy. Compared with trained embryologists, the deep learning model performed with an accuracy of approximately 75 percent, versus an average accuracy of 67 percent for the embryologists. What Are the Benefits? For women starting IVF, the average success rate is 33% for the first cycle, increasing to 54–77% by the eighth cycle. The treatment is also expensive, costing patients over $10,000 for each IVF cycle, with many patients requiring multiple cycles in order to achieve a successful pregnancy. The non-invasive selection of the highest-quality available embryos from a patient remains one of the most important factors in achieving successful IVF outcomes. Currently, the tools available to embryologists are limited and expensive, leaving most embryologists to rely on their observational skills and expertise. The Study A team of researchers from Brigham and Women’s Hospital and Massachusetts General Hospital (MGH) wanted to develop an assistive tool that can evaluate images captured using microscopes available at fertility centres. The team trained the deep learning system using images of embryos captured at 113 hours post-insemination. Among 742 embryos, the AI system was 90 percent accurate in choosing the highest-quality embryos.
To further assess the system’s ability, the team compared the system’s performance to that of trained embryologists. The results showed that the system was able to differentiate and identify the embryos with the highest potential for success significantly better than 15 experienced embryologists from five different fertility centres across the US. What Does It Mean for the Future of Deep Learning in Healthcare? The deep learning system is intended only as an assistive tool, helping embryologists make more informed judgments during embryo selection. “We believe that these systems will benefit clinical embryologists and patients. A major challenge in the field is deciding on the embryos that need to be transferred during IVF. Our system has tremendous potential to improve clinical decision making and access to care,” said Hadi Shafiee, PhD, of the Division of Engineering in Medicine at the Brigham. While the study shows the potential for deep learning to outperform humans, further research is required before such tools can be used in everyday clinical care. Artificial Intelligence (AI) is getting increasingly sophisticated at doing what humans do, and it does so more efficiently, more quickly, and at a lower cost. The potential for AI in healthcare is vast. Just like in our everyday lives, AI is increasingly becoming a part of the healthcare system. Using machine learning algorithms and deep learning, AI technology can take in information, process it, and give a well-defined output to the healthcare provider or the patient. These algorithms can recognize patterns in behaviour and create their own logic. “Our approach has shown the potential of AI systems to be used in aiding embryologists to select the embryo with the highest implantation potential, especially amongst high-quality embryos,” said Manoj Kumar Kanakasabapathy, one of the co-lead authors. The findings offer hope for individuals seeking to undergo IVF, the team concluded.
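The study's system is a deep neural network trained on embryo images. As a minimal stand-in for the train-and-evaluate loop it describes (synthetic feature vectors in place of real images, and a simple linear classifier rather than the authors' model), the workflow looks like this:

```python
# Sketch of a train/evaluate loop for binary embryo-quality classification.
# The data here is synthetic: 742 samples (matching the study's embryo count)
# of random "image features" with a linear signal injected, NOT real images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_features = 742, 64
X = rng.normal(size=(n_samples, n_features))
w = rng.normal(size=n_features)
# Label 1 = "high quality", determined by a noisy linear rule.
y = (X @ w + rng.normal(scale=2.0, size=n_samples) > 0).astype(int)

# Hold out a test split, fit, and report held-out accuracy -- the same
# shape of evaluation the study reports, on toy data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"held-out accuracy: {acc:.2f}")
```

The point of the sketch is the evaluation discipline: accuracy is measured on embryos (here, rows) the model never saw during training, which is what makes the study's 90 percent figure meaningful.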
https://medium.com/datadriveninvestor/deep-learning-tool-that-selects-high-quality-embryos-for-ivf-17e10019f747
['Eshan Samaranayake']
2020-09-24 15:01:03.270000+00:00
['Machine Learning', 'Artificial Intelligence', 'Healthcare', 'Deep Learning', 'Data Science']
Emergence is The New Amazing Field of Science
Depiction of Entropy Increasing — Image by Pixabay Does emergence violate the second law of thermodynamics? Many people describe emergent behaviors as spontaneous movements toward a state of order. The second law of thermodynamics claims that systems will always move toward a state of disorder. But when we observe single-celled organisms like slime mold join together and function as an organized unit, this appears to contradict the law completely. This is not the first time that biological systems have challenged the second law of thermodynamics. As life adapts to changing conditions in order to survive, it poses a challenge to this law. But in this case, defenders of the second law argue that while it does state that the order of a system must decrease, it doesn’t say when, where, and how. They then point out that emergent communities like slime mold become disordered again after achieving their objective as a collective group. Therefore, it is not defying the second law — though not everyone agrees. Required ingredients of emergence To observe emergence in nature, certain requirements must be in place. To begin with, there must be a sufficient quantity of individual units. A handful of individuals usually don’t show emergent patterns, but thousands of them do. The whole system has to be observed to witness a pattern or structure of emergence. Interestingly, the simple nature of the individual units is essential in the emergent world. In fact, the more ignorant they are, the better. Consider how computers are built using the basic binary system of ones and zeros. If any of a system's components become too smart, they will make decisions based on their own needs and not the system's needs. Emergence is also dependent on random encounters and actions. Much like the sampling methods that statisticians use in their studies, if the samples are not random, then biases will be introduced into the results.
Emergent behavior relies heavily on these haphazard interactions; otherwise, biases might creep in and the group might fail to recognize needs or dangers. Finally, local information gathered by individual units eventually becomes wisdom for the entire community. It begins with single units sharing data with one another, rather than with a community leader, and the network gradually begins to act for the common good of the entire group.
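These ingredients (many simple units, random encounters, purely local information sharing, no leader) can be sketched in a toy simulation. The majority-rule update below is an illustrative assumption, not a model of slime mold: each unit repeatedly samples a few random peers and adopts whatever opinion the majority of that sample holds, and a global consensus emerges with no one coordinating it.

```python
# Toy emergence simulation: leaderless consensus from local, random interactions.
import random

random.seed(42)

# Many simple units, each holding one bit of "local information".
N = 1000
units = [1 if random.random() < 0.6 else 0 for _ in range(N)]  # slight initial bias

# Each step: pick a random unit, let it poll 5 random peers, and adopt
# the majority opinion it sees. No unit ever sees the whole system.
for step in range(20000):
    i = random.randrange(N)
    sample = random.sample(range(N), 5)
    votes = sum(units[j] for j in sample)
    units[i] = 1 if votes >= 3 else 0

consensus = sum(units) / N
print(f"fraction holding opinion 1: {consensus:.2f}")
```

Starting from a 60/40 split, the population drifts toward near-unanimity: order at the group level emerges even though each unit follows one trivial rule and interacts only with a random handful of peers.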
https://medium.com/illumination/emergence-is-the-new-amazing-field-of-science-a8f7380c167f
['Charles Stephen']
2020-11-04 16:10:11.130000+00:00
['Philosophy', 'Biology', 'Technology', 'Science', 'Nature']
The end of seasonality in sports sponsorship
I’ve always been intrigued by seasonal businesses. Summer beach restaurants, pumpkin patches, Christmas stores…they have a 3-month window to sell 90% of their inventory. If they hit big, they hit big. If they have a slump, they have to wait until the next season. In many ways, sports sponsorship is the same way. Each year we activate with partners for 3–6 months (depending on the league) and then re-negotiate toward the next season. If they hit, they hit fairly big and we look for more value the following year with a renewal. If the campaign does not do well…or it gets off to a slow start…we only have a season to bring the promised value. Well, with the pandemic…we have had a tough season. It has exposed the fact that we rely on a season of live games to make money. To battle back, the industry pushed toward ways to connect with our fans despite the loss of fans in our stadiums and at our games. I believe we have seen the last of sports sponsorship being a seasonal advertising platform. Here’s why. This pandemic exposed us…but set us up to create a sponsorship world where you can drive revenue all year. This pandemic has been terrible on all ends. We’ve had to shift everything that we do in order to survive. With the huge reliance on live games…we in sports were exposed. In most cases, 60% of our revenue came from ticketing & concessions. This is too much reliance on live games happening in our stadiums in order to survive and pay the bills. The same can be said about our assets in sponsorship. The majority of them relied on in-stadium signage and on-location activation. Both of these lose their value when we don’t have fans in the stands. As an industry, we’ve been amazing at adapting. We’ve built new digital assets, campaigns, and even uses for our stadium spaces to recoup the value lost. This has, ultimately, set us up to make more money when this ends. It has forced us into creating assets that have no reliance on live games.
This shift has given us the keys to billions of dollars…if we can see it and take advantage of it. We can take sponsorship from seasonal to year-round in the value we drive. Teams are influencers…the Kardashians don’t only make money when their show is airing. They make it all year. One thing I think we forget is that we as teams are influencers in the purest form of the word. We built, over decades, a captivated community with a common love…our teams. The influence many times is generational, which is the next level of the influencer process. With this, we’ve built something that can sustain itself without live games. In the past, we relied on our stadium experience and broadcasts to carry our sponsorship assets. It used to be the only time we could reach fans at large scale. Then the internet came to be. All of a sudden we have access to multiple channels (email, social, video streaming) through which we can reach our fans at any given moment. I liken this phenomenon to the empire that the Kardashians built through their reality show. What started on E! as the only way to consume the life of this family has grown into an empire built on their social media channels. If the Kardashians’ show were canceled…it would have little consequence for their ability to make money. The reason? They have diversified their attention channels. If the above happened, the family would simply post their content onto their own channels and integrate brands. That’s right…they’ve essentially built their own network with a built-in advertising platform (promoting brands, etc.). I say this with hesitation, but in order to take sports sponsorship to the next level…we need to be like the Kardashians. That is to say, we must build out our distribution channels and following so that if games are taken from us we have the diversified assets to still drive revenue. It is no different from diversifying your stock portfolio.
You can’t put all your eggs into one basket, nor should you, since in an amazing market diversification makes you twice the money. As we look at the end of seasonality in sponsorship, it hinges on our ability to build our own following so we can monetize all year long. We must now be 24/7/365, because digital ads are. I’ve written about this before, but sponsorship was at a disadvantage by only activating during the season. The main reason? Digital ads are 24/7/365. That is to say that a Facebook ad can make a partner trackable revenue all year long. In some cases, they make money while we sleep. Our assets, really, were most effective during the season…then dulled out when it ended. We threw in a few off-season golf tournaments to bring value. If we plan on continuing to compete with digital ads, we must build the assets that make our brands money while they sleep. And here is what I mean by that. If immediately after the season ends we launch the following digital campaigns: -Top 10 videos for each star player -Season recaps with individual players -Rookie to sophomore reports We’ve built assets that have long-tail consumption. We have content from which the brand will see real value while they sleep, as fans will literally be watching while the CMOs sleep. By building up a library of content on digital platforms we become the Kardashians. We can reach our fans at any moment with our content…many times while we, as a team, sleep. In order to really turn this corner and come out more powerful at the end…we have to build our assets outside of live sports to monetize. This is the way to do it. This is how we dig out…but also how we go from a $24Bn to a $100Bn industry. The North American sports sponsorship market is $24Bn. Our live games have built quite the market. And we have content for the off-season to keep that attention all year…but have we really doubled down on it? Let me put it this way.
If we could double our partnership revenue with more inventory…would you hire a sponsorship & digital team with the sole purpose of monetizing off-season assets? I think to turn the corner we need to totally re-think how we structure our departments. Imagine if we were able to create a digital sponsorship department for our teams. A department whose sole purpose is to monetize our digital assets. We could absolutely double or triple our revenue, and beyond. In sports, we are on the verge of a transformation. A massive metamorphosis that will define our industry. If we can push our monetization to not rely on live games. If we can recycle our content, give it new life, and understand the power and following we have online…we can take our industry to the next stage in its life cycle. But ultimately, it falls on us. As creative sponsorship industry people, we must understand that our organizations need to make this shift, and we must take the steps today to build it. We have to make the scary, massive shifts to take advantage of what we have in front of us. Sponsorship is no longer seasonal. It can’t be if we are to grow as an industry.
https://medium.com/sqwadblog/the-end-of-seasonality-in-sports-sponsorship-f2b13f60c4f6
['Nick Lawson']
2020-07-08 23:40:54.550000+00:00
['Sponsorship', 'Sports Business', 'Sportsbiz', 'Marketing', 'Sponsorship Activation']
Key points from Livestream session with Deputy CTO
On November 17, we held our first Livestream with Crypterium’s IT team to let our followers learn some tech details about the project at first hand. If you missed the Livestream, feel free to replay it. If you prefer reading the news, here’s the short summary we’ve prepared for you. How is the Crypterium App architecture built? What programming languages do we use? Application architecture is a rather complex topic. We have a client-server architecture with a mobile front end and a backend. We’re using a REST API, and we use WebSockets on the frontend to communicate with the backend. As for programming languages, we are coding in Kotlin and Java for Android and Swift for iOS. On the backend, we have two teams: one works on .NET Core (C#), and the JVM is our main platform for Java and Kotlin. Other interesting technologies we have adopted are Hazelcast, which we use as a distributed cache, and RabbitMQ, our message queue. We also use MongoDB and some SQL databases where applicable. How do we build our system on microservices? Microservices means services are fine-grained: they are separated from each other and completely independent. Each of them implements one simple business domain, so you can develop parts of the product with small teams. It’s also easier to test them, and they communicate with each other using lightweight protocols, like HTTP. It’s very easy to deploy and maintain this overall structure, but it’s quite hard to enter. Which service providers are we using? Our main provider is Amazon with AWS services: our app is built using their container services. Among them is EC2, which we use for scalable services; Elastic Load Balancer, to balance the traffic between clusters and inside them; and Fargate, a container-running service, which, by the way, I highly recommend for those of you who need speed when scaling, as it’s really faster than EC2.
We also use ECS (Elastic Container Service), which orchestrates everything running between Fargate and EC2 instances, and S3 object storage as buckets for static content. Finally, we use the Relational Database Service for hosting SQL databases. To obey the local laws in some countries, we store some data in local data centers, and technologies like MongoDB sharding help us achieve this. Of course, we also have a Plan B, because we don’t want to rely fully on one provider only. We are ready to switch to any other hosting provider or to some local data centers if we need to. What I’m saying is we’re not highly dependent on our providers. How is app support working? How do we ensure 24/7 support? The main idea is that we have a first-level support team working 24/7 who respond to users in chats and so on, and a second-level support team working a regular schedule with two days off per week. Together, both teams collect user feedback, provide analytics, and bring the data to the development team. How do we plan our sprints? We are an agile company, and we use Scrum, an agile framework. It helps developers collaborate constantly with product owners, other stakeholders, customers and so on. They always receive feedback, and they react to that feedback. Each sprint lasts for two weeks, and there are several stages. The first stage is sprint planning, when we discuss and plan the goals and tasks for the sprint. Then the development starts, with daily meetings for updates. After that, there’s a sprint review, where we show the demo to all the stakeholders so that they can give us feedback. Finally, we have the sprint retrospective: a discussion of what has been done and what problems we have been facing, if any. What crypto are we going to add to the app? There are no technical problems: from the tech side we have everything we need, and we can add as many tokens as we want.
The point is that most of the coins and tokens don’t meet the KYC and AML requirements, and our partners can’t work with tokens without KYC/AML. So our compliance team is working on it, and we are going to solve this problem. We will definitely have more tokens and coins over time. About Crypterium Crypterium is building a mobile app that will turn cryptocurrencies into money that you can spend with the same ease as cash. Shop around the world and pay with your coins and tokens at any NFC terminal, or via scanning the QR codes. Make purchases in online stores, pay your bills, or just send money across borders in seconds, reliably and for a fraction of a penny. Join our Telegram news channel or other social media to stay updated! Website ๏ Telegram ๏ Facebook ๏ Twitter ๏ BitcoinTalk ๏ Reddit ๏ YouTube ๏ LinkedIn
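The lightweight HTTP communication between fine-grained services described in the Q&A above can be sketched in miniature. Python stands in here for the team's Kotlin/Java services, and the `/health` endpoint is an invented example, not part of Crypterium's actual API:

```python
# Minimal "microservice": one independent HTTP endpoint speaking JSON,
# plus a second party (another service, or a load balancer health check)
# calling it over the network.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
        else:
            body = json.dumps({"error": "not found"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port; the service runs in its own thread.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A peer service calls it over plain HTTP -- the "lightweight protocol".
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    payload = json.loads(resp.read())
print(payload)  # {'status': 'ok'}

server.shutdown()
```

Because the contract between services is just HTTP and JSON, each service can be written, tested, deployed, and scaled independently, which is exactly the property the microservices answer emphasizes.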
https://medium.com/crypterium/key-points-from-tech-livestream-8304dc95f6fd
[]
2018-11-20 17:35:51.151000+00:00
['Tech', 'Technology', 'Mobile App Development', 'Fintech', 'Blockchain']
Create Your Own Story
Controlling Your Story How do you control your own story? Here are three steps to become the main character of your life story again: Know what you are passionate about. Start saying “Yes” to things that scare you. Be willing to take risks. Does that sound too simple? It might seem easy on paper, but it’s harder than you’d think. Let’s start with the first one: knowing what you’re passionate about. Do you know what excites you? Do you know what you live for? You probably did at one point. But as life moved on, you might’ve forgotten. At one point or another we all do, actually. It’s so easy to become focused on your job and forget what truly brings you happiness. It’s so easy to obsess over movies and mistakenly believe that screens are all that life has to offer. Grab a pen and a piece of paper. Write down what excites you. It might be one thing — it might be ten things. But go on and do it, then put the paper where you can see it each day. Are you pursuing those passions? The second step might be the scariest. It’s time to start saying “Yes” to things that scare you. Did your friend invite you to hike part of the Appalachian Trail, and did you say no because you were too scared? Did your brother ask you to go scuba diving, and did you say no because you just didn’t want to? It’s important to note that we shouldn’t say yes to everything. Some things are actually dangerous and potentially harmful, and we might have a good reason to decline invitations from certain people. But, in moments of discomfort, are you saying no because you’d rather remain complacent than try new things? Maybe it’s time to try something new. Maybe it’s time to begin saying more of “yes” and less of “no.” The third step follows right after the second one: be willing to take more risks. At the end of your lifetime, when you’re having that birthday party I mentioned earlier, will you regret the risks you took in life? No, probably not. 
If you regret anything, it will be the times that you avoided risks just because you preferred being comfortable in your way of life. Take a few risks. Try a few new things. Remember what excites you about life. Become the main character you are meant to be. Create the story that you want to be told about you.
https://medium.com/live-your-life-on-purpose/create-your-own-story-15d3d87bf968
['Aaron Schnoor']
2020-12-18 00:02:53.549000+00:00
['Humanity', 'Life', 'Development', 'Advice', 'Self Improvement']
Building a Kubernetes platform at Pinterest
Lida Li, June Liu, Rodrigo Menezes, Suli Xu, Harry Zhang, Roberto Rodriguez Alcala | Pinterest Software Engineers, Cloud Management Platform Why Kubernetes? Over the years, 300 million Pinners have saved more than 200 billion Pins on Pinterest across more than 4 billion boards. To serve this vast user base and content pool, we’ve developed thousands of services, ranging from microservices of a handful of CPUs to huge monolithic services that occupy a whole VM fleet. There are also various kinds of batch jobs from all kinds of different frameworks, which can be CPU, memory or I/O intensive. To support these diverse workloads, the infrastructure team at Pinterest is facing multiple challenges: Engineers don’t have a unified experience when launching their workloads. Stateless services, stateful services and batch jobs are deployed and managed by totally different tech stacks. This has created a steep learning curve for our engineers, as well as huge maintenance and customer support burdens for the infrastructure team. Engineers managing their own VM fleets creates a huge maintenance load for the infra team. Simple operations such as an OS or AMI upgrade can take weeks to months. Production workloads are also disturbed during those processes, which are supposed to be transparent to them. It’s hard to build infrastructure governance tools on top of separated management systems. It’s even more difficult for us to determine who owns which machines and whether they can be safely recycled. Container orchestration systems provide a way to unify workload management. They also pave the way to faster developer velocity and easier infra governance, since all running resources are managed by a centralized system.
Figure 1: Infrastructure priorities (Service Reliability, Developer Productivity and Infra Efficiency) The Cloud Management Platform team at Pinterest started their journey on Kubernetes back in 2017. We dockerized most of our production workloads, including the core API and Web fleets, by the first half of 2017. Extensive evaluation of different container orchestration systems was then done by building prod clusters and operating real workloads on them. By the end of 2017, we decided to go down the path of Kubernetes because of its flexibility and extensive community support. So far, we’ve built our own cluster bootstrap tools based on Kops and integrated existing infrastructure components into our Kubernetes cluster, such as network, security, metrics, logging, identity management and traffic. We introduced Pinterest-specific custom resources to model our unique workloads while hiding the runtime complexity from developers. We’re now focusing on cluster stability, scalability, and customer onboarding. Kubernetes, the Pinterest way Running Kubernetes to support workloads at Pinterest scale, while also making it a platform loved by our engineers, has many challenges. As a large organization, we have invested heavily in infrastructure tools, such as security tools that handle certificates and key distribution, traffic components that enable service registration and discovery, and visibility components that ship logs and metrics. These are components built on lessons learned the hard way, so we want to integrate them into Kubernetes instead of reinventing the wheel. This also makes migration much easier, as the required support is already there for our internal applications. Figure 2: Runtime support for applications. To run the application in the middle, there are many other support components that need to be co-located with it.
On the other hand, the Kubernetes native workload models, such as deployments, jobs and daemonsets, are not enough for modeling our own workloads. Usability issues are huge blockers on the way to adopting Kubernetes. For example, we’ve heard service developers complaining about missing or misconfigured ingress messing up their endpoints. We’ve also seen batch job users using template tools to generate hundreds of copies of the same job specification and ending up with a debugging nightmare. Runtime support for the workloads is also evolving, so it would be extremely hard to support different versions on the same Kubernetes cluster. Just imagine the complexity of customer support if we needed to face many versions of the runtime, together with the difficulties of upgrading or bug-patching for them. Pinterest custom resources and controllers In order to pave an easier way for our engineers to adopt Kubernetes and make infra development faster and smoother, we designed our own Custom Resource Definitions (CRDs). The CRDs provide the following functionalities: Bundle various native Kubernetes resources together so they work as a single workload. For example, the PinterestService resource puts together a deployment, a service, an ingress and a configmap, so service developers will not need to worry about setting up DNS for their service. Inject necessary runtime support for the applications. The user only needs to focus on the container spec for their own business logic, while the CRD controller injects necessary sidecars, init containers, environment variables and volumes into their pod spec. This provides an out-of-the-box experience to the application engineers. CRD controllers also handle life cycle management for the native resources, as well as visibility and debuggability. This includes but is not limited to reconciling the desired spec and the actual spec, CRD status updates and event recording.
Without CRDs, app engineers must manage a much larger set of resources, and this process has proved to be error prone. Here’s an example of PinterestService and the native resource translated by our controller: Figure 3: CRD to native resources. The left is the Pinterest CR written by user, and the right is the native resource definition generated by the controller. As shown, to support a user’s container, we need to insert an init container and several sidecars for security, visibility and network traffic. Additionally, we introduced configuration map templates and PVC template support on batch jobs, as well as many environment variables to track identity, resource utilization, and garbage collection. It’s hard to imagine engineers would be willing to hand-write these configuration files without CRD support, let alone maintain and debug the configurations. Application Deploy Workflow Figure 4: Pinterest CRD Overview Figure 4 shows how to deploy a Pinterest custom resource to the Kubernetes cluster: Developers interact with our Kubernetes cluster via CLI and UI. The CLI/UI tools retrieve workflow configuration YAML files and other build properties (such as version ID) from Artifactory and send them to the Job Submission Service. This ensures only reviewed and landed workloads will be submitted to the Kubernetes cluster. The Job Submission service is the gateway to various computing platforms, including Kubernetes. User authentication, quota enforcement and partial Pinterest CRD configuration validation happens here. Once the CRD passes the Job Submission service validation, it’s sent to the Kubernetes API. Our CRD controller watches events on all custom resources. It transforms the CR into Kubernetes native resources, adds necessary sidecars into user defined pods, sets appropriate environment variables and does other necessary housekeeping work to ensure the user’s application containers have enough infrastructure support. 
The CRD controller then writes the resulting native resources back to the Kubernetes API so they can be picked up by the scheduler and start to run. Note: This is the pre-release deploy workflow used by early adopters of the new Kubernetes-based Compute Platform. We are in the process of revamping this experience to be fully integrated with our new CI/CD platform to avoid exposing a lot of Kubernetes-specific details. We look forward to sharing the motivation, progress and subsequent impact in an upcoming blog post — “Building a CI/CD platform for Pinterest.” Custom Resource Types Based on Pinterest’s specific needs, we designed the following CRDs to suit different workflows: PinterestService is the long-running stateless service. Many core systems are based on a set of such services. PinterestJobSet models the batch jobs that run to completion. A very common pattern within Pinterest is that multiple jobs run the same containers in parallel, each grabbing a fraction of a workload without depending on each other. PinterestCronJob is widely adopted by teams with lightweight periodic workloads. It is a wrapper around the native cron job, with Pinterest-specific support such as security, traffic, logs and metrics. PinterestDaemon is limited to infrastructure-related daemons.
The family of PinterestDaemon is still growing as we are adding more support on our clusters. PinterestTrainingJob wraps around Tensorflow and Pytorch jobs, providing the same level of runtime support as all other CRDs. Since Pinterest is a heavy user of Tensorflow and other machine learning frameworks, it makes sense to build a dedicated CRD around them. We also have PinterestStatefulSet under construction, which will soon be adopted for storage and other stateful systems. Runtime Support When an application pod starts on Kubernetes, it automatically gets a certificate to identify itself. This cert is used to access the secrets store or talk to other services via mTLS. Meanwhile, the config management init containers and daemon will ensure all necessary dependencies are downloaded before the application container starts. When the application container is ready, the traffic sidecar and daemon will register the pod IP to our Zookeeper in order to make it discoverable by clients. Networking has been set up for the pod by the network daemon before the pod even starts. The above are examples of typical runtime support for service workloads. Other workload types may need slightly different support, but they all come in the form of pod-level sidecars, node-level daemonsets or VM-level daemons. We make sure all of them are deployed by the infrastructure team so they are consistent between all applications, which greatly reduces the maintenance and customer support burden for us. Testing and QA We built an end-to-end test pipeline on top of the native Kubernetes test infra. These tests are deployed to all clusters. This pipeline has caught many regressions before they reach the production cluster. Besides the testing infra, there are also monitoring and alerting systems that continuously watch the system components’ health status, resource utilization and other critical metrics, notifying us when human intervention is needed.
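The CR-to-native-resource translation described above (Figure 3) can be sketched in a few lines of Python. This is a minimal illustration, not Pinterest's actual controller code; the resource field layout and the sidecar/init-container names are assumptions made for the example.

```python
# Minimal sketch of the CRD controller's translation step: expand one
# hypothetical PinterestService custom resource into the native Kubernetes
# resources it bundles, injecting illustrative runtime-support containers.

def translate(cr):
    name = cr["metadata"]["name"]
    user_containers = cr["spec"]["containers"]

    deployment = {
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": cr["spec"].get("replicas", 1),
            "template": {
                "spec": {
                    # The controller injects runtime support around user code.
                    "initContainers": [
                        {"name": "config-init", "image": "config-init:latest"}
                    ],
                    "containers": user_containers + [
                        {"name": "traffic-sidecar", "image": "traffic-sidecar:latest"}
                    ],
                }
            },
        },
    }
    service = {"kind": "Service", "metadata": {"name": name},
               "spec": {"selector": {"app": name}, "ports": [{"port": 80}]}}
    ingress = {"kind": "Ingress", "metadata": {"name": name}}
    configmap = {"kind": "ConfigMap", "metadata": {"name": name + "-config"}}
    return [deployment, service, ingress, configmap]

# A user writes only the small CR on the left of Figure 3:
cr = {
    "kind": "PinterestService",
    "metadata": {"name": "example-api"},
    "spec": {"replicas": 3,
             "containers": [{"name": "app", "image": "example-api:v1"}]},
}
resources = translate(cr)
print([r["kind"] for r in resources])
```

The point of the sketch is the fan-out: one small spec becomes four native resources plus injected sidecars, which is exactly the configuration surface engineers would otherwise hand-maintain.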
Alternatives We considered some alternatives to custom resources, such as mutating admission controllers and templating systems. However, the alternatives all come with major issues, so we chose the path of CRDs. A mutating admission controller can be used to inject sidecars, environment variables and other runtime support. However, it has difficulties bundling resources together and managing their life cycle, whereas CRDs come with reconciliation, status updates and lifecycle management. Templating systems such as Helm charts are also widely used to launch applications with similar configurations. However, our workloads are too diverse to be managed by templates. We also need to support continuous deployment, which would be extremely error-prone with templates. Future Work Currently, we are running mixed workloads on all of our Kubernetes clusters. In order to support workloads of different sizes and types, we are working on the following areas: Cluster Federation spreads large applications over different clusters for scalability and stability. Cluster Stability, Scalability and Visibility makes sure applications reach their SLAs. Resource and Quota Management makes sure applications do not step on each other’s toes and the cluster scale is under control. A new CI/CD Platform supports application deployment on Kubernetes. Acknowledgements Many engineers at Pinterest helped build the platform from the ground up. Micheal Benedict and Yongwen Xu, who lead our engineering productivity effort, have worked together on setting the direction of the compute platform, discussing the design and helping with feature prioritization from the very beginning. Jasmine Qin and Kaynan Lalone helped with the Jenkins and Artifactory integration support. Fuyuan Bie, Brian Overstreet, Wei Zhu, Ambud Sharma, Yu Yang, Jeremy Karch, Jayme Cox, and many others helped build the config management, metrics, logging, security, networking and other infra support.
Jooseong Kim and George Wu helped build the Submission Service. Lastly, our early adopters Prasun Ghosh, Michael Permana, Jinfeng Zhuang and Ashish Singh provided a lot of useful feedback and feature requirements.
https://medium.com/pinterest-engineering/building-a-kubernetes-platform-at-pinterest-fb3d9571c948
['Pinterest Engineering']
2019-08-07 18:29:26.269000+00:00
['Docker', 'Kubernetes']
Reviewing Morgan Stanley’s Bitcoin research reports
(The following article is taken from mrb’s blog and republished with the permission of the author, Marc Bevand. Originally posted on 06 Feb 2018.) Marc reviewed the following two research reports published by analysts at Morgan Stanley on the subject of Bitcoin electricity usage. The first report estimates current electricity consumption at 2500 MW, agreeing with my own estimate of 1620/2100/3136 MW (lower bound/best guess/upper bound) as of January 11, 2018. However, I spotted a few errors. 1. Math error (multiplying instead of dividing) The analysts attempt to forecast future consumption, 12 months from now (ca. January 2019,) and claim it may be “more than 13 500/hour [sic] megawatts.” Based on TSMC production orders for 15–20k 300mm wafer-starts of Bitcoin ASICs per month, they estimate “up to 5–7.5M new rigs” could be added. They claim to calculate electricity consumption based on 6.5M, but their numbers line up only with the upper bound of 7.5M: 7.5M × 1300 (watts) × 1.4 (efficiency improvement) = 13 650 MW. The multiplication by 1.4 is meant to account for new rigs bringing a “40% efficiency improvement,” and this is their error: they multiply instead of dividing.1 A given volume of more energy-efficient wafers/chips consumes less, not more, per mm² of die area. When correcting this error we arrive at an estimate of 6950 MW, about half their published number (13 500 MW.) 2. PUE of mining farms as low as 1.03–1.33 The Morgan Stanley analysts assume “60% direct electricity usage (i.e. 40% of total electricity consumption is used for non-hashing operations like cooling, network equipment, etc.)” In data center lingo this is called a PUE of 100/60 = 1.67.
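The multiply-versus-divide error is easy to check numerically. Using the report's own inputs (7.5M new rigs at 1300 W each, with a 40% efficiency improvement), a few lines show how the two calculations diverge:

```python
# Reproduce the report's figures: 7.5M rigs at 1300 W each, with new rigs
# 40% more energy-efficient than the baseline.
rigs = 7.5e6
watts_per_rig = 1300
efficiency_gain = 1.4

# Morgan Stanley's calculation: multiplying by the efficiency factor.
reported_mw = rigs * watts_per_rig * efficiency_gain / 1e6   # in megawatts
# Corrected calculation: more efficient rigs draw *less* power, so divide.
corrected_mw = rigs * watts_per_rig / efficiency_gain / 1e6

print(round(reported_mw), round(corrected_mw))  # ≈13 650 vs ≈6964 MW
```

The corrected figure, about 6950 MW after rounding, is roughly half the published 13 500 MW, which is exactly the discrepancy described above.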
However, no study supports such terrible PUE values for the mining industry.2 In reality, most mining farms aggressively optimize their PUE: Gigawatt Mining builds air-cooled mining farms having a PUE of 1.03–1.05.3 Bitfury data centers are highly energy-efficient; for example their 40 MW Norway data center has a PUE of 1.05,4 and their CEO emphasized their Iceland data center does not have a high PUE.2 The well-known Bitmain Ordos mine reportedly has a PUE of either 1.11 or 1.33 (depending on which journalist’s numbers are trusted.) Google optimized their data center PUEs as low as 1.06,5 and electricity is not even one of their main costs. So it completely makes sense to find miners, for whom electricity is one of their main costs, to be in the same range. 3. Inconsistent PUE math According to their future consumption estimate and PUE estimate, the resulting global consumption should be 13500 × 1.67 = 22 500 MW × 90% utilization = 20 250 MW. But they calculate “nearly 16 000 MW.” 20 250 ≠ 16 000. The math is inconsistent. Correcting their math and parameters gives 6950 × 1.11 (or 1.33) × 90% = 6950 (or 8300) MW. In summary, Morgan Stanley’s first report forecasts the consumption ca. January 2019 will be 13 500–16 000 MW (120–140 TWh/yr annualized); however, fixing multiple errors actually forecasts 6950–8300 MW (60–75 TWh/yr annualized). 4. Hashrate method makes optimistic and pessimistic assumptions The report claims “the hash-rate methodology uses a fairly optimistic set of efficiency assumptions.” This is not true. Well, perhaps they referred to other people’s hashrate methodology. But mine, as explained in the introduction, makes both optimistic and pessimistic assumptions (miners using either the least or the most efficient ASICs.) 5. Only a fraction of the Ordos farm mines bitcoins The report continues by attempting to extrapolate the global electricity consumption from the Ordos mine: They fail to account for the fact that only 7/8th of the farm mines bitcoins.
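The PUE inconsistency can also be verified directly, using the report's 60%-direct-usage assumption and 90% utilization:

```python
# The report's own inputs for its global-consumption forecast.
direct_mw = 13500          # forecast direct (hashing) consumption
pue = 100 / 60             # 40% overhead implies a PUE of ~1.67
utilization = 0.9

implied_mw = direct_mw * pue * utilization
print(round(implied_mw))   # ≈20 250 MW, not the "nearly 16 000 MW" stated

# With the corrected 6950 MW base and realistic mining-farm PUEs:
corrected = [round(6950 * p * utilization, -1) for p in (1.11, 1.33)]
print(corrected)           # roughly the article's 6950 and 8300 MW range
```

Whatever inputs are chosen, the report's published 16 000 MW cannot be reproduced from its own stated assumptions.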
The other 1/8th mines litecoins. The media publishes slightly different power consumption numbers, implying either 29.2 or 35 MW for the Bitcoin rigs (depending on journalists.) They build their calculations on a grossly rounded estimate of its hashrate (“4% of ~6M TH/s”), but it can be calculated more exactly as we know there are 21k Bitcoin rigs (~263k TH/s.) When correcting these errors, the mine’s power consumption scaled to a global hashrate of 15.2M TH/s would imply a global power consumption of either 1690 or 2020 MW (14.8 or 17.7 TWh/yr) depending on journalists. This is significantly less than the analysts’ 2700 MW (23 TWh/yr.) 6. Antminer S9 dominates the market The report states “the most efficient mining rigs used by Bitmain in its facilities [Antminer S9/T9] are not yet widely available” and implies that, if they are not available, the average rig must be another, less efficient model. The analysts conflate market availability with market share. Bitmain claimed in mid-2017 they had a 70% market share. Everything points to the fact it is even higher today. The Antminer S9/T9 has been the only Bitcoin mining rig sold by Bitmain for the last 20 months. Batches of tens of thousands sell out in minutes at shop.bitmain.com. Bitmain is buying ~20k 16nm wafers a month and arguably makes up most of the ~10k a month that the Morgan Stanley analysts claim since 3Q17. ~10k wafers = ~270k S9 = ~3.6 EH/s manufactured per month. That is more than the 1–3 EH/s added monthly to the global hashrate over 3Q17/4Q17 (it takes months to go from wafer production to mining.) Bitmain rigs make up virtually all the hashrate being deployed to this day. 7. Electricity costs As to Morgan Stanley’s second report, it merely quotes the first report’s flawed prediction of 120–140 TWh/yr ca. January 2019. But other than that it is generally of better quality than the first. My criticism concerns relatively minor points.
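The corrected Ordos extrapolation works out as follows, taking the two Bitcoin-only power figures reported by different journalists:

```python
# Scale the Ordos farm's Bitcoin share up to the global network hashrate.
global_hashrate_ths = 15.2e6   # TH/s, global Bitcoin network
ordos_btc_ths = 263e3          # TH/s from the farm's 21k Bitcoin rigs

estimates = {}
for farm_mw in (29.2, 35.0):   # Bitcoin-only power, depending on the journalist
    global_mw = farm_mw * global_hashrate_ths / ordos_btc_ths
    twh_per_year = global_mw * 8760 / 1e6   # MW × hours/year → TWh/yr
    estimates[farm_mw] = (round(global_mw, -1), round(twh_per_year, 1))

print(estimates)  # ≈1690 MW (14.8 TWh/yr) or ≈2020 MW (17.7 TWh/yr)
```

Both outcomes fall well below the analysts' 2700 MW (23 TWh/yr) extrapolation.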
In it, the analysts calculate the cost of mining one bitcoin by assuming electricity costs between 6¢ and 8¢ per kWh. Their source is EIA numbers grossly rounded for entire geographical regions. Miners do not pay average prices. They choose the least expensive electrical utilities of these regions. For example, where the analysts quote 7.46¢ for Washington State (see their exhibit 5,) a mining farm located in this state, Giga Watt, pays in fact 2.8¢. It is my opinion that the industry average is probably around 5¢. 8. Bitmain’s direct sales model: ONE global price Another assumption they make when calculating the cost of mining a bitcoin is that outside China an Antminer S9 costs $7000. In reality, only individual retail sales reach such high prices on third-party sites such as eBay. Large-scale miners representative of the average mining farm, even outside China, all pay the same price: Bitmain’s direct sales price, which was $2320 for the batches sold around the time the report was written. 9. Transaction fees not accounted for Finally, they imply the cost of mining one bitcoin is a “breakeven point,” but that is not exactly true. For example, at the time of the report, transaction fees collected by miners averaged more than 600 BTC daily and boosted their global daily revenue by 1.33× (1800 to 2400 BTC,) hence the true breakeven point was 1.33× lower. Correcting these errors, with an electricity cost of $0.05/kWh, with the same sale price globally, and with the (unusually) high-fee period of December/January, the true breakeven point was $2300, significantly below the analysts’ number ($3000 to $7000.) Footnotes
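The fee adjustment in the last point can be sanity-checked numerically. The cost-ignoring-fees figure below is a hypothetical value chosen only so the example lines up with the article's corrected $2300 breakeven; the 1800/600 BTC daily figures come from the text:

```python
# Daily miner revenue at the time of the report, per the article.
subsidy_btc_per_day = 1800   # block rewards alone
fees_btc_per_day = 600       # transaction fees collected by miners

revenue_multiplier = (subsidy_btc_per_day + fees_btc_per_day) / subsidy_btc_per_day

# Hypothetical cost of mining one bitcoin if fees are ignored (illustrative
# figure, not from the report):
cost_ignoring_fees = 3067
true_breakeven = cost_ignoring_fees / revenue_multiplier

print(round(revenue_multiplier, 2), round(true_breakeven))  # 1.33 and ≈2300
```

Because fees raise revenue by a third, the price at which mining stops being profitable drops by the same factor.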
https://medium.com/coinmonks/reviewing-morgan-stanleys-bitcoin-research-reports-add2477b6a34
['Gaurav Agrawal']
2019-04-25 20:51:18.495000+00:00
['Cryptocurrency', 'Bitcoin', 'Investment', 'Morgan Stanley', 'Analysis']
Everything You Need to Know About Gradient Descent Applied to Neural Networks
https://medium.com/metadatos/todo-lo-que-necesitas-saber-sobre-el-descenso-del-gradiente-aplicado-a-redes-neuronales-19bdbb706a78
['Jaime Durán']
2019-09-20 08:23:20.454000+00:00
['Neural Networks', 'Artificial Intelligence', 'Deep Learning', 'Gradient Descent', 'Español']
What Software Engineers Can Learn From Studying Philosophy
Formal Logic Proofs are at the Core of Functional Programming There is currently a significant paradigm shift happening in the field of software engineering. Many developers are starting to notice that functional programming (FP) is taking over in the software world. Polymorphism and object-oriented programming (OOP) were for a time considered to be superior. However, as more developers recognize the benefits of composition over inheritance, these seemingly outdated patterns in the OOP world begin to fall short. FP languages such as Haskell, Elixir, Elm, Scala, and F# are increasingly becoming the new norm in modern software development. Now that we have some understanding of what functional programming is, we can better appreciate why formal logic is relevant. While learning functional programming, I noticed that many formal logic rules of replacement are used. Take, for example, the rule of associativity. One of the main benefits of functional programming is that you can create several small functions and compose them together to make entire programs and applications. These little functions are used as small, reusable components, almost like Lego blocks. The reason why engineers can quickly “snap” together these functions is their associative nature. One with a philosophy background will intuitively recognize how to use composition from doing proofs with the same replacement rules on whiteboards. Church proved that all computation could be accomplished through mathematical functions. Simply put, each of these functions can be viewed as a system that deterministically returns the same output given the same input. Church’s famous discovery is referred to as the Lambda Calculus. With functional programming rising in popularity, philosophy becomes even more relevant to software engineering. Many believe that functional programming is such a significant paradigm shift that it is also causing senior developers to feel like newbies all over again.
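The associativity the article refers to is easy to demonstrate: however you group function composition, the result is the same. A small Python sketch (the three functions are arbitrary examples, not from the article):

```python
def compose(f, g):
    """Return the function x -> f(g(x))."""
    return lambda x: f(g(x))

double = lambda x: x * 2
increment = lambda x: x + 1
square = lambda x: x * x

# Associativity: (f ∘ g) ∘ h  ==  f ∘ (g ∘ h) for every input.
left = compose(compose(double, increment), square)
right = compose(double, compose(increment, square))

print(left(3), right(3))  # both compute double(increment(square(3))) = 20
```

This is why small functions can be "snapped" together in any grouping, like the Lego blocks the article describes, and it mirrors the associativity replacement rule used in formal logic proofs.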
https://medium.com/swlh/what-software-engineers-can-learn-from-studying-philosophy-b746cf7126d1
['Derek Johnson']
2020-06-08 21:39:56.917000+00:00
['Web Development', 'Philosophy', 'Software Development', 'Coding', 'Software Engineering']
House Sales Predictor Using Deep Learning
Housing prices are an important reflection of the economy, and housing price ranges are of great interest to both buyers and sellers. In this project, house prices will be predicted given explanatory variables that cover many aspects of residential houses. The goal of this project is to create a regression model that is able to accurately estimate the price of a house given its features. Acquiring data The data for this project is available on Kaggle. The main objective is to predict the price of a given house based upon the previously available data. The dataset is available here: https://www.kaggle.com/harlfoxem/housesalesprediction Checking out the data As we can see, there are 21 different features upon which the price of the house depends. Not all of these features have a high correlation with the price, so we will only be looking at those features having a high correlation. Since this is quite a large dataset we have to check for missing values, and fortunately there are none! Exploratory data analysis Now we have to visualize the relationship between the price and the various features present in our dataset. The majority of houses are around the one million dollar mark. As is evident from the figure, most of the houses have 3–4 bedrooms. There are several other features, such as the year in which the house was built, the number of floors in each house, the square feet of living space available, and many more. Each of these features has a very high correlation with the price. Based on these features alone we can predict the price of a house, but the most important feature is the location of the house, which is discussed next. Geographical Plotting As is clear from the heatmap, the majority of pricey houses are present along the waterfront, which is also clear from the actual map of King County, USA. It is quite understandable, as the better the view from the house, the more expensive it will be.
Feature Extraction & Engineering There are a lot of columns present in our dataset which are of absolutely no use to us, so it is quite beneficial to simply drop these columns from our dataset. It is also better for us if the date column, which is initially an object, is converted to datetime. Creating the model First we do the train-test split on our dataset and scale the data using MinMaxScaler. The second step is to actually create a neural network using TensorFlow’s Keras API. Model Evaluation We will be evaluating our model using the various metrics available to us through scikit-learn. We get an accuracy score of around 0.8 and a mean absolute error of around 100,000. This means our predictor will be off by about a hundred thousand dollars; on a 2–3 million dollar house, the model seems to be doing a good job.
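The preprocessing and evaluation steps described above can be sketched without the full Keras network. This toy version uses made-up prices and a single feature, and a trivial price-per-square-foot "model" stands in for the neural network; the real project uses MinMaxScaler, a train-test split, and a TensorFlow/Keras model:

```python
# Toy sketch of the pipeline: min-max scale a feature, fit a trivial
# baseline model, and score it with mean absolute error (MAE).

sqft   = [1180, 2570, 770, 1960, 1680]             # made-up living areas
prices = [221900, 538000, 180000, 604000, 510000]  # made-up sale prices

def min_max_scale(values):
    """Scale values to [0, 1], like sklearn's MinMaxScaler."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

scaled = min_max_scale(sqft)

# Placeholder model: predict using the average price per square foot.
rate = sum(p / s for p, s in zip(prices, sqft)) / len(prices)
predictions = [rate * s for s in sqft]

mae = sum(abs(p, ) if False else abs(p - y) for p, y in zip(predictions, prices)) / len(prices)
print(round(mae))  # toy figure; the real model's MAE was around 100,000
```

The MAE metric is what justifies the conclusion above: an average error of ~$100,000 is judged against the price range of the houses being predicted.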
https://medium.com/swlh/house-sales-predictor-using-deep-learning-18b6139c3401
['Umair Ayub']
2020-09-25 08:05:42.983000+00:00
['Data Science', 'Deep Learning', 'Exploratory Data Analysis', 'Neural Networks']
The Evil of Neutrality
On a recent sleepy Monday morning I started my day as I did every morning about a decade ago: by watching the launch trailer for Mass Effect 2. Arguably one of the best role-playing games ever made, the trailer is narrated by Martin Sheen and set to Heart of Courage, a go-to track in Hollywood for evoking emotional build-up. Obviously I’m a huge nerd, but I’m also a sucker for basic, inspirational, pump-up music and videos. I’m the kind of person that would have purchased Jock Jams. Those movie montages that they play at sporting events when the game gets close? Those get me on my feet. It’s a little embarrassing, but as they say, “whatever gets you out of bed in the morning.” I hadn’t watched the trailer in a while, long enough that I had to google it. The results of the search had the opposite effect I was looking for, which was now becoming a standard for any internet interaction. Instead of a temporary boost of dopamine, I stumbled into a sideshow of hatred and idiocy, accelerating the tailspin of existential dread that comes with the start of every work week. At some point in 2016 the Trump campaign had ripped off the trailer to create a bit of propaganda. Now most of the search results either linked back to that shitty video, or the resultant fallout. I missed the original controversy because it took place in April of 2016, months before the November election, when Trump still felt like a bad joke that would soon sink back into the muck from which he crawled. Most of the coverage was from game-focused news outlets who pegged the appropriated video as a toxic distortion of something they loved — but at the same time, broadcast and signal boosted the propaganda to the exact audience for which it was created. The coverage even bled into the mainstream with an article in USA Today, which treated the video as a silly oddity, linked the bastardized trailer, and boldly quoted its most offensive tidbits.
It’s a textbook example of how not to play into an offensive meme, a lesson that you’d think we would have a much better handle on today. Instead, things have gotten much, much worse. A known tactic of the internet’s worst alt-right factions is to absorb pop culture memes and twist them into propaganda. It’s been happening on message boards for years, where a familiar bit of multimedia gets co-opted and spit back out with a different meaning, almost always under the guise of a “joke,” but the context hardly matters. As we’ve learned from Trump, the subtext is the text. The dog whistle is the clarion call. The mainstream media got their first look at this with Pepe the Frog, a cartoon image that was absorbed as a symbol of the alt-right. It was seen as a pin worn by Richard Spencer as he was punched in the face in the now infamous video. Its creator complained and had it removed from certain sites, but it was too late: the imagery was already co-opted and spread around the internet in a million nooks and crannies. Because the creator didn’t have a voice as metaphorically large as the image itself, the damage was done and Pepe’s meaning was permanently twisted. With this success, the tactics of the internet’s troll armies evolved. If an image could be co-opted, surely human beings could be as well, using the exact same tactics: take something in pop culture, claim it as your own and piggyback on its fame to spread your own message. You would think this technique would backfire against a target that has the ability to issue its own opinions, to lash out against a perversion of their brand, but the structural incentives of social media and the modern internet weigh heavily against the victim. At this very moment we’re seeing the same tactic in action with presidential candidate Tulsi Gabbard.
Gabbard’s politics are difficult to classify, and her position within the Democratic party feels counter to the rise of the left and the platforms established by Sanders and carried forth by AOC and Elizabeth Warren. Gabbard probably wouldn’t feel as out of place pre-Trump; she’d be a hawkish Democrat with populist leanings. She’s unwilling (or unable) to nail down exactly what her politics are, but this is extremely common in the primaries, giving the candidate the option to “pivot” in the general election. However, in the current environment of conspiracy theories and online counter-messaging, Gabbard is the perfect patsy. The playbook is simple: an online troll army identifies her as an outlier with views that maybe aren’t sympathetic to their cause but could be construed that way with a little tweaking. They generate the memes and then obtain official endorsements from alt-right leaders, on which the mainstream media is compelled to report. This news coverage, even when it acknowledges the playbook, signal boosts the troll message. The noise has become the signal, the subtext has become the text. But there’s an additional knock-on effect that has become part of the pattern of alt-right appropriation — the target is incentivized not to fight back. In Gabbard’s case the signal boost from the alt-right is directly beneficial to her campaign. Every time the trolls make her trend there will be a measurable bump in social media support metrics and undoubtedly campaign contributions and poll numbers. Just to be clear, I’m not making a philosophical argument; this can be broken down into hard currency and KPIs (key performance indicators) in any marketing or political campaign. In advertising the first step is to generate “awareness.” This used to be a squishy term that referred to whether potential consumers had heard of your product. Today, awareness is measurable across social networks because they track people’s search data, their follows, and their likes.
This is what it means when we talk about “targeted ads”: we already know the person we are advertising to has awareness of the product. When using the tools within a social network like Facebook’s Ad Manager, if you can identify a targeted audience you can lower your cost per click. This is a gross simplification, but basically if your product already has high awareness, your targeted ads cost less. This allows you to run more ads to a larger audience, which means you have a much higher chance of “conversion,” a term that means getting your target to act: in the case of politics, to donate or to vote. So, while Gabbard’s soft denial of her alt-right supporters seems fishy, it’s easily explained. The signal boost is beneficial to her overall campaign in terms of dollars and in terms of making each successive debate, in which participation invitations are based on support thresholds and performance metrics established by the DNC. In a way, the DNC itself has played into a system where popular exposure is the only metric worth chasing. We’ve watched this phenomenon play out before involving other pop culture icons. Taylor Swift’s apolitical stance early in her career designated her as a ripe target to be used as an unwilling white nationalist icon. Her position as an occasional country music star made striking out against groups that also associated with the rural South a risky tactic. Also, as a pop star, she didn’t have a strict obligation to broadcast her political views. However, over time the realization set in that remaining apolitical can make you an accomplice. What’s worse is that even if you abstain from the argument, as a public figure you reap the monetary rewards of increased online awareness. Taylor Swift has since taken sides, and tried to combat this attempt to co-opt her image.
Her debut as an ally may have been clumsy, and rejected by some of the people with whom she was hoping to align, but it was an important step in shedding the trolls that had latched on during her silent years. Compared to the next-largest example of alt-right appropriation, her actions look downright brave. Too much has been written about YouTube personality PewDiePie, but we recently reached the stage in his alt-right co-opting where the mainstream media has chimed in with the standard “What does he really believe?” article. As a pop culture icon wholly controlled by the metrics of social media networks, he didn’t just abstain from siding with the darker factions of his supporters; he goaded them on and made subtle indications of support. This tactic frequently ignited controversy, backpedaling, and an attempt by the star to abstain from fighting. It hasn’t worked, and his conversion to an alt-right meme is solidified. Regardless of what he really believes, you can’t spell PewDiePie without Pepe. At this moment there’s increasing pressure for all people, especially public figures and politicians, to define and defend their beliefs. When follower size translates directly to hard currency, allowing your worst followers to appropriate your image rather than dislodging them from your base is the same as taking a bribe. In politics, if your platform is ambiguous enough to be co-opted, the public has every right to doubt your strength as a candidate and your underlying motives. Either way, if you sit back and allow the dark forces of the internet to co-opt your image for nefarious purposes, you’ve joined them whether you like it or not.
https://medium.com/swlh/the-evil-of-neutrality-225d47dfcb7a
['David Clayman']
2019-11-11 18:39:37.654000+00:00
['Marketing', 'Politics', 'Communication', 'Media']
Metacat: Making Big Data Discoverable and Meaningful at Netflix
by Ajoy Majumdar, Zhen Li Most large companies have numerous data sources with different data formats and large data volumes. These data stores are accessed and analyzed by many people throughout the enterprise. At Netflix, our data warehouse consists of a large number of data sets stored in Amazon S3 (via Hive), Druid, Elasticsearch, Redshift, Snowflake and MySQL. Our platform supports Spark, Presto, Pig, and Hive for consuming, processing and producing data sets. Given the diverse set of data sources, and to make sure our data platform can interoperate across these data sets as one “single” data warehouse, we built Metacat. In this blog, we will discuss our motivations in building Metacat, a metadata service to make data easy to discover, process and manage. Objectives The core architecture of the big data platform at Netflix involves three key services: the execution service (Genie), the metadata service, and the event service. These ideas are not unique to Netflix, but rather a reflection of the architecture that we felt would be necessary to build a system not only for the present, but for the future scale of our data infrastructure. Many years back, when we started building the platform, we adopted Pig as our ETL language and Hive as our ad-hoc querying language. Since Pig did not natively have a metadata system, it seemed ideal for us to build one that could interoperate between the two. Thus Metacat was born: a system that acts as a federated metadata access layer for all the data stores we support, and a centralized service that our various compute engines can use to access the different data sets. In general, Metacat serves three main objectives: Federated views of metadata systems Unified API for metadata about datasets Arbitrary business and user metadata storage of datasets It is worth noting that other companies that have large and distributed data sets also have similar challenges. 
Apache Atlas, Twitter’s Data Abstraction Layer and LinkedIn’s WhereHows (Data Discovery at LinkedIn), to name a few, are built to tackle similar problems, but in the context of the respective architectural choices of those companies. Metacat Metacat is a federated service providing a unified REST/Thrift interface to access metadata of various data stores. The respective metadata stores are still the source of truth for schema metadata, so Metacat does not materialize it in its storage. It only directly stores the business and user-defined metadata about the datasets. It also publishes all of the information about the datasets to Elasticsearch for full-text search and discovery. At a higher level, Metacat features can be categorized as follows: Data abstraction and interoperability Business and user-defined metadata storage Data discovery Data change auditing and notifications Hive metastore optimizations Data Abstraction and Interoperability Multiple query engines like Pig, Spark, Presto and Hive are used at Netflix to process and consume data. By introducing a common abstraction layer, datasets can be accessed interchangeably by different engines. For example: A Pig script reading data from Hive will be able to read the table with Hive column types mapped to Pig types. For data movement from one datastore to another, Metacat makes the process easy by helping to create the new table in the destination data store using the destination table data types. Metacat has a defined list of supported canonical data types and has mappings from these types to each respective data store type. For example, our data movement tool uses the above feature for moving data from Hive to Redshift or Snowflake. The Metacat thrift service supports the Hive thrift interface for easy integration with Spark and Presto. This enables us to funnel all metadata changes through one system, which further enables us to publish notifications about these changes to enable data-driven ETL. 
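The canonical-type mapping described above lends itself to a simple lookup table. The sketch below is an illustrative assumption of how such a translation layer might look; the canonical type names and the per-store mappings are invented for the example and are not Metacat’s actual tables:

```python
# Hypothetical sketch of a canonical-type translation layer, in the spirit of
# Metacat's interoperability feature. The canonical names and per-store
# mappings are assumptions for illustration, not Metacat's real definitions.

CANONICAL_TO_STORE = {
    "redshift":  {"string": "VARCHAR(65535)", "long": "BIGINT",        "double": "DOUBLE PRECISION"},
    "hive":      {"string": "string",         "long": "bigint",        "double": "double"},
    "snowflake": {"string": "VARCHAR",        "long": "NUMBER(38,0)",  "double": "FLOAT"},
}

def translate_type(canonical_type: str, target_store: str) -> str:
    """Map a canonical type name to the target data store's native type."""
    try:
        return CANONICAL_TO_STORE[target_store][canonical_type]
    except KeyError:
        raise ValueError(f"no mapping for {canonical_type!r} in {target_store!r}")

def translate_schema(columns, target_store):
    """Translate [(name, canonical_type), ...] into the target store's types,
    e.g. for generating a CREATE TABLE during data movement."""
    return [(name, translate_type(t, target_store)) for name, t in columns]
```

For instance, `translate_schema([("user_id", "long")], "redshift")` would yield `[("user_id", "BIGINT")]`, which a data-movement tool could use to create the destination table before copying data.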
When new data arrives, Metacat can notify dependent jobs to start. Business and User-defined Metadata Metacat stores additional business and user-defined metadata about datasets in its storage. We currently use business metadata to store connection information (for RDS data sources, for example), configuration information, metrics (Hive/S3 partitions and tables), and table TTLs (time-to-live), among other use cases. User-defined metadata, as the name suggests, is free-form metadata that can be set by the users for their own usage. Business metadata can also be broadly categorized into logical and physical metadata. Business metadata about a logical construct such as a table is considered logical metadata. We use metadata for data categorization and for standardizing our ETL processing. Table owners can provide audit information about a table in the business metadata. They can also provide column default values and validation rules to be used for writes into the table. Metadata about the actual data stored in the table or partition is considered physical metadata. Our ETL processing stores metrics about the data at job completion, which is later used for validation. The same metrics can be used for analyzing the cost and space of the data. Given that two tables can point to the same location (as in Hive), it is important to distinguish logical from physical metadata, because two tables can have the same physical metadata but different logical metadata. Data Discovery As consumers of the data, we should be able to easily browse through and discover the various data sets. Metacat publishes schema metadata and business/user-defined metadata to Elasticsearch, which helps in full-text search for information in the data warehouse. This also enables auto-suggest and auto-complete of SQL in our Big Data Portal SQL editor. Organizing datasets as catalogs helps the consumer browse through the information. 
Tags are used to categorize data based on organizations and subject areas. We also use tags to identify tables for data lifecycle management. Data Change Notification and Auditing Metacat, being a central gateway to the data stores, captures any metadata changes and data updates. We have also built a push notification system around table and partition changes. Currently, we are using this mechanism to publish events to our own data pipeline (Keystone) for analytics, to better understand our data usage and trends. We also publish to Amazon SNS. We are evolving our data platform architecture to be an event-driven architecture. Publishing events to SNS allows other systems in our data platform to “react” to these metadata or data changes accordingly. For example, when a table is dropped, our S3 warehouse janitor services can subscribe to this event and clean up the data on S3 appropriately. Hive Metastore Optimizations The Hive metastore, backed by an RDS, does not perform well under high load. We have noticed a lot of issues around writing and reading of partitions using the metastore APIs. Given this, we no longer use these APIs. We have made improvements in our Hive connector that talks directly to the backing RDS for reading and writing partitions. Before, Hive metastore calls to add a few thousand partitions usually timed out, but with our implementation, this is no longer a problem. Next Steps We have come a long way in building Metacat, but we are far from done. Here are some additional features that we still need to work on to enhance our data warehouse experience. Schema and metadata versioning to provide the history of a table. For example, it is useful to track the metadata changes for a specific column or be able to view table size trends over time. Being able to ask what the metadata looked like at a point in the past is important for auditing, debugging, and also useful for reprocessing and roll-back use cases. 
Provide contextual information about tables for data lineage. For example, metadata like table access frequency can be aggregated in Metacat and published to a data lineage service for use in ranking the criticality of tables. Add support for data stores like Elasticsearch and Kafka. Pluggable metadata validation. Since business and user-defined metadata is free form, to maintain integrity of the metadata, we need validations in place. Metacat should have a pluggable architecture to incorporate validation strategies that can be executed before storing the metadata. As we continue to develop features to support our use cases going forward, we’re always open to feedback and contributions from the community. You can reach out to us via Github or message us on our Google Group. We hope to share more of what our teams are working on later this year! And if you’re interested in working on big data challenges like this, we are always looking for great additions to our team. You can see all of our open data platform roles here.
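The event-driven pattern described in this post, such as a janitor service reacting to a table-drop notification, can be sketched as a small consumer. The event schema, field names, and cleanup callback below are assumptions invented for illustration; they are not Metacat’s actual SNS payload:

```python
# Sketch of an event-driven consumer in the style described above: a janitor
# service reacting to a table-drop notification. The event shape here is an
# assumption for illustration, not Metacat's real message format.
import json

def handle_event(message: str, on_drop) -> bool:
    """Dispatch one metadata-change event; return True if cleanup was triggered."""
    event = json.loads(message)
    if event.get("type") == "TABLE_DROPPED":
        # e.g. delete the orphaned S3 data belonging to the dropped table
        on_drop(event["catalog"], event["database"], event["table"])
        return True
    return False

# Example: record which tables the janitor was asked to clean up.
dropped = []
msg = json.dumps({"type": "TABLE_DROPPED", "catalog": "prodhive",
                  "database": "default", "table": "tmp_events"})
handle_event(msg, lambda c, d, t: dropped.append((c, d, t)))
```

The design point is that the publisher (Metacat) never needs to know who is listening; any number of downstream systems can subscribe to the same topic and react independently.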
https://medium.com/netflix-techblog/metacat-making-big-data-discoverable-and-meaningful-at-netflix-56fb36a53520
['Netflix Technology Blog']
2018-06-14 16:41:04.838000+00:00
['Big Data', 'Hive Metastore', 'Metadata']
Feature Engineering & Feature Selection
2. Feature Engineering The key point of combining VSA with modern data science is that, by reading and interpreting the bars’ own actions, one (hopefully an algorithm) can construct a story of the market’s behaviour. The story might not be easily understood by a human, but it works in a sophisticated way. Volume in conjunction with the price range and the position of the close is easy to express in code. Volume: pretty straightforward Range/Spread: Difference between high and low def price_spread(df): return (df.high - df.low) Closing Price Relative to Range: Is the closing price near the top or the bottom of the price bar? def close_location(df): return (df.high - df.close) / (df.high - df.low) # 0 indicates the close is the high of the day, 1 means the close # is the low of the day, and the smaller the value, the closer the # close price is to the high. The change of stock price: pretty straightforward Now comes the tricky part: “When viewed in a larger context, some of the price bars take on a new meaning.” That means to see the full picture, we need to observe those 4 basic features under a different time scale. To do that, we need to reconstruct a High (H), Low (L), Close (C) and Volume (V) bar at varied time spans. def create_HLCV(i): ''' #i: days #as we don't care about open that much, that leaves volume, #high, low and close ''' df = pd.DataFrame(index=prices.index) df[f'high_{i}D'] = prices.high.rolling(i).max() df[f'low_{i}D'] = prices.low.rolling(i).min() df[f'close_{i}D'] = prices.close.rolling(i).\ apply(lambda x:x[-1]) # close_2D = close, as rolling backwards means today is # literally the last day of the rolling window. df[f'volume_{i}D'] = prices.volume.rolling(i).sum() return df Next step: create those 4 basic features based on a different time scale. 
def create_features(i): df = create_HLCV(i) high = df[f'high_{i}D'] low = df[f'low_{i}D'] close = df[f'close_{i}D'] volume = df[f'volume_{i}D'] features = pd.DataFrame(index=prices.index) features[f'volume_{i}D'] = volume features[f'price_spread_{i}D'] = high - low features[f'close_loc_{i}D'] = (high - close) / (high - low) features[f'close_change_{i}D'] = close.diff() return features The time spans that I would like to explore are 1, 2, 3 days and 1 week, 1 month, 2 months, 3 months, which are roughly [1,2,3,5,20,40,60] trading days. Now, we can create a whole bunch of features: def create_bunch_of_features(): days = [1,2,3,5,20,40,60] bunch_of_features = pd.DataFrame(index=prices.index) for day in days: f = create_features(day) bunch_of_features = bunch_of_features.join(f) return bunch_of_features bunch_of_features = create_bunch_of_features() bunch_of_features.info() To make things easy to understand, our target outcome will only be the next day’s return. # next day's returns as outcomes outcomes = pd.DataFrame(index=prices.index) outcomes['close_1'] = prices.close.pct_change(-1) 3. Feature Selection Let’s have a look at how those features correlate with the outcome, the next day’s return. corr = bunch_of_features.corrwith(outcomes.close_1) corr.sort_values(ascending=False).plot.barh(title = 'Strength of Correlation'); It is hard to claim much correlation, as all the numbers are well below 0.8. corr.sort_values(ascending=False) Next, let’s see how those features relate to each other. corr_matrix = bunch_of_features.corr() Instead of making a heatmap, I am going to use Seaborn’s clustermap to cluster row-wise and column-wise to see if any pattern emerges. Seaborn’s clustermap function is great for making simple heatmaps and hierarchically-clustered heatmaps with dendrograms on both rows and/or columns. This reorganizes the data for the rows and columns and displays similar content next to one another, for even more depth of understanding of the data. 
A nice tutorial about the cluster map can be found here. To get a cluster map, all you need is actually one line of code. sns.clustermap(corr_matrix) If you carefully scrutinize the graph, some conclusions can be drawn: Price spread is closely related to volume, as clearly shown at the centre of the graph. The location of the close is related to itself across different timespans, as indicated at the bottom right corner. From the pale colour of the top left corner, close price change does pair with itself, which makes perfect sense. However, it looks a bit random, with no cluster pattern across the varied time scales; I would expect the 2-day change to be paired with the 3-day change. The randomness of the close price difference could be due to the characteristics of the stock price itself. A simple percentage return might be a better option. This can be realized by changing the close diff() to close pct_change() . def create_features_v1(i): df = create_HLCV(i) high = df[f'high_{i}D'] low = df[f'low_{i}D'] close = df[f'close_{i}D'] volume = df[f'volume_{i}D'] features = pd.DataFrame(index=prices.index) features[f'volume_{i}D'] = volume features[f'price_spread_{i}D'] = high - low features[f'close_loc_{i}D'] = (high - close) / (high - low) #only change here features[f'close_change_{i}D'] = close.pct_change() return features and do everything again. def create_bunch_of_features_v1(): days = [1,2,3,5,20,40,60] bunch_of_features = pd.DataFrame(index=prices.index) for day in days: f = create_features_v1(day)#here is the only difference bunch_of_features = bunch_of_features.join(f) return bunch_of_features bunch_of_features_v1 = create_bunch_of_features_v1() #check the correlation corr_v1 = bunch_of_features_v1.corrwith(outcomes.close_1) corr_v1.sort_values(ascending=False).plot.barh( title = 'Strength of Correlation') A little bit different, but not much! corr_v1.sort_values(ascending=False) What happens to the correlation between the features? 
corr_matrix_v1 = bunch_of_features_v1.corr() sns.clustermap(corr_matrix_v1, cmap='coolwarm', linewidth=1) Well, the pattern remains unchanged. Let’s change the default method from “average” to “ward”. These two methods are similar, but “ward” is more like k-means clustering. A nice tutorial on this topic can be found here. sns.clustermap(corr_matrix_v1, cmap='coolwarm', linewidth=1, method='ward') To select features, we want to pick those that have the strongest, most persistent relationships to the target outcome. At the same time, we want to minimize the amount of overlap or collinearity among the selected features, to avoid noise and wasted compute. For features that are paired together in a cluster, I only pick the one that has the stronger correlation with the outcome. By just looking at the cluster map, a few features are picked out. deselected_features_v1 = ['close_loc_3D','close_loc_60D', 'volume_3D', 'volume_60D', 'price_spread_3D','price_spread_60D', 'close_change_3D','close_change_60D'] selected_features_v1 = bunch_of_features_v1.drop \ (labels=deselected_features_v1, axis=1) Next, we are going to take a look at a pair plot. A pair plot is a great method to identify trends for follow-up analysis, allowing us to see both distributions of single variables and relationships between multiple variables. Again, all we need is a single line of code. sns.pairplot(selected_features_v1) The graph is overwhelming and hard to see. Let’s take a small group as an example. selected_features_1D_list = ['volume_1D', 'price_spread_1D',\ 'close_loc_1D', 'close_change_1D'] selected_features_1D = selected_features_v1\ [selected_features_1D_list] sns.pairplot(selected_features_1D) There are two things I noticed immediately: one is that there are outliers, and the other is that the distributions are nowhere close to normal. Let’s deal with the outliers for now. In order to do everything in one go, I will join the outcome with the features and remove the outliers together. 
features_outcomes = selected_features_v1.join(outcomes) features_outcomes.info() I will use the same method described here, here and here to remove the outliers. stats = features_outcomes.describe() def get_outliers(df, i=4): #i is number of sigma, which defines the boundary around the mean outliers = pd.DataFrame() for col in df.columns: mu = stats.loc['mean', col] sigma = stats.loc['std', col] condition = (df[col] > mu + sigma * i) | (df[col] < mu - sigma * i) outliers[f'{col}_outliers'] = df[col][condition] return outliers outliers = get_outliers(features_outcomes, i=1) outliers.info() I set 1 standard deviation as the boundary to dig out most of the outliers. Then remove all the outliers along with the NaN values. features_outcomes_rmv_outliers = features_outcomes.drop(index = outliers.index).dropna() features_outcomes_rmv_outliers.info() With the outliers removed, we can do the pair plot again. sns.pairplot(features_outcomes_rmv_outliers, vars=selected_features_1D_list); Now, the plots are looking much better, but it is hard to draw any useful conclusions. It would be nice to see which spots are down moves and which are up moves in conjunction with those features. I can extract the sign of the stock price change and add an extra dimension to the plots. features_outcomes_rmv_outliers['sign_of_close'] = features_outcomes_rmv_outliers['close_1'].apply(np.sign) Now, let’s re-plot the pairplot() with a few tweaks to make the graph pretty. sns.pairplot(features_outcomes_rmv_outliers, vars=selected_features_1D_list, diag_kind='kde', palette='husl', hue='sign_of_close', markers = ['*', '<', '+'], plot_kws={'alpha':0.3});# transparency: 0.3 Now, it looks much better. Clearly, when the prices go up (the blue spots), they are denser and aggregate at certain locations, whereas on down days they spread everywhere. I would really appreciate it if you could shed some light on the pair plot and leave your comments below, thanks. 
Here is the summary of all the code used in this article: #import all the libraries import pandas as pd import numpy as np import seaborn as sns import yfinance as yf #the stock data from Yahoo Finance import matplotlib.pyplot as plt #set the parameters for plotting plt.style.use('seaborn') plt.rcParams['figure.dpi'] = 300 #define a function to get data def get_data(symbols, begin_date=None,end_date=None): df = yf.download(symbols, start = begin_date, auto_adjust=True,#only download adjusted data end= end_date) #my convention: always lowercase df.columns = ['open','high','low', 'close','volume'] return df prices = get_data('AAPL', '2000-01-01', '2010-12-31') #create some features def create_HLCV(i): #as we don't care about open that much, that leaves volume, #high, low and close df = pd.DataFrame(index=prices.index) df[f'high_{i}D'] = prices.high.rolling(i).max() df[f'low_{i}D'] = prices.low.rolling(i).min() df[f'close_{i}D'] = prices.close.rolling(i).\ apply(lambda x:x[-1]) # close_2D = close, as rolling backwards means today is # literally the last day of the rolling window. 
df[f'volume_{i}D'] = prices.volume.rolling(i).sum() return df def create_features_v1(i): df = create_HLCV(i) high = df[f'high_{i}D'] low = df[f'low_{i}D'] close = df[f'close_{i}D'] volume = df[f'volume_{i}D'] features = pd.DataFrame(index=prices.index) features[f'volume_{i}D'] = volume features[f'price_spread_{i}D'] = high - low features[f'close_loc_{i}D'] = (high - close) / (high - low) features[f'close_change_{i}D'] = close.pct_change() return features def create_bunch_of_features_v1(): ''' the timespans that I would like to explore are 1, 2, 3 days and 1 week, 1 month, 2 months, 3 months, which roughly are [1,2,3,5,20,40,60] ''' days = [1,2,3,5,20,40,60] bunch_of_features = pd.DataFrame(index=prices.index) for day in days: f = create_features_v1(day) bunch_of_features = bunch_of_features.join(f) return bunch_of_features bunch_of_features_v1 = create_bunch_of_features_v1() #define the outcome target #here, to make things easy to understand, I will only try to predict #the next day's return outcomes = pd.DataFrame(index=prices.index) # next day's returns outcomes['close_1'] = prices.close.pct_change(-1) #decide which features are redundant from the cluster map deselected_features_v1 = ['close_loc_3D','close_loc_60D', 'volume_3D', 'volume_60D', 'price_spread_3D','price_spread_60D', 'close_change_3D','close_change_60D'] selected_features_v1 = bunch_of_features_v1.drop(labels=deselected_features_v1, axis=1) #join the features and outcome together to remove the outliers features_outcomes = selected_features_v1.join(outcomes) stats = features_outcomes.describe() #define the method to identify outliers def get_outliers(df, i=4): #i is number of sigma, which defines the boundary around the mean outliers = pd.DataFrame() for col in df.columns: mu = stats.loc['mean', col] sigma = stats.loc['std', col] condition = (df[col] > mu + sigma * i) | (df[col] < mu - sigma * i) outliers[f'{col}_outliers'] = df[col][condition] return outliers outliers = get_outliers(features_outcomes, i=1) #remove all the outliers and NaN values features_outcomes_rmv_outliers = features_outcomes.drop(index = outliers.index).dropna() This article has run long enough, so I will leave it here. In the next article, I will apply a data transformation to see if the distribution issue can be fixed. Stay tuned!
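One caveat worth flagging in the close-location feature used throughout this article: whenever a bar’s high equals its low (a zero-range bar), (high - close) / (high - low) divides by zero and produces inf or NaN noise in the feature. The variant below is a hypothetical refinement, not part of the original code, that maps zero-range bars to NaN explicitly:

```python
import numpy as np
import pandas as pd

def close_location_safe(high: pd.Series, low: pd.Series, close: pd.Series) -> pd.Series:
    """Close position within the bar's range: 0 = close at the high,
    1 = close at the low. Zero-range bars (high == low) yield NaN
    instead of a division-by-zero artifact."""
    rng = (high - low).replace(0, np.nan)  # zero-range bars -> NaN denominator
    return (high - close) / rng
```

The resulting NaNs then flow through the same `.dropna()` step the article already applies when removing outliers, rather than silently distorting correlations.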
https://towardsdatascience.com/feature-engineering-feature-selection-8c1d57af18d2
['Ke Gui']
2020-10-13 11:19:02.781000+00:00
['Machine Learning', 'Python', 'Feature Engineering', 'Trading', 'Feature Selection']
We Are. We Are Not. We Will Never Be.
We Are We Are Not. We Will Never Be… Courtesy of William Okpo I watched as you burned sage, cleansing your home of another breakup, burying a relationship you thought would not live up to its potential, You were right. In the brisk air of the hallway, the smoke led itself down an uneven path, waiting for tourists afraid to drink you — thirsts forever unquenched. who you are to them isn’t who you are to me and only time knows the Truth. It is the clever souls who bark up trees with no grip to console their feverish minds, biting at old stories, trying to pick up where they left off leaving the glories of the good ole days in dusty waste bins, unsure of how to clean each One. Didn’t you find me in your reflection standing behind years of defeat, yet holding every story we shared over your head as a reminder of how insouciant you are. belligerent in crumbling armor, a world of “no, thank you” and “please, leave me be” lives on the tip of your tongue. We Are. We Are Not. We Will Never Be.
https://medium.com/a-cornered-gurl/we-are-8d5b19b33798
['Tre L. Loadholt']
2017-04-23 21:52:39.593000+00:00
['Beauty', 'I Missed Yall', 'Writing']
Natural Gas Is Dirtier than Coal
Photo by Patrick Hendry on Unsplash Did you think natural gas was clean? Don’t worry, you aren’t the only one. The American Petroleum Institute, including members BP, Shell, and ExxonMobil, want you to think that natural gas is clean. In fact, they say exactly that on their website. The API claims that natural gas produces one-half of the carbon emissions that coal does when used to generate electricity. This is somewhat accurate; coal releases about 25 grams of carbon (as CO2) per MJ of energy produced, while natural gas releases about 15 grams. The issue with natural gas, though, isn’t the carbon released when it is used. The problem is the amount of natural gas that escapes, or leaks, into the atmosphere. Why natural gas leaks are so harmful If natural gas went from the ground to consumers’ homes without leaking, it would indeed be cleaner than coal. Unfortunately, it leaks. Natural gas is made up of mostly methane, which is why the leaks are a huge problem. Methane is a much more effective greenhouse gas than CO2, and it warms the planet 86 times as much as carbon dioxide before it starts decaying. Over a 20-year period, a 1% leak rate of natural gas is the equivalent of 27.9 grams of carbon (as CO2-equivalents) released into the atmosphere (per MJ of energy produced), compared to 25 grams from coal. After a decade or two, methane begins decaying to CO2. This means over longer periods of time, like 100 years, the effects of methane aren’t as drastic. I noted earlier that over a 20-year period, methane is 86 times more effective as a greenhouse gas than CO2; over 100 years, methane is “only” 34 times as harmful. Over a 20-year period, roughly a 1% leak rate is the point at which natural gas becomes worse than coal, and over a 100-year period the “acceptable” leak rate at which natural gas is better than coal is as high as 3.2%. 
Due to the urgency of climate change, I think it makes more sense to use a relatively short timeline (such as 20 years) rather than a longer timeline (100 years) to measure the harm of natural gas. We need to take action immediately, and, to put it bluntly, we can’t afford to be using natural gas at all. The intensely powerful greenhouse effect of methane is causing irreversible damage to our planet. So what’s the current leak rate? Well, it depends on who you ask. The natural gas industry reports a leak rate of 0.42%. That must be why they keep patting themselves on the back about how clean natural gas is. I think we know better than to trust the natural gas industry to tell the truth about the leak rate. A more reliable source, the Environmental Protection Agency (EPA), says the leakage rate is 1.4%. At a 1.4% leak rate, natural gas is more harmful than burning coal over a 20-year period, but less harmful over 100 years. The EPA isn’t exactly trustworthy anymore, though. The 1.4% leakage rate was published in a 2018 report, when Scott Pruitt was still the head of the EPA. If you don’t know anything about Scott Pruitt, his favorite activities as Oklahoma Attorney General were taking money from the fossil fuel industry and suing the EPA. Shortly after his stint as AG, he was appointed to be the head of the EPA. He later left the position in 2018 because he was under investigation for unethical behavior and conflicts of interest. The EPA published the natural gas leakage rate about three months before he left. Would the real leak rate please stand up? The EPA’s rate of 1.4% is really bad, just to be clear. It’s over 3x what the natural gas industry reported, and it’s worse than burning coal over shorter time periods, like 20 years. The real leak rate of natural gas is much worse than the EPA reported. A study published in 2018 found that the natural gas leakage rate is about 60% more than official estimates, and the actual leakage rate is around 2.3%. 
At 2.3%, natural gas is much worse than coal over 20-year periods, and a little less harmful than coal over a period of 100 years. The future of energy We need to be focused on doing as much as we can to mitigate the future effects of climate change right now. The damage that natural gas is doing to the atmosphere is unacceptable, but avoidable. Globally, the cost of solar and wind power is set to fall below the cost of using fossil fuel as early as next year, and in some countries it’s already cheaper. Despite the progress made with renewable energy, natural gas is still thriving. Natural gas generates more electricity than any other form of energy, and some have estimated our natural gas supply could last for the next 200 years. Due to the existing infrastructure in place for supplying natural gas, energy companies are hesitant to migrate away from natural gas. The electricity generating capacity for natural gas is also increasing at a higher rate than clean sources of electricity, which means more new plants are coming online. This should not be happening. We need to migrate away from natural gas as soon as possible, and put all available resources into building clean energy plants. The lie of natural gas Natural gas just sounds clean, doesn’t it? The perception of natural gas as a clean or clean-er source of energy might be why it’s been so hard to get rid of. Based on the name, I would assume that natural gas was a squeaky clean source of energy, and it seems like everyone else assumes the same. In 2017, my local Farmers Market began partnering with a local natural gas company. The gas company latched on to the image and perception of the Farmers Market — sustainable, organic, environmentally friendly — and used it to their advantage. I began seeing posts in my social media feed from the Farmers Market that read like natural gas propaganda, talking about how clean and sustainable natural gas is. 
At this point I hadn’t done any research into natural gas, so I didn’t know if the claims were true or not, but the language in the posts seemed forced and suspicious. After enough research to feel informed, I commented on one of their social media posts letting them know that I was disappointed in their fossil fuel partnership. I felt like that went against the values of the Farmers Market. The Farmers Market quickly deleted my comment and asked me to instead email them with my concerns, which I did. Over email, they admitted they never fact-checked the natural gas company, and posted statistics and information the natural gas company provided without verifying it was true. The posts were never altered or deleted, though, and were seen by thousands of people. You can still view them today, and they still proclaim that “natural gas is WAY cleaner [than coal], way way cleaner. The US is burning a lot of natural gas.” At least the second half of the statement is true. The power of disinformation Disinformation comes from everywhere, even sources we trust. Who would’ve thought I couldn’t trust my local Farmers Market? They were probably the institution I trusted the most. I don’t blame the Farmers Market for initially falling victim to the lie of natural gas; I previously thought natural gas was much cleaner than coal, too. I do blame them for not properly fact-checking the owner of a natural gas company, and promoting fossil fuel propaganda, even after they learned the truth. When it comes to climate change, we can’t afford to believe everything we read. Especially when it comes from companies profiting from the destruction of the planet. Be diligent, do your own research, and don’t forget: natural gas is dirtier than coal.
https://medium.com/anti-dote/natural-gas-is-dirtier-than-coal-69e95eb4961
['Daniel May']
2019-11-07 05:26:13.618000+00:00
['Politics', 'Business', 'Global Warming', 'Climate Change', 'Environment']
Trump’s Desperation Is Starting to Look Like a Viral Karen Meltdown
Trump’s Desperation Is Starting to Look Like a Viral Karen Meltdown Videos of unhinged rants are now the news Melissa Carone, who was working for Dominion Voting Systems, speaks in front of the Michigan House Oversight Committee in Lansing, Michigan. Photo: Jeff Kowalsky/Getty Images Every universally shared collective trauma — and what has been more universally shared than the pandemic? — needs some comic relief to take the edge off. If we cannot find something to laugh and shake our heads in disbelief at, we will go mad. This does not always showcase our better natures: We should all strive to rise above the instinct to mock and point. But it’s hard out there. Sometimes there is relief, even gratitude, in seeing someone freaking out even more than you are. It’s not right. But people do it. We have seen this with the Target mask-display destroyer, and the lady throwing food on the floor of the grocery store because she had to wear a mask, and the guy who had to carry his dad out of an Arizona bakery because he was screaming at the staff. The correct way to respond to these people is with empathy and human understanding, with genuine hope that they get the help they so clearly need. But it is perhaps fair if you forgive yourself for not always having that immediate reaction. It’s amusing. It’s sad, but it’s amusing. Particularly because, in the middle of this, we’re all teetering a bit ourselves. The example I always use is the woman a few years ago who started screaming in an Apple Store. As someone who remembers the exhaustion and mania that comes with having a small child — and also having to deal with tech snafus involving massive corporations — I can laugh at that because I know how she feels. I’ve never done anything like that, but I can see how someone would feel pushed just far enough. That’s what a viral video like this one does: It exposes the emotions that burble around just below the surface of us all.
I will submit, however, that it is perhaps not the best thing that these viral videos are now happening in courtrooms involving lawyers representing the president of the United States. The woman’s name is Melissa Carone, and she was Rudy Giuliani’s “star” witness at one of the heaving, sweating, agonizingly desperate hearings President Trump and his (dwindling and increasingly erratic) lawyers keep having in an attempt to stave off the punishing constraints of reality just a little bit longer. Her appearance was so perfectly calibrated to whipsaw around the internet instantaneously that many wondered if she wasn’t doing some sort of Improv Everywhere gag. This video isn’t that much crazier than the average loon-in-a-Publix video we’ve seen constantly since the pandemic began. It’s about exactly as crazy: It’s average crazy, as far as these videos go. But what has changed is that this video is happening, uh, right next to Rudy Giuliani, who as the lawyer for the president of the United States is actively trying to overturn the results of an election his client lost. This video is not just a funny viral video that you send around to your friends. This video is the news. I know the rush of information and madness that has flickered past us every minute of every day as we’ve been trapped in our homes with no choice but to consume it — I know it’s overwhelming. I know it can be tough to make sense of it. But this really shouldn’t be overlooked: The viral Target mask videos are now happening as part of normal government business. If you want to see someone do something unhinged, all you have to do is turn on C-SPAN and wait. Someone clearly impaired saying nonsensical things and making a public embarrassment of themselves — and this person is right at the very center of one of the most important, perilous moments in American history. What a way for 2020 to finish itself. You just have to laugh and shake your head. Because the alternative is absolutely terrifying.
https://gen.medium.com/the-courtroom-feeds-are-going-viral-2b3026c9938e
['Will Leitch']
2020-12-04 14:53:03.917000+00:00
['Karen', 'Politics', 'Trump', 'Coronavirus', 'Rudy Giuliani']
Run the Dishwasher Twice
Run the Dishwasher Twice A lesson on throwing out the rule book and saving yourself Photo: Leren Lu/Getty Images When I was at one of my lowest points in life, I couldn’t get out of bed on some days. I had no energy or motivation and was barely getting by. Even therapy seemed like too much effort. I had been going every week, and on one particular day, I didn’t have much to “bring” to the session. My therapist asked how my week was going, and I really had nothing to say. “What are you struggling with?” he asked. I gestured around me and said, “I dunno, man. Life.” Not satisfied with my answer, he said, “No, what exactly are you worried about right now? What feels overwhelming? When you go home today, what issue will be staring at you?” I wanted to give him an answer that was substantial, something that seemed worthy of struggle. But instead, I told him the truth. “Honestly?” I said. “The dishes. It’s stupid, I know, but the more I look at them, the more I can’t do them because I’ll have to scrub them before I put them in the dishwasher, because my dishwasher sucks, and I just can’t stand to scrub the dishes.” I felt like an idiot just saying it out loud. What kind of grown-ass adult is undone by a stack of dishes? There are people out there with actual problems, and I’m whining to my therapist about a basic household chore? And yet my therapist nodded in understanding. And then he shared his advice: “Run the dishwasher twice.” Huh? I began to tell him you’re not supposed to do that, but he immediately stopped me. “Why the hell aren’t you supposed to? If you don’t want to scrub the dishes and your dishwasher sucks, run it twice. Run it three times, who cares?! Rules do not exist.” His words blew my mind in a way that I don’t think I can properly express. That day, I went home and tossed my smelly dishes haphazardly into the dishwasher and ran it three times. I felt like I had conquered a dragon. The next day, I took a shower lying down. 
A few days later, I folded my laundry and put my clothes wherever the fuck they fit. As I reveled in my newfound freedom, I stopped seeing each day as a series of arbitrary rules to follow. Eventually, I felt free enough to set goals again, on my own terms. Now that I’m in a much healthier place, I rinse off my dishes and place them in the dishwasher properly. I shower standing up. I sort my laundry. But at a time when living was a struggle instead of a blessing, I learned an incredibly important lesson: There are no rules. Run the dishwasher twice.
https://forge.medium.com/run-the-dishwasher-twice-e24ff24def60
['Kate Scott']
2020-12-01 15:49:45.392000+00:00
['Mental Health', 'Self', 'Home', 'Life Lessons', 'Therapy']
You Can Save Your Kid from Prostitution or Lawyering, but Never from Writing
Whenever someone asks me if they should become a writer, I always ask them if they have considered becoming a prostitute instead. Photo by Eric Nopanen on Unsplash I mean, your customers seek you out, your own skill set does not actually matter much beyond making a few monotonous sounds, the working hours/remuneration ratio is vastly superior and the sheer likelihood of payment is far more certain. Ok, so maybe I do not actually say this out loud. I actually just say “No”. Because if you can just walk away from writing, you should. If you cannot, then you are already a writer. Then, it is pretty much just about up-skilling for the rest of your life. Think of it as a special kind of open-air prison that people semi-voluntarily sign up for, eagerly checking the box that will grant them a life sentence. Go figure. I think the two things parents pray hardest for are never to have their young adult son or daughter sit them down at the kitchen table and say: “I have decided to become a prostitute” or “I have decided to become a writer.” Photo by Kelly Sikkema on Unsplash Why should I not become a writer? Next people will be wanting a list of reasons that they should not do crack! Where do I even start? Writing never ends, is frustration on an industrial scale, is hard work, is something you have to get better at forever, and will lead to an almost certain struggle to earn a living wage. And you might lose your mind on a regular basis, as you will discover the infinite number of rabbit holes hidden in your brain and discover they are filled with assorted disorderly notions, facts, filters, assumptions, delusions, thoughts, semi-thoughts, sea monsters and utter nonsense. You will chase your own tail through the looking glass. You are also guaranteed to discover yourself looking right back at you in open scorn, more times than you will want to endure. And yet, people worry about absinthe. Throughout my youth, I was told I should become a lawyer.
Photo by Dmitrij Paskevic on Unsplash And I am confident I would have been a good one. I know I would have actually enjoyed constructing arguments and whipping pertinent facts through painstakingly designed hoops. And though I have been fortunate enough to make a good living as a writer, it is nowhere near the indecent amount of cash I could have scored as a lawyer. Basically, any working writer has a beast of a work ethic that they could have refocused effectively into some much more lucrative career. But then, it was just never really an option. Because writing. Nothing beats it. There is such immense satisfaction in distilling thoughts into a visible form that can be shared with other people, across vast distances of space and time. Writers commune constantly with the depths of their own humanity, while simultaneously making a constant exploration and study of everything they can observe of what other people think, feel and do. To find the connections, to discover what resonates. And then bring that into their own work. All to ultimately make direct contact with the minds of perfect strangers to entertain, influence, inspire, uplift or otherwise make some valuable impact. Essentially, to move people. Once you have truly seen or felt the impact of the written word, and then have also somehow been compelled to allow the sentences tumbling around in your head to enter the world on paper or on a screen, it has begun. And if you return to this activity again and again, regardless of the reward or lack of reward, scribbling onward at intervals, you are most likely in the throes of a full-blown addiction and there is nothing to do about it but write on and hope for the best. For I have never heard of a single case of anyone being cured. Photo by Andrew Neel on Unsplash Here I am only a few weeks into writing on Medium and I feel like a kid in a candy store! I love this place. So much to read, so much to write! 
I really came to Medium to read more by a few bloggers I kept stumbling across. I kind of just happened to write a thing or two. And perhaps I may have five or twenty drafts cooking. But I do want to say that, despite all that, I can quit anytime I want to. Really.
https://medium.com/literally-literary/you-can-save-your-kid-from-prostitution-or-lawyering-but-never-from-writing-a54781765c98
['Ingrid L. Williams']
2020-03-01 20:35:36.651000+00:00
['Humor', 'Writing Life', 'Essay', 'Life Lessons', 'Writing']
Contradictory Dreams
Contradictory Dreams Becoming nomadic helped me embrace inner conflict Sometimes I think everything is beautiful. Then I come to a place like this, trees glowing orange over cobalt hills, a beauty so blinding I veil my eyes with clichés — and my worldview shatters. I had a dream, once, of moving to a cottage in the mountains, but I had settled for city life. I told myself I wanted closeness: to cafés, museums, friends. More importantly, my partner liked the city. (Never mind that he shared my dream of mountains, my inner conflict — it was easier to think that he didn’t.) Besides, everything was beautiful, people as lovely as nature; I wasn’t really giving anything up. I meant it when I said it — patches of pavement, paintings of corpses, busy city squares have all floored me with unexpected glory — but “everything is beautiful” had also been the spell I chanted to protect myself from my own dreams. It was only when I arrived at the dream, the home with trails leading out the front door, that I let myself feel my yearning. It did make sense to want this, not just weekend drives to the distant mountains, excursions to the highest peaks on the sunniest days, but the daily walk, the grass decked out with pearls after the rain, the leaves turning day by day, the birds I know almost by name. I walk, climb on. This place, in its silence and solitude, lets me hear my own thoughts. I think about what we give up: happiness, adventure, community; success, safety, solitude. My partner and I became nomadic just as the days were getting too short and too cold for gathering outside. (We’re in New Hampshire now, but who knows what’s next? Not knowing is part of the thrill.) The pandemic has removed some of the tradeoffs; the dream of community is slumbering, adventure and solitude can take its place. But afterwards? Once, I would have moved to mountainous seclusion in the blink of an eye, but I have grown to love people, almost despite myself. 
Walking through birdsong, I remember the first time I visited New York. Screech! Rush! Honk! No end to agitation; agitation to no end. So that’s what people meant by “energy”? The one place I could never live, I thought. I visited once, twice, thrice, and started to understand, the way you understand a second language, the attraction of cities: the beauty of crowds, faces, people, everyone with a different story, everyone a miracle. When I lived in the suburbs of Boston, I had those glorious strangers, plus friends I’d known for years. What do I give up when I choose solitude? There is a tension in me: even these hills of gold are empty without the hearth, the heart. Another vista emerges, horizontal strips in complementary colors: grey-blue and russet grasses, orange trees, blue hills, long thin clouds. Vertical birches frame the view and complete the picture, forming a box, a home for my vision. I inhale; the air smells like being alive. I see my inner tensions as complementary colors, sources of vibrance. I grew up between places. Our house in Poland was an anchor, a base, a home — but travel was always my second home. I want to have it all. I dream of a cottage in the mountains; I dream of never settling down; I dream of city friends. I dream of a single place that is travel and home, community and solitude, mountains and city. Maybe this is what our ancestors had, hunting and gathering through the forests in a band of friends. I had this in high school, for a moment, when my scout troop backpacked through mountains of stillness, sang full-throated at the bonfire at night. I carry a nomad inside me, who doesn’t understand this world of screens and only wants to walk, and walk, and sing. The mountains are calling and I must go. I heard what John Muir heard, but I stopped my ears. “I must go” — what sort of a reason is that? When you live in society, you do what you can explain. Our mountain is a ski slope. 
Near the peak, a narrow, vertiginous ladder goes up to the chairlift. I look up. Folly to climb and folly not to climb. I choose a place halfway up the ladder, just where delight meets fear, climb there, no further, then descend. My dreams butt heads with dreams; the tensions are what defines me. What scares me more than a life of inner conflict is a life without it.
https://medium.com/curious/contradictory-dreams-a64d51499cd9
['Eve Bigaj']
2020-11-10 09:11:40.967000+00:00
['Travel', 'Life Lessons', 'Psychology']
Handwritten digit classifier — Your first end to end CNN in 5 minutes
The digit classifier is apparently the new Hello World! Yes, this is the most basic Convolutional Neural Network example, so understanding it is of prime importance. Whenever we start something new, running our first hello world program might seem easy but has a lot of friction attached to it. Running it successfully can sometimes take days, primarily because of the setup involved, getting caught in bugs that look like alien text to us, and so on. My main motivation behind writing this article is to help you remove this friction, so that you can run your first-ever CNN program in just 5 minutes. Pre-requisites You should know the basics of CNNs. I would highly advise doing a Google search to understand the convolution layer and the max-pooling layer; you should have a good understanding of the maths behind them. I would highly recommend going through my previous article for this. Run this program via Google Colab. Did someone just run their first CNN program?! Congratulations 🎉 How did it feel?! If you haven’t run it yet, I would highly recommend doing so. Google Colab is very simple-to-use software that removes all the unnecessary setup on our machine. Highly recommended if you are still learning deep learning. Now, let’s break it down and understand each and every component.
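The article links out to a Colab notebook rather than showing code inline, and it assumes you already understand the convolution and max-pooling layers. As a quick refresher, here is a minimal pure-Python sketch of those two operations (my own illustration, not the article's actual Colab code; real frameworks vectorize this, but the arithmetic is the same):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most
    deep learning libraries). image and kernel are lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1          # output height shrinks by kh - 1
    ow = len(image[0]) - kw + 1       # output width shrinks by kw - 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling with stride equal to the window size."""
    oh, ow = len(fmap) // size, len(fmap[0]) // size
    return [
        [
            max(fmap[i * size + a][j * size + b]
                for a in range(size) for b in range(size))
            for j in range(ow)
        ]
        for i in range(oh)
    ]

# A fake 28x28 "digit" (the MNIST image size) and a 3x3 vertical-edge kernel.
image = [[1.0] * 28 for _ in range(28)]
kernel = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]

features = conv2d(image, kernel)   # 26x26 feature map
pooled = max_pool(features)        # 13x13 after 2x2 pooling
print(len(features), len(pooled))  # 26 13
```

A real digit classifier stacks several such conv + pool layers, flattens the result, and feeds it to a dense softmax layer, but this is the part the article says to understand before running the notebook.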
https://towardsdatascience.com/handwritten-digit-classifier-your-first-end-to-end-cnn-in-5-minutes-5be3d9c6c4c0
['Vidisha Jitani']
2020-09-29 16:41:35.584000+00:00
['Machine Learning', 'Neural Networks', 'Artificial Intelligence', 'Deep Learning', 'Data Science']
A Designer’s Guide to File Organization
A Designer’s Guide to File Organization 5 tips to help you design more efficiently Illustration by Holly Gibbs Objective: Learn how to set up your design art-boards, files, and folders so that you can spend more time designing, instead of searching and rearranging. Being organized is often a secondary thought for designers. “I’ll organize my files once my project is done,” or, “it’s not a big deal to stop what I’m doing to just send a file,” is the typical response when I ask designers how they approach organizing their work. However, when do you ever have time to organize your files after a project? And how much time are you spending tracking down files? Wouldn’t it be great if people could help themselves to your files because your Google Drive is so easy to navigate? Erica Tjader, VP of Product Design at SurveyMonkey, believes that, “being organized is actually a core competency in this profession. Performance is half the craft — and how you work (organize, communicate, and collaborate) is the rest.” If you are looking for ways to communicate or collaborate better, getting organized is a great way to start. An organized file system opens the doors for a more efficient workflow. Here are some easy tips to help you get your file system organized once and for all. If your desktop is overloaded with files, or you’re hesitant to share your screen when opening up your Google Drive, then this post is definitely for you. 1. Use a project folder system After you’ve worked on a few projects at a company, you need to start thinking of a folder system before you forget where all your files are and they become a giant mess. Here’s a simple system a lot of designers use. Best part, if your file storage is shared, it doesn’t need much explanation to understand: 01_Docs ← Information about the project, research, and presentations. 02_Working ← Stuff you’re still editing, including archived versions. 03_Final ← Ready for handoff.
04_Future ← Having a documented vision outside of your Sketch file can do wonders for your team. By doing this, you can share your work with anyone who’s interested, without disrupting your designing groove. Additionally, your work can potentially inspire other designers browsing your design files. You’d be surprised how much you can learn from simply looking at another designer’s Sketch files. Pro tip: Having a “future” section is really helpful for designers who will work on the same or a related area at a later time. If there are things you weren’t able to include during your current design phase, having a clear place that isn’t hidden in your Sketch file is a great way to lead others to your ideal or “north star” design. 2. Use a file naming convention You know when someone asks to see a Sketch file showing the most recent design changes, and you open up 6 different windows titled something like “final” or “the real real final?” Having a go-to file naming convention lets you easily find the exact file you need so you’re not stuck searching. To start, you want to think long term when naming your files. First, think of the big project name, then break it down to sub projects. If you stay at a company long enough, this will come in handy because you’ll most likely work on a project again, just a different section or phase. Always having your past files at your fingertips can be a real time saver for yourself when you want to re-apply a design treatment you used before, but also for others as they are seeking inspiration for their projects. Having an organized file system makes it effortless to connect people with the files they need most. Not to mention, everyone loves an accountable teammate that’s willing to share work relevant to their project. Formula: [Name of big project] _ [sub project name] — [version]. Examples: Themes_Phase3–01, Themes_Phase3–02, Rewards_Onboarding-01 3.
Use dashes (not underscores) to name art-boards When you’re in your iteration phase and need to quickly rename things, be sure to use dashes. If you want to edit that section of the title, all you need to do is double click between the dashes — a solid timesaver! This isn’t possible if you use underscores since it highlights the whole title when you double click. When you use dashes when naming your art-boards you can easily edit the content between dashes by double-clicking on it. This doesn’t work if you use underscores. 4. Label your working spaces Have you ever had to scramble to find a specific art-board while sharing your screen during a meeting and your peers just had to sit there as you frantically zoomed in and out of your Sketch file? Super nerve-wracking, right? Instead of zooming in and out of different parts of your Sketch board, label it! You can name it however you want. I often like to separate it by different flows, have a space for inspiration, and have a graveyard or archive area. However you decide to label your Sketch file, this makes it so much easier to find what you need at a glance. Labeled sections of Sketch board with “Reference”, “Annotated”, and “Non-annotated.” Pro tip: Whether it’s a section or a new page, have a place to “brain-dump” all your explorations. Be sure to label it “scratch” or “sandbox” so other designers know it’s not real. 5. Iterate faster with this art-board layout I’m sure you’re no stranger to using the “Cmd + D” Sketch shortcut to duplicate your art-board when ideating, but that duplicate goes somewhere far, far away from where you want it to go. To get around that, set up your Sketch file so that the steps of the flow go downward and you iterate across. By doing this, you can iterate quickly and easily to your heart’s content since the duplicate board shows up exactly where you want it to — right next to your current art-board.
Also, if you’re consistent with naming your art-boards, it makes the upload into InVision so much easier since it maintains the order of the boards. Steps of your flow go downward and iterate across. Art-board naming convention: [Step in flow] + [Description] + [Version] Thanks for reading this article! I hope these tips can help you design more efficiently. When you learn something new, it might take a while for it to become a habit, but I’m confident these tips will save so much time once they’re baked into your routine — and your team will thank you for it! I’m so glad you are actively learning how to improve your craft as a designer. How we design makes up such a huge part of our jobs. If you liked this article, please share it with a friend or give it some claps, and if you have any pro tips of your own, please write them in the comments below! I’d love to learn more from you all! Thank you to Tiffany Ferguson, my Hexagon mentor, who taught me these great tips! Thank you to the SurveyMonkey product design team family for always inspiring me to become a better designer.
https://medium.com/curiosity-by-design/a-designers-guide-to-file-organization-cc832a6997cd
['Jasmine Rosen']
2019-05-08 16:55:28.960000+00:00
['Design', 'Designer', 'Organization', 'Sketch', 'Files']
Losing customer trust is easy
Today I tried to book a class in my fitness club, where I pay a monthly fee for unlimited access to anything they have on offer. To my surprise, I got the following message: “You are not allowed to book another class until 10.06 because you have already canceled two classes this month. You may come and book the class at the reception 1/2 hour before it begins.” I was taken aback by this message. It made me feel like a child at school who is not allowed to play with other kids because she misbehaved. Yes, it is true that I missed two classes because of some unexpected meetings that popped up at the last moment (which is not an unusual occurrence in the life of a freelancer). I tried to cancel the class in the system, but it turned out that you are not allowed to do so less than 3 hours prior to the beginning of the class. But above all I felt that this is not what I have signed up for. I bought unlimited access to the club. Doesn’t that mean I can come (or not) whenever I have an opportunity? Or is there another meaning of ‘unlimited’ that I am unaware of? Business perspective This booking system lets the club see the level of interest in different activities. It helps them plan their offer. But it has little to offer the customer in a small and not too busy club. It seems that it is more beneficial for me to just show up than to book the class in advance. Many companies seem to be looking at the tools and services they offer from a selfish perspective. Their own interest. Their benefit. They try to make their life simpler and somehow don’t quite pay attention to whether they are making the lives of their customers more difficult because of that. Being selfish is not wrong per se. But the moment you forget who you are doing your business for is the moment when things start to go wrong. Customers are really sensitive to how they are treated.
I might be an extreme example, as looking for such missteps is my profession. But over and over again, business after business, I can see that customers sense when they are being treated like milking cows (and children at the same time) and not like treasured partners. Losing customer trust is easy In the abundance of fitness offerings in a big city like Warsaw, changing clubs is not much of a challenge. Yes, I am stuck with this particular club for another 6 months. But I am already looking for an alternative. Because, what does such a process tell me about them? It tells me that they do not understand their customers. They are making decisions that make their life more predictable and easier. But they are not that concerned with how those decisions make their customers feel. Such treatment may not come from bad intentions. It may just come from being too far from those they are hoping to serve. But it still makes me feel badly treated despite my good intentions. Customer trust is hard to earn. And easy to lose. So, check whether you are losing it over a silly simplification of your internal processes.
https://uxdesign.cc/loosing-customer-trust-is-easy-77779d5d6ec1
['Aga Szóstek']
2017-06-09 06:03:26.899000+00:00
['UX Design', 'Design', 'Enterpreneurship', 'Design Process', 'Customer Experience']
8 Great Ideas for Programming Projects That People Will Use
8 Great Ideas for Programming Projects That People Will Use Make a big difference with a little software Photo by Simon Abrams on Unsplash Two trends are becoming increasingly apparent in the software development world. It is becoming easier by the day to build a software product. The number of people in search of ideas for those software products is increasing. With the advent of no-code solutions and the hacker/maker movement, we, more than ever, are recognizing the power of being able to ideate, build, and ship a product that can impact more people than ever in history. Having a CS degree or knowing a programming language is becoming less of a factor in what you can accomplish with software, so it’s only natural that more people want to try their hand at building something with these technologies. While you’re at it, why not consider a solution for somebody else’s problems? When I published the 12 Great Ideas for Programming Projects That People Will Use article, I became even more convinced that people were clamoring for this type of quick start. The feedback I received was incredible, and I knew I had to find another batch of ideas at least as good as the ones in that article. The search is over and here they are. Note: The illustrations in this article are free at manypixels.
https://medium.com/better-programming/8-more-great-ideas-for-programming-projects-that-people-will-use-3e236d145aa1
['Filipe Silva']
2020-07-06 21:22:09.981000+00:00
['Programming', 'Startup', 'Software Development', 'Inspiration', 'JavaScript']
How I Hacked My Way Into The Top 100 on Cindicator in Two Weeks
How I Hacked My Way Into The Top 100 on Cindicator in Two Weeks Cryptoweek Jun 27, 2018 · 9 min read BY CRYSTAL STRANGER, CO-FOUNDER, PEACOUNTS ON 6/27/2018 AT 5:30AM Cindicator Logo / Cindicator I never thought I was good at judging short-term moves in the market. I typically can see clear as day what will happen in the long term, but the timeline of when those moves will happen has always been a mystery. I’ve debated joining think tanks and the like, because I wonder why other people don’t see long-term changes as well, but it wasn’t until I signed up at Cindicator, a predictive analytics company that runs a competition for amateur forecasters, that I started actually making predictions. What I am great at, though, is finding loopholes. I started making predictions on Cindicator in mid-January of 2018, and I reached place #71 out of nearly 100,000 participants by the end of January, with 720 points in the crypto markets challenge (the competition for forecasting the moves in various cryptocurrencies), in less than two weeks. How did I do it? It wasn’t very hard; I just gamed the averages and made predictions accordingly. Let me first talk about what Cindicator is, then I will explain the types of questions and how I worked this to my advantage. Cindicator is the leading prediction-market company. There are now over 100,000 users actively predicting events in the system. Predictive analytics, while far from sexy, is a good application of blockchain technology, as the predictions are verified on the blockchain so that people cannot change their records. In exchange for making predictions, Cindicator splits a prize every month with their top analysts. Currently the prize is 1.25 Bitcoin for the crypto markets prize, and $7500 for the Traditional Markets Prize Fund. Payments, when distributed, are made in Ethereum; I earned 0.01438023 ETH in January.
Aggregating the predictions using a proprietary algorithm, Cindicator then uses machine learning to assemble this information and distribute it to traders who use these predictive insights to trade. Over 8,000 subscribers including hedge funds and family offices use Cindicator’s data. I joined Cindicator not because I was trying to win the prize money, but because I had been playing around with some different charting ideas on the charting site Trading View, and wanted to see how well my choices panned out. But I ended up finding a loophole in the Cindicator system instead and decided to see if I could exploit it. I broke down each of the types of questions and how likely each answer was to get me points, then applied logic to making predictions for that type of question. Here I’ll break down each question type posed to a Cindicator analyst, and detail exactly how I gamed the system: Binary Questions Example of binary questions on Cindicator / Cindicator Yes/No questions seem fairly simple on the surface, but Cindicator awards points for these on a sliding scale. The slider starts at 50% and you can adjust up to 100% or down to 0%, rating the probability with which you think things will happen. This means you can get a total of 50 points for a correct answer, and lose a total of 50 points for an incorrect answer. The amount you adjust the slider in between is essentially the amount of points you are betting on an outcome. These are hard questions to answer accurately, but easy questions to game. Here’s why: Each question is pegged to a date by which the event is expected to occur, such as whether a token will surpass a certain price or whether a president will be elected. The date on these questions is the key. If the expiration date is after the current month, you can put 100% yes in for a positive question, and if it comes true you get the full 50 points.
Because all the events that do not come true expire after the current month, the loss of points is only applied at that time and does not affect the current month's score. Range Questions Example of range questions on Cindicator / Cindicator Range questions, where you input the minimum and maximum price range for a token or security, seem simple at first but are tough because answers ultimately depend on volatility within the time frame. The great part about these questions, though, is how points are awarded: Cindicator pays out double the points for a correct answer compared with what you lose for an incorrect one. This is where charting can come in handy. Each token or security tends to rise or fall to certain points of support and resistance. You don't actually have to chart this yourself; a number of good analysts on Trading View publish charts showing this, and their predictions typically fall close to the actual range. Since more is paid out for correct answers than is lost on incorrect ones, if you can hit the range even 50% of the time, the overall points balance comes out ahead. As it is pretty rare for the market to break far out of these ranges (there are usually surprises on only 1–2 days per week), you can typically earn positive balances on 5–6 days out of the week with just a little research. However, according to Natasha Andreeva, Product Manager at Cindicator, range questions will become more difficult to game in the future: "We continue experimenting with different formulas for the range questions — I can say, the formula evolves with the market. Back in January when you were hacking Cindicator it was a different formula, so there is no way you could game the results using the current one." That sounds a little like a challenge; it makes me want to jump back in and try again.
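The arithmetic behind that claim is easy to check. Under the described 2:1 payout, even a coin-flip hit rate has positive expected value. A back-of-the-envelope sketch, with an illustrative stake size rather than Cindicator's real point formula:

```python
# Expected value of a range question under the payout described above:
# a correct range earns double what an incorrect one loses. With an
# illustrative stake of 10 points lost per miss (so 20 gained per hit),
# even a 50% hit rate nets +5 points per question on average.

def range_ev(hit_rate: float, loss: float = 10.0) -> float:
    """Expected points per range question, given a 2:1 win/loss payout."""
    win = 2 * loss
    return hit_rate * win - (1 - hit_rate) * loss

assert range_ev(0.5) == 5.0   # a coin flip still comes out ahead
# With 5-6 good days out of 7 (roughly a 75% hit rate), the edge is larger:
print(range_ev(0.75))         # prints 12.5
```

The break-even hit rate under this payout is one in three; anything above that, such as the "5–6 days out of the week" the article describes, compounds into a steadily positive balance.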
Andreeva also added that in a few weeks the way range questions are calculated will change again: "it will take into account sudden price surges and drops of the asset in question, so the analysts who predicted those movements would gain more points even if the volatility during the question's target time interval is relatively small." To me, this sounds like it will pay to make bolder guesses, since if you are right the payoff is big. Rank Questions Example of rank questions on Cindicator / Cindicator These are questions where you rank a number of tokens or ICOs in order of which will have the greatest price gains over a certain period of time. I very much enjoy these questions, as they often introduce me to ICOs I had not previously heard about, and with the writing and investing I do it is always interesting to see which ICOs drum up hedge fund interest. However, since the timelines for these questions are quite long and the scoring formula is complex, there is no way to game the results. Date Questions When will a certain event happen? That is the nature of these questions, written in the format "It won't happen before [dd.mm.yyyy]." If the date is not beyond the next couple of months, there is a possibility to game this by selecting that it will happen before the date. Some of these questions let you pick a date after the current date and before an ending date. On those questions you want to choose a date outside the current month: if the event happens, you are awarded positive points; if it does not, the negative points only impact the following month. Value Questions Example of value questions on Cindicator / Cindicator Value questions ask you to predict a certain value by a deadline, such as the maximum price of Bitcoin during the year. These are essentially range questions, but with a harsher penalty, as you can lose the same number of points that you can gain.
The value questions can be a bit tricky to pin down because, according to Andreeva, "they can be very different in their nature: some of them, indeed, are the range questions' little siblings, but others can relate to predicting things like an upcoming ICO's hardcap or even to picking an asset reversal level from a list of options." Here is an example of a tricky value question: If Ethereum (ETH/USD) trades above $650 before June 30, what will be the minimum price (turning point) of ETH before this event? (Type an option number) 1) Above $490 2) $455–$490 3) $420–$454 4) $385–$419 5) $350–$384 6) Below $350 7) Will not trade above $650 Because of the long timeline on these questions, the complexity of the factors involved, and the equal point distribution, they are not terribly useful for gaming the system. Still, even when I wasn't gaming it, I have enjoyed thinking about these questions and answering them. Open Questions These are a new type of question that was not around when I was gaming the system, but since you can only receive positive points from them, I thought they were worth a mention. In these questions you are asked to type a response, and after the answer period has ended a live person grades it on a scale of 0–100% to award points. This is an interesting concept, and it is certainly worth answering these questions when no points can be lost. Predictive Analytics Beyond Investments Even though I can hack the game part of Cindicator, I still see a ton of value in predictive analytics, especially with AI creating better statistical models than a simple algorithm. Cindicator just completed its first trading contest, and over the eight weeks of the contest the traders averaged a return of 21.53% while the market was mostly negative. What is Cindicator?
Yuri Lobyntsev — Cindicator — World Blockchain Forum / YouTube To learn more about this, I attended the predictive analytics panel at Consensus in New York this May to find out what is going on in this industry and how it will be applied in the future, and I was not disappointed. Cindicator seems the strongest contender for predictions of the financial markets, but other companies had interesting outlooks on applying analytics to niche markets. Bitzpredict is applying predictive analytics to determine early results of political elections; Bodhi is using the prediction market to reduce the cost of listing on an exchange for companies launching ICOs; and Delphy is focused on the financial markets in China, with all of its data open source. Predictive analytics is not the application I initially thought of when I learned about blockchain, but it certainly is a solid use case. My only concern with these types of analytics going forward is whether all this data can be used against humanity. The way I think of these risks is like Yelp: very useful when traveling to a new area, but capable of killing businesses that don't play for ratings. I used to have a favorite fondue restaurant in Los Angeles, extremely traditional French, down to the rude and condescending service. I liked that aspect of it; it gave the restaurant character. But after a slew of one-star reviews with the advent of Yelp, the restaurant went from full to empty, and soon closed. I worry about analytics doing the same to investments, where only the strongest projects find financing and the ones that don't pander to the community lose out. Will this kill creativity in business? The majority is not always right, and I have profited many times in life by seeing things differently than the overall market, or by seeing trends in advance of them being widely known. I don't see any sign that predictive analytics will give those long-term views power for shaping our realities.
But it certainly can provide useful information for short-term traders to hack the market, just like how I hacked my way into the top 100 analysts on the Cindicator platform.
https://medium.com/cryptoweek/how-i-hacked-my-way-into-the-top-100-on-cindicator-in-two-weeks-6699eadf2496
[]
2018-06-29 00:52:26.337000+00:00
['Marketing', 'Cryptocurrency', 'Opinion', 'Bitcoin', 'Blockchain']
F. Scott Fitzgerald and the Beauty of His Broken Dreams
Image courtesy of Canva. You probably had to read The Great Gatsby in high school. And you might have also seen the relatively recent cinematic remake featuring Leonardo DiCaprio. To refresh your memory, in a nutshell it's the story of a rich tycoon — Jay Gatsby — who is hopelessly in love with an old flame now married to another wealthy fellow. F. Scott Fitzgerald's 1925 novel is regarded as his magnum opus — a reflection of the glitz and glamor of the Roaring Twenties and a comment on the shortcomings of the American Dream. You may or may not be surprised to learn that pretty much all of Fitzgerald's other books and short stories explore the same themes. That's not necessarily a bad thing, because each one is an expertly written piece of literature. But if you read through his works, or even if you just sample one, you will see a few key themes emerge. #1: The magic allure of alcohol. Image courtesy of Canva. Prohibition was in full force during the period in which Fitzgerald wrote, giving alcohol an extra source of allure and excitement. But for the protagonists of Fitzgerald's novels, alcohol is not just an escape: it's a sublime spiritual experience. When characters like Anthony Patch from The Beautiful and Damned descend into the underworld of the speakeasy, they find themselves in a state of clarity about life and its problems. They meet up with unlikely characters for a bout of fun that gives fresh vitality to life. This penchant for libations among Fitzgerald's characters mirrors his own biographical struggle with alcoholism, which was perhaps also a muse for Fitzgerald, as it was for many other authors such as Ernest Hemingway. #2: The pain of an unreachable woman. Image courtesy of Canva. Many of Fitzgerald's stories (pretty much all of them) have the pain of spurned romantic overtures or disconnection from a difficult spouse as a core element.
From the out-of-reach Daisy Buchanan in The Great Gatsby to the fickle Gloria in the aforementioned The Beautiful and Damned to the almost psychotic Nicole Diver in Tender Is the Night, Fitzgerald's stories are driven by the theme of unrequited love or love that cannot be emotionally consummated. This too might mirror Fitzgerald's own biography, since his marriage to Zelda Fitzgerald was often disrupted by her struggles with schizophrenia. #3: The emptiness of wealth. Image courtesy of Canva. One of the most telling scenes with regard to this theme is in The Great Gatsby: Jay Gatsby stands alone at the end of his dock, reaching out across the bay to the lights of Daisy's home. This lonely moment stands in stark contrast to the wild and lavish parties he throws at his mansion. But at the end of the day, all the wealth, excitement, and celebrity mean nothing to Gatsby. It's much the same way for Dexter Green at the end of "Winter Dreams." He climbed the ladder of corporate success only to realize that it all means nothing; he is thrown into a depressed state when he learns that the woman he loves is married to a man who treats her coldly. This theme, too, is perhaps reflective of Fitzgerald's life story. He was not the typical starving artist, but saw acclaim and financial success from his work, which allowed him to mix in with elite social circles. So what does it all mean? Image courtesy of Canva. I believe that Fitzgerald's key themes all revolve around the American Dream. Though the idea of that dream has changed over the years, in the main it is the idea that hard work can bring you to the top of the world. And yet the question remains: what happens when you get there? Though I myself believe in that dream from a financial perspective, I do not believe it is the ultimate source of satisfaction in life. For many, fulfillment and happiness continue to be as elusive as Daisy Buchanan or Nicole Diver.
Their American Dream becomes like an unreachable woman, forever filling her worshipful followers with a sense of emptiness. In their pain, they turn to the magic allure of alcohol, which has become their spiritual experience. For some, that alcohol is actual alcohol, or drugs. For others, it might be immersion in a distraction like television or social media. Unfortunately, Fitzgerald's works don't really pose a solution to this problem. Almost all of them end on a note of broken bitterness, with the protagonist dead, as good as dead in the gutter, or broken-hearted. His last novel, The Last Tycoon, remains unfinished. Perhaps we would have seen a solution therein, but we'll never know. I suppose it's somewhat ironic, or telling, that he left it unfinished. In case you're wondering, Fitzgerald did not come up with the idea of writing around themes that center on longing and loneliness. He got them from the writers and poets before him, most notably the Romantic poet John Keats. In fact, the quote that opens Tender Is the Night comes from Keats's own poem, "Ode to a Nightingale." The longing so often expressed in Keatsian poetry — whether the longing of love or the longing to experience sublime union with some beautiful object — itself becomes a source of beauty and poetic inspiration. In the same way, this eternal state of unrequited love and longing haunts the works of F. Scott Fitzgerald. It fuels the creation of beautiful works of literature, but ones that are painful reflections of a broken dream. And yet, that is exactly what makes them beautiful.
https://medium.com/curious/f-scott-fitzgerald-and-the-beauty-of-his-broken-dreams-182e466bb1af
['Charles Hanna']
2020-11-24 18:39:06.050000+00:00
['Life', 'Society', 'Literature', 'Life Lessons', 'Art']
AAG Annual Meeting 2018 Conference Memo
Lecturer of CSIS at the University of Tokyo / Social Geography / Participatory GIS / Neogeography / Volunteered Geographic Information / Qualitative GIS
https://medium.com/tosseto-info/aag-annual-meeting-2018-conference-memo-57994a6e382
['Toshikazu Seto']
2018-06-18 15:11:48.434000+00:00
['Conference', 'Geography', 'Artificial Intelligence']
The Role of Metaphor in Design: Part 2
The Role of Metaphor in Design: Part 2 Tidepod or Bonbon? As I have written about previously, metaphor is a powerful tool that designers use to help individuals understand how to interact with products. Unfortunately, the use of metaphor for everyday household objects can also have dire consequences for health. The Tidepod is a prime example of this. It is a single-use packet of highly concentrated laundry detergent, stain remover, and color protector that was released in 2012 by Procter & Gamble, and it has been a blockbuster product for the company, accounting for over 80% of the laundry pod market and generating over $1.5 billion in revenue. What is the appeal of the Tidepod design? Articles report that it is "popular on college campuses and among apartment dwellers, and people who have their washers in the basement", which makes total sense from the user perspective. First, it's convenient. There is no need to carry the entire heavy bottle down to the basement only to have to lug it back upstairs. Second, it doesn't drip. Who wants to wash the residue that runs down the side of the bottle and have to clean up the mess? Third, it's aesthetically pleasing. Kaleidoscope is the design firm that helped design the pod. According to their website, they created: "a product that employs the brand's iconic colors and aligns with the Tide® brand promise and visually captures the essence of the brand." According to the Fabric Care Design group at P&G: "The breakthrough and the beauty of the pod and packaging, required us to do it justice with beautiful print, TV and digital. Kaleidoscope raised the bar. In research, people could not stop smiling and engaging with the product." It's a visually appealing product with a symmetric pattern of bold colors, which raises the question: Is it a Tidepod or a Bonbon?
This parody article from the Onion captures the essence of the problem with the design: toddlers, and even seniors with dementia, seduced by the design, are trying to eat the pods, resulting in serious injury and death. The problem has been frequent and serious enough to warrant a full study of laundry detergent ingestions in Pediatrics, published in 2016. The study showed that rates of accidental exposure among children were much higher for laundry detergent pods/packets than for regular laundry detergent (see figure below). Not only were packets/pods associated with a higher number of exposures, but the health outcomes of children who ingested packets/pods were much worse. Compared with kids who ingested regular laundry detergent, kids who ingested laundry pods/packets had a higher risk of needing to be hospitalized, a higher risk of needing a breathing tube, and a higher risk of experiencing serious medical outcomes. There were 2 deaths, both of which occurred in kids who ingested the laundry detergent pods/packets, and 104 of the 117 children who needed a breathing tube had ingested the laundry detergent pods/packets. This is a clear example of how design impacts health. In response to the ingestions, the company has made multiple updates to the design of its product, including: Designing an over-the-lid re-sealable sticker for the container to provide education about storage and handling of the pods. (Sign design is a pretty lame method for preventing injury and death.) Providing consumers with safety latches for household cabinets and supporting a Safe Home campaign in partnership with the American Academy of Pediatrics to educate consumers about the dangers of Tidepods. (Education is probably the lowest-impact way to make a difference.) Despite these design modifications, there were over 50,000 ingestions of laundry pods by children under 6 years of age between 2013 and 2017.
I am always impressed by the amount of effort and resources invested in the design of consumer products, but Kristin Hohenadel in Slate poses a very important question: "Does it make sense to tempt fate by sticking with the detergent-as-candy design?" Should a laundry pod that is potentially toxic be made into a "beautiful object" that "visually captures" the brand? Isn't that what leads to excess "engagement" from too many young children, leading to calls to the poison control center and visits to the emergency room? And doesn't it also lead to the virality of the "Tidepod Challenge", in which teenagers dare each other to bite into brightly colored packets on YouTube? Consumer Reports no longer recommends liquid laundry detergent pods because of the health risks to young children, yet incredibly, the product is still available in stores. Tidepod as Bonbon: it's a design metaphor that kills. I tweet and blog about design, healthcare, and innovation as "Doctor as Designer". Follow me on Twitter and sign up for my newsletter! Click here for information about creative commons licensing. Disclosures: Unitio, Grant funding from Lenovo.
https://joyclee.medium.com/the-role-of-metaphor-in-design-part-2-4bd3b75cccb3
['Doctor As Designer']
2018-05-06 16:22:39.406000+00:00
['Product Design', 'Design', 'Healthcare', 'UX']
Long Story Short: Poverty, Development & Coronavirus
This article was originally published on Sustaining Capabilities. Poverty and development are intimately related, and COVID-19 provides an example of that connection as it continues to spread throughout the world. Combating poverty typically involves short-term, programmatic interventions, while promoting development usually consists of long-term, macro policies. These were long considered two separate fields, but in truth they are becoming ever-more intertwined. Whereas the obvious solution to large migration flows was once a refugee camp, today’s poverty dynamics are such that the average refugee spends 17 years away from home, and 59% of refugees live in urban areas rather than established camps. At the same time, many of today’s poor countries are on track to achieve significant increases in living standards in coming decades, but rapid growth can leave some behind. The humanitarian and development communities have been working together more frequently to address these new realities. Examples of bridging the divide include Jordan’s “special economic zones,” as well as assistance to West African countries to fight Ebola from the World Bank, an organization historically removed from humanitarian efforts. The COVID-19 outbreak is unprecedented, but these fields prove that it is possible to connect thinking on different time horizons. According to growth economics, an event like war reduces a country’s level of output by destroying human and physical capital, but output grows at a faster rate after the drop, eventually getting back to the pre-war level. The shocks induced by virus-related lockdowns similarly reduce output, leaving capital intact but unused. A country’s output level before a crisis is important, since growth over time creates wealth, and wealth makes it easier to manage crises that emerge. 
This is true both between individuals within a country, and across countries — richer countries tend to have more resilient and better-funded health systems, while richer individuals usually have more savings and food stocks to draw on, making them less dependent on daily labor. Of course, wealth is an imperfect measure of overall well-being, and other factors matter for the ability of countries to respond to emergencies. One of those factors is state capacity — that is, the ability to mobilize financial resources, and to create and enforce laws and regulations. State capacity is often higher where incomes are higher, but this is not always the case. During the COVID-19 outbreak, Taiwan, Singapore, and South Korea have all experienced smaller disruptions to their economies than the US, despite all having a lower GDP per capita than the US. Their governments reacted to early cases with widespread testing and self-isolation, while the US had no test kits prepared before infections spiked. It is important that COVID-19 is not used as an excuse for more closed and authoritarian governance. While authoritarian governments are good at mass mobilization, democracy plays an important role in assuring credible public information and avoiding policy mistakes. China provides a clear example — during the early days of the pandemic, the government took measures to repress information about the severity of the outbreak, but later took sweeping measures to slow the spread with an efficiency unimaginable in the West. The processes of democratization and building state capacity are difficult, slow, and non-uniform. Nevertheless, they are worth pursuing, since some combination of democratic decision-making and strong capacity is the best route to rapid, equitable responses to crises. The urgency required by a pandemic renders some typical policy prescriptions, like targeting the poor, or preventing the adverse incentive effects of welfare, unimportant. 
More crucial is for governments to increase testing capacity, enable social distancing (opening up government buildings and renting out private hotel rooms for quarantine where population densities and intergenerational cohabitation are high), and enact swift fiscal transfers to the poor and economically vulnerable. There are also many global financing needs that warrant immediate attention, like GAVI's next replenishment, the UN's appeal for food security funding, and the WHO's appeal for help preventing, detecting, and responding to COVID-19. In the long term, there are numerous things that can reduce the likelihood of similar outbreaks. At the national level, countries should work toward building strong laboratory systems and the epidemiological capacity to achieve sufficient testing and surveillance coverage. All countries face budget constraints, but since a pandemic in one country is a threat to all countries, international cooperation is necessary. One idea is to establish a global health security challenge fund, whereby low-income countries would identify gaps in their outbreak preparedness schemes, pay half of the money to fill those gaps themselves, and receive the difference if they make measurable progress getting started. International collaboration is also necessary to conduct "germ games," similar to how armed forces conduct war games, and to accelerate research and technology transfer for vaccines and diagnostic tools. The modern world is complex and highly integrated, and crises like COVID-19 make clear that poverty and development are intimately linked. Like development as a whole, the policy response to COVID-19 needs both immediate aid aspects and longer-term structural aspects. There is plenty of precedent for success against disease — in just the last 50 years, coordinated global campaigns have been remarkably successful at combating malaria and HIV.
It is inevitable that there will be future pandemics, but it is not inevitable that they be as deadly as the current one.
https://rmcguine.medium.com/long-story-short-poverty-development-coronavirus-4abc72326534
['Ryan Mcguine']
2020-05-05 12:23:15.169000+00:00
['Foreign Aid', 'Coronavirus', 'Global Development', 'Covid 19', 'Poverty']
Electricity
He smelled smoke. It must be the lingering cobwebs of dream. He shook his head, sat up, struggled to recall where he was. The room was dark. He fumbled for his watch. The luminous dial showed four minutes after three. He remembered, then. The tiny apartment was all he could afford in the new city since his last episode. Rumor had it there were jobs here. Awake, he still smelled smoke. It was stronger now, the harsh, cutting stench of electricity and burning carpet, painted walls in flame, and the dry tinder of old furniture instantly alight. He heard the roar of flames in the hallway outside the door. He almost touched the door knob, then tried to recall what people were supposed to do in these cases. Stay and wait for help, because there was no escape through the hall? He ran to the window, stumbling over the room's only chair in the darkness. Pushing aside the curtain, he stared at the silent, half-empty parking lot. No sirens, no flashing lights. Nothing moved. He was on the fourth floor, the rusty death-trap fire escape fallen away long ago. He turned away from the window, pulled the blanket from the end of the bed, and wrapped his hand. The heat of the knob scorched his hand through the cloth, but he wrenched it open anyway, ready to mask his nose and mouth with the blanket and run. Too late, he recalled the blanket should be wet. The hallway was dark and quiet. The EXIT sign flickered and buzzed at the end by the stairs. He went back in the room, back to the window, saw the fire trucks and ambulances, their furious lights strobing the darkness into a hideous, psychedelic nightmare. Flameless smoke choked him, and he fell on the bed, gasping for breath. Dawn broke at last. In the tiny bathroom, he ran water over the burned hand and found disinfectant and bandages to wrap the blisters. The medicine stung and throbbed where torn skin had already cracked and leaked. The sign on the building supervisor's office off the corner of the lobby said he would be in by eight.
He arrived at a quarter after. His name was Hensley. They'd met when Masters rented the apartment. Masters waited, sitting in the chair by the dusty window while Hensley unlocked the door, turned on lights, and sat in the creaking chair. The man turned when Masters' shadow blocked the light from the hall and fell on the opposite wall. "Mr. Hensley," Masters began. He took a deep breath, as if he would have to hold it for a long time, and then exhaled. "Mr. Hensley, this building is not safe." Hensley stared at him. "Why do you say that?" "There's no fire escape." "Sure there is. They're called stairs. Front and back." "You know what I mean. And the wiring is ancient — when was the last time an electrician looked at anything in this building?" "What, you thought you were getting a suite at the Taj Mahal for what you pay here?" Hensley's face twisted in a crooked grin. "I don't live here. It's not my building. Just a job. But the boss gave me strict orders — no troublemakers. You want to hit the streets, or you want to keep your big ideas to yourself?" "I'm only warning you. I have to. When — if — if there's a fire, many people will die." So of course, after the fire, nine days later, the super had told the detectives, and the arson inspector, and the reporter, and anyone else who would listen. Masters was questioned by the police, but the inspector concluded faulty wiring had caused the fire. Eleven people died, four of them children, and there was talk of criminal negligence on the part of Hensley and the building owner. By then, only reporters wanted to talk to Masters. Headlines ran along the lines of "Psychic's Warning Unheeded! Deadly Inferno! Dozens Die Needlessly!" or even more sensational language. It had happened before, time and again: the vision, the attempt to warn, to explain, the disaster followed by suspicion, by questions he could not answer.
So he’d left that city, all cities, and come here to this lonely, rocky outcrop where sky and land and water thrashed through their eternal love triangle, settling nothing, solving everything.
https://medium.com/capital-letters/electricity-9a3bc0533fef
['Mckayla Eaton']
2019-01-22 16:32:30.997000+00:00
['Fantasy', 'Short Story', 'Fiction', 'Fairy Tale', 'Writing']
America’s Boldfaced Serial Killers
America’s Boldfaced Serial Killers We see them. We know they’re there. We can’t avoid them. by daveynin is licensed under CC BY 2.0 A new breed of serial killer has emerged in America. They are the people who, through their actions and inactions, spread the Covid-19 virus. They don’t wash their hands or sanitize objects they touch. They don’t wear suitable masks, appropriately, all the time. They don’t isolate themselves or get tested. They ignore any symptoms they might have. They congregate closely with others. They can’t seem to help themselves. They just want life to be good … for themselves. They don’t believe they are doing anything wrong. They don’t see the consequences of their actions. They don’t believe society has the right to restrict their actions. But their pursuit of happiness comes at the expense of others. They expect those others to deal with the harm they inflict. They are everywhere — at church, at family events, at political rallies, in public spaces. They shop, eat out, throw parties, bar hop, travel, and vacation. They live life as they always have and make no concessions to the pleas of others. They target the weak, the aged, the ill, and the unprepared. They don’t seem to know or care that they are dangerous. They don’t even recognize if they leave a target unharmed or inflict their curse. They may even turn their target into an unknowing serial killer. Some of them might get caught, either in the act or later traced by their movements. A test may even confirm their culpability. But they won’t be punished. They’ll be released with only a warning. They will get away with what they have done and go on to resume their spree. They may not just kill others, they may die themselves. Most believe they will be unaffected, that they will evade the invisible agents they don’t even believe exist. They think they are stronger, and luckier, and more special than the infectious curse they bring on others. 
But, not all of them will escape those consequences. Then, as a victim, they will feel regret.
https://medium.com/random-terrabytes/americas-boldfaced-serial-killers-21c0f26d5e49
['Charlie Kufs']
2020-12-08 22:31:06.035000+00:00
['Metaphor', 'Random Terrabytes', 'Analogy', 'Serial Killers', 'Coronavirus']
Tell Me We’re Safe in the House
When I was 16 I fantasized about touching myself, of touching others, of being touched by others. Love will do you no good, all the songs said. I was the photo in the yearbook people forgot about, the girl who got high school reunion invites from my tormentors, all of whom misspelled my name. You should come to the Waffle House. We’ve got a three-hour buffet and a top-shelf happy hour! In response, I said, oh, that’s so tempting, but I’ve got a full evening ahead of wrist cutting and pain pill taking. You’d be surprised how long taking your life actually takes, I said, and the tormentors laughed as if they couldn’t hear me, and I couldn’t tell which of their cruelties was worse: being taunted or ignored. When I resembled something that could pass for human, they’d ask their perfunctory questions and I’d look at the clock and they at their watch and I’d say, I’m fine, just fine. One day, a boy followed me in a red Rabbit convertible and told me to get in. That year, he was the guy voted most likely to… I could tell he didn’t want to be seen with me because he took the long way home, wandering through the back roads. Steadying his hands on the wheel, he said, “I know what it’s like when you feel like you don’t fit, when all you want is to be like everybody else but you’re not. And you know it, and they know it, but we all keep playing pretend like we’re five again. When do we stop playing pretend?” Don’t fit? Play pretend? So says the guy dating a Pantene commercial. While he filed college applications, I was summoned to the guidance counselor’s office and asked, yet again, if there was something wrong at home, to which I’d respond, are you kidding me? Teachers and counselors asked about my life, but they didn’t want the details. They wanted my darkness in past tense because no one wanted to deal with present or future tense sadness. 
I contained my sadness, watched it play out in front of me like a car crash I couldn’t look away from, over and over, until it became too much to bear, and suddenly it became easy for me to live a life in my head, threaded to the hope that I’d wake as another person living an alternate life — one where love was abundant and uncompromising. When I resembled something that could pass for human, they’d ask their perfunctory questions and I’d look at the clock and they at their watch and I’d say, I’m fine, just fine. When we arrived at my house, two girls were jumping rope on the street and they gasped when they saw us. I got a ride from a boy? I told the boy in the red Rabbit that movies get made about people like him while people like me served as the punchline. “We keep watching with the hope that we wouldn’t be laughed at, that somehow, some way, we’d end up like you. But we never do. Hope does that to you — blinds you. Everybody wants your life,” I said. Before he drove off, he paused and said quietly, “I suppose they do.” Ten years later, the boy became a married man who parked his red Rabbit in a mall garage. He lodged a gun inside of his mouth and pulled the trigger. In the passenger seat, a small child cried because her father and the car were indistinguishable. I read an interview with his wife in the local paper. “We were happy,” she cried. “We had the right life.” The reporter talked about the wife’s hands and how they couldn’t stop shaking. “Who does this? Who blows their head off in front of their child? She’s too young to know, but still. The police came and… I can’t even look at the pictures — it’s better to not remember him that way, right? We’re keeping the casket closed for obvious reasons. But I tell you — that morning before he left, he was different. I can’t explain it. I asked if something was wrong, and he shook his head. I’m fine.” All at once I understood the boy in the red Rabbit. I understood everything.
https://humanparts.medium.com/tell-me-were-safe-in-the-house-1aaf6eeb26a3
['Felicia C. Sullivan']
2020-05-29 17:03:53.283000+00:00
['Short Story', 'Fiction', 'Fiction Friday', 'Relationships', 'Writing']
Are you effectively evangelizing your data?
You may have heard: data has become a thing. The space has grown quickly, and its popularity and the interest in it have never been bigger. However, as in demand as data may be, you may be surprised by how much excitement you will need to build around your Data and Analytics (DNA) practice in order to effectively communicate its outcomes. Effectively evangelising your team’s mission and outputs should be a key focus area, in order to give your work the exposure it needs to scale. You may not think about it now, but part of your role is to educate people on how data can be used and to manage the conversation around data. Here are some helpful hints on how you can achieve this. Dashboards Galore. Ok, maybe not galore. You don’t want to have too many dashboards, but those that you do have should be both eye-catching and informative. The primary purpose of dashboards is to automate any rote reporting that you are getting requests for. The secondary purpose, however, is visibility. Dashboards are the most immediate and visible representation of your team’s work. They will often remain accessible even after your team has gone home for the day, and, as is becoming increasingly popular in many organisations, can be found turned on and displayed across monitors scattered around your office — as if on a trading floor. Given their visibility, put some time into designing them so that they capture your audience’s attention, convey messages with a clear, defined and valuable purpose, and showcase how data is being used within your organisation. Go old school — and communicate analog in order to build relationships. I believe there is a lot of value in high-quality human interaction. In a world where everything is digital (a world that has vastly enabled the work you are doing), finding ways to identify and enhance analog communication channels can be of great benefit. If you want to evangelize, there is an advantage to doing it face to face.
One way to do this is to find ways that encourage people to submit work requests, problem statements or hypotheses that they want you to address in person, rather than digitally. This has many benefits. First, they get to meet you and your team. Second, you will find people think through questions more rigorously when it takes more effort to submit them. A simple change like this can increase the quality of the questions you receive. Lastly, you are able to begin the conversation and gain valuable context from the source straight away. The flow of the conversation may go in many directions, leaving you and your counterpart better informed — a win for all parties involved. Finding little pockets of opportunity like this will enhance your team’s visibility and effectiveness. Unknown Unknowns. With many an organisation’s data growing dramatically, it is hard for the non-data person to keep up with what’s available. Whether you know it or not, part of your remit is to educate people on what data is out there and how it can be used. The more visibility there is into what data exists, the quicker your stakeholders’ minds will start ticking over and crafting questions for your team to tackle. Which means more engagement and visibility for you! Take it upon yourself to inspire non-tech people by making these data points known and piquing people’s curiosity. This can be done by featuring them on the aforementioned dashboards. It can also be done by showcasing insights from other parts of the business, so that your audience members can get an understanding of what data is being used, and which problems are being solved, in parts of the organisation other than their own. Parlez-vous data? The more people you get talking the language of data, the better. Part of a good data scientist’s remit (and in some organisations a completely separate role) is to be a good data translator. This job becomes easier when there are fewer things to translate.
So take it upon yourself to periodically teach non-data people about your methodology. Spend time talking about the methodology you used and the reasons for any choices you made along the way — not just the results you achieved. Explicitly point out how the work you have done should be used and what it means, including how it may be misinterpreted. Effectively, you will be summarizing your own work to make it more digestible for others. So, take your role in communicating your team’s outcomes as seriously as that of generating those outcomes. Make sure you are leading the conversation, as it helps shape how effectively data gets used within your organisation. This story originally appeared here
https://towardsdatascience.com/communicating-and-building-excitement-in-your-data-practice-52e3c7a554ea
['Damjan Vlastelica']
2020-01-31 14:36:55.209000+00:00
['Dashboard', 'Analytics', 'Data Science', 'Data', 'Data Visualization']
What it takes to succeed at UX Design Interviews?
Google — Interaction Design Intern Google has been ranked the Best Company to Work For for the sixth consecutive year by Fortune. The company has grown many times over the past year. Further, they have a history of hiring UX/Interaction Design interns for a vast variety of projects, ranging from Daydream (a VR platform) to Project Sunroof (analyzing solar savings potential). During my interview, it appeared that Google valued the following characteristics — #1 Curiosity To Learn Stay updated with the design world User Experience Design is a young and dynamic field that is constantly redefining itself. Design teams from well-known companies and individuals are constantly contributing to the design community, experimenting with and standardizing new design patterns. I feel young designers should make every effort to stay abreast of recent growth in the design industry. From my experience so far, knowledge learned at school is research-oriented and restricted to the courses. Dedicating weekly exploration time to discovering what the design world has for you has proven really helpful for me. I personally find the following resources very helpful: Work with designers/developers from different backgrounds UX Design is a field where you will find people from very different backgrounds. Let’s take my MS-HCI cohort for example — we have students from Computer Science, Data Science, Psychology, Marketing, Industrial Design, Literature Media, Animation, Graphic Design, and more. Being a team player and learning to listen then becomes a key differentiator between a successful UX designer and an ordinary one. As HCI students, we have an opportunity to work with talented students from different domains, and this opportunity should not be missed. The more you work with people from different backgrounds, the more you are exposed to different perspectives, and that path leads to only one result: a better UX designer. How do I achieve this?
Take classes that are cross-listed with other majors in school and include a semester-long group project. This will give you an opportunity to work with undergraduate and graduate students from different majors who have a completely different perspective on the project. Wing some hackathons! I understand that most participants come with a predetermined team, partnering with teammates they are comfortable working with or whose expertise in specific domains they trust. However, I would suggest, at least one time, going with the intention to explore and experience the unknown! Form a team at the event, on the go, with members you have never met before. In this process, you will learn to trust unknown people, communicate effectively and work together to solve a problem. Try it once and share your experience in the comments section. Interview questions to prepare for: What is the latest design trend that you found intriguing? Why do you think it is popular? How does it impact user experience? If given the chance, how would you tweak it to make it better? How do you stay updated with the latest design trends and design news? What do you learn from them? How do you apply that to your daily design process and approach? While working on [a project from your portfolio], which tools did you learn on the go that helped you design better? How does that tool differ from other tools in the industry? #2 Digest Design Criticism Why is this important? Google is a huge organization with more than 72k employees. Based on information shared by my acquaintances working at Google, they have a very open feedback culture where anyone is free to share their perspective on what other teams are working on. The interviewers are interested in knowing how you react to critiques or feedback shared on your designs.
This is a critical heuristic to make sure you are the right culture fit for Google. Get comfortable with design critiques Google places a lot of emphasis on effective teamwork. With Project Aristotle, they spent two years performing extensive research to understand what a perfect team looks like. The top key to building a perfect team was psychological safety: team members should be comfortable taking risks and speaking their minds. This also means you should be open to criticism and consider it a constant part of your design process. I love the following tweet by Adam Connor: Critique is at the core of collaboration… Critique is not a design skill. Critique is a life skill. — Adam Connor, Designer at MadPow An increasing number of design teams across the industry are incorporating critique sessions as part of their design process. It is invaluable to be able to critique others’ designs, as well as digest and embrace critiques of your own. It is an important skill to practice. Interview questions to prepare for: Were there any difficulties you encountered in your team while working on [a project from your portfolio]? What was the difficulty? What did you and your team members do in that situation? How did the situation end? Imagine you are working at Google, and an employee comes and tells you to change the color of a button to yellow. What would you do in that situation? What approach do you use to get feedback on your designs? How do you analyze the feedback received? How do you incorporate it into your designs? Optional reading: Application process for Google UX internships
https://uxplanet.org/what-it-takes-to-succeed-at-ux-design-interviews-39fc642a258
['Nishant Panchal']
2019-01-05 05:45:27.494000+00:00
['User Experience', 'Design', 'UX', 'UX Design', 'Interview']
I talked to 3 people who got into UX in their 40s
Bulent Keles — Product Designer How old are you? 40 years old Where are you from? Istanbul, Turkey Where do you work? Kolektif Labs in Istanbul, Turkey. What’s your background, and how’d you get into UX design? I worked in marketing and advertising for over 10 years, only to realize that TV commercials and billboard ads weren’t the kind of creativity I was looking for. I always wanted to help improve the lives of people around me, and advertising wasn’t helping. I was always reading stories about digital tech disrupting industries and changing lives. I decided I wanted to be part of that change. I quit my job in advertising and started working for a mobile tech company. The company I worked with was one of the early players in mobile, which began around the same time as the first iPhone. I worked as a product manager/designer/marketer/business development guy; I was a unicorn! I worked on several mobile apps that we built for clients. Years went by. I enjoyed working on all the aspects of building digital products. I loved helping people do things better, but I got tired. I had to choose between doing everything and focusing on one thing and getting better at it. I chose design. It seemed like the obvious choice because it allowed me to build products that mattered to people. For me, design is fueled by creativity and meaning, and this is the perfect combination. What did you study to get into UX? When I decided I wanted to continue in design, I quit my job in Istanbul and went to San Francisco to take a formal course in UX design at General Assembly. I chose San Francisco because I also wanted to familiarize myself with Silicon Valley, home to the world’s top tech companies. What are the biggest hurdles you’ve found changing to the world of UX design at 40+? Age. When you come from a conservative culture like I do, it’s double-triple hard to make a shift in any career at my age. Everyone except my wife tried to convince me that I was crazy.
I had a well-paying job, a good reputation—pretty much everything I needed, and I left all that to become a designer. What do you wish you knew when you started? What would you do differently if you did it all again? I would’ve quit my job much sooner and started design much younger. My advice to my younger self: never work a job for money or prestige. Choose what makes you happy. Any tips for people changing to UX design? I’d like to think of myself as a multi-disciplinary designer because I love creating beautiful interfaces, logos and presentations, as well as great experiences. You must know that UX design is not UI design. These are two entirely different concepts. One can be great at both, and that’s an excellent quality to have, but if you’re looking to build a career as a graphic, visual, or interface designer, UX is not for you. UX is more about understanding users, business goals and information, and creating a great experience that works. If you’re a visual person and want to learn how to make an app or website look beautiful, then graphic design or UI design is probably a better place to start. UX design is not a trend. It has been out there and will continue to be out there for as long as humans buy and sell goods and services. So, make sure you’re not just chasing a cool title or some job you think is the next big thing. UX is about solving problems, and it’s a great job to have.
https://medium.com/inside-design/i-talked-to-3-people-who-got-into-ux-in-their-40s-1db75aadb11b
['Guy Ligertwood']
2017-10-19 07:52:31.510000+00:00
['Self Improvement', 'UX', 'UX Design', 'User Experience', 'Design']
Architecture & Style
We build here upon a previous piece, where our emphasis revolved around the strict organization of floorplans and their generation using artificial intelligence, and more specifically Generative Adversarial Neural Networks (GANs). As we refine our ability to generate floorplans, we raise the question of the bias intrinsic to our models and offer here to extend our study beyond the simple imperative of organization. We investigate architectural style learning by training and tuning an array of models on specific styles: Baroque, Row House, Victorian Suburban House, and Manhattan Unit. Beyond the simple gimmick of each style, our study reveals the deeper meaning of style: more than its mere cultural significance, style carries a fundamental set of functional rules that defines a clear mechanic of space and controls the internal organization of the plan. In this new article, we will try to evidence the profound impact of architectural style on the composition of floorplans. Reminder: AI & Generative Adversarial Neural Networks While studying AI and its potential integration into the architectural practice, we have built an entire generation methodology using Generative Adversarial Neural Networks (GANs). This subfield of AI has proven to yield tremendous results when applied to the two-dimensional generation of information. Like any machine-learning model, GANs learn statistically significant phenomena from the data presented to them. Their structure, however, represents a breakthrough: made of two key models, the Generator and the Discriminator, GANs leverage a feedback loop between the two to refine their ability to generate relevant images. The Discriminator is trained to recognize images from a set of data. Properly trained, this model is able to distinguish a real example, taken out of the dataset, from a “fake” image foreign to the dataset. The Generator, conversely, is trained to create images resembling images from the same dataset.
As the Generator creates images, the Discriminator provides it with feedback about the quality of its output. In response, the Generator adapts to produce even more realistic images. Through this feedback loop, a GAN progressively builds up its ability to create relevant synthetic images, factoring in phenomena found among observed data. Generative Adversarial Neural Network’s Architecture | Image Source We specifically apply this technology to floorplan design, using image representations of plans as the data format for both our GAN models’ inputs and outputs. The framework employed throughout our work is Pix2Pix, a standard GAN model geared towards image-to-image translation.
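The Generator/Discriminator feedback loop described above can be shown in miniature. The sketch below is purely illustrative — it is not the Pix2Pix image model used in this work, but a toy one-dimensional GAN in plain Python, where a two-parameter generator learns to imitate samples from a Gaussian by chasing the gradient that a logistic discriminator feeds back to it:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # numerically stable logistic function
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

# "Real" data the generator must learn to imitate: samples from N(3, 1).
def sample_real(n):
    return [random.gauss(3.0, 1.0) for _ in range(n)]

a, b = 1.0, 0.0   # Generator G(z) = a*z + b, with noise z ~ N(0, 1)
w, c = 0.1, 0.0   # Discriminator D(x) = sigmoid(w*x + c)

lr, batch = 0.05, 64
for step in range(2000):
    real = sample_real(batch)
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [a * zi + b for zi in z]

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = [sigmoid(w * x + c) for x in real]
    d_fake = [sigmoid(w * x + c) for x in fake]
    grad_w = (sum((p - 1.0) * x for p, x in zip(d_real, real))
              + sum(p * x for p, x in zip(d_fake, fake))) / batch
    grad_c = (sum(p - 1.0 for p in d_real) + sum(d_fake)) / batch
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update (the feedback loop): push D(fake) toward 1,
    # i.e. adapt to whatever currently fools the discriminator.
    d_fake = [sigmoid(w * x + c) for x in fake]
    grad_a = sum((p - 1.0) * w * zi for p, zi in zip(d_fake, z)) / batch
    grad_b = sum((p - 1.0) * w for p in d_fake) / batch
    a -= lr * grad_a
    b -= lr * grad_b

gen_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(f"generator mean after training: {gen_mean:.2f}")  # should drift toward 3
```

The same adversarial dynamic, scaled up to convolutional networks over floorplan images, is what Pix2Pix exploits; here the "images" are just scalars so the loop fits on one screen.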
https://medium.com/built-horizons/architecture-style-b7301e775488
['Stanislas Chaillou']
2019-06-30 21:32:01.837000+00:00
['Architecture', 'Machine Learning', 'Data Science', 'Artificial Intelligence']
Listening at the keyhole
Every writer is a man with one deaf ear and one blind eye, who is possessed by a demon and unteachable by anybody but himself; a man who only half hears and half sees the world around him because for half his time he is absorbedly listening at the keyhole to his own Demon, examining with satisfaction his primordial shadow. If once the boy within us ceases to speak to the man who enfolds him, the shape of life is broken and there is, literally, no more to be said. I think that if my life has had any shape it is this. I have gone on listening and remembering. It is your shape, O my youth, O my country. O pallid clouds. O caverns of green. O rumbling river. O whispering shell. Sean O’Faolain
https://medium.com/drmstream/listening-at-the-keyhole-668023c13024
['Dan Mccarthy']
2016-11-10 02:25:28.896000+00:00
['Writing', 'Remembering', 'Human Interest', 'Seán Ó Faoláin', 'Religion']
Leading a data-driven strategy
CRO on-site optimisation is a key part of any successful business. It’s about converting that valuable traffic on your website and making each user count. One thing that is often overlooked is getting the traffic onto your site in the first place: what about optimising this part of the funnel? Leading the way and being a champion in your area Photo by Fauzan Saari on Unsplash Your company needs a marketing optimisation champion (whether they know it or not). Developing a data-driven culture can at first seem impossible, and there will be times you want to give up if you want to be that person. The first hurdle is the resistance to moving away from that ‘gut’ feel and “well, it’s worked in the past” methodology and towards a disciplined, well-structured testing strategy. This can be quite unusual in some marketing departments and can be tricky to prove at first, especially if your current culture has strong personalities or that HiPPO in the room. If you can break past these barriers and get yourself into the position of being that champion, then you’ll be in a strong position, and eventually you will convince the wider business that a data-driven approach is the right and only way to go. But how do you do it? 7 ways to get buy-in 1. Get Senior-Level Buy-in for Testing No matter how strong your project results are, you’ll face an uphill battle without senior management support. Many of your colleagues look for cues from HiPPOs when deciding what to support, and at the end of the day the managers control the budgets. Suffice it to say, your job will be much easier with their backing. What is success for your senior decision-makers? Start by finding out how they’re incentivised so you can show how optimisation will help them reach their goals. If you can help them look (and get paid) like rock stars, they’ll support your projects and reward you in return. You can also appeal to the rational support they need by building a business case for testing.
With directly measurable results, the case for testing is easy to make. Show the conversion-rate lift that other organisations are getting, and estimate the return on investment (ROI) of a testing strategy. 2. Create a Tangible Opportunity Get support for testing by identifying a tangible problem that testing solves. Bring in a conversion optimisation expert to tell decision-makers how your website needs to improve. The benefit of external voices, compared to internal ones, is that they tend to carry more weight and reinforce your message. 3. Involve Other Departments You’ll need the support of others within your company to get your tests running: DevOps, Finance, Marketing, Branding, Legal and Compliance and other teams may present barriers. Save yourself surprises by involving them early; if you make them feel part of the journey, they will buy into CRO more easily. 4. Get real customer feedback. If you can, record customer feedback, especially on video. Why video? Because seeing the frustration, anger or confusion of a customer using your product can be a powerful motivator. If you can’t get video, then look at feedback forms. Sharing case-study examples from other companies can be a source of inspiration and motivation, too. There are plenty of other tools on the market which can record the actions people take on your website and generate heat maps, session replays and customer journey mapping. 5. Conduct Skunkworks Tests If you don’t have senior support at the beginning, you could try an under-the-radar approach. Pick a few target pages with low political visibility to gain some quick wins. Landing pages outside the main website can be good candidates for this. Then use the winning results from those tests as ammunition in your campaign for support to move on to more important optimisation areas. 6. Tie Results to Revenue When you present results, don’t just show the improvement in conversion rate or KPIs.
Tie the results to revenue to show real monetary impact. Put it this way: which sounds better? A. We ran a test and it had a 10% improvement. B. We ran a test and it brings in an extra £500,000 a year. At the end of the day, when it comes to business, one thing matters: money. As soon as you start talking money, people will more than likely sign up. The other thing to bear in mind is that people are selfish (sorry!); we tend to look out for ourselves and ask what’s in it for me. When presenting results and talking about how great the tests were, try to make it clear that it’s to their benefit too. 7. Share Results Far and Wide Many of WiderFunnel’s clients have used our results-analysis presentations to create an internal event in the organisation. The champion invites members from throughout the company to see the results of tests, guess the winners, and discuss what was learned. The presentations are a lot of fun, especially for those departments that aren’t normally involved in external communications. Make sure to invite people from all functional areas. You’ll see several benefits from these meetings. Positive results with statistical certainty are exciting for everyone and create momentum. They educate your colleagues about the process of testing and inspire the organisation to support your projects. You’ll be positioned as a leader with ideas that deliver results. When WiderFunnel runs tests, we hold a vote with everyone involved to guess which one will win. The results presentation could be a good time to award prizes and boost the fun factor. Photo by NeONBRAND on Unsplash Take lead, drive from the front During your career there are those moments where you think: is that the right approach? It’s working out where you want to go and how you get there. When building your career there are fundamentally two things.
The first part is you: what is your personality like, who are you connected to, are you relatable? The second part is the proof: the data, and what you have done. Become a thought leader by reading more and sharing more knowledge with your colleagues. Take opportunities to conduct group discussions, distribute summaries of your learning, have lunch with unconvinced team members, and go to conferences. Unfortunately, even with the above tips, some organisations will never adopt marketing optimisation. The culture may be too rooted in its old ways. I cringe when I see companies start on the path of testing and then turn around and redesign their website wholesale without considering the progress and learning they’ve already made. If you don’t see progress in your data advocacy, you should move on to a company that values it. Companies that don’t test will eventually yield to competitors that do. Life is too short to battle for years as a cultural misfit at companies with outdated thinking.
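The "tie results to revenue" advice above is simple arithmetic, but it is worth making explicit. A minimal sketch — the £5m baseline is a hypothetical figure chosen only so the numbers match the article's "£500,000 a year" example, and the linear-scaling assumption is noted in the docstring:

```python
def annual_revenue_impact(baseline_annual_revenue: float, conversion_lift: float) -> float:
    """Translate a relative conversion-rate lift into projected annual revenue.

    Assumes revenue scales linearly with conversion rate (traffic and average
    order value held constant) -- a simplification, but a useful one when
    framing test results for stakeholders.
    """
    return baseline_annual_revenue * conversion_lift

# A 10% lift on a hypothetical £5,000,000 baseline gives the article's figure.
print(annual_revenue_impact(5_000_000, 0.10))  # 500000.0
```

Presenting the second number, rather than the raw percentage, is the whole point of tip 6.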
https://michaelgearon.medium.com/leading-a-data-driven-strategy-40e1762ed57e
['Michael Gearon']
2018-11-03 12:43:47.537000+00:00
['Marketing', 'Optimization', 'Cro', 'Marketing Strategies', 'Data Science']
Why You Shouldn’t Settle for Shallow Entertainment
Why You Shouldn’t Settle for Shallow Entertainment A case for reading difficult books. Photo by Giammarco Boscaro on Unsplash I was always the kid who preferred to stay home and read rather than go outside and play. Other people were confusing and the outside world always fell short of the adventures I could have with a good book. I read a lot. For my 13th birthday, I got 14 books by R. L. Stine, which I read in a week (they’re not very long). If you’ve never heard of him, he’s like Stephen King for teens: an extremely prolific writer (he has published over 300 books so far) who writes horror. The books themselves are very much in the “young adult fiction” realm, so they’re pretty easy and fun to read (if you’re into scary stories). And, over time, I moved away from YA into classics and then, with my studies, serious non-fiction books. You may notice that I categorise books into “good” and “serious”, meaning that there are also “bad” and… “trivial” ones. All of those words are loaded with meaning. The first two would probably be seen as positive and the other two as negative. That is not how I see it, though. Good and Bad Books There are fiction books which are extremely well-written, intricate, poetic. They carry deep meaning and strong messages. These are the books that are usually considered “good”. They are usually also regarded as classics: books written by writers like William Shakespeare, Leo Tolstoy, Jane Austen, Mikhail Lermontov, Virginia Woolf, Gabriel Garcia Marquez, George Orwell, Harper Lee, Umberto Eco, Chimamanda Ngozi Adichie… They are the works that put your mind at peace. As Helena Bonham Carter said about poetry, in an interview with Jolyon Connell: “Reading a poem just makes me feel better. It calms my nerves if I’m nervous. Or makes me happy if I’m sad. It just expresses what you can’t necessarily express.” Good prose, in my opinion, does the same. Then there are also books that are usually considered “not so good”.
Books that are written for children or young adults, that are not that well-written or intricate, but are still fun to read. They are still able, if not to transport us to another realm, at least to distract us from the world around us. As I said earlier, I do think there is such a thing as “good” and “bad” (or “not so good”) prose, regardless of how unpopular this opinion may be these days. We are meant to treat everything as if it were on equal ground, simply because (to borrow a term from Marie Kondo) it brings us joy. We are all meant to adhere to the idea attributed to Jeremy Bentham (a British utilitarian) that “poetry is not better than push-pin”. Push-pin is a game that was played by boys in 19th-century Britain, but here it is used as a symbol of undignified pleasure. A pleasure you might get from, for example, reading “bad” prose. I keep using quotation marks around the words good and bad because I don’t use them to signify some kind of moral superiority. There is nothing inherently bad about enjoying any kind of writing (or any creative or artistic works). I do very much adhere to Marie Kondo’s idea that we should only keep (and do) things that bring us joy. Even if they are, in fact, undignified things. Not everything we do should be dignified. At the same time, I do think we all should aim to do more dignified things as well. Consumerism and the Need for Shallow Entertainment I am deeply aware of the baggage that the words I am using have. Accusing (and it is, of course, an accusation, not a statement) someone of being shallow means accusing them of only being interested in silly, unimportant, trivial (here is that word again) things. No one wants to be perceived as shallow. Importantly, we care less about being than being perceived. The reality is that we enjoy being shallow. We enjoy silly, unimportant, trivial things. And how could we not?
In the last 30 years, we went from having a few sources of news that came in the form of either a daily newspaper or the evening news, which showed us a select few events from around the world, to having 24/7 live streams of constant updates from thousands of news sources. We are bombarded by sensible, important, serious (and often dreadful) news all the time. We don’t want art with deep meaning or strong messages. We don’t want to have to work to be entertained. Reading a book takes more effort than watching Netflix (not that there is anything wrong with occasionally watching Netflix). A lot of people have lost and are currently losing the ability to focus deeply. A lot of people are now addicts, but their addiction is socially acceptable. Having a smartphone in your hand hardly looks as dangerous as having a needle. Some would argue that this is because it is hardly as dangerous, but that would be to relativise the fact that the internet is changing our brains in ways we still don’t fully understand. We do know that teenagers and young adults today are plagued with mental health problems. Anxiety in children and teens went up 20% between 2007 and 2012, and between 2007 and 2017 the suicide rate of children and teens doubled. This is one of the many topics covered in a recently published documentary, “Childhood 2.0”, on the dangers of social media for children and teens. This is not to say that the internet is inherently bad, but rather that people have jumped in headfirst without understanding both how it works and what it does to our brains, to our mental health, to our sense of self. This is also often reflected here on Medium with writing advice. Writers are advised to only use short sentences, to use simple words and phrases, to use a lot of white space so as to “let their readers breathe”, and to keep to fun and easy topics. Heaven forfend a reader might be in danger of using their brain.
This is not a critique of the people who are sharing this advice (I understand that this is what sells, so to speak), but rather of the system that not only enables but encourages this kind of consumption. This is not the fault of any individual; it is simply the product of the world we live in. In a masterful 4-part documentary created by Adam Curtis called “The Century of the Self”, we learn about the use of psychoanalysis in managing the public. People, it is proposed by psychoanalysts, are inherently deeply irrational and should be guided from above. The only way democracy (if you would then call it that) can be saved is by having an elite class of technocratic rulers and a disengaged wider population. A population whose attention is diverted by shallow entertainment. Looking at the world today, it would appear that they were not unsuccessful. It is doubtful, though, that they could have predicted the internet and how impactful it could be in spreading (and making people dependent on) shallow entertainment. Some could argue that this is not inherently a bad setup: a handful of people are in charge of all public affairs while the rest go about their lives and do the things that bring them joy. Most, though, would probably not be so inclined. At least normatively. A person might enjoy not having to worry about public affairs and having their life uninterrupted by the dreary business of politics, but at the same time, they would want certain provisions. If you live in the West (and most of the world today has been influenced by this ideological system), you will have grown up indoctrinated to believe that freedom, justice, and equality are important values. And not only are they important values; your life would be less fulfilled, less worthwhile, if you did not have them ensured. You may object to my use of the word “indoctrinated”. It is usually associated with negative meaning. To be indoctrinated is to accept beliefs uncritically. 
You might think that one cannot be indoctrinated in a democracy. But like all other “negative” words used thus far, I do not use this one with the intent of moral judgement. To be indoctrinated is to grow up in society. Most of us have certain beliefs which we accept uncritically. They are given to us or legitimised in some way by certain authorities. Our schools, the “good” or “serious” books we read, the media. There is a certain “good” life, or the skeleton of a “good” life, that many people believe in. Perhaps it could be called “the American dream” or “the empire of freedom” (if you are a Marxist) or “eudaimonia” (if you are into Greek philosophy)… all of these share certain concepts which most people globally now hold as self-evidently “good”: freedom, democracy (as in people being involved with the ruling or governing of themselves), rule of law, independence, equality, etc. These also include having access to (as well as, in some cases, the sale of) goods and services. Goods and services not unlike this piece of content. We don’t question or critically assess our lives and our values very often. It is time-consuming, tiring, and it often feels useless. Due to how much and how often we are bombarded with information about the world, it is easy to feel small. To feel powerless. Overwhelmed. In a previous text, I talked about anxiety in the postmodern age. The dizziness of freedom one gets from not being trapped in grand narratives, and the feeling of overwhelm at having so many choices. One response to that can be (as it often is) to simply immerse yourself in shallow entertainment. And, again, there is nothing inherently wrong with that. But if you want to feel powerful, if you want to feel as if you are in control of yourself and your life, shallow entertainment will not do. Reading Difficult Books To return to the beginning: I love reading. I love fiction which transports me into a different world. 
I love the humid, warm and orange world of magical glass that melts in the novels of Gabriel Garcia Marquez. I love the sad and lonely black velvet of unhappy families of Leo Tolstoy. I love the exhilarating and conspicuous murder and paranormal mysteries of R. L. Stine. I also love historical books. Biographies, memoirs. And scientific books. Philosophy. I love finding expressed that which I could not necessarily express myself. Either because I had no words, or no understanding or concepts to describe it. “Good” books contain knowledge and meaning. They teach us nuance and critical thinking. They challenge us, our values, and our view of the world. They provide deep insights into the world around us. Because this world was not built in the last 30 years. It is the consequence of a long line of thinking, deliberating, and creating. If your goal is, like mine, to understand the world around you and your place in that world, you need to read “good” and “serious” books. You need to read difficult books. Books that cannot be read by the dozen per week. This doesn’t mean that you will have to give up shallow entertainment entirely. Just that you will need to divide your time in a different way. To break yet another rule of writing on Medium, I will introduce new information in the closing section. An average teenager spends seven hours a day on their phone ingesting content online. That is on top of school work and sleeping. The number is roughly the same for adults, who spend around seven hours staring at a screen daily (a little less than four hours on their phone and another three and a half watching TV). Imagine if only half of that time every day was spent reading difficult books. No psychoanalyst could ever accuse us of being inherently irrational and needing guidance. We would be in complete control of ourselves, our lives, and our world.
https://medium.com/the-ascent/why-you-shouldnt-settle-for-shallow-entertainment-2f5b2c9690b
['Kristina Hemzacek']
2020-11-22 22:03:11.674000+00:00
['Philosophy', 'Books', 'Self', 'Self Improvement', 'Reading']
Manually computing the coefficients for an OLS regression using Python
Python Implementation Now, to the point of the article. To remain consistent with the commonly used packages, we will write two methods: .fit() and .predict(). Our data manipulation will be carried out using the numpy package. If you’re importing your data from another file, e.g. in a .csv format, you may use the pandas library to do so. Let us import the modules: *The matplotlib import will come in handy later if you decide to visualise the prediction Next, we will create a class for our Model and create a method that fits an OLS regression to the given x and y variables — those must be passed in as numpy arrays. The coefficients are obtained according to the vector form derivation performed earlier (np.linalg.inv() is a numpy function for matrix inversion and the @ operator performs matrix multiplication): Our .fit() method stores the computed coefficients in the self.betas attribute to allow them to be accessed later by other methods. Note that we have added an optional parameter intercept; if we decide to fit our model with an intercept, the method will add a vector of 1’s to the array of the independent variables. We now create the .predict() method: *Note the extra indentation due to the fact that this method is part of the Model class Our (very) simple method makes use of matrix multiplication to obtain the “predicted” values — y_hat. 
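Putting the pieces just described together, the class might look like the following sketch. The article's original code is not reproduced here, so details (method bodies, attribute names beyond `self.betas` and `y_hat`) may differ from the author's version:

```python
import numpy as np

class Model:
    """OLS regression fitted via the normal equations (a sketch)."""

    def fit(self, x, y, intercept=True):
        # Optionally prepend a column of 1's so the first beta is the intercept.
        if intercept:
            x = np.column_stack((np.ones(len(x)), x))
        self.intercept = intercept
        # Vector-form OLS solution: betas = (X'X)^-1 X'y
        self.betas = np.linalg.inv(x.T @ x) @ x.T @ y
        return self

    def predict(self, x):
        # Apply the same intercept handling used during fitting.
        if self.intercept:
            x = np.column_stack((np.ones(len(x)), x))
        # y_hat = X @ betas
        self.y_hat = x @ self.betas
        return self.y_hat
```

On perfectly linear data such as y = 1 + 2x, `fit` recovers betas of exactly [1, 2] and `predict` reproduces y.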
As an additional (and optional) touch, we can add a method to visually output the prediction (Note: in the current form, it will only work for a univariate regression with an intercept): Finally, let us execute the methods we created using sample data (if you don’t want to generate a graph, delete the call to plot_predictions() in line 7 and instead, add the return self.y_hat line at the end of our .predict() method): If you followed everything along, you should have successfully computed the coefficients and generated something similar to this: If you’d like to play around with the code, the full version is available as a GitHub repository here. Needless to say, this is a very basic exercise, which nonetheless efficiently illustrates where the OLS betas come from and what their (mathematical) significance is. Figuring this out has been immensely helpful in my further studies. Lastly, the statistical packages I referred to earlier typically calculate, in addition to the coefficients, a variety of other measures — statistical significance, confidence intervals, R-squared and so on. These can all be computed numerically as well; however, I would advise relying on the (well-tested) libraries once you’ve understood the underlying concepts.
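That advice is easy to act on even while learning: the hand-computed normal-equation betas can be cross-checked against numpy's own least-squares solver. A self-contained sketch with made-up sample data (the true coefficients 4.0 and 1.5 are arbitrary choices, not from the article):

```python
import numpy as np

# Made-up sample data: y = 4 + 1.5x plus a little noise.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=50)
y = 4.0 + 1.5 * x + rng.normal(0.0, 0.5, size=50)

# Design matrix with an intercept column of 1's.
X = np.column_stack((np.ones_like(x), x))

# Normal equations: betas = (X'X)^-1 X'y
betas = np.linalg.inv(X.T @ X) @ X.T @ y

# Reference answer from numpy's well-tested least-squares solver.
betas_ref, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(betas, betas_ref)
print(betas)  # roughly [4.0, 1.5]
```

Both routes land on the same coefficients; `lstsq` is simply the numerically safer way to get there once the derivation is understood.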
https://towardsdatascience.com/manually-computing-coefficients-for-an-ols-regression-using-python-50d8e413de
['Roman Shemet']
2020-06-08 15:44:00.504000+00:00
['Python', 'Regression', 'Numpy', 'Ols', 'Data Science']
Pacing
Photo by Sander Weeteling on Unsplash The sound of pacing bare feet on hardwood floors wakes me up out of a sound sleep. I check my phone- It's 2:13 am. The air is thick with pain. Slowly I extricate myself from the blankets that envelop my body. I blindly grope around the dark bedroom for my slippers and a sweatshirt. It's cold downstairs. I follow the sound of bare footfalls into the kitchen. My father is pacing. Pacing in tight circles around our small table. Pacing. Pacing. "Hey dad," I call softly. He doesn't respond or acknowledge me. His eyes are open, but his mind is back in the 1980s. He is reliving the worst parts of his nightmares over and over again. I wonder which ghost has come to visit tonight. My eyes brim with tears. Two tears escape, creating twin rivers of saltwater on my cheeks. It never gets easier for him or me. I take a seat at the breakfast bar and wait. Letting him face this alone is not something I can do. When he finally looks at me as if he's looking for me- I'll be here- waiting. The blue glow of the microwave clock throws a cool light across the kitchen. It's 3:31 am. His gait is starting to slow. This is a good sign. 4:15 am. My dad stops pacing. He slowly turns towards me, his eyes glistening with suppressed tears and a dazed expression. His exhaustion is tangible. Recognizing that this is the light at the end of the proverbial tunnel, I quickly stand up and make my way towards him. My outstretched arm pulls him into a tight embrace. I can hear his ragged, emotional breaths in my ear. After a moment, I release him from my grasp. We walk to the living room and take our spots on the couch- him to go to sleep- me to keep myself awake until it's time to get ready for work. Not a word is spoken. It's not needed.
https://medium.com/an-idea/pacing-13e8bc00650
[]
2020-12-15 05:47:50.642000+00:00
['Family', 'Father And Daughter', 'Mental Health', 'PTSD', 'Mental Illness']
Kill Your Personas
Article written with Doug Kim In 1983, Alan Cooper gave life to the first design persona with a wave of his hands. A pioneering software developer, Cooper had just interviewed a group of potential customers. He realized that focusing on real customer motivations rather than his own needs could spark better solutions to complicated problems. For the rest of his design critique, Cooper began assuming the gestures, speaking habits, and thought processes of made-up individuals who were loosely based on the people he’d interviewed. Personas quickly took off in both design curriculum and the software industry. And for good reason: personas help us gain a better understanding of our customers’ needs and anticipate how they might act in situations where direct communication with them isn’t feasible. But we’ve since realized a problem with personas. They are inherently an amalgamation, an average of attributes that we imagine our average customer has. And there’s no such thing as the average customer. The consequence of the artificial average In the 1950s, the US Air Force conducted a famous study on pilot size. They measured the physical dimensions of more than 4,000 pilots and calculated averages along 140 dimensions, like height and chest circumference. From those, they defined an average range on the 10 dimensions most relevant to cockpit design and theorized that most pilots would fit within that range. As it turns out, not a single one of the 4,000 pilots fit within the average range on all 10 dimensions. The consequences of designing for the average pilot were potentially deadly. The ergonomics of planes, which were built based on “average” pilot size, were so off that pilots were crashing as a result. The planes were created for everyone but really no one at all. The dilemma with designing on assumptions We repeat the same error every day in product development. We imagine a persona — let’s say his name is Ted. 
We give him attributes, like a family, a high-powered job, a suburban house, and two cars (see main image). Maybe he’s got a cat. Then we’ll have debates about what Ted would or wouldn’t like. “Can you really imagine that Ted would be ok with that product decision? From what I understand about Ted’s profile, I don’t think so.” And really, no one knows what Ted would like because Ted doesn’t exist. It sounds obvious when you put it that way. But it’s challenging, because the more “human” we try to make Ted by adding specific personality traits and details about his habits, the more we unconsciously stereotype him. We make it harder on ourselves in the heat of the moment to remember that Ted is an abstract representation of research insights. Even worse, every bit of overly specific detail we attribute to Ted makes him less representative of the general audience we want to design for. In such design work, the person who doesn’t exist can begin to erode and erase the presence of people who actually do. Often, ‘Teds’ are: created in siloes without a defined goal or purpose; kept static across time and use cases; not flexible and adaptable; and grounded in artifacts product teams/designers can’t use. As we move toward building intelligent, responsive systems, we need new tools that further embrace diversity and respect multiple contexts and capabilities. Persona spectrums: a motivation, not a character We need tools that reintroduce diversity into our design process. Every decision we make either raises or lowers barriers to participation in society. Inclusive design emphasizes our responsibility to solve for mismatches between humans and their products, environments, and social structures. We need ways to check, balance, and measure the inclusivity of our designs. So how can we put real customers first and circumvent the notion of an artificial average? One way is to kill your personas (sorry, Ted) and adopt a model of persona spectrums. 
Instead of defining one character, persona spectrums focus our attention on a range of customer motivations, contexts, abilities, and circumstances. A persona spectrum is not a fake person. It’s an articulation of a specific human motivation and the ways it’s shared across multiple groups. It shows how that motivation can change depending on context. Sometimes, a trait can be permanent, like someone who has been blind since birth. A person recovering from eye surgery might temporarily have limited or no vision. Another person might face this barrier in certain environments, like when dealing with screen glare out in the sun. How would your product adapt to this range of people and circumstances with similar needs?
https://medium.com/microsoft-design/kill-your-personas-1c332d4908cc
['Margaret P']
2020-04-09 19:16:18.551000+00:00
['Design Thinking', 'User Experience', 'Design', 'Inclusive Design', 'Microsoft']
Wiring Up a Redux Store in React Apps
The Redux Store is modified by using the Actions flow Keeping in mind what we have just said, let’s see how we integrate the Redux pattern inside a React application. Redux in React The first thing I want to point out is that when we talk about Redux in React, we basically talk about the React-Redux library. It is this library that allows Redux and React to communicate with each other properly and makes it easier for us to read the store data or dispatch our functions. As stated in the introduction, we are not going to use the Redux Toolkit for this article, so assuming we created our app with create-react-app and no particular template, we must install the Redux library and the React-Redux library manually: npm install --save redux react-redux 1- Defining reducer and action creators Before we actually use any function from the redux or react-redux library, we will have to create our action creators and reducers. I usually structure my projects in order to have reducers and actions folders where I define all my functions separated by topic and then barrel them into a single index.js file to be exported. 2- Combining reducers Most importantly, make sure that inside the reducers/index.js file you combine all your reducers with redux’s combineReducers function: We want to export all the reducers combined, in order to transform them into the Redux Store. Keep in mind that the key you provide inside the combineReducers argument will be used as a prop inside the Redux Store. 3- Creating the Store and wiring up the Provider We must make sure to create a Redux Store out of our combined reducers and provide it throughout our application. To create the store, all we have to do is import our combined reducers and call the createStore function provided by the Redux library. To make the store available to all our components, we will make use of the Provider component of the React-Redux library. 
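A sketch of steps 1-3 follows. The `itemsReducer` and the `items` key are hypothetical names for illustration, and minimal stand-ins for redux's `combineReducers` and `createStore` are included so the example runs on its own; in a real app you would import both from the `redux` package instead:

```javascript
// Minimal stand-ins for the 'redux' exports, so the sketch is self-contained.
const combineReducers = (reducers) => (state = {}, action) => {
  const next = {};
  for (const key of Object.keys(reducers)) {
    next[key] = reducers[key](state[key], action);
  }
  return next;
};

const createStore = (reducer) => {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); return action; },
  };
};

// reducers/items.js — a hypothetical example reducer.
const itemsReducer = (state = [], action) => {
  switch (action.type) {
    case 'ADD_ITEM':
      return [...state, action.payload];
    case 'REMOVE_ITEM':
      return state.filter((item) => item.id !== action.payload);
    default:
      return state;
  }
};

// reducers/index.js — the key ('items') becomes a prop of the store state.
const rootReducer = combineReducers({ items: itemsReducer });
const store = createStore(rootReducer);

store.dispatch({ type: 'ADD_ITEM', payload: { id: 1, name: 'Milk' } });
console.log(store.getState().items.length); // 1
```

The `store` created this way is what gets handed to react-redux's Provider, as in `<Provider store={store}>`.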
The Provider expects a store as a prop and is intended to wrap our entire application, namely, our <App /> component! Once we have wrapped our main component inside the Provider, all our nested components will be able to get access to our state data! Accessing the store We have said that we are now able to reach our state from all the components in our app, but we must see how. There are actually 2 different methods to read the data from the store: the connect function and the useSelector hook. Option 1: Connect & mapStateToProps functions. The most traditional way to get access to the store data from one of our components is to pass the entire component to the connect function from react-redux. The connect function will return a copy of the component entirely connected to the store. However, in order for the component to properly read the store data, we must first provide an important argument to the connect function: mapStateToProps. mapStateToProps is a function used for selecting the part of the data from the store that the connected component needs. This function should be passed as the first argument to connect , and will be called every time the Redux store state changes. mapStateToProps is called with the entire store state as its first and only argument and should return an object where all the keys represent the mapped props from the store, to be consumed in the component. Let’s try to make an example: If we want to read our list of items from our store inside of a MyList component, this is what we would do: Note how the object returned by mapStateToProps is defining the name of the props the component will use to access the data. An important aspect is that the connect function can be used for both class and functional components. Option 2: useSelector (functional components only) In recent versions of React-Redux, hooks are available for us to use inside our functional components. 
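Before turning to the hook, here is what the Option 1 sketch just described might look like. `mapStateToProps` is a plain function of the store state, which makes it easy to illustrate and test in isolation; `MyList` and the `items` slice are the hypothetical names from the example:

```javascript
// Receives the entire store state; returns the props the component needs.
const mapStateToProps = (state) => ({
  items: state.items,
});

// In the component file (needs React, shown here as a comment) you would write:
//   import { connect } from 'react-redux';
//   export default connect(mapStateToProps)(MyList);
// and MyList would then receive the list as `props.items`.

const fakeState = { items: [{ id: 1, name: 'Milk' }] };
console.log(mapStateToProps(fakeState).items.length); // 1
```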
In order to read data from the Redux Store, we may use the useSelector hook. The useSelector hook conceptually corresponds to mapStateToProps, but it has some differences, the most important being: It can be used only inside functional components. It can return any value, not necessarily an object. It can be declared multiple times inside the same component. It runs every time an action is dispatched, but it forces a re-render only if its returned value has changed. Its use is nonetheless very similar to mapStateToProps, since its mandatory first argument is a function that receives the entire store state, just as in mapStateToProps, with the difference, as already said, that it can return any value. Let’s see the same example with useSelector: Dispatching the actions and modifying the store Now that we have seen how to read data from our store, let’s focus on how we can perform changes on it, that is, how to dispatch our action creators! If you have understood how Redux works, it will be clear to you that in order to trigger an action and send it to all the reducers in the store, what we need is to pass the result of an action creator to the store.dispatch() function. But how can we access the dispatch function from a component? Well, here is where the React-Redux magic comes into play and again, we have 2 different ways! Option 1: Connect & mapDispatchToProps Very similarly to what we have seen with mapStateToProps, the connect function also allows us to map action creators to the component’s props. The trick is very simple: connect can accept a second parameter called, guess what? mapDispatchToProps. mapDispatchToProps needs to be an object whose keys will be mapped to props inside the component. Each key of this object must be a function that eventually returns an action, that is, an object with type (mandatory) and payload (optional) properties. Let’s see a very simple example, extending the previous MyList.js component. 
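A sketch of that example, with the action creator and `mapDispatchToProps` shown as the plain objects they are (`removeItem` is a hypothetical name, and the React-dependent wiring is abbreviated to comments so the dispatch mapping stays visible):

```javascript
// actions/items.js — a hypothetical action creator.
const removeItem = (id) => ({ type: 'REMOVE_ITEM', payload: id });

// Object shorthand: each key becomes a prop that dispatches the returned action.
const mapDispatchToProps = {
  delete: removeItem,
};

// The component wiring (needs React, shown as a comment):
//   export default connect(mapStateToProps, mapDispatchToProps)(MyList);
// Inside MyList, each item's button would call `props.delete(item.id)`.

console.log(mapDispatchToProps.delete(3)); // { type: 'REMOVE_ITEM', payload: 3 }
```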
We will import our previously declared removeItem action creator from its file, and map it to a delete property inside the component. We will then add a button for each item of the list, and try to call our delete property with the id of the item as its argument: IMPORTANT NOTE: in some cases we only want to use action creators inside a component, regardless of the rest of the store. In those cases, you should provide null as the first argument to connect and mapDispatchToProps as the second argument! Option 2: useDispatch (functional components only) An even simpler approach is available to us inside of functional components, thanks to the useDispatch hook. This hook provides a direct reference to the store.dispatch() function, making its use really straightforward. Of course, you only need to call the hook once inside a single component and can then use its reference with different action creators. Let’s see how our example MyList looks with this approach: Handling async actions with Redux-Thunk There we go, my friends! We have our store in place now, but we need to handle a little issue that we may have if we need to consume some API inside our action creators. The problem Remember: action creators are supposed to return a plain object. But when we make use of asynchronous code, we are breaking this rule and Redux is going to show a very big error screen to us: If we perform an async call inside an action creator, this is what we run into. The solution In order to bypass this issue, we will make use of a middleware called Redux Thunk. What Redux Thunk does is put itself in the middle every time an action creator is called and check what is returned; if the return value is a plain object, it will then call store.dispatch for us. 
If the return value is a function, it will prevent Redux from breaking and will instead call that function with 2 arguments provided out of the box: the dispatch function, which we can use to dispatch the action once we have performed all our async code or other logic, and a getState function, which we can use to inspect the whole content of the store! Let’s see how to implement this. We can install it with npm i --save redux-thunk We can now move to our index.js file and import thunk from redux-thunk and applyMiddleware from the redux library. All that remains to wire up Redux-Thunk is to pass thunk as the first argument to applyMiddleware and applyMiddleware as the second argument to createStore . From now on, all our actions will be handled by the thunk middleware.
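With the middleware in place, an async action creator might look like the sketch below. `fetchItems` and `fakeApi` are illustrative names (not from the article), and the middleware itself is simulated by calling the thunk directly with (dispatch, getState), which is what redux-thunk does internally, so the example runs standalone:

```javascript
// A stand-in for a real API call.
const fakeApi = () => Promise.resolve([{ id: 1, name: 'Milk' }]);

// A thunk action creator returns a function instead of a plain object.
const fetchItems = () => async (dispatch, getState) => {
  dispatch({ type: 'ITEMS_LOADING' });
  const items = await fakeApi();
  dispatch({ type: 'ITEMS_LOADED', payload: items });
};

// Simulate what the middleware does: invoke the thunk with dispatch/getState.
const dispatched = [];
fetchItems()(
  (action) => dispatched.push(action), // dispatch stub
  () => ({}) // getState stub
).then(() => console.log(dispatched.map((a) => a.type)));
// logs: [ 'ITEMS_LOADING', 'ITEMS_LOADED' ]
```

The loading action goes out synchronously, the loaded action only after the API promise resolves, which is exactly the two-phase flow the plain-object rule would otherwise forbid.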
https://lancellotti-marco.medium.com/wiring-up-a-redux-store-in-react-apps-25e1e2224b31
['Marco Lancellotti']
2020-11-30 08:39:46.140000+00:00
['Reactjs', 'Redux Thunk', 'React', 'Redux']
Weekly update #10
Another week has passed and it is time to share the results with our community. For the last few weeks we have been mostly focused on the creation of the Live Stars platform. We have stress-tested the MediaSoup server implementation. It works perfectly on Chrome-based browsers, and now we are making sure that the Mozilla Firefox client also supports our new server. Meanwhile, we have also started working on our referral system. Visual bugs and glitches on the website are almost completely gone now. We expect to be able to make our beta test public again once we have made sure that broadcasting works on all of the browsers. We have seen several complaints that the administration has not been very active on social channels lately. This is due to the fact that the whole team is now busy with the platform, as it is our main goal at the moment.
https://medium.com/live-stars/weekly-update-10-42a3bbcde70
['Live Stars']
2018-06-14 14:00:31.240000+00:00
['Blockchain', 'Cryptocurrency', 'Ethereum', 'Development', 'Bitcoin']
Teaching during a Pandemic: Five Takeaways
A month ago, the Brooklyn high school I teach at – in lock step with NYC – decided to close school and switch to remote learning, for an indefinite amount of time. It seemed logical at the time, but few of us thought of this as potentially being the end of in-person teaching and learning for the 2019-2020 school year. Back when I worked at Superform, much of the creative work we did was distributed. I collaborated with developers overseas, and in some cases, 100% of client meetings were done via conference call, video, or even just a Slack channel. I would take day trips to different WeWorks in Manhattan to get a change in scenery, without missing a beat. Remote work felt natural. When I found out we were moving to remote learning for a few weeks I felt as confident as anyone in being able to not only handle the transition, but utilize my past experience. No one was prepared for this. I’m speaking from a teacher’s perspective, but it’s clear very few people were/are equipped to work remotely for an extended period of time. In New York City, apartments are cramped, living with roommates well into your 30’s is the norm, and coffee shops offer(ed) the comfort and respite that you don’t get at home or at work. Now limited to just your home, these spillover spaces no longer accessible, we’re adapting our spaces to accommodate much more than they were designed for. A week into NYC’s pause, my child’s daycare closed, which meant my wife was working from home, our daycare was now my living room, and I was teaching online, planning and grading, and holding office hours for two hours a day. Events that would normally take place in 4–5 locations now all took place in our 1-bedroom apartment. 
Even the most tech-savvy of us struggle with isolation, because it’s not merely about comfort level with remote work, but about grappling with the psychological effects of immobility, no longer having access to areas, stores, and places you love, and making the little personal space you have as comfortable as possible. Five Takeaways With each week that passes, we get a little more effective with remote teaching and learning, but there are a few specific takeaways from this experience that I think will change how we teach and learn. 1. Every Student Needs a Laptop and Internet Access This has been the most frustrating and disheartening part of remote teaching over the past month. Many students have missed weeks of instruction, office hours, 1-on-1 meetings, and emails, all because of something completely out of their control. As we contemplate schools potentially closing for the remainder of the year, this can have huge negative impacts on students’ learning for years to come. Beyond the unique circumstances the coronavirus has brought on, lack of technology has disproportionately affected students in low-income families, and is perpetuating the achievement gap the same way gaps in other services have. How much learning, exploration, and discovery is lost by not having a computer or reliable internet in the household? Conversely, how much more opportunity do students instantly gain by having a connected device throughout their entire K-12 education? Viewed through a budgetary lens, the price of one device per student seems inconsequential, especially considering the device (and its value) go directly to the student. 2. We Need to Take Digital Literacy Seriously I was shocked at how few students were sending properly formatted emails, and how many were using our online classroom’s message board in all kinds of wrong ways, until I realized: Most of my students have never discussed digital literacy in the classroom. 
I know there’s a wide spectrum in the U.S., with some districts investing heavily in digital literacy, and many schools in which online learning is a more critical component, but I suspect there are thousands of schools across the country that have simply not addressed the elephant in the room: how to be a person on the internet. Skills like sending professional emails, formatting documents, and being presentable (as much as one can be) in video conferences can mean the difference in building strong professional relationships, being awarded scholarships, or getting hired. It’s ludicrous that we graduate countless students without having discussed and practiced digital literacy thoroughly. It’s of such importance that it can’t really be covered in one class; it must be embedded in school culture, where we offer students every opportunity possible to practice digital behaviors that will set them up for success. 3. The Schedule is the Glue that Holds it all Together By far, the most common feedback I’ve gotten from students is that their schedules have gone to shit. This has been the most jarring part of remote learning for students — they’ve gone from weekdays of ultimate consistency to unstructured days that bleed into one another, in spaces often occupied by multiple family members, while often sharing devices or doing work on phones or tablets. From one week to the next, students are managing (in some cases six or seven) online lessons, work time, and office hours, on schedules that have no resemblance to the normal school schedule. The past month has really highlighted how little we (students and teachers alike) frame our weeks in terms of work time, and how critical consistent schedules are to getting things done. How many of us plan our mornings, evenings, and weekends with the same rigor we plan our workdays? How many hours are simply lost as a result? 
There are tons of students who will likely never have made a schedule before graduating high school, and who will be attending college and/or working jobs, having to figure out from scratch when they’ll get things done. As ‘work’ becomes more of an amorphous term that now encompasses freelance, remote work, consulting, and side-hustles, creating and sticking to a schedule seems a more pressing skill than ever. 4. Online Classrooms Should be Standard (Even If They’re Secondary) There’s been a huge silver lining in the past four weeks of remote teaching, especially for someone who has (used, but) not relied on Google Classroom. Publishing assignments, coursework, and links to relevant articles and videos, and having both class-wide and private conversations, can have a tremendous impact on facilitating learning outside of the classroom. This is not an argument for 100% permanent remote learning, but I’ve noticed several students who are not typically engaged in the classroom are now thriving. The internet is their domain, their comfort zone, and this dichotomy highlights how little schools have adapted to students’ digital-first experience. Further, having a digital space where every lesson, assignment and material is accessible allows for much deeper connections between topics; as a high school student, how many times did you finish a project or unit, turn it in, and never see it again? Every lesson or unit that ends without any further study is a lost opportunity to build deeper connections between wide-ranging topics. As a teacher, being able to digitally pop in to classes gives an immediate insight into the culture of the class, student engagement, and materials being covered. Ideas for collaboration or inter-disciplinary projects are floating at the surface, rather than buried in a shared, nested drive folder. 
One day, we’ll return to the classroom, fist bump, catch up, and get on with the lesson, but I hope the use of online classrooms continues to grow; it’s a space for students to shine, help each other, and flex their muscles in a way that can be difficult in the physical classroom. 5. Resourcefulness is Now a Requirement September will roll around, a new school year will begin, and everyone — teachers, students, and administrators — will be expected to continue making progress and ensuring students have the skills and knowledge they need to excel. The experiment of removing the physical spaces that house almost all of our interactions, both academic and social, has forced all of us to adapt and achieve as much as possible with the resources we have (actually, this sounds like the norm for many teachers, coronavirus outbreak or not). It took a few weeks to get into a remote-learning rhythm, but the speed of adaptation has been pretty remarkable. Of course, everyone is struggling, and we’re certainly not operating at 100%, but the vibe of online classrooms is in some cases shockingly close to pre-isolation physical classrooms. Students are sharing devices, calling and texting teachers, and messaging each other for help on assignments, a lot of which was simply not happening a month ago. Teachers already know a lot about resourcefulness, which is a big reason why remote learning has not been a complete disaster.
https://medium.com/age-of-awareness/five-takeaways-from-becoming-a-remote-teacher-cd694d367a9a
['David De Céspedes']
2020-05-02 01:32:34.518000+00:00
['Remote Working', 'Learning', 'Teaching', 'Productivity']
From Personality Cults to Collective Intelligence: The Democratization of Education Online
There’s a major shift occurring in the world of small business education, coaching, and training. One group of bloggers, content marketers, and educators have gone on to start self-funded software companies. Another group has moved toward building agencies and practices that deliver precise execution and hands-on support. A third group is saying, “It’s just not working anymore.” Those who say “it’s not working” are largely those who have relied on personality-driven brands and the development of the online course market. This market developed out of a desire for education to be accessible to the masses. Unfortunately, what was envisioned and sold as a democratization of business education has become anything but. The premium personality brands peddle their wares with the help of mobs of fawning affiliates while aspiring personalities aim to get a small piece of the pie. This third group has employed the “gatekeeper model” — which thrives by sequestering “the good stuff” behind a paywall. The reason they’re experiencing diminishing returns is simple… The rest of the market has already moved away from gatekeepers and towards the Access Economy. We have access to people’s spare rooms when we’re on vacation. We have access to restaurant reservations at the touch of a button. We have access to a taxi in our pockets. We have access to flexible labor that we can contract to do just about anything. We have access to time-saving technology that allows us to do things we only dreamed about 5 years ago. We have access to amazing amounts of data that we can use to learn more about our customers than we thought possible. And, we have access to unprecedented amounts of information. In the Greek and Roman Empires, access to information was limited to the town square and local gossip. In the 1400s, the printing press revolutionized people’s ability to access ideas and information — but only for the few who were literate and rich enough to afford books. 
In the early 1900s, radio and then television brought news, entertainment, and information to the masses. Of course, the 1990s brought the internet and completely changed the game. With access to IMDB, Genius, Quora, and, of course, Google, you can find the most minute piece of information quickly — and cheat at your local pub trivia night. Even high-value education — Ivy League schools and specialized technology programs — has entered the Access Economy. The democratization of education is here — and it’s a key part of the Access Economy. Yet, when it comes to our businesses, we’re still relying on gatekeepers. The gatekeeper model starts with developing a popular personality brand (and, often, cults of personality), moves on to creating DIY online learning courses, and finally sells them to us for thousands of dollars. Information is delivered in videos, articles, or audios, and the “learning” is done in worksheets or small homework assignments with no collaboration or oversight from an instructor or learning community. This is less education and more information regurgitation. The gatekeeper model offers the bare minimum in terms of access for the most amount of money. True access to learning — and not mere information — includes access to a dynamic, collaborative learning environment full of people who are invested in helping themselves while they help you. That’s what is so valuable about traditional learning environments like universities. This is where the small business gatekeeper model fails so miserably. It’s what has let you down time and time again. So while some will worry this opportunity is crumbling… I believe we’re seeing a much-needed rebalancing of what people value and how they invest in what’s useful, one that happens to be in line with the direction of every other market in today’s consumer economy. I first felt this rebalancing in early 2016. 
My team and I were working to create a more immersive experience for a group coaching program we had sold for the last 4 years while at the same time trying to automate it for those who couldn’t invest at a higher level. It did not go well. First, my heart wasn’t in it. Second, we divided our focus. Third, we didn’t clearly define the value of either option. It ended up being our best sales campaign to date… but it was also a flop. That’s when I really started to rethink things. I have been passionate about collaborative learning and coaching since the beginning. Yet, I had started to abandon that approach in favor of what seemed like an easier sell and a more profitable offer. Not a smart move. I was at risk of missing a much bigger opportunity. The gatekeeper model has missed the big opportunity — and left us in the cold. Look inside the inboxes of most small business owners and you’ll find a myriad of emails all pitching some $2000 course about Facebook ads, project management, social media, selling on webinars, or writing copy. Many of these classes are very good. Some are exceptional. Others are not. There are many savvy, successful, experienced small business gatekeepers. It’s not that the courses that are priced at a thousand dollars or more aren’t “worth it.” I’ve bought them, I’ve enjoyed them, I’ve more than gotten my money’s worth. However, no $2000 course has made growing my business much easier. A course can even out our journeys, level our learning curves a bit, or answer a particular question. What a course can’t do for us is support us in the daily ups and downs that running and growing a small business entails. It can’t help us evaluate ideas, get feedback on something we’ve created, or make a personalized recommendation. It can’t offer truth-telling, constructive encouragement, or even cheerleading. An expensive course certainly can’t give us that on-demand access that we’ve come to expect from AirBnB, Lyft, and OpenTable. 
Ask any successful small business owner (I’ve talked to hundreds over the years) and they’ll tell you that long-term, sustainable success comes not from nailing a particular formula or following a particular set of instructions but simply from having the fortitude to show up every day with the desire to make things more efficient, reach more people, and take action on their strategy. In other words, the key to business success is access, not learning. Learning happens, yes. But it’s not the truly valuable deliverable; it’s a side effect. We need access to encouragement, honest conversations, real feedback, and — we need access to people who are on a similar journey to ours. This kind of access allows us to synthesize, integrate, systematize, optimize, and perfect what we already know. No course — even the best — is made to do that. The gatekeeper model got the opportunity wrong. The big opportunity isn’t in selling information. The opportunity is to create, nurture, and sell an environment where real growth can happen. Of course, that’s a much bigger challenge than packaging what you know into an online course. It requires a bigger investment and an intentional approach to culture development. Imagine a world where access to good information, constructive encouragement, and honest conversations about your business were as accessible as an Uber ride. If you had a question about how the Facebook algorithm might affect your social media strategy, you’d know exactly where to go for help. If you had a cash flow challenge, you’d know exactly who to talk to for some creative fundraising ideas. If you had a difficult conversation you needed to have with a team member, you’d know exactly the people to lean on for support. The answers to your questions would always be personalized — instead of the anonymity of a Google search. The creative ideas would always be contextualized — instead of the one-size-fits-all approach of a blog post. 
The support would be from people who care about you — instead of the faceless detachment of an Instagram meme. In this world, action is prioritized over more learning. You do more because you’re not constantly experiencing FOMO at what you should be learning to keep up with everyone else in the personality cult. You ask questions that help you move on to the next task or clarify your action plan instead of learning things that don’t matter for your strategy. In this world, you call the shots — and ask the questions. The gatekeeper model relies on control to maintain its position in the market. Those that use it need to be able to influence the questions you ask and the problems you consider worth solving. Your strategy might not depend on learning how to sell from webinars, build your list from Facebook ads, or sell high-ticket consulting proposals but they will insist it does. They’re not necessarily trying to manipulate you — they’re just very good at casting their nets for prospective customers. When you have access, you have a much bigger chance of staying on track, maintaining your focus, and sticking with your strategy. Your questions are your own, not a gatekeeper’s. In this world, you cover all the bases. Gatekeepers are human, too. They don’t have all the answers and they can’t help with every problem. Sure, they’ll send you to another gatekeeper when they can’t answer your question… but then you’re back at square one. When you have access to a network of people who have experienced scads of business challenges and successes, as well as have talents and skill sets in a variety of areas, you don’t have to worry about going elsewhere. You can rely on the distributed expertise of the network instead of the siloed expertise of the gatekeeper. In this world, you never worry about obsolescence. New information is always emerging. New techniques, tactics, and strategies make their way to the mainstream. Technology changes fast and the market changes faster. 
With gatekeepers, you have to worry that what they’re peddling could become obsolete at any moment. With access, the conversation goes with the flow of information, technology, and the market. When a new idea emerges, the network will evaluate it. This is the world of collective intelligence. We believe in a world where you can prioritize action over learning, call your own shots, ask your own questions, cover all the bases, and stay up-to-date. We believe the reason so many in the small business space are worried “it’s not working anymore” is because the gatekeeper model has finally given way to the deep desire to tap into collective intelligence and finally realize the promise of democratized education. What’s more: we believe building a system for collective intelligence in the small business space is more than possible and that access to it should be a part of every small business owner’s arsenal of tools. The market isn’t crumbling — it’s just getting started.
https://medium.com/help-yourself/from-personality-cults-to-collective-intelligence-the-democratization-of-the-small-business-space-1a4dc43635e3
['Tara Mcmullin']
2017-10-31 18:41:40.305000+00:00
['Online Courses', 'Entrepreneurship', 'Education', 'Small Business', 'Freelancing']
Screwed!
A happy thought and a sad one. For him, she’s both. He knows he’s screwed.
https://medium.com/3-lines-story/screwed-c9e7312f9da6
['Pawan Kumar']
2017-09-02 15:41:22.638000+00:00
['Poetry', 'Relationships', 'Love', 'Storytelling', 'Heartbreak']
Introduction to Keras & Transfer Learning for Self Driving Cars
GoogLeNet In 2014, Google published its own network in the ImageNet competition and, in homage to Yann LeCun and LeNet, named it GoogLeNet. It’s spelled like GoogleNet, but it’s pronounced GoogLeNet. In the ImageNet competition, GoogLeNet performed even a little better than VGG: 6.7% compared to 7.3%, although at that level, it kind of feels like we’re splitting hairs. GoogLeNet’s great advantage is that it runs really fast. The team that developed GoogLeNet came up with a clever concept called an Inception module, which trains really well and is efficiently deployable. Remember Inception? That’s where the name comes from, and it’s going to look a little more complicated. The idea is that at each layer of your ConvNet, we can make a choice: have a pooling operation, or have a convolution — and then we need to decide whether it’s a one by one, a three by three, or a five by five convolution. All of these are actually beneficial to the modeling power of our network. So why choose? Let’s use them all. Here’s what an inception module looks like. The Naive Inception Module. (Source: Inception v1) Instead of having a single convolution, we have a composition: average pooling followed by a one by one convolution, a one by one convolution on its own, a one by one followed by a three by three, and a one by one followed by a five by five, and at the top we simply concatenate the output of each of them. It looks complicated, but what’s interesting is that we can choose these parameters in such a way that the total number of parameters in our model is very small, yet the model performs better than if we had a simple convolution. The inception modules create a situation in which the total number of parameters is very small. This is why GoogLeNet runs almost as fast as AlexNet. And of course GoogLeNet has great accuracy: like I mentioned earlier, its ImageNet error was only 6.7%. 
GoogLeNet is a great choice to investigate if we need to run our network in real time, like maybe in a self-driving car.
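To make the parameter-savings argument concrete, here is a minimal sketch in plain Python (the function names are my own, and the branch sizes in the usage note are the published filter counts of GoogLeNet’s inception (3a) module, used here purely as an illustration) that counts the weights in an Inception module with 1×1 reductions versus a single large convolution:

```python
def conv_params(in_ch, out_ch, k):
    """Number of weights in a k x k convolution (biases ignored)."""
    return k * k * in_ch * out_ch

def inception_params(in_ch, n1, r3, n3, r5, n5, pool_proj):
    """Weights in an Inception module with 1x1 reductions.

    Branches: a 1x1 conv (n1 filters); a 1x1 reduction (r3) feeding
    a 3x3 conv (n3); a 1x1 reduction (r5) feeding a 5x5 conv (n5);
    and max/avg pooling followed by a 1x1 projection (pool_proj).
    The branch outputs are concatenated, so the module emits
    n1 + n3 + n5 + pool_proj channels.
    """
    return (conv_params(in_ch, n1, 1)
            + conv_params(in_ch, r3, 1) + conv_params(r3, n3, 3)
            + conv_params(in_ch, r5, 1) + conv_params(r5, n5, 5)
            + conv_params(in_ch, pool_proj, 1))
```

With inception (3a)'s sizes (192 input channels; 64 / 96→128 / 16→32 / 32 filters, for 256 output channels), the module needs about 163k weights, while a plain 5×5 convolution producing the same 256 channels would need about 1.2 million — roughly a 7.5× saving, which is the point of the design.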
https://towardsdatascience.com/introduction-to-keras-transfer-learning-for-self-driving-cars-684df7eae0e
['Prateek Sawhney']
2020-12-05 08:29:30.959000+00:00
['Machine Learning', 'Self Driving Cars', 'Artificial Intelligence', 'Deep Learning', 'Data Science']
Your Customized Medium Sub-Domain Profile Has One Less Feature Now
Your Customized Medium Sub-Domain Profile Has One Less Feature Now I wish we could revert to the old design Photo by Helena Hertz on Unsplash Medium released a new profile layout on the 14th of October. A sub-domain for writers was one of the highlighted changes. Earlier, the sub-domain was available only to Medium-owned publications, but now it is available to all users. On that very evening, we lost an important feature on Medium. One of the best features available earlier was the ability to find the articles a user had published in a specific publication. You don’t understand? Let me explain. My friend Ryan Fan publishes many articles in a week, so I may not view all of his stories. However, I don’t want to miss a post he publishes in the Koinonia publication. To find his Koinonia articles, I neither need to go through his profile nor do a Google search for “Ryan Fan Koinonia” (Google may provide results, but it is complicated, you know!) Screenshot by author So what did I do? I mostly applied a simple URL trick. I added the publication URL before his user id and hit enter, which showed me all the posts he published on Koinonia. My earlier URL:- medium.com/koinonia/@ryanfan. But now, since he has customized a sub-domain for himself, I will have a tough time finding him. Worse, Medium removed this URL trick from everyone's profile. It was a helpful trick, and I badly miss it now. I wish Medium could add this feature back. Talking about the good side, I like the new design. I like how individuals can make the profile their own. I like the way you can hide the top writer tag (especially for someone like me who obtained none in the last year). Yeah, I love the new design of the Out of Nothing Something Medium page, if you missed it. Medium Staff, are you listening? If you can hear me, I would suggest one more important feature to be added. I would want a personalized notification for my favorite writers. 
I know that, irrespective of distribution, you mail me the best posts from the people I follow on Medium. But I want something more. I do not want to miss stories from certain specific writers, and I want to click a notification bell on their profiles so that every time they post something, even if I am the only person to read that article, I get a real-time notification/mail and can read it without depending on your algorithm. While I still wish we could revert to the old URL trick, I hope Medium can manage some changes so I can search for my favorite writers by publication. Also, please add a notification bell to every profile. I have a few more suggestions but will write about them later.
https://medium.com/the-partnered-pen/your-customized-medium-sub-domain-profile-has-one-less-feature-now-ccf05d0a6b98
['Suraj Ghimire']
2020-10-16 16:31:55.954000+00:00
['Social Media', 'Change', 'Writing']
Google CoLab Tutorial — How to setup a Pytorch Environment on CoLab
3. How to Install a Python Library in Colab Colab has command line support, so simply add an exclamation mark ( ! ) to run Linux commands in a cell. You can simply run !pip install <package-name> to perform a pip install. So when you go to the official PyTorch installation website and choose the specifications of the PyTorch version you want to download, make sure you choose Linux, Pip, and Python, and then the CUDA version you want to install, and you will see a pip command line shown at the bottom: Then copy-paste that command into a Colab cell with the ! prefix. So, simply run !pip install torch torchvision in the Colab cell: Then after it runs, the package is successfully installed. Note: It seems that Colab has already preinstalled PyTorch for you, so if you run this command, it’ll tell you “Requirement already satisfied”. However, you can use the same method to install any other Python package through pip install.
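Since packages like PyTorch may already be preinstalled on the runtime, a small guard that only installs what is missing can save a redundant download. A minimal sketch in plain Python (the helper name ensure_package is my own invention, not part of Colab or pip):

```python
import importlib.util
import subprocess
import sys

def ensure_package(module_name, pip_name=None):
    """Pip-install a package only if it isn't already importable.

    Returns True if the module was already present, False if it had
    to be installed. `pip_name` covers cases where the pip package
    name differs from the importable module name.
    """
    if importlib.util.find_spec(module_name) is not None:
        return True  # already available (e.g. preinstalled on Colab)
    # Equivalent to running `!pip install <pip_name>` in a Colab cell.
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", pip_name or module_name]
    )
    return False
```

In a Colab cell, `ensure_package("torch")` would return True on a runtime where PyTorch is preinstalled, which is exactly the “Requirement already satisfied” situation described above, just checked before invoking pip at all.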
https://medium.com/analytics-vidhya/google-colab-tutorial-how-to-setup-a-deep-learning-environment-on-colab-bc5ab7569f02
[]
2020-06-29 13:07:47.516000+00:00
['Deep Learning', 'Colab', 'Jupyter Notebook']
Bayes’ Rule, Unreliable Diagnostic Testing, And Containing COVID-19
The 2020 Novel Coronavirus Outbreak | Thoughts on Probability and Statistics Bayes’ Rule, Unreliable Diagnostic Testing, And Containing COVID-19 How false-negatives in diagnostic testing are leading to the release of infected people, motivating extreme containment measures. The COVID-19 outbreak, explained with Bayes’ Rule. Wuhan coronavirus. Novel coronavirus. COVID-19. We are currently in February 2020. Over the past month, a deadly virus has been spreading throughout China and the world, sending the infected to the ICU and trapping others in their homes. As authorities try to manage this crisis, they face the challenging issue of containment — sending the infected to quarantine, while allowing the non-infected to go free. The Problem With Epidemics That Plagues The Authorities Here is the scenario. You have a cough and a fever. There is a chance that you have caught COVID-19 — the virus spreading throughout the world. You don’t know what this chance is, and you don’t want to take chances, so you seek advice from your doctor. The issue from the authorities’ perspective is different. You want treatment, but the authorities need to contain the spread of the virus. From their point of view, there are 4 main outcomes of your visit to the doctor. If you are infected and diagnosed with coronavirus, they will quarantine you for the public benefit. If you are not infected but you are diagnosed with coronavirus, they will wrongly quarantine you, causing you inconvenience. The public will suffer no major harm, but the authorities will have to expend a small amount of resources. If you are infected and not diagnosed with coronavirus, they will wrongly release you, causing you to spread the virus. This puts the public in grave danger of an outbreak. If you are not infected and not diagnosed with coronavirus, they will rightly release you and save some resources. The two mistakes that the authorities can make are scenarios 2 and 3. 
Scenario 2 is a minor inconvenience (if not done too often), but scenario 3 is the major issue which can cascade into a larger outbreak, even if only done once. If an outbreak occurs, they will have to do contact tracing for possibly hundreds of people, given the contagiousness and lethality of coronavirus. This will be extremely costly for them, so their primary interest is in minimizing the probability of the third scenario. Thus, the authorities need to make an accurate diagnosis, so that they can avoid releasing the infected and quarantining the non-infected. To achieve this, the authorities first make an initial assessment of all suspected infections, whether they are patients at the medical clinic, or travellers from places with active outbreaks. Initial Assessment of a Patient There are clues which will hint at a COVID-19 infection. Location: there will be different probabilities of infection for people living in different places. People near the epicenter are more likely to be infected. People living near popular tourist destinations are more likely to be infected than people from isolated places like rural Alaska. Travel history: travellers from places near the epicenter or other outbreak locations are more likely to have the infection (hence, why travellers are screened). Social contacts: people who have close contact with the infected/those at risk of infection are more likely to be infected themselves. Symptoms: people who have symptoms characteristic of coronavirus, such as fever, cough, and shortness of breath are more likely to be infected. Imaging: people who show imaging features of coronavirus on an x-ray or CT scan are more likely to be infected. Not all of these clues will be immediately available. Authorities at the airport can only screen people for temperature and identify travel history if needed. 
A doctor will know about the symptoms disclosed by the patient, but it is on the doctor to take the initiative to find and link the clues together. And an asymptomatic case may not have any clues. You go to the doctor, and the doctor asks you a series of questions. You tell the doctor about your cough and fever. The doctor is suspicious and builds their own belief of whether you are infected or not. Their belief is strong enough to justify a coronavirus test, so they order you to take this test. The Probability and Statistics Behind Diagnostic Testing After gathering enough clues, a doctor may suspect coronavirus. To confirm this suspicion, the doctor will order a diagnostic test. Misconceptions about what the diagnostic test result means A diagnostic test is performed by collecting samples from your body (eg. mucus in the back of the nose) and looking for presence of the virus in those samples. It seems simple enough, and people have a lot of faith in science. This may lead people into making this first mistake. Incorrect interpretation: a positive result means a patient has novel coronavirus, while a negative result means that a patient does not. This is not true, because the test is not always reliable. There are many reasons why a test may give a misleading result: A patient in the very early stages of an infection may not excrete a detectable amount of virus. The virus itself may only exist deeper inside the body, hence being inaccessible by a swab test. There may have been accidental contamination of the sample. In general, the people who make the testing kits will specify the reliability of the test. Suppose that a company now markets their test as “90% accurate”. This can lead to another common mistake. Incorrect interpretation: for a 90% accurate test, a positive result means 90% chance of being infected, and a negative result means 90% chance of not being infected. 
This interpretation is also not true, but it is actually surprisingly common in the medical community — the psychologist Gerd Gigerenzer has shown how doctors misinterpret the results of mammogram screenings. Bayesian probability explains what the diagnostic test really means The correct way to evaluate a diagnostic test requires thinking in terms of Bayesian probability. To put it simply, Bayesian probability involves having a prior probability and then using new information to update it. In terms of diagnostic testing, the prior probability is the doctor’s belief about whether the patient is infected or not. The test result is used as information, and this changes the doctor’s belief. We can formulate the reliability question in terms of math equations. Prior probability: the initial probability of infection, which is based on the doctor’s judgement. Posterior probability — how reliable is a positive result? The probability of infection is conditioned on a positive result; the positive result is information. Posterior probability — how reliable is a negative result? The probability of (no) infection is conditioned on a negative result; the negative result is information. What we need to do now is to connect the doctor’s initial belief with the final belief. To do this, we use Bayes’ rule — P(infected | positive) = P(positive | infected) × P(infected) / P(positive) — which can easily be derived using basic facts about conditional probabilities. The denominator expands as P(positive) = P(positive | infected) P(infected) + P(positive | not infected) P(not infected). We now have two new unknown probabilities. The probability of a positive result given that a patient is infected: this number should be high — infected patients should be getting positive test results. The probability of a positive result given that a patient is not infected: this number should be low — non-infected patients should be getting negative test results. These two probabilities are actually measures of a test’s reliability, and they can be expressed in terms of two quantities: sensitivity and specificity. 
Sensitivity: the probability of a positive result given infection. The test is “sensitive” to the presence of coronavirus: if the coronavirus is present, the test will detect it. Ideally, close to 100%. Specificity: the probability of a negative result given no infection. The test is “specific” to coronavirus: if there is no coronavirus infection, the test will not detect anything and returns negative. Ideally, close to 100%. I should remind you here that I can express probabilities in terms of complements. Infection and no infection are mutually exclusive. Similarly, a positive and a negative result are also mutually exclusive. This means that our calculations will use these two equations, where A and B are events such as “the patient is infected” or “the test result is positive”: P(not A) = 1 − P(A), and P(not A | B) = 1 − P(A | B). That is, the probability of A not occurring is one minus the probability of A occurring, and, given that B has occurred, the probability of A not occurring is still one minus the probability of A occurring. The probability of infection given a test result We now have all the tools we need to interpret a test result: the prior probability (initial belief), the posterior probability (final belief), Bayes’ rule (connecting initial and final belief), and sensitivity and specificity (allowing us to do computations using Bayes’ rule). Combining these, we get P(infected | positive) = sensitivity × prior / (sensitivity × prior + (1 − specificity) × (1 − prior)).
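As a sanity check on the reasoning above, here is a minimal sketch in plain Python (function names are my own) that plugs a prior, sensitivity, and specificity into Bayes’ rule:

```python
def posterior_given_positive(prior, sensitivity, specificity):
    """P(infected | positive result), via Bayes' rule."""
    # P(+) = P(+ | infected) P(infected) + P(+ | not infected) P(not infected)
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

def posterior_given_negative(prior, sensitivity, specificity):
    """P(not infected | negative result), via Bayes' rule."""
    # P(-) = P(- | not infected) P(not infected) + P(- | infected) P(infected)
    p_negative = specificity * (1 - prior) + (1 - sensitivity) * prior
    return specificity * (1 - prior) / p_negative
```

For a “90% accurate” test (sensitivity = specificity = 0.9) and a prior of 10%, a positive result raises the probability of infection only to 50% — not 90% — which is exactly the misconception discussed above. A negative result still leaves roughly a 1% chance of infection, and that residual false-negative risk is what drives the authorities toward extreme containment measures.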
https://towardsdatascience.com/statistics-and-unreliable-tests-coronavirus-is-difficult-to-contain-e113b5c0967c
['Andy Chen']
2020-11-20 13:09:26.201000+00:00
['Health', 'Statistics', 'Medicine', 'Mathematics']
The crumbling of the Californian Ideology: Technology Disruptors’ limited OS
The crumbling of the Californian Ideology: Technology Disruptors’ limited OS We don’t need rule-breaking tech founders anymore, and yet they don’t seem able or willing to change. Where do we go from here? ‘This new faith has emerged from a bizarre fusion of the cultural bohemianism of San Francisco with the hi-tech industries of Silicon Valley. Promoted in magazines, books, TV programmes, websites, newsgroups and Net conferences, the Californian Ideology promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich.’ The Californian Ideology, Richard Barbrook and Andy Cameron (1995) Twenty-three years have passed since the writing of the paper from which this passage is extracted, yet I can hardly think of a more apt description of the ideology collectively shared by most prominent Silicon Valley technology founders. Discussing its shortcomings hits close to home for me. While I am not one of them, several have nonetheless long been role models of mine, which, admittedly, is part of the broader problem I aim to describe. As we near the end of 2018, it is abundantly clear to anyone who has been following the news lately that the technology industry is in troubled waters, and has been for a few years now, with no signs of improvement. Technology companies are continuously clashing horns with regulatory bodies and apologising profusely to the public for what they paint as mere ‘mishaps’ that have already been long corrected. Just days before the writing of this article, a trove of internal Facebook emails was unsealed by British MPs, showcasing the predatory attitude of its executives. 
Yet, the problems plaguing technology companies are far more systemic and entrenched than we are being led to believe or are willing to admit, originating in the very DNA that was passed on to them by their founders. A deep dive into these technology founders’ psychology is long overdue, and I suspect we will collectively find that the patient cannot be saved. Dr Frankenstein’s Little Monsters Over the last decade or so, technology and startups strangely went from being dismissed as nerdy to being hyped as cool (though tables may be turning). Technology founders are being celebrated as heroes or artists, with Steve Jobs gathering a cult-like devotion. Consequently, the actual circumstances surrounding the birth of Silicon Valley have been largely obscured and replaced by an attractive romanticised narrative. Essentially, Silicon Valley would have begun with a bunch of rebels and renegades, driven creatives, who pulled themselves up by their bootstraps and ended up creating new empires. This is an inaccurate, or rather incomplete, reflection of reality. The Bay Area was always technology-driven, even before the rise of the personal computer and later software. It was involved in the development of the telegraph and radio in the late nineteenth and early twentieth centuries, which made the region a significant military research and technology hub. It also subsequently received significant public investment in the post-war era for the production of semiconductors crucial to winning the Space Race against the Soviet Union. These prior events laid the groundwork to establish Silicon Valley as a technology powerhouse, which was later truly launched by the establishment of strategic partnerships between local universities and private companies in the 1970s. 
The cooperation of all local actors around the single mission of advancing technology was extremely powerful because it led to the development of a cluster, benefiting from strong network effects and preventing others from matching its strength. Infrastructure and cooperation are no doubt crucial, but remain only part of the story. After all, technology was, and still is, developed by humans, whose ideas, skills, and creativity are crucial. In the 1970s, California was also already an intellectually and culturally fertile land, where opposing ideologies clashed. On the one hand, it was home to a prominent and dynamic counterculture movement, especially in San Francisco’s Haight-Ashbury district. The hippies were at their zenith, advocating for a social and cultural revolution, as well as opposing the Vietnam War. More than anything, they argued for an alternative approach to life and relationships, their ideals echoing a social-anarchist utopia. On the other hand, Ronald Reagan, before he became the fortieth President of the United States, was the Governor of California at the time. Reagan is of course known for championing individualism and trickle-down economics, appropriately labelled ‘Reaganomics’. While Reagan played a key role in popularising and putting into practice these economic theories, he was not their father, as they were being actively studied and promoted by the Chicago School, and in particular Milton Friedman. Amid the conflict that raged between these two opposing ideologies, culminating in 1969 with the People’s Park protest, a sort of third way was born. Some of those wanting change were not viscerally opposed to using technology as a way to progressively establish their ecological, egalitarian, libertarian pipe dream.
Prompted by technophile local media like Wired (which had a markedly different editorial line at the time) and sci-fi pop culture, these people became convinced that they were embarking on some grand mission to save the world. And sure enough, give highly creative and capable individuals a pseudo-mission, leave them unencumbered thanks to laissez-faire policies and continuous economic growth, and you eventually get modern Silicon Valley some 50 years later, for better or worse.
Childish, Petty, and Naïve Demigods
In a recent interview, Peter Thiel, of all people, warned against the potential pervasiveness of network effects in clusters, which ultimately drove him out of Silicon Valley. While the concentration of world-class entrepreneurs, universities, companies and investors in a restricted area has been a driver of technological innovation, it has also led to the formation of a real-life bubble. Technology workers tend to have rather identical opinions and beliefs, and may often, especially those at the top, lack an appreciation of the consequences the products they make have on society. A recent survey by the New York Times (2017) showed that a majority of technology founders advocate for redistribution and social programs, but are against regulations. They likely share the same naively optimistic and deterministic outlook on the future as their predecessors, probably repeating to themselves in front of the mirror every morning that they are ‘making the world a better place’. All too often, they see government and institutional regulations as largely inefficient and ineffective, slowing down progress. It is not in itself problematic that a group of people holds strongly felt, almost dogmatic, possibly wrong opinions. The problem is rather that they find themselves to be some of the most influential individuals in the world today, as a result of their past and current successes. When we consider forms of influence, most of us would likely think first about money. Money matters.
Jeff Bezos’s net worth topped US$134.7 billion as of the writing of this article. Over the years, it has enabled him to buy the Washington Post, donate to political campaigns and causes of his choice, and, through Amazon, invest in lobbying. This has undoubtedly, if indirectly and in ways difficult to quantify, influenced the political sphere. But there are other forms of power to which we tend to pay less attention because they are more elusive and harder to regulate. Products and companies, especially successful ones, inevitably shape the society within which they exist, influencing politics, the economy, culture, and even people’s psychology. Finally, technology founders, because of the popularity of their products and their effective PR campaigns, have also attracted significant fan bases. Elon Musk’s 23.6 million Twitter followers hail him as a modern-day Jesus and bully his critics, conferring on him formidable power in spreading his ideas. Here lies the crux of the problem: Silicon Valley founders have proved themselves fundamentally unable to handle their new powers. To put things in a language they will understand, as Spider-Man’s Uncle Ben once said, ‘with great power comes great responsibility’. In the grand scheme of things, it did not matter what Mark Zuckerberg did with Facebook when it was used only by a couple thousand college students; now that it has over two billion users, it is pretty much breaking democracy. For a long time, we collectively excused, and even enabled, technology founders’ childish and irresponsible behaviour, disrupting existing norms and breaking rules, because overall it led to a positive outcome: the birth of new technologies. However, because of the scale at which their products are now being used, this is no longer tolerable. They went from being considered underdogs to being cast as the main villains, yet fail to ask themselves why, which is precisely the problem.
How to REALLY kinda’ Save the World
So if you have made it this far down this article, you may legitimately be asking yourself ‘so what? Where do we go from here? Is there an alternative?’. And I want to start with a controversial and somewhat hyperbolic argument: most of these people cannot change, partly because they do not know how, but also possibly because they do not want to, or do not see the need for it. It is important that we all acknowledge that they have collectively done humanity an enormous service by advancing us into the digital age, which promises to solve many important issues. But at the same time, it is also fair to point out that perhaps we do not need these people anymore. We have passed the ‘installation phase’ of the current digital revolution and are now entering the ‘deployment’ one, as described by Carlota Perez. We no longer need driven, risk-taking disruptors but rather level-headed leaders who are able to heed the broader impact of their actions on society. In some ways, this process is already under way, because this illicit behaviour and the repeated scandals it inevitably produces are bad for business. Each in their own way, board members, investors, and consumers punish founders who fall short of what is expected of them. Uber’s decision to keep operating, with surge pricing switched off, during a taxi strike against the ban on refugees and immigrants from certain countries entering the United States prompted many users to delete the app from their phones. Similarly, Travis Kalanick was forced to resign under pressure from Uber’s board following repeated controversies surrounding his unethical and inappropriate behaviour. This movement will likely continue and gain even more traction in the future, with people paying increasing attention to how the products and services they use are made. However, it is not enough to wait for disruptors to show themselves unable to rise to their responsibilities, and later be forced out of the spotlight in disgrace.
While companies’ executives are largely responsible for what goes on, they may not always be able to change things, as Dara Khosrowshahi is currently finding out about Uber’s toxic culture. Indeed, even if it were possible to remove Mark Zuckerberg from his CEO position, it seems to me that little could be done to fix Facebook’s issues short of breaking its core business model. This brings me to my second point: we need to take immediate action in implementing effective oversight and regulatory frameworks over technology companies, even if it means curbing innovation somewhat and incurring a loss of economic value in the near future. One of the most needed reforms is the abolition of dual-class stock structures, which assign different voting rights to different classes of shares. For instance, while Mark Zuckerberg owns well under a fifth of Facebook’s stock, his super-voting Class B shares give him roughly 60% of the voting power, effectively allowing him to run Facebook however he sees fit within the boundaries of the law. More generally, there needs to be greater transparency about what goes on inside companies, as well as the provision of their data to government branches and independent watchdogs, to better understand how mass consumer products are made and what effects they have on their users and society. Furthermore, additional policies should be enacted depending on specific companies and regions, so as to avoid generalisations, which tend to have unintended consequences; this is particularly relevant for antitrust issues. The new economy requires new mental frameworks and new methods. Finally, we also need to prepare the next generation of leaders for the digital age, which comes down to education. Current policy makers and legislators have a long way to go to reach technological literacy, partly because of the generational gap that exists, which will eventually resolve itself, but not soon enough.
Their successors should dedicate a large portion of their studies to better understanding technology, as it will surely be the cause of many of their long sleepless nights at the office. Simultaneously, future technology leaders and executives need to be given better training in the social sciences and humanities. Unlike some pundits, I do not think that all developers need to be reading philosophy, just as not all philosophers need to know how to code. Do not misunderstand: both groups would benefit from studying each other’s discipline, but it is not absolutely necessary for society. On the other hand, the future leaders of companies, and even more so of technology companies, need to be better than their predecessors at understanding the effects that their companies, products, and actions have on society. The humanities and social sciences should help them escape their bubble and benefit from the compound knowledge of the smart people who came before them. There are sometimes reasons for rules and norms; not all deserve to be broken. I am no Luddite. If you can believe it, I am a technology optimist and wannabe start-up entrepreneur. But for technology to once again be a force for good, change is needed. It is time to grow up.
This post was written as an assignment for the course History of Technology Revolution at Sciences Po Paris. The course is part of the policy stream Digital & New Technology of the Master in Public Policy and is instructed by Laurène Tran, Besiana Balla and Nicolas Colin.
https://victorcartier.medium.com/the-crumbling-of-the-californian-ideology-technology-disruptors-limited-os-f60f4d2b5831
['Victor Cartier']
2018-12-10 18:21:40.709000+00:00
['Politics', 'Startup', 'Technology', 'Tech', 'History']
Confronting The Pull-Up Bar Down To The Bones
Tom walked up towards the terrace; it is where his pull-up bar is. On his walk up the stairs towards the terrace, he saw them, the health-obsessed neighbors on the nearby terraces — the fitness-freak youngsters, the elderly following their doctor’s death threat, and mindful addicts who had managed to swap their addictions. They had all been there doing the same for weeks or months — if not years. Tom had only begun his practice (walking up towards the terrace and standing beneath the pull-up bar was all he could do) some days before. Actually it took him a while even to start getting up at 6:00 AM and to stabilize it, and then some more days to get himself to walk towards the pull-up bar on the terrace. As usual, on that day, Tom stayed there beneath the pull-up bar for some time, staring at the bar. And then walked back down the stairs. The neighbors silently made fun of him for not doing anything other than staring up at the bar. Some even took great pride — thinking that they were far ahead of and better than poor Tom. Some even took pity on him while busily running on the treadmill and at the same time sending him flying loving-kindness kisses. Being lost in their minds, busy making fun and pitying, they were missing a gut-wrenching spiritual lesson that was being demonstrated by Tom. Not that he wanted to demonstrate it, nor did he want others to take that path, but it was available for those who cared. Little did they know that he was confronting his inner demons while beneath that pull-up bar. He would not jump onto the bar for fear of the self-defeating, blackmailing voices in his head, or for the praise of neighbors. He could not fool himself that just because he managed to jump onto the bar he would earn the entitlement not to face up to his demons — that would only be postponing the inevitable confrontation. He was neither fleeing nor fighting, in that sense, but staying, unflinchingly and patiently.
He was demonstrating organic growth, the long-cut, not the short-cut available in self-help manifestos, which were mostly childishness masquerading as growing up. He did not want to take the advice of ‘do it anyway’ or ‘do something and repeat it blindly’, until it became another dull habit to boast of to others as “I blindly did it and you should too”. Tom was not just staring at the pull-up bar, but at: weakness of mind, laziness, procrastination, expectations, fear of failure, shame, embarrassment, etc. He was seeing them and studying them for what they were. So that when he finally closed his palms around the iron bar, there would not be any distractions; instead he could witness his own true strength and cultivate it day by day in an organic way. He would get to defeat his enemies day by day, and not postpone the inevitable confrontation with them for some clever idea of ‘do something’. The emotional stability and spiritual progress that he acquired through confronting his daily pull-ups could extend to other life challenges too, in the same way. It was his way of demonstrating true courage and humility, not the courage born out of fear and arrogance. One day, when he walks up to that iron bar, he will have the strength of every brave prisoner who did not hesitate or flinch while walking towards the rope waiting for them to be hanged.
https://medium.com/spiritual-secrets/confronting-the-pull-up-bar-down-to-the-bones-fbb461c261cd
['Pretheesh Presannan']
2020-09-28 07:57:21.223000+00:00
['Spirituality', 'Spiritual Secrets', 'Short Story', 'Mental Health', 'Fiction']
How to Accomplish Incredible Things According to an Old Chinese Adage
The expression 水滴石穿 (Shui Di Shi Chuan) comes from this story. The characters mean, in order, “water”, “drop”, “rock”, and “penetrate”, giving the meaning “water drops penetrate a rock”. With enough time, something as small as a drop of water can cut through hard rock. This is what we now often call the compound effect.
Small Things Add Up
The compound effect follows a simple principle: repeat the same small things enough times to create results. As Darren Hardy writes in his book of the same name, “Success is doing a half dozen things really well, repeated five thousand times.” In high school, I spent a year watching TV series with a purpose: bringing my English to fluency. I watched a few hours a day and up to 8 hours on weekends, slowly eliminating subtitles. After thousands of hours of videos, watching without subtitles stopped being tiring and became easy. When I started learning the trumpet, I spent years focusing on my breath and my lips’ position on the embouchure. I spent the following decade never having to focus on it again. It had become automatic and allowed me to reach both higher and lower notes. If you were to save 1 dollar a day, you’d have 3,650 dollars in 10 years without ever feeling a burden. If you were to make one origami a day, by the time you can make a wish, at the 1,000th one, the precision with which you use your fingers will have increased manifold. To get on someone’s nerves, doing something grand doesn’t work that well. Bother them a bit every day and, before you know it, they’ll despise you like a pest. I wouldn’t advise it but, hey, you do you. If you spent 10 minutes a day learning a new skill, you would reach, in 120 days — about 4 months — the 20-hour mark Josh Kaufman says is enough to reach a comfortable level in any skill. Spend 20 minutes a day and it’ll take you only 2 months. Spend 5 minutes a day and it’ll take you about 8 months.
As for the average “expert level” that is said to take 10,000 hours of practice, 10 minutes a day would not get you there in a lifetime — even at a full-time pace of 8 hours a day, it takes roughly three and a half years. Small daily doses buy you competence; mastery demands bigger ones.
Find the Smallest Common Denominator
There are things for which it’s easy to find the smallest common denominator and others for which it isn’t. If you want to save money, you know $1 is the smallest possible amount. If you want to walk more, you know a 1-minute walk is better than nothing. To grow your vocabulary, one more word a day is more than none. But what about more general tasks? Learning a language, how to dance salsa, how to run a marathon, how to make incredibly complicated origami — all are harder to reduce to simple tasks. To do this, you need to dissect each into sub-categories. Let’s take learning a language. You’ll need to know how to speak, write, and understand both what you read and what you hear. You’ll need to understand the grammar, vocabulary, conjugation (if there is any), how to distinguish the gender of nouns (if there are genders), and so on. Then, reduce each further. To practice your listening, you need to expose yourself to the language. How about a 5-minute podcast? To increase your vocabulary, you need to encounter new words. How about a daily 1-min read? You can even use it to practice new grammar patterns. To improve your writing, you need to practice it. How about writing 50 words a day on a platform like Journaly and getting feedback? Then focus on these small tasks day after day. Time will do the rest.
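The back-of-the-envelope arithmetic above is easy to check for any daily dose. Here is a minimal Python sketch (the helper name `days_to_target` is my own, not from the article) that converts a target number of practice hours and a daily dose in minutes into the number of days required:

```python
import math

def days_to_target(target_hours: float, minutes_per_day: float) -> int:
    """Days needed to accumulate target_hours of practice
    at minutes_per_day of practice per day, rounded up."""
    return math.ceil(target_hours * 60 / minutes_per_day)

# Josh Kaufman's 20-hour "comfortable level" target:
print(days_to_target(20, 10))  # 120 days (about 4 months)
print(days_to_target(20, 20))  # 60 days (about 2 months)
print(days_to_target(20, 5))   # 240 days (about 8 months)
```

Plug in any skill and any daily budget you like; the point of the compound effect is that the daily number stays small while the total quietly accumulates.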
https://medium.com/skilluped/how-to-accomplish-incredible-things-according-to-an-old-chinese-adage-37b0d9dcce6
['Mathias Barra']
2020-11-19 01:37:20.852000+00:00
['Education', 'Self Improvement', 'Productivity', 'Life Lessons', 'Learning']
How I Nearly Burned Out as a Leader (and Why I’m Not Alone)
I sat in bed rubbing my eyes, trying to wake up. The clock read 6:29 am. Europe was well into its day. After navigating a tricky conflict the day before, my energy tank was dangerously low. I longed to stay in bed, but five urgent issues loomed in Slack. I lumbered out of bed and slipped into a pair of pants and a nice sweater, forcing mental exhaustion to the back of my mind. Pushing back my own concerns was just part of leadership. This was one morning, but it could have been any number of them as a leader. Emotional fatigue was a daily occurrence, so I thought nothing of it at the time. When I left the company, I was closer to the edge of burnout than I’d ever been. The impact was total: my physical, mental and emotional health were all compromised. I couldn’t work full time for several months. I even contemplated leaving the industry altogether. Eventually I found my way back to full health and excitement for my work. Still, I wondered how I’d gotten so close to the burnout edge. I knew leadership would be hard — but not in the way it happened. The long hours were hard, managing my emotions far harder. I didn’t anticipate how often I’d have to manage my feelings, how many times I’d face internal conflict or how much power dynamics would play into my work. As a boss, your job means you have to put your own feelings aside in service of the team or larger mission. People often blame leaders for things they have no control over. Being conscious of power dynamics can make it hard to push back when an employee blames you instead of looking at their own behavior. When frustrations rose, they often spilled over onto me — some people even raising their voices or behaving demeaningly. Under stress, some of us lash out at others, even if unintentionally. Staying calm while someone projected their frustration was challenging, especially when I couldn’t share details. I set boundaries, but sometimes I wonder if I did enough for my own well-being.
I only realized much later that emotional exhaustion had pushed me to the brink of burnout. I’m grateful I didn’t fall completely through the trap door.
The Hidden Cause of Leader Burnout
When we think of bosses, we think of autonomy, decision making, directing strategy, and the power they hold. On the negative side, we think about the long hours they work. Becoming a workaholic is a risk factor for burnout, but there’s another, often-overlooked factor: the burden of emotional management. Managing emotions is essential for leaders. It’s the ability to know when to listen and when to speak, to stay calm during a conflict, to be fully present despite your own concerns. A large portion of a leader’s job is communication, collaboration and influencing, which also means spending most of their time regulating their feelings. Managing your emotions is exhausting — it’s why we treasure those friends we can just be ourselves with, without having to perform an act. While we all feel on a stage at work, pressured to act in a certain way, this is even more intense for leaders. When it comes to emotional labor, we often think of employees taking on the burden, but leaders carry plenty of the load too. It’s not that only leaders perform emotion management — all workers do to some degree — it’s that we overlook it as a source of leadership stress. We have high expectations of leaders, not just in business performance, but in how they respond to challenges emotionally. The stakes are greater, the collaboration needs higher, the pressure more intense, and they must support others while having fewer avenues for support themselves. We expect them to be calm and collected, regardless of what’s going on around them. They face disappointments and losses, and are affected by changes in the external environment just like anyone else — all of which takes effort to manage. And like any sort of investing, compound interest builds up.
Over time the emotional burdens pile up in a mountainous heap, threatening a leader’s well-being and setting the stage for burnout.
Emotional Demands That Push Leaders To Burnout
Always On
Navigating delicate conversations, a calendar full of meetings, and a stack of deliverables meant I barely had time to eat or breathe during the week. Taking time away from the team felt like a luxury I couldn’t afford. I had to fight to find time to work on strategic projects — the ones that made a big difference for the team. My busy schedule and sense of commitment to the team meant I had few places to just be myself, without pressure or a need to perform. This left me emotionally depleted. Running full out all the time, with little time to take care of their emotional state, is typical for most leaders. Leaders are stretched beyond their limits, especially in an uncertain and rapidly shifting environment. Days jam-packed with Zoom meetings, combined with ever-present communication on platforms like Slack, mean leaders are always on a stage. Zoom calls exhaust everyone — leaders maybe even more so, given the density of their schedules. Watching what you say, how you say it and even how you arrange your face is tiring. Most leaders have little downtime, which can lead to high levels of self-monitoring. Just like a yawn, moods are contagious. As the role models of the organization, leaders can see their mood rub off on others. Knowing this, they often feel like they have to be a rock for the team, no matter what’s happening personally or professionally. Expected to have answers and be a support for their team, leaders can feel like they have to wear a coat of calm, no matter how uncertain or anxious they feel. When they’re down or uncertain, they might mask their emotions. This can help the team’s mood but can make them feel inauthentic. Masking emotions all day can eventually turn into self-alienation and even burnout.
Navigating Others’ Perceptions
I knew I’d have to make unpopular decisions; what I didn’t expect was how much others’ perceptions would affect me. After one particularly thorny decision, I was told I didn’t care about others and maybe shouldn’t be in a leadership role. What they didn’t know was that it wasn’t my decision, and that jobs were in peril if the situation was ignored. Confidentiality and legal matters meant I couldn’t explain myself. In the absence of information, they filled in the blanks. Despite the anger being hurled at me, I had to stay calm and listen rather than defend myself. I managed my emotions so the conflict didn’t get out of hand and the team felt heard, but the cost to my mental well-being was enormous. We all want to be liked or, more importantly, understood. New leaders frequently underestimate how much the team’s response to an unpopular decision, or being gossiped about, might upset them. When facing loss and uncertainty, some people lash out in an attempt to regain a measure of control. While it’s human nature, it’s challenging to be on the other side of that equation — especially when the assumptions are incorrect, you can’t defend yourself and you worry any show of emotion will escalate an already tense situation. Leaders often have to present themselves in ways that aren’t congruent with how they feel. Dissonance between inner and outer states can be very stressful. Feeling misunderstood while making impossible decisions means managing agitating feelings. Constantly having to manage emotional dissonance and agitating feelings is linked to burnout.
Resolving Ever-Present Conflict
I don’t seek conflict, but I have never been one to shy away from it. Conflict can root out hidden assumptions and lingering issues, and bring clarity. When I stepped into my role, I discovered several unresolved conflicts between teams and individuals. Some resolved easily; others were so ingrained that even loosening the knot was tough.
Though I’m pretty good at conflict resolution, I hadn’t anticipated how much mental effort it would take. While my training as a coach certainly helped, there were days when having to stay calm and not react to tricky conflicts drained me. Conflict resolution is a massive part of leadership. It’s not just resolving the conflict that matters; it’s how it gets resolved. Leaders can’t play favorites; they have to set their own emotions aside, be open to all sides and make sure everyone feels heard. Navigating conflict requires emotional regulation, especially from the leads tasked with resolving it. Being able to regulate fight, flight or freeze responses takes effort. For instance, when someone attacks unfairly, it takes discipline to stay calm and not fight back with criticism. If someone avoids the conflict by completely disengaging, leads expend effort figuring out how to get them to re-engage. Conflicts arise from differences in opinion but also from negative habits. Toxic behavior drags everyone down, and managing it can be utterly exhausting. Dealing with negative habits adds a tremendous load to an already full plate, making leaders susceptible to emotional fatigue, a burnout precursor.
Avoiding Emotional Burnout
Find ways to relieve pressure
Taking vacations is critical but not sufficient. Leaders must build regular downtime into their schedule. Take an hour each day — first thing if you can, though any time of day will do. The key is reducing the pressure to manage emotions for others. Building a rich life outside of work can provide a place where leaders can free themselves from performative demands.
Learn not to take things personally
No one likes to be falsely accused of something; it’s stressful for everyone. Leaders have to find a way to manage the emotions that arise from false perceptions they can’t correct. Learning not to take it personally can decrease the amount of others’ emotions they absorb, reducing emotional strain.
Connect with other leaders who are navigating the same situations — it will make you feel less lonely. You might even pick up some tips on how others handle these tricky situations.
Create an operating system for yourself
When I found myself exhausted, I did an assessment of my time. It helped me discover that the constant conflict one team member found themselves in took five hours of my week. Look for the sources of emotional fatigue, especially around conflict, and develop systems for handling them. Navigating defensiveness, emotional triggers and negative emotions like fear takes incredible energy. Create an operating system for yourself to keep you grounded. Make sure to include mechanisms designed to manage emotional stress; these levers allow you to emotionally rejuvenate.
https://medium.com/swlh/how-i-nearly-burned-out-as-a-leader-and-why-im-not-alone-2d54933a937e
['Suzan Bond']
2020-11-09 05:05:04.076000+00:00
['Management', 'Startup', 'Leadership', 'Careers', 'Work']
A Writer’s Guide to the 2 Most Popular Story Archetypes
Steve Jobs told three stories in his 2005 Stanford University commencement address, each employing the Hero’s Journey story archetype. Dr. Martin Luther King told just one in his famous 1963 ‘I Have a Dream’ speech from the steps of the Lincoln Memorial in Washington DC, employing the archetype The World the Way It Is/The World the Way It Could Be.
Know your heroes
Although I know several great writers, I know very few great storytellers. Because a story is a particular type of writing that emphasizes an emotional arc of challenge, personal growth, triumph or tragedy, writing a story is a different experience (for the writer) than journalism, technical writing, or ad copy. To create an emotional arc for the characters, many writers must experience the negative emotions for themselves, and this is why great storytelling is so difficult — because it requires the storyteller to be vulnerable in ways that most writers are unwilling to experience. Nonetheless, there’s lots of great advice for story writers, and most of it you’ve heard already, whether it’s “good stories always have conflict” or “torture your protagonist.” For example, one of the most successful groups of storytellers is at the computer animation movie studio Pixar, and they’ve published ‘22 Rules for Storytelling,’ in case you want to know how they do it. But none of Pixar’s rules describe the essential difference between the two most popular story archetypes.
Hero’s Journey
One of the most famous and powerful story archetypes has been described by Joseph Campbell as The Hero’s Journey. It’s the topic of several good articles on Medium, but for the sake of convenience, I’ll recap it briefly here. There are two “worlds” in the Hero’s Journey: the familiar and the unfamiliar. The “journey” describes the Hero’s departure from the familiar world, into the unfamiliar, and back again.
In the archetype, the Hero is motivated to leave the familiar world by a Call to Adventure. According to the archetype, the Hero declines the first invitation, and the effect is to increase the attachment between the audience and the characters, and to increase the dramatic tension in the story. For example, the most popular instantiation of the Hero’s Journey is Star Wars, and in particular Episode 4 — the original Star Wars movie. This is the scene that represents the Call to Adventure. True to archetype, Luke declines the Call. In the Hero’s Journey, it is not until the stakes are raised and the Call becomes overwhelming that the Hero will commit. In Star Wars, that moment is found here: after Luke is separated from his adopted family by the destruction of his familiar world, he has little choice but to accept the challenge of the unfamiliar. There, he will be tested and he will despair. According to Steven Pressfield (Nobody Wants to Read Your Sh!t, 2016), all great Heroes reach a moment in the story he calls “All is lost.” It is the emotional nadir, at which the Hero’s failure seems certain and death is imminent. In a great story, the Hero will receive the aid of a Mentor, overcome the trials of the unfamiliar world, and eventually return to the familiar world a changed (and better) person. The key to the Hero’s Journey is to create a Hero who is likable and identifiable to the audience. When they identify with the Hero, the audience lives the adventure vicariously through him, and experiences the same emotions and triumphs that the Hero experiences — albeit without the possibility of existential threat. Thus, the Hero’s Journey archetype gets its power by tempting the audience to consider that the possibility of personal growth may exist inside themselves, without actually challenging the audience to take the risks that only a true Hero must take.
The World the Way It Is/The World the Way It Could Be

There is another powerful story archetype called The World the Way It Is/The World the Way It Could Be. It also creates two different worlds — one familiar by the fact that it is the current experience of the audience, and the other unfamiliar in that it springs from the imagination of the storyteller. The most popular application of this story archetype may be Dr. Martin Luther King Jr.’s famous speech I Have A Dream. Listen to how Dr. King describes the current condition of the American Negro “100 years later” — after President Abraham Lincoln signed the Emancipation Proclamation freeing the slaves. His description of the World the Way It Is for the Negro is “a lonely island of poverty in a vast ocean of prosperity… (in which) he finds himself in exile in his own land.” In his poetic verse, “Now is the time…” he compares the World the Way It Is to the World the Way It Could Be when he says, “… to rise from the dark and desolate valley of segregation to the sunlit path of racial justice.” He returns to The World the Way It Is in his cadence “We can not be satisfied… as long as the Negro in Mississippi cannot vote, and the Negro in New York City believes he has nothing for which to vote.” And finally, he completes the transition to The World the Way It Could Be in the most famous phrases of his speech: I have a dream that one day this nation will rise up and live out the true meaning of its creed, “We hold these truths to be self-evident, that all men are created equal.” I have a dream that one day on the red hills of Georgia, sons of former slaves and the sons of former slave owners will be able to sit down together at the table of brotherhood. I have a dream that one day even the state of Mississippi, a state sweltering with the heat of injustice, sweltering with the heat of oppression, will be transformed into an oasis of freedom and justice.
I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character.

In Dr. King’s dream, there is only brief mention of the World the Way It Is (e.g., “sweltering with the heat of injustice”) in favor of his description of the World the Way It Could Be.

Who Is the Hero?

The principal difference between the two story archetypes is the position of the Hero. The Hero’s Journey is effective when the audience identifies with the Hero, and travels the Hero’s emotional trajectory vicariously through them. By contrast, The World the Way It Is/The World the Way It Could Be is effective when it challenges the audience to become the Hero themselves. Notice that Dr. King prepares his audience for his Dream by acknowledging that his audience is, at the moment, already in the unfamiliar world of Washington DC. Thus, they have already accepted the Call to Adventure. They need only return to their familiar world as changed men and women to complete the Hero’s Journey. He challenges them to do just that when he says: Go back to Mississippi. Go back to Alabama. Go back to South Carolina. Go back to Georgia. Go back to Louisiana. Go back to the slums and ghettos of our Northern cities, knowing that somehow this situation can and will be changed.

Autobiography and the Hero’s Journey

Contrast Dr. King’s Dream with Steve Jobs’ famous commencement speech at Stanford University, in which he “tells three stories from my life.” His first story describes his separation from his biological Mother, who put him up for adoption, and his rejection by the first couple who had selected him. As he describes it, his decision to drop out of Reed College was the moment he left his familiar world of the parents who eventually agreed to take him, and who supported him in his biological Mother’s mandate that he should have a college education.
He emerged from his adventure a changed man, and as a consequence of his adventure, typesetting and word processing on computers were changed forever. His second story describes the second most important rejection of his life, being fired from Apple Computer. He describes the experience as “devastating,” until “something slowly began to dawn on me.” He decided to start over, and “enter one of the most creative periods of my life.” Again, having been forced from his familiar world, he accepted the Call to Adventure to enter into the unfamiliar world of NeXT and Pixar, overcome the challenges there, and eventually return to the familiar world of Apple Computer a changed man. As a consequence, he (again) changed the world of computing forever. His third story has no such triumphant return. It is an example of the Hero’s Journey in tragedy, although he doesn’t tell the story in that way. At the time of his speech Jobs was living with pancreatic cancer. Although he was convinced that he had overcome his fatal diagnosis and returned to the familiar world a changed and healthy man, the cancer killed him six years later. Although Jobs gives direct advice to his audience when he implores them “don’t waste (your limited time) living someone else’s life… follow your heart and intuition, they somehow know what you want to become…” there is little in Jobs’ speech that suggests a Call to Adventure specific to the graduates. The story is moving, not because we have experienced his rejection, his impoverishment, or his despair. We only imagine these. And because we admire Jobs and we sense just enough of ourselves in him to identify with him, we experience the powerful emotions of his journey without having to accept his risks ourselves. Therein lies the difference between the two most popular and powerful story archetypes. At its best, the Hero’s Journey entertains us, while The World the Way It Is/The World the Way It Could Be inspires us.
https://medium.com/storygarden/guide-to-popular-story-archetypes-592b34802b02
['Thomas P Seager']
2020-12-27 12:46:20.956000+00:00
['Martin Luther King', 'Storytelling', 'Steve Jobs', 'Heros Journey', 'Story Archetypes']
How we build products at Everoad. Part deux: Planning.
Part deux. We plan together. And commit.

Planning is probably the most challenging exercise that exists for a Product leader. It requires an outstanding synthesis capacity to translate hundreds of insights into a couple of projects. It implies strong leadership to challenge teams on their priorities, then sell a plan once it has been locked in. And it definitely demands grit — in the end you will be judged on whether you managed to deliver your plan. I’ve sat through dozens of planning exercises. And I still come out of each asking myself: ‘have we been inclusive enough?’, ‘are people really bought in?’ or ‘are we investing in the right things?’ … Spoiler alert — this article won’t solve for your insecurities. You probably should continue asking yourself these questions, because they are fundamental to healthy product operations. What this article can help with, on the other hand, is decreasing the risks stemming from poor planning.

Our quarterly planning cadence

Breaking down one strategy into many

Planning should always start with an overall strategy. A product plan without a clear company strategy is like a Guns N' Roses tune without any Slash solo. Pointless. This is why ours always starts with the overall mission, big rocks and objectives that are set in collaboration with the leadership team. But because building a centralized plan is close-to-impossible, especially in hyper-growth environments, we decided to let teams plan independently. Let me take a step back here. Our team is organized around Programs. Programs are conceptual territories designed to foster fast execution while ensuring tight alignment with their business counterparts. Each PM in our team owns a Program like Shippers, Carriers, Marketplace and so on.
This allows them to partner with the relevant people in our company (our PM on Shipper works closely with our entire Sales team while our PM on Money interacts daily with our CFO) while remaining laser-focused on our users. During Planning, each PM is tasked with crafting a strategy for their Program, leveraging their backlog insights to come up with priorities. Instead of ending up with one messy centralized strategy, we end up with several — yet nicely sharp and focused.

Moving from insights to priorities

To write their strategy, our PMs simply tap into our backlog to identify the highest rated insights and translate those into priorities. If you remember from Part one, our backlog is structured around our Programs. This allows us to store insights at scale, in a crazily Marie-Kondō-organized way, thus accelerating priority identification. But because Planning is — and always should be — a collaborative exercise, we confront our findings with our business counterparts to make sure that we’re truly aligned behind the objectives we are pursuing and the level of priority each item has been given. Think of it as LeBron & Davis partnering to chase this year’s playoffs ring. Obviously, it would be a bit painful for us to review decentralized strategies, so we push teams to summarize their findings and list their priorities in a unified plan. This is what our Tech leaders eventually use to finalize Planning through costing.

Our Program strategies feeding into our unified plan

Costing, or the ‘best worst’ way to size your investment?

An essential piece of our puzzle is costing. Our plan couldn’t make sense if we weren’t relying on data. This is why we spend a lot of time trying to size the cost of each item in a really granular way. Our assumptions are quite simple. We have a theoretical capacity (we use working days but you can use another proxy) that we spread across several working blocks: Strategy, Definition, Design, Build & Launch.
For each role (Product, Design, Engineering) we allocate a percentage of their time to each workstream. As a result, we know exactly how much investment capacity we have on each workstream. Then, we simply estimate the cost of each item, across all dimensions. And the idea is to fit in as many projects as possible in priority order, starting with P0s, then P1s and so on… To be honest, we’re not so happy with this process and we don’t think we have cracked the ‘costing’ challenge yet. This is definitely better than nothing — it gives us a rough idea of our investments, while helping teams better understand how we work. When moving into the building phases of projects (stay tuned for Part three) we refine that estimation to provide teams with an accurate view of planning. The more our organization grows, the more time PMs & Designers will be able to spend grooming insights super early on together with engineers. As such, instead of doing long hours of costing during each planning exercise, they’ll be able to leverage insights that have already been groomed and costed.

Translating our plan into an operational roadmap

So far, our whole plan has been static. We need to translate that into time. Our team calls this ‘Tetris’ scheduling, though I sometimes feel this looks more like a giant uneven Rubik’s cube. We need to figure out when to work on each item in a way that makes sense from a resource point-of-view. This can be quite tricky as we have multiple teams working on multiple workstreams with multiple constraints (dependencies, holidays, development plans, etc.). As such, the more you can break down your team into teams, the easier this exercise becomes.
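The “fit in as many projects as possible in priority order” pass can be sketched in a few lines of Python. This is an illustrative toy with made-up project names, priorities, and costs, not Everoad’s actual tooling:

```python
# Toy sketch of a priority-ordered capacity fill: sort the backlog
# by priority (P0 first), then take each item whose cost still fits
# in the remaining capacity, expressed here in working days.

def fill_plan(items, capacity_days):
    """items: list of (name, priority, cost_days); lower priority number = more urgent."""
    plan, remaining = [], capacity_days
    for name, priority, cost in sorted(items, key=lambda item: item[1]):
        if cost <= remaining:
            plan.append(name)
            remaining -= cost
    return plan, remaining

# Hypothetical backlog: (name, priority, cost in working days)
backlog = [
    ("Carrier onboarding", 0, 15),
    ("Invoice automation", 1, 20),
    ("Shipper dashboard", 0, 10),
    ("Dark mode", 2, 5),
]
plan, left = fill_plan(backlog, capacity_days=40)
# P0s are taken first; an item that no longer fits is skipped in
# favor of cheaper, lower-priority items further down the list.
```

Note that a plain greedy fill like this can skip an expensive P1 yet still take a cheap P2, which is one reason the granularity of the cost estimates matters.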
https://medium.com/everoad/how-we-build-products-at-everoad-part-deux-planning-7c01a84f6a43
['Benjamin Chino']
2020-01-08 10:13:13.718000+00:00
['Product Management', 'Strategy', 'Startup', 'Planning', 'Product']
Build a Python Flask App on Glitch!
Tired of making text-based choose-your-own-adventure games in Python? Ready to take your Python programs to the next level by incorporating an HTML/CSS graphical user interface? Flask is the perfect framework to build an HTML/CSS web app with a Python backend. In this tutorial, we’re going to build a The Office-themed Flask application… on Glitch! Yes, you heard me right — no frustrating downloads or setup, no banging your head against the wall. You can build this Flask app directly in your browser using Glitch! We’re going to make this app which asks for your desert island book choice… and then tells you which The Office character you are based on your answer! If you haven’t already, head over to Glitch and make an account. Then remix the Flask Office starter code by pressing the microphone button at the bottom right hand corner of the Preview screen. Note: this tutorial assumes familiarity with HTML/CSS and Python programming. If you’re not familiar with these topics, check out some of my other tutorials or follow along by copy-pasting if you’re undeterred!

Model-view-controller architecture, or why we have all these files

There are so many files, all prepared for you! Why so many files in the file tree? First, let’s go over the basics of how most web apps are broken down.

Model: This is the main Python backend logic of your app. All your classes, functions, etc. live here. Our model is currently contained in model.py, though in other places you may see a folder containing multiple Python classes and files.

View: This is our beautiful HTML/CSS interface — what the user sees. Our views are currently contained in the templates folder, which contains two empty HTML files: index.html and result.html.
There is also the static folder, which contains assets such as our style.css file.

Controller: The controller goes between the model and view, calling on the model for backend computations and delivering results to the view. Our controller lives in server.py.

To conceptualize this architecture, it can help to think of the web app as a restaurant. The model is the kitchen, where all the food is cooked. The view is the menu, which is presented to the guest. The waiter is the controller, running between the guest and the kitchen and delivering the food to the guest.

Previewing the home page

Right now, our home page is very boring. If we press “Show,” we just see some very plain Times New Roman text saying, “Hello, world!” BOOOO-RING. Where is this message even coming from? Pop open server.py and see if you can find the line. We’re actually going to get fancy here and, instead of just returning a string, return an entire HTML page as our homepage. You can see from the imports that we’ve already run from flask import render_template , which means we can use the render_template function in Flask to render HTML pages. Instead of returning “Hello, world!” I’m going to have my Flask app serve up the index.html page from my templates folder. We’re just about ready to see what our app does now… except for an essential piece of information about coding Flask apps on Glitch!

Using the console on Glitch

Whenever you make a change to your Flask app on Glitch, it can take a little while to take effect. However, you can force Glitch to update the app by opening the console and running the refresh command. I usually open the console by clicking on “Tools” in the lower left hand corner and then selecting “Logs” at the top of the toolbar.
Then I can navigate to “Console” as one of the options and run “refresh.” Type “refresh” and then hit return! The console, incidentally, is also where you can run commands like mkdir and touch to make new directories and files.

Using a template on the home page

Okay, you ran “refresh” in the console and then previewed your app again. Was it a totally boring white page? That’s because, if you open templates and then index.html, we haven’t actually coded any HTML in here! Let’s fix that, and add an HTML skeleton along with a header:

<!DOCTYPE html>
<html>
  <head>
    <title>The Office Quiz</title>
    <link rel="stylesheet" type="text/css" href="static/style.css">
  </head>
  <body>
    <h1> The Office Quiz </h1>
  </body>
</html>

Now, after running “refresh,” you should see some text on your page! Beautiful.

Making another route

We have the bare bones of our home page set up. We just need one other page, for the result the user gets when they find out what The Office character they are. Let’s hop back to server.py and add this route. The route can be accessed by adding /result after the app URL that appears when you hit “Show” — for example, flask-office-starter.glitch.me/result. In the code, we define a Python function that runs when the user reaches that page. Here, we’re returning a template again, but this time it’s the result.html file. Let’s head over to templates/result.html and spruce that template up a bit! Add an HTML skeleton along with a header:

<!DOCTYPE html>
<html>
  <head>
    <title>The Office Quiz</title>
    <link rel="stylesheet" type="text/css" href="static/style.css">
  </head>
  <body>
    <h1> You're {{ character }}! </h1>
  </body>
</html>

The {{ character }} line is a placeholder that we’ll come back to. Soon, this will print the character from The Office that the user is! Now, if you run “refresh” in the console, press “Show” on Glitch, and add “/result” to the URL, you should arrive at your result page!
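Put together, the two routes at this stage can be sketched as a standalone server.py. This is a minimal version I’ve written for illustration; the remixed Glitch starter file may differ slightly:

```python
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    # Home page: serve templates/index.html
    return render_template("index.html")

@app.route("/result")
def result():
    # Result page: serve templates/result.html
    return render_template("result.html")

# Glitch starts the server for you; running locally you would add:
# app.run()
```

Both routes assume the two template files shown above exist in the templates folder.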
Existential delight in the “/result” route

Notice that {{ character }} doesn’t print anything yet. That will change soon!

Sending data in an HTML form

Now, we’re going to give the user a little quiz that will carry them from the home page to the results page, where they can find out which character they are! For simplicity, our quiz will just be one question: the desert island book question. We’ll ask it using radio buttons in an HTML form, in the body of our index.html template:

<form method="post" action="/result">
  <p> Which book would you bring to a desert island? </p>
  <input type="radio" name="book" value="bible"> The Bible<br>
  <input type="radio" name="book" value="davinci"> The Da Vinci Code<br>
  <input type="radio" name="book" value="hp"> Physician's Desk Reference (hollowed out with tools inside) but also Harry Potter and the Sorcerer's Stone for when you get bored
  <p> <input type="submit" /> </p>
</form>

The form is using the POST method to send data from one webpage (index.html) to another (result.html). The form will be going through the “/result” route. Notice that all the radio buttons have the name “book.” Now this form should appear on the homepage! (Remember, run “refresh” and then hit “Show.”)

Programming the model

All right, we’re going to take a trip to the backend. When our user sends their data through the form, we’re going to need to process it. Specifically, I need a function that takes in the book the user selected and returns the The Office character they are. Head into model.py and write the following function:

def get_character(book):
    if book == "bible":
        return "Angela"
    elif book == "davinci":
        return "Phyllis"
    elif book == "hp":
        return "Dwight"
    else:
        return None

Notice that each book in Python is the same as the radio button values in the HTML. Now, because I’m fancy, I’d also like a function that takes in a character from The Office and returns the URL to a funny gif of them.
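As an aside, the same lookup can be written with a dictionary instead of an if/elif chain. This is an equivalent alternative offered for comparison, not the tutorial’s own code:

```python
# Dict-based equivalent of get_character: same inputs and outputs,
# just a lookup table instead of an if/elif chain.
OFFICE_CHARACTERS = {
    "bible": "Angela",
    "davinci": "Phyllis",
    "hp": "Dwight",
}

def get_character(book):
    # .get returns None for unrecognized values, matching the else branch
    return OFFICE_CHARACTERS.get(book)

print(get_character("davinci"))  # Phyllis
```

The dict version is easier to extend when you add more books and characters later.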
Add this function to model.py:

def get_gif(character):
    if character == "Angela":
        return "https://media.giphy.com/media/Vew93C1vI2ze/giphy.gif"
    elif character == "Phyllis":
        return "https://media.giphy.com/media/1oGnY1ZqXb123IDO4a/giphy.gif"
    elif character == "Dwight":
        return "https://media.giphy.com/media/Cz1it5S65QGuA/giphy.gif"
    else:
        return None

Connecting the model and view with the controller

Okay, we have our (kinda) beautiful form. We have our (definitely) beautiful functions. Now we need to connect them! Head over to the heart of your controller, server.py. You can see by the line import model that we’ve imported all the contents of model.py, and those functions are available to us now. The line from flask import request means that we have some functionality for dealing with HTTP GET and POST requests now. (GET is when we’re just asking for webpages, like the homepage, whereas POST will be used for sending data around, like for the result page.) Because we’re sending data to the “/result” route, we need to modify the header to include POST requests. Change the first line to incorporate functionality for both GET and POST requests:

@app.route("/result", methods=["GET", "POST"])

Now, we can make our function do different things depending on whether the user is getting or posting to the page. Modify your function to read:

def result():
    if request.method == "POST":
        return render_template("result.html")
    else:
        return "Sorry, there was an error."

It’s always good practice to catch errors (like sending a GET request when we need a POST request) with an else statement. So this is great, but we’re still not interacting with or processing the user data. Inside the if branch of your function, you can grab all the data sent by a form like so.
Put this line inside the if branch, but before the return statement:

userdata = dict(request.form)

If you print the data, “refresh” on Glitch and run your app, you’ll see something like this in the Logs when you submit the form:

{'book': [u'bible']}

It’s a dictionary where “book” is one of the keys, and the value is a one-item list! You can drill down and isolate the exact book like so:

book = userdata["book"][0]

Now we can feed the book variable into our get_character function from the model, and find out what character the user is:

character = model.get_character(book)

And finally, because I’m fancy, I’m going to feed the character variable into the get_gif function and get a URL of the user’s gif!

gif_url = model.get_gif(character)

Okay… but how do we print the user’s character to the results page?! Remember how in the h1 in result.html, we wrote the following placeholder:

<h1>You're {{ character }}!</h1>

Well, this placeholder was actually in Jinja, a templating language for Python. We can actually define what value {{ character }} should have when we render our template:

return render_template("result.html", character=character)

If we hop over to result.html, we can even add in a placeholder for a gif:

<p> <img src="{{ gif_url }}" /> </p>

And we can send the gif_url value along with the character value when we render the template:

return render_template("result.html", character=character, gif_url=gif_url)

Now, in server.py, your finished “/result” route should look like this:

@app.route("/result", methods=["GET", "POST"])
def result():
    if request.method == 'POST' and len(dict(request.form)) > 0:
        userdata = dict(request.form)
        print(userdata)
        book = userdata["book"][0]
        character = model.get_character(book)
        gif_url = model.get_gif(character)
        return render_template("result.html", character=character, gif_url=gif_url)
    else:
        return "Sorry, there was an error."
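The drill-down into the form data can be tried in plain Python by simulating the parsed payload. One caveat worth knowing: the exact shape of dict(request.form) varies across Flask/Werkzeug versions, and in current versions request.form.get("book") returns the selected string directly, with no [0] indexing needed:

```python
# Simulated payload, shaped like the {'book': [u'bible']} example above
userdata = {"book": ["bible"]}

# Drill down to the selected value inside the one-item list
book = userdata["book"][0]
print(book)  # bible
```

If your Logs show a bare string instead of a one-item list, drop the [0] indexing.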
Can you find where I sneakily added in a check to make sure that the user actually filled out the form before sending it?! Run “refresh,” press “Show,” and test out your quiz!

Styling with CSS

Last thing: head over to static/style.css and add your styles. Here’s the simple stylesheet I used:

body {
  font-family: courier;
  margin: 2em;
  letter-spacing: 1px;
}

input[type="submit"] {
  border: 1px solid black;
  font-family: courier;
  padding: 10px;
  border-radius: 10px;
  font-size: 1.1em;
  letter-spacing: 1px;
}

input[type="submit"]:hover {
  background-color: #eee;
  cursor: pointer;
}

Run “refresh,” hit “Show,” and admire your work! I find that the CSS styles take a while to take effect, so don’t be afraid to clear your cache with a hard refresh. Congratulations, you’ve built a Flask app on Glitch!

Extensions

Style your app, of course! Add more options and results. (What about Michael, Jim, and Pam?) Add another question to the quiz.

Other Python Tutorials
https://medium.com/analytics-vidhya/build-a-python-flask-app-on-glitch-fc2c4367baaf
['Kelly Lougheed']
2019-10-22 09:04:52.498000+00:00
['Python', 'Glitch', 'Web Development', 'HTML', 'Flask']
Gore Vidal and Norman Mailer: the Ali and Frazier of American letters
One of the most bitter public feuds in the history of American letters raged between Norman Mailer and Gore Vidal from the early 1970s all the way into the mid-eighties, when they finally reconciled. It was conducted in print and in person in various joint television appearances, most spectacularly on an episode of the Dick Cavett talk show in 1971. Away from the public eye, the feud most memorably manifested at a party in 1977, when Mailer punched Vidal in the face over the latter’s scathing review of his book on feminism The Prisoner of Sex. Legend has it that afterwards Vidal looked at his assailant and said, “Once again, words fail Norman Mailer.” It was a clash of two literary titans, both of whom, if honest, would grudgingly have admitted a sneaking regard for the other even when their acrimony was at its most intense, what with the role of creative whetstone that each played in the other’s work. Of the two the work of Norman Mailer, like whisky, is a taste acquired; his prose layered to such a fine point and poise that it could be no other. I still recall reading his work for the first time and feeling daunted at being confronted with one of the finest examples in the English language of what he described as the ‘spooky art’. The work in question was his classic book The Fight (1975) on the epic 1974 Rumble in the Jungle between Muhammad Ali and George Foreman in Zaire (now the Democratic Republic of the Congo). It remains one of the finest sports books ever written. Consider: It seems like eight rounds have passed yet we only finished two. Is it because we are trying to watch with the fighters’ sense of time? Before fatigue brings boxers to the boiler room of the damned, they live at a height of consciousness and with a sense of detail they encounter nowhere else. In no other place is their intelligence so full, nor their sense of time able to contain so much of itself in the long internal effort of the ring. Thirty minutes go by like three hours.
Let us undertake the chance, then, that our description of the fight may be longer to read than the fight itself. We can assure ourselves: It was even longer for the fighters. Or how about this snippet from his 1976 essay, ‘Genius’, on the work of Henry Miller: Henry Miller, however, exists in the same relation to legend that antimatter shows to matter. His life is antipathetic to the idea of legend itself. Where he is complex, he is too complex — we do not feel the resonance of slowly dissolving mystery but the madness of too many knots; where he is simple, he is not attractive — his air is harsh. If he had remained the protagonist by which he first presented himself in Tropic of Cancer — the man with iron in his phallus, acid in his mind, and some kind of incomparable relentless freedom in his heart, that paradox of tough misery and keen happiness, that connoisseur of the spectrum of odors between good sewers and bad sewers, that noble rat gnawing on existence and impossible to kill — then indeed he could have been a legend, a species of Parisian Bogart or American Belmondo. Here I confess to having dismissed Mailer as a study in narcissism and self-aggrandisement in years gone by. I had him down as a writer who used his work not to mine and explore the human condition but rather to construct, maintain and promote a persona of rugged masculinity and derring-do. He came over as a study in male angst and insecurity, of a type Freud knew intimately. Thus, back then, I considered his prose self-conscious and contrived, lacking verisimilitude and fundament. Mailer in New York while in his pomp I was wrong. My issue with Mailer, I later came to realise, was less to do with his writing and more to do with my reading. For if good writing is a serious craft, one that calls to us from the mists of the past and speaks to an uncertain future, how can good reading be anything less? The former, it should be indelibly absorbed, cannot and does not exist without the other. 
And what is good reading if not the product of maturity; the fruits of an evolving sensibility? It’s why reading the right book at the wrong time can only result in failure to grasp its depth and importance, not to mention the proper appreciation of the craft of the author. How many of us, being honest, have managed to get to grips with Joyce’s Ulysses at first attempt? Similarly when it comes to any number of other classic works. I make this point as someone who recently managed to get through Gore Vidal’s Lincoln at the second time of asking. It was only then I was able to bring to bear the requisite level of concentration necessary to keep up with the shifting narrative, told through the eyes of the novel’s multiple characters, in the process of which Vidal elegantly constructs a comprehensive and multidimensional insight into the mind of the book’s eponymous subject. The sense of time and place he succeeds in imparting is stunning when viewed in the context of the novel’s totality. A young Gore Vidal While Mailer may be a taste acquired, Gore Vidal is a taste required; his writing so imperious it affirms the centrality of great writing in lifting humanity out of the realm of necessity to experience, however temporarily, the transcendence of the inner life. Consider this passage from his 1952 essay ‘The Twelve Caesars’, reviewing the classic work of the same name by Suetonius: The unifying Leitmotiv in these lives is Alexander the Great. The Caesars were fascinated by him. The young Julius Caesar sighed enviously at his tomb. Augustus had the tomb opened and stared long at the conqueror’s face. Caligula stole the breastplate from the corpse and wore it. Nero called his guard the “Phalanx of Alexander the Great.” And the significance of this fascination? Power for the sake of power. Conquest for the sake of conquest. Earthly dominion as an end in itself: no Utopian vision, no dissembling, no hypocrisy. I knock you down; now I am king of the castle.
Why should young Julius Caesar be envious of Alexander? It does not occur to Suetonius to explain. He assumes that any young man would like to conquer the world. Mailer and Vidal lived parallel literary lives. Products of the Second World War (in which they both served), they each published their debut novel in the immediate postwar period - in Mailer’s case The Naked and the Dead to critical acclaim in 1948, and in Vidal’s Williwaw to critical indifference in 1946. Mailer and Vidal sparring on the Dick Cavett show 1971 Both of them became immersed, personally and professionally, in the seismic events that unfolded over the course of the ensuing decades— events which, together, amount to one of the most tempestuous periods in American history. The Vietnam War; Black Civil Rights and Liberation movements; the rise and assassinations of JFK, RFK, Malcolm X, Dr Martin Luther King; the emergence of Reagan and Reaganomics; the fall of the Berlin Wall; these events, not to mention others, traced a downward trajectory from the hope spawned in the mid to late sixties of coming social, racial and economic liberation, to, by the early 1980s, the rise of an ironclad corporate dictatorship underpinned by the values of Wall Street. Such a radical shift from the revolutionary upsurge of 1960s to the counter-revolutionary kickback of Reagan must surely have been disorienting for those actively engaged in the times. Mailer and Vidal both ran for political office — Vidal seriously on two occasions in 1960 and 1982, Mailer less so on one, when he ran for mayor of New York in 1969. Mailer’s image as someone with neanderthal biases towards women (he was married six times and stabbed his second wife Adele Morales) and homosexuals was not undeserved. 
Add to the mix his years of hard drinking, replete with a catalogue of barroom brawls — not forgetting his passionate interest in boxing — and you arrive at someone who aspired to emulate not just Ernest Hemingway’s writing but his very ontology. This being said, though, in 1955 Mailer made an attempt to confront his anti-homosexual views in his essay ‘The Homosexual Villain’. As to how successful he was, readers can make up their own minds: At any rate I began to face up to my homosexual bias. I had been a libertarian socialist for some years, and implicit in all my beliefs had been the idea that society must allow every individual his own road to discovering himself. Libertarian socialism (the first word is as important as the second) implies inevitably that one have respect for the varieties of human experience…I suppose I can say that for the first time I understood homosexual persecution to be a political act and a reactionary act, and I was properly ashamed of myself. Gore Vidal himself was an unrepentant homosexual who was fearless in his defence of his right to indulge his sexual orientation as he saw fit, while at the same time refusing to be defined by it. He once opined on the matter thus: “Actually, there is no such thing as a homosexual person, any more than there is such a thing as a heterosexual person. The words are adjectives describing sexual acts, not people. The sexual acts are entirely normal; if they were not, no one would perform them.” Mailer with Muhammad Ali Vidal’s courage in this regard was redolent of Oscar Wilde, who like him was a public intellectual with a penchant for same-sex sex (and indeed was sent to prison for it), while also being a writer of extraordinary biting wit and depth. Both men were fiercely defiant of conformity and its ugly progeny, hypocrisy. 
In fact, one of the finest things Gore Vidal ever wrote was along those very lines: “I do not accept the authority of any state — much less one founded as ours was on the free fulfillment of each citizen — to forbid me, or anyone, the use of drugs, cigarettes, alcohol, sex with a consenting partner or, if one is a woman, the right to an abortion. I take these rights to be absolute and should the few persist in their efforts to dominate the private lives of the many, I recommend force as a means of changing their minds.” Interestingly, given the bitter rivalry and animus they shared over so many years, at one time amity rather than enmity existed between them. Here, by way of evidence, is a passage from a Gore Vidal essay of 1960. It appeared in The Nation magazine and, though ostensibly a review of Mailer’s book Advertisements for Myself (1959), Vidal, as was his wont, offers up a sweeping analysis of Mailer’s work and his development as an artist up to that point: Of all my contemporaries I retain the greatest affection for Mailer as a force and as an artist. He is a man whose faults, though many, add to rather than subtract from the sum of his natural achievement. There is more virtue in his failures than in most small, premeditated successes which, in the Cynic’s phrase, “debase the currency.” Mailer, in all that he does, whether he does it well or ill, is honourable, and that is the highest praise I can give any writer in this piping time. Of the samples of Mailer and Vidal’s work referenced so far in this piece, none have been lifted from their works of fiction. This is no accident. To my mind, though both have due claim to renowned works of fiction, their best work is to be found in their essays and non-fiction journalism. It was a form they elevated into an art, raising the bar high enough to ensure their passing left a mammoth lacuna in American letters that is yet to be filled. 
They also both made forays into writing for the theatre and the movies, Vidal far more successfully than Mailer. In fact, during the 1950s Gore Vidal mined a lucrative career writing for television and doctoring screenplays for the studios — most significantly working on a rewrite of the script for Ben Hur, starring Charlton Heston. Vidal also enjoyed huge success on Broadway with his 1960 political play The Best Man, which he later adapted into the screenplay of the 1964 movie starring Henry Fonda. Politically, both sat on the left of the spectrum, although here the congruence ends. Whereas Vidal was a modern incarnation of Rome’s famed Tiberius Gracchus — which is to say, by birth a member of the patrician class who nailed his political colours to the mast of the common man at the cost of finding himself outcast by his class — Mailer was down with the kids and the cool blacks in the countercultural movement of the sixties going into the seventies. The former was fixated on the way the republic had, in his considerable opinion, been hijacked by hucksters, charlatans, warmongers and first-rate second-rate men (to borrow Wendell Phillips’ withering excoriation of Abraham Lincoln). Men such as Harry S Truman, for example, who by dint of some cruel quirk of fate and circumstance found himself occupying the office of President of the United States after the death of FDR in 1945. Truman was the progenitor of the National Security Act, which kickstarted the US military-industrial complex, the Cold War and decades of anti-communist hysteria and witch-hunts, all of which scarred American society for generations, along with the world entire. 
Gore Vidal was, it is fair to say, none too impressed, averring in his 1999 Vanity Fair essay ‘The Last Empire’: In regard to the ‘enemy’, Ambassador Walter Bedell Smith — a former general with powerful simple views — wrote to his old boss General Eisenhower from Moscow in December 1947, apropos a conference to regularize European matters: ‘The difficulty under which we labor is that in spite of our announced position we really do not want or intend to accept German reunification in any terms the Russians might agree to, even though they seemed to meet most of our requirements’. Hence, Stalin’s frustration that led to the famous blockade of the Allied section of Berlin…The President [Truman] did not explain that the United States had abandoned [the agreements made at] Yalta and Potsdam, that it was pushing the formation of a West German state…and that the Soviets had launched the blockade to prevent partition. Vidal was not a man ever minded to play the patriot game. Mailer in his dotage And neither was Norman Mailer, as evidenced in his 1991 piece ‘How the Wimp Won the War’, on the First Gulf War against Saddam in Iraq and the central role of George Bush Sr in that event: Mailer had decided that America — no matter how much of it might still be generous, unexpected, and full of surprises — was nonetheless sliding into the first real stages of fascism. The Left, classically speaking, might be the most resolute defense against fascism, but what was the Left now about to contest? No part of it seemed able to cooperate effectively with any other part, nor was it signally ready to work with the Democratic Party for any set of claims but its own. The Democratic Party was bereft of vision and real indignation, and, given the essential austerity of the Christian ethic, the Republican Party was never wholly comfortable with the idea that Americans like themselves ought to be that rich. They grew more and more choleric about the blacks. 
Their unspoken solution became the righteous prescription: if those drug bastards won’t work, throw them in jail. Both men had other literary feuds, Mailer with Tom Wolfe and James Baldwin, Vidal with William Buckley and Truman Capote, in response to whose death in 1984 he is said to have quipped, “A wise career move.” Norman Mailer and Gore Vidal, whom I was privileged to meet in person at an anti-Iraq war rally in 2003 in Hollywood at which he spoke, were among the last true giants of American letters. Their feud was redolent of two gunslingers for whom the town they were in was not big enough for both. They were giants in the Land of Lilliput. End.
https://johnwight1.medium.com/gore-vidal-and-norman-mailer-the-ali-and-frazier-of-american-letters-e419b3c4e1c4
['John Wight']
2020-07-09 07:17:15.760000+00:00
['Gore Vidal', 'America', 'Literature', 'Norman Mailer', 'Writing']
7 Figma plugins that you need in your life
Originally published at marcandrew.me on September 24th, 2020. There’s no denying that Figma is already a powerful and versatile design tool out of the box. It continues to improve with each new release and has rightly claimed its place in the ‘Big 3 Design Tools Club’ (the others being Sketch & Adobe XD, of course). But, up until around a year ago, Figma was at a disadvantage against a tool such as Sketch. Why? Plugins. Of course, that’s no longer the case: the plugin community for Figma is thriving, with ever more powerful and invaluable plugins being added daily. In this article I wanted to share a few of the plugins that I use on a regular basis, and that have helped me not only improve the designs I create but also speed up my workflow considerably. And I’m sure they’ll do the same for you. Let’s get to it…
https://uxdesign.cc/7-figma-plugins-that-you-need-in-your-life-f1c849ae899d
['Marc Andrew']
2020-09-25 19:50:05.254000+00:00
['Web Development', 'UI', 'Figma', 'Design', 'UX Design']
Finding top trending stocks using python
Google Trends is a service provided by Google which shows the trend of a specific term over a chosen period. This trend can be used to predict which stocks will be active in the stock market, by analyzing on Google Trends which companies’ share prices are most searched. In this post, I will explain how I wrote code that can show the top trending stocks at any moment using the Google Trends module. First you have to download and install the module pytrends on your computer. You can download the pytrends module from this link . This is an unofficial module for retrieving data from the Google Trends website. Now we will start coding in the Python environment. We will first import all the relevant libraries for our code. They are as follows:

# importing relevant libraries to use in the project
from pytrends.request import TrendReq
import pandas as pd

Now we will use pytrends to retrieve the trends of words related to the financial markets. In the first line below, tz=360 is the timezone offset in minutes (US Central time) and hl='en-US' sets the host language to English as used in the USA. I have selected keywords such as “share price” and “stock price”, as these are the terms most used by the general public to check the market status of any stock on Google. In the code below, I have set the geographical location to “US”, which stands for the USA. You can tweak the code for the country you are interested in. Also, the timeframe can be modified to show the trends from the last hour with “now 1-H” or the last four hours with “now 4-H”. 
This can also range over several years; the details are provided in the pytrends documentation.

# code for finding the frequencies of terms in Google Trends and related queries
trending_terms = TrendReq(hl='en-US', tz=360)
keywords = ['share price', 'stock price']
trending_terms.build_payload(kw_list=keywords, cat=0, timeframe='now 1-H', geo='US', gprop='')
term_interest_over_time = trending_terms.interest_over_time()
related_queries = trending_terms.related_queries()

In the code above, interest_over_time() captures how frequently each keyword is searched, relative to the other, in the chosen country. For the code above, interest_over_time() gives the following output in IPython. Relative frequency of the search on each keyword Thus “share price” is the relatively more searched term over this time window. You can experiment with various keywords and modify the keyword list as per your requirements. The related queries capture the terms related to the keywords that were most searched — in our case, the names of companies whose share price was most sought. This comes back in a dictionary format and should be cleaned before use. The code for cleaning the related queries is as follows:

# code for cleaning related queries to find top searched queries and rising queries
top_queries = []
rising_queries = []
for key, value in related_queries.items():
    for k1, v1 in value.items():
        if k1 == "top":
            top_queries.append(v1)
        elif k1 == "rising":
            rising_queries.append(v1)

# index 1 corresponds to the second keyword, 'stock price'
top_searched = pd.DataFrame(top_queries[1])
rising_searched = pd.DataFrame(rising_queries[1])

Now we have divided the queries into top queries, i.e. the queries most searched in relation to the keywords — in our case, the companies whose share price was most searched during that hour, with a relevance score value. 
Top stock prices searched in the last hour in the USA, with relative score Similarly, we can access the stocks whose searches are rising compared to their previous searches due to some news or recent happening. This value is stored in the rising_searched variable. The data stored in that variable is given below. Rising stock prices searched in the last hour in the USA, with relative score Hence, we were able to extract the top trending stocks at any given time in Python by writing a few lines of code. The advantage of pytrends is that it does not require any authentication, unlike the Twitter API, and can be used any number of times without restrictions. Thus, by tweaking the above code you can find the trending stocks in any country at any given time.
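The cleaning loop above can be factored into a small reusable helper that operates on the dictionary shape pytrends returns from related_queries(): {keyword: {'top': DataFrame-or-None, 'rising': DataFrame-or-None}}. This is a minimal sketch under that assumption — the function name split_related_queries is my own, not part of pytrends — and it degrades gracefully when Google returns no rising queries for a keyword:

```python
import pandas as pd

def split_related_queries(related_queries, keyword):
    """Split one keyword's related-queries entry into (top, rising) DataFrames.

    Assumes the shape pytrends' related_queries() returns:
    {keyword: {'top': DataFrame-or-None, 'rising': DataFrame-or-None}}.
    Missing or empty entries come back as empty DataFrames.
    """
    entry = related_queries.get(keyword) or {}
    empty = pd.DataFrame(columns=["query", "value"])
    top = entry.get("top")
    rising = entry.get("rising")
    # Copy into fresh DataFrames so callers can mutate them safely
    top = empty if top is None else pd.DataFrame(top)
    rising = empty if rising is None else pd.DataFrame(rising)
    return top, rising
```

With this helper, top_searched, rising_searched = split_related_queries(related_queries, 'stock price') replaces the nested loop and the positional indexing into top_queries[1].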
https://medium.com/the-samurai-sale/finding-top-trending-stocks-using-python-d5ef6caf9c6b
['Samurai Strider']
2020-05-06 14:30:12.833000+00:00
['Machine Learning', 'Python', 'Finance', 'Trends', 'Data Science']
In Conclusion. Two cemeteries, two cities, and a…
In Conclusion How we talk to the dead Wee Kirk o’ the Heather, Forest Lawn Memorial Park, April 19, 2020. Photo courtesy of the author. I went to the cemetery the other day, not for any particular reason other than we are still allowed to go to cemeteries, at least for now. Ordinary funeral gatherings are not permitted at this moment here in Los Angeles, a city of four million people that spreads out over 469 square miles. I could not find the number of deaths due to Covid-19 within city limits, but Los Angeles County has lost 848 people to this pandemic. By the time you read this, more will be gone. I’m from Hunterdon County, a rural area in New Jersey that takes up nearly as much space as my adopted city but contains about 3,875,000 fewer people. The biggest thing to ever hit Hunterdon County was the 1935 Lindbergh baby murder trial, after which they electrocuted a German immigrant despite evidence that he didn’t do it. Pretty much nothing happened before that or after that. Good public schools, though. Twenty-one people in Hunterdon County have died from the coronavirus. I do not know if the cemeteries are still open back home, where the woods are blooming and the backyard gardens are in flower. A few weeks back, my hometown friend left a rabbit hutch open on her farm for five minutes. This week, her kids got to hold baby bunnies for the first time. There is an old private cemetery in Hunterdon County that I liked to visit now and again as a teenager. It is full of graves from a family who has farmed in that part of New Jersey for more than three centuries. As a kid, I heard ghost stories about it, this place at the edge of a farm, by the side of a long road that is paved until it isn’t. They said you could see a glowing red light at night, eerie, coming from nowhere — a spirit hovering above a grave. I heard that story for years and years. 
And then, one afternoon, I visited for the first time, with an ex-boyfriend I still loved who had broken up with me, ostensibly because I wasn’t the right religion but probably because he wanted to date a girl who would actually have sex with him. As an adult, I have broken up with people for the same reason, so I understand. We were 18. I still loved him, a little, and thought maybe he would change his mind. Adolescence can be adorable, really. We gazed up at the scrubby, bare tree branches arching against a cold early spring sky. We read the old tombstones, the names of infants and the mothers who died with them, and the fathers and children who survived to meet new wives, and stepmothers who not infrequently went on to die in childbirth with their own newborns. “They died so young back then,” I said. “So many babies just, like, died. And the mothers. And then they just had to get another wife to raise the kids.” “Can you imagine?” the ex-boyfriend said. “All that pain.” He had a lot of emotions. I couldn’t imagine all that pain, and neither could he. He and I wandered farther, to the newer, more boring gravestones. It felt a bit rude to trample over those graves, less a historical investigation than a breach of neighborly etiquette. Then I noticed one grave several yards off, adorned with a colorful bunch of objects we couldn’t quite discern. I walked over, drawn by the little pile of bright mysteries, until I realized they were toys: wind-battered Sesame Street figurines, dirt-caked plastic Muppet Babies, rain-stained My Little Ponies. This particular grave marker was for a little girl, and beside her stone was a battery-operated red lantern. The red ghost was a night light. I wondered who came at night to turn it on for her and who returned in the morning to turn it off. Maybe her parents took turns. We left. We did not get back together. Not so many years after that, the ex-boyfriend married someone who was the right religion. 
I have loved a few people since then and have married none of them. I have no idea where he is now. I do not love him anymore, but I hope he is happy, in the way that I hope most strangers are happy.
https://humanparts.medium.com/in-conclusion-86c9f4f678a7
['Sara Benincasa']
2020-05-01 18:29:57.749000+00:00
['Relationships', 'Death', 'Family', 'Coronavirus', 'History']
Fake-vlogging, Your Inner Critic, and Plunging into Your Courage Zone
Fake-vlogging, Your Inner Critic, and Plunging into Your Courage Zone The Innovation’s featured stories for the week of December 21, 2020 Photo by Azrul Aziz on Unsplash Happy almost-new year, Innovation readers. First, we wanted to let you know that we were so excited to hop into a new and exciting 2021 that we created a new logo! Peep it on our socials and on Medium! Second, we would like to offer a wholehearted thank you to our writers for submitting exceptional stories this week. As promised, we have chosen seven articles from this week that we would like to highlight and celebrate. Without further ado, below are this week’s featured articles along with a little teaser excerpt. Show our writers some love and have a wonderful week. The reason you haven’t taken the leap yet comes down to fear. What is really stopping you? Put your fear under a microscope and use it as a compass. Fear is a reaction to a thought, but courage is a choice. How can you practice dipping your toe into your courage zone to give you the confidence to take the leap? As I stated before, the Mind Map was a game-changer for me. Since I started using Mind Maps, I got the following benefits: - A standardized way of organizing ideas. - Simplicity to create an overview of everything in a straightforward way. - Accelerated learning time by summarizing everything with mind maps. - Productivity boost: organized notes allowed me to find, reuse and understand them. - Simple and clear communication. With Mind Maps, I can get a single understanding from meetings. We’re addicted to the failure of productivity. There’s a concerning trend that powers a huge subset of the content being clicked on, read, and shared these days. I’ll call it ‘productivity-bait’. Why does productivity bait work so well? The answer is simple but surprising. Even during a pandemic, building and maintaining connections in person is an important skill for us to cultivate. 
Especially today, the ability to have meaningful conversations with anyone, even the people who disagree with you, is even more necessary. So, as we all trudge through the rest of 2020, finding ways to help level up our communication skills is a good way to spend our time. Even when we’re social distancing, we can still do this with some ingenuity. Seth encouraged us not to settle, to pick ourselves, and always make a ruckus — to share our gifts with the world and unabashedly inspire change. Here I was, supposedly figuring out what he could share with my Fortune 15 colleagues to motivate them to achieve our company goals, and all I could think was, I have to leave my job — it’s literally killing me. Changing our inner voices feels awkward at first, and it’s worth it. I have been carefully listening to my feelings of self-doubt or anxiety. I place my hands over my heart or stomach, or I open my palms to relax. I acknowledge my pain. I look for ways to make my environment safer and prioritize my needs. Listening to our inner critics helps us to prioritize our needs and move forward. We may take just 5 minutes to understand and address our needs and prevent ourselves from getting blocked.
https://medium.com/the-innovation/fake-vlogging-your-inner-critic-and-plunging-into-your-courage-zone-34cc90faf4d6
['Michelle Loucadoux']
2020-12-28 17:54:04.576000+00:00
['Featured', 'Newsletter', 'Writers On Writing', 'Inspiration', 'Writing']
The Eye of the Hurricane
Photo by Shashank Sahay on Unsplash Being in the unknown is hard for me. Truth be told, it’s nearly impossible. I have immediate and persistent physical manifestations that remind me I am struggling. Like many people, my dis-ease appears in my breath. I can’t get a deep breath. Each time I try to inhale, it is like a bird flying freely and then smashing into a windowpane. The breath reaches the spot right before my lungs open and gets turned around. I try again and again until finally, I can get a deep breath. But the multiple attempts render me even more anxious and frightened. This newest COVID surge has got me back on the crazy train and I’m in a daily battle with my breath. The teacher in my meditation group told us yesterday that she likes to start her day by stepping outside and feeling the weather. But before she does that, she says, she does a brief inventory of how her internal weather looks. My partner Nancy is from New Orleans and has lived through dozens of hurricanes. Last night she was talking about the eye of the hurricane — the moment of quiet where it seems like the storm is over, but it’s not. When the storm comes back though, is unknown. Without warning, BOOM!, it’s back. When she was talking last night I realized that this is how I feel when my breath is tight. It’s dark and ominously still and scary and I am petrified in waiting for what comes next. I hate this moment. I want it to be over. My weather right now is that of a hurricane, in the eye of the storm, hidden indoors, trying to avoid the inevitable winds and rains and floods that I know will come before the storm is over. About a month ago, one of my oldest friends disappeared. She’s still there, but she doesn’t want to be in contact with me. The circumstances are complicated and confusing. The truth is I don’t fully understand why she doesn’t want to be friends anymore. I tend to be direct, maybe too direct. I want to work it out, talk it out, get to the bottom of it all. 
But that’s not her way. I understand now that her way is to do what she’s doing — to disappear. In response to her dropping away, I’ve had to do some uncomfortable self-study about my reaction to her disappearance. My feeling of discomfort is familiar, like a mild version of my anger, fear, and frustration about COVID. I am sitting in wait, anticipating if and when this friend will show up again. But my real feelings are bigger than that. I am mad. I am hurt. I am outraged. In not giving myself permission to feel those feelings, I am putting myself in the eye of the hurricane. I have rendered myself powerless, waiting for her to make a decision. Sitting in the eye of the storm is the safest. I’m contracted, not letting myself feel the full range of real feelings about this friendship. I know why I’m sitting here, waiting. I fear that if I let myself go beyond the eye of the storm, into the turbulent emotions that are really there — the mad, sad, and rejected feelings — that I will not be able to turn back, that I will be saying goodbye to the friendship forever. It’s the same with COVID. In really sitting in it, acknowledging how painful the losses are, I am letting something in that I really don’t want to let in. I am opening myself up to a reality that deeply saddens and frightens me. But being a bird banging into a window over and over is not fun. I don’t like being here. I would rather be in violent wind and rain, knowing what is happening, than in this waiting, the unknown, anticipating the destruction at any moment. At least after the stormy weather, I know there will be a moment of calm. And so it is — the only way out of this emotional hurricane that has hijacked my breathing is to move beyond the eye of the storm. I must venture into the torrential rains and gale force winds of anger and sadness and loss and fear. I have to make room for all of that before I can step out into a clear blue sky. 
And even as I write these words, understanding their truth, I can feel my chest soften. I feel a sense of relief. I can breathe again.
https://medium.com/know-thyself-heal-thyself/the-eye-of-the-hurricane-f020a6623aac
['Laura Culberg']
2020-12-28 06:52:20.275000+00:00
['Unknown', 'Covid 19', 'Hurricane', 'Self-awareness', 'Anxiety']
Sometimes your audience on social media surprises you.
Sometimes your audience on social media surprises you. I was recently scrolling through my comments on my gadget-centered YouTube channel, and saw a negative comment about one of my videos. I usually delete comments that are negative and hostile, and keep comments that are merely negative and critical. This one fell into the former category, so I was about to delete it, when I saw that a random follower of my channel had seen the comment and leapt to my defense. It’s a reminder that for every nasty troll on the Internet, there’s a troll-buster ready to fight back. I don’t know who you are Aquaboy, but I’m thankful for you. Read more about the benefits of negative comments and my strategies for managing YouTube comments in my article in Better Marketing.
https://tomsmith585.medium.com/sometimes-your-audience-on-social-media-surprises-you-6ec22c92637c
['Thomas Smith']
2020-11-27 15:24:33.983000+00:00
['Marketing', 'Short Form', 'Social Media', 'Moderation', 'YouTube']
My Favorite Books About Books (so far)
Weird Things Customers Say in Bookshops by Jen Campbell This is a book that talks about all the weird and wonderful experiences booksellers have had with their customers. It will make you laugh to no end, and perhaps you will find some cool reads along the way! Or maybe even find out what not to read! The Encyclopedia of Early Earth by Isabel Greenberg This graphic novel isn’t really about books, but it is about stories and the way stories shape us. A young man from the farthest reaches of the world travels far and wide from one adventure to another, telling stories of myth and folklore to those he meets. Each tale shows a bit of how each person and culture relates to the stories, and how the young man impacts those who he has met. Tilly and the Book Wanderers by Anna James This is a book about people who love to get lost in books. Literally. Tilly lives with her grandparents, who run the bookshop Pages & Co. Her whole life she has been surrounded by books. Then one day, she discovers that she and many others have the ability to travel within books. Tilly starts by traveling into her favorite stories like Anne of Green Gables, but soon finds that bookwandering can be tricky, and gets into some adventuresome trouble! Wouldn’t we all just love to have a short adventure inside a book? Fahrenheit 451 by Ray Bradbury This is a book about the importance of books. In a dystopian world where books are banned, and anyone caught with them is either arrested or killed, we remember the importance of the written word. Our main character, a fireman (someone who burns books for a living), realizes there is nothing in the ignorance they’ve all been brainwashed to love. He starts to read the books he’s meant to burn, and his attitude towards books, to life even, changes. The Little Paris Bookshop by Nina George If you need a remedy for a malady of the soul, this book is for you. 
A man works on a book barge, prescribing specific books for the specific maladies of the customers who come to peruse. When the man, depressed at the loss of his love, realizes that his love has not been lost, merely misunderstood, he goes on an adventure to discover the truth. On the way he meets new friends, and new books! The Shadow of the Wind by Carlos Ruiz Zafon Everyone likes a good mystery, but a mystery about a book? Even better. In this book we read about the life of a young man in Spain whose sole mission is to find out everything he can about a book he found in the “Cemetery of Forgotten Books”. The young man finds love and friendship on the way, but most importantly, he finds the truth. These are my favorite books about books so far. I will update the list once I read more!
https://asiegelster.medium.com/my-favorite-books-about-books-so-far-996e15ad602f
['Abigail Siegel']
2020-01-18 16:16:01.140000+00:00
['Adventure', 'Books', 'Books About Books', 'Reading', 'Stories']
Multisignature Wallet for ETH and ERC20 Tokens
We have created an extended version of the Multisignature Wallet dApp. This wallet works not only with Ether but also with ERC20 tokens. Try it now in our Marketplace. New Multisignature Wallet creation If you need to use Ether and ERC20 tokens in your wallet, this solution is for you. If you need a wallet only for Ether, it’s better to create the older version of the Multisignature Wallet dApp, because it needs less gas to create. New Multisignature Wallet interface To use your ERC20 tokens in your multisignature wallet, send them directly to the wallet address and then add them in the wallet interface by entering the token’s smart contract address. Adding tokens and the token list The mechanism for sending multisignature transactions with tokens is the same as for sending Ether. One of the wallet’s owners creates a transaction; it must then be approved by the necessary number of owners, after which the transaction can be sent. Token sending transaction Try it now on https://dappbuilder.io/builder Or view the code on https://github.com/DAPPBUILDER/dApp-Builder Stay in touch for updates.
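The approval flow described above — one owner proposes a transaction, other owners confirm it, and it becomes sendable only once the required number of approvals is reached — can be illustrated with a toy state machine. This is a hypothetical Python sketch of the logic only, not the dApp’s actual code (which lives in a Solidity smart contract); the class and field names are illustrative:

```python
class MultisigWallet:
    """Toy model of the propose/approve/send flow described above.

    Illustrative only: the real dApp implements this in a Solidity
    contract, where approvals and sends are on-chain transactions.
    """

    def __init__(self, owners, required):
        self.owners = set(owners)
        self.required = required   # approvals needed before sending
        self.transactions = []     # each: {"to", "amount", "approvals", "sent"}

    def propose(self, owner, to, amount):
        if owner not in self.owners:
            raise PermissionError("only owners can propose")
        # the proposer implicitly approves their own transaction
        tx = {"to": to, "amount": amount, "approvals": {owner}, "sent": False}
        self.transactions.append(tx)
        return len(self.transactions) - 1  # transaction id

    def approve(self, owner, tx_id):
        if owner not in self.owners:
            raise PermissionError("only owners can approve")
        tx = self.transactions[tx_id]
        tx["approvals"].add(owner)
        # once enough owners have approved, the transaction can be sent
        if len(tx["approvals"]) >= self.required and not tx["sent"]:
            tx["sent"] = True
        return tx["sent"]
```

For a 2-of-3 wallet, a transaction proposed by one owner is held until a second owner approves it, at which point it is marked sendable — the same gating the dApp enforces on-chain.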
https://medium.com/ethereum-dapp-builder/multisignature-wallet-for-eth-and-erc20-tokens-d0cc2e0a4c82
['Dapp Builder Team']
2018-09-27 12:11:03.288000+00:00
['Token', 'Ethereum', 'Development', 'ICO', 'Blockchain']
5 More macOS Apps For Your Work Life
5 More macOS Apps For Your Work Life Apps for a Better Work Life Photo by Josue Valencia on Unsplash Apple’s macOS has always been one of the go-to operating systems for many professionals across various industries. The main reason is simple: it just works. Apple brags about its uncomplicated, fast, and smooth user interface. Even though there are some obvious advantages to using macOS, there are equally obvious downsides, especially for professional use cases. However, we can at least try to make those downsides less obvious by tinkering with the OS. Try out these few apps to make your work life a bit easier.
https://medium.com/macoclock/5-more-macos-apps-for-your-work-life-be8d87026611
['Abhinav Chandoli']
2020-12-16 07:39:02.230000+00:00
['Technology', 'Apple', 'Mac', 'Debugger', 'Software']
How to Neutralize Gender Bias in Your Writing
Gender equality is a fundamental human right. In spite of this, women and girls suffer discrimination and violence in every part of the world. Unconscious bias forms an invisible barrier to equal opportunity. It affects those we care for most in life: our families, friends and colleagues. As a writer, you have a choice to either reinforce or neutralize gender bias through your work. What is gender-neutral language and why is it important? Language shapes our cultural and societal attitudes. Until the 1970s, the use of masculine pronouns as the generic was the custom. Women’s movements were successful in challenging this sexist norm. “A gender line… keeps women not on a pedestal, but in a cage.” — Ruth Bader Ginsburg, U.S. Supreme Court Justice Today, many governments and schools use language that places women and men at the same level. This helps to reduce stigma and ensure that vulnerable people are not left behind. Adopting a gender-sensitive lexicon is far easier than you may expect. Being mindful is the first step. If done right, gender-neutral text does not affect readability. It generally reads as well as, if not better than, antiquated or outmoded gendered text. How does gender-neutral language work? There are several ways to promote gender equality through language in a text. It’s as easy as 1, 2 & 3 below. 1) Use gender-neutral expressions Avoid gender-specific nouns when making generic references to both men and women. For example, the words policeman and stewardess are gender-specific. The corresponding gender-neutral terms are police officer and flight attendant. Here are a few more examples. Replace men and mankind with people, humanity, human beings, humankind, we, women and men, etc. Replace man-made disaster with human-induced disaster. Replace congressman with legislator, congressional representative, or parliamentarian. Replace chairman with chair, chairperson or head. 
Replace landlord, landlady with owner and so on. 2) Use inclusive language The use of the generic masculine form to refer to both genders creates a gender bias. Avoid > A good student knows that he should strive for excellence. Prefer > A good student strives for excellence. 3) Use both feminine and masculine forms of words Each professor should send his or her assistant to the conference. Whoever she is. Wherever he lives. Every child deserves a childhood. Note: A gender-neutral alternative to he or she is the singular they. Bonus tips Use the active voice to show empowerment of women. Avoid expressions that have a negative connotation. Avoid > Carol had lunch with the girls at the office Prefer > Carol had lunch with some colleagues at the office Avoid stereotyping roles. Avoid > The Conference participants and their wives are invited. Prefer > The Conference participants and their spouses/partners/guests are invited. Be the change! Gender-neutral language is now the norm for progressive writers, journalists and thought-leaders. By rewiring our linguistic patterns we can create a more even playing field for all. So, if you would like to make a difference, please mind your language! For comprehensive guidelines on gender-sensitive expression, see the UN Women gender-sensitive lexicon.
https://medium.com/mindset-matters/mind-your-language-25ab0f71a91d
['Lucy King']
2019-02-01 11:36:22.215000+00:00
['Diversity', 'Language', 'Gender Equality', 'Women', 'Writing']
Why Nike’s move to play the fence works to their benefit
When entertainment, politics, and business collide

When companies attach themselves to a public figure, they cross their fingers and hope that their brands don’t somehow get ruined if the celeb decides to go off the rails. With that said, some consumers are pretty good at separating “the art” from the politics. Others simply refuse to separate the two. Then there’s a third group that falls more into a gray area.

For example, actor/director Kelsey Grammer is open about being a Trump supporter. But there’s no denying his quiet production credit for the eight-year, legendary African-American sitcom “Girlfriends.” Kanye West considers the current president a “father figure,” but even former fans who are against his politics still gave the rapper’s ninth album “Jesus Is King” a shot. It soared to the top of the Billboard charts, with 197 million streams and 109,000 copies sold as a full album during its opening week — much to the relief of Def Jam Recordings and Universal Music Group.

Comedic actor Ashton Kutcher endorsed former two-time presidential candidate Hillary Clinton but still starred in the Netflix series “The Ranch.” And Sam Elliott, who also stars in “The Ranch” as a conservative-leaning father, hasn’t hidden his anti-Trump views either. Meanwhile, Netflix is also giving viewers the option to skip Trump jokes altogether in comedian Seth Meyers’ latest comedy special.

On the opposite side, ABC stopped ignoring Roseanne Barr’s Twitter rants on politics and race and removed her from the reboot of her own ’80s sitcom. Season two of the show became “The Conners,” which kept its conservative-leaning theme but made Dan Conner a widower. Meanwhile, John Goodman and the rest of the cast saved the show — even after he suggested Trump supporters build their own dome away from everybody else.
https://medium.com/we-need-to-talk/why-nikes-move-to-play-the-fence-works-to-their-benefit-c496f26e6daf
['Shamontiel L. Vaughn']
2020-10-12 01:44:38.123000+00:00
['Marketing', 'Diversity', 'Kaepernick', 'Controversy', 'Nike']
5 Persuasion Techniques That Aren’t Manipulative
On an otherwise routine Monday in 1994, I witnessed a remarkable demonstration of persuasion. I was a stockbroker trainee on Wall Street at the time. On a typical Monday, my boss’s nail technician would come to the office for his weekly manicure/pedicure. I sat at a desk across from his, and thankfully I faced a wall. But on this day, with his nail tech doing her thing, he commanded me to turn and look at him. He wanted to train me.

“I can persuade anyone,” he said in his thick Brooklyn accent. “Listen.”

With the speakerphone blaring, he finessed his way past the receptionist and assistant, and finally landed the prospect. And then he patiently endured a tirade of foul-mouthed language. “Now, I’ll turn him,” he said after muting the phone. And he did. He then repeated his success on two more people, generating sales that totaled tens of thousands of dollars. With his nail technician now working on his feet, he said, “That paid my rent.”

Years later, I found a mentor who followed the same principles but without the theatrics. In fact, he was a quiet person like me. He proved that you don’t need flashy charisma or charm to master the art of persuasion. You only need to learn and practice the principles.

And if you’re worried about being manipulative, that’s a good thing. Keep your intent honorable and be honest. Above all, remember that it’s your job to help the other person change their mind, not to decide for them. These five techniques may surprise you. Some contradict conventional wisdom, but the best persuaders follow them diligently.
https://barry-davret.medium.com/how-to-be-persuasive-without-being-manipulative-cb108a8f101a
['Barry Davret']
2020-03-07 21:37:07.321000+00:00
['Relationships', 'Leadership', 'Psychology', 'Self Improvement', 'Life Lessons']
Submission Guidelines
Submission types

· Articles and short essays: research (always with Works Cited) and opinion.
· Short stories: should be relevant to the journal theme, which changes from issue to issue.

Aim and Scope

This journal began as a response to our fundamental concern with current materials available to students — and to others looking for new approaches. We worry that students are alienated by pretentious academic jargon that does not appear relevant to everyday experience. (Please see “Adapting the Humanities in 2020.”) We are also interested in general opinion pieces about the present state and future of the humanities.

Please note

· All essays must be no longer than 1,500 words (a six-minute read).
· Essays must be written clearly, with the aim and scope of the journal in mind.
· Criticism and theory should not be unnecessarily jargon-laden.
· This journal is intended for a broad readership. We encourage all of our authors to promote and share their articles on social media.
· If you are enrolled in Medium’s Partner Program, your essay is eligible to earn money.
· Don’t assume that your readers know the definitions of technical terms.
· Be sure to read the newest call for papers carefully. These will also be posted on the Penn CFP site.
· Please send proposals first before sending full articles.

Thank you

Please send all initial proposals to: [email protected]. We will only contact you if we think your proposal looks promising. We look forward to reading your submission!
https://medium.com/the-humanities-in-transition/submission-guidelines-caa1d729e735
['Flannery Wilson']
2020-08-17 16:01:37.061000+00:00
['Literature', 'Philosophy', 'Publication', 'Storytelling', 'University']