4 interactive Sankey diagrams made in Python
Plotly has a new member of the Plotly.js chart family: the Sankey diagram. Allow us to introduce you: a Sankey diagram showing changing voter views. The Python code to make this chart is in this Jupyter notebook: https://plot.ly/~alishobeiri/1591/plotly-sankey-diagrams/ Sankey diagrams were invented to chart energy flows (such as through a steam engine). As an example, you might start with 100% of a fuel’s energy on the left side of the diagram, then X% is lost to friction, Y% is lost to compressing the fuel, and Z% is used for propulsion — these losses are shown on the right side of the diagram. Connecting the left and the right sides are ribbons whose widths are proportional to the flow magnitude. This simplified example would show 3 ribbons for 3 energy flows, but practical Sankey diagrams can have dozens (or hundreds) of ribbons. Sankey diagrams are still used in science and engineering for charting energy flows. Today, Sankey diagrams are commonly seen charting the flow of website traffic. Consider the case where 1,000 website visitors start on 3 landing pages, then trickle off to hundreds of sub-pages before they leave the site entirely. Analysts use these diagrams to understand which pages get the most traffic, how the traffic arrives, and where the visitors go afterwards. Plotly Sankey diagrams come supercharged with a few extra features: Hover labels — add labels to sources, sinks, and flows so that your audience knows what they’re seeing when inspecting a complicated Sankey diagram. Like all Plotly charts, there are open-source interfaces to make Sankey diagrams in R, Python, or JavaScript. You can track the flow of individual items through a Plotly Sankey diagram. For example, the screenshot below shows one of 400 drones highlighted through all stages of its lifecycle. This tracking feature could be useful in manufacturing: imagine tracking thousands of battery cells through the stages of an electric vehicle manufacturing line. The Sankey diagram could start with the delivery of the cells at the factory and end when they are installed in a completed car. Individual items can be tracked through the flow of a Sankey diagram. In the example above, we see that a Predator MQ-1B drone became part of the Predator program, was sold to the Air Force, then ended up in Afghanistan. There’s even hover text describing the fate of this drone. See this full-sized chart on plot.ly: https://plot.ly/~alishobeiri/1367.embed Without further ado, here are 4 interactive Sankey diagrams made in Python. The Python code to make these is in this Jupyter notebook hosted on plot.ly.
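To make this concrete, here is a minimal, hedged sketch of a Plotly Sankey diagram in Python. It mirrors the fuel-energy example above, but the node labels and flow values are invented for illustration and are not taken from the linked notebooks.

import plotly.graph_objects as go

# Illustrative energy-flow Sankey: one source node splits into three ribbons
# whose widths are proportional to the flow values.
fig = go.Figure(go.Sankey(
    node=dict(
        label=["Fuel energy", "Friction loss", "Compression loss", "Propulsion"],
        pad=15,
        thickness=20,
    ),
    link=dict(
        source=[0, 0, 0],    # every ribbon starts at the "Fuel energy" node
        target=[1, 2, 3],
        value=[20, 30, 50],  # shares of the original energy (made-up numbers)
    ),
))
fig.update_layout(title_text="Energy flow through an engine (illustrative values)")
fig.show()

Hovering over a node or ribbon shows its label and value, which is the hover-label feature described above.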
https://medium.com/plotly/4-interactive-sankey-diagram-made-in-python-3057b9ee8616
[]
2017-09-19 17:48:25.623000+00:00
['Data Visualization', 'Python', 'Physics', 'Website Traffic', 'D3js']
Too Drunk to Fuck: Savage Shots of 1980s Rock Fanatics
Went to a party / I danced all night / I drank sixteen beers / And I started up a fight ~Dead Kennedys, “Too Drunk To Fuck” Close your eyes / And forget your name / Step outside yourself / And let your thoughts drain / As you go insane, insane ~Slayer, “Seasons in the Abyss” You know I’m born to lose, and gambling’s for fools / But that’s the way I like it baby / I don’t wanna live forever / And don’t forget the joker! ~Motorhead, “Ace of Spades” I feel bad, and I’ve felt worse / I’m a creep, yeah, I’m a jerk / Come on, Touch me, I’m sick ~Mudhoney, “Touch Me I’m Sick” All the children on the street / hope they get something good to eat / But for me it’s not so great / Fuck Christmas! ~Fear, “Fuck Christmas” Torches blazed and sacred chants were praised / As they start to cry hands held to the sky / In the night the fires are burning bright / The ritual has begun, Satan’s work is done ~Iron Maiden, “The Number of the Beast” Dad’s got a job at DuPont / Mom’s working at Monsanto / All my friends are here / Ready to party, man / It’s stupid haircuts and stupid clothes / What a stupid life / In a wonderful subdivision ~Drunks With Guns, “Wonderful Subdivision” I don’t want to live / To be forty-three / I don’t like / What I see going on around me ~Circle Jerks, “Life Fast, Die Young” She’s a white girl / But I’m living with a white girl ~X, “White Girl” We ain’t got no friends / Our troubles never end / No Christmas cards to send / Daddy likes men ~The Ramones, “We’re A Happy Family” Typical girls are looking for something / Typical girls fall under spells / Typical girls buy magazines / Typical girls feel like hell ~The Slits, “Typical Girls” Jesus don’t want me for a sunbeam / ’Cause sunbeams are not made like me ~The Vaselines, “Jesus Wants Me for a Sunbeam” And so it’s now we choose to fight / To stick up for our bloody right / The right to ring, the right to dance / The right is ours…we’ll take the chance! ~Bad Brains, “Pay to Cum” Satan flying / From his throne / Towards the gates / Of Heaven ~Nocturnus, “Lake of Fire” You tell me that I make no difference / At least I’m fuckin’ trying / What the fuck have you done? ~Minor Threat, “In My Eyes” This sucks more than anything that has ever sucked before ~Butt-head I’m so happy we live in harmony / I’m so happy we’ll change history / I’m so… I’m so happy / I wanna kill my family ~Coffin Break, “Kill the President” If you’re gonna scream / Scream with me / Moments like this never last ~The Misfits, “Hybrid Moments” Standing on the stairs / Cold, cold morning / Ghostly image of fear / Mayday, mayday / Gonna leave this region / They’ll take me with them ~Wipers, “D-7” Adrenaline starts to flow / You’re thrashing all around / Acting like a maniac / Whiplash ~Metallica, “Whiplash” I live my life like there’s no tomorrow / And all I’ve got I had to steal / Least I don’t need to beg or borrow / Yes, I’m living at a pace that kills / Runnin’ with the devil ~Van Halen, “Runnin’ with the Devil” I am a patient boy / I wait, I wait, I wait, I wait / My time is water down a drain ~Fugazi, “Waiting Room” You’re not punk, and I’m telling everyone / Save your breath, I never was one / You don’t know what I’m all about / Like killing cops and reading Kerouac ~Jawbreaker, “Boxcar” Feeling so groovy / I’m feeling so groovy / I’m feeling so groovy it sucks ~The Queers, “Feeling Groovy”
https://medium.com/cuepoint/too-drunk-to-fuck-savage-shots-of-1980s-rock-fanatics-c7f919a792a6
['Ted Pillow']
2016-10-18 02:29:23.642000+00:00
['Rock', 'Heavy Metal', 'Featured Rock', 'Music', 'Photography']
So easy.
Did you find this article interesting? If you liked it and if you feel like it, leave some claps to help whoever wrote it pay for his Lamborghini Urus. Thank you!
https://medium.com/the-fluxus/so-easy-3ea4317b3c9a
['Martino Pietropoli']
2020-10-30 12:15:33.788000+00:00
['Comics', 'Cartoon', 'Illustration', 'Music', 'Drawing']
Summer in the Bronx
Summer in the Bronx And love, unspeakable love. Photo by Kaique Rocha via Pexels For what seemed like countless summers of my upbringing, I’d be shipped off to my family in New York, specifically, the Bronx, in order to find some semblance of relief from boredom and the murderous heat of Savannah, Georgia. I’d count the days toward the middle of May and flaunt my happiness to my friends as much as I could, however, I knew I’d miss them. I knew I’d want to know what their days would entail without me. In the summer of ’98, I had two crushes: Joel & Mackenzie. Joel was Puerto Rican & Black and Mackenzie was Jamaican but was raised in Queens for the bulk of her life. (Every other weekend, she’d visit her aunts and cousins in the Bronx.) I lusted over them— would do anything for the heat of their presence to sway my way, however, I was not out then, so Mackenzie could never know my true feelings. I paraded around my Grandma’s neighborhood, tossing back coconut icies, running through fire hydrants, and staying out late in the park. Bronx heat was a bomber; a killer, if you will. We’d have blackouts that’d last for days and I would find myself yearning for the sunlight just to get a glimpse of Joel and his smile or Mackenzie and her long legs. I used to think she walked on clouds and I wanted to know just how soft her steps were. She’d call me “Tree” with a hint of her Jamaican accent slipping through and she’d ask me to turn the ropes when we played Double Dutch. And what a damn honor that was — what it did to and for my ego . . . *Mac wants me to turn again. Maybe she knows.* But I was just hella good at turning the ropes and going with her flow and although I wanted to flow with her in other ways, I settled for our daily games. Joel came and went. He was fluid, like water. I couldn’t catch him and even if I could, my hands weren’t big enough to hold him. He’d slide through every single time. Enigmatic — that’s how I described him. He would sit near me on the park swings and just talk. Just talk . . . He had a gold tooth and a fat herringbone chain and my Grandma used to yell from our fifth-floor window for me to “get my fast ass upstairs" and I always ran away from him. Authority was our downfall — I never truly felt his heat until I couldn’t have it. Summer became my favorite season that year. It was the year I’d compare all others to. It was the year I searched for the heat I loved and the heat I lost. I often wonder how both of them are doing; if Mac still walks on clouds and if Joel is still hard to catch. I wonder, sincerely wonder if they knew about my heat.
https://medium.com/prismnpen/summer-in-the-bronx-5f6728f0cbb2
['Tre L. Loadholt']
2020-07-09 21:41:05.555000+00:00
['Creative Non Fiction', 'Bisexual', 'Childhood', 'LGBTQ', 'Music']
Breaking news: an alternative to React
Recently I was reading some articles in order to stay ahead of the latest trends within the ecosystem of my favorite framework, Ruby on Rails, and I came across an article that read: “There is absolutely no technical need for RoR devs to use ReactJS”. It caught my attention, and one question helped me dig deeper into the subject: why go against a library as popular as React? To understand how this can work, we should decompose the final solution into different pieces that work together to improve development productivity (something common in Rails) and to simplify the development team and process by reducing the diversity of technologies involved. The elements of this recipe are the following; some will be covered in this article, and others we will leave for another time: Ruby on Rails A brief review: Ruby on Rails (RoR for simplicity) is a framework written in Ruby, an object-oriented programming language with a very simple and readable syntax. It is used by technology companies such as GitHub, GitLab, Airbnb, and Shopify, among others. Because many first-rate technology companies base their products on it, they contribute to the growth, improvement, and generation of new ideas within the Rails ecosystem. ViewComponent This is a frontend framework that allows us to decompose views into components, thinking of them as blocks that contain only the necessary information and to which certain behavior can be added according to the information they receive. We can think of components as an evolution of the decorator pattern, since in the end they fulfill a similar but enriched function. Components are Ruby objects with a well-defined interface, which gives us certainty about the input and output information and allows us to improve the unit tests on them, as we will see later in their benefits. Stimulus This JavaScript microframework emerged from Basecamp (the company from which RoR emerged) and replaces libraries such as jQuery or vanilla JavaScript, allowing us to define controllers that add behavior to our HTML. StimulusReflex A library that extends the functionality of both RoR and Stimulus. Its ultimate purpose is to intercept the user’s interactions with the application and send them in real time (over WebSockets) to be handled by Rails. It is responsible for managing the state of the application and the changes applied to the interfaces, achieving update times of 20–30 ms. CableReady A gem that allows applications to deliver real-time experiences to users. It uses the sockets provided by ActionCable, a framework included in RoR. Now, all these new components can be scary at first glance, especially when they seem to introduce unnecessary complexity, but the truth is that many of them work with a minimal amount of configuration and code, for example StimulusReflex + CableReady. In other cases, they are libraries that replace others, such as ViewComponent, which replaces the use of partials. Benefits of removing front-end frameworks At this point, I want to highlight the main benefits we identified by avoiding the introduction of a frontend framework or library (React, Angular, Vue, etc.). We use the same RESTful application we already have; we don’t need to define extra endpoints or anything else. 
The rendering is kept on the server side, allowing us to use ERB, Slim, HAML, etc. Keeping the rendering on the server side no longer implies reloading the entire page when a change is made; with the help of these technologies, we only change the parts of the view that are needed, improving overall performance and usability. The development team stays less diversified in technologies, allowing it to maintain a solid shared background. Progressive inclusion, which removes a “yes or no” dilemma, since all these technologies can be incorporated little by little, replacing and refactoring existing code as you go. Centralization of business logic. In this first iteration on new trends and the development of modern applications with RoR, we are going to delve a little more into ViewComponents, leaving the other pieces for another time. What is a component and why is it important? In its simplest form, a component is a Ruby object that returns HTML. The main benefits we obtain from using them are: Reuse: we can take advantage of the components we define in different situations throughout the project. Testing: acceptance or integration tests are already solved in Rails through SystemTests or IntegrationTests, but these must run the entire server, load base data, and execute the whole life cycle, which is expensive both to run and to write. With components, we can write unit tests for them instead, which are simpler to write and faster to execute. Performance: the reuse mentioned in the first point was already solved to some extent with partials, but components give us superior performance: according to benchmarks made by the developers, up to 10x compared to the original partials. What does a component look like? The initial documentation of the framework, provided by the GitHub team, has all the details about how exactly it works, how to install it, and the generators it provides. What we should know is that our original views inside the app/views/* directory will call the render method with instances of the components we have defined, and those components will produce the output. These components live within the app/components/* directory, and each of them has two files that must share the same name: a file with an .rb extension and a file with an .html.erb extension (or whatever rendering engine we use, e.g. HAML, Slim, etc.). app/views/posts/new.html.slim = render PostForm::Component.new(post: @post) As we saw before, this renders the component instance with a parameter, which in this case is a new instance of the Post class. app/components/post_form/component.rb module PostForm class Component < ViewComponent::Base def initialize(post:) @post = post end private attr_reader :post end end This is the definition of the class that we are instantiating in our view. Here we could define methods that give the component behavior according to the information we pass it, similar to what we can do with a decorator pattern. app/components/post_form/component.html.slim = simple_form_for post do |f| = f.input :title = f.input :body = f.button :submit, class: 'btn btn-sm btn-primary float-right' And finally, we have our template with the .html.slim extension, which is ultimately what the component returns to be shown in the view. 
References https://stimulusjs.org/ https://rubyonrails.org/ https://docs.stimulusreflex.com/ https://cableready.stimulusreflex.com/ viewcomponent.org https://medium.com/@obie/react-is-dead-long-live-reactive-rails-long-live-stimulusreflex-and-viewcomponent-cd061e2b0fe2
https://medium.com/arionkoder/asd-d413985d3114
['Edgardo Martin Gerez']
2020-11-05 20:53:12.573000+00:00
['React', 'Software Development', 'Components', 'Programming']
And the irony of that irony. . . .
There are a number of good thoughts here and a lot of confusion. Sander Huijsen, you have a good point but your technical history is a bit weak. That is not unusual and is not critical to what you are saying, I think, but it does open up the door for people to throw things at you. In an effort to be helpful, I’ve found that understanding major paradigmatic change is best looked at as a spiral through a series of stages and not a straight line or an end product. From any position on the spiral you are in the same vicinity as at some point in the past, but this is never the past repeating itself. This is easier said than thought, but it is important. The internet and the web and social media are all interrelated stages and include both social change and technical change. One continuous line of development is the movement from centralized information processing to decentralized and then back to centralized at a more individual level. The easiest way to understand that is as the breakup of information processing into components and then the internetworking of those components back together at ever higher levels, producing a planetary computing environment. This is the cloud, and it is quickly evolving into a planetary computer that appears to be individual for every user but is, of course, so tightly interconnected that data can be dealt with in its entirety. And that is big data, as it is called, and what gets sold and is so valuable. This is so valuable to us that there is no going back. Our expectations of information processing likewise mean that we will never go back to individual, lightly networked computers. That world is gone except in very specialized settings, e.g. interstellar exploration for now, or very secure research. The problem you are tackling is the problem of data security and privacy within the context of increasingly dense urban societies. And that is the very difficult issue of human evolution. We have been on this path for some fifteen thousand years. From that perspective social media is a logical progression and moves human societies into the virtual realm. They are, after all, mental constructs, hence the natural movement of the first fully exposed generation into that world. The only thing left will be the usual percentage of fringe hermits and such people and, at this stage, the older generations who will be able to make only a partial conversion to the newer virtual planetary societies. It’s interesting that you are looking for an older form of early social media: individually hosted blogs, email newsletters, and the Well or the early community bulletin boards. The interesting thing about this is that you are posing this on Medium, which is one of the newest forms of exactly what you, I think, want. This is a planet-wide, moderated community of artists, thinkers, writers and readers, all of these roles increasingly interchangeable. With the advent of the Partnership program there is a more committed level that costs money but also pays. If you haven’t, I would suggest you look at Steemit, as it is another version of what seems to be developing here but is based on a cryptocurrency/blockchain specific to that community. I would not be surprised to see Medium move in that direction at some point. These may well form the foundation for future virtual sovereign communities. But that is another topic. Hopefully this was helpful in some way . . .
https://medium.com/theotherleft/and-the-irony-of-that-irony-3dad7fb36b33
['Mike Meyer']
2017-10-14 06:10:33.214000+00:00
['Blockchain', 'Internet', 'Culture', 'Future', 'Social Media']
How to Stop an Attacker From Hurting You
How to Stop an Attacker From Hurting You Use the human body to buy you time to run Photo by Timothy Eberly on Unsplash The city of Dallas, like many other major cities worldwide, has had a high incidence of deadly home invasions over the last several years, and this continues to be a problem for Dallas residents and the police department now in 2020. Many of these invasions occur with the residents of those homes at HOME at the times of the crimes. Unfortunately, many of those residents are shot and killed in their own homes, and it takes much effort with too little staff for the Dallas PD to stop the streak of terror. As American citizens, we are all aware of the tragedy of frequent mass shootings in our country and of the violent crime that surrounds us and our communities every day. As unfortunate as these facts are, the larger truth is that the hopeless state our communities are in, where so many families and individuals lack food, shelter, and adequate healthcare every day, has evoked criminal desperation in those most vulnerable and inclined to act on it. Rampant drug and sex trafficking and crime in general are also NOT at an all-time low, as President Trump will proverbially tweet-claim. I do not believe in living in fear, but that said, I do believe in staying privy to the most recent risks to my home, family and our lives…and I also believe in readiness. In my years as a nurse and as a parent, given the risks I encountered performing home visits in many dangerous areas, and having been the single parent to small children with no man in the house at the time, I have mulled over the prospect of hand-to-hand combat with an intruder or an attacker. I have also imagined having to circumvent tragedy to myself or one of my kids if we are involved in a mass shooting outdoors or in a public place. I have created scenarios in my head that I feel help to keep me in check with the reality of this life and its many dangers. There are many ways an individual can be attacked or threatened. Some of those ways are in public, on the street or in an alley, or at a public place where others are gathered, getting in or out of one’s car anywhere, arriving at a new acquaintance’s home or being in some other unfamiliar place, or even in your own home, or walking to or standing at a bus stop.
https://medium.com/publishous/how-to-stop-an-attacker-from-hurting-you-21d084850663
['Christina Vaughn', 'Nurse', 'Freelance Writer']
2020-03-04 19:00:21.622000+00:00
['About', 'Better Living', 'Health', 'Self Improvement', 'Life']
Do your ML metrics reflect the user experience?
‘Wind tunnels’ provide a testing ground for faster feedback loops Developing ML products shares some surprising parallels with developing a Formula 1 car. The costs of delivering a race car to the track and a web-scale ML product to production are both high. To manage costs, Formula 1 teams don’t have every race car prototype go straight from the blueprint to the race track for a trial run. Instead, cheaper small-scale models first go through rigorous testing and refinement in a wind tunnel. In that wind tunnel, metrics that are relevant to the car’s aerodynamics — such as downforce and drag — are measured as an indicator of the design’s performance. The wind tunnel allows a much shorter feedback cycle, letting the team gauge the performance of the car after every iteration of the research. They only take the race car to the track for a proper trial run once the results from the wind tunnel show satisfying performance, because trial runs take a lot of time and effort. Similarly, we can test our product iterations by conducting UX interviews with real customers. By asking targeted questions, we’re having a trial run with customers about the performance of our product. However, these interviews are expensive to schedule, run, and analyse. And they don’t scale to a large number of customers. Without alternative testing strategies, we run the risk of exposing a subpar product that can potentially damage customers’ trust and Xero’s reputation. So if our product were a race car, a UX interview might be a trial run on a race track. Which raises the question: what’s our wind tunnel? In ML, our wind tunnel is the evaluation pipeline We needed a wind tunnel for our ML product, to perform offline evaluations of each iteration and benchmark its performance before showing it to customers. So we built an evaluation pipeline for this purpose, which reduced the feedback cycle from weeks (organising user interviews) to minutes (calculating and analysing metrics). How we develop and measure our ML product’s performance What did we measure in our wind tunnel? Good question. We started with two basic ML metrics — precision and recall: Precision: What proportion of predicted transactions actually happened? Recall: What proportion of transactions that should have been predicted were correctly predicted? They might sound simple, but they can quickly become more complicated once we put them in the context of cash flow. For example, what is a correct prediction? Will our customers care if the predicted amount is off by 5%? How about 20% or 30%? And if the predicted amount is correct, what if the predicted date of the transaction is out by a few days? As for recall, many business transactions only occur once, so which ones should be considered predictable for our recall? Are all transactions equal? Our customers care a lot more about a correctly predicted rent payment than a bank fee. So how do we account for the relative importance of transactions? These are just a few of the open-ended questions we had to address when designing our metrics. Without considering the answers, we would have had different variations of precision and recall, without knowing if one was better than the other in terms of how they reflect user experience. The metrics we measure are complex, and require validation Clearly, designing metrics is a complex endeavour, and it’s tricky to get right. 
Therefore, we needed a way of knowing if our metrics were measuring the right things. Without this evidence, we ran the risk of satisfying ourselves and our stakeholders on irrelevant numbers that didn’t actually reflect our customers’ product experience. It would be like putting a Formula 1 race car in the wind tunnel, and feeling confident about its performance by measuring its shininess or paint colour. We needed to find evidence to determine whether our ML metrics were a good proxy for user experience. To do this, we correlated our ML metrics with the subjective, amorphous concept of user experience. Correlating ML metrics with user feedback Step 1: Scoring qualitative feedback Our team works closely with a group of early adopters to get regular user feedback on our product. Every month or so, we interview these small business owners to talk through the latest changes we’ve made. This might be new features, UI components, or improvements to predictions. We used the feedback from these interviews to set the context for our ML metrics. We did this by giving each interview two scores — one for what we expected their precision value to be, and another for recall. The quality of predictions were categorised using a score out of five: 0/5 — Terrible, all predictions are wrong (precision). No predictions, missed everything (recall) 1/5 — Bad, not useful 2/5 — Meh 3/5 — OK 4/5 — Good 5/5 — Perfect, couldn’t be better. Andrew Ng’s face appears in a dream and pats you on the back Taking precision as an example, we estimated how precise our predictions were based on their comments: “Almost all of these predictions are wrong!” → 1/5 expected precision score “Wow, every single one of these predictions is correct!” → 5/5 expected precision score We used the same process for recall — if a customer indicated that we failed to predict some key things, we expected the recall score to reflect that. “You’ve correctly predicted some, but you’ve missed many transactions that are important to me!” → 2/5 expected recall score “Wow, it predicted everything I can think of!” → 5/5 expected recall score This process of assigning a score to qualitative feedback can be subjective, so we put some measures in place to reduce bias: We scored the qualitative feedback before looking at the metrics, because we didn’t want the actual values to bias our judgement We agreed beforehand on the user experience that each score represents We got our scores checked and signed off by our product owner and designer, since they attended all the interviews and had more context on these small businesses TIP: We were able to bootstrap off our existing usability testing schedule, but you don’t need a formal UX research plan to get a rough gist of how good your product is to a human. Maybe you could show your feature to some people from another team who are less familiar with the product, and get informal feedback that way. Step 2: Calculating ML metrics for the interviewed users After we attempted to put some numbers against our qualitative feedback, we calculated our algorithm’s performance metrics (precision and recall) to begin our correlation study. Using our evaluation pipeline, we were able to quickly generate metrics for each of the users in our testing group. Step 3: Analysing correlation between qualitative feedback and quantitative metrics Once we had both the expected qualitative scores and the actual quantitative metrics, we analysed their correlation.
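To make the correlation step concrete, the snippet below is a rough Python sketch of the kind of analysis described in Step 3, not Xero's actual pipeline code; the user IDs, interview scores, and metric values are invented for illustration.

import pandas as pd

# Each row pairs one interviewed user's expected precision score (0-5, assigned from
# their qualitative feedback) with the precision computed by the evaluation pipeline.
reviews = pd.DataFrame({
    "user": ["a", "b", "c", "d", "e"],
    "expected_precision_score": [1, 3, 4, 5, 2],
    "measured_precision": [0.22, 0.61, 0.78, 0.93, 0.40],
})

# Spearman rank correlation: a high value suggests the metric ranks users the same way
# their feedback does, i.e. it is a reasonable proxy for the user experience.
corr = reviews["expected_precision_score"].corr(
    reviews["measured_precision"], method="spearman"
)
print(f"Spearman correlation between feedback scores and precision: {corr:.2f}")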
https://medium.com/humans-of-xero/do-your-ml-metrics-reflect-the-user-experience-1cc198dd7087
['Danny Doan']
2020-11-22 20:46:22.318000+00:00
['Machine Learning', 'AI', 'User Experience', 'Technology', 'Product Development']
ICO Alert Crypto Minute: June 27, 2018
Facebook Loosens its Ban on Cryptocurrency Ads As reported by CoinTelegraph, Facebook is allowing some cryptocurrency ads to be promoted as long as they go through an application to be approved. The ads were originally banned in January 2018. Facebook stated that the ads were “associated with misleading or deceptive practices.” The social media giant intends to verify its advertisers by requesting licenses and other information. For now, ICOs are still prohibited under the new policy. Bitcoin Education Center Opens its Doors in Atlanta Whether you’re a blockchain novice or an avid cryptocurrency enthusiast, the newly developed Atlanta Bitcoin Embassy may be the right place for you. According to American Inno, the location was founded by Jeffrey Tucker and hosts courses aimed at teaching the community about digital currency, cryptoassets, and related blockchain technologies. It also acts as an organizational hub, allowing Bitcoin veterans to gather and discuss the entire cryptocurrency ecosystem. Leading VC Firm Andreessen Horowitz Launches $300 Million Crypto Fund, Plans to HODL The announcement by the top Silicon Valley VC firm confirmed that the $300M fund would be dedicated to Bitcoin and other cryptocurrencies. As reported by CNBC, the firm plans to “invest aggressively,” and “consistently over time, regardless of market conditions.” Chris Dixon, a partner at Andreessen Horowitz, was quoted as saying, “We’ve been investing in crypto assets for 5+ years. We’ve never sold any of those investments, and don’t plan to any time soon.” The firm seems to have a very positive outlook on the future and longevity of the cryptocurrency ecosystem. Despite Bitcoin’s lows this year and the market’s recent downturn, reputable top investors remain bullish on crypto, disregarding what’s happening in the markets.
https://medium.com/ico-alert/ico-alert-crypto-minute-june-27-2018-7e12b779d108
['Evan Schindler']
2018-06-28 20:33:13.523000+00:00
['Cryptocurrency', 'Crypto Minute', 'Blockchain', 'Facebook', 'Bitcoin']
Fundraising During a Pandemic
The mysterious arrival of the novel coronavirus (COVID-19) during the winter of 2020 was an unexpected plot twist that impacted millions of individuals and businesses around the world. In what seemed like the blink of an eye, global economies started to crash as retail shops shut their doors, massive waves of employees were laid off and the global population went into unprecedented hibernation. The universal mindset quickly shifted to focus solely on three priorities: protecting your health, preservation and access to money and looking after loved ones. With the world shifting into a mode of survival through this new reality of a global pandemic, donation-based organizations and sports teams quickly began to struggle to find the funding needed to keep their organizations running. Nothing seems more fitting to accompany these dramatic changes brought on by a global pandemic than to completely revamp the old and outdated fundraising methods that many of these organizations relied on with newer and more advanced ones. For donation-based organizations to adapt to these new challenges, it will require the fundraising enterprise to go through a complete makeover. Picture this, you’re a small non-for-profit organization facing the difficult task of spreading awareness about your cause throughout your local community, so you decided to host a fundraising gala to encourage donations and raise money. Your goal is to have a couple of hundred people attend, host a live 50/50 draw, and raise enough money to make a substantial difference. It is also March 2020 and just one week before the event the novel coronavirus pandemic starts to appear in your town. The government quickly imposes social distancing measures, stores and businesses shut down, schools close and your event must be cancelled. Now what? The option of holding a live 50/50 is out the window, you are unable to find any volunteers and large social gatherings are now considered illegal. The future of your gala is looking bleak and the promise of raising enough money for your cause seems almost impossible. What to do?!?!?! This is just one example of how donation-based organizations, non-for-profits, charities and sports teams are being affected by the challenges imposed by COVID-19. These organizations rely heavily on hosting fundraising events to be able to provide them with enough money to pay staff, rent and to put towards their cause. The traditional fundraising methods, such as 50/50 draws at sporting events or live auctions at galas, are no longer permitted due to coronavirus restrictions. This is where online fundraising can help. Fundraising for a sports team or charity organization online can eliminate many of the challenges that are being faced due to COVID-19. There are countless benefits to using an online fundraising platform that can lighten the workload and volunteer demand of these organizations. This lower demand for volunteers can be extremely beneficial during normal conditions and especially during a global pandemic when most social interaction is seen as unsafe. Online fundraising has many dimensions including hosting a home lottery or gala, running individual fundraising marathons, or even hosting an online style TED Talk. It can also be taken up a notch by using a host platform to run live 50/50 draws or online style lotteries. 
A SaaS company (software as a service) has the ability to obtain a license they can use to build an online database that will allow third-party companies (such as a charitable organization) to host their own online, licensed 50/50 lotteries. The Lotto Factory is an example of a SaaS solution that provides this service for non-for-profits and sports teams looking to host online 50/50 fundraisers. These companies can be based anywhere in the world (with the exception of the U.S. and unsanctioned countries) and their raffle tickets can be purchased around the globe. The Lotto Factory’s platform is extremely user friendly and can be customized to suit the specific style of the organization using it. Moving your fundraisers online makes it easier to track and account for earned money through secure payment methods. Payments are automatically authorized through the SaaS company that is providing the service. Automated payment methods will save the organization countless volunteer hours and can eliminate risks such as miscounting or misplacing money. These fundraisers will also enable your organization to reach a wider audience compared to only the handful of select individuals that might see a 50/50 donation box or be attending a sports game. Online fundraising is available 24 hours a day, 365 days a year. This time frame is massive compared to the short window that is available during a specific sports team’s season or for the couple hours that an event is taking place. These individuals are also more likely to promote these organizations and their causes by using other social media platforms potentially engaging an even larger audience. Whether your organization would benefit more from using an online platform, such as the one provided by The Lotto Factory to host 50/50 draws, or from hosting online auctions or TED Talks, there are still countless ways to meet fundraising goals during this economically and socially challenging time. On average, over 50% of North Americans donate annually to a charity. When an organization and its mission are moved online, the chances of being able to reach out to more of these generous individuals grows exponentially. So what are you waiting for? Start fundraising online today to raise awareness and grow support for your organization because even a small change is better than no change at all. Author: Kalli Wilson | Marketing Intern | The Lotto Factory Global
https://medium.com/the-lotto-factory/fundraising-during-a-pandemic-758a03a623c6
['Kalli Wilson']
2020-07-07 16:54:42.527000+00:00
['Fundraising', 'Money', 'Software Development', 'Charity', 'Coronavirus']
Persado and Digitised Copywriting: Fad or Future?
By now you’ve probably heard of Persado. It’s the software that writes your copy for you, and (it says) can do it better — claiming to outperform human-crafted messages 100% of the time. For the (roughly) five thousand years that humans have been writing, we like to think we’ve gotten pretty good at it. So can a computer really do it better? Should copywriters be packing up their MacBooks and re-training as baristas? The big question is: can it write? Persado’s ‘cognitive content’ platform deconstructs and analyses your copy before generating a new, optimised version, based on response data from 40+ billion ad impressions gathered across more than 4,000 marketing campaigns via Facebook, display ads, email and mobile. It highlights the emotional language you’ve used, intuits what you’re trying to achieve, and rifles through a few million possible semantic combinations before settling on the permutation that its data says will achieve the highest click-through rate. So far, so Skynet. Persado reports a 75 per cent rise in engagement across web, email, and social. This sounds hugely impressive, and all of their 80 global clients — including the likes of Microsoft, eBay, and American Express — have yet to refute it. The company was recently bolstered by £21 million worth of investments led by Goldman Sachs, bringing the running total from outside funding to over £46 million. For a company only 3 years old, Persado has quickly attained widespread credibility in a field many still write off as science fiction; in all likelihood, it’s here to stay. But what’s really behind the numbers? Automation has already proved its worth in other areas of the industry, with programmatic buying and retargeting having gained firm footholds. Computers can even outthink us at strategy games. Only last month Google’s DeepMind program — AlphaGo — beat the world number two at Go (a game widely considered to be more complex than chess), in three consecutive matches. But with any automated system, things can go wrong — a system can’t account for context. It doesn’t know not to put an ad for flight sales beside a news story on a plane disaster, or an ad for funeral services under that of a care home. It can be intuitive, but it can’t be sensitive. It promises a ‘consistent and continued lift in campaign performance’, but it can only fall short when it comes up against truly emotional, creative content that grabs attention and tells a story. Content production is often over-simplified to ‘storytelling’, but there’s an element of truth in that. Stories showcase human wit, beauty and innovation — and it’s this that we as consumers identify with. Can an automated system really replicate that? Now that every company out there has to generate digital content to retain and grow their customer base, there’s a seemingly endless deluge of substandard content pouring onto the web. Does 75 per cent more engagement really mean better brand recognition, higher conversion, or is it just a marked improvement on a sea of already-weak content? Persado’s site is littered with too-good-to-be-true statistics, but until they define the parameters of ‘engagement’, these mean little. There’s definitely a place in the industry for companies with quick, effective ways of creating marketing material, especially for the thousands of brands and businesses for whom content is an afterthought or a reluctant obligation. If what you aim for is ‘above average’, then Persado is for you. 
If you want crafted copy that rises to the top of the pile; that prompts engagement while building powerful, lasting relationships with clients and customers, then a copywriter is probably what you’re looking for. By Curtis Batterbee, Copywriter at Hugo & Cat To find out more about content planning and creation, drop us a line at [email protected]
https://medium.com/nowtrending/persado-and-digitised-copywriting-fad-or-future-a67c6f004e2d
['Hugo']
2017-06-07 10:42:43.356000+00:00
['Future', 'Persado', 'Copywriting', 'Advertising']
CI/CD for Android and iOS Apps on AWS
Mobile apps have taken center stage at Foxintelligence. After implementing CI/CD workflows for Dockerized Microservices, Serverless Functions and Machine Learning models, we needed to automate the release process of our mobile application — Cleanfox — to continuously deliver the features we are working on and ensure a high-quality app. While the CI/CD concepts remain the same, the practicalities are somewhat different. In this post, I will walk you through how we achieved that, including the lessons we learned and formed along the way, to drastically boost your Android and iOS application development. Cleanfox — Clean up your inbox in an easy manner The Jenkins cluster (figure below) consists of a dedicated Jenkins master with a couple of slave nodes inside an autoscaling group. However, iOS apps can be built only on a macOS machine. We typically use an unused Mac mini computer located in the office devoted to these tasks. We have configured the Mac mini to establish a VPN connection (at system startup) to the OpenVPN server deployed on the target VPC. We set up an SSH tunnel to the Mac node using dynamic port forwarding. Once the tunnel is active, you can add the machine to Jenkins’ set of worker nodes: This guide assumes you have a fresh install of the latest stable version of Xcode along with Fastlane. Once we had a good part of this done, we used Fastlane to automate the deployment process. This tool offers a set of scripts written in Ruby to handle tedious tasks such as code signing, managing certificates and releasing the .ipa to the App Store for end users. Also, we created a Jenkinsfile, which defines a set of steps (each step calls a certain action — a lane — defined in the Fastfile above) that will be executed on Jenkins based on the branch name (GitFlow model): The pipeline is divided into 5 stages: Checkout: clone the GitHub repository. Quality & Unit Tests: check whether our code is well formatted and follows Swift best practices, and run unit tests. Build: build and sign the app. Push: store the deployment package (.ipa file) in an S3 bucket. UI Test: launch UI tests on Firebase Test Lab across a wide variety of devices and device configurations. If a build on the CI passes, a Slack notification will be sent (a broken build will notify developers to investigate immediately). Note the usage of the git commit ID as a name for the deployment package to give a meaningful and significant name to each release and to be able to roll back to a specific commit if things go wrong. Once the pipeline is triggered, a new build should be created as follows: At the end, Jenkins will launch UI tests based on the XCTest framework on Firebase Test Lab across multiple virtual and physical devices and different screen sizes. We gave AWS Device Farm a try, but we needed to get over two problems at the same time: we wanted to receive test results after only a short wait, without paying too much. Test Lab exercises your app on devices installed and running in a Google data center. After your tests finish, you can see the results, including logs, videos and screenshots, in the Firebase console. You can enhance the workflow to automate taking screenshots through the fastlane snapshot command, which saves hours of valuable time you would otherwise burn taking screenshots. 
To upload the screenshots, metadata and the IPA file to iTunes Connect, you can use the deliver command, which is already installed and initialized as part of fastlane. The Android CI/CD workflow is quite straightforward: as it needs only a JDK environment with the Android SDK preinstalled, we are running the CI on a Jenkins slave deployed on an EC2 Spot instance. The pipeline contains the following stages, which could be drawn up as the following steps: Check out the working branch from a remote repository. Run the code through lint to find poorly structured code that might impact reliability and efficiency and make the code harder to maintain. The linter produces XML files which will be parsed by the Android Lint Plugin. Launch unit tests. The JUnit plugin provides a publisher that consumes the generated XML test reports and provides some graphical visualization of the historical test results as well as a web UI for viewing test reports, tracking failures, and so on. Build a debug or release APK based on the current Git branch name. Upload the artifact to an S3 bucket. Similarly, after the instrumentation tests have finished running, the Firebase web UI will display the results of each test — in addition to information such as a video recording of the test run, the full Logcat, and screenshots taken: To bring down testing time (and reduce the cost), we are testing Flank to split the test suite into multiple parts and execute them in parallel across multiple devices. Our Continuous Integration workflow is sailing now. So far we’ve found that this process strikes the right balance: it automates the repetitive aspects and provides protection, but is still lightweight and flexible. The final thing we want is the ability to ship at any time. We have an additional stage to upload the iOS artifact to TestFlight for distribution to our awesome beta testers. Like what you’re reading? Check out my book and learn how to build, secure, deploy and manage production-ready Serverless applications in Golang with AWS Lambda. Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy. We’re not sharing this just to make noise. We’re sharing this because we’re looking for people who want to help us solve some of these problems. There’s only so much insight we can fit into a job advert, so we hope this has given a bit more and whetted your appetite. If you’re keeping an open mind about a new role or just want a chat — get in touch or apply — we’d love to hear from you!
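The Push stage itself lives in the Jenkinsfile and Fastlane configuration, which are not reproduced in this text. As a rough illustration of the idea of naming the artifact after the git commit, here is a hedged Python sketch using boto3; the bucket name and file paths are placeholders, not the team's real configuration.

import subprocess
import boto3

# Use the short commit hash as the artifact name so every release is traceable
# and can be rolled back to a specific commit.
commit_id = subprocess.check_output(
    ["git", "rev-parse", "--short", "HEAD"]
).decode().strip()

s3 = boto3.client("s3")
s3.upload_file(
    Filename="build/Cleanfox.ipa",          # deployment package produced by the Build stage
    Bucket="example-mobile-artifacts",      # placeholder bucket name
    Key=f"ios/cleanfox-{commit_id}.ipa",    # commit ID embedded in the object key
)
print(f"Uploaded artifact for commit {commit_id}")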
https://medium.com/foxintelligence-inside/ci-cd-for-android-and-ios-apps-on-aws-79695520fde4
['Mohamed Labouardy']
2019-04-16 08:07:43.025000+00:00
['Android', 'AWS', 'Technology', 'DevOps', 'iOS']
This Pandemic Is Teaching Us That We Can Change
“Mr. Jones used to live there.” “The Kaminski kids used to play on this corner. Joe Blum used to stay next door.” Or maybe it was someone in your family. Maybe it was your entire family. A new problem–what to do for the children made orphans by the Spanish influenza epidemic–is facing Washington today. —The Washington Times, October 19, 1918. In addition to the deaths from the Spanish flu, even more Americans had died in the war that had just ended that November of 1918. It was a world devastated by twin curses: World War I and plague. The history books tell only part of the story. When the plague was over, human beings picked themselves up and kept going. It released something in them, though. There’s a reason that the 1920s are called the “Roaring Twenties.” The history books say that it was because of new technologies, movies, planes, automobiles, new clothes (no corset!), and a break with tradition. These all would have had a role, but it’s most likely because a cloud of death had passed overhead (much like the Angel of Death, the one sent by God to kill the firstborn in Exodus 12:23 in the Bible) and people on Earth, aware of their narrow escape, let loose. They had been passed over. They were sad, but they had a life to live. They would have to live it for those who had passed away. They had survived. In the midst of a tragedy, it does feel as if the earth and life as we know it are over. Time stops. The pain of loss makes everything else trivial and meaningless. We realize that which was so important just days ago means nothing at all. Millions of us, for example, were overworked just a few months ago. Now? We’re unemployed. This pandemic is different. COVID-19 is a more insidious horror. It’s extremely contagious, and fools many into thinking that it’s “just like the flu.” But it’s worse. The Spanish flu generally killed pretty quickly — that alone could help stop some of its spread. After all, if one died in a fast fashion, the chance of spreading the infection would’ve been lower. Even so, it is conservatively estimated that as many as 500 million people worldwide were infected, and approximately 50 million died from 1918 to 1920. There was no vaccine. It is estimated that 20 million died in World War I. The totality is astounding. Then, as now, politicians tried to stop medical professionals from warning the public: in anticipation of Philadelphia’s “Liberty Loan March” in September, doctors tried to use the press to warn citizens that it was unsafe. Yet city newspaper editors refused to run articles or print doctors’ letters about their concerns. The citizens of Philadelphia paid for this by dying more than in any other city in the nation. The Spanish flu killed 12,191 people in four weeks in that one panic-stricken city. Even today, Philadelphia remembers — because their descendants have passed on their stories. But an unmasked superspreader of the coronavirus can walk around for a week or two spreading the virus to others before becoming ill. They won’t even know it. Ironically, the rules to help keep people alive in 1918 are the same as today: To Prevent Influenza, Illustrated Current News, October 18, 1918 National Library of Medicine #A108877 But, there was an aftermath. We will survive. The question is: How will we survive? The latest news is that a vaccine appears to be on the horizon. University of Oxford vaccine is safe and induces early immune reaction, early results suggest. Since the pandemic began, over 600,000 people have died. 
This vaccine trial is just one of 23 taking place all over the world, and so this is a good start. Just like the people in 1920, in 2020 we have to think about how we will live for the rest of our lives. What has changed? What should change? Many jobs do not require your presence in the office. We already knew that, but there was much resistance to change. And many managers want to look at your face. It’s not always a bad thing. But for a lot of us, it’s unnecessary. We need to be cleaner. Why are our subways and public bathrooms filthy? Friends and family told me of their travels to Japan, Singapore, and other like-minded places. The airports and streets were squeaky clean. Why isn’t it a priority here? We should incorporate more cleanliness. We are learning to social distance. AND, we’re relying on technology more. Is that good or bad? I miss my hugs, though. We’ve put people in prisons who have no business being there. We’re now releasing people who were put behind bars for marijuana sales, something that one can buy from the store downtown. Some of us are learning (or should learn) to change our mindset from individualism to a culture that thinks more about one's neighbor. Selfishness doesn’t work in a pandemic. The coronavirus, if it was a person, loves selfishness. It thrives on it. We have a lot of homeless people here, but steps have been taken to mitigate this evil in some of our cities. They’re putting them into hotel rooms during the lockdown. But we don’t normally do this. When I asked an international traveler about this phenomenon, she said she didn’t even see them in certain cities. What this pandemic is teaching us is that we can change. We just have to have the will. We are still in the midst of the pandemic. Our future is still uncertain, but that was always the case even before the pandemic. We must stay strong. We must fight our way through these divisions in our country. Not to be too pollyanna-ish, but there is no way to win, to survive, without merging our strengths along with our scientific and medical knowledge. The politics of our time is poisonous, even deadly — this we have to admit. It’s contaminating us all. And, as we can see, it’s confirmed by the fact that people are dying needlessly. And if a house be divided against itself, that house cannot stand.~ Mark 3:25 King James Version This was as true when it was written around AD 65–75 in the Greek as it is today. We must live together. Life will change. We will have to adapt. Because our lives should change in the midst of illness.
https://medium.com/ninja-writers/life-before-and-after-covid19-77b8346cd289
['Vanessa Robinson']
2020-08-02 17:47:22.415000+00:00
['Culture', 'Politics', 'Health', 'History', 'Pandemic']
Don’t Lose Hope for the Planet
Don’t Lose Hope for the Planet Indigenous peoples haven’t, and neither should we by AJAI RAJ “Doom. Gloom. All I ever hear.” — Case, Neuromancer by William Gibson I’ll be the first to admit it: reading and writing about climate change can be a real drag. Not only does the news seem uniformly bad — a relentless march of cold, implacable facts signifying an oncoming catastrophe — but there is a whole cottage industry (more of a mansion industry, really) devoted to discrediting those facts. It can feel like reporting on the impending razing of your own house, accompanied by a chorus of aggressively ignorant naysayers (who, by the way, live in the same house) insisting that everything is fine. Not unlike the picture of that dog in the bowler hat that’s so popular these days. You could argue that climate change was patient zero of the “fake news” phenomenon. But on another level, I get it. Leaving aside the longstanding role of the fossil fuels industry in actively misleading the public, very much like the tobacco industry, I can understand why one would prefer to deny that climate change is happening, to instead cling to feeble arguments to the contrary and accuse scientists (and meteorologists, and the weather) of orchestrating a worldwide conspiracy. It’s the same reason that I, when I was a cigarette smoker, found myself Googling for assurances that my risk of getting cancer wasn’t really that high— and that was after my dad died from complications following surgery on an abdominal aortic aneurysm, a smoking-related condition which, like most cardiovascular diseases, has nothing to do with cancer. Like the old saying goes, denial ain’t just a river in Egypt. Actually, the more I think about how dreadful and dystopian climate change headlines can sound, the more I understand the appeal of climate change denial. And the more I see the inevitability and destructiveness of the cycle. In a perverse way, to deny climate change is to be able to have hope for the future. So the worse the headlines, the fiercer the denial. Round and round we go, the serpent eating its tail. But it doesn’t have to be that way. We don’t have to lose hope. We can change the narrative from one of doom and gloom to one of healing and hope, and we can do so without denying facts or ignoring reality. We not only can. We have to. That’s the message of Snowchange Cooperative, an Finland-based organization that’s at the front lines of climate change, and one that’s unlike any other on the planet. For almost 20 years, Snowchange has been combining modern scientific techniques with the traditional wisdom of indigenous peoples— including the Saami, Chukchi, Yukaghir, Inuit, Inuvialuit, Inupiaq, Gwitchin, Icelandic, Tahltan, Maori, Indigenous Australian, and many others that this writer admits to having never heard of before reporting this story — to help these communities document and adapt to the effects of climate change all over the world. Tero Mustonen, PhD and president of Snowchange, embodies this duality perfectly. Not only is he a scholar of Arctic biodiversity, climate change, and issues facing indigenous peoples, he is also the head of a Finnish village called Selkie, in the eastern region of North Karelia. Dr. Mustonen lives off of the land, in a land-based economy complete with fisheries, in the middle of an old-growth (read: ancient, undisturbed) forest, with his wife, a couple of goats, a few chickens, and no running water. 
“The boreal [regions] and the Arctic are one of the places where climate change has been proceeding faster than almost anywhere else — a lot of the things that were predicted in 2001 or 2002 have manifested in many ways throughout the Arctic,” Dr. Mustonen said. “One of the things we try to put forward are practical solutions and opportunities, and things that actually address the impacts [of colonialism, and the environmental damage wrought by industry] and also the larger scale of what we need to do in terms of nations, and cultures, and economies, and things like that — so we operate from the grassroots level in many, many remote communities, all the way to very high policy levels, including the UN, US, UK, and Indian governments. “For this year, we have a number of flagship activities on ecological restoration. Many of these remote communities are better off if they are able to restore and address past land use changes that have taken place,” most often through colonial land theft and industrial pollution, “and through that, rebuild their resilience, for example in the context of rivers and catchment areas.”
https://medium.com/defiant/dont-lose-hope-for-the-planet-68155b0435ac
['Ajai Raj']
2017-02-26 20:03:04.680000+00:00
['Defiant Science', 'Climate Change', 'Environment']
How to Build a Reporting Dashboard using Dash and Plotly
A method to select either a condensed data table or the complete data table. One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present: Code Block 17: Radio Button in layouts.py The callback for this functionality takes input from the radio button and outputs the columns to render in the data table: Code Block 18: Callback for Radio Button in layouts.py File This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below is changing the data presented in the data table based upon the dates selected using the callback statement, Output('datatable-paid-search', 'data'), this callback is changing the columns presented in the data table based upon the radio button selection using the callback statement, Output('datatable-paid-search', 'columns'). Conditionally Color-Code Different Data Table cells One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table highlighted based upon a metric’s value; red for negative numbers, for instance. However, conditional formatting of data table cells has three main issues. First, there is a lack of formatting functionality in Dash Data Tables at this time. Second, if a number is formatted prior to inclusion in a Dash Data Table (in pandas, for instance), then data table functionality such as sorting and filtering does not work properly. Third, there is a bug in the Dash data table code in which conditional formatting does not work properly. I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide: Code Block 19: Conditional Formatting — Highlighting Cells The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash. *This has since been corrected in the Dash Documentation. Conditional Formatting of Cells using Doppelganger Columns Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and Dash data table. These doppelganger columns had either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug when the decimal portion of a value is not considered by conditional filtering). 
The doppelganger columns can then be added to the data table but are hidden from view with the following statements: Code Block 20: Adding Doppelganger Columns Then, the conditional cell formatting can be implemented using the following syntax: Code Block 21: Conditional Cell Formatting Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%). One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values. The complete statement for the data table is below (with conditional formatting for odd and even rows, as well as highlighting cells that are above a certain threshold using the doppelganger method): Code Block 22: Data Table with Conditional Formatting I describe the method to update the graphs using the selected rows in the data table below.
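Since Code Blocks 20–22 are not reproduced here, a rough sketch of the same doppelganger idea looks like the snippet below. The column names are illustrative, and the filter_query syntax shown is the form used by more recent Dash versions (older releases used a different 'filter' syntax), so treat this as a sketch of the pattern rather than the article's exact code.
from dash import dash_table   # in Dash < 2.0: import dash_table
import pandas as pd

df = pd.DataFrame({
    "Revenue YoY (%)": ["-12.3%", "4.5%"],                # formatted strings shown to users
    "Revenue_YoY_percent_conditional": [-1230, 450],      # doppelganger: raw value * 100
})

table = dash_table.DataTable(
    id="datatable-paid-search",
    columns=[{"name": c, "id": c} for c in df.columns],
    hidden_columns=["Revenue_YoY_percent_conditional"],   # keep the helper column out of view
    data=df.to_dict("records"),
    style_data_conditional=[{
        # filter on the hidden doppelganger column, but style the visible, formatted one
        "if": {"filter_query": "{Revenue_YoY_percent_conditional} < 0",
               "column_id": "Revenue YoY (%)"},
        "color": "red",
    }],
)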
https://medium.com/p/4f4257c18a7f#0d3f
['David Comfort']
2019-03-13 14:21:44.055000+00:00
['Dash', 'Dashboard', 'Data Science', 'Data Visualization', 'Towards Data Science']
A Primer on Supervised and Unsupervised Machine Learning Models
As an aspiring machine learning engineer, I always tell people that I like to make computers think like humans. Image by DS3 Marketing Team …Okay, I knew I couldn’t fool you. To tell you the truth, what we do isn’t anything fancy, and it’s certainly not magic. This isn’t to say that what ML can accomplish isn’t remarkable: these mathematical models optimize our internet searches, drive cars, re-tailor education experiences for all sorts of individuals, predict sicknesses, and classify all objects with uncanny accuracy, breaking milestones at prophetic rates. Academics tout their “AI solutions” as homologs to human reasoning, but the merit of doing so doesn’t really transcend the fact that these algorithms work by learning and improving based on previous iterations. We operate on introspection, planning, higher-order thinking, and self-reflection. We take in sequential information well. The machines that we train hone probability, an ability to take in feedback on a continuous, cyclical basis, and pattern recognition. Much of the time, the “artificial mind” crunches numbers and “things” in a manner that remains a mystery to the general public. So, what processes do ML algorithms undertake to achieve such eerie results? They build internal models of the data (sensory information) and look for an underlying structure, sewing patterns from the top down. This is what is called an unsupervised learning task. Others take a bottom-up approach and generate their own assumptions about their outputs based on the correct answers they have learned before, similar to how we are taught in a classroom. This is the heart of supervised learning. Of course, these different workflows are saturated with moving parts beyond what most of us can comprehend, but the fundamental goals may not be as worthy of the reverence we hold them in. I will be going over the mechanics of some elementary machine learning models and differentiating between the unsupervised and supervised ones, and discussing why each of them falls into their respective categories. Let’s start with some supervised algorithms. Supervised Learning 🟢 Linear Regression In simple terms, linear regression finds the best-fit line for describing the relationships between two or more variables. This is found by calculating the sum of the squared residuals from the observations to the predicted line. When we end up with a resultant line, we can express it as follows: Y = β_0+β_1X_1+…+β_pX_p+ε, so that E(Y | X_1,…, X_p) = β_0+β_1X_1+…+β_pX_p, where: E is the expected value. Y is the dependent variable, eventually designated as a function of the predictors. X_i, i = 1,…, p are the predictors (independent variables). β_i, i = 1,…, p are the changes in the outcome for every unit change in the corresponding predictor. ε is the disturbance, which represents variation in Y that is not caused by the predictors. Note that a linear regression model has three main types of assumptions: Linearity assumption: There is a linear relationship between the dependent and independent variables. Error term assumptions: ε^(i) are normally distributed, independent of each other, have a mean of zero, and have constant variance, σ^2 (homoscedasticity). Estimator assumptions: Independent variables are independent of each other (no multicollinearity) and measured without error. Linear regression is supervised 🟢. You are trying to predict a real number from the model you trained on your dataset of known dependent variables. 
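To make the supervised framing concrete, here is a small illustrative sketch with synthetic data and scikit-learn (the numbers and coefficients are made up for the example, not taken from the article):
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                                               # two predictors
y = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)   # known betas plus noise

model = LinearRegression().fit(X, y)            # supervised: we fit against the observed y
print(model.intercept_, model.coef_)            # estimates of beta_0, beta_1, beta_2
print(model.predict(np.array([[0.2, -1.0]])))   # predict a real number for a new observation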
Visual from Backlog Linear Support Vector Machines (SVMs) SVMs detect classes in training data by finding the optimal hyperplane between the classes. This hyperplane must be optimal in the sense that it maximizes the margin, or separation, between the classes. This is used in both classification and regression problems, but SVM typically exemplifies the idea of a large margin classifier. The geometric intuition behind classifier optimality is shown below. There are clearly infinitely many separators for the two classes (blue circles and X’s), but there is one that is equidistant from the two “blobs”. Therefore, the margin between the blue circles and the X’s is maximized. As for how this hyperplane is found, we take a subset of the training samples such that its elements are close to the decision surface, regardless of the class. Intuitively, SVM draws two parallel lines through the subset members of each class. These two parallel lines, in red below, are called support vectors. Image created by author Eventually, the margin is maximized, and the line is drawn with the help of the support vectors. This optimal hyperplane will improve the classifier accuracy on new data points. In our case, the dataset is linearly separable and has no noise or outliers, making it a hard margin SVM. However, soft-margin SVMs are preferred in practice because they are less susceptible to overfitting than are hard-margin SVMs and are more versatile as they can include some misclassified outliers. Clearly, SVM is supervised 🟢. It requires the dataset to be fully labelled, hence the blue circles and the X’s. The only answer you’re trying to procure from SVM is the function describing the separation between the two known classes. Image from Velocity Business Solutions Naive Bayes You probably understand that Naive Bayes operates with Bayes’ Theorem, which gives you the posterior probability of an event given prior knowledge. We mathematically define it as follows: P(A|B)=P(B|A)P(A) / P(B), where A, B = events. It’s also expressed as the true positive rate of an experiment divided by the sum of the false positive rate of a population and the true positive rate of the experiment. What’s interesting about Bayes’ Theorem is that it separates a test for an event from the event itself. You can read more about how this works here. Naive Bayes is a binary and a multi-class classification algorithm in which predictions are simply made from calculating the probabilities of each data record associated with a particular class. The class having the largest probability for a data record is rated its most suitable class. Naive Bayes is an uncomplicated and fast algorithm and is a benchmark for text categorization tasks. It works well even with very large volumes of data. Why is Naive Bayes “naive”, however? It calculates a conditional probability from other individual probabilities, implying independence of features, a fact we’ll almost never encounter in real life. We can infer from looking at the formula that Naive Bayes is supervised 🟢. We need labels in our records to compute the probabilities for values of a certain feature given the label. 
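A compact, hypothetical sketch of both supervised classifiers discussed above, using toy data and scikit-learn defaults (the two "blobs" and the query point are invented for illustration):
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

# two labelled "blobs": class 0 around (0, 0), class 1 around (3, 3)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, size=(50, 2)), rng.normal(3, 0.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

svm = SVC(kernel="linear", C=1.0).fit(X, y)   # large-margin separator; soft margin controlled by C
nb = GaussianNB().fit(X, y)                   # class-conditional Gaussians with the "naive" independence assumption

print(svm.predict([[2.5, 2.5]]), nb.predict([[2.5, 2.5]]))   # both need the labels y to be fit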
Visually, the data points are projected onto the principal axis. Below, PC 1 is the axis that preserves the maximum variance. Finding PC 2 requires a linear orthogonal transformation. The number of axes that PCA obtains is equal to the number of dimensions of the dataset, so that every principal component is orthogonal to one another. Image from StatistiXL Step by step, PCA is conducted by computing the covariance matrix, an example of which is shown below. Next, we decompose the covariance matrix to find its eigenvalues, denoted by λ, and corresponding eigenvectors. Geometrically, the eigenvector points in the resultant direction after the applied transformation, and it is stretched by a factor denoted by the eigenvalue. Therefore, the eigenvector gives us information about the direction of the principal component, while the eigenvalue tells us about its magnitude. Image from Wikipedia Mathematically, we can express these terms as follows. Consider the linear differential operator, d/dx, that scales its eigenvector (or eigenfunction). For example, d/dx e^(λx)=λe^(λx). In terms of matrices, let A_n×n denote a linear transformation, then the eigenvalue equation can be written as a matrix multiplication Ax=λx, where x is the eigenvector. The set of all eigenvectors of a matrix associated with the same eigenvalue (including the zero vector) is called an eigenspace. This math might look menacing, but you’ll have a firm understanding after you’ve completed one linear algebra course. It all boils down to geometric intuition. Note that PCA won’t work well if you: Don’t have a linear relationship between your variables Don’t have a large enough sample size Don’t have data suitable for dimensionality reduction. Have significant outliers PCA is an unsupervised algorithm 🔴. It learns without any target variable. We also tend to associate clustering techniques with unsupervised learning algorithms. People have contested whether it can be considered a machine learning method at all since it’s used largely for preprocessing, but you can use the resultant eigenvectors to explain behavior in your data. K-Means Clustering Clustering excavates a dataset and discovers natural groupings, or clusters, within it, not necessarily disjoint. K-Means accomplishes this by computing the distance from a particular record to a fixed number of points called centroids. As more records are “processed”, the centroids are redefined to equal the means of their corresponding groups. This is essentially the high-level description of the algorithm. Theoretically, K-Means works to minimize its objective function, which is the squared error. The squared error expresses intra-cluster variance. Image created by author. J is the objective function. k is the predefined number of clusters. n is the number of records. x_i^(j) is the i-th point in the j-th cluster. c_j is the j-th centroid. ||x_i^(j) — c_j||^2 is the squared Euclidean distance. In practice: k points are selected at random as cluster centroids. Objects are assigned to a cluster according to its minimum Euclidean distance from each centroid. Update the centroids. They will be recalculated as the mean of the objects in their analogous cluster. Stopping condition: The centroids do not change, or the objects remain in the same cluster, or the maximum number of iterations is reached. Below is a useful flowchart that visualizes the k-means algorithm: Image from Revoledu K-Means clustering is one of the most popular and straightforward unsupervised learning algorithms 🔴. 
It infers the grouping solely from the Euclidean distance, and not any initial labels. However, there are semi-supervised variations of k-Means, such as semi-supervised k-means++, that adopt partially labeled datasets to add “weights” to cluster assignment. Much of the time, “supervised” and “unsupervised” labels shouldn’t serve as limitations for machine learning algorithms, and certainly shouldn’t forestall new developments for alternative methods on the same dataset. Association Rules Finding interesting associations among data items is the leading unsupervised learning method after clustering. Association learning algorithms track the rates of complementary cases in datasets, and make sure that these associations are found after random sampling. This approach is rule-based so it scales to categorical databases. I’ll briefly discuss its motivation and intuition. Consider a database of 10,000 transactions from a store. It is found that 5,000 customers bought Item A and 8,000 bought Item B. At first glance, these transactions look statistically independent, but a second look at the dataset reveals that 2,000 bought both Items A and B. Not only did 50% of the customers buy Item A and 80% Item B, but 20% bought both items. This information has been useful for marketing strategies. Rules are developed from these uncovered relationships. The composition of an association rule is as follows: an antecedent implies a consequent, and each element in both of these sets makes up an itemset. For example: {A,B}⇒{C}; I={A,B,C}. Various metrics such as support, lift, and confidence help quantify the strengths of found associations. Support is expressed as the frequency of an itemset in a database, as calculated by the following: supp(X)=|{t∈T;X⊆t}| / |T|, where T is a database of transactions, and X is the itemset in question. Confidence is the rate at which a rule such as X⇒Y is found to be true. It’s expressed as the proportion of transactions that contain X that also contain Y. conf(X⇒Y)=supp(X∪Y) / supp(X) Lift of a rule is the ratio of the actual confidence and the expected confidence. How likely is Item B to be purchased when Item A is purchased, while still adjusting for how popular Item B is? It’s defined as follows: lift(X⇒Y)=supp(X∪Y) / (supp(X)×supp(Y)) Some known algorithms that discover these rules are the Apriori algorithm, Eclat, and FP-growth. Again, no class labels are assigned to the datasets in use, and it works solely in its constraints to find relationships, so it is unsupervised 🔴. Conclusion I hope you found this article useful in differentiating unsupervised and supervised learning, and what segregates machine learning algorithms from our internal processes of problem solving and building intuition. However, the upper echelons of the artificial intelligence world are rapidly developing, with self-aware agents, deep learning, attention mechanisms, and so much more, aiming to mimic our own cognitive systems.
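For the unsupervised side, a rough sketch along the same lines — no labels are passed to either model, the data shapes are made up, and the association metrics reuse the toy store numbers from the example above:
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, size=(100, 5)), rng.normal(4, 1, size=(100, 5))])

pca = PCA(n_components=2).fit(X)          # principal axes from the covariance structure
X_2d = pca.transform(X)                   # project onto the top-2 variance directions
print(pca.explained_variance_ratio_)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_2d)
print(km.cluster_centers_)                # centroids found purely from distances, no labels

# association metrics for the toy store example: 2,000 of 10,000 baskets contain both A and B
n, n_a, n_b, n_ab = 10_000, 5_000, 8_000, 2_000
support = n_ab / n                              # supp(A ∪ B) = 0.2
confidence = n_ab / n_a                         # conf(A ⇒ B) = 0.4
lift = support / ((n_a / n) * (n_b / n))        # lift(A ⇒ B) = 0.5
print(support, confidence, lift)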
https://medium.com/ds3ucsd/a-primer-on-supervised-and-unsupervised-machine-learning-models-79fc4a109b51
['Camille Dunning']
2020-11-29 23:14:04.473000+00:00
['Supervised Learning', 'Artificial Intelligence', 'Unsupervised Learning', 'Machine Learni', 'Data Science']
3 Steps to Improve the Data Quality of a Data lake
The previous project I was working on was dedicated to the construction of a data lake. Its purpose was to inject gigabytes of data from various sources into the system and make it available for multiple users within the organization. As it turned out, it was not always easy to certify if all the data was successfully inserted, and even if the problem was already evident, it required hours to identify its cause. Hence, there was no doubt that the system needed fixing. From a technical perspective, the solution that we proposed might be divided into three principal blocks: logging the necessary information in the code, indexing logs in Elasticsearch using Logstash, and visualizing logs in custom dashboards on Kibana. In software development, logging is a means to decrypt the black box of a running application. When the app is growing in its complexity, it becomes trickier to figure out what is going on inside, and this is where logs become more influential. Who could benefit from them? Both developers and software users! Thanks to logs, the developer can restore the path the program is passing through and get a signal about the potential bug location, while the user can obtain the necessary information regarding the program and its output: such as time of execution, the data about processed files, etc. In order to improve the robustness of the application, the logs should fulfill two standards: we wanted them to be customized so that they contain only the data we are interested in. Hence, it is important to think of what really matters in the application: it may be the name of a script or an environment, time of execution, the name of the file containing an error, etc. The logs should be human-readable so that the problem can be detected as fast as possible regardless of the processed data volume. Step 1: Logging essential information in the code. The first sub-goal is to prepare logs that can be easily parsed by Logstash and Elasticsearch. For that reason, we keep the log messages as multi-line JSON that contains the information we would like to display: log message, timestamp, script name, environment (prod or dev), log level (debug, info, warning, error), stack trace. The code below can help you to create your customized logs in JSON format for a mock application which consists of the following parts: the application body is written in the main.py script, the logger object is defined in logging_service.py, its parameters are described in logging_configuration.yml. To add the specific fields to the logging statement, we have written a CustomJsonFormatter class that overrides the add_fields method of its superclass imported from the pythonjsonlogger package. The function get_logger from logging_service.py returns the new logger with the desired configurations. Note: the best practice is to define the logger at the top of every module of your application. To create file.log, run the code above, placing the files in the same folder, and then run the following command from your terminal: Step 2: Indexing logs in Elasticsearch using Logstash. To promote the readability of logs, we used the ELK stack: the combination of the three open-sourced projects Elasticsearch, Logstash, and Kibana. There exist multiple articles that can give you insights about what it is and its pros and cons. … Read the full article on Sicara’s blog here.
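Since the referenced snippets are not reproduced here, this is a minimal sketch of the idea behind logging_service.py, based on the python-json-logger package mentioned above. The extra field names, the "dev" environment value and the file.log path are illustrative, not the project's actual configuration:
import logging
from datetime import datetime
from pythonjsonlogger import jsonlogger

class CustomJsonFormatter(jsonlogger.JsonFormatter):
    def add_fields(self, log_record, record, message_dict):
        super().add_fields(log_record, record, message_dict)
        # enrich every log line with the fields we want Logstash/Elasticsearch to index
        log_record["timestamp"] = datetime.utcnow().isoformat()
        log_record["script"] = record.module
        log_record["environment"] = "dev"          # illustrative; read from logging_configuration.yml in practice
        log_record["level"] = record.levelname

def get_logger(name):
    logger = logging.getLogger(name)
    handler = logging.FileHandler("file.log")
    handler.setFormatter(CustomJsonFormatter("%(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

logger = get_logger(__name__)   # best practice: define the logger at the top of every module
logger.info("data injection finished", extra={"rows_processed": 1200})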
https://medium.com/sicara/improve-data-quality-data-lake-custom-logs-elk-monitoring-fcea9bcabb9a
['Irina Stolbova']
2020-01-30 13:40:06.248000+00:00
['Python Logging', 'Elk', 'Data Engineering', 'Data Vizualisation', 'Code Quality']
No More Mess in my Head Around Phrases Related to Identity in Computing
Azure No More Mess in my Head Around Phrases Related to Identity in Computing What is Identity? Azure Active Directory is just Active Directory in Azure? Microsoft Graph is a Data Visualization Framework or What? Credits of logos for Microsoft. Identity What is Identity in Cloud Identity is a unique identification of an object. Such an object can be a human being, a machine, or a combination of both. When we talk in the cloud computing context, identity means a set of properties about this object stored in the cloud’s datacentre. Identity & Access Management Identity Management (IdM) and Identity and Access Management (IAM) are interchangeable terms in identity and access management. So if you are reading about one, you are probably reading about the second term too. IAM is a framework of policies that tells what users can do in their restricted area and what the system needs from the user to operate properly. Such systems identify, authenticate, and authorize individuals or hardware applications to use restricted resources. IAM exists in the world without the internet too. It appears in different forms. For example, the “Staff Only” label at doors in markets, the ID card pinned to an employee’s suit, or the doorman as a profession in itself. Even your dog protecting the yard is a form of access management for your property. Dog protecting an off-limits area. Every IAM operates in its defined context. The context of IAM specifies the set of properties it needs from the Identity. For example, every patient has a folder with properties about his identity at the doctor’s office.
https://medium.com/the-innovation/no-more-mess-in-my-head-around-phrases-related-to-identity-in-computing-482cd0f8cad3
['Daniel Rusnok']
2020-11-05 17:18:51.564000+00:00
['Azure', 'Software Engineering', 'Software Development', 'Identity Management', 'Azure Active Directory']
Sharding Pinterest: How we scaled our MySQL fleet
Marty Weiner | Pinterest engineer, BlackOps “Shard. Or do not shard. There is no try.” — Yoda This is a technical dive into how we split our data across many MySQL servers. We finished launching this sharding approach in early 2012, and it’s still the system we use today to store our core data. Before we discuss how to split the data, let’s be intimate with our data. Mood lighting, chocolate covered strawberries, Star Trek quotes… Pinterest is a discovery engine for everything that interests you. From a data perspective, Pinterest is the largest human curated interest graph in the world. There are more than 50 billion Pins that have been saved by Pinners onto one billion boards. People repin and like other Pins (roughly a shallow copy), follow other Pinners, boards and interests, and view a home feed of all the Pinners, boards and interests they follow. Great! Now make it scale! Growing pains In 2011, we hit traction. By some estimates, we were growing faster than any other previous startup. Around September 2011, every piece of our infrastructure was over capacity. We had several NoSQL technologies, all of which eventually broke catastrophically. We also had a boatload of MySQL slaves we were using for reads, which causes lots of irritating bugs, especially with caching. We re-architected our entire data storage model. To be effective, we carefully crafted our requirements. Requirements Our overall system needed to be very stable, easy to operate and scale to the moon. We wanted to support scaling the data store from a small set of boxes initially to many boxes as the site grows. All Pinner generated content must be site accessible at all times. Support asking for N number of Pins in a board in a deterministic order (such as reverse creation time or user specified ordering). Same for Pinner to likes, Pinner to Pins, etc. For simplicity, updates will generally be best effort. To get eventual consistency, you’ll need some additional toys on top, such as a distributed transaction log. It’s fun and (not too) easy! Design philosophies and notes Since we wanted this data to span multiple databases, we couldn’t use the database’s joins, foreign keys or indexes to gather all data, though they can be used for subqueries that don’t span databases. We also needed to support load balancing our data. We hated moving data around, especially item by item, because it’s prone to error and makes the system unnecessarily complex. If we had to move data, it was better to move an entire virtual node to a different physical node. In order for our implementation to mature quickly, we needed the simplest usable solution and VERY stable nodes in our distributed data platform. All data needed to be replicated to a slave machine for backup, with high availability and dumping to S3 for MapReduce. We only interact with the master in production. You never want to read/write to a slave in production. Slaves lag, which causes strange bugs. Once you’re sharded, there’s generally no advantage to interacting with a slave in production. Finally, we needed a nice way to generate universally unique IDs (UUID) for all of our objects. How we sharded Whatever we were going to build needed to meet our needs and be stable, performant and repairable. In other words, it needed to not suck, and so we chose a mature technology as our base to build on, MySQL. 
We intentionally ran away from auto-scaling newer technology like MongoDB, Cassandra and Membase, because their maturity was simply not far enough along (and they were crashing in spectacular ways on us!). Aside: I still recommend startups avoid the fancy new stuff — try really hard to just use MySQL. Trust me. I have the scars to prove it. MySQL is mature, stable and it just works. Not only do we use it, but it’s also used by plenty of other companies pushing even bigger scale. MySQL supports our need for ordering data requests, selecting certain ranges of data and row-level transactions. It has a hell of a lot more features, but we don’t need or use them. But, MySQL is a single box solution, hence the need to shard our data. Here’s our solution: We started with eight EC2 servers running one MySQL instance each: Each MySQL server is master-master replicated onto a backup host in case the primary fails. Our production servers only read/write to the master. I recommend you do the same. It simplifies everything and avoids lagged replication bugs. Each MySQL instance can have multiple databases: Notice how each database is uniquely named db00000, db00001, to dbNNNNN. Each database is a shard of our data. We made a design decision that once a piece of data lands in a shard, it never moves outside that shard. However, you can get more capacity by moving shards to other machines (we’ll discuss this later). We maintain a configuration table that says which machines these shards are on: [{"range": (0,511), "master": "MySQL001A", "slave": "MySQL001B"}, {"range": (512, 1023), "master": "MySQL002A", "slave": "MySQL002B"}, ... {"range": (3584, 4095), "master": "MySQL008A", "slave": "MySQL008B"}] This config only changes when we need to move shards around or replace a host. If a master dies, we can promote the slave and then bring up a new slave. The config lives in ZooKeeper and, on update, is sent to services that maintain the MySQL shard. Each shard contains the same set of tables: pins, boards, users_has_pins, users_likes_pins, pin_liked_by_user, etc. I’ll expand on that in a moment. So how do we distribute our data to these shards? We created a 64 bit ID that contains the shard ID, the type of the containing data, and where this data is in the table (local ID). The shard ID is 16 bits, type ID is 10 bits and local ID is 36 bits. The savvy additionology experts out there will notice that only adds up to 62 bits. My past in compiler and chip design has taught me that reserve bits are worth their weight in gold. So we have two (set to zero). ID = (shard ID << 46) | (type ID << 36) | (local ID<<0) Given this Pin: https://www.pinterest.com/pin/241294492511762325/, let’s decompose the Pin ID 241294492511762325: Shard ID = (241294492511762325 >> 46) & 0xFFFF = 3429 Type ID = (241294492511762325 >> 36) & 0x3FF = 1 Local ID = (241294492511762325 >> 0) & 0xFFFFFFFFF = 7075733 So this Pin object lives on shard 3429. Its type is 1 (i.e. ‘Pin’), and it’s in row 7075733 in the pins table. As an example, let’s assume this shard is on MySQL012A. We can get to it as follows: conn = MySQLdb.connect(host="MySQL012A") conn.execute("SELECT data FROM db03429.pins where local_id=7075733") There are two types of data: objects and mappings. Objects contain details, such as Pin data. Object Tables! Object tables, such as Pins, users, boards and comments, have an ID (the local ID, an auto-incrementing primary key) and a blob of data that contains a JSON with all the object’s data. 
CREATE TABLE pins ( local_id INT PRIMARY KEY AUTO_INCREMENT, data TEXT, ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ) ENGINE=InnoDB; For example, a Pin object looks like this: {“details”: “New Star Wars character”, “link”: “http://webpage.com/asdf”, “user_id”: 241294629943640797, “board_id”: 241294561224164665, …} To create a new Pin, we gather all the data and create a JSON blob. Then, we decide on a shard ID (we prefer to choose the same shard ID as the board it’s inserted into, but that’s not necessary). The type is 1 for Pin. We connect to that database, and insert the JSON into the pins table. MySQL will give back the auto-incremented local ID. Now we have the shard, type and new local ID, so we can compose the full 64 bit ID! To edit a Pin, we read-modify-write the JSON under a MySQL transaction: > BEGIN > SELECT blob FROM db03429.pins WHERE local_id=7075733 FOR UPDATE [Modify the json blob] > UPDATE db03429.pins SET blob=’<modified blob>’ WHERE local_id=7075733 > COMMIT To delete a Pin, you can delete its row in MySQL. Better, though, would be to add a JSON field called ‘active’ and set it to ‘false’, and filter out results on the client end. Mapping Tables! A mapping table links one object to another, such as a board to the Pins on it. The MySQL table for a mapping contains three columns: a 64 bit ‘from’ ID, a 64 bit ‘to’ ID and a sequence ID. There are index keys on the (from, to, sequence) triple, and they live on the shard of the ‘from’ ID. CREATE TABLE board_has_pins ( board_id INT, pin_id INT, sequence INT, INDEX(board_id, pin_id, sequence) ) ENGINE=InnoDB; Mapping tables are unidirectional, such as a board_has_pins table. If you need the opposite direction, you’ll need a separate pin_owned_by_board table. The sequence ID gives an ordering (our ID’s can’t be compared across shards as the new local ID offsets diverge). We usually insert new Pins into a new board with a sequence ID = unix timestamp. The sequence can be any numbers, but a unix timestamp is a convenient way to force new stuff always higher since time monotonically increases. You can look stuff up in the mapping table like this: SELECT pin_id FROM board_has_pins WHERE board_id=241294561224164665 ORDER BY sequence LIMIT 50 OFFSET 150 This will give you up to 50 pin_ids, which you can then use to look up Pin objects. What we’ve just done is an application layer join (board_id -> pin_ids -> pin objects). One awesome property of application layer joins is that you can cache the mapping separate from the object. We keep pin_id -> pin object cache in a memcache cluster, but we keep board_id -> pin_ids in a redis cluster. This allows us to choose the right technology to best match the object being cached. Adding more capacity In our system, there are three primary ways to add more capacity. The easiest is to upgrade the machines (more space, faster hard drives, more RAM, whatever your bottleneck is). The next way to add more capacity is to open up new ranges. Initially, we only created 4,096 shards even though our shard ID is 16 bits (64k total shards). New objects could only be created in these first 4k shards. At some point, we decided to create new MySQL servers with shards 4,096 to 8,191 and started filling those. The final way we add capacity is by moving some shards to new machines. If we want to add more capacity to MySQL001A (which has shards 0 to 511), we create a new master-master pair with the next largest names (say MySQL009A and B) and start replicating from MySQL001A. 
Once replication is complete, we change our configuration so that MySQL001A only has shards 0 to 255, and MySQL009A only has 256 to 511. Now each server only has to handle half the shards as it previously did. Some nice properties For those of you who have had to build systems for generating new UUIDs, you’ll recognize that we get them for free in this system! When you create a new object and insert it into an object table, it returns a new local ID. That local ID combined with the shard ID and type ID gives you a UUID. For those of you who have performed ALTERs to add more columns to MySQL tables, you’ll know they can be VERY slow and are a big pain. Our approach does not require any MySQL level ALTERs. At Pinterest, we’ve probably performed one ALTER in the last three years. To add new fields to objects, simply teach your services that your JSON schema has a few new fields. You can have a default value so that when you deserialize JSON from an object without your new field, you get a default. If you need a mapping table, create the new mapping table and start filling it up whenever you want. When you’re done, ship your product! The Mod Shard It’s just like the Mod Squad, only totally different. Some objects need to be looked up by a non-ID. For instance, if a Pinner logs in with their Facebook account, we need a mapping from Facebook IDs to Pinterest IDs. Facebook IDs are just bits to us, so we store them in a separate shard system called the mod shard. Other examples include IP addresses, username and email. The mod shard is much like the shard system described in the previous section, but you can look up data with arbitrary input. This input is hashed and modded against the total number of shards that exist in the system. The result is the shard the data will live on / already lives on. For example: shard = md5(“1.2.3.4") % 4096 shard in this case would be 1524. We maintain a config file similar to the ID shard: [{“range”: (0, 511), “master”: “msdb001a”, “slave”: “msdb001b”}, {“range”: (512, 1023), “master”: “msdb002a”, “slave”: “msdb002b”}, {“range”: (1024, 1535), “master”: “msdb003a”, “slave”: “msdb003b”}, …] So, to find data about IP address 1.2.3.4, we would do this: conn = MySQLdb.connect(host=”msdb003a”) conn.execute(“SELECT data FROM msdb001a.ip_data WHERE ip='1.2.3.4'”) You lose some nice properties of the ID shard, such as spacial locality. You have to start with all shards made in the beginning and create the key yourself ( it will not make one for you). Always best to represent objects in your system with immutable IDs. That way you don’t have to update lots of references when, for instance, a user changes their username. Last Thoughts This system has been in production at Pinterest for 3.5 years now and will likely be in there forever. Implementing it was relatively straightforward, but turning it on and moving all the data over from the old machines was super tough. If you’re a startup facing growing pains and you just built your new shard, consider building a cluster of background processing machines (pro-tip use pyres) to script moving your data from your old databases to your shiny new shard. I guarantee that data will be missed no matter how hard you try (gremlins in the system, I swear), so repeat the data transfer over and over again until the new things being written into the new system are tiny or zero. This system is best effort. It does not give you Atomicity, Isolation or Consistency in all cases. Wow! That sounds bad! But don’t worry. 
You’re probably fine without these guarantees. You can always build those layers in with other processes/systems if needed, but I’ll tell you what you get for free: the thing just works. Good reliability through simplicity, and it’s pretty damn fast. If you’re worried about A, I and C, write me. I can help you think through these issues. But what about failover, huh? We built a service to maintain the MySQL shards. We stored the shard configuration table in ZooKeeper. When a master server dies, we have scripts to promote the slave and then bring up a replacement machine (plus get it up to date). Even today we don’t use auto-failover. Acknowledgements: Yash Nelapati, Ryan Probasco, Ryan Park and I built the Pinterest sharding system with loving guidance from Evan Priestley. Red Bull and coffee made it run.
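To make the ID scheme described above concrete, here is a small Python sketch of packing and unpacking the 64-bit IDs, mirroring the shifts and masks shown earlier (the helper names are mine, not Pinterest's, and the mod-shard function is one possible way to realize md5(key) % total_shards in Python):
import hashlib

def compose_id(shard_id, type_id, local_id):
    # 16-bit shard, 10-bit type, 36-bit local id, 2 reserved bits at the top
    assert 0 <= shard_id < 2**16 and 0 <= type_id < 2**10 and 0 <= local_id < 2**36
    return (shard_id << 46) | (type_id << 36) | local_id

def decompose_id(obj_id):
    shard_id = (obj_id >> 46) & 0xFFFF
    type_id = (obj_id >> 36) & 0x3FF
    local_id = obj_id & 0xFFFFFFFFF
    return shard_id, type_id, local_id

print(decompose_id(241294492511762325))   # -> (3429, 1, 7075733), the Pin from the example above

def mod_shard(key, total_shards=4096):
    # arbitrary input (IP, username, email) hashed and modded against the number of shards
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % total_shards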
https://medium.com/pinterest-engineering/sharding-pinterest-how-we-scaled-our-mysql-fleet-3f341e96ca6f
['Pinterest Engineering']
2017-02-21 18:54:49.956000+00:00
['MySQL', 'Sharding', 'Ec2', 'Zookeeper', 'Engineering']
PySpark on macOS: installation and use
Spark is a very popular framework for data processing. It has slowly taken over the use of Hadoop for data analytics. In-memory processing can yield up to 100x speedups compared to Hadoop and MapReduce. One of the main advantages of Spark is that there is no more need to write MapReduce jobs. Moreover, the Spark engine is compatible with a large number of data sources (txt, json, xml, sql and nosql data stores). Spark is, along with Hadoop, SQL, Python and R, one of the most sought-after skills for data scientists. A Spark application is made of several executor processes, which perform the data processing tasks, and a driver process, which is responsible for managing the resources allocated to the executors and distributing the data processing workload among them. Users interact with the driver through their code. Spark is written in Scala but also has APIs in other languages: R, Java and, more importantly, Python. Spark is meant to be run on a cluster of machines but can also be run locally, as driver and executors are merely processes. This can be useful to prototype applications locally before sending them to the cloud. Google Cloud DataProc is (among other solutions) a very convenient tool to launch Spark jobs on the cloud. Setting up Spark on macOS does not have to be a pain. I have seen many posts on the topic, but I have not really been satisfied and often found myself not understanding the reason behind some steps. So I decided to write my own post on the topic: 1. Install dependencies: A very convenient way to install dependencies is with homebrew (in my opinion the best package manager for macOS). To install homebrew, you only need to copy this in your terminal: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" Java First we need to install Java as Spark is written in Scala, which is a Java Virtual Machine language. You can install the latest version of Java with the command: brew cask install java You can always get information on the package using: brew cask info java You should get something that looks like this: Scala As Spark is written in Scala, we need to install it. If you get info on Spark using: brew info apache-spark You should get a result similar to this: You notice that Java is a dependency. We install Scala with the command: brew install scala Spark Finally, Spark can be installed on the system: brew install apache-spark After installing the packages, it is good to check your system with brew doctor. If you have “Your system is ready to brew.”, you can go to the next step. If you get any other message, you can follow the instructions given by the console. 2. Installing pySpark I assume that you already have python 3 installed on your computer. If not installed already, you can do so with homebrew: brew install python3 Then you need to install the python Spark API pySpark: pip3 install pyspark 3. Setting up environment Almost done! Now you need to define several environment variables and declare paths so that the Spark driver is accessible through pySpark. First of all we need to declare which Java version to use. I have two versions on my system. 
You can do so by opening the .bashrc file with nano: nano .bashrc The Java version can be declared with the following environment variable: export JAVA_HOME=/Library/java/JavaVirtualMachines/adoptopenjdk-8.jdk/contents/Home/ and the Java runtime environment is declared as follows: export JRE_HOME=/Library/java/JavaVirtualMachines/openjdk-13.jdk/contents/Home/jre/ Now two variables must be declared for Spark: export SPARK_HOME=/usr/local/Cellar/apache-spark/2.4.4/libexec export PATH=/usr/local/Cellar/apache-spark/2.4.4/bin:$PATH The first defines where your Spark libraries are installed and the second makes your binaries visible to your shell (in particular the pyspark executable that we use later). Finally, PySpark must be configured: export PYSPARK_PYTHON=/usr/local/bin/python3 export PYSPARK_DRIVER_PYTHON=jupyter export PYSPARK_DRIVER_PYTHON_OPTS='notebook' You can find your python3 distribution using the command: which python3 which is used to define the variable PYSPARK_PYTHON. In order to be able to start a notebook with the command pyspark, we define the last two variables: PYSPARK_DRIVER_PYTHON and PYSPARK_DRIVER_PYTHON_OPTS. You are all set up. The command pyspark will start up a new Jupyter notebook session. The Spark UI can be consulted at the address: http://localhost:4040 It looks like this:
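Once the notebook starts, a quick way to confirm the setup works is a tiny smoke test in the first cell (the app name and sample rows are arbitrary):
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("smoke-test").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])
df.show()                      # the job should also show up in the UI at localhost:4040
print(spark.version)           # should match the brew-installed version, e.g. 2.4.4
spark.stop()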
https://medium.com/swlh/pyspark-on-macos-installation-and-use-31f84ca61400
['René-Jean Corneille']
2019-10-22 11:35:37.458000+00:00
['Spark', 'Python', 'Data', 'Hadoop', 'Data Science']
Enhancing prioritization with networks
Last month Michael Dubakov suggested using networks to prioritize features. Now let’s go further and try to turn the idea into a practical framework. This article is meant as a standalone so reading Michael’s post is a suggestion, not a prerequisite. What are the prioritization techniques missing? As a diligent product manager, you’ve done customer discovery and collected feedback. You notice recurring Use Cases and even some clusters which represent Personas or Niches: Use Cases grouped by Personas, Niches or your favorite term Your team blends empathy with product vision and invents Features to address the Use Cases: Features address the Use Cases Now it’s time to decide which Features you should build next. You pick a prioritization formula like Value/Effort or RICE so your teammates can follow your thoughts and rationally debate the score. Here is what these formulas capture: Prioritization formulas decomposed So which of the two Features below should you build next? Prioritizing these two Features is not a tough riddle, right? Feature A has more value and takes less effort so it’s a no-brainer, case closed? Not so fast. The formula fails to capture two important factors. How connected are the Use Cases? Solving a bunch of unrelated Use Cases means building a product for everyone and no one. More often than not, this is a bad strategy: The true importance of a Use Case often lies in its connections How does the Feature affect other Features? Some Features open new possibilities and enhance existing functionality, others add complexity and slow the development down: While some Features solve important Use Cases, they are damaging to the network With these two questions in mind, let’s take another look at our alphabetical Features: The same Features and Use Cases with their 1st-level connections Feature A doesn’t look so good anymore — it would have been a shame to build it first. Wonder why you often want to override the priorities determined by the formula? It’s because the prioritization formula is short-sighted. It doesn’t know that addressing this small Use Case will greatly contribute to your strategy. It doesn’t know that building this seemingly unimportant Feature will serve as a foundation for something bigger. The good news is, you can teach the formula. How to incorporate networks into prioritization? The goal is not to replace but to enhance the existing prioritization techniques. Let’s start with a simple formula and make it more c̶o̶m̶p̶l̶i̶c̶a̶t̶e̶d̶ nuanced step-by-step. The simplest Value/Effort formula Step 1. Break down the vague “Value” into pieces that facilitate discussion: Reach, Impact, and Confidence. All the Use Cases that the Feature addresses contribute towards the score: RICE formula visualized: Reach×Impact×Confidence/Effort Let’s compare the actual RICE scores for our favorite Features: RICE scores with a lot of confusing numbers Feature A is indeed a clear winner. For now. Step 2. Incorporate the Use Cases network into the formula. The importance of a Use Case is composed of its isolated value plus its network effect: The formula enhanced with the Use Cases network effect A note for really attentive folks: When calculating the network effect for Use Case A, measure the ★ scores for other Use Cases excluding Use Case A from the network. Otherwise, a cyclic graph will eat your computer. Use Cases that have a great effect on other important Use Cases get a boost. 
Unlike in PageRank, we count the outbound, not the inbound relations: Examples of positive and negative network effects Here’s how the scores change with the new formula: Just look at the final scores, please don’t embarrass me by checking the calculations Okay, we are getting closer to reality. Let’s take one last step. Step 3. Incorporate the Features network in the same way: The formula enhanced with both network effects The Feature is rewarded for enhancing other Features and penalized for negative side effects on the network: Examples of Features doing good and bad to other Features Let’s compare our good old pals one last time: Final scores with both network effects incorporated in the formula Feature B drops the mic 🎤 We included only 1st-level connections here — the effect of the whole network will be even greater. As you can see, using networks for prioritization is not just a wild fantasy — it’s both effective and practical. Are there unexpected bonuses? Glad you asked. Once you include networks into prioritization, dependencies are resolved automatically. The foundational Features get a higher score and are implemented first. If you want to focus on a specific niche or persona, just put a 2x multiplier before the relevant Use Cases’ reach. No need to override the prioritization formula to stay in sync with OKRs. Do product management tools support this? Not yet. Work management tools cannot interactively visualize networks of real data. Hopefully, we’ll see graph views appear in productivity software next to boards, tables, and calendars. We are taking baby steps here at Fibery. Visualization aside, most tools fail to capture attributes of a relation. When connecting a Feature to a Use Case, we need to define impact and confidence. When building networks of Features and Use cases, it’s important to capture the strength of connections. Auxiliary tables in Coda or auxiliary Types in Fibery are the closest you can get so far. However, without proper visualizations, the experience is just bad. Once any tool solves these two problems, I will be happy to build and share a prototype. Stay tuned! Thanks to the Fibery team, Tanya Avlochinskaya, and Ilya Tregubov for thoughtful comments.
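The exact formulas live in the images above, but the gist can be sketched in a few lines of Python. The weights, names and the way scores are combined here are my own simplification of the idea (1st-level connections only), not the article's or Fibery's implementation:
# isolated value per Use Case (think of it as the Reach×Impact×Confidence part)
rice = {"UC1": 8.0, "UC2": 3.0, "UC3": 5.0}

# outbound connections: Use Case -> [(other Use Case, connection strength)]
uc_links = {"UC2": [("UC1", 0.5)], "UC3": [("UC1", 0.3), ("UC2", 0.2)]}

def uc_score(uc):
    # isolated value plus a 1st-level network effect on other Use Cases
    return rice[uc] + sum(w * rice[other] for other, w in uc_links.get(uc, []))

features = {
    "Feature A": {"solves": [("UC1", 1.0)], "affects": [], "effort": 2.0},
    "Feature B": {"solves": [("UC2", 1.0)], "affects": [("Feature A", 0.4)], "effort": 3.0},
}

def feature_score(name):
    f = features[name]
    value = sum(w * uc_score(uc) for uc, w in f["solves"])
    # reward (or penalize, with negative weights) the effect on other Features' value
    network = sum(w * sum(ww * uc_score(uc) for uc, ww in features[other]["solves"])
                  for other, w in f["affects"])
    return (value + network) / f["effort"]

for name in features:
    print(name, round(feature_score(name), 2))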
https://uxdesign.cc/enhancing-prioritization-with-networks-894760555b04
['Anton Iokov']
2020-05-29 12:56:51.932000+00:00
['Product Design', 'Software Development', 'Startup', 'User Research', 'Product Management']
A Quick Primer On Big O Notation
First, let’s be clear about what Big O isn’t. Big O is not going to give you or your team an exact answer on how long a piece of code will take to run. Far from it. As I mentioned earlier, Big O allows us to discuss our code algebraically to get a sense of how quickly it might operate under the strain of large data sets. Luckily, this simplifies things a bit for us newcomers, as it allows us to move our focus from runtime in terms of milliseconds to discussing runtime in relation to the number, and complexity, of operations in our code. For example, let’s say we have the following piece of code:
def example_search(key, array)
  for i in 0...array.length
    return "#{key} found!" if array[i] == key
  end
  return "#{key} not found :("
end
Here, we’re merely looping through a provided array to find a specific key. If that key isn’t found, we’re sure to highlight that as well. It’s a pretty quick operation overall if we’re using an array with five items, but what if we use this method to search for a key in an array 1,000 items long, or even 100,000 items long? Well, with Big O Notation, we can look at our algorithm and see that it will take O(n) time to run. Big O Notation, written as O(blank), shows us how many operations our code will run, and how its runtime grows in comparison to other possible solutions. Our example code runs at O(n) because, in the worst case, if our key isn’t found the first time our code runs, it might have to continue looping through the entire array until we can determine if the key is there or not, and its runtime will continue growing linearly with the size of the array. Why did I emphasize worst-case? We always want to focus on the maximum amount of time a process could take to understand the outer limits of our code to avoid bad solutions. It reassures us that a solution won’t take longer than we expect, and empowers us to feel confident that our code won’t lead to runtime issues in the future. So how does a runtime of O(n) compare to other run times, and what other runtimes can we expect for different types of solutions? Well, every algorithm is going to have its own runtime, but you can expect to see the following run fairly commonly: O(log n) O(n) O(n * log n) O(n²) O(n!) For now, don’t worry about how to calculate these yet (we’re just taking baby steps for the moment). All you need to know for now is that the list above is ordered by efficiency — algorithms that run at O(log n) run much faster than algorithms that run at O(n!). You can find a graph demonstrating this below: Graph of Possible Runtimes (source: https://bit.ly/31XKRXO) Ultimately, this is a simplification, and things are rarely so neat. But, with practice and further research, you’ll begin to develop a better understanding of how Big O run time can be determined for specific algorithms, and how they compare to other possible solutions. Conclusion And that’s it — a quick primer on what Big O Notation is, and why it is so essential for Software Engineers. If you only remember a few things from this article, it should be these: Algorithm speed isn’t discussed in terms of actual runtime but in Big O Notation. Big O Notation is important because it allows us to discuss the efficiency of our code and compare solutions without getting caught up in complex calculations. We always focus on the worst-case when using Big O Notation to ensure that we choose the most efficient solution at scale. 
Scale influences speed significantly, and even though an O(n) solution might be faster than an O(log n) solution at first, that changes quickly as you add more data to the equation. Hopefully, you walk away from this article feeling empowered to tackle this subject on your own. As I mentioned earlier, Big O Notation and Space & Time Complexity are subjects that are truly essential for Software Engineering, and your understanding of them could be what leads you to land your dream job. Good luck!
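If you want to see the scale effect discussed above for yourself, here is a short experiment (in Python rather than the Ruby of the earlier snippet) comparing an O(n) scan with an O(log n) binary search on sorted data — the data size and repetition count are arbitrary:
import bisect, timeit

data = list(range(10_000_000))          # sorted data, so binary search is applicable

def linear_search(key, array):          # O(n): may touch every element in the worst case
    for item in array:
        if item == key:
            return True
    return False

def binary_search(key, array):          # O(log n): halves the search space at each step
    i = bisect.bisect_left(array, key)
    return i < len(array) and array[i] == key

key = data[-1]                          # worst case for the linear scan
print(timeit.timeit(lambda: linear_search(key, data), number=3))
print(timeit.timeit(lambda: binary_search(key, data), number=3))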
https://medium.com/dataseries/a-quick-primer-on-big-o-notation-c99ccc7ddbae
['Maxwell Harvey Croy']
2020-07-08 13:34:58.329000+00:00
['Software Engineering', 'Algorithms', 'Programming']
MIT’s LeakyPhones Help You Interact with Strangers by Sharing Music at a Glance
The Tangible Media Group within MIT’s Media Lab is focused on researching human interaction with the gadgets and gizmos around us, and how that technology affects social dynamics. Their research has led to a number of interesting developments that we’ve featured in the past here on Hackster, but this is one you’re either going to love or hate. Amos Golan’s LeakyPhones are headphones that let you share your music with a stranger at a glance — and that let them share their music with you. Golan’s motivation for this project is rooted in our current social dynamics, or lack thereof. In the past, starting a conversation with a stranger was a simple matter of saying “hello” when you made eye contact for a moment. These days, many of us spend our time in public with our eyes glued to our smartphones and our ears covered with headphones. For many of you, that’s a good thing — you don’t want strangers talking to you anyway. But, there’s no denying the effect it’s having on spontaneous interactions. LeakyPhones is intended to open the doors back up to that kind of interaction, but respectfully. The idea is that if two people wearing LeakyPhones come across each other and hold eye contact for a moment, they’ll begin to hear what the other person is listening to. The longer they hold eye contact, the more the stranger’s music starts to overcome their own. Hopefully, that will spark a conversation. There are, of course, a number of privacy concerns here. Many of you are already cringing at the idea of a stranger hearing what you’re listening to, or you hearing what they’re listening to. That’s why LeakyPhones has four different operating modes. You can either choose to allow transmission of your own music or not, and choose to receive music from strangers or not. It allows for controlled social interactions with strangers, but only when you want it.
https://medium.com/hacksters-blog/mits-leakyphones-help-you-interact-with-strangers-by-sharing-music-at-a-glance-b4c15817719
['Cameron Coward']
2019-01-29 23:26:00.726000+00:00
['MIT', 'Research', 'Social', 'Technology', 'Music']
Topic-based versioning architecture for Scalable AI Application
We have refined this design in projects at large European retailers and airlines, and in our own Source AI platform. Today, these enhancements power machine learning and optimization models at scale, enabling our teams to iterate faster and deliver without fear of losing valuable insights. Establishing Principles Our first step in enhancing software design is to establish principles for our algorithms: They must always return a result, never change the state outside of their local environment and, given a set of unchanging input parameters, always return the same result. In establishing these principles, we make the algorithms deterministic, cacheable and testable. This, in turn, enables applications that are efficient but that are still not reproducible if the source data changes. Guided by these principles, we can assert that when given the same application configuration, input arguments, input data and source code, we will obtain the exact same output result. As an alternative to total-input versioning, we can use topic-based organization for our input data. As per our principles, topics can only be appended and messages must be immutable, linearly ordered and consistent within a topic. Furthermore, input data should be referenced by its topic, message and cursor for all consumed topics in the application. This data layout, alongside our design principles, enables total reproducibility and scalability for all AI-driven applications. The Uniqueness of Production Software I'd like to share some knowledge I acquired while working at BCG Gamma as a Software Engineer. One aspect of my job is to think about the best software design when given constrained development time and tight deadlines. In my experience, a specific design pattern has proven time and again to be useful when manipulating data frames in Analytics or Data Science projects. I'm not claiming that this pattern is novel or original, nor am I the first to reveal these principles. In fact, you can find similarities with internal database architecture in the form of a shared commit log. My reason for including it here is to help sprout ideas and discussions around the subject by focusing on organizing and sharing knowledge. Our shared objective is clear: To improve reproducibility, testability and scalability in AI Applications. We have accomplished all three first-hand at BCG Gamma while implementing AI at scale in production projects at large European retailers, airlines and at our own Source.ai platform. Let me be clear: production software has very different requirements compared to a PoC or an MVP. Something considered acceptable in an earlier stage of a project may have consequences in later phases of production that would not occur with these other projects. This is to say that the initial application design matters because defects are harder to correct when scaling up in width (feature) and depth (productionization). It is equally important to retain all experiment data when prototyping. I've witnessed many teams struggle to recover or recreate results they just presented in a meeting. Failing to retain data not only damages a team's credibility. It can also lead to the loss of valuable insights that may play a key role in follow-up work or comparison. Suffice it to say: Your data must be accessible at all times. Of course, extra performance is always welcomed — especially in software that will be used several times a day and drive the company's bottom line.
Term Definitions SIDE EFFECT In computer science, a function or expression is said to have a side effect if it modifies some state outside its local environment (such as when reading a file or writing to a socket). PURE FUNCTION In computer science, a pure function is a function that evaluates with no side effects. DETERMINISTIC A deterministic algorithm is an algorithm that, given a particular input, will always produce the same output. CACHEABLE An expression is called cacheable (referentially transparent) if it can be replaced with its corresponding value without changing the program's behavior. MEMOIZATION Memoization is an optimization technique used to speed up programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again. TESTABLE There is a strong correlation between good design and a testable system. Code that has weak cohesion, tight coupling, redundancy and lack of encapsulation is difficult to test. Designing AI-Driven Scalable Applications First, let's establish some principles for the algorithms in our applications: Functions must always return a result. Functions must never change the state outside of their local environment (i.e. with no side effects). Functions must always return the same result given a set of input parameters (referential transparency). You may wonder how, if functions are forbidden from creating side-effects, they can interact with other programs or humans. For that purpose, an application is built with a "pure" algorithmic core and an "impure" wrapper around it. Core functions can call only other pure functions. Only the wrapper is free to call any pure or impure functions and allowed to create side effects. The wrapper must do its best to retain the deterministic property of the algorithm. In other words, an algorithm can expect to receive from the wrapper a sales table extracted from a SQL database. In this case, the wrapper is allowed to initiate the connection and fetch the table in question, but not to update rows at that time. Some classes of errors cannot be avoided (think database connections, full disk), thus true deterministic behavior is only a best effort. From these practical considerations, we can derive the following best practices: Wrappers should read data only before the algorithm is called. Wrappers should write results only after the algorithm has returned. Wrappers are responsible for reading configurations and transmitting their values to algorithms. Applications should fail fast, hard and explicitly. Defer the retrying logic to their scheduler. Write unit tests for the pure core, and integration tests for the impure parts. For your algorithms to remain pure, you must avoid certain functions, such as unseeded random. Instead of using such a function, pass the seed as a parameter or configuration value. Database selects and file reads must likewise be kept out of the core; pass their results in as parameters instead. Database updates/deletes and file writes can also change the state of the filesystem or the network interface. Logger calls are an exception to the rule and, as such, can be used without restrictions. From Deterministic to Reproducible These principles enable testable and scalable applications. But how can you also enable reproducibility if the source data changes?
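Before answering that question, here is a minimal sketch, in Python, of the pure-core / impure-wrapper split described above. Every name in it (forecast_total_sales, the config keys, the sales table) is hypothetical, invented purely for illustration.

import json
import sqlite3

# Pure core: no I/O, no global state; the same inputs always produce the same output.
def forecast_total_sales(sales_rows, growth_rate):
    total = sum(row["amount"] for row in sales_rows)
    return total * (1.0 + growth_rate)

# Impure wrapper: reads everything before calling the core, writes only after it returns.
def run(config_path):
    with open(config_path) as f:              # read configuration first
        config = json.load(f)

    conn = sqlite3.connect(config["db"])      # fetch input data next (read-only)
    rows = [{"amount": amount} for (amount,) in conn.execute("SELECT amount FROM sales")]
    conn.close()

    result = forecast_total_sales(rows, config["growth_rate"])  # call the pure core

    with open(config["output_path"], "w") as f:                 # persist results last
        json.dump({"forecast": result}, f)

Unit tests can target forecast_total_sales directly with in-memory data, while integration tests exercise run end to end, which is exactly the split the best practices above recommend. With that picture in mind, back to the reproducibility question.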
With the application's algorithm being deterministic we can assert that, given the same configuration, input arguments, input data and source code, we must obtain the exact same output result — or receive no result if the application encountered an error. We assume dependencies are properly pinned to explicit versions. Some metadata are easier to obtain than others. Configuration, input arguments, and source code version do not usually pose problems, but input data may change since the application will not have any control over its content or availability. One possible approach to this problem is to collect all data from all sources into a controlled, versioned environment. This requires that the application be able to copy the entire dataset each time it is run, which can prove to be quite expensive. In the next part, we will explore a different approach — one that allows for less-expensive storage, as well as for some optimizations. Topic-based Architecture Term Definitions TOPIC An ordered folder, table, or collection containing messages. MESSAGE A file, row, or entry containing an arbitrary amount of data and following an established schema. CURSOR A comparable pointer to a specific message within a topic. Usually an integer. As we did before, let's establish some principles: Topics are append-only: You cannot delete any messages once they are committed to a topic. Messages are immutable: You cannot update messages once they are committed. Messages are linearly ordered within a topic. Messages should be consistent within a topic, sharing identical or assimilable schema. Now, let's unwrap those principles into practical considerations. First, applications can read from multiple topics and write results to a different one. Although reading and writing messages to the same topic within the same application is possible, this practice should be kept to specific use-cases only. Logs and metadata can also be written to dedicated topics, the schema used for reading and writing may differ, and the only way to remove previously committed messages is to delete the entire topic. To enable reproducibility, applications must take a reference to their input data as a topic and cursor pair. This approach allows a standalone application to read from several sources — considering their state at a given point in time — without mandatory duplication. Integrating with other sources is always more tedious, and this pattern is no exception to the rule. Reading data from externally controlled relational databases will, for example, often include a copy step to extract the data from there and import them into our system. This step becomes entirely optional if you can guarantee the external database is upholding our design and following our principles. A system implemented with those principles needs to expose only the following functions:
- get : cursor -> topic -> message
- put : message -> topic -> cursor
Please appreciate the symmetry of their definition. You may also define optional support functions, such as:
- list : topic -> cursors
- delete : topic -> ()
The following illustration is a simple example of topic-based architecture in an AI-driven application: Topic-based versioning architecture Now that we've covered the basics, we can dive into more advanced usages. Reproducible Data Science A Pipeline is an ensemble, mesh, or DAG of functions (or applications) sharing similar properties and scheduled on different intervals.
Each pipeline step may read from a set of topics, compute one or more results, and write those results to dedicated topics. To enable easier auditing, pipelines may compile their configuration, input arguments, and git commit hash into a single message and dump it to a "run configuration" topic. When the pipeline is completed, aggregated logs and other metadata should be written to a consolidated message in a "run logs" topic[i]. This ensures that the cursor 0 in "run configuration" will match the cursor 0 in "run logs." Finally, you can now build a "run explorer" that will scroll those topics and match each run configuration with its logs. Neat! Simple Timeseries Storage Timeseries can take advantage of the total ordering of messages within topics by expressing individual data points as messages. Usually, cursors are defined as unsigned integers, but it is possible to consider them as offsets to a timestamp of origin. Then, a function that is given the time resolution, the origin and the current cursor can retrieve the timestamp of the entry — and vice-versa. Event Sourcing Last but not least, schema updates mandate writing migration functions (up/down) to allow the reading of "old" data with the "new" schema: Doing otherwise would violate the consistency rule. As is often the case, writing migrations on early prototyping projects can be quite cumbersome, and erasing entire topics and starting over can lead to information loss. We don't want to encourage the latter, so we can use "snapshots" instead. We define snapshots as new states aggregating multiple messages into a single message. Let's see how they are created: First, read history from topic A with schema S until the branch point, where the schema changes. Then, create a new message from the current application state. Write the state to a new topic B with schema T. Copy the remaining messages, after the branch point, from topic A to topic B. Finally, erase topic A and schema S. Now all messages reside in topic B and are readable with schema T. This approach is very close to something we would do using another pattern called "event sourcing." Event sourcing also operates under the assumption that the application is capable of aggregating multiple messages into a single state representative of several entries without causing information loss. Event sourcing is the natural evolution of such applications. (Learn more about this at https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing.) Conclusion and Annexes Topic-based architecture is the missing piece for most AI-driven applications. Unlike other practices, data science really benefits from the ability to retain earlier data and reproduce results. This design pattern is usable from PoC, to MVP, to production and will not require any software architecture change between stages. Adopting topic-based versioning in your data science project is a first step toward a truly reproducible AI pipeline. For further reading and information on similar or connected topics, please refer to: [i] Capable frameworks would provide the capabilities of opening a message, streaming content to it, and committing it on close. This interface would be very handy for streaming logs!
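To make the get/put contract described above concrete, here is a minimal, in-memory sketch in Python. It is only an illustration of the idea, not production code (a real implementation would sit on object storage, a database, or a log such as Kafka), and every name in it is invented for this example.

class TopicStore:
    """Append-only topics, immutable messages, integer cursors."""

    def __init__(self):
        self._topics = {}  # topic name -> list of messages

    def put(self, message, topic):
        """put : message -> topic -> cursor (append and return the cursor)."""
        messages = self._topics.setdefault(topic, [])
        messages.append(message)
        return len(messages) - 1

    def get(self, cursor, topic):
        """get : cursor -> topic -> message."""
        return self._topics[topic][cursor]

    def list(self, topic):
        """list : topic -> cursors."""
        return list(range(len(self._topics.get(topic, []))))


# A pipeline step references its inputs as (cursor, topic) pairs, so re-running it
# with the same references, configuration, and code reproduces the same output.
store = TopicStore()
sales_cursor = store.put({"sales": [10, 12, 9]}, "sales_raw")
config_cursor = store.put({"growth_rate": 0.05}, "run_configuration")

def forecast_step(store, sales_ref, config_ref):
    sales = store.get(*sales_ref)
    config = store.get(*config_ref)
    forecast = sum(sales["sales"]) * (1 + config["growth_rate"])
    return store.put({"forecast": forecast}, "forecasts")

result_cursor = forecast_step(store, (sales_cursor, "sales_raw"), (config_cursor, "run_configuration"))
print(store.get(result_cursor, "forecasts"))  # prints the committed forecast message

The argument order deliberately mirrors the signatures above: get takes a cursor and a topic and returns a message, while put takes a message and a topic and returns a cursor.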
https://medium.com/bcggamma/topic-based-versioning-architecture-for-scalable-ai-application-c926ffa92c1
['Quentin Leffray']
2020-03-24 13:54:33.538000+00:00
['Software Architecture', 'Programming', 'Data', 'Design Patterns', 'Data Science']
Why GM Is Tesla’s Greatest Threat to Overtake the EV Throne
In recent years, much of the electric vehicle discussion has revolved around Tesla — a luxury EV company that is moving towards a more affordable future thanks to innovations in energy density and manufacturing. As electric vehicles gain more momentum in the consumer markets, however, this is no longer Tesla racing against itself. The future of driving is electric — and autonomous — and the rest of the auto manufacturers are now investing in that future. To this point, there is one legacy auto company that stands above the rest in electric driving: General Motors. GM recently announced plans to up its spending on production and development of electric and autonomous driving through 2025 to $27 billion, up from the $20 billion plan announced in March. Part of this plan includes releasing 30 models of electric vehicles by 2025, with more than 20 solely dedicated to North America. The company is planning for 1 million global EV sales by 2025 (Tesla, for comparison, has sold over 300,000 cars this year — representing an 18% market share; it sold 370,000 units in 2019 for a 16% market share). To this point, Tesla has earned a firm hold on the EV market. In 2019, nearly half of the EV sales in America were the Tesla Model 3. When including Model S and X sales, Tesla held a 58% U.S. market share. The remaining 42% came from 52 different EV models from other companies. These numbers make it very obvious: the barrier to entry for selling electric vehicles is quite high. Legacy car makers have yet to crack Tesla's code, but GM seems dedicated to giving it a shot. By Fall 2021, the company will have released two new electric vehicles: the GMC Hummer EV and Cadillac Lyriq. The Lyriq offers a sleek look to compete with the attractiveness of Tesla models while the Hummer offers impressive performance specs. GM will also release a Terrain-like EV, a Buick and Cadillac SUV, an improved Chevy Bolt, and a Chevy Blazer-inspired SUV in the coming years. To crack into the luxury market, there will also be the Cadillac Celestiq — which the company will build by hand, producing fewer than 500 models a year. New names and models don't necessarily give GM an edge over its competitors in the market, particularly Tesla. But its supposed innovations in battery development might. GM will be using what it calls an Ultium battery cell in its electric vehicles, produced in a joint venture with LG Chem. The company claims that the Ultium battery will cost nearly 60% less than currently used batteries and offer a range of up to 450 miles. The battery performance and cost will play a huge role in the inevitable Tesla-GM battle of the decade. The truth is, debating the better of the two future battery technologies is worthless at this time, as they are — after all — future technologies. Both companies have made bold claims, but until those batteries make their way into cars, comparisons won't yield any worthwhile analysis. Tesla should probably get the edge as they have been using Panasonic's well-produced batteries for years, giving them the research and testing advantage. Soon, Tesla will be producing its own battery. GM seems confident that its battery technology will beat Tesla in the long run, but that could simply be a marketing ploy. GM doesn't need superior battery performance to beat, or at least compete with, Tesla, though. GM has the advantage of holding four huge car manufacturers — GMC, Buick, Cadillac, and Chevy — underneath its umbrella.
The company will be able to offer a much wider variety of electric vehicles and reach a larger audience of buyers. While Tesla is slowly expanding its offering and lowering costs, GM has the resources, brand loyalty, and (seemingly) capability to make a larger splash in the EV market than it has to this point. Autonomous driving is another area where GM could have an edge over Tesla. Through its subsidiary, Cruise, GM will soon be testing self-driving vehicles on the roads of San Francisco. GM has interesting plans with Cruise. Through the Cruise Origin, GM will be rolling out driverless vehicles — that are literally impossible to drive, as they will have no steering wheels — for ride-sharing. Tesla would seemingly have an advantage in the autonomy category as well, seeing as its Autopilot program has been collecting data for years. Cruise has been collecting similar data, however, and has even made use of the pandemic in San Francisco, sending its vehicles out to make contactless food deliveries. Dethroning Tesla will not be easy for GM, and it may never actually be done. Tesla sells more than three times the EVs of any other singular company worldwide, giving it quite a large lead on the competition. GM is firmly embedding itself in the race either way and seems unafraid of challenging the current EV King. The performance, cost, and longevity of the battery technology will play a huge role in this race, and GM doesn't seem fazed by Tesla's advances in that area, either. For GM to make a true run at Tesla's crown over the next 5-to-10 years, constant innovation and exciting product development will have to be a top priority. If the company can turn loyal gasoline-powered car buyers into electric vehicle fanatics as Tesla has, GM will gobble up worldwide EV market share.
https://medium.com/swlh/why-gm-is-teslas-greatest-threat-to-overtake-the-ev-throne-13c721999a69
['Dylan Hughes']
2020-11-24 20:41:30.799000+00:00
['Investing', 'Electric Vehicles', 'Sustainability', 'Business', 'Transportation']
5 Ways To Avoid Jet Lag
Changing time zones isn't easy. Our body is an amazing thing. It can take a lot of stress in multiple situations, and come out the other end even stronger. But when it comes to traveling, when we enter new time zones, our body sometimes has a very difficult time adjusting. We begin to have trouble staying awake or falling asleep, and our energy levels begin to fluctuate. If we're traveling to a new time zone, we most likely want to have enough energy, regardless of whether we're traveling for work or for pleasure. The best way to make sure you stay energetic is to allow your body to adjust to the new time zone. In order to help, here are 5 ways that we can hack our body to avoid jet lag and adjust to new time zones a lot more easily (without using melatonin as a supplement). Get Grounded — Grounding basically means standing on the earth barefoot. While you may read that and brush it off as hippie nonsense, grounding is something that is backed by science. While you're flying in the air, your body becomes statically charged, and once you stand barefoot on the earth, the ions bring you back to your normal negative state. Grounding works on soil, grass, sand, and even brick or concrete, but just make sure you're outside to get the full benefits. Get Some Sun — Sunlight is a great way to help regulate your circadian rhythm, which is your internal body clock. If you're outside and you absorb some sunlight, your body does a great job at telling what point of the day it is, and has an easier time adjusting to a new schedule. Before your trip, if you can hack your time outside to replicate the place you're going, you may be able to slightly adjust before you actually get there. For example, if it gets dark at 7pm in the location you're heading to, and it's five hours ahead of your current time zone, get inside and avoid the sun around 2pm while you're still home. And once you get to your new location, make sure you get outside to allow your body to adjust. Exercise — Your body does a great job at adjusting to routines. If you have a certain time when you exercise, exercising at that same time in your new time zone will allow your body to adjust. A study done on mice showed that exercising shifted the internal body clock of muscle tissue within those mice. If you can't exercise at the same time for some reason, just getting in any form of exercise will still help the body make somewhat of an adjustment. Fast — Fasting for 12–16 hours before breakfast in your new location will allow your body to adjust to the new time zone. When you're normally asleep, you're obviously not eating, so when you enter a fasted state, your body has an easier time adjusting to the new location. For example, if you normally eat breakfast at 8am, you would want to stop eating anywhere from 4pm to 8pm in the new location's time. So if that new location is 12 hours ahead of you, you would want to stop eating anywhere from 4am to 8am your time. Confusing, right? Drink A Lot Of Water — When we fly to new time zones, it's tempting to order a glass of wine or a beer on the flight. If we really want to get adjusted to new time zones, it's in our best interest to skip the alcohol, soda, or juice, and drink a lot of water instead. Drinking water not only will help keep us hydrated (the air on an airplane is drier than the Sahara Desert), but will also allow our body to shift gears and adjust to new time zones more efficiently. Did you like this article? Check out my podcast on my website, iTunes, or Google Play for more!
Or leave me a comment below letting me know your favorite place to travel.
https://medium.com/swlh/5-ways-to-avoid-jet-lag-feebcfc6e4d
['Schuyler Diehm']
2018-04-21 12:21:56.572000+00:00
['Airports', 'Travel Tips', 'Health', 'Traveling', 'Travel']
With AI and Criminal Justice, the Devil is in the Data
With AI and Criminal Justice, the Devil is in the Data In the criminal justice context, it’s easy for bias to creep into risk assessment tools. The devil is in the data. Vincent Southerland — Executive Director, Center on Race, Inequality, and the Law, NYU Law APRIL 9, 2018 | 11:00 AM If we have learned anything in the last decade about our criminal justice system, it is how astonishingly dysfunctional it is. Extensive investigations have revealed persistent racial disparities at every stage, a different kind of justice for the haves and the have nots, and a system that neither rehabilitates individuals nor ensures public safety. In short, the system is in crisis. Rather than scrapping everything and starting anew, many criminal justice stakeholders have turned to technology to repair the breach through “risk assessment tools.” Also labeled artificial intelligence, automated decision-making, or predictive analytics, these tools have been touted as carrying with them the potential to save a broken system, and they now play a role at nearly every critical stage of the criminal justice process. If we’re not careful, however, these tools may exacerbate the same problems they are ostensibly meant to help solve. It begins on the front lines of the criminal justice system with policing. Law enforcement has embraced predictive analytics — which can pinpoint areas allegedly prone to criminal activity by examining historical patterns — and then deploy officers to those areas. In Chicago, for example, the predictive tools analyze complex social networks through publicly accessible data in an attempt to forecast likely perpetrators and victims of violent crime. Once an individual is arrested, they are likely to be subjected to a pre-trial risk assessment tool. Such tools are used to inform the thinking of a judge who must decide whether to incarcerate that person pending trial or release them. Pre-trial risk assessments attempt to predict which of the accused will fail to appear in court or will be rearrested. Some states have used these pre-trial tools at the sentencing and parole stage, in an attempt to predict the likelihood that someone will commit a new offense if released from prison. While all of this technology may seem to hold great promise, it also can come with staggering costs. The potential for bias to creep into the deployment of the tools is enormous. Simply put, the devil is in the data. All risk assessment tools generally rely on historical, actuarial data. Often, that data relates to the behavior of a class of people — like individuals with criminal records. Sometimes it relates to the characteristics of a neighborhood. That information is run through an algorithm — a set of instructions that tell a computer model what to do. In the case of risk assessment tools, the model produces a forecast of the probability that an individual will engage in some particular behavior. That order of operations can be problematic given the range of data that fuels the forecast. Data scientists often refer to this type of problem as “garbage in, garbage out.” In a historically biased criminal justice system, the “garbage in” can have grave consequences. Imagine, for a moment, a city where Black people made up 67 percent of the population, but accounted for 85 percent of vehicle stops, 90 percent of citations issued, 95 percent of jaywalking charges, 94 percent of those charged with disobeying the order of an officer, and 93 percent of the arrests made by the city’s officers. 
What about a city where Black people comprised 54 percent of the population, but 85 percent of pedestrian stops and 79 percent of arrests by police, and were 2.5 times more likely to be stopped by the police than their white counterparts? Or a police department that singled out a city’s Black and Latino residents for 83 percent of all stops, but 88 percent of the stops resulted in no further action? These aren’t imaginary cities or made-up numbers. They are drawn from Ferguson, Missouri; Newark, New Jersey; and New York City, respectively. It is now well known that the police forces in these cities engaged in racially biased policing on the false assumption that doing so was an effective means of fighting crime. In the case of Ferguson, fighting crime was only half the goal; generating revenue for the municipality through law enforcement was the other. Now consider the potential harm done when police departments like these use their crime data to feed the algorithms and models used to predict behavior. If one only examined the data, the working assumption would be that white people rarely engage in criminal activity. Most algorithms would simply predict that these disparate numbers represent a real, consistent pattern of criminal behavior by individuals. The data provides a distorted picture of the neighborhoods where crime is happening that, in turn, drives more police to those neighborhoods. Police then come into contact with more people from those communities, and by virtue of more contact, make more arrests. Those arrests — regardless of their validity or constitutionality — are interpreted as indicative of criminal activity in a neighborhood, leading to a greater police presence. The result, as mathematician and data scientist Cathy O’Neil calls it in “Weapons of Math Destruction,” is “a pernicious feedback loop,” where “the policing itself spawns new data, which justifies more policing.” Any system that relies on criminal justice data must contend with the vestiges of slavery, de jure and de facto segregation, racial discrimination, biased policing, and explicit and implicit bias, which are part and parcel of the criminal justice system. Otherwise, these automated tools will simply exacerbate, reproduce, and calcify the biases they are meant to correct. These concerns aren’t theoretical. In a piece two years ago, reporters at ProPublica sparked a debate about these tools by highlighting the racial bias embedded in risk assessments at pretrial bail hearings and at sentencing. That study found that Black defendants were more likely to be wrongly labeled high risk than white defendants. Humans have always deployed technology with the hope of improving the systems that operate around them. For risk assessments to advance justice, those who seek to use them must confront racism head-on, recognize that it is infecting decisions and leading to unjust outcomes, and make its eradication the ultimate goal of any tool used. When the data reveals racism and bias in the system, risk assessment tools must account for that bias. This means privileging the voices of communities and those with experience in the criminal justice system so that the quantitative data is informed by qualitative information about those numbers and the human experiences behind them. It means employing the tool in a criminal justice ecosystem that is devoted to due process, fairness, and decarceration. Finally, it requires the implementation of frameworks that ensure algorithmic accountability. 
An Algorithmic Impact Assessment is one such framework, proposed by the research institute AI Now in the context of New York City’s efforts to hold public agencies accountable in their automated decision-making. AIAs do so by publicly listing how and when algorithms are used to make decisions in people’s lives, providing meaningful access for independent auditing of these tools, increasing the expertise and capacity of agencies that use the tools, and allowing the public opportunities to assess and dispute the way entities deploy the tools. No system or tool is perfect. But we should not add to the problems in the criminal justice system with mechanisms that exacerbate racism and inequity. Only by making a commitment to antiracist and egalitarian values and frameworks for accountability, can well-intended reformers ensure that these new tools are used for the public good. Vincent Southerland is the executive director of the Center on Race, Inequality, and the Law at NYU School of Law. He previously served as a senior counsel with the NAACP Legal Defense and Educational Fund, where he focused on race and criminal justice, and as a public defender with The Bronx Defenders and the Federal Defenders of New York. This piece is part of a series exploring the impacts of artificial intelligence on civil liberties. The views expressed here do not necessarily reflect the views or positions of the ACLU.
https://medium.com/aclu/with-ai-and-criminal-justice-the-devil-is-in-the-data-304b4edebf7d
['Vincent M. Southerland']
2018-04-16 17:15:35.405000+00:00
['Artificial Intelligence', 'Bias', 'Data', 'Ai And Civil Liberties', 'Criminal Justice']
Web Scraping a Javascript Heavy Website in Python and Using Pandas for Analysis
I set out to try using the Python library BeautifulSoup to get data on the retailers that would be attending a market, as shown on this webpage: https://www.americasmart.com/browse/#/exhibitor?market=23. What I found, however, was that the BeautifulSoup function (BeautifulSoup(url, 'html.parser')) only returned the header and footer of the page. It turns out that this website relies on Javascript to populate most of the data on the page, so the data I was looking for was not in the html tags. To find the AJAX request that returned the data I needed, I looked under the XHR and JS tabs in the Network section of the Google Chrome browser (see image below). The credit for this idea came from this blog post: https://blog.hartleybrody.com/web-scraping-cheat-sheet/#useful-libraries. You need to hover over the "Name" fields and right click to copy the link address, which you paste in the code below. There are a bunch of options here, but I just tried them one by one to see if they retrieved the data I was looking for.
The Network tab on the Google Chrome Inspect menu

import requests

url = 'https://wem.americasmart.com/api/v1.2/Search/LinesAndPhotosByMarket?status=ACTIVE_AND_UPCOMING&marketID=23'
r = requests.get(url)
info = r.json()

For this website, the data was returned in a list of dictionaries. I had to play around with the indexes to extract the data I needed. Ultimately, I wanted to create a Pandas DataFrame with location information about every retailer, and then merge it with a narrowed down list of retailers, housed in a .csv file. The link to the full code is on my GitHub: https://github.com/mdibble2/Projects/blob/master/Web%2BScraping%2Bwith%2BAJAX%2Brequest%20(1).ipynb I used the df.merge() function to select only the rows of data from the website that matched the retailers listed in the csv file (i.e. an inner join). The drawback to this approach is that every retailer needed to be spelled correctly with the correct capitalization in order to match the primary key of my first table, the one generated from the website. To check that there were not any misspelled retailers, I created a series from the retailer names in the merged dataset and another series from the retailer names in the csv dataset. I then merged these together and used df.drop_duplicates to see if any unique values remained besides those in the merged dataset. I found that I had misspelled or incorrectly abbreviated 9 retailers. Once I corrected these records, the merged dataset gave all the information I required. Some take-aways from this mini project were:
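To make the merge-and-validation workflow described above concrete, here is a rough sketch of how it might look in pandas, continuing from the info variable returned by the API call earlier. The column name, file name, and variable names are assumptions made for illustration; they are not taken from the original notebook.

import pandas as pd

# Hypothetical frames standing in for the scraped API data and the curated CSV list.
website_df = pd.DataFrame(info)           # simplified; the real notebook digs into nested fields
csv_df = pd.read_csv('my_retailers.csv')  # the narrowed-down list of retailers

# Inner join keeps only the retailers present in both sources.
merged_df = website_df.merge(csv_df, on='retailer_name', how='inner')

# Spell check: names left over after dropping duplicates were not matched,
# which usually points to a misspelling or a different abbreviation in the CSV.
combined_names = pd.concat([merged_df['retailer_name'], csv_df['retailer_name']])
unmatched = combined_names.drop_duplicates(keep=False)
print(unmatched)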
https://megandibble.medium.com/web-scraping-a-javascript-heavy-website-in-python-and-using-pandas-for-analysis-7efb22315858
['Megan Dibble']
2020-01-15 12:25:33.499000+00:00
['JavaScript', 'Python', 'Pandas', 'Jquery', 'Data Science']
4 Business Lessons From The Man Who Made Louis Vuitton An Empire
1) Allow Creators To Run The Inventions What I have fun with is trying to transform creativity into business reality all over the world. To do this, you have to be connected to innovators and designers, but also make their ideas livable and concrete. I've been working on the Louis Vuitton Foundation for the past decade. I worked with Frank Gehry on this fantastic building dedicated to the arts. We have a very good relationship. I told him you can do anything you want on the outside, but on the inside, I want something that is usable. — Bernard Arnault about giving creative freedom to his artists. When we look at Louis Vuitton's creators, it's as if they are the ones running the business. It's rare I find any statements from Bernard Arnault himself about the products that they are launching; it's always the creators doing it instead. This shows just how much freedom Bernard Arnault gave to the creators. We see it so much in the designs released every time there is a fashion show event. He allowed the designers to be as wild as possible and let them make the final decision. This leadership style is called laissez-faire: the leaders allow their employees to run the business, make the decisions, and accept the mistakes that come with them. It isn't common in the business world, where the most common style is democratic leadership. However, Bernard Arnault seemed to believe in the creative directors and designers the company hired. Therefore, he allowed them to take over the departments and be innovative. I'm sure that he probably said a few words to them, but the bottom line is, the designers are the ones in control when creating new products, not him. Thus, he gave them a canvas to paint with no limits. 2) Quality Before Marketing "If you do marketing like consumer goods, I don't think that's possible (for LVMH). But if you produce something that is really unique, I think that's possible" — Bernard Arnault at Oxford Union, 2016. Most companies that involve art in their business always approach quality first. We've seen this a lot in Disney, Apple, YouTube videos, and books. Businesses like Louis Vuitton are no exception. If you notice, Louis Vuitton's marketing is mostly just from social media. They don't do massive advertisements the way grocery stores/consumer goods do. They just announce one thing on Instagram, and that's it. Sometimes they'll call the product back to the internet, but they don't do massive marketing. And yet, when I worked at Louis Vuitton, every day was always a busy day in stores. Clients kept coming to buy their luxury products. Bernard Arnault said that they don't do marketing. He believed that marketing is against what LVMH must do, which is quality. For them, marketing is product creation. In other words, he let quality speak and let word of mouth from their clients do the marketing. Therefore, the quality will always exceed everyone's expectations, so the phrase quality over quantity is very true. 3) Create Timeless/Evergreen Products/Contents "When I see a product of some of our best brands, it has to be timeless but also it has to have in it something of the utmost modernity in it. That is the key to success." — Bernard Arnault, 2017. This is why Bernard Arnault allows his creative designers to go wild. Whether it's in writing, stocks, or fashion, always aim for the long-term, which is timeless. Timeless, according to the CEO of LVMH himself, is a brand that lasts forever.
That means no matter the generations to come, even as the culture changes over time, the company can still adapt and live on. Louis Vuitton began his trade in 1837, when he was 16 years old, and founded the house that bears his name in 1854. This means that the brand is already more than 165 years old and today, it's still as good as new. They made sure that the company keeps innovating and lasts. Bernard Arnault made sure that the products his companies produce are crafted to their best. He mentioned how his team sets a Louis Vuitton suitcase in a 'torture' machine where the bag would be opened and closed every five minutes, thrown around, and even dropped with a loud bang. This is their way to test its quality and see how long it lasts (though I'm certain that they had more ways of doing that). Today, Louis Vuitton is still selling designs from generations past, such as the Speedy bag launched in the 1930s. Over the years, the designers made sure that the bag could still fit into the modern world as a classy bag, and people still love it. We can also apply this in our own lives. No matter how many products we create, we should keep innovating and finding creative ways to make each product, and ultimately the business, long-lasting. 4) Always Treat Your Company Like A Startup "I often say to my team we should behave as if we're still a startup. Don't go to the offices too much. Stay on the ground with the customer or with the designers as they work. I visit stores every week. I always look for the store managers. I want to see them on the ground, not in their offices doing paperwork." — Bernard Arnault, 2017. I agree with this quote very much. If a company thinks that they are good enough already, one day, someone else will catch up with them. Plus, I find that people who treat themselves as startups are very humble. A startup's main purpose is to grow its business. This includes its product, people, and systems. When I worked at Louis Vuitton and my trainer identified one of my weaknesses in public speaking (speaking in my own native language), he drilled me and worked hard to help me overcome it. This is why out of all the companies I worked at, Louis Vuitton was the place where I grew the most. Today, Louis Vuitton has grown tremendously beautiful and powerful. It's so powerful that they managed to acquire 75 different brands, some that are not even related to their main industry, and today, Bernard Arnault is the third-richest person in the world. I don't know about the people there now, but as a former employee, I learned so much from the company. Public speaking, the fashion world, its businesses, and so much more. It was one of the biggest highlights of my whole career.
https://medium.com/live-your-life-on-purpose/4-business-lessons-from-the-man-who-made-louis-vuitton-an-empire-768df2f46568
['Nicole Sudjono']
2020-12-24 21:01:09.869000+00:00
['Leadership', 'Business', 'Productivity', 'Life', 'Self Improvement']
Here’s What Essential Workers Want People Who Work From Home to Understand
Sick employees are being forced to work When we are sick, we are pressured to still come in and work because we are so behind. If you have gone into a store, ordered takeout, or visited a gym or salon anytime in the past month, there’s a good chance you were served by someone sick with the coronavirus. I heard from many, many people that employers simply are not taking quarantine policies (or the health of their staff) seriously. And because so many workplaces are short-staffed (due to employees getting sick with Covid-19), many workers are expected to come in no matter what, even if they aren’t feeling well. I spoke with a health care worker who is employed by one of the largest hospital networks in the Midwest. She told me her hospital’s official policy is that if an employee tests positive for the coronavirus but is not showing any symptoms, they are expected to still come into work. When I shared this fact with a few other people who work in health care, they confirmed for me this is a pretty standard policy at the moment. The logic being, apparently, that if you are asymptomatic, at least you won’t be sneezing and coughing Covid-19 particles into the air. Many workplaces also lack contact tracing protocols and don’t alert employees when one of their co-workers comes down with the virus. “They still don’t have to tell you if you were in contact with someone who tested positive at work. No tracing,” one health care worker messaged me. Essential workers also have to cope with the uncertainty of working alongside people whose personal safety protocols might look very different from their own. “You don’t know what goes on after you clock out. Who’s being safe, who’s not,” a food service worker from the Midwest told me. Hygiene and safety protocols are a shallow performance Even with ~new company policies~ about masks and whatnot, it’s often not adhered to or enforced. Numerous essential workers shared that the hygiene and safety protocols on display at their workplaces were mostly performative and not rigidly adhered to. Hygiene theater is abundant; it’s easy to make a big show of sterilizing surfaces often despite the overwhelming scientific evidence that surface transmission of Covid-19 is not a genuine concern. This oversterilization takes a toll on employees’ health: “The daily sanitizing agent exposure makes my essential worker housemate come home feeling ill.” The heavy-duty disinfectants now being dispensed on an hourly basis at many stores and restaurants were never designed for such frequent use. Overuse of these chemicals dries out people’s skin and can cause respiratory problems and headaches. They exacerbate sensory issues and allergies and do damage to the environment. They can also lead to the proliferation of anti-bacterial-resistant superbugs. Essential workers bear the brunt of this, all so that we work-from-homers can feel comfortable using the self-checkout at Target. A lot of essential workers also told me their workplaces’ masking and social distancing measures were being ignored. Customers frequently “forget” to bring masks with them to stores, for example, or refuse to wear them properly. Here are just some of the many things essential workers shared about customers refusing to mask up: Please assume a mask is required when you go out. We run out of extras. Being an entitled white man is not a medical exemption. Put on a fucking mask or go home. Complaining that you’re sick of wearing a mask for small talk is annoying. 
Unfortunately, the oblivious entitlement of customers doesn’t end there. It turns out lots of us work-from-homers are leeching a great deal of attention and emotional labor from the essential workers who watch our children, make our coffee, and cook our meals. Customers are demanding a ton of emotional labor An anonymous response that reads, “People are complaining about our tone more than ever to managers because [we are] projecting due to masks.” Essential workers told me over and over again that customers and clients are being pushy and demanding to a degree they have never seen before. Tips have dropped off, complaints from customers are skyrocketing, and everyone seems to be cranky and seeking validation from the service staff. One retail worker explained that due to obligatory mask-wearing and plexiglass barriers, they have to project their voice in order to be heard by customers. Some customers interpret this as “yelling,” though, and complain to management about employees using an inappropriate “tone.” I received many stories of bored work-from-homers coming into stores and restaurants seeking social stimulation and absolutely terrorizing the essential workers around them. Here’s how one person put it: Work-from-home people seem to be conversation starved, so they want to talk more than normal. But I’m not your friend… I get paid to sell you things. Plus sometimes there are people waiting to come into the store while you jabber away (we have a small store with a two-customer limit). Again and again, retail workers told me they wished work-from-homers would recognize the incredible stress they are under and learn to shop efficiently without making oversized social demands: Saying you are just browsing is a slap in the face. Get what you need and go. People forget that what is their one outing for the day is my workplace. Get in, follow every rule, be polite, and get out. I’m not here to service your ego. I’m here to do work. Small businesses aren’t necessarily more humane Reading through these responses, you might be tempted to think they all come from workers of evil corporate behemoths like Walmart and McDonald’s. Surely the scrappy small businesses you’ve been supporting don’t mistreat their workers in these ways — right? Well, not so fast. A response that reads, “Your support of small businesses does not equate to support for your friends who work there.” Since March, there’s been a major public push to support small businesses, which have been financially devastated by lockdown and demand shock. “Support small businesses!” has become a rallying cry, buying local an act of consumer “activism” that is assumed on its face to be a net good. But that financial support doesn’t always trickle down to the employees who make local businesses run. As one responder put it, “A lot of essential workers hate their jobs. Even the ones who work at your fave small business.” Several essential workers told me that while their small business employers initially treated them well during the early days of lockdown, the benefits and paid quarantining periods have long since run out. One woman told me in the spring that she received an extra $2 per hour as “hazard pay,” but months ago, her wages returned to normal and stayed there even though the “hazard” of catching Covid-19 cases is far worse now than it was then. 
A coffee shop manager told me that though her employer had offered a pretty generous paid leave to all employees earlier into the pandemic, all stores are now open, and she has no choice but to come into work every day. She can’t quit because then she wouldn’t qualify for unemployment. All she can do is submit to the risk of contracting Covid-19 every day and hope that the governor eventually closes businesses back down. The trauma is inescapable Far and away the most common response I received from essential workers is that they are living with a degree of dread and fear that we work-from-homers cannot even begin to understand. Everyone is dissociating. We all feel like it’s only a matter of time until we’re sick. It’s exhausting. Many essential workers told me that for them, it’s not a question of if they will get Covid, but when and how will they continue to make a living or care for their loved ones when that time comes. Some have had to stop caring for ill or aging relatives because of the risk of virus exposure they face every day. “When I serve tables I’m afraid to go home and care for my immunocompromised mom,” one restaurant worker wrote. So many people shared with me the unspeakable emotional and existential burden of having to work for a minimum wage pittance, knowing it might be the thing that kills their vulnerable family members or themselves. I also feel guilty for having a job and like it’ll be my fault for going to work if I get sick. How sad it is that my life and the lives of my family members are only worth the minimum wage I earn. Essential workers have a higher “risk tolerance” out of necessity Finally, many essential workers told me that since they are forced to confront a heightened risk of catching Covid-19 at their jobs, they tend to have greater “risk tolerance” when they’re off the clock as well. This may mean they socialize more than work-from-homers, particularly with their co-workers. After all, they’ve already been forced into a massive “pod” with their co-workers, so why not at least experience the relief and connection of grabbing a drink with them after their shift? However sensible this approach is, essential workers frequently get shamed for it. Many told me they’d been chewed out by friends and loved ones for not social distancing rigorously enough, even though their jobs, of course, have made this impossible. “People expect us to go into these wildly unsafe environments to have our labor sucked out of us and not have higher decompression needs,” one kitchen staff worker wrote. Many essential jobs are not only perilous on paper, but they’re also psychologically overwhelming. Frontline workers end up desperately craving time among their peers and loved ones in order to process all that stress. Numerous people told me they’ve had to hide this fact from others, for fear of being “canceled.” Here’s what a friend who works in food service had to say: Someone I was close with told me they were going to “unfollow me for a bit” because they saw my post about taking a [solo] car trip out of state… but I am out at work every day handling people’s spit in the second most dangerous city in the U.S. Another friend, who works as an EMT, told me he was criticized as “irresponsible” for driving a co-worker home from their job. “But we’d just been working the back of an ambulance together for hours,” he told me. 
It made no sense for people to expect him to socially isolate from his co-worker in arbitrary ways when the nature of their work made true distancing impossible. The double standards on display here are striking and distressing. In reality, Covid-19 cases are on the rise because so many people are required to go in to work, not because they're also choosing to socialize in order to cope with that work. We work-from-homers may feel we have sacrificed a lot this year by withdrawing from the public world and connecting with others almost exclusively via Zoom. In reality, we're very lucky to even have the option to isolate. The people who keep us fed and clothed and healthy don't have that choice. They live in a state of risk resignation, assuming (quite rationally) that for them, catching Covid-19 is pretty much inevitable.
https://gen.medium.com/what-essential-workers-wish-work-from-homers-understood-a2524ebb2c3c
['Devon Price']
2020-12-01 19:01:29.853000+00:00
['Covid 19', 'Work From Home', 'Work', 'Healthcare', 'Society']
Using Both Sides of the Brain (and the Company) to Craft Effective Work
Creatives communicate through clear visuals and storytelling; we’re typically identified as right-brained thinkers. Digital strategists research, analyze, and manipulate data, activities of the left brain. Just because these traits are associated with these departments, however, does not mean digital strategists aren’t creative and creatives give no thought to reporting and strategy. In fact, most effective web designers and strategists are constantly crossing the line between allure and analytics. When you combine the targeting of digital with the appeal of creative, work doesn’t just reach the correct audiences, it also inspires them to act. But this isn’t always easy to do. Creatives may shy away from scenarios that require tangible, quantifiable proof for good reason. It’s hard to calculate and analyze such invisible motivators as emotional connection to another human, or empathy felt while watching a beautiful story unfold. These aspects of life and art are inherently subjective. But according to global consulting firm McKinsey, “…the notion that creativity and data are adversaries is simply outdated.” Creatives should consider digital strategy a roadmap to get in front of the right audience, and a blueprint for what to do once you get there. At the end of the day the common goal is to appeal to audiences on their turf, and digital strategy allows creatives to do just that. Though creative teams want to protect the “unprogrammed” part of their process, a large part of creative development involves a thorough understanding of a target audience and the environment. Adobe hits the nail on the head in a 2017 article, stating: “Creative teams crave information. And the best creative solutions are formed when they have access to the right information like demographic, psychographic, or other audience insights.” So what might this actually look like in action? Let’s dive into an example. Collaboration in Practice BraunAbility, a leading wheelchair vehicle manufacturer in Winamac, Indiana, came to Element Three with a problem: after a recent site relaunch, their organic traffic was down and they needed a boost in leads. The digital team went to work researching BraunAbility’s audience segments and competitive keywords and auditing the company’s website. This research was accompanied by content recommendations and delivered to the client in the form of an SEO Roadmap. E3’s overall recommendation was for BraunAbility to create a content hub of pages on their website that took their audiences through visually interactive product information and imagery. Using this roadmap, our creative team designed and wrote three web pages that centered around “Help Me Buy” content. Because our designers and writers had the information needed to narrow content to the researched audience’s needs, we were able to craft a visual web experience that appealed not just to the audience’s emotions and preferences, but to their position in the marketing funnel and their digital habits. Let’s examine the first page of this interactive experience, which is chock-full of content. Users start by reading through “5 Tips for Choosing a Wheelchair Accessible Vehicle” and experiencing visuals that animate as you scroll down the page. Listing content numerically and enticing viewers down the page with visuals is a great example of creative and digital strategy coming together to create a great user experience. 
As users continue scrolling, the next section of content allows users to click through side-by-side comparisons of vehicles. This is followed by a visual that utilizes branded product photography to organize models by manufacturer and link users to every model on the site. As users travel down the page, CTA buttons are placed throughout content sections to lead traffic to other relevant pages on braunability.com. After launching these Help Me Buy content hub pages and filling them with relevant content, BraunAbility saw a 25% increase in organic search traffic and a 7.2% increase in leads across the site. To see the full extent of this project, check out E3’s case study for BraunAbility’s content hub. Making it Work: The Campaign Blueprint While digital-creative collaboration is nice in theory, it’s often difficult to make it a reality. One way we’ve been able to bridge the gap is through campaign blueprints, or clear visuals that help explain the digital strategy for everyone involved. We launch campaigns with many digital components, from paid media to display ads to PPC and SEO updates. Understanding how all of these components interact and relate to overall marketing goals can be challenging, especially when pitching work to stakeholders or clients. A majority of people learn and take in information visually, so being able to understand the complex environment of a digital campaign in clear visuals doesn’t just help the higher-ups, it keeps the entire team accountable to strategy and on the same page. Here’s an example: Encouraging Cross-Department Collaboration So what can you do as a designer or manager of designers to facilitate this way of working? Find or create an open, collaborative environment where design and digital aren’t siloed. As an Art Director at E3, I currently spend equal amounts of time thinking through strategy with the creative team as I do optimizing work with our digital and development teams. This is possible because E3 has created a workplace where anyone can pitch an idea or take initiative to solve a problem without fear of “overstepping” or working outside their job description. We don’t draw lines or create barriers between departments. My team works to fully understand the digital and design perspectives and how they can best work together to solve a problem, rather than one following another in a sequenced process. Some tactical ways this is accomplished include: Open Up the Office An open office layout makes it easy to get up and talk to others. In fact, it’s almost impossible to hide here, which might be bad for introverts but is great for collaboration. Push for Transparency We have monthly company-wide meetings to inform everyone of our current financials, stresses, work we’ve produced, and company-wide updates. With this culture of transparency comes team lunches, project retrospects, and shared learning. Adobe is another great example of how companies can encourage a “get everyone involved” mentality. Get Creative and Digital Input Early Creative and digital directors should review and contribute to the planning and scoping of a project so user experience can be considered properly. The same goes for any new marketing project or campaign your business plans to launch. Make sure the right people are brought in early so that digital and design considerations are noted earlier rather than later. 
Value Company Culture When employees have the opportunity to interact off-site, they are naturally more comfortable around one another across fields and teams, which makes collaboration more organic and likely to happen. Build culture by giving employees the opportunity to relax and have some fun — whether that’s during work hours, or off the clock meetups (we’re really keen on kickball and happy hours). Hire and Grow “Stay Curious” Employees Encourage every employee to continue learning — even if it’s not within their “discipline.” For designers, this could be diving into all the new online environments, whether that be exploring the backend of your CMS, marketing automation platform, or CRM. For the digital strategist, this might be learning more about brand. But the more that you can hire and grow “whole-brain” talent, the more successful you’ll be in the long run. One Team, One Goal As obvious as it sounds, simply communicating regularly and sharing insights across departments rather than making assumptions is the easiest way to create effective work. We’re all on the same team, but sometimes when our work becomes siloed, it can seem like the priorities of the creative team and the priorities of the digital team are counter to one another. And that’s practically never true. Collaboration will get the right story in front of the right person at the right time. Some of us may think better with the right side of the brain, others may use the left side more — but as marketers, we have to make sure that science and story, creativity and data, are always balanced. Nobody’s an expert at everything. But together we can be. Perspective from Laura Merriman, Art Director at Element Three
https://medium.com/element-three/using-both-sides-of-the-brain-and-the-company-to-craft-effective-work-c82ea125cebf
['Element Three']
2018-08-17 13:33:59.601000+00:00
['Analytics', 'Business', 'Creative', 'Marketing', 'UX']
Get Excited for a New Era of Feminine Energy
Get Excited for a New Era of Feminine Energy The balance of male and female is tipping and the matriarchy is coming. Photo by Jackie Parker on Unsplash For as long as anyone alive can remember, we’ve lived in global patriarchy. Our world leaders are largely men. Women still have not achieved equality in the workplace, and the global pandemic this year has economically hit women harder than men. Female energy has been suppressed for most of human recollection — perhaps thousands of years. People are beginning to see things shift. The feminist movement has gained traction for about half a century. More narratives have arisen in popular culture about heroines. (Some of my favorites from this year have been Mulan and Enola Holmes.) Everywhere I turn these days, I observe women finding their voices and expressing their female power. I believe that the scales are tipping and we are entering a new era. It’s been coming for a while. Indigenous Peoples and Ancient Wisdom Masculine energy hasn’t always dominated culture and society. There are spiritual balances tipping, being restored, and tipping again throughout human history. There is evidence that at certain times there may have even been equality. The Iroquois Tribe of North America was known for its emphasis on feminine energy and power: Tribal Council was dominated by male speakers, but the women decided which men should be speakers. If the chosen man expressed opinions that clashed with those of the Womens’ Council, they could replace him with someone who more closely represented their views. If the Tribal Council took a course of action that the women disagreed with, such as a raid, the women might simply refuse to give them any food for the journey. What happened to this idea of women having a place in politics and community decisions? Colonialism came with patriarchal values, and male-dominated modernity has forgotten what came before this. But our values are changing, and this past election has shown this. What did the election prove about the power of feminine energy? If we trust women to lead, things will get better. Let us never forget that black women led grassroots organizing that resulted in many democratic victories this year. I loved this article in Yes! Magazine about modern indigenous women who are bringing back the tradition of a matriarchy, where it says: Today, contemporary Indigenous women are taking the matter into their own hands and showing the public how to rethink, reframe, and relearn a new American-Canadian story that seamlessly incorporates the voices of Indigenous women. These women are living in the tradition of their ancestors, whose societies and nations were often matriarchal. They are reclaiming the tradition of female leadership and turning the old, white, male-dominated perspective of history on its head. Who is better suited to heal the land and restore balance in our environment than indigenous women? There is hope that with this ancient, traditional wisdom, the guidance of female energy can be restored. I believe that it’s already happening. Everywhere I turn these days, I observe women finding their voices and expressing their female power, such as in this article I read today by Bradlee Bryant. Kamala Harris is a great example of a woman breaking through limitations that women have endured for centuries. Her success in rising to Vice Presidential status is evidence of our bright future in America. 
Anthea Butler of the University of Pennsylvania wrote of her: We see in her the promise that our mothers held out for us, one our grandmothers could not even imagine. For our daughters and granddaughters, we see a brighter future. Men can also become more accepting of their inner feminine energy, and have become open to allowing women like Kamala to emerge as leaders. Recognizing the need to balance male and female is the key to paving the way for new possibilities in society. In ancient Chinese philosophy, yin and yang represent the balance that needs to exist between extremes: Yin and yang create a perfect balance, the concept of duality. In a Chinese cultural study, women have yin energy while men are considered to be more yang. Women are sweet, weak, and are a positive force of preservation of life. Men are dominating, aggressive and protective towards their yin. We’ve lived in the extreme of male energy for too long. We all know that when there is extreme unbalance in scales, they will next tip in the other direction. Masculinity has been dominant and may even be stronger in recent years than ever before — and now the time has come to embrace our femininity. Mother Earth, the Chakras, and Shakti We all have both female and male energy in us. In her book Wheels of Life, Anodea Judith speaks about Vishnu’s male energy entering us through the crown chakra, and Shakti’s female energy entering us through the base chakra, meeting in the heart chakra (in the middle). I interpret this to mean that the first three chakras, representing Earth, emotion, and personal power, are feminine. Thought, sound, and light, representing the fifth, sixth, and seventh chakras, are related to male energy. The first three chakras are in the realm of feminine power; in ancient times she was called Shakti. In Hinduism, Shakti’s energy is divine feminine creative power: According to Hindu philosophy, on the earthly plane, all forms of shakti are most tangibly manifested through females, creativity and fertility; however, males also possess shakti in a potential form which is not fully manifested. Shakti is responsible for all creation and change in existence. The base chakra, where the feminine Shakti resides, represents solid Earth. Mother Earth, famous for being the feminine bearer of life in cultures around the world, has a connection to the female energy in us all. No wonder women are at the forefront of activism in the climate crisis. We are the stewards of the land. Return the power to women, and we will guide us all to hope and restoration. We can begin to reverse the destruction of this planet caused by relying too much and for too long on male energy related to the highest three chakras. Mother Earth has been forgotten, but we can start to remember her. Think with your ‘heart’, not your mind The patriarchy has dominated our values. Thought has been deemed good and emotion has been deemed bad. We aren’t allowed to have public emotional outbursts. Any woman working in a male-dominated field can tell you that displays of emotion and personal power are often suppressed. I can feel a revolution of female energy arising in the world. It will take time, but women will be more and more trusted to lead, and our societal values will be transformed in the other direction. To embrace the feminine energy, we will begin to evaluate decisions and situations with our feelings and ‘heart’. Pure logic will no longer be enough to solve problems. 
Subtle emotional and spiritual knowledge will come to the forefront of our collective consciousness.
https://medium.com/mystic-minds/get-excited-for-a-new-era-of-feminine-energy-1ff89aa93c70
['Emily Jennings']
2020-12-21 11:03:21.411000+00:00
['Consciousness', 'Culture', 'Feminism', 'Self Improvement', 'Creativity']
The Power of the Gut, Outside of the Gut
The Power of the Gut, Outside of the Gut “The gut is not like Las Vegas. What happens in the gut does not stay in the gut.” - Dr. Alessio Fasano, Gastroenterologist There was once a time in the nutritional realm that gut function was considered to solely impact upon digestive disorders, like bloating and constipation. However, the times they are a-changin’, as emerging research begins to highlight the fascinating ways the gut can influence the functioning of the entire body. A run down of influence of the gut in immune function, body weight and mental health only adds to the ever-growing list of reasons why 2016 is the year of gut health (sorry Kanye). Immunity Over 80% of body’s immune cells reside in the gut. The microbes living alongside these immune cells are vital to the introduction, training and functioning of a kick arse immune system. If you haven’t yet learnt about the somewhat disgusting, somewhat amazing world of bacteria currently within you, refer here. Your gut microbes provide information regarding the ingestion of harmful substances (for example the flu virus, or, the bacteria living within a 6 day old burrito), in order to stimulate the immune system as needed. When fed the right foods, gut microbes also produce molecules that regulate immune function and reduce inflammation, such as Short Chain Fatty Acids (SCFAs). Given the large importance of these very little microbes to immunity, it is logical that abnormal gut microbe composition (a condition known as “dysbiosis” in the land of gut health) has been linked with lowered immunity and autoimmune disease. It is hypothesized that the rise antibiotic use, cesarean births, poor dietary habits and hyper-sanitation (shout out to you Dettol) in high income countries have resulted in gut compositions which lack the strength and diversity to establish balanced immune responses. Research has identified striking differences between the microbiomes of western children to those of rural Africa, where modern autoimmune diseases such as allergies and asthma are essentially nonexistent. African children, eating a very high fiber diet, have greater gut microbe diversity and produce significantly more SCFAs, both of which are critical for immune function. The power of the gut-immune relationship has also been demonstrated in a recent review that found the composition of microbiome can influence the effectiveness of vaccines in the body. Vaccines were shown to be less effective in individuals with gut dysbiosis, because the immune system was already preoccupied with dealing with existing gut inflammation. This discovery has exciting implications for future research projects that will investigate if the gut can also influence the effectiveness of other medications, supplements and even food. Weight The gut is also thought to exert powerful effects on metabolism and resultantly, body weight. We know that the bacterial gut compositions of normal weight vs obese people are very different. Fecal transplant studies (yes, that is what you think it is) in rats have provided fascinating data of the power of the microbiome to influence weight. A 2013 study published in Science, transplanted gut bacteria from obese and lean pairs of twins into germ free mice. The results showed mice injected with bacteria from fat twins grew fat; while those injected with bacteria from lean twins stayed lean. 
Furthermore, another study in Nature demonstrated that mice injected with obese gut microflora actually extract more calories from the exact same food than mice with lean microflora. This research provides not only reasoning for the observed changes in these rats, but potentially reasoning for why humans do not lose weight even when following a calorie restricted diet. Recent research is showing these results are applicable among humans. Researcher Max Nieuwdorp has completed double-blinded fecal microbiota transplantation (FMT) on Type 2 diabetics with lean donors. The study found FMT from lean male donors into males with metabolic syndrome resulted in a significant improvement in insulin sensitivity in conjunction with an increased intestinal microbial diversity. Scientific American explains recent findings, saying: “the wrong mix of microbes, it seems, can help set the stage for obesity and diabetes from the moment of birth.” It is a powerful idea for future health intervention that changing the microbiome may change the way food is metabolised, the occurrence of obesity and the development of chronic disease. Mind While it is well established that the brain signals the gut to exert effects over the Gastrointestinal Tract (GIT), it has recently been shown that this relationship goes both ways. The gut can affect the brain too. Mark Lyte in Advances in Experimental Medicine and Biology explains: “The bidirectional interaction [of the gut-brain axis] is important not only in normal gastrointestinal function but also plays a significant role in shaping higher cognitive function such as our feelings and our subconscious decision-making.” When considered physiologically, this idea is actually quite intuitive. The gut produces many molecules that inevitably end up in the brain and can affect the mental state of an individual. For example, 90% of serotonin, the body’s feel-good neurotransmitter, is produced in the gut. It therefore makes sense that a healthy gut environment is vital in facilitating a healthy mental state. Likewise, it is no surprise that dysbiosis is beginning to be associated with many neurological disorders, from Autism to Alzheimer’s. For a deeper insight, check out Neuroscientist David Perlmutter explaining some of these recent findings about the gut-brain connection: https://www.youtube.com/watch?v=ucAJ0U5Veis Or, if you prefer your gut information delivered in a Scottish accent, I can offer you a Ruari Robertson TED Talk. Obviously, the presented information is classified as emerging knowledge. There is still much more to learn. Although mechanisms are unclear, the idea that the gut can change the brain, immune response and body weight is too powerful to ignore. We have known for a long time now that plant foods, fiber, probiotics, mindful eating habits, and the limitation of antibiotics are all good things. Now, with the impact of such gut health practices having the potential to extend far beyond the bathroom, there has never been a better time to start lovin’ your gut.
https://medium.com/the-isthmus/the-power-of-the-gut-outside-of-the-gut-92e071e308f3
['The Isthmus']
2016-05-22 01:05:54.924000+00:00
['Health', 'Microbiome', 'Gut Health']
Role of Big Data in Academia
CAN ACADEMIA, RESEARCHERS, DECISION MAKERS AND POLICY MAKERS MANAGE THE CHALLENGES OF BROADER COLLABORATION AND PRIVACY TO HARNESS VALUE FROM BIG DATA? For academia, researchers and decision-makers, and for many other sectors like retail, healthcare, insurance, finance, capital markets, real estate, pharmaceutical, oil & gas, big data looks like a new Eldorado.
https://medium.com/data-analytics-and-ai/role-of-big-data-in-academia-516a6d10c637
['Ella William']
2019-06-07 11:59:59.493000+00:00
['Data Science', 'Big Data', 'Analytics', 'Information Technology', 'Data Visualization']
Why Isn’t Agile Working?
Why Isn’t Agile Working? A couple drawings… I was visiting a relative a couple years ago. My poor cousin (the CEO of an insurance company) had been sold the Agile Silver Bullet ™ and was pissed. He said something like: It’s a sham! We changed the way we do everything. We brought in consultants. We hired these master project managers. And nothing worked! It made no difference. There’s no accountability. All I get is excuses I forget how I responded, but I know how I’d respond today. I’d draw some pictures and not even mention the word Agile. There are a couple core concepts I’d need to communicate to him…. 1. Flow Efficiency First, if we look at lead time — the time from when we dream up an idea, until it reaches customers — you’ll notice that most of the time is spent “waiting”. 15% flow efficiency (work time / lead time) is normal. Crazy, right? Yet we focus on what’s (relatively) visible…the small amount of time spent actually doing the job. The best companies hit 40%. Short story, to go faster, you need to address the waiting time. 2. Unplanned Work and Multitasking It is not uncommon to have teams paying 75% “interest” on a combination of unplanned work and task switching. The team may not even be paying down the principle. It’s literally overhead, and often never tracked in the ticketing system. Most likely, the team complains about this (it is a terribly uninspiring situation). But ignored long enough, they accept the dismal reality. Now imagine if this is a “shared service”, and the team is responsible for addressing production issues, or provisioning new infrastructure while simultaneously doing “projects”. Suddenly you have a bottleneck. Lesson: Address sources of unplanned work, and quantify the economic impact of having a shared service. Shared services make intuitive sense, but they often inspire a good deal of expensive pre-planning. 3. S, M, and L This is a fun trick. Plot the time-to-completion for your large, medium, and small work items. Try to move up a level and focus on items of actual customer value (not tasks). What you’ll notice in many organizations is that the “size” of the work has no bearing on the time-to-completion. Why? There are too many other factors influencing how long it takes to complete the work (e.g. dependencies, unplanned work, lots of work in progress, etc.) 4. Benefits Realization So much effort is spent reducing what I call “delivery risk”. This makes sense if you deliver custom projects and the customer pays cash-on-delivery. In SaaS (software as a service), we don’t get paid when we deliver the work. The benefits accrue over time. I call this “benefits risk” (the risk the work will be a dud). It is common for big orgs to adopt Agile, but see no financial benefits. Why? Development is faster, but that has no bearing on 1) making the right product decisions, and 2) working to realize benefits. The whole POINT of Agile is to reduce risk. In project work, that risk is “on time / within scope”. In product work, that risk is “this thing doesn’t ****ing work”. This is the whole fallacy of the PO “accepting” delivery of a feature. No benefit have been delivered! Lots of companies adopt the model on the left. Few adopt the model on the right. When they get shitty results, they try to cram more work into the system which brings about a world of hurt. 5. Unmanaged Complexity And finally. Take a well understood reference feature and pass it through your product development system. 
Without managing complexity / refactoring / automation, it will take longer to complete this feature each year. Even if your team remains the exact same. Having something go from 3 days to 6 weeks is not unheard of. Agile Which brings me to Agile. Agile is worthless unless it serves as a catalyst for continuous improvement. Scrum and SAFe are worthless unless they serve as a catalyst for continuous improvement. Why? Because the factors that are slowing you down, are only partially due to whether you are sprinting, writing user stories, and doing biweekly demos. I’d argue those things are relatively minor (once you wrap your head around the idea of incrementally lowering risk). To “be Agile”, you’ll need to spend a good deal of money and energy on: Doing the work that actually matters (benefits). Doing less. Automation, tooling, deployment pipeline, feature flags, etc. (DevOps) Changing your management culture Adjusting how you fund initiatives. Shift to incremental, mission based funding vs. funding projects Allocating resources to manage complexity (regular refactoring and re-architecting) Mapping value streams, and treating your business a service ecology A fresh look at shared services There’s no silver bullet. You have to do the work. Beware of anyone who says otherwise.
https://medium.com/hackernoon/why-isnt-agile-working-d7127af1c552
['John Cutler']
2017-10-05 13:50:50.678000+00:00
['Design', 'Software Development', 'Product Management', 'Agile']
Data storytelling - Black Lives Matter!
Data storytelling - Black Lives Matter! Introduction #BlackLivesMatter was founded in 2013 in response to the acquittal of Trayvon Martin’s murderer. Black Lives Matter Foundation, Inc. is a global organization in the US, UK and Canada whose mission is to eradicate white supremacy and build local power to intervene in violence inflicted on Black communities by the state and vigilantes. So many deaths have been recorded as a result of police violence in the US, which has claimed many lives over the years and left the families of the deceased in lifetime agony. The core question is: can the death rate be minimized or completely avoided? Yes, it can. Data scientists can give insights and recommendations to solve this problem. This key question motivated me to take a shot at the publicly available data on police violence in the US on Kaggle. I have done some exploratory data analysis on US police violence & fatal shootings against citizens between January 2015 and August 2020. This analysis will discuss not only Black people but in fact all races living in the US. The intention of this analysis is to answer the outlined questions and to show: 1. On which day, month and year are deaths most frequently recorded? 2. What percentage of each age bucket (group) and gender is affected? 3. Which race, state, city and demography are most affected? 4. What is the proportion of those who fled versus those who did not, and of those who posed a threat to the police versus those who did not? 5. What are the most used arms? 6. Black Lives Matter: further insight on Black people 1. On which day, month and year are deaths most frequently recorded? Death analysis by day (proportion by day) The above visual clearly shows that most killings occur midweek, that is, Tuesday and Wednesday, then decline towards the weekend and rise again at the beginning of the new week, that is, Saturday and Monday. Death analysis by month
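Before the monthly breakdown, here is a minimal sketch of how a day-of-week count like the one above could be reproduced with pandas. The file name police_killings.csv and the date column name are assumptions, not the actual Kaggle dataset's schema, so adjust them to match the real files.

```python
import pandas as pd

# Hypothetical file and column names -- adjust to the actual Kaggle dataset.
df = pd.read_csv("police_killings.csv", parse_dates=["date"])

# Keep the analysis window used in the article (January 2015 - August 2020).
df = df[(df["date"] >= "2015-01-01") & (df["date"] <= "2020-08-31")]

# Count recorded deaths per day of the week, ordered Monday through Sunday.
deaths_by_day = (
    df["date"].dt.day_name()
    .value_counts()
    .reindex(["Monday", "Tuesday", "Wednesday", "Thursday",
              "Friday", "Saturday", "Sunday"])
)

print(deaths_by_day)
# Express the same counts as percentages for the proportion chart.
print((deaths_by_day / deaths_by_day.sum() * 100).round(1))
```

The same value_counts pattern works for the month and year breakdowns by swapping dt.day_name() for dt.month_name() or dt.year.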
https://medium.com/analytics-vidhya/data-storytelling-black-lives-matter-d8b1dc0cb71c
['Oluwatosin Sanni']
2020-09-10 13:34:58.918000+00:00
['BlackLivesMatter', 'Data Science', 'Storytelling', 'Data Analytics', 'Data Visualization']
How Will You Change This Holiday Season?
Lesson to Learn We get to control how we act and respond to the situation around us. It is our life, and despite how others act or events unfold, we do get to choose what we do. It is not easy to be upbeat, joyful, and helpful to others when surrounded by pain, uncertainty, or sorrow. We can choose to be a ray of sunshine during the storm. Doing so changes the situation. Doing so changes our outlook. We can choose to be the joy that is so often lacking in our world during this holiday season.
https://medium.com/illumination-curated/how-will-you-change-this-holiday-season-8cf7b0f5eec0
['Randy Wolken']
2020-12-28 13:35:21.773000+00:00
['Leadership', 'Business', 'Self Improvement', 'Self-awareness', 'Life Lessons']
Evaluation Metrics for Regression Analysis
Evaluation Metrics for Regression Analysis Methods for quantifying error and assessing predictive performance in regression modeling Terms to know These terms will come up, and it’s good to get familiar with them if you aren’t already: Regression analysis — a set of statistical processes for estimating a continuous dependent variable given a number of independents Variance — measurement of the spread between numbers in a data set ŷ — the estimated value of y ȳ — mean value of y “Goodness of fit” Goodness of fit is typically a term used to describe how well a dataset aligns with a certain statistical distribution. Here, we’re going to think of it as a way of describing how well our model is fitted to our data. If we can think about our regression model in terms of the imaginary “best-fit” line it produces, then it makes sense that we would want to know how well this line matches our data. This goodness of fit can be quantified in a variety of ways, but the R² and the adjusted R² score are two of the most common methods for describing how well our model is capturing the variance in our target data. R² R² — also called the coefficient of determination — is a statistical measure representing the amount of variance for a dependent variable that is captured by your model’s predictions. Essentially, it is a measure of how well your model is fitted to the data. This score will always fall between -1 and 1, with values closest to 1 being best (value of 1 means our model is completely explaining our dependent variable). R² uses a sort of “baseline model” as a marker to compare our regression results against. This baseline model simply predicts the mean every time, regardless of the data. After fitting the regression model, the predictions of our baseline (mean-guessing) model are compared to the predictions of our newly fitted model in terms of errors squared. fig. 1 SSR — residual sum of squared errors for our regression model (squared difference between y and ŷ) — residual sum of squared errors for our regression model (squared difference between y and ŷ) SST — total sum of squared errors (squared difference between y and ȳ) We can see now that if SSR is very low, meaning our model has low error, our R² score will be closer to 1. Likewise, if we have higher amounts of error in our model, R² will swing towards 0 and in the worst cases -1. Adjusted R² R² is a great indicator of the goodness-of-fit of our model, but it has one major drawback — R squared will always increase as we add more independent variables. This happens regardless of whether or not our new features are actually predictive of our target, giving us a sometimes false sense of confidence in our newly included variables. This is where the adjusted R² score comes into play. It’s important to note that adjusted R² is always going to be lower than R², and we’ll see why in just a second. Unlike with R², the adjusted R² score takes into account how effective each feature is at explaining our target, and will actually decrease as we add less-predictive features to our model. The formula for adjusted R² looks a little messy but it’s easy to solve once we’ve calculated our R² score fig. 2 R² — R² score for our model, calculated using the formula from fig. 1 — R² score for our model, calculated using the formula from fig. 1 N — number of items in our dataset — number of items in our dataset p — number of independent variables The major feature to pay attention to here is that as p (no. 
of independent variables) increases, R²adj will always decrease unless there is a significant boost to R². This ensures that our R²adj score reflects the usefulness of each feature. Quantifying error Another intuitive way of measuring success in regression modeling is to talk about the total amount of error in our model’s predictions. While R² and adjusted R² both factor in model error at some level, what if we want a separate metric to express this error? That’s where MAE, MSE, and RMSE come in handy. MAE (Mean Absolute Error) MAE is the most intuitive out of the three, and the calculation is performed just like the name implies. We simply calculate the absolute difference between the predicted and actual target values for each sample, and divide by the number of samples in the dataset. We take the absolute difference because we are considering error in either direction. fig. 3 n — number of samples This value can range from 0 to infinity, with values closest to 0 being best. It’s important to note that this value will be in terms of the continuous dependent variable you are measuring, making it easily interpretable. For example, if we’re predicting pizza prices, MAE would tell us the average number of dollars our predictions were off from the actual prices. MAE is preferred when we aren’t too worried about outliers — since we are only taking the mean of our raw errors, very poor predictions aren’t necessarily weighted any differently than very good predictions in the MAE calculation. MSE (Mean Squared Error) MSE works a lot like MAE; however, here our method for dealing with positive and negative errors is to square the error term as opposed to taking its absolute value. Again, we divide by the number of samples to get a mean score for the dataset. fig. 4 n — number of samples With MSE, it’s important to note that we no longer get an error score in terms of our target variable. Instead, we get a potentially very large number that is very responsive to errors in our predictions — essentially penalizing large error values in the squaring step. Given its lack of interpretability, MSE is typically used more as a stepping stone to RMSE; however, it can be used by itself as an extremely good indicator of outliers in predictions. RMSE (Root Mean Squared Error) RMSE is almost identical to MSE and very easy to calculate. To find it, we simply take the square root of our MSE score. fig. 5 RMSE will be smaller than MSE whenever MSE is greater than 1, and the main motivation for using it is that we again get a score in terms of our target variable (dollars, for our pizza prices example). This creates an error metric that is both sensitive to outliers — large errors are penalized — and easily interpretable — the score is a continuous value with the same unit as our target.
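For readers who prefer to check the formulas in code, here is a minimal, self-contained sketch using NumPy and scikit-learn. The toy price arrays and the choice of p = 2 predictors are invented purely for illustration.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Toy data: actual vs. predicted pizza prices (illustrative values only).
y_true = np.array([12.0, 15.0, 9.5, 20.0, 14.0])
y_pred = np.array([11.0, 15.5, 10.0, 18.0, 15.0])

n = len(y_true)  # number of samples
p = 2            # number of independent variables in the (hypothetical) model

r2 = r2_score(y_true, y_pred)                  # fig. 1: 1 - SSR/SST
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)  # fig. 2: adjusted R²
mae = mean_absolute_error(y_true, y_pred)      # fig. 3: mean absolute error
mse = mean_squared_error(y_true, y_pred)       # fig. 4: mean squared error
rmse = np.sqrt(mse)                            # fig. 5: root of MSE

print(f"R2: {r2:.3f}, adjusted R2: {adj_r2:.3f}")
print(f"MAE: {mae:.2f}, MSE: {mse:.2f}, RMSE: {rmse:.2f}")
```

Because MAE and RMSE are in dollars here, they can be read directly as "how far off the price predictions are on average", while MSE is only useful for comparing models or flagging outliers.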
https://medium.com/ai-in-plain-english/evaluation-metrics-for-regression-analysis-9ac34df7be13
['Sam Thurman']
2020-10-30 19:27:20.300000+00:00
['Machine Learning', 'Deep Learning', 'AI', 'Data Science', 'Statistics']
Working on an Open Source Project as a Team
Working on an Open Source Project as a Team Reflecting on our experience contributing to open source project Reactime Photo by Javier Allegue Barros on Unsplash. Now that our team has launched Reactime version 7.0, we have some time to reflect on our experience contributing to an open source project. Our team was incredibly productive despite working across multiple time zones and exclusively communicating virtually. The most important aspects of our successful team dynamic were pretty simple: empathy and communication. More concretely, here is how we put these into practice. Before diving into a project, we discussed everyone’s individual goals and searched for an application that we were all passionate about and would enjoy working on. We all love using React to build applications and wanted to deepen our understanding of the JavaScript library. Reactime was the perfect open source project for us to do that. Our approach to it was pretty simple. At every stage, our main focus was the experience of the user, the user of our app, and the developer who would want to contribute to our extension. We wanted to improve the performance of the extension itself to make it easier for developers to spot bugs in their applications and to easily identify performance bottlenecks.
https://medium.com/better-programming/working-on-an-open-source-as-a-team-50b61b85bb55
['Becca Viner', 'Caitlin Chan', 'Mai Nguyen', 'Tania Lind']
2020-12-15 16:23:09.717000+00:00
['JavaScript', 'Open Source', 'React', 'Programming', 'Women In Tech']
How to Build a Reporting Dashboard using Dash and Plotly
A method to select either a condensed data table or the complete data table. One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present: Code Block 17: Radio Button in layouts.py The callback for this functionality takes input from the radio button and outputs the columns to render in the data table: Code Block 18: Callback for Radio Button in layouts.py File This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below is changing the data presented in the data table based upon the dates selected using the callback statement, Output('datatable-paid-search', 'data' , this callback is changing the columns presented in the data table based upon the radio button selection using the callback statement, Output('datatable-paid-search', 'columns' . Conditionally Color-Code Different Data Table cells One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table to be highlighted based upon a metric’s value; red for negative numbers for instance. However, conditional formatting of data table cells has three main issues. There is lack of formatting functionality in Dash Data Tables at this time. If a number is formatted prior to inclusion in a Dash Data Table (in pandas for instance), then data table functionality such as sorting and filtering does not work properly. There is a bug in the Dash data table code in which conditional formatting does not work properly. I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide: Code Block 19: Conditional Formatting — Highlighting Cells The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash. *This has since been corrected in the Dash Documentation. Conditional Formatting of Cells using Doppelganger Columns Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and Dash data table. These doppelganger columns had either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug when the decimal portion of a value is not considered by conditional filtering). 
Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements: Code Block 20: Adding Doppelganger Columns Then, the conditional cell formatting can be implemented using the following syntax: Code Block 21: Conditional Cell Formatting Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%) . One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values. The complete statement for the data table is below (with conditional formatting for odd and even rows, as well highlighting cells that are above a certain threshold using the doppelganger method): Code Block 22: Data Table with Conditional Formatting I describe the method to update the graphs using the selected rows in the data table below.
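The Dash data table API has changed across versions, so the following is only a rough sketch of the doppelganger-column idea using the newer filter_query / hidden_columns syntax, not the article's exact code blocks; the campaign names, values, and column ids are invented for illustration.

```python
import dash
import dash_table
import pandas as pd

# Illustrative data only -- in the article this frame comes from the paid-search report.
df = pd.DataFrame({
    "Campaign": ["Brand", "Non-brand", "Shopping"],
    "Revenue YoY (%)": ["12.5%", "-3.2%", "0.8%"],
})

# "Doppelganger" column: the raw numeric value behind the formatted string,
# multiplied by 100 so the conditional filter never depends on the decimal part.
df["Revenue_YoY_percent_conditional"] = [1250, -320, 80]

app = dash.Dash(__name__)
app.layout = dash_table.DataTable(
    id="datatable-paid-search",
    columns=[
        {"name": "Campaign", "id": "Campaign"},
        {"name": "Revenue YoY (%)", "id": "Revenue YoY (%)"},
        # The helper column is declared but kept out of sight.
        {"name": "", "id": "Revenue_YoY_percent_conditional"},
    ],
    hidden_columns=["Revenue_YoY_percent_conditional"],
    data=df.to_dict("records"),
    style_data_conditional=[
        {
            # Filter on the hidden doppelganger column...
            "if": {
                "filter_query": "{Revenue_YoY_percent_conditional} < 0",
                "column_id": "Revenue YoY (%)",
            },
            # ...but style the visible, formatted column.
            "color": "red",
        }
    ],
)

if __name__ == "__main__":
    app.run_server(debug=True)
```

The key design choice is that the filter condition and the styled column are decoupled: the hidden numeric column drives the logic, while the nicely formatted string column is what the user sees, so sorting, filtering, and highlighting no longer fight each other.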
https://medium.com/p/4f4257c18a7f#37f2
['David Comfort']
2019-03-13 14:21:44.055000+00:00
['Data Science', 'Dash', 'Towards Data Science', 'Dashboard', 'Data Visualization']
Best Python IDEs and Code Editors You Should Know
Best Python IDEs and Code Editors in 2020 Choosing the right tools for a job is critical. Similarly, when starting a new project, as a programmer, you have a lot of options when it comes to selecting the perfect Code Editor or IDE. There are loads of IDEs and Code Editors out there for Python, and in this section, we’ll discuss some of the best ones available with their benefits and weaknesses. ● PyCharm Image Source — PyCharm Category: IDE IDE First Release Date: 2010 2010 Platform Compatibility: Windows, macOS, Linux Windows, macOS, Linux Who It’s For: Intermediate to advanced Python users Intermediate to advanced Python users Supporting Languages: Python, Javascript, CoffeeScript, etc. Python, Javascript, CoffeeScript, etc. Price: Freemium (free limited feature community version, paid full featured professional version) Freemium (free limited feature community version, paid full featured professional version) Download: PyCharm Download Link PyCharm Download Link Popular Companies using Pycharm Python IDE - Twitter, HP, Thoughtworks, GROUPON, and Telephonic. Developed by JetBrains, PyCharm is a cross-platform IDE that offers a variety of features such as version control, graphical debugger, integrated unit tester, and pairs well for web development and Data Science tasks. With PyCharm’s API, developers can create their custom plugins for adding new features to the IDE. Other features include: ● Code completion ● Live updates to code changes ● Python refactoring ● Support for full-stack web development ● Support for scientific tool such as matplotlib, numpy, and scipy ● Support for Git, Mercurial and more ● Comes with paid and community editions Advantages of PyCharm — ● Can boost productivity and code quality ● Highly active community for support Disadvantages of PyCharm — ● Can be slow to load ● Requires changing default settings for existing projects for best compatibility ● The initial installation might be difficult Screenshot for References- Image Source — PyCharm ● Spyder Image Source — Spyder Category: IDE IDE First Release Year: 2009 2009 Platform Compatibility: Windows, macOS, Linux Windows, macOS, Linux Who It’s For: Python data scientists Python data scientists Price: Free Free Download: Spyder Download Link Spyder is awith support for packages like NumPy, SciPy, Matplotlib, and Pandas. Targeted towards scientists, engineers, and data analysts, Spyder offers advanced data exploration, analysis, and visualization tools. 
Features of this cross-platform IDE include: ● Code completion ● Syntax highlighting ● Code benchmarking via Profiler ● Multi-project handling ● Find in Files feature ● History log ● Internal console for introspection ● Third-party plugins support Advantages — ● Includes support for numerous scientific tools ● Comes with an amazing community support ● Interactive console ● Lightweight Disadvantages — ● Comes with execution dependencies ● Can be a bit challenging at first for newcomers Screenshot for References- Image Source — Spyder ● Eclipse + Pydev Category: IDE IDE First Release Year: 2001 — for Eclipse , 2003 — for Pydev 2001 — , 2003 — Platform Compatibility: Windows, macOS, Linux Windows, macOS, Linux Who It’s For: Intermediate to advanced Python users Intermediate to advanced Python users Supporting Languages: Python, (Eclipse supports Java and many other programming languages) Python, (Eclipse supports Java and many other programming languages) Price: Free Free Download: PyDev Download Link PyDev Download Link Popular Companies using PyDev and Eclipse Python IDE — Hike, Edify, Accenture, Wongnai, and Webedia. Eclipse is one of the top IDEs available, supporting a broad range of programming languages for application development, including Python. Primarily created for developing Java applications, support for other programming languages is introduced via plugins. The plugin used for Python development is Pydev and offers additional benefits over Eclipse IDE, such as: ● Django, Pylint, and unittest integration ● Interactive console ● Remote debugger ● Go to definition ● Type hinting ● Auto code completion with auto import Advantages — ● Easy to use ● Programmer friendly features ● Free Disadvantages — ● Complex user interface makes it challenging to work with ● If you’re a beginner then using Eclipse will be difficult Screenshot for References- Image Source — Pydev ● IDLE Image Source — Python Category: IDE IDE First Release Year : 1998 : 1998 Platform Compatibility: Windows, macOS, Linux Windows, macOS, Linux Who It’s For: Beginning Python users Beginning Python users Price: Free Free Download: IDLE Download Link IDLE Download Link Popular Companies using IDLE Python IDE — Google, Wikipedia, CERN, Yahoo, and NASA. Short for Integrated Development and Learning Environment, IDLE has been bundled with Python as its default IDE for more than 15 years. IDLE is a cross-platform IDE and offers a basic set of features to keep it unburdened. 
The features offered, include: ● Shell window with colorized code, input, output and error messages ● Support for multi-window text editor ● Code auto-completion ● Code formatting ● Search within files ● Debugger with breakpoints ● Supports smart indentation Advantages — ● Perfect for beginners and educational institutions Disadvantages — ● Lacks features offered by more advanced IDEs, such as project management capabilities ● Wing Image Source — Wing Category - IDE - IDE First Release Year— September 7, 2000 September 7, 2000 Platform - Windows, Linux and Mac - Windows, Linux and Mac Who It’s For: Intermediate to advanced Python users Intermediate to advanced Python users Price: $179 per user for a year of commercial use, $245 per user for a permanent commercial use license $179 per user for a year of commercial use, $245 per user for a permanent commercial use license Download: Wing Download Link Wing Download Link Popular Companies using Wing Python IDE- Facebook, Google, Intel, Apple, and NASA The feature-rich IDE for Python, Wing, was developed to make development faster with the introduction of intelligent features such as smart editor and simple code navigation. Wing comes in 101, Personal, and Pro variants with Pro being the most feature-rich and the only paid one. Other notable features by Wing include: ● Code completion, error detection, and quality analysis ● Smart refactoring capabilities ● Interactive debugger ● Unit tester integration ● Customizable interface ● Support for remote development ● Support for frameworks such as Django, Flask, and more Advantages — ● Works well with version control systems such as Git ● Strong debugging capabilities Disadvantages — ● Lacks a compelling user interface ● Cloud9 IDE Image Source — AmazonCloud9 Category : IDE : IDE First Release Year — 2010 2010 Platform : Linux/MacOS/Windows : Linux/MacOS/Windows Popular Companies using Cloud9 Python IDE — Linkedin, Salesforce, Mailchimp, Mozilla, Edify, and Soundcloud. Part of Amazon’s Web Services, Cloud9 IDE gives you access to a cloud-based IDE, requiring just a browser. All the code is executed on Amazon’s infrastructure, translating to a seamless and lightweight development experience. Features include: ● Requires minimal project configuration ● Powerful code editor ● Code highlight, formatting and completion capabilities ● Built-in terminal ● Strong debugger ● Real-time pair programming capabilities ● Instantaneous project setup, covering most programming languages and libraries ● Unobstructed access to several AWS services via terminal Advantages — ● Enables painless development of serverless applications ● Remarkably robust and globally accessible infrastructure Disadvantages — ● Depends entirely on internet access ● Sublime Text 3 Image Source — Sublime Category: Code Editor Code Editor First Release Year: 2008 2008 Platform Compatibility: Windows, macOS, Linux Windows, macOS, Linux Who It’s For: Beginner, Professional Beginner, Professional Supporting Languages: Python and C# Python and C# Price: Freemium Freemium Download: Sublime text 3 Download Link Sublime text 3 Download Link Popular Companies using Sublime Text Python IDE- Starbucks, Myntra, Trivago, Stack, and Zapier. Sublime Text is one of the most commonly used cross-platform Code Editors and supports several programming languages, including Python. 
Sublime offers various features such as plenty of themes for visual customization, a clean and distraction-free user interface, and supports package manager for extending the core functionality via plugins. Other features include: ● Up-to-date plugins via Package Manager ● File auto-save ● Macros ● Syntax highlight and code auto-completion ● Simultaneous code editing ● Goto anything, definition and symbol Advantages — ● Uncluttered user interface ● Split editing ● Fast and high-performance editor Disadvantages — ● Annoying popup to buy sublime license ● Confusingly large number of shortcuts ● Complicated package manager ● Visual Studio Code Image Source — Visual Studio Code Category: IDE IDE First Release Year: 2015 2015 Platform Compatibility: Windows, macOS, Linux Windows, macOS, Linux Who It’s For : Professional : Professional Supporting Languages: All the major programming languages (Python, C++, C#, CSS, Dockerfile, Go, HTML, Java, JavaScript, JSON, Less, Markdown, PHP, PowerShell, Python, SCSS, T-SQL, TypeScript.) All the major programming languages (Python, C++, C#, CSS, Dockerfile, Go, HTML, Java, JavaScript, JSON, Less, Markdown, PHP, PowerShell, Python, SCSS, T-SQL, TypeScript.) Price: Free Free Download: Visual Studio Code Download Link Visual Studio Code Download Link Popular Companies using Visual Source Code (Python IDE — The Delta Group, TwentyEight, Inc., Focus Ponte Global, Creative Mettle, and National Audubon Society, Inc. Developed by Microsoft, Visual Studio Code is an acclaimed cross-platform code editor that is highly customizable and allows development in several programming languages, including Python. It offers a wide variety of features to programmers, such as smart debugging, customizability, plugin support for extending core features. Key highlights include: ● Built-in support for Git and version control ● Code refactoring ● Integrated terminal ● IntelliSense for smarter code highlight and completion ● Intuitive code debugging capabilities ● Seamless deployment to Azure Advantages — ● Regularly updated with active community support ● Free Disadvantages — ● Vast collection of plugins can make finding the right one challenging ● Lackluster handling of large files ● Longer launch time Screenshot for References- Image Source — Visual Studio Code ● Atom Image Source — Atom Category: Code Editor Code Editor First Release Year: 2014 2014 Platform Compatibility: Windows, macOS, Linux Windows, macOS, Linux Who It’s For: Beginner, Professional Beginner, Professional Supporting Languages: Python, HTML, Java and 34 other languages. Python, HTML, Java and 34 other languages. Price: Free Free Download: Atom Download Link Atom Download Link Popular Companies using Atom (Python IDE) — Accenture, Hubspot, Figma, Lyft, and Typeform. Developed by Github, the top dog in source-code hosting and software version controlling, Atom is a lightweight and cross-platform Code Editor for Python and many other programming languages. Atom provides a lot of features in the form of packages, that enhances its core features. It’s built on HTML, JavaScript, CSS, and Node.js, with the underlying framework being Electron. 
Features offered include: ● Support for third-party packages via built-in Package Manager ● Supports developer collaboration ● Over 8000 feature and user experience-extending packages ● Support for multi-pane file access ● Smart code completion ● Customizability options Advantages — ● Lightweight code editor ● Community-driven development and support Disadvantages — ● Recent updates have increased RAM usage ● Some tweaking required in settings before use ● Jupyter Image source — Jupyter Category: IDE IDE First Release Year- February 2015 February 2015 Browser Compatibility: Chrome, Firefox, Safari Chrome, Firefox, Safari Price: Free Free Download: Jupyter Download Link Jupyter Download Link Popular Companies of Using Jupyter Python IDE- Google, Bloomberg, Microsoft, IBM, and Soundcloud. Also known as Project Jupyter, it is an open-source and cross-platform IDE that many data scientists and analysts prefer over other tools. Perfect for working on technologies such as AI, ML, DL, along with several programming languages, Python included. Jupyter Notebooks offer seamless creation and sharing of code, text, and equations for various purposes, including analysis, visualization, and development. Features offered include: ● Code formatting and highlight ● Easy sharing via email, Dropbox ● Produces interactive output ● Plays well with Big Data ● Can be run from local and cloud machines Advantages — ● Requires minimal setup ● Perfect for quick data analysis Disadvantages — ● Inexperienced users may find Jupyter complicated Screenshot for References-
https://towardsdatascience.com/best-python-ides-and-code-editors-you-must-use-in-2020-2303a53db24
['Claire D. Costa']
2020-12-11 07:18:23.627000+00:00
['Pycharm', 'Coding', 'Programming', 'Python', 'Data Science']
How to implement MICE algorithm using Iterative Imputer to handle missing values?
How to implement MICE algorithm using Iterative Imputer to handle missing values? Bhuvaneswari Gopalan Follow Nov 17 · 5 min read Any machine learning model is only as good as the data, so having a complete dataset with proper data is a must to develop a good model. Missing data often plagues real-world datasets, and hence there is tremendous value in imputing, or filling in, the missing values. Often the common methods like mean, median ,mode, frequent data and constant doesn’t provide correct data for the missing values. To know more about MICE algorithm and it’s working check out my blog “ MICE algorithm to Impute missing values in a dataset “, in which I have explained in detail how MICE algorithm works with an example dataset. In this blog, we will see how the MICE algorithm is implemented using the Scikit-learn Iterative Imputer. The Iterative Imputer was in the experimental stage until the scikit-learn 0.23.1 version, so we will be importing it from sklearn.experimental module as shown below. Note: If we try to directly import the Iterative Imputer from sklearn. impute, it will throw an error, as it is in experimental stage since I used scikit-learn 0.23.1 version. First, we need to import enable_iterative_imputer which is like a switch so that scikit-learn knows that we want to use the experimental version of Iterative Imputer. I will use the same example that I used in my previous blog “ MICE algorithm to Impute missing values in a dataset “, so that it will be easy to understand as shown below: Let’s create the data frame for this data in the Kaggle notebook as shown below: Next, let’s remove the personal loan column as it is the target column and as we will not be imputing that column, we will not need it. We will work only with the feature columns as shown below: Now, let’s find out how the values are correlated to decide which algorithm to use to impute the null values. As we see here, the values we got are either 1 or very close to 1, so we can use linear regression to impute null values. Next, fit and transform the dataset with the imputer. In the above image, we can see that after we transform, the null values are imputed (circled numbers) and the dataset is shown in the form of an array. This is how easy it is to impute null values using Iterative Imputer with very few lines of code. How to use Iterative Imputer in the case of training and testing sets? To demonstrate the working of Iterative Imputer in the case of training and testing sets, we will use the same dataset with more records as shown below: Next, let’s remove the personal loan column as it is the target column and as we will not be imputing that column, we will not need it. Next, let’s split the dataset in to train and test datasets using train_test_split function. Now, let’s repeat the same steps we did earlier for the whole dataset, using linear regression as the imputer model and we will fit and transform the training dataset with that imputer as shown. As we can see, the null values in the training dataset are imputed (circled numbers). Finally, let’s do the same on the test set to impute the null values. The test set is as follows: For the test set, we should just use the same imputer that we used for the train set and call the transform function on the test set. We should not again define a new imputer for the test set. 
As shown in the above image, the null values in the test set are now imputed (circled numbers), and thus we have imputed all the null values in both the training and testing sets without much difficulty. In a nutshell, this is how the Iterative Imputer works to impute the null values in a dataset, and we can see how effortlessly the missing values can be filled using it. Hope this is useful to everyone who is working with datasets that have missing values and trying to fill in appropriate data, so that the model that uses the dataset can predict accurately. Happy Imputing!
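As a compact recap of the workflow described above, here is a self-contained sketch. The toy data frame and its column names are invented for illustration; the imports and the fit/transform pattern follow the steps shown in the screenshots.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 -- activates the experimental API
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Toy feature frame with missing values (illustrative numbers only).
df = pd.DataFrame({
    "age":        [25, 27, 29, np.nan, 31, 33, 35, np.nan],
    "experience": [1, np.nan, 5, 7, 9, 11, np.nan, 15],
    "salary":     [30, 35, 40, 45, np.nan, 55, 60, 65],
})

X_train, X_test = train_test_split(df, test_size=0.25, random_state=42)

# Linear regression as the per-column estimator, since the features are highly correlated.
imputer = IterativeImputer(estimator=LinearRegression(), max_iter=10, random_state=0)

X_train_imputed = imputer.fit_transform(X_train)  # fit and transform the training set
X_test_imputed = imputer.transform(X_test)        # reuse the SAME fitted imputer on the test set

print(pd.DataFrame(X_train_imputed, columns=df.columns))
print(pd.DataFrame(X_test_imputed, columns=df.columns))
```

Note that the test set only ever sees transform, never a fresh fit, exactly as described above.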
https://medium.com/nerd-for-tech/how-to-implement-mice-algorithm-using-iterative-imputer-to-handle-missing-values-3d6792d4ba7f
['Bhuvaneswari Gopalan']
2020-11-25 03:36:17.550000+00:00
['Machine Learning', 'Missing Data', 'Data Engineering', 'Towards Data Science', 'Data Preprocessing']
11 Phrases to Be More Persuasive Without Being Overly Pushy
Recently I came across a statistic that caught me off guard. According to research performed by persuasion expert Daniel Pink in his book “To Sell Is Human,” 41 percent of the average person’s workday consists of influencing, persuading, and convincing the people around us. This statistic may seem high, but it makes perfect sense. From being offered opportunities and getting your ideas to fly, to convincing your kids in an effective manner to pick up after themselves, persuasion is all around us and it plays a key role in both our personal and professional lives. As an introvert with a stutter who has spent the last two decades as either an entrepreneur or in sales-based roles, I’ve studied the topic of influence and persuasion a great deal. I love the work Daniel Pink puts out. The same goes for the author of “Influence” and “Pre-suasion,” Robert Cialdini. However, if I were looking for an introduction to persuasion, few books are more effective than “Exactly What to Say,” by international sales trainer Phil M. Jones. This is for the simple fact that Phil’s teachings are not only super easy to digest, but they are also extremely practical, as he gives very clear explanations of why they work along with loads of everyday examples. I’ve been sprinkling the phrases Phil recommends into my daily conversations since coming across his book three years ago. They work. Best of all, many of them you can begin to implement today without feeling like you’re being overly pushy. This past week, Phil was kind enough to give me the green light to share some of his magic phrases. Below are 11 of them to get you started.
https://medium.com/personal-growth/11-persuasive-phrases-to-move-people-to-action-and-get-more-done-f41a15217b07
['Michael Thompson']
2020-11-10 19:02:14.964000+00:00
['Relationships', 'Business', 'Leadership', 'Self Improvement', 'Productivity']
A Bucket Full of Knives and Other Tales from Working in the Concert Industry
Photo by Aditya Chinchure courtesy of Unsplash. “He pointed at me, and I know he wanted me to come up on stage,” said an attractive, drunk, 30ish-year-old woman at Nelly’s concert. As the event manager for the arena, it was my job to kick people out, especially if they tried to rush the stage. Even though I was escorting her out with help from my security staff, she decided that she liked me. “You get it. I know you understand,” she said about nothing in particular as I continued to gently lead her to the gate, nodding in agreement and propping her up a little as we went. The Nelly superfan fit the profile of the type of concertgoer who caused me the most trouble during my year on the job — 30-year-old women out for a night on the town. Teenagers and people in their 20s rarely caused issues (or perhaps couldn’t afford tickets), and middle-aged folks generally spent a lot of time wishing that the people in front of them would sit down. Ladies in their 30’s, though, were there for a good time. “She threw up ON MY HEAD!” said a patron at the I Love the 90s show. Seats in an arena are perfectly positioned so that if you puke, it lands directly on the head on the person in front of you. Instead of apologizing, the puker proceeded to curl up and take a nap in her plastic seat; therefore, the pukee’s anger was directed squarely at me. “ON. MY. HEAD.” she repeated as I was relocating her to a fresh seat, knowing she had about a 30% chance of it happening again. Several women at the Kip Moore concert punched each other in the face for unkown reasons. The walk to my car after work was as treacherous as the concerts themselves. Located directly under the interstate, the employee lot was a flowing river of bird poop. My co-workers were disturbingly alright with the quantity of bird poop (and bird remnants). I learned how to tiptoe and sprint simultaneously to avoid stepping in it or having it land on my head. The finance director would casually wave as she walked out, not noticing my delicate dance as she rolled her briefcase through an inch-deep pool. The staff would barely even flinch when a bird would dive-bomb them indoors. About five survived in the building by drinking old mop water and would circle ominously overhead at all times. I’m pretty some of them lasted longer than I did at the arena. The birds were like part of a curse on the building that caused people and objects to function poorly. The elevator only broke twice during my time there and both times were on the same day — when Mannheim Steamroller, a band that attracts a crowd of a certain age, was scheduled to play. While children were streaming in for a sold-out circus show, a car hit a deer in front of the building — not a typical occurrence, considering the arena is in the middle of downtown. The deer wasn’t quite dead, so a police officer walking by decided to “take care of the situation” as children watched in horror from the sidewalk and skyways. Several women at the Kip Moore concert punched each other in the face for unknown reasons. During one of the smallest events — an indoor football match — a group of drunken guys ganged up on my best rolling cart and broke it. That probably bothered me a disproportionate amount given the various affronts to humanity that regularly took place around me at work. Our typical staffing plan was to throw about 20 retired folks into a crowd of 4,000 drunk people. My coworkers didn’t do a lot to make the job more bearable. 
“Can we please make a note that Monster Truck drivers need pens to sign autographs?” my co-worker asked me, perturbed. After the show, the drivers sauntered upstairs to the concourse with empty hands as approximately 3,000 fans waited for them to sign various loudly-colored t-shirts and toy cars. As soon as I realized this, I rushed to grab the closest pens, which happened to belong to this particular co-worker. “I buy my own pens, you know,” she said. At our weekly operational meetings, I was positioned the approximate distance of a firing squad away from the rest of the staff as I took “post-mortem” notes about what went well and what went poorly. The room we used for the meetings was meant to be a “pre-concert lounge,” but almost none of the concerts sold well enough to bother opening it. “Did you get that? Did you get that?” my boss would ask, making sure I wrote down every comment made about the usher and security staff that I supervised. Our typical staffing plan was to throw about 20 retired folks into a crowd of 4,000 drunk people, so there was always plenty of material. “The ushers weren’t stopping people from dancing in the aisles,” my co-workers would say. Before the show, the conversations sounded a little different. “Can you cut down on the number of ushering staff? The show isn’t selling so hot,” my boss would unfailingly ask a couple of days before the concerts. Metal detectors were (wisely) installed during my tenure at the arena, but staff struggled with them at first. Right before the Dierks Bentley concert was scheduled to begin, an usher handed me a plastic container full of knives and said, “I’m not sure what to do with these.” As patrons were walking through the newly installed metal detectors to go to the show, this usher mistakenly collected everyone’s knives, thinking he was supposed to hold on to them instead of asking people to return them to their cars (which was the new protocol). In the ten minutes before he realized his mistake, he had collected at least 50 knives and had no idea which knife belonged to whom. “Should we pass them out to people as they leave?” he asked. During the show, I heard the vague call, “We need security on the floor,” over the radio. This was not an unusual thing to hear when there was a concert with a general admission floor, but it was always hard to figure out who the instigator was due to the haze being pumped into the air and the loud music, which caused people to have to shout the problem into my ear. On this occasion, it was easy to figure out who the culprit was. As I neared the floor, about 50 women pointed in silent unison at one man, who was aimlessly spinning in circles with his arms outstretched just enough to lightly brush up against all of the women around him, driving them nuts. After I removed him, he decided to continue his space-invading dance with the police officers outside the venue, who promptly tackled him to the ground. “Are you aware that one of your staff held on to people’s items after they went through the metal detectors?” a co-worker asked during our post-mortem meeting. “Yes, because I have a bucket of knives in my desk drawer,” I responded.
https://medium.com/the-haven/a-bucket-full-of-knives-and-other-tales-from-working-in-the-concert-industry-6753c0c2a8b0
['Jessica Carney']
2020-09-22 14:37:40.688000+00:00
['Work Stories', 'Humor', 'Concerts', 'Backstage', 'Music']
Kick Start Your Marketing Career: Take the Google Advertising Fundamentals Exam
Do you know what all the ads are doing on the top and right side of the search results? At this point, almost everyone uses the internet, and most internet users use search engines to navigate the web, find information, and purchase goods. The goal of most search engines is to deliver the most relevant results for each query. Do you feel that your company or product is relevant for certain search queries? If you are trying to kickstart your marketing career, it is very important that you understand search engine marketing. In many organizations you will be hard pressed to find someone who is knowledgeable about search engine marketing. Not only that, very few people can say they are experts. If you’d like to show your initiative and become an expert in one portion of online marketing, take the Google Advertising Fundamentals Exam. According to the exam website, the fundamentals exam is designed to cover “basic aspects of AdWords and online advertising, including account management and the value of search advertising”. It’s not just a test on how to use AdWords. In fact, the exam only scratches the surface of AdWords and its functionality. The fundamentals exam is really a course on how online advertising works on Google. How do the ads get there? Why should I click on a paid ad? How can I know the ads are safe? Are the ads relevant to my search? This AdWords exam is only one of the exams Google offers. They also have training and testing centers for Google Analytics and Google Apps. You don’t even have to go with Google. If you would rather get a certification from Microsoft, you could take the adExcellence exam. Gaining expertise in search marketing can help differentiate yourself from other job applicants and employees in your future workplace. If you don’t want to pay the $50 for the exam, all the learning materials are available for free online. Take a look for yourself; I bet you’ll learn something new in the first 5 minutes.
https://medium.com/jordan-silton/kick-start-your-marketing-career-take-the-google-advertising-fundamentals-exam-88ea6580a49e
['Jordan Silton']
2016-04-29 01:35:52.865000+00:00
['Career Advice', 'Google', 'Certification']
NLP on Edinburgh Fringe 2019 data
NLP on Edinburgh Fringe 2019 data Web scraping and text analysis of the events taking place during the Edinburgh Fringe In this post, we dive into the basics of scraping websites, cleaning text data, and Natural Language Processing (NLP). I’ve based the context of this project around the Edinburgh Fringe, the world’s largest arts festival, currently taking place between the 2nd and 26th of August. Getting data into Python from web scraping For my analysis, I wanted to acquire text about all of the events taking place during the Fringe festival. This text data was located across several webpages, which would take a long time to manually extract. This is where the Python library requests helps us out, as it can be used to make HTTP requests to a particular webpage. From the response of making a HTTP request, the website text data can be obtained. This text data, however, is a large scroll of text, and this is where the library BeautifulSoup is implemented to parse the returning HTML we get from the webpage so that we can efficiently extract only the content we want. The below code snippet demonstrates making a request to a webpage and parsing the response through BeautifulSoup:

import requests
from bs4 import BeautifulSoup

# URL to query
url = 'https://url_to_query'

# Scrape html from the URL
response = requests.get(url)

# Use html parser on webpage text
soup = BeautifulSoup(response.text, 'html.parser')

Within the returned variable soup, we can search for particular classes in the HTML using commands such as soup.find_all('', class_='CLASS_NAME'). Using this approach, I acquired data for 5,254 festival events, including fields of the event name, a brief description of the event, ticket prices, and the number of shows being performed. Exploring the Fringe data After getting a data set, typically the next step is to explore what you have. I was interested to know what the distribution of ticket prices was across the events. A restricted view of this data is shown below: Distribution of events by ticket price (view limited to cost up to £60) From the chart, we can see that over 25% of all events are free to attend, with £5–£20 holding a large portion of the distribution. Deeper analysis into the data revealed shows costing more than £60, most of which were technical masterclasses or food/drink tasting sessions. What I was really interested in, though, was the types of shows taking place, for which we need to start working with our text data. Cleaning When using text for data science projects, the data will almost always require some level of cleaning before it is passed to any models we want to apply. With my data, I combined the event name and description text into a single field called df['text']. The following code shows a few of the steps that were taken to clean our text:

import string
import pandas

def remove_punctuation(s):
    s = ''.join([i for i in s if i not in frozenset(string.punctuation)])
    return s

# Transform the text to lowercase
df['text'] = df['text'].str.lower()

# Remove the newline characters
df['text'] = df['text'].replace('\n', ' ', regex=True)

# Remove punctuation
df['text'] = df['text'].apply(remove_punctuation)

Bag Of Words (BOW) With each row in our DataFrame now containing a field of cleaned text data, we can proceed to inspect the language behind it. One of the simpler methods is called Bag of Words (BOW), which creates a vocabulary of all the unique words occurring in our data. We can do this using CountVectorizer imported from sklearn.
As part of setting up the vectorizer, we include the parameter stop_words='english', which removes common words from the dataset such as the, and, of, to, in, etc. The following code performs this on our text:

from sklearn.feature_extraction.text import CountVectorizer

# Initialise our CountVectorizer object
vectorizer = CountVectorizer(analyzer="word",
                             tokenizer=None,
                             preprocessor=None,
                             stop_words='english')

# fit_transform fits our data and learns the vocabulary whilst transforming the data into feature vectors
word_bag = vectorizer.fit_transform(df.text)

# print the shape of our feature matrix
word_bag.shape

The final line prints the shape of the matrix we have created, which in this case had a shape of (5254, 26869). This is a sparse matrix of all the words in our corpus and their presence in each of the supplied sentences. One benefit of this matrix is that it can display the most common words in our dataset; the following code snippet shows how:

# Get and display the top 10 most frequent words
freqs = [(word, word_bag.getcol(idx).sum()) for word, idx in vectorizer.vocabulary_.items()]
freqs_sort = sorted(freqs, key=lambda x: -x[1])
for i in freqs_sort[:10]:
    print(i)

From the fringe data, the top ten most-frequent words in our data are:

('comedy', 1411)
('fringe', 1293)
('new', 927)
('theatre', 852)
('edinburgh', 741)
('world', 651)
('life', 612)
('festival', 561)
('music', 557)
('join', 534)

This is to be expected and doesn’t tell us much about the depth of what is on offer. Let’s try mapping how events might be connected to each other. TF-IDF and Cosine Similarities When working within NLP, we are often aiming to understand what a particular string of text is about by looking at the words that make up that text. One measure of how important a word might be is its term frequency (TF); this is how frequently a word occurs in a document. However, there are words that can occur many times but may not be important; some of these are the stopwords that have already been removed. Another approach that we can take is to look at a term’s inverse document frequency (IDF), which decreases the weight for commonly used words and increases the weight for words that are not used very much in a collection of documents. We can combine TF and IDF (TF-IDF). TF-IDF is a method for emphasising words that occur frequently in a given observation while at the same time de-emphasising words that occur frequently across many observations. This technique is great at determining which words will make good features. For this project, we are going to use the TF-IDF vectorizer from scikit-learn. We can use the following code to fit our text to the TF-IDF model:

from sklearn.feature_extraction.text import TfidfVectorizer as TFIV

vctr = TFIV(min_df=2, max_features=None,
            strip_accents='unicode', analyzer='word',
            token_pattern=r'\w{1,}', ngram_range=(1, 2),
            use_idf=True, smooth_idf=1, sublinear_tf=1,
            stop_words='english')

X = vctr.fit_transform(df.text)

From the matrix X, we can understand each of our events’ text in the form of a vector of the words. To find similar events, we are going to use a method called cosine similarities, which again we can import from sklearn. The following snippet demonstrates how to perform this on a single event (X[5]), and the output shows the most relevant events and their similarity score (a score of 1 would be the same text).
from sklearn.metrics.pairwise import linear_kernel

cosine_similarities = linear_kernel(X[5], X).flatten()
related_docs_indices = cosine_similarities.argsort()[:-5:-1]
cos = cosine_similarities[related_docs_indices]

print(related_docs_indices)
print(cos)

[5 33 696 1041]
[1. 0.60378536 0.18632652 0.14713335]

Repeating this process across all of the events leads to the creation of a map of all events and how similar events link together. Gephi network graph showing cosine similarities >0.2 for all events, coloured by LDA topic What is interesting about the network graph above is the vast number of events that have no relationships to others in the network. These are scattered around the edge of the network and highlight the degree of originality on show at the Fringe festival. In the middle, we can see a few clusters with a high degree of overlap between events. In the final stage of the project, we model these clusters and try to assign themes to them. Topic Modelling Latent Dirichlet Allocation (LDA) is a model that generates topics based on word frequency. We use it here for finding particular themes in our Fringe data about the types of events taking place. The code below shows how to get started with this approach:

from sklearn.decomposition import LatentDirichletAllocation

vectorizer = CountVectorizer(analyzer="word", min_df=20,
                             token_pattern=r'\w{1,}', ngram_range=(2, 4),
                             preprocessor=None, stop_words='english')

word_bag = vectorizer.fit_transform(df.text)

lda = LatentDirichletAllocation(n_components=25, max_iter=50,
                                learning_method='online',
                                learning_offset=40.,
                                random_state=0).fit(word_bag)

names = vectorizer.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
    print("Topic %d:" % (topic_idx))
    print(" ".join([names[i] for i in topic.argsort()[:-5 - 1:-1]]))

The above code is just an example of running the LDA model and printing the output topics and the most important words. From the output of the LDA, I used Principal Coordinate Analysis to be able to create a 2D projection of our topics: Topic modelling from LDA method Being able to plot the topics from running LDA is useful for observing where topics are overlapping and tuning the model. It also provides a method for understanding the different types of events taking place during the festival such as music, comedy, poetry, dance, and writing. Closing thoughts Our analysis demonstrates that, with the basics of NLP methods and techniques, we can interrogate a small dataset of text to gain insight. The BoW model, despite being simple, provides a fast view on the number of words present and the main ones used in the text, although it wasn’t much of a surprise for our data that words such as ‘comedy’, ‘fringe’, and ‘edinburgh’ were the most common. Expanding on this, TF-IDF provides a way to start thinking about sentences rather than words, with the addition of cosine similarities providing a method to start grouping observations. This showed us the extent of original events present in the Fringe with little text in common with other events. Finally, with LDA we get our model to produce topics where groups of events are allocated to a theme. This allows us to gauge the main types of events happening throughout the festival. There are several other methods for NLP worth investigating. As part of the project, I also used Word2Vec and TSNE; however, the findings from these have not been presented.
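As a small appendix to the network-graph step above, here is a minimal sketch (mine, not from the original notebook) of how the pairwise similarities could be exported as a Gephi-ready edge list; it assumes the X matrix from the TF-IDF step and the 0.2 threshold mentioned in the figure caption:

import csv
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

sims = cosine_similarity(X)        # dense (n_events, n_events) similarity matrix
np.fill_diagonal(sims, 0)          # ignore self-similarity

# keep each pair once, and only if it clears the 0.2 threshold used in the graph
rows, cols = np.where(np.triu(sims, k=1) > 0.2)

with open('edges.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Source', 'Target', 'Weight'])
    for i, j in zip(rows, cols):
        writer.writerow([i, j, round(float(sims[i, j]), 4)])

The resulting edges.csv can be loaded straight into Gephi's data laboratory, with node ids matching the row positions of the events in the DataFrame.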
https://towardsdatascience.com/nlp-on-edinburgh-fringe-2019-data-aa71dca818b4
['Christopher Doughty']
2019-08-05 18:35:23.628000+00:00
['NLP', 'Data Visualization', 'Edinburgh Fringe', 'Data Science', 'Analysis']
Full Stack Pronounced Dead
The 2020 Stack This leaves us with open questions. Is there still a role for talented individuals with knowledge and skills that span multiple layers? How do we accommodate the diverse mixture of experience that full stack developers have traditionally brought to the project? How does management fulfill all these needs? How should recruiters filter prospective recruits? How should job applicants declare their skillset? It’s time to rethink the term full stack. I say this with some ambivalence because I helped to popularize the concept. The website full-stack.com was my take on the state of the art in 2009. But sadly, in 2019, it’s a relic suitable for the Computer History Museum. Google Trends 2009–2019 for the term fullstack Oddly, there seems to be an ever-increasing curiosity about the term full stack. A Google Trends snapshot for the period 2009–2019 shows a sharp uptick over the past five years. And it’s reaching new heights each year. But if all those queries are looking for the next thing, here it is. Of course, we have to give it a name so we can properly discuss it. I’ll call it the 2020 stack. A new name for a new generation. Here’s my take on it: First, let’s honor those individuals who have deep skills in diverse areas. They are our best hope at not falling into the trap of specialization. We want to shun those empire-building tendencies, those gurus with secret incantations, and those silos of information that creep in with specialization. Second, let’s come to grips with the fact that career development means that people come and go. Organizations must expect this as part of their ordinary operation. Business can’t be interrupted when senior staff move on. “No one is indispensable.” Third, communication between specialists is weakened by domain jargon. When experts use terminology, acronyms, and idiomatic expressions that are domain-specific, they put themselves and their teammates in peril. “He said/she said.” Cross-domain fertilization is vital in keeping communication channels clear. So here are some characteristics of the new 2020 stack specialist: Since no one person can handle it all, the 2020 stack must be covered by a team. Not a group of individuals, but a true team. That means that when one person is falling behind, another will pick up the slack. When one person has superior skills, there’s a mechanism in place for mentoring the others. When there’s a gap in the team’s knowledge-base, they seek out and hire a team member who’s smarter than all of them. Every 2020 team player must be a cross-domain expert. Any individual with a skillset limited to just one or two layers of the stack isn’t truly a 2020 team player — these types of individuals may aspire to be future 2020 team members, or they may not. But until they acquire deep knowledge across multiple layers of the stack, they’re just 2020 candidates. The mix of skills that 2020 team members bring to a project isn’t rigidly categorized. Unlike the front end/back end categories that we’ve embraced until now, the 2020 divisions are manifold. One 2020 team may have a member with skills that include NoSQL, cloud configuration, and continuous integration. Meanwhile, another 2020 team may have an analogous team member with skills that include SQL databases, Node.js servers, containers, and orchestration. To refer to either as simply a 2020 back end team player gives them too little credit. 
Finally, the vital ingredient: Communication should be carried out with the shared intention of making the best decision for the problem at hand. This means that peers whose skillsets overlap should communicate with an open mind. Rather than just informing peers about new developments, peers should discuss things. This makes everyone smarter, and it prevents specialization from creeping back in. Symbiotic growth.
https://medium.com/better-programming/2020-001-full-stack-pronounced-dead-355d7f78e733
['Joe Honton']
2020-01-02 14:54:45.178000+00:00
['Programming', 'Software Development', '2020 Stack', 'Startup', 'Full Stack']
Do White Men Rule the World Because They’re More Intelligent?
In 1921, Albert Einstein was awarded the Nobel Prize in Physics “for his services to Theoretical Physics.” Einstein was undoubtedly a genius and his contributions to science were deserving of such a prestigious award. Being awarded a Nobel Prize is no easy feat. It involves making an outstanding contribution to the sciences, economics, literature or world peace. Now, genius is highly subjective. What genius means for some is different for others. Not every genius who has made a brilliant scientific discovery receives a Nobel Prize — Nikola Tesla being a notable exception. And not every genius falls under the scope of the Nobel Prizes. There are countless brilliant creatives in the arts, music and film industries who aren’t awarded prizes, as the arts (bar literature) don’t fall under their remit. But the Nobel Prizes do offer a useful barometer of who’s considered a scientific, economic, or literary genius deserving of praise, and who’s not. When looking at who has been awarded a prize, the shocking lack of diversity may surprise you. Since the first ceremony in 1901, 923 people have received a Nobel Prize. Only 54 prizes have been awarded to women, or 6% of the total. The lack of racial diversity might be even worse. Of the 923 winners, 70 are of Asian descent. 14 winners are of African descent. And a black person has never won a Nobel Prize in Science, not one. That means 90% of winners have been white, 94% of winners have been male. If you saw these statistics without reflecting on them, the only conclusion you could make is that white men are geniuses, and everyone else isn’t.
https://medium.com/an-injustice/do-white-men-rule-the-world-because-theyre-more-intelligent-4eef189edd96
['Paul Abela']
2020-08-17 18:56:43.345000+00:00
['Patriarchy', 'Society', 'Culture', 'Gender Equality', 'Racism']
Intelligent Data Question Answering
Welcome to the Visualization Notes column. (Reading time: about 3 minutes) What is data question answering? In business analytics for enterprise users, data is the essential foundation for decision making. Traditional analysis software, however, often requires users to have fairly strong data knowledge and technical skills, and its complex analysis methods and interactions tend to shut out a large group of data novices. Compared with complex software interactions, natural language conversation is an intelligent interaction mode with a lower barrier to entry and higher efficiency. With the continuous development of AI, general users have already experienced the convenience of natural language interaction in everyday applications such as the personal assistant Siri and intelligent customer service bots. When communicating with a machine in natural language, users do not need to worry about technical details; they simply state their intent or request and get the answer they need. Data question answering is the application of natural language interaction technology to data analysis. With a data question answering system, users can ask all kinds of questions about a dataset, and after analysing the data automatically the system gives the corresponding answer in the form of visualizations and text. In this kind of data-oriented natural language interaction, users can focus on the analytical question and the business logic without worrying about data processing or specific software operations. Clearly, such an intelligent interaction mode is far friendlier to data analysis beginners, and it also improves the efficiency of business analysis and decision making. Applications and research today Some leading visualization products have already integrated data question answering. Tableau introduced the new Ask Data feature in its 2019 release [1]: once a user uploads data to Tableau Server, they can start asking questions about the data without any extra configuration. Microsoft's Excel also added a "conversation" feature in 2019 [2], which can automatically understand the user's question, intelligently analyse the spreadsheet, and finally present the corresponding visualization to the user. Judging by the moves of these leading technology companies, data analysis based on natural language interaction has become a clear direction for the industry. In the academic visualization community there has already been a great deal of research on natural language interaction in recent years. Notably, FlowSense [3], a natural-language-driven visual data exploration system, won the Best Paper Award at the 2019 visualization conference, which shows how much attention the research community is paying to this direction. For data question answering, most research methods focus on extracting the user's intent from natural language, converting it into a SQL-like data query, and presenting the query result as a data visualization [4,5,6,7]. For example, the question "Which SUV model has the lowest price?" should be convertible into the SQL statement: SELECT MIN(PRICE) FROM CARS WHERE CATEGORY = 'SUV'. Some other research focuses on pragmatics, studying users' linguistic behaviour during analysis in order to propose language rules for visual analysis that can be used to optimize data question answering systems [8,9,10]. Opportunities and challenges Although industry and academia already have plenty of practice and cutting-edge research on data question answering, many challenges remain. First, how can we better handle the diversity of user queries? When users face a natural language interaction system, they assume by default that the system is highly intelligent; compared with the standardized data formats of traditional software, they tend to phrase their needs more colloquially or with domain-specific jargon, and their conversations also contain common-sense concepts and knowledge beyond the data itself, which can cause current data question answering systems to fail. Second, how can data question answering be optimized in context? Users typically look for answers over many successive interactions, so the system needs to infer the user's intent across multiple rounds of dialogue and combine previous query results to give the best current result. Finally, beyond multi-turn dialogue, users' interactions within the system and the personal information they expose (such as geographic location) can also become factors for further optimization, so how to better combine this additional context with data question answering is another challenge for future research. A vision for commercial applications Imagine an enterprise scenario such as an intelligent data dashboard application, where a large-screen dashboard can be assembled quickly through natural language. Users do not need strong data analysis skills; they simply tell the dashboard application their business requirements, for example the KPIs they care about, industry trends, and so on. After a semantic understanding module extracts the information from the user's request, an industry-specific dashboard can be built for them automatically and laid out intelligently around their points of interest. The final readers of the dashboard can also "converse" with it directly: based on the user's question, the dashboard automatically highlights the content they are interested in or transforms the corresponding views. References [1] Tableau Ask Data [2] Microsoft Excel: intelligent data analysis technology that unlocks Excel's new "conversation" feature [3] Yu, Bowen, and Cláudio T. Silva. "FlowSense: A natural language interface for visual data exploration within a dataflow system." IEEE transactions on visualization and computer graphics 26.1 (2019): 1–11. [4] Dhamdhere, K., McCurley, K.S., Nahmias, R., Sundararajan, M. and Yan, Q., 2017, March. Analyza: Exploring data with conversation. In Proceedings of the 22nd International Conference on Intelligent User Interfaces (pp. 493–504). [5] Setlur, V., Battersby, S.E., Tory, M., Gossweiler, R. and Chang, A.X., 2016, October. Eviza: A natural language interface for visual analysis. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (pp. 365–377). [6] Fast, E., Chen, B., Mendelsohn, J., Bassen, J. and Bernstein, M.S., 2018, April. Iris: A conversational agent for complex tasks. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–12). [7] Gao, T., Dontcheva, M., Adar, E., Liu, Z. and Karahalios, K.G., 2015, November. Datatone: Managing ambiguity in natural language interfaces for data visualization. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (pp. 489–500). [8] Hoque, E., Setlur, V., Tory, M. and Dykeman, I., 2017. Applying pragmatics principles for interaction with visual analytics. IEEE transactions on visualization and computer graphics, 24(1), pp.309–318. [9] Srinivasan, A. and Stasko, J., 2017. Orko: Facilitating multimodal interaction for visual exploration and analysis of networks. IEEE transactions on visualization and computer graphics, 24(1), pp.511–521. [10] Setlur, V., Tory, M. and Djalali, A., 2019, March. Inferencing underspecified natural language utterances in visual analysis. In Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 40–51).
https://medium.com/shidanqing/%E6%99%BA%E8%83%BD%E6%95%B0%E6%8D%AE%E9%97%AE%E7%AD%94-bf4b79c2446
[]
2020-07-07 05:40:10.319000+00:00
['Artificial Intelligence', 'Questions Answers', 'Data Visualization']
How Effective Is Facebook Advertising?
My Own Experience Back in the day, I worked for a few small firms selling various household products as well as running a promotional campaign for a streaming channel. It was not my choice to promote on Facebook, but I have been using Facebook for the past seven years and had taken a few courses in social media marketing. My last assignments advertising on Facebook For most of the firms, I was told that they wanted to promote nationally, which meant the United Kingdom, but for one of them I was told to promote not necessarily internationally but to research in which countries the advertising campaign would be most effective and to choose five different countries that are relatively big in population. This was a part-time job, but the amount of qualitative research I had to do to ensure that the fund for the advertising campaign was not going to go down the drain and was used efficiently made me more of a full-time worker. The research required for any type of social media advertising campaign can be intensive if you want to gain the most from it. The types of products that I had to advertise were a series of cleaning products and different household gadgets. All of the posts were up to a business-standard quality, making sure that the products were advertised as well as possible. Most of the banners and posters were done by me and another friend using Adobe Photoshop. The performance of the posts varied a great deal. Most of the time I ended up getting the views that I had paid for. Rarely, that number would double, and the reason for that can depend on many different factors, but from my analysis, those posts really caught on with the population to which they were advertised. Analyzing the performance of the advertising campaign In conclusion, the performance of the marketing and advertising campaigns did not go that well based on the amount spent. However, I was aware of and expecting such results, knowing the products I was trying to advertise would not end up being a hit on social media. At the same time, the resources I had at hand did not allow me to do extensive research, which could also have been a major cause of the not-so-good results. There is a lot more complexity to social media advertising than people would think. Yes, if you have the financial muscle to just throw money left and right at different types of advertising then you are automatically throwing efficiency out the window. Those examples were startup businesses that were trying their best with whatever resources they had and thought that any product would have success on social media. However, as mentioned before, there are many more factors to take into account. As a big picture, advertising on its own for most small businesses is like a betting game, trying different things to see what type of advertisement is most efficient. Looking at this from a marketing perspective From my own experience as a business adviser as well as a marketer, I will tell you that research is just so vital I am not able to stress it enough in words. Most people fail at research as they either do not have the patience to go through extensive research or do not know how to do it. Marketing researchers can end up being expensive for a startup or small business. To this day I am still quite dependent on Facebook, though I can say it is slowly dying, at least in the United States and Western Europe.
It is very difficult to verify this with statistics, and that is because of the huge number of fake accounts present on social media platforms, especially Facebook. This can also be seen as another reason not to choose Facebook as an advertising platform. With a closer look from the marketing perspective, we can say that it very much depends on the number of competitors you are going up against. If the number is big, then you will automatically need to do much more advertising than most of them, which is very expensive. If they decide to promote on the same platform as you, then you either have to promote twice as much so as to overtake them or sell to a different public that has not been exposed to the competitors yet. So it comes down to those types of factors. If, based on your research, you think your business can be promoted well through the Facebook platform, then go for it; however, do keep in mind other factors that may affect your type of business. Therefore it may work for some businesses and may not for others.
https://medium.com/better-marketing/how-effective-is-facebook-advertising-1158292b7d37
['Andrei Tapalaga']
2019-11-15 21:49:30.460000+00:00
['Marketing', 'Business', 'Research', 'Social Media', 'Social Media Marketing']
How To Become A One-Drink Wonder
By Lauren Bravo PHOTOGRAPHED BY ANNA JAY. So you managed to take a little time off drinking! Congrats! Strike up the band! Lie on the floor and let someone pour Malbec into your mouth through a funnel! But then what? Once the first hangover has cleared, what’s going to be the legacy of your abstinence? You may not want to go completely teetotal, but you also know you don’t want to slide straight back into your old habits either. Because what’s the point of surviving days of dusty sobriety if you aren’t going to make long-term change? You’d like to get a handle on your drinking before everybody else quietly gives up and you’re the only boozer under 40 left. On the one hand, life is short. But on the other… same. It’s quite a conundrum. The answer? My friend, you need to become a One-Drink Wonder. An ODW, as the cool Bumble bios will read one day. Not to sound too much like I’m recruiting for a cult, but I’ve been an ODW for years now and I’m here to tell you it’s possible — to go out, even out-out, have one drink and stop. It’s hard, but you can do it. It takes practice, sure; but then so did liking the taste of tequila. “If you are trying to drink less overall, then alcohol-free days are actually easier than the days when you just have one or two drinks — for the simple reason that self-control and decision-making skills often go out of the window after a couple of drinks,” says Rosamund Dean, author of Mindful Drinking: How Cutting Down Can Change Your Life. She’s right of course; once you’re in the pub or at the party, it can feel as though resistance is futile. Lead a horse to water, and you might as well make it a double. For those of us who live on a feast-or-famine seesaw, always either on the wagon or crashing dramatically off it, a balanced middle ground is the holy grail. Having one drink might not sound thrilling, but it gives you options between ‘on for a rager’ and ‘home on the sofa’. You don’t have to sacrifice your Saturday morning for the sake of Friday night. You can join in the fun and remember it afterwards. Trust me, I’m a lightweight. In the interests of transparency I should say that I’m a One-Drink Wonder by accident, not through iron-willed self-restraint. I became one at the age of 22, when horrible hangovers started to massively eclipse my enjoyment, and it’s taken until my 30s for my feeble alcohol tolerance to be seen as a special skill rather than a massive deficiency. But now, finally, people want to know the secret. And I’ve learned a few things in all my years of “Honestly, I’m fine with this lukewarm water!” and “No you were really funny last night, promise!” that I’d like to share, backed up with sound advice from some experts. Here are 10 steps to help you become a One-Drink Wonder. I only hope this goes some way to compensating for all the rounds I haven’t bought. This advice is designed for people looking to cut down their alcohol intake, not those concerned they have a serious alcohol problem. If you are worried about your drinking, please contact Alcohol Change UK or Drinkline on 0300 123 1110. 1. Water, water everywhere This is the most annoying advice, so we’ll deal with it first. Water. You know this. For god’s sake, water. More specifically, drink water before anything else. We all know the old adage about alternating booze with soft drinks — but realistically this just means you end up sloshing around with a liquid belly, tripping balls on sugar and caffeine. 
Instead, I find it’s more helpful to co-opt the Instagram coffee mantra: “But first, water.” When the waiter comes round for the drinks orders, have a water before you reach for the wine list. When you finally choose a picnic spot on a blistering hot bank holiday, drain a bottle of water before you open that cold, cold beer. Always deal with your thirst first. It rhymes so it must be true. 2. Patience, child “For those who drink impulsively to soothe social anxiety, boredom or stress, I suggest creating a window of time between wanting to drink and actually drinking,” says Shahroo Izadi, behavioural change specialist and author of The Kindness Method: Changing Habits for Good. She recommends telling yourself that you can drink whatever you want, but in an hour’s time. “Not only does this reinforce that thoughts are alerts, not commands and that ultimately we are in control of our actions, but it buys you an hour when you’d otherwise be drinking.” 3. Get your plan in place “It may sound boring but the better you plan for the obstacles that come your way, the more likely you are to succeed,” says Laura Willoughby MBE, cofounder of mindful drinking movement Club Soda. “It helps to use a tool like ‘WOOP’ — wish, outcome, obstacle, plan — before you hit the bar. Knowing why you are sticking at one or two drinks (your wish, and outcome) will help keep you motivated, and planning for the point where people encourage you to stay for one more (the obstacle) will mean you know what you will do instead with confidence.” So have your official speech prepared in advance — whether it’s “I have to get up early” or “I recently discovered I have a rare condition where a second beer could make my head explode.” 4. Go hard, then go home My breakthrough moment as a One-Drink Wonder was learning to like the strongest, bitterest thing on the menu. If you’re naturally sweet-toothed this can take practice, much like training yourself to drink black coffee or liquidised kale — but while it might seem counterintuitive when you’re trying to cut down, hard liquor is this ODW’s secret weapon. For one thing, a punchy drink commands respect from a certain breed of dickhead; the kind who always refuses to order you a shandy, believing it to be charming banter. In an ideal world, the dickheads wouldn’t bother us. But in this world, sometimes it’s helpful to keep them at bay. For another thing, stronger drinks should last longer. It’s so much easier to slowly sip a negroni or a nice single malt over the course of an hour than it is to drain a G&T and not immediately want another one. It feels grown-up, wincing slightly at every swig like a 1940s detective. And finally, when you’ve paid a tenner for your drink, you should want to make every drop count. Woe betide the barman who clears my glass when there’s still a 50p dribble in the bottom. 5. Quality over quantity A swish drink can also help you feel less deprived. “If I’m going out and I want to make sure I only have one or two drinks, I choose the nicest (most spenny) drink I can,” says Rosamund. “So I’ll have a glass of champagne, rather than prosecco. Or I’ll have the nicest red wine on the menu. Or maybe a martini, which I will sip and savour.” After all, cheap drinks are a false economy once you factor in the Uber home. And the Domino’s. 6. Eating isn’t cheating This isn’t an ironclad rule, because ironclad rules rarely do anybody any good. But as a mere suggestion: don’t drink through dinner. 
Unless you’re in a tasting-menu-with-wine-pairings situation, truth is you might appreciate both the food and the booze more if you keep them separate. The chef didn’t toil over the flavour profile of each dish only to have you souse your tastebuds in house white. So have a drink before dinner, enjoy it, then say, “I’ll have another one afterwards.” It’s amazing how often ‘afterwards’ rolls around and you’re too full and happy to bother. 7. Not sharing is caring We’re lone wolves, the One-Drink Wonders. We march to the beat of our own bottle opener. And because there’s nothing more likely to make you drink more than you wanted to drink than paying more than you wanted to pay, ODWs don’t do sharing, or rounds. This is easier said than done, of course. We’ve all felt pressured into ordering a bottle in a fit of jolly camaraderie. But while being the “actually I’ll just have a glass” person can make you feel like a killjoy, it can also break the spell and allow everyone else to be honest about what they really want, too. Rosamund says: “I’ll tell my friends at the beginning of the night: ‘I’m only having one tonight, so think I’ll make it a martini.’ Often, you’ll be surprised how keen they are to do the same.” 8. Hack yourself sober Forgive me if this sounds a bit Silicon Valley, but: rather than thinking of cutting down your drinking as an exercise in self-denial, try to think of it as a positive modification. You’re applying booze carefully and cleverly, to get the best results. “See it as an opportunity to actually ‘optimise’ your drinking experience as opposed to controlling or punishing yourself,” explains Shahroo. “Some of my clients who have experience of taking drugs (such as ecstasy) in the past actually find it easier to learn to moderate their drinking by remembering that alcohol is a drug and they want to be enjoying the positive effects for as long as possible. This tends to mean they drink more slowly and in smaller amounts, to maintain the buzz that they most enjoy. They simply drink until they are feeling the benefits and either slow right down or stop entirely when they’ve reached that sweet spot.” 9. Practice makes perfect “Most of us want to moderate because we can’t imagine not drinking at a club, with a meal or at a friend’s birthday. But until you have done some of those occasions without a drink in hand you will never know what it feels like and if it is possible,” says Laura. “Set yourself some missions to do some big nights out without drinking, and develop the skills to really pick and choose.” Once you’ve danced sober, dated sober, even survived a hen weekend sober (dare you), settling for just the one drink will feel like a better compromise. And as Rosamund points out, “If you are drinking less overall, you’ll find that all it takes is one or two drinks to get you nicely buzzy.” 10. Tell yourself you can still get trashed from time to time If the idea of never being more than “nicely buzzy” again feels unspeakably dull, earmark the occasional future sesh in your diary, as a treat. On those nights you might say yes to a second, and a third, and a sixth, but only if and when you really want to. After all, as Oscar Wilde once wrote on a fridge magnet, everything in moderation. Including moderation.
https://medium.com/refinery29/how-to-become-a-one-drink-wonder-89a17a5c8599
[]
2020-12-03 16:42:04.302000+00:00
['Health', 'Alcohol', 'Healthy Lifestyle', 'Life', 'Drinking']
The popularity of Django
Django is an open-source framework for Python, unlike standalone server solutions such as Spring, REST API frameworks and ASP.NET, to name a few. Like other MVC frameworks, Django uses the model-template-view architectural pattern. As the charter states: Built by experienced developers, it takes care of much of the hassle of Web development, so you can focus on writing your app without needing to reinvent the wheel. It’s free and open source. Model: A model in Django defines the data layer: the structure of the stored data and the interface for querying and manipulating it. Views: Views contain the logic that is executed when a particular URL is requested. Templates: Templates handle the rendering of pages, turning the data returned by a view into the response for the requested URL. The MVT architecture of Django is as depicted:- Few useful commands:- • django-admin startproject : Creates the project directory and inbuilt files of that project. • python manage.py startapp : Creates an app under the current project. • python manage.py collectstatic: Collects the static files from all apps and gathers them under one static directory. The project directory of Django is as shown: manage.py is the command-line utility that handles databases, connections and templates for the current server-side solution. SQLite is the SQL database that serves as the default database for the given project. The Django framework is not yet used globally in production servers, but its success is enviable. Do you know what powers YouTube and the Netflix platform so that their performance is so hyped? Yes, it’s Django. The current trends and the relative position of Django are as shown: Trends in the popularity of programming languages So why don’t you see such wide use of Django in production servers? Well, there are two primary reasons: • Django is relatively new, having launched in 2005 and not being fully deployed until 2007. • Programmers are reluctant to change to Django because of its relatively limited reach. So what are the features of Django that are attracting developers like magnets to this framework? 1. Django integrates SQL support into its container. 2. Django is a standalone server solution that can be hosted on a remote or local server. 3. Firewall and reach, and the configuration of ports and sockets, can be efficiently managed in Django. 4. Combined with the flexibility of Python, Django enables robust deployments. 5. Flexibility in the creation of models and handlers. 6. Easily create apps, integrate several of them, and manage them under one admin panel. 7. Django provides an inbuilt admin panel, which saves a lot of code and which, of course, can be customized. That’s it. So when are you shifting to Django?
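To make the model-template-view split concrete, here is a minimal, hypothetical sketch (the Article model, template name and URL are made up for illustration, not taken from any real project):

# models.py: the Model layer, one class maps to one database table
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published = models.DateTimeField(auto_now_add=True)

# views.py: the View layer, fetch data and hand it to a template
from django.shortcuts import render
from .models import Article

def article_list(request):
    articles = Article.objects.order_by('-published')
    return render(request, 'articles/list.html', {'articles': articles})

# urls.py: maps a URL pattern to the view above
from django.urls import path
from . import views

urlpatterns = [
    path('articles/', views.article_list, name='article_list'),
]

Running python manage.py makemigrations followed by python manage.py migrate would then create the table, and requests to articles/ would be rendered through the articles/list.html template.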
https://medium.com/analytics-vidhya/the-popularity-of-django-9b8b5338dad7
['Projjal Gop']
2020-04-25 16:41:48.703000+00:00
['Programming Languages', 'Popularity', 'Python', 'Trends', 'Django']
Quest for a Portable Amp and Pedalboard
There’s a few additions such as extra footswitches for the Boss GT-1, the Digitech Trio+, a mixer and a PolyTune 2 mini. But mainly, there were a few issues with the previous setup that needed to be addressed while keeping everything battery powered. Ground Noise The first issue was that there was tons of resting noise in the previous system. http://www.pedalsnake.com/blog/category/the-noise-manual was a pretty neat resource for debugging the noise source. There was no AC anywhere in the system since it was all on a USB battery pack but there’s still likely a ground-loop-like hum causing issues. All these pedals filter their own output signals but likely don’t filter noise they generate back into the DC. This causes voltage fluctuations across other devices that are all powered by the same battery and draws or pushes against their output signals. All the 9v pedals are powered by a daisy chain from a 5v->9v step-up converter which apparently is a bad decision regardless of the power source. The solution was luckily fairly simple. I swapped out the daisy chain with: A USB->9v power supply with 4 parallel regulated outputs for $24. The difference was night and day. With the isolated outputs, high gain amp settings are completely silent at rest. I kept the adapter underneath the pedalboard to keep everything clean and out of sight. Mixer The previous setup also had 2 other issues: There was no Bluetooth which was super annoying and inconvenient for playing along with YouTube etc. Adding one more Bluetooth receiver adds just one more device (and friction point) to connect and to turn on and off each time I want to play. The BeatBuddy Mini doesn’t list its detailed specs like output impedance but the full sized BeatBuddy is 26Ω which is a weird number that ends up strangely amplified with a poor SNR when plugging directly into the GT-1’s aux in. The DigiTech Trio+ will end up needing a mixer but we’ll get into it later. The challenge is that most mixers have their own AC adapter plug that neither uses the 5.5mm/2.1mm pedal plug nor runs at 12v or 5v. Mixers also tend to waste one of the channels being optimized for microphone levels and impedance. Luckily, I found an uncommon mixer but a huge gem. The Pyle Bluetooth 3-Channel Mixer: Is fully powered by USB (which also acts as an audio interface both in and out) Has an adaptive channel 1 combo XLR/TS 1/4" that changes impedance Has Bluetooth built-in!!! And all that for $53 which is about the same price as the Behringer Xenyx 302USB but is way better built with a full metal construction and has Bluetooth. Looping Since the objective is still playing for fun at home, looping makes the sound way fuller and more fun. The GT-1 has pretty much all types of effect pedals from octaver to wah to looping included. But using the GT-1’s looper turned out to be less practical. Since, as mentioned in the previous article, the GT-1 is missing a drum machine that makes it way more fun to play by myself. And without MIDI sync on either the BeatBuddy Mini or the GT-1, timing the external drum machine and the looper is basically impossible. Even if you played a loop and pressed the start/stop almost absolutely perfectly with the drum loop from the BeatBuddy, a 1ms deviation will accumulate to a point where everything’s a mess after a couple of measures. So I turned to the DigiTech Trio+ which had the drums and looper together (and self synchronized).
It also has the bonus of having an interpreted bass player and is also a fair bit easier to control than the GT-1’s looper, which needs to be switched into looping mode first. One challenge with it though is that it’s so geared towards a conventional amp with a separate pedal chain that it’s hard to figure out how to even wire the thing. The DigiTech Trio+ has 5 I/O connections. Guitar In needs a clean input signal so that’s easy to figure out. The wireless Line 6 G10 receiver goes through the PolyTune then into it directly. Mixer outputs the drums and bass sound so that one’s easy too. Good thing we have a mixer already :) There’s an option to use just the Amp Out or the Mixer connection but both are very suboptimal. Amp Out outputs both what you’re playing live and the recorded guitar loop after going through the Fx Send and Fx Return loop, which is pretty awkward, instead of sending the recorded loop to Mixer. Fx Send is meant to go to your pedals before coming back into the loop and out to Amp Out. Fx Return is the return signal from your pedals. All of this works well if you have a single amp and make the assumption that the effect pedals change while playing but the amp is mostly static. But the amp in this case is the same unit as the effect pedals and we wouldn’t want to put the GT-1 after the Amp Out because if you change your GT-1 settings after looping, the looped recording gets altered. Luckily, the looped recording saves everything that went through the Fx Send and Return loop as is so we can wire this as: G10 wireless receiver -> PolyTune -> DigiTech Trio+ Guitar In -> Fx Send -> Boss GT-1 -> Fx Return. Then send both the Trio+’s Amp Out and Mixer to 2 different channels on the Pyle Mixer. This deviates from DigiTech’s canonical usage of letting the guitar’s pre-amp and cabinet simulation be inside the Fx loop instead of being after the Amp Out connection but since it records Fx Return and sends it out to Amp Out verbatim, it doesn’t really matter. Summary In total, this portable setup is still powered by a single USB powerbank. The whole package can be picked up and played anywhere. It consumes 1A from the G10 wireless system and 500mA from the mixer at 5v. It consumes 100mA from the PolyTune, 200mA from Boss GT-1 (😱 so surprisingly low), 800mA from DigiTech Trio+ (I don’t know what it does with so much power) and an unknown but <500mA from the BeatBuddy Mini at 9v. It comes to ~4A at 5v which is about 1 hour from a 5000mAh battery. I’ll probably want something like this 185Wh portable power block for longer sessions, which would get me to ~9h per charge.
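As a quick sanity check on those numbers (my own back-of-the-envelope figures, treating the BeatBuddy Mini as a full 500mA and ignoring conversion losses):

loads_5v = {'G10 receiver': 1.0, 'Pyle mixer': 0.5}                        # amps at 5 V
loads_9v = {'PolyTune': 0.1, 'GT-1': 0.2, 'Trio+': 0.8, 'BeatBuddy': 0.5}  # amps at 9 V

watts = sum(a * 5 for a in loads_5v.values()) + sum(a * 9 for a in loads_9v.values())
amps_at_5v = watts / 5              # what the USB power bank actually has to supply
hours_5000mah = 5.0 / amps_at_5v    # a 5,000 mAh pack is roughly 5 Ah at 5 V
hours_185wh = 185 / watts           # the bigger 185 Wh block

print(f"{watts:.1f} W -> {amps_at_5v:.1f} A at 5 V")      # 21.9 W -> 4.4 A
print(f"~{hours_5000mah:.1f} h / ~{hours_185wh:.1f} h")   # ~1.1 h / ~8.4 h

which lines up with the ~4A, ~1 hour and ~9h-per-charge figures above.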
https://medium.com/xster-tech/quest-for-a-portable-amp-and-pedalboard-441e17362970
[]
2018-04-23 05:46:47.672000+00:00
['Guitar', 'Guitar Pedals', 'Music']
Complex Data Types in Python
In the previous blog we talked about Simple Data Types in Python. If you want to read about the basic data types in Python, have a look at the blog: https://medium.com/analytics-vidhya/data-types-in-python-c23b8178f96d Let's talk about some more data types in python: LIST A list is basically a collection of elements of the same or different data types. Let's discuss a use case of using lists. You have to store marks data for the student: english, maths, science. We can either use 3 different variables for each student:

eng = 90
maths = 99
sci = 89

If we keep on adding these types of variables for every student we end up with a large number of variables, creating highly unmanageable code. The second option is to make one variable and store all three marks in it, like:

marks_1 = [90, 99, 89]
marks_2 = [94, 88, 81]
marks_3 = [91, 90, 82]

This new data structure that we have used to store the data is called a List. It can have data of the same or different types, e.g.

student_1 = ["The Data Singh", "28", "SDE"]
marks_1 = [90, 89, 69]

List is one of the most widely used data types in Python. Some of the most used methods of list are:

marks_1.append(99)        # to add a new element at the end of a list
marks_1.extend([10, 20])  # to add all elements of another list to the existing list
marks_1.pop()             # removes and returns the last value, just like a stack; pop(index) removes the value at that index
marks_1.reverse()         # reverses the list in place
marks_1.insert(index, value)  # adds a value at a particular index

Now that we know how to create a list and add values to a list, let's see how we can access elements from a list. List Indexing Indexes in a list start with 0 in Python. In order to access elements from a list you can use the following commands:

marks_1[0]   # will access the first element of the list
marks_1[1]   # will access the second element of the list and so on…
marks_1[-1]  # will access the last element of the list
marks_1[-2]  # will access the second last element of the list and so on…

List Slicing We can also access more than 1 element in a single statement using list slicing.

marks_1[1:5]  # this will access elements at indexes 1 to 4, i.e. 1, 2, 3, 4; index 5 is not included

We can also skip the start or the ending point in this slice:

marks_1[:5]   # will give all elements up to index 4, i.e. 0, 1, 2, 3, 4
marks_1[1:]   # will give all elements from index 1 till the end

In the next post we will discuss the use case and use of the Dictionary data type. Do have a look at my channel on youtube : The Data Singh My Blogger link: Tech Scouter Happy Learning
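As a short appendix, putting the list methods and slicing above together into one runnable example (the values continue the marks_1 list from above; results are shown as comments):

marks_1 = [90, 99, 89]
marks_1.append(99)          # [90, 99, 89, 99]
marks_1.extend([10, 20])    # [90, 99, 89, 99, 10, 20]
marks_1.insert(1, 55)       # [90, 55, 99, 89, 99, 10, 20]
marks_1.pop()               # removes and returns 20 (the last element)
marks_1.pop(0)              # removes and returns 90 (the element at index 0)
marks_1.reverse()           # [10, 99, 89, 99, 55]
print(marks_1[0])           # 10 -> first element
print(marks_1[-1])          # 55 -> last element
print(marks_1[1:4])         # [99, 89, 99] -> indexes 1, 2 and 3
print(marks_1[:3])          # [10, 99, 89]
print(marks_1[2:])          # [89, 99, 55]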
https://medium.com/analytics-vidhya/complex-data-types-in-python-26a25b917f64
['Japneet Singh Chawla']
2020-10-22 13:19:13.036000+00:00
['Beginners Guide', 'Programming', 'Coding', 'Python']
Landmark Recognition and Captioning on Google Landmark Dataset v2
Introduction and Motivation: Click and Upload is a trend these days. Many times we wish to remember and relive the moment spent in that picture. The very first thing that comes to our mind is: where was it clicked! Uploading to social sites makes us search the internet to come up with good captions. What if we get a caption and a description of the image together? How efficient and easy would that be? For landmarks, it becomes quite difficult to identify objects or components: a mountain is a mountain, but Mt. Fuji is a landmarked mountain, because of its history or characteristics. When you search for a particular landmarked image 😒 Captioned Image: Swim time at the lake, the pragserwildsea or lake prags lake braies is a lake in the prags in south tyrol, italy 😎 Delving deep into history-oriented descriptions of different images on the largest dataset ever, the Google Landmark Dataset v2, and involving various text processing and deep learning techniques, here we present our project, which helps in recognizing landmarks and captioning them. Prerequisites: This read assumes familiarity with Keras, TensorFlow, NumPy, Pandas, Seaborn, classical classifiers, deep learning classifiers like Multi-layered Perceptrons, Convolutional Neural Networks and Recurrent Neural Networks, transfer learning, backpropagation, text processing, and Python syntax and data structures. Data Collection: Google Landmark Recognition v2 Dataset — the largest dataset up to date. The dataset is divided into two sets of images to evaluate two different computer vision tasks: recognition and retrieval. Here we chose the recognition set; it contains URLs of images. There are 4,132,914 images in the train set and 117,577 images in the test set. We downloaded the data and resized each image from 512*512 to 128*128, reducing 256GB of data to an 8GB train set and a 300MB test set. We then chose those image folders (total folders being 14,915) which had more than 800 images, so that we could convert them into 500 test and 300 train images per folder to reduce variance in the images and to balance the classes. Now, Train images = 1,48,000 and Test images = 62,512 are taken. As for image captioning, Google's landmark dataset doesn't have an image caption file or image:caption mapping, so it had to be created by us. We created 8,000 captions for images belonging to 10 labels chosen from 12K labels. Data Exploration 😍 Image Preprocessing: Libraries used for preprocessing are: PIL, urllib, os, multiprocessing, tqdm, sys, csv. The image data was in the form of URLs stored in a csv file. Images are downloaded from the URLs by a small script that takes the filename and output directory as arguments. Since the image size is very big and the number of images is about 5M, each image is reduced to 128*128 pixels using the resize function of the Pillow library. Image Cleaning: Altering the pixel dimensions of the image is called 'resampling.' The reverse mapping function is applied to the output pixel, so that the obtained 'resampled' pixel is mapped back to the original input pixel. To make the image more suitable for specific applications, contrast enhancement must be used. It improves the visibility and the transparency of the image and makes the original image easier for the computer to process. Noise Removal: At the time of image acquisition or during transmission, noise is produced. It degrades image quality to differing degrees.
In general, the noise in an image can be classified as: impulse noise (salt & pepper noise), Gaussian noise (amplifier noise), speckle noise (multiplicative noise), and Poisson noise (photon noise). Text Cleaning: Punctuation, numerical values, stop words, elongated words, emoticons, negations, hashtags, non-ASCII characters, contractions (handled via contraction mapping), and short words were all removed or normalized as part of text cleaning. Preprocessing Snippet 👀 Proposed Model Architecture: For landmark recognition and image captioning, we first preprocess the data in the same way as the baseline, which involved noise removal; then we extract features through various models such as CNN, VGG16, FAST and HOG. These features are then fed into our image captioning module to predict captions for the images. Working of Model Feature Extraction: The reason for selecting these features (CNN, VGG16, ResNet50) is that deep learning algorithms usually perform well on huge datasets, so extracting features from them and applying them to a smaller dataset along with traditional feature extractors (FAST, HOG, SIFT) proved more beneficial. SIFT is a fairly involved algorithm. The major advantage of these features over edge features is that they are scale invariant and robust to orientation. SIFT features of an image are extracted with the opencv-contrib-python==3.4.2.16 library through the cv2.xfeatures2d module. After extracting keypoints and descriptors for every image of our train data, it was observed that SIFT gives 128-dimensional descriptors for each image. In total, a (580787, 128) feature matrix is extracted. SIFT Extracted Feature Visualization 😉
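A minimal sketch of that SIFT step (my own illustration, assuming opencv-contrib-python==3.4.2.16; train_image_paths is a hypothetical list of image file paths, not a variable from the project):

import cv2
import numpy as np

sift = cv2.xfeatures2d.SIFT_create()

def sift_descriptors(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    keypoints, descriptors = sift.detectAndCompute(img, None)
    # each descriptor is 128-dimensional; images with no keypoints return None
    return descriptors if descriptors is not None else np.empty((0, 128))

all_descriptors = np.vstack([sift_descriptors(p) for p in train_image_paths])
print(all_descriptors.shape)   # e.g. (580787, 128) as reported above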
An embedding layer with vocab_size = 1896 and dim_embedding = 200 is added; this encoder-decoder setup with learned embeddings in some cases outperforms classical statistical machine-translation methods.

Vectorization for words 👩‍💻

LSTM networks: LSTMs are well suited to classifying, processing and making predictions based on time-series data; here the network uses a 256-unit dense layer and a cross-entropy loss function, and is trained recurrently on top of predefined weights from pretrained models. The LSTM function above can be described by the following equations, where LSTM(x_t) returns p_{t+1} and the tuple (m_t, c_t) is passed as the current hidden state to the next step. 😎

Results: The complete Google Landmark subset, split into 148,000 train images and 62,512 test images, was run on the provided server, and the following accuracy was obtained through feature extraction and classification: on this dataset, accuracy ranges from 65–70% for classical classifiers. Classification results were also obtained for the 8K-image dataset on which the state of the art is implemented. Various classifiers were applied on the extracted features: LGBM and the stacked classifier give the highest accuracy of 89–90% among all the classifiers and also outperform the state of the art (71%). Other classifiers that outperform it are logistic regression, KNN and random forest. On the other hand, the decision tree performs below the state of the art, and SIFT features with the other classifiers give a maximum of 47–56% accuracy.

Image captioning on a landmark dataset is a difficult task, because the components in an image (e.g. boy/girl/cat/dog objects) cannot be classified or differentiated when it comes to landmarks. A lake is a lake and a mountain is a mountain, yet each holds its own importance. With that in mind, our featured model gives a 62.11% BLEU score (an approximate bilingual-translation metric) with correct label predictions and their descriptions. Though the captions seem vague, they can be improved with more training on captioned data.

Validation and Training Loss Curve for image recognition and captioning together

Baseline ref [1] & [2] comparison with our model 😎

Performance on Classical Classifiers 😎

Stacked Classifier: Using all the classical classifiers at level 0 and logistic regression at level 1 of the stacking classifier, we obtained the best result.

Performance Comparison Plot

Epilogue: A whole new perspective on image recognition comes from bringing back past moments and memories, rejoicing in the place and reliving the moment, or flaunting them over social media platforms. This is achieved by image captioning, and it can also help you decide your Instagram caption soon. In future developments we plan to do this on a much larger set, and not just for landmarks but for all varieties of images. Some predictions are as follows:

Predictions made on test images 😍

References: 2. https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=landmark%20recognition%20in%20cnn%20

Authors: Ayushi Sinha (M.Tech CSE-AI, IIITD) [LinkedIn]: Feature Extraction (CNN, VGG16, RESNET50, HOG) and Combination, Ensembler Classifier, Feature-Image-Caption Extension, Image Captioning on extracted features using LSTM, Data Acquisition for Captioned Model, Text Preprocessing, Blog writing. Ashi Sahu (M.Tech CSE-DE, IIITD) [LinkedIn]: Image Preprocessing, Feature Extraction and Classification (SIFT), Blog GIF, Data Acquisition for Captioned Model, Blog Writing.
Ravi Rathee (M.Tech CSE-IS, IIITD) [LinkedIn]: Baseline Implementation on CNN, Data Preparation and Exploration, Feature Extraction (FAST), Data Acquisition for Captioned Model, Image Captioning on Inception V3.
https://ayushi20023.medium.com/landmark-recognition-and-captioning-on-google-landmark-dataset-v2-b62f3e2c82bd
['Ayushi Sinha']
2020-12-20 19:00:05.872000+00:00
['Image Captioning', 'Machinelearning2020', 'Google', 'Image Recognition', 'Landmarkdataset']
What Chopping Your Vegetables Has To Do With Fighting Cancer
A Chopping Conundrum

When any cruciferous vegetable is chopped, crushed, shredded, or chewed, it releases both myrosinase and glucosinolate compounds, which then combine to make sulforaphane. This chemical reaction is exactly what produces the mild burning sensation we associate with some foods of the mustard family. It’s what gives raw broccoli its bite, and what makes spicy mustard, well…spicy (4). However, in order for the myrosinase enzyme to work its magic, it first has to be given the chance. Myrosinase can take up to 45 minutes, once first released (by way of cutting or chopping your vegetables, remember?), to successfully convert those glucosinolates into disease-fighting sulforaphane. The other issue here is that myrosinase is highly heat sensitive. One of the fastest ways to prevent any and all sulforaphane formation is to immediately transfer your chopped cruciferous vegetables into a hot oven or a pot of boiling water — a practice I think it’s fair to assume is pretty routine for most of us. Interestingly, a solution to the issue lies with the fact that sulforaphane itself is not heat sensitive. What this means is that as long as you’re happy to plan ahead and cut your cruciferous vegetables 45 minutes before you cook them, they will have more than enough time to work their enzymatic magic, thereby retaining the same cancer-fighting efficacy as when consumed raw. In this sense, you can either stick exclusively to eating raw cruciferous vegetables — and rest assured that myrosinase will continue to produce sulforaphane even after you’ve chewed and swallowed your veg — or you can chop your cauliflower, cabbage, and kale 45 minutes in advance. All in all, it seems pretty straightforward. The benefits of regularly eating sulforaphane-rich foods drastically outweigh the inconvenience of leaving your chopped veggies on the countertop for the better part of an hour. But what if there was another way altogether?
https://medium.com/beingwell/what-chopping-your-vegetables-has-to-do-with-fighting-cancer-22cab5712b8b
['Alexandra Walker-Jones']
2020-12-28 20:20:23.441000+00:00
['Health', 'Lifestyle', 'Healthcare', 'Nutrition', 'Food']
Flutter CodePen challenge
From the outset, one of our goals with Flutter was to enable developers to create beautiful user experiences. And every day the worldwide community amazes us with countless apps and experiments that showcase Flutter’s creative potential. Today, we’re pleased to partner with CodePen on a fun series of new challenges to let you show off your Flutter skills. CodePen is one of the top destinations on the internet for front-end developers to experiment, share, and iterate. Ever since we announced our partnership with CodePen in April, we’ve been amazed by the Flutter Pens you’ve created. And so we want to give you the opportunity to show off your amazing Flutter designs to CodePen’s community of over a million designers and coders. Challenge details What is it? CodePen Challenges are fun opportunities to level up your skills by building things. Previous challenges have covered web development themes like JavaScript, Images, and Color Palettes. The Flutter CodePen challenge is a month-long challenge in July to build user experiences with CodePen’s new Flutter editor. How it works: The Flutter CodePen challenge consists of 4 weekly challenge prompts, each on a different Flutter topic. Each prompt will be released at the beginning of the week, and the prompts will progress from basic to more advanced, building on each other. To help you, we’ll share some ideas on what to build, recommend resources, and share example pens. Once you complete your Pen, we encourage you to share it with the CodePen community using the tag FlutterPen and the broader developer community on Twitter and LinkedIn with #FlutterPen. The CodePen community will select the best Pens, which will be showcased on CodePen’s homepage. Date: The first weekly challenge will be released Monday, July 6th, 2020. Sneak peek: For the first week, we’ll start with one of the most foundational building blocks in Flutter: a Stack widget. Stack lets you layer widgets on top of each other in paint order. You can use it in simple scenarios such as overlaying texts on top of gradients or building some really cool custom designs. Here’s an embedded CodePen that uses the Stack widget to create a three-panel view. Click Run Pen to see the app in action. Then click Flutter to see the code that implements the app. Take the challenge! Get creative and check out the challenge page, starting July 6th! In the meantime, why not use the #FlutterPen hashtag to show off the Flutter Pens you’ve already created? We’re eager to see what you’ve built so far!
https://medium.com/flutter/flutter-codepen-challenge-689beedf6ce6
['Anjan Narain']
2020-07-09 18:33:48.125000+00:00
['Contests', 'Codepen', 'Google', 'Flutter App Development', 'Designer']
A Letter from a Deceased Trump Supporter
Where do I begin? How do I begin? Yes, I was that guy. The “loud mouth” in the Walmart letting everyone know my thoughts on this Covid-thing; just a Chinese-Democrat hoax — like climate change. I was that guy. Wearing my MAGA hat and going out of my way to shake hands and get in real close to everyone — to show how brave I was. Proving my freedom I wanted , my support was for our president — the man preserving our freedoms. I was that guy who accosted the lady who dared to wipe down the keypad on the ATM after I used it — I overexaggerated a cough in her direction, and while it was a windy day and most likely no droplets hit her, it did scare the hell out of her. I laughed as she ran back to her car. The terrified look on her kid’s faces made me laugh then. I would cringe now if I could but I can’t. I am dead, just a wisp of energy flowing in a direction-less pool of warmth — feels like nothing, it just is. I began to feel odd a few days after the Trump rally. Back home, I went to my usual local diner, unmasked of course, and told everyone about the life-changing experience the rally had been. I had seen our “greatest” president. The rally was packed with my fellow brave and free Americans, patriots. When he looked directly at me and gave a thumbs up, I squealed like a teenager. If I die tomorrow, I’ll die a happy man, I announced to all — looking in the direction of the two women sitting socially distanced up near the front window. They were both masked, disrespecting our president with their paranoia, I believed. The women were not enjoying my loud narrative of the rally. I really wanted to go over and do another fake coughing fit but that had become an offense they could arrest me for now. I didn’t die the next day —but fifteen days later. It was there, in the diner, sitting next to two World War II vets in their late nineties, both masked, that I first felt something might be wrong. Thinking I was thirsty, I kept drinking water but the back of my throat was slowly dissolving into sandpaper. A heaviness pulled at my eyes creating a perpetual mist in my vision. You people are really a bunch of whiners, Ed, one of the vets told me. A decorated veteran, he stormed the beaches on D-Day and ended the war in Austria having fought in the every significant battle. Trump was the enemy in his book. He was the only person I would tolerate expressing that opinion to me; well he and the guy next to him. If others dared admit such emotions about Trump, whatever happened to them after was not my fault — I used to say I would die for Trump. I recall this conversation with Charlie quite well because it was one of the last ones I had before my lungs filled and I drowned. You being racist, who are you people? I was joking. We were both white. Your generation. Whining about these God-damned masks, he elbowed his buddy next to him, Ted. Ted had also been at D-Day. There are only about 200 vets left who landed on those Norman beaches and two were eating breakfast with me. They’d cry about the water being too cold, Charlie, Ted piped in. They both laughed. No understanding of sacrifice for the good of the nation, just worshiping that orange, draft-dodging clown and watching that Fox, Ed continued. Me, me, me, he added, whiners the whole lot of them. Vets or not, those words hit home too closely and I lost my cool. What I did next, however, was stupid. Listen you old, son’s of bitches. You don’t know what year it is or whether you peeing yourselves or shitting yourselves. 
Trump is saving the country your generation almost killed with that damn Johnson and his war on poverty. My throat was itching so that I took Charlie’s water and took a big gulp. The icy water cooled the rising heat I could feel on my neck, easing the dull ache in my head — the itch was scratched. I got up and left out the back door without touching my breakfast. It then hit me that Charlie shouldn’t drink that water — I went back in but he was already sipping it. It’s just a hoax anyway, he’ll be alright, I assured myself. By that night, I was in the hospital and within two days on a ventilator. No longer able to speak, I became a prisoner of my thoughts. Alone in the room, as the guy next to me had died shortly after I was brought in — Covid-19, they said, but who knows — the nurse put on Fox News at my request — this is all I watched lately, just the news. A patriot needs to be informed, right? They showed the rally I was at. The president was going on about how the pandemic was over. I remember him telling us what a great job he had done. I shouted in agreement at the rally. Now, I am beginning to question whether some things could have been done differently? Nevertheless, he is a very smart man and I am inclined to believe him. When I first arrived to the hospital, it was pretty shocking — even a little scary — to see the precautions being taken by the medical staff. The amount of people lying around on gurneys, checking in and looking as bad as I felt was totally unexpected — for a second it crossed my mind they might be actors but then I thought, there is no way they could pull off such a massive fraud. While I kept telling myself it was just a bad cold, probably even the flu, seeing those barely-conscious people was making me think I could catch Covid for real from them. You have tested positive for coronavirus, sir. Your lungs are filling and there is a good chance we will have to intubate you, the very tired-looking doctor told me. This was the first moment I recall that something wasn’t jibing between the reality of this thing and the White House/Fox narrative being sold to us. I remember as the doc was telling me in a very weary voice about how this was all likely going to play out, he suddenly just stopped and looked up at the TV. Kayleigh McAnany was talking about how Covid cases were dropping and the blues states were politicizing the faux pandemic to make Trump look bad. The doctor looked at me, grabbed the remote and muted the television, you still buying their bullshit, Mr. Jackson? I wanted to hit him because I knew he was one of them — he might try to kill me now in order to make President Trump look bad. Without any political rhetoric, Mr. Jackson, you are in very bad shape but we will do our best to get you out of here alive. I waited to see if he was joking, perhaps exaggerating. Be strong. science-willing, we will have you back home and watching that Fox News stuff in a few weeks. He was clearly taking advantage of the fact that I couldn’t talk back. Their science didn’t know a thing — one day masks don’t help, next day they do. It was all just a ploy to control us, Tucker thought. One more question, Mr. Jackson. Just nod…did you ever wear a mask? Over the next three days, my wife was hospitalized and also intubated — she survived. Charlie from the diner died from a Covid-induced stroke five days after I sat next to him. Ted was admitted a day after Charlie died— I had killed one of our finest and hospitalized another one. 
My condition worsened and I could feel the energy waning. It is an odd thing, the process of dying, you kind of know it is happening but it takes a strong person to survive. There is an ebb and flow of conversation inside the trapped mind, posing the question over and over: Do you really want to go on fighting? Maybe we should just let go? So used to being supplied with truths and answers, I would focus in on the Fox News, praying a sign would come from our president. All he kept saying was that the pandemic was over and that the rising cases were fake news — to make him look bad he said. To be honest, I was beginning to lose interest in his image. What was really messing with my head was the surprised looks the doctors had that I was still here, in this life. Seems there was nothing too fake about this, pandemic, after all. I still can’t really say that I had coronavirus but whatever it was, it was like nothing ever before. Lying there, beginning to realize that I may never see my kids again, my wife, taste a beer or put up the Christmas tree, the questions started ripping through my head fast and furious — could you have protected yourself, Ed? Was wearing a mask such a big inconvenience or did it just make me feel good being one of the so-called miserable Trump supporters? Going to Trump rallies made me feel like a bad boy — the way I did back in high school. One of the last things I saw before I passed on, suffocating in my own fluids — it really was a horrible sensation and nothing I would ever wish on anyone, not even on a Trump-hater — was how the president was already being discharged. Seems he too had gotten sick and was out in three days. Why did he get drugs that I didn’t? Why was his life more valuable than mine? After all, he was the one who told us to show our support by not wearing masks. The noise I heard before dying sounded like a thunderstorm, like wind and water beating against the window — but as I write this I recall that I was in a windowless room. That sound might of have been my final struggle to breath. Before, I let go, I saw my wife and three of my four kids waving from behind the glass, in the hallway. Seems that they had all been infected from me— as I cried, I wondered where was my middle daughter? She suffered from asthma. As I write this to you, I can say my daughter is out here with me in this energy field somewhere. I stole her life. She was only 14. The president and his boy survived. Looking back on it all, I can say it wasn’t worth it, not in the least. It’s funny the things you miss when you are dead. I miss hearing the popping noises the refrigerator would make at night. I am totally going to miss that smell of freshly cut grass in the summer. To think, I gave it all up for Donald Trump. I must have been out of my mind.
https://medium.com/politically-speaking/a-letter-from-a-deceased-trump-supporter-cfd9331dd3dc
['Brian Kean']
2020-12-14 14:10:44.728000+00:00
['Covid 19', 'Society', 'Perspective', 'Politics', 'Trump Administration']
The Intuition Behind the Apriori Algorithm
Analyzing shopping trends is a pretty big deal in data science, and market-basket analysis is one way in which that’s done. Techniques in this sub-field seek to understand how buying certain items influences the purchase of other items. This allows retailers to increase revenue by up-selling or cross-selling their existing customers. Understanding the Apriori Algorithm is foundational to building your understanding of many techniques for market-basket analysis. It’s used to find groups of items that occur together frequently in a shopping dataset. This is usually the first step to finding new ways in which to promote merchandise. So… What does shopping data look like? Imagine we have a file where each line represents a customer’s shopping cart at checkout time. Let’s call this a basket. Each line above represents a basket Each basket is made up of items. Our objective is to find sets of items that occur frequently together in the dataset. We’ll call these frequent itemsets. Our objective is to find sets of items that occur frequently together in the dataset. We’ll set our own definition of what frequent means. This definition will likely change based on the number of baskets in the dataset. Typically, we’ll be interested in frequent itemsets of a particular size. Today, let’s assume that we’re looking for frequent triples (i.e. itemsets of size 3). As we continue exploring below, let’s use this example dataset to enhance the discussion. It takes on the format described above. Some quick stats about the example dataset: It contains 100,000 baskets There are 5,000 items that are sold at this particular supermarket Each item is a fake ISBN number (you know, like for books 📚) Let’s also say that appearing more than 1,000 times makes an itemset frequent. (Note: The count of an itemset is often called its support.) The Naïve Approach The first thing that comes to mind is to scan through the dataset, count up the occurrences of all the triples, then filter out the ones that aren’t frequent. The naïve approach is appealing for its simplicity. However, we end up counting a lot of triples. In the example dataset, there are over 7 million total triples, but only 168 of them are frequent. There are over 7 million total triples, but only 168 of them are frequent. Lots of triples, but not a lot of frequent triples. This is a problem for two reasons: Keeping track of the counts for those millions of triples takes up a lot of space. Building and counting all of those triples takes a lot of time. If only there were a way to avoid building and counting so many triples… A Key Intuition There is a way to avoid doing that! And it relies on a key piece of information. Don’t worry, we’ll break it down together, but here it is: The key intuition is that all subsets of frequent itemsets are also frequent itemsets. To add some clarity, consider these four baskets: Eggs, flour, and milk seem to be a pretty popular combo We can clearly see that the set {eggs, flour, milk} occurs 3 times. And at risk of sounding overly simplistic: each time we see the group {eggs, flour, milk}, we are seeing the individual items {eggs}, {flour}, and {milk}. As a result, we can safely say that the support (i.e. count) of {eggs} will be at least as large as the support of the set {eggs, flour, milk}. The same applies to flour and milk. Our newfound conclusions In fact, we can extend this fact to pairs of items inside of the set {eggs, flour, milk}. That means Support({eggs, flour}) ≥ Support({eggs, flour, milk}). 
Again, that’s because the set {eggs, flour} necessarily occurs whenever the set {eggs, flour, milk} occurs, since it’s a subset of it. Of course, this generalizes to sets and subsets of larger sizes. Okay… So what? The fact outlined above becomes useful when you think about it in the other direction. Since sugar, for example, only occurs once, we know that any set that contains sugar can only occur once. Think about it: if a triple that contained sugar occurred more than once, that would mean that sugar occurs more than once. And since sugar only occurs once, we can guarantee that any triple that contains sugar will not appear more than once. That means that we can simply ignore all of the sets that contain sugar, since we know that they can’t be frequent itemsets. We can apply this same logic to larger subsets as well. In the image above, {milk} and {butter} each appear three times. But as a pair, they only appear together twice (i.e. Support({milk, butter}) = 2). This means that any triple that contains the pair {milk, butter} can appear at most 2 times. When scanning for triples that appear 3 or more times, we could simply ignore {milk, butter, flour} since it contains {milk, butter}, so we know that it must appear no more than 2 times. The Apriori Algorithm makes use of this fact to eliminate needlessly constructing and counting itemsets. The Apriori Algorithm Unlike the naïve approach, which makes a single pass over the dataset, the Apriori Algorithm makes several passes — increasing the size of itemsets that are being counted each time. It filters out irrelevant itemsets by using the knowledge gained in previous passes. Here’s an outline of the algorithm: First Pass Get the counts of all the items in the dataset Filter out items that are not frequent Second Pass Get the counts of pairs of items in the dataset. BUT: only consider pairs where both items in the pair are frequent items. (These are called candidate pairs.) Filter out pairs that are not frequent. Third Pass Get the counts of triples in the dataset. BUT: only consider candidate triples. If any of the items in the triple are not frequent, the triple is not a candidate triple. If any of the pairs of items in the triple are not frequent, the triple is not a candidate triple. Filter out triples that are not frequent. Nth Pass Get the counts of candidate itemsets of size N. ( Remember: If any of the subsets of items in the itemset are not frequent, then the itemset cannot be frequent.) If any of the subsets of items in the itemset are not frequent, then the itemset cannot be frequent.) Filter out itemsets that are not frequent. While this algorithm ends up taking more passes over the data, it saves a lot of time by cutting down on the number of itemsets that it builds and counts. Take a look at these stats from our example dataset: We only consider 352 of the 7m+ possible triples. Ultimately, this reduction in the number of itemsets considered is what makes the Apriori Algorithm so much better than the naïve approach. Want to truly understand it…? I encourage you to implement the Apriori Algorithm yourself, as a way of cementing your understanding of it. To get you started, I’ve set up a Github repo with some example datasets and other resources to get you started. Here are two challenges — one much harder than the other. The Challenge Extract frequent triples from the example datasets by implementing the Apriori Algorithm and conducting three passes through the dataset. 
Check your implementation by also implementing the naïve approach and comparing your results.

The Bonus Challenge

Implement the Apriori Algorithm such that it will extract frequent itemsets of any given size. For example, if I want to extract frequent itemsets of, say, size 13, it should be able to do that. The Github repo includes a script that you can use to generate very large datasets. Use this to test your more general implementation. (If you choose to take on the bonus challenge, I encourage you to do the 3-pass implementation first. I think it’ll help build your understanding.)

Avoid this common mistake

One of the most common mistakes when implementing this algorithm happens when checking for candidate itemsets. Folks often remember to check if the individual items in an itemset are frequent, but they forget to check if all subsets of the current itemset are frequent. Remember: Triples contain pairs. Quads contain triples and pairs. And so on.

Let’s chat!

If you take on either of the challenges above, please let me know. I’d love to swap implementations of the algorithm with you and hear about your experience. I’ve listed my email on the Github repo, so please feel free to reach out.
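If you’d like a starting point before attempting the challenge yourself, here is a minimal sketch of the three-pass approach described above for frequent triples. The input format (one space-separated basket per line), file name and support threshold are assumptions, not part of the official challenge materials.

```python
# Minimal three-pass Apriori sketch for frequent triples.
# Assumes each line of the input file is one space-separated basket.
from collections import Counter
from itertools import combinations

MIN_SUPPORT = 1000  # assumed definition of "frequent"

def load_baskets(path):
    with open(path) as f:
        return [sorted(set(line.split())) for line in f if line.strip()]

def frequent_triples(baskets):
    # Pass 1: frequent single items
    item_counts = Counter(item for b in baskets for item in b)
    freq_items = {i for i, c in item_counts.items() if c >= MIN_SUPPORT}

    # Pass 2: count only candidate pairs (both items frequent)
    pair_counts = Counter()
    for b in baskets:
        kept = [i for i in b if i in freq_items]
        pair_counts.update(combinations(kept, 2))
    freq_pairs = {p for p, c in pair_counts.items() if c >= MIN_SUPPORT}

    # Pass 3: count only candidate triples (every pair inside is frequent)
    triple_counts = Counter()
    for b in baskets:
        kept = [i for i in b if i in freq_items]
        for t in combinations(kept, 3):
            if all(p in freq_pairs for p in combinations(t, 2)):
                triple_counts[t] += 1
    return {t: c for t, c in triple_counts.items() if c >= MIN_SUPPORT}

# triples = frequent_triples(load_baskets("baskets.txt"))
```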
https://medium.com/weekly-data-science/the-intuition-behind-the-apriori-algorithm-4efe312ccc3c
['Dan Isaza']
2018-07-19 06:25:06.194000+00:00
['Algorithms', 'Artificial Intelligence', 'Data Science', 'Education', 'Machine Learning']
Abstract Classes and Metaclasses in Python
Metaclasses

As you have just seen, Python’s implementation of abstract classes differs from what you see in other popular languages. This is because abstraction is not part of the language’s grammar. Instead, it is implemented as a library. The next logical question is “Can you do something similar yourself, purely in Python?” And the answer is yes.

Before moving to metaclasses, there is something you need to understand about classes first. You may think of classes as blueprints for object creation, and you would be right. But in Python, classes themselves are objects. For example, when running this code:

class YouExpectedMeToBeAClass:
    pass

Python will instantiate a YouExpectedMeToBeAClass class and store this "object" in memory. Later, when you refer to this class when creating objects, Python will use that "object."

But how does Python instantiate a class? By using a metaclass, of course. Metaclasses are classes for classes. Metaclasses provide blueprints for class creation. Every class has a metaclass by default (it is called type). To create a custom metaclass, you will have to inherit from type:

class CustomMeta(type):
    pass

class SomeClass(metaclass=CustomMeta):
    pass

Note: type's parent class is type itself. This is a hack in Python internals, and it is impossible to implement an alternative type using pure Python.

By itself, CustomMeta does nothing. Let's add some more features to show you the power of metaclasses. Let's make CustomMeta check if every child class has a render attribute (like with AbstractRenderer). If you try to run that code (without even instantiating anything!), it will throw an error. Let me explain what __new__ is first. This is the constructor for classes, like __init__ for objects. It is called at the moment SomeClass is defined, and whatever is returned from this function becomes the class. These are the arguments: the metaclass (cls), the new class's name (clsname), the parent classes (bases) and the attributes (attrs). In the function body, we enumerate the attributes and check if render is one of them. If not, we raise an exception. To make this code run, add this to SomeClass:

def render(self):
    pass
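The render-checking snippet itself appeared as an embedded gist in the original post and is not shown above. The following is a reconstruction based on the description; the exact exception type and message are assumptions.

```python
# Reconstruction of the render-checking metaclass described above; the
# exception type and message are assumptions based on the prose.
class CustomMeta(type):
    def __new__(cls, clsname, bases, attrs):
        # attrs holds everything defined in the class body
        if "render" not in attrs:
            raise ValueError(f"Class {clsname} must define 'render'")
        return super().__new__(cls, clsname, bases, attrs)


# Defining a class without `render` raises at class-definition time;
# adding the method below is what makes the code run.
class SomeClass(metaclass=CustomMeta):
    def render(self):
        pass
```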
https://medium.com/better-programming/abstract-classes-and-metaclasses-in-python-9236ccfbf88b
['Michael Krasnov']
2020-07-29 14:28:05.106000+00:00
['Programming', 'Python', 'Data Science', 'Algorithms', 'Python3']
How to monitor your leads’ activity through Pipedrive CRM
This guide is the part of An Ultimate Guide to Integrating Your CRM to Grow Sales and Automate Work. If you don’t know what is Pipedrive, yet, you can test Pipedrive CRM for extended 30 days (normally it’s 14) using this link. Once you have your Pipedrive set up with your data imported into the system (data import) and your lead sources connected (CRM integrated with the website, mailing campaigns, customer service process) you’d now like to monitor how the leads progress through your sales process and how they respond to your campaigns. Knowing how far you’re in the relationship with your prospects is essential to assess the chances of your sale’s success and to know which deals/prospects to focus on. E.g. Let’s say you have 100+ deals in your pipeline but you only have 2–3 salespeople to handle the communication process. You’d rather have your team focus on the leads which have the highest probability to succeed (those that are the furthest in the sales process & show actual interest in your value proposition). So how to do it properly in Pipedrive CRM with proper configuration & integration? Let’s take a look: 1. Configure sales process to actually reflect your relationships with customers. A good first step to keep track of your leads’ activity is to properly set up the stages of your relationship with customers (pipeline stages). These should actually reflect the stages of your sales process. Example stages in the sales process Once you have it properly set up, it’s much easier to prioritize your tasks and sales process. Simply focus on the deals that are furthest in the pipeline (the closest to the right-hand side in the pipeline). Also, start any summary meeting and discussion with your sales team from the deals which are closest to the right-hand side. 2. Set up probabilities for sales stages to forecast sales for upcoming months/quarters. Once you have your sales pipeline configured, it’s worth spending some time assessing what’s the probability of each lead passing through a specific stage of your sales pipeline. This will provide you with the actual sales forecast, letting you plan your sales ahead of time. Probabilities’ setup in Pipedrive Inputting probabilities at the very beginning of using a CRM will be difficult as you don’t really know what’s the chance of progressing the deal to the next stage. Once you have some data to analyze (actual sales data) it will become much easier. You will then simply enter actual conversion data or Pipedrive (unfortunately, Pipedrive will not recalculate it automatically for you). Tip! Pipedrive will also allow you to override overall stages’ probability with individual deals’ probabilities if you feel a certain deal is more likely to progress to the next stage of the sales process. 3. Implement a “no red circle” policy and set up “rotting days”. After you have your sales process defined and sales forecast implemented, you can now focus on progressing the deals to the next stages and on monitoring your or your salespeople's activities and interactions with potential customers. A good first step to make sure you’re able to effectively monitor activities for deals in your sales pipeline is introducing a so-called “no-red-circle” policy which means that none of the deals in the pipeline can have an exclamation mark (meaning no activity planned) or a red circle mark (meaning an activity with overdue date). 
Activity planning from the pipeline view in Pipedrive This means all of your salespeople will need to actually report the activities they perform with leads in the CRM system. If any of the deals in the pipeline turns into a “red-circle” one, you can then set up a notification in Slack to notify your team about the need to execute or re-schedule the activity. Also, if you’re in the industry with quick and intensive sales process (like retail), it’s good to set up a “deal rotting” making sure your deals cannot lag in your pipeline without any activity made for more than X days. Otherwise, they should be marked as lost. Deal rotting notifications in Pipedrive 4. Integrate with lead activity software. With the “no red-circle” policy in place, you can now monitor the activities/interactions of your team with prospects/potential customers. But how to make sure you then know what happens on the customer’s side? You can’t spy on them whenever they turn on the computer but there are some techniques you can use to know when customers are browsing through your offer, e-mail, or website. Integrate with proposal software If you integrate your Pipedrive with proposal software, like PandaDoc, you will then be able to know what happens once you send the proposal to the customer. This includes the number of times they open your proposal or the most often viewed parts of your offer. Also, you can set up an instant Slack notification every time a customer opens your proposal and have your team call your prospect exactly when they scroll through your offering. Pipedrive & PandaDoc proposal software integration Integrate with a lead scoring system As Pipedrive does not have a built-in lead scoring system at the moment, you’d need to integrate it with a proper lead scoring solution like Salespanel to score leads based on multiple data collected in your CRM. The lead scoring solution will collect overall information about your leads’ activity, presenting you with a unified score reflecting your chance to close the sale with the customer. Salespanel lead scoring in Pipedrive Integrate with your website After you send a proposal, it’s very likely that your prospect will be visiting your website checking on your value proposition, reviews, and many others. It’s good to know when somebody decides to visit your website after your email. By integrating with tech solutions like Leadworx or Outfunnel, you will easily check on the website visits of your potential customer. Website tracking through Leadworx integration 5. Track your sales and marketing email opens and clicks. Mailing remains one of the most important channels to communicate with your prospects. Hence, it’s important to know if your communication catches the eye of a potential customer. Integrate with your newsletter campaigns It’s essential to know if somebody opens your newsletters and it’s great to have this kind of information in your central CRM system (e.g. to catch the customer on the follow-up call). If you’re using MailChimp for emailing campaigns, you will easily stay up to date with your prospects opening or clicking on them, e.g. using an Outfunnel integration. Newsletter & Pipedrive integration through Outfunnel Integrate with your outbound campaigns If you use outbound to reach out to potential prospects, you can also use a proper integration to know about your prospects opening on clicking on your outbound campaigns. 
Set up an integration with your CRM system and have your team react to any replies, clicks, or opens of your campaigns (you can use Slack notifications as additional notification support). Custom fields for cold mailing campaigns in Pipedrive Make use of Pipedrive’s built-in mail tracking features Pipedrive open and click tracking Pipedrive will allow you to monitor clicks and opens for every email you send from Pipedrive CRM. Make use of this feature and monitor the status of your emails without the need to leave your CRM system. You can do it by marking the click and open icons whenever you send an email and by using the sales assistant every day you use your Pipedrive CRM. Pipedrive sales assistant with open and click tracking -> Learn how to integrate with mailing solutions and outbound campaigns in our guides on How to configure and use mailing in Pipedrive CRM and How to integrate Pipedrive CRM with cold emailing campaigns in 3 steps. Monitor your leads’ activity and stay active in your sales process Having the data about your leads and their activity in one central place will allow you to make data-driven decisions in your sales process. It will also help you focus on the leads in your pipeline which actually have a chance of closing instead of reaching out to each and every lead not interested in what you’re offering. Integrating with all the lead activity happening in your business will also let you analyze the activity data and optimize your sales processes. However, we will look into data analytics in the next part of our guide. For now, use the right tech setup and integration to your advantage and to improve your team’s sales efficiency.
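For teams that prefer to automate the “no red circle” check from section 3 rather than eyeball the pipeline, a short script against Pipedrive’s REST API plus a Slack incoming webhook is usually enough. The sketch below is illustrative only: the deal field name, the token placeholders and the pagination handling are assumptions to verify against your own account and the current API documentation.

```python
# Hedged sketch: flag open deals with no planned or overdue next activity and
# ping Slack. "next_activity_date" is an assumed Pipedrive deal field name.
from datetime import date

import requests

PIPEDRIVE_TOKEN = "your-api-token"                               # placeholder
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"   # placeholder

def red_circle_deals():
    resp = requests.get(
        "https://api.pipedrive.com/v1/deals",
        params={"status": "open", "api_token": PIPEDRIVE_TOKEN},
    )
    resp.raise_for_status()
    flagged = []
    # Only the first page is fetched here; real pipelines may need pagination.
    for deal in resp.json().get("data") or []:
        next_activity = deal.get("next_activity_date")  # assumed field name
        if next_activity is None or next_activity < date.today().isoformat():
            flagged.append(deal["title"])
    return flagged

def notify_slack(titles):
    if titles:
        text = "Deals with no planned or overdue activity:\n" + "\n".join(titles)
        requests.post(SLACK_WEBHOOK, json={"text": text})

notify_slack(red_circle_deals())
```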
https://medium.com/softwaresupp/how-to-monitor-your-leads-activity-through-pipedrive-crm-504ad6bb25ba
['Matt Pliszka']
2020-10-04 09:56:43.381000+00:00
['Sales', 'Marketing', 'Business', 'Pipedrive', 'CRM']
Most European regions back to normal death rates — several regions in Sweden still above
As summer nears its end, it’s becoming possible to evaluate the coronavirus pandemic’s toll on European regions over nearly six months. We’ve analysed data from 776 subnational regions to better understand where the virus is continuing to hit hard. Data from 21 European countries shows that over 225,000 more people than usual have died since the start of the pandemic.

This story was co-published with Svenska Dagbladet: Överdödligheten minskar långsammare i Sverige

Read our previous analysis or download the data yourself.

Breaking down the excess deaths by when they occurred allows us to identify areas where the epidemic is continuing to hit hard, after the first wave made areas like Bergamo and Madrid its first epicentres. In spring (March-May), the worst-hit areas counted their excess deaths in the thousands, and about one-third, or 255, of all the regions we’ve analysed had deaths at least 25% higher than usual. Over the summer months, from June onwards, this was the case in just one region, Legnicko-Głogowski in Poland, where 81 more people than usual died.

Normal levels in four in five regions

This far into the pandemic, almost all regions have managed to bring their excess deaths down. In fact the vast majority of regions, about four in five, had deaths at more or less normal levels over the summer months. However, progress has been slower in some countries than others. Sweden, an outlier in its handling of the coronavirus pandemic, also stands out in our data.

Read more: Överdödligheten minskar långsammare i Sverige (Svenska Dagbladet)

Nearly half of all regions (43%) in Sweden still had more deaths than usual during summer. This is higher than all but two other countries, Poland (51%) and Czechia (50%), both of which currently have as many new daily cases as they ever have. Other countries where several regions were still seeing excess deaths over summer include Lithuania (40%), Latvia (33%), Portugal (32%) and the UK (22%). The UK stood out in our June analysis of excess deaths, as a country with a rather uniform spread of high excess deaths across the country: In most countries the spread of coronavirus this spring was regional, rather than national, and largely contained to one part of the country. In the UK however, all regions but one had at least 25% excess deaths.

Stockholm back to normal levels

For Sweden, Stockholm has been where the bulk of coronavirus deaths have occurred. In the spring, 4,745 more people than usual died in Sweden. Nearly half of these excess deaths occurred in the capital. With 59% more deaths than usual, Stockholm had a higher excess death rate than all but 68 of the 776 regions we looked at. However, since June Stockholm hasn’t had any more deaths than usual. The regions in Sweden where most excess deaths occurred over summer are Uppsala (16% more deaths than usual), Halland (+15%), and Blekinge (+13%). It is worth noting that all of these excess deaths are not necessarily related to covid-19. In small regions coincidences may bump the death tolls. For example Gotland in Sweden has recorded 31 more deaths than usual (+15%), but only 6 officially covid-related deaths, according to the Public Health Agency of Sweden.

What about a second wave?

Cases are rising again in many European countries, not least Spain, France and Belgium. Across the EU, new daily cases are higher than at any point since April.
So why is this not visible in our data? Comparing countries’ coronavirus figures is a tricky business: official case counts depend on how much testing is being done, deaths from coronavirus are counted in very different ways, and both are difficult to reliably compare across many countries. As an indicator of covid-19’s toll, excess deaths dodges many of the issues related to international comparability. However, this is also the slowest measure, with several weeks’ lag at best, which is why we only cover deaths that occurred up to July or the start of August. A spike occurring in new cases today won’t result in excess deaths for a number of weeks.

Use the data

We’ve published the data behind our analysis here.

Methodology

Our analysis is based on data showing daily or weekly all-cause deaths in each region, which has been collated from Eurostat and national statistical agencies (Scotland: NRS, Northern Ireland: NISRA, Germany: Destatis). A number of countries in Central and Eastern Europe have not reported any regional statistics on excess deaths. These are excluded from this analysis. Excess deaths have been calculated by comparing all deaths reported in a region since the start of the pandemic with the average number of deaths during that time period in the previous couple of years. We have further broken this down by season, to calculate the excess deaths in spring by comparing all deaths reported in a region between weeks 10–22 with the average, and the excess deaths in summer by comparing all deaths reported from week 23 onward with the average. Countries have reported up to different weeks, and we have used the latest data available. This largely means up to mid-July or the beginning of August, but Italy and Poland, for instance, only have data available up to 28 June. For most countries, the average period is 2015–2019. Others have fewer years of data available, but at least two full years have been used. We’ve used as granular data as possible, which is NUTS3-level for most countries. However, for Germany, Scotland and Northern Ireland, comparative data is only available at NUTS1-level. A region is defined as having had excess deaths if reported deaths were at least 5 percent higher and 20 more than expected. If deaths were at least 25 percent higher than expected, we have defined it as a region with “significant excess”.
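For readers who want to reproduce the calculation from the published data, here is an illustrative sketch of the methodology above. The file and column names are assumptions about how the tidy weekly-deaths table might look, not the newsroom’s actual pipeline.

```python
# Illustrative sketch of the excess-deaths calculation, assuming a table with
# columns: region, year, week, deaths. Names and cut-offs mirror the methodology.
import pandas as pd

df = pd.read_csv("weekly_deaths.csv")          # assumed: region, year, week, deaths

baseline = (
    df[df["year"].between(2015, 2019)]
    .groupby(["region", "week"])["deaths"].mean()
    .rename("expected")
)

summer_2020 = (
    df[(df["year"] == 2020) & (df["week"] >= 23)]   # summer = week 23 onward
    .groupby(["region", "week"])["deaths"].sum()
    .rename("observed")
)

summer = (
    pd.concat([summer_2020, baseline], axis=1, join="inner")
    .groupby("region").sum()
)
summer["excess"] = summer["observed"] - summer["expected"]
summer["excess_pct"] = 100 * summer["excess"] / summer["expected"]

# Thresholds from the methodology: at least 5% and 20 more deaths than expected
summer["has_excess"] = (summer["excess_pct"] >= 5) & (summer["excess"] >= 20)
summer["significant_excess"] = summer["excess_pct"] >= 25
print(summer.sort_values("excess_pct", ascending=False).head())
```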
https://medium.com/newsworthy-se/most-european-regions-back-to-normal-death-rates-several-regions-in-sweden-still-above-9abcffabb31a
['Clara Guibourg']
2020-08-25 08:21:43.990000+00:00
['Data Journalism', 'Coronavirus', 'Data Visualization']
A Start-to-Finish Guide to Building Deep Neural Networks in Keras
1 | Loading Image Data and Basic Preprocessing Images will (most of the time) be in a .png or .jpg format. They can be loaded using the cv2 library with image = cv2.imread(file_directory). cv2 loads images directly as NumPy arrays; converting from cv2’s BGR channel order to RGB is done through img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB). This will yield an array of dimensions (m, n, 3), where m and n are the dimensions of the image and 3 is the depth, i.e. the amount of red, green, and blue that makes up the final pixel color. Finally, the data should be scaled, or put on a scale from 0 to 1. This improves model performance (mathematically, neural networks operate better on a 0-to-1 scale). This can be done with x /= 255. When all the data is collected, it should be in an array of dimension (x, m, n, 3), where x is the number of samples in your dataset. If the y value is categorical, it can be easily one-hot encoded with Keras’ to_categorical:
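The snippet that followed was embedded in the original article; the sketch below is a hedged stand-in that strings the described steps together. The function name, file paths and label handling are assumptions, and all images are assumed to share the same dimensions.

```python
# Minimal sketch of the loading/preprocessing steps described above.
import cv2
import numpy as np
from tensorflow.keras.utils import to_categorical

def load_images(file_paths, labels, num_classes):
    x = []
    for path in file_paths:
        image = cv2.imread(path)                       # BGR uint8 array
        img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)   # (m, n, 3) in RGB order
        x.append(img)
    x = np.array(x, dtype="float32")                   # (samples, m, n, 3)
    x /= 255                                           # scale pixels to 0-1
    y = to_categorical(labels, num_classes=num_classes)  # one-hot encode targets
    return x, y

# x_train, y_train = load_images(train_paths, train_labels, num_classes=10)
```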
https://medium.com/analytics-vidhya/a-start-to-finish-guide-to-building-deep-neural-networks-in-keras-3d54de097a75
['Andre Ye']
2020-05-28 17:01:33.398000+00:00
['Keras', 'Neural Networks', 'AI', 'Machine Learning', 'Data Science']
Heavy trappings
A dusty gaze revealed one map that shows my inner rivers with their muddy shores so you can trace me back to my darkest self you know the walls but I removed the doors Having the last word comes with heavy trappings forking out onto a path littered with old deceptions it sets the stage for acted blessings to feed resentful unresolved questions Butterflies swaying the threads of webs trapping grueling memories conceal ordeals beneath shiny garments cunningly disguising naked tragedies Clogged within a fable where every touch is shy voices gasp for the sustenance of air be ready to start and lose, be ready to try be daring enough to look beyond every vanity fair
https://nunoricardopoetry.medium.com/heavy-trappings-cc15d34429cd
['Nuno Ricardo']
2020-07-05 19:21:59.229000+00:00
['Society', 'Life', 'Self', 'Poetry Sunday', 'Poetry']
Kentucky Nights: A Story of Unexpected Endings
One hundred and sixteen years before Louisville police fired the 32 rounds that killed Breonna Taylor, a hail of bullets was aimed at another African American woman just south of the city. Mary Dent Thompson, a sharecropper in Shepherdsville, Kentucky, was the target of an extrajudicial killing — like the one that took Breonna and countless other Black lives. Only 28 years old, Mary was dragged from a jail cell in the early morning hours by a lynch mob. Hung up from a tree. Shot, by some accounts, 100 times. I know her story because it is mine. The story of Mary Dent Thompson of Shepherdsville, Kentucky is my family’s origin story. It’s how we got to Southern California. It’s also the story of Kentucky Nights, the song I wrote about returning to that part of the country 110 years after my family fled. This is the story I want to leave you with in these last days before the most important election of our times. A story of unexpected endings. A ghost story with a heavy dose of horror and more than a little hope. My Family’s Origin Story The way my family told it, one day in 1904 back in Bullitt County, Kentucky, my great grand aunt Mamie Mace came upon a scene all too familiar in the post-Civil War South: a lynch mob. Mary Dent Thompson, a Black sharecropper, had killed her white landlord after he’d attacked her son. She’d been hauled from her jail cell and strung up from a tree with a noose around her neck. My great grand aunt packed up the family and headed west that same day, the story goes. They had their sights set on San Francisco, but before they could get there from their waystation in Arizona, the city was destroyed by earthquake and fire. They ended up in Southern California instead. My family has lots of stories, one for every name inscribed in the family bible, at least, and then some. The story of the lynching that propelled them west never seemed particularly important to those who told it. But it loomed large for me. And I didn’t yet know the half of what had actually happened to Mary Dent Thompson. About 80 years later I, too, fled for a better life. I’d found my groove in the late 1970s L.A. punk scene. But by the early ’80s I knew that if I didn’t leave Southern California soon, I would die. The punk and skinheads rumbles, the cops terrorizing us outside of shows, the war on drugs that we knew was a war on us. Like my father’s people before me: time to go. Yes, it’s a bit ironic that I headed north to the region being claimed by white nationalists as an Aryan homeland. That led me into the work I’ve done ever since; work which took me from Oregon to Chicago to Brooklyn. And it was from there that I finally ended up full circle: back in Kentucky. My First Trip to Kentucky When a collegial trip took me to Kentucky, the ghost of Mary Dent Thompson was waiting for me. I was conscious of the fact that I would be only about 20 miles from the scene of that lynch mob, from the place my family had left for good reason. But I had a few other things on my mind too. A trip to a legendary guitar store, for one. I spent a day walking miles down the main street of Louisville, sampling bourbon in every other bar. I got to the guitar store just before closing. They welcomed me in, told me to take my time. I fell in love with a guitar that spoke to me despite my wallet’s objections. I bought her and I started looking around for an open mic. I found one at a BBQ joint a few days later. Let me take you back to that night. 
I take a cab there with my new guitar, walk in and notice a couple of things. There’s a Confederate flag hanging above the bar. I’m on the outskirts of Louisville and I know this is probably not going to be like any open mic scene I’m familiar with. I have a choice to make. But I really want to play this guitar. And my cab driver has already somewhat reluctantly left. I sit boldly down at the bar and order a whiskey and a beer and learn that the music is in the back room. It turns out it’s a jam session; basically, an open mic with a backing band — a bunch of old boys playing music, all much better musicians than me. I can feel my tension. I am consciously aware that I’m unknown and the only Black person in the room. I go back and sit for a while, nursing my drink. And eventually they ask, “Do you have a song you want to play?” I do have a song I want to play. But my people have not been at home in Kentucky for many years. There in the room with me, my guitar, and the olds boys playing music is the ghost of Mary Dent Thompson. I can feel her everywhere. Mary never denied killing John Irvine, her landlord. Versions vary, as they always do, especially when the only witnesses are Black and a white man is dead. Wikipedia tells it like this: “While she and her son were working in her vegetable garden, Irvine approached them, and demanded the return of a pair of pliers. Thompson’s son said that he had already returned the tool. Irvine began to accuse the boy of stealing the pliers, verbally berating him, and kicked him several times in the back. Thompson confronted the landowner over her son and they argued. Shocked that Thompson challenged him, Irvine demanded that she ‘get off his place.’ By evicting Thompson, Irvine took her home, income, and dignity. ‘Angry and desperate… Thompson struck back.’ According to Thompson, she complied with Irvine’s demand, but ‘intentionally walked slowly’. Irvine became enraged and tried to attack Mary from behind with a knife. Thompson, a woman weighing 255 pounds, got the better of Irvine and cut his throat with a razor, killing him. Thompson sold her horse and furniture to her neighbors, and was preparing to flee when she was arrested.” But that was then. And here I am, and the band is inviting me up. So I get up there and start playing an original. And these old boys start backing me up. It’s the first time I’ve ever had a band behind me. I’d never heard my music that way. It feels so right. Like for the first time, I’m hearing my own music. All the tension in the room ebbs away. The band asks me, “Did you write that?” I say I did. Their immediate response is, “Let’s play another one.” I play three of my originals that night. We get to talking, I stay until the bar closes, the bass player gives me a ride home. We still stay in touch on social media. It was a profound night for all of us. It was a night that ended in a way I didn’t expect. “Walking down the road/ Where you been before/ Seems like a lifetime since you/ Stood at her door.” I flew home the next day and wrote the song Kentucky Nights. It’s a tribute to Mary Dent Thompson, the jumble of fears and aspirations I felt that night, the old boys that backed me and my guitar, and the moment where we found a bridge. Stories Don’t Always End The Way You Think They Do That first trip to Kentucky, the ghost of Mary Dent Thompson both called to me and repulsed me: the racial terror, the courage, a woman who stood her ground but didn’t survive. 
I looked for more information about what happened that day of the lynching. I came to find out that she’d done more than stand her ground in that vegetable garden. Nor had her community stood by passively as she was locked up in jail. That night when a dozen white men came to haul Mary out of jail to lynch her, they were rebuffed by a crowd of African American men. The sheriff talked everyone into going home. Early that next morning a larger mob of 30 to 150 white men returned and stormed the jail. They busted Mary out and took her to the hanging tree. The historical record isn’t clear on what happened in the melee that took place once they got the noose around Mary’s neck. Some say the rope broke under the strain of her 250 pound heft; some say a rescuer severed the rope with a bullet. I prefer the version where Mary, swinging in the air, noose around her neck, gripped her legs around a white man, grabbed his knife, and cut herself down. As she fought her way out of the crowd, the gunfire hit her. That’s where most versions of the story end. Mary Dent Thompson, escaped from the noose and then shot dead. But remember, this is a story about unexpected endings. Mary Dent Thompson did not, in fact, die from that lynch mob’s bullets. It turns out she was taken by the sheriff to a doctor. A .38 caliber pistol ball had “entered her back and went completely through her body, barely missing her right lung” according to the local history museum’s account. Patched up, she was taken to a stone jailhouse that was sturdier than the wooden one she’d been dragged from the night before. Once more a lynch mob came for her. This time they left without her. The next morning she was moved to a Louisville jail. The newspaper headline reads, “MARY THOMPSON BROUGHT HERE FOR SAFEKEEPING: POSSIBILITY OF ANOTHER LYNCHING CAUSES HER TRANSFER.” The story reports, “On arriving in Louisville yesterday she breathed a sigh of relief. She said that she feared lynching at the hands of the men in Bullitt County and was glad to be behind the bars of the jail in Jefferson County. Her wounds are nearly healed, and it is almost certain that she will recover.” We need these stories of unexpected endings. We need to remember, always, that the last chapter has yet to be written. Horror is not always the end of the story. Horror Is Not Always the End of the Story What happened once Mary Dent Thompson went to trial in Kentucky in 1905 is just as remarkable as her and her son surviving the garden-row confrontation with John Irwin; just as remarkable as her surviving not one but three lynching attempts and a bullet that ran right through her chest. Over 350 potential jurors had to be called in order to form a proper jury. You would think that the courts in white supremacist, early 20th-century America would do what the mobs had failed to do to a Black woman, a sharecropper, who had killed a white landowner. Instead, Mary Dent Thompson was sentenced to two years in the state penitentiary. “As true as there is a God in Heaven I did not kill that man until after he had attacked me, and I was forced to fight for my life,” Mary is said to have testified. Mary Dent Thompson went on to live another 30 years. She died August 18, 1934, the mother of twelve children and widow of her husband Ben. She was buried in Greenwood Cemetery, less than 10 miles from the BBQ place I played that night with my new guitar. She still had a future, after all that horror. 
And after four years of white nationalist terror, increasing political violence, and authoritarian grabs for power — so do we, America. This is why it’s so important for us to tell our stories to each other. Stories of courage in the face of terror, stories of resilience and redemption. Our stories help us to remember each other’s humanity. Our stories connect us to each other and to all those who came before us who survived the unimaginable. I want to send you off with some more stories. In honor of Mary Dent Thompson, my father’s people, Breonna Taylor, those old boys in the bar, all the names we say, and the names we don’t even know. I’ve made a playlist for you of the songwriters and the stories that have kept me going over the last four years. May they sustain you and feed your own stories. I’d love to hear from you: What are the songs that speak to you in this moment, that lift you up? What are the stories that keep you going? Eric K. Ward is a Senior Fellow with SPLC and Race Forward and Executive Director of Western States Center. For the times we are in: my Kentucky Nights playlist.
https://westernstatescenter.medium.com/kentucky-nights-a-story-of-unexpected-endings-9608aaa945c
['Western States Center']
2020-10-18 19:36:54.570000+00:00
['History', 'Western States Center', 'Music', 'Open Mic', 'Election 2020']
Weighing the Odds
Weighing the Odds A Reflection on 3 Mind-boggling, Near Impossible Events Image by WikiImages from Pixabay “If you set out in a spaceship to find the one planet in the galaxy that has life, the odds against your finding it would be so great that the task would be indistinguishable, in practice, from impossible.” Richard Dawkins There are many days in our lives that are routine; everything happens the way it always does. There is security in this; it’s predictable. When incidents happen that are out-of-the-ordinary, it gets our attention because it is not predictable and not all these events are bad; some are extraordinary. I had one such day recently when three mind-boggling things came to my attention. As I reflect on them, the mathematical probability of them happening is minute and yet, they did. Three mind-boggling, near impossible events 1. Mini-golf and the Number 10 Image by Felix Wolf from Pixabay My husband and I recently celebrated our 32nd wedding anniversary. As part of our day, we went to a mini-golf course and played a round of 18 holes. Neither one of us plays golf nor have we played this course in years. When we arrived, there were many people playing the easier course, so we played the challenging one. As we worked through each hole, we would shake our heads at the complexity of them. Despite this, we both got holes-in-one, on hole number 10. What are the chances of both of us, who don’t play golf; playing the challenging course; and getting holes-in-one on the same difficult hole? Small, huh? This story doesn’t end here, either. At the end of the course, when you bring back your clubs, this mini-golf business looks at your score card. If you have any holes-in-one, they spin a wheel made up of 18 pie-wedge shapes. If it stops on the number you have a hole-in-one on, you win a free game. That day, the mini-golf clerk spun the wheel, and it landed on number 10; both of us won a free game. What are the odds of this happening? These two non-golf playing people both get holes-in-one on the challenging course; on the same number 10 hole; and that number is the one the wheel landed on for two free games. I am not a mathematician, but I’m guessing the odds of this happening are rather slim. 2. The Teacher in Rural America Photo by Susan Grant On the same day, my husband was telling me about a new teacher he had worked with this year. I had met him at a ballgame in the winter and knew he was from down South but that day, my husband said he recently discovered this man had taught in the same school district we had in North Carolina and, while teaching there, was a colleague of a friend of mine. Not astonished yet? Let me fill you in on a few details. The school system in North Carolina is small and rural. This place is a small speck on the map compared to bigger cities surrounding it, such as Charlotte and Raleigh. When this man moved, he chose a small fishing village on the coast of Maine, almost at the Canadian border. This village has just over 1,000 residents and the high school tops off at 80 students. What are the chances that a teacher who moved to our tiny fishing village taught in the same school district we did and knows many people we do? It’s mind-boggling. 3. The love of a Savior for Someone Like Me Image by sspiehs3 from Pixabay On this same day, as my husband and I reflected on our 32 years of marriage, with its ups and downs, I began weighing some other odds.
What is the chance that God, Who is so powerful, He spoke everything into being, would be in the least bit interested in my everyday life? Psalm 34:18 (ESV) says, “The Lord is near to the brokenhearted and saves the crushed in spirit.” What are the chances that God, Who even knows the number of hairs on my head (Luke 12:7) would care enough about one person, in all of history and be close to me when my spirit has been crushed? In addition, this same God sent His Son, Jesus, to die on the cross for me. What are the odds? It truly is mind-boggling.
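Coming back to the mini-golf story for a moment, here is a rough back-of-the-envelope sketch of just how small those odds might be. The hole-in-one rate is my own guess, not a figure from the story, so treat the result as illustrative rather than exact:

```python
# Rough, illustrative odds for the mini-golf story above.
# Assumption (mine, not the author's): each casual player aces the
# difficult 10th hole about 1 time in 50; the prize wheel has 18 equal wedges.
p_ace = 1 / 50      # assumed hole-in-one probability per player
p_wheel = 1 / 18    # wheel stopping on that same hole's number

p_all = p_ace * p_ace * p_wheel
print(f"roughly 1 chance in {round(1 / p_all):,}")   # about 1 in 45,000
```

Even with fairly generous assumptions, the combined chance works out to tens of thousands to one, which is very much in the spirit of the author's "rather slim."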
https://medium.com/publishous/weighing-the-odds-8570c7abb41a
['Susan Grant']
2019-06-28 19:08:51.868000+00:00
['Christianity', 'Golf', 'Religion', 'Mindfulness', 'Creativity']
Couples Are Rethinking the Idea of Having Kids Because of Climate Change
Couples Are Rethinking the Idea of Having Kids Because of Climate Change The future of our children is extremely uncertain. Photo by Robo Wunderkind on Unsplash In 2020, we’ve experienced a pandemic, wildfires, and the busiest hurricane season in recorded history. Horrific natural disasters are becoming increasingly common. So with all of that in mind, it’s understandable why some couples are reconsidering having children. A lot of people are nervous about what their kids’ future may look like in a world that’s been continually devastated by the effects of climate change. I’m one of those people. Since I was a little boy, my dream was to have kids of my own. Someday, I want to meet an amazing woman & raise a family in a beautiful city. But part of me is worried that dream will never come to fruition, as the long-term future of humanity is extremely uncertain. Many other people feel the same way. A 2019 poll by Business Insider reported that almost 38% of Americans aged 18 to 29 believe that couples should consider climate change when deciding to have kids. And with the United Nations predicting a global temperature increase of 3–5 degrees by the year 2100, one can only ask, “what kind of world are we leaving for our children?”
https://medium.com/curious/couples-are-rethinking-the-idea-of-having-kids-because-of-climate-change-523da9e5c47d
['Matt Lillywhite']
2020-11-12 03:32:44.409000+00:00
['Environment', 'Life', 'Climate Change', 'Parenting', 'Children']
‘A Quiet Storm’: How Smokey Robinson Invented A New Genre Of Soul
Ian McCann There are examples of records that gave a name to a whole genre (like ʻRapper’s Delight’, which used the term “hip-hop” on wax for the first time), and even a sound system that gave birth to a record label and a style of music, such as Rockers. But there aren’t many records that gave a name and musical outlook to an entire programming format for radio — and one that has thrived for more than four decades. Welcome to the show, ladies and gentlemen. Tonight we are going to play it smooth, black and gently funky; low, slow and slinky for all-grown-up listeners who still have soul. So pour yourself something nice, turn the lights down, and get ready to be immersed in A Quiet Storm. Listen to A Quiet Storm right now. If that makes this album sound like a long, warm candlelit bath, possibly with a perfumed bath bomb fizzing around, well, you could certainly listen to it while enjoying one. The album cover looks like a scene from The Bridges Of Madison County, which is a little baffling. But don’t go thinking that A Quiet Storm is soapy mush. Mellow and mature it may be, but Smokey Robinson never failed to commit his soul to his records. There’s a reason why this album struck a lasting chord. There is a storm here, even if it is quiet. Lost in its intimate life A Quiet Storm was Smokey’s third solo album, released on 26 March 1975, and by far and away his best to that point. The first two were not without their moments, but it seemed as if Smokey was reaching for a style that only really coalesced here. The mature singer-songwriter chiefly concerned with relationship issues was a successful musical trope of the first half of the 70s, and while Motown had tried to nurture such a figure before, particularly in the two Valerie Simpson albums they released, Smokey Robinson was the artist equipped for the role. Opening with an arresting synthesised storm effect and an irresistible bassline, the title track is close-miked, and the breathy quality of his voice was never caught better on record. The beat of the song is barely apparent — we are miles from Motown’s original trademark crashing snare here — but still, you find yourself rocking to its subtle groove. Getting on for eight minutes later, you are still lost in its intimate life. ʻThe Agony And The Ecstasy’, a ʻMe And Mrs Jones’-type tale of infidelity, but with a more thoughtful lyric, boasts ringing guitar parts from former Miracle Marv Tarplin and a lush, languid feel as Smokey endures what fellow adult soulsters would later simply term ʻJoy And Pain’. ʻBaby That’s Backatcha’ focuses more on a funky sound, but this US Top 30 single is hardly grits and growling: threaded with a strand of flute, this is as sweet as club funk ever got; mellow-mellow, right on. ʻWedding Song’, written for the 1973 nuptials of Jermaine Jackson and Hazel, the daughter of Motown boss Berry Gordy, mostly sidesteps sickly sentiment, telling of the most important day of many people’s lives with sincere optimism. Photo: Motown Records Archives Soulful, absorbing, life-affirming Smokey Robinson could have been accused of over-optimism when he wrote ʻHappy (Love Theme From Lady Sings The Blues)’, basing it on Michel Legrand’s theme from the movie — but not until after the movie had been completed, reputedly to Berry Gordy’s chagrin as he was highly impressed with the song. No matter; Michael Jackson recorded it, and the song found the perfect home on A Quiet Storm, where it had space to spread out over seven minutes. 
ʻLove Letters’, dripping with synth from bass to the piccolo-like trebly whistle, is the album’s least-satisfying cut, a muted funk that sounds like it was assembled as much as written, despite Smokey’s apparently confessional lyric that declares his life has been a love letter — which, in terms of his writing career, was correct. Tarplin hits a rare snagged note in the intro, a sure sign that this was not as finished as it might have been, though we are far more used to sophisticated synth music now than the world was in ’75 when ʻLove Letters’ represented something fresh and intriguing. The uptempo closer, ʻCoincidentally’, a tale of romantic dissatisfaction which would have perfectly suited the late-60s Miracles, is far more accomplished, with burbling, boiling clavinet and buoyant electric piano supporting Smokey’s irritated, dismissive, even vengeful lyric. The album ends with a brief reprise of the title track over the synthesiser that has filled the gaps between each song. Like most great albums, A Quiet Storm occupies a world of its own making: a warm, sometimes steamy world, but not a world without its drawbacks and irritations. If this is love, it’s not a gooey, cloying one, but the flawed one all relationships have to endure. This is love for adults, not starry-eyed, smitten teens. The storm of the title is not only romantic passion but the difficulties of romance amid life’s struggles. Smokey Robinson evokes this convincingly because he does it with utter conviction. Soulful, absorbing, life-affirming and every bit as moving as the best of his work with The Miracles and beyond, A Quiet Storm is one of the landmark soul albums of its era — and its innovations continue to resonate. A Quiet Storm can be bought here. Listen to the best of Smokey Robinson on the Motown Songbook playlist on Apple Music and Spotify. Join us on Facebook and follow us on Twitter: @uDiscoverMusic
https://medium.com/udiscover-music/a-quiet-storm-how-smokey-robinson-invented-a-new-genre-of-soul-846f6b94daa
['Udiscover Music']
2019-03-26 11:36:41.467000+00:00
['Pop Culture', 'Features', 'Culture', 'Soul', 'Music']
Facebook: No, COVID-19 Vaccines Won’t Require You to Get Microchipped
As the public waits to learn more about the upcoming COVID-19 vaccines, Facebook is working to prevent baseless conspiracy theories from mucking up the discourse. Can the company pull it off? By Michael Kan Facebook is preparing to remove misinformation about upcoming COVID-19 vaccines, including conspiracy theories that claim the treatments will require people to get microchipped. “Given the recent news that COVID-19 vaccines will soon be rolling out around the world, over the coming weeks we will start removing false claims about these vaccines that have been debunked by public health experts on Facebook and Instagram,” the company announced. According to Facebook, the crackdown will target false claims “about the safety, efficacy, ingredients or side effects,” of the new vaccines. “For example, we will remove false claims that COVID-19 vaccines contain microchips, or anything else that isn’t on the official vaccine ingredient list,” the company added. The conspiracy theory has its origins with the anti-vaccine movement, which accuses Microsoft co-founder Bill Gates of funding COVID-19 vaccines to secretly microchip the world’s population. However, there’s no evidence to support any of this. “No. There’s no connection between any of these vaccines and any tracking type thing at all. I don’t know where that came from,” Gates told CBS News in June. Nevertheless, the conspiracy theory has persisted on social media for months now. Another conspiracy theory Facebook plans on removing covers false claims that certain groups of people are being exploited to test the COVID-19 vaccines without their consent. The impending crackdown will face complaints from the anti-vaccine movement and free speech advocates. However, Facebook said it’s removing the misinformation out of fear it’ll put people in “imminent physical harm.” Indeed, the upcoming COVID-19 vaccines hold the promise of ending the ongoing pandemic once and for all. On the flip side, the public is still trying to understand the efficacy and potential side effects of the treatments as the FDA prepares to approve them for mainstream distribution. To keep users informed, Facebook plans on promoting “authoritative sources” on the coming vaccines through the company’s official COVID-19 information center. “We will not be able to start enforcing these policies overnight,” the company added. “Since it’s early and facts about COVID-19 vaccines will continue to evolve, we will regularly update the claims we remove based on guidance from public health authorities as they learn more.”
https://medium.com/pcmag-access/facebook-no-covid-19-vaccines-wont-require-you-to-get-microchipped-be8504f8d52a
[]
2020-12-04 14:06:44.006000+00:00
['Conspiracy Theories', 'Disinformation', 'Facebook', 'Technology', 'Social Media']
Can AI help in Protein Folding?
With advances in computational power and resources, as well as newly developed algorithms, AI has stretched its prowess across various domains. The protein folding problem is a difficult challenge in structural biology, but recent advances using AI are very promising and might yield the desired results. What exactly is the protein folding problem and why is it important Proteins play a very important role in biological processes. They are large, complex molecules and essential for all life forms. Any function we perform or observe, be it the contraction of muscles, receiving environmental stimuli, or even making a decision, relies upon how some proteins function. From a biochemical perspective, a protein can perform many different types of functions. The biological mechanism behind a protein’s function is determined by its three-dimensional (3-D) structure. The intricate 3-D structure, in turn, depends on the 1-D sequence of amino acids. Predicting the intricate three-dimensional structure from the amino acid sequence is known as the protein folding problem. Fig. 1: A 3-D protein structure (source) As stated earlier, a protein’s structure determines its function and the structure depends on the amino acid sequence, but predicting the 3-D structure is not very straightforward. There are multiple factors, such as i) hydrogen bonds between residues, ii) Van der Waals interactions (since protein molecules are so tightly packed), iii) backbone angle preferences, iv) electrostatic interactions, and v) hydrophobic interactions, which can affect the structure of a protein based on its amino acid sequence. Also, the longer the sequence is, the more complex its structure becomes. For almost five decades now, scientists have painstakingly worked out the three-dimensional structures of various proteins using experimental techniques like cryo-electron microscopy, Nuclear Magnetic Resonance (NMR), and X-ray crystallography. But these techniques involve a lot of trial and error and thus consume huge amounts of resources and time. By now, we know the structures of only about half of the proteins present in the human body. If we had a robust and accurate approach to predict the 3-D structure of a given protein from just its amino acid residue sequence, we could delve more deeply into its function and would be in a better position to modify it if required. How AI can help Given the huge cost associated with the experimental methods, effort has gone into theoretical constructs for modelling these intricate structures. In recent years, leveraging the complexity a deep learning algorithm can capture, significant progress has been made. The main ideas behind a few of the approaches are as follows. One of the initial approaches was to enumerate possible structures for a given amino acid sequence and then apply the force laws arising from amino acid interactions. The number of possible foldings was huge, and the energetically favourable foldings were then drawn out of the available samples. This approach was computationally expensive and required the use of supercomputers. An extension of this approach reduced the computational requirement by relying on pre-defined templates, which are simply proteins whose structures are already known from experiments. In recent times, we also have access to a large amount of genetic data thanks to advances in genomics and low-cost sequencing.
This genetic data is essentially a blueprint for the amino acid sequence. By parsing large amounts of genetic data, sequences across species that have likely evolved together can be identified. Using the co-evolutionary signal found in these sets of similar sequences, structural predictions are made. Another way of approaching the problem is to predict the probability that two residues are in contact. By searching through a protein database, sequences similar to the target sequence are found and aligned. These similar sequences are used to generate an MSA (Multiple Sequence Alignment). Different methods, including neural networks, are then used to predict the contact probability between any pair of residues. (Two residues are said to be in contact if they are closer than a threshold distance.) At CASP12 and CASP13 (CASP is a biennial competition to assess protein structure predictions), the winning solutions used a ResNet architecture to predict contact probabilities. The downside to this set of approaches is that, in the absence of a large number of homologues (proteins with similar sequences), the performance of these models drops significantly. A similar but more impactful approach is to predict the distances between residues instead of contact probabilities. Distance predictions contain more fine-grained information than contact probabilities. One example network architecture using this approach, taken from current research, is as follows. Fig. 2: Overall Deep Neural Architecture for Protein Distance Prediction (source) Here the authors use both 1D and 2D ResNet blocks to predict the distance distribution. The 1D ResNet block captures the sequential context of the residues, whereas the 2D ResNet block captures the pairwise context of a residue pair. By summing the predicted probabilities for the first few distance bins, this model can also be used for contact prediction, and it performs better in terms of long-range accuracy. Google’s AlphaFold has gained popularity in this domain as a state-of-the-art algorithm. AlphaFold is essentially a mix of a few of the above-mentioned approaches. Along with the target sequence, AlphaFold takes into account features derived from the MSA (Multiple Sequence Alignment). Then, instead of contact probabilities, the neural network predicts distances between residues and angles between chemical bonds. Using these two pieces of information, a potential of mean force is constructed from which the protein’s shape can be accurately described. The resulting potential is then optimized using a simple gradient descent algorithm. Fig. 3: Workflow for AlphaFold algorithm (source) Fig. 4: Animation showing optimization of potential of mean force using Gradient Descent (source) Conclusion: The protein folding problem is still a challenging one and is at the forefront of research. Most of the current techniques are still untested on long residue sequences, and even the state-of-the-art approaches have their own challenges. But something we can surely state is that deep learning, and AI as a whole, will play a major part in future developments in this domain.
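To make that last step a little more concrete, here is a minimal, self-contained sketch in Python. This is not DeepMind's actual code, and the "predicted" distances below are random placeholders standing in for the output of a distance-prediction network; the point is only to show the shape of the idea: build a simple harmonic potential from a matrix of predicted inter-residue distances and relax randomly initialized 3-D coordinates by plain gradient descent.

```python
import numpy as np

# Toy illustration only: relax 3-D coordinates against "predicted" mean
# inter-residue distances using a harmonic potential and gradient descent.
rng = np.random.default_rng(0)
n_residues = 20

# Placeholder for a network's output: a symmetric matrix of predicted mean
# C-alpha distances (in angstroms). Real predictions would come from the
# sequence and its MSA; these random values are not geometrically consistent,
# so the energy will not reach zero -- it just decreases.
predicted = np.abs(rng.normal(loc=8.0, scale=2.0, size=(n_residues, n_residues)))
predicted = (predicted + predicted.T) / 2.0
np.fill_diagonal(predicted, 0.0)

def potential_and_grad(coords, target):
    """Energy E = 1/2 * sum_ij (|x_i - x_j| - d_ij)^2 and its gradient."""
    diff = coords[:, None, :] - coords[None, :, :]   # shape (N, N, 3)
    dist = np.linalg.norm(diff, axis=-1)             # shape (N, N)
    np.fill_diagonal(dist, 1.0)                      # avoid division by zero
    err = dist - target
    np.fill_diagonal(err, 0.0)                       # ignore i == j terms
    energy = 0.5 * np.sum(err ** 2)
    # Each unordered pair appears twice in the sum, hence the factor of 2.
    grad = 2.0 * np.sum((err / dist)[:, :, None] * diff, axis=1)
    return energy, grad

coords = rng.normal(size=(n_residues, 3))            # random initial "fold"
learning_rate = 0.01
for step in range(501):
    energy, grad = potential_and_grad(coords, predicted)
    coords -= learning_rate * grad                   # plain gradient descent
    if step % 100 == 0:
        print(f"step {step:3d}  energy {energy:.1f}")
```

In a real pipeline, the target matrix (and a confidence weight per residue pair) would come from the distance-distribution network described above, and a library optimizer such as L-BFGS would normally replace the hand-rolled update, but the overall shape of the computation is the same.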
https://medium.com/bayshore-intelligence-solutions/can-ai-help-in-protein-folding-569e22716a01
['Chan Chal']
2020-04-08 07:26:04.132000+00:00
['AI', 'Deep Learning', 'Protein Folding']
Out of Yellows
I am crying yellow tears. It has only just occurred to me that summer isn’t here anymore. That my three yellow flowers that went missing must be dead. I walk on the wet cement floor of London and think. Where was I yesterday? When did the fallen flakes of sun become curled up autumn leaves shaking in the anticipation of winter rains. Where are all my friends I made in the yellow illusions of daylight. Are they taken? Have the walls of London claimed back warm human bodies it released to the sun. I must be dreaming — it cannot be that I have missed one whole season (one whole cycle of bloom). The year can not be rolling down the hill of life with such speed. I run upwards to look for warmth when a gush of autumn cold reminds me that the mountain of life doesn’t allow climbing backwards. Leaves are but moving time, life is but lost flowers and wilting friends. I wrap a scarf around my stiffened neck. I am thirsty but I don’t want to see or touch water. I want to dry up like the land. I want to hibernate in the ditches with broken foliage and the city’s sludge. It’s uncanny how nothing reaches the earth anymore, all kinds of yellows are frozen too far up in the sky somewhere. Not to be touched by wanting trees, not even by birds. A tin of man-made chrome acrylic stares at me. I look in the other direction, at a lone baby clementine sitting on the other end of the table. If we could draw lines between us, maybe we would be a warmer geometry of our own. But we are out of each other’s reach. We are drying in our own spots. Absorbing the last of autumn. Giving up our fluids to the transforming cosmos. Yellow birds is all I can see when I close my eyes. What has happened to me. I was depressed all summer, why am I looking for it now. My scarf tightens and loosens like a snake — all by itself. My mind opens and closes like a window with a broken hinge — all by itself. I smash the yellow acrylic on a cold leaf of paper. It is the coldest yellow I have ever seen. Thick strands of paint snatch away all the opportunities from the blank sheet. They are all wasted in hope. I am with them — part of the attempt of a painting. I try to capture summer in words instead. All I do is spill one grudge after another. I seem to be upset with the sun and the leaves. They don’t even care that I exist. I shout in words on nature and its cycle. Reminded of how my poetry doesn’t reach its listeners, how it is stuck in a dense network of nothings in the sky, just like the yellow which doesn’t reach any of us — anymore. Or maybe, I am simply out of all my yellows. And from here, I need to either mix colors to make my own yellow or gradually embrace less bright shades with the same yearning. It is a shame though, I thought I had more time with the yellow. I thought my friends weren’t an illusion. I thought some flowers — especially the three of them were here to stay. for a while longer.
https://medium.com/soul-sea/out-of-yellows-1150799fb15
['Shringi Kumari']
2019-11-05 00:31:25.675000+00:00
['Prose', 'Summer', 'Self', 'Poetry', 'Weather']
Software Engineering — Software Process and Software Process Models (Part 2)
Software Process A software process (also known as a software methodology) is a set of related activities that leads to the production of the software. These activities may involve developing the software from scratch or modifying an existing system. Any software process must include the following four activities: Software specification (or requirements engineering): Define the main functionalities of the software and the constraints around them. Software design and implementation: The software is designed and programmed. Software verification and validation: The software must conform to its specification and meet the customer’s needs. Software evolution (software maintenance): The software is modified to meet changing customer and market requirements. In practice, these include sub-activities such as requirements validation, architectural design, unit testing, etc. There are also supporting activities such as configuration and change management, quality assurance, project management, and user experience, along with other activities that aim to improve the above by introducing new techniques and tools, following best practices, standardizing the process (so the diversity of software processes is reduced), and so on. When we talk about a process, we usually talk about the activities in it. However, a process also includes the process description, which covers: Products: The outcomes of an activity. For example, the outcome of architectural design may be a model of the software architecture. Roles: The responsibilities of the people involved in the process. For example, the project manager, programmer, etc. Pre- and post-conditions: The conditions that must be true before and after an activity. For example, a pre-condition of architectural design is that the requirements have been approved by the customer, while a post-condition is that the diagrams describing the architecture have been reviewed. A software process is complex; it relies on making decisions. There’s no ideal process, and most organizations have developed their own software process. For example, an organization that works on critical systems has a very structured process, while for business systems with rapidly changing requirements, a less formal, flexible process is likely to be more effective. Software Process Models A software process model is a simplified representation of a software process. Each model represents a process from a specific perspective. We’re going to take a quick glance at some very general process models. These generic models are abstractions of the process that can be used to explain different approaches to software development. They can be adapted and extended to create more specific processes. Some methodologies are sometimes known as software development life cycle (SDLC) methodologies, though this term can also be used more generally to refer to any methodology. Waterfall Model The waterfall model is a sequential approach, where each fundamental activity of a process is represented as a separate phase, arranged in linear order. In the waterfall model, you must plan and schedule all of the activities before starting to work on them (a plan-driven process). A plan-driven process is one where all the activities are planned first and progress is measured against the plan. In an agile process, by contrast, planning is incremental and it’s easier to change the process to reflect requirement changes.
The phases of the waterfall model are: Requirements, Design, Implementation, Testing, and Maintenance. The Waterfall Model The Nature of Waterfall Phases In principle, the result of each phase is one or more documents that should be approved, and the next phase shouldn’t be started until the previous phase has been completely finished. In practice, however, these phases overlap and feed information to each other. For example, during design, problems with requirements can be identified; during coding, some of the design problems can be found; and so on. The software process is therefore not simply linear but involves feedback from one phase to another. Documents produced in each phase may then have to be modified to reflect the changes made. When To Use? In principle, the waterfall model should only be applied when requirements are well understood and unlikely to change radically during development, as this model has a relatively rigid structure which makes it hard to accommodate change once the process is underway. Prototyping A prototype is a version of a system, or part of a system, that’s developed quickly to check the customer’s requirements or the feasibility of some design decisions. A prototype is therefore useful when a customer or developer is not sure of the requirements, or of algorithms, efficiency, business rules, response time, etc. In prototyping, the client is involved throughout the development process, which increases the likelihood of client acceptance of the final implementation. While some prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system. A software prototype can be used: [1] In requirements engineering, a prototype can help with the elicitation and validation of system requirements. It allows the users to experiment with the system and so refine the requirements. They may get new ideas for requirements and find areas of strength and weakness in the software. Furthermore, as the prototype is developed, it may reveal errors and omissions in the requirements. The specification may then be modified to reflect the changes. [2] In system design, a prototype can help to carry out design experiments to check the feasibility of a proposed design. For example, a database design may be prototyped and tested to check that it supports efficient data access for the most common user queries. The process of prototype development The phases of a prototype are: Establish objectives: The objectives of the prototype should be made explicit from the start of the process. Is it to validate system requirements, to demonstrate feasibility, etc.? Define prototype functionality: Decide what the inputs to and expected outputs from the prototype are. To reduce prototyping costs and accelerate the delivery schedule, you may ignore some functionality, such as response time and memory utilization, unless it is relevant to the objective of the prototype. Develop the prototype: An initial prototype is developed that includes only the user interfaces. Evaluate the prototype: Once the users are trained to use the prototype, they discover requirements errors. Using their feedback, both the specification and the prototype can be improved. If changes are introduced, then a repeat of steps 3 and 4 may be needed. Prototyping is not a standalone, complete development methodology, but rather an approach to be used in the context of a full methodology (such as incremental, spiral, etc.).
Incremental Development Incremental development is based on the idea of developing an initial implementation, exposing it to user feedback, and evolving it through several versions until an acceptable system has been developed. The activities of the process are not separated but interleaved, with feedback flowing across those activities. The Incremental Development Model Each system increment reflects a piece of the functionality that is needed by the customer. Generally, the early increments of the system should include the most important or most urgently required functionality. This means that the customer can evaluate the system at an early stage in the development to see if it delivers what’s required. If not, then only the current increment has to be changed and, possibly, new functionality defined for later increments. Incremental Vs Waterfall Model Incremental software development is better than a waterfall approach for most business, e-commerce, and personal systems. By developing the software incrementally, it is cheaper and easier to make changes in the software as it is being developed. Compared to the waterfall model, incremental development has three important benefits: The cost of accommodating changing customer requirements is reduced. The amount of analysis and documentation that has to be redone is much less than is required with the waterfall model. It’s easier to get customer feedback on the work done during development than when the system is fully developed, tested, and delivered. More rapid delivery of useful software is possible even if all the functionality hasn’t been included. Customers are able to use and gain value from the software earlier than is possible with the waterfall model. It can be plan-driven or agile, or both Incremental development is one of the most common approaches. It can be either plan-driven or agile, or both. In a plan-driven approach, the system increments are identified in advance, but in an agile approach, only the early increments are identified and the development of later increments depends on progress and customer priorities. It’s not problem-free But, it’s not problem-free… Some organizations have procedures that have evolved over time and can’t follow an informal iterative or agile process. For example, procedures to ensure that the software properly implements external regulations. System structure tends to degrade as new increments are added and gets corrupted as regular changes are incorporated. Even if time and money are spent on refactoring to improve the software, further changes become more difficult and costly. Spiral Model The spiral model is a risk-driven model where the process is represented as a spiral rather than as a sequence of activities. It was designed to include the best features of the waterfall and prototyping models, and it introduces a new component: risk assessment. Each loop in the spiral (from review till service — see figure below) represents a phase. Thus the first loop might be concerned with system feasibility, the next loop with requirements definition, the next loop with system design, and so on. The spiral model Each loop in the spiral is split into four sectors: Objective setting: The objectives and risks for that phase of the project are defined. Risk assessment and reduction: For each of the identified project risks, a detailed analysis is conducted, and steps are taken to reduce the risk.
For example, if there’s a risk that the requirements are inappropriate, a prototype may be developed. Development and validation: After risk evaluation, a process model for the system is chosen. So if the risk lies in the user interface, then we should prototype the user interface; if the risk is in the development process itself, then use the waterfall model. Planning: The project is reviewed and a decision is made whether to continue with a further loop or not. The spiral model has been very influential in helping people think about iteration in software processes and in introducing the risk-driven approach to development. In practice, however, the model is rarely used. Iterative Development The iterative development model aims to develop a system by building small portions of all the features, across all components. We build a product that meets the initial scope and release it quickly for customer feedback. An early version with limited features is important to establish a market and get customer feedback. In each increment, a slice of system features is delivered, passing through everything from requirements to deployment. The phases of iterative development The phases of iterative development are: Inception: The goal is to establish a business case for the system. We should identify all the external entities that will interact with the system and define these interactions, then use this information to assess the contribution that the system makes to the business. If the contribution is minor, then the project may be cancelled. Elaboration: We develop an understanding of the problem domain and the architectural framework, develop the project plan, and identify risks. Construction: Incrementally fills in the architecture with production-ready code produced from analysis, design, implementation, and testing of the requirements. The components of the system are dependent on each other; they’re developed in parallel and integrated during this phase. On completion of this phase, you should have complete working software. Transition: We deliver the system into the production operating environment. All the phases are done once, while the construction phase is revisited incrementally for each increment, that is, for each slice of system features. Agile Agility is flexibility; it is a state of being dynamic, adapted to the specific circumstances. The agile methods refer to a group of software development models based on the incremental and iterative approach, in which the increments are small and, typically, new releases of the system are created and made available to customers every few weeks. The principles of agile methods They involve customers in the development process to propose requirements changes. They minimize documentation by using informal communications rather than formal meetings with written documents. They are best suited for applications where the requirements change rapidly during the development process. There are a number of different agile methods available, such as Scrum, Crystal, Agile Modeling (AM), Extreme Programming (XP), etc. Incremental Vs Iterative Vs Agile You might be asking about the difference between the incremental, iterative, and agile models. Each increment in the incremental approach builds a complete feature of the software, while the iterative approach builds small portions of all the features.
An agile approach combines the incremental and iterative approaches by building a small portion of each feature, one by one, and then both gradually adding features and increasing their completeness. Reuse-oriented Software Engineering This approach attempts to reuse an existing design or code (probably already tested) that’s similar to what’s required. It’s then modified and incorporated into the new system. The Reuse-oriented software engineering model Although the initial “requirements specification” phase and the “validation” phase are comparable with those of other software processes, the intermediate phases in a reuse-oriented process are different. These phases are: Component analysis: A search is made for components to implement the given requirements specification. Usually, there’s no exact match, and the components may only provide some of the functionality required. Requirements modification: During this phase, the requirements are analyzed using information about the components that have been discovered. They are then modified to reflect the available components. If the modifications are impossible, the component analysis activity may be re-entered to search for alternative solutions. System design with reuse: During this phase, the framework of the system is designed or an existing framework is reused. The designers take into account the components that are reused and organize the framework accordingly. Some new software has to be designed if reusable components are not available. Development and integration: The components are integrated to create the new system. System integration, in this model, may be part of the development process rather than a separate activity. There are basically three types of software components that can be used in a reuse-oriented process: Web services that are developed according to well-known service standards and which will become available for remote invocation. Collections of objects that are developed as a package to be integrated with a component framework such as .NET or Java EE. Standalone software systems that are configured for use in a particular environment. It has an obvious advantage, But! Reuse-oriented software engineering has the obvious advantage of reducing the amount of software to be developed, and therefore reducing cost and risk, and it usually leads to faster delivery. However, requirements compromises can’t be avoided, which may lead to a system that does not meet the real needs of users. Furthermore, some control over the system’s evolution may be lost, as new versions of the reusable components are not under the control of the organization using them. Summary Waterfall It’s useful when the requirements are clear, or when following a very structured process, as in critical systems, which need detailed, precise, and accurate documents describing the system to be produced. It’s not good when requirements are ambiguous, and it doesn’t support frequent interaction with the customers for feedback and proposing changes. It’s not suitable for large projects that might take a long time to develop and deliver. Prototype Again, a prototype is an early sample or release of a product built to test a concept or to act as a thing to be replicated or learned from. It is very useful when requirements aren’t clear; the interactions with the customer and experimenting with an initial version of the software result in high satisfaction and clarity about what is to be implemented.
Its downsides are that good tools need to be acquired for quick development (such as coding tools) in order to complete a prototype. In addition, the cost of training the development team on prototyping may be high. Incremental & Iterative These are suited for large projects and are less expensive to adapt to changing requirements, as they support customer interaction with each increment. Initial versions of the software are produced early, which facilitates customer evaluation and feedback. They don’t fit small projects, or projects that the waterfall model is best suited for: a structured process with a detailed and accurate description of the system. Spiral It’s good for high-risk or large projects where the requirements are ambiguous. The risks might be due to cost, schedule, performance, user interfaces, etc. Risk analysis requires highly specific expertise, and the project’s success is highly dependent on the risk analysis phase. It doesn’t work well for smaller projects. Agile It suits small-to-medium-sized projects with rapidly changing requirements, as the customer is involved during each phase. Very limited planning is required to get started with the project. It helps the company save time and money (as a result of the customer’s direct involvement in each phase). The daily meetings make it possible to measure productivity. It is difficult to scale up to large projects where documentation is essential. A highly skilled team is also needed. If team members aren’t committed, the project will either never complete or will fail. And there’s always a limitation on time, as in increments, meetings, etc. Process Activities The four basic process activities of specification, development, validation, and evolution are organized differently in different development processes. In the waterfall model, they are organized in sequence, while in incremental development they are interleaved. How these activities are performed might depend on the type of software, the people involved in development, etc.
https://medium.com/omarelgabrys-blog/software-engineering-software-process-and-software-process-models-part-2-4a9d06213fdc
['Omar Elgabry']
2017-09-28 07:05:46.937000+00:00
['Agile', 'Software Development', 'Software Engineering', 'Software']
Photographers, Here’s How To Get Followers on Instagram
Photographers, Here’s How To Get Followers on Instagram You may know about photography, but you need to learn more about Marketing Photo by Benjamin Combs on Unsplash If you’re a photographer, chances are you are active on Instagram. You’ve listened to all the gurus on YouTube, and you’ve read all the content available about personal branding and social media. Deep down, you know that you made the right choice by launching your Instagram account. You’ve started posting now and then. You’re sharing the photographs you’re the proudest of, but you might be kind of disappointed by the results you’re getting in return: not that many likes, your number of followers isn’t growing quickly, and you’re not getting more jobs because of your account. You thought that you had the advantage of being a photographer on Instagram since the platform is so visual. You thought you could produce better content than most people since you’re using fancy gear, you edit tastefully in Lightroom and Photoshop, and you’re using the best hashtags. Now you’re disappointed because you realize it’s hard and it takes more time than it seems. As a marketer who also does photography on the side, I can tell you one thing, though: photographers are great at taking pictures (duh), but they’re terrible at marketing. You can be the most talented photographer on Earth, but if you suck at putting an audience in front of these photographs, no one will know you’re the best! That’s what marketing is: putting an audience in front of your images. For some reason, most photographers have a tough time doing that. Acknowledge The Competition Lots of photographers have decided to use Instagram to share their work. That makes total sense: Instagram is a social platform that relies on sharing images. Photographers had an unfair advantage at the beginning of the platform, as most people were using their phones and the built-in photo filters to share images of their daily lives. With an expensive DSLR and a good quality lens, it was quite easy to create much better content than what was already posted on the platform. Naturally, photographers started to get on the platform and post amazing-looking content. Nowadays, in 2020, the quality of the average picture shared on Instagram is so high. Almost every photo posted comes from either a very high-end camera or the latest iPhone. Everything is professional looking, very highly edited, and so thought through to get more exposure, likes, and comments on posts. Needless to say, while being an excellent photographer with a little bit of gear was a fantastic way to do much better than others in 2013, it is certainly not enough in 2020. Professional images are now a standard on Instagram. The photos alone won’t be enough to stand out on Instagram. I’m sure you’re highly talented, but take a look at all these pictures that are posted. I’m not even talking about brands and photographers. Look at regular people, bloggers, celebrities. Even your friends with the latest iPhone who do not know about photography can post fantastic content on their feed. It’s important to understand that your talent is not going to be enough if you want to have some success on the platform. Of course, it’s better to be excellent and talented than bad and tasteless, but being good at photography is now the minimum on Instagram. Using the Right Hashtags One of the best ways to get people to discover your work on Instagram is by using hashtags. If you are not using any, good luck with your growth.
I know it’s tempting to go on the profile of super successful photographers to copy their list of hashtags and paste them on your posts. That may seem like the right thing to do, but this is a wrong practice. The best way to get any traffic to your work from hashtags is to be featured in the most popular posts of these hashtags. This means that if the hashtag that you are planning on using is very highly popular, you will need a lot of interactions on your post to get in the most popular posts. The successful photographer that you are stealing the hashtags from may have enough likes and comments on his content to rank on these general hashtags, but that might not be the case for you. If you want to find the best hashtags to use for your post, try looking for smaller hashtags that are maybe more specific. From there, look at the most popular posts in it and try to see how many likes and comments they have. If you get the same amount on average, that means that you probably can use that hashtag. Otherwise, keep looking for different ones and even smaller ones. I know it’s tempting to use the most popular hashtags possible like #photography or #landscapes, but unless you’re averaging 10,000 likes per post, I wouldn’t even bother using them. Attract the Right Followers This is an important one: who do you want to attract to your work? Do you want other photographers to look at your images? Do you want regular people? People in a particular industry? These are such important questions to answer. The answer to these questions is going to determine the hashtags that you are going to use. Now that you know which size of hashtags you should look for, you need to focus on which tags you will be using the attract the right people. If you are a wedding photographer, the chances are that you want to attract engaged people to your work. You probably don’t care that much about other photographers since they probably won’t give you business. In this example then, it’s a better idea to use tags such as #WeddingPhotographer, or #Engagement, or #BrideAndGroom than using the hashtag of your gear, your lens and so on. Engaged people planning a wedding and looking for a photographer are not going to look through #SonyA7III or #24mmGM. There are no “good” and “bad” hashtags. It all depends on who you want to attract to your content. Always keep in mind that you are posting your photos to show it to a specific audience. Your job with these hashtags is to make sure that you put that audience in front of your work. Interact With Others The one last thing about Instagram: Instagram is a social media platform. That means that you shouldn’t only post your work on there, you should also interact with other people. You should spend time on the platform, follow who you want to follow, like posts, comment, reply to comments, DM, use stories, and so on. This point is always forgotten or underestimated. People love interactions, and that’s why they spend time on social media. If they want to look at pretty pictures, there are millions of accounts to follow, websites to visit, and so on. It doesn’t matter how talented you are at that point; you probably can’t compete with the outstanding volume of fantastic content out there. Not only should you be engaging in your captions, but you should reply to the comment you’re getting, you should explore other people’s accounts, follow them, give them some likes when you like what they do. That’s what Instagram is all about. 
Too often, I would see photographers post a shot, without any caption, with just a list of 30 general hashtags. That makes no sense at all. As a user, we don’t have context. We don’t know why you took the shot, how you took it. There is no story, no context, nothing. The shot may be good, but that’s not enough for people to like, comment, and eventually share it with others. The more you engage with people, the more they will engage with you and the more they will discover your work, like it, and become fans. Next time you post on Instagram, keep all this advice in mind. Make sure you’re posting content that’s relevant to an audience you want to reach. Make sure you’re using the right hashtags and the right size of hashtags. Don’t forget to be social, to give your opinion on other people’s posts, to provide context about who you are and what you do, to follow other accounts that you like. If you do all this, I guarantee you’ll get more results in a week than you got in the past two months. If you don’t, reach out to me, and I’ll try to help.
https://charlestumiotto.medium.com/photographers-heres-how-to-get-followers-on-instagram-d7d0d42d4923
['Charles Tumiotto Jackson']
2020-10-15 14:41:07.035000+00:00
['Instagram', 'Business', 'Marketing', 'Photography', 'Social Media']
The Siren’s Song: How Altruism Leads to Death
Most people in the modern world have been fed a constant diet of altruism. In a vague sense, we have all accepted it as a virtue. If humanity were more altruistic, the world would be a better place. I want to argue a slightly different point… If the world were truly altruistic, we would all be dead. There was a time when I agonized over any money spent on myself. If someone gave me money, I wanted to give it away. If someone gave me a new iPhone, I wanted to sell it and donate the money to the poor. I believed that any money spent on myself above and beyond what I needed was selfish and, therefore, wrong. I remember one day I had a startling realization. I saw myself 20 years in the future, agonizing over whether or not I should buy a pizza for my family. I knew my thinking was screwed up, but I didn’t know how. I only knew that I was in a prison. And, given more time, that prison would only get smaller and smaller. I know now that prison has a name: Altruism. And I know why it crippled me, because even though it sounds virtuous, it leads to death. Merriam-Webster defines Altruism as such: “1: unselfish regard for or devotion to the welfare of others 2: behavior by an animal that is not beneficial to or may be harmful to itself but that benefits others of its species” Altruism looks like sacrifice for the sake of your neighbor. Altruism calls us to forego our own needs for the sake of those around us. As a culture, we can exalt altruism as the pinnacle of humanity. But I believe Proverbs may contain a warning for us in this: “There is a way that seems right to a man, But its end is the way of death.” Proverbs 14:12 Let’s take a closer look at altruism and see where it leads… Healthy people love themselves, meaning they highly value themselves. They work, eat, and sleep because they love their life and they want to keep living. Their love of self is what motivates them. Altruism declares that evil comes from selfishness and greed. If people weren’t as selfish, the world would be less evil. So far so good. But then it goes haywire. Altruism goes on to categorize any action done out of self-love as selfishness. Thus, good is defined as anything done for others, and evil is defined as anything done for yourself. Many people can’t accept this in its purest form. So you will hear compromises, but what you’re really hearing is a desperate cry for life. People will say, “Some selfish actions are necessary so that I can help others.” So, eating, buying things for yourself, and making money must be done for the sake of others. Anything done for yourself must be a means of enabling you to help your neighbor. I’ve heard many people use the airplane/air-mask analogy to illustrate this point. When you get on an airplane, they go through a safety procedure. They say that in case of an emergency, air masks will drop from the ceiling. They then tell you to put your own mask on first so that you can then help those around you. If you can’t breathe, you won’t be of much help to anyone. Ministers will often use this analogy to describe why it is okay to spend time on yourself. If you aren’t spiritually healthy, then you won’t be able to help anyone else. But again, this means that all your actions are done so that you can help people. Your value does not come from your sense of self, but rather from how you can help others. Your “love of self” (if this can even be called love) is a means to someone else’s end. Altruism states that you only have value so long as you are valuable to others.
People don't usually think this through. If your value is determined by how you can help society, what happens when you can no longer help society? What about senior citizens? Or the handicapped? It could be argued that for society to be truly altruistic, seniors and handicapped people should commit suicide. They wouldn't value themselves (because they couldn't help anyone — and their value comes from how they can help people), so they have no reason to live. Under altruism, I can't love myself. It is selfish. And selfishness is evil. Some selfish actions are necessary so long as they equip me to help others. But once I can't help others, I have no value. Altruism forces you to hate yourself. And ultimately, it values death over life. Jesus said to love your neighbor as you love yourself. Your love for self was supposed to be your standard for how you love those around you. In order for this statement to make sense, it requires that you love yourself! Otherwise, if we were supposed to be altruistic, it would mean that we would have to hate our neighbors (which, I would argue, altruistic people end up doing). Life was meant to be lived, loved, and enjoyed. We put our masks on because we want to live. We go to work because we want money to buy things we enjoy (material things as well as freedom to do the things we love and be with friends and family). We help those around us because we love them and we want them in our lives. Real love for others stems from a strong sense of self. The love of others is a natural byproduct of a love for self. Your love for self is not a means to an end. It is an end in and of itself. With that, I bid thee to go love yourself and life. The world will be a much better place if you do 🙂
https://medium.com/christian-intellectual/the-sirens-song-how-altruism-leads-to-death-a7b2120fde53
['Sean Edwards']
2018-07-19 13:56:21.045000+00:00
['Life Lessons', 'Motivation', 'Ethics']
10 alternate ways to spend New Year's eve stuck in tier 3.
10 alternate ways to spend New Year's Eve stuck in tier 3. With Thanos, wine, 24-hour parties, and a mini world tour. Photo by Immo Wegmann on Unsplash 2020 showed us how precious life is, so next year don't let fear win. Book that flight. Apply for that opportunity, say I love you more. Read those books, start that business. Stop procrastinating. Block that person. Learn that skill, Take that risk. Live while you can. — Steven Bartlett With new strains of Covid-19 being found in the United Kingdom and other parts of the world, everyone is both shocked and sad; the virus is mutating rapidly, and there are fears the existing vaccines may struggle to tackle it. Both the spread and the strength of this new strain appear to outstrip those of the older virus. The holidays are here, and all the world hoped for was a peaceful, relieving end to this tragic year. Instead, much of the world is under strict covid guidelines, flights are being canceled, and countries are shutting down their borders, leaving people stuck in places without family to hold on to. But let's keep the holiday spirit high, hold on to our faith, and pray for a more pleasant 2021. Here are 10 ways you can welcome the new year without compromising your safety or anyone else's, and which might soften the blow of having to stay at home. 1. Go to bed so that 2020 is over early. Photo by Stephen on Unsplash Because yes, people, let's agree that 2020 was pretty messed up. The number of lives lost, the families affected by the loss of a loved one, and the struggle to stay afloat financially made it anything but a comfortable year. The economy crumbled, the real estate market was jolted, and mental health was at its lowest. So you are totally allowed to get in bed early and let this year pass by leisurely and quietly. No one is judging you, folks. 2. Pick up those cookbooks, put on some music, and pour yourself a glass of wine. Photo by jimmy dean on Unsplash Cook an elaborate meal at home for your loved ones. Involve your lover, friends, or even kids. It's a great bonding activity which is fun too. And those kids will thank you later for passing on a skill or maybe a whole tradition of cooking together. Most of the fancy dining places have age restrictions, so you can celebrate this new year with your toddlers or the teenagers you are never able to get a hold of. Cooking food at home always comes cheap and leaves you with leftovers too. Make sure to plant a kiss on your partner's lips. 3. Start Avengers Infinity War at 9:48:51 pm so that Thanos wipes out half the universe as the new year rings in. Photo by Morning Brew on Unsplash Okay, this is called humor, kids. Most of us are big fans of the Marvel and DC universes; we either grew up watching the movies or reading the comics, and the nerds like myself did both. So put on that favorite superhero movie and submerge yourself into the world of fantasy and nostalgia. Superheroes make us believe in the good and the power of sticking together, and that's what we need the most at this time. Let us escape reality altogether. 4. Being of service to others is a true joy. Photo by Philip Veater on Unsplash We spend hundreds and even thousands of dollars on our new year celebrations. We buy clothes we'll never wear again and hang out with people we might not even like. Let's contribute less to the fast-fashion garbage and more to making somebody's life better.
A hundred dollars might not make any significant difference in your life, but that's probably weeks' worth of food for a mother in need or the school fees for one entire academic year in third world countries. Give some kid the gift of education, put warm clothes on someone's back and hot food in their belly. So let's be considerate and donate to the food banks, the homeless, or anyone in need and make the holidays warm for everyone. Make the holidays special for others, and the harmony you'll feel within you can't be compared to any momentary material spark. 5. Some awesome fast food followed by rubbish on television till midnight. Photo by Jarritos Mexican Soda on Unsplash We should stop beating ourselves up and trying to be relevant on social media all the time. Let's embrace the hidden introvert and stay-at-home person within us. There is no greater delight than chewing on the familiar taste of the fast food we all grew up with, wrapped in our own cozy blanket. A fancy bottle of wine is amazing, but a chilled bottle of beer is unbeatable. So grab that kebab, burger, taco, or big box of pizza and let's stuff our faces with simple but wonderful food. Let's start the new year by embracing simpler times. 6. Travel the world via your telly. Photo by Erik Mclean on Unsplash We all know what a headache it is to witness the new year at Times Square, Switzerland, or Sydney Harbor. The extreme cold and over-the-top prices, along with the crowds of tourists and locals, don't really have the right ring to them. It's the biggest fuss to get into the most happening restaurants or events without breaking a sweat or the bank. So let's do ourselves a favor and watch New Year's celebrations around the world via the media. After all – Happiness cannot be traveled to, owned, earned, worn, or consumed. Happiness is the spiritual experience of living every minute with love, grace, and gratitude. Denis Waitley 7. Party like an old soul with your housemates at home. Photo by the sun Parties were never meant to be only outdoor events; in fact, to date some of the coolest parties are exclusive and invite-only. While the world flocks to those landmarks for the ball drop, celebs like Kendall and Gigi and other royals and elites around the world choose intimate house parties with loved ones instead of a room full of strangers at the Ritz or a crowd of tourists at the Eiffel Tower. So dress yourself up like it's a black-tie event and dance the night away in your own home without the worry of driving back. Be the Grace Kelly in your own castle. 8. Order a fancy food box Photo by Kouji Tsuru on Unsplash Support your small and local businesses by ordering in those exceptional New Year's food boxes, and don't forget to tip the delivery person well. Almost all restaurants have come up with uniquely themed statement food boxes to bring that magical New Year's dining experience to your home. Just pick up your phone, search for your favorite cuisines around your area, and have a fuss-free, lip-smacking meal delivered to your doorstep. 9. Attend those virtual parties or concerts and turn your living room into a pub. Photo by Desighecologist on Unsplash Live singing events, DJ gigs, and even 24-hour parties are happening throughout the world. Be a global audience and go party hopping in any country you wish. This year you can have it all. Wash this terrible year off with lots of drinking and dancing, of course without getting into any trouble and while knowing the legal limitations. Your liberty only extends to the point where you are not infringing on others. 10.
Simply work, play, and read. Photo by Thought Catalog on Unsplash I do have something for the introverts and workaholics too: be unapologetic and don't worry, you are not missing out on anything. So code your night away, or finish reading two books in a night, or play video games for 4 hours, or do all of them simultaneously; it's absolutely your choice. Do whatever makes you happy and don't cave in under peer pressure.
https://medium.com/age-of-awareness/10-alternate-ways-to-spend-new-years-eve-stuck-in-tier-3-e69ad4b418a6
['Sophia Malik']
2020-12-28 12:20:17.069000+00:00
['Society', 'Mindfulness', 'Life', 'Culture', 'Self']
Interviewing at Google: Key Things (and Languages) to Know
Interviewing at Google: Key Things (and Languages) to Know Yes, you could land a job at the search engine giant, if you have the skills. Many technologists want to work at Google, and with good reason. In addition to handsome compensation and great perks, the search-engine giant offers the opportunity to work on some truly groundbreaking projects, from mobile app development to quantum computing. However, actually landing a job at Google is easier said than done. Although the firm long ago abandoned the infamous brainteasers that distinguished its interview process, it still subjects many candidates to multiple interviewing rounds, with an emphasis on evaluating not only your technical knowledge, but how well you’d work with potential team members and managers. Google’s interview process usually kicks off with a phone interview, during which you might have to write code in a shared Google Doc that your interviewer can view. That interview may also involve other kinds of problem-solving and behavioral questions. In ordinary times, that’s usually followed by an onsite interview, where your interviewer(s) will ask questions designed to evaluate four areas: Leadership ability Problem-solving ability (termed your ‘general cognitive ability’) Knowledge related to the role “Googleyness” (whether you’re a cultural fit, in other words) For software engineering candidates, the interview will focus on how you think through complex problems, including data structures; you’ll have to defend your solutions and thinking, as well as prove that you have all the skills you listed in your application. Google itself has a video that breaks down this process a little more: In the video, one of the recruiters talks about testing candidates’ coding skills on a whiteboard, although Google’s own career FAQ suggests that whiteboards have been largely shunted aside in favor of coding on laptops, so that’s confusing. If you’re applying for a job at Google, it might pay to familiarize yourself with rapidly sketching out code on a whiteboard and a laptop, just to be safe. As with interviewing at any other company, technologists should make sure their answers are clear, that they’re capable of talking through their previous experiences and challenges in a way that shows what they’ve learned, and that they can explain how they arrived at particular solutions. Any software engineer with an interest in Google is probably curious about the programming languages that the company is hiring for. Fortunately, we have Burning Glass, which collects and analyzes millions of job postings from across the country; it gives us insight into the skills that Google has requested from candidates over the past 90 days. Here’s the breakdown: That Python tops the list should come as no surprise. Long an ultra-popular “generalist” programming language, it’s increasingly deployed in specialist contexts such as data science. If you’re new to Python (or you just need a refresher), check out Python.org, which offers tons of documentation, including a useful beginner’s guide to programming. Microsoft also has a video series, “Python for Beginners,” with dozens of short videos that cover everything from “Hello world” to calling APIs. Java also has a variety of online learning resources, including Codeacademy, as well as extensive documentation. 
A Google interview may also focus on your abilities with Kotlin, which has been positioned not only as a “first class” language for Android development, but also a Java upgrade (in an interesting twist, though, Kotlin didn’t make the Burning Glass list). The relatively strong presence of Objective-C is likewise something to note: This older language is used to build and maintain apps within Apple’s ecosystem, including macOS and iOS. Google’s need for technologists skilled in this language suggests it’s either maintaining a lot of legacy code, or it doesn’t have a lot of internal interest in Swift, Apple’s newer development language. Whatever languages and positions you might pursue, it’s clear that Google demands you know your stuff. Keep that in mind when applying for a job at the company.
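If you want to get a feel for the format, it helps to rehearse narrating a small data-structure problem out loud while coding it on a laptop or whiteboard. The sketch below is not an actual Google question; it is just a hypothetical warm-up of the kind interviewers often use, written in Python since that language tops the list above. In an interview, the code itself matters less than explaining the trade-off, here swapping a hash map's extra memory for linear time.

```python
# Hypothetical warm-up: return the indices of the two numbers that sum to target.
# A dictionary lookup avoids the O(n^2) brute-force check of every pair.
def two_sum(nums, target):
    seen = {}  # value -> index where we saw it
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return [seen[complement], i]
        seen[value] = i
    return []  # no pair adds up to target


if __name__ == "__main__":
    print(two_sum([2, 7, 11, 15], 9))  # prints [0, 1]
```

Whatever problem you get, talk through your reasoning, state the complexity, and mention the edge cases (empty input, duplicates) before declaring yourself done.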
https://medium.com/dice-insights/interviewing-at-google-key-things-and-languages-to-know-a9ec9933975a
['Nick Kolakowski']
2020-09-24 14:56:01.878000+00:00
['Job Interview', 'Technology', 'Google', 'Job Hunting', 'Programming']
I am a time traveler begging you to stop what you are doing
It started with expensive safes, then began to include bodyguards, and today, “earlies” (our term for early adapters), as well as those rich whose wealth survived the “transition” live in isolated gated cities called Citadels, where most work is automated. Most such Citadels are born out of the fortification used to protect places where Bitcoin mining machines were located. The company known as ASICMiner is known here as a city where Mr. Friedman rules as a king. In my world, soon to be your world, most governments no longer exist, as Bitcoin transactions are done anonymously and thus governments can’t enforce taxation on their citizens. Exchanges of services are nearly all online, totally decentralized and anonymous. Most of the success of Bitcoin is due to the fact that Bitcoin turned out to be an effective method to hide your wealth from the government. Whereas people entering “rogue states” like Luxembourg, Monaco and Liechtenstein were followed by unmanned drones to ensure that governments know who is hiding wealth, no such option was available to stop people from hiding their money in the blockchain. Governments tried to stay relevant by buying Bitcoins, this made the problem worse, as it increased the value of Bitcoin further. Governments did so in secret of course, but the generation’s “Snowdens” have become greedy government employees who transferred Bitcoin to their own private wallet, and escaped to anarchic places where no questions are asked as long as you can cough up some crypto. The four institutions with the largest still accessible Bitcoin balance are believed to be as following: ASICMiner: 50,000 Bitcoin The IMF’s “Currency Stabilization Fund”: 70,000 Bitcoin Government of Saudi Arabia: 110,000 Bitcoin The North Korean government: 180,000 Bitcoin Economic growth today is about -2% per year. Why is this? If you own more than 0.01 Bitcoin, chances are you don’t do anything with your money. There is no inflation, and thus no incentive to invest your money. Just like the medieval ages had no significant economic growth, as wealth was measured in gold, our society has no economic growth either, as people know their 0.01 Bitcoin will be enough to last them a lifetime. The fact that there are still new Bitcoins released is what prevents our world from collapse so far it seems, but people fear that the decline in inflation that will occur during the next block halving may further wreck our economy. What happened to the Winklevoss twins? They were among the first to die. After seeing the enormous damage done to our society, terrorist movements emerged that sought to hunt down and murder anyone known to have a large balance of Bitcoin or believed to be responsible in any way for the development of cryptocurrency. Ironically, these terrorist movements use Bitcoin to anonymously fund their operations. Most people who own any significant amount of Bitcoin no longer speak to their families and lost their friends, because they had to change their identities. There have been also been a few suicides of people who could not handle the guilt after seeing what happened to the bag-holders, the type of skeptical people who continued to believe it would eventually collapse, even after hearing the rumors of governments buying Bitcoin. Many people were taken hostage, and thus, it is suspected that 25% percent of the “Bitcoin riches” actually physically tortured someone to get him to spill his wallet. Why didn’t we abandon Bitcoin, and move to another system? Well, we tried of course. 
We tried to step over to an inflationary cryptocurrency, but nobody with an IQ above 70 was willing to step up first and volunteer. After all, why would you voluntarily invest a lot of your money into a currency you know would continually decline in value? The thing that made Bitcoin so dangerous to society was also what made it so successful. Bitcoin allows us to give in to our greed. In Africa, surveys show that an estimated 70% of people believe that Bitcoin was invented by the devil himself. There’s a reason for this, it’s a very sensitive issue that today is generally referred to as “the tragedy”. The African Union had ambitious plans to help its citizens be ready to step over to Bitcoin. Governments gave their own citizens cell phones for free, tied to their government ID, and thus government sought to integrate Bitcoin into their economy. All went well, until “the tragedy” happened. A criminal organization, believed to be located in Russia, exploited a hardware fault in the government-issued cell phones. It’s believed that the entire continent of Africa lost an estimated 60% of its wealth in a period of 48 hours. What followed was a period of chaos and civil war, until Saudi Arabia and the North Korean government, two of the world’s major superpowers due to their authoritarian political system’s unique ability to adapt to the “Bitcoin challenge”, divided most African land between themselves and were praised as heroes by the local African population for it. You might wonder, what is our plan now? It’s clear that the current situation can’t last much longer, without ending in a nuclear holocaust. I am part of an underground network, who seek to launch a coordinated attack against the very infrastructure of the Internet itself. We have at our disposal about 20 nuclear submarines, which we will use to cut all underwater cables between different continents. After this has been successfully achieved, we will launch a simultaneous nuclear pulse attack on every dense population area of the world. We believe that the resulting chaos will allow the world’s population to rise up in revolt, and destroy as many computers out there as possible until we reach the point where Bitcoin loses any relevance. Of course, this outcome will likely lead to billions of deaths. This is a price we are forced to pay, to avoid the eternal enslavement of humanity to a tiny elite. This is also the reason we are sending this message to your time. It doesn’t have to be like this. You do not have to share our fate. I don’t know how, but you must find a way to destroy this godforsaken project in its infancy. I know this is a difficult thing to ask of you. You believed you were helping the world by eliminating the central banking cartel that governs your economies. However, I have seen where it ends.
https://medium.com/cryptolinks/i-am-a-time-traveler-begging-you-to-stop-what-you-are-doing-ab33e335e2ca
[]
2020-04-12 16:58:08.031000+00:00
['Satire', 'Future', 'Cryptocurrency', 'Bitcoin', 'Economics']
Scotland Becomes First Country to Make Period Products Free
In 2018, Scotland made period products free and accessible in every school, college, and university in the country. The new law will incorporate and solidify this measure. Period products will also be available in all public buildings and facilities. The legislation is expected to cost around £8.7 million ($11.6 million) per year. While period poverty does not affect every single person who menstruates, it is much more rampant than one might think. According to a study conducted by the grassroots group Women for Independence, nearly one in five women have experienced period poverty in Scotland. Another study published by Plan International UK reveals that one in ten girls in the UK are completely unable to afford period products. The global coronavirus pandemic has only made this worse. Of course, period poverty is not just a cisgender women's issue. It can affect all people who menstruate, including transgender and nonbinary people. In the UK, people who menstruate typically spend around £13 ($17.37) per month on tampons and pads, and many struggle to afford the cost. Not being able to afford period products can not only cause people who menstruate to miss work and school, but it can also lead to urinary tract infections and reproductive health issues. Ensuring that period products are free and accessible to all who need them will create a fairer, more equitable society and allow all people who menstruate to reach their full potential. "We have shown that this Parliament can be a progressive force for change when we collaborate," Lennon said before the vote on Tuesday. "Our prize is the opportunity to consign period poverty to history. In these dark times, we can bring light and hope to the world this evening. Scotland will not be the last country to make period poverty history." In the US, there are currently no state or federal laws mandating free period products. Only four states have passed laws requiring schools to provide students with free period products. Some states have outlawed the tampon tax — the sales tax on tampons and other period products from which other basic health necessities are exempt. The UK has also moved to scrap the tampon tax, although the ban does not go into effect until January 2021.
https://catherineann-caruso.medium.com/scotland-becomes-first-country-to-make-period-products-free-177753295a2c
['Catherine Caruso']
2020-11-26 00:58:21.460000+00:00
['Equality', 'Health', 'Politics', 'LGBTQ', 'Feminism']
How we increased active Pinners with one simple trick
John Egan | Pinterest engineer, Growth For many areas of growth, presenting your message with the right hook to pique a user’s interest and to get them to engage is critical. Copy is especially important in areas such as landing pages, email subject lines or blog post titles, where users make split second decisions on whether or not to engage with the content based on a short phrase. Companies like BuzzFeed have built multi-billion dollar businesses in part by getting this phrasing down to a science and doing it more effectively than their competitors. At Pinterest, we knew copy testing could be impactful, but we weren’t regularly running copy experiments because they were tedious to setup and analyze in our existing systems. This made it difficult to do the type of iteration necessary to optimize a piece of copy. Last year, however, we built a framework called Copytune to help address these issues. The framework has helped us optimize copy across numerous languages and significantly boost MAUs (Monthly Active Users). In this post, we’ll cover how we built Copytune, the strategy we’ve found most effective for optimizing copy and some important lessons we learned along the way. Building Copytune When we decided to build Copytune, we had a few goals in mind: Optimize copy on a per-language basis by running an independent experiment for each language. What performs best in English won’t necessarily perform the best in German. Make copy experiments easy to set up, and eliminate the need to change code in order to setup an experiment. Have copy experiments auto-resolve themselves. When you’re running 30+ independent experiments (one experiment for each language), each with 15 different variants, it becomes too much analysis overhead to have a human go in and pick the winning variant for each language. To achieve these goals, we built a framework that mimicked the API for Tower, the translation library that every string passes through. We first had every string pass through Copytune, which would check the database to see if there was an experiment setup for that string. If so, it would return one of the variants. If the string was not in experiment, Copytune would then pass the string to Tower to get the correct translation of the string. A nightly job would then compile statistics on all the copy experiments and would automatically shut down experiments when there was enough data to declare the winner. Copy optimization strategy Testing copy requires an iterative process to achieve the best results. It’s almost impossible to identify the ‘best’ copy in one go, so we took an incremental approach to discover it. Explore Phase: You can’t know for sure what will work, so we started by testing many variants that touch on very different themes, tones, etc. We typically brainstorm 15–20 different variants. For example: The latest Pins in Home Decor Come see the top Pins in Home Decor for 12/3/2015 We found a few Pins you might like Refine Phase: After the Explore Phase, we began to see which tones and phrasing were performing best. Then we could refine by testing different components of the winning variants of the Explore Phase. Let’s say that in Explore Phase, the winner was “We found some {pin_keyword} and {pin_topic} Pins and boards for you!”. There are many possible optimizations we can test in this example. We can try adding “Hey Emma!” at the beginning to catch the Pinner’s attention. We can even test whether “Hi Emma!” or just “Emma!” is better than “Hey Emma!”. 
We can test some phrases like “we found” vs. “we picked.” We can test if “Pins and boards” is better than just having “Pins” or “Boards.” In this example there are at least 10 components we can test. We treat them as independent components and test each of them against the winner. Combine Phase: Let’s say “Hi Emma!”, “we picked” and “{pin_topic}” were winners in the Refine Phase. We can now test if the combination Refine Phase winners (a) performs better than the original winner (b) ​​​“Hi Emma! We picked some {pin_topic1} and {pin_topic2} Pins and boards for you!” “We found some {pin_keyword} and {pin_topic} Pins and boards for you!” Note that it’s possible that some components are not independent, so we also tested other combinations that seem promising. ​​​In one of our highest volume emails, the winning variant from the Explore Phase showed only a gain in one percent open rate. By the end of the whole iteration, optimizing the subject line on one email boosted it to an 11 percent gain, adding hundreds of thousands more active Pinners each week. Lessons learned Copytune has been in place for almost a year now, and we’ve learned some lessons along the way: Defining Success: When we initially started testing email subject lines, we defined the success criteria as driving an email open. This seemed to be the most straightforward since the Pinner reads the email subject line and the next action is to either open it or don’t. What we found, however, was that defining success with metrics that were further downstream (i.e. clicking on the content in the email) was more effective. Some subject lines were great at getting opens, but there was a mismatch in user expectations based on the subject line and the actual content in the email so net net they were actually resulting in fewer clicks. Picking Variants: The original vision for Copytune was to use a Multi Armed Bandit framework for picking variants and auto-resolving experiments. The difficulty we ran into was feature owners wanted to see how the experiment performed across a variety of metrics and to be able to report concrete MAU gains from the experiment. To accommodate these needs, we ultimately needed to integrate Copytune with our internal A/B testing framework. Acknowledgements: Koichiro Narita for co-writing this post, helping develop Copytune, and running the subject line experiments covered in this post. Devin Finzer and Sangmin Shin for helping develop Copytune.
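Pinterest has not published Copytune's source, so the snippet below is only a rough sketch of the wrapper pattern described in the post: every string passes through Copytune first, which serves an experiment variant when one exists for that string and language, and otherwise falls back to the Tower translation. All class, method, and field names here are placeholders, not Pinterest's real API.

```python
import random


class Copytune:
    """Minimal sketch of a per-language copy-experiment wrapper (names are hypothetical)."""

    def __init__(self, experiment_store, tower, event_logger):
        self.experiment_store = experiment_store  # dict: (string_key, language) -> list of variants
        self.tower = tower                        # fallback translation library
        self.event_logger = event_logger          # records which variant each user saw

    def get_text(self, string_key, language, user_id):
        variants = self.experiment_store.get((string_key, language))
        if variants:
            # One independent experiment per language: deterministically bucket this user.
            rng = random.Random(hash((string_key, language, user_id)))
            variant = rng.choice(variants)
            self.event_logger.log(user_id=user_id, key=string_key,
                                  language=language, variant=variant)
            return variant
        # Not under experiment: use the normal translation path.
        return self.tower.translate(string_key, language)
```

A nightly job would then join the served-variant log against downstream metrics (such as email clicks, per the lesson above) and automatically close each language's experiment once a winner is statistically clear.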
https://medium.com/pinterest-engineering/how-we-increased-active-pinners-with-one-simple-trick-a157f0a527b9
['Pinterest Engineering']
2017-02-21 19:39:44.790000+00:00
['Growth', 'Experiment', 'Klp', 'Engineering', 'A B Testing']
Container and System Monitoring with Docker, Telegraf, Influxdb, and Grafana on AWS
A container monitoring system collects metrics to ensure applications running in containers are functioning correctly. Metrics are tracked and analyzed in real-time to determine if an application is meeting expected goals. A container monitoring solution uses metric capture, analytics, performance tracking and visualization. Container monitoring covers various metrics like memory utilization, CPU usage, CPU limit, memory limit and many more. Technology stack Fig 1: Architecture Diagram for Container and System Monitoring with Docker, Telegraf, Influxdb and Grafana Grafana - Grafana is an open-source metric analytics & visualization suite. It is most commonly used for visualizing time series data for infrastructure and application analytics and is also used in other domains including industrial sensors, home automation, weather, and process control. It can be used on top of a variety of different data stores but is most commonly used together with either Graphite, InfluxDB, Elasticsearch or Prometheus. Visualizations in Grafana are called panels, and users can create a dashboard containing panels for different data sources. Grafana ships with a built-in alerting engine that allows users to attach conditional rules to dashboard panels that result in triggered alerts to a notification endpoint of your choice (e.g. email, slack, mail, custom webhooks). Telegraf - Telegraf is the open-source server agent that helps you collect metrics, with 200+ plugins already written by subject matter experts in the community. With the help of the InfluxDB output plugin in Telegraf, we can capture and visualize the metrics in Grafana. Docker - The docker stats command provides an overview of some metrics we need to collect to ensure the basic monitoring function of Docker containers. Docker stats shows the percentage of CPU utilization for each container, the memory used and total memory available to the container. Added to that, we can see the total data sent and received over the network by the container. InfluxDB - InfluxDB is a time-series database designed to handle high write and query loads.
It is known for its homogeneity and ease of use, along with its ability to perform at scale. It helps to store the metrics. Grafana also ships with a very feature-rich data source plugin for InfluxDB, which supports a feature-rich query editor, annotations and templated queries. Setup steps of the Monitoring system on AWS Step 1: Create an empty cluster with an ECS-optimised instance to run our monitoring system. The instance has a role attached to it with the AmazonEC2ContainerServiceforEC2Role policy. We will install the requisite monitoring components on our ECS instance. An Elastic IP is attached to the instance launched in the cluster so that the public IP remains static for the InfluxDB connection, no matter how many instances are later attached. Step 2: We can access the Grafana dashboard using http://<elastic_ip>:3000 and the login credentials: Username: admin Password: admin Then add the data source as InfluxDB. Fig 2: Grafana Login Page Fig 3: Datasource InfluxDB We need Telegraf connected to InfluxDB via environment variables, i.e. the URL and database name, along with the username and password, are set in the Telegraf configuration file, and the data source verifies all of these parameters. This is achieved by configuring the output plugin named <outputs.influxdb> in the Telegraf configuration file; the Telegraf agent then posts its metrics to InfluxDB. For collecting Docker metrics, the <inputs.docker> plugin is configured in the same way. Step 3: For the custom-made dashboard, a JSON definition is imported which visualizes every aspect of the containers as well as the system. Custom Made Dashboard: The templated dashboard uses Telegraf as the collector and InfluxDB as the data source. It gives a quick overview of Container and System Monitoring. No. of containers No. of images No. of containers based on images CPU utilization container-wise Memory container-wise Disk I/O container-wise CPU utilization Memory utilization …and mostly everything that could possibly be extracted from a server. The Docker host and Server fields give a drop-down menu if multiple instances are monitored and configured with the same Elastic IP in the <outputs.influxdb> plugin URL. Fig 4: Overview of Container and System Monitoring Fig 5: Overview of Container and System Monitoring Fig 6: CPU Usage Monitored in respect of System Fig 7: Per Container Monitoring with CPU usage, I/O, Memory Usage and Network Fig 8: Per Container Monitoring with CPU usage, I/O, Memory Usage and Network Fig 9: Per Container Monitoring with CPU usage, I/O, Memory Usage and Network Fig 10: Memory Usage in detail for system Fig 11: Kernel, Swap, Disk space usage, etc. graphically represented for system monitoring Fig 12: Kernel, Swap, Disk space usage, etc. graphically represented for system monitoring Fig 13: Kernel, Swap, Disk space usage, etc. graphically represented for system monitoring Step 4: Any number of ECS-optimized application instances can be launched with the same role attached as mentioned above, and they can be monitored by the existing monitoring solution simply by having Telegraf installed. The instances can be in the same cluster or in different clusters. On these further instances we won't require the installation of Grafana or InfluxDB, as we have customised the Telegraf configuration file to point to the existing instance. Once the Telegraf service is started, it attaches to the Grafana dashboard and posts metrics to InfluxDB.
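Before building dashboards, it is worth confirming that Telegraf's Docker metrics are actually landing in InfluxDB. The sketch below uses the influxdb Python client to run a quick query; the database name, credentials, and the docker_container_cpu measurement and field names are assumptions based on Telegraf's default Docker input plugin and should be checked against your own <outputs.influxdb> and <inputs.docker> configuration.

```python
from influxdb import InfluxDBClient  # pip install influxdb

# Connection details are assumptions -- match them to your <outputs.influxdb> settings.
client = InfluxDBClient(host="<elastic_ip>", port=8086,
                        username="telegraf", password="secret",
                        database="telegraf")

# Telegraf's Docker input typically writes a docker_container_cpu measurement
# with a usage_percent field and a container_name tag (verify for your version).
query = ('SELECT "container_name", "usage_percent" FROM "docker_container_cpu" '
         'WHERE time > now() - 5m LIMIT 10')

for point in client.query(query).get_points():
    print(point["time"], point["container_name"], point["usage_percent"])
```

If the query prints recent rows for each running container, the pipeline is wired correctly and the Grafana dashboard should populate.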
Note: The Telegraf configuration file is stored in an S3 bucket and then copied to /etc/telegraf/telegraf.conf Conclusion One can easily set up a monitoring tool with different data types and their respective node collectors. We have used InfluxDB as a data source, which is an open-source tool based on a push-based architecture and supports both integer and string data types. It only requires one node collector for container and system monitoring, which reduces complexity. InfluxDB requires Kapacitor as an alert manager, but instead of using an additional tool one can simply use Grafana's alert system. Hence, by saving time and memory with reduced complexity, we can easily monitor any container within any number of servers.
https://medium.com/xebia-engineering/container-and-system-monitoring-with-docker-telegraf-influxdb-and-grafana-on-aws-2007e5511746
['Prachi Jain']
2019-11-07 10:19:43.576000+00:00
['AWS', 'Telegraf', 'Grafana', 'Influxdb', 'Docker']
Women in Tech, Quantum Entanglement, and Spotting the Next Nikola | DDIntel
Photo by Michael Dziedzic on Unsplash DDIntel newsletter for the week of September 28 DDI Writer Highlights To potentially be featured in DDI Intel and on Datadriveninvestor.com, please submit with this form. By Rodrigo Marino Times of crisis don’t make an extraordinary entrepreneur less capable, and a product does not lose its technical quality. Therefore, the main variable changing and consequently impacting the investments and valuations is the market. By Luka Beverin In 1992, Hurricane Andrew struck Florida and inflicted 27 billion dollars worth in damage. As a result of the destruction, numerous insurance companies dissolved and went bankrupt after having to pay out an unanticipated amount of money to policyholders. By Norbert Biedrzycki The earth is sick, and the causes of its disease lie in our choices, our myopia, and our lack of imagination. By Robert Locke We all express fear, anger, and laughter by using facial expressions and other body movements. But these experts have gone way further and tied body language to what we actually say and how we say it. By Marina Alamanou When two particles — such as atoms, photons, or electrons — are entangled, they experience an inexplicable link that is maintained even if the particles are on the opposite sides of the universe, a million light-years away. By Jim Katzaman We all have a mental boardroom, and usually, there’s a hidden bigot at the table.
https://medium.com/datadriveninvestor/women-in-tech-quantum-entanglement-and-spotting-the-next-nikola-ddintel-ea0ea6e88ad8
['Justin Chan']
2020-09-28 15:34:37.871000+00:00
['Fintech', 'Artificial Intelligence', 'Business', 'Blockchain', 'Technology']
What Really Are The Fundamentals Of Money?
DO Breakthrough Seminar. This is the question we ask the most: How do you? It is why we started the DO Lectures. We wanted to find out why some people achieve their potential and share the clues they leave along the way. Follow
https://medium.com/do-breakthrough/what-really-are-the-fundamentals-of-money-3025ed5d37ab
['Do Lectures']
2018-07-28 11:01:01.202000+00:00
['Finance', 'Money', 'Future', 'Do Breakthrough', 'Bitcoin']
11,000 Scientists Declare a Climate Emergency in One Voice
11,000 Scientists Declare a Climate Emergency in One Voice But an investment banker writing for ‘The National Review’ doesn’t believe them Photo by Xavier Balderas Cejudo on Unsplash Early this morning, The National Review Online published an analysis of the United Nations’ 2014 Intergovernmental Panel on Climate Change (IPCC) report. The article, titled “We Have Time to Prevent Climate Change,” argues that Joe Biden, Bernie Sanders, and other politicians on the left exaggerate the impending consequences of global warming. It further claims that they do so in direct contradiction to the science cited in the IPCC’s “5th Assessment.” The story’s author, investment banker William Levin, contends that nuclear substitution for coal power over the next 50 years will reduce carbon emissions to acceptable levels long before disaster strikes. He adds that as-yet undeveloped technology, such as low-cost carbon sequestration systems and distributed solar battery storage, will also play a vital role in preventing a climate catastrophe. Finally, Levin concludes that there is “plenty of time” to do what is necessary, implies that politicians who advocate for aggressive environmental measures are criminals, and claims that the left’s agenda on climate change will prove to be the “greatest avoidable burden imposed on the world, ever.” No word from Levin, however, on the avoidable burdens imposed on Americans by unregulated investment banking.
https://medium.com/discourse/11-000-scientists-declare-climate-emergency-in-one-voice-9518388f0ef7
['Dustin T. Cox']
2020-12-17 03:24:15.840000+00:00
['Politics', 'World', 'Climate Change', 'Energy', 'Science']
The Technology Apocalypse
As the country is debating important technology initiatives, and is making technology mandatory in day-to-day governance, we are still unable to understand the shortcomings of the technology itself. An interesting trait which Telecom Regulatory Authority of India (TRAI) Chairman RS Sharma brought into the debate — incidentally, last week — is harm. How can technology harm us? If technology can harm us, how do we classify/define such harms? These are important questions which no one has an answer to. The latest draft of the personal data protection bill presented by Sri Krishna Committee tries to answer some of them. Harm from technology is something everyone is studying across the world as we enter the age of ubiquitous computing, with sensors tracking every aspect of our life. Racial biases and discrimination through technology have been extensively documented and debated in the West. It is important as a community that we do not cause any harm to others, especially when Big Data and algorithms are being termed “Weapons of Math Destruction”. As stakeholders of developing these tech solutions, we can’t ignore the rights of people and work for profits. Employees across different tech firms are even voicing out against technologies like surveillance and Artificial Intelligence which may violate human rights. So there is a need to have a conversation about some of these issues in tech more and start looking at tech differently. You can read the following books and papers from these organisations who work on digital rights Organisations: MIT Civic Media Lab — https://civic.mit.edu/publications/ Gov lab — http://www.thegovlab.org/publications.html Harvard Berkman Klein Center — http://cyber.harvard.edu/publications Web Foundation — https://webfoundation.org/research-publications/ Data and Society — https://datasociety.net/research/ Algorithmic Justice League — https://www.ajlunited.org/ Books:
https://medium.com/the-fifth-elephant-blog/the-technology-apocalypse-3b5090f835ea
['Srinivas Kodali']
2018-08-01 08:23:13.449000+00:00
['Technology', 'Books', 'Trai', 'Aadhaar', 'Ethics']
Let Me Bottom Line It for You
Profitability isn’t just about sales volume. Diversification of revenue streams, controlling expenses and maximizing resources also have a direct impact on the bottom line. A myopic view of company resources could be costing your company money. As an accountant, I’ve worked with many companies over the years. During my stint as the controller of a pharmaceutical research corporation, I witnessed the phenomenon of short-sighted management first hand. DIVERSIFY THE REVENUE STREAM Management’s main focus was on the big fish. The multi million-dollar in-patient study that required 24-hour staffing. The allure of such studies was the promise of big money. The down side was risk. The right type or volume of adverse reactions could pull the plug on the study at any time leaving the company with significantly less revenue than expected. Seeking out smaller, less glamorous out-patient studies was given minimal attention. Smaller projects, referred to as bread and butter studies, are an important aspect of a well diversified revenue portfolio. Bread and butter studies keep the lights on. A single small study generated less revenue, albeit steady and predictable revenue. Small studies were often long term and required less daily staffing than the large flashy studies. Bread and butter studies tended to yield a high profitability ratio per study but a small portion of total company revenue overall. Small studies are most beneficial to the company if the revenue portfolio includes a high volume of activity. Like Walmart selling a huge volume of low-priced items. Don’t put all your eggs in one basket. Classic advice that has timeless relevance. CONTROL EXPENSES The company didn’t have enough small studies to keep the high paid project leaders of the big studies busy. The study leaders performed many tasks not in alignment with their skill set in order to fill the day. Costly personnel were used to staff the big studies in positions better suited to lesser skilled, lower paid employees. Allowing the most costly employees to work evening, weekend and holiday shifts meant inflated payroll costs and less profitability on all studies. Making money isn’t only about what you bring in. It’s also about what you spend. Another reminder of classic and timeless advice. MAXIMIZE ALL COMPANY RESOURCES Management considered the revenue earners of the business to be not only the most valuable employees of the company, but rather the only valuable employees of the company. Administrative personnel were an underutilized resource seen as cost centers, if seen at all. Many tasks performed by costly medical personnel could have been performed more effectively by administrative personnel not regarded by management as capable enough for inclusion on the team. Business degrees and administrative skills were not medical and therefore not held in high esteem. A common problem really. Universities teach various areas of expertise but fail to educate these experts on the importance of administrative skills to running a successful business in real life. A topic worthy of a full article, some other time. For now, the lesson is to appreciate the value of all employees. A different perspective can open doors overlooked by a narrow field of vision. CONCLUSION OF MY CASE STUDY The suggestion of diversifying the revenue streams by increasing the volume of smaller studies was ignored. The suggestion of aligning employees’ skill sets and salaries to tasks was dismissed. 
The suggestion of study leader positions as salaried instead of hourly was denied. This company could have seen extraordinary profitability and growth. Instead, management’s tunnel vision looked only to revenue earners for answers to profitability problems and the company burnt out quickly. A business is a multi-faceted entity. Look in all directions to find answers to unprofitable performance. You may be surprised by how much unused potential you have hidden in the business minds sitting behind desks. Unsung heroes observing the forest while you are focusing too much attention on the trees.
https://medium.com/swlh/let-me-bottom-line-it-for-you-101736cb03eb
['Tammy Hader']
2019-10-02 16:15:32.511000+00:00
['Work', 'Management', 'Business', 'Startup', 'Finance']
What I learnt from not drinking
Giving up drinking is great Photo by Nate Johnston on Unsplash I decided at the beginning of this year to cut back dramatically on my drinking. The main goal, to not be hungover once this entire year. So far so good, and it's been great. I have never been a heavy drinker. Although like most young South Africans, drinking is a huge part of our social lives. We socialise around drinking. We meet for drinks on a Friday night, we go out for dinner on Saturday night and drinks before or after, we have a braai and we throw back a couple, we go for Sunday afternoon sundowners. All these things center around drinking. It is too easy to drink the whole weekend without even thinking about it. I have experimented with not drinking in the past, I enjoyed it so I decided this year to try it out for even longer. This is what I learnt. Drinking isn't fun It's actually not that fun to wake up in the morning with a pounding headache. Or no memory of the night before. It's not fun to spend the day feeling crap and battling to get anything done. It's not fun to not be able to exercise or go for a run or walk. It's not fun to sit on the couch all day and eat junk food. It's not fun to throw up either. The consequences of drinking are not fun and not worth it. Even drinking itself. Once you have had a couple you start to behave in an embarrassing way. Drinking isn't fun, it's boring. It gets easier Like any habit it takes a while to break. Drinking had become a habit. I didn't drink every night but every weekend I had a glass of wine or two. Every time I went out for dinner I had a drink. Sometimes I think I drank more out of habit than anything else. At first I did dry January, one month seemed easy enough. And it was easy. Although on the 1st of February I had a glass of wine. February was also my birthday month so I had a few drinks on the night. I also had two weddings one in March and one in April. At both I had a glass of champagne and a glass of wine and it was great. No hangover. Just because the drinks are available does not mean you have to drink. Over time it gets easier and easier to say no to alcohol. I also realised I didn't have to drink at every social occasion only the special ones. Drinking is expensive Drinks cost money. Especially fancy cocktails or a nice bottle of wine. I often added a bottle of wine to my weekly grocery shop. Going out for drinks can also add up very quickly. I have saved a lot of money by not drinking. Drinking doesn't taste great I have lost the appetite for drinking. I used to crave a glass of wine, particularly after a long week or hard day. I used to look forward to that glass of wine because it would relax me. Now I hardly crave the taste of alcohol. I don't enjoy the taste as much as I used to. Sometimes the occasional glass of wine with a meal or glass of champagne on special occasions is nice but other than that alcohol has lost its appeal for me. Drinking is not an activity Many South Africans consider drinking to be an activity. It's not uncommon to hear people say, "I'm going drinking this weekend." Why is drinking an activity? It is an expensive and unhealthy activity. I wish more South Africans did not see drinking as a weekend activity. I sometimes feel like the odd one out. There are loads more fun activities to do on weekends that don't involve a hangover. Playing board games, going for a run or walk in the park, cycling, reading, swimming, hiking and cooking a nice meal. Drinking is not a hobby.
People don't notice as much as I thought One of the reasons I was hesitant to give up drinking completely was the social pressure. Almost all my friends and family drink. I am alone in my sober journey. I was terrified they would try and pressure me into drinking or convince me I needed to drink. But that is not the case; maybe it's because I'm older and drinking is no longer such a thing. At social gatherings I just don't drink. I am offered a drink most of the time and I simply decline. I usually drink sparkling water with lemon or fruit juice. And no one seems to notice or care. I thought it would be much harder but it's not that hard at all. In the end… I hardly drink at all now. Occasionally I have a small glass of wine or champagne. I think I have about one drink a month now. I used to drink every week. And I feel great. I love waking up in the morning fresh and ready to face the day. I love not having a headache. I love being able to exercise to my full potential. I love not feeling dehydrated. This is now my lifestyle choice and I plan to stick to it.
https://katy-conradie.medium.com/what-i-learnt-from-not-drinking-602e9641041b
['Kate Conradie']
2019-10-07 10:53:51.636000+00:00
['Health', 'Drinking', 'Lifestyle', 'Social', 'Alcohol']
Get INSPIRED
I started this publication around a month ago with the goal to create a community where everyone can support each other. Many publications expect that you have the top of the top content and even then some of the stories flop. Here is how we are going to change that: Every day, depending on the number of stories we have submitted I am going to create a document with stories of the week, and post with new ones every day. I encourage everyone to read and if worthy, clap at least two articles from the list that interest you a day. Hence, we can go ahead and support other authors, and begin to create our community. Now, I am going to add a curated stories tab where people can check out of all the curated stories on Never Fear.
https://medium.com/never-fear/get-inspired-bcee26c5f433
['Aryan Gandhi']
2020-09-13 00:44:17.088000+00:00
['Growth', 'Business', 'Finance', 'Banking', 'Writing']
Toxic Relationships — How do You Know?
We see so many quotes, lists and writings on toxic relationships. And, across every social media brand and platform that I have worked on, toxic relationship content is always the most popular. Why? Because it’s really freaking hard, when you’re in a toxic relationship, to figure out if you’re in a toxic relationship. That’s what makes it so certainly toxic, is the fact that you don’t know what’s real. Therefore, there are always people looking for information on toxic relationships because they are trying with every fiber of their being to figure out if they are in a toxic relationship. “If someone says they love you but the way they treat you says they don’t, listen to what their actions are telling you. People who truly love you don’t just tell you, they show you.” — Bobby J Mattingly It’s the constant battle between the head and the heart that makes you literally have no idea what is actually going on. You earnestly pray to know what is right. You search, but you are utterly confused. Then eventually, one day you figure out, without a doubt that you are indeed in a toxic and spirit crushing relationship. Maybe it was an article you read or something that happened. Maybe it was an epiphany. Maybe it was just a combination of everything. It doesn’t matter. You finally know. This is what I hope to help people move towards, who need to of course, through my writings on toxic relationships. So let’s start with the first question. How do you know that you are in a toxic relationship? Well there are countless lists of information on what everyone thinks the signs of toxic relationships are, that any relationship would meet a few of these qualifications. I don’t think automatically labeling any relationship as toxic just because it matches a couple bullet points is healthy either. Looking for toxicity can itself become toxic.
https://hollykellums.medium.com/toxic-relationships-how-do-you-know-ab5a00988d09
['Holly Kellums']
2018-11-19 02:26:36.543000+00:00
['Love', 'Relationships', 'Life Lessons', 'Life', 'Health']
Tear Up Your Baseline
Zero has special significance for many axis scales. If you are traveling at zero miles per hour then you are standing still. If you have zero dollars then you are broke. The zero line is often emphasized accordingly by making it thick. Willard Cope Brinton explains in 1939’s Graphic Presentation: The horizontal axis, zero line or other line of reference, should be accentuated so as to indicate that it is the base for comparison of values. There is no such base of comparison for the time scale in a time-series chart, however, there being no beginning or end of time. … The zero line or other base of comparison should never be omitted when the interest is in relative amount of change between points on the same curve. –Willard Cope Brinton, 1939 The zero line is essential to bar charts 📊, which link total value to total height. William Playfair introduced this rule in 1786 using a stack of coins metaphor¹. If a man stacked all the coins he made at the end of each day into a new pile, then he would be able to see time, proportion, and amount all at once simply by looking across all of his stacks. Pierre Levasseur poetically referred to vertical bar charts in 1885 as columns of stacked facts². In either case, you have to see the whole stack, i.e. the zero line, to have an honest look. We sometimes omit the zero line on line charts 📈 when our interest is in the absolute, not relative, amount of change. In this case a non-zero baseline appears (in the cartoon above, 50). This is familiar to many as the movement of a stock price over time. We like zooming in to see detail. It is bad practice to accentuate this non-zero baseline because that implies undue importance. Instead, today’s standard practice has us style all non-zero lines the same way because they all share the same significance. Design mirrors — and conveys—meaning. Standard practice does not directly indicate the omission of the zero line. I contend that this standard practice is too passive—and passive design is risky design. Here, the risk is that the non-zero baseline’s position, even with uniform styling, may still convey undue importance. How can our approach be more active? Brinton provides the answer: Brinton, Graphic Presentation (1939), p. 303 A torn paper metaphor helps. A jagged or wavy baseline calls to the reader’s attention: This is not a reference line! It alludes to the zero line, offscreen, by showing that only part of the scale is on display. The wavy baseline is intentional. It is forward. You might even say it is more honest. Brinton, Graphic Presentation (1939), p. 386 Brinton’s wavy lines³ are not an accidental fluke. They appear in many more examples and their use is detailed in the chapter “Standards for time series charts”: When the zero line is omitted, this is one method of indicating its absence. Brinton even includes a variant, a straight line waved at each end, pictured here as method (b). Data visualization, and graphic computing generally, has made great hay from pictorial metaphors that mimic the real physical world. By linking abstract concepts, like deleting a file, to real-world objects, like the 🗑, we make the strange more familiar. Designing with worldly metaphors in mind, such as bar charts representing stacks of stuff, can help us produce visualizations that are more accessible, and more meaningful, to all. 
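Most charting libraries will not draw Brinton's wavy baseline for you, but the torn-paper cue is easy to approximate. The matplotlib sketch below is my own illustration rather than anything from Brinton: it plots a series on a non-zero baseline and adds two small slanted break marks to the left spine to signal that the zero line lives offscreen.

```python
import matplotlib.pyplot as plt

values = [52, 54, 53, 57, 60, 58, 63]   # a stock-price-like series
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(range(len(values)), values)
ax.set_ylim(50, 65)                      # zoomed-in view: zero line omitted

# Two short diagonal strokes across the left spine, the usual "axis break" cue.
d = 0.015  # mark size in axes coordinates
kwargs = dict(transform=ax.transAxes, color="black", clip_on=False)
ax.plot((-d, +d), (0.05 - d, 0.05 + d), **kwargs)
ax.plot((-d, +d), (0.10 - d, 0.10 + d), **kwargs)

ax.set_title("Series on a non-zero baseline (zero offscreen)")
plt.tight_layout()
plt.show()
```

Styled like the grid or not, the break marks do what uniform spine styling alone cannot: they state outright that only part of the scale is on display.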
Willard Cope Brinton was an American consulting engineer famous for his landmark Graphic Methods for Presenting Facts, a 1914 textbook that perfectly packaged all of the previous century's graphic inventions for modern industrial use. He also led the Joint Committee on Standards for Graphic Presentation, which included 15 scientific societies and 2 Federal Bureaus. Andy Cotgreave, Tableau's Technical Evangelism Director, was the first to introduce me to Brinton via his 100 yrs of Brinton tumblr. Read Andy's highlights from Brinton there, then dive into hundreds of beautiful charts and expert commentary from the original book and its sequel: Graphic Presentation (1939): "There is magic in graphs." The torn baseline example above was brought to my attention by Jason Forrest. He shared it in a Data Visualization Society thread about historic visualization. While we certainly delight in appreciating the craft of those who showed before, I believe our interest has a practical undertone. Historic practitioners faced the same challenges we do today: Data overwhelmed their capacity to manage it. The audiences they had to inform were lacking in graphic and numeric and statistical literacy. Yet, historic practitioners worked to meet these similar challenges in totally different contexts. They had different tools, different constraints, and different advantages compared to modern practitioners. It is the intersection of such a similar pursuit with such different conditions that produced so many interesting solutions. Many of their natural solutions, like the torn baseline, are less likely to emerge in today's digital environment, where we work under our own unique constraints. Certain creative paths open at different moments in history. I am so happy to have learned from this example and am excited for the future discovery of more past solutions. It excites me to return to Brinton—and the many who inspired him—and keep looking.
https://medium.com/nightingale/tear-up-your-baseline-b6b68a2a60f1
['Rj Andrews']
2019-03-26 15:21:25.988000+00:00
['Design', 'Dvhistory', 'Infographics', 'Data Science', 'Data Visualization']
Yeah. It Really is That Simple.
Photo by Arek Adeoye / Unsplash Why walking really is probably the single best thing you can do for your health Three hours ago, I finally did it. After months of battling a variety of injuries, including fracturing a pinky toe which kept me in a therapy boot and off the trails for nearly three months, I hit the trail head at Spencer Butte, just a few miles south of town here in Eugene. It was Sunday, the parking lot jammed. Most folks (but not all ) wore the required masks and most who did also were very polite on the trail. For my part, I dug my poles into the damp dirt and pushed myself hard. I was so damned grateful to be in sneakers, on both feet. I still can’t wear hiking boots, but look, you take what you get. Those deteriorating sneakers are for shit on slippery, uneven rocks, but I was so glad to be ambulating, I didn’t care. I just took a touch more care heading down. Walking works. It’s the single best, easiest thing, you and I can do for our health. Blessedly easy, given that you have places to walk and that you can indeed put one foot in front of the other, barring too many trips to the neighborhood liquor store, which of course is essential to our health. But I digress. While I prefer to run, I’ll take walking. Because that simple act can do more for us than we realize. To that: From the article: It’s easy to get excited about the latest and greatest trends, from high-intensity interval training to ultramarathons to triathlons to powerlifting. But at the end of the day, regular brisk walking gets you most, if not all of the way there-”there” meaning a long and healthy life. This is the main conclusion from the June volume of the prestigious British Journal of Sports Medicine (BJSM), a special edition dedicated exclusively to walking. … The main study in the BJSM special edition surveyed more than 50,000 walkers in the United Kingdom-a variety of ages, both men and women-and found that regularly walking at an average, brisk, or fast pace was associated with a 20 percent reduction in all-cause mortality and a 24 percent reduction in the risk of dying from cardiovascular disease. Yeah. Walking. No Pelotons, no special machines, just the machine we came equipped with: legs. Which, when moved vigorously, push the cardiovascular system (VO2), which pushes oxygen and nutrients to the body, etc. etc. While just getting out for a stroll, a good brisk one, doesn’t exactly get you bragging rights (Lookit me I ran a bajillion miles today!!!!), it is far more likely to serve you better, longer and well into your very late life. Something to be said for that. Lot to be said for that. Photo by Eric Muhr / Unsplash The other great joy of being up here in the Pacific NorthWest (PNW) is that with rare exception, if you ask, most folks are delighted for you to butt-rub their fur balls. You can chat, talk dogs, hiking and the weather. I get my exercise, my need for social stimulation and my fur fix all at the same time. Christ, I’ve missed it. But nowhere near as much as I’ve missed my exercise, being outside in the forests- what the fires didn’t level, anyway- and being surrounded by green, cool mist, and the occasional smiling hiker. Masked smiling hiker, thank you. Back in July, before I landed in the hospital with kidney stones, before I flipped my car in Idaho and fractured my finger and carved new canoes in my punkin head, before I fractured my toe, I was training myself to run on high, rocky trails in my soon-to-be old home. 
All I could see in the future, even under Covid, was lots more of same. I couldn’t wait to get to Oregon and run the trails. For me it had been a huge win to have mastered running on rocky trails, which have forever scared the holy crap out of me. But I did it. So, coming to Oregon meant that I could take my new-found skill to a whole other level. Yah. Sure you can, Sparky. Guess who laughs when you make plans? Photo by Marcela Rogante / Unsplash Not to be. The injuries made setting up my gym in the basement a pipe dream until this past weekend. Between construction projects and the limited availability of crews, it’s taken five months, not five weeks. Like so many I had to forfeit my gym membership due to the shutdown. My equipment’s been gathering dust in the garage, waiting for me to finally move things around and put them into curios, onto shelves, into closets. Injuries meant that I couldn’t even walk my local hills. No yoga, no kick boxing, no nothing. Not just because the TV wasn’t set up, but because a busted pinky toe means you can’t do anything with that foot. Then, if you’re like me, you smack that busted toe against every single available surface. One hand down, one foot down, and on top of that I had my ovaries removed two weeks ago. Gah. I feel like the Black Knight in Monty Python’s Holy Grail. Again, from the article: Walking has also been compared to more intense forms of exercise, like running. Though experts believe running may be marginally better for you, that’s only if you don’t get injured and manage to run regularly, something that more than 50 percent of runners (myself included) struggle with. I’m a lifetime runner, with the occasional break, usually because I broke something. However, when you or I twist a toe or crank an ankle badly enough so that we have to sit even more than usual, breaking out for a two-mile hike is like busting out of Alcatraz. Deposit photos The older I get, the more I need to move to maintain the shape I’m in, build on it and continue to do what I love. However, for me, for you, for any of us who is able, simply putting one foot in front of the other, starting with laps around the living room, the driveway, the block, doesn’t matter- walking works. Bottom line, the simplest, easiest thing you and I can do is often one of the best. Why? Because if you insist on complex routines, pricey exercise machinery, programs that require a classroom or special clothing, it’s far easier to just not do it today. And tomorrow, and the next day. The easiest thing in the world is just get outside and walk. Deposit photos Again, assuming you’re ambulatory and you have a space where you can do it safely. For me, it’s a chance to meet new people at at time when we’re in another lock down. It’s a chance to pet puppers who could use socialization at a time when I can’t buy one of my own. It’s also a chance to reboot the badly-needed aerobic program which has been on hold half this year, because of too many injuries. I can hang with simple. Simple is superb, especially when it pays off so well. Want to get healthy? Start walking. You never know where it might take you. Photo by Sébastien Goldberg / Unsplash Originally published at https://www.walkaboutsaga.com on December 7, 2020.
https://medium.com/in-fitness-and-in-health/yeah-it-really-is-that-simple-33c114b5ffdf
['Julia E Hubbel']
2020-12-08 16:00:57.177000+00:00
['Exercise', 'Walking', 'Health', 'Aging', 'Fitness']
5 Phenomenal Hootsuite Add-ons Every Social Media Junkie Should Be Using
5 Must-Discover Hootsuite Apps for Social Media Marketers 5 Phenomenal Hootsuite Add-ons Every Social Media Junkie Should Be Using Are you a hardcore Hootsuite junkie? Do you use Hootsuite to schedule your social media posts across multiple platforms? If you want to make your time on Hootsuite even better (yes, it’s possible!), try integrating powerful Hootsuite apps into your social media marketing strategy. Check out the following five phenomenal Hootsuite add-ons; they’ll make you feel like kicking yourself in the pants for not discovering them sooner! PromoRepublic PromoRepublic is every Hootsuite user’s dream come true. Their powerful app lets you automatically discover everything from social media images to trending topics and events. Offering tools like daily social media marketing notifications and content templates, PromoRepublic provides the secret sauce your business needs to totally rock customer connections this year. PromoRepublic Crowdbabble If Instagram is part of your visual marketing strategy, you absolutely, positively need to add Crowdbabble to your Hootsuite activities. The Crowdbabble app helps you discover the best times to post on Instagram, offers data on your most influential Instagram followers, and provides detailed engagement analytics too. Crowdbabble StatSocial StatSocial is a tempting tool for Twitter tweeps who use Hootsuite as their social scheduling platform of choice. The StatSocial add-on for Hootsuite lets you discover detailed analytics on your Twitter flock. Uncover everything from the top cities your followers are from to brands they’re affiliated with and the television shows they watch. If you want to increase your Twitter ROI this year, StatSocial is a must-use resource. StatSocial Pictographr Do you want to up your visual marketing game this year? Add the Pictographr add-on to Hootsuite and watch your engagement rates skyrocket. This awesome application lets you instantly add your Pictographr images to your social media posts and schedule upcoming visual marketing outreach posts. Say goodbye to hunting for the right social media images; with Pictographr and Hootsuite, your visual outreach efforts just got much easier. Pictographr Vidyard Video marketing is a crucial component of a stellar social media marketing campaign. The Vidyard app for Hootsuite makes video marketing a breeze. Instantly add videos to your Hootsuite dashboard, share your videos on multiple social networks, and track engagement rates with detailed audience analytics. If you’re planning on doubling-down on video this year, the Vidyard add-on for Hootsuite is a must. Vidyard Hootsuite apps let you take your social scheduling activities from awesome to out-of-this-world. While Hootsuite is great all on its own, it is magical once you integrate multiple add-ons into your daily social networking activities. Check out these social apps for yourself and watch your follower counts explode this year. 👇👇👇 Need help creating content to grow your brand? Hired Gun Writing offers custom business blog posts as a service. From small businesses and entrepreneurs to thought leaders and investors, Hired Gun Writing can help tell your story on an as-needed basis. Reach out at [email protected] to discuss your storytelling needs.
https://medium.com/business-growth-tips/5-phenomenal-hootsuite-add-ons-every-social-media-junkie-should-be-using-2e5c54943b3e
['Hired Gun Writing']
2019-01-06 11:52:43.103000+00:00
['Hootsuite', 'Business', 'Marketing', 'Digital Marketing', 'Social Media']
An Unsupervised Mathematical Scoring Model
An Unsupervised Mathematical Scoring Model
Unleashing the power of mathematical models
Picture by Geralt on Pixabay
A mathematical model is a description of a system using mathematical equations. The equations that govern the system can be linear, non-linear, static, or dynamic. The model can learn the parameters of the equations from available data and even predict the future. In this blog, I will discuss one such practical mathematical model that can be utilized in a variety of problems in the absence of labeled data, given some prior domain knowledge. All the code and datasets used in this blog can be found here.
Our Mathematical Model: Logistic Function
The logistic function, commonly known as the sigmoid function, is a mathematical function with a characteristic "S"-shaped (sigmoid) curve:
f(x) = L / (1 + e^(-k(x - X₀)))
where
X₀ = the x value of the sigmoid's midpoint,
L = the curve's maximum value,
k = the logistic growth rate, or steepness of the curve.
The logistic function accepts any value of x between -∞ and +∞. As x approaches +∞, f(x) approaches L; as x approaches -∞, f(x) approaches 0.
The standard sigmoid function returns a value in the range 0 to 1. Its equation is
S(x) = 1 / (1 + e^(-x))
For x = 0, S(x) = 0.5; for x < 0, S(x) < 0.5; and for x > 0, S(x) > 0.5. So the standard sigmoid function is centered at x = 0.
Problem Statement
We have financial data of customers available. One feature is the credit amount, i.e. the credit already in the name of the customer. Depending on the amount of credit, we intend to generate a risk score between 0 and 1.
Snapshot of customers and existing credit amount data
The distribution of the data can vary across datasets and across features. Let's see the distribution of credit_amount.
Histogram plot of credit_amount
The credit_amount is skewed towards the right. In different datasets and use cases, the skewness or distribution of the data may vary. We wish to come up with a scoring mechanism that penalizes the outliers more. In our case, we will define an ideal behavior and try to learn the parameters of the logistic function that best mimic it. Let's define the ideal behavior:
1. The risk score should be in the range 0 to 1.
2. The data should be centered at the 65th percentile (an assumption, since we want to penalize the outliers more).
3. Ideally, we wish the score for the 65th percentile to be 0.50, for the 75th percentile to be 0.65, for the 80th percentile to be 0.70, and for the 85th percentile to be 0.75. For the rest of the data, the scores should vary accordingly.
4. Different features may have a different distribution and range, so we wish to come up with a technique that can learn the parameters of the logistic function to match the ideal behavior defined in Step 3.
For our problem of defining a risk score for credit_amount using the logistic function, let's decode the parameters of f(x) = L / (1 + e^(-k(x - X₀))):
Since we want the risk score to range between 0 and 1, L = 1.
Since we want the logistic function to be centered around the 65th percentile of the data, X₀ = the 65th percentile of credit_amount.
The growth rate k we will learn through random search, choosing the value with the minimum mean squared error against the ideal behavior defined in Step 3.

import math
import random
import numpy as np

# Logistic function ~ b denotes X₀ and c denotes k (growth rate)
def sigmoid(x, b, c):
    return 1 / (1 + math.exp(-c * (x - b)))

Calculating the error for a candidate growth rate 'k':

# Mean squared error between ideal and observed behavior
# pcts holds the [85th, 80th, 75th, 65th] percentile values; pcts[3] is X₀
def cal_error(c, pcts, values_expected):
    error = (math.pow(sigmoid(pcts[0], pcts[3], c) - values_expected[0], 2)
             + math.pow(sigmoid(pcts[1], pcts[3], c) - values_expected[1], 2)
             + math.pow(sigmoid(pcts[2], pcts[3], c) - values_expected[2], 2)
             + math.pow(sigmoid(pcts[3], pcts[3], c) - values_expected[3], 2))
    return error

Random search to find the best 'k' (growth rate):

def find_best_decay_parameter(pcts, values_expected):
    best_error = 999999
    best_c = 1.0
    iterations = 5000
    for i in range(iterations):
        tmp = random.random()
        error = cal_error(tmp, pcts, values_expected)
        if error < best_error:
            best_error = error
            best_c = tmp
    return best_c

Calling the function:

percentiles = [85, 80, 75, 65]
values_expected = [0.75, 0.70, 0.65, 0.50]

# Small wrapper (reconstructed here so the snippet runs end-to-end): it turns
# the raw feature values into the percentile values above, then returns
# b (X₀, the 65th percentile) and c (the learned growth rate k).
def find_decay_parameter(values, values_expected):
    pcts = [np.percentile(values, p) for p in percentiles]
    b = pcts[3]
    c = find_best_decay_parameter(pcts, values_expected)
    return b, c

# df is the customers dataframe shown earlier in the post
b, c = find_decay_parameter(df.credit_amount, values_expected)

Output:

Best value of growth rate 'k' = 0.00047
Value of L = 1
Value of X₀ = 3187

                65th   75th   80th   85th
Value           3187   3972   4720   5969
Expected Score  0.50   0.65   0.70   0.75
Observed Score  0.50   0.59   0.67   0.79

I checked how the score varies with different credit amounts:

credit_amounts = [100, 500, 1200, 3000, 4000, 5200, 6000, 7500, 10000, 12000, 20000, 25000]
mp_values = {}
for credit_amount in credit_amounts:
    mp_values[credit_amount] = round(sigmoid(credit_amount, b, c), 2)

Output:
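As a quick, hypothetical follow-up (not part of the original analysis), the learned curve can be plotted with the reported parameters (L = 1, X₀ = 3187, k ≈ 0.00047) to sanity-check how the risk score grows with credit_amount:

import math
import matplotlib.pyplot as plt

L, x0, k = 1, 3187, 0.00047          # values reported above

def risk_score(x):
    return L / (1 + math.exp(-k * (x - x0)))

amounts = list(range(0, 25001, 250))
plt.plot(amounts, [risk_score(a) for a in amounts])
plt.axvline(x0, linestyle="--")      # the 65th percentile, where the score is 0.50
plt.xlabel("credit_amount")
plt.ylabel("risk score")
plt.show()

The curve stays low for small, common credit amounts and rises toward 1 for the large, outlying ones, which is exactly the penalizing behavior the author set out to encode.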
https://towardsdatascience.com/an-unsupervised-mathematical-scoring-model-f4f56a756f
['Abhishek Mungoli']
2020-07-08 18:38:49.406000+00:00
['Analytics', 'Artificial Intelligence', 'Mathematics', 'Data Science', 'Machine Learning']
The perfect hot chocolate recipe to keep you warm and cozy
This article was originally published on blog.healthtap.com on December 14, 2017. Nothing feels like the holidays more than curling up in front of the fire with a steaming cup of hot chocolate in hand. Hot chocolate is a staple of this time of year, so we've come up with a decadent, indulgent hot chocolate recipe that you can feel great about sipping throughout the holiday season. Don't opt for the simple packet of hot chocolate mix in hot water, which is full of sugar, additives, and only a little actual chocolate. Instead, make your own! Not only will it taste much richer and creamier, it will also be much better for you. This recipe for hot chocolate uses raw cacao powder, which is chocolate in its raw and purest form. Cacao is known for being a heart-healthy superfood due to its incredible concentration of flavonols, a class of antioxidants that are associated with decreasing the risk of heart disease, hypertension, and chronic inflammation. According to Dr. Randy Baker, cacao "also contains mood-enhancing chemicals like phenylethylamine, theobromine, serotonin and anandamide,… and also [aids in] reducing insulin resistance." While Dr. Baker mentions that these health benefits of cacao are powerful, he also notes that they can be counteracted by added sugar, which is best to limit. Studies indicate that to derive the optimum health benefits of chocolate, the darker the chocolate the better, so you should aim to choose chocolate that has at least 70% cocoa solids. This recipe combines both raw cacao and dark chocolate for the ultimate rich cup of hot chocolate. It only takes a few minutes to whip up, so you can be sipping, relaxing, and enjoying the holiday spirit in no time at all.
Decadent Hot Chocolate
What You'll Need: (serves 2)
2 cups non-dairy milk (we recommend a mix of almond and coconut for some extra creaminess)
2 tbsp raw cacao powder
1.5 oz dark chocolate (70% or above)
1–2 tbsp maple syrup (sweetened to taste)
½ tsp vanilla extract
Combine all ingredients in a medium saucepan. Over medium-low heat, whisk everything together until the chocolate is melted, the mixture is evenly combined, and the drink is warm. Serve into two mugs, and enjoy. We recommend topping your hot chocolate with a dollop of coconut whipped cream and a sprinkle of cacao nibs to make your drink a little extra festive!
Author: Maggie Harriman
https://medium.com/healthtap/the-perfect-hot-chocolate-recipe-to-keep-you-warm-and-cozy-d81c706700c4
[]
2017-12-19 19:28:45.501000+00:00
['Chocolate', 'Health', 'Nutrition', 'Holidays', 'Recipe']
What is React Context API?
Taking a deeper dive into React Context API and how it differs from redux
This past week, I wanted to dive into newer technologies in React, study them and practice using them. After redux, I read about React Context API and how it helps with state management. Of course, when I first studied it, it seemed almost exactly like redux. However, after practicing and playing around with store setup and state retrieval, there are some key differences that may make React Context API a bit easier for beginners (if I may say so) by not being as heavy to set up as redux. Let's take a look at why.
First, What is React Context API?
Let's start with Redux. In summary — redux is a tool used to manage state in React applications in a more concise manner to avoid something called 'prop drilling'. Very often, if we aren't using redux or something similar, in order to retrieve the correct props from a parent component, we need to pass them down the component chain in order for the props to reach the designated component. Not only does this consume time, a lot of times the 'passing' components have no use for these props (they sit unused), and because there is so much passing it is not uncommon to misspell something or get lost in your component hierarchy. To avoid this and help make your code more efficient — welcome redux and React Context API. The setup for both of these tools can be intimidating at first, but once you get the hang of it, you'll realize how efficient data retrieval becomes in your React application. Yes, in case I missed it, 'state' simply refers to data that is needed to render specific UI correctly.
Redux consists of four main building blocks:
One single, main centralized state.
Reducer functions — these contain the logic needed to alter state (similar to the functionality of this.setState).
Actions — these are dispatched to trigger a specific reducer function to run, depending on specific switch cases.
Subscription — in order to actually use this state in the React component.
React has amazing documentation that goes deeper into these four levels to explain what exactly is happening with state management and data passing. React's Context API is quite similar in what it delivers to the developer — easier and more efficient state management — but with three main setup blocks:
Context Object — defined in a separate file or next to the component in a component js file. Any data can be stored in this object, and there can be several in one React application.
Context Provider — provides the context object to all of your components that may need it.
Context Consumer — a wrapper component that can be used to 'inject' the Context provided in some parent component.
In summary — you provide Context in something like your App.js file (or any component that wraps all of your components in the application) and consume Context in the components that actually need the Context data.
React Context API or Redux?
Although React Context API is a bit cleaner and less setup-heavy, it is not built for high-frequency updates. This was seen when people began to switch to React Context API from redux. However, for the moment, and if you are just newly diving into redux/React Context API for state management, using React Context for low-frequency updates such as user authentication may be a good start. Personally, while building an e-commerce application similar to Amazon, React Context API — both setup and usage — seemed a breeze after using redux.
It's built into React, so there is no need to install and update extra dependencies. The API is also pretty straightforward to use, especially if you've used Hooks before. Lastly, async functions will not require an extra package (such as redux-thunk) to be implemented. All in all, both are optimized systems that work to make applications stronger and more efficient. Using either will only benefit your app; React Context is just an easier start ;) Happy coding!
References: https://reactjs.org/docs/context.html
https://samanbatool08.medium.com/what-is-react-context-api-5c6876dbab54
['Saman Batool']
2020-08-23 02:43:22.686000+00:00
['JavaScript', 'React', 'Developer', 'React Context Api', 'Redux']
Donald Trump Has Turned America Into a Failed State
Donald Trump Has Turned America Into a Failed State He didn’t even have to try that hard. I read the news today — oh, boy. There was a terrorist attack on Christmas morning. The media has decided to call it a “massive explosion believed to be intentional,” because many of us prefer to live in the shallow comfort of denial. I’m not going to say a word to my family until we’re done opening presents, over video chat, because we can’t see each other in person. And yet, my thoughts are in Nashville and its implications. My thoughts are also with the kids who didn’t write to Santa for toys this year. They wrote for things like shoes and textbooks. I’m wondering when we’re going to wake up. I think Americans live in a failed state now, close if not identical to the “third world countries” we’ve always considered ourselves better than. Right now Donald Trump, still our president, isn’t thinking about kids in poverty or victims of terrorism at all. He’s busy playing golf and casually plotting to overthrow the government. Before leaving for his private resort, he vetoed a pay raise for American troops and destroyed a stimulus bill, but not before he pardoned 26 of his most loyal allies, who basically serve as his personal mafia. Now the government might shut down. Let’s think about that for a minute. We’re in the middle of the worst health crisis in American history, and we’re facing the possibility that our own government will be temporarily unavailable. This isn’t complaining. It’s not an expression of outrage. These are just the facts Americans are all processing over the holidays, as we try to rest and spend a little time with our families.
https://medium.com/the-apeiron-blog/donald-trump-has-turned-america-into-a-failed-state-820c25f44915
['Jessica Wildfire']
2020-12-25 22:52:02.593000+00:00
['Society', 'Politics', 'News', 'Government', 'Opinion']
7 Websites Every Software Developer Should Follow
1. DEV Community
It is a great website to stay updated on trending technologies and what's the next boom. You can follow topics that you are interested in, and your feed of articles and videos will be generated based on your preferences. Just go to the website and search for the technology or concept you want to explore. You'll get numerous articles, videos, and podcast tutorials for it. It has a section for tech news that will keep you connected with the world of technology. You can apply for jobs as well. In a nutshell, it is a constructive and inclusive social network for software developers.
2. Indie Hackers
This website is a bit different from the rest in this list. The focus is on the journey that developers go through when they start their own companies or products, or begin freelancing. These are stories of failure and success that motivate us to do something better. The website is pretty transparent with the information about product launches — like how many people bought a product and how much the developers made out of it. They've got really cool podcasts from amazing developers across the world. For instance, there is this podcast from Sam Eaton about how he made the most out of his sister's cookie business idea. It is a community of developers and entrepreneurs who are trying to build profitable internet companies without raising money. As developers, we can get inspired by hearing the stories that changed their lives, or even become a part of them.
3. Stack Overflow
This website does not require any introduction. Most developers end up here anyway while coding. This was the first website that I bookmarked when I started my professional programming career. I end up here almost every other day at work and cannot imagine my life without Stack Overflow. It is a community for developers, where they post questions and someone who has encountered something similar answers them. Most of the time, I end up finding the answer I am looking for here. This is a must-bookmark website for all developers.
4. DevDocs
As the name suggests, this is a website full of documentation. You can find syntax, manuals, and references for different front-end languages and libraries. They have almost everything a developer is looking for while working with any technology. It includes beginner's guides, advanced topics, and references. It is a great resource if you are looking to learn front-end technologies like HTML, JavaScript, jQuery, Angular, or React. Their docs include a lot of easy-to-follow examples for learning various things. It is good to have this website bookmarked, as it helps you look up syntax or understand how specific functions work.
5. GitHub
This is another website, apart from Stack Overflow, that a developer eventually comes to know about. It is basically a code repository that developers can utilize to learn and share knowledge with developers across the world. Many famous and successful developers use GitHub to share tutorials and pet projects. It is a good way to revamp your skills as a developer. I've worked with several good programmers who, in their free time, love to visit GitHub. As the tagline suggests, it is indeed a place where the world builds software.
6. Glassdoor
This is a different kind of website that helps you more with your career. Here, you can not only find jobs but also go through the reviews and salary structures of many companies. It is a useful resource for understanding the interview process, salary expectations, and management reviews from people who are or were part of the organization. Glassdoor can help you when you are looking for a job change. You can read about the interview process and experiences from other candidates, which can help you know what to expect when going in for the same interview.
7. Codementor
It is the largest community for developer mentorship and an on-demand marketplace for software developers. It's an online platform providing instant one-on-one help, using screen sharing, video, and text chat to replicate the experience of having a mentor. It is mostly used for code review, debugging, and online programming. This is a good resource for newbie, self-taught programmers to find a mentor.
https://towardsdatascience.com/7-websites-every-software-developer-should-follow-cae345c52355
['Shubham Pathania']
2020-12-21 13:29:14.489000+00:00
['Developer', 'Software Engineering', 'Programming', 'Software Development', 'Tips And Tricks']
How to set default properties in Figma to speed up your workflow
How to set default properties in Figma to speed up your workflow with some drawbacks While browsing through the Figma support forums, I found a common disdain for the sad, dumpy, default gray that Figma assigns to new object layers created. Sad dumpy gray We’ve all seen it. We always go to change the color as soon as we draw it: either by color picker or choosing a library color. Setting it to UI white Thankfully, we can say goodbye to the default gray. In fact, not only can you just say goodbye to the gray, you can set any number of different properties as default. The default properties can be entirely defined by you. The process is to create an object, set that object’s properties as you desire them, then top-left menu->Edit->Set Default Properties. Ah, nice little cards all ready for elevation and content And presto! Now all objects you create will have the properties you defined as default. This includes border radius and library colors. As you can see here, the next rectangle drawn has 4px borders and is set to my UI/White color. Rounded corners by default? oolalah Now when I draw a rectangle, I can jump right into the purpose of that UI element without having to set it up every time. Just one more thing to take off my cognitive workload. This being said, there is a lot of room for Figma to improve on this feature. When you set the default properties, it changes all layers to those properties. I might want my rectangles to default to UI/White, but if I set that to default, then my text layers will inherit those properties as well. As it is, you have to pick which object saves you the most time. For me, it was rectangles. For some, it might not be worth it at all. However, the feature could be very useful in the future. A tie-in with the new variants announced would definitely be worth considering. Hopefully Figma will address this in the future.
https://medium.com/design-bootcamp/how-to-set-default-properties-in-figma-to-speed-up-your-workflow-ab32cccd0117
[]
2020-09-23 01:07:07.762000+00:00
['Figma', 'UI', 'Design', 'Software Development', 'UX']
Things I learned in one year working as a Software Engineer
Things I learned in one year working as a Software Engineer Gurdip Singh Follow Nov 12 · 4 min read Photo by Christopher Gower on Unsplash It has been over a year, I started working as a software engineer and, the whole year has been a rewarding and learning experience. When I look back, I can already see how much I have learned and grown as a software engineer. While I was reflecting on this lovely experience, I found many things that I have learned and I would like to share them here so that it can benefit budding software engineers who are in the same shoes as I was a year ago. I have divided my learnings into soft skills and technical skills. I am going to start with soft-skills as I believe they will help in advancing one’s technical skills as well. Ask questions: Never be afraid to ask questions. As cliche as it sounds, I can never stress enough because often a lot of people like me have no idea what is going on when they start, especially during the first couple of months and they hesitate to ask questions. When I started, everything was new to me like the technology stack, the process, and the terms my colleagues used to refer to things related to work. I was very hesitant to ask at first as I thought I might look stupid but, when I finally ended up asking my doubts, I always learned new things. So, never hesitate to ask questions even if you think they are trivial. Take Notes: During my first couple of months, I used to get stuck in figuring out things because I did not have knowledge of the system and when I asked for help from seniors, often time they did some magic to get everything working again. When this happened a couple of times, I realized how important it is to keep notes so that you don’t have to hassle your seniors repeatedly. Further, maintaining notes will help you track the progress and might also help new people who might join after you. Take initiatives: It is so important to take initiatives whenever you get an opportunity. There will be times when things sound as trivial as setting up a team meeting and, nobody is willing to take the initiatives. Do not hesitate to step ahead and take initiatives as such things will help you build a better relationship with colleagues and hence can ultimately help in your learning path. Communicate: Another significant factor to be at the top of one’s game is communication. In my one year, the one thing I got appreciated for more than anything was my communication as I always communicate if someone is dependent on me. For example, when can one expect me to finish or if something is taking longer, keeping team members in the loop always help and avoid confusion. Ask feedback: Getting proper and timely feedback is very important to making progress towards your career goals. It can be as frequent as having one on one weekly meetings or having a bi-monthly meeting. Ask your manager about what the team thinks about your progress and work in general and adjust your career goals accordingly. It is a great way to discuss and know about your future in the company. It is not only the soft skills that I have learned. I also ended up growing a lot more technically sound than I was on day one. Version Control Systems: Before joining I sure knew about version control systems but, I just knew beginner level commands that one could get familiar with within a day or two. The actual version control may vary from company to company. 
It can be challenging at first to work with hundreds of repositories and managing version control with multiple simultaneous projects. I would advise learning and get comfortable with git as much as possible. Linux development: When I started working, I had little to no experience with Linux as I come from a non-computer science background. So, this year, I spent a lot of time learning about development in a Linux environment. I wish I had known all these things before I started as I believe the journey would have been more rewarding but, as they say, you don’t have to know everything. Code Reviews: Code reviews are an essential part of the job as a software engineer. One should keep in mind when writing code that you may have to explain the reasoning behind the code during code reviews. I feel code reviews help in getting hands-on feedback and help you grow and write better code. Also, it is equally important to take some time to review the code of your colleagues and discuss the best coding practices. Debugging: Software engineering is incomplete without having sound debugging skills. Whenever I have to work on a bug fix, I always start from the top and try to reach the root cause of the problem. Again, this is something that one learns with practice and the proper guidance of mentors. Well, I learned countless things which I can’t write about all here but, I found these few important and worth mentioning here. I hope these few things will help you in understanding more about the daily job as a software engineer and eventually help you in your journey. Happy Coding!
https://medium.com/javascript-in-plain-english/things-i-learned-in-one-year-working-as-a-software-engineer-f4e7e8bf7ba7
['Gurdip Singh']
2020-11-12 06:32:23.682000+00:00
['Web Development', 'Software Development', 'Software Engineering', 'Learning To Code', 'Programming']
Longer Form Thoughts on Naming and a Potential Alternative Emphasis
There is a popular quote that occasionally gets tossed around among software developers: There are only two hard things in Computer Science: cache invalidation and naming things. — Phil Karlton When invoked with the intent to emphasize the latter of the two hard things–naming–it is often in response to frustration with trying to mentally evaluate source code or read documentation that someone else wrote. Naming is hard The following are at least two common issues with names that I have seen lead to frustration: Names are sometimes too generic or abstract (i.e. they don’t provide enough information). For example a variable is named e instead of element or error or theFifthLetterInTheAlphabetSinceIAlreadyUsedTheFirstFour . instead of or or . Names are misleading (i.e. they provide inaccurate information). For example Model is an overloaded term in software development (and really in modern English). Everyone has their own understanding about what a model is, so there is no chance of consistency in its usage. And even if a project comes up with a more formal internal definition, there is always the possibility that over time the meaning will need to change or the term will get reused with different connotations. Also, discussions about how to name things often devolve into bikeshedding. Naming is easy The computer does not care what you name things as long as the names are unique. Throughout history disciplines such as philosophy and math have used abstract symbols such as the letters of the alphabet as shortcuts for creating identifiers when communicating ideas. Today we write computer programs that programmatically generate unique names. Even if it takes some time, our brains have an impressive capacity to learn new names/symbols, and to associate new definitions with already known names/symbols. Any specific word (or concept name) in a human language is arbitrary. When coming up with a new word we can cleverly combine Greek or Latin roots, or we can string together a few random letters. Either works pretty well. (And as for those Greek or Latin roots, or whatever their predecessors were, they too are just approximately random strings of letters or sounds.) An interesting experiment would be to take a code base that uses completely arbitrary abstraction names and see how long it takes for someone to be able to work with that code. Not only would this potentially illustrate how unimportant names really are, it would likely also uncover problematic abstractions and less ideal patterns and practices. Naming is impossible The problems I described above (names not providing enough information, or the reuse of names being potentially confusing or misleading) are impossible to overcome, especially by just trying to improve the way things are named. There will always be the need to learn new name/definition combinations, and there will always be the possibility that new meaning or new definitions will be attributed to an existing name/symbol making any given application less clear. There will always be some frustration. An alternative emphasis Rather than trying to get better at naming things perhaps we should focus more on improving how we define things (i.e. how we author abstractions), and come up with more powerful approaches for learning new definitions and disambiguating between definitions. Type systems for example are a more formal tool that diminish the importance of naming. Computers provide a rich medium for authoring and relating ideas. 
We can and have already started to leverage graphics, audio, etc… to create richer more engaging and interactive representations of our ideas and abstractions. A couple straightforward examples are testing tools and IDEs. (Both provide an interactive means for exploring and understanding abstractions.) Post publication thoughts I think ideally we can get to a place where thanks to our tools abstraction names are completely arbitrary, and generating names is an automated process. Tools could instead allow for human annotations. And every copy of a code base could potentially have its own set of human language names and annotations. Developers could share annotations, but any given developer could customize their own annotations. One thing this could facilitate is international collaboration (e.g. developers could annotate code in their primary language). But, even more important than naming and annotations, is for our tools to make it easy to quickly understand the formal definition of an abstraction, and for local abstractions make it easy to review how they are currently being used. (Confusing or unexpected abstraction use could also be an indicator of less ideal patterns, practices or architecture. A simple example is a method being overly complex to the point that it’s hard to infer from the context the role a local variable is intended to play.) Naming shouldn’t hamper creativity Sometimes (often?) as developers we are playing the role of inventor or ideator. And our new ideas may not be conducive to being succinctly identified with even a combination of existing names/symbols. Hopefully we provide ourselves some leeway to also invent new names that may not have much if any intrinsic meaning.
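As a toy illustration of that point (a hypothetical sketch, not from the essay), even a function with an essentially arbitrary name stays workable when a type signature, a short annotation, and a quick test carry the meaning:

from typing import Iterable

def q7(xs: Iterable[float], w: float) -> float:
    """Annotation: exponentially weighted running total of xs with weight w."""
    total = 0.0
    for x in xs:
        total = w * total + x
    return total

# The reader leans on the signature, the docstring, and a quick check
# rather than on the name "q7" itself.
assert q7([1.0, 1.0, 1.0], 0.5) == 1.75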
https://nateeborn.medium.com/longer-form-thoughts-on-naming-d7decf807369
['Nate Eborn']
2020-05-07 20:55:15.355000+00:00
['Software Development', 'Software Engineering', 'Naming', 'Naming Conventions']
13 Python Tools That Every Developer Should Know
When you are just starting to learn Python, someone may explain to you that you can add your source folder to the PYTHONPATH environment variable, and then your code can be imported from other directories. Very often, the explainer forgets to say that, in most cases, this is a bad idea. Some people find this on the Internet; others know it from their own experience. But too many people (especially inexperienced programmers) think that there are no other alternatives. Even if you know there is an alternative, it is not always easy to get used to it and start using it. Python tools are confusing because there is a lot of software built on top of other software, with a ton of overlap and the problems that come with it. It is not easy to understand how to use these tools correctly in your project. For this reason, I decided to write this article, touch on the most popular tools in it, and figure out when and where they are used and what tasks they solve. I will try to explain as simply as possible how to use each of these tools. If a tool is on this list, then you, as a Pythonista, need to at least know about its existence. I will only talk about those tools that can be applied to any project or workflow, and you should keep them in mind when you start a new project. However, this does not mean that you should use all the presented tools in every project. It is unnecessary to overload the project with tools because, in some cases, this can overcomplicate it.
❗ Essential Python Tools ❗
1. Setuptools
Setuptools is the standard and most common way to create packages in Python. It works anywhere and does its job well. For example, by setting up a single setup.py file, you can easily create local Python packages that you can install with pip and even publish them to PyPI so you can install them elsewhere by:
pip install <your_package_name>
What It Gives: creating .egg, .zip, or wheel files from the source, defining metadata for your project, and collaborative, structured, standardized work on the code.
When It's Used: Anytime you write code that needs to run on someone else's machine.
Alternatives: Poetry and Flit
2. Virtualenv
Virtualenv is a virtual environment manager. Sandboxes like these are stand-alone Python environments with a specific set of pre-installed packages. Using virtualenv means you don't need to install packages into the default system Python.
What It Gives: separation of dependencies, support for different versions of Python on one system, and easy movement of dependencies.
When It's Used: You need to write code that requires a version of Python different from your system's default.
Alternatives: heavy-duty things like Docker or Singularity containers.
3. Pip
Pip is the most common package manager in Python, one that you've definitely heard of. It allows you to install local or remote packages into your virtual environment or system Python. Pip also has a lot of nice features, like installing a package in developer mode with the pip install -e option, or producing a list of installed packages, including their versions, with pip freeze > requirements.txt. However, if you need to handle package dependencies for your project, things can get messy here, which is why, for my projects, I prefer to use Poetry.
What It Gives: installing and removing packages, tracking the versions of the packages that you are using.
When It's Used: Pretty much always.
Alternatives: the aforementioned Poetry, or Conda
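To make the setuptools item concrete, here is a minimal setup.py sketch; the package name, version, and dependency below are placeholders rather than anything from this article:

# setup.py (a hypothetical, minimal example)
from setuptools import setup, find_packages

setup(
    name="your_package_name",
    version="0.1.0",
    description="A small example package",
    packages=find_packages(),        # picks up every folder containing __init__.py
    install_requires=["requests"],   # runtime dependencies, if any
    python_requires=">=3.6",
)

With a file like this at the project root, pip install . installs the package into the active environment, and pip install -e . installs it in the developer mode mentioned above.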
https://medium.com/python-in-plain-english/13-python-tools-every-developer-should-know-4ae1218ff60b
['Mikhail Raevskiy']
2020-11-26 10:27:38.868000+00:00
['Python', 'Web Development', 'Software Development', 'Data Science', 'Programming']