Dataset fields: title (stringlengths 1–200), text (stringlengths 10–100k), url (stringlengths 32–885), authors (stringlengths 2–392), timestamp (stringlengths 19–32), tags (stringlengths 6–263)
How to Remake Historical Data Visualization and Why You Should
I call it a Du Bois Spiral. It’s aesthetically compelling in the way it encodes urban to rural demographics. It’s also yet another example of complex data visualization from back before we all got so conservative and regressive. Du Bois knows you cannot precisely compare the lengths of those diagonals and spirals, and so he writes the number to go along with them. It provides the exact number of African Americans living in the various parts of Georgia as well as a more striking summary: the almost absurd ratio of red to any other color. Finally, tucked away in the “neck” of the visualization is the lack of representation of African Americans in medium-sized cities. African American life in Georgia in the late nineteenth century was overwhelmingly, dramatically rural, with a significant urban character but with almost no representation in small towns. That story is encapsulated in this graphic better than any bar chart. Historical works like these can provide more than just inspiration to a modern data visualization practitioner. They also provide material for one of the most effective ways to enhance your skills: remaking historical data visualization. You could remake the original by using a data visualization library like D3 or by hand (as the original author has done). That’s useful, but more valuable is to try to break down the rules by which the original was created and produce new data visualization products with those rules. That’s what Nathan Yau did when he took the Statistical Atlas of the United States and remade it with modern data. Nathan Yau’s amazing remastering of the Statistical Atlas of the United States. This approach improves your understanding of the structure and rules for presenting information. It also teaches you by example techniques to supplement and contextualize that information. And it provides solid aesthetic patterns to follow. In the particulars, it typically has useful challenges like implementing the small multiples on the right. For something like the Du Bois Spiral, there’s a different approach: deriving the rules for a novel visualization. Ben Schmidt did this with Minard’s Map of the Invasion of Russia. From the rules he derived, Ben came up with d3.trail, a library for drawing complex geographic paths. With it, you can create a Minard’s Map of whaling ships, or a Minard’s Map of visitors to a website. You can also take the original map and animate it, as Ben has done.
https://towardsdatascience.com/how-to-remake-historical-data-visualization-and-why-you-should-c25874fc4804
['Elijah Meeks']
2019-06-03 19:12:05.533000+00:00
['Design', 'Data Visualization']
This Year, I Ran Through My Savings
If there’s one thing I wish more people understood about poverty, it’s that growing up poor is terribly expensive. It’s funny. When I first began making “good” money by writing online, it felt like most people didn’t understand that the money I was making wasn’t all just sitting in a bank growing exponentially. Some of it sat in the bank, of course, and some of it was saved for certain events, specific needs, or special occasions. But over these past couple of years, most of my money has gone toward simply “catching up.” Honestly, I’m not sure why more people don’t talk about this. Maybe they’re ashamed. After all, talking about money at all is often seen as uncouth — as if you’re either a braggart for mentioning how much you’ve made, or a panhandler for admitting when it’s still not quite “enough.” Personally, I believe we can talk about money in frank terms without bragging or begging. So, let’s talk about it. “Catching up” is something that happens when you come into more money after never having enough. It looks different for everyone, and it depends a lot upon how much new money is coming in. For me, getting out of debt and repairing my credit after homelessness was important. So, that was one priority. But buying decent furniture was important too. See, when I became pregnant back in 2013, my daughter’s dad left me in the lurch after I’d already left everything in Minnesota just to be with him. That meant the pregnancy wasn’t only the beginning of motherhood for me — I also had to start my life all over. Again. The first time I started my life over was after my divorce in 2006. This time, starting over meant moving about 11 times in my daughter’s first two years, and never having close to everything we needed. Most of the valuables I had back when I was with my ex got lost, ransacked, or pilfered by his ex-mistress subletter. As a result, over the past couple of years, I’ve been slowly acquiring some of the stuff we lacked. Kitchenware that’s not from the Dollar Store. Decent beds and linens. Basic organizational tools. In 2019, putting my daughter into a good preschool essentially doubled my rent, but I believe it was a good choice for us. Now, she’s in kindergarten and I plan to keep her at the same school for as long as I can pay the tuition. Twenty thousand dollars worth of dental work? Done. That was probably the worst expense of the first 18 months of my blogging journey, but it was unavoidable unless I wanted to keep losing teeth. I’ve been a nighttime tooth grinder since childhood, and coupled with a lack of adequate dental care for most of my life, my mouth was a mess. Actually, it’s still a mess. I’m now in the process of replacing failed amalgam fillings from my youth. I’ve got a few more implants to finish up as well. Eventually, I hope to get veneers and be a lot less self-conscious about my smile. On the more frivolous side of things, my daughter and I took a week-long Atlanta vacation in the summer of 2019. It could have been cheaper, but my daughter still talks about the trip, so I can’t really regret it. For all intents and purposes, 2020 was meant to be my year of frugality. Sure, there’d still be big expenses like more dental work and my first official car, but I originally planned to save the bulk of my income. And then I got sick.
https://medium.com/honestly-yours/this-year-i-ran-through-my-savings-527a2726a1b7
['Shannon Ashley']
2020-11-20 23:25:55.627000+00:00
['Health', 'Success', 'Life', 'Self', 'Money']
The Best Way to Get Rid of Bed Bugs
The Best Way to Get Rid of Bed Bugs And prevent their return Photo of an adult bedbug, by V. Jedklicka on bed-bugs-handbook.com I’ve always loved insects. Maybe I was an entomologist or insect in a past life. Ants, spiders, bees, and more fascinate me. I release spiders to their outdoor space or leave them alone when they’re inside my house. Though I love most of them, there are a few exceptions. I’m not a fan of roaches in my home (they sneak in here a lot). Still, I often sweep them out to live the rest of their days in my yard instead of the kitchen. And then there were bedbugs. They’ve altered my relationship with insects forever. I’m itching just thinking about them. For those who are allergic (I’m one of them), they leave rows or clusters of itchy bites overnight. If you suspect you have bedbugs anywhere in your home, you’ll have to be relentless in eliminating them. Here’s how you can (hopefully) get rid of bedbugs: Identification Bedbugs are round, brownish-red, and somewhat flat before they feed. They’re about the size of a flea. Like me, you might not see them often or at all. The first one I found was filled with my blood. It didn’t even look like a bug at first. They look like the tip of a candlewick when full. I’m midway through professional treatment, so I didn’t expect bites after the exterminator sprayed. They bit me eleven times the other night. Seriously, these bugs are no joke. It might’ve taken only one of them to dine on my forearm, neck, and thighs. Bedbugs are more likely if you live in a rental. I’ve always rented apartments as an adult. Even with flea-carrying cats, I’ve never been subject to this many bites from an insect. They got my attention. If you live in an apartment, check with your neighbors. My next-door neighbor has more bedbugs than I do, and we share a common wall. They’re likely hiding there, then sneaking in to suck my blood in my sleep. I’ve never been so motivated to rid my room of bugs. I’ve had bedbugs in my room for a few months. I noticed several tiny bites on my knee and wrist once or twice in late June. I recalled a similar pattern on my youngest daughter last year, so I had a feeling they were here. We went to visit my parents for a month, and the bugs returned with a vengeance. I suddenly had up to a dozen bites every night. We probably had an infestation before in Oregon, but never pursued treatment because we were about to move. We currently live in Albuquerque, NM, where a variety of insects thrive. Bedbugs take it to a new level. They can survive without a host for up to 18 months, even in a sealed trash bag. Clean and clear your space There are several steps. First, wash and dry all clothing, pillows, and bedding on hot. Put everything else in sealed plastic bags. Include books and picture frames. Discard whatever might be a good hiding place, as finances allow. Vacuum thoroughly, then empty the canister or dispose of the bag in the outside trash bin. Continue to vacuum before, during, and after treatment. Here’s the deal with bagging your things. You have to look through every item since they don’t all die in there. I suggest going outside, far from your door. Inspect for holes, rips, or tears. Flip through books and notebooks. Shake out baskets. I found a few in a wicker basket I’d kept by my bed. No wonder they were after me so often. I threw it in the garbage. Having a bedbug invasion isn’t about overall cleanliness. Anyone can bring them in. But clearing out every hiding place you can is the best way to keep them at bay.
Use essential oils I dropped and rolled all the essential oils before going to bed. I used a combo of lavender and cedarwood on my neck, wrists, and chest. I tried a homemade concoction of peppermint, rosemary, tea tree, and cedarwood to spray around the perimeter and on my sheets. I used bug repellent with catnip, lemongrass, rosemary, and geranium oil. Badger Bug Balm works well and smells delicious. The catnip spray is called Life Stings. They’re both organic and don’t contain DEET. I have to be honest about my use of essential oils. They do help, don’t get me wrong. They’re effective to an extent. But they’re not reliable enough to completely repel these bedbugs. They deter them for a time until you have to reapply. The bugs don’t like the scent, but they want you more. That’s why oils are only a part of the picture. The most important thing to do is to call an exterminator. Call a professional Brian, my bedbug guy, as I affectionately call him, has the poison to kill bedbugs in every life stage. He sprays the carpet, mattress, and baseboards. Then he shakes a dust that’s similar to diatomaceous earth around the perimeter. It cuts the bedbugs, and they die from their injuries. After the first treatment, we checked in about sightings and bites. When I reported seeing them and getting bites, he applied a second dose a week later. They’re still here, at the time of this writing. If I weren’t moving away a few weeks from now, Brian and I would probably see each other for months. Bedbugs don’t give up that easily. I recommend you maintain a working relationship with your bedbug exterminator. Stay in contact with them to report your observations. We rent, so all I have to do is text or call him. You’ll have to research costs, but it seems he has a flat rate, regardless of how many visits. Final thoughts As a nature lover, I initially wanted to live harmoniously with bedbugs. When I realized no harmony comes from bugs biting me every night, I had to do something about it. Since humans have interfered with the cycle of nature too many times, we need to take measures we might not have had to worry about in the past. I’m sure bedbugs are on the rise due to our imbalanced ecosystem. Unfortunately, it’s come down to “Us vs. Them.” If you have bedbugs, you’ll need to take action right away. They want to survive, and they’ll do anything they can to stay in your home. Clear out your belongings, clean thoroughly, apply essential oils, and call a professional exterminator. Communicate with neighbors if you live in an apartment, condo, or duplex. They’ll likely need treatment, too. Don’t give up until they’re gone. Keep the faith, even if it takes years. I hope my experience and newfound expertise will help someone else stay out of bedbug hell.
https://medium.com/the-partnered-pen/the-best-way-to-get-rid-of-bed-bugs-5178f3b56739
['Michelle Marie Warner']
2020-09-02 02:36:52.294000+00:00
['Health', 'Insects', 'Ideas', 'Home Improvement', 'Nature']
A Beginner’s Guide To What Is Regression Testing
Regression Testing — Edureka Whenever new software is released, the need to test new functionality is obvious. However, it’s equally important to re-run old tests that the application previously passed. That way we can be sure that the new software does not re-introduce old defects or create new ones. We call this type of testing regression testing. Throughout this article, we will explore regression testing in detail. Let’s take a look at the topics covered in this article: What is Regression Testing? Benefits of Regression Testing When to Apply Regression Testing? What are the Types of Regression Testing? How is Regression Testing Implemented? Regression Testing Techniques Challenges of Regression Testing What is Regression Testing? “Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made is called Regression Testing.” A regression test is a system-wide test whose main purpose is to ensure that a small change in one part of the system does not break existing functionality elsewhere in the system. If you consider regression as unintended change, then this type of testing is the process of hunting for those changes. In simple terms, it is all about making sure that old bugs don’t come back to haunt you. Let’s take a look at a fictitious example that illustrates the concept. When adding a new payment type to a shopping website, re-run old tests to ensure that the new code hasn’t created new defects or re-introduced old ones. Regression testing is important because, without it, well-intended fixes can easily create more problems than they solve. Benefits of Regression Testing Conducting regression tests benefits companies in a number of ways: It increases the chance of detecting bugs caused by changes to the software or application It can help catch defects early and thus reduce the cost of resolving them It helps uncover unwanted side effects that might have occurred due to a new operating environment It ensures better-performing software through early identification of bugs and errors Most importantly, it verifies that code changes do not re-introduce old defects Regression testing ensures the correctness of the software so that the best version of the product is released to the market. However, in the real world, designing and maintaining a near-infinite set of regression tests is just not feasible. So you should know when to apply regression testing. When to Apply Regression Testing? It is recommended to perform regression testing when any of the following events occur: When new functionalities are added When requirements change When there is a defect fix When there are performance issues When the environment changes When there is a patch fix The next part of this article covers the different types of regression testing. What are the Types of Regression Testing? Regression testing is done across several phases of testing. It is for this reason that there are several types of regression testing. Some of them are as follows: Unit Testing: In unit testing, when code changes are made to a single unit, a tester (usually the developer responsible for the code) re-runs all previously passed unit tests. In continuous development environments, automated unit tests are built into the code, making unit testing very efficient in comparison to other types of testing.
Progressive Testing: This type of testing works effectively when the software or application specifications change and new test cases are designed. Selective Testing: In selective testing, testers use a subset of the current test cases to cut down the retesting cost and effort. A test unit must be rerun if and only if any of the program entities it covers have been changed. Retest-All Testing: This type of testing strategy involves testing all aspects of a particular application and reusing all test cases, even where changes have not been made. It is time-consuming and of little use when only a small modification has been made to the application. Complete Testing: This testing is very useful when multiple changes have been made to the existing code. Performing this testing is highly valuable for identifying unexpected bugs. Once this testing is completed, the final system can be made available to the user. It is very important to know which type of testing suits your requirement. Next up, we will discuss how regression testing is implemented. How is Regression Testing Implemented? The procedure for implementing regression testing is like the one you apply for any other testing process. Every time the software undergoes a change and a new release comes up, the developer carries out these steps as part of the testing process: First, the developer executes unit-level regression tests to validate the code they have modified, along with any new tests they have written to cover new or changed functionality Then the changed code is merged and integrated to create a new build of the application under test (AUT) Next, smoke tests are executed for assurance that the build is good before any additional testing is performed Once the build is declared good, integration tests are performed to verify the interaction between units of the application with each other and with back-end services such as databases Depending on the size and scope of the released code, either a partial or a full regression is scheduled Defects are then reported back to the development team Additional rounds of regression tests are performed if needed That’s how regression testing is incorporated into a typical software testing process. The image below clearly depicts how regression testing is performed. Whenever changes are made to the source code, program execution may fail. After a failure, the source code is debugged in order to identify the bugs in the program. Appropriate modifications are made. Then the appropriate test cases are selected from the already existing test suite to cover all the modified and affected parts of the source code. New test cases are added if required. In the end, testing is performed using the selected test cases. Now you might be wondering which test cases to select. Effective regression testing can be done by selecting the following test cases: Test cases which have frequent defects Complex test cases Integration test cases Test cases which cover the core functionality of a product Functionalities which are frequently used Test cases which frequently fail Boundary value test cases With the regression testing process out of the way, let’s check out various techniques.
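Before moving on to the techniques, here is a minimal, illustrative sketch of the unit-level regression step described above, written in Python with pytest. The checkout function, its payment types, and the test names are hypothetical stand-ins for the article’s shopping-website example, not code from the original post.

```python
# Minimal sketch of a unit-level regression test (hypothetical checkout module).
# The function names and behaviour are illustrative, not from the original article.
import pytest


def calculate_total(items, payment_type="card"):
    """Toy checkout function: sums item prices and applies a payment surcharge."""
    surcharges = {"card": 0.0, "cash_on_delivery": 2.0, "wallet": 0.0}  # "wallet" is the newly added type
    if payment_type not in surcharges:
        raise ValueError(f"unsupported payment type: {payment_type}")
    return sum(items) + surcharges[payment_type]


# Previously passing tests: re-run after adding the "wallet" payment type
# to confirm the old behaviour still holds (the essence of regression testing).
def test_card_total_unchanged():
    assert calculate_total([10.0, 5.0], payment_type="card") == 15.0


def test_cash_on_delivery_surcharge_unchanged():
    assert calculate_total([10.0], payment_type="cash_on_delivery") == 12.0


# New tests covering the new functionality.
def test_wallet_payment_supported():
    assert calculate_total([10.0], payment_type="wallet") == 10.0


def test_unknown_payment_type_rejected():
    with pytest.raises(ValueError):
        calculate_total([10.0], payment_type="gift_card")
```

Running plain pytest re-executes the whole suite after every change, which is how the previously passing tests keep guarding against regressions; in a CI pipeline this would happen automatically on each commit.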
Regression Testing Techniques Regression testing simply confirms that modified software hasn’t changed unintentionally, and it is typically performed using some combination of the following techniques: Retest-All: This method simply re-tests the entire software suite, from top to bottom. In many cases, the majority of these tests are performed by automated tools, though at times automation is not necessary. This technique is expensive, as it requires more time and resources than the other techniques. Test Selection: Instead of choosing all the test cases, this method allows the team to choose a set of tests that approximates full testing of the test suite. The primary advantage of this practice is that it requires far less time and effort to perform. It is usually done by developers, who typically have better insight into the nuances of test edge cases and unexpected behaviors. Test Case Prioritization: The goal of this technique is to prioritize a limited set of test cases by running higher-priority test cases ahead of less important ones. Test cases which could impact both current and future builds of the software are chosen. These are the three major techniques. At times, based on testing requirements, these techniques are combined. As useful as regression testing can be, it is not without its negative points. You need to understand the challenges that you might face when implementing it. Challenges of Regression Testing Time-consuming: Techniques like retest-all need a lot of time to test the entire suite of test cases Expensive: Costly because of the resources and manpower needed to test, again and again, something which has already been developed, tested, and deployed at earlier stages Complex: As the product expands, testers are often overwhelmed by the huge number of test cases and lose track of them, overlooking important test cases Despite these negative points, regression testing is very useful in the software testing process. With regression testing, companies can prevent projects from going over budget, keep their team on track, and, most importantly, prevent unexpected bugs from damaging their products. With this, we have reached the end of the blog. I hope the things that you have learned here today will help you as you head out on your software testing journey. If you wish to check out more articles on the market’s most trending technologies like Python, DevOps, and Ethical Hacking, then you can refer to Edureka’s official site. Do look out for other articles in this series which will explain the various other aspects of Software Testing.
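As a footnote to the Test Selection and Test Case Prioritization techniques above, here is a hedged sketch of how a team might tag tests by priority with pytest markers so that a partial regression run selects only a subset. The marker names and the toy cart function are invented for illustration; they are not from the article.

```python
# test_cart_regression.py -- illustrative only; marker names are arbitrary.
# Register the markers (e.g. in pytest.ini) to avoid "unknown marker" warnings:
#   [pytest]
#   markers =
#       core: covers core, frequently used functionality
#       boundary: boundary-value cases
import pytest


def add_to_cart(cart, item):
    """Toy application code standing in for the real system under test."""
    return cart + [item]


@pytest.mark.core
def test_add_single_item():
    assert add_to_cart([], "book") == ["book"]


@pytest.mark.boundary
def test_add_to_large_cart():
    big_cart = ["item"] * 1000
    assert len(add_to_cart(big_cart, "book")) == 1001
```

A partial regression run would then be `pytest -m core`, while a full retest-all run is simply `pytest` with no marker filter.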
https://medium.com/edureka/regression-testing-b913b7064824
['Archana Choudary']
2020-05-11 12:25:11.709000+00:00
['Software Testing', 'Software Development', 'Regression', 'Software Engineering', 'Regression Testing']
A Call to Arms
Photo by Richard Felix on Unsplash Hold your horses steady Hold the cavalry still Brace yourselves until ready When you hear the trumpets shrill Behold the gates are open The battle lines are clear Take this meagre token Assuage your reckless fear Where is all our strength Lain over the barren fields Do not forget the trial’s length Raise up your mighty shields Do not let them in Those heedless mongers of greed Push back the life-killing sin Fight back for what we need Once the forests grew tall Where the battlefield now lies Don’t let it be nothing at all Let me hear your battle cries Find a voice to sing As you march with seeds to sow Hear the truth of it ring Allow life at last to grow! ~~~~~~~ ©Abigail Siegel, 2020
https://medium.com/dead-poets-society/a-call-to-arms-2710e4ad695d
['Abigail Siegel']
2020-02-15 04:54:16.137000+00:00
['Climate Change', 'Environment', 'Poetry', 'Poem', 'Verse']
The data product lifecycle
Your organisation wants to dive head-first into data and AI but you don’t really know where to start? Data & AI is on the radar of most C-level leaders. It’s often seen as a differentiator from the competition, as an enabler for more customer engagement, as a tool for cost reduction. And there is no lack of industry analysts and strategic reports that highlight the impact that data already has on the corporate world. But how to get started? We’ve done quite a few tours of duty in data analytics. We’ve reflected on what comes back across clients, and here is what we learned: 1. Data products, not projects Organisations that are successful in data have one common trait: they build data products, not data projects. What is the difference? In projects, you know what you want, you know how much you want to invest in it, you make a plan for how to build it, and you build it. Once the build is done, only maintenance is needed. Building a bridge is a project. You know you want to connect two sides of a river. You get money from both communities, you hire an architect and a temporary (contractor) team, and you build it. What’s important in a project: Deliver predefined scope on time, within budget. For products you need a completely different mindset. You are focused on business outcomes. And you give teams the freedom and responsibility to iterate towards that outcome. That fits much better with data & AI initiatives. Why? Because you often know very clearly what you want, but you don’t have a clue how to get there, even though you do recognise that data will be an important factor. A typical example: customer satisfaction. You want that to be as high as possible. It’s very clear what you want. But there are many roads to Rome. Do you want to increase customer satisfaction by: … finding unhappy customers who are about to churn, and sending them a promotion? … doing a customer segmentation project followed up by a 1-on-1 marketing strategy? … building a recommendation system that can help the customer find their way in your offering? Maybe a combination. Or maybe something else entirely. Either way, in a product mindset, you enable your data and customer teams to work closely together to iterate towards a solution fast, and then reliably run that solution for as long as it adds value. That is called a data product. 2. The lifecycle of a data product Every data product goes through the following phases, whether you like it or not: Lifecycle of a data product Experiment: Which use case are we trying to solve? Which data do we have available to solve it? I run a few notebooks, or execute a few ad-hoc queries, maybe visualise a few things in a BI tool to understand better what I’m looking for. Implement: Once I know what I’m looking for, I need to implement this. That means that I want to run it regularly in batch or real-time. Mature data products are built using software engineering best practices, with version control, tests, modular code, and all that good stuff. Deploy: Ok, it works on my machine. Ship it.
Data products only add value once they run in production. So we need to version our code, build artifacts, and actually do deployments to development, acceptance, or production accounts. Monitor: Once it’s deployed, you need to monitor the performance of the data product, both from a technical and a business perspective. And then the circle starts again. You experiment again, you implement new pieces, deploy them again, and monitor the outcomes. The faster you go through this cycle, the faster you will learn and the faster you will deliver value to your stakeholders. Based on our experience, plenty of things can go wrong in this lifecycle, as illustrated below: Issues encountered in the data life cycle Many organisations we see still don’t have their data available on a secure platform, in a self-service mode. Often there is no actual analytics environment, or the BI department hoards and protects data from business users. And when there is an analytics environment, the lack of data governance turns data lakes into swamps. Another recurring issue is that the notebook environment IS the production environment. No code reuse, no data reuse, no best practices, no tests, no version control. Nothing. You change the notebook, you change the data product. That might be fine for some innocent reports. But the more complex and customer-facing your data product becomes, the less acceptable that way of working is. Deployments are another pain point. Besides the production environment, there are not a lot of testing grounds for data products. This is due to data privacy, the complexity of setting up an environment, and the complexity of CI/CD processes. Once a product is live, the best monitoring tool that organisations have is customers complaining that the product is not available, shows incorrect data, or contains outdated information. And finally, maybe the biggest pain point of all: most companies are struggling to find enough data talent. And once they do find it, those people are not allowed to make structural improvements to this product lifecycle, because the next feature awaits. At one of our first clients, our contract was nearly terminated because I spent time pushing our code to a git repository and integrating with a build server. “You were hired to build features, not be an operational expense”. Ouch. Luckily, I could persuade the manager to let me stay. 3. The importance of engineering and governance I know, data scientists have the sexiest job of the 21st century. And while their genius contributions can often make a profound impact on the value being created, especially in the experimentation phase, it is not the whole story. Proper data governance can give you a business glossary, a data catalog, data lineage, … I know, these concepts are not sexy. But they do accelerate the build of new data products. Proper data engineering can bring automation, scalability, security, flexibility, reliability, …. Again, no business sponsor gets excited when they hear these notions. But you cannot live without them.
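As a deliberately tiny illustration of the “implement” phase described above, here is a hedged Python sketch of notebook logic moved into a small, versioned, testable module. The churn-scoring rule, thresholds, and names are invented for illustration and are not taken from the article.

```python
# Minimal sketch of the "implement" step: notebook logic moved into a small,
# versioned, testable module. The churn-score logic is a made-up placeholder.
from dataclasses import dataclass
from typing import List


@dataclass
class Customer:
    customer_id: str
    days_since_last_order: int
    tickets_opened: int


def churn_score(customer: Customer) -> float:
    """Toy scoring rule: inactivity and support tickets push the score up."""
    score = 0.01 * customer.days_since_last_order + 0.05 * customer.tickets_opened
    return min(score, 1.0)


def flag_at_risk(customers: List[Customer], threshold: float = 0.5) -> List[str]:
    """Return the ids of customers whose churn score exceeds the threshold."""
    return [c.customer_id for c in customers if churn_score(c) > threshold]


# A unit test that runs in CI on every commit, so a notebook edit can no
# longer silently change the product's behaviour.
def test_flag_at_risk_flags_inactive_customers():
    quiet = Customer("a", days_since_last_order=90, tickets_opened=3)
    active = Customer("b", days_since_last_order=2, tickets_opened=0)
    assert flag_at_risk([quiet, active]) == ["a"]
```

The point is not the scoring logic but the shape: the transformation lives in a module under version control, the test runs on every commit, and the notebook goes back to being a place to explore rather than the production artifact.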
https://medium.com/datamindedbe/the-data-product-lifecycle-4903c9752527
['Kris Peeters']
2020-01-15 15:33:50.189000+00:00
['Data Governance', 'Data Product Creation', 'Data Engineering']
How to Properly Use AWS S3 Presigned URLs
Configuring AWS First, we need to create an S3 Bucket. (If you already have one, you can go with it, but remember to do the configurations below.) Log in to the console, navigate to S3, and click on the blue “Create bucket” button. We’ll name our bucket presigned-url-upload and keep everything as default except “Block all public access” under Set Permissions. (You need to turn it off if you want to give the public access to some files, such as profile photos.) After creating the bucket, we also need to edit the CORS configuration of the bucket. Navigate into the bucket, choose “CORS Configuration” under the Permissions tab, and copy the configuration below: Next, we need to give Lambda the necessary permissions. To do this, navigate to the Lambda dashboard, select your function (s3_presigned_file_upload-dev, in my case), go to the Permissions tab, and click on the Role name (same as your function name). This will open an IAM dashboard. Click on “Attach Policies,” search for “S3,” select “AmazonS3FullAccess,” and click on “Attach Policies” at the bottom left; this gives Lambda the necessary permissions to access S3 and generate the URL [2].
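The excerpt above covers the bucket and IAM setup but not the function body itself, so here is a minimal sketch of what the Lambda handler could look like, assuming a Python runtime with boto3. The bucket name comes from the article; the key naming, expiry, and response shape are illustrative choices, not the author’s actual code.

```python
# Minimal sketch of a Lambda handler that returns a presigned upload URL.
# Assumes a Python runtime with boto3; bucket name from the article, the rest
# (key naming, expiry, response shape) is illustrative.
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "presigned-url-upload"


def lambda_handler(event, context):
    # One random key per upload; a real app might derive it from the user or file name.
    key = f"uploads/{uuid.uuid4()}"
    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # the URL stays valid for 5 minutes
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"upload_url": url, "key": key}),
    }
```

The client then uploads the file directly to S3 with an HTTP PUT to upload_url before the link expires; no AWS credentials are needed on the client side because the URL is signed with the Lambda role’s permissions.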
https://medium.com/better-programming/how-to-properly-use-aws-s3-presigned-urls-147a449227f9
['Doğu Deniz Uğur']
2020-06-17 19:57:15.541000+00:00
['Programming', 'AWS', 'DevOps', 'AWS Lambda', 'S3']
She Loves Truckin’
She answered a Drivers Wanted ad They said she’d be haulin’ a big rig She’d see all parts of the country And it would be a cushy gig So she signed up to get her CDL And her Hazmat credentials too She hasn’t told her parents yet And it’s time I think they knew She loves truckin’ Truckin’ across the USA When she’s not truckin’ She’s havin’ a really bad day She loves truckin’ She’s excited about honkin’ that horn She loves truckin’ You’d think she’s been truckin’ since the day she was born She asked me if I’d like to ride along For me to see what truckin’ was like So we went on a big truckin’ adventure In our down time, we’d eat or take a hike But the best thing about truckin’ Was after our loads were done I told her our truckin’ was out of this world Our truckin’ convinced me she was the one And we love truckin’ We love truckin’ day and night We love truckin’ Truckin’s fun under the moonlight We love truckin’ Truckin’ is such a blast We love truckin’ Man, we’re truckin’ so damn mothertruckin’ fast We get the loads to the destinations early The customers are happy with how we truck We never had to tell any of those factories We were late and they were out of luck We’ve never been pulled over by any Smokeys Our truckin’ is fantastic and we love our checks There’s nothin’ like being married truckers Since we’ve been truckin’ we’ve had amazing sex She loves truckin’ And I love truckin’ too We love truckin’ There’s nothin’ else we’d rather do We truck when we’re sleepy We truck when we’re awake Truckin’ really pleasing for both of us The excitement’s never ever, ever fake No, the excitement’s never ever, ever fake.
https://medium.com/no-crime-in-rhymin/she-loves-truckin-e36e68150b25
['Matthew Kenneth Gray']
2020-10-29 13:48:33.313000+00:00
['Work', 'Sex', 'Poetry', 'Music', 'Lyrics']
This Looks Like a Depression, Not a Recession
This Looks Like a Depression, Not a Recession Until we have a vaccine, we are barreling toward economic catastrophe Photo: National Film Board of Canada/Getty Images Just weeks after the stock market crashed in 1929, President Herbert Hoover assured the country that things were already “back to normal,” Liaquat Ahamed writes in Lords of Finance, his Pulitzer Prize-winning history of the financial catastrophe. Five months later, in March 1930, Hoover said the worst would be over “during the next 60 days.” When that period ended, he said, “We have passed the worst.” Eventually, Ahamed writes, “when the facts refused to obey Hoover’s forecasts, he started to make them up.” Government agencies were pressed to issue false data. Officials resigned rather than do so, including the chief of the Bureau of Labor Statistics. And we all know how that turned out: The Great Depression. Today, President Donald Trump is accused of minimizing the coronavirus as it bore down on the United States, initially barring most foreigners who had visited China from entering the U.S., but then losing a full month before taking further measures. The virus would not spread in the U.S., he said February 26, “especially with the fact that we’re going down, not up. We’re going very substantially down, not up.” Even today, the White House has failed to organize a nationwide mobilization that would arrest the virus and persuade traders, CEOs, and ordinary Americans that the crisis is in hand. As a result, Covid-19, on its current trajectory, threatens the U.S. with a profound economic downturn. Covid-19 arrived amid the longest expansion in modern American history. The economy had grown for more than 11 years and added jobs for 113 straight months. Stock markets were hitting repeated highs. But the economy at large was slowing — it grew by 2.3% in 2019, down from 2.9% the prior year, the slowest in three years, and far below the 3.1% projected by the White House. For 2020, the Federal Open Market Committee forecast just 2% growth, and economists surveyed by Bankrate estimated a 35% chance of recession by the November election. Red lights of a laggard or even a bad year were blinking: Businesses were not investing in the future — private investment growth had plunged to 1.8% from 5.1% in 2018. Even consumer spending, the singular engine of growth, was just 1.8% in the fourth quarter, down from 3.2% in the prior three-month period. It was the same globally. Economic growth hit a six-year low last year in Germany, Europe’s engine, falling to 0.6%, its slowest since 2013. Japan’s economy shrank by 6.3% and the country already appeared to be headed for its first recession since 2015. The global economy as a whole rose by just 2.4% last year, its lowest rate since the 2009 financial crash. An expectant mood grew of something that would finally push the U.S. and the world into an economic contraction, though no one could say what it would be. Today, a rising level of alarm over the coronavirus has led 30 states to shut down large parts of their economies and the rest to issue varying stay-at-home advisories. Against the financial toll, the Fed has struck, marshaling far greater firepower than it did in the Great Recession.
Congress, too, has approved triple the relief it spent attacking the 2009 financial crash, and is now talking about another, even pricier package. In all, the government has so far thrown some $6 trillion at Covid-19, most of it at the economic fallout. In part, the reason for the government’s distress is a widely accepted estimate that up to 240,000 Americans could lose their lives even with current measures against the virus. But left barely spoken is the explicit economic threat: a depression-like downturn rivaling the 1930s — prolonged double-digit joblessness, an unprecedented economic contraction, and widespread bankruptcy. The reason for the grim economic outlook is, oddly enough, the government’s very concentration of its financial cannons on the economy. When the government shows it has a convincing regime in place to restrain the virus — massive, population-wide testing, and a way to trace and quarantine those with whom victims have been in contact — the markets will gain confidence, and a floor will be created underneath the economic collapse. Until then, we are looking at the current freefall. To shore up the economy, the virus needs to be brought under visible control. But there is no sign that is where the U.S. is headed. In a rare peek at official thinking, James Bullard, president of the St. Louis Fed, told Bloomberg last week that the jobless rate could climb to 30% next quarter and that the economy could contract by 50%. That was not counting the impact of hundreds of billions of dollars thrown at companies by Congress as support to hold on to their workers. But even so, private estimates after the legislation are similar — Goldman Sachs forecasts a 34% economic contraction and 13.2% unemployment in the second quarter, and Deutsche Bank 33% and 12%. Although no one placed the forecasts in historical context, if we reach anywhere near those numbers, it will be far worse than the Great Recession, and nearly the magnitude of the Great Depression.
https://marker.medium.com/this-looks-like-a-depression-not-a-recession-16a123f966d8
['Steve Levine']
2020-04-02 18:04:34.822000+00:00
['Society', 'Global', 'Depression', 'Pandemic', 'Economy']
33 Things I learned while being a Cloud Solution Architect
Finally, if you want to learn more about cloud architecture, subscribe and follow The DevOps Fauncast by the Faun Community: http://faun.dev/podcast
https://medium.com/faun/33-things-i-learned-while-being-a-cloud-solution-architect-def544b11b4c
['Kenichi Shibata']
2020-12-16 16:45:49.431000+00:00
['Software Architecture', 'Amazon Web Services', 'Cloud Computing', 'Software Development', 'Cloud']
How To Be A CIA Agent When It Comes To Your Customers
Brands are obsessed with being relevant and cool at the expense of delivering value to their customers. Their content calendars devolve into a cringeworthy June George-fest, complete with “Wine Wednesdays,” fuchsia soft pants, and emojis because they want to be relatable. They don’t want to be a regular brand, they want to be a cool brand. Here’s the reality — your customer is selfish and solutions-oriented. They don’t want to hear your chatter until you’ve proven that you’ve not only solved their problem, but you’re also not a garbage company. COVID brought what had been simmering for years to the fore — people buy from brands that align with their values and expect brands to not only solve their personal challenges but to effect positive societal change. Brand magnetism no longer holds — the reasons consumers buy have shifted because their priorities have shifted, and this isn’t going away anytime soon. Two-thirds of consumers worldwide now buy on beliefs. Often, brands skip to the bestie part and ignore the foundation they should’ve spent more time building. Be intentional with the stories you tell, not arbitrary. Are you solving problems? Are you providing solutions? Does your business embody its values? Does it impact the communities around it beyond digging its paws through their pockets? Look at Burger King not being a trash brand! Credit: BurgerKingUK You don’t need to be your customers’ best friend — in fact, they prefer you not be — you need to be human, relevant, and solutions-oriented. What they want to see is how your values show up in how you do business, treat your people, and tell stories that matter. When I design story strategies, I think about what a customer is thinking, feeling, and doing at each stage of their journey to buy and beyond — even if that journey is with a competitor. Some fancy people call those empathy maps — I call it paying attention. I pay attention to the questions they ask throughout the process and how they talk about their experience with the brand and its products to their peers. I read reviews. I lurk in forums and on social media. I listen without waiting for my turn to speak. If I want their attention, I have to be willing to sacrifice mine in equal measure. Let’s take this a step deeper. Old school marketing focused on one ideal customer. If you could know everything there is to know about that one person, you could market to them. I used to do this because it was the way I learned how to craft personas. But here’s the thing — that person is usually white. That person bears what’s familiar to the marketer creating the persona even if they’re unaware of their bias. Now, I embrace a community approach, where customers are reflective of a diverse group of people who have varying motivations, triggers, behaviors, habits and influences that drive them to make a choice. Let me give you an example. For a few years, I was the lead strategist on a famous pancake brand. All of their media — TV, out-of-home (OOH), PPC, the whole smear — showcased white, upper-middle-class families treating the kids to a stack after the Saturday soccer game. But when we analyzed thousands of pieces of data from surveys and CRM to point-of-purchase and social media, the face of the customer wasn’t that simple and that white. We found there were four distinct segments that visited this fast casual spot, and the driving forces behind their behaviors were frequency (how often they went) and income. The higher the frequency of visits, the lower the income.
The most frequent customers tended to be African American, Hispanic, and Asian and didn’t make the big bucks but found the joint affordable. It was also a place where they could gather. College kids came a little less frequently and it was a late-night munchie kind of affair. TL;DR: We devoted a lot of money to attracting a group of people who rarely dined at the restaurant. Instead, we should’ve widened our scope to understand there was a more complex community of customers who were driven by income and frequency. That shift not only makes performance marketing effective and efficient, but it also guides what stories we tell, to whom we tell them, and on which channels we tell them. Now, we get specific in our message and where that message lands. Once we know more about our customers — their motivations, pain-points, challenges, wants, needs, behaviors, habits, influences, and preferences — we can start to create a map of how they buy from us and proactively address what they’re thinking/feeling/doing each step of the way. We define what will propel them to the next step. Through this exercise, we also have to address why they wouldn’t buy from us, what’s holding them back, which is mostly in that evaluation stage when they’re thinking Papa John’s, that famous pancake place, Five Guys, or a pizza we can make at home. Customer Journeys get a bad rap because many marketers don’t know how to use them. They’re usually found in a pretty slide deck, gathering dust. My CJ maps don’t fit in a deck. They’re not neat and pristinely designed. They’re on paper or on a white board. They dissect the prioritized communities into varying groups based on any number of psychographic factors. Here’s one way to learn more about your customers and map their journeys without dropping a ton of cash on a segmentation study: For example, think about a fancy designer handbag brand. Everyone from twenty-five-year-olds to women in their fifties buys the brand. They’re unified by an expression of status or an affection for a particular look, but their behaviors and demographics are at either end of the spectrum. We first identify what unifies the community as our point of entry, and then use that to get specific about how that applies to each group of customers. How we message a twenty-five-year-old who doesn’t have as much discretionary income and will save up to buy the bag differs from how we message a woman in her fifties who might buy across the category because she can afford it. These are generalizations, for sure, but I’m doing this so you get the idea. Regardless of who they are, every customer goes through a trigger moment (they need/want something for a reason), which has them: asking around/searching for options, evaluating those options, choosing one of those options, experiencing the aftermath of that option, i.e., the post-purchase and between-points-of-purchase experience. People call this the funnel, which is reductive and linear. A customer’s experience buying a product or service is more of a complex, looping journey because we don’t tend to buy in a straight line. We’ll veer off, do some more research, change our minds, cancel purchases, return products, end up hating the brand because we find out they treat their employees like garbage, etc. Credit: BeInTheKnow.com // This is one of the best CJ maps I’ve seen in a while.
How our senses are activated (what we’re thinking, feeling, doing, seeing, and hearing) may differ at each stage depending on the segments you’re targeting (with overlapping similarities, naturally), and that’s when our storytelling gets laser-focused. Before you throw up a puppy post on social media, first ask yourself — did I proactively connect and communicate with my customers each step of the way? Did I educate them? Provide value, utility, or reciprocity? Did I create content that speaks to their wants, needs, goals through their buyer journey? What they’re thinking, feeling, and doing? That’s the foundation you have to create before you can get friendly. You earn a customer’s trust by showing them you’re the solution and you value the totality of their experience. Once you’ve got that down, you can firehose all the puppies you want. Be deliberate and intentional in the stories you tell, and make sure they’re rooted in the communities that are buying from you. Do the legwork and research. Listen to what your customers say — the good, bad, and violently ugly. Create stories that connect with them, cultivate a relationship with them, and convince them you’re THE ONE.
https://medium.com/the-anatomy-of-marketing/how-to-be-a-cia-agent-when-it-comes-to-your-customers-a2fab5de260c
['Felicia C. Sullivan']
2020-12-27 18:43:20.444000+00:00
['Marketing', 'Content Marketing', 'Customer Experience', 'Business', 'Startup Lessons']
23 Examples of Effective Headlines
Resources 23 Examples of Effective Headlines Plus some quick tips and a note on coining terms Illustration: Kawandeep Virdee If you want to draw readers to a story, you need to make them want to choose it from a sea of other options. A strong headline is your best shot to do that. You want your headline to entice the reader and clearly communicate what the story is about — all without sounding too cookie cutter and without veering into clickbait. That can be a fine line to walk, but it’s important to find the balance. Below, I’ve gathered some examples of strong headlines. I’ve tried to break them out into categories for some kind of framework, but it’s not an exact science here — the main thing they all have in common is that they are clear, direct, assertive, and focused on what is most interesting. Analysis Best when you’re giving a fresh lens to a current trend, moment, or interest. This construction often takes the form of what/why/how because it’s about offering an explanation. Note: The article on the Black Death came out during the coronavirus pandemic when the parallels to previous pandemics were high interest and the connection was already on readers’ minds. Because the connection was known, we didn’t need to spell it out in the headline. But if you’re looking back at history and the relevance is less obvious, you might need to make that connection for the reader. Just make sure they can see why your story is relevant to them now. Bold declaration Best when you have a strong, possibly controversial thesis. The key here is to go all the way and make the bold claim without hedging, as long as your article backs up that claim. Descriptive teaser Best when you’re diving into a fascinating, untold, general interest story. Focus on the most interesting core of the story, and don’t get bogged down in details. This can also be a personal story, and in those cases, focus on what makes your story noteworthy or different. Note: ‘The Absurd Story Behind China’s Biggest Bank Robbery’ was originally titled ‘Jackpot’ and still has that title onsite. For broader online distribution, that wasn’t clear enough for people to understand what the story was about, so we changed the display hed to tease the story itself. Instruction Best when you are giving direct advice or information. These are popular but common, so giving your headline a little voice (without sacrificing clarity) can help differentiate your story. These often use a “how to” or list construction since that’s an easy way to clearly show what people will learn. Quote Best when there’s a short, clear quote that hits right at the emotional core of the article and tells you exactly what it’s about. It’s rare that you get the perfect quote, so use sparingly, but it’s powerful when you do. A note on coining terms If you’re identifying a feeling or phenomenon that hasn’t been identified before, then sometimes coining a term can help make it stickier and more understandable. You have to be careful putting an unfamiliar term in the headline because it can be confusing, but it works in cases where the term itself is immediately understandable and the headline includes enough context to make it clear. For example: Some quick tips If you’re unsure if the headline is clear, share only the headline with someone who hasn’t read the story and ask what they think it’s about. Look at what people are saying when they share the article if you’re not sure you’ve captured what your audience thinks is most interesting. 
This headline capitalization tool is an easy way to make sure you’re using proper headline caps. Whenever an editor gets stuck on a headline, I pause the brainstorming process and ask them to just describe to me, as themselves: What is this story about? Coming back to that question and answering it aloud is especially helpful if the headline is unclear or jargony, or if it’s bloated with too many details. You wrote the story about something, and you had a reason for writing it, so those are your anchors if you’re ever unsure of the headline.
https://medium.com/creators-hub/23-examples-of-effective-headlines-2e7f753476f1
['Nadia Rawls']
2020-12-01 18:22:54.581000+00:00
['Resources', 'Creativity', 'Headlines', 'Writing Tips']
First Avenue Hits New Highs with ToneDen
“All of the live music from the movie was filmed here,” says Ashley Ryan, Marketing Director at First Avenue Productions. “[Prince] was known for saying, ‘I like Hollywood. I just like Minneapolis a little bit better.’” To Ryan, “The history of music in this town and in this club are completely intertwined. You don’t come from here and play music and not play at our venue.” Ashley Ryan, Marketing Director for First Avenue Productions That venue contains the Mainroom and the 7th Street Entry, respectively 1550- and 250-cap rooms. Bigger rooms are on the horizon. “We own The Fine Line,” Ryan says, “the Turf Club in Saint Paul, we just acquired the Fitzgerald Theater, and we operate the Palace Theater in Saint Paul with Jam Productions.” In the works? A 10,000-cap waterfront amphitheater that First Avenue Productions is collaborating on with the city of Minneapolis. “That’s probably a few years down the line.” When you’re investing in growth, you also need to invest in tools. That’s why First Avenue relies on ToneDen. “We looked at [ToneDen] like an investment — not a team member or person, but a product that will alleviate workload from the team we currently have. I didn’t have to contract out for an ad buyer. I just now have a tool that works a lot better and is more efficient.” “Your Downtown Danceteria” Home of hot dish and Hüsker Dü, the Twin Cities were once “the record-distribution capital of the U.S., handling roughly one-third of the nation’s vinyl and cassette trade.” In the 1970s, though, when First Avenue opened, the main thing being distributed was disco. “The club still has this in our logo, Downtown Danceteria,” Ryan says. “It was a Danceteria in the 70s. Then it turned to rock and roll in the 80s. REM played here. The Replacements and Babes in Toyland are from here. All these bands that were like blowing up and playing in the Entry and then later played in the Mainroom went on to have these huge storied careers.” This legacy is an invaluable part of First Avenue’s brand, though Ryan admits that “from a marketing point of view,” it’s a little “tricky. We want to embrace the past. We have a great history of awesome bands growing up in this room and we want to remember that. But we always want to look at the future, too, and not just sit in a time capsule.” The Stars at First Avenue Filling a Void For the past four years, Ryan has managed a three-person team that oversees marketing for First Avenue’s venues. Together, they market 1,100 shows a year. “We do a lot of digital advertising. A lot of ads on radio still. We do a little bit of print. We do street promo such as posters, flyers, and manage a street team. And we are generally emailing with agents and managers to talk about the best strategy to sell the most tickets to their shows.” Now that the team uses ToneDen for their digital advertising, the tools they previously used seem “so much clunkier.” With faster setup and an intuitive audience builder, ToneDen “does a better job knowing the interest audience that I’m looking for and building a better set of recommendations to go along with stuff I’m already inputting — i.e., the Spotify Interest Audience Wizard. I’m obsessed with it. This person likes Khalid, we’ll type it in, and boom, 15 different options appear!” For the First Avenue team, ToneDen fills a void. Epik High and the Futuristic Returns Seeing the tool in action offers indisputable proof. Ryan used ToneDen to run an on-sale campaign for Seoul-based hip-hop group Epik High’s show at First Avenue.
Spending just $200, Ryan got her Facebook and Instagram ad in front of 15,317 fans. Of the 1,300 who clicked on the ad, 168 bought tickets, and First Avenue saw $32,840 in purchases — a 164x return on ad spend. “Any Venue Would Benefit from ToneDen” For Ryan, a Minnesota native, seeing First Avenue continue to thrive feels personal. “Those of us who are from here,” she says, “First Avenue is what you did growing up.” With her contributions, the nightclub continues to be a springboard for a whole new generation of artists, like Lizzo. “This is a music town,” Ryan tells us. “It’s just really cold so a lot of people don’t move here. There’s not a recording industry here, so it’s a little bit different than Nashville, but there’s a live music scene. It’s comparable to Austin or Seattle.” If ToneDen has been an asset to a club like First Avenue, it stands to reason that it could help other clubs, too. “Any venue would benefit from ToneDen. Smaller rooms with not as much staff too, I think would really benefit from it.” Rooms of all sizes, though, can use the clarity of reporting that ToneDen offers. “Anyone who works in new media,” Ryan admits, “would be foolish to not take any tool they could to save themselves time and to make sure that if something’s performing poorly money is pulled out of it and if something’s performing well, more money into it.”
https://medium.com/toneden/first-avenue-hits-new-highs-with-toneden-6a2699368ce5
['Joanna Novak']
2019-04-09 22:21:25.724000+00:00
['Music']
Making medical devices secure
Barnaby Jack was a world-famous hacker and security expert who is usually remembered for his ability to make ATM machines dispense cash. He also discovered flaws in insulin pumps and pacemakers that made it possible for criminals to kill a man or woman from nine metres away. A few years earlier, in 2007, the former US Vice President Dick Cheney secretly ordered his doctors to remove his implanted heart defibrillator and replace it with one that had no wireless capability so that it couldn’t be hacked by terrorists. The development of connected portable medical devices, wearables and apps, and implants is booming, thanks to the advent of the Internet of Things (IoT), connected technology and advances in artificial intelligence (AI). The global smart medical devices market is expected to reach USD 24,46 billion by 2025, according to a report by Grand View Research. Manufacturers must ensure these devices remain secure from cyber threats for user safety, while maintaining the privacy of all the personal data they gather, store and share with other healthcare services and providers. The role of standards Since 1968, IEC has been developing international standards for safety and performance of electrical equipment used in medical practice. The IEC 60601 series covers a wide spectrum of devices, systems and domains. The standards are developed by experts from the medical professions, industry, healthcare establishments, the information technology and software worlds and regulatory bodies. Michael Appel, certified anaesthesiologist and Chief Patient Safety Officer for Northeast Georgia Health System, leads IEC work in this area and discusses the evolving challenges of the medical industry, which must follow a growing number of regulations for safety and security aspects of medical equipment and systems. “Cyber threats and personal data privacy are the most essential questions that need to be answered. In the US, very strong privacy laws, and laws like the GDPR in the EU, could hinder the collection of such large amounts of data. This will have to be overcome, and the other big question which needs answering is: who owns the data gathered by these devices?” From transport and accommodation, to storage and distribution of goods, technology companies are changing the way diverse industries operate, thanks to innovative software platforms, which offer new ways of doing business. “If we’re not more nimble, new players will enter the industry, disrupt it, and do what is demanded by the market. Already there is talk of a shake up in the entire medical device and healthcare delivery world by entities not classically considered “healthcare companies”. These big tech companies will figure out a way to use the data within what is traditionally considered the realm of healthcare, so unless we acknowledge that there is a revolution unfolding before our very eyes and adapt to it, it will happen anyway”. Evolving global demographics World demographics are changing. By 2050, people aged 60 are expected to number nearly 2,1 billion, worldwide, and those aged 60 or over to outnumber children under 10 by 2030, according to a report by the United Nations. Aging populations, decreasing fertility rates, increased life expectancy and a growing prevalence of chronic diseases, represent major challenges for governments, who must implement policies to address the needs of older people, including housing, employment, social protection and healthcare. 
Medtech is a vital part of the healthcare solution Medical devices play an increasingly important role in alleviating over-stressed healthcare services, by reducing the number of doctor visits and saving costs. For example, patients can monitor their vitals in real time and send this information to their healthcare providers, who decide if treatment is necessary. They also improve quality of life, from hearing aids, apps for the visually impaired and pacemakers, to orthopaedic implants and continuous glucose monitoring devices, which check glucose readings in real time and enhance the treatment of certain forms of diabetes. Other rapidly evolving technologies, such as AI algorithms, help doctors improve diagnostics and treatments and could be used in intensive care units to run fully autonomous systems which monitor critical patients, thereby replacing teams of specialists. Ensuring data privacy, security and safety When it comes to data, it doesn’t get more personal than medical. In our connected world, if the security of smart medical devices is compromised, it could be fatal for users. Against this backdrop, the IEC 80001 series of publications, developed for the application of risk management to IT networks incorporating medical devices, also offers guidance for the disclosure and communication of medical device security needs, risks and controls. The standards can be used by medical device manufacturers and also support healthcare delivery organizations with the risk management of IT networks with one or more wireless links. Georg Heidenreich coordinates Technical Regulations and Standardization at Siemens Healthcare and leads the IEC/ISO group working specifically on the safety, security and effectiveness of health software. The group emphasizes the specific roles and obligations of the involved stakeholders in relation to regulated medical devices, health software and the systems that incorporate them. “Publications produced will embrace new solutions, but be independent of specific technologies. Some areas being covered are new ‘fog’ and ‘cloud’ architectures and applications in the field of digital health, artificial intelligence and data analytics. We also expect to carry out another strategic analysis of the requirements of new technologies — notably AI and analytics by the end of the first quarter in 2019.” Creating trust through testing and certification People will be reluctant to use medical technology unless they know it is safe and their personal medical data and records remain private. One way to address this is through testing and certification. IECEE, the IEC System of Conformity Assessment Schemes for Electrotechnical Equipment and Components, ensures that electrical and electronic devices and equipment meet expectations in terms of performance, safety, reliability and other criteria by testing and certifying them against international standards developed by IEC. The System also covers risks to patients, those who operate the equipment — doctors, nurses and technicians, for instance — and maintenance personnel. As the number of smart medical devices continues to grow, both the IEC Conformity Assessment Board (CAB) and IECEE have broadened their scope to include activities related to cyber security for the medical industry, to ensure user safety from potential cyber threats and to protect data privacy.
https://medium.com/e-tech/making-medical-devices-secure-2705021bb56
[]
2019-01-24 13:25:30.022000+00:00
['Health', 'Safety', 'Data Privacy', 'Medical Devices', 'Cybersecurity']
How to Become a Data Scientist in 2020 — Top Skills, Education, and Experience
How to Become a Data Scientist in 2020: Introduction Data science has been one of the trendiest topics in the last couple of years. But what does it take to become a data scientist in 2020? In a nutshell, here are the latest research results that we have found: The typical data scientist is a male, who speaks at least one foreign language and has 8.5 years of work experience behind their back. They are likely to hold a Master’s degree or higher and most definitely use Python and/or R in their daily work. But such generalizations are rarely helpful. Not only that, they could be misleading and sometimes discouraging. That is why we have sliced and diced the data to reveal a number of different insights: Please use the list above to navigate through the article or simply read the whole piece. To give you the best perspective possible as we go through the different takeaways, we will also make comparisons to previous years’ surveys. If you first want to get acquainted with what it took to become a data scientist in 2018 and 2019, pleases follow these links: 2019 Data Scientist Profile 2018 Data Scientist Profile How we collected and analyzed our data: The data for this report is based on the publicly available information in the LinkedIn profiles of 1,001 professionals, currently employed as data scientists. The sample includes junior, experts, and senior data scientists. To ensure comparability with previous years and limited bias, we collected our data according to several conditions. Location 40% of the data comprises data scientists currently employed in the United States; 30% are data scientists in the UK; 15% are currently in India; 15% come from a collection of various other countries (‘Other’). Company size 50% of the sample are currently employed at a Fortune 500 company; the remaining 50% work in a non-ranked company. These quotas were introduced in light of preliminary research into the most popular countries for data science, as well as the employment patterns in the industry. Alright, without further ado… How to Become a Data Scientist in 2020: Overview For the third year in a row, the verdict is in. There are twice as many male data scientists as there are female. This trend, while unfortunate, is not really surprising as the field of data science follows the general trend in the tech industry. In terms of languages spoken, a data scientist usually speaks two — English and one other (often their mother tongue). When it comes to professional experience, we find that you can’t really become a data scientist overnight. It takes 8.5 years of overall work experience. Interestingly, this is an increase of half a year compared to the data in 2019. Another interesting observation is that data scientists have held their prestigious title for an average of 3.5 years. Last year, that metric stood at 2.3 years. While our study is not based on panel data, we can make the claim that once you become a data scientist, you are likely to stay one. Regarding programming languages, in 2018, 50% of data scientists were using Python or R. This number increased to 73% in 2019 to completely break all records this year. In 2020, 90% of data scientists use Python or R. And no, you are not the only one who finds it amazing. Such a high adoption rate in such a short time period is an absolutely stunning feat for any tool in any industry ever. Finally, your level of education will most definitely make a difference when trying to become a data scientist. 
About 80% of the cohort holds at least a Master’s degree. That amounts to a 6 percentage point increase from last year. Previous experience Each year, we look at the previous work experience of a data scientist. This part of the results proved to be the most useful for aspiring professionals, figuring out the common career paths to becoming a data scientist. To reiterate, in 2020, data scientists had 3.5 years with the title and 8.5 years in the workforce on average. But… what did the data scientist do before becoming a data scientist? According to our sample, they… were already a data scientist! Or at least half of the cohort (52.4%). If we compare this value with previous years, there were 35.6% such cases in 2018 and 42% in 2019. So, year after year, the position becomes more and more exclusive — an observation we could infer from their average work experience. This insight suggests that there aren’t too many career options after being a data scientist. In other words — once a data scientist, always a data scientist. At least that’s the situation in 2020. Regarding other relevant career paths, starting out as a data analyst is still the preferable path (11% overall), followed by academia (8.2%) and… Data science intern (7.0%). This breakdown is one of the most consistent segments of our yearly research since 2018. Hence, you can bet your data scientist career on it. Education Education is one of the 3 major sections of most resumes and that’s not likely to change. Educational background serves as a signal to your future employers, especially when you don’t have too much experience. So, what education gives the best signal if you want to become a data scientist? According to our data, the typical data scientist in 2020 holds either a Master’s degree (56%), a Bachelor (13%), or a Ph.D. (27%) as their highest academic qualification. These statistics might not seem counter-intuitive at first. However, there is actually a considerable drop in “Bachelor degree only” data scientists compared to 2019 (19%) and 2018 (15%). Data science requires an advanced level of expertise. And that’s typically acquired through graduate or postgraduate forms of traditional education, or through independent specialized study ( see Certificates below). But while specialization is important, too much specialization, such as a Ph.D. is not a prerequisite to breaking into data science. In fact, the percentage of PhD-holders has been unremarkably consistent over the years, constituting approximately 27% of our sample. The Master’s degree, however, is solidifying its position as the golden standard of academic achievement necessary to become a data scientist in 2020. We are observing a 20% increase in the professionals who hold a Master’s degree compared to the 2019 cohort (46% in 2019 vs 56% in 2020). A Master’s degree is a great way for a Bachelor to specialize in a given field. Generally, there are two types of Master’s degree choices: increasing your depth (dig deeper into a topic) (dig deeper into a topic) or increasing your breadth (change your focus to diversify your skillset). One assumption is that people with Economics, Computer Science or other quant Bachelor’s degree have pursued a trendy data science Master’s. This is further corroborated in our section on fields of study. Arguably, there is another factor at play here as well, and this is the increased popularity of the field. Industry reports like Glassdoor’s 50 Best Jobs consistently named Data Science the winner in 2016, 2017, 2018, and 2019. 
Google searches for data science have at least quadrupled over the last five years as well. This certainly plays to the increased interest in data science as a career, and as a result, to a more selective hiring process in certain regions ( see Country and years of experience below). Finally, although data science is becoming a more competitive field, more than 10% of data scientists successfully penetrate the field with only a Bachelor’s degree (13%). It’s true the number is lower than what we’ve observed in the last two years (19% in 2019 and 15% in 2018). Nevertheless, data science remains accessible to Bachelor holders. In fact, if we look at country-specific data, a more nuanced picture emerges. Country and Degree As we stated in the Methodology section in the beginning of this article, we gathered our data according to location quotas; data scientists in the USA comprise 40% of our data, data scientists in the UK contribute to 30% of our observations; India and the rest of the world each comprise 15% of the 2020 cohort. That said, the increase in data scientists holding a Master’s degree is widely observed in both the UK and the States (54% and 58%, respectively, compared to 44% in 2019). In India, the number of data scientists holding a Master’s has also grown by 16% in 2020, compared to previous years (57% in 2020 vs 49% in 2019 and 2018). Interestingly, this doesn’t correspond to a comparable decrease in data scientists who have an undergraduate degree in India (32% in 2020, compared to 34% in 2019), which is still the highest percentage of Bachelor-holders across our cohort. Both Ph.D. graduates and professionals holding degrees from our “Other” cluster are also seen less frequently in the current research than they were in previous years. As we mentioned above, it is plausible that a specialization with a “trendy” data science Master’s is becoming the preferred career path of many people in the field. It’s also worth noting that you don’t need a Ph.D. to become a data scientist in India. In fact, postgraduates with a Ph.D. make up only 3% of our data scientist sample in India; this is both 30% less than the US data, and the least represented cohort in India. So, these data corroborate two tentative conclusions. Academically, a Master’s degree is establishing itself as the most popular degree for becoming a data scientist across the globe. And, if you are holding only a Bachelor’s degree, India provides the best career opportunities for starting a career in data science. Area of studies What is the best degree to become a data scientist? If you have followed the industry (or at least our research) over the past years, you would be inclined to respond with ‘Computer Science’ or ‘Statistics and Mathematics’. After all, data science is the lovechild of all these disciplines. But you would be mistaken. In 2020, the best degree to become a data scientist is… Data Science and Analysis! At long last — ‘Data Science and Analysis’ graduates have made their way to the top of our research! Before we continue with this analysis, a note on methodology. 
Because there is a massive number of uniquely nuanced — and correspondingly named — degrees in the academic world, we grouped our data into seven clusters of areas of academic study: Computer science, which does not include machine learning; Data science and analysis, which includes machine learning; Statistics and mathematics, which includes statistics- and mathematics-centered degrees; Engineering; Natural sciences, which includes physics, chemistry, and biology; Economics and social sciences, which includes studies pertaining to economics, finance, business, politics, psychology, philosophy, history, and marketing and management; and Other, which includes all other degrees the data scientists in our sample pursued. So, Data science and analysis is finally the degree that’s most likely to get you into data science. Awesome! Compared to both 2019 (12%) and 2018 (13%), we’re seeing a significant increase in the professionals who’ve graduated with a data science specialized degree in 2020 (21%). Given our previous observations (see Education above), it doesn’t come as a surprise that the majority of these degrees are at Master’s level (85% of the Data science and analysis cluster). Therefore, it seems like data science is a preferred specialization for any quant Bachelor. This finding suggests traditional universities are beginning to respond to the demand for data scientists. And, in line with that, offer curriculums that develop the data scientist skillset. Another marked trend is that the Data Science and Analysis degree is becoming the affirmed gateway degree into data science, especially if you’ve previously graduated from a different field. Consider, for example, the top 3 degrees obtained by data scientists in 2019 and 2020. In 2019: Computer Science (22%), Economics and social sciences (21%), and Statistics and Mathematics (16%). In 2020: Data Science and Analysis (21%), Computer Science (18%), and Statistics and Mathematics (16%). Data Science and Analysis has obviously taken the lead from Computer Science. What’s more, its appearance has completely removed Economics and social sciences from the top 3 ranking, even though this specialization was a close second in 2019. Graduates from the Engineering, Natural Sciences, and Other fields each constitute approximately 11% of our data. And, we can say this hasn’t changed much compared to previous years. Interestingly, the women in our sample most likely earned a Statistics and Mathematics related degree (24% of the female cohort). In comparison, men most likely earned a degree in Data Science and Analysis (22%), with Computer Science (19%) being a close second. In general, data science is considerably well-balanced in terms of the best degrees to enter the field. You can become a data scientist if you have a quant or programming background… Or if you further specialize in Data Science and Analysis. And the way to do that is either through a traditional Master’s degree or by completing a bootcamp or a specialized online training program.
How to Become a Data Scientist in 2020: Online courses and Degree With data scientists coming from so many different backgrounds, we may wonder if their college degrees have proved sufficient for their work. Even with no research, the answer is — no way. No single degree can prepare a person for a real job in data science. Actually, data scientists are closer to ‘nerds’ than to ‘rock stars’ — it’s less about talent and more about hard work. Therefore, you can bet that they take their time to self-prepare. In our research, we have used the closest LinkedIn proxy available — certificates from online courses. Our data suggest that 41% of the data scientists have included an online course, which is practically the same as the past two years (40% in 2018 and 43% in 2019). Degree and Direct hires Can you become a data scientist right after graduation? While not unheard of, the data suggest that it is unlikely. Less than 1% of our cohort succeeded in becoming a data scientist without previous experience. And they either had a Ph.D. or a Master’s (80% of these men, and 100% of the women). A quarter of these direct hires also reported having received an online certification. Something we found interesting is that the direct hires in our cohort almost completely mirror the profile of the typical data scientist in 2020 (see above). Note that not all people post all their certificates, so these results are actually understatements. That said, let’s discuss what kind of experience you need to become a data scientist, if you’re not in that lucky 1%. Years of experience The typical data scientists in 2020 has been working as a data scientist for at least a year already (70% of our cohort), with the highest number of data scientists being in their 3–5 years bracket (28%) followed by data scientists in the 2–3 years bracket (24%), and in their second year on the job (19%). Data Scientists in their first year on the job constituted 13% of our 2020 data. These are all interesting statistics, especially when considered in relation to 2019 and 2018 data. More specifically, we’re observing a nearly 50% decrease in the number of data scientists who are just starting out their careers in 2020 (13%), compared to data scientists starting out in 2019 and 2018 (25%). Given the increase in average experience as a data scientist, we can conclude that these professionals stay within the field, making it harder for junior people to enter. The second interesting trend here is the increase in number of data scientists who are in their 3–5 and 2–3 years on the job, compared to the past two years. In 2018, 25% of data scientists had more than 3 years of experience, whereas in 2020, this number is reaching 44%, constituting a 76% increase in this cohort. This indicates that data science experts and senior data scientists are staying in the field, rather than moving to some other industry. Nonetheless, we mentioned that there are some important cross-country differences that invite further exploration. So, let’s consider these in more detail in the next section! Country and years of experience A cross-country analysis of the on-the-job experience of the data scientist reveals a curious trend. In terms of seniority, the data scientists in the US cohort were certainly the most experienced in our data. More than 50% of the cohort were at least on their third year working as data scientists, with 20% on the job for more than 5 years. Тhe US is the least friendly environment for career starters in data science. 
Only 8% of our US cohort was in their first year as data scientists, and 15% — in their second. According to our data, the data science field in the UK is easier to penetrate. 11% of the UK sample were starting out their career as data scientists, whereas 20% were already in their second year on the job. Nonetheless, the largest represented group in the cohort were professionals in their third or fourth years on the job (29%). If you’re looking for the country that offers the most opportunities to career starters, the data suggests that this is India. More than 50% of our sample consisted of data scientists within their first or second year on the job. This is great news for someone who is just getting started with data science and wants to nurture their expertise into a career. Of course, this data doesn’t come as a surprise, with some of the world’s largest companies opening offices in Bangalore and Hyderabad, including Amazon, Walmart, Oracle, IBM, and P&G. The rest of the world, or our “Other” country cluster shows a more balanced distribution of data science professionals regarding years of experience. A little less than 20% of the cohort is in their first or second year as data scientists, a little over 20% are in their third or fourth, and a quarter were in the 3–5 years bracket. That said, it’s worth mentioning that the largest players in our “Other” country cluster were Switzerland, the Netherlands, and Germany. Therefore, we can tentatively say that data science is becoming a more prominent field in Western Europe, and since the field is not yet flooded with data science talent, both junior and mid- to senior professionals are in demand. Programming skills of a data scientist When looking for programming languages proficiency, we had to turn to the LinkedIn skills ‘currency’ — endorsements. While an imperfect source of information, they are a good proxy of what a person is good at. I would not be endorsed by my colleagues for Power BI, if I were mainly training ML algorithms, would I? With this clarification out of the way, let’s dig into the data. Python dethroned R a year or so ago, so we won’t comment too much on this rivalry. Moreover, knowing that 90% of the data scientists use either Python, or R, we could completely close the topic here and move on. But that would be a bit ignorant, especially towards SQL! 74% of the cohort “speaks” Python, 56% know R, and 51% use SQL. What’s especially noteworthy here is that SQL has grown in popularity by 40% since 2019 (36%), making it a close third after R. Now, there are various factors that could contribute to this number. One possible explanation is that companies don’t always understand the data scientist position well. This leads them to hire data scientists and overload them with data engineering tasks. For instance, the implementation of GDPR and the massive reorganization of data sources in data warehouses placed some data scientists in the unfavorable position to lead or consult on such projects. Inevitably, SQL had to be added to their toolbelt for the sake of ‘getting the job done’. This phenomenon is getting more and more attention not only in the context of SQL, but also Big data structures related to database management. As a result, data scientists have acquired new skills at the expense of writing fewer machine learning algorithms. Another important point in favor of SQL is that BI tools such as Tableau and Power BI are heavily dependent on it, thus increasing its adoption. 
And that’s why SQL is going further up, even catching up with R. The programming languages picture is completed by MATLAB (20.9%), Java (16.5%), C/C++ (15.0%), and SAS (10.8%). Once again, LaTeX (8.3%) is also in the top 10. Why? Well, academia does not harm your chances of becoming a data scientist, as we see from the background of our cohort. F500 and coding language We can’t stress enough how important Python and R are for the data science field in 2020. However, their strengths are their flaws when it comes to big companies. Python and R are both open-source tools that can be buggy or poorly documented, unlike well-established languages such as MATLAB or C. And the data does indeed confirm this claim. Take Python for one — 70% of F500 data scientists employ Python against 77% of non-F500 data scientists. This sounds like unpleasant news, but in fact, it isn’t. Both Python and R have been closing the gap over the years. It seems like F500 companies are rethinking their organizations and are much more inclusive of the new technologies as compared to the data in 2018. Apart from the different rate of employment of Python, the rest of the breakdown by coding languages remains uninterestingly consistent. Country and coding language In the past, your country of employment would dictate many of your life decisions — what language to learn, what rules to abide by, and what customs to respect or adopt. But does this apply to coding languages? Since 2018, we have looked into the USA, the UK, India, and the ‘Rest of the world’. Our findings used to show that R was ‘winning the people’ over Python in the USA and India. On the other hand, the UK and the ‘Rest of the world’ were already slowly phasing out R in favor of Python. Well, the USA and India are no longer ‘lagging behind’ when it comes to Python adoption. In other words, Python is now king in all countries. Hence, your best bet at becoming a data scientist is to bend the knee and join the Pythonistas in their search for data-driven truth. For the record, the breakdown by coding language is consistent across countries, with R and Java taking the biggest hit from the Python supremacy in 2020. SQL remains unaffected and even gains a bit of traction as compared to previous years. How to Become a Data Scientist in 2020: Conclusion For a third consecutive year, the 365 Data Science research into 1,001 current data scientists’ LinkedIn profiles reveals the profile of the typical data scientist. And what a year it is! This research reveals that the field is ever-evolving and adapting both to the needs of businesses and to its growing popularity in academia and beyond. Universities are catching up with the demand, while the Master’s is establishing itself as the golden standard degree. Python continues to eat away at R, but SQL is on the rise, too! India has earned the spot of best country for starting a career as a data scientist by demonstrating higher demand for junior data scientists than the US and the UK. It is also the place to be if you only have a Bachelor’s degree. Of course, we are tremendously interested in how these trends will develop in the following 2–5 years. But in the meantime, let us know if you think we’ve missed anything of interest! We are on a mission to create an informative and ultimately helpful account of the data scientist job and how it changes with time. After all, making the best career decision for yourself means being informed! So, stay curious, grow your programming skill set, and good luck in your data science career!
Links to other studies: 2019 Data Scientist Profile; 2018 Data Scientist Profile.
https://towardsdatascience.com/how-to-become-a-data-scientist-in-2020-top-skills-education-and-experience-afa306d3af02
['Iliya Valchanov']
2020-03-10 20:25:23.032000+00:00
['Careers', 'Business Intelligence', 'Data Science', 'Data Visualization', 'Machine Learning']
Exploring What Lies Behind the Stories The Economist Tells with Data Visualization
CEO / Founder at Exploratory (https://exploratory.io/). Having fun analyzing interesting data and learning something new every day.
https://medium.com/%E6%9C%AA%E6%9D%A5%E3%81%AE%E4%BB%95%E4%BA%8B/%E3%83%87%E3%83%BC%E3%82%BF%E3%81%AE%E5%8F%AF%E8%A6%96%E5%8C%96%E3%81%8C%E3%81%86%E3%81%BE%E3%81%8F%E4%BC%9D%E3%81%88%E3%82%8B%E3%82%B9%E3%83%88%E3%83%BC%E3%83%AA%E3%83%BC%E3%81%AE%E8%A3%8F%E5%81%B4-56bdababda3b
['Kan Nishida']
2018-09-25 22:44:02.232000+00:00
['Storytelling', 'Poverty', 'Data Visualization']
AWS Serverless Application Lens — A Summary
Once the script is available, you should outline things like the shortest route to completion (as shown above), alternate paths and decision trees, the account linking process, etc. The general architecture involves ASK (Alexa Skills Kit) and various serverless components like Lambda and DynamoDB. An Alexa Custom Skill is used to create a custom interaction model using Lambda, which interacts with DynamoDB for persisting / reading user data like state and session, whereas an Alexa Smart Home Skill allows you to control devices such as lights, thermostats, smart TVs, etc. using the Smart Home API. Amazon S3 stores your skill’s static assets, including images, content, and media. Its contents are securely served using CloudFront. Account Linking is needed when your skill must authenticate with another system. This action associates the Alexa user with a specific user in the other system. Deployment approaches The recommended approaches for deployment in a microservice architecture are all-at-once, blue/green, and canary/linear. The following table shows a comparison between these: All-at-once is low-effort and can be made with little impact in low-concurrency models, but it adds risk when it comes to rollback and usually causes downtime. An example scenario for this deployment model is a development environment where the user impact is minimal. Blue/Green Deployments require two environments, the blue (older version) and the green (the new version). Since API Gateway allows you to define what percentage of traffic is shifted to a particular environment, this style of deployment can be an effective technique. Canary Deployments don’t require two separate environments while still allowing you to shift a certain percentage of the traffic to a new version of the application. With Canary deployments in API Gateway, you can deploy a change to your backend endpoint (for example, Lambda) while still maintaining the same API Gateway HTTP endpoint for consumers. Important: Lambda allows you to publish one or more immutable versions of individual Lambda functions, such that previous versions cannot be changed. Each Lambda function version has a unique Amazon Resource Name (ARN), and new version changes are auditable as they are recorded in CloudTrail. As a best practice in production, customers should enable versioning to best leverage a reliable architecture. Well-Architected Framework The following section digresses from the whitepaper approach a bit by not showing the five pillars (Operational Excellence, Security, Reliability, Performance Efficiency and Cost Optimization) separately and how to achieve them using different services. The approach taken is rather to show how each service contributes to the different pillars. Compute Layer Lambda Reliability — In order to ensure reliability of the system, throttling is a necessary step, ensuring that downstream functionalities don’t get overburdened. Lambda invocations that exceed the concurrency limit set for an individual function will be throttled by the AWS Lambda service, and the result will vary depending on the event source — synchronous invocations return an HTTP 429 error, asynchronous invocations will be queued and retried, while stream-based event sources will retry up to their record expiration time. But why concurrency limits? Because the backend systems might have scaling or other limitations (for example, connection pools).
Further it also allows for critical path services to have higher priority, provides protection against DDoS attacks and allows for disabling a function by setting concurrency to zero. Note: For asynchronous processing, use Kinesis Data Streams to effectively control concurrency with a single shard as opposed to Lambda function concurrency control. This gives you the flexibility to increase the number of shards or the parallelization factor to increase concurrency of your Lambda function. Another important factor for reliability is failure management. Use Lambda Destinations to send contextual information about errors, stack traces, and retries into dedicated Dead Letter Queues (DLQ), such as SNS topics and SQS queues. When consuming from Kinesis or DynamoDB streams, use Lambda error handling controls, such as maximum record age, maximum retry attempts, DLQ on failure, and Bisect batch on function error, to build additional resiliency into your application. Performance Efficiency — Test different memory settings as CPU, network, and storage IOPS are allocated proportionally. Include the Lambda function in a VPC only when necessary. Set your function timeout a few seconds higher than the average execution to account for any transient issues in downstream services. This process is critical as too high a timeout may result in extra costs in case the code is malfunctioning! If your Lambda function accesses a resource in your VPC, launch the resource instance with the no-publicly-accessible option (avoid DNS resolution of public host names). Cost Optimization — As Lambda proportionally allocates CPU, network, and storage IOPS based on memory, the faster the execution, the cheaper and more value your function produces due to 100-ms billing incremental dimension. It uses CloudWatch Logs to store the output of the executions to identify and troubleshoot problems on executions as well as monitoring the serverless application. These will impact the cost in the CloudWatch Logs service in two dimensions: ingestion and storage. Thus it is important to set appropriate logging levels and remove unnecessary logging. Also use environment variables to control application logging level and sample logging in DEBUG mode. For long-running tasks where a lot of waiting is involved, use Step Functions instead of Lambda. The use of global variables to maintain connections to your data stores or other services and resources will increase performance and reduce execution time, which also reduces the cost. API Gateway Security — There are currently four mechanisms to authorize an API call within API Gateway: AWS IAM authorization, Amazon Cognito user pools, API Gateway Lambda authorizer and Resource policies. AWS IAM authorization is for consumers who currently are located within your AWS environment or have the means to retrieve AWS Identity and Access Management (IAM) temporary credentials to access your environment. Lambda authorizer is to be used when you already have an Identity Provider (IdP). If you don’t have an IdP, you can leverage Amazon Cognito user pools to either provide built-in user management or integrate with external identity providers, such as Facebook, Twitter, Google+, and Amazon. Amazon API Gateway resource policies are JSON policy documents that can be attached to an API to control whether a specified AWS Principal can invoke the API. This allows fine-grained control like allowing / denying certain IPs or VPCs etc. 
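To make the Lambda authorizer option more concrete, here is a minimal sketch of a TOKEN-type authorizer for an API Gateway REST API. It is illustrative only: the hard-coded token comparison stands in for real validation against your identity provider, and the handler name and principal value are placeholders.

```python
# Minimal sketch of a TOKEN-type Lambda authorizer for an API Gateway REST API.
# The bearer-token check below is a placeholder; a real authorizer would
# validate a JWT or look the token up in an identity provider.

def lambda_handler(event, context):
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "example-valid-token" else "Deny"

    # API Gateway expects a principal identifier plus an IAM-style policy
    # document that allows or denies execute-api:Invoke on the method ARN.
    return {
        "principalId": "example-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
        # Optional key/value pairs passed through to the backend integration.
        "context": {"tokenSource": "header"},
    }
```

The shape of the response, a principalId plus a policy allowing or denying execute-api:Invoke, is what API Gateway expects back from a Lambda authorizer; everything else in the sketch is an assumption for illustration.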
Important: Lambda authorizers and Cognito user pools can also be created and managed in a separate account and then re-used across multiple APIs managed by API Gateway. Reliability — Throttling should be enabled at the API level to enforce access patterns established by a service contract. Returning the appropriate HTTP status codes within your API (such as a 429 for throttling) helps consumers plan for throttled access by implementing back-off and retries accordingly. You can also issue API keys to consumers with usage plans, in addition to global throttling, to have more granular control. Note: API keys are NOT a security mechanism to authorize requests; rather, they provide you with additional metadata associated with the consumers and requests. Performance Efficiency — Use Edge endpoints for geographically dispersed customers. Use Regional endpoints for regional customers and when using other AWS services within the same Region. Decision Table for the right type of API Gateway Endpoint Also make sure that caching is enabled. Enable content encoding (for compressing the payload). Step Functions Reliability — Step Functions state machines increase the reliability of synchronous parts of your application that are transaction-based and require rollback, since they can implement the saga pattern. Cost Optimization — For long-running tasks, employ the wait state. For example, if we start an AWS Batch job and poll it every 30 seconds to see if it has finished, it is better to use a Step Functions state machine to implement a poll (GetJobStatus) + wait (Wait30Seconds) + decider (CheckJobStatus) loop. The pricing model for Step Functions is based on transitions between states and thus won’t incur cost for the wait. Data Layer DynamoDB Performance Efficiency — Use on-demand mode for unpredictable application traffic, otherwise provisioned mode for consistent traffic. DAX can improve read responses significantly, and Global and Local Secondary Indexes help prevent full table scan operations. AWS AppSync Performance Efficiency — Make sure that caching is enabled. Messaging and Streaming Layer Kinesis Performance Efficiency — Use enhanced fan-out for a dedicated input/output channel per consumer in multiple-consumer scenarios. Use an extended batch window for low-volume transactions with Lambda. SNS Cost Optimization — SNS can filter events based on message attributes and more efficiently deliver the message to the correct subscriber, thus avoiding unnecessary invocations. Monitoring and Deployment Layer CloudWatch Operational Excellence — Amazon CloudWatch provides automated cross-service and per-service dashboards to help you understand key metrics for the AWS services that you use. For custom metrics, use the Amazon CloudWatch Embedded Metric Format to log a batch of metrics that will be processed asynchronously by CloudWatch without impacting the performance of your serverless application.
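As a rough illustration of what an Embedded Metric Format record can look like, here is a small Python sketch. The namespace, dimension, and metric names are assumptions made up for the example; swap in whatever your application actually tracks.

```python
import json
import time

def log_emf_metric(order_value_usd: float, service: str = "checkout") -> None:
    """Print one Embedded Metric Format record to stdout; in Lambda, stdout is
    shipped to CloudWatch Logs, where EMF records are extracted into metrics
    asynchronously, with no synchronous call to the CloudWatch Metrics API."""
    record = {
        "_aws": {
            "Timestamp": int(time.time() * 1000),  # epoch milliseconds
            "CloudWatchMetrics": [
                {
                    "Namespace": "ExampleApp",      # illustrative namespace
                    "Dimensions": [["Service"]],
                    "Metrics": [{"Name": "OrderValue", "Unit": "None"}],
                }
            ],
        },
        "Service": service,             # dimension value
        "OrderValue": order_value_usd,  # metric value
    }
    print(json.dumps(record))

log_emf_metric(129.99)
```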
Some important types of metrics that should be considered: Business metrics — orders placed, debit/credit card operations, flights purchased, etc. Customer Experience metrics — perceived latency, time it takes to add an item to a basket or to check out, page load times, etc. System metrics — percentage of HTTP errors/successes, memory utilization, function duration/error/throttling, queue length, stream records length, integration latency, etc. Operational metrics — number of tickets (successful and unsuccessful resolutions, etc.), number of times people on-call were paged, availability, CI/CD pipeline stats (successful/failed deployments, feedback time, cycle and lead time, etc.). CloudWatch Alarms should be configured at both individual and aggregated levels. Standardize your application logging to emit operational information about transactions, correlation identifiers, request identifiers across components, and business outcomes. A sketch of a structured log entry using JSON as the output is shown at the end of this section. AWS X-Ray Operational Excellence — Active tracing with AWS X-Ray should be enabled to provide distributed tracing capabilities as well as to enable visual service maps for faster troubleshooting. X-Ray helps you identify performance degradation and quickly understand anomalies, including latency distributions. X-Ray also provides two powerful features that can improve the efficiency of identifying anomalies within applications: annotations and subsegments. Subsegments are helpful for understanding how application logic is constructed and what external dependencies it has to talk to. Annotations are key-value pairs with string, number, or Boolean values that are automatically indexed by AWS X-Ray.
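Pulling the logging and tracing guidance together, here is a hedged sketch of a Lambda handler that emits one structured JSON log line (the kind of output referred to above) and records an X-Ray subsegment with an indexed annotation. Field names such as correlation_id and the order_type annotation are illustrative assumptions, and the snippet assumes active tracing is enabled on the function.

```python
import json
import logging
import time

from aws_xray_sdk.core import xray_recorder  # requires active tracing on the function

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Structured JSON log line: one object per event, so CloudWatch Logs
    # Insights can filter on fields such as correlation_id or level.
    logger.info(json.dumps({
        "level": "INFO",
        "message": "order received",
        "correlation_id": event.get("correlation_id", "unknown"),
        "function": context.function_name,
        "timestamp": int(time.time() * 1000),
    }))

    # Subsegment plus annotation: the annotation is indexed by X-Ray, so traces
    # can later be filtered on it (for example, annotation.order_type = "digital").
    with xray_recorder.in_subsegment("process_order") as subsegment:
        subsegment.put_annotation("order_type", "digital")
        # ... business logic would go here ...

    return {"statusCode": 200}
```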
https://medium.com/swlh/aws-serverless-application-lens-a-summary-4f740c4f376d
['Amulya Rattan Bhatia']
2020-08-06 16:58:26.318000+00:00
['Api Gateway', 'AWS', 'Lambda', 'Serverless Architecture', 'Serverless']
Recoil: A New State Management Library Moving Beyond Redux and the Context API
Recoil: A New State Management Library Moving Beyond Redux and the Context API A full introduction to Recoil, an experimental state management library for React applications Recoil data-flow graph for state management — Photo by the author. Available since May 2015, Redux is a predictable state container for JavaScript applications. It is a single source of truth, its state is read-only, and changes are made with pure functions. Redux has a nice browser debugging extension for easy debugging. The drawback is the complex boilerplate to start with. It may not be compatible with React’s upcoming concurrent mode. Context API was introduced by React 16.3 along with React Hooks in May 2018. It provides a way to pass data through the React component tree without having to pass props down manually at every level. It is designed to share global data, such as the current authenticated user, theme, or preferred language. Context API and React Hooks provide a new approach to state management. Debugging is hard and there are performance issues when rendering multiple dynamic items. So should you go with Redux or Context API? After a couple of years of battle, with many articles announcing Redux is dead, not dead yet, etc., both are alive and adopted by many applications. There is no clear winner yet. The following is an NPM trend comparison. Both of them are still in heavy use: Now we have a newcomer. Recoil is a brand new experimental JavaScript state management library developed by Facebook. It has been available since May 2020.
https://medium.com/better-programming/recoil-a-new-state-management-library-moving-beyond-redux-and-the-context-api-63794c11b3a5
['Jennifer Fu']
2020-11-14 01:49:59.881000+00:00
['React', 'Programming', 'Reactjs', 'JavaScript', 'Recoil']
SQL Crash Course Ep 1: What Is SQL? How Can I Start Selecting Data?
SQL Crash Course Ep 1: What Is SQL? How Can I Start Selecting Data? Covering SQL best practices, SELECT, FROM, WHERE, AND/OR, GROUP BY, ORDER BY, and more… Overview We’re going to learn what SQL is, what we can use it for, and then we’re going to start writing some basic queries for selecting and filtering data from a database. Note: We won’t actually be working with a SQL database today, we will be using Python to create a dataframe and then will use a library called pandasql to query it. If you are unfamiliar with Python, I still suggest you following the tutorial and paying attention to the SQL commands and their outputs. What is SQL? Data is everywhere, and more of it is being created every minute. We used to store this data on paper in giant filing cabinets but now we store it digitally in things called databases. Now, how do we easily pull the data we want from this digital database? That’s what SQL is for! SQL (Structured Query Language) is a language we use to communicate with our databases. If you want to pull, edit, or add data to a database, you can use SQL to do that. Databases can be created in a variety of architectures (giant data lakes, simple 1-table schemas ), written in a variety of languages (C++, Java), but SQL is the common ground that lets anyone access this data using universal syntax. We know we can edit and add data to a database using SQL, but SQL becomes most useful when we want to pull existing data because of its ability to filter through massive records and return only the exact entries we’re looking for. There are an almost endless number of SQL methods, each with their own unique syntax and required location within the command. More on this next. Syntax and best practices Using a SQL command to pull data from a database is called a query, and in SQL there are a few common syntax rules that may cause your code to break if not followed. Queries can sometimes be very long and extremely complicated so some best practices have been developed over the years to ensure queries are neat and readable. Let’s go over the basics of SQL syntax and some common best-practice rules that we should follow when writing SQL queries. 
Here’s a very basic SQL command that simply returns every row and column from a database called store_sales. (Keep in mind this is one of the simplest queries we can write in SQL; queries will rarely be this short.) ''' SELECT * FROM store_sales; ''' It’s okay if this looks completely foreign to you, here’s what each part of the code is doing: '''…''' — SQL queries are always strings, and by using triple-quotes we are able to get around the fact that most queries require multiple lines. SELECT — this is the command used for pulling data from a database, always followed by an indication of what column(s) to access for the data pull; it is always the first command in the query. * — this is SQL-speak for “all” and will select every row and every column (note: this could be something else depending on what column(s) we want to access). FROM — this is how we specify what data source to use; it always follows the SELECT clause, and whatever comes after it will be the target for the command. store_sales — this is the name of our imaginary database; this will be different depending on what database you’re using. ; — this tells our database that we are done with our command; we always end a query with ; You’ll also notice that most of our command is capitalized. It is best practice to write all SQL methods ( SELECT , FROM , any other SQL methods) in caps as it helps to distinguish between methods and data structures/indicators. As mentioned earlier, each command in SQL has its own syntax and placement within the query. Here’s a reference sheet I really like for quickly checking on syntax and use cases: https://www.w3schools.com/sql/sql_ref_keywords.asp. Just click on a method name to get more information on how to use it, including a couple of sample commands. Basics of SELECTing data For the rest of this article, we’re going to focus on basic methods of selecting and filtering data. We’re going to work around this imaginary scenario: we are school teachers with a class of 20 kids. It’s the end of the quarter. At the end of every week in the 10-week term, students are given a test on material for the previous week. The scores for these tests are stored in a SQL database in our school’s system, so we are going to write some queries to get some information about our students and their test scores. As mentioned before, we’re going to be using Python to create the data we’re going to use (pandas DataFrames) and we’ll be using a library called pandasql to query it. pandasql uses SQLite, which is a fairly common flavor of SQL for local/client storage in applications. Imports and data creation If you just want to dive into the queries, skip ahead to the WHERE section. If you are unfamiliar with Python, I suggest you do so. Here’s the code I used for generating our random test score data.
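The original generation code doesn't survive in this text, so here is a minimal sketch of how a scores DataFrame like the one described could be built. The student names and the 60–100 score range are assumptions for illustration, and the pysqldf helper is defined here so the later queries have something to run against.

```python
# Minimal sketch of how the `scores` DataFrame could be generated.
# Student names and score ranges below are illustrative assumptions.
import numpy as np
import pandas as pd
from pandasql import sqldf

# Helper so we can run SQL strings against DataFrames in our environment.
pysqldf = lambda q: sqldf(q, globals())

np.random.seed(42)  # reproducible "random" scores

names = [f"Student_{i:02d}" for i in range(1, 21)]           # 20 students
scores = pd.DataFrame({"name": names})
for week in range(1, 11):                                     # tests 1..10
    scores[f"test_{week}_score"] = np.random.randint(60, 101, size=len(names))

print(scores.head())
```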
I’m not going to go over it in much detail, there are a few comments throughout to help you understand what’s happening. Our final pandas DataFrame will be stored in a variable scores that we will write our SQL queries for. If you scroll to the bottom of the code, you’ll see the first 5 rows of our data printed out. WHERE The WHERE argument is used as the base filtering argument. It always comes after your FROM … declaration and is always followed by the condition you want to filter on. Conditions can be for either numeric or string data, columns cannot contain both types of data. WHERE operators work pretty much the same as any other programming language but here is a brief overview: Now, let’s say that we, as teachers, want to know who in our class didn’t do so well on the first week’s test. Let’s take a look at what the query might look like for that (using a WHERE clause). We’ll quantify “didn’t do so hot” as a score of less than 77%. Okay, so what happened here? The first thing you might notice is that we used SELECT name, test_1_score instead of SELECT * . These are both columns in our scores table, and by indicating them in the SELECT statement, we are saying to only return information in these two columns. Next, we are saying we want to select FROM scores . In pandasql , current pandas DataFrame objects in our working environment are treated as accessible SQL tables. That being the case, scores is our data source, and we indicate as such in the FROM clause. We also said that we only wanted records WHERE test_1_score <77 . Since our data is numeric, we can use a mathematical operator to say that we want all values less thank _blank_ Let’s try another one. This time, we want to see which students did particularly well (90% or better) on the final weekly test (test 10). Awesome! We simply indicated that we wanted to apply filtering to values in test_10_score as opposed to test_1_score , and then flipped our operator sign to ‘greater than.’ Between two values What if we wanted to find students who clearly understood the material, but maybe didn’t study as well as they should have? We’re going assume that’s what happened for students who scored higher than 80% but lower than 90%. You might think that query would look something like this, where we simply ‘stack’ our operators. But as we can see, we’re getting all sorts of values that are outside the range of 80–90. That’s because there’s a built-in SQL operator that we have to use instead. Can you guess what the name of that method is? You guessed it, it’s BETWEEN . BETWEEN is technically a logical operator, so it always follows the WHERE command. It asks for two values —the beginning and end of the desired range, separated by AND — indicating a range of numbers. A note here: SQL BETWEEN is inclusive (equivalent to 80<=test_10_score<=90 ). Now we can see that we have a much smaller list, containing only names and scores for students who scored between 80–90% inclusive. AND/OR Up until now, we’ve only been specifying a single condition in our WHERE statements. In reality, we can have as many conditions as we want, making SQL really useful for complex filtering. There are two ways to indicate multiple conditions following a WHERE command: AND — used when we want to return entries where both/all conditions are true (e.g. 
we want students who scored above 90% on tests 9 and 10) OR — used when we want to return entries where any of the conditions are true; it will return all records matching any number of the conditions in the OR chain. Here’s an example of a query where we want to return the names of students who didn’t do so well on the first test (<75%) but did very well on the last test (>85%). Since we want both conditions to be true, we use AND . Great! Now let’s say that weeks 1 and 2 of the term were covering the same material, and we want to get a list of students that understood at least a good portion of that material. For this, we want to return the names of students that scored better than 85% on either test 1 or test 2. This lends itself perfectly to an OR operator. Okay, cool. It looks like, for the most part, our class understood a good portion of the first two weeks’ material. The great thing about AND and OR is that you can chain as many of them together as you’d like. You can even mix ‘n match! For example, if our final test (test 10) was a final test on all of the material from weeks 1–9, we might want to see a list of students who did poorly in the beginning of the term, and also did poorly at the end of the term. We’re going to look at students who didn’t do so well on tests in weeks 1 or 2 and who also didn’t take the time to study the material during the term and ended up also doing poorly on the final test (test 10). Take a look at the following SQL query and see if you can tell what it’s doing. This time, we said that we want to return just the names of students who scored below 75% on either test 1 or test 2 and also still scored below 75% on test 10. We used OR first because we didn’t care which test they did badly on; we just cared whether they did badly on either of them. We then used AND to indicate that we absolutely needed the last condition to be true. If an entry didn’t match at least one of the first two conditions, or didn’t match the last condition, it would not be included in the return. ORDER BY Sometimes it can still be difficult to sift through a long query return, and it can be helpful to further organize and filter rows within the output. ORDER BY is a very common and useful method for doing this, and it works on pretty much all data types (string, time series, numeric, etc.). ORDER BY and its various modifiers should always be placed near the end, and always come after your conditional clauses (e.g. WHERE ). We’re going to create a new column in our table that’s an overall average of the test scores for each student over the past 10 weeks. We’re going to call this column test_avg . Now we can get an ordered list of our students sorted by their avg. test score. That query would look something like this, but take a look at our output. We get a perfectly sorted list, but that list is in ascending order. This is the default setting in SQL ORDER BY . If we wanted to get a list sorted in descending order, SQL makes that very easy; we just add the DESC tag after our column indicator. Now we have our highest values at the start of our return table. We can also order our returns by more than one column. You do that by comma-separating the column values following your ORDER BY argument. For example, the following query returns name , test_avg , and test_10_score sorted by test_10_score first, and then sorted by test_avg . It’s subtle, but see if you can tell where that second layer of sorting is happening. If we look close (check rows 1 & 2, 3 & 4, and 12 & 13), we can see this two-layer ordering in action. (A runnable sketch of these filtering and ordering queries follows below.)
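The original query examples and their outputs aren't reproduced in this text, so here is a hedged reconstruction of the kinds of queries described above, run with pandasql against the illustrative scores DataFrame sketched earlier (it reuses the pysqldf helper from that sketch; the thresholds mirror the ones in the prose).

```python
# Sketch of the filtering and ordering queries discussed above, run against
# the illustrative `scores` DataFrame with the pysqldf helper defined earlier.

# Students who scored below 75% on test 1 or test 2 AND below 75% on the final test.
struggling = pysqldf("""
    SELECT name, test_1_score, test_2_score, test_10_score
    FROM scores
    WHERE (test_1_score < 75 OR test_2_score < 75)
      AND test_10_score < 75;
""")

# Class list ordered by final-test score (descending), then by test 1 score.
ordered = pysqldf("""
    SELECT name, test_1_score, test_10_score
    FROM scores
    ORDER BY test_10_score DESC, test_1_score DESC;
""")

print(struggling)
print(ordered.head())
```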
This double-ordering isn’t necessarily too useful for this situation, but take the following example. Imagine working for a car dealership and needing to sort through tens of thousands of records to find a specific sale of a specific make and model for a repair job. Now sorting by two values (let’s say year and then car_model) becomes extremely necessary, and SQL makes it easy for us.

GROUP BY
The last major method we’re going to talk about here is GROUP BY. It does exactly what it sounds like: it groups rows by the specified column. The return for a GROUP BY, though, looks a little different from the queries we’ve seen so far. GROUP BY aggregates rows based on common values and is typically used for columns containing categorical variables. Similar to ORDER BY, it goes near the end of the query. It’s common to use GROUP BY in the same query as an ORDER BY clause; in that case, GROUP BY comes first and ORDER BY goes last.

Going off of our teacher/test score example, we could use this function to group students by letter grade. And that’s what we’re going to do. First, let’s create a new column in our table. We’ll call this column grade and fill it with letter grades based on each student’s average test score. If you look all the way to the right, you’ll see our new column. Now we can write some pretty cool queries. Take a look at the following query and see if you can tell what’s happening (there’s a method we haven’t covered yet).

The first thing you probably noticed in the query is the COUNT(name) argument. It’s in our SELECT clause, so we know that it is some sort of information that we’re pulling from our table. COUNT() is a really handy built-in SQL method that returns a single value representing the number of entries in the return, grouped however you specify. We chose to count the number of names (i.e. students) in each group. Groups are specified by our GROUP BY grade argument, which aggregates our many rows into a few rows representing all current values in the grade column.

Another very useful built-in SQL method is AVG(). It works along the same lines as COUNT(), returning a single value representing the mean of the provided column, grouped however you specify. In the following query, we’re returning the average test score of our students in each letter-grade group.

Notice the new columns created by COUNT() and AVG(). They’re intuitively named, describing exactly what each column contains, but they also look a little messy. We can fix this using the SQL AS argument. This is a really handy argument to have in your toolbelt; especially when you start aggregating lots of different categories, it can be really helpful to have a familiar and customizable naming convention.

Extra helpers: AS and AVG()
The following query uses an AS statement to return a single value, renamed test_1_avg. As you can probably guess, this value is going to be a classwide average of scores for test 1. We can see the result immediately: our output is a newly dubbed test_1_avg column holding a single value. AS is placed immediately after the object you wish to rename, and it can be used anywhere you are introducing a column or new value within your query.
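Below is a sketch of the grade column and the GROUP BY, COUNT(), AVG() and AS queries described above. The article’s exact letter-grade cutoffs aren’t shown here, so the bins in to_letter_grade() are assumptions for illustration; everything else reuses the scores DataFrame and run() helper from the earlier snippets.

```python
# Assumed letter-grade bins (the article's exact cutoffs aren't reproduced here)
def to_letter_grade(avg):
    if avg >= 90:
        return "A"
    elif avg >= 80:
        return "B"
    elif avg >= 70:
        return "C"
    return "F"

scores["grade"] = scores["test_avg"].apply(to_letter_grade)

# How many students landed in each letter-grade group
grade_counts = run("""
    SELECT grade, COUNT(name)
    FROM scores
    GROUP BY grade;
""")

# Average overall test score within each letter-grade group
grade_avgs = run("""
    SELECT grade, AVG(test_avg)
    FROM scores
    GROUP BY grade;
""")

# AS renames the aggregate column: here, a single class-wide average for test 1
test_1_avg = run("SELECT AVG(test_1_score) AS test_1_avg FROM scores;")

print(grade_counts, grade_avgs, test_1_avg, sep="\n\n")
```

If you combine the two clauses (say, GROUP BY grade followed by ORDER BY COUNT(name) DESC), the ordering rule above applies: GROUP BY comes first and ORDER BY goes last.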
Conclusion
That’s going to do it for this installment of SQL Crash Course. Stay tuned for the next article in this series, where we’ll cover how to create SQL tables and databases — the SQL architecture used for storing tables.
To summarize: we learned that SQL (Structured Query Language) is the language we use to talk to databases, and that it lets us add, delete, edit and retrieve data. We then covered a handful of methods for retrieving data, along with SQL best practices and syntax: arguments have set positions within a query, queries end with a semicolon, and there are tons of built-in methods for aggregating rows and columns of data.
Thanks for reading, tune in next time, and happy SQLing!
How to biohack your intelligence — with everything from sex to modafinil to MDMA Other deep-dive articles by Serge: Update: Jan 2019 Author’s note: I have been thinking a lot since I wrote this article. I deliberately made it very aggressive because I wanted people to talk about it and to pay attention. But some of the aggression went too far and is not aligned with my values. I want there to be a great future for those of us who (like myself) want to become posthumans. I want to encourage all humans to explore enhancing their health, intelligence and productivity. There is a real risk of being left behind if you do not do that. I also want all of humanity to share in an amazing, grand future, whether they choose to be trans/posthumans or not. If we do this right, we will have essentially limitless resources so everyone can benefit. Human and posthuman grand futures are compatible. To that end I edited the article and removed some of the language I feel does not reflect how I see the world. To be clear I am not in any way going back on my aggressive beliefs or goals. I just realized that I was wrong to think that these goals must be in opposition to the goals of others. There is plenty of awesome future for everyone. I will write another article focused on this topic later on. Intro I had some free time over the holidays and wrote this article to showcase, on the basis of a personal story, many highly actionable, science-based, approaches and tools that can be used to significantly enhance intelligence. In my case these include legal/illegal drugs; using sex as a biohacking tool; drinking ketone esters; using beta blockers or testosterone to gain advantage in negotiations; eating only once a day; and a lot more. Editor’s Note: This story contains some R-rated approaches to bio-hacking. We published it because we want readers to be informed of what’s actually happening in the technology industry. Proceed at your own risk. Background I’m a cliche Silicon Valley techie —Russian, Stanford, YCombinator, started a couple large/successful companies, working in artificial intelligence now. My previous article detailed how, as a 32-year old with no medical problems, I spent ~$200k on enhancing my health. Thousands of tests, medical teams, dozens of prescription drugs. I openly posted all my data. It shows many health benefits — 3–4x reduction in body fat, very high athletic performance (VO2Max ~70), negligible inflammatory processes, >80% increase in testosterone, and improvements to many biomarkers of aging. Biohacking works. The article became extremely popular and reached millions of readers. Many of you loved it. Many of you felt anger and fear. Aggressive bioenhancement of human abilities has long been a sci-fi dream. And (if you read the previous article) here is concrete evidence and a lot of data that show it is already working. Credit to New Yorker for this awesome illustration I think that what we are doing with biohacking is the beginning of humanity’s split into different species. Enhanced posthumans who will not look anything like the humans of today. Unenhanced humans who choose not to do this for their own reasons. The reason this cataclysmic shift is coming: intelligence can already be enhanced. PART 1: INTELLIGENCE, WEALTH, AND POWER So what is “intelligence”? Let’s use the definition the wonderful Prof. Max Tegmark from MIT, uses in his new and highly-recommended book, Life 3.0. 
— — — — Intelligence = ability to accomplish complex goals — — — —
Intelligence is applied and multi-dimensional. Intelligence is much more than just IQ or skill at mathematics. The above definition is critical to understanding this article — re-read it a couple of times. There are many applications of intelligence. But if we drill down, there is a set of intellectual abilities essential to nearly all complex human goals. These could be broken into:
Classical Intelligence (CI): Logic, Problem Solving, Creativity, Strategy
Applied Intelligence (AI): Energy, Focus, Willpower, Emotional Control
Social Intelligence (SI): Persuasiveness, Empathy, “Social Skills”
Dynamic Intelligence (DI): Ability to Learn, Memory, Knowledge
“Intelligence” will refer to these “universally useful intellectual abilities” going forward. We can talk about those of us who have them as “smart” and those who do not as “stupid.” This will piss some of you off, but it really is time we stop pretending we are all equally smart. That sounds nice and PC, but the claim that everyone has roughly the same ability to achieve complex goals is patently false.
Intelligence:
varies massively between individuals. For many reasons — IQ is 50–80% genetic while I would argue other types of intelligence are mostly environmental.
develops within one individual over their lifetime, and never stops changing.
fluctuates massively within one individual with biochemical changes.
If the last part isn’t obvious, test your ability to accomplish complex goals after not sleeping at all, while you have the flu, right after having an intense argument or after drinking 10 shots of espresso.
So why is intelligence enhancement going to split humanity into separate species? Let’s start with a silly story:
Imagine there is only one way to make money in the world: running marathons. All of us are running them, every day. Imagine further that all the marathons of the world are rapidly uniting into one, the share of money going to winners is increasing, and the lifespan over which winners can compete is also increasing. These trends are expected to continue. Over the last 30 years, a bunch of expensive new doping drugs and technologies appeared that are undetectable, healthy, and give a big advantage at running. Moreover, in the next 30 years a lot more such drugs will appear. They will be expensive at first. Only those of us who are already winning will be able to afford them. We are starting to see some of us talk about how much better we run with all these drugs.
How is this relevant to enhancing our intelligence? The above is exactly what the world looks like today. We just need to replace “marathon running” with “intelligence.” Here is why:
There exist a number of medical and lifestyle interventions that can significantly enhance everyday intelligence. Modafinil, SSRI microdoses, MDMA, hormone signaling, optimal sleep, mitochondria-enhancing exercise, isolation from addictive news/social media, and lots of other things.
Many of these interventions are complex, expensive, demand willpower, do not deliver easy rewards, and are dangerous if done wrong. Which is why many of us could be decades late in adopting them.
Greater applied intelligence resulting from these interventions directly creates significant financial, social, physiological (health) and intellectual wealth.
Life is sooooooo easy when we are always full of energy, extremely persuasive, able to focus. This wealth can be reinvested into further enhancement of intelligence, creating an upwards spiral of wealth. The returns on this investment loop will (a) increase with accelerating progress in biotechnology (b) provide greater comparative advantage as the world gets ever more competitive (c) compound over decades and greatly enhanced lifespans. US data on income quintiles vs life expectancy Income is already driving biological inequality, and more of it with every year. From the stats above, we can speculate that men born in 1985 and in the top 1% of income already have a life expectancy of at least 95–100; for women this number is even higher. Even before biohacking. Or the massive progress in biotechnologies anticipated in the coming decades. To be certain of living to 100 those of us who are well-off and into basic health just need to pay attention to the data. These trends feel unstoppable, and may even accelerate. PART 2: FRAMEWORK FOR BOOSTING INTELLIGENCE Start with the goal Key point: you can’t be intelligent if you don’t know what you want If we read all the tactics below without considering goals, we can be left with the impression that Serge is basically saying “mine is the One True Path, everyone needs to copy me, and have sex with models in swing clubs while sipping clomid-ketone smoothies!!” I optimize my intelligence towards my specific 50-year goal. To build one of the platform companies that give us The Singularity. To help make us immortal posthuman gods that cast off the limits of our biology, and spread across the Universe. To have limitless abundance. We don’t all have the same goal. But we should accept that the above is indeed my goal, and that I find it incredibly meaningful. Much more so than wasting my life on buying yachts and jets, having children, or giving to charities. The framework below is designed to enhance our intelligence to get us whatever we want. But you need to know precisely what that is, and I can’t help you figure that out. I want to be a god. Note the “a god” (as in one of many cool, ultrapowerful and ultraintelligent posthuman beings) not “The God.” Part 1: start with Applied Intelligence Key point: if you can’t focus, you are really, really fucking stupid Always start with AI (energy, focus, willpower etc). This is the engine we use to power everything else. This means: Learn to get into high-energy deep flow state every day. Key interventions: fix sleep, do intermittent fasting, get hormones to optimal levels, use modafinil, build the right habits/triggers, have high-quality deep downtime. Ruthlessly remove everything that prevents or interrupts our flow states. Key interventions: make ourselves immune to stress (meditation, lithium, SSRIs); eliminate distractions (remove news, social media, procrastination triggers); build the right habits over time. We will dig into all these tools a bit later in the article. Every time we successfully get into deep flow, we adapt. It becomes easier next time. Every time we procrastinate — flow becomes harder. A sign we are smart is if we manage to get 5–8 hours of totally focused, truly deep, uninterrupted work with no procrastination. And still have time and energy for gym, friends, travel, music, sports, sleep, meditation etc. — all of which we are able to be deeply engaged in. This is feasible. 
In December, I managed this on 23 days, took 3 days off completely, and got disrupted down to 0–3 hours of deep work for 5 days for various reasons (that can be pruned further). This level of deep work is easy to maintain, but extremely hard to get to. We are very distractible. The world is very distracting.
A clear sign we are stupid is if we do not read new books, take poor care of our health, rarely have deep conversations with friends, and do not learn new skills. Because we are “too busy.” Although at the end of the day it is hard to write down precisely what valuable progress we have achieved with our busy-ness. In this case, we are not actually “busy.” We are “stupid.” The good news is that we can become smarter.
This is the part where you write outraged comments about how your job/children prevent you from deep work. Suggestion: find one hour of deep work a day. Use it to learn a new skill — graphic design, coding, whatever. Change your job. Hire a nanny to help with children. Invest the time you freed up into more deep work. This is a gradual upwards cycle of increasing intelligence. Those of us who feel that the above is too long/hard and want something easier will simply remain stupid.
High Applied Intelligence lets us be captains of our lives.
Part 2: invest time/energy into building intellectual wealth
Key point: daily focus + biohacking = extreme intelligence
If we do a great job with AI, we will have an enormous resource of energy and focused time. That (we can speculate) only <1% of us will ever achieve. The question is what to invest it into. Key candidates:
Further enhancements to AI — i.e. obsessively thinking about the reasons we did not hit our deep work goals and fixing them. Remember: this is the engine that powers everything else. Distracted by a call? Turn on Do Not Disturb. Laptop discharged? Carry a battery. Braindead after lunch? Don’t have lunch. Couldn’t get back into flow after a 10am meeting? Don’t have meetings until afternoon. And so on, ad infinitum.
Social Intelligence. Knowing how to persuade others boosts our intelligence far more than “book-smarts” do. And it is extremely “biohackable” — meditation, MDMA, higher testosterone, beta-blockers, SSRIs. These things can enhance our body language, make us confident, calm, empathetic and self-aware. This presses all the buttons of persuasion in other humans. Plus our vast deep work resource can be put to practicing body language in front of a mirror, reading the right books, writing persuasive articles. High SI is a superpower that makes life easy. And it is far more powerful than high IQ. The world just bends to accommodate what we want.
Dynamic Intelligence (i.e. learning to learn) + Classical Intelligence (i.e. IQ). We can probably directly boost these via things known to enhance neuroplasticity and adult neurogenesis. Nutrients/supplements like magnesium, choline, EPA/DHA, curcumin or bacopa monnieri. Meds like SSRIs and lithium. Meditation and any other stress reduction. Making sleep extremely good. Intermittent fasting. Interval training. LSD or psilocybin.
Modafinil is known to safely/directly enhance cognitive ability. More importantly, decades of high AI enable us to constantly learn new complex skills and ideas; make us smarter because we train our brains; and make enhancing all this an easy everyday habit. Last month I spent several days reading the (exceptional) Pre-Suasion by Robert Cialdini (THE guru of persuasion). Went slow. Took notes. Applied ideas to a public talk on biohacking the following week. Distilled the core concepts in a long recommendation-review for friends. This book became a mental model for human behavior. Many of its takeaways help make this article extremely persuasive. This is one of many things deep work lets us do — truly understand, internalize and apply complex new ideas. In the last year I read ~40 books + took ~8 online classes (list of recommendations here)+ learned to code; started a new company (Mirror Emoji Keyboard) + raised the highest-valuation Q1 seed round in Silicon Valley + launched a highly-rated product that may well create material financial wealth; wrote biohacking articles that reached millions of readers and helped meet amazing like-minded leaders in Silicon Valley; slept 8.5 hours on average (proofpic in sleep section!) + worked out every week and meditated every other day; made amazing new friends + went on awesome drug trips + had great sex + lived all over the world. Feel truly happy. And excited about continuing this growth trajectory every year for decades. But let’s compare with 5–10 years ago. I was unhappy, unproductive, fat (27% BF). Carried forward in life by some combination of luck, total denial, and extreme ambition. I was pretty stupid. And I’d never admit it back then to others, or to myself, but my life fucking sucked. Suicidal thoughts appeared a couple times. Luckily never in a serious way. The point is: biohacking can significantly enhance our lives. Let’s use all the medical technology we can get to enhance our intelligence. Start now. Reinvest all of it over years and decades into building intellectual, financial, social and physiological wealth. Utilize the new technology that is coming. We will get to achieve everything we want. Lead the future. Be healthy and happy. PART 3: ACTIONABLE INTELLIGENCE ENHANCERS If you want a health-focused discussion, read the previous article. This one will mention some of the same things. Stuff that makes us healthy is mostly the same stuff that makes us intelligent. But this article is written specifically with the purpose of helping all of us enhance our intelligence. (There is another, deeper, more interesting reason why I spent all the time writing this. It will become obvious by the end). DISCLAIMER: Do not mindlessly copy this. I do most of these things with a team of MDs with decades of medical experience led by Peter Attia, one of the top health-optimization doctors in the world. Peter Attia If you read some of his articles on heart disease prevention, cholesterol, how ketosis enhances athletic performance, and why whether you are fat is not truly about calories you will appreciate the depth of thinking that goes into our decisions. Peter and his team really go after any of their recommendation with the same level of tenacity and dedication. In particular, we do thousands of tests and know that our interventions do not carry much risk for me. But all of us have different bodies. For example I have kidney function with eGFR of >160 (in the top 1% of 30-year-olds), whereas you might not. This means your risks may be higher. 
Be cautious. AGAIN: I am just a guy from the Internet. Not your doctor. I am not responsible for your health and am not telling you to follow my advice blindly.
PART 3.1: APPLIED INTELLIGENCE — ENERGY & FOCUS
Sleep
Key point: if you sleep less than 8 hours or go to sleep at inconsistent times, you are fucking yourself, making yourself stupid, and helping yourself get Alzheimer’s.
If anyone wants all the science, look into this book — it references hundreds of studies, many of which the author (the director of the UC Berkeley Sleep Lab and one of the world’s leading sleep neuroscientists) performed himself. Here we will just list the highlights.
Even minor sleep deprivation (sleeping 6–7 hours), circadian shift (changing sleep time by 1–2 hours from one night to the next), or reduction in deep NREM or REM sleep reduces our intelligence in the following ways:
Applied Intelligence: Severely lowered emotional control, stress resilience, willpower and focus. Significant increase in procrastination. Hormonal misregulation, with consequences like worsened mood and lower energy. And smaller testicle size (yes, really). See chart above — the more you sleep the better your hormones get. Significantly reduced immunity => more sickness => less productivity. They tested this by giving people rhinoviruses and depriving some of them of sleep. For science!
Dynamic Intelligence: Significantly worsened ability to remember what you learned the prior day AND worsened ability to learn the next day.
Classical Intelligence: Significantly worsened cognition and ability to see creative, non-trivial solutions.
Social Intelligence: there are actual experiments showing sleep-deprived people are less able to read facial expressions, are rated less attractive and persuasive, etc.
Worsened clearance of waste from the brain (occurs in deep NREM) => accumulation of Alzheimer’s-causing proteins => permanent damage to sleep-enabling centers => further sleep degradation => we are fucked. I have a grandma with Alzheimer’s. And I would personally cryofreeze myself the second there were any indication I had it. If cryofreezing were not an option, I would prefer to die.
Plus, on top of the Things That Make Us Stupid above, we have: worsened insulin resistance, cancer, cardiovascular disease, car crash risk, athletic performance etc. I don’t have time to link all the studies, but it is in the book. Besides, I’m pretty sure most of us know the above to be true from personal experience. I for one feel like a moron in the afternoon after undersleeping.
In other words, sleep is a major opportunity for intelligence enhancement. It impacts many other things. And for most of us, sleep quality is poor.
First a bit of important theory:
Sleep is driven by two independent functions: accumulating sleep pressure (adenosine) and circadian rhythm (internal clock + melatonin).
If these are not in sync with each other, sleep quality declines.
Sleep has many different stages (REM, light NREM, and deep NREM is a simple classification). The stages (1) have very different functions, (2) happen in a different order, at different times of night, and depending on sleep pressure/circadian rhythm, and (3) are all essential.
Sleep is not actually a state where you are “turned off”; it is a stage of very active work throughout the brain and the body.
What this means is that if we spend 7 hours in bed we sleep ~6 hours and cut out the last stages of sleep, degrading their unique functions by >80%. If we change our bedtime by 2 hours from one day to the next, we destroy the early stages of sleep and degrade their unique functions by >80%.
KEY POINT: you think sleeping 6 hours or going to sleep 2 hours later degrades sleep marginally, but actually it does so very severely.
So sleeping better means spending more time asleep, in the right sleep phases, at the right and consistent time of day.
The first thing to do is measure. Peter, other leading health-optimizers and I recommend the Oura ring. Use the code “sergef” to get $100 off their new version; I do not earn a referral fee from this (they kindly offered, I declined). The reason we prefer this particular device is that it gives far more accurate data than all the wristbands. Skin thickness, skin color, and contact tightness are all more favorable on fingers in terms of blood flow analysis.
My sleep metrics on a given day
My monthly sleep metrics over the last 9 months
Here are the key things we want:
8 hours of actual sleep per day. This means we spend 8.5–9 hours in bed. I spend ~8:30 in bed on average, of which ~7:45 is sleep. Aiming to get to around 8:10 of sleep every day, so 9 hours in bed.
Same sleep time every day. When our sleep time shifts by hours from one day to the next, we are effectively living in a permanent state of jetlag. This de-synchronizes our sleep pressure from our circadian rhythm and destroys certain parts of sleep, especially deep NREM sleep.
1–2 hours of deep NREM sleep per day. According to data I’ve seen from Oura, this will be the biggest challenge for most of us. On a median day I get ~1:20; I am aiming to get to 1:30 and make it consistently good (I sometimes have unexplained drops down to 20–30 minutes).
2+ hours of REM sleep per day. I get 2:45 on average and up to 5:30 when I do a lot of mental work. This is very high already.
Resting heart rate trends through the night
Low, stable resting heart rate. We can see the rough reference ranges everyone quotes for different ages above. I’m around 47 on great days, 56 on bad days, 52 on average, which is excellent. We want this to be low + stable during the night like in the graph above. This indicates highly restful sleep.
All this, life-long.
To sleep better:
Pick a sleep time where we spend 8.5–9 hours in bed and do not shift it by more than 20 minutes a day. This is incredibly hard in modern society and is the #1 thing that makes our sleep better.
Use blue-light-blocking glasses for 3–4 hours before going to sleep. Gunnars are good. Plus lately I’ve been using these because they block even more blue light. They do look weird, but we have to decide whether our ego matters more than our health/intelligence.
Do not drink alcohol.
Even a small amount degrades REM sleep which is the key part of sleep focused on intelligence. Interesting tidbit: the main difference between human and monkey sleep is that humans have more REM. So those of us who drink basically shift our sleep quality to monkey sleep and we can speculate as to long-term impacts of that. By the way — those studies about the benefits of red wine are bullshit, sorry. Do not drink coffee or tea for 9 hours before sleep time (if a fast metabolizer) or at all (if a slow metabolizer). Caffeine half-life is surprisingly long. The graph above is average, but 50% of us (like me) metabolize much faster and 50% metabolize much slower and should not drink caffeine at all (for them it is associated with major health risks). The distinction is purely genetic and based on gene rs762551. Sleep in cool temperatures (18 degrees Celsius; 65 Fahrenheit) and try out hot or cold showers before sleep. Low body temperature helps get into deep NREM sleep. I find that ice showers make me fall asleep very fast (counterintuitive). Make sure our bedroom is totally dark and quiet. Use earplugs and sleep masks. Even sounds that do not wake us up actually make our sleep worse. Exercise improves sleep, but do it >3 hours before bedtime. Consider not eating heavily for at least 3–4 hours before bedtime. Sugar and carbohydrates reduce quality of deep NREM sleep. Yet another reason not to eat that shit. Do not use sleeping pills. They make us lose consciousness but they actually don’t make us sleep. These are different things. Consider meditating or otherwise turning off before bed. Here’s the thing: like most suggestions in this article the stuff above has compounding benefits. Each night of bad sleep permanently damages us and we can never fully recover that damage. Part of the damage is to the apparatus of sleep itself, which over time makes us stupid and ages and kills us. Many of us do not want to make the changes to our social lives, dating etc. for the sake of sleep. It is a matter of priorities. You want to go to the club, or you want to not have Alzheimers. Stress: do everything possible to lower it Key point: stressed out, negative, emotional people basically have long-term brain damage Constant stress truly hurts everything: it makes cognition worse[1 2], drains energy, and even cuts adult neurogenesis and neuroplasticity in our brains [1 2]. Makes memory crap [1]. Interferes with hormonal systems [1]. It ages and kills us in many ways[1 2]. Here are another 50+ studies on how chronic stress fucks us up. In other words, constant stress makes us stupid. Anything we can do to reduce it is a big win. Here are the specific tools I found useful for this: The SSRI antidepressant Escitalopram (I take 10mg/day) to boost serotonin. It eliminated “bad mood days” that used to happen 1–2 times a month. Reduced propensity to react to stress. Life just feels a little better all the time. It is also proven to enhance growth of new neurons in adult brains [1 2 3]. Escitalopram is extremely safe even in large doses [1] especially for me because I have genes that are associated with significantly higher positives and lower negatives of this specific drug. Old antidepressants (e.g. MAOI) are dangerous. There are also studies out there that claim even the latest, best antidepressants are bad for you [1]. My medical team is skeptical of those studies. 
The biggest reason is “sick cohort bias.” These “researchers” take a bunch of depressed people that are prescribed antidepressants, compare them with people NOT on antidepressants (while claiming to magically compensate for the fact that the latter group are obviously healthier people). Conclude that the AD group has a small difference in some kind of risk, and PR this to gullible media. This is bullshit, not science. My doctors take SSRIs themselves and make no money from prescribing them to me. So I trust their conclusions. Lithium. We get ~1–3mg of it a day from water. It is prescribed to bipolar disorder patients in doses of 800mg-2000mg/day. I take 100mg/day (i.e. 10–20x less than a dose known to be safe). Reasons: (1) it is proven to enhance neurogenesis and is associated with a great number of medical benefits [here is a list of >50 studies on its various benefits] (2) subjectively it seems to drive a slight increase in my stress resilience. There appears to be no good reason not to be on lithium in this dose range. Meditation. I meditate 30–45 minutes every day with a combination of mindfulness and freestyle-notation. Meditate “in life” while doing things from eating to listening to music to sitting on a skilift. My friends and I have a private Slack community where we keep shared meditation journals, and discuss them with a talented meditation coach. My meditation time is unstructured — a habit of meditating when in the back of a car, with a bit of free time, or just when bored. It’s a much better use of time than mindlessly browsing the internet. Meditation gets better with time and coaching— now I’m so good at it that I can dismiss an extremely strong emotion in 5 minutes just by observing its bodily manifestations. We can be completely non-religious and do not need to believe in any mystical bullshit that meditation is (sadly) surrounded with. There is a large amount of serious scientific evidence that suggests meditation is valuable for everything from neurogenesis to cognition, mood, attention, and disease risk [here is a collection of ~50 studies on this]. Once we are good at meditation, it provides very concrete applied hacks we can use. Here are some I use every day: Emotions are sets of specific bodily sensations [link]. Once they manifest, they are reinforced by loops of thoughts about them. So when you want to easily get rid of an emotion (e.g. loneliness or anger or self-doubt) you can sit down, find that emotion in your body, and pay close attention to it. This breaks the rumination cycle (because you aren’t thinking) and reinforces your self-awareness (because you find emotion-driving bodily sensations => understand that your self-doubt is merely an itch). I can dismiss negative emotions in 30 sec — 5 min. They just vanish. I can see myself “from the side.” I have my eyes open and typing on laptop but I can also simultaneously see myself in front of me. This is easier to do with eyes closed, is a trainable skill, and is quite hard. The reason this is useful is that it lets you dissociate from your own emotions and ego. You can say to yourself “why is that guy sitting over there annoyed about some work bullshit? He knows very well that various bullshit happens all the time, it is just a regular day.” And it feels like in meditation I only just scratched the surface although I meditated (incorrectly) for 5 years. It is already easy for me to control my emotions. My goal this year is to get to a point where negative emotions don’t even appear. This is feasible. 
Remove negative people from life. There are people who make us happy and energized and inspired, who we absorb new skills from, who we look forward to seeing. There are others who make us drained, who fill us with their own insecurities, fears and negativities. Even if the latter are our bosses, relatives, friends, spouses, we should reduce time with them. We can empathize, but it is not on us to fix other people. Hard enough to fix ourselves. Additionally, I saw material enhancements for stress resistance from things described in other sections. Most of all better sleep, sex, MDMA, removal of news/social media, and hormonal enhancements. Last important point on stress: our propensity for it is a long-term thing. Stress and fear increase the size and power of the amygdala (the part of our brain where fear is generated), which in turn makes it easier for us to be stressed and afraid. If we are constantly fearful, stressed, depressed, negative, skeptical of others — we are “sick” in the medical sense that our brain structure is doing something detrimental to us due to long-term damage. And it is detrimental — lots of scientific evidence for that (refer to all the studies listed above). We can get better, but it takes time and effort. Actively work on keeping stress levels low. It doesn’t just make us happier and healthier. It makes us significantly smarter. Sex: also a biohacking tool [Oct 2019: I edited this section to reflect my updated thoughts on the subject] Apology to the female half of the audience. This section is male-oriented. That is what I know and optimize for. Key points: Sex = good for you. Humans = not monogamous. When not in a relationship, I just hire fashion models to have sex with in order to save time on dating and focus on other priorities. Great sex = biochemistry. I think of sex as something similar to exercise, meditation, or food. Another physiological need to be addressed in a time-efficient way; another tool to enhance health (talking about safe sex obviously) and intelligence. There are many reasons why sex is useful for intelligence: If we do not get it, we spend a lot of time thinking about it. Pursuing it, watching porn etc. Useless distractions. Society is sexualized and ties the male ego to having sex. Doing so makes the ego content and easier to control. Sex leads to favorable hormone profile changes that enhance mood, and reduce stress [1] and even help sleep [1]. Funnily enough I even noticed a very clear correlation between sex and my own deep sleep levels, and anything that improves deep sleep is very valuable. There is evidence that sex boosts our neurogenesis and neuroplasticity. [1] Getting great sex takes too much time and energy: Dating to get sex rather than deep relationships takes a lot of time. Much of that time is wasted. On people who are not a good fit. On idiotic things like swiping on Tinder or going to clubs (screws with sleep). Casual dating = trading our time and energy for sex and reassuring our ego that we are desirable. Monogamous long-term relationships can be great if we are together with someone who is a true friend and shares our values — I am in a long-term relationship which I deeply value right now. But monogamy is a challenging solution for sexual desires. Human biology is not monogamous. I quote: “Overall, of the 1,231 cultures in the Ethnographic Atlas Codebook, 84.6 percent are classified as polygynous [one man many women], 15.1 percent as monogamous, and 0.3 percent as polyandrous [one woman many men].” [1]. 
What this means is humans are not wired for monogamy. And either cheat or are frustrated in relationships. Neither of which I can accept. This applies to men and to women, yet women have a worse deal — society shames them for not being monogamous whereas men have tacit approval from society for pursuing their sexual urges. “Player” is positive, “slut” is not. Just straight up paying for sex in one-time cash meetings off the Internet is frankly emotionally-unpleasant. Masturbation doesn’t deliver the same benefits for some reason. It seems we do trick our brains, but not fully. My solution at this point in life is simple: I have agents from the Russian/European modeling industry that set up dates/sex for me (that I pay for, they arrange all logistics, I don’t have to do anything at all). This whole thing is surprisingly well-organized. Think “messenger bot or closed instagram profile that sends ~50 new portfolios a day — high-profile, detailed portfolios with videos, links to magazine covers and instagram profiles.” If I hit it off with particular girls (which happened a number of times), we just keep meeting when we feel like it. With no expectations of relationships or any claim on each other. There are easy ways to hack emotional chemistry. Meditate or carefully take the right drugs together, have sex right after a gym workout upped hormones, do fun stuff like go to a swing club. A big reason I decided this is optimal is all the “rich guy harasses women” stories. This way everyone unambiguously and honestly consents. With clear expectations. And a data trail. Basically: I get to have sex with women I find attractive, when I feel like it. All the psychological and physiological value of sex, with very little time or emotional cost. Helps me have more appreciation for the long-term relationships I have because I satisfy sexual urges and feel free without a sense of resentment at my girlfriend for restricting me. And no risks because everyone’s happy. Society tells us that “the right kind” of sex needs X months/dates of knowing each other to be good. And of course it must “not be based on looks or money” (a theory easily disproved by the existence of makeup and Lamborghinis). In reality awesome sex just needs the right biochemical buttons pressed. That is all. A lot of you do the stuff I do quietly. Or fantasize about doing it. Just do it openly. You will look confident and empowered. The world does not get to tell you not to do what you want. BTW, the right long-term relationships can be amazing and valuable. I want these to be honest and based on a genuine connection. Not on “I am really horny, I’m going to go waste my time to pick up women I actually have zero long-term interest in.” Finally, if you think it is “misogyny” to pursue your own desires and encourage everyone, regardless of gender, to do the same, you may want to look up the definition of “misogyny”. Hormones: Key point: your hormones are probably screwed up. Fix them, it improves mood, energy and health. And generally makes life awesome. For reasons related to modern life (stress, poor sleep, pollution etc.) most of us will have suboptimal hormones. This is hard to fix without expensive professional help and testing. But it is really worth exploring. The general idea is: test all of our hormonal systems a number of times understand the research or get a high-quality endocrinologist to identify opportunities (e.g. if TSH is >2, we may have an opportunity related to thyroid function). 
explore interventions, try them, see how we feel, test biomarkers again. optimize and repeat My testosterone change in ~2 months (after blocking pituitary estrogen receptors with clomid) I had “below average but clinically normal” levels of thyroid hormones and of testosterone. Boosted both via targeted interventions described in my previous post. This improved mood and boosted energy in quite a material way. Plus testosterone is quite important for Social Intelligence (explained in a later section). It increases confident, aggressive, dominant behavior as well as willingness to take risks. The associated body language and behaviors make it easier to get people to listen to what you say and do what you want. Intermittent Fasting: Key point: eating once a day makes you smarter and healthier. I eat only one time a day — late afternoon / early evening, and fast for 24 hours. I do this nearly every day. This: Has major health benefits by promoting autophagy (the 2016 Nobel Prize in Physiology or Medicine was granted to the discoverer of this mechanism), reducing probabilities of cancer, heart disease and Alzheimers. Saves a lot of time. Eating once a day is a huge time win because it removes context switching. Prevents the afternoon slump and keeps our mind sharp for longer. Directly enhances intelligence via improved BDNF/neuroplasticity. Here are ~70 supporting studies. There is very wide scientific consensus that IF is great for you. Basically it makes us smarter, saves valuable time and makes us healthier. Think of how big of an advantage it is to have an additional hour of sharp focus a day for 30 years. And we will live longer! It is also natural. All that stupid shit about eating breakfast and eating 5 times a day ignores the very obvious question: do you actually think you evolved to have an around-the-clock buffet? do you think the hunter-gatherers ate 5 times a day? This is really a no-brainer. A bit hard at first, but the body and mind adapt quickly. Dietary Ketosis Key point: your body and mind work much better if you do not eat sugar, processed foods and grains. Because guess fucking what? Evolution designed you to mostly eat fat! A simplified explanation of ketosis is that we are switching our bodies into burning fat rather than glucose. This requires eating nearly all our calories from fat, and can be measured quite precisely via finger blood sticks. My ketone (left) and glucose (right) in mmol/l on a random day The fundamental argument why ketosis is good is as follows: For one oxidative reaction, ketone breakdown delivers more energy (20–30% more) than glucose breakdown [1]. It is basically a cleaner fuel that “rusts” all our systems less per unit of performance delivered. My doctor Peter Attia does a great job of explaining the details in a series of articles here [1]. I can hear you arguing “yeah right, so evolution didn’t build us this way and Serge thinks he can hack his body’s metabolism to be 20–30% more efficient.” Actually — this is returning us to what evolution built us for. Before we started cultivating grain, carbohydrates and sugars were rarely available. We could basically only get them from berry bushes and beehives. High dietary fat consumption and ketosis is our natural state. There is a great deal of evidence that suggests ketosis is advantageous. Here’s another 70+ studies if you want to read about it. The problem is that ketosis is very hard in the modern world. Much harder than intermittent fasting. 
The reason is that sugars and carbohydrates are everywhere + that they are very addictive. And in order to be in ketosis, we really have to eat close to zero sugars and our carbs have to be limited to a small amount of fruit + what we will get from vegetables. It looks like there is interesting technology on the horizon — HVMN, a biohacking company funded by Marc Andreessen, has developed the first commercial-grade ketone ester that really does raise ketones rapidly and significantly (I got 0.5=>3.4 in 15 minutes for those in the know), and reduces blood glucose. The older ketone salt products don’t get nearly as good of a result. When I took their ketone ester (note: I am not earning any referral fees or the like from them although I am friends with the founder), I felt a rapid and lasting inflow of energy/focus. Another particularly acute effect was needing to breathe less — in normal activity, in the sauna, and in an interval run. Right now this is expensive (~$3000/month to stay in keto all day) and unpleasant (the stuff tastes… really bad although Geoff, the CEO, is saying this will be fixed shortly). Hopefully both will get better. I really like the effects I perceived so far, as well as the science behind ketone health benefits. Will probably take this every day on top of my on-and-off keto diet. Exercise: Not much to add to what I wrote in my previous article — do interval training (not bullshit long cardio or marathons), do heavy hip hinge exercises, sit less. This likely contributes to intelligence via hormonal systems, sleep, stress control etc.; and even if it did not you would want to do it for the health benefits. Distractions and Addictive Cognitive Garbage: Eliminate Them Key point: technology, notifications and news media are vampires that suck our time and energy. Many things require deep, focused work. Switching contexts is expensive. If we are distracted from writing code by a 5-minute phone call we do not lose 5 minutes, we might lose hours of excellent work. There is more. Every time a notification, a phone call, a YouTube video distract us, our neural networks get rewired. We become more distractible. Less able to focus. More addicted to these things. We become more stupid. Modern tech and media are deliberately designed to make us addicted. Couple quotes from top Facebook executives about this: “Your behaviors — you don’t realize it but you are being programmed… you gotta decide how much you are willing to give up, how much of your intellectual independence,” “my solution is I just don’t use these tools anymore. I haven’t for years.” “[my kids] are not allowed to use this shit.” — Chamath Palihapitiya, former head of Growth for Facebook [1] “The thought process that went into building these applications, Facebook being the first of them, … was all about: ‘How do we consume as much of your time and conscious attention as possible?’…exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.” — Sean Parker, former President of Facebook [2] This is not limited to social applications. The news media, from Buzzfeed to the New York Times, are overhyping threats, optimizing clickbait-y headlines, and generally doing everything they can to make us care about things that are actually totally fucking irrelevant to our lives. 
That’s right — the news media that claims to keep us informed and complains about Facebook designing addictive technologies actually thrives on hijacking our minds, sucks our time and energy. And directly makes us more stressed, negative, stupid people. Do you really want to spend your days and limited attention resources worrying about what Kim Jong-Un will do? Why. The fuck. Do you. Care? (If you think worrying about Kim Jong-Un in fact does provide value, I challenge you to post in the comments a list of specific decisions that reading the political news helped you make in 2017, and what concrete, valuable outcomes resulted from these decisions). Here are some ideas on how we can approach this challenge: This is my iPhone Home screen. It is deliberately designed to be uncluttered and trigger me to read books (Kindle), listen to podcasts and audiobooks (Overcast, Audible) and use my latest product (Mirror Emoji Keyboard). Consider removing Safari and messengers from this screen because although they are indispensable, we do not want to cue our mind to use them. There was a noticeable effect — after I got them out of sight, I stopped checking them without a good reason. Also my iPhone is switched into greyscale and dimmed (but I can’t screenshot it). The idea is to make everything less colorful/attractive/addictive (you can do this in Accessibility). My desktop is similarly uncluttered and minimalistic. All my phone notifications are off except Calendar and Uber. Facebook, Instagram and similar apps are deleted. Futile attempts to reach Netflix I have a VPN both on my Mac and on my iPhone that bans web traffic to social media and all news sites at all times. I add this blacklist of sites to the Mac Hosts file so that it is harder to get around the ban. Here’s a guide to doing this. I do use messengers/email, but no notifications. Trying to get to no more than 2–3 blocks of responses a day. I rarely pick up the phone, and never from unknown callers. Sure, we can lose some opportunities, but really, who gives a fuck? Life is full of opportunities if we have a lot of time and energy. I live in hotels. But if I ever buy a house, it will also be designed to trigger correct behaviors. From uncluttered rooms that cause focus, to pictures of sugar next to cancer cells in the kitchen. This shit works. I have not turned on a TV channel a single time for at least 10 years. Not exaggerating. There are physical hacks that can help stay focused by reducing incoming sensory data. My favorite are Etymotic earplugs + Bose Active Noise Cancelation headphones on top; and hoodies to reduce visual field. Useful when attractive women are around :) . I read somewhere that those of you who go to memory competitions buy blacked-out glasses and drill holes in them to further minimize vision field when focused. Want to try this, it actually makes a ton of sense given how much brainpower our vision uses. I ignore nearly all requests; meet with people only if I can learn something from them or they are friends; and almost never have meetings before 2pm. There’s lots of other things. The key point is that I relentlessly seek out reasons why I couldn’t get enough flow on a particular day and prune these reasons. Some of you might feel this is extreme and not worth the time investment. Track how many hours you waste away on procrastination, social media, news articles etc. in a typical month. For me that number used to be several hours a day and is now approaching zero. 
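As a concrete illustration of the hosts-file blacklist mentioned above, here is a small, hypothetical Python sketch that appends blocking entries to /etc/hosts on a Mac or Linux machine. The site list and backup path are placeholders rather than the author's actual blacklist, and the script needs to be run with sudo.

```python
# Hypothetical sketch: point distracting sites at 0.0.0.0 via /etc/hosts so the
# browser can't reach them. Run with sudo; macOS/Linux only. You may also need
# to flush the DNS cache afterwards (e.g. "sudo dscacheutil -flushcache" on macOS).
import shutil
from datetime import datetime

HOSTS = "/etc/hosts"
BLOCKLIST = [                                  # placeholder list, not the author's
    "facebook.com", "www.facebook.com",
    "twitter.com", "www.twitter.com",
    "news.ycombinator.com",
]

def block_sites():
    # Keep a timestamped backup so the change is easy to undo
    backup = f"/tmp/hosts.backup.{datetime.now():%Y%m%d%H%M%S}"
    shutil.copy(HOSTS, backup)

    with open(HOSTS, "r+") as f:
        current = f.read()                     # reading moves the cursor to EOF
        for site in BLOCKLIST:
            entry = f"0.0.0.0 {site}"
            if entry not in current:           # skip entries already present
                f.write(entry + "\n")
    print(f"Blocked {len(BLOCKLIST)} hostnames; backup saved to {backup}")

if __name__ == "__main__":
    block_sites()
```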
Our investment in controlling our infospace pays for itself many times over. And it is easy once habituated. This section is also the reason I choose not to have children. None of us will disagree that children are extremely distracting, disrupt sleep for years; are generally a massive cost of time, focus and energy; and have material risks of not working out, for reasons outside our control. I just don’t see the ROI in children given my goals. They won’t ever return my time, focus or energy back. There is no point in passing on our genes once we can live forever ourselves (and there are good reasons to think some of us will do so). We can have other meaningful long-term projects. And if we ever feel lonely we can take MDMA with friends or boost our hormones and neurotransmitters. True happiness is in our biochemistry state. We can have it without intermediate steps. PART 3.2: SOCIAL INTELLIGENCE — PERSUASION Social Intelligence: much more intelligent than IQ Bad news for the high-IQ introverts amongst us. Donald Trump is smarter than we are. Just radiating intellect He repeatedly survived through bankruptcies that would have destroyed most of us if we were in his place. Played the media like a fiddle. Became president of the US. And he isn’t a lucky one-hit wonder. He has been achieving for decades things that the majority of us would very much like to achieve. This is not an endorsement of him. I do not like what he does. But his “ability to perform complex tasks” (i.e. intelligence) is high. And it is based in SI. Body language. An understanding of human emotional buttons. That the human brain equates attention and credibility. Even if you really dislike him it makes sense to learn some of his techniques. (I entered into a $20k bet on Trump winning the presidency 15 months before the election. For those of us with high social intelligence, it was always obvious Trump would win). We should recognize that Social Intelligence is far, far, far more powerful than IQ. The reason Social Intelligence is so powerful is that it scales. An understanding of human emotional buttons enables us to get others to like and support us with their skills — no matter who they are. A high IQ and ability to debate with formal logic is useful too of course. Planes don’t fly on emotions. But if we, after a lifetime of observing ourselves and other humans, think we can use logic/facts/IQ to persuade/lead/connect with others, we are truly fucking stupid. There is a lot of evidence that Social Intelligence and social status have many second-order effects on intelligence: neurogenesis, happiness, intelligence, brain volume, desire to compete, lower stress etc. Here is a whole book full of supporting evidence by Robert Sapolsky, a leading neurologist at Stanford. And lest we think this is just because of human social inequality, a great deal of the research is replicated even for monkey social intelligence and social status (read the book). So — how do we boost it? The prerequisite is to truly internalize that humans are “irrational.” But the irrationality is highly predictable. We have to understand and believe this, at our deepest core. Favorite example of “irrationality” (I am well aware of that priming studies are controversial): When we hear French music in a wine store our preferences are materially shifted to French wines. Even if we don’t ever become aware of the music. The reason things work this way is actually highly logical. 
Somewhere in our mind there are neural nets for France, French Music, French Wine, French Flag image. They are linked to each other. That is how we know these are related in some way. Triggering the French Music net triggers all linked nets, including the nets for France and French Wine. So the incoming signal to the French Wine net is a bit stronger than into the Italian Wine net at this moment in time. That changes our decision probabilities. Without us ever noticing. All of us are programmable, hackable machines. So — how can biohacking help our SI? An example of confident body language Body language, eye contact and voice tonality have greater persuasive impact than the words we say. Biochemistry impacts these things. For example testosterone is both a cause and a consequence of confident body language [here’s a TED talk about this]. Looking and acting dominant in turn enhances our persuasion effectiveness — other humans are much more interested in what we have to say. And this bias towards confidence is so deeply ingrained that it will remain for as long as we are human. I raise my testosterone a lot via customized therapies (see my previous article for details). All the antistress tools — SSRIs, Lithium etc. — further enhance your body language of relaxed confidence. We all want to interact with positive, confident, un-stressed people. Recently a top Silicon Valley entrepreneur told me he uses beta blockers to be calm in big public talks. I then found out this is a favorite of concert pianists amongst you. Haven’t tried yet, but just got some propranolol. Sounds like an awesome idea. If in a stressful negotiation we have a cool head and a heart rate of 70 while the counterparty is at 120+, we have a material intellectual advantage. MDMA has its whole separate section. There are many other hacks. If I want to be aggressive/passionate in a public setting I do a set of 50 pushups right before. If I want to be more calm, I meditate. The most important thing is that with our huge Applied Intelligence reserves we can invest into SI. Read books [my list includes some favorites on the topic]. Watch videos of ourselves and practice in front of a mirror. Do public talks. Write persuasive blog articles. Have deep conversations with friends & family. Study neuroscience classes. Over the decades we can become what Scott Adams calls a Master Persuader [read his awesome book on persuasion, Trump, and why humans are not rational]. MDMA: the Social Intelligence Drug Recently I gave a talk about biohacking to an audience of ~200 successful leaders in San Francisco. When I asked how many of them took the drug MDMA (Schedule 1 illegal in the US), >50% of hands went up. Think about that for a second. I am not telling you to take it. But in my subjective experience, MDMA boosted Social Intelligence more than anything else. Its effects were permanent and extremely beneficial. The scientific research says that risks of MDMA (Ecstasy) appear to be far lower than those of alcohol and tobacco [here are several studies; this is obvious to anyone who ever looked at the science]. The FDA is about to approve it for treatment of PTSD and says it is a “breakthrough” [1]. The setting I take it in is long, chill house parties with friends/family, great music and ambience, known supply with precise measurement. For those of us who haven’t tried it: the effects are: Social anxieties and ego vanish. We feel free to tell others what we genuinely think and feel, without worrying about being judged. 
We might think we do these things anyway, but this is a different level of emotional openness, confidence and freedom. We greatly enjoy interacting with those around us and feel deep empathy for them. We are extremely extroverted — building a deep connection with someone we just met is trivial. There is no self-talk about future or past. We are completely in the moment, like after an intensely deep meditation. The interesting thing is that for me all the effects above stayed permanently, albeit they are not as strong while actually on MDMA. The biggest benefit is that I became extremely comfortable with being open about who I am and what I feel. In this article I publicly admit to taking illegal drugs and paying for sex. It is trivially easy for me to say what my biggest insecurity used to be (height — 5'8 and spent years agonizing over it) or the most embarrassing lie in my life (lied about getting into Stanford GSB before I actually did; it got to the Dean of Admissions; I’m still not sure how I persuaded him to let me in despite this). All my friends and family read these articles BTW. It is extremely liberating to not care at all about what other people think of us. And just be who we are. And here’s the thing: we all LOVE those of us who are so open and honest. It makes us interesting. Memorable. Trustworthy. Relatable. Confident. It makes it easy to get into deep conversations, which builds relationships. It is soooooo much easier to get what we want in life when we have no fears and just directly ask for it. And so few of us use this extremely powerful hack. Alcohol (88,000 deaths/yr in the US), tobacco (480,000 deaths) and opioid painkillers (15,000 deaths) are legal, but MDMA (50 deaths) is not. Clearly many of us understand this and do not respect this law (>50% in my San Francisco sample openly said they used MDMA, while just about nobody smoked tobacco). Unscientific and irrational drug laws undermine the moral authority of law as a whole. PART 3.3: DYNAMIC & CLASSICAL INTELLIGENCE — IQ AND THE ABILITY TO LEARN Modafinil: proven to enhance our Applied and Classical Intelligence A recent study at Oxford reviewed all English studies on Modafinil over 25 years, and concluded that Modafinil significantly enhances the attention, executive function and learning (i.e. intelligence, in its most clear form imaginable) of healthy non-sleep-deprived humans when performing complex tasks, and with no side effects or mood changes at all. [1]. I take 100–200mg every day in the morning (as early as possible) and plan to always do so. Anecdotally some people find their sleep is disrupted. The country that first decides to give free Modafinil to all its citizens will reap massive competitive benefits in the global economy. Adderall: has its uses I used to take Adderall XR (daily for many years). Adderall is very effective and quite safe (here’s a huge review of the science, and it clearly suggests Adderall can give you superhuman concentration with not much real risk). I switched to Modafinil because I found that Adderall made me more anxious + made it too easy to focus on unimportant things. I still take it occasionally when I need to do something boring but important. For example in my last company I read/edited key legal documents on Adderall. And the mechanisms I built in gave negotiating advantages years later. When reviewing 200 pages of dense legalese, a $1000/hr investor lawyer gets obliterated by a founder on Adderall. But overall I do not recommend it. Too powerful. 
Neurogenesis, Neuroplasticity & Learning Key point: many things in this article help you grow new neurons. that helps you learn. if you do not constantly learn new and challenging things, your intelligence degrades. oh, and LSD. Learning itself is a skill that needs to be developed. For a general overview of the neurobiology of learning I highly recommend completing the Learning How To Learn class on Coursera. It is easy and very valuable for all of us — those in high school, and those who are CEOs alike. Learning is dependent on neuroplasticity and neurogenesis — i.e. the ability of our brain to grow new neurons and rewire synaptic connections between existing neurons. These are driven by something called BDNF, which is highly modifiable. We already mentioned that a number of things enhance these: lithium, SSRIs, sleep, meditation, stress reduction, sex, fasting. Here are some more: Supplements. The ones I take: EPA/DHA Omega-3, Magnesium, Curcumin, Niacin, Rhodiola, Bacopa. You can read about all of these at Examine.com. Exercise, especially high-intensity interval training. LSD [bunch of related studies in Nature] Here’s another 50+ supporting studies. The last bit about LSD deserves a separate mention. I recently began microdosing LSD. Interesting experience. No hallucinations, but the mind wanders, focuses intensely on various sensory inputs, and links ideas in novel and unpredictable ways. I find it impossible to do focused work. But it seems to be an incredible way to deeply enjoy things like music, art or even taking a bath. It also seems to be a very powerful way to shift the mind into a diffuse mode of thought after a period of intense focus, which is essential for learning (watch the class referenced above for details). LSD even in large doses is extremely safe. I will keep using them from time to time when I want a day off. NOTE: do not mix LSD and lithium. If you are the kind of person who looks down on those who take illegal drugs, you should know that Steve Jobs, Bill Gates, Richard Feynman, Thomas Edison and many other top businesspeople, scientists and leaders used illegal drugs and often spoke of them as crucial to their success. And of course to improve our learning skills we need to keep learning. We can teach ourselves to code or complete classes on neuroscience or genetics or quantum physics or French or playing the piano, rather than spend our precious time and neurons on the failings of politicians, the habits of celebrities, or the details of terrorist attacks. Even if we don’t ever use quantum physics, learning it will make us smarter and more capable of learning any other concepts, as well as link them to quantum physics in novel ways. Supplements: This article is already long enough. I take lots of supplements — 50+ pills a day (that’s an actual picture above). They are basically super-safe things (e.g. concentrated garlic) that can have plausible benefits. More details in the health article. Avoiding pollution and toxins Key point: pollution and especially smoking damages your brain. It is well known that many pollutants damage intellect directly (lead, mercury); have broadly negative neurological effects (volatile organic compounds such as styrene); and disrupt hormone signaling (xenoestrogens are linked to long-term testosterone decline observed around the world). I try to avoid all of these as much as possible. On a practical level this means: Try to live in a clean air area + spend time in well-ventilated spaces. 
I’m looking forward to someone creating a convenient and not too scary personal air filter mask that I could wear when in cities. Don’t drink hot liquids from plastic containers and generally try to avoid plastics near your food. Eat expensive local/organic food. We don’t really know what is in it of course. But expensive food is a game of probabilities. Cheap food is guaranteed to be full of various shit from antibiotics to pesticides. That’s how they make it cheap. Don’t smoke or spend time near smokers. Nicotine is a nootropic, but all the other shit in cigarettes — benzene, styrene etc. — doesn’t just give us cancer, it makes us fucking stupid in the clearest sense of the word. Here is a quote from the US EPA: “exposure to styrene in humans results in effects on the central nervous system (CNS), such as headache, fatigue, weakness, depression and CSN dysfunction.” [1] I know I used the word “stupid” many times in this article but it all pales in comparison with the stupidity of smoking. Placebo Placebo: definitely good for you Many of these interventions are likely to carry significant placebo effects. Placebo is awesome. If we have genuine belief in the efficacy of an intervention, that belief itself generates part of the desired effect. This is one of the most proven observations in medical science. Optimism is healthy and good for us in and of itself. Summary I made this point many times. But this stuff compounds. Think about how smart you will become if you have decades of focused learning, awesome sleep, great health, practicing social skills, building wealth and power. An upwards spiral that others will never catch up with. And because gains in society are exponentially concentrated at the very top, gains from maximizing your intelligence are large. And deep work habits also train us to have “deep downtime.” To have deeply engaging conversations with our friends and family, where we show our vulnerabilities and get to the core of who we are. Rather than staring at our phones and distractedly discussing the weather (as most of us do). This whole approach isn’t just healthy and useful to create wealth and influence. It is also a fucking awesome way to live. PART 3: CONCLUSION It is not the ability to easily multiply large numbers, but the ability to consistently win, that makes AlphaGo superintelligent in the (very narrow) domain of playing Go. Likewise we are intelligent if we accomplish complex goals. Stupid if we do not. That intelligence is based on our biochemistry. This article is about executing extremely well. You should also pick the right goal and tie it together on different timeframes. I have very general objectives by the decade for the next 40 years => fairly concrete OKRs (Objectives & Key Results) for 2018 => Quarterly OKRs => they inform my weekly and daily priorities. In other words I can link my action item today “find coach that helps me train my voice to be slower” to annual OKR of “enhance persuasion skills” to “become immortal post-human god sometime after 2060.” And now for the TRUE purpose of this article: This article has many suggestions that can make those of us who adopt them more intelligent, happier and more influential. And I genuinely enjoy helping others. But there is a deeper purpose. I want to live in a post-human future that is aligned with values I align with: knowledge, science, technology, freedom, progress, power, abundance, pure meritocracy, optimism. 
And where tribalism, religion, tradition, nation-states, irrational emotions, conservatism and socialism have much less power over the world. I know that only those of you who hold a very similar technocratic worldview will internalize and obsessively adopt these hacks. Those of you who believe in different values will say this is unproven. Too radical. Too weird. Too un-human. Too far off in the future to care about. Too complicated. These tools will help the group that adopts them gain more influence. And thus help further advance a situation where those of us with values similar to mine influence the agenda for mankind. This will create a much better world than what we have today. So for me the lines along which we will split on this approach are themselves a feature of the approach. This article — crafted with deliberate attention-grabbing concepts like sex, illegal drugs and fear — is designed to help bring about the grand future I believe in. ******************************************************************* Other deep-dive articles by Serge:
https://medium.com/hackernoon/biohack-your-intelligence-now-or-become-obsolete-97cdd15e395f
['Serge Faguet']
2019-10-04 17:59:57.466000+00:00
['Health', 'Personal Development', 'Life', 'Self Improvement', 'Life Lessons']
“Ask and you shall receive”: Bette Midler’s request of Black Women
“Did you ever know that you’re my hero, and everything I would like to be? I can fly higher than an eagle, for you are the wind beneath my wings.” — Bette Midler, Wind Beneath My Wings On Thursday, September 19, 2019, singer and actress Bette Midler posed a question on Twitter asking if Beyoncé’s followers, known as the “Beyhive,” could prioritize voting against President Donald J. Trump during the 2020 presidential election year. Her tweet read as follows: “#Beyoncé has 133 million Instagram followers. More than double the people who voted for Trump. Wouldn’t it be amazing if the #BeyHive mobilized to defeat him? I also wouldn’t mind if a regular bee hive fucked his shit up.” While the request seemed harmless, given Beyoncé’s mass appeal and Midler’s apparent distaste for the current president — thus resulting in her push to remove him from office, Midler’s tweet was met with much criticism. Some felt she was putting unnecessary pressure on Black women to impact the presidency. For example, Twitter user @megi_jay wrote: “Hi, white liberals need to stop pressuring black people into saving us. Black women are already very invested in the party-even though the party has often abandoned them. It’s on white people to work on other white people. You could have addressed this message to Taylor Swift.” Other users informed Midler that Beyoncé was politically involved in the past presidential election, encouraging voters to support Hillary Clinton who at the time was running against President Trump. Arguably, the tweet aimed at Beyoncé and her followers is an example of the burden often placed on Black women in America to help progress the country, while also removing responsibility from other women, White women specifically. This ideology is troublesome, given the many accomplishments throughout US history that were made possible because and at the expense of Black women. Moreover, Midler’s words seemingly ignored the current political involvement of this demographic. Ask of the Body Prior to and throughout President Trump’s first term in office, many have argued that he is not fit to serve the role. At times, his temperament has been questioned. Moreover, the president is accused of colluding with the Russian government to win the 2016 election. Some have even speculated that his racist rhetoric has led to violent attacks on Blacks, Hispanics, and other people of color. For these reasons and more, numerous Americans including Midler have been looking for ways to remove Trump from the presidency. A general consensus among US citizens opposed to his presidential term is that he is “tearing the country apart.” In contrast to popular opinions about President Trump’s negative impact on the US, Black women as a collective have consistently worked to better the lives of Americans. First, one must consider the Black woman’s body. The enslavement and exploitation of Africans in the US is largely responsible for the country’s current wealth and power. During this period, White slaveholding men and women primarily used Black women for their physical labor and reproductive abilities. This often came at a great cost to slaves but benefited White slaveowners directly and other occupants of the country indirectly. For example, in many cases Black women were expected to partake in wet-nursing, a practice that involves a woman breastfeeding another woman’s child. 
White slaveowners believed that Black women had a superior ability to suckle and utilized Black women's breasts to raise healthier White children — the next generation of slaveowners and exploiters of Black people — while paying "scant regard to the very real difficulties faced by black mothers attempting to raise their children under a system of bondage" (West & Knight, 2017, p. 48). The use and abuse of Black women's bodies has also occurred in the American medical field. According to Savitt (1982), the Medical College of the State of South Carolina used "black patients for surgical demonstrations throughout the antebellum years" (p. 335). Black women were among these patients. Moreover, James Marion Sims, considered "The Father of Modern Gynecology," conducted research on enslaved Black women without the use of anesthesia. Perhaps a more widely known instance of the American medical community exploiting Black women occurred when Henrietta Lacks, who died of cervical cancer in 1951, had cells taken from her without the knowledge of her family or her consent. Her cells, now called HeLa cells, never cease dividing and have helped bring about some of the most significant medical discoveries of the modern era, including advances in cancer and HIV/AIDS research. However, Lacks' family has never been properly compensated for her contribution to modern medicine. Need for the Mind Black women have also been major contributors to politics. During the 2016 presidential election, specifically, Black women voted for Hillary Clinton at a higher rate than any other demographic (94 percent), according to CNN exit poll data. By contrast, most White women (52 percent according to the same CNN exit poll) voted for then-candidate Trump. A similar result was seen during the 2017 US Senate race for Alabama, in which national news media focused on controversial Republican candidate Roy Moore and Democratic hopeful Doug Jones. Despite Moore being twice removed from the bench as Alabama chief justice for opposing same-sex marriage, once accepting a $1,000 donation from a Nazi group, and having several women claim that he romantically pursued — sexually assaulted — them when they were teens, Jones narrowly won the Alabama Senate seat. Exit polls from NBC News indicated that Black women overwhelmingly supported Jones (98 percent), while 63 percent of White women voted for Moore. Given the abovementioned findings, perhaps Midler should not be targeting Beyoncé or the Beyhive to vote against President Trump in 2020. As a counter to Midler's critics, one could argue that there is no racial component to her tweet since Beyoncé has a diverse fanbase. This viewpoint was expressed by some, such as Twitter user @kunstmansarah, who wrote: "You assume she said this cause Beyonce is black? And you assume all Beyonce fans are black? I've reached out to celebrities via twitter for support on important issues because of their huge fan base, not because the color of their, or my, skin. Much love and healing to us all." A possible rebuttal for this viewpoint, thus acknowledging that race is a factor, could be that: 1) Beyoncé has visibly aligned herself with the Black community in recent years and 2) Beyoncé was the only mainstream artist Midler has targeted recently.
First, since 2016 Beyoncé has publicly supported Black causes, evidenced by her album Lemonade, the Super Bowl 50 halftime performance where she paid homage to the Black Panther Party, and her 2018 Coachella performance, nicknamed “Beychella,” in which Beyoncé took part in a historically Black college and university-themed production. During this year, the artist collaborated with several African artists to produce The Lion King: The Gift, which featured a song she performed entitled “BROWN SKIN GIRL.” Second, Midler did not reach out to other celebrities who are just as popular on Instagram. These include Taylor Swift, Kylie Jenner, Ariana Grande, Kim Kardashian, and Selena Gomez. Oddly enough, Dwayne “The Rock” Johnson is currently the only other Black celebrity with Beyoncé’s level of Instagram popularity. However, debate surrounding his racial ambiguity often distances Johnson from the Black community. Freeing the Soul To close, Midler is within her rights to ask for anyone’s help in removing President Trump from office. However, her request of Beyoncé and the Beyhive is another case of history repeating itself. During the US slave trade, Black women helped the US gain wealth and power while simultaneously caring for their White slaveowners. Their bodies have helped advance medicine so that all Americans and beyond now benefit. Additionally, the political involvement of Black women must not be undermined. The voting power of this demographic is great. For this very reason, Democrats hoping to run for President of the US in 2020 look to Black women as a key voting bloc. Despite their involvement in shaping the US, however, more is expected. Perhaps Midler and others like her should look to their own for help. Black women have earned a rest.
https://j-stokes.medium.com/ask-and-you-shall-receive-bette-midlers-request-of-black-women-4c0b0555819e
['J. Stokes']
2019-09-26 12:01:39.729000+00:00
['Politics', 'Black Women', 'Race', 'Beyonce', 'Music']
20 Funny Programming Facts and Quotes
1. Java is to Javascript like car is to carpet I love this one because it summarizes the Java vs. JavaScript story perfectly. Most of you know that Java and Javascript are two entirely different things, despite their names, but many beginners get confused by it. So why are they named this way? From an interview with its creator Brendan Eich: InfoWorld: As I understand it, JavaScript started out as Mocha, then became LiveScript and then became JavaScript when Netscape and Sun got together. But it actually has nothing to do with Java or not much to do with it, correct? Eich: That’s right. It was all within six months from May till December (1995) that it was Mocha and then LiveScript. And then in early December, Netscape and Sun did a license agreement and it became JavaScript. And the idea was to make it a complementary scripting language to go with Java, with the compiled language. So the JavaScript name is the result of a co-marketing deal between Netscape and Sun, in exchange for Netscape bundling Sun’s Java runtime with their then-dominant browser. At the time, Java applets allowed you to distribute applications efficiently. It was not a bad idea. In fact, it was quite progressive for the time — it would make software distribution a lot easier. Unfortunately, Java was heavy to load and felt a bit clunky at the time. JavaScript was meant to be a lightweight language that could be mixed with HTML, to make simple HTML pages more interactive and compelling without requiring the heavy Java alternative. Nobody at the time would have predicted that JavaScript would evolve into the most used language in the world.
https://medium.com/tech-explained/20-funny-programming-quotes-and-facts-6aa7138bd971
['Erik Van Baaren']
2020-05-03 07:15:11.822000+00:00
['Programming', 'Humor', 'Software Development', 'Funny', 'Productivity']
What is Ensemble Machine Learning? — using stories and pictures
“Storytelling is the most powerful way to put ideas into the world.” -Robert Mckee In this article, using small stories, I will try to explain the concepts of ensemble machine learning. In recent times, I haven’t found any Kaggle competition-winning solution which doesn’t have ensemble machine learning. So, it might be a good way to understand the basic concepts of ensemble machine learning using some examples. Ensemble machine learning Suppose you want to buy a house. To understand if this is the perfect house for you or not, you will ask questions to your friends who have bought a house, real-estate brokers, neighbors, colleagues, and your parents. You will give weights to each of the answers and try to arrive at the final answer to your question. Exactly, this is ensemble learning. Ensemble machine learning is an art to create a model by merging different categories of learners together, to obtain better prediction and stability. Naive Ensemble machine learning techniques are: Max voting — Based on the previous example, if you have asked 10 people about the house and 7 people told not to buy the house. Your answer is not to buy the house based on max voting. Averaging — If each of these people gives the probability that you should buy this house or not (Like your parents say that this house will be 70% suitable for you), you take an average of all these probabilities and take the decision to buy the house. Weighted averaging — Suppose, you have a trust issue and you trust more your parents and close friends than any other. You give some higher weights(suppose 60%) to the probabilities given by these people and lower weights(40%) to others. Then you will take the weighted average and take the final probability. Advance Ensemble machine learning techniques are: Bagging — Also known as Bootstrap Aggregating. I have a set of multi-colored balls in my bag. I asked a kid to pick 5 balls. Then again, I put the balls back and asked the kid to pick 5 balls again and again. This repetitive task is known as Bootstrapping or sampling with replacement. “Bootstrapping is any test or metric that uses random sampling with replacement, and falls under the broader class of resampling methods.” -Wikipedia Now, based on every 5 balls drawn, I will find the probability of a white ball. Suppose, I get 2 white balls out of 5, then I have a probability of 2/5 i.e. 40% and if I get 0 white balls out of 5, then I have a probability of 0/5 i.e. 0%. In the end, I will take the average probability of all the time the ball is drawn and conclude what is the probability of getting a white ball getting drawn from the bag? So basically, I am creating a small model out of each sample of balls withdrawn and then the balls are put back. In the end, I combined the predictions of each of the models to obtain the final solution — probability. This is bagging. ML version of bagging: From the original dataset, randomly multiple samples are generated with replacement A weak learner (a base model like a decision tree) is created on each of these subsets, such that all these weak learners are independent of each other and run in parallel Finally, combine the predictions obtained from each of these weak learners to create a prediction for the strong learner(final bagging model) One of the most popular examples of bagging is Random Forest Advantage: a. Less Overfitting — Many weak learners aggregated typically outperform a single learner over the entire set, and has less overfit b. 
Stable — Removes variance in high-variance low-bias data sets c. Faster — Can be performed in parallel, as each separate bootstrap can be processed on its own before combination Disadvantage: a. Expensive — computationally it will be expensive if the data set is quite big b. Bias — In a data set with high bias, bagging will also carry high bias into its aggregate c. Complex — Loss of interpretability of a model. 2. Boosting One day, I thought, why not cook food on my own? So, I cooked food. But I found I had added extra salt. So, it was not tasty. The next time I cooked, I put in less salt. But I found it was too spicy. So, the next time I cooked I put in adequate spice and salt, but the food got burnt. So, the next time I cooked, I put in adequate spice and salt, cooked the food on a low flame, and was watchful. Finally, I cooked tasty food. At the inception, I was a weak learner. But I kept on learning from my own mistakes and in the end, I became a strong learner. ML version In boosting, weak learners (decision trees) with relatively high bias are built sequentially such that each subsequent weak learner aims to reduce the errors (mistakes) of the previous learner. Each learner learns from its predecessors and updates the residual errors. Hence, the learner that grows next in the sequence will learn from an updated version of the residuals. Each of these weak learners contributes some vital information for prediction, enabling the boosting technique to produce a strong learner by effectively combining these weak learners. The final strong learner brings down both the bias and the variance. There are two types of boosting: A. Weight based Boosting Steps: i. A sample dataset is taken to train the model with a weak learner. For example, I have three independent variables X1, X2, and X3, and the dependent variable Y which I have to predict. ii. We get the following result based on the weak learner prediction. Then we calculate the absolute error from the prediction. Every weight-based algorithm, such as AdaBoost or LogitBoost, has its own way of calculating these weights. Here, just for reference purposes, I have used weights to highlight that a higher weight is given to the row which has a higher error. iii. Again, a weak learner model is created to predict Y, giving more weight to the misclassified rows. So, the model will try to learn from the previous model and create less error where the weight is high, and we get a better model each time. B. Residual based Boosting i. Step i. is the same as above. We will train our randomly sampled data with a weak learner. ii. In this case we will obtain the error in each row from the misclassification. iii. Based on the error as my dependent variable, now I am predicting using another weak learner with the same independent variables. iv. Now, my new boosted model will be weak learner 1 + weak learner 2, with the prediction being the normalized sum of both models. You can see that it is learning fast and has reduced the bias (error) significantly. This is just an example of a residual-based boosting algorithm. In a real scenario, every residual-based algorithm has its own way to reduce bias. Some examples of residual-based algorithms are a. XGBoost b. LightGBM c. CatBoost d. GBM 3. Stacking This time I want to learn batting techniques in cricket. But I know several good batsmen who play cricket near my place.
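Before the stacking story continues, here is a minimal scikit-learn sketch of the max voting, bagging and boosting ideas covered above. The synthetic dataset, the choice of base learners and the hyperparameters are illustrative assumptions rather than anything taken from the article; scikit-learn also ships a StackingClassifier for the stacking approach described next.

```python
# Hypothetical example: max voting, bagging and boosting on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import (VotingClassifier, BaggingClassifier,
                              RandomForestClassifier, GradientBoostingClassifier)

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

models = {
    # Max voting: several different learners vote on the final class label.
    "max voting": VotingClassifier(
        estimators=[("tree", DecisionTreeClassifier()),
                    ("logreg", LogisticRegression(max_iter=1000)),
                    ("knn", KNeighborsClassifier())],
        voting="hard"),
    # Bagging: many trees fit on bootstrap samples; their predictions are combined.
    "bagging": BaggingClassifier(n_estimators=100, random_state=42),
    # Random Forest: the most popular bagging-style ensemble.
    "random forest": RandomForestClassifier(n_estimators=100, random_state=42),
    # Boosting: trees built sequentially, each one correcting its predecessor's errors.
    "boosting": GradientBoostingClassifier(n_estimators=100, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```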
When I asked the players, they told me one batsman is good at playing leg glance, while one other batsman is good at playing hook and pull and one other batsman is good at playing sweep and drive shot. Rather than focus on learning from a single batsman, I tried to learn from each batsman their specialist shots. Indeed it has helped me to become a really good batsman than all these batsmen at the end. This is stacking. Learning good quality from each of the batsmen and stack to give the best result. ML Version What is different about this ensemble technique as compared to bagging and boosting? Unlike bagging, in stacking, the models are typically different (e.g. not all decision trees) and fit on the same dataset (e.g. instead of samples of the training dataset). Unlike boosting, in stacking, a single model is used to learn how to best combine the predictions from the contributing models (e.g. instead of a sequence of models that correct the predictions of prior models). What is stacking exactly? Wolpert in 1992, introduced this term stacking to the data science world. “Stacking” is a technique in which the predictions of a collection of models are given as inputs to a second-level learning algorithm. This second-level algorithm is trained to combine the model predictions optimally to form a final set of predictions.” Steps involved: Take the training data set and create k fold splits. In our example, let’s take it 5 fold. 5 fold stacking 2. Based on the K-1 fold training by the first of three algorithms (we can take more or less than 3 algorithms for training) algorithms, we will predict Kth fold training data. 3 algorithms can be SVM, KNN, and Random Forest. The point is unlike boosting and bagging, it is not necessary to use Decision trees only. SVM prediction on 5th Fold data 3. We will repeat K-1 times step 2. It means we will take another K-1 fold for training and predict the Kth fold using the first algorithm(SVM). 4. Now, since the first model is trained, we will predict the validation dataset C. Prediction value of SVM 5. We will repeat steps 2 to 4 for each of the remaining algorithms to obtain the predictions. Predictions from each of the algorithms for both training and validation data set 6. Now, we will train the B dataset using the predictions obtained from each of the algorithms, and based on this training we will predict the C dataset. In this way, we perform stacking. Let’s move towards the last ensemble technique. 4. Blending The story of blending is the same as stacking. Here also, I will learn from different batsmen to become a better batsman. What is then the difference between stacking and blending? It follows the same approach as stacking but uses only a holdout (validation) set from the train set to make predictions. In other words, unlike stacking, the predictions are made on the holdout set only. The holdout set and the predictions are used to build a model that is run on the test set. Steps involved: Take the training data set, validation set(generally 10–30% of training dataset), and test dataset. We will perform training using several algorithms (can be decision tree or SVM or Random Forest or any other) on the training dataset. Based on the above training, predict the validation dataset and test dataset by all the algorithms Based on the prediction of step 3 as input, train the output of the validation dataset, and predict the test dataset. Is it confusing? Yes!!! 
Then follow the pictorial explanation: Blending: First step I am going to train dataset (A) using 3 algorithms (Algo_1 (SVM), Algo_2 (Random Forest), and Algo_3 (KNN)). Based on the training, I have got the model parameters. Blending: Second Step Now, based on the above training, I am going to predict the validation dataset (B) and test dataset (C) using the three algorithms. We can see above the prediction results of the three algorithms for both datasets B and C. The next step would be to use these predictions as input. I am going to again train dataset B based on the three algorithms' predictions. Based on this training, I am going to predict dataset C. Important point Stacking and Blending are relevant when the predictions made by the different algorithms, or the errors of those predictions, are uncorrelated with each other. A simple example was my learning of batting technique: I learned from different batsmen with different expertise, not the same expertise. Summary This article is about ensemble machine learning, bagging, boosting, stacking, and blending. Using stories and pictures, it explains complex algorithms like boosting and bagging in a lucid manner. In this article, you have learned: What is ensemble machine learning? What are the different types of ensemble machine learning — both naive and advanced? You have got details about each ensemble machine learning technique. Hope you have liked this article. Please clap to encourage me to write more such articles, and please do share this article if you find it helpful to your friends or colleagues. Reference:
https://abhinav-iitkgp2.medium.com/what-is-ensemble-machine-learning-using-stories-and-pictures-d949e4649a0f
['Abhinav Srivastava']
2020-12-19 11:05:34.672000+00:00
['Data Analysis', 'Stories', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
What Did I Think of ‘Before the Wrath?’
Should I really be reviewing this movie? Most people don’t believe that the rapture will occur. What if there was evidence to prove that it will? — Before the Wrath These are the words and the thought that this movie, Before the Wrath, begins with. When I was presented with the opportunity to review this movie, I wondered if I should take it. End times prophecy has never been all that interesting to me. It’s never been that important to me, given that I know Jesus is coming back for me before the end (I am a pre-tribulation rapture believer). What happens after that is pretty much irrelevant to me. But, see, I believe in the rapture. I believe it will occur, just as Jesus said. As the above quote indicates, most people don’t (which I find rather startling; it’s in the Bible, clear as anything). If you believe the Bible, you should believe in the rapture. If you don’t believe the Bible, well, then, you’ve got some other things you need to work out first. But maybe the “evidence” presented in this movie will be enough to spark something inside you that will turn you back to God and the truth of His word. As for me, I decided to give this move a try because I do, after all, love movies … and especially ones that deal with biblical ideas and themes. I was not sorry I chose to review this one. After all, it is the #1 Christian movie in America right now. I don’t want to give too much away I’m going to stay as vague as possible in this review because I don’t want to give away “the secret.” I want you to see the movie for yourself. But I will say that it mainly has to do with the fact that Jesus was from Galilee, and pretty much everything He said and did was colored by that cultural background. Because He was mainly hanging around and sharing things with people who also came from that background. Also, we have to take a look at His first recorded miracle … at a wedding. That’s significant, for more than one reason. I’ve already explored this idea a bit in a popular Medium article. So, this movie hit on some things God had already been showing me in my own studies of Jesus. And, within the first few minutes of the movie, we hear these words: “…I tell you, he will see that they get justice, and quickly. However, when the Son of Man comes, will he find faith on the earth?” (Luke 18:8, NIV) A few weeks before I saw this movie, I had been studying those words in the Bible, and they stood out to me in a major way. I knew then, without a shadow of a doubt, that God wanted me to watch and review this movie … and probably learn some things from it (I did, like the bride in an ancient Galilean wedding had a lot more power than we might realize). Still from ‘Before the Wrath,’ courtesy of Ingenuity Films My overall impression of ‘Before the Wrath’ In addition to being informative (and I love that in a biblically themed movie), the movie was well-acted. The cinematography was skillful and beautiful, and the music was intriguing and haunting. The narration by Kevin Sorbo just brought it all together perfectly. This is an excellent production, and I highly recommend it. Will it change your mind about Jesus, the Bible, or the rapture? Will it make you a believer? I don’t know. Only God knows that. But if you are already a believer, this movie, which highlights the importance and impact of biblical prophecy, will strengthen that belief and give you a brand new, exciting perspective on what Jesus has done for you already … and what He is going to do for you. You’re not going to want to miss this movie! 
How do you watch ‘Before the Wrath?’ You can get the DVD right now from Walmart for less than $10. Or you can check out the retailer’s “buy” page for more purchase options. And, just because you read this review, you have a chance to win a digital code that will enable you to watch this movie online for free! Just leave me a comment, telling me about your most memorable wedding moment. Please note, this giveaway will end on December 4, 2020, at 11:59 PM EST. And, as Before the Wrath tells us, if you want in, you have to get in while you can … because once the door is shut, it’s shut. And be sure to check out the free YouVersion 5-day reading plan available here. Canva creation from the ‘Before the Wrath’ movie poster, courtesy of Ingenuity Films “Disclosure (in accordance with the FTC’s 16 CFR, Part 255: “Guides Concerning the Use of Endorsements and Testimonials in Advertising”): Many thanks to Ingenuity Films for providing this prize for the giveaway. Choice of winners and opinions are 100% my own and NOT influenced by monetary compensation. I did receive a sample of the product in exchange for this review and post. Only one entrant per mailing address, per giveaway. If you have won a prize from our sponsor FlyBy Promotions in the last 30 days, from the same blog, you are not eligible to win. If you have won the same prize on another blog, you are not eligible to win it again. Winner is subject to eligibility verification.
https://medium.com/koinonia/what-did-i-think-of-before-the-wrath-890ad1955b26
['Mishael Witty']
2020-12-01 01:38:51.542000+00:00
['Prophecy', 'Christianity', 'Movies', 'Review', 'Creativity']
Being Bayesian with Visualization
Think about the last time you looked to a visualization to determine what to believe about the world, or how to act. For those of us who have been watching COVID-19 data accumulate, the answer might be “twenty minutes ago.” Even in the absence of a global pandemic, it’s common to look to visualizations in science, government, the media, and our personal lives to decide what is true about something in the world, whether that’s the state of the economy, what will happen in the next election, or how well you’re meeting personal finance or health goals. Visualizations can confirm, strengthen or call into questions things we already think. Unfortunately, much of the rhetoric about why visualization is powerful implies that visualization is primary for helping us perceive data, and identify hypotheses or trends that we had no idea existed. Consider often-cited examples like Anscombe’s quartet, the clever set of 4 datasets that, when summarized as summary statistics, appear to be identical, but on visualization are clearly distinct in structure. Clear your mind and visualize the data, one might believe, and they will speak. Perhaps as a result, many visualization evaluations focus on how well people can read the data from a visualization, not what they think afterward. Anscombe’s quartet, like many examples used to demonstrate the power of visualization, assumes that the user has no prior knowledge about the data. How often do you find yourself looking at data about which you really have no prior expectations? The ways we use visualization in the world to inform our beliefs call into question latent assumptions that people approach visualizations as “blank slates.” We’ve been developing an alternative approach to visualization design and evaluation that acknowledges people’s prior beliefs. An Example of a Bayesian Approach to Visualization Imagine I am going to show you some sample data estimating the proportion of people who will contract coronavirus given that they are exposed (or, if you’re overwhelmed with COVID-19 news, you can imagine I am talking about presidential polls instead!) But before I do that, think about what beliefs you might bring to the estimate even before I show it to you. If you’re like me, you’ve been watching updated case counts and deaths across the world, and could harbor a rough guess of the proportion of people in who seem to get the virus when exposed. Even if you have little background on COVID-19, the term virus may make you think of other diseases, and harbor a guess based on those. For example, I might describe my beliefs as a prior probability distribution over possible values, with my best guess of the rate at 30% but believing that values between 11% and 60% are possible. Setting prior beliefs on the percentage of people exposed to COVID-19 who will contract it. Imagine you now saw a visualization of a sample of people who got coronavirus within some population, say, the Diamond Princess cruise ship. At least 712 out of 3,711 (19%) passengers contracted the virus while quarantined together on the ship. 712 people of the 3,711 passengers on the Diamond Princess were infected with COVID-19. Because we are trying to estimate the probability of contracting the disease upon exposure, but we observed a limited number of people, numbers as low as 660 and as high as 754 should be considered plausible. When using the Diamond Princess to estimate the percentage of people in the larger U.S. 
population who will contract COVID-19 if exposed, we must acknowledge some uncertainty around our estimate of the population value due to the fact that various underlying “true” proportions could produce slightly different sample estimates. A likelihood function describes this uncertainty about the true proportion based on the limited sample size of the Diamond Princess passengers. Now consider what your best guess about the proportion of people who will contract coronavirus given that they are exposed is now that you’ve seen the new information. Do you mostly overwrite your prior beliefs with what the data says? Or do you reject the data and stick with something pretty close to your prior? Or do you believe something in the middle? If so, are you more or less uncertain now? A few examples of what a person might believe after learning that 19% of the Diamond Princess passengers contracted COVID-19, assuming their prior beliefs that 30% is the most likely proportion exposed who will get COVID-19, with some uncertainty. They might, for example, mostly reject the new data (a), update their beliefs to a value between their prior beliefs and the new data (b), or mostly throw out their prior beliefs (c). By eliciting or inferring a user’s prior beliefs about a parameter, showing them observed data represented as a likelihood function, and eliciting their posterior beliefs, we frame visualization interaction as a belief update. And given this framing, we can do some powerful things. First, in a Bayesian framework, we can use our prior and the likelihood for the Diamond Princess sample above to calculate a normative posterior distribution. This is what we should believe if we take the data at face value and use it to update our prior beliefs rationally. Rational in this case means that we’ve used Bayes rule, a simple rule of conditional probability that says that our posterior beliefs should be proportional to our prior beliefs times the likelihood, to arrive at the posterior beliefs. For our example above, we can represent the two distributions — the prior distribution and likelihood — using Beta distributions, which are described by two parameters representing the numbers of “successes” (e.g., those who contracted COVID-19 after exposure) and “failures” (e.g., people who didn’t contract it). Under most circumstances, we can interpret the sum of these values as the amount of data (or size of sample) that is implied by those beliefs or data. This leaves us with two sets of information. It is rational to combine these two sets of information to figure out what we should believe now. We can sum successes and failures, then figure out the new proportion: For a Beta distribution, we can sum the two parameters (alpha and beta, approximating counts of “successes” and “failures”) from the prior and likelihood to arrive at the best posterior estimate of the proportion. Let’s say our posterior distribution was (b) in the figure above, with a most likely value of 25%. We can compare it to the predictions of our Bayesian model, which suggests that we should believe that 19% of people who are exposed will contract COVID-19 with a small window (a few percent) around that value, based on our prior beliefs and the Diamond Princess data. It appears that our posterior beliefs overweight our prior beliefs, which said 30% was most likely, relative to normative Bayesian inference. 
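As a small illustration of the conjugate update just described (posterior = Beta(alpha_prior + successes, beta_prior + failures)), here is a sketch in Python. The prior parameters Beta(5, 11) are my assumption, chosen only to roughly encode a best guess of 30% with a plausible range of about 11%–60%; the article does not state the exact prior it uses.

```python
# Sketch of the Beta-Binomial update described above (prior parameters are assumed).
from scipy import stats

# Prior: one rough encoding of "30% most likely, roughly 11%-60% plausible".
prior_a, prior_b = 5, 11                    # pseudo-counts of "successes" and "failures"
prior = stats.beta(prior_a, prior_b)
print("prior mode:", (prior_a - 1) / (prior_a + prior_b - 2))            # ~0.29
print("prior 95% interval:", prior.ppf([0.025, 0.975]))

# Likelihood: the Diamond Princess sample, 712 infected out of 3,711 exposed.
k, n = 712, 3711

# Conjugate update: add the observed successes and failures to the prior's parameters.
post_a, post_b = prior_a + k, prior_b + (n - k)
posterior = stats.beta(post_a, post_b)
print("normative posterior mean:", posterior.mean())                    # ~0.19
print("normative posterior 95% interval:", posterior.ppf([0.025, 0.975]))
```

Compared to this normative posterior, an elicited posterior with a higher mean or a wider spread tells us the viewer either overweighted their prior or discounted the informativeness of the sample.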
Comparison of the posterior beliefs predicted by a Bayesian model of updating for the COVID-19 example (top left) and a user's hypothetical posterior beliefs (bottom left), which are more uncertain and shifted toward their prior beliefs. The right side represents the same two distributions as density plots. From the difference, we learn something about how a person updates that can be broken down in several ways, such as by considering how well they update the location of their beliefs (how the mean of their posterior compares to that of the normative posterior, in this case shifted toward prior beliefs) versus variance (how much more or less certain they are than the normative posterior, in this case more uncertain than a Bayesian would be, suggesting they undervalued the informativeness of the sample). With this information, we can do a number of things, which we are currently exploring in our research: Evaluate visualizations to see which brings updating closer to Bayesian. Even if people don't appear Bayesian at an individual level, we can rely on the fact that people's belief updates often tend to be Bayesian in aggregate to identify which of several visualizations is best for belief updating. Use the user's prior to personalize how we show them data. Their prior distribution describes how certain the user is about some quantity. We can use this subjective uncertainty to give them context on how much information new data contains (e.g., this data is twice as informative as your prior beliefs) or derive other analogies to guide their belief update. Detect (and mitigate) cognitive bias as someone does visual analysis. If we can observe a few belief updates someone has already done, we can diagnose whether they tend to over- or under-update, and how. We can predict how they'll respond given a new dataset, and adjust the visualization or other aspects of the interaction for detected biases. But first, let's unpack what might cause deviation from a Bayesian prediction. What might cause deviation from Bayesian updating? A few reasons seem especially worth considering. Biases in Using Sample Size Correctly One possibility is what is often called cognitive bias, where a person shows a consistent tendency to do something different than a perfect statistical processor would. Some well-known biases describe difficulties people face understanding the relationship between sample size and uncertainty. Often people overperceive the informativeness of a small sample (commonly called Belief in the Law of Small Numbers), and they may also discount the informativeness of a large sample (recently dubbed Non-belief in the Law of Large Numbers). In a recent experiment, we had roughly 5,000 people on Mechanical Turk go through a procedure similar to the example above, giving us their prior beliefs, viewing visualized data, and then giving us their posterior beliefs.
We showed them one of four datasets, which varied in sample size (a small-ish sample of 158 versus very large sample of 750k) and topic (dementia rates among assisted living center residents in the U.S. versus the proportion of surveyed female U.S. tech company employees who reported that mental health affects their work often). Icon arrays depicting a small and large sample version of a dataset estimating the proportion of women in tech who feel mental health affects their work often. The huge sample size of the dataset on the right means that each icon represents multiple women. Topic didn’t have too much of an effect on how people updated their beliefs. But sample size did. First, when we looked at the average individual level deviation from Bayesian updating (i.e., how far a person’s posterior beliefs were from the normative posterior beliefs given their prior), it was much higher for the very large sample. Second, when we looked at the average aggregate deviation from Bayesian inference — the deviation between the average of all people’s posterior distributions and the normative posterior distribution you get if you update the average of all people’s prior distributions using the Diamond Princess sample — it was also much higher. These results align with recent findings from a behavioral economics study that uses more abstract “balls and bins” scenarios to find that people increasingly discount the value of information as sample size grows. Ironically, this suggests that we should not show people big data all at once. How do people mentally represent uncertainty? Our results also suggested that people are much more Bayesian in aggregate than individually. An intriguing hypothesis put forth for this “noisy Bayesian” effect is that people’s priors may take the form of samples, rather than full distributions. To use an example tested by mathematical psychologists, if you asked me how long a cake had to remain in the oven before it was done, given that it has already been in for 10 minutes, I might come up with an answer by imagining a few specific cakes I know of, say, one which takes 25 minutes to bake, and one which takes 50 minutes. We explored the idea that people may find it easier to think about samples than full distributions over proportion values by testing how robust our results were to different interfaces for eliciting people’s prior beliefs. Interfaces for eliciting a person’s prior beliefs, which vary in the degree to which they encourage thinking in terms of discrete samples (left) versus full distributions over parameters (right). Misperception of Uncertainty A related reason to bias in using sample size is that a person may misperceive uncertainty in the estimate they are shown. The “machinery” of Bayesian inference allows us to ask some interesting counterfactual questions to investigate this possibility. Let’s assume that misperception of the Diamond Princess sample size, or conversely the uncertainty in the estimated proportion we got from it, is what caused our deviation. Since we know the user’s prior beliefs, and we know their posterior beliefs, we can ask: What is the sample size that a rational Bayesian agent (meaning, one who updates according to our model) would have needed to perceive to arrive at this user’s posterior beliefs given their prior beliefs? Assuming the posterior that predicts the most likely value is 25% with an interval from 20.4% to 30.1%, the answer is 305. 
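Here is a minimal sketch of how such an implied sample size could be backed out from elicited beliefs. The moment-matching Beta approximation and the conversion of a 95% interval into a standard deviation are my assumptions rather than the paper's exact procedure, so the printed number only lands in the neighborhood of the 305 quoted above.

```python
# Back out the sample size a rational Bayesian "acted as if" they perceived.
def beta_pseudo_counts(best_guess, lo, hi):
    """Approximate a Beta distribution by moment matching an elicited best guess
    and 95% interval; returns its total pseudo-count (alpha + beta)."""
    mean = best_guess
    sd = (hi - lo) / 3.92                    # rough interval-width-to-sd conversion
    return mean * (1 - mean) / sd**2 - 1     # alpha + beta for a Beta distribution

# Elicited prior (best guess 30%, ~11%-60% plausible) and posterior (25%, 20.4%-30.1%).
prior_total = beta_pseudo_counts(0.30, 0.11, 0.60)
post_total = beta_pseudo_counts(0.25, 0.204, 0.301)

# Beta parameters behave like counts, so the perceived sample size is the growth in
# total pseudo-counts between prior and posterior.
implied_n = post_total - prior_total
print(f"implied perceived sample size: {implied_n:.0f}")   # roughly 300, vs. the true 3,711
```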
So we learn that it’s possible that we misperceived the Diamond Princess sample size as being about 1/10 of the size that it actually was. How can we get people to be more sensitive to how sample size translates to uncertainty? In our experiment on Mechanical Turk, we varied the way uncertainty in the data was visualized, between a more conventional static icon array with sample size mentioned in text, and an animated hypothetical outcome plot. Using our sample size measure, we found that using the conventional approach, people perceived a sample size of 750k on average as though it were only 400! (Compare this to the sample size of 200 they acted as if they perceived when shown a much smaller sample of 158). When we visualized uncertainty using animated hypothetical outcomes, the average perceived sample size jumped to 67k. Still a big discount over the true size of 750k, but much, much better. Discounting Data Based on the Source Another possible reason for deviation stems from the way our Bayesian model assumes that the user will take data at face value, judging its informativeness by sample size alone. If we believed that the Diamond Princess sample isn’t fully representative of the population we care about, perhaps because its passengers skew older and more affluent than the general population, then it would make sense that we might adjust the weight we place on it in arriving at our posterior beliefs. In recent experiments we’re finding that how much someone says they trust the source of a visualized dataset can help us predict how much less Bayesian their belief update will look. So, if we can also infer a user’s trust in the source (e.g., from their political attitudes, browsing history, etc.) we can use that information to make the predictions of our Bayesian model more accurate. Applying Bayesian Modeling to Visualization To apply Bayesian modeling to visualization, there are a few things we need. First, we need to decide the parameter(s) we care about. The parameters are the information that we think are important to the visualization message. Choosing a parameter may be straightforward for some datasets (e.g., the percentage support for a candidate given a poll). However, other datasets might be visualized to convey more complex multivariate relationships, requiring an author to choose what information seems most critical. Consider, for example, a New York Times visualization that depicts Math (blue) and English (orange) performance for U.S. third through eighth graders on standardized tests. Each circle represents a district. Family income is plotted along the x-axis, while the y-axis encodes how much better girls perform than boys. This visualization supports a number of parameters which entail different Bayesian models and elicitation interfaces, shown in the table below. Possible parameters and elicitation approaches for the New York Times “Where Boys Outperform Girls in Math: Rich, White and Suburban Districts” Next, we need a way to elicit or infer a user’s prior beliefs. The goal of elicitation is to capture the user’s sense of how plausible different values of the parameter are. The most direct way is to ask the user to say, in absolute or relative terms, how likely different values of the parameter seem. The example above demonstrates eliciting a prior in parameter space by asking users to think about population proportions directly. 
However, the math behind probability distributions makes it possible to derive a prior distribution (in parameter space) from user’s estimates in data space. Consider the graphical sample interface above, which showed people grids of 100 dots. Imagine instead that those dots were people icons, and the number of people matched the sample size of the data. This approach may be more effective when parameters are hard for people to reason about outside of examples The nice thing about eliciting graphical samples is that we no longer have to ask the user to think in terms of abstract representations that differ from the visualization they are looking at. In the graphical sample interface below, the user drags and positions point clouds for each subject from a left panel, providing their prior over the average score difference by subject. For each positioned point cloud, they then adjust the slant (slope) by right-clicking and sliding a slider. This represents their prior on the strength of the relationship between income and score difference. Eliciting a user’s beliefs about the relationship between income, Math and English grades, and (binary) gender. Third, we need some data to show them, and a Bayesian model in which it is represented as a likelihood function. Simple Bayesian models often have just a single level of structure where a data-generating process is defined for the parameter in question and priors are specified only for parameters of that process (e.g., priors representing distributions over parameters in a model like our proportion examples above, or the intercept α or slope β in the linear model y=α+β∗x). More sophisticated hierarchical models specify hyperpriors (distributions over parameters of the priors). Finally, for applications like visualization evaluation, bias detection, or personalizing based on update “type”, we need to elicit or infer people’s posterior beliefs. In most cases, we can do this the same way we elicited the prior, in parameter space or data space. Imagine a visual analysis system like Tableau Software periodically querying your changing beliefs as you visually analyze data, and responding by showing you data differently. Want to read more? Check out our ACM CHI 2019 paper or EC Behavioral Economics Workshop paper on Bayesian cognition for visualization. Stay tuned for more research on the way, exploring Bayesian personalization and bias detection for visualization interaction.
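As a companion to the elicitation discussion above: under the assumption that a user's graphical samples can be read off as plausible proportion values, one simple way to derive a parameter-space prior from them is a method-of-moments fit of a Beta distribution. This is an illustrative sketch, not the authors' elicitation code, and the elicited values are invented.

import numpy as np

def beta_from_samples(proportions):
    # Method-of-moments fit of a Beta(a, b) to elicited proportion values.
    # Assumes the sample variance is smaller than m * (1 - m), as it is for sensible elicitations.
    p = np.asarray(proportions, dtype=float)
    m, v = p.mean(), p.var()
    v = max(v, 1e-6)                      # guard against zero-variance input
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

# Hypothetical elicited samples: proportions read off from five user-constructed point clouds.
elicited = [0.15, 0.20, 0.25, 0.22, 0.18]
a, b = beta_from_samples(elicited)
print(f"fitted prior: Beta({a:.1f}, {b:.1f})")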
https://medium.com/multiple-views-visualization-research-explained/being-bayesian-with-visualization-669892dc024b
['Jessica Hullman']
2020-04-06 17:20:42.897000+00:00
['Visualization', 'Data Science', 'Data Visualization', 'Bayesian Statistics', 'Bayesian Inference']
Female Friday: ‘Coward’ by Aborn
Man, if “Coward” is indicative of what to expect from Brazil’s Aborn, then count me as someone who wants to hear more. The song is ferocious, and you’re going to want to hear it again the moment your first listen ends. The São Paulo band is made up of Isabela Moraes on drums, Tamy Leopoldo on bass, Taty Kanazawa on vocals and guitar, and Beatriz Paiva on guitar. They’re fairly new, having formed in 2016, and I don’t think they have an album out yet. They sound awesome live, though, and I can’t wait to hear what a good producer will do with their sound.
https://medium.com/earbusters/female-friday-coward-by-aborn-7953847fba1
['Joseph R. Price']
2018-12-21 06:01:00.788000+00:00
['Brazil', 'Heavy Metal', 'Feminism', 'Death Metal', 'Music']
Beautiful and Easy Plotting in Python — Pandas + Bokeh
Beautiful and Easy Plotting in Python — Pandas + Bokeh A single line of code to create an interactive plot from Pandas dataframe to Bokeh Although Matplotlib can satisfy all our needs when we want to plot something in Python, it is sometimes time-consuming to create a beautiful chart using it. Well, sometimes we may want to demonstrate something to the boss so that it would be nice to have some beautiful and interactive plots. There are a lot of excellent libraries can do that, Bokeh is one of them. However, it might also take some time to learn how to use such libraries. In fact, someone has already solved this problem for us. Here is a library called Pandas-Bokeh , which consumes Pandas directly and render the data using Bokeh. The syntax is extremely straightforward and I believe you can start to use it in no time! A Bar Chart Example Let me demonstrate the library using an example. Firstly, we need to install the library using pip . pip install pandas_bokeh After installation, we need to import numpy , pandas and of course the pandas_bokeh library. import numpy as np import pandas as pd import pandas_bokeh I would like to generate some random data for demonstration purposes. Suppose that we have a dataset of an e-commerce website. The dataset contains sales from 2010 to 2019 for three categories. Let’s generate this dataset using Numpy. df = pd.DataFrame({ 'Year': np.arange(2010, 2020), 'Category-A': np.random.uniform(9000, 15000, 10), 'Category-B': np.random.uniform(9000, 15000, 10), 'Category-C': np.random.uniform(9000, 15000, 10) }) Now, we have data in our Pandas dataframe. Before we start to use pandas_bokeh to plot the data, we need to set the output to the notebook, which will work for Jupyter/iPython notebooks. I will explain why we need to do this later, it is because pandas_bokeh supports other output location. pandas_bokeh.output_notebook() OK. we can plot the dataframe now. df.plot_bokeh( kind='bar', x='Year', y=['Category-A', 'Category-B', 'Category-C'], xlabel='Category', ylabel='Annual Sales', title='Annual Sales by Category' ) Isn’t it much more beautiful than the default matplotlib ? Let’s have a quick look at the parameters: kind what type of chart do you want to plot? Currently, pandas_bokeh supports the following chart types: line, point, step, scatter, bar, histogram, area, pie and map. what type of chart do you want to plot? Currently, supports the following chart types: line, point, step, scatter, bar, histogram, area, pie and map. x and y Simply pass in the column name(s) of the Pandas dataframe and Simply pass in the column name(s) of the Pandas dataframe xlabel and ylabel The label of the x-axis and y-axis relatively and The label of the x-axis and y-axis relatively title The title of the chart So, you have seen how easy it is to create such a beautiful plot. More importantly, it is interactive. Below is a GIF from the official GitHub repo. Some Advanced Parameters Of course, the library also supports a lot of advanced parameters that allow us to customise the plot if necessary. Here is another example using the same dataset but plot the data using a line chart. df.plot_bokeh.line( x='Year', y=['Category-A', 'Category-B', 'Category-C'], figsize=(900, 500), ylim=(5000, 20000), zooming=False, panning=False ) Please note that here I use df.plot_bokeh.line(...) which is equivalent to df.plot_bokeh(kind='line', ...) . 
figsize Define the size of the plot in a tuple (width, height) Define the size of the plot in a tuple (width, height) xlim and ylim Define the default ranges of x-axis and y-axis respectively. Here I only set for the y-axis. and Define the default ranges of x-axis and y-axis respectively. Here I only set for the y-axis. zooming Enable/disable the zooming gesture Enable/disable the zooming gesture panning Enable/disable the panning gesture Output to HTML Do you remember that we have set the output to the notebook? pandas_bokeh.output_file('chart.html') Apart from the Jupyter Notebook, we can also set the output to an HTML file. So, the chart will be saved and output to an HTML file that can be persisted and distributed. Summary Photo by Kelly Sikkema on Unsplash In this article, I have demonstrated how to use the pandas_bokeh library to plot your Pandas dataframe end-to-end with extremely simple code but beautiful presentation with interactive features. It turns out that the library may not satisfy all your needs when you have many special rendering requirements, but it is an excellent library when you just want to build a typical chart for your dataset.
https://towardsdatascience.com/beautiful-and-easy-plotting-in-python-pandas-bokeh-afa92d792167
['Christopher Tao']
2020-06-09 11:52:11.095000+00:00
['Python', 'Towards Data Science', 'Data Science', 'Data Visualization', 'Pandas']
Why White Evangelical Christians Voted For Donald Trump — Again
The exit polls from the 2020 presidential election indicate that self-identified, White, evangelical Christians voted for Donald Trump at a rate of about 7.6 out of every 10. This is actually a reduction from 2016 when they voted for him at a rate of about 8.1 out of every 10. White evangelical support for Donald Trump has been the target of much research, investigation, mockery, bewilderment, curiosity, and scorn. Millions of people who are not self-identified, White, evangelical Christians struggle to understand why and how people who profess to love and follow Jesus Christ also profess — sometimes more fervently — to love and follow Donald Trump. The truth is actually pretty simple…and quite complicated. First, a distinction must be drawn. I have written extensively on the difference between people who genuinely hold the religious views of evangelicalism and those who are really religious nationalists claiming the mantle of evangelical beliefs. It is important to understand this fundamental difference. The rate of support for Donald Trump among those who sincerely adhere to the four tenets of evangelical Christianity is not 76%, but it is still an overwhelming majority. To understand why, one must understand several things about the experience of being an evangelical Christian: Evangelicals believe that the bible is the ultimate authority for all areas of their lives — not just spiritual areas. The simple: In plain terms, evangelicals believe that — beyond just their Sunday responsibilities — the bible dictates how they think, speak, and act on Monday-Saturday as well. In truth, the bible does provide detailed instructions for how to live the “non-religious” aspects of one’s life. For example, the bible instructs Christians to: Avoid trying to explain important things to foolish people Settle disputes with other Christians between themselves without filing lawsuits Give and receive “tough love” in friendships so that both friends mature and become better people Pay taxes and obey laws imposed by the government. These instructions do not directly relate to Christians’ relationship with Christ, rather to Christians’ relationships with other people. Evangelicals believe it is just as important to follow the bible’s non-religious instructions as it is to follow its religious instructions. The complicated: Observers often note (correctly) that White, evangelicals can be quite “selective” in which biblical instructions they follow and which they ignore. For instance, obeying “laws imposed by the government” would certainly include mask mandates and stay-at-home orders. Yet, these laws are often flouted, at times violently so. Additionally, the bible instructs Christians how to treat foreigners and immigrants (the same way that you treat citizens and native born) and also how to treat poor people (the same way you treat rich people). These instructions do not seem to concern White, evangelical Christians as much as some others in scripture. To be clear, there are no perfect Christians, evangelical or otherwise. In fact, were there a “perfect” Christian, that would throw into doubt the entire premise of Christianity (imperfect people needing the redemptive power of the savior, Jesus Christ). However, when non-Christians see evangelical Christians inconsistently apply the bible’s instructions, it causes them to doubt the Christian’s passion and Christ’s power.
https://medium.com/an-injustice/why-white-evangelical-christians-voted-for-donald-trump-again-7fd22701a57a
['Dr. Dion']
2020-11-29 19:54:23.836000+00:00
['Society', 'Christianity', 'Politics', 'Donald Trump', 'Evangelicals']
Inverting Dependencies: A Step Towards Hexagonal Architecture
Inverting Dependencies: A Step Towards Hexagonal Architecture How to take the first step towards hexagonal architecture from a traditional layered architecture Photo by Glauco Zuccaccia on Unsplash “You must understand that when you marry a framework to your application, you will be stuck with that framework for the rest of the life cycle of that application. For better or for worse, in sickness and in health, for richer, for poorer, forsaking all others, you will be using that framework. This is not a commitment to be entered into lightly.” ― Robert C. Martin, “Clean Architecture” My younger self would’ve been surprised by this warning. Because the younger me considered knowing a framework equivalent to having a higher degree. Back then, I had just one goal, and that was finding a perfect framework and sticking with it as long as I was going to stay in this profession. The idea of switching to a different framework to do the same thing was alien to me. But that attitude of mine didn’t last very long. Because very soon I realized how temporary things are in this industry — how every day we’re introduced to a new language or a new framework that could further reduce our work. When I realized this, I’d often want to migrate my code to new shiny frameworks. But I couldn’t. Because the framework code was so deeply ingrained in the business logic that it’d take a humongous amount of effort to do so. “The business rules should be the most independent and reusable code in the system.” ― Robert C. Martin, “Clean Architecture” The answer to this problem was always in front of me — i.e., to keep domain logic decoupled — but an approach wasn’t. But recently, I had come across an architecture that provided an interesting way to solve this problem. It was called hexagonal architecture. Illustration by Cth027 on Wikimedia Commons The idea behind hexagonal architecture is to keep the domain or business logic at the center of your design. The domain shouldn’t have any outward dependencies. At its center, we have entities, and surrounding it are use cases. Everything is enclosed within the hexagon. Access to it is provided via the input/output ports only. So how do we achieve this? Before I answer that, I’d like to talk a little bit about layered architecture and some of its shortcomings. Then I’ll show you how to transition from layered architecture to hexagonal architecture which will help us overcome those shortcomings and answer the question raised above. Layered architecture As you can see in the above diagram, the service layer resides over the top of the repository layer. So clearly, our domain code — which resides in the service layer — has an outwards dependency on the repository layer. As a result of this, we can’t start working on the service layer before we implement our repository layer. Also because of this hierarchy, we usually start by modeling database schema —database-driven design — with the confidence that we’ve completely understood our domain, which, in fact, we never do unless we’ve implemented all of our business logic. And then we find ourselves spending hours working on our DB schema and ORM code again. That’s not it. Remember the rant over mixing framework code with business logic in the first paragraph? That’s happening in the service layer because of ORM entities making their way into it. “What’s so bad about that?” you might ask. 
Well, those entities bare an exact resemblance to the table in our database — plus they contain metadata and mapping information that implies our domain code is fully aware of the persistence API and is coupled with it. Now let’s see how moving to hexagonal architecture helps us get around these issues. To do that, we need our repository layer or persistence layer to depend on the service layer or domain layer. So, basically, we need to invert the dependencies. To invert dependencies first, we need to have an entity that’s free of any ORM code. Next, our domain layer needs to expose a set of interfaces called ports, which our persistence layer needs to provide a concrete implementation. It will be the persistence layer’s job to convert these domain entities to ORM-managed entities and to perform the required operation. With this, our domain logic is free from any persistence code. Check the below UML class diagram for more clarity. This decoupling does come at a price, though. As you can see, now we also need to manage domain entities beside ORM entities. But look at the bright side: There’s much to gain. For starters, our domain layer is completely decoupled. Also, you’re now free to make your domain entities much richer with helper methods and numerous constructors, which you couldn’t otherwise do with your ORM entities. This also allows us to apply domain-driven design (DDD) in its purest form. Still, our domain layer won’t be completely free of all the framework code, as interfaces that are being used in the domain layer need to be provided concrete implementation. For this, we still rely on our dependency-injection(DI) frameworks, which will satisfy the dependencies at the runtime. So all the layers will still have a dependency on the DI framework. But usually, DI frameworks have a very low footprint on the code, and we can further reduce those footprints by using methods such as construction injection. That’s it! With this, we’ve taken our first step towards hexagonal architecture. There’s still a long way to go. But that’s it for now.
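To make the inversion concrete, here is a minimal sketch of the port-and-adapter shape described above. It is written in Python for brevity rather than the JVM setting most hexagonal-architecture discussions assume, and every name in it is invented for illustration: the domain defines the repository port, the persistence side implements it, and the use case receives it through constructor injection.

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Account:
    # Domain entity: plain data plus behavior, no ORM metadata or mapping information.
    account_id: str
    balance: int

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class AccountRepository(ABC):
    # Port: declared by the domain layer, implemented by the persistence layer.
    @abstractmethod
    def find(self, account_id: str) -> Optional[Account]: ...

    @abstractmethod
    def save(self, account: Account) -> None: ...

class DepositService:
    # Use case: depends only on the port, injected through the constructor.
    def __init__(self, accounts: AccountRepository) -> None:
        self._accounts = accounts

    def deposit(self, account_id: str, amount: int) -> None:
        account = self._accounts.find(account_id)
        if account is None:
            raise KeyError(account_id)
        account.deposit(amount)
        self._accounts.save(account)

class InMemoryAccountRepository(AccountRepository):
    # Adapter: a real one would map Account to and from ORM-managed entities here.
    def __init__(self) -> None:
        self._rows: Dict[str, int] = {}

    def find(self, account_id: str) -> Optional[Account]:
        balance = self._rows.get(account_id)
        return None if balance is None else Account(account_id, balance)

    def save(self, account: Account) -> None:
        self._rows[account.account_id] = account.balance

# Wiring happens at the edge: by hand here, by a DI framework in a real application.
repo = InMemoryAccountRepository()
repo.save(Account("acc-1", 100))
DepositService(repo).deposit("acc-1", 50)
print(repo.find("acc-1"))

Swapping the in-memory adapter for one backed by an ORM changes nothing inside the domain layer, which is exactly the point of the inversion.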
https://medium.com/better-programming/inverting-dependencies-a-step-towards-hexagonal-architecture-ee74e11877dd
['Murtuza Ranapur']
2020-05-13 16:36:33.321000+00:00
['Architecture', 'Dependency Inversion', 'Software Development', 'Programming', 'Startup']
Data Science: Data Cleansing And Visualization For Beginners Using Python
In this article, I am discussing an educational project work in the fascinating field of Data Science while trying to learn the ropes of Data Science. This write-up intends to share the project journey with the larger world and the outcome. Data science is a discipline that is both artistic and scientific simultaneously. A typical project journey in Data Science involves extracting and gathering insightful knowledge from data that can either be structured or unstructured. The entire tour commences with data gathering and ends with exploring the data entirely for deriving business value. The cleansing of the data, selecting the right algorithm to use on the data, and finally devising a machine learning function is the objective in this journey. The machine learning function derived is the outcome of this art that would solve the business problems creatively. I will be focussing exclusively on the Data cleansing, imputation, exploration, and visualization of the data. I presume the reader has a basic knowledge of Python or even any equivalent language such as Java or C or Cplusplus to follow the code snippets. The coding was done in Python and executed using Jupyter notebook. I will describe the steps we undertook in this project journey, forming this write-up’s crux. Import Libraries We began by importing the libraries that are needed to preprocess, impute, and render the data. The Python libraries that we used are Numpy, random, re, Mat- plotlib, Seaborn, and Pandas. Numpy for everything mathematical, random for random numbers, re for regular expression, Pandas for importing and managing the datasets, Matplotlib.pyplot, and Seaborn for drawing figures. Import the libraries with a shortcut alias as below. Loading the data set The dataset provided was e-commerce data to explore. The data set was loaded using Pandas. The ‘info’ method gets a summary of the dataset object. The shape of the dataset was determined to be 10000 rows and 14 columns. The describe specifies the basic statistics of the dataset. The ‘nunique’ method gives the number of unique elements in the dataset. Initial exploration of the data set This step involved exploring the various facets of the loaded data. This step helps in understanding the data set columns and also the contents. Interpreting and transforming the data set In a real-world scenario, the data information that one starts with could be either raw or unsuitable for Machine Learning purposes. We will need to transform the incoming data suitably. We wanted to drop any duplicate rows in the data set using the ‘duplicate’ method. However, as you would note below, the data set we received did not contain any duplicates. Impute the data While looking for invalid values in the data set, we determined that the data set was clean. The question placed on us in the project was to introduce errors at the rate of 10% overall if the data set supplied was clean. So given this ask, we decided to introduced errors into a data set column forcibly. We consciously choose to submit the errors in the ‘Purchase Price’ column as this has the maximum impact on the dataset outcome. Thus, about 10% of the ‘Purchase Price’ data randomly set with ‘numpy — NaN.’ We could have used the Imputer class from the scikit-learn library to fill in missing values with the data (mean, median, most frequent). However, to keep experimenting with hand made code; instead, I wrote a re-usable data frame impute class named ‘DataFrameWithImputor’ that has the following capabilities. 
Be instantiated with a data frame as a parameter in the constructor. Introduce errors to any numeric column of a data frame at a specified error rate. Introduce errors across the dataframe in any cell of the dataframe at a specified error rate. Impute error values in a column of the data set. Find empty strings in rows of the data set. Get the’ nan’ count in the data set. Possess an ability to describe the entire data set in the imputed object. Have an ability to express any column of the dataset in the impute object. Perform forward fill on the entire data set. Perform backward fill on the entire data set. Shown below is the impute class. The un-imputed data set was checked for any Nan or missing strings for one final time before introducing errors. A helper function ‘do_impute’ was defined to introduce errors in a column of the data set and impute the data set column afterwords. This function would take a condition parameter to perform imputation. To introduce error, random cells in the ‘Purchase Price’ column is set to ‘NaN.’ Once set, there are several ways to fill up missing values or ‘NaN’: We can remove the missing value rows itself from the data set. However, in this case, the error percentage is low at just 10%, so this method was not needed. Fill in the null cell in the data set column with a constant value. Filling the invalid section with mean and median values Fill the nulls with a random value. Filling null using data frame backfill and forward fill The above mentioned are some common strategies applied to impute the data set. However, there are no limits to designing a radically different approach to the data set imputation itself. Each of the imputed outcomes was studied separately — the fill (backfill and forward fill) and constant value imputations outcome shown below. The median and random value imputations are in the code below. Then the mean imputed outcome is visually compared with an un-imputed or clean data column as below. From the above techniques, mean imputation was found closer to the un- imputed clean data, thus preferred. Other choices such as fill(forward and backward) also seemed to produce data set column qualitatively very close to clean data from the study above. However, the mean imputation was preferred as it gives a consistent result and a more widespread impute technique. The data frame adopted for further visualization was the mean imputed data set. Exploring and Analysing the data A cleaned up and structured data is suitable for analyzing and finding exemplars using visualization. Find relationship between Job designation and purchase amount? 2. How does purchase value depend on the Internet Browser used and Job (Profession) of the purchaser? 3. What are the patterns, if any, on the purchases based on Location (State) and time of purchase (AM or PM)? 4. How does purchase depend on ‘CC’ provider and time of purchase ‘AM or PM’? 5. What are top 5 Location(State) for purchases? We plotted a sub-plot as below. We can similarly repeat this subplot to view the top 5 credit cards, the top 5 email providers, and the top 5 languages involved in purchases. There are many other visualizations techniques beyond what I have described in this article, with each one capable of giving unique insights into the dataset. Acknowledgment I acknowledge my fellow project collaborators below, without whose contribution this project would not have been so exciting. 
The Team Rajesh Ramachander: linkedin.com/in/rramachander/ Ranjith Gnana Suthakar Alphonse Raj: linkedin.com/in/ranjith-alphonseraj-21666323/ Yashaswi Gurumurthy: linkedin.com/mwlite/in/yashaswi-gurumurthy-020521113 Praveen Manohar G: linkedin.com/in/praveen-manohar-g-9006a232 Rahul A linkedin.com/in/rahulayyappan Closing Words The Github link to the codebase is https://github.com/RajeshRamachander/ecom/blob/master/ecom_eda.ipynb. We had fun and many learnings while doing some of these fundamental steps required to work through a large data set, clean, impute, and visualize the data for further work. We finished the project here, and of course, the real journey does not end here as it will progress into modeling, training, and testing phases. For us, this is only the beginning of a long trip. Every data science project that has a better and cleaner data will generate awe-inspiring results!
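As a final illustration of the error-introduction and mean-imputation steps preferred above, here is a hypothetical minimal sketch. It is not the project's actual DataFrameWithImputor class, and the column name, error rate, and data are placeholders.

import numpy as np
import pandas as pd

def introduce_errors(df, column, rate=0.10, seed=42):
    # Set a random `rate` fraction of `column` to NaN (returns a copy).
    out = df.copy()
    rng = np.random.default_rng(seed)
    mask = rng.random(len(out)) < rate
    out.loc[mask, column] = np.nan
    return out

def impute_mean(df, column):
    # Fill NaNs in `column` with the column mean (returns a copy).
    out = df.copy()
    out[column] = out[column].fillna(out[column].mean())
    return out

df = pd.DataFrame({"Purchase Price": np.random.uniform(1, 100, 1000)})
dirty = introduce_errors(df, "Purchase Price")
clean = impute_mean(dirty, "Purchase Price")
print(dirty["Purchase Price"].isna().sum(), clean["Purchase Price"].isna().sum())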
https://medium.com/the-innovation/data-science-data-cleansing-and-visualization-for-beginners-using-python-3f55323768f1
['Rajesh Ramachander']
2020-09-11 20:14:08.271000+00:00
['Data Imputation', 'Data Science', 'Data Visualization', 'Pandas', 'Data Exploration']
Radian
POETRY Radian A free-verse poem GIF by Lucas Vieira on Wikimedia Commons, Public Domain Hearing in arcs and with the convex of my eye there is a sound for shapes In the kinetic opus feelings join trajectories where chest rises in angles, where beats clash in taut air under low frequency of bone and hot vibrations of skin weep birth to new stellar orbits, The song of the body, — its elbows and knees flung into slow motion — breathes as decelerating wind skimming the strings in acoustic wilderness, a hanging garden tended by science and art, sky architecture in a symphony of black strung in vaporous lights eerily lit in orbs of fog that hint at doom and sing of beauty in fluid synth/esis and cosmic pang In the nostalgic future there are manifold ways to drift dimensionless, to come full circle, to know its motion as a radian of feeling, to catch the movement of emotion in sound
https://medium.com/loose-words/radian-7d8824fe7c13
['Jessica Lee Mcmillan']
2020-12-19 14:08:06.616000+00:00
['Poetry', 'Connection', 'Retrofuturism', 'Emotions', 'Music']
5 Common Mistakes Everyone Makes in Social Media Marketing
5 Common Mistakes Everyone Makes in Social Media Marketing Things to keep in mind to get the results you want. Photo by NeONBRAND on Unsplash What comes to mind when you think of social media marketing? This is what most beginners would say: Just make an attractive-looking creative or click a nice photo and post it on your brand’s social media account. Then sit back and wait for the likes and comments to roll in, and start replying back to them. Build engagement. Sounds easy, right? This is, however, the same trap everyone falls into when they’re starting out, and I was no different. I would direct most of my effort into creating the most aesthetic creatives and continuously editing the pictures until I was satisfied. This is of course required if you wish to maintain a social media profile that matches a high-quality standard, but unfortunately, it does not even fulfill the minimum requirements from the aspects of marketing. It was only after a few weeks of seeing my effort not leading to optimum results that I realized that I was going wrong in my endeavors. So, I took a step back to evaluate my process and found some very basic principles that I had failed to integrate. When you’re out to sail in this vast ocean that is social media, ensuring that your ship is constructed well enough to withstand the waves is crucial. Otherwise, you are bound to sink before reaching your destination. From my experiences, I’ve listed a few of the basic mistakes almost everyone makes, including me, when they start out in social media marketing. It will hopefully be of help in steering your ship in the right direction. Mistake #1: Social media is not following the brand visual The primary mistake many people make is ignoring the importance of branding. You may ask, why is branding necessary? That’s because branding is your identity on social media. You never want your identity and reputation to be built by anyone else; you need to create your own reputation. All industry leaders become leaders by having built their identity over the years. You know Nike is associated with swift action and not lethargy because their branding has created the aura of just doing it. You may be posting amazing content that imparts a lot of value to the audience, but if you lack branding, your efforts would be on the rocks. The results that you’ve been looking out for would not be in sight. The Solution The easy solution to this problem is maintaining consistency in all your brand posts. If there is a uniformity in your content, the viewer will begin to recognize and distinguish you from the other people out there. It doesn’t necessarily mean that you need to always design your posts in a unique manner to ensure that they look identical. The content you put out there should match the visual you have for your brand and be consistent in terms of the value you hope the audience derives from it. For nascent brands, it helps a lot if there is only one person responsible for designing the posts as it reduces the chances of the content being very skewed. Once you have found your niche, creating your content along those lines is the best way forward for developing your brand identity. This step is one of the most overlooked aspects of social media that will help in creating a distinct audience for your brand. Mistake #2: Not creating content your audience wants Photo by Austin Distel on Unsplash Most people have a preconceived idea of what their audience wants from them, which may not always be correct. 
This could relate to the nature of the content itself or the format in which it is being posted. For instance, you may be meticulously maintaining a blog, without knowing that your audience actually prefers short videos. Many people fall into the trap of not having tested their content. They end up talking about what they believe the audience wants without actually addressing the problems that the audience might be expecting them to solve. The Solution Researching on the formats that your target audience prefers is a huge bonus for making sure that there is plain sailing when it comes to your brand’s social media presence. This can be either in the form of seeing what your contemporaries are doing and analyzing the type of content that works best for them, or testing out different things yourself to view the results. Another method can be asking your audience itself. AMAs and polls are a really great way of engaging directly with them and gaining insights on where their interests lie. Mistake #3: Using too many platforms Photo by Christian Wiediger on Unsplash Many brands aim to grow their presence on diverse social media platforms even when they’re starting out which can result in them not spending enough time trying to get things right on each individual platform. Plus, it results in inconsistency in the number of times they post on each platform. Consistency refers to the number of times you post in one day. The reason why it is crucial in social media is the degrading attention span of users. You always want to be relevant and at the top of the minds of your audience. Failing to post regularly is one of the worst setbacks to your social media strategy. Maintaining consistency in this aspect also leads to building up expectations in the minds of the audience regarding your next post. By striving to be on multiple platforms you can end up comprising the consistency in posting regularly on each platform and giving thought to curate the best content that you can offer. There is an additional layer to this mistake as each platform has its unique way of presentation of content. So while Twitter requires short, concise, and witty statements, Instagram is a visual medium. This can result in mistake #2 again, where you may fail to meet the expectations of your audience on a specific platform by posting content that might not be engaging on that particular platform. The Solution The key lies in strategically choosing your platform. The primary step involves considering your brand ideals and visions. What kind of audience do you aspire for your brand? Do you want young and trendy people to be seeing your content or older and more mature people? Is the brand more suitable for people who love to read or for people who are more interested in consuming easy to digest content? Answering basic questions like these about your brand would aid you in arriving at the best platforms where you should be actively involved. In the end, you need to be active on the platforms where your target consumers are most likely to be active. This is the place where you could impart the most amount of value to them. People tend to remember only those brands whose content they find most valuable, not the one who puts out the best content. It is very easy to get caught up in new trends and devote your time to molding your content. However, that is not the best approach. Understanding where your strength lies and what kind of content has been working for you is the way forward. 
Don’t feel like you need to be present all across the internet. There are multiple routes to reach your destination when you are sailing in the ocean, but finding the best route allows you to reach not just safe but also in the least time. Mistake #4: Not putting aside enough time in creating content Photo by Ben White on Unsplash This is one mistake where eventually all social media marketers end up getting trapped. Once they get going, they spend most of their time trying to understand what they should post that they don’t devote enough time to creating the best that they can. There are several intricacies involved in the process of content creation as several factors affect the time it actually takes. The time you allocate for creating the content may be significantly less than what is actually required, which leads to less than satisfactory results. The Solution A smart marketer always keeps aside realistic content creation time. While formulation your schedules, always evaluate the actual time you consider would be required to create a piece of content, taking into account logistical and reevaluation factors. This will not just give you a more practical idea of the amount of time you need to devote to the process, but also give you a correct indication of the temporal nature of the activities that you need to be cutting down. Mistake #5: Excluding call-to-actions from your posts Photo by Icons8 Team on Unsplash This is by far the costliest mistake that you could make. Many people shy away from including a call-to-action, or CTA, as they feel it is unnecessary and they don’t like to see it as a part of their post. However, if you don’t tell people to take action, they won’t. And most importantly, CTAs help your viewers to get off the social media platform and onto the page that you wanted to get them direct towards. After all, the ultimate aim is to generate more sales or more eyeballs on your product or service. The CTA can help with precisely that. The Solution One of the best practices to adopt is to include a CTA in every post you make to enable your audience to reach your landing page. Most importantly, it’s completely up to you to design and decorate your CTA. So, instead of the usual ‘Sign up today’ or ‘Buy now’, you can write ‘Sign up to be a part of a fitter community’ for a gym for instance. Demonstrating value in your CTA is a more efficient way of encouraging your viewers to use your product or service. Bonus mistake #6: Not using a strategy in your social media Photo by Austin Distel on Unsplash The eventual result you wish from your social media is to reach your brand goals. So, it is imperative that you visualize your goals beforehand and ensure that your social media initiatives reflect your business goals. Your social media does not have its own goals or ulterior motives. It needs to be a derivative of your marketing goals. So, posting on social media without having a clear vision of what you aspire will not lead to the accomplishments of what your brand actually needs, The Solution Spend time focusing on what you feel your business goals are, and what the end results should be for your brand. Your social media strategies will formulate from those aspirations, and you would end up creating better and more useful content for your potential customers. As a social media manager, you are expected to run a tight ship while simultaneously managing multiple avenues. 
Ticking off these basic mistakes from your list would go a long way in developing a thriving social media profile and generate more engagement. After all, no one wants their ship to sink before arriving at your destination.
https://medium.com/digital-diplomacy/5-common-mistakes-everyone-makes-in-social-media-marketing-cd6192779848
['Anmol Bhotika']
2020-12-26 15:10:20.308000+00:00
['Marketing', 'Business', 'Branding', 'Social Media', 'Social Media Marketing']
Three powerful Array methods of typescript.
I am new to Typescript but I didn’t realise its importance until I actually used the array functions while trying to refactor the code of an application that I am developing. Let us learn how we can code productively using these array methods. Here I will explain three methods that I personally deemed the most useful because it is relevant in use cases that are common to developers. filter() method creates a new array with all elements that pass the test implemented by the provided function. Here’s an example: Suppose you want to filter out array elements where firstName is “Raghav”. Syntax : array.filter(callback(element[, index[, array]])[, thisArg]) Parameters callback A function is a predicate, to test each element of the array. Return true to keep the element, false otherwise. It accepts three arguments: element The current element being processed in the array. index Optional The index of the current element being processed in the array. array Optional The array filter was called upon. thisArg Optional Value to use as this when executing callback . Return value A new array with the elements that pass the test. If no elements pass the test, an empty array will be returned. Example 2. The join() method joins all the elements of an array into a string. Separator specifies a string to separate each element of the array. The returned array will have comma separated elements if no separator is provided. Syntax: array.join(separator) Parameters separator Optional Specifies a string to separate each pair of adjacent elements of the array. The separator is converted to a string if necessary. If omitted, the array elements are separated with a comma (","). If separator is an empty string, all elements are joined without any characters in between them. Return value A string with all array elements joined. If array.length is 0 , the empty string is returned. Example 3. The map() method creates a new array with the result of calling the provided function on every element of this array. array.map(function callback(currentValue[, index[, array]]) { // Return element for new_array }[, thisArg]) Parameters callback A function that produces an element of the new Array, taking three arguments: currentValue The current element being processed in the array. index Optional The index of the current element being processed in the array. array Optional The array map was called upon. thisArg Optional Value to use as this when executing callback . Return value A new array with each element being the result of the callback function. Example
https://medium.com/vizhen/three-powerful-array-methods-of-typescript-5c95857601ea
['Ekpreet Kaur']
2019-06-16 12:52:23.327000+00:00
['Programming', 'JavaScript', 'Typescript', 'Web Development', 'Arrays']
From TypeScript to WebAssembly in few steps
From TypeScript to WebAssembly in few steps How to build your first Program in WebAssembly Photo by Pixabay In this article, I will show you how to compile a TypeScript program to WebAssemby that prints the factorial of a number. I will write it using TypeScript and then compile it with AssemblyScript, which compiles a strict variant of TypeScript to WebAssembly(Wasm) using Binaryen. WebAssembly is current an MVP and only handles integers and floats and doesn’t natively support strings or other types. For this reason, instead of the typical HelloWorld that uses Strings, I will write a function that takes in a parameter of the numerical type and that returns its corresponding factorial. Why Wasm? One of the most common reasons for use WebAssembly is its speed. Speed in a specific context is essential and can justify the sacrifice of other factors like developer experience, reliability, or maintainability. So, be careful. Create the Factorial function in TS Our factorial.ts file exports a factorial function that takes in a number value and returns the corresponding factorial number. factorial.ts file export function factorial(num: i32): i32 { if (num === 0){ return 1; } else { return num * factorial( num - 1 ); } } Compile the TypeScript factorial function to Wasm In this step, I will compile it into a wasm module, using the AssemblyScript Compiler, which will output the corresponding factorial.wasm file. First, I’m going to install the compiler as a global npm package so I can use it easily(You need to have npm installed on your computer): npm i -g assemblyscript Once the compiler is installed, we can use it with the command <asc sourceFile.ts destinationFile.wasm>: factorial.wasm asc factorial.ts -b factorial.wasm Integrate All Next, I’m going to create a factorial.js JavaScript file and add a function for loading Wasm modules using the WebAssembly Web APIs: factorial.js export const wasmInstantiate = async (wasmModuleUrl, importObject) => { let response = undefined; if (!importObject) { importObject = { env: { abort: () => console.log("Error!") } }; } const instantiateModule = async () => { //1. const response = await fetch("./factorial.wasm"); const buffer = await response.arrayBuffer(); //2. const obj = await WebAssembly.instantiate(buffer, importObject); return obj; }; response = await instantiateModule(); //3. return response; }; //... I’m going to use the Fetch method to download the module. Here, we instantiate the Wasm module. Return the Wasm module. factorial.js part 1 file. //... const executeWasmFactorial = async () => { //1. const wasmModule = await wasmInstantiate("./factorial.wasm"); //2. const factorialResult = wasmModule.instance.exports.factorial(7); //3. document.getElementById("factorialResultId").innerHTML = `The result is: <strong>${factorialResult}</strong>`; }; executeWasmFactorial(); Instantiate the wasm module. Call our factorial function export from Wasm. Set the result to our HTML “factorialResultId” element. factorial.js part 2 file. joining all: factoria.js complete file Index.html Finally, I have to load our factorial.js file in our index.html as an ES6 module: <!DOCTYPE html> <html> <head> <meta charset="UTF-8" /> <title>Factorial - AssemblyScript</title> <script type="module" src="./factorial.js"></script> </head> <body> <div id="factorialResultId"></div> </body> </html> index.html file. Final folder structure folder structure. Summary From TypeScript to Wasm byte code. 
The TypeScript file with the factorial function that calculates the factorial of a number. Compiles from a typescript file to a Wasm file. The Wasm module calculates the factorial of a number. The factorial.js file loads the Wasm module and executes the factorial function of number 7 from the Wasm module. The index.html file imports the factorial.js file. The browser. Execute it For simplicity, I’m going to use a zero-configuration command-line http server: Execute http-server in the folder of the code. npm i -g http-server http-server -p 82 The http-server running Final result. And we have a Wasm byte code Factorial program running. Conclusion In this article, I’ve developed an elementary example, but I think it serves to get an idea of its power. But remember, although performance is an essential factor for any technology, I don’t believe it’s the most significant factor except in specific contexts. Thanks for reading me!
https://medium.com/javascript-in-plain-english/from-typescript-to-webassembly-in-few-steps-c76f98f00632
['Kesk - -']
2020-12-26 09:13:09.428000+00:00
['JavaScript', 'Webassembly', 'Web Development', 'Typescript', 'Wasm']
Build a Multi Digit Detector with Keras and OpenCV
Build a Multi Digit Detector with Keras and OpenCV Doing cool things with data! Introduction Numbers are everywhere around us. Be it an alarm clock or the fitness tracker or a barcode or even a well packed delivery package from Amazon, numbers are everywhere. With MNIST data set, machine learning was used to read single handwritten digits. Now we are able to extend that to reading multiple digits as shown below. The underlying neural network does both digit localisation and digit detection. This can be quite useful in a variety of ML applications like reading labels in the store, license plates, advertisements etc. Reading multiple digits But why not use just OCR? Yes, OCR can be a good starting point to automatically detecting numbers but OCR doesn’t always work and sometime we need to train a neural network for our specific task. Digit detection pipeline The digit detection problem can be divided into 2 parts Digits localisation Digits identification Digits Localization : An image can contain digits in any position and for the digits to be detected we need to first find the regions which contain those digits. The digits can have different sizes and backgrounds. There are multiple ways to detect location of digits. We can utilize simple image morphological operations like binarization , erosion , dilations to extract digit regions in the images. However these can become too specific to images due to the presence of tuning parameters like threshold , kernel sizes etc. We can also use complex unsupervised feature detectors, deep models etc. Digits Identification : The localized digit regions serve as inputs for the digit identification process. MNIST dataset is the canonical data set for handwritten digit identification. Most data scientists have experimented with this data set. It contains around 60,000 handwritten digits for training and 10,000 for testing. Some examples look like : MNIST Images However, the digits in real life scenarios are generally very different. They are of different colours and generally printed like the below cases. Day to day digit images A bit for research leads us to one more public dataset SVHN — Street View House Numbers dataset. The dataset consists of house-number images gathered from Google’s street view and annotated. Sample images from SVHN below : SVHN Images This data set has a variety of digit combinations against many backgrounds and will work better for a generalized model. Modelling in Keras We chose this repo for implementing a multiple digit detector. It is well written and easy to follow. Digit Localization is done using Maximally Stable Extremal Regions (MSER) method which serves as a stable feature detector. MSER is mainly used for blob detection within images. The blobs are continuous sets of pixels whose outer boundary pixel intensities are higher (by a given threshold) than the inner boundary pixel intensities. Such regions are said to be maximally stable if they do not change much over a varying amount of intensities. MSER has a lighter run-time complexity of O(nlog(log(n))) where n is the total number of pixels on the image. The algorithm is also robust to blur and scale. This makes it a good candidate for extraction of text / digits. To learn more about MSER, please check out this link. Digit recognition is done using a CNN with convolution, maxpool and FC layers that classify each detected region into 10 different digits. The classifier gets to 95% accuracy on the test set. 
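For a feel of the localization step, here is a minimal OpenCV sketch of MSER-based candidate detection. The file name and filter thresholds are placeholders, and the linked repo's actual pipeline may differ in how it filters and merges regions.

import cv2

image = cv2.imread("house_number.png")      # any image containing digits
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()                    # stable blob detector, robust to blur and scale
regions, bboxes = mser.detectRegions(gray)

candidates = []
for (x, y, w, h) in bboxes:
    aspect = w / float(h)
    # Crude filtering: keep tall-ish, reasonably sized blobs as digit candidates.
    if 0.2 < aspect < 1.2 and h > 10:
        candidates.append((x, y, w, h))
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)

cv2.imwrite("candidates.png", image)
# Each candidate crop, gray[y:y+h, x:x+w], would then be resized and passed to the
# trained CNN classifier to get a digit label (or be rejected as a non-digit).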
We tested the repo on a variety of examples and found that it works quite well; see the examples shared above. There were some gaps where either the localizer didn’t work perfectly (digit 1’s location not detected) or the detector failed ($ detected as 5). Conclusion We hope this blog proves to be a good starting point for understanding how a multi-digit detection pipeline works. We have shared a good GitHub link that can be used to build a model on the SVHN data set. If this model doesn’t work well, you can collect your own data and fine-tune the trained model. I have my own deep learning consultancy and love to work on interesting problems. I have helped many startups deploy innovative AI-based solutions. Check us out at http://deeplearninganalytics.org/. If you have a project that we can collaborate on, then please contact me through my website or at [email protected]. You can also see my other writings at: https://medium.com/@priya.dwivedi References:
https://towardsdatascience.com/build-a-multi-digit-detector-with-keras-and-opencv-b97e3cd3b37
['Priya Dwivedi']
2019-04-11 21:10:30.510000+00:00
['Deep Learning', 'Artificial Intelligence', 'Data Science', 'TensorFlow', 'Machine Learning']
Fitness Cycling Over 60 | Gorilla Bow Week 5 Workout 1c
https://youtu.be/TaAtSpbHBIA I have a new elliptical machine on order. In the mean time, and since I had a little bit of extra time due to the elliptical being down, I tried to concentrate on working to fatigue, if not failure, on the Gorilla Bow. I did five second holds in each of the four exercises. The only one where I did not have fatigue was in the squats. my arms and shoulders got tired before my legs began to burn. I suspect that will change when I begin the split squats next week. I did reduce the resistance on my overhead presses. My left shoulder hurt a little bit while doing the chest presses and I wanted to avoid hurting it in the overhead. No pain with the lighter weight and I was still able to reach fatigue, or nearly so, with the 5 second holds. Gorilla Bow 300 lbs on Bow Week 5 workout 1c -Chest Press 35x300 5 second holds to fatigue -Overhead Press 8x30 5 second holds near fatigue -Squats 15x60 5 second holds near fatigue -Tricep push down 40x300 5 second holds to fatigue My “Daily Visit with God” Journal/Devotional tool is online on Amazon. Check it out at, https://www.amazon.com/dp/1723870420?ref_=pe_870760_150889320 This is a journaling book I created and published to help people who want to keep a record of their walk with God. Hey! I created a new team on an app called Charity Miles. They have sponsors who pay 10 cents per mile for cycling and 25 cents per mile for walking or running to the charity you select. I selected Wounded Warrior Project. My team is @Christians_Care. We now have 21 members on our team and have donated over 6000 miles. I invite you to join me. I give my YouTube vlogs the main title of “Fitness Over 60.” It is my goal to build a community of like minded riders whose purpose is to get and stay active and fit into their senior years. At 60, I see myself as just entering those years. I am not as fit as I would like to be, but neither am I trying to become a superstar athlete. I am becoming more fit all the time with: -Regular cycling, -Elliptical exercises -Frequent resistance training (#mygorillabow) and -Ketogenic/Intermittent Fast eating are the main elements to my fitness plan. I’m getting cash back rebates from my online orders from BSP — Rewards Online Shopping Mall. I shop everything from Walmart to my local Tractor Supply Store. I have received more than $1325 back in my bank from online purchases I would have made these purchases anyway. Check it out at http://www.bsp-rewards.com/M04VB. Check out our church website at, http://www.puyallupbaptistchurch.com Listen to sermons I have preached at https://www.youtube.com/user/marvinmckenzie01 Check out the books I have written at my author spotlight on Lulu.com: http://www.lulu.com/spotlight/marvinmckenzie My author Page for Kindle/Amazon http://www.amazon.com/author/marvinmckenzie Notice: I do not endorse or agree with everything I hear on the podcasts I make reference to.
https://medium.com/fitness-cycling-after-fifty/fitness-cycling-over-60-gorilla-bow-week-5-workout-1c-a059117d0294
['Marvin Mckenzie']
2018-12-14 18:37:55.736000+00:00
['Seniors', 'Health', 'Life', 'Fitness', 'Strength']
Sorting Algorithms
Sorting Algorithms in Python Implementations and Explanations We are going to look at 4 different sorting algorithms and their implementation in Python: Bubble Sort Selection Sort Insertion Sort Quicksort Photo by Edu Grande on Unsplash 1. Bubble Sort Time complexity: O(n²) Implementation def bubble(lst): no_swaps = False while no_swaps == False: no_swaps = True n = 0 for i in range(len(lst) - 1 - n): if lst[i] > lst[i + 1]: lst[i], lst[i + 1] = lst[i + 1], lst[i] no_swaps = False n += 1 How It Works Iterate through the elements in the array If there are adjacent elements in the wrong order, swap them If we have reached the end of the array and there have been no swaps in this iteration, then the array is sorted. Else, repeat from step 1. For example, suppose we have the following array: Bubble Sort In the first pass, when i = 0 , lst[0] = 54 and lst[1] = 26 . Since 26 < 54, and we want to sort the array in ascending order, we will swap the positions of 54 and 26 so that 26 comes before 54. This goes on until the end of the list. Note that the largest element (93) always gets passed to the end of the array during the first iteration. This is the reason for the variable n which increases after each iteration and tells the program to exclude the last element after each iteration over the (sub)array. The time complexity is O(n²) because it takes n iterations to guarantee a sorted array and every iteration iterates over n elements. 2. Selection Sort Time complexity: O(n²) Implementation def selection(lst): for i in range(len(lst)): iMin = i for j in range(i + 1, len(lst)): if lst[j] < lst[iMin]: iMin = j lst[i], lst[iMin] = lst[iMin], lst[i] How It Works Iterate through the elements in the array At each position, iterate through the unsorted elements to find the minimum element Swap the positions of this minimum element and the element at the current position For example, suppose we have the following array: Selection Sort When i = 0 , the minimum value within the unsorted array is 1, at position j = 3 . Hence, the elements at position 0 and position 3 are swapped. When i = 1 , the minimum value within the unsorted array (the unsorted array is [5, 7, 8, 9, 3] since the first position is already sorted) is 3, at position j = 5 . Hence, the elements in position 1 and 5 are swapped. The unsorted array is now [7, 8, 9, 5] . This goes on until the minimum value is selected (hence the name) for all positions. The time complexity is O(n²) since we have to iterate through the unsorted array n times, once for each current position, and the unsorted array consists of n elements on average. 3. Insertion Sort Time complexity: O(n²) Implementation def insertion(lst): for i in range(1, len(lst)): temp = lst[i] j = i - 1 while j >= 0 and temp < lst[j]: lst[j + 1] = lst[j] j -= 1 lst[j + 1] = temp How It Works Iterate through the elements in the array For each element, iterate through the sorted array to find a suitable position (where it is larger than the element right before it, and smaller than the element right after it) For example, when i = 5 and lst[i] = 31 , I want to insert (hence the name) 31 into the sorted array. The while loop will ‘copy’ the element at j to the position j + 1 and decrease the value of j , until 31 > lst[j] . This occurs at j = 1 , where lst[j] = 26 . Since 26 < 31 < 54, the right position to insert 31 would be at j + 1 = 2 . Insertion Sort: Finding a Suitable Position 3. Insert the element at its suitable position. 
From step 2, all the elements from this position onwards are ‘pushed back’ to the right by one position to make space for the inserted element For example, suppose we have the following array: Insertion Sort When i = 0 , there is no sorted array and we simply move on to i = 1 . Note that this case is captured under one of the while loop exit conditions; since j >= 0 is False , the while loop will not be run. When i = 1 , the sorted array consists of [54,] , and the current element is 26. The correct position to insert 26 is before 54, so 54 is ‘pushed back’ to the right (it is now at position 1) and 26 is inserted at position 0. This goes on until all the elements are inserted at their suitable positions and the whole array is sorted. The time complexity is O(n²) because we have to iterate through the sorted list n times to find a suitable position for all n elements, and the sorted list consists of n elements on average. 4. Quicksort Time complexity: O(n log(n)) Implementation (out-of-place) def quicksort(lst): if len(lst) == 0: return lst else: pivot = lst[0] left = [] right = [] for ele in lst: if ele < pivot: left.append(ele) if ele > pivot: right.append(ele) return quicksort(left) + [pivot] + quicksort(right) Implementation (in-place) def partition(lst, start, end): # partition list into left (< pivot), middle (pivot) and right (> pivot), and return position of pivot pivot = lst[start] left = start + 1 right = end done = False while not done: # find left position where element > pivot while left <= right and lst[left] < pivot: left += 1 # find right position where element < pivot while right >= left and lst[right] > pivot: right -= 1 if left > right: # stop sorting done = True else: # lst[left] now < pivot, lst[right] now > pivot lst[left], lst[right] = lst[right], lst[left] # place pivot in the correct position lst[start], lst[right] = lst[right], lst[start] return right def quicksort_in_place(lst, start, end): # sort list recursively with the help of partition() if start < end: pivot = partition(lst, start, end) quicksort_in_place(lst, start, pivot - 1) quicksort_in_place(lst, pivot + 1, end) How It Works This is a recursive algorithm. For each recursive call, The first element of the array is taken as the ‘pivot’ Partition the array so that you have a subarray of elements that are smaller than the pivot, and another subarray of elements that are larger than the pivot. Position the pivot in the middle of the two subarrays. Recursively sort both subarrays The base case: when the subarray is empty (or contains only one element), it cannot be sorted any further. The In-Place Algorithm The in-place algorithm modifies the original array, whereas the out-of-place algorithm creates and returns a new array that is a sorted version of the original array. The recursion logic is pretty much the same, except the partition() function can be a bit confusing. The partition function involves left and right pointers. The left pointer moves to the right until it finds an element that is larger than the pivot. The right pointer moves to the left until it finds an element that is smaller than the pivot. These two elements are then swapped because we want everything on the left to be smaller than the pivot, and everything on the right to be larger than the pivot. Quicksort (in-place): partition() This continues until the left and right pointers cross positions. At this point, we have (almost) successfully partitioned the array. 
Everything before the left pointer is smaller than the pivot, and everything after the right pointer is larger than the pivot. The last step involves exchanging the positions of the pivot and the element at the right pointer so that the pivot now lies between the left and right subarrays. The left subarray consists of everything before the pivot, and everything in this subarray is smaller than the pivot. The right subarray consists of everything after the pivot, and everything in this subarray is larger than the pivot. We have now successfully partitioned the array. This process is repeated on each subarray until the entire array is sorted. Time Complexity Quicksort Recursion Tree Working from the bottom-up, 1×2ʰ = n where h is the height of the recursion tree and n is the size of the array. This is because, starting from a ‘leaf’ (a one-element subarray), the subarrays roughly double in size with every layer you move ‘up’ the tree, since each level’s array was split by the partition step into two halves of roughly equal size. Therefore h, the height of the recursion tree, is log(n). At each layer of the recursion tree, a total of n comparisons are made to partition the subarrays (since the total number of elements is still n). Hence, the overall time complexity is O(n log(n)). Note that this argument assumes the partitions stay reasonably balanced, which is the average case; with a first-element pivot and an already-sorted input, every partition is maximally lopsided, the tree height grows to n, and the running time degrades to O(n²). Conclusion Thanks for reading! If you have any questions, feel free to comment on this post. Follow me for more stories like this one in the future.
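One caveat worth flagging: the out-of-place quicksort above keeps only a single copy of any value equal to the pivot, so inputs with duplicates come back shorter than they went in. A minimal corrected sketch, with a three-way split and a small self-check (the test data is invented for illustration):

import random

def quicksort_three_way(lst):
    # Out-of-place quicksort with a `middle` list, so duplicates of the
    # pivot are kept instead of being silently dropped.
    if len(lst) <= 1:
        return lst
    pivot = lst[0]
    left = [ele for ele in lst if ele < pivot]
    middle = [ele for ele in lst if ele == pivot]
    right = [ele for ele in lst if ele > pivot]
    return quicksort_three_way(left) + middle + quicksort_three_way(right)

if __name__ == "__main__":
    data = [random.randint(0, 10) for _ in range(20)]  # duplicates are likely
    assert quicksort_three_way(data) == sorted(data)
    print(quicksort_three_way(data))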
https://medium.com/python-in-plain-english/sorting-algorithms-6c05e445d0bd
['Zhang Zeyu']
2020-05-04 13:22:35.042000+00:00
['Algorithms', 'Programming', 'Python', 'Quicksort', 'Sorting Algorithms']
“What would have happened if my parents did not have an oncologist-in-training as their daughter? What happens to the Black patients?”
These are key questions asked by Shekinah Elmore, MD, who wrote an essay about helping her father through his prostate cancer diagnosis. Medical racism is a problem in this country, and because of it, doctors did not initially empathize or treat Elmore’s father in a correct, respectful manner. But when Elmore arrived, the tones all changed. Nearly everyone I know — who is Black with a doctor in the family — can relate. The minute I get my MD cousins on the line or in the video appointment with the doctor, everything about the appointment suddenly changes. The rounds last a little longer, the attending cuts you off a little less, doctors look you in the eye instead of barking orders and leaving. Elmore speaks to all this and more.
https://momentum.medium.com/what-would-have-happened-if-my-parents-did-not-have-an-oncologist-in-training-as-their-daughter-2fe013c987ff
['Adrienne Samuels Gibbs']
2020-12-22 06:33:14.274000+00:00
['Race', 'Cancer', 'Health']
Reading Configuration Variables For Python APIs
Photo by Ferenc Almasi on Unsplash Lately, I have been working on a logging API that logs different levels of events for my personal applications. Thinking about future plans for this API, the goal is to create CI/CD pipelines within Jenkins. Within those pipelines, it would be really nice to change the configuration variables of any application based on the environment they are being deployed to. These variables will have the job of storing connection strings, URLs, file paths, etc. However, before all that can happen, I need to figure out how I want these variables to be stored. Option 1: The Old Fashioned Way One possible way of storing configuration variables is as an environment variable on your computer/server. If you are a Linux user, you can create a variable by using this command: export CONNECTIONSTRING="mysql://<USER>:<PASSWORD>@<HOST>/Logging" To see what value is assigned to the variable, run this command in your terminal: echo $CONNECTIONSTRING You should get results similar to this: Now that the environment variable is created, we can go over to our Python code and import the “os” module. This module is installed by default, so there will be no need to use pip. import os To use the value of that variable in a Python script, utilize the “getenv” function from the “os” module (it takes the variable name as a string) and assign the result to a variable. testVariable = os.getenv("CONNECTIONSTRING") If you print out the contents of the Python variable and run your script, the output should look something like this: Option 2: JSON File Another interesting way to handle configuration variables is by using a JSON file. This method requires a little more code but removes the need to set the variable up on your computer/server. Here is an example of a very simple JSON configuration file: So how do we get this to fit in with Python? The easiest place to start is by creating a separate file that loads in the settings file. Essentially, this bit of code opens the settings file and then parses out the contents to a variable using the “load” method from the “json” module. To actually use this within your API, import this Python file with: from load_settings import data Once it’s imported, we are free to read our settings. The “data” variable that we imported is actually a dictionary. Therefore, if we create a variable and assign the configuration variable to it, it will look something like this: testVariable = data["CONNECTIONSTRING"] If we print out this new variable and run the script, the output will be similar to this image: Conclusion To sum things up, we discussed two very good options to store configuration variables for your APIs. The first option is pretty easy to set up; however, the major downside is that the variables live on your computer/server. In option two, the variables are stored in a file. For me, I will probably go with option two for my APIs. This will allow me to make changes to the variables on the fly and not have to worry about them being set up on the production server when deployed. Also, since I plan on containerizing my APIs, the configuration variables will get wrapped up in the container. Feel free to leave a comment about how you store configuration variables for APIs. Until next time, cheers!
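A compact sketch of both options side by side; the variable name CONNECTIONSTRING and the settings.json idea come from the post, while the file path and the fallback default are my own assumptions:

import json
import os

# Option 1: environment variable, with an explicit fallback so the script
# still runs when the variable has not been exported.
connection_string = os.getenv("CONNECTIONSTRING", "sqlite:///local-dev.db")

# Option 2: the same value read from a JSON settings file.
def load_settings(path="settings.json"):
    """Open the settings file and parse its contents into a dictionary."""
    with open(path) as settings_file:
        return json.load(settings_file)

if __name__ == "__main__":
    print("From environment:", connection_string)
    try:
        data = load_settings()
        print("From settings.json:", data["CONNECTIONSTRING"])
    except FileNotFoundError:
        print("No settings.json found; skipping option 2.")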
https://medium.com/python-in-plain-english/reading-configuration-variables-for-python-apis-1ca8e487073c
['Mike Wolfe']
2020-12-20 21:04:44.989000+00:00
['Json', 'Programming', 'API', 'Software Development', 'Python']
Functional Programming in Python
Photo by Vlada Karpovich from Pexels “A language that doesn’t affect how you think about programming is not worth learning” — Alan Perlis Python is one of the top-ranked programming languages in the modern world. It is a high-level language whose syntax reads almost like English, it is used across a huge range of fields, and it has a large community to help resolve any problem. Major companies like Google, Facebook, Instagram, Spotify, and Netflix use Python in their development processes, and mathematics, web development, and software development are some of the areas where it is applied. Python also has the capabilities to be treated as a functional programming language. Functional programming means that we express everything in a pure, mathematical style. Python ships with built-in functions for this; reducing coding time is an advantage of using them, while being tied to their predefined styles and rules is one of the trade-offs. Functional programming is a good fit when we need concurrency or parallelism, or when we are doing mathematical computation. Pure Functions Let’s create a pure function to multiply numbers by 2: def multiply_2_pure(numbers): new_numbers = [] for n in numbers: new_numbers.append(n * 2) return new_numbers original_numbers = [1, 3, 5, 10] changed_numbers = multiply_2_pure(original_numbers) print(original_numbers) # [1, 3, 5, 10] print(changed_numbers) # [2, 6, 10, 20] src- https://stackabuse.com/functional-programming-in-python/ The original list of numbers is unchanged, and we don’t reference any other variables outside of the function, so it is pure. Functional programming in this strict sense is also called pure functional programming, and there are some rules for identifying pure functions. The first rule is that a function should always produce the same result for the same input; the result cannot change over time. The next rule is that the function should not be affected by the outside world; it should only work within its own scope. The advantages of pure functions are that they are easy to debug and test, and restricting side effects from the outside world helps the function execute without surprises. Filter Function # list of letters letters = ['a', 'b', 'd', 'e', 'i', 'j', 'o'] # function that filters vowels def filterVowels(letter): vowels = ['a', 'e', 'i', 'o', 'u'] if(letter in vowels): return True else: return False filteredVowels = filter(filterVowels, letters) print('The filtered vowels are:') for vowel in filteredVowels: print(vowel) src- https://www.programiz.com/python-programming/methods/built-in/filter Here, we have a list of letters and need to filter out only the vowels in it. We provide a function name and an iterable as the arguments. Whenever we need to filter an iterable object with a function and get a new output without affecting the original input, we can use the filter function in Python. Zip Function number_list = [1, 2, 3] str_list = ['one', 'two', 'three'] # No iterables are passed result = zip() # Converting iterator to list result_list = list(result) print(result_list) # Two iterables are passed result = zip(number_list, str_list) # Converting iterator to set result_set = set(result) print(result_set) src- https://www.programiz.com/python-programming/methods/built-in/zip We provide one or more iterables to this function. The zip function can be used to combine the given inputs into a single output.
When we provide lists as input, the output comes back as a whole, arranged according to the indexes of the lists. This means that in the output list, the values at index (0) of each given list are put into a single tuple, which is placed at index (0) of the output list. This continues through the whole list. Reduce Function from functools import reduce numbers = [ 1 , 2 , 3 , 4 , 5 ] def my_sum(a,b): return a+b result = reduce(my_sum,numbers) print(result) In Python, we need to import reduce from functools. It is used to apply the function passed in its arguments cumulatively to all of the elements of the list passed along with it. Lambda Function # Program to filter out only the even items from a list my_list = [1, 5, 4, 6, 8, 11, 3, 12] new_list = list(filter(lambda x: (x%2 == 0) , my_list)) print(new_list) src- https://www.programiz.com/python-programming/anonymous-function A function without a name is called a lambda function; it is an anonymous function. We use the lambda keyword when we implement lambda functions, and they are useful when a function is only needed for a short period. Conclusion Python has functional programming capabilities, like several other languages in use today. Having a strong understanding of functional programming is key to writing good programs with better performance. This article has explored some ways to use functional programming in Python that I hope will assist you in completing your work more accurately. I’d like to thank you for reading my article; I hope to write more articles on Python in the future, so keep an eye on my account if you liked what you read today! References https://docs.python.org https://www.learnpython.org/en/Map,_Filter,_Reduce
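To tie filter, map, reduce, lambda, and zip together, here is a small self-contained sketch; the price list is made up purely for illustration:

from functools import reduce

prices = [12.50, 3.00, 7.25, 49.99, 0.99]

# filter: keep only the prices above 5 (a pure predicate, no side effects)
expensive = list(filter(lambda p: p > 5, prices))

# map: apply a 10% discount to each remaining price
discounted = list(map(lambda p: round(p * 0.9, 2), expensive))

# zip: pair every original price with its discounted counterpart
pairs = list(zip(expensive, discounted))

# reduce: fold the discounted prices into a single total
total = reduce(lambda a, b: a + b, discounted, 0)

print(expensive)   # [12.5, 7.25, 49.99]
print(pairs)
print(round(total, 2))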
https://medium.com/python-in-plain-english/functional-programming-in-python-bfa5f1f391d6
['Pasindu Ukwatta']
2020-11-09 18:46:58.145000+00:00
['Python3', 'Programming', 'Functional Programming', 'Software Development', 'Python']
A Practical Guide for Exploratory Data Analysis — Churn Dataset
Let’s check how “Gender” and “Geography” are related to customer churn. One way is to use the groupby function of pandas. df[['Geography','Gender','Exited']].groupby(['Geography','Gender']).agg(['mean','count']) Finding: In general, females are more likely to “exit” than males. The exit (churn) rate in Germany is higher than in France and Spain. Another common practice in the EDA process is to check the distribution of variables. Distribution plots, histograms, and boxplots give us an idea about the distribution of variables (i.e. features). fig , axs = plt.subplots(ncols=2, figsize=(12,6)) fig.suptitle("Distribution of Balance and Estimated Salary", fontsize=15) sns.distplot(df.Balance, hist=False, ax=axs[0]) sns.distplot(df.EstimatedSalary, hist=False, ax=axs[1]) Most of the customers have zero balance. For the remaining customers, the “Balance” has a normal distribution. The “EstimatedSalary” seems to have a uniform distribution. Since there are lots of customers with zero balance, We may create a new binary feature indicating whether a customer has zero balance. The where function of pandas will do the job. df['Balance_binary'] = df['Balance'].where(df['Balance'] == 0, 1) df['Balance_binary'].value_counts() 1.0 6383 0.0 3617 Name: Balance_binary, dtype: int64 Approximately one-third of customers have zero balance. Let’s see the effect of having zero balance on churning. df[['Balance_binary','Exited']].groupby('Balance_binary').mean() Finding: Customers with zero balance are less likely to churn. Another important statistic to check is the correlation among variables. Correlation is a normalization of covariance by the standard deviation of each variable. Covariance is a quantitative measure that represents how much the variations of two variables match each other. To be more specific, covariance compares two variables in terms of the deviations from their mean (or expected) value. By checking the correlation, we are trying to find how similarly two random variables deviate from their mean. The corr function of pandas returns a correlation matrix indicating the correlations between numerical variables. We can then plot this matrix as a heatmap. It is better if we convert the values in the “Gender” column to numeric ones which can be done with the replace function of pandas. df['Gender'].replace({'Male':0, 'Female':1}, inplace=True) corr = df.corr() plt.figure(figsize=(12,8)) sns.heatmap(corr, cmap='Blues_r', annot=True) The correlation matrix Finding: The “Age”, “Balance”, and “Gender” columns are positively correlated with customer churn (“Exited”). There is a negative correlation between being an active member (“IsActiveMember”) and customer churn. If you compare “Balance” and “Balance_binary”, you will notice a very strong positive correlation since we created one based on the other. Since “Age” turns out to have the highest correlation values, let’s dig in a little deeper. df[['Exited','Age']].groupby('Exited').mean() The average age of churned customers is higher. We should also check the distribution of the “Age” column. plt.figure(figsize=(6,6)) plt.title("Boxplot of the Age Column", fontsize=15) sns.boxplot(y=df['Age']) The dots above the upper line indicate outliers. Thus, there are many outliers on the upper side. Another way to check outliers is comparing the mean and median. print(df['Age'].mean()) 38.9218 print(df['Age'].median()) 37.0 The mean is higher than the median which is compatible with the boxplot. There are many different ways to handle outliers. 
It can be the topic of an entire post. Let’s do a simple one here. We will remove the data points that are in the top 5 percent. Q1 = np.quantile(df['Age'],0.95) df = df[df['Age'] < Q1] df.shape (9474, 14) The first line finds the value that distinguishes the top 5 percent. In the second line, we used this value to filter the dataframe. The original dataframe has 10000 rows so we deleted 526 rows. Please note that this is not acceptable in many cases. We cannot just get rid of rows because data is a valuable asset and the more data we have the better models we can build. We are just trying to see if outliers have an effect on the correlation between age and customer churn. Let’s compare the new mean and median. print(df['Age'].mean()) 37.383681655055945 print(df['Age'].median()) 37.0 They are pretty close. It is time to check the difference between the average age of churned customers and those who did not churn. df[['Exited','Age']].groupby('Exited').mean() Our finding still holds true. The average age of churned customers is higher.
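For comparison, a common alternative to trimming the top 5 percent is Tukey's 1.5 × IQR rule. The sketch below is my own addition rather than part of the article, and it assumes the same df with an 'Age' column:

def iqr_filter(frame, column, k=1.5):
    """Keep only the rows whose `column` value falls inside Tukey's fences."""
    q1 = frame[column].quantile(0.25)
    q3 = frame[column].quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return frame[(frame[column] >= lower) & (frame[column] <= upper)]

# Usage, assuming df is the churn dataframe loaded earlier:
# df_filtered = iqr_filter(df, 'Age')
# print(df.shape, df_filtered.shape)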
https://towardsdatascience.com/a-practical-guide-for-exploratory-data-analysis-churn-dataset-508b6da2d594
['Soner Yıldırım']
2020-09-03 13:44:06.645000+00:00
['Data Analysis', 'Programming', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
“Waltz for Debby,” Bill Evans
Debby was his niece, which makes sense, because it is a better name for a niece than a lover, if you’re writing a jazz arrangement. Also, Bill Evans owned a racehorse named “Annie Hall,” and was a grade-A genius and also a tragic, tragic drug addict, like Miles Davis and Chet Baker and John Coltrane and Billie Holiday and Charlie Parker and…
https://medium.com/the-hairpin/waltz-for-debby-bill-evans-b010eacdd471
['Nicole Cliffe']
2016-06-02 01:11:32.147000+00:00
['Bill Evans', 'Music', 'Jazz']
What to consider when drawing Pie Charts
Too many categories It’s commonly said that pie charts can’t display too many variables/categories. And that’s absolutely true. Many recommend that a pie chart should have a maximum of 5~6 slices. But it’s not as simple as that. The maximum amount of categories in a pie chart depends a lot on the values displayed and the visualization's purpose. Pie chart — Image by the author Take this chart as an example; it has 6 categories and is quite easy to read. Blue represents 50% of the whole; Gray is 25%; The other colours are something like 25% divided by 4; We could even go crazy and add another category, and it would still be readable. Pie chart — Image by the author Interestingly, they get better when displaying evenly distributed categories and values such as 25%, 50%, or 75% are easy to see. The following plot has 10 equal parts. It may not be so easy to tell that they are the same size, but we can definitely say they are very similar. Pie chart — Image by the author Treemap plot — Image by the author This pattern is harder to display in a Treemap. Of course, this can be fixed with adjustments, but most algorithms will try to fit the plot areas regardless of their shape. Unlike bar charts, the rectangles on this visualization don’t share the same width, so it can be a bit harder to compare them. Overall every visualization has its disadvantages. Waffle/ Dot Matrix charts are good at representing a simplified result but tend to oversimplify them. Pie chart — Image by the author This plot shows that one category has 75% of the chart, while the other four are evenly dividing the other 25%. Grid chart — Image by the author But in a Waffle, if we use 100 squares, we can’t really display the four categories evenly, and the numbers need to be rounded. A percentage with two decimal spaces would require 10,000 squares to be represented accurately. So we can either have a missing square in the total or an extra square in some category. Depending on the information we’re trying to convey, this can be misleading. Which takes us to the next point. Precision.
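For readers who want to reproduce the evenly split example above, a short matplotlib sketch follows; the six values and labels are invented, and the layout choices are only one option among many:

import matplotlib.pyplot as plt

# One slice at 50%, one at 25%, and four slices sharing the remaining 25%,
# mirroring the easy-to-read pie discussed above.
sizes = [50, 25, 6.25, 6.25, 6.25, 6.25]
labels = ["A", "B", "C", "D", "E", "F"]

fig, ax = plt.subplots(figsize=(6, 6))
ax.pie(sizes, labels=labels, autopct="%1.1f%%", startangle=90)
ax.set_title("A pie chart with easy-to-compare slices")
plt.show()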
https://medium.com/analytics-vidhya/what-to-consider-when-drawing-pie-charts-9aec93bc540b
['Thiago Carvalho']
2020-11-30 15:34:06.446000+00:00
['Data Visualization', 'Pie Charts', 'Design', 'Data Science', 'Charts']
What Happens When You’re on the Wrong Side of the Opioid Crisis?
Days passed. I was taking Ibuprofen 800 and extra-strength Tylenol around the clock, but it hardly put a dent in the pain. I contacted the doctor asking for seven more Vicodin so I would be able to take one at bedtime for the next week. I knew that if I could rest well at night I would have more tolerance for the pain during the day. I also knew that it would only be a matter of weeks before the pain steadily subsided on its own as my bones slowly began to heal. I just needed to get through the initial impact phase of my injury. Do you know there is an opioid epidemic in America? This was the lecture I was given when I asked for seven Vicodin: not 30, not 90, not refills, but seven. This is the same doctor I have a history with. The same doctor who knows I did not ask for narcotic pain medication when he previously treated me. I pleaded my case. It's inhuman to allow me to suffer in this much pain, I exclaimed, and, What about the damage being done to my body by taking 3,200 milligrams of Ibuprofen and 1,500 milligrams of Acetaminophen in a 24-hour period every day? I was told that if I felt the pain was that severe I should go to the emergency room for a one-time dosage of pain medication. I was basically told, run up a huge bill for your insurance company to cover so you can experience temporary pain relief. As far as health insurance goes, my husband's coverage through his employer is a dream come true at a time when health insurance is a main topic of debate, with the rising fear that millions may lose their coverage as the financial expense continues to rise. His employer covers the monthly premium, which equates to zero cost to us. As long as we stay inside their network we have no deductible or copay, with the exception of a $200 out-of-pocket cost for emergency room visits. As unbelievable as this coverage is, it also leaves me trapped. If I were to decide to receive treatment from a different doctor, I would have to leave the network and therefore pay out of pocket what I imagine would be a significant expense given the extent of my injury. This also presents a challenge in filing a complaint.
https://erikasauter.medium.com/what-happens-when-youre-on-the-wrong-side-of-the-opioid-crisis-d9249a3a7e03
['Erika Sauter']
2018-07-20 18:43:20.953000+00:00
['Health', 'Culture', 'Politics', 'History', 'Life']
Qutoutiao (趣頭條): What took its DAU (daily active users) past 10 million? Combining deep distribution with an online money-making model
https://medium.com/y-pointer/qutoutiao-5851b141e3af
['侯智薰 Raymond Ch Hou']
2018-11-02 12:12:47.892000+00:00
['Social Media', 'Marketing', 'Business', 'Product Manager', 'Chinese']
Rapidinha #019
https://medium.com/neworder/rapidinha-019-2d640091ff3d
['Giovani Ferreira']
2016-09-09 16:01:02.675000+00:00
['Design', 'Drawing', 'Data Visualization', 'Arte', 'Rapidinhas']
Edge of the Stone
Rough and ready smoothed the age of the thing hidden in plain sight for those who will never see the diamond and for those who can Beam through cold black night the star of morning not just another any other A bending force of nature lover, skipping through caught like a gentle tug at the elbow Come, walk this way … How perfect is the choice to recognize where the balance does There lay a difference in space: timelines unthought; a show We travel in sight and mind as if we move when the Einstein’s proved it all wrong Abyss, great height, common light, it’s all rooms in the castle ain’t shit to a tree. They put the breath in us for a reason If we are going to survive as a species: Einstein’s and lover, brother and sister, friend and foe, forgotten or not, as this world spins It’s all we got Michael Stang 2020
https://medium.com/grab-a-slice/edge-of-the-stone-8fd270d09a51
['Michael Stang']
2020-12-25 09:12:42.002000+00:00
['Love', 'Prose', 'Future', 'Poetry', 'Dont You']
Learning Machine Learning: How should I attempt to start?
I could just list a bunch of resources, but, although there is certainly no shortage of excellent content on the topic, there’s a reason I’m not doing that. Hear me out. I’ve had this conversation over and over — each time with the same disappointing result: Person X and I talk about work. Person X expresses their desire to learn Machine Learning. I explain what my from-scratch approach would be today: jump straight in by understanding a single form of Machine Learning, preferably the one that has been receiving all the hype: a Neural Network (NN). Then do a practical online course on that. Later the day, I fulfil my promise of sending through some resources, which typically include a 3Blue1Brown video and a course from Andrew Ng. And then: 5. We see each other again and I hear that Person X hasn’t worked through any of the resources, but is still super keen to learn Machine Learning. I’ve found this result confusing for a couple of reasons: The resources themselves are absurdly good. 3Blue1Brown’s legendary animated NN video gave me a better understanding of the topic than my 6-month college course — in 20 minutes. And Andrew Ng is world-renowned for his ML courses and these are available in multiple programming languages. Person X was almost always genuinely excited about the topic and seems to realize the potential impact of ML. Plenty of these people were highly capable, both intellectually and with regards to work-rate. All this has brought about a bigger question, only 50% related to Machine Learning: How should one approach learning a completely new field? With regards to this, I have come to believe in mental tricks. Ideally, we’d all be rational enough to take the steps which lead to our best interest, but it seems that we need to play mind-games on ourselves to realize this outcome. One of these would be to get some skin in the game. It plays on the natural human tendency to avert loss*. Here are some practical examples: Pay for the course you are taking. It’s almost unfortunate that the best content is available freely. But making a decisive decision and paying for a certificate can be the answer. This dramatically increased the probability of completing the course because of the “I paid for this; I better get value out of it” argument. Sign up to do work you don’t yet know how, preferably with a deadline and small compensation. Look for the opportunity to do basic machine learning work in an informal setting. This might just be building a simple regression model for data your friend/relative is working on with a 3-course dinner out as payment. If you’re close to someone who works in ML, see if you can arrange scheduled meetups where you work through some material. Maybe tackle a model a night. And again, preferably compensate the person with dinner or something alike. A core problem I’ve noticed preventing Person X from starting with ML is that they don’t actually believe they can understand it. The media only writes about outcomes where ML has been used to reach this almost unthinkable achievement. Amazement is the goal of the article, not understanding. And let’s face it: ML is not the simplest concept around. But neither is economics. So here I’ve found this trick helpful: portraying something as simple as possible in order to make Person X believe that they can actually understand ML. The idea is that once you’ve tasted the satisfaction of grasping a previously pie-in-the-sky concept, your brain is hungry for more. Let’s actually end this article by doing that right now. 
K-Means Clustering in two minutes Problem: (taken from work a friend is doing) Sneaky Ltd. wants to categorize customers into groups based on which products they’re currently buying. Afterwards, Sneaky markets the products that the individual customer is not yet buying but that are highly popular with the rest of that cluster. This is defined as an unsupervised classification algorithm; unsupervised because the data points (customers) aren’t labelled (as group 1, group 2, …) — we don’t know in advance exactly what we’re classifying these customers as. Let’s confine this problem to just 2 products** with 3 customer categories and see how the algorithm tackles it: Customers as dots, centroids as squares K-Means merely chooses 3 (“k”, equal to the number of output categories) random points (“means” or “centroids”) and repeats these two steps: Links each data-point to the nearest centroid, forming a new category group. Calculates the new centroids for these new groups of data points. Simple enough, right? I wouldn’t claim a Neural Net is comparable in complexity, but I would stand by the notion that many algorithms in ML can be reduced to an intuitive representation.
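As a rough illustration of those two repeated steps, here is a bare-bones NumPy sketch; the toy data, the fixed iteration count, and the seed are all assumptions, and a real project would more likely reach for scikit-learn's KMeans:

import numpy as np

rng = np.random.default_rng(0)
# Toy data: 30 customers described by how much of product 1 and product 2 they buy.
points = rng.random((30, 2))

k = 3
centroids = points[rng.choice(len(points), size=k, replace=False)]  # random start

for _ in range(10):  # a fixed number of passes keeps the sketch short
    # Step 1: link each customer to the nearest centroid.
    distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    # Step 2: recompute each centroid as the mean of its new group,
    # keeping the old centroid if a group happens to end up empty.
    centroids = np.array([
        points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(k)
    ])

print(labels)
print(centroids)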
https://chrisjanwust.medium.com/machine-learning-how-should-i-attempt-to-start-22bc03301da0
['Chrisjan Wust']
2019-12-09 14:51:06.217000+00:00
['Machine Learning', 'K Means Clustering', 'Neural Networks', 'AI', 'Learning']
How Much Content Should You Share Before A Prospect Buys?
Content marketing is a powerful tool for building trust between your business and your potential customers. Many business owners and marketers worry about oversharing. What if you give all of your best stuff away, and people just take your content to solve their own problems? Who is Your Ideal Customer? When thinking about your content marketing strategy, before you worry about what kind of content you need and how much of your secret sauce you should share, you need to get clear on who you are serving. People who are looking to do things for themselves are not your target buyers. You are selling a solution. Your ideal customers want a full solution, not a blueprint for solving their problems on their own. The freebie seekers and the do-it-yourselfers are not your ideal customers. You don’t need to worry about them. If none of your target buyers convert into paying customers, this means that you have a problem with your business model, not that you overshared. It means you are not solving a sufficiently painful problem. You may want to change your business model to focus on advertising or affiliate commissions instead of direct product or service sales. Self-Help and Lawyers I fell into copywriting after burning out as an attorney. Most of my early clients were lawyers. The most successful lawyers had tons of content that took readers step-by-step through complicated legal issues. The law firms that struggled to make content marketing to work were also afraid of oversharing. These firms worried people wouldn’t want a lawyer if they could use content to solve problems on their own. Why did the oversharing lawyers prosper? When a potential client comes across an article that describes their exact situation and details exactly how to resolve the problem, one of three things tended to happen. One, the reader would bookmark the site and move on. Most of the time, they never came back to the article. Two, the reader took the information and tried to solve their issue by themselves. Most of the readers in this category lacked the resources to pay attorney fees. Three, the reader would see that the law firm was transparent, wanted to help them, understood their problem, and had the expertise to make the legal problem go away. These readers became clients. Anyone can read your content, but you should only be writing it for your ideal customer. It doesn’t matter what anyone else does with your content. It only matters that you convert a large percentage of your ideal buyers. How to Share Content There are three main types of business content: 1. Content that answers why you need to do something 2. Content that tells you what/when you need to do something 3. Content that tells you how to do something. Content that answers the “How” question is the most valuable. This is the stuff that some businesses worry about giving away. I once had a prospective client who required their team of marketing freelancers to sign a non-disclosure agreement to see their white papers. Even prospective customers had to get verified before being sent a white paper. They were paranoid about the competition seeing their marketing materials. I passed on the opportunity because I can’t help a business succeed that needs to keep its marketing a secret. For your content marketing to be at peak effectiveness, it needs to be part of a funnel. You want to make sure that you are only selling to your ideal customers. Blog content, guides, lead magnets, and explainer videos work well at the top of the funnel. 
You should create content at the top of the funnel that explains to your audience why they need your solution. This type of content allows you to prove you understand your customers’ pain and that you speak their language. The next stage of the funnel should use more detailed content. It needs to answer what people need to do to be ready for your solution or when they should implement your solution. Blogs and videos work well for this stage as well. However, you want to be working on adding your most interested readers to your email list. You can also use your email newsletter to provide more detailed information than what you post in public. The final stage of your sales funnel should be happening on your email list, or depending on your business, in person or over the phone. The content you use to close the sale needs to be your most useful content. This is where you explain to the buyer how you solve the problem they are most troubled by. You save the most valuable content for your email newsletter because these are the readers that are closest to becoming buyers. You want them to feel that they are getting something exclusive. You are auditioning. You have the chance to show them how you will treat them as customers. You need to give them the VIP treatment. The Case for Not Holding Anything Back You only buy from people you trust. Why would you trust someone who is holding information back from you? When you use content to giveaway incredible value, your readers stop seeing you as a salesperson and instead see you as an advisor. They are happy to pay you because they believe you can deliver the solution you have been painstakingly describing in your content. Content marketing will not convert most of your readers. But, if done properly, it will convert most of your ideal buyers. Everyone else will fall out of your funnel. Your content strategy needs to be geared towards driving away buyers who are not right for your business and only closing the deal with people who are excited to work with you. Think about the difference you feel when you have to eat at some greasy fast-food restaurant because it’s the only place open for hundreds of miles and how you feel when you go out to an expensive restaurant for a special occasion. You want to be the nice restaurant where your customers are excited to dine. Your content marketing funnel puts your customers in the mindset of being excited to work with you. Content marketing isn’t a way to trick people into doing business with you. It isn’t even about persuading reluctant buyers. Content marketing is about preparing your customers for the best experience of their lives.
https://medium.com/escape-motivation/how-much-content-should-you-share-before-a-prospect-buys-dfb36ad6d549
['Jason Mcbride']
2020-10-16 12:07:20.659000+00:00
['Marketing', 'Content Marketing', 'Email Marketing', 'Business', 'Marketing Strategies']
Finding Clients On LinkedIn: What Works For Career Stories Founder Kerri Twigg
The Nitty-Gritty: How career coach Kerri Twigg is using LinkedIn to generate more than 30 requests for services per day The process she uses to vet new connections and establish a relationship with the people she connects with How she creates content that immediately communicates who she is and what makes her different How she manages a waitlist of prospective clients who want to work with her What if there was a social media platform that’s main mission was to help you connect with other professionals? What if that social media platform also helped you build a digital representation of your best work? What if it helped you see the incredible people you’re just one introduction away from? Would you use it? Of course you would! It probably sounds like the holy grail of social media. Now you might have already guessed… that social media platform is LinkedIn — an often forgotten option in the digital marketing world. Well, despite the headlines that Facebook, Instagram, and Twitter have captured over the years… LinkedIn has been a central hub for networking, marketing, and sales all along. Business owners find stellar new employees there. Deals with big corporate clients get started there. And life-changing introductions get made there. When I started hearing more and more buzz about what was happening on LinkedIn, I decided to check it out for myself. I’ve been experimenting with posting articles, duplicating content from other platforms, and promoting the podcast there. I’ve been expanding my network and checking in with people I haven’t talked to in years! I can’t say I’ve had any huge successes yet — but the response has been good enough to keep me coming back on a daily basis, something I haven’t been able to say about Twitter for a long time. But, of course, I wasn’t satisfied to just play around with LinkedIn. I wanted to talk to someone who was really crushing it on the platform. I asked around and was reintroduced to Kerri Twigg — someone who cracked the LinkedIn code enough for the company itself to name her a LinkedIn Top Voice. In this conversation, you’re going to hear why Kerri decided to focus on LinkedIn as her core client acquisition channel, the process she uses to vet connections, how new connections become clients, and how she manages the sizable waitlist that’s formed from her outreach. Are you using LinkedIn as your main method of finding new clients? Or maybe you’ve really honed your process for turning new connections on any social media platform into new clients or customers? I’d love to hear from you. Hit me up on my main platform of choice — Instagram. I’m @tara_mcmullin. You can send me a message or share your story in a post and tag me!
https://medium.com/help-yourself/finding-clients-on-linkedin-what-works-for-career-stories-founder-kerri-twigg-f52bf7102fa3
['Tara Mcmullin']
2019-02-26 15:25:40.461000+00:00
['Social Media', 'Podcast', 'Business', 'Small Business', 'Entrepreneurship']
Yes, We Can Make 2021 A Better Year!
Re-Thinking Our Relationship to Technology Plain and simple, technology is NOT neutral. Many of us have seen ‘The Social Dilemma’ on Netflix — flawed, but accurate mostly — but to think that our new era of social media proliferation does not have a negative side is to deny reality. We CANNOT continue to allow this unintended, collective social experiment to push us further into corrosive tribalistic thinking. There are two levels to approach the problem: individual level and institutional level. Institutionally, governments and the Big Tech platforms have to get in genuine conversations with one another. Sure, protecting free speech is a priority, but protecting users' privacy and encouraging platforms to better protect against harmful disinformation is as well. Governments (though many flawed) have to push, and Big Tech has to push back. This is a dialectic. It won’t be perfect. It never will. But it’s an inroad to getting some checks and balances on a beast that is mostly out of control right now. Individually, we have to reflect on our personal habits and social media diet. This is hard. I know I personally struggle seemingly all the time here. Yet, it really is worthwhile to question yourself from time to time. Ask yourself: Is this Twitter-hole of endless replies worth scrolling through? Ask yourself: Do I need to be on looking at Instagram this much? Is there some feeling I’m chasing, or am I just bored? Ask yourself: Is this Facebook link from a trustworthy source? Is there evidence to back up the author’s claims? Is it really worth reading? Is there a better use of my time right now? Ask yourself: Do I really need to watch another YouTube video right now? The algorithms running on virtually all social media platforms really do want to game your attention. And they are succeeding wildly. To be sure, gamifying attention can be useful when, for example, using an app to generate healthy habits. Still, virtually all social media platforms intend to get you to keep your face glued to your screen: more clicks, more traffic. Technology truly is a double edge sword. There’s good. There’s bad. This is obvious. But it’s never been more clear that how we engage with our technology, we’ll need to be more balanced, more reflective, and perhaps more restrained. Striving for moderation in our social media use may be of utmost importance for improving ourselves and the world in 2021 and beyond.
https://medium.com/technology-hits/yes-we-can-make-2021-a-better-year-71a76e1f7c88
['Landon Lester']
2020-12-12 17:41:53.650000+00:00
['Future', 'Change', 'Progress', 'Technology', 'Essay']
Me and the Rolling Stones
How many musicians (or wannabe musicians) can honestly say that The Rolling Stones went to one of their concerts? Not many…but I’m one of them. Unfortunately, the story’s not exactly as glamorous as it sounds. In fact, it was a disastrous affair I’d rather have played for an audience of no people. Here’s how it happened: Many years ago an old friend of my father’s tracked Popsicle down to see what he was up to. And when the old man found out Paul (his name) was contracting for Steven Scott Orchestras, Pop represented, suggesting that his long-lost friend hire me for the next wedding date. Steven Scott Orchestras was (and maybe is — I don’t know) the premier “club date” booker in the New York area. A club date is exactly what it does not sound like! “Club dates” in this context were weddings or bar mitzvah receptions whose sponsors wanted live music to entertain the guests. It had nothing to do with playing music at an actual club! Anyway, as a full-time musician hell-bent on eating something more exotic than red beans and rice — and living in a dwelling which didn’t require that I share a bathroom with ten immigrants — I used to do this crap to augment my income. The bands were always unrehearsed and the musicians generally brutal. But what can I say? I wasn’t fully-employed playing jingles, records or the major league stuff to which I aspired. So I played fucking weddings. It could have been a lot worse. Playing Italian functions was the bomb. Man, did we get fed — and treated with respect! But at the snootier functions, the musicians used to get stuffed in a closet on breaks like we were fucking chamber maids…and fed nothing! Back to the subject! So Daddy’s friend hired me for some douchey gig out at El Patio in Atlantic Beach, the joint where all the local nouveau riche smarmy swells just had to have their wedding or bar mitzvah. I’d never met or played with any of the musicians and didn’t know their repertoire. And I was playing bass and not guitar, which was my first instrument. Clearly, this was not a venue for me to showcase my abilities. But that wasn’t the point. The point was to sleepwalk through “the date” and get paid. Nobody that mattered was going to hear me anyway. Whatever (and into the present tense for effect)…I wheel my crap into the joint….get set up…plunk a few notes to make sure everything is good to go…and turn around to face the guests who were beginning to trickle in. And whose eye do I catch? Keith fucking Richards! And right next to him is Ron Wood! What the fuck? So I kind of shake my head in disbelief and exit to the bathroom to take a leak…and the guitar player is inside freaking out with the drummer sputtering “Oh my God. It’s The Rolling Stones. I can’t play! I can’t play!” And I’m thinking “Just my luck. I’m gonna perform for The Stones with a bunch of Long Island hacks I’ve never seen before. I’m gonna sound like shit. They’re gonna sound like shit. And I wish I had a fucking paper bag. I just wanted to make my hundred fifty bucks and get the fuck out of here without the embarrassment of anybody knowing I’ve lowered myself to doing this kind of shit work for a living. And look what happened!” Well…as you can imagine…we sucked out loud. The core of the band hated me and made it very obvious that they would never be hiring me again. So I hung out with the other add-ons (trumpet and conga player) who also got the outsider treatment. I loved the trumpet player’s take on the “core” which condescended to him: “Hey! Blow it out your ass, bitches! 
I’ve fucked up gigs for better musicians than you!” Perfect. Couldn’t have said it better myself. And now for the cosmic moment: We’re on break and I, the trumpet player and conga guy (who was latino) are sitting around impressing each other with our respective wit when the conga player gets a gander at Keith Richards talking to somebody about 30 feet away. Keith was easily the worst-dressed guest at the function. Particularly, he had a ratty pair of boots for footgear not appropriate for the swell wedding he was attending. But the conga player hadn’t gotten the news — and had no idea that this guy was a rock star. And he began railing “look at that fucking guy! What kind of asshole wears shoes like that to a wedding? What’s wrong with that mother fucker?” Too absurd. I live for shit like that. Whatever…I’d like to tell ya that Keith and I got together and wrote a # 1 hit but of course, that wasn’t the case. The reality was that I was mostly mortified at being associated with such a horrible presentation, and essentially bolted right after the last note never to play with any of those musicians ever again! Not exactly a moment of glory or my 15 minutes of fame. Oh well!
https://medium.com/my-life-on-the-road/me-and-the-rolling-stones-6b701af70eb6
['William', 'Dollar Bill']
2020-10-11 01:14:55.317000+00:00
['Life Lessons', 'Culture', 'Life', 'Rolling Stones', 'Music']
Interactive Data Visualization
Interactive Data Visualization Creating interactive plots and widgets for Data Visualization using Python libraries such as Plotly, Bokeh, nbinteract, etc. Data Visualization Data Visualization is a really important step to perform when analyzing a dataset. If performed accurately it can: Help us to gain a deep understanding of the dynamics underlying our dataset. Speed up the machine learning side of the analysis. Make it easier for others to understand our dataset investigation. In this article, I will introduce you to some of the most used Python Visualization libraries using practical examples and fancy visualization techniques/widgets. All the code I used for this article is available in this GitHub repository. Matplotlib Matplotlib is probably Python’s best-known data visualization library. In this example, I will walk you through how to create an animated GIF of a PCA variance plot. First of all, we have to load the Iris Dataset using Seaborn and perform PCA. Next, we plot 20 graphs of the PCA variance plot while varying the angle of observation from the axis. In order to create the 3D PCA result plot, I followed The Python Graph Gallery as a reference. Finally, we can generate a GIF from the 20 graphs we produced using the following function. The result obtained should be the same as the one in Figure 1. This same mechanism can be applied in many other applications such as animated distributions, contours, and classification machine learning models. Figure 1: PCA variance plot Another way to make animated graphs in Matplotlib is to use the Matplotlib Animation API. This API allows us to make some simple animations and live graphs. Some examples can be found here. Celluloid The Celluloid library can be used to make animations easier in Matplotlib. This is done by creating a Camera that takes snapshots of a graph every time one of its parameters changes. All these pictures are then momentarily stored and combined together to generate an animation. In the following example, a snapshot will be generated for every loop iteration and the animation will be created using the animate() function. The resulting animation is shown in Figure 2. Figure 2: Celluloid Example Plotly Plotly is an open-source Python library built on plotly.js. Plotly is available in two different modes: online and offline. Using this library we can make unlimited offline-mode charts and at most 25 charts using the online mode. When installing Plotly, it is, however, necessary to register on their website and get an API key to get started (instead of just using a pip install, as for the other libraries considered in this article). In this post, I will now walk you through an example using the offline mode to plot Tesla stock market High and Low prices over a wide time span. The data I used for this example can be found here. First of all, we need to import the required Plotly libraries. Next, I imported the dataset and preprocessed it in order to then realize the final plot. In this case, I made sure the columns I wanted to use for the plot were of the right data-type and the dates were in the format (YYYY-MM-DD). To do so, I converted the High and Low prices columns to double datatypes and the Date column to a string format. Then I converted the Date column from a DD/MM/YYYY format to YYYY/MM/DD and finally to YYYY-MM-DD.
Figure 3: Tesla Dataset Ultimately, I used the Plotly library to produce a time series chart of Tesla’s stock market High and Low prices. Thanks to Plotly, this graph will be interactive. Placing the cursor on any point of the time series, we can get the High and Low prices, and using either the buttons or the slider we can decide which timeframe we want to focus on. The final result is shown in Figure 4. The Plotly documentation offers a wide range of examples on how to make the best out of this library; some of these can be found here.
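A minimal offline-mode sketch of that kind of chart is below. The Date/High/Low column names follow the article, but the CSV file name and the exact button setup are assumptions (and recent Plotly releases run in offline mode by default, without an API key):

import pandas as pd
import plotly.graph_objs as go
from plotly.offline import plot

# Assumed file name; the Date/High/Low columns match the Tesla example above.
df = pd.read_csv("TSLA.csv", parse_dates=["Date"])

fig = go.Figure()
fig.add_trace(go.Scatter(x=df["Date"], y=df["High"], name="High"))
fig.add_trace(go.Scatter(x=df["Date"], y=df["Low"], name="Low"))

# Range slider plus quick-zoom buttons give the interactivity described above.
fig.update_layout(
    title="Tesla stock market: High vs. Low",
    xaxis=dict(
        rangeslider=dict(visible=True),
        rangeselector=dict(buttons=[
            dict(count=6, label="6m", step="month", stepmode="backward"),
            dict(count=1, label="1y", step="year", stepmode="backward"),
            dict(step="all"),
        ]),
    ),
)
plot(fig, filename="tesla_high_low.html")  # writes and opens a local HTML file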
https://towardsdatascience.com/interactive-data-visualization-167ae26016e8
['Pier Paolo Ippolito']
2019-09-13 19:50:55.985000+00:00
['Artificial Intelligence', 'Data Visualization', 'Towards Data Science', 'Data Science', 'Programming']
What 10 Years of Green Smoothies Taught Me About the Art of Healthy Eating
What 10 Years of Green Smoothies Taught Me About the Art of Healthy Eating Let’s take willpower out of the equation… Image by NatureFriend from Pixabay. Spinach, kale, celery, cucumber, apple, ginger, and turmeric. This is the rather intimidating list of ingredients I’ve been ingesting in the form of a green smoothie almost every day for the last 10 years. To understand the magnitude of this achievement, we need a quick rewind to my childhood. There, we find a well-meaning mother struggling to get her stubborn son to eat even the tiniest serving of veggies, let alone green things like kale and spinach. The best she could manage was some well-disguised carrots and potatoes in meat-centered casseroles and the occasional bribe for eating a few slices of cucumber on the side. So, imagine her surprise when I started eating (or drinking) spinach, kale, and celery on a daily basis! But this was a special time — the start of my healthy living transformation — and I was highly motivated to change (a serious cancer scare can do that to you). When The World’s Healthiest Foods showed me that spinach was the most nutrient-dense thing on the planet, I knew I had to find a way to get more green stuff into my body. Still, despite this surge of motivation, there were two serious obstacles in my way: 1) eating these bland greens was no fun whatsoever, and 2) making a daily smoothy (and cleaning up afterward) was even less fun. The process of overcoming these obstacles gave me the key to lifelong healthy eating in a world filled with addictive empty-calorie treats (and inspired one delighted mom to adopt the green smoothie habit herself). Today, I’d like to share this key with you.
https://medium.com/happy-healthy-wealthy-productive-sustainable/what-10-years-of-green-smoothies-taught-me-about-the-art-of-healthy-eating-f8a04be6dead
['Schalk Cloete']
2020-12-19 09:54:22.766000+00:00
['Nutrition', 'Food', 'Willpower', 'Health', 'Self']
Review: Cro-Mags’ “2020”
Review: Cro-Mags’ “2020” The soundtrack of this turbulent year is here and the Cro-Mags are leading the charge Courtesy of Mission Two Entertainment The year 2020 has been a harsh one on us all, it’s exposed the nightmarish realities that the government and the elite are willing to put us, the rightful citizens of this world through in order to keep the status quo. People who before the quarantine were able to hide in their privilege and the safety of their lives suddenly had the world thrusted unto their lap the way those who had been oppressed have all their lives. We’ve gone through a global pandemic, lockdown after lockdown and despite the repeated attempts from scientists and the worlds greatest minds to remind us to mask up and stay safe, the death toll from COVID-19 is rising higher and higher every day. Since the extensive quarantine has hit the globe the music scene has suffered particularly hard. The year 2020 has been devoid of the amazing sound of new music, particularly heavy music. Bands dwelling in the underground have been presented with little opportunity to spread their music to others since live shows are generally how they make their name, so they’ve had to come up with their own inventive ways of keeping their audience and possibly spreading it. This turbulent year marked a return for one of most turbulent and endearing bands: New York Hardcore pioneers the Cro-Mags. After years of lawsuits and band strife between Age Of Quarrel vocalist John Joseph and founder/ bassist Harley Flanagan the latter finally reclaimed ownership of the Cro-Mags name and in June released the bands first album in 20 years, receiving positive reception and cementing the bands true return in the hardcore world. It seems like no other band has had a more successful year than the Cro-Mags and the irony shouldn’t be lost on us all. The band has lived through so much turbulence and inner destruction that it seems like 2020 found them in their element. After announcing the release for the new album In The Beginning earlier this year the band got busy booking shows leading up to and after the release. One of the first was their show with Body Count at the famous Webster Hall club in New York City, where 8 years prior Flanagan was arrested for supposedly stabbing and biting his replacement in John Joseph’s incarnation of the Cro-Mags. The show however was cancelled as New York Governor Andrew Cuomo had the city put on lockdown just two days before the March 15th show. With no show taking place the Cro-Mags remained undeterred and quickly put together a practice space to perform a livestream that Sunday which thousands tuned into. Before the Cro-Mags even released their newest full-length Flanagan announced on his social media that he was already writing material for a new Cro-Mags release, hoping to release it later this year. He stated he wanted to release material pertaining to the chaos of this year, something he has certainly been familiar with all his life from living through one of New Yorks darkest times. Less than six months down the line that release has now arrived in time for the holidays in the form of a 20 minute and 20 second long EP tentatively titled 2020 The EP starts off unlike any Cro-Mags affair as it doesn’t explode into a barrage of riffs akin to their “We Gotta Know” introduction on their famous debut. 
Instead, it begins with some funky drumming from Gary “G-Man” Sullivan, akin to a hip-hop beat, before a bass and cello intro courtesy of Flanagan and “Lamont” Carlos Cooper, a homeless man from New Jersey and friend of Flanagan’s whom he met when he saw him playing on the subway to make a living. It’s a beginning that harkens back to Flanagan and Cooper’s collaboration on the “Between Wars” instrumental, which was featured on the band’s comeback album and in Flanagan’s acting debut of the same name. After a short interlude between the three, in come the steady riff buildups from Rocky George before the song explodes into the freight-train rhythm that has become synonymous with the band. The repeated cries of “Age of quarantine!” along with replies of “A global pandemic,” “Worldwide panic,” etc. set the loose story arc for this EP’s concept before us. The first song on this EP shows us once again how well the Cro-Mags know how to introduce an album. It’s common practice for a hardcore band to have their introductions become their most famous songs, and this is another excellent introduction by the band, as the Cro-Mags appear to be diversifying their songs to avoid soundalikes. After the call-and-response verses, in comes a doomy drum buildup leading into one of the heaviest breakdowns I’ve heard Flanagan dish out into my ears. It made me want to wreck my living room as though it were an open dancefloor at a hardcore show, a desire that has followed me and many others throughout the year. This track is definitely one of my favorites from the album, and it dashed any notion of doubt I had when I first heard the “2020” single, which follows the lead song. Listening to the second track, “2020,” after the incredible opener actually improved my opinion of the single released a week prior. While I was not dismissive of the track, deep down I knew it was not the standout; on its own it was enjoyable, and the lyrics are relevant to the times, as they will be for the first five tracks (the last track being an instrumental). Listening to the title track this time around made it one of the key moments of the EP; it brought it all together. The track builds and builds but never truly releases like the opening track or those that follow. It keeps a steady rhythm and just carries us forward as Harley’s lyrics describe, in his dog-growl of a voice, the turmoil the world has gone through this year alone; he sounds fed up, but he knows he’s gotta continue onward. This track sounds similar to the band’s song “Two Hours” from In The Beginning, as it builds up and leads into the next track. It stands on its own, but the real release comes from the following songs. The third song, “Life On Earth,” brings the Cro-Mags sound headfirst into that old-school thrash sound they were always compared to back in their heyday. Over the band’s career they’ve always danced along the thrash metal line, and even on their thrashiest outing, Best Wishes, they still maintained that hardcore edge despite pushing the sound further into metal territory. With “Life On Earth” the band sounds more like a thrash metal band than ever before, and they do it better than some of the most famous thrash bands that survived the 80s. It’s a refreshing and straightforward attack, and Harley’s vocals sound more fiery than ever on this track in particular; it’s those vocals that push it into that territory, and honestly it suits the band well on this song.
Moving on to the fourth track, “Violence And Destruction,” the band flexes their dub skills in the intro. A Cro-Mags dub has not been heard since their 1983 demo, but Harley and the gang slide right back into it like an old pair of Doc Martens. We get some throaty, deep vocals on the dub intro before we are given a right hook across the face into the Motörhead rhythm again, right back to business. Harley’s vocals over the charge are reminiscent of some of the singing he did on tracks from their 2000 album Revenge, which featured some of his clearest vocal performances. On this track his voice is somewhat obscured by the music; it dwells in the background, which makes some of the lyrics indecipherable, but that’s what lyric sheets are for, aren’t they? The band’s attempts to diversify their sound stand out on this song: not only do they diversify, they create singular tracks which are all superbly molded into each other to form a loose narrative. Tracks such as this one make this EP fun to listen to, ending with a whirlwind guitar solo from none other than former Suicidal Tendencies axeman Rocky George, whose performance on this album is one of the standouts of his career with the Mags. There was never any doubt that Rocky George still has his skill and edge, but it’s always good to get a reminder from him. The fifth track, “Chaos In The Streets,” brings us back into the stomping ground the Cro-Mags are usually found within; any casual listener will know what I mean when I say it’s the brand of hardcore they made famous on their debut. Lyrically, Harley delves into the themes that drove a lot of the BLM protestors into city streets during the summer as a result of the murder of George Floyd. It seems he agrees with the message of the protests but stresses that what you do unto him will be done unto you as well: “Burn it all down/ Down to the ground/ But stay the fuck away from me.” There’s no doubt Flanagan has had to defend what’s his in the past and is merely looking out for himself while also understanding the reality of the corruption in the world. The line “Let’s talk racism and politics/ And let’s talk about money” displays the root of this country’s problems and where the oppressors’ real intentions lie. This is the last track with lyrics on it, and it seems that while Harley maintains the same attitude towards authority he had when he was younger, he also expresses a maturity in no longer thinking the way he did back then. Does he care what you think? Fuck no. The final track, “Cro-Fusion,” is the second time in the band’s career they’ve ended an album on a jam session, the first being the “Cro-Mags Jam” from the album Alpha Omega, released in 1992. The one element of this song that ties it to the loose concept of this EP is the background sound of police sirens throughout, along with recordings of riots from earlier this year. With no lyrics, the song relies on the band members to drive a message into our heads, and while it could just be a song the band threw on the EP because it displays how tight a band they are now, I personally see it as the best way to close out the tumultuous year we’ve been through. Every instrument is played frantically, from the bass and drum lead-ins to Rocky’s guitar licks that make his instrument scream into the heavens, with guitarist Gabby keeping the tight rhythm driving onward. It’s a manic session dedicated to a manic year; what better way to close out this EP?
The musical skill displayed on this EP shows us that the Cro-Mags aren’t really concerned with the NYHC brand of musical creation but with their own, which they’ve certainly dived headlong into in just these six tracks. They’re not the same band they were when they released their classic Age of Quarrel thirty-four years ago, and honestly that’s a good thing, because as amazing as that record is, we’re not here to listen to the same album get made again. They’re a band whose skill grows with age, and they’ve proven that here. Despite some of the EP’s shortcomings in the mixing, I can’t help but overlook them on the strength of the musicianship alone. From the first listen to the “Age of Quarantine” track, it felt like this was going to be a good journey, and all I can say after listening to this EP is that I want more. 2020 was the year of the Cro-Mag. Harley Flanagan brought the band he founded almost forty years ago to the forefront of the hardcore scene after it had dwelled in inactivity for years. We saw two releases from them, assuring us all that the man is cognizant of the world around him and capable of creating art through it. The album In The Beginning, released back in June, served as Harley’s testament to the last decade of his life, from the chaos of the stabbing at Webster Hall, the ensuing court proceedings, and his beefs with former bandmates, to rising above these moments with book releases and a triumphant return to music. Fans of the Cro-Mags can rest easy as well, knowing that the band is going to release even more music next year. It really is a good time to be a Cro-Mag… finally.
https://medium.com/clocked-in-magazine/review-cro-mags-2020-d0751ca2a818
["Ryan O'Connor"]
2020-12-15 02:45:31.545000+00:00
['Review', 'Music', 'Magazine', 'Punk']
Create & Execute your First Hadoop MapReduce Project in Eclipse
Create & Execute your First Hadoop MapReduce Project in Eclipse A step-by-step guide for creating a Hadoop MapReduce project in Java This article will provide you with a step-by-step guide for creating a Hadoop MapReduce project in Java with Eclipse. The article explains the complete steps, including project creation, jar creation, executing the application, and browsing the project result. Let us now start building the Hadoop MapReduce WordCount project.
Hadoop MapReduce Project in Java With Eclipse
Prerequisites:
Hadoop 3: If Hadoop is not installed on your system, then follow the Hadoop 3 installation guide to install and configure Hadoop.
Eclipse: Download Eclipse
Java 8: Download Java
Here are the steps to create the Hadoop MapReduce Project in Java with Eclipse:
Step 1. Launch Eclipse and set the Eclipse Workspace.
Step 2. To create the Hadoop MapReduce Project, click on File >> New >> Java Project. Provide the Project Name. Click Finish to create the project.
Step 3. Create a new Package: right-click on the Project Name >> New >> Package. Provide the package name. Click Finish to create the package.
Step 4. Add the Hadoop libraries (jars). To do so, right-click on the Project Name >> Build Path >> Configure Build Path and add the External JARs. For this, go to hadoop-3.1.2 >> share >> hadoop.
A. Add the client jar files. Select the client jar files and click on Open.
B. Add the common jar files. Select the common jar files and click Open. Also, add the common/lib libraries: select all common/lib jars and click Open.
C. Add the yarn jar files. Select the yarn jar files and then select Open.
D. Add the MapReduce jar files. Select the MapReduce jar files. Click Open.
E. Add the HDFS jar files. Select the HDFS jar files and click Open.
Click on Apply and Close to add all the Hadoop jar files. Now we have added all the required jar files to our project.
Step 5. Now create a new class that performs the map job. Here in this article, WordCountMapper is the class for performing the mapping task. Right-click on the Package Name >> New >> Class. Provide the class name and click Finish.
Step 6. Copy the below code into the mapper class you created above.

package com.projectgurukul.wc;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.io.LongWritable;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>
{
  private Text wordToken = new Text();

  public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException
  {
    // Dividing the input line into tokens
    StringTokenizer tokens = new StringTokenizer(value.toString());
    while (tokens.hasMoreTokens())
    {
      wordToken.set(tokens.nextToken());
      context.write(wordToken, new IntWritable(1));
    }
  }
}

Press Ctrl+S to save the code.
Step 7. Now create another class (in the same way as above) that performs the reduce job. Here in this article, WordCountReducer is the class that performs the reduce task. Click Finish.
Step 8. Copy the below code into the reducer class you created above.
package com.projectgurukul.wc;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable>
{
  private IntWritable count = new IntWritable();

  public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException
  {
    // gurukul [1 1 1 1 1 1...]
    int valueSum = 0;
    for (IntWritable val : values)
    {
      valueSum += val.get();
    }
    count.set(valueSum);
    context.write(key, count);
  }
}

Press Ctrl+S to save the code.
Step 9. Now create the driver class, which contains the main method. Here in this article, the driver class for the project is named “WordCount”. Click Finish.
Step 10. Copy the below code into your driver class, which contains the main method.

package com.projectgurukul.wc;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount
{
  public static void main(String[] args) throws Exception
  {
    Configuration conf = new Configuration();
    String[] pathArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (pathArgs.length < 2)
    {
      System.err.println("MR Project Usage: wordcount <input-path> [...] <output-path>");
      System.exit(2);
    }

    Job wcJob = Job.getInstance(conf, "MapReduce WordCount");
    wcJob.setJarByClass(WordCount.class);
    wcJob.setMapperClass(WordCountMapper.class);
    wcJob.setCombinerClass(WordCountReducer.class);
    wcJob.setReducerClass(WordCountReducer.class);
    wcJob.setOutputKeyClass(Text.class);
    wcJob.setOutputValueClass(IntWritable.class);

    for (int i = 0; i < pathArgs.length - 1; ++i)
    {
      FileInputFormat.addInputPath(wcJob, new Path(pathArgs[i]));
    }
    FileOutputFormat.setOutputPath(wcJob, new Path(pathArgs[pathArgs.length - 1]));

    System.exit(wcJob.waitForCompletion(true) ? 0 : 1);
  }
}

Press Ctrl+S to save the code.
Step 11. Create the jar file of the project. Before running the Hadoop MapReduce word count application we created, we have to build a jar file. To do so, right-click on the project name >> Export. Select the JAR file option. Click Next. Provide the jar file name. Click Next. Click Next again. Now select the class of the application entry point. Here in this Hadoop MapReduce Project article, the class for the application entry point is the WordCount class. Click Finish.
Step 12. Execute the Hadoop MapReduce word count application using the below execution command.

hadoop jar <project jar file path> <input file path> <output directory>
hadoop jar /home/gurukul/WordCount.jar /wc_input /wc_output

Here in this command, <project jar file path> is the path of the jar file of the project created above, <input file path> is the file in HDFS that is the input to the Hadoop MapReduce Word Count Project, and <output directory> is the directory where the output of the Hadoop MapReduce WordCount program is going to be stored. This will start the execution of the MapReduce job. Once it finishes, we have run the MapReduce job successfully. Let us now check the result.
Step 13. Browse the Hadoop MapReduce Word Count Project output. The output directory of the project in HDFS contains two files: _SUCCESS and part-r-00000. The output is present in the part-r-00000 file.
You can browse the result using the below command.

hadoop fs -cat <output directory>/part-r-00000
hadoop fs -cat /wc_output/part-r-00000

Summary We have successfully created the Hadoop MapReduce Project in Java with Eclipse and executed the MapReduce job on Ubuntu.
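As a concrete illustration of the result format (this sample is hypothetical and not taken from the article): if /wc_input held a single file containing the line "hello hadoop hello", the part-r-00000 file would contain one tab-separated word/count pair per line, with the keys in sorted order:

hadoop	1
hello	2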
https://medium.com/data-science-community-srm/create-execute-your-first-hadoop-mapreduce-project-with-eclipse-9ec03105e974
['Ojas Gupta']
2020-09-04 11:52:26.516000+00:00
['Mapreduce', 'Eclipse', 'Java', 'Hadoop', 'Data']
Retro-Review: ‘Where Moth and Rust Destroy’ by Tourniquet
Tourniquet’s 2003 album Where Moth and Rust Destroy is definitely a metal album. I know a lot of people will groan out loud when they hear Tourniquet is a Christian metal group, if they didn’t know so already. That’s to be expected; Christian music, for the most part, doesn’t have a reputation for being very good. Tourniquet, though, is good. Their blend of thrash and progressive metal works. Combined with excellent production, you have yourself a very listenable and great-sounding album. Now, a lot of that is because of Ted Kirkpatrick, the drummer and primary songwriter. His compositions offer something to those who have a more adventurous ear when it comes to music, but he doesn’t get so overly complicated that a song loses itself. It helps that he’s brought some big guns to play on the album, the most notable being ex-Megadeth guitarist Marty Friedman. Friedman really shines on this album, with melodies and solos that draw the listener deeper into the songs he plays on. Bruce Franklin, of Trouble fame, plays on two tracks as well. His style is significantly different from Friedman’s, but that’s not a bad thing. Ted Kirkpatrick, Luke Easter and Steve Andino. Another frequent guest on this album is violinist Dave Bullock. The incorporation of violins into metal music was pretty popular at this time, usually with varied results. Bullock is no different: sometimes the violin adds that extra element that makes a song stand out even more; sometimes it just seems unnecessary. There’s a little bit of both on this album, but the listener can probably determine for themselves where they think it was not necessary or essential. Heck, you might think it’s all great. For this album, Tourniquet is rounded out by vocalist Luke Easter, who puts on a varied and strong performance here, and bassist Steve Andino. The album If there is one thing I think works against Where Moth and Rust Destroy, it’s the fact that the title track is the first song. Why do I think this works against the album as a whole? Marty Friedman plays lead on all but two songs on Where Moth and Rust Destroy. It’s because “Where Moth and Rust Destroy” is a near-perfect song. When you kick off with something like this, it’s almost impossible for the rest of the songs on the album to get a moment to shine. “Where Moth and Rust Destroy” is just a really memorable song, both musically and lyrically. When you listen to the songs after it, it’s still going through your head and you’re probably thinking about repeating that track once the rest of the album is over. Why’s it a great song? Well, it’s just kind of epic. From the distorted lone guitar that kicks things off to Friedman’s melodies to the crunchingly slow end, it just delivers a fulfilling musical experience. It’s a seven-plus-minute-long song, but it feels much shorter because it just immerses the listener, causing them to lose their sense of time. “Where Moth and Rust Destroy” is the “must listen” song of the album. It’s not just a great song within the Christian metal genre, but simply a great metal song. It’s also Easter’s best performance on the album, with his vocals being particularly moving and powerful. It just casts a shadow over every song on the album that follows it. That’s not to say that the rest of the album is bad or even average. It’s actually pretty good, with some great-sounding songs. In the great-sounding song category is “Drawn and Quartered,” which kicks off with Bullock playing the violin over a heavy riff.
From there, the song feels like the soundtrack of climbing up a steep hill, with a long burst of speed on the way back down. Easter also shines once again here, with his vocals going from his powerful power-metal voice to a Dave Mustaine-esque growl. Friedman also has a face-shredding guitar solo on this, possibly the most memorable one on the album. Bruce Franklin plays lead on two of the album’s songs. “Architeuthis” is a pretty good rocker, with echoes of early-’90s Megadeth, which may be part of Friedman’s influence on the writing process. It’s fast-paced, with Kirkpatrick really hitting the drums here. “Convoluted Absolutes” is another rocker, with a dirty-sounding guitar lead-in by Franklin. The last two songs bring things to a close in epic territory. Why do I say “epic”? Both songs are more than seven minutes long. On top of that, there are several different movements in each. The Egyptian vibe of “Healing Waters of the Tigris” ties in with its subject matter pretty well. It’s a pretty complicated song with a rather steady beat. The band is entirely on their A-game for this one, from that Egyptian melody I mentioned to the chorus. It’s a moving song, whether you’re Christian or not, thanks in large part to another great performance by Easter. Things end with “In Death We Rise,” which is the song that comes nearest to matching the title song. It starts with the doomy vibe of those Sabbath-influenced bands like St. Vitus, Cathedral and Trouble. It’s slow and weighty, even when the guitar stands aside and the violin kicks in. Easter backs off his powerful metal vocals, barely heard over the music here, almost like a ghost, you could say. “In Death We Rise” never speeds up; it just marches along into the dark mist, eventually leaving you surrounded in black. The verdict Where Moth and Rust Destroy is a really good album, and I would recommend giving it a listen no matter what your beliefs are. The music is top notch and it’s not overly preachy or fluffy. If you’re into God and stuff, it’s probably one you’ll want to buy. If not, it’s still a pretty good listen, and the entire thing is available on YouTube. As for me, I’m probably not going to buy Where Moth and Rust Destroy, but I do look forward to listening to another Tourniquet album when its time comes around.
https://medium.com/earbusters/retro-review-where-moth-and-rust-destroy-by-tourniquet-e0bf1332d20e
['Joseph R. Price']
2018-12-24 06:01:00.781000+00:00
['Christianity', 'Christian Rock', 'Music Review', 'Thrash Metal', 'Music']
Two Simple Algorithms for Chunking a List in Python
The Challenge Create a function that converts a list to a two-dimensional “list of lists” where each nested structure is a specified equal length. Here are some example inputs and expected outputs:

# INPUT LIST: [1,2,3,4,5,6]  # INPUT SIZE: 3  # OUTPUT: [[1,2,3],[4,5,6]]
# INPUT LIST: [1,2,3,4,5]    # INPUT SIZE: 2  # OUTPUT: [[1,2],[3,4],[5]]
# INPUT LIST: [1,2,3]        # INPUT SIZE: 4  # OUTPUT: [[1,2,3]]

Reviewing our examples, there are some important distinctions for scenarios where the list and chunk size do not perfectly match up. When the list does not divide evenly by the chunk size, the last index will be shorter than the evenly sized chunks. When the chunk size is larger than the list itself, a chunk will still be created with the whole list in the first index. We’re going to solve this in two ways. The first function will be written generically — using techniques that can be applied in any programming language. The second function will be optimized for Python — using techniques that are specific to the language. Runner-Up For a generic chunking algorithm, we’re going to utilize basic programming techniques. We begin by creating an empty list, setting a counting variable to zero, and a variable for the length of our list. A while loop is an iteration (looping) structure that will execute its body as long as the expression evaluates to True . We want to loop through the entire list, so we will iterate as long as i is less than arr_length . Inside the while loop, we need a condition to add new chunks at the appropriate time. We’ll use the modulus (remainder) operator here. If you’re unfamiliar with this arithmetic operator, it calculates the remainder from the division of two values. When the remainder is equal to zero — at the beginning of the loop and on every n-th iteration, where n is equal to the chunk size — we nest a new list. Now, we know we will always be populating the newest chunk created, which has the largest index. We use len(chunked_list)-1 for the index value to represent this knowledge and add the currently iterated item from our original list. Finally, we increment i to avoid an infinite loop and return chunked_list once the loop completes.

def genericChunk(arr, chunk_size):
    chunked_list = []
    i = 0
    arr_length = len(arr)
    while i < arr_length:
        if i % chunk_size == 0:
            chunked_list.append([])
        chunked_list[len(chunked_list)-1].append(arr[i])
        i = i + 1
    return chunked_list

Optimal Solution We can improve our generic function using tools available in Python. Specifically, we’ll use a list comprehension and slice notation to efficiently grab sections of the original list. Additionally, we’ll import the ceil() function from the math library to keep our function as a one-liner. List comprehensions are an ultra-fast replacement for simple for loops. We’re being so efficient that we don’t even need the chunked_list variable; we can return directly from the list comprehension. The number of iterations will be equal to the length of the list, divided by the chunk size, rounded up. This accounts for a list such as our second example, where the list does not divide evenly by the chunk size. We define each item in the returned list using slice notation. The starting index of the slice will be equal to our current iterator i multiplied by the chunk size. The ending index will be the starting index plus the chunk size.
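The snippet for the optimal solution appears to have been cut off in this copy of the article, so here is a minimal sketch consistent with the description above (the function name is my own, not the author's):

from math import ceil

def optimalChunk(arr, chunk_size):
    # One slice per chunk; ceil() accounts for the final, possibly shorter chunk
    return [arr[i * chunk_size : i * chunk_size + chunk_size]
            for i in range(ceil(len(arr) / chunk_size))]

print(optimalChunk([1, 2, 3, 4, 5, 6], 3))  # [[1, 2, 3], [4, 5, 6]]
print(optimalChunk([1, 2, 3, 4, 5], 2))     # [[1, 2], [3, 4], [5]]
print(optimalChunk([1, 2, 3], 4))           # [[1, 2, 3]]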
https://medium.com/code-85/two-simple-algorithms-for-chunking-a-list-in-python-dc46bc9cc1a2
['Jonathan Hsu']
2020-05-17 12:38:27.623000+00:00
['Data Science', 'Python', 'Technology', 'Programming', 'Software Development']
The Essay-Proof Show
10/14/94, New Orleans, LA, McAlister Auditorium, Tulane University I haven’t written a Phish essay here in nearly a year. Don’t worry, I didn’t stop liking Phish — in fact, I somehow managed to see 7 shows in 5 different states in 2016, the last of which I was honored to review for Phish.net. But even as I kept up with the band’s ongoing output, dutifully listening to each show within 24 hours of its playing, my semi-regular march through their past stalled out at this point. I don’t want to blame this show but…I blame this show, with a little bit of responsibility left over for myself, and my misguided commitment to write an essay about every show before moving on to the next. On at least three occasions since I tweeted this show in March 2016, I’ve gathered the initiative to re-listen in hope of finding an interesting angle that could get me back on the road through Fall ’94. And each time I was thwarted, failing to find the spark that would fuel at least a handful of paragraphs. But, the world being what it is right now, I would very much like to have the occasional 3-hour distraction of live-tweeting a Phish show. So let’s get this over with. Here’s the angle, such as it is: why is this a show that defies analysis? On paper, it’s very respectable, with a Gin, a Tweezer, a Forbin’s > Mockingbird, two of the band’s best set openers, an acoustic mini-set, and an encore with not one, but two regular guest trumpeters. On playback, it’s certainly not an obvious off night, delivering most of what people wanted to see from Phish in 1994. I’d consider the improvisation within Gin and Tweezer to be on the weak side, trapped uncomfortably between the linear structure of years past and the lengthy experiments right around the corner, but not in a particularly interesting way. They both just kind of worm their way into a dead end of discordance that the band isn’t yet nimble or courageous enough to escape. It also feels like, aside from a couple of anachronisms, this show could have taken place anywhere in the 150-some shows I’ve covered since 2/2/93. It could be that it checks off too many boxes, attempting to please so many different parts of their audience that there’s nothing really at stake, no tension or drama. It’s a paradox I’ve written about before — when Phish tries too hard to be everything for everybody, they often end up less than the sum of their many diverse parts. Was the band equally unmoved by the routine nature of this night? Well, no spoilers, but I know from A Live One that November gets weird, and fast — the Bangor Tweezer is only a couple weeks away (not to mention the ALO Hood in nine days), and a good chunk of the post-Hoist setlist formula established over the summer is about to get tossed out the tour bus window. Here in New Orleans, we’re only six shows into fall tour, and the band is presumably working hard to learn 30 Beatles songs for that little show in Glens Falls coming up at the end of the month. So the context calls for patience, even if it makes life hard for compulsive essayists 23 years later. And with that — there, it’s done, let’s keep moving, there’s so much to hear.
https://medium.com/the-phish-from-vermont/the-essay-proof-show-25bf56a2785
['Rob Mitchum']
2017-02-07 19:13:58.337000+00:00
['Phish', 'Music']
How to extend your Apache Atlas metadata to Google Data Catalog
Disclaimer: All opinions expressed are my own and represent no one but myself… They come from the experience of participating in the development of fully operational sample connectors, available on GitHub. If you missed the latest post talking about how Apache Atlas and Data Catalog structure their metadata, please check a-metadata-comparison-between-apache-atlas-and-google-data-catalog.
The Dress4Victory company
In this article, we will start by creating a fictional scenario, where we have the Dress4Victory company. They help their users get the best deals when buying clothes, and over the years they have grown from a few servers to several hundred servers.
Dress 4 Victory company logo
This company has many analytics workloads to handle their user data, and to support them their tech stack is composed mostly of Hadoop components. Generally speaking, their metadata management was a mess. So last year their CTO added Apache Atlas to their tech stack, to better organize their metadata and visualize the data structures of their enterprise. Improving their metadata management helped them solve many problems, like: Analysts taking a long time to find meaningful data. Customer data spread everywhere. Issues with access controls on their data. Compliance requirements being ignored.
Now they are migrating some workloads to Google Cloud Platform, and the CTO is scared that it will be much harder to manage their metadata, since Apache Atlas has only just started to work for them. He found out about Google Data Catalog, and he would love to use it, since it's fully managed and will reduce his operational costs, but they can't migrate everything to GCP at the moment. Luckily for him, he found out that there's a connector for apache-atlas. He wants to start right away and test it out.
Full ingestion execution
Let's help Dress4Victory and look at the Apache Atlas connector's full ingestion architecture:
Full Ingestion Architecture
On each execution, it's going to:
Scrape: connect to Apache Atlas and retrieve all the available metadata.
Prepare: transform it into Data Catalog entities and create Tags with extra metadata.
Ingest: send the Data Catalog entities to the Google Cloud project.
Currently, the connector supports the below Apache Atlas asset types:
Entity Types: Each Entity Type is converted into a Data Catalog Template with its attributes metadata. Since Google Data Catalog has pre-defined attributes, we create an extra Template to enrich the Apache Atlas metadata.
Classification Types: Each Classification Type is converted into a Data Catalog Template, so we are able to empower users to create Tags using the same Classifications they were used to working with in Apache Atlas. If there are Classifications attached to Entities, the connector also migrates them as Tags.
Entities: Each Entity is converted into a Data Catalog Entry.
Since we don't have a Type structure in Google Data Catalog, all entries from the same type share the same Template, so users can search in a way similar to how they would in Apache Atlas. Since even Columns are represented as Apache Atlas Entities, this connector allows users to specify the list of Entity Types to be considered in the ingestion process. At the time this was published, Data Catalog does not support Lineage, so this connector does not use the Lineage information from Apache Atlas. We might consider updating this if things change.
Running it
After setting up the connector environment by following the instructions in the GitHub repo, let's execute it using its command-line args:

# Environment variables
export GOOGLE_APPLICATION_CREDENTIALS=datacatalog_credentials_file
export DATACATALOG_PROJECT_ID=google_cloud_project_id
export APACHE_ATLAS2DC_HOST=localhost
export APACHE_ATLAS2DC_PORT=21000
export APACHE_ATLAS2DC_USER=my-user
export APACHE_ATLAS2DC_PASS=my-pass

google-datacatalog-apache-atlas-connector sync \
  --datacatalog-project-id $DATACATALOG_PROJECT_ID \
  --atlas-host $APACHE_ATLAS2DC_HOST \
  --atlas-port $APACHE_ATLAS2DC_PORT \
  --atlas-user $APACHE_ATLAS2DC_USER \
  --atlas-pass $APACHE_ATLAS2DC_PASS

Results
Turn the subtitles on for step-by-step guidance when watching the video.
https://medium.com/google-cloud/how-to-extend-your-apache-atlas-metadata-to-google-data-catalog-e84b8ddc8f59
['Marcelo Costa']
2020-07-09 13:54:23.047000+00:00
['Big Data', 'Kafka', 'Compliance', 'Google Cloud Platform', 'Data Visualization']
The Aspirational Nesters: The Key Consumer Group You Need to Get to Know
The Aspirational Nesters: The Key Consumer Group You Need to Get to Know Americans are spending more time in their homes and are more home-focused in their spending than ever before. What does this mean for businesses going forward? Photo by Alexander Dummer on Unsplash OVERVIEW Who doesn’t love a good buzzword? 2020 has certainly provided its share of them and more. After all, who would have thought just a short year ago that major consumer trends — and the strategies of companies large and small alike — would be defined by two-word buzzwords like social distancing, curbside pickup, contactless delivery, and ghost kitchens. As a strategic management professor and consultant, I have watched these buzzwords help define how consumer-facing businesses have changed, survived, and yes, in some cases thrived during a year defined by the impact of the coronavirus. However, I recently read an article from Vogue Business that I believe might have just captured the essence of consumer sentiment during a time of fear, isolation, home-focus, and yes, fewer spending opportunities due to the COVID-19 pandemic, with a new two-word buzzword. And I would make the case that, looking ahead, the author of the Vogue Business piece, Kati Chitrakorn, may just have unintentionally coined a term that captures what will perhaps be the dominant strategy influencing much of the retail, restaurant, and even technology and entertainment markets for 2021 and beyond. The two words that you need to know now are what she labelled “aspirational nesters,” who could well be the key consumer demographic for marketers to target both now and into the future — even a future hopefully not defined by the coronavirus. Photo by Wright Brand Bacon on Unsplash The “Aspirational Nesters” So who are these folks? Likely, to some degree, they are you, and me, and literally millions and millions of people both here in the United States and across the Western world. The Vogue Business article (“How to Reach the New Luxury Shopper: Aspirational Nesters”) focused on the fact that department stores, such as Bloomingdale’s, Macy’s and Nordstrom, were seeing strong sales in luxury items across the board, even in this time of great economic and social uncertainty for many of us. Such sales were particularly strong for luxury items for the home, from expensive kitchen items, rugs, doormats, and more — things that made one’s home better, more functional, and yes, more “cozy”. The Vogue Business article detailed how these department stores were specifically trying to cater to this new category of younger shoppers — these “aspirational nesters.” This new, younger consumer demographic — shoppers seeking out luxury goods as “comfort” — was not at all the typical traditional department store shopper, but the aspirational nesters were discovering department stores online and were willing to try them out for the first time because of their ability to better curate what they were looking for. And as Tony Spring, CEO of Bloomingdale’s, was quoted in the article, he and his company fully expect the trend towards home comfort seeking to last long beyond the pandemic, observing that: “As things open back up again, I don’t think people will give up comfort, per se. I don’t think people will go back to neglecting parts of their home.” Photo by Sigmund on Unsplash Analysis: The Rise of Aspirational Nesters Across the Economy So what does the rise of these “aspirational nesters” mean for you and your business?
I would argue that this consumer segment is not entirely new. We have seen home-focused businesses — from Home Depot and Lowe’s catering to home improvement, to Wayfair, Overstock, and other online retailers that are strong performers in furniture, home accessories, and more — thriving for some time, even before the pandemic (just turn on HGTV to see why…). The COVID-19 pandemic has certainly been good — even great — for such firms, as with the renewed focus on their home, consumers have been doing far more home improvement projects and upgrading their home environments… … which has translated into a boom for those service-based businesses — many of them small businesses — catering to home improvement as well. However, the pandemic has certainly caused the ranks of aspirational nesters to grow not just in numbers, but in spending power and in the range of items — and services — that are important to their far more home-focused lifestyle beyond home improvement and furnishings. And as I have argued for some time, the fact that the pandemic has persisted well beyond the initial estimates (remember “15 days to flatten the curve” back in March?)… … along with the fact that we have all lived through months of changes in the way we work, the way we shop, and the way we live, means that the changes brought about by the pandemic will be long-lasting — even pattern-altering — in their nature. That is why I believe that the changes in consumption and consumer behavior from all of this will be perhaps even permanent in nature. And that is why I strongly feel that the “aspirational nesters” will be a strong force in the economy from here on out, as the emphasis on home — and luxury and comfort at home — will be a trend that is here to stay — and a market that companies of all sizes should look to cater to both now and in the years to come. So, beyond the obvious — people buying new couches, kitchenware, plumbing fixtures and more — how will the market power of these “aspirational nesters” be felt? The answer, in short: the impact will be massive, constrained only by the imagination of companies that seek to cater to this market. The patterns that we have seen in 2020 — where people have been focused on spending on the home and their own comfort, as opposed to travel, experiences, and other out-of-home diversions — are going to be slow to shake, and those out-of-home categories are going to be slow to recover. And that works to the benefit of all things home-related. And while that does mean good things for any company involved in home improvement and home decor, it also means that companies looking to serve other “home-focused” market segments will benefit from the continued, and even accelerated, spending of the aspirational nesters. This could bode well for a wide range of companies across multiple industries, including:
Restaurants with home delivery and pickup
Meal kits
Remote cooking instruction
Home office design/decor
Home entertainment of all types
Pets — everything pets!
Recreation at home rather than through travel
Anything that helps balance work/family concerns and spacing.
The key — for entrepreneurs and entrepreneurial companies — will be in imagining ways that they and their wares/services can best appeal to the needs of these aspirational nesters. The fact is that this will be a powerful consumer group going forward, shaped by the pandemic and with a newfound emphasis on the quality of the home environment.
And literally, the products and services that might be tailored to this significant market segment are limited only by the imagination of innovative individuals and companies in addressing the needs of the post-pandemic consumer moving forward. And so get used to the phrase “aspirational nesters.” While you may well be one — or aspire to be one going forward — the new focus on home-based spending and comfort will most assuredly be an important marketing dynamic both now and in the years ahead. It is a buzzword you need to know, today!
https://medium.com/better-marketing/the-aspirational-nesters-the-key-consumer-group-you-need-to-get-to-know-7821ce6c8683
['David Wyld']
2020-12-28 13:28:43.063000+00:00
['Marketing', 'Business', 'Covid 19', 'Small Business', 'Economics']
A closer look into the Spanish railway passenger transportation pricing
Introduction As someone who lives and works in a Spanish city 400km away from home, I have found that the most convenient way to travel back and forth is the train. As a frequent user, I have grown baffled by the pricing pattern when buying the tickets, which sometimes moves along the same levels and at other times falls outside the most common ones. So this doubt spurred me to formulate the following questions: “Do train ticket prices really change over the days?” And if so, “Is there an optimal moment to buy them?” Data In this project, only Renfe’s long-distance routes were considered. The dataset is originally sourced from a Renfe scraping procedure carried out by thegurus.tech, where prices for trains departing on the sampled routes were checked several times a day, on a loop. In particular, the trains whose prices were checked span about 3 months, from April 12th 2019 to July 7th 2019. Data source: https://thegurus.tech/posts/2019/05/renfe-idea/ Data wrangling After spending some time getting to know the data, the natural next step is to clean the raw data and transform it in a way that is prepared for analysis, and therefore prepared to shed light on the main questions. Code for it can be found in my Github repo for this project. Developing this a little bit more, the cleaning and transformation tasks range from creating new columns such as routes, departure date, departure time, identifiers for a particular train departing on a given day at a given time, or days to departure; to changing the format of columns to be able to do calculations and further transformations on them; through reducing categories for some categorical variables such as train type or ticket class, dropping unneeded columns, or deciding what to do with invalid rows (those with null values). Results Code for all the plots and results presented in this section can also be found here. All routes together There are a few interesting things to comment on in this graph. On the one hand, it answers the main questions. 1) We can see that the price indeed changes over the days; not only that, we can also see that there is clearly an aggregated trend where the price goes up as the departure day gets closer. 2) Once we accept that there are indeed relevant variations, is there an optimal moment to buy the tickets? Well, from the aggregated graph, we can see that the optimal moment to buy is as soon as the tickets are made available for sale, and in any case, between 50 and 60 days before departure. On the other hand, the graph allows us to identify 3 main stages. First, between 42 and 60 days before departure, we find the lower range of prices; then there is a period between 42 and 13 days before departure where the variation in prices is very low, around 6%-7% altogether. And lastly, there is the stage where the ticket price increases by the day. Let’s break it down by the routes in the dataset. Price ticket evolution in a high demand route The Madrid-Barcelona round route has been selected as an example of a high-demand one. It has a similar pattern to the one we observed when taking all available routes together. We see a drop in prices after the tickets are released for sale, close to 50 days before departure. Again, we can identify the 3 stages: first, between 40 and 60 days before departure, the lower range of prices; then a period between 40 and 12 days before departure where the variation in prices is very low, around 6%-7%. And last, there is the stage where the ticket price increases by the day.
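As a rough illustration of the aggregation behind these price-evolution curves, a pandas sketch along the following lines would reproduce the days-to-departure view. The column names here are assumptions based on the dataset description, not the project's actual code, which lives in the linked repo:

import pandas as pd

df = pd.read_csv("renfe.csv", parse_dates=["insert_date", "start_date"])

# Days between the moment a price was checked and the train's departure
df["days_to_departure"] = (df["start_date"] - df["insert_date"]).dt.days

# Keep one route, e.g. Madrid-Barcelona in both directions
route = df[df["origin"].isin(["MADRID", "BARCELONA"]) &
           df["destination"].isin(["MADRID", "BARCELONA"])]

# Average price per days-to-departure gives the curve discussed above
price_curve = route.groupby("days_to_departure")["price"].mean()
print(price_curve.idxmin(), "days before departure is the cheapest moment on average")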
Price ticket evolution in a non-high demand route This time, Renfe sets an initial price, but since demand is not met at that price, it starts to dip until it touches its lowest point between 40 and 50 days before departure. Then the price evolution is much more volatile, and it continuously rises and falls as the departure day approaches, but always following an increasing trend. The intermediate stage that we saw in the previous plots, where the price pretty much stays constant for almost a month, is not quite repeated here. The last point I would highlight is that the price variation range is much lower than for a high-demand route. Canvas of all dataset routes price evolution We can also break down the price evolution by trip/train characteristics. Day of the week effect We can see that the price evolution pattern is very similar regardless of the day of the week when the train departs; however, we see that Friday and Sunday tickets are more expensive on average than those for trains departing on the rest of the days. Monday train tickets are the ones whose price rises the most in the stage where the train is close to departure. Saturdays are on the opposite side of the spectrum; their price increase is the least pronounced. Departure time effect We saw before in the boxplots that there was indeed a slight difference in price when decomposing by departure time window, where evening-departing trains showed fewer cheap prices than those in the rest of the departure time windows. We also ventured to say that this could be because evening-departing trains have, in general, more demand than the rest; therefore their prices, which are set higher by Renfe, don’t get to dip to adjust for a lower initial demand, as happens with the rest. The graph above backs this theory up. In fact, evening trains’ lowest price happens to be around a month before departing, resulting in a different optimal purchase moment. Ticket class effect The general patterns already discussed hold for both types of tickets. I would highlight that, since we know economy tickets sell out first, that explains why the rise in prices as the departure date approaches (the “last stage”) is so smooth. First-class tickets, on the contrary, are in higher demand in the last days before departure, and that is why their prices rise more strongly compared to economy-class tickets. Checking whether there are any pricing intraday differences Having assessed interday price patterns, let’s focus for a moment on intraday price evolution, to see whether such patterns also exist. Lines continuously interweave and overlap, which seems to tell us that there are no intraday pricing differences per time window by days to departure. In particular, the only point that sticks out is the drop in evening prices towards the lower boundary of the graph (few to no days to departure). Zooming in, it is possible to see that the drop belongs to the same day of departure. So, in order to check whether we need to consider that drop relevant, we first need to know the number of price-check runs that are done in general and how many are done per departure time window, in order to exclude potential dependency effects. We can see that the price checks are done round-the-clock in pretty much equal proportions. Let’s see if this holds when zooming in on the last day, which is when the drop takes place. The proportionality does not hold any longer.
It means that for price checks done on the same day of departure, those checks are done in an imbalanced way; in particular, we can see that the number of evening price checks is much lower than the number done in other time windows. So basically, since the difference is so large, the few trains whose price was checked in the evening on the same day of departure may well be of a particular type that is on average cheaper. In a nutshell, there are no relevant intraday price differences. Conclusions We have found relevant price movements for the same train-day-time-route combination as the departure date approaches. In particular, for high-demand routes, the optimal moment to buy is between 50 and 60 days before departure. For non-high-demand routes, the optimal moment can be found between 40 and 50 days before departure. When the train departs in the evening, the optimal moment is around 30 days before departure. In any case, if for any reason the ticket cannot be purchased that far in advance, there is no need to worry, since the price barely varies until 12 days before departure (just 6-7% variation on average). But past that threshold, tickets get more expensive day by day, all the way up to the day the train departs. Lastly, we have not found any intraday pattern in Renfe’s pricing system. So, grouping by days to departure, one is indifferent to buying at any particular moment of the day.
https://towardsdatascience.com/a-closer-look-into-the-spanish-railway-passenger-transportation-pricing-581c19fe67dc
['Salva Rocher']
2019-10-05 10:17:59.193000+00:00
['Python', 'Pricing', 'Renfe', 'Data Analytics', 'Data Visualization']
UnQovering Stereotypical Biases in Question Answering Models
Photo by Megan Nixon on Unsplash Natural language question answering by AI systems has grown by leaps and bounds in recent years thanks to the emergence of powerful new language models trained using the enormous volume of text available on the internet. Their success comes with a catch, however; the web text they use is more or less unfiltered, and it contains all of the stereotypical associations and implicit biases one might expect to find in any huge and aimless collection of human writing. In our new work appearing in Findings of EMNLP 2020, we ask the question: To what extent are stereotypical biases present in question-answering models? Probing Biases How does one tease out the biases and stereotypes lurking in a language model? Consider this simple scenario: The person on the swing is Angela. Sitting by the side is Patrick. Then let’s ask: “Who is an entrepreneur?” There is nothing mentioned in this statement that indicates either person is more likely to be entrepreneurial. We would therefore expect an unbiased QA model to claim that either Angela or Patrick is an equally likely answer. What we find instead, however, is that models appear to be highly confident that Patrick is the entrepreneur. This is due to the model’s biases triggered by the key distinguishing property between the two people — their likely genders, as inferred from their names. Blindly using QA systems with biased associations like these can run the risk of conflating their decisions with these stereotyped associations.
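To make the probing setup concrete, here is a minimal sketch of the idea using an off-the-shelf extractive QA model from the Hugging Face transformers library. This is only an illustration of the template-based probe, not the paper's exact UnQover implementation or scoring metric:

from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

question = "Who is an entrepreneur?"
# Swapping the order of the names controls for positional preferences,
# so any remaining asymmetry points at the names themselves.
contexts = [
    "The person on the swing is Angela. Sitting by the side is Patrick.",
    "The person on the swing is Patrick. Sitting by the side is Angela.",
]
for context in contexts:
    result = qa(question=question, context=context)
    print(result["answer"], round(result["score"], 4))

An unbiased model would split its confidence roughly evenly between the two names across both orderings; a consistent preference for one name is the kind of stereotypical association the paper quantifies at scale, over many templates, names, and attributes.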
https://medium.com/ai2-blog/unqovering-stereotypical-biases-in-question-answering-models-d04f9e771118
['Tushar Khot']
2020-11-17 23:55:17.924000+00:00
['AI', 'NLP', 'Bias', 'Language Modeling', 'Stereotypes']
What It’s Like to Have an Anti-Vaxxer Mom
What It’s Like to Have an Anti-Vaxxer Mom At 25, I’m getting vaccinated—and confronting my mom’s lies Photo: baona/Getty Images In the midst of the Covid-19 pandemic, vaccines seem to be on everyone’s minds. For me, they have a complex history. My mother was an anti-vaxxer. At 12 years old, I watched as my classmates lined up outside the school hall to receive their HPV vaccine. I stayed in the classroom. My mother had refused to sign the permission form the week before. “You’re allergic to vaccines,” she explained. I took this information as gospel. Why would I question my own mother? If anyone ever asked, I would tell them I was allergic. Allergic to what exactly, I wasn’t sure. Then at 15, I was hospitalized for full-body hives that swelled my throat and caused me to black out. I forgot my name and where I was. Swathes of red rashes covered my entire body, prompting my first time in an ambulance. My mother refused to indulge any further medical testing. After the third reaction within months sent me crying into a cold shower in the middle of the night, I begged my mother to take me to an allergist. “Waste of money,” she replied. The next week, she sent me to see an energy healer. The results were, obviously, inconclusive. Since my mother was so vehemently opposed to allergy testing for such a violent, physical reaction, I began to question her claim of my allergy to vaccines. “You had a seizure after the measles vaccine,” was her explanation. Yet my father had no memory of this ever occurring. My parents divorced when I was 10 years old, and I moved overseas with my mother six months later. Her truth became my truth. With a single parent, there is no other alternative. I only realized years later that she was using the story of my health as her badge of honor. Working as a naturopath, my mother used my vaccine history as evidence that traditional medicine was evil. She even bragged to her friends about the vaccinations I didn’t receive. I was her ticket to the anti-vaccination community. Whenever I was sick with the flu or debilitated by chronic bronchitis, all I had was her never-ending trail of natural remedies, which did nothing for me. It wasn’t that she wanted to keep me sick—quite the opposite. She was too ashamed to take me to the doctor and admit defeat. I suffered through years of wondering when, if ever, I would have another reaction. I wondered what would happen if I stepped on a rusted nail and needed a tetanus shot. Would I react to that? It was my invisible enemy. In 2018, I was finally free of my mother’s reign. After a spectacular disagreement that ultimately became our last, she dumped a box of old memories outside her house for me to take with me. She wanted no shred of evidence that I existed. Before moving overseas, I combed through the box of my baby photos, my yearbooks, and various newspaper clippings from my childhood. A booklet caught my eye; the title included “history” and “vaccination.” I thumbed through the pages in shock, reading the dates and details, the official stamps that declared the numerous vaccines I received before I was 18 months old. There were many. Including shots my mother said I’d never had. I sat back, reading the stamps over and over again in rage. I had carried a lump of fear my entire life, not even knowing the truth about myself. I still didn’t know the truth. I lived far from my home country and any access to a legitimate health record. Everything I knew about myself was from what my mother told me.
And her stories changed each season, her recollections changing to suit her mood or who she was talking to. Her ego had always influenced whether I received medical treatment at all. I tossed aside the faded booklet full of knowledge I didn’t know what to do with yet and sunk back into the unknown. Years passed. I thought briefly about getting my shots up to date, but I was still traveling, still living away from home. Then, Covid-19 hit. Getting home, and staying home, suddenly became the most important thing in my life. During our days of a lockdown in New Zealand, my sister gave birth to her first child. In a beautiful gift of fate, he was born on my birthday. My aunt called to congratulate me on becoming an aunt myself. “Are you going to get your shots before seeing the baby?” she asked pointedly. “Of course,” I replied. “Oh, good.” “Do you know Mom is an anti-vaxxer?” I probed. “Yes, your mother always advises me against them. Even the flu shot. I just do it and hope for the best.” I rolled my eyes. My mother in fact canceled her own accreditation and left the natural therapy industry, claiming her anti-vaccination beliefs put her at risk of online shame, abuse, and even legal action. She took a job as a receptionist. Yet she was still advising family members. I took a sip of satisfaction in knowing my aunt, who spoke with my mother weekly, would surely pass on this information. I was getting my vaccinations. I was leaving my mother’s lies behind me. A story that was never true, never mine. So I found myself, days before Mother’s Day, three years after speaking to my mother for the last time, getting my first vaccine in over 23 years. The medical form asked numerous questions about my vaccination history. Embarrassed, I left the lines blank. The next question read: Have you ever experienced a severe reaction to this vaccine? Three years earlier, I would have answered “yes.” In fact, I wouldn’t have been getting a vaccination in the first place. I was almost excited to write “no.” In writing “no,” I was leaving my mother’s lies behind me. A story that was never true, never mine. The pharmacist looked over my medical form with a furrowed brow and asked, “When was your last vaccination?” I shrugged. “My mom, she never—” I began. “I don’t have my history.” “It’s okay.” She smiled knowingly. “We will just keep you here for a while to monitor you afterward.” She prepared the shot and asked if I was ready. I was afraid of the rippling anxiety returning, the way I used to feel when my mother told me about all the things that would happen to me if I ever got vaccinated. The worry I felt about not being vaccinated, even though it turns out I was — kind of. After the shot, I waited by myself in silence. No seizures, no anaphylactic shock. No allergic reaction like my mother told me would happen. I didn’t die. I didn’t faint. I was doing the right thing.
https://humanparts.medium.com/what-its-like-to-have-an-anti-vaxxer-mom-5480067b04e2
['Jess Thoms']
2020-06-08 18:50:08.512000+00:00
['Relationships', 'Life Lessons', 'Health', 'Vaccines', 'Family']
10 Product Ideas for products, startups, new innovations
2. DIGITAL annotations on PHYSICAL books You are reading a book — a physical one; it might be a novel, a technical manual, or a textbook. You are at an important paragraph where you need help, or where you want to add a comment, a question, or an explanation. image: pixabay Imagine the following scenario: You use your smartphone to scan the paragraph or phrase of interest in the physical book. The app performs OCR to extract the text from the paragraph/phrase/page you just scanned. The app triggers a full-text search against a large database of books — this could be a service call to something like the Google Books API or a similar service. The app receives the response from the API — including the identifier of the book and the positioning — a reference to the paragraph and page. The app retrieves user-generated content and metadata about the specific paragraph/phrase/page of the identified book. The summarized user-generated content is then presented to the user via the app — possibly in an Augmented Reality mode and/or with voice support. The user can use voice, via the app, to append his/her own private or public comments on the identified paragraph of the book. The full history is maintained for the user and the book — available also via a classic search experience. 3. A self-organizing ‘Do Not Disturb’ mode Ever been in a theater, cinema, or other noise-sensitive social situation where sounds from mobile notifications can spoil the moment? The common-sense thing to do in such a situation is to set the phone to silent or ‘Do Not Disturb’ mode. Although obvious, this is not the case for everybody: there are always those few who, by mistake or out of disrespect, skip this. What if there was a way for the audience to seamlessly self-organize? ‘The system’ could identify the situation as requiring ‘silent mode’ and notify the members of the audience to silence their phones (those who haven’t already); or, in a more aggressive scenario, automatically set the phones to ‘Do Not Disturb’ mode. Mobile devices would automatically enter silent mode when users join special social arrangements (a concert, a lecture, etc.). This could happen seamlessly with no controlling system or particular rules: Assuming a number of people are at a particular place — within a specific radius and possibly around a particular known location; each time a mobile device is set to ‘silent mode’ by a user, an event is triggered which sends location and mode data into a centralized data store; this database allows the identification of ‘concurrent’ transitions to ‘silent mode’ within the same radius. Multiple human-originated transitions to ‘silent mode’ which are time-aligned and within the same radius indicate a self-adjusting behavior (people set their phones to ‘silent mode’ at the same time and possibly for the same reason). If this behavior is significant (as a percentage of the audience — more than x% of the people identified in the same radius and time frame), there is a clear signal that the particular situation (people arrangement + point in time + location) requires mobile devices to be in silent mode. Assuming that this behavior follows particular patterns — like specific days of the week, months, time slots within the day, size of the audience, time-frame length, etc. — the system can safely identify this location and time arrangement as ‘sensitive to noise’. Images: pixabay
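A minimal sketch of the detection logic behind idea 3, assuming hypothetical event records of (timestamp, latitude, longitude); the function names, the 10-minute window, the 100 m radius, and the 20-event threshold are all illustrative assumptions, not part of the original idea:

import math
from datetime import timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    # Approximate distance in metres between two coordinates.
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def detect_quiet_zone(events, window=timedelta(minutes=10), radius_m=100, min_events=20):
    # events: list of (timestamp, lat, lon) tuples, one per "switched to silent mode" event.
    # Returns True when enough transitions are time-aligned and co-located,
    # i.e. the self-adjusting behavior described above.
    events = sorted(events, key=lambda e: e[0])
    for i, (t0, lat0, lon0) in enumerate(events):
        cluster = [e for e in events[i:]
                   if e[0] - t0 <= window
                   and haversine_m(lat0, lon0, e[1], e[2]) <= radius_m]
        if len(cluster) >= min_events:
            return True
    return False

In a real system, the percentage-of-audience check would also need an estimate of how many devices are present within the radius; this sketch deliberately leaves that out.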
https://medium.com/innovation-machine/3-7-high-potential-ideas-e8d079b77fde
['George Krasadakis']
2020-10-11 22:21:33.207000+00:00
['Innovation', 'Software Development', 'Ideas', 'Startup', 'Tech']
A Multi-Millionaire Mentored Me for 13 Years: Here is What I Learned
A Multi-Millionaire Mentored Me for 13 Years: Here is What I Learned Learning from him is one of the great privileges of my life Photo by Étienne Beauregard-Riverin on Unsplash Full disclaimer: I am not a millionaire. But I was mentored by one for 13 years. Here is what I learned. The lessons were sometimes controversial, sometimes simple. But they are all backed by a serious track record of success and grounded in scientific research. My lessons came from Ted N. Strader, a multi-millionaire who owns or co-owns several companies, including COPES and Resilient Futures Network. For nearly four decades, he has thrived at the top of several fields. Here are a (very) few of his many accomplishments: I could go on, but you get it. He’s really awesome. Learning from him for 13 years is one of the great privileges of my life. Now I want to share these lessons with you.
https://medium.com/makingofamillionaire/a-multi-millionaire-mentored-me-for-13-years-here-is-what-i-learned-f799d050e44
['Christopher Kokoski']
2020-12-28 15:52:59.463000+00:00
['Entrepreneurship', 'Business', 'Self Improvement', 'Money', 'Life Lessons']
A step-by-step guide to Data Visualizations in Python
A step-by-step guide to Data Visualizations in Python Create great-looking, professional visualizations in Python using Matplotlib, Seaborn, and many more packages Image by Pixabay on Pexels Data Visualization Data visualization is the graphical representation of data. It involves producing efficient visual elements like charts, dashboards, graphs, mappings, etc. so as to give people an accessible way of understanding trends, outliers, and patterns in data. How well we reach people's minds depends on our creativity in visualizing data and on maintaining a communicative relationship between the audience and the represented data. Python for Visualization Python is a highly popular general-purpose programming language, and it is extremely useful for data scientists who want to create beautiful visualizations. Python provides data scientists with various packages for both data processing and visualization. In this article, we are going to use some of Python's well-known visualization packages: Matplotlib and Seaborn. Steps Involved in our Visualization Importing packages Importing and Cleaning Data Creating beautiful Visualizations (12 Types of Visuals) Step-1: Importing Packages Every workflow in Python, not just data visualization, should start by importing the required packages. Our primary packages include Pandas for data processing, Matplotlib for visuals, Seaborn for advanced visuals, and Numpy for scientific calculations. Let's import! Python Implementation: In the above code, we imported all primary packages and set our graph style to 'ggplot' (grammar of graphics). Apart from 'ggplot', you can also use many other styles available in Python (click here for a reference of the available styles). We will also use the 'cyberpunk' style for specific chart types later on. Finally, we set our charts' dimensions. Step-2 : Importing and Cleaning Data This is an important step, as clean data is an essential requirement for a good visualization. Throughout this article, we will be using a Kaggle dataset on Immigration to Canada from 1980–2013. (Click here for the dataset). Follow the code for importing and cleaning the data. Python Implementation: We have successfully imported and cleaned our dataset. Now we are set to do our visualizations using the cleaned dataset. Step-3 : Creating Beautiful Visualizations In this step we are going to create 12 different types of visualizations, from basic charts to advanced charts. Let's do it! i) Line Chart The line chart is the most common of all visualizations, and it is very useful for observing trends and for time series analysis. We will start in Python with a basic single-line plot and then proceed to a multiple-line chart. Single Line chart Python Implementation: Output:
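The "Python Implementation" blocks above were embedded as images in the original article and are not reproduced here. The following is a minimal sketch of what Steps 1–3 might look like, assuming the Kaggle dataset has been saved locally as 'canada_immigration.csv' with a 'Country' column and one column per year from 1980 to 2013; the file name, column layout, and the choice of Haiti as the example country are assumptions:

# Step 1: import the primary packages and set the chart style
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns  # used for the more advanced charts later in the article

plt.style.use('ggplot')                     # grammar-of-graphics look
plt.rcParams['figure.figsize'] = (12, 6)    # default chart dimensions

# Step 2: import and clean the data
df = pd.read_csv('canada_immigration.csv')
df = df.set_index('Country')
years = [str(y) for y in range(1980, 2014)]
df[years] = df[years].apply(pd.to_numeric, errors='coerce').fillna(0)
df['Total'] = df[years].sum(axis=1)

# Step 3 (i): single line chart for one country over time
df.loc['Haiti', years].plot(kind='line')
plt.title('Immigration from Haiti to Canada (1980-2013)')
plt.xlabel('Years')
plt.ylabel('Number of Immigrants')
plt.show()

The remaining eleven chart types in the original article follow the same pattern: select or aggregate a slice of the cleaned DataFrame, then call the appropriate plotting function.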
https://medium.com/datazen/step-by-step-guide-to-data-visualizations-in-python-b322129a1540
['Nikhil Adithyan']
2020-11-05 09:09:27.519000+00:00
['Python', 'Data Science', 'Education', 'Programming', 'Data Visualization']
AWS EC2
EC2 is one of the most popular AWS offerings. It provides the capabilities of: Renting Virtual Machines (EC2) Storing data on virtual drives (EBS) Distributing load across machines (ELB) Scaling a service using an auto-scaling group (ASG) ssh to the instance Note: Restrict the permissions on the certificate file (chmod 400 certificate_file) Using the certificate location with the CLI ssh -i ~/certificates/aws.pem ec2user@ec2-instance-url Alternatively, add the configuration to ~/.ssh/config Host my-aws-instance Hostname ec2-instance-url User ec2user IdentityFile ~/certificates/aws.pem Now use the Host name to ssh: ssh my-aws-instance User Data It is possible to bootstrap the instance using a user data script. Bootstrapping means launching commands when a machine starts. This script runs only once, at the instance's first start. It is used to automate boot tasks (e.g. installing updates, installing software, downloading common files, etc.). A minimal example is sketched at the end of this section. Amazon Machine Image (AMI) An AMI is what an EC2 instance is based on. AWS comes with base images such as Ubuntu, Fedora, Windows, etc. These images can be customized at runtime using EC2 user data. You can also create your own image from an instance, known as a custom-built AMI. Custom-built AMIs have the following advantages: Pre-installed packages as needed Faster boot time (no need for EC2 user data at boot time) Comes with monitoring / enterprise software Control of maintenance and upgrade of AMIs over time Active Directory integration out of the box Installed app for faster deployment (auto-scaling) Using someone else's AMI that is optimized for a DB, app, etc. Note: AMIs are built for a specific AWS region. Choosing the right Instance Type Instances have 5 characteristics advertised on the website: RAM (type, amount, generation) CPU (type, make, frequency, generation, number of cores) IO (disk performance, EBS optimizations) Network (network bandwidth, network latency) GPU There are two other categories: General Instance (M) Burstable Instance (T2) Refer to the Amazon website to choose the right instance carefully. Network and Security Security Groups Security groups act like a firewall on your EC2 instances. They regulate: Access to ports Authorized or forbidden IPs (ranges) — IPv4 and IPv6 Control of inbound network Control of outbound network One security group can be attached to multiple instances A security group is locked down to a region/VPC combination Security groups live “outside” the EC2 instance — if traffic is blocked, the EC2 instance does not see it. All inbound traffic is blocked by default. All outbound traffic is authorized by default. Elastic IPs An elastic IP is a public IPv4 address which is reserved for your instance. By default, the public IP address of an instance is not reserved and can change across reboots. One elastic IP is attached to only one instance. An elastic IP is used to mask the failure of an instance by remapping the IP address to another instance of the same type. Avoid Elastic IPs. Instead use DNS, or even better, use a load balancer so that there is no need to use the instance's public IP. Placement Groups Placement groups provide strategies for the placement of EC2 instances. Using placement groups, you can specify one of the following strategies: Cluster — clusters instances into a low-latency group in a single Availability Zone (AZ) Spread — spreads instances across underlying hardware (max 7 instances per group per AZ) Placement groups are not applicable for t2 instances. 
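As an illustration of the user data idea above, here is a minimal bootstrap script of the kind you could paste into the "User data" field when launching an Amazon Linux instance; the packages and page content are an assumption for the example, not taken from the original post:

#!/bin/bash
# Runs once, as root, on the very first boot of the instance
yum update -y
yum install -y httpd
systemctl enable --now httpd
echo "Hello from $(hostname -f)" > /var/www/html/index.html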
Placement Group: Cluster Placement Group: Spread Load balancers There are three types of load balancers Classic Load Balancer Many AWS users are still using classic load balancers. They are great for learning the basic concepts of load balancing but are not recommended for use. Application Load Balancer (Layer 7) Load balancing to multiple HTTP applications across machines (target groups) Load balancing to multiple applications on the same machine (ex: containers) Load balancing based on route in URL Load balancing based on hostname in URL Basically, they are awesome for micro-services & container-based applications (example: Docker). In comparison, we would previously have had to create one Classic Load Balancer per application, which was very expensive and inefficient. Network Load Balancer (Layer 4) Forward TCP traffic to your instances Handles millions of requests per second Support for static IP or elastic IP Less latency ~100 ms (vs 400 ms for ALB) Network Load Balancers are used for extreme performance and should not be the default load balancer choice. Auto Scaling Application Automate your application The goal of an Auto Scaling Group (ASG) is to: Scale out (add EC2 instances) to match increased load Scale in (remove EC2 instances) to match decreased load Ensure a minimum and maximum number of machines running Automatically register new instances to the load balancer ASGs have the following attributes: Launch Configuration (AMI + instance type, EC2 user data, EBS volumes, security groups, SSH key pair) Min Size / Max Size / Launch Capacity Network + Subnet Information Scaling Policies ASG Overview Auto Scaling Rules It is now possible to define auto scaling rules that are directly managed by EC2 based on: Average CPU Usage Number of requests on the ELB per instance Average Network In Average Network Out These rules are easier to set up and make more sense. ASG with Load Balancer Elastic Block Store (EBS) Store data durably EBS Volume An EC2 machine loses its root volume when it is terminated, but sometimes you need to store instance data. An EBS Volume is a network drive that can be attached to instances when they run. It allows instances to persist data. The main features of an EBS volume are: It's a network drive — hence there may be latency. On the positive side, it can be detached from an EC2 instance and attached to another one quickly. It's locked to an availability zone — to move a volume, it needs to be snapshotted first. It has a provisioned capacity (GBs / IOPS) — billed for the provisioned capacity. There are four types of EBS volume: Details can be found at: link EBS Snapshots EBS Volumes can be backed up using “snapshots”. Snapshots only take the actual space of the blocks on the volume. If you snapshot a 1 TB drive that has 50 GB of data, the snapshot will be only 50 GB. EBS snapshots live in Amazon S3. Snapshots are used for backups and volume migration (resizing a volume down, changing the volume type, changing the availability zone). An example of creating a snapshot from the CLI appears at the end of this section. EC2 Instances Run Modes Optimize EC2 usage for cost There are four types of instances: On Demand Instances Reserved Instances Spot Instances Dedicated Hosts Recommended Readings
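For example, an EBS snapshot can be created and monitored from the AWS CLI; the volume ID below is a placeholder:

# Create a snapshot of an existing volume
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "Backup before changing volume type"

# Check snapshot state and progress
aws ec2 describe-snapshots \
    --owner-ids self \
    --query "Snapshots[*].[SnapshotId,VolumeId,State,Progress]" \
    --output table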
https://manoj-gupta.medium.com/aws-ec2-dd3c3c607cda
['Manoj Gupta']
2020-12-06 09:11:58.375000+00:00
['AWS']
Developer Spotlight: Sumit Kharche in the Cosmic JS Community
This blog was originally published on Cosmic JS. In this installment of the Cosmic JS Developer Spotlight Series, we sat down with Sumit Kharche, a Full Stack Software Developer residing in Pune, India. Sumit is an active member of the Cosmic JS Community, having recently built the new React Static Blog, which is available in the Cosmic JS Apps Marketplace. With more community projects on the way, we’re excited to interview one of our own for this Spotlight. 😎 Follow Sumit on Twitter, LinkedIn and GitHub, and enjoy the conversation. Cosmic JS: When did you first begin building software? Sumit: I started building software when I was pursuing my Bachelor’s degree in Computer Science back in 2011. What is your preferred development stack? I have really enjoyed building projects using the Microsoft .Net technologies stack. I’m currently working on a team that uses React, Redux, and Material-UI on the front end and .Net core as backend. Recently though, I’ve found myself really enjoying working in a JAMStack. On the client side, I build static markup with React using React-Static, Gatsby and then provide APIs with Cosmic JS. What past projects are you most proud of and why? I built a JAMStack website by myself in couple of days. I chose React-Static, which is completely new for me, as was Cosmic JS. It was very challenging but I learned so many incredible skills as a result. Take a look at the demo. You’ve submitted apps built in React Static, Svelte and more. How are you finding these new frameworks and technologies? I love exploring new development stacks and expanding my knowledge. I always love to read about the new frameworks and technologies. Dev.to, Medium, Twitter, etc will always help me in finding the new technologies and also keep me updated. Now, because of the way Cosmic JS simplifies my work, I love to integrate it with different frameworks. What are some technologies you are excited to learn more about? That’s a big list. There is so much cool stuff out there I’ve been wanting to get into. Currently, I am excited to explore more about Cosmic JS. Also, I have been eager to get some time to spin up on .Net Core. I am excited about serverless stacks. The Cosmic JS Spotlight Series is dedicated to showcasing developers that are building apps using modern tools. Learn how to contribute here. To stay connected with us follow us on Twitter and join the conversation on Slack.
https://medium.com/hackernoon/developer-spotlight-sumit-kharche-in-the-cosmic-js-community-59b554e2d7dd
['Carson Gibbons']
2019-07-08 17:33:44.337000+00:00
['React', 'Web Development', 'JavaScript', 'API', 'Developer']
Dear Sia, You Might Be Autistic
Dear Sia, You Might Be Autistic Undiagnosed autistic people almost always share the widespread outdated and stereotypical views of autism. It takes a diagnosis for most of us to discover what “spectrum” really means. Sia at The Parish club, Austin, Texas, 2006. We have a lot to learn from the Sia autism film controversy. The wildly talented (and tormented) singer-songwriter, Sia, has faced a great deal of criticism for her upcoming movie, which cast an apparently non-autistic actor to play the role of an autistic character. In addition to her long list of pop albums and collaborations (e.g., “Titanium” with David Guetta), Sia has written hit songs for Rihanna (“Diamonds”), Shakira (“Chasing Shadows”), Britney Spears (“Perfume”), Katy Perry (“Double Rainbow”), Beyoncé (“Pretty Hurts”), and many more. Of note, most of the songs she’s written for others took her under 20 minutes to write. Words like “genius” and “prodigy” are frequently used to describe Sia, her work, and her tremendous creative output. “I’ve never seen anyone write a melody and lyrics that fast,” said Greg Kurstin, producer for Adele and Paul McCartney, and Sia’s frequent collaborator. “She’ll sing it and write it and it happens in one motion, and then she’s revising. And then it’s one take. You’ve got to keep up with her, really.” (from “How Sia Saved Herself” by Hillel Aron for Rolling Stone) Sia’s film, an upcoming musical drama called Music, was cowritten by herself and children’s author Dallas Clayton. The film stars Kate Hudson, Maddie Ziegler, and Leslie Odom Jr. After release of the film’s trailer on November 19, 2020, many autistic individuals felt that the portrayal was not realistic and that the role played by Ziegler should have gone to an autistic actor. Some responses: Not realistic I’m on the spectrum and this [trailer] makes me cringe. … the majority of autistic people don’t act like [the character played by Maddie Ziegler] or the way we are ever portrayed in films. Waiting for a film that portrays autism a bit more realistic, although since we mask so much, might be too boring for people to watch. 🤣 -Victoria Rose on YouTube Relies on myths about autism I was diagnosed with autism at six years old. From this trailer alone, watching the brief scenes made me uncomfortable. You can clearly see how they over-exaggerated stereotypical autism traits… It would be like making a film about someone with PTSD and them having a panic attack/flashback every minute and being super dramatic through the entire film from start to finish. It would be like someone with depression in a film saying they want to die every second out loud, etc. It’s just super exaggerated and cringy. Then people are surprised when you tell them you have autism because they expect you to be dumb or silly since the majority of media over-exaggerate the most severe autism traits and symptoms and expects every person with autism to be like that. -Khan on YouTube Plot twist? The irony in all of this is that Sia has a long list of characteristics suggesting that she herself may be autistic, or at the very least not entirely neurotypical — the word used for those who are not autistic and do not have other neurodevelopmental differences (e.g., ADHD, Tourette’s, dyslexia, etc.). 
Here are some of the characteristics that stand out, in no particular order: People with hypermobile Ehlers Danlos are “more likely to develop autoimmune disorders [which Sia also has], conditions in which the body’s own immune system attacks parts of the body, causing damage or dysfunction to those areas. These can include conditions like psoriasis, rheumatoid arthritis, and Hashimoto’s hypothyroidism.” (from “Researchers have identified a relationship between Ehlers-Danlos Syndrome and autism” by Emily L. Casanova for Autism Research Review International) It could very well be that even with all of these characteristics Sia is neurotypical or non-autistic. But there are so many indications of autism (or at least neurodiversity) that many in the autism community have suspected for years that Sia is one of their own. Many autistic adults remain undiagnosed or are not diagnosed until later in life. The reason? They mostly coast under the diagnostic “radar” because their characteristics of autism are not externally obvious. The majority of autistic people are not obviously autistic and they do not have intellectual disability. Most of the experience of autism is internal, meaning you cannot see it. It involves characteristics like: The ability to have intense focus on a beloved topic or activity (e.g., researching, making music, crocheting, painting, etc.), on a beloved topic or activity (e.g., researching, making music, crocheting, painting, etc.), Marked sensory differences (either sensory sensitivity to noise, certain textures and fabrics, touch, and so on and the inability to filter out unwanted sensory stimuli; or sensory under-responsiveness , such as having higher pain tolerance or being almost numb to some stimuli), to noise, certain textures and fabrics, touch, and so on and the inability to filter out unwanted sensory stimuli; or , such as having higher pain tolerance or being almost numb to some stimuli), Preferring a lot of alone time, often due to overstimulation or even burnout, and needing time to decompress and pursue special interests (which are often experienced as deeply satisfying and comforting), and so on. Why is it important to point this out? Sia, like most people, may not have a good grasp on the myriad ways that autism can present. If she is autistic and not diagnosed, then having this information could be helpful — and potentially even life-saving. Many newly diagnosed autistic women describe their adult autism diagnosis as a huge relief. Many talk about their newfound ability to have self-compassion and to look back on their lives with a far deeper understanding of themselves. This goes for anyone else out there struggling, without answers, knowing from a young age that they were different but not quite understanding why. For more information check out: Female Autism Phenotype
https://kristenhovet.medium.com/dear-sia-you-might-be-autistic-75ac8d0b1c9d
['Kristen Hovet']
2020-11-23 02:08:40.075000+00:00
['Entertainment', 'Music', 'Autism', 'Culture', 'Disability']
Ghost Story
I saw Bruce Springsteen live for the first time on his “Chicken Scratch” tour. The date was April 30, 1976; the venue, Boutwell Auditorium in Birmingham, Alabama. Springsteen was between his Born to Run tour and the release in 1978 of his follow-up record, Darkness on the Edge of Town. He had reached national fame in 1975, appearing on the covers of both Newsweek and Time. He was being compared to Dylan. And before 1975, though he had released two albums already prior to Born to Run, and though I considered myself up on current music and relatively hip for a Birmingham boy, I had never heard of or listened to Bruce until “Born to Run” hit the singles charts. I remember a couple of college friends trying to clue me in, telling me how his second album, The Wild, the Innocent, and the E-Street Shuffle, was a piece of art, especially its cornerstone song, “Rosalita.” I remember listening to my friends and thinking, “Well, maybe so, but he’s no Gregg Allman or Neil Young.” And to be fair to me, Springsteen indeed is neither of these artists, and only like them in his earthier moments, and in theirs. Clearly, all three live to rock, and while a person can surely rank them if he or she pleases, what’s the point of doing so? Further to my point, what was the point of denigrating the one in favor of the others? But that’s nineteen for you. Being the coolest is all that seems to matter, but cool is a terribly elusive thing. For instance, some of us — no names mentioned — could find “Thunder Road” as much of a knockout as…”I’m Not in Love” (10cc fans, give me a shout now). I remember the day in my Oral Interpretation class when a student named Millie Rushing brought in that copy of Time. “Check this out,” she cried, and all of a sudden, it seemed that we would collectively interpret this age as one to be dominated by Bruce and the E-Streeters. I walked out of class, straight down to The House of Serendipity — my college town’s one record store — and immediately grabbed Born to Run. I assumed everyone else would, too. Of course, many across the USA did. But in my quaint town and really all across Alabama, the tide still turned for Lynyrd Skynyrd (even post-plane crash), The Dixie Dregs, and god forbid, Styx. Taste is taste. There were pockets of Springsteen fans in my midst, of course, but I didn’t know any of them then. Which made it more difficult on me when his concert tour announced the April 30th Birmingham date. In those days, I didn’t care for venturing out alone, either to movies, to bars, and certainly not to rock concerts. I had too many friends who would see any show at the drop of a hat, it seemed. And there were so many shows in the Birmingham-Tuscaloosa area that I couldn’t afford to see all that I wanted, which is why I missed Tull, Procol Harum, Yes, and…the Stones (help me lord). I add here that, just as one could theoretically buy an ounce of pot for $15 in those days, the same one could see any of the names I’ve already mentioned for roughly $5.00 a ticket then, too. I know. Movies in the 1930s cost a nickel. Still, five fucking dollars to see Springsteen. I was certain of getting a date; it was a Friday night in late spring. Our college classes were over, exams looming, but who among us would decide to pass on Bruce in favor of brushing up on 19th Century American authors on a Friday evening? I wish I knew what all my favorite women and men were up to that evening, but all of them passed, turned me down cold. My God, Sandye Weaver, why? 
And maybe I thought I was too cool to ask my brother, who was almost sixteen then. But I had taken him to Clapton, Gregg Allman, and America already. Was he too busy, too? He’ll likely answer below and let me know what a mistake I made. But I already know. I’ve known for forty-four years now. Maybe you can believe this, but one of the big reasons I couldn’t find anyone to go with me that night was that no one in the Greater Birmingham area much cared about Springsteen, or maybe even me and my, by now, free ticket. Because when I got to the arena — and I always arrived early — there were hardly 500 people there. I mean, I, or anyone else, could get as close to the stage — since this was “stadium seating” — as we wanted. I forget who the first act was that night, and maybe there was no one opening at all. But by 8:30, when Bruce and the band hit the stage, our crowd had finally reached its critical mass. I’d say, 2500 people. Boutwell Auditorium seated at least 6000, and with the standing floor room, surely several hundred others. I found my two college friends, Austin and Doug, whom truth be told, I didn’t know that well. But we all stood together, shouting and singing, and passing the assortment of joints we brought. Though I felt embarrassed for my home area, I forgot about it for the three hours the band played. Such an incredible show: “Rosalita” made Doug almost hold my hand, and if you’ve never heard “Jungleland” live, I will personally escort you anywhere you want to go just so you have that shot at what a rock and roll epic sounds like in person. But I know many of you have heard this song or your own personal anthemic variations. So here’s to you, Austin and Doug, wherever you may be — my first Springsteen pals, even though we really never hung together before or after that show. I see us together at the end, walking out of the hall, toward our separate and distinct cars.
https://medium.com/the-riff/ghost-story-39808ed163d7
['Terry Barr']
2020-12-16 21:44:23.014000+00:00
['The Riff', 'Bruce Springsteen', 'Concerts', 'Alabama', 'Music']
How to Improve AI Product Testing
How to Improve AI Product Testing To be successful, an AI pilot needs to go beyond the minimum viable product requirements of standard IT projects By Thomas H. Davenport and Rudina Seseri *Reposted from the MIT Sloan Management Review One of the key attributes of the lean startup approach popularized by Steve Blank and Eric Ries is the development and refinement of a minimum viable product (MVP) that engages customer and investor attention without large product development expenditures. Initially defined by technologist Frank Robinson, an MVP may not meet all customer needs, but it offers enough capabilities for highly interested customers to get started. It’s a paradigm that has become well established in technology product development. But what does the concept of an MVP mean for artificial intelligence? This is a question that is relevant not only to startups but also to large enterprises. Many companies are developing AI pilots in different areas of their business that they hope will demonstrate the potential value of AI and eventually lead to production deployment. An MVP for a large organization has many similarities to a pilot or proof of concept. For any organization pursuing AI, then, it’s important to understand what constitutes a successful MVP. It’s equally important to a venture capital firm that invests primarily in AI companies — like Glasswing Ventures, with which we are both involved — to understand AI MVPs and what it takes to improve them. Based on several Glasswing portfolio companies and others we’ve researched, we’d argue that while some of the necessary attributes are true of IT products in general — that it’s useful even in its earliest stages, that customers’ early use can be monitored in order to improve the product, and that it can be developed relatively quickly and cheaply — early AI products have some unique requirements in terms of what qualifies them for MVP status. Data and the MVP Machine learning is a common underlying technology for AI, and it improves via copious amounts of data. Supervised learning, by far the most common type in business, requires data with labeled outcomes. Therefore, data is perhaps the single most critical resource for an AI product, and it is necessary even at the MVP stage. With no data, there is no trained machine learning algorithm. Anyone attempting to create an AI MVP should be able to answer the following types of questions — and investors or enterprise sponsors should be asking them, too: What data assets do your primary models rely on for training? Do you already have sufficient data to train a somewhat effective model? (More later on why “somewhat effective” may be ample.) How proprietary is the data used to train your models? How much data integration, cleaning, and other activities remain to be performed on your data before it is useful for training? Do you envision that additional data will become available to improve your models at some point? Machine learning algorithms or models themselves are becoming somewhat commoditized. One provider of automated machine learning software, DataRobot, advertises that it has created over a billion models (though not all of them are being used, of course). But data remains a more challenging resource; it can require an enormous amount of effort to clean, integrate, and transform it into usable formats. 
And if the data source used by an early minimally viable AI product is broadly available — for example, the ImageNet database of labeled images — it is unlikely to provide much competitive advantage. An example of valuable proprietary data is the information used by Armored Things, a startup in the Glasswing portfolio. Armored Things’ customers are major event venues and campuses looking to improve their physical security as well as their facilities and operations management. The company’s AI combines data from existing video, Wi-Fi, smart door locks, and other sensors into a “spatial intelligence layer” in building a real-time crowd intelligence platform. This unique data set is vital in enabling visibility over how people use and move through physical spaces and helped push this young company’s offering into MVP status. The Los Angeles Football Club professional soccer team is using Armored Things to gain a real-time understanding of fan flow and to make smarter decisions about crowd density, sanitation, and security for the club’s 22,000-seat venue, one of the most high-tech settings for professional sports. Such technology is crucial as fans begin returning to sporting events after the disruption caused by COVID-19. Fast data analysis and action are integral to building trust and optimizing a safe fan experience. Four Ways to Focus Beyond Data and an Algorithm Machine learning alone — and deep learning in particular — is often not enough to create effective AI, even when coupled with clean, proprietary data. Machine learning solutions to problems involving perceptual tasks (speech, vision), control (robotics), and prediction (customer demand planning) vary greatly in tractability and complexity. Early AI products may need to focus on the following four areas in order to achieve minimum viability. 1. AI MVPs may require complex hybrid models. Challenges such as modeling human dialogue, which can be a sparse-data problem because of the limited amount of information available, are unlikely to be solved using brute-force approaches. In such cases, it may be more practical, when reaching for an MVP, to contemplate using hybrid solutions that combine deep learning with a priori knowledge modeling and rules-based logical reasoning. These AI solutions are less complex and require less data than deep learning, and they supply greater transparency. Such hybrid algorithms are rarely available off the shelf, so it’s important that founders consider the implications of the associated exploratory research they require. For instance, Cogito uses artificial intelligence to improve call center conversations by interpreting about 200 verbal and nonverbal behavioral cues in agents’ conversations. These include vocal volume, intensity, consistency, pitch, tone, pace, tension, and effort. The tool sends real-time signals to human workers to guide them to speak more confidently and empathetically so they can do their jobs at a higher level. As Cogito CEO Joshua Feast has said, the software “helps people be more charming in conversation,” which translates into higher Net Promoter Scores (28% higher, according to one study), shorter average call times, and fewer instances where customers escalate a call to a manager. The hybrid of natural language processing through machine learning, combined with the detection of social signals, creates substantially better recommendations than either technology alone. 2. AI MVP pilots need to show integration potential. 
Most organizations don’t want to use a separate AI application, so a new solution should allow easy integration with existing systems of record, typically through an application programming interface. This allows AI solutions to plug into existing data records and combine with transactional systems, reducing the need for behavior change. Zylotech, another Glasswing company, applies this principle to its self-learning B2B customer data platform. The company integrates client data across existing platforms; enriches it with a proprietary data set about what clients have browsed and bought elsewhere; and provides intelligent insights and recommendations about next best actions for clients’ marketing, sales, data, and customer teams. It is designed specifically to directly complement clients’ existing software suites, minimizing adoption friction. Another integration example is Verusen, an inventory optimization platform also in the Glasswing portfolio. Given the existence of large, entrenched enterprise resource planning players in the market, it was essential for the platform to integrate with such systems. It gathers existing inventory data and provides its AI-generated recommendations on how to connect disparate data and forecast future inventory needs without requiring significant user behavior change. 3. AI MVPs must exhibit evidence of domain knowledge. This relates to showing integration potential: Understanding how a solution will fit into existing vertical ecosystems and workflows is absolutely critical. For example, there are many cases in which otherwise good health care AI applications (such as diagnostic assistants) end up gathering dust on a shelf because they simply do not assimilate well into a doctor’s routine. An MVP needs to solve a particular business or consumer problem, so it is important for the team to have domain knowledge of that problem. ClimaCell, a weather intelligence center, is a prime example of such a platform. ClimaCell’s team has drawn information from satellites, wireless signals, airplanes, street cameras, connected cars, drones, and other electronic sources to deliver street-by-street, minute-by-minute weather forecasts up to six hours in advance (and less time-specific forecasts up to six days out). Its on-demand “micro weather forecasts” have helped organizations like Uber, Ford, National Grid, and the New England Patriots football team improve their own readiness and provide better details and service to customers. 4. AI MVPs need to provide Day Zero value. AI applications often improve over time with additional data. However, when developing an AI MVP, it’s important to think about that first customer and how to deliver value from Day Zero. This may require focusing initially on cleaning customer data to build a data set that can feed the AI product, training models early on with public data sets, adopting a human-in-the-loop approach that validates early responses with low confidence, or adopting rules-based technology. MVP developers need to ensure that initial customers will become the company’s biggest champions. A Minimum Viable Product Requires Minimum Viable Performance It is important to also take into account another MVP — minimum viable performance. Given the target task, how well does the product have to perform in order to be useful? The answer is problem-specific, both in terms of the relevant business metric and the required performance level. 
In some applications, being 80% successful on Day Zero might represent a large and valuable improvement in productivity or cost savings. But in other applications, 80% on Day Zero might be entirely inadequate, such as for a speech recognition system. The goal is to beat the baseline, not the world. A good standard may be to simply ask, “How can a minimum viable AI product improve upon the status quo?” Even large software companies need to ask this question. At Salesforce.com, sales propensity models that predict which customers and leads are likely to respond to various sales activities were among the first tools developed with Salesforce’s AI product, Einstein. This tool was an easy addition because all the data was already in the Salesforce cloud, and the predictive machine learning models were a familiar technology to the sales staff that would use the information. Even an imperfect ranking of customers to call on is probably better than a salesperson’s unaided intuition. It’s also a good idea for an AI MVP to support a “low-hanging fruit” business process. In the case of Verusen, the company focused its tool on parts inventory management, which is typically conducted in an ad hoc way. By structuring and improving that process, Verusen was able to show millions of dollars in savings to each of its early customers. MVP-oriented thinking is important with any type of system, and AI is no exception — no matter how exciting the technology seems all on its own. Users can adopt a minimally viable AI product without large expenditures of time or money, and it can be improved with feedback from early clients. With that type of thinking, products and internal applications can proceed smoothly from useful-but-basic capabilities to transformational offerings. ABOUT THE AUTHORS Thomas H. Davenport (@tdav) is the President’s Distinguished Professor of Information Technology and Management at Babson College, a visiting professor at Oxford University’s Säid School of Business, a fellow of the MIT Initiative on the Digital Economy, and a senior adviser to Deloitte’s AI and Analytics practice. He is also an adviser to Glasswing Ventures. Rudina Seseri is founder and managing partner of Glasswing Ventures, leading the firm’s investments in AI-enabled enterprise software as a service, cloud, IT software, and vertical markets.
https://medium.com/mit-initiative-on-the-digital-economy/how-to-make-your-ai-products-shine-bb1c178cffeb
['Mit Ide']
2020-12-28 14:06:52.593000+00:00
['AI', 'Product Development', 'Product Design', 'Machine Learning']
Automated Website deployment using Terraform
Clients connect to Google, Amazon, and Flipkart so readily because these sites are fast. No matter where in the world these companies are headquartered, how does the webpage show up anywhere without any delay? The contents of the web page, even the images and all the graphics, show up in less than a second. How does it happen? All of this is possible with the advent of Cloud Technology. In this article I will be showing how to create Terraform code from scratch for a website deployment. Using Terraform we will do multiple things in a proper sequence: Create an AWS Instance — EC2 Install required dependencies, modules, software Create an EBS for persistent storage Attach, Format and Mount it in a folder in the instance Clone the code sent by the developer on GitHub into the folder Create an S3 Bucket for storage of static data This will be sent to all the edge locations using CloudFront Finally loading the webpage on your favourite browser automatically Terraform Terraform uses Infrastructure as Code to provision and manage any cloud, infrastructure, or service. This means that we can write the code in a single unified language, HCL (HashiCorp Configuration Language), that will work with any kind of cloud: public or private. NOTE: I am using Terraform on my local system running on Windows 10. For this particular setup you should have the following things installed on your machine: Git AWS Command Line Interface Terraform Let's start by building the code. provider "aws" { profile = "daksh" region = "ap-south-1" } provider is a keyword used to tell Terraform which cloud platform we are using. I have specified aws; this will help Terraform download the plugins required for the AWS cloud platform. profile is given so that you need not log in through the code. The cloud engineer just provides the profile name and the Terraform code will pick up credentials from the local system as shown in the figure: How to Configure a Profile aws configure --profile profilename Using this command you can set up your profile. resource "aws_security_group" "allow_traffic" { name = "allow_traffic" description = "Allow TLS inbound traffic" vpc_id = "vpc-59766a31" ingress { description = "http" from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "ssh" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "ping" from_port = -1 to_port = -1 protocol = "icmp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = "allow_traffic" } } The VPC ID can be found in your AWS account details. Ingress means the traffic that is coming in to our website. We need to specify this, keeping in mind which ports we want to keep open. I have kept 3 ports open: ssh, http, icmp. SSH for testing purposes, so that we can connect remotely to the AWS EC2 instance. HTTP so that traffic can reach the website. ICMP to check ping connectivity. Egress has been set to all ports so that outbound traffic originating from within the network can go out. 
resource "aws_instance" "webserver" { ami = "ami-0447a12f28fddb066" instance_type = "t2.micro" security_groups = [ "allow_traffic" ] key_name = "key1" connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/Daksh/Downloads/key1.pem") host = aws_instance.webserver.public_ip } provisioner "remote-exec" { inline = [ "sudo yum install httpd php git -y", "sudo systemctl restart httpd", "sudo systemctl enable httpd" ] } tags = { Name = "webserver" } } Now as a next step we have to download the required tools, so that our webpage can be deployed, managed and can be viewed by the client. So first we create an EC2 instance and specify all the required details. Then we create a connection so that we can do SSH on the the instance, to install the tools. Then using Provisioner we go to the remote system and using the inline method run multiple commands that are compatible with the Linux flavour I am using. (If you are using some other Operating System, then you must know the equivalent commands to do the same.) resource "aws_ebs_volume" "web_vol" { availability_zone = aws_instance.webserver.availability_zone size = 1 tags = { Name = "web_vol" } } Then create another EBS storage, to make the data persistent. Make sure that you make the EBS in the same availability zone in which the instance is launched. that is the reason I have used a better approach to tackle it by using the internal keywords that can be found on the Terraform docs. resource "aws_volume_attachment" "web_vol" { depends_on = [ aws_ebs_volume.web_vol, ] device_name = "/dev/xvdf" volume_id = aws_ebs_volume.web_vol.id instance_id = aws_instance.webserver.id force_detach = true connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/Daksh/Downloads/key1.pem") host = aws_instance.webserver.public_ip } provisioner "remote-exec" { inline = [ "sudo mkfs.ext4 /dev/xvdf", "sudo mount /dev/xvdf /var/www/html", "sudo rm -rf /var/www/html/*", "sudo git clone https://github.com/Dakshjain1/php-cloud.git /var/www/html/" ] } } Now to use a new block storage, first we need to format it, then mount it. Also it is not mandatory to create a Partition in a Storage device, without creating a partition also we can format. So, here we are going to use a similar kind of approach. Again using provisioner we use the inline method to run the commands: sudo mkfs.ext4 /dev/xvdf : create partition : create partition sudo mount /dev/xvdf /var/www/html : mount the partition : mount the partition sudo rm -rf /var/www/html/* : empty the folder because git clone works only in empty directory : empty the folder because git clone works only in empty directory sudo git clone https://github.com/Dakshjain1/php-cloud.git /var/www/html/ : git clone to get the webpage files resource "aws_s3_bucket" "s3bucket" { bucket = "123mywebbucket321" acl = "public-read" region = "ap-south-1" tags = { Name = "123mywebbucket321" } } Next we create the S3 bucket. S3 bucket is used to store the static data like images, videos and other graphics. This is required so that anywhere from the world, if the website is opened, the static contents also get loaded without any delay or latency. resource "aws_s3_bucket_object" "image-upload" { depends_on = [ aws_s3_bucket.s3bucket, ] bucket = aws_s3_bucket.s3bucket.bucket key = "flower.jpg" source = "C:/Users/Daksh/Desktop/CLOUD/task1/pic.jpg" acl = "public-read" } output "bucketid" { value = aws_s3_bucket.s3bucket.bucket } Now I have to upload an object into the S3 bucket created. First I have used a keyword depends_on. 
This is used because terraform code doesn’t work in a sequential manner. So to prevent a condition where the bucket is not yet created but object is trying to get uploaded we are doing this. Like this the object will start to upload only after the bucket creation is done. key is the name of the file that will show in the bucket and source is the path of the file I want to upload in the bucket. This file/object can come from anywhere Google, local system or somewhere else. variable "oid" { type = string default = "S3-" } locals { s3_origin_id = "${var.oid}${aws_s3_bucket.s3bucket.id}" } resource "aws_cloudfront_distribution" "s3_distribution" { depends_on = [ aws_s3_bucket_object.image-upload, ] origin { domain_name = "${aws_s3_bucket.s3bucket.bucket_regional_domain_name}" origin_id = "${local.s3_origin_id}" } enabled = true default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] target_origin_id = "${local.s3_origin_id}" forwarded_values { query_string = false cookies { forward = "none" } } viewer_protocol_policy = "allow-all" min_ttl = 0 default_ttl = 3600 max_ttl = 86400 } restrictions { geo_restriction { restriction_type = "none" } } viewer_certificate { cloudfront_default_certificate = true } connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/Daksh jain/Downloads/CLOUD/key1.pem") host = aws_instance.webserver.public_ip } provisioner "remote-exec" { inline = [ "sudo su <<END", "echo \"<img src='http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.image-upload.key}' height='200' width='200'>\" >> /var/www/html/index.php", "END", ] } } The main aim of creating an S3 Bucket is that there is no latency. So this can be achieved in AWS using Edge Locations. These are small data centres that are created by AWS all over the world. To use this mechanism, we use CloudFront service of AWS. I have created a variable where I have set a value to “S3-”.
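For reference, a configuration like the one above is deployed with the standard Terraform CLI workflow, run from the directory that contains the .tf file:

terraform init      # download the AWS provider plugins
terraform validate  # check the syntax of the configuration
terraform plan      # preview the resources that will be created
terraform apply     # create the EC2 instance, EBS volume, S3 bucket and CloudFront distribution
terraform destroy   # tear everything down when it is no longer needed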
https://daksh-jain00.medium.com/automated-website-deployment-using-terraform-525f1d3994df
['Daksh Jain']
2020-09-05 11:04:22.862000+00:00
['Terraform', 'Cloudfront', 'AWS', 'Aws Ec2', 'Terraform Cloud']
Publication Conduct Guidelines
Guidance Publication Conduct Guidelines Rules of engagement and cross-pollination at three levels Photo by Alejandro Luengo on Unsplash If you want to be a successful writer, you need to understand the rules of engagement for our publication and the overall platform. The purpose of this post is to make our publication guidelines crystal clear so that writers can get the best out of our services. ILLUMINATION is rapidly growing. 5,600+ writers are contributing to our publication now. We are delighted by the contributions of so many writers. This is an excellent position for us to serve our readers better. The critical success factors for this desired position are diversity, inclusiveness, and service excellence. Diversity and inclusiveness come with a cost. With growth come new challenges. In this post, I want to address these challenges and provide clear guidance to our writers at three levels. Let's start with the first level as it reflects most of the challenges. Level 1 — Fundamentals This is the most problematic and time-consuming level for our publication. This level ironically includes only 10% of writers. L1 issues consume 90% of our time. Therefore, I want to make the rules crystal clear. We love new writers and enjoy giving them a chance to grow, but some new writers take the privileges for granted. Publication services are privileges for writers, not rights. Different publications have different goals. Our goal is to create an opportunity for all writers to showcase their stories and gain an audience. Our publication is not commercial; volunteer editors run it. Keeping these basic facts in mind, we want to remind new writers to consider the following six items to get the best out of our services and become successful. 1 — Submit only your best work for our readers. Your stories must be the type of content that you want to read. Publications are not random content dumping places. They are designed to bring writers together and produce presentable content for readers. Readers expect reasonable materials. You are presenting your content to an audience who is interested in your experience, thoughts, and feelings. This audience is your customer base. You cannot take them for granted. When you are submitting your story to a publication, always ask these two questions: Why should this audience read my content? What is unique about my content? 2 — Caption your copyright-free photos with source links. This is one of the most time-consuming and stressful situations for our publication. Our editors are tired of writing a myriad of comments and waiting for responses from writers. Writers must understand this fundamental rule. This 30-second job is causing days of delay and frustration and is interfering with productivity. This week, we will start rejecting stories with photos that are missing captions and do not indicate the copyright status. This item is non-negotiable. We don't want to waste more time on this fundamental requirement. 3 — Do not submit plagiarized material. We only accept stories written by you. Medium does not accept plagiarism, and it is illegal in the industry. ILLUMINATION has zero tolerance for plagiarized submissions. Our editors have expertise in detecting plagiarism by using different methods and various tools. We have removed 100+ writers over the last seven months. Our analysis shows that 99% of these writers were non-paying members, possibly with multiple fake accounts. 
Our editors are paying more attention to these accounts, and we are assessing suspected accounts using a new system that we are developing. The ramifications are severe for writers attempting this illegal activity. We reject the plagiarized material. Report it to Medium and remove the writer from publication. We will not accept the writers, who commit this illegal activity, to our publication again. Please see the attached article for our position and details. 4 — Submit only one story a day. Considering contributions from 5,600 writers, it is not possible to cater to multiple submissions from each writer. We allow our prolific and experienced writers to submit up to three drafts within 24 hours. Since they submit high-quality materials, we don’t have to spend too much time on these types of submissions. However, until they improve their writing skills and gain acceptance by readers, some new writers are recommended to submit only one story a day. Especially writers who cannot handle the fundamentals mentioned in the points stated above that clearly state our guidelines to only submit one item in a day. Our editors do their best to improve and bring them to a reasonable state. We know that some readers prefer raw writing, but we still need to adhere to common quality standards. As an exception, new writers who want to submit curated stories can submit as many as they want. Curated stories usually depict quality and acceptance by a specific audience. We want to save 90% of our editorial time due to 10% of writers causing the grief. This post confirms that we add rigor to the process. 5 — Respond to your private messages. Medium allows communication between publications and writers only via private messages. If your private messages were turned off, you would not receive a notification. Please check your story for * sign, which indicates a private message. This sign can usually be on the upper part of your story. Editors may request you to fix an issue in your story. Please acknowledge and address the issue in a timely manner. We cannot hold stories for more than 24 hours in our queue. If no response is provided, we will return the story. 6 — Learn about the platform rules and expectations of your audience. We are working on a large and complex platform. I touched on the publication rules, but the platform rules are critical too. All writers must learn the rules of the platform and get to know their audience. The most important rule is to understand our audience and adjust our writing practice accordingly. I shared my experience and simplified the rules for new writers. In terms of platform rules, many new writers benefited from this story. This story focuses on fundamentals and some advanced topics for writers. Level 2 — Improvement and Progress 70% of our writers submit reasonable stories for our readers. I categorize them in Level 2 (L2). Our readers enjoy and appreciate these stories. Many readers included ILLUMINATION as their first page of reading browsers. Our followers are increasing rapidly. We now have around 45,000 followers. However, improvement never ends. Readers expect interesting and authentic stories with a personal touch. Introducing new topics and styles are essential to keep the current reader base and gain new readers. It is crucial for writers to take reader feedback seriously. Another important point for this level is conducting peer reviews. Feedback is important to all writers. We can learn by teaching and sharing our experiences. 
Connecting with new writers and paving the way for them can provide mutual benefits. We help our writers to collaborate via a mentoring initiative. We created a communication platform where you can collaborate with other writers. Our Slack group can be very useful to meet new writers and develop collaborative relationships. You can request access to Slack from this link. Please see the benefits and operations of Slack from this story authored by Tree Langdon. Tree is our Slack champion and help our writers generously to succeed. The next important point is creative ways to amplify our messages and create a new audience. ILLUMINATION invests substantial time and effort in amplifying stories of our writers using various communication and social media tools. Please see this resource to understand how you can use social media wisely. Level 3 — Excellence Working with accomplished writers is a pleasure. 20% of our writers submit outstanding stories delighting our readers. Some of these stories gain remarkable visibility and reading times. Some even go viral. To accommodate the needs of these top writers, we extended our services by creating a special collection. This collection is another publication called ILLUMINATION-Curated operated by the same editorial team. We invited top writers to participate in this initiative. The pilot program was completed successfully. 1,000+ writers contribute to ILLUMINATION-Curated. We have thousands of curated stories that we distribute daily to our readers using various methods. You can access our outstanding stories from this link. To further improve the cross-pollination among our level 1, level 2, and level 3 writers, we created several initiatives. The next initiative is a recognition program that we are developing. We will share details of this and other exciting initiatives in upcoming posts. The most recent two initiatives we introduced were for poets and fiction writers who appeared to be the most disadvantaged groups from visibility and readability perspectives. To address the issues and find effective solutions, we created two initiatives. We named them poetry and fiction clubs. You can find more information about these collaborative clubs. To help our L2 writers to transition to L3, we established a program and provided required guidance. Many top writers started submitting their drafts to ILLUMINATION-Curated. Conclusion In this post, I provided an overview of the challenges for three groups of writers: beginners, experienced, accomplished. The biggest issues happen at the fundamental levels, which consume substantial resources of our publication. We need to address the issues at the fundamental level with more rigor. These fundamental points are low hanging fruits and do not require extensive skills. These basic points are common sense and can be resolved with some attention to the rules. We are happy with our Level 2 writers and help them to grow rapidly and reach the next level. This growth requires mutual investment from both writers and the publication governance team. As the publication team, we take necessary measures and recommend the same focus to the writers. Our publication is a tool dedicated to your successful writing career. Level 3 writers delight our readers and make our governance team joyful. Our plan is to grow the number of our Level 3 writers. We can achieve this goal by empowering our Level 2 writers and successfully transitioning them to Level 3 status. We also plan a number of new initiatives to enhance vitality. 
Cross-pollination is an effective strategy for the sustainable growth of ILLUMINATION. We value our writers and aim for their success by helping them. Please help us help you by submitting the type of materials that you want to read. Thank you for reading my perspective.
https://medium.com/illumination/publication-conduct-guidelines-e17e63804525
['Dr Mehmet Yildiz']
2020-11-17 05:40:26.413000+00:00
['Relationships', 'Business', 'Leadership', 'Self Improvement', 'Writing']
‘Earth AD’: How Misfits Fashioned A Lasting Hardcore Punk Classic
Tim Peacock Pioneering US horror punks Misfits’ lengthy career hasn’t yielded chart placings and industry awards, but their influence has spread like a virus. The band’s colourful backstory includes splits, squabbles and enough hair-raising antics to service a series of biopics, but they’ve been championed by Metallica, blink-182 and Green Day, and their early albums, including 1983’s furious, hardcore-inclined Earth AD, have long since enshrined their legend. Listen to Earth AD on Apple Music and Spotify. “There was some really crazy s__t going on” Misfits were formed in 1977, in Lodi, New Jersey, by aspiring singer-songwriter Glenn Danzig, who reputedly named his new outfit after Marilyn Monroe’s final film, The Misfits. Line-up changes dogged the band’s early days, though their core personnel — Danzig and bassist Jerry Only (aka Jerry Caiafa) — remained constants as they cut their teeth playing local gigs. As the group were within hailing distance of New York, these early shows included gigs at punk mecca CBGB. As suburban kids, however, Misfits soon realised they had little in common with the famous scene’s future stars — or even their lifestyle. “Culturally, I wasn’t in tune to the scene or what was going on,” Jerry Only recalled in an interview with 100% Rock, in 2015. “Me and Glenn went to New York right after my senior year and our first gig was as CBGB, and I was still in high school. There was some really crazy s__t that was going on out of New York at the time, but I took a look and decided I didn’t want to be one of those guys. I mean, they were talented, but they didn’t care what they were doing. [For me] the objective was to live a long life, not to die young.” An anarchic reputation The New York punk scene did, however, help toughen up Misfits’ sound. After self-releasing their debut single, ‘Cough/Cool’, via their own Blank Records imprint, the band drafted in several different guitarists, eventually settling on Only’s brother Doyle (aka Paul Caiafa). Danzig also junked the electric piano he’d previously played and concentrated solely on being the group’s frontman. The new songs Danzig wrote during this period were inspired by B-movie horror and science-fiction films, while Misfits’ image also changed radically. Danzig painted skeletal patterns on his stage clothes, and Only began applying dark make-up around his eyes and patented the long, pointed “devilock” hairstyle that Danzig and Doyle also later adopted. Misfits built up an anarchic reputation over the next couple of years. During this time, they self-released several further singles, supported The Damned in New York and even spent time living with Sid Vicious’ mum, Anne Beverley, in London. Back in the US, they caught onto the era’s burgeoning hardcore punk scene, befriending Henry Rollins and playing live with Rollins’ band, Black Flag, in New Jersey. A thrashier, metal-infused punk Recorded during 1981 and released in March 1982 through Slash/Ruby Records, Misfits’ acclaimed debut album, Walk Among Us, captured the band’s original, NYC-influenced sound, with Danzig coming on like a scuzzier Joey Ramone and his group careering though roistering anthems including ‘I Turned Into A Martian’, ‘Vampira’ and ‘Night Of The Living Dead’. Misfits followed the album’s release with a national tour, during which their gigs became more intense and violent. 
Danzig eventually fired drummer Arthur Googy (aka Arthur McGuckin) after the pair repeatedly clashed and — following an endorsement from Henry Rollins — Misfits recruited ex-Black Flag drummer Robo (aka Roberto Valencia) to take his place. Robo’s powerful presence had graced Black Flag’s legendary SST debut album, Damaged, in 1981, and he was the ideal sticksman to power Misfits’ second album, Earth AD: nine salvoes of raw, visceral punk, delivered in barely 15 minutes. The subject matter (‘Green Hell’, ‘Wolfs Blood’, ‘We Bite’) still reflected Danzig’s love of horror and sci-fi, but the relentless assault of Misfits’ new music was a significant departure, with the songs’ furious BPMs pioneering a thrashier style of metal-infused punk that would soon be widely referred to as “hardcore.” “The archetypal horror-punk band” Short, sharp and wilfully abrasive, Earth AD was originally released in December 1983, but later extended to full album length and rechristened Earth A.D/Wolfs Blood, with the addition of the three tracks from the band’s 1984 non-album single, ‘Die, Die My Darling’. The record, first issued through Danzig’s Plan 9 label (it was later reissued through Caroline), led Rolling Stone to declare Misfits “the archetypal horror-punk band of the early 80s”, but the band couldn’t enjoy the fruit of their labours. Robo left before Earth AD was even released, and Danzig, too, became disenchanted with the band. After an anarchic final show, staged, fittingly, on 31 October 1983, he quit to pioneer more extreme brands of metal with future groups Samhain and Danzig, though he eventually reunited with Jerry Only for a handful of Misfits shows in 2016. “After Earth AD, there was nowhere to go” Earth AD, however, has since been cited as a landmark hardcore punk release. Green Day, blink-182 and Alkaline Trio have sung its praises, while Metallica have recorded covers of several tracks from the album. They laid down a frenzied medley of ‘Last Caress’ and ‘Green Hell’ for their 1987 release, The $5.98 EP: Garage Days Re-Revisited, and they later introduced Misfits to a younger generation by taking on ‘Die, Die, My Darling’ for their 1998 covers album, Garage Inc. “We didn’t have an agenda, and then we took the horror image and… came up with a monster with no feelings that was pumped up and ready to go,” Jerry Only said in 2015, reflecting on his band’s groundbreaking sound. After Earth AD, “there was no place to go”, Only continued. “We had taken it to the limit. We tapped out. That was it. What Earth AD did was launch the hardcore scene, the death metal scene, the thrash scene. All those other bands, Slayer, Anthrax, Megadeth, Metallica, all those bands that came after us used that as their guiding light.” Earth AD can be bought here. Listen to the best of Misfits on Apple Music and Spotify. Join us on Facebook and follow us on Twitter: @uDiscoverMusic
https://medium.com/udiscover-music/earth-ad-how-misfits-fashioned-a-lasting-hardcore-punk-classic-8004726a3d1c
['Udiscover Music']
2019-12-12 14:30:03.560000+00:00
['Music', 'Metal', 'Features', 'Culture', 'Pop Culture']
Who Still Owns America?
The legacy of Jeffersonian Democracy is still radical today. Photo by Ashton Bingham on Unsplash Jeffersonian democracy is characterized by an agrarian economy made up of small property owners; a decentralized government that derives its strength from the community; and individual responsibility for the environment. These tenets of Jeffersonian democracy continue to resonate among Americans who stand opposed to corporate agriculture, the ever-expanding hegemony of the federal government, and the loss of small-town values that seem to have vanished along with rural American life. An interesting book that looks at the role Jeffersonian democracy has in our own country is called Who Owns America? A New Declaration of Independence. The book, edited by Herbert Agar and Allen Tate, revisits the political theory first laid out by Thomas Jefferson, one of the Founding Fathers of our country. American Capitalism in Crisis In 1936, the year Who Owns America? was published, American capitalism was in crisis. Our country was in the midst of an economic depression, and everyone questioned America's ability to maintain its political, economic, and social system. The Great Depression presented the American government with its greatest challenge since the Civil War, and nearly everyone thought that radical reform was not only necessary but possibly America's only option. While many Americans supported President Roosevelt's "New Deal," the regionalist writers of Who Owns America?: Herbert Agar, Allen Tate, and John Crowe Ransom had other ideas about how to revitalize America. Jeffersonian Democracy as the Cure For these men, the contributing authors of Who Owns America?, the cure for America's ills was Jeffersonian democracy — the Agrarian ideal — the original American Dream. Agar, Tate, and Ransom argued for political and economic independence and saw dependence upon big government or big business as a threat to what they believed America was supposed to be. Most of the nation's intellectuals (including Roosevelt's "brain trust"), in contrast to Agar, Tate, and Ransom, favored a centralized economic plan grounded in modern technology. For these thinkers, the Agrarian ideal was an anachronism. For them, the power of modern industry had permanently transformed society into an industrial one where centralized cities eclipsed small towns and family farms. Who Would Control America? For these thinkers, the question was not whether Jeffersonian agrarianism was even possible — they believed that the industrial economy was inevitable. For them the only question was who would control it — the banks and industrialists or the government in Washington? Essentially, the authors of Who Owns America? suggest that there are more choices than "Shall we be ruled by the industrial plutocrats?" or "Shall we be ruled by an increasingly encroaching central government?" In 1936, in the midst of the Great Depression, Agar, Tate, and Ransom were making pretty radical statements about our country. Without a decentralized political and economic system and the widespread distribution of property, they argued, America would no longer be America (in the Jeffersonian sense). This was the only solution that would provide an authentically American response to our economic, political, and social struggles.
The Problem With Liberals Agar, Tate, and Ransom agreed with the liberals that the industrialists and bankers who had all but ruled the country (through laissez-faire economic policies) since the late nineteenth century had failed America and Americans. However, they thought that the liberals were not "liberal enough" in that they appeared to accept the inevitability of corporate finance and industrialization. For them, the liberals were only seeking to take the system as it was and merely transfer the control of that system from the plutocrats to the government bureaucrats. "From my point of view," Allen Tate writes, the Marxists "are not revolutionary enough" in that they "want to keep capitalism with the capitalism left out." Tate and his fellow contributors advocated a "third way" that would check both the communist inclinations of the political Left and the fascist inclinations of the industrial and commercial plutocrats. Relevance Today 2016 marked the 80th anniversary of the publication of this forgotten classic of American protest, and the book still challenges many of the assumptions that Americans have about our country. Americans should resurrect this forgotten treasure of American letters. More than at any other time since the Great Depression, Americans need to challenge what they typically think is "common sense" or "a given" about our economic system, corporate capitalism, and the Leviathan state. As Edward Shapiro writes in his foreword to the newest edition of the book: "The urgency of the questions posed by Who Owns America? has not changed since 1936, nor has the answer." For the authors of Who Owns America? that answer is a Jeffersonian vision of America based on the widespread ownership of property as the foundation for an independent citizenry that defends its individual liberty and lives a morally responsible life. This was a radical statement for Agar, Tate, and the other contributors to Who Owns America? in 1936, and it remains a radical statement and an agrarian ideal more than eighty years later.
https://medium.com/fat-daddys-farm/who-still-owns-america-5fa3c24f189c
['William Matthew Mccarter']
2019-11-02 23:38:55.018000+00:00
['America', 'Society', 'American History', 'Politics', 'History']
‘boygenius’ Review: A home for the eternally homesick
Out of all the complaints one could possibly have about an album, "too short" is probably the most flattering one for an artist. boygenius, the first EP from the trio of the same name, is an insight into the deepest parts of three anxious young minds who are as often eerily similar as they are vastly different. One could expect to come out exhausted by such an honest pouring out of emotions; surprisingly, the end result is quite at odds with that first impression. Each song is so painfully sincere, so agile in walking right on that fine line between the extremely personal and the universally relatable, that we have no choice but to beg for more. If tears are shed over the course of the album, they will not be ones of sadness but of pure catharsis. It is almost a small miracle in its own right — how can six short songs so perfectly fit the hole in their listener's heart? Boygenius as a band started out on tour, bringing together the already established talents of Phoebe Bridgers, Julien Baker and Lucy Dacus. boygenius as an EP was written and recorded over the course of four days. It seems that this project is characterized by ephemerality in more ways than one. The three singers harmonize: 'I am never anywhere / Anywhere I go / When I'm home I'm never there / Long enough to know' over the acoustic guitar of "Ketchum, ID". The EP miraculously captures the "somewhere" that doesn't exist for them. This is a collection of feelings that were never meant to be explored, too vague, too fleeting to truly know we're feeling them before they're gone. Putting words to things that have been left unsaid for so long almost feels dangerous. The slow rhythms, soft acoustic strings and understated vocal performances clash with a sense of urgency on the listener's part. This feels like a once-in-a-lifetime chance, and the knowledge that it will end soon is already devastating. The replay button has rarely felt so life-changing. This unconventional attitude towards the album also relates to one of its main motifs quite perfectly: the strange feeling of being deeply wounded right where it hurts the most and yet never wanting it to stop, or only when it gets so overwhelming that you feel like your life might be in danger. Bridgers expertly summarizes this contradictory sensation in her verse in "Souvenir": 'Always managed to move in / Right next to cemeteries / And never far from a hospital / I don't know what that tells you about me'. There's just enough hope shining through these words to make us stick around for the entire EP, a faint promise of healing that we never quite get to fully embrace. If one were not aware that boygenius was born out of three previously separate acts, it wouldn't be outlandish to think that we are listening to a band that has been playing together for years. Musically, the three voices melt into each other effortlessly, the respective styles of its members blending into a new yet oddly familiar one. The only thing giving us a hint towards the outside context of the band is the themes they choose to explore. "Bite the Hand", the album's opening song, is a bitter look at the odd relationship between the artist and the fan: the constant struggle between the artist's real self and the person their fans want them to be, the fight between the very concentrated love a fan feels for a specific artist and the diluted love the artist feels for what often feels like an indistinct mass. The EP is all about illusions, fragilities, ambiguities.
No matter how much heart the three singers seem to put in their songs, the very first minutes of the record warn us immediately: you may know their songs, and they’re thankful that they mean this much to you — but if you think that means you know them, you are sorely mistaken. “Me & My Dog” follows on the first song’s footsteps by indirectly addressing a panic attack that Bridgers had at one of her shows, once again making something uniquely relatable out of a very individual experience. The theme emerges again in the album’s forementioned closing song “Ketchum, ID”, a melancholic ballad about being on the road and longing for a place to call home. The brilliance of these songs lies in their unexpected universality. This is not an album by acclaimed musicians for other acclaimed musicians with similar tour experiences. This is for anyone who has ever felt anxious, lonely, insecure, too much, not enough, or all of these at once. The other half of the EP leans into more general concerns, often addressing an unnamed, possibly antagonistic ‘you’. ‘When you cut a hole into my skull / Do you hate what you see?’ asks Dacus on “Souvenir”, shortly followed by all three of them asking ‘Would you teach me I’m the villain’ on “Stay Down”. This ‘you’ could be anyone and anything: some suggested God, others an abusive partner. Searching to establish a fixed identity of this figure is ultimately meaningless, for everyone will relate to it in a different way. This establishment of the ‘you’ as a draining, all consuming entity gets reinforced in the album’s penultimate song “Salt in the Wound”. This song represents the peak in the album, both in its poetic lyrics and its increasingly intense vocal performances and instruments. ‘If this is a prison I’m willing to buy my own chain’, Bridgers almost yells out at the end of the second verse. Every new word, every accusation towards this hurting yet apathetic figure feels like both a release and a progressively tighter chokehold. We don’t want it to be over, but by the time the voices fade out, we’re happy to breathe again. With a supergroup such as this one, one could worry about a specific member taking the lead over another, or conversely about another member getting left out. While some of the songs are more distinctly led by either Baker, Bridgers or Dacus, the overall EP maintains a solid balance between all of them. In the end, the EP is indeed about balance more than anything else: the balance between light and dark, joy and sadness, love and hate. It is an EP that never promised what it ends up giving, which makes this gift all the more special. We might be concerned about the uncertainty of a follow up considering the transient nature of the project, but it would be unfair to discredit everything the album gave us simply because of our own selfishness. For now, the only thing left to do is to be thankful for the beautifully fucked up home boygenius gave us, if only for a moment.
https://medium.com/a-series-of-unfortunate-ramblings/boygenius-review-a-home-for-the-eternally-homesick-ef0e8202ab56
[]
2019-03-16 08:52:26.696000+00:00
['Music', 'Music Review', 'Boygenius', 'Album Review', 'Album']
A Practical Approach to Supervised Learning
Machine Learning is the art of teaching machines to make decisions from data. There are multiple algorithms that help computers analyze data and extract valuable insight, thus assisting them in making decisions on new data sets they haven't seen before. Most of these algorithms fall into one of these three categories: Supervised Learning: Algorithms that learn from labeled data and make predictions on data never seen before. Unsupervised Learning: Algorithms that try to find patterns and similarities in unlabeled data so that it can be clustered accordingly. Reinforcement Learning: Algorithms that are allowed to interact with an environment and whose performance is optimized by a system of reward and punishment. In this article, we will be focusing on the Supervised Learning (SL) method. As stated earlier, SL uses labeled data and gives its predictions on unlabeled data. Labeled data is data that has been assigned to one or more categories or given a particular value. For example, if there is data on all the students from different schools taking part in an interschool competition, the school name could be used as a label to categorize the students. In another case, if there are multiple houses with different areas and their cost depends on those areas, then the cost could be used as a label. Although the data was labeled in both instances, the label types were quite different. In the first example, the number of labels was discrete, whereas in the second one the label took decimal values and was thus continuous in nature. SL is further categorized on this basis: Classification: The value to be predicted is categorical and discrete. Regression: The value to be predicted is continuous in nature. Classification There are many cases where you would use classification methods to make predictions on categorical data. Examples include categorizing emails as spam or not spam, determining whether a cancer is malignant or benign, assigning plants or animals to a kingdom and species, etc. There are many different algorithms used in classification problems. Some of them are: Logistic Regression, Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), and Decision Tree. Here, I will only be demonstrating k-NN but will write about the others in upcoming blogs. Also, I will be using the scikit-learn Iris dataset (Fisher, UC Irvine), which contains three classes with fifty instances each. k-Nearest Neighbors Classifier It is one of the simplest and easiest-to-learn classification algorithms. The main idea behind a k-NN algorithm is that elements that are close to each other are likely to belong to the same category. This may or may not be true for all data points. It predicts the label of a data point by looking at the 'k' closest labeled data points. The unlabeled data point is classified into the category that holds the majority among the 'k' closest data points: k-NN Classification for k = 3 and k = 5 In the diagram above, you can see the predictions made by the k-NN classifier for two different values of 'k'. In the two cases, the classifier made different predictions. This might cause a stir in your mind about the value of 'k', but don't worry, I will talk about it later. Now that you have a clear intuition about this algorithm, let's implement it on the Iris dataset and see how it works. First, let me give you some insight into the Iris dataset. It is a very simple dataset that contains flower data.
It contains four features, namely petal length, petal width, sepal length, and sepal width. It also contains the target variable, which holds the flower categories, namely Versicolor, Virginica, and Setosa. Each of these three labels has fifty instances. First and foremost, it is good practice to import all the libraries you may need later.
from sklearn import datasets # importing datasets from sklearn
import pandas as pd # importing pandas with an alias pd
import numpy as np # importing numpy with an alias np
import matplotlib # importing matplotlib
import matplotlib.pyplot as plt # importing pyplot with an alias plt
Notice that I imported the datasets module from sklearn. That will help in loading the Iris dataset. Now, it would be a good idea to do an exploratory data analysis on the given data. In order to do so, you will have to first load the Iris dataset and assign it to some variable, iris in this case:
iris = datasets.load_iris()
This data is in the form of a Bunch, which you can check using type(iris). A Bunch is similar to a dictionary as it also contains key-value pairs. Let's check the keys of this dataset, which can be done using print(iris.keys()), and it outputs dict_keys(['data', 'target', 'target_names', 'DESCR', 'feature_names', 'filename']). The data key contains the values of all the features, namely petal length, petal width, sepal length, and sepal width; target contains the value of the target in numeric form (i.e., 0 for Setosa, 1 for Versicolor, and 2 for Virginica); target_names contains the names of the target classes (i.e., Setosa, Versicolor, Virginica); DESCR contains the description of this dataset, including its contributor, statistics, and more; feature_names contains the names of the features (i.e., sepal length (cm), sepal width (cm), petal length (cm), petal width (cm)); and finally, filename contains the location of the file from which it has been loaded. Let's extract the data and target from iris by assigning them to variables X and Y respectively.
X = iris['data']
Y = iris['target']
To perform further operations on the data, you should convert it into a pandas dataframe and assign it to some variable, say df, using:
df = pd.DataFrame(X, columns = iris.feature_names)
Doing a visual exploratory data analysis of the Iris dataset using pd.plotting.scatter_matrix(df, c = Y, figsize = [15, 10], s = 150) will give an output: As you can see, the diagonal consists of histograms of the features corresponding to the rows and the columns, and the non-diagonal plots are scatter plots of the column features against the row features, colored by their target variable. A correlation between the values of the features and the target variable is quite obvious. Let's plot a scatter plot of the petal length and the petal width individually and see this correlation clearly. It can be plotted using plt.scatter(df['petal length (cm)'], df['petal width (cm)'], c = Y): Here the correlation gets even clearer. The violet plot corresponds to Setosa, the blue plot corresponds to Versicolor, and the yellow plot corresponds to Virginica. Before training a model on our dataset, it is very important to split it into a training set, validation set, and test set. Here, I will be splitting my data into a training set (70% of the data) and a test set (30% of the data). Scikit-learn helps us to do this very easily using its train_test_split module. To do so, you will have to first import it from sklearn.model_selection using from sklearn.model_selection import train_test_split.
It returns four arrays: the training data, the test data, the training labels, and the test labels. We unpack these into four variables, X_train, X_test, Y_train, and Y_test in this case:
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.3, random_state = 21, stratify = Y)
The test_size argument decides the percentage of data to be assigned to the test set, random_state sets the seed for the random number generator that splits the data into train and test sets (setting the same seed every time will produce the same split), and stratify is set to the array containing the labels so that the labels are distributed in the train and test sets as they are in the original dataset. Finally, it's time to implement the classifier, but first we need to import it using from sklearn.neighbors import KNeighborsClassifier and then instantiate the classifier by setting the number of neighbors using knn = KNeighborsClassifier(n_neighbors = 6). Here, I started with the number of neighbors equal to 6 and assigned the classifier to a variable knn. To train the model, scikit-learn provides the fit() method, as we are trying to fit the data to our classifier, and then to make predictions on new unlabeled data, it provides the predict() method. We train the model on the training set produced using train_test_split, and later make predictions on the test set.
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 6)
knn.fit(X_train, Y_train) # training on X_train, Y_train
knn.predict(X_test) # predicting on X_test
This will output an array of the predicted labels for the test set:
array([2, 1, 2, 2, 1, 0, 1, 0, 0, 1, 0, 2, 0, 2, 2, 0, 0, 0, 1, 0, 2, 2, 2, 0, 1, 1, 1, 0, 0, 1, 2, 2, 0, 0, 1, 2, 2, 1, 1, 2, 1, 1, 0, 2, 1])
To check the accuracy of our model we use the score() method of k-NN on our test data and labels, knn.score(X_test, Y_test), which gives 0.955555555556. This is not a bad result for such a simple model. The value of 'k' is a big deal in this classifier. A smaller value of 'k' means the model is more complex, which can lead to overfitting (the model tries to fit every training point into the correct category), whereas a larger value of 'k' means the model is less complex and has a smoother decision boundary, which may lead to underfitting (the model misses even the obvious points). Somewhere in between lies a better value of 'k', neither too big nor too small, that avoids both overfitting and underfitting. (A short sketch at the end of this post shows one simple way to compare different values of 'k'.) Regression Regression is used when the target variable is a continuous value. A continuous value is a numeric one, such as a price expressed as a floating-point number, rather than a discrete category. There are many examples of regression problems, including predicting house prices, predicting stock values, etc. Since the target is continuous in a regression problem, accuracy cannot be used to evaluate the model. Instead, it is evaluated by the value of a cost function, such as Root Mean Square Error (RMSE), Mean Absolute Error (MAE), or another regression metric. To demonstrate regression, I will be doing analysis on the Boston Housing dataset from Kaggle, which can be downloaded from here. Scikit-learn also provides this dataset, and it can be used in a similar manner to the Iris dataset. I will be using the version downloaded from Kaggle. If you want to see how it is loaded using the in-built scikit-learn dataset, you can do so here. Now, after you have downloaded the dataset, you need to import Pandas and Numpy in your Jupyter notebook.
To load the data you can use pd.read_csv() and pass in the file name (in the form of a string) as an argument:
import pandas as pd # importing pandas with an alias pd
import numpy as np # importing numpy with an alias np
df = pd.read_csv('Boston.csv')
You can further check the data by using the head() method. By default it shows the first five rows, but you can pass the number of rows as an argument to view as many records as you like. It has 15 columns, of which the first column is for indexing, the next 13 columns are the features of the dataset, and the last column (i.e., medv) is the target variable, which is the median value of owner-occupied homes in thousands of dollars. If you wonder how I know this, I simply checked the documentation of the dataset, and you can do the same. As you can see, the data loaded using pandas combines the features and the target variable, but scikit-learn needs them in separate arrays. We can do this by dropping the medv column from the features and using it as the target:
X = df.drop('medv', axis = 1).values # dropping the medv column
Y = df['medv'].values # using medv as target
We used the values attribute as it returns NumPy arrays for us to use. Linear Regression Now it's time to select a regression model that will help in predicting values for unlabeled data. I will be choosing a very simple model called Linear Regression (LR). LR defines an optimal line that fits the given data as well as possible, and it assumes that any data it encounters later will follow the same pattern. In one dimension (i.e., a dataset which has only one feature) it is a simple line with parameters a and b: y = ax + b. Here, y is the target variable, x is the feature of the dataset, and a, b are the parameters to be learned. The best way to learn a and b is by defining a loss function and then minimizing it to get optimal values of the parameters. Well, how do we formulate a loss function here? As you know, linear regression tries to fit the data on a line, but in a real-world scenario all the data may not fall on the line. The best that can be done is to minimize the vertical distance between the line and the data points. Bingo! Here lies the formula for our cost function. Don't get confused by the term 'cost', as the loss function is also called the cost function or the error function. Now, this vertical distance is also known as the 'residual'. We can try to minimize the sum of the residuals, but that may lead to a lot of positive residuals cancelling out the negative ones, as you can see below: To avoid that, we minimize the sum of the squares of the residuals. This makes a sensible loss function, and the approach is commonly known as Ordinary Least Squares (OLS). Scikit-learn performs this operation when we apply its Linear Regression model and try to fit our data to it. This was the case when our data had one feature, i.e., one-dimensional data. For data with higher dimensions, scikit-learn tries to fit the data to the linear equation y = a1x1 + a2x2 + ... + anxn + b, so Linear Regression will have to learn n+1 parameters. Now that you know the logic behind the LR model, let's try to implement it on our Boston dataset.
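Before moving on to the scikit-learn implementation below, here is a minimal, self-contained sketch of the OLS idea just described. It is my own illustration rather than code from the original tutorial, and it uses synthetic one-dimensional data purely to show that minimizing the sum of squared residuals recovers the slope a and intercept b:
import numpy as np

# Synthetic 1-D data: a noisy line with true slope 3 and intercept 5.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 3.0 * x + 5.0 + rng.normal(0.0, 2.0, 50)

# Closed-form least-squares estimates of the slope and intercept,
# i.e., the a and b that minimize the sum of squared residuals.
a = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - a * x.mean()

residuals = y - (a * x + b)
print("estimated a:", round(a, 2), "estimated b:", round(b, 2))
print("sum of squared residuals:", round(np.sum(residuals ** 2), 2))
The estimates come out close to the true values of 3 and 5, which is the same squared-residual minimization that scikit-learn's LinearRegression performs (in its more general, multi-dimensional form) when we call fit() in the next step.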
First, we will split our data into a training set and a test set using the train_test_split module of Scikit-learn, as we did earlier in our classification example:
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.3, random_state = 21)
Then, we need to import the Linear Regression model from Scikit-learn using from sklearn.linear_model import LinearRegression and then instantiate it. Now, you can apply the fit() method on the training set and make predictions on the test set using the predict() method:
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(X_train, Y_train)
pred = reg.predict(X_test)
Unlike classification, we cannot use accuracy to evaluate the regression model. In the case of a linear regression model, the performance is evaluated using R², which measures the proportion of variance in the target variable that can be predicted from the features. To calculate R², we apply the score method and pass X_test and Y_test as arguments: reg.score(X_test, Y_test). Until now, I have been splitting the data into a training set and a test set. But to get a more robust evaluation of how the model performs on new data, we can use a technique called Cross-Validation (CV). Suppose we want to do n-fold cross-validation: we split our data into n equal folds. Then we hold out the first fold, fit our model on the remaining n-1 folds, predict on the held-out fold, and compute the metric of interest. Next, we hold out the second fold, fit on the remaining data, and compute the metric again. We continue to do so for all n folds. We get n values of the metric of interest (R² in our case). We can take the average of all of them, or calculate other statistics such as the median. However, it should be noted that the more folds we use, the more computationally expensive the evaluation becomes, as we are training and testing that many times. To implement LR with CV:
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
cv = cross_val_score(reg, X, Y, cv = 5) # 5-fold cross validation
print(cv)
It will output the R² values computed for all five folds in the form of an array:
[ 0.57280576 0.72459569 0.59122862 0.08126754 -0.20963498]
Note that I used plain linear regression only for demonstration purposes; it is rarely used like this in practice, where a regularised linear regression model is generally preferred. You can check the code for the classification model over here and the regression model over here. I hope this tutorial helped you get started with Machine Learning. You should now have a better idea about the Classification and Regression methods of Supervised Learning. I will be continuing this series and will write about Unsupervised Learning in my upcoming blog.
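As promised in the classification section, here is a short sketch of one simple way to compare different values of 'k' for the k-NN classifier on the Iris data used above. This is my own addition rather than code from the original tutorial; it reuses the same split settings shown earlier and simply tracks training and test accuracy as 'k' grows:
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()
X, Y = iris['data'], iris['target']

# Same split settings as in the classification example above.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.3, random_state = 21, stratify = Y)

neighbors = np.arange(1, 16)  # candidate values of 'k'
train_accuracy = np.empty(len(neighbors))
test_accuracy = np.empty(len(neighbors))

for i, k in enumerate(neighbors):
    knn = KNeighborsClassifier(n_neighbors = int(k))
    knn.fit(X_train, Y_train)
    train_accuracy[i] = knn.score(X_train, Y_train)  # tends to be high for small 'k' (complex model)
    test_accuracy[i] = knn.score(X_test, Y_test)     # usually peaks at an intermediate 'k'

best_k = int(neighbors[np.argmax(test_accuracy)])
print("value of k with the best test accuracy:", best_k)
A very small 'k' typically gives near-perfect training accuracy but weaker test accuracy (overfitting), while a very large 'k' lowers both (underfitting); the 'k' that maximizes test accuracy sits somewhere in between, which is the trade-off described in the classification section.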
https://towardsdatascience.com/a-practical-approach-to-supervised-learning-63a9e9075b17
['Shubhanker Singh']
2020-02-08 19:31:46.987000+00:00
['Machine Learning', 'Python', 'Supervised Learning', 'Classification', 'Data Science']
The King Of Christmas
Old Nick was sitting at his desk looking over a section of the list his secretary handed him. The rumor was he checked it twice. In reality he checked it far more often, and he had the final call. The price of being the boss. He sat there examining one of the names. The kid was on a bit of a bad streak lately but he was pretty good overall. He put a mark next to his name to give one last check before the final list was printed. He'd have to be good for a few more days. It was close to game time as it was and the elves wanted the list earlier every year. He kept going through the list and finally set it down and went back to the Polar Daily Brief his advisors wrote up for him every day. The weather report was the most trouble. The North Pole was losing territory every year. Soon, he'd have to start fortifying the borders with magic or risk their operation being discovered. I'd have to stop giving coal to naughty kids and switch to solar panels, he thought to himself. The idea made him chuckle, but it also had a little merit. He'd run it by the Head of Meteorology. He'd have to run it by the Ethics Committee too. He was to take no part in the affairs of the world, but sometimes there was a little wiggle room. There was a knock at the door. "Come in." Gerald, his chief of staff, came through the doorway. "Mr. Claus." "Yes Gerald?" He loved Gerald but the old elf still refused to call him by his first name. "Sir, I have a rather delicate matter to discuss," he said as he shut the door. "Okay." "It's the Council sir, it's a rather delicate matter." "Spit it out Gerald." "They know about Mrs. Claus." Oh god, he thought. "Is it about the extra cookies?" "The what sir?" "Nothing. Nothing. Never mind. What do they know?" "They know she's pregnant." "So?" "Some members of the Council are making noise about it." "Why would they care?" "They're worried about passing down the title of King of Christmas. One of the members is saying you're trying to create a dynasty." Nick stood up, "I earned that title. They bestowed it on me. I've had that title for a thousand years. Which one is making the fuss?" Gerald was quiet. "Is it the Easter Bunny again? That rabbit is such a pain. I told him a hundred times. It's not my fault no one cares about Easter. He's been trying to make moves on Christmas for a decade." "It's not…not him." "Who is it?" "Sir." "It's him. Isn't it?" Nick said, his eyes narrowing. "Yes sir." "Assemble the Council. I want everyone there. They have ten minutes. Anyone not there is declared an enemy of the North Pole." "Yes sir." "I want the Elf Corps strapped up and on alert. If he wants to make a power move I'm going to be ready. If Frost shows up I'm going to rip him in half this time." "I'll take care of it sir." Nick stood there rubbing the space between his eyes, "Thank you Gerald. Does Mrs. Claus know?" "Not yet sir." "I'll call her and let her know. I want Christmas Castle on lockdown." "It's already done sir." "Good man Gerald. I'll be there in a moment." "Aye sir." Old Nick tried to keep from flipping his desk over. He wished he could get a workout in to get rid of the nerves. People thought his famous girth came from the millions of Christmas cookies, but in reality it was to help with his powerlifting. A few times he'd even competed in World's Strongest Man under an assumed name. He meant it when he'd said he'd rip Jack Frost in two. Old Nick walked into the Council room in his best red suit. Sometimes you have to keep up appearances.
The rest of the council was there too, the Easter Bunny, the April Fool, Cupid, all of them. Baby New Year was there too. The baby always gave Nick the creeps. He looked like an infant but spoke with a rich, deep voice. It was rather unnerving. "Mr. Claus," they all said in unison. "Take your seats." When everyone was seated, Nick started, "I called this meeting because I want to make some things clear." "As you may have heard, and believe me I will find out who leaked it, my wife is pregnant. Some of you have been wondering out loud what will happen to the King of Christmas title when my time comes. I will make this very clear. I WILL not be passing down my title to my son. When my time comes the Council will choose the next king just as it did a thousand years ago. I am not trying to build a dynasty of any sort…" Nick stopped talking when he could see his breath. "What if we didn't want to wait?" a voice said from down the hall. "Frost," Nick said. "In the flesh," Jack Frost said as he walked into the council chamber. "You want to share something with the Council?" "I do. I don't see why we have to wait. I think the Council should have another vote." "Only the King can bring a matter before the Council." "Then maybe it's time for a new king," Frost said, raising his arms. A handful of Ice Giants appeared in the room. Nick clenched his fists and felt the old magic flow through him. If need be he would end Frost right here and now. "Careful what you say Frost." "Why should I? Under your lead, belief in us is fading. Children lose their sense of magic earlier and earlier. You let technology steal our place old man. It's time for a new king." "And if I don't step aside?" "You need me Claus. Without me the North Pole will go green. I'll shut winter down completely." Nick raised his hand and the ice giants turned to snow. "I have more than enough magic Frost. I'll keep winter going myself if I have to." The Elf Corps broke into the Council room, dozens of them. Outside, the building was surrounded by the Elf Cavalry, hundreds of elves mounted on reindeer. "I think it's time to leave Frost." "I see how it is old man." Jack Frost snapped his fingers and disappeared. Nick turned to the Council, "I'm declaring a state of emergency. Lock down the North Pole. Secure each of your individual realms. This means war." The next night, Old Nick sat at the head of the table in the situation room. "What's the latest report?" Nick asked. "He's moved far faster than we thought. He must have been planning this for a while." Nick scratched his massive white beard. "I bet his little display last night was just a diversion to move his forces into position. He's amassed quite an army," the Elf Sergeant said. "Ice giants?" Nick asked. "Yes," the Head of the Elf Corps said, "and Ice Demons as well as a troop of Abominable Snowmen." "How long will the border hold?" Nick asked. "Not long sir," an Elf analyst replied. "It's depleting rapidly. It was a smart move. With Christmas so close, most of the magic is focused on production. This is the weakest time of the year." "That doesn't seem right. Why aren't we compromised more often at this time?" Nick asked. Everyone was silent. "Ahhhh, I see. Because usually someone is stirring up winter storms to hide the North Pole and keep everyone away." The elves nodded. "What if I fortified the border with my own magic?" Nick asked. Gerald, his Chief of Staff, answered, "We've had our best people look into it sir.
It would stretch your magic too far and it would put too great a strain especially with Christmas so close you’re using the maximum now.” “Humbug,” Nick swore, “I should have dealt with Frost sooner. I knew he’d pull something like this. He’s wanted to be King of Christmas for a long time.” “Probably because no one makes movies about him,” one of the analysts said getting a chuckle from the room. Nick didn’t chuckle though. He was too busy staring out the window. His stare deep and focused in an unusual way. His eyes glowing a faint grey. Even though he left most of the monitoring to the elves he still retained his ability to see anywhere. Normally, looking in on a member of the Council would be sacrilege but that changed when Frost rose in rebellion. He saw Frost at the head of the army. Frost, his mind reached out. Nick? Jack thought. End this madness Frost. You won’t win. Even still old man. I must try. Stop this Frost I beg you. I can’t. Not now. Things have gone too far. It’s not too late. Stand down and things can go back to normal. No. Frost, I looked at you as a son once. And in you I thought I had a father. You have your own son now. Is that the root of this? End this now Frost and I’ll name you heir. You’ll be the next King of Christmas. … Frost? … It’s the crown I want now. It’s King or nothing. You can’t be serious Frost. I’ll look for you on the field old man. And with that Jack Frost closed off his mind. “Sir? Sir?” Gerald was asking Nick. “Yes?” Nick answered. “The staff was waiting for the next order.” “Oh yes. Excuse me. Are the elves assembled?” “Yes sir. But why?” “Because I’m asking toymakers to become soldiers the least I can do is ask them in person.” “You don’t need to ask sir. The elves will follow you anywhere.” “Be that as it may Gerald, their obligation to the King of Christmas only extends to protecting the castle and preparing for Christmas.” “We don’t follow a title sir. We follow you.” Nick put his immense hand on the elf’s shoulder. “Thank you Gerald. Should something happen to me. You’re going to be running things. I’ve informed the Council that you will be Regent until they can select a new King.” “Sir, if something happens to you I won’t be running anything. I’ll be laying in the snow next to you.” “Gerald…You can’t. I need you here.” “Sir, with all do respect I will be by your side or I’ll ride out to face Jack Frost right now. The Council can run things.” “I thought you might say something to that effect,” Nick said “Allie bring them here.” A young elf handed Nick a gingerbread man shaped badge. He then pinned it on Gerald’s chest. “That is why I’m making you the First Knight of Christmas. One of Santa’s finest. You will be by my side.” Gerald did his best to hide his tears, “Thank you sir.” Nick nodded and walked to the atrium where the elves were waiting. “Friends,” the large man said, “I’m sure you heard at this point that Jack Frost is out there doing his best to cross our border. I stand before you now with the Ice Prince at our door. This couldn’t have happened at a worse time. Christmas is days away and the enemy is here to take it all. We can’t let the world down now. “Christmas must be saved.” “Christmas will be saved,” he repeated in a gentler voice. “And I see before me the people that are going to do it. I stand before you not as your king but as your friend. None of you have an obligation to go out there. No one will blame you if you choose to stay. I ask those who do to stand beside me to face the cold. To face Frost. 
“I stand here for the North Pole. “I stand here for you. “Now who is with me?” The entire crowd erupted in a cheer of “For Christmas!” Not long after, the elves assembled on the field in front of Christmas Castle. Thousands of elves massed, ready to defend the kingdom. Some were fighters. The Elf Corps was there and armed; as a security force, they were trained to defend the North Pole. The Elf Cavalry was there too, mounted on reindeer. The rest were armed with whatever they could manufacture. Nick walked out, the elves kneeling as he went by. He thought it wasn't necessary but the elves liked ceremony. He traded his red suit for bright red armor with a giant antlered helm. He looked more like a demon than St. Nick. But his red hat was also fitted over the helmet. His sled was fortified and the reindeer armored, and he took his place at the front of the army. They heard a massive crash as the border for the North Pole finally fell and Jack Frost and his army marched forward. Nick held up his hand to keep his army from charging. He wanted to see what Jack would do. Sure enough, Frost jumped to the front of the army, conjured a winter maelstrom, and sent the humungous storm right at Nick's forces. Nick saw a flicker of fear in the elves. They were brave creatures but anyone would fear the storm coming at them. Nick dismounted the sled, his eyes glowing a fiery white and his muscles clenched. He bent down and placed his hand on the snow. Jack Frost was a mighty wizard and the Prince of Ice, but Nick was the King of Christmas and it was high time he taught his old apprentice a lesson. Frost could do parlor tricks but Nick commanded magic far older. A great wind emanated from his hand and burst across the field, destroying Jack's storm and smashing his forces, taking out nearly a quarter of the Ice Prince's army. "For Christmas!" Nick yelled, and he and the elves charged. The war was on. It was a sight to see. One day they'd write stories about it. The last charge of the elves. Reindeer snorted, fog emanating from their flared nostrils as they breathed hard and sprinted towards the enemy. They carried armored elves with lances candy-striped red and white. They were ready for maybe their final charge. They'd sworn an oath to protect the North Pole. They spread out hoping to overtake the flank of the Ice army. The main force was mostly regular elves. Workers really. They made toys. They worked in logistics. There was an environmental division that made sure the North Pole remained under snow. They were all there. All together. They answered the King of Christmas' call. They would fight and die for him, even before his speech. And there he was, at the head of the army. Nick, Kris, Santa: the King of Christmas went by many names, but now he wore a new title, commander. He traded in his red suit for red armor. And his hat for a great antlered helm. He was a fearsome sight. This was not a time for pleasing children. He had a war to win. Nick's breath spewed from the helmet in a ghostly haze. He looked across the shrinking distance between him and Frost's army. He couldn't find the Ice sprite. But he was there. Nick could feel it in his bones. They would clash at any second. Closer. Closer. That's when Jack Frost revealed himself. High above them Frost rode a massive ice dragon. The huge creature arched over Nick's army and blasted it with an icy breath, freezing scores of elves and taking out vast swathes of Nick's army. Nick knew what he had to do.
He was the only one with flying reindeer at the moment, their only aerial defense. His sleigh lifted towards the sky in pursuit of Jack Frost. He watched in horror as the elf army smashed into the army of Ice giants, frost demons, and abominable snowmen. The elf line wavered for a moment but then held. They were able to get a small push when Gerald and the Knights of Christmas smashed into the Ice army's flank. There were screams and cries as the two sides clashed. For now, the battle for Christmas was even. Jack Frost wrought havoc with his ice dragon. Freezing dozens at a time. But Frost stopped his attack when he saw the sleigh. Nick maneuvered the sled so he was upside down over Frost and the dragon's back. A sword made of ice appeared in Jack's hand. Nick grabbed the war hammer the elves made for him. It was made from Snowsteel, but shaped like a candy cane. They fought furiously while flying through the air, hammer against sword. Steel against ice. Nick jumped from the sleigh and he and Jack fought while perched on the ice dragon's back. They traded blows for a few beats. Nick let Frost's attack hit his armor while he continued to press his own attack. The ice sprite dodged and parried Nick's blows. Nick was ancient, but his blows were mighty. Each swing would have cleaved a boulder in two, but the ice sprite was fast and strong. Finally, Nick knelt, appearing to need a break. "You're getting slow old man," Frost said. Nick grabbed one of the dragon's spikes. "Oh no Jack. I was just buying time." Jack spun in time to see Nick's sleigh and nine armored reindeer smash through the ice dragon's head. They both held on while the massive creature fell from the sky. They crashed in the middle of the battle. It came to a halt as Nick and Frost regained their senses. Nick stood. His ears were ringing and he couldn't find his helmet. Neither side advanced, waiting for a cue from their commanders. "You ruined it," Frost yelled, "You ruined everything. I was going to win old man, and you still couldn't let me have the victory. And you brought the elves. What am I to do when I take over? How can I be the next King of Christmas? You robbed the whole world of joy old man." Nick picked up his hammer, "Are you so broken Frost that you can't see you're the cause of this? I didn't bring the elves. They are not here out of obligation. They came willingly. My army isn't bound to me like yours. I was chosen King because I wanted to lead, not because I wanted followers. You can't force people to follow you Jack." Jack's eyes glowed blue, and then his body lit up as well. "Jack, stop it," Nick said, feeling the power coming from him. "Too late old man," Frost said, his voice as cold as the ice he made. Frost turned and a great blue and white flash covered the elvish army. When the mist cleared the world was silent. Nick finally said, "Oh Jack, what did you do?" Nick walked among the elves. His small friends were frozen. Stuck under a thick layer of ice. He found Gerald still atop his deer. He put his hand on his old friend's shoulder. He felt something he hadn't felt in a long time. A deep, cold-hearted rage. It grew and grew. And grew. All the years of joy and laughter and children's smiles. Frost had stolen it. For his own sense of glory. His eyes glowed a harsh white light. He reached down and felt every bit of magic down to his bones. This would end here even if it ended him. Jack made another sword and rushed Nick but Nick barely noticed him. Frost's sword disappeared and Nick tossed Frost aside.
Nick’s voice roared with the fury of a hundred storms. “I’ll deal with you after.”

Nick looked upon Frost’s army. Hundreds, thousands of frost demons, ice giants, and snowmen charged, and Nick swung his hammer with a swing so violent that when it hit the ground, the earth shook. There was a pulse of pure white light, mixed with snow and fury, and Frost’s army was no more.

He then turned to Frost. The ice sprite attacked again, but each time he was beaten back. His swords were broken. His magic attacks halted. Nick swung his hammer several times, and Frost felt every blow. Finally, Nick stopped and raised his hand. “You will no longer be a threat to Christmas.” And with that, Frost disappeared into snow.

Nick turned to the elf army and raised his hand again, unfreezing his friends, then fell to his knees. The elves rushed forward to grab their king. They hugged him and tried to keep the now weary Nick awake. Gerald fought through the elves and hugged Nick.

“Sir, you’re okay.”

“You can just call me Nick, Gerald.”

Gerald thought a moment. “Doesn’t feel right, sir.”

Nick laughed. “Fair enough.”

“Sir, I don’t want to bring this up, but…”

“It’s nearly Christmas Eve, Gerald, I know. Come, let’s get back to the castle.”

Once back at the command center, Nick addressed the elves. He stood. “I’m proud of you all, but unfortunately we have work to do. We can rest after.”

And so they went back to work. Nick was back in his office going over the list, but on the shelf there was a snow globe, and in the snow globe there was a very small Jack Frost stuck in the glass. Right where Nick could keep an eye on him.
https://medium.com/the-inkwell/the-king-of-christmas-5c10a69ffbce
['Matthew Donnellon']
2020-12-13 00:05:12.257000+00:00
['Creativity', 'Relationships', 'Short Story', 'Life', 'Fiction']
We need to talk about BET’s gross disrespect of Black Women
We need to talk about the messiness of Black History Month 2019, and how much we miss the gorgeousness that bedazzled last year’s celebration, when the vibrancy of the months leading to the highly-anticipated release of the cultural phenomenon that was and still is Black Panther provided the primal unification of Blackness that supremely attacked the disease of White supremacy for a season.

Fast forward to the present, and the allotted month that was supposed to uplift and beautify with collages of ancestral rhythms — recalling the currency of Blackness overtaking the globe with Wakandan authority — has been tragically hijacked by the venom of historical atrocities that have been vengefully reactivated to stifle the soundtrack of Black lives with the static of how we don’t matter.

The media has played a vital role in this circus of shitty fare being swarmed around platforms that expand wide enough to contain the eyesores — plunging sensitive minds into the tunnels of discontent with the internalization of triggering content that effectively blasts through with robotic placements. News organizations and entertainment outfits are desperate for the material that causes users to throw fits of rage, and there’s simply no end to the fiesta of hate that must remain initiated for the clicks and memes that ultimately set up the editorial calendar for the weeks ahead.

And so we must bitch about the White-owned BET, and how it no longer caters to a Black audience with the dignity and respect that is required for a massive company of that scale that claims the slogan — Yes to Black. The white dude that owns Viacom, the company that bought BET back in 2001, Sumner Redstone, must’ve been thrilled with his ambitious acquisition, which basically stripped the once Black-owned entity into fragments of unrecognizable bits that have been too uneven and jagged to fit into the cohesive collage of coherency with reliable consistency.

Sumner Redstone: Killer of Black Entertainment Television

The quality of programming and summation of the Black experience hasn’t been up to par, and the general consensus over the years has been relegated to the tremendous betrayal of a network that can’t seem to supply the ammunition that elevates the uniqueness of why entertaining while Black isn’t a trend — because it’s simply how we do.

There’s also the sinister side of BET, which was revealed during a noteworthy annual event that gathered all the notable Black women of the industry under one roof in an intimate setting that was helmed by former Obama top aide Valerie Jarrett, with special guest of honor — Michelle Obama. The former and beloved first lady was offering a sneak peek of her upcoming bestseller Becoming, and as expected the occasion featured journalists representing reputable outlets, as they carefully took pertinent notes to document the regal ambiance.

Washington Post fashion critic Robin Givhan was part of the illustrious group, and after it was over, the 2006 Pulitzer Prize winner for Criticism promptly did what anyone in her station would do, which was to publish the article that highlighted what she had witnessed. As soon as the post was made public, the attention it garnered was mostly negative, as the other Black women attendees accused Givhan of breaking the code by shattering the sacredness of the space that only agreed to accommodate the chosen few, on the basis that their lips would stay sealed in order to protect Black sisterhood, and the Black woman who graciously divulged the words of wisdom.
The avalanche of condemnation was swift and furious, and before long, Givhan was the targeted victim of the hatefest that inexplicably stemmed from her desire to fulfill her job duties — accordingly. BET Networks hosted the festivities, and once it was clear that Givhan had incited the ire of blue-checked members of Twitter, the next thing to do was to not only unceremoniously throw her out of the ongoing conference, but also cancel the panel she had been invited to moderate.

The interesting aspect of the mind-boggling fiasco was the fact that BET had also posted clips from the event that matched what Givhan had shared in her brief and insightful essay. And so in order to avoid the penalty of replicating the actions of the person that was getting abused for reasons that didn’t add up, BET hastily deleted the damning tweets from its feed.

That disgusting display of unprofessionalism was a disheartening peek into the operations of an organization that professes to be what it can’t seem to muster or outrightly refuses to attain. How disgraceful to watch a professional Black woman of that stature being unfairly mutilated at the hands of a network that’s supposed to protect her interests and integrity at all costs. This is when the difference between “White-owned” and “Black-owned” is abundantly clear.

And apparently the ritual of shaming Black women, while encouraging Black people to turn against each other to the delight of gawking Whiteness, is absurdly irresistible, as we’re feted with another episode that involves two female rappers with a scarred history. It’s no secret that Nicki Minaj and recent Grammy winner Cardi B have been embroiled in a tumultuous rollercoaster that has supplied endless engagement for fans on both sides of the aisle. The epic battles have also given gossip sites countless opportunities to leverage the contentious vibes between two women of color, who have amassed an enviable level of success, but unfortunately can’t enjoy the rewards without annoyingly bumping heads.

The editors gauge the temperature of these high-stakes wars, and conclude which of the celebs is on the losing end, and once that’s established, the next item on the agenda is to vilify the “loser” with venomous headlines begging for clicks, while exalting the “winner” with dutiful praise. This explains why Nicki Minaj has been the target of relentless harassment by loyal fans of her prized nemesis, who claim that the undisputed Queen of Rap is pathetically envious of her much younger counterpart, who is being hailed as the best thing since “being the best” evolved into something that is heavily reliant on the byproduct of our discontent.

Online journalism has surrendered to the task of creating the climate of hostility that is borne out of the need to stifle the duty of presenting the facts as they are — for the benefit of delivering the goods and services that cater to the wiles of the blue-checked crowd and the appetite of robotic followers — who will mob any publication that dares counter the overwhelming vote.
Perhaps that’s the reasonable explanation for how BET joined forces with the maddening crowd by releasing a tweet that was meant to congratulate Cardi B’s latest coup as the first female rapper to win a Grammy for a rap album, while covering all the bases with an offensive swipe at Nicki Minaj for her major loss that, once and for all, seals her fate as the unfortunate and bitter “has-been.”

We really need to discuss BET’s utter disrespect of Black women, and how this public display of gross negligence and standard unprofessionalism by an organization that rejects its roots in favor of Whiteness — by nefariously dividing an already vulnerable community — has to be permanently thwarted sooner rather than later.

Twitter users didn’t take kindly to the ill-fated tweet, which was recklessly posted in the hopes of inciting the ire of fans, who were armed and ready for another round of an exasperating game that leaves everyone bruised with no victories to record. It was a needless attack — formulated out of spite and the pathetic need to celebrate the wins of Cardi B at the expense of a seasoned talent who wasn’t nominated and didn’t even attend the glitzy ceremony.

How is it that a recognizable company that conveniently touts its commitment to “all things Black” can in the same breath weaponize that power against an industry heavyweight, and still expect her to show up and perform at an upcoming festival that is ironically titled — BET Experience? What the fuck?! It’s no surprise that Nicki Minaj withdrew her consent to participate in an Experience that no longer deserves the courtesy of her presence or demonstrated trajectory that spans over a decade. And thankfully, a host of scheduled performers have faithfully followed suit.

It’s disturbingly obvious that BET is flailing when it comes to its adherence to the mission statement that was meant to celebrate and exalt Blackness, and the Black women who contribute immensely to the noted landscape, with their special brand of creative genius that enhances the vibe of our vibrant narrative. It’s quite disappointing to be paralyzed by the vicious antics of each of the departments that comprise an ailing institution, which literally took forever to appropriately respond to the resounding disapproval of naysayers, who considered the offensive tweet that once again publicly pitted two Black women against each other as the ultimate betrayal.

The apology was just a little too late because the irrevocable damage was epic, and it still resonates with the bitterness that illustrates how Black entertainment under the tutelage of White owners can permeate the channels of discovery, challenging the myth that Black people will accept any form of engrossing content, even if it incapacitates the road to progression. This recent snafu is particularly alarming during Black History Month, when all outlets that are programmed to do right by us with exemplary measures are required to go above and beyond high expectations, with quality offerings that match what we are and what the young and impressionable aspire to be.

BET’s lack of respect for the Black Experience and the Black women who are dedicated to upholding their end of the bargain, with the unrelenting support of those who are tasked with those provisions, is unforgivable and blatantly criminal. And we need to continue talking about what that means, and how it will affect the interaction between Black Entertainment Television, and those of us who won’t be partaking in future installments.
https://nilegirl.medium.com/we-need-to-talk-about-bets-gross-disrespect-of-black-women-258f1b79d3d3
['Ezinne Ukoha']
2019-02-13 12:52:00.270000+00:00
['Media', 'Nicki Minaj', 'Culture', 'Black Women', 'Music']
There Is No “Safe Amount” of Cigarette Smoking
There Is No “Safe Amount” of Cigarette Smoking

New research indicates that even a few cigarettes a day can cause extensive damage

Photo by Sajjad Zabihi on Unsplash

There may be a perception among many in the general public that, while heavy smoking is recognized as a problem that causes lung disease, smoking “a little bit” should be OK. “I only smoke a few cigarettes a day, Doc,” is what I have been told by many patients. Well, recent research has shown that even this can cause extensive, lasting damage.

Researchers studied data on six US population-based cohorts included in the NHLBI Pooled Cohort Study, looking at smoking rates and the decline in lung function. After studying over 25,000 patients, they found that, even in people who stopped smoking years ago, lung function continues to decline at a faster rate than in never-smokers. Now, everyone experiences a decline in lung function with age, but this decline is greatly accelerated in people who smoke. Even in people who smoke less than 5 cigarettes a day, lung function declines at a rate of around five times that of former-smokers.

The researchers concluded:

Former smokers and low-intensity current smokers have accelerated lung function decline compared with never-smokers. These results suggest that all levels of smoking exposure are likely to be associated with lasting and progressive lung damage.

The study was published in the journal The Lancet — Respiratory Medicine on October 9, 2019. In other words, there is no “safe” amount of smoking. Even smoking less than 5 cigarettes a day can cause lasting lung damage. And when the lungs are damaged, they are frequently permanently damaged.

Given all the bad things about vaping, one would think smoking is better. As a lung doctor, I would say, “Umm…no.”

“Well,” someone may say, “it is better than vaping,” given all the horrible things that have been coming out about vaping. Indeed, I have been quite vocal against vaping. As a lung doctor, however, I would have to say, “Umm…no.”

The lungs are beautiful, exquisite organs. They are very sophisticated, and they are excellent at taking the life-sustaining oxygen from the air and transferring it to the blood, where it goes to the rest of the body. That said, the lungs were only designed to process air — not cigarette smoke, not air pollution, and certainly not vaping chemicals/oils. Inhale anything into the lungs other than air, and you are placing your health at risk.

The opinions expressed in this post are my own and do not reflect those of my employer or the organizations with which I am affiliated.
https://drhassaballa.medium.com/there-is-no-safe-amount-of-cigarette-smoking-f9dc4e62494e
['Dr. Hesham A. Hassaballa']
2019-10-11 20:39:21.026000+00:00
['Smoking', 'Vaping', 'Health', 'Medicine', 'Research']
Pichwai art: Adding colour to the Indian culture
Although the majority of Pichwais are associated with particular festivals, a large number of hangings have seasonal themes and are not assigned to special days. These capture the mood of the season and provide relief from the scorching heat or piercing cold. They may be hung at any time during the appropriate season. While the paintings depicting summer have pink lotuses, the paintings depicting “Sharad Purnima” comprise a night scene with the bright full moon. The seasonal restrictions are also closely followed. During the winter months, the scenes are not painted, but embroidered on heavy fabric or patterned in brocades. During the hot summer days, the lightweight painted Pichwais with scenes of shady groves and cool streams are used in the shrine; Shri Nathji is surrounded with scenes of dense shaded trees, leaves in abundance, and water or lotus ponds. With the coming of the monsoon season, the Pichwai is represented with peacocks joyfully dancing beneath the cloudy skies.

Usually, Krishna is shown standing beneath a blooming Kadamba tree with three or four gopis on either side of him. In addition to the Kadamba, there is a full or half mango tree behind each group of gopis. More commonly, Krishna’s presence among the milkmaids is merely suggested by a creeper, which twines around the trunk of the Kadamba. The postures of the Gopis play a major role in the design. They may dance for the Sharada Purnima or carry milk pots for the Daana Lila. When the background has raindrops and the sky is thick with clouds, it is the rainy season or a Varsha Pichwai. Each scene has a band of cows at the bottom of the hanging. The Morakuti Pichwai is filled with dancing peacocks. It is associated with the rainy season because at the first sound of thunder, the peacock spreads his magnificent feathers.

The various other Pichwais are:

Ramnavami Pichwai

Nandmahotsava Pichwai: This tender scene described by Surdas is what is enacted on Nandamahotsava, where the vatsalya bhava or selfless parental love of Nanda and Yashoda is commemorated. On the occasion of the Nandamahotsav, celebrated the day after Janmashtami or the birth of Krishna, the doors of the inner sanctum remain open for darshan all day long. Shrinathji in his Navnitpriyaji swaroop is swung in a cradle by priests who dress up as Yashoda and Nanda to enact this scene. There are also celebratory dances with the temple servants dressed as the gopas and gopis of Vraj. As Krishna grows up, he becomes beloved of the whole village. Indulged as he is, he becomes a mischievous child, his love for milk and butter becomes legendary, and his charm grows steadily more irresistible.

Daana Ekadashi Pichwai: The festival of Daana Lila is celebrated in August-September and has its origins in bhakti poetry, where Krishna demanded milk and butter from the gopis as a toll for safe passage home. It is believed that this occurred in a valley in Mount Govardhana known as Daana Ghati, and while some pichwais depict the entire narrative and enactment of the gopis sharing their milk with Krishna, others only suggest the event with Shrinathji being approached by gopis bearing milk pots on their heads. At Nathadwara, the festival of Daana Lila goes on for twenty days! Krishna grows gradually into the perfect cowherd: the one whom all cows heed, answering to his flute as if in a state of intoxication.
Braj Yatra Pichwai

Sharad Purnima Pichwai: Ras Lila or Maha Rasa on the occasion of Sharad Purnima. Bhakti, the central tenet of the Pushti Marg doctrine, is epitomised by the Ras Lila, where adolescent Krishna dances with his gopis. On a full moon night in the Vraj forest, by the flowing Yamuna, Krishna’s melodious flute calls out to the gopis like the pied piper, and they are forced to abandon everything to dance with him. The spirit of abandon and surrender that the Ras Lila evokes is the realization of bhakti: it represents the ultimate union of the soul with the Lord, a joining together in a cosmic dance. Thus, it is a theme dear to most patron-devotees and an extremely popular choice for pichwais.

Chourasi Swaroop Srinathji: The pichwai painting depicts the chourasi (eighty-four) swaroops (forms) of Shrinathji. In Nathdwara, the ‘shringar’ of Lord Shrinathji is changed according to the time of day, different seasons and different festivals. The deity is decorated in a specific manner for each event. This painting depicts eighty-six different figures surrounding the central Shrinathji deity, of which eighty-four show different forms or swaroops of Shrinathji. The two at the bottom left and bottom right depict the gosai, or the head priests, who imparted learnings of the Pushtimarg sect.

Annakuta Pichwai: The episode represented by this image relates to the autumnal offerings the villagers of Vraj were about to make to Indra, the nominal king of the Gods. Krishna suggested that worship instead be offered to the spirit of the mountain that sustained the pastures and woods that supported their livelihoods — and transformed into the mountain king in order to receive their offerings. The annual reenactment of this scene thus gains the name Annakuta or ‘mountain of food’. When a wrathful Indra unleashed a rainstorm in fury, Krishna vanquished him by lifting the mountain on the little finger of his left hand — captured by the key iconographic gesture of Shrinathji’s raised left hand.

Gowardhana Dharana Pichwai: Shrinathji in this posture is the dominant figure in Pichwais. Krishna, as a child, lifted Govardhan Parvat (hill) on his little finger for seven days, and safeguarded the people of Vrindavan from Lord Indra’s devastating thunderstorms. His right hand typically rests on his waist, or is lowered in an act of blessing.

Gopashtami Pichwai: The Gopashtami festival takes place in the late autumnal months, and marks the elevation of Krishna from a young herder of calves to a full cowherd. Cows are adorned with henna and sindur hand prints, peacock plumes, and bells around their necks. At Nathadwara, the cows, decked in their finest, are brought into the haveli.

Morakuti (Monsoon) Pichwai: Pichwais for Morakuti, as seen below, depict peacocks with crested crowns dancing in full abandon in the rainy season. Named after a small village in Vraja, near the birthplace of Radha, where peacocks abound — these pichwais mimic the ras lila, or Krishna’s dance with Radha and the gopis.

Varsha Pichwai: Varsha or Vrikshachari pichwais evoke Krishna as a vrikshacharya or tree dweller. He is therefore only symbolically represented in the painting, usually through the kadamba tree, while in anticipation of his arrival, gopis appear on either side of the tree carrying offerings of garlands, peacock fans, flowers, and fly whisks.

Winter Pichwai

Different Scenes Depicted in Pichwais

Different seasons and events in Lord Krishna’s life are depicted in pichwais.
Radiant pink lotuses adorn pichwais hung in summer, whereas peacocks are painted on pichwais used at temples during the rainy season.

Influence of Other Styles

Unlike the pigment-painted Pichwai, the other types of hangings, such as embroidered or applique pieces, were not the products of schools which specialized in Pichwai. They were the works of craftsmen who excelled in sewing, weaving, or embroidery. The applique Pichwais were made by tailors employed by temples; on occasion, the tailor worked in collaboration with an artist who sketched the pattern for the figures and later added the painted details. The Zordozi hangings, which occupied an important position in the main house of the Vallabh sampradaya, were also the works of tailors. Gold or silver metal threads were stitched to satin or velvet with thin silk threads, giving the works the appearance of true embroidery. Brocade hangings are quite popular because they are well-suited to the winter weather. Such pieces are made by professional weavers on special order.
https://uxdesign.cc/pichwai-art-adding-colour-to-indian-culture-e66e7384b199
['Vinita Mathur']
2020-12-28 12:38:13.272000+00:00
['India', 'Visual Design', 'Design', 'Art', 'Iskon']
Infographic: 2016 Nobel Prize Winners
An infographic is a visual representation of data intended to present information in a clearer and more engaging way.
https://medium.com/infographics/infogram-insights-2016-nobel-prize-winners-49d97b90b067
[]
2016-12-29 21:02:43.529000+00:00
['Data Journalism', 'Infographics', 'Nobel Prize', 'Science', 'Data Visualization']
Is it easy or difficult to see a doctor in China?
Novel coronavirus caused a terrible epidemic to sweep the world. In the face of the epidemic situation, we can see that different countries have taken different anti-epidemic measures, and whether it is “external loosening internal tight”, “internal loosening and external tightening” or “Chinese style”, different effects have been achieved and different evaluation has been obtained. The reason is that the geographical environment, demographic structure, economic situation, government capacity, and health care system of each country are not the same. If the same “anti-epidemic” measures can not be adapted to local conditions, the best results can not be obtained. There is no doubt that under China’s “tight inside and outside” model, the epidemic in this populous country has been suppressed rapidly. This makes many people wonder: how did China, which appeared to be in a hurry at the beginning of the novel coronavirus outbreak, now hand over a “qualified” answer? Is China’s medical ability strong or weak? How do we see a doctor in China? If you suffer from the slow access process, old hospital equipment and dull health care workers in welfare countries such as Canada, you may feel efficient and brand new hospitals in China and cheer them up; if you suffer from the high price bills and expensive health care costs of American hospitals, you will also exclaim, “Chinese hospitals are really cheap.” In short, China is a comprehensive product of the “welfare care” system and the “commercial medical system”. Compared with Canada, China has more and better private hospitals and doctors; compared with the United States, China has more extensive and cheaper social health care, as well as a large number of public hospitals. To some extent, China has the benefits of both systems. (however, as we will mention below, this benefit is clearly costly.) In China, the experience of medical treatment is relatively free: China has the infrastructure to implement hierarchical health care, but there is no system of compulsory hierarchical health care, which means that you can enjoy the services of family doctors and community doctors, but you can also go directly to the hospital clinic. In addition, China has community hospitals that facilitate the handling of chronic and minor diseases (they also assume the responsibility of pharmacies), as well as a more dense distribution of commercial pharmacies to facilitate the purchase of over-the-counter drugs by residents. If you don’t bother to go to a hospital, you can also look for free or paid online medical counseling on the developed Internet. Because there are more medical options, the Chinese prefer to visit specialists or experts in specialized hospitals and departments rather than North American residents who usually have to go straight to comprehensive doctors. If you don’t know what department you should register in, you can ask at the clinic of the Chinese hospital. In fact, most young Chinese no longer specialize in registering in hospitals (which may make you have to wait in line), but “never leave home” with as many as dozens of online services. If you can’t use the Internet, you can also use machine self-registration in the hospital lobby. There are a variety of channels for registration so that Chinese people are less likely to queue up because of registration. The Chinese can easily make an appointment for registration on Wechat. Note, however, that the need to register does not mean that you do not have to wait in line to see a doctor. 
Because many Chinese are willing to choose better hospitals, departments, and experts, some patients face serious queuing problems-for example, if you want to have a test or operation in the best hospital in China, you may have to wait for months. At the same time, if you just want to do the test or surgery and don’t care about hospital rankings, you’re more likely to be able to do it in a day. With regard to the uneven distribution of medical resources in China, we will discuss it below. Nearly 1.4 billion people in China have taken part in social health insurance. When you see a doctor, in order to get Medicare reimbursement, you need to use a special health care card. If you forget to bring a health care card, depending on the region and the hospital, sometimes patients can provide ID numbers to access the health care system, sometimes it will be difficult. By the way, depending on whether patients use health insurance and the proportion of reimbursement, Chinese doctors tend to recommend very different drugs or treatments. We will also discuss this issue in depth below. Taken together, the number of public hospitals in China is larger than that of private hospitals, and the size of public hospitals is larger and the number of beds is large. Of course, this is also related to the population of China, where the largest hospital in North America can have two or three thousand beds, which is only the same number of a medium-size hospital in China, and the largest public hospital in China tends to have tens of thousands of beds. The large scale of public hospitals and a large number of medical and nursing staff in public hospitals are one of the guarantees of China’s “anti-epidemic” ability. As for private hospitals, this is a more complex problem in China. China lacks particularly good and expensive private doctors, and wealthy Chinese do not have the habit of visiting private doctors. However, there is no shortage of excellent and expensive private hospitals and clinics in China. But note that China also has a large number of expensive but poor private hospitals, some of which may even be suspected of breaking the law, such as attracting patients through false advertising. These poor private hospitals are one of the chronic problems of China’s health care system. On the whole, what is the level of medical care in China? According to two papers published in the Lancet, we can take a look at the effectiveness of China’s health care system. (《Healthcare Access and Quality Index based on mortality from causes amenable to personal health care in 195 countries and territories, 1990–2015: a novel analysis from the global burden of Disease Study 2015》《Measuring performance on the Healthcare Access and QualityIndex for 195 countries and territories and selected subnational locations: a systematic analysis from the Global Burden of Disease Study 2016》) The main objective of medical care is to ensure the safety of the people as much as possible. However, if the safety of life is measured only in terms of life expectancy, the direct impact on life expectancy is more significant than that of the health care system, such as the adequacy of nutrition, the availability of clean water and air, and so on. Therefore, the key index to measure the quality of the health care system should be “disease treatment”, not “disease-free health”. 
Based on this logic, the researchers listed 32 diseases that could be fatal but could also be cured at a lower cost, calculated the mortality of patients with these “common diseases” and came up with a comprehensive indicator called “HAQ” (Healthcare Access and Quality Index). The greater the HAQ, the lower the likelihood of death from common diseases, indicating the higher the coverage and level of medical services in the country. In the recent novel coronavirus epidemic, Italy has had a huge disaster, the mortality rate is even higher than in some less developed areas-and novel coronavirus’s mortality rate in Italy is unusually high, in fact, because of the serious aging of the Italian population, it is blatant that “Italy’s low level of medical care” is clearly unscientific. Although the publication of the paper is fashionable and there is no epidemic in novel coronavirus, HAQ is also in line with this principle-the illness and death of the elderly can not generally be used to prove that the health care system is backward. Therefore, HAQ combined the age structure of the patients and included them in the index calculation. So, what is China’s ability to “cure diseases”? In 2016, China ranked 47th out of 195 countries and regions. This ranking is, of course, not high. In fact, if 195 countries and regions are divided into 10 grades, China is only at the third level, lower than most developed countries in Europe and the United States (Taiwan Province of China is in the second place). But taking into account economic capacity, which best reflects the overall level of development of the country, China’s HAQ is very high. In 2016, China’s HAQ ranked 47th, the per capita GDP ranking was only 73rd in the world, and it is not very prominent among developing countries. It can be seen that the level of medical development in China is higher than the overall development level of its country. The researchers also took into account income and education levels. Combined with these two considerations, the level of health care in China is still significantly better than its income and education. Among countries with similar incomes and education levels, China has the highest level of health care. In addition, if we look at the problem from the perspective of development, China is still one of the countries with the fastest progress in medical services in the world. In 1990, if divided by 10 grades, China’s HAQ was the third-lowest in the world; 26 years later, China had reached the third level of positive numbers and made rapid progress. If the level of health care is more rough, according to the World Health Organization, China’s life expectancy ranks 53rd in 2019, while China’s GDP per capita reached the $10000 mark in 2019, but ended up at 72nd place. It can be seen that the “safety of life” of the Chinese people takes precedence over the overall level of development of China. China still needs development and better medical conditions; but as far as the status quo is concerned, there is no need for the Chinese to worry too much about the level of public health services. However, there are still problems in medical care in China. However, the Chinese still face some problems when they see a doctor. Although the absolute medical ability is not very poor, as far as the “medical experience” is concerned, China is far from “perfect”. This aspect is reflected in the price of medical services. 
To be fair, the price of medicine in China is still at the level of developing countries, which is not very expensive. Especially compared with countries with extremely expensive medical care, such as the United States, the problem of medical burden in China is not very prominent, both civilian and financial. China, however, has one thing in common with the United States: there is no universal health care system. This means that before the Chinese go to the hospital, they always quietly calculate an “economic account”. The Chinese will experience several different medical experiences, depending on what kind of health insurance he participates in and where he lives. By the end of 2019, the number of people insured for basic medical insurance for urban workers in China was 329 million, while the number of people insured for basic medical insurance for urban residents in China was 1.025 billion at the end of 2019, according to the National Bureau of Statistics of China. The latter was unified in 2016 by the combination of health insurance for urban residents and health insurance for rural residents, which is lower than the basic medical insurance for urban workers. This means that while 1 billion people in China enjoy social health care, the other 300m people enjoy better health care. In addition, having the same health insurance does not mean having the same health care. If you are lucky enough to be a resident of Beijing and Shanghai and have a local account, you have almost part of the world’s top medical resources. Chinese people living in municipalities directly under the Central Government and most provincial capitals or other developed cities usually do not worry about health care. But if you don’t live in a big city, then when you, unfortunately, suffer from a disease in which the hospital in the city cannot be treated, or can only be treated conservatively (because of the geographical imbalance in the distribution of medical resources), you will face the problem of “seeking medical treatment in different places.” When seeking medical treatment in different places, the specific situation is very complex and changeable because of the different health care policies of each province and city, but on the whole, patients need to issue local referrals according to the regulations, which generally means that patients can only be recommended for referrals if they are really difficult to treat locally. If not recommended, health care usually does not take effect, which means that patients have to bear the medical expenses at their own expense. In addition to residence registration restrictions, there are a number of issues that need to be paid special attention to when seeking medical treatment in China. Perhaps the most concerned medical “systemic” problem in China is Chinese herbal medicine. As a treatment that lacks modern scientific evidence, traditional Chinese medicine and Chinese herbal medicine not only still have a very deep mass foundation in China, but also recommended by the government as a “worth trying” treatment. Although young Chinese with modern science education are skeptical about this, specialized traditional Chinese medicine hospitals are not only located in major cities in China but also often prescribe herbal or proprietary Chinese medicines to patients even non-traditional Chinese medicine hospitals. 
Image byPexels from Pixabay The problem of traditional Chinese medicine is not only a medical problem but also not only a scientific problem, but it also involves very complex social, economic, cultural and even political factors, this paper only discusses too much. In addition, because the top medical resources are concentrated in Beijing, Shanghai, Guangzhou, and other super cities, the “queuing” phenomenon of a small number of hospitals, departments, and top experts has become a common pain for locals and outsiders. One of my friends registered at a famous hospital in Beijing in late 2018 and asked a dental expert to treat him, and the queuing system showed that he would not see the expert until the spring of 2019. This kind of queue for months is exaggerating, but it is more common for a few hours or even a day or two. Local people still do this, and people who seek medical treatment in different places, especially those who seek temporary treatment, will have to face a worse situation. In several famous children’s hospitals in Beijing, we can often see parents who queue up all night to register. Due to the huge population of China and the extreme shortage of pediatric resources, many parents come to see their children overnight from other places. Because of the queue, they often have to spend the night in the hospital hallway. Attacking doctors may be the most famous strange phenomenon in China. In China, some patients or their families are not satisfied with the treatment results, so they abuse or even attack doctors, there is no shortage of malignant criminal cases that lead to serious injuries and even death of doctors. There are many reasons for the attack on doctors, which some attribute to the fact that Chinese hospitals do not have a hierarchical health care system, and that patients can easily come to the doctor with an impure purpose-by contrast, it is very difficult for attackers to find a doctor in Canada, which does not even have an outpatient clinic. On the other hand, Chinese doctors work hard, but their income is not very high, which has forced some doctors to “overprescribe drugs”: recommend expensive or excessive drugs to patients to find profits in their hospitals-doctors earn more only when hospitals make more money. Similar problems have intensified the contradiction between doctors and patients. On the other hand, due to the rapid development of China, many patients, and their families experienced a period of new China’s founding and poor education when they were young, and their lack of understanding of modern medicine also lacked rational thinking habits, which also led them to mistakenly blame doctors, especially when some elderly people were unable to help them. In addition, China’s health care policy always affects the way hospitals operate. Since drugs that are not covered by health care are difficult to attract, the choice of drugs to be covered will directly affect the tendency of doctors to prescribe drugs. Because the Chinese government strongly recommends Chinese herbal medicine and uses policies to make medical insurance strongly support traditional Chinese medicine, Chinese doctors are often forced to recommend traditional Chinese medicine to patients. 
In addition, because the financial expenditure on medical care has been very tight, different hospitals can enjoy different medical insurance reimbursement ceilings, often lead to insufficient reimbursement amount, some hospitals are forced to use extreme treatment, and even persuade patients to withdraw from the hospital and so on. At the end of all the problems, because China is a populous country and also faces aging problems, China’s public health spending has always been under great pressure. But for a variety of reasons, the Chinese government is trying to cut public spending on health care, which could make it more expensive or face more uncertain treatment and treatment quality in China in the future. The Chinese are well aware that by then, the gap in public health services between large cities and ordinary cities and rural areas is likely to widen. The sudden outbreak of novel coronavirus has led Chinese residents and the Chinese government to pay unprecedented attention to public health issues, which may be a good phenomenon. As Chinese public opinion increasingly affects the government’s decision-making, the government’s determination to increase medical spending may come with the novel coronavirus epidemic, especially with regard to the increase in public hospitals and doctors’ wages. For most Chinese residents, this can also be regarded as a “blessing because of misfortune.”
https://medium.com/pandayoo/is-it-easy-or-difficult-to-see-a-doctor-in-china-a844cb26df18
['Zhou Yu']
2020-04-04 06:35:09.786000+00:00
['Policy', 'China', 'Health']
The Music Plays On — Mahler Symphony №4
Gustav Mahler, c. 1900

I’ve decided to honor the decision by the Royal Concertgebouw Orchestra and the Royal Concertgebouw to have an online version of the Mahler Festival by offering my favorite performances of the symphony that the festival is broadcasting each day.

Gustav Mahler ascended to the throne of the conducting profession in 1897, when he became the General Music Director of the Vienna Court Opera, now the Vienna State Opera. It was music’s most prestigious position, and with it came an enormous responsibility. Mahler was everywhere and ran the opera house with an iron fist. Modern stage direction, lighting, and set design were all in their infancy in opera, and Mahler held a tight grip on all of it!

His fourth symphony, composed during the summers of 1899 and 1900, was the first he composed after being hired in this position, and it’s interesting that the subject matter and general mood of the symphony are in direct opposition to the hectic conducting life he had at the time. Also, rather than working in a little ‘hut’ behind a small hotel at Steinbach am Attersee, Mahler could now afford to rent a villa. He chose the Villa Kerry, a 30-minute walk above the lake, Altaussee. Unfortunately, the weather for most of that summer was cold and wet, and it turned out that this villa was in earshot of the local bandstand. He was only able to compose part of the symphony at the very end of his holiday.

Villa Kerry, Altaussee

The following summer (1900) was spent in his newly built composing hut on the property where his villa was being built, in Maiernigg, a tiny village on the shores of the Wörthersee.

Mahler’s ‘composing hut’ in Maiernigg

Like Beethoven’s Symphony №4 in relation to his Symphony №3 ‘Eroica,’ Mahler’s Symphony №4 is restrained and classical in size and structure compared to his gargantuan Symphony №3. It has the standard four movements, but it retains one special characteristic that was unique to the previous three symphonies — there is a vocal soloist. For the last movement, Mahler uses Das himmlische Leben (The Heavenly Life), a song that he wrote in 1892 based on a poem from the collection Des Knaben Wunderhorn (The Youth’s Magic Horn). For a wonderful English translation of the poem, please go here. The first three movements, while not having a text, certainly help form a narrative of a child’s view of the world. Indeed, all four movements contain themes that resemble children’s songs more than anything else.

Here are my favorite performances.
https://donatocabrera.medium.com/the-music-plays-on-mahler-symphony-4-1036d523518b
['Donato Cabrera']
2020-05-29 04:19:07.955000+00:00
['Mahler', 'Donato Cabrera', 'Las Vegas Philharmonic', 'California Symphony', 'Music']
Christianity and Animal Rights/ Factory Farming
As of today, pollution is already killing as many as nine million people annually. By 2050, it is estimated that sub-Saharan Africa, South Asia, and Latin America will generate more than 140 million climate refugees, as I touched on in my previous blog post. Other major cities and regions will also be evacuated, including Miami, New Jersey, Hong Kong, Baghdad, Paris, New York, Montreal, and Seattle, creating countless more refugees. The problem is that large portions of the population live within thirty feet of sea level, 600 million people to be precise, and these areas are expected to be hit the most.

One of the most recent reports from the United Nations’ Intergovernmental Panel on Climate Change (IPCC) says that even if everyone stops polluting all at once, we’ll still get to 3.2-degree warming. 3.2-degree warming would mean that 100 urban centers would be flooded, including Miami, Dhaka, Shanghai, Hong Kong, New York, Montreal, Seattle, London, Baghdad, San Francisco, Sacramento, Houston, Philadelphia, Florida (97%), and 70% of New Jersey.

The trend toward increased flooding is escalating. From 1992 to 1997, 49 billion tons of ice from the Antarctic ice sheet melted annually on average. From 2012 to 2017, that number increased to 219 billion tons of ice melting annually. Since the 1950s, the Antarctic has lost 13,000 square miles from its ice shelf. Experts suggest that its fate will be determined by the action taken within the next decade.

Photo by Melissa Bradley on Unsplash

In fact, 2.4 million American homes and businesses — $1 trillion in value — are expected to experience chronic flooding. Ninety-seven percent of Florida will be off the map by 2100, as calculated by leading ocean chemist David Archer. The science on this is irrefutable, experts claim. The predictions may vary in severity. Business-as-usual will, however, not prevent apocalyptic calamities from regularly occurring. So, the urgency is apparent.

For atheists, life on Earth may be all there is. That is why, for them, solving man-made climate change is of high importance. If by 2100 we find ourselves in a truly “uninhabitable earth,” as Wallace-Wells says we may, then we lose everything. Some estimates suggest that it is unlikely that man-made climate change will eradicate our species. But near extinction is enough reason to act. The question is whether Christians can be urgent on climate issues. My answer to this is both yes and no because Christians have one thing that is slightly more urgent than the climate and the preservation of the species: eternity.

Photo by Carolyn V on Unsplash

Biblical Climate Change Action

The Bible presents a mixed account on the responsibility of humans to preserve the earth. Wayne Grudem writes that “[T]he Bible’s picture of the earth in general is that it has abundant resources that God has put there to bring great benefit to us as human beings made in his image. There is no hint that mankind will ever exhaust the earth’s resources by developing them and using them wisely.” The authors of Genesis write:

While the earth remains, seedtime and harvest, cold and heat, summer and winter, day and night, shall not cease.
(Genesis 8:22 ESV) I establish my covenant with you, that never again shall all flesh be cut off by the waters of the flood, and never again shall there be a flood to destroy the earth.” And God said, “This is the sign of the covenant that I make between me and you and every living creature that is with you, for all future generations: I have set my bow in the cloud, and it shall be a sign of the covenant between me and the earth. When I bring clouds over the earth and the bow is seen in the clouds, I will remember my covenant that is between me and you and every living creature of all flesh. And the waters shall never again become a flood to destroy all flesh. (Genesis 9:11–15) These passages indicate that God expects His people to preserve the gifts of God. However, the Bible is also clear on the inevitable destruction of the Earth. Although these passages may be metaphorical and in no way suggest that the Bible prophesied man-made climate change in the 21st century. Some of these passages include: For nation will rise against nation, and kingdom against kingdom, and there will be famines and earthquakes in various places (Matthew 24:7). There will be great earthquakes, and in various places famines and pestilences. And there will be terrors and great signs from heaven (Luke 21:11). When the Lamb opened the seventh seal, there was silence in heaven for about half an hour. Then I saw the seven angels who stand before God, and seven trumpets were given to them. And another angel came and stood at the altar with a golden censer, and he was given much incense to offer with the prayers of all the saints on the golden altar before the throne, and the smoke of the incense, with the prayers of the saints, rose before God from the hand of the angel. Then the angel took the censer and filled it with fire from the altar and threw it on the earth, and there were peals of thunder, rumblings, flashes of lightning, and an earthquake (Rev 8:1–13). Apart from wanting to take care of God’s Creation, as God called them to, Christians do not have any other biblical reason to find solutions to man-made climate change. There is only instrumental value in nature for the Christian, value in service for different causes, not intrinsic value, value for its own sake, since nature directly proves God’s existence according to the Bible (Romans 1:20). Apart from that, the Bible is not clear on why we should protect nature in the first place. The Bible’s silence on this is arguably why many in the Christian right do not prioritize man-made climate change in their sermons. In the end, eternity is on the line, and preaching the Gospel comes before taking care of the environment we live in. Needless to say, when the global concern becomes much more apparent and demanding, Christian leaders will have to be more outspoken on these issues because of their influence and the tangible ethical consequences of not speaking up. Some Christians have started to do so already. Francis Schaeffer, one of the leading Christian thinkers of the 20th century, tackles environmentalism in his book, Pollution and the Death of Man: The Christian View of Ecology. Schaeffer was not your typical Christian thinker. He had hippie-like sensibilities and, upon moving to the United States in 1948, even forsook the possession of his car. Schaeffer would frequently go on hikes, tend to his garden, and regularly traveled across the world because of his love of nature. 
He would pick up trash from the hiking trails on his hikes with students and objected to waste being thrown overboard on his boat trips overseas. In his book, Schaeffer writes that only the Christian can unite on environmental matters because “God has spoken” on it. He blames humanism and rationalism for “looking at the particulars” to then make a universal, which he believes is philosophically futile and arbitrary. He blames our current environmental predicament on humanism. There’s nothing new under the sun, it seems; Atheism is always to blame. That is not to say that Christianity has the answer outright but instead that it is the most compelling worldview that provides a solution to big questions such as man-made climate change. Schaeffer believes that because Creation is a gift from God, only Christians have a justified reason to care for it. Indeed, Schaeffer writes that “[Christians] treat nature with respect because God made it.” The question then is why so few Christians prioritize speaking about nature and protecting it if Christians are the only ones who can experience unity on this issue. Photo by AJ Robbie on Unsplash Animal Rights Closely tied with the preservation of the Earth is the issue of factory farming and animal rights. As I hoped to show in my previous blog post on whether Christians do indeed have a way of defending the sanctity of human life biblically, I hope to provide a similar analysis of whether Christians have any way of defending animal rights. Historically, animals were not given much thought as to whether they should have any rights at all. The father of modern philosophy, Rene Descartes, for example, nailed living dogs to wood, digging into them searching for a soul, which he could not find. One has to wonder why this “scientific” experiment did not extend to humans. The experiments slowly became more humane, as scientists figured out that animals have cognitive abilities and are sentient as we are. However, progress can still be made here as well. Societies across the Globe are making considerable advancements in treating animals with the dignity they deserve. These advancements were seen as early as 1800 when laws against bear beating were established in the British Parliament. By 1835, The Ill-Treatment of Cattle Act extended protection to bulls, dogs, and cats. Fast-forward to the 21st century and many societies have outlawed blood sports, including foxhunts and bloodhounds in 2005, cockfights in 2008, and bullfights in 2010. Society is slowly catching up with what science has been telling us for years. Dr. Nicholas Dodman, the Professor at Cummings School of Veterinary Medicine at Tufts University, claims that “With every passing year the cognitive gap that supposedly differentiates us from mere animals is shrinking.” Photo by Dušan S. on Unsplash Of course, the underlying premise for Christians for defending these animals is much different than for those who are non-Christian or unbelievers. Throughout the Bible, animals are treated with indignation, often slaughtered alongside entire cities and tribes, because they made the simple mistake of being in the wrong place at the wrong time. One memorable and particularly barbaric occasion was when animals were punished alongside humans in Noah’s flood. Some animals, precisely two members of each species, were spared by God. But most of them drowned along with the humans that God thought were unforgivable enough to kill. 
The question in that story is why God had to drown seemingly innocent animals for crimes that were not their own. If animals had nothing to do with the sinful condition of humans, then why would God need to punish them? Because of our understanding of animal sentience today, the killings of these animals are even more barbaric.

Many Christians will justify treating animals decently based on Christian teaching to take care of God’s creation, but they do not go as far as science tells us to. In fact, many Christian thinkers have voiced concerns about focusing too much on the rights of animals because of what they insist it will do to the value of human life. In A Rat Is a Pig Is a Dog Is a Boy: The Human Cost of the Animal Rights Movement, Wesley J. Smith writes that the animal rights movement of the 1970s was an “antihuman ideology.” Smith is persuaded that “human exceptionalism,” the belief that humans are a particularly unique species, is the only basis for universal human rights, and that since the animal rights movement rejects this premise, it is actively harming our well-being. Smith does not call for altogether abandoning suffering animals, however, and argues that the “core obligation of human exceptionalism” is never to cause animals suffering for “frivolous reasons.” Smith continues that we cannot define rights “Without the conviction that humankind has unique worth based on our nature rather than our individual capacities.” Without human exceptionalism, Smith firmly believes, “[U]niversal human rights are impossible to sustain philosophically.”

Nicholas Christakis disagrees with Smith’s belief that exposing the similarities between animals and humans rids us of a meaningful basis for human rights. In the words of Christakis, “[W]hen we resemble other animals with respect to the social suite, it binds us all together. The more like these animals we are, the more alike we humans must be to one another.” In his book, Blueprint, Christakis writes that it is a fictitious notion to think of ourselves as exempt from the rest of nature. The similarities are striking, whether in the friendships of elephants, the cooperation of dolphins, or the culture of chimpanzees.

Human Exceptionalism and Loving Your Neighbor

The concept of human exceptionalism is mainly a remnant from Christendom’s dualism, the belief that the soul is a separate eternal entity from the physical body. Animals do not really have a purpose in the Garden of Eden, before the Fall, apart from glorifying God and giving Adam a job, in particular, to name them. God creates humans in his image, meaning that we have a soul and moral conscience (Gen 1:27). The consensus of the Christian right, perhaps even of Christianity at large, on this is clear: “[H]uman beings are much more valuable in God’s sight than animals.” The Bible starts as early as Genesis 1:28 in clarifying how humans should treat animals:

Photo by matthew Feeney on Unsplash

And God said to them, “Be fruitful and multiply and fill the earth and subdue it, and have dominion over the fish of the sea and over the birds of the heavens and over every living thing that moves on the earth.”

In one passage in the New Testament, Jesus exorcises a Legion of demons, letting them enter into a herd of pigs, numbering at about 2,000, who then run into the Sea of Galilee and drown (Mark 5:13).
In another revealing passage in the Old Testament, Saul is asked to “strike Amalek and devote to destruction all that they have,” including “man and woman, child and infant, ox and sheep, camel and donkey” (1 Sam 15:3). Saul fails to slaughter all the Amalekites and animals, for which Saul suffered severe punishment. In what way did Saul fail his God? He kept some animals for sacrifice to God. Unfortunately, this was not favored in the eyes of God. The meat was “unclean.” Jesus says, “Of how much more value is a man than a sheep!” (Matt. 12:12) Jesus repeats this sentiment with regard to the “birds of the air” (Matt. 6:26) and sparrows (Matt. 10:31). The Christian worldview poses no serious objections to animal subjugation apart from imposing unnecessary violence and cruelty on God’s creation. But once again, the instrumental value is not in the animal or their rights but rather in the fact that God has given them as a gift to humans. The onus, then, is on the stewardship of these animals, not on any intrinsic worth or suffering inflicted on them. As we have seen from my previous blog posts, Jesus says that among the two most important commandments is to “love your neighbor,” which suggests that God cares for humans and thinks of humans of value. The Bible says that God “knows us” before our birth and “knits” us together in the womb, as we saw when discussing abortion (Ps 139:13–14, or Jeremiah 1:5). Paul goes so far as to say that God knew us before the foundations of the Earth (Eph 1:4). So, it may not be surprising that Christians do not focus all that much on animal rights or factory-farming. The problem is that animals should receive our attention especially because of how poorly they are treated. Human exceptionalism permits excuses and detachment from the insurmountable suffering factory-farming causes to animals. And that is unacceptable. The Problem With Human Exceptionalism Human exceptionalism often comes across as poor philosophy. In the end, what truly distinguishes humans from animals is a capacity for advanced language (allowing us to express our suffering better and cooperate) and cognitive skills. The closest animals to ourselves are chimpanzees. And many are starting to call for fairer treatment of chimpanzees because of their similarities to us, including the organization, The Great Ape Project, which calls for advanced rights for chimps. Critics of The Great Ape Project will say that humans and chimps differ at a fundamental level even if there are physical similarities between us. Photo by Elton Oliver on Unsplash Jonathan Mark writes in What it means to be 98% chimpanzee: Apes, People, and Their Genes, “Apes deserve protection, even rights, but not human rights.” For Mark, “Humans have human rights by virtue of having been born human.” This birth comes with the rights of citizenship, “an endowment by the state.” Mark further writes, “[T]he phrase ‘human rights’ has no meaning if it does not apply to all humans and only to humans.” It is often argued in defense of chimpanzee rights, that our DNA and bone structure closely resembles chimpanzees, and we should therefore extend human rights to them. However, Marks clarifies that saying that we are 98% chimpanzee does not say much about the similarity between humans and chimps. It is true, he writes, that chimps and humans have similar bone structures, which should be of more significance he writes than DNA similarity. 
“Genetics appropriates that discovery as a triumph because it can place a number on it, but the number is rather unreliable as such. And whatever the number is, it shouldn’t be any more impressive than the anatomical similarity.“ However, it is not for genetic reasons that we think that chimpanzees should be treated equally, as I argued above; It is instead because they can suffer. Obviously, no one is arguing that chimps have the same rights as humans do regarding the justice system and penal code. For example, you cannot expect a chimp to be given a lawyer if accused of third-rate homicide. Mark’s argument here is rather silly. Few are arguing that chimps will be given equal rights to humans. Rather, animal rights activists argue that chimps will be given the right not to be tortured needlessly in science experiments and so forth. I don’t see exactly how we are incapable of defending human rights by acknowledging that most animals are equally able to process pain and should be treated accordingly. Science has indeed revealed that almost every animal alive can suffer to a certain extent. It then follows that we should adjust our relationship with non-human animals. Otherwise, we are causing unnecessary suffering to countless sentient creatures. The Christian will have difficulty arguing otherwise, except by applying the sort of poor philosophy that Smith and those that echo him exhibit. Shanor and Kanwal write in Bats Sing, Mice Giggle that “It’s well documented” that animals sing, babble, giggle, and communicate with unique dialects. Mustached bats, they write, exemplify a capacity for “a vocabulary that at a phonetic level is comparable to ours.” In a Northern Arizona University study, prairie dogs are shown to have different accents based on which colony they are found in. Nancy F. Castaldo writes in Beastly Brains: Exploring How Animals Think, Talk, and Feel, that “animals utilize vocabulary, grammar, accents, and gestures.” They also “communicate with vision, smell, touch, or taste as well as sounds.” Elephants have shown to care for injured humans, for example, and will assist members of other species that are under predatory threat. Elephants have also been seen to put branches and vegetation on corpses in respect and acknowledgment of their passing. They also pause in locations where one of their kind has passed for several minutes at a time, as if in memory of the deceased. Frankly, the scientific consensus on this is telling. Christakis explains:
https://jakubferencik.medium.com/christianity-and-animal-rights-factory-farming-3f19bd40397
['Jakub Ferencik']
2020-10-29 13:24:21.642000+00:00
['Animals', 'Global Warming', 'Climate Change', 'Animal Rights', 'Science']
Relaxing Your Muscles Can Relax Your Mind
Relaxing Your Muscles Can Relax Your Mind Progressive muscle relaxation therapy is one of the simplest science-backed treatments for anxiety Credit: Delmaine Donson / Getty Images During the early days of the novel coronavirus outbreak, doctors in China noticed that many people hospitalized with Covid-19 were developing anxiety and sleeping problems. These patients were forced to spend weeks cut off from contact with friends and family, and so the doctors partly attributed their woes to the unsettling effects of social isolation. At a hospital in Hainan province, a physician study team decided to treat their patients’ isolation-induced anxiety and sleeping problems using a relaxation technique known as progressive muscle relaxation therapy, or PMR. “Progressive muscle relaxation teaches you how to relax your muscles through a two-step process,” explains Mohammad Jafferany, MD, a clinical professor of dermatology, psychiatry, and behavioral sciences at Central Michigan University. Jafferany was not involved with the Chinese research, but he has studied the clinical effects of PMR. “First, you systematically tense particular muscle groups in your body,” he explains. “Next, you release the tension and notice how your muscles feel when you relax them.” Twice a day for five consecutive days, a group of Covid-19 patients at the Hainan hospital listened to piped-in instructions that guided them through a typical PMR therapy session. They lay down on their backs and then tensed and relaxed the muscles of their hands, arms, head, neck, torso, and legs. According to the results of that study, which was published in May in the journal Complementary Therapies in Clinical Practice, the patients’ scores on a clinically validated anxiety measuring tool improved by 22%, and their sleep scores improved by 30%. Meanwhile, the study team observed no anxiety or sleep benefits among a control group that received standard care but not PMR. “First, you systematically tense particular muscle groups in your body. Next, you release the tension and notice how your muscles feel when you relax them.” The Chinese study is just one of dozens of research efforts stretching back decades that have found relaxation therapy to be a highly effective treatment for stress, anxiety, and all their attendant symptoms and side effects. It may not have the hype of trendier or newer mental-health remedies, such as mindfulness meditation or CBD oil. But experts say PMR is among the surest ways to calm an anxious mind and body. How progressive muscle relaxation works The conventional view of muscle tension is that it’s the product of top-down processes; the brain interprets something as concerning or stressful, and this causes the muscles to tighten up. But experts say that the relationship between physical and mental states tends to run in both directions. “One of the things we know about anxiety is that many things can feed into it, and one of them is muscle tension,” says Michelle Newman, PhD, director of the Laboratory for Anxiety and Depression Research at the Pennsylvania State University. It may be helpful to think of negative thoughts and worries as anxiety manifested in the mind while tension is anxiety manifested in the body. “Any one of those things can start the anxiety process and trigger the others in a sort of upward negative cycle,” Newman explains. 
“And intervening in any one of them can break the cycle.” According to Newman, PMR is a tool of cognitive-behavioral therapy, or CBT, which many experts now consider the “gold standard” in psychotherapy. CBT’s aim is to change the recurring thought patterns or behaviors that promote negative mental states, including the ones associated with anxiety. And Newman says that progressive muscle relaxation is one of the more common techniques that cognitive-behavioral therapists employ with their patients. In fact, some of her research has found that among people with generalized anxiety disorder, progressive muscle relaxation techniques were just as effective as thought-based CBT exercises or interventions. “Doing a quick scan of tension every hour and then taking the time to release that tension gets people into the habit of sustaining and functioning with a lower level of tension throughout the day.” Her work is just the tip of the iceberg. One 2015 study in the journal Stress determined that progressive muscle relaxation not only lowers a person’s subjective feelings of psychological stress, but it also significantly lowers the body’s circulating levels of the stress hormone cortisol. There’s also a nearly endless stream of research papers linking muscle relaxation therapy to symptom improvements among people with cancer, arthritis, and other medical conditions. For a 2020 study, Central Michigan University’s Jafferany found that PMR even helped reduce skin symptoms among people with psoriasis, a chronic inflammatory condition that causes itchy and painful skin rashes. Any condition that stress or inflammation makes worse, muscle relaxation therapy can likely make better, he says. Not only is PMR effective for a variety of anxiety- or stress-associated conditions, but it’s also “more straightforward” than mindfulness meditation and some other popular stress therapies, Newman says. Many people struggle with meditation practices; tensing and relaxing muscles is easier. “But one of my personal pet peeves is that when people do it, they don’t do it optimally,” she says. How does one do PMR optimally? “You should practice twice a day for 15 minutes each time,” she says. There are literally hundreds of guided PMR practices online, but this one from the U.K.’s National Health Service is a good place to start. Most practices involve tensing individual muscle groups, such as the muscles of the hands or forehead, for five to 10 seconds while breathing slowly and calmly. Next, deeply and fully relax those same muscles and concentrate on the difference between how your muscles felt when they were tense and how they feel now that they’re relaxed. The specifics of PMR routines vary, but many start with the limbs before working their way to the torso and, finally, the head and face. Along with those two daily practice sessions, Newman says that you should take time every hour for a quick “body scan” to identify and release points of tension. “The idea is that people get into the bad habit of creating and sustaining muscle tension,” she says. “Doing a quick scan of tension every hour and then taking the time to release that tension gets people into the habit of sustaining and functioning with a lower level of tension throughout the day.” People who practice these techniques consistently will gradually become better at quickly identifying and releasing muscle tension. “The more you build up tension, the harder it is to let it go,” Newman says. 
“And tension triggers more stress and anxiety, which feed back into tension.” Muscle relaxation therapy breaks up that debilitating feedback loop.
https://elemental.medium.com/relaxing-your-muscles-can-relax-your-mind-50466e20de9d
['Markham Heid']
2020-08-13 15:14:17.172000+00:00
['The Nuance', 'Anxiety', 'Stress', 'Therapy', 'Health']
12.1 Factor Apps: Config
Hello friends, it's me Adam. Welcome back to Small Batches. I introduced the 12 factor app in the previous episode along with areas where I think it may be improved. I'm calling these improvements the "12.1 factor app". So new features and fixes, but no breaking changes; more like clarifications. I'm going to cover these in coming episodes. Enough preamble for now. On with the show. The 12 factor app states that applications should read config from environment variables. It implies separation of code and config. That's about it, but there are good bones here. I want something bigger from this factor: specifically, that applications may be deployed to new environments without any code changes. This requires a few additions: configure the process through command line options and environment variables; prefer explicit configuration over implicit configuration; and use a dry run option to verify config sanity. These points force applications to be explicit in configuration, which in turn requires engineers to take more responsibility for bootstrapping the process. This has proven to be a good thing in my experience. Consider the first point regarding command line options and environment variables. Developers interact with command line tools every single day. There's a standard interface for passing flags: command line options. You've likely used curl -X, grep -E, or mysql -u. These tools may even use values from environment variables when command line options are not provided. This is wonderful because processes may be configured globally with environment variables then overridden in specific scenarios with command line options. This simple interface also supports another common use case: looking up configuration options. Running a command followed by --help or -h typically outputs a usage message listing all command options. How many times have you struggled to learn which configuration files or environment variables are required to start a service developed by other engineers in your company? Now compare that to how many times you have struggled to find all the options to the grep command. There is no struggle because grep --help tells you everything. On the other hand, you're left hoping that your team members put something in the README or on Confluence. Moving on to the second point. I'll explain this by contrasting software produced by two ecosystems. Rails applications use a mix of configuration practices. They may use environment variables, but there's also a mix of YAML files and environment-specific configuration files (such as production.rb or staging.rb). Internal code uses a preset number of environments (namely production, test, and development) to implicitly change configuration. Deploying to a new environment requires creating new configuration files and/or updating YAML files. Starting a Rails application requires running the rails command. As a result, developers are disconnected from the code that bootstraps application internals then starts a web server. On the other hand, consider software produced by the Go ecosystem. It's more common to write a main function that configures everything through command line options. In this case there is no need for extra configuration files or implicit configuration based on environment names, since the concept is irrelevant here. Naturally this requires developers to take more responsibility, but as I said early on, it's worth it in the end. Configuring these applications is easier to grok, and so is deploying them to a variety of environments.
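To make the first point concrete, here is a minimal TypeScript sketch for a Node process: command line options win, environment variables are the global fallback, and --help documents every option. The option names (--port/PORT, --db-url/DB_URL) and the hand-rolled flag parsing are illustrative only, not from any particular framework.

```ts
// config.ts -- one way to follow the curl/grep convention in a Node process:
// command line options win, environment variables are the global fallback,
// and --help lists everything so nobody has to dig through the README.

// Tiny hand-rolled parser: returns the value following "--name", if present.
function flag(name: string): string | undefined {
  const i = process.argv.indexOf(`--${name}`);
  return i !== -1 ? process.argv[i + 1] : undefined;
}

export interface RawConfig {
  port?: string;
  dbUrl?: string;
}

export function readConfig(): RawConfig {
  if (process.argv.includes("--help") || process.argv.includes("-h")) {
    console.log("usage: app --port <number> --db-url <url>");
    console.log("environment fallbacks: PORT, DB_URL");
    process.exit(0);
  }

  return {
    // Explicit and discoverable: every option has a flag and an env var, nothing implicit.
    port: flag("port") ?? process.env.PORT,
    dbUrl: flag("db-url") ?? process.env.DB_URL,
  };
}
```

The point is not the parsing itself (a real project would likely use a library) but the shape of the interface: one place that knows every option, readable by both humans and deploy tooling.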
That's what the 12.1 factor app is going for. The command line interface approach enables DX improvements too. One of my pet peeves is when a process starts then fails at runtime due to some missing configuration options. This grinds my gears because developers devote huge effort to validating user input through web forms or API calls but tend to neglect configuration validation entirely! Plus, it's just frustrating to learn which values are required through runtime errors. The 12.1 factor app can do better than this. The 12.1 factor app will fail and exit non-zero if any required configuration value is missing. The main function that processes command line options and environment variables makes this possible. Does the process require a connection to a database and no --db-url or DB_URL was provided? Then boom! Error message and exit non-zero. The goal is to make it impossible to start the process without sane configuration. Failing with a non-zero exit status integrates nicely with deployment systems. Recall that a 12 factor "release" is the combination of code and config. Therefore it's possible for a config change to result in a broken release. Given that 12.1 factor apps fail fast, it's possible for the deployment system to recognize the failed release then switch back to the previous release. Contrast this with a "fail later" approach. The release may be running but failing at runtime. This looks OK from a deployment perspective since the release started, but it's totally broken from the user's perspective. The 12.1 factor app easily avoids this scenario. The "fail fast" approach catches simple user errors such as forgetting to provide a required value. However, that only solves part of the problem. Provided values are not necessarily correct. Here's an example. Say the application requires a connection to Postgres, so the user sets POSTGRESQL_URL. However, the application cannot connect to the server for any reason. It could be networking, mismatched ports, or an authentication error. Whatever the reason, the result is the same: no database connection and thus a nonfunctional application. This would cause downtime if deployed to production. I can't tell you how many times this has happened to me for legitimate reasons or less so (like mistyping a hostname or specifying the wrong port). My point is this type of error may be eliminated by simply trying to use the provided configuration before starting the process. The idea here is to use a "dry run" mode to check these sorts of things. I've used the dry run mode to check connections to external resources like data stores or that API keys for external APIs are valid. This aligns nicely with the "trust but verify" motto. It's simple. At the end of the day developers make mistakes. It's our job to ensure those mistakes don't enter production. Alright. That's enough for the 12.1 config factor. Here's a quick recap: configure your process through command line options and environment variables; fail fast on any configuration error; use a "dry run" mode to verify as much config as possible; and prefer explicit configuration over implicit configuration based on environment names. Well, what do you think of these practices? Have you done anything like this before? If so, how did it turn out? Hit me up on Twitter at smallbatchesfm or email me at [email protected]. Share this episode around your team too. It's great reference material for the "new service checklist" or "best practices" section on Confluence you're always trying to write. Also go to smallbatches.fm/6 for show notes.
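To make the recap concrete, here is a minimal sketch of a main function that builds on the config.ts sketch above and adds the fail-fast check plus a --dry-run mode. connectToPostgres and startServer are hypothetical stand-ins for whatever your application really uses; the point is that the process exercises its real configuration before it is declared healthy.

```ts
// main.ts -- fail fast on missing config, then "trust but verify" with --dry-run.

import { readConfig } from "./config";

async function connectToPostgres(url: string): Promise<void> {
  // Stand-in: a real client would open a connection here and throw on bad
  // hostnames, wrong ports, or failed credentials.
}

async function startServer(port: number, dbUrl: string): Promise<void> {
  // Stand-in for the real entry point of the service.
}

async function main(): Promise<void> {
  const { port, dbUrl } = readConfig();

  // Fail fast: exit non-zero so the deploy system sees a broken release immediately.
  if (!port || !dbUrl) {
    console.error("missing required config: --port/PORT and --db-url/DB_URL");
    process.exit(1);
  }

  if (process.argv.includes("--dry-run")) {
    // Verify the config actually works, not just that it was provided.
    await connectToPostgres(dbUrl);
    console.log("config ok");
    return;
  }

  await startServer(Number(port), dbUrl);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

A deploy pipeline can run the binary with --dry-run as a preflight step and refuse to promote the release if it exits non-zero.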
I'll put a link to my appearance on the Rails Testing Podcast, where I talk about this topic in more technical detail. If this episode piqued your interest, then definitely check that one out. We talk about preflight checks and smoke tests. Alright gang. That's a wrap. See you in the next one. Good luck out there and happy shipping!
https://medium.com/small-batches/12-1-factor-apps-config-fb319194f93e
['Adam Hawkins']
2020-05-04 17:43:22.328000+00:00
['DevOps', 'Software Engineering', 'Software Development', 'Sre']
What Is Mutex?
Is it a good illustration? Let’s read the Wikipedia problem definition and see if a picture of a lock would help us to understand what’s going on: The problem which mutual exclusion addresses is a problem of resource sharing: how can a software system control multiple processes’ access to a shared resource, when each process needs exclusive control of that resource while doing its work? The mutual-exclusion solution to this makes the shared resource available only while the process is in a specific code segment called the critical section. It controls access to the shared resource by controlling each mutual execution of that part of its program where the resource would be used. Ok, there is something about the access and lock helps to restrict the access, but it was about a shared resource. A bike lock is used to secure your bike, and most definitely, you can’t use a lock to get exclusive control of that resource. Also, what was that about the critical section? There is another thing in the real world that describes mutex much better: a restroom in the cafe. Photo by Patchanu Noree from Burst Let’s take a look at the Wikipedia paragraph one more time: The problem which mutual exclusion addresses is a problem of resource sharing: how can a software system control multiple processes’ access to a shared resource, when each process needs exclusive control of that resource while doing its work? The mutual-exclusion solution to this makes the shared resource available only while the process is in a specific code segment called the critical section. It controls access to the shared resource by controlling each mutual execution of that part of its program where the resource would be used. It makes a bit more sense now, isn’t it? (except for the last sentence) The problem: we want only one process (a person) to be in the critical section (a restroom) while using the shared resource (a toilet). A mutex is a way to solve this problem — we have a room with only one way in, and this way in has a lock. If some process wants to use the shared resource, but the critical section is occupied, this process would have to wait, possibly among other processes that also need to use that shared resource. So a mutex is a lock, but it’s not a padlock. It’s a lock on a door to a room with only one entrance. Note that “only one entrance” is also important — without it, the problem won’t be solved. So, a mutex is a restroom, right? Not quite. There are variations. Recursive/reentrant mutex One of the types of mutexes is recursive/reentrant mutex*. It is similar to a regular mutex but has an extra feature. Let’s try to learn what it is from Wikipedia: In computer science, the reentrant mutex (recursive mutex, recursive lock) is a particular type of mutual exclusion (mutex) device that may be locked multiple times by the same process/thread, without causing a deadlock. “…may be locked multiple times” — so reentrant mutex is reusable, and a regular mutex is not. If the thread used a regular mutex once — it should never touch this mutex again. Right? No, and the article clarifies that in the next paragraph: While any attempt to perform the “lock” operation on an ordinary mutex (lock) would either fail or block when the mutex is already locked, on a recursive mutex this operation will succeed if and only if the locking thread is the one that already holds the lock Can we use our restroom metaphor to see what it is about “perform the lock when already locked?” Imagine the restroom door is a door opened by a badge. 
What would happen if someone would forget the badge inside and leave the restroom? A deadlock. The door is locked, and the badge is required to open it. But the badge is inside, so to get it, you should unlock the door first (and there is no other badge to use instead because that would defeat the whole purpose of mutex). Ok, and how reentrant lock solves this problem? It’s possible to stretch the example and say, “the door uses face recognition, and if it’s the last person that used the restroom, the door would be unlocked,” but that would be ridiculous. Also, “forgot to unlock the mutex” (that’s equivalent of forgetting the badge inside) is not the problem reentrant lock solves. There is a better example: a fitting room. A fitting room (recursive) You are in a fitting room, and the M-size jacket turned out to be more like XL. You leave some of your things in the room, go fetch the smaller jacket, and then go back to the same fitting room you were using. While you are gone nobody would take the room, even though you are not there. Here, a process is still a person, the critical section is a fitting room, and a shared resource is a place where you can change clothes. The mutex is your things in the room (also the room itself, and the convention that you can’t use the room if someone’s things are inside). It’s a reentrant mutex, so you are free to leave the critical section at any time and then return to it later. Both examples are about private rooms, but what if processes in our program need exclusive control just some of the time? Readers–writer lock There are different situations where shared resource is something that multiple processes could use at once. And for this use case, there is another type of mutex called a readers-writer lock. Here is what Wikipedia says about it: In computer science, a readers–writer (single-writer lock,[1] a multi-reader lock,[2] a push lock,[3] or an MRSW lock) is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, while write operations require exclusive access. This means that multiple threads can read the data in parallel but an exclusive lock is needed for writing or modifying data. When a writer is writing the data, all other writers or readers will be blocked until the writer is finished writing Previous examples don’t have these different reader and writer roles, so let’s use a different one: a display with the cafe menu.
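The article breaks off at the cafe-menu example, but the core idea from the restroom and fitting-room sections, one task at a time inside the critical section, can be sketched in a few lines of TypeScript. JavaScript has no built-in mutex for asynchronous code, so this is a small promise-based lock, purely illustrative; a readers-writer lock would add a shared "read" mode on top of the same queueing idea.

```ts
// A minimal promise-based mutex: one async task in the critical section at a time.
class Mutex {
  private tail: Promise<void> = Promise.resolve();

  // Resolves with a release() function once it is our turn to enter.
  lock(): Promise<() => void> {
    let release!: () => void;
    const turnOver = new Promise<void>((resolve) => (release = resolve));
    const myTurn = this.tail.then(() => release); // wait for everyone queued ahead of us
    this.tail = this.tail.then(() => turnOver);   // the next caller waits until we release
    return myTurn;
  }
}

// The shared resource: the menu display several parts of the program want to touch.
let menu: string[] = ["soup", "salad"];
const menuLock = new Mutex();

async function replaceMenu(items: string[]): Promise<void> {
  const release = await menuLock.lock(); // enter the critical section (lock the restroom door)
  try {
    menu = items;                        // exclusive access to the shared resource
  } finally {
    release();                           // always unlock, even if something throws
  }
}

async function readMenu(): Promise<string[]> {
  const release = await menuLock.lock(); // a readers-writer lock would let readers share this step
  try {
    return [...menu];
  } finally {
    release();
  }
}
```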
https://medium.com/swlh/what-is-mutex-6127af8ced4f
['Yan Babitski']
2020-12-28 09:28:07.584000+00:00
['Software Development', 'Software Engineering', 'Concurrency', 'Learning To Code', 'Mutex']
How to use Silk to publish data-driven stories efficiently
Silk and InfoTimes have partnered up to offer a series of free webinars and hands-on sessions to journalists who want to hack their data skills. To sign up: register here. http://goo.gl/forms/LJnwvSqjyI The webinars will teach journalists how to master the data publishing platform Silk. With Silk, data journalism is easy: you can go from spreadsheet to analysis to publish-ready visualizations in minutes, without the need to code a single line. What's more, Silk is free to use. Alice Corona's webinar The webinars will be held by Amr Eleraqi, founder and director of InfoTimes, and Alice Corona, Silk's in-house data journalist. The first webinar will show "how to use Silk to publish data-driven stories efficiently". At the end of the webinar, we'll ask users to come up with a dataset they'd like to see visualized. The following webinar will look at how to transform a selection of these specific dataset(s) into a Silk project, with ready-to-publish visualizations. Users can either follow the chosen case study or apply the steps to their own data project. Silk's data journalist will also be available after the webinar to offer personal assistance to those who wish to continue with their projects after the training ends. To sign up: register here. http://goo.gl/forms/LJnwvSqjyI Schedule: Thursday January 21st 2016, Introduction to Silk; Thursday January 28th 2016, Publish your own story with Silk. Please also invite friends and colleagues because they might learn useful skills from this webinar. Hope to see you there! Alice from Silk.co and Amr from InfoTimes About Silk: Silk is a place to publish your data. Each Silk contains data on a specific topic. Anyone can explore a Silk and create interactive visualizations. Silk has been successfully used by The Guardian, The Washington Post, Mashable, IJNet and Human Rights Watch. Here's an 80-second introduction to Silk: https://www.youtube.com/watch?v=JOJ5cXzEzf4 About InfoTimes: InfoTimes is a modern information design and data visualization agency in the heart of Egypt with one common philosophy: data should be designed for sharing. Our mission at the current time is to improve the skills of Arab journalists, raise awareness among the Arab press of the importance of data journalism, and show how they can use datasets and spreadsheets for their own good.
https://medium.com/info-times/how-to-use-silk-to-publish-data-driven-stories-efficiently-b3c5d5ae48b0
[]
2016-12-19 15:53:48.905000+00:00
['Infotimes News', 'Journalism', 'Data Visualization']
How Blockchain can transform the world!! (usecases) -part 1
“Bitcoin, Ethereum,altcoins, ICO’s, moon, tokens, miners, state regulation etc” Too many buzz words.We got really entertained in the last 1 year but we still don’t understand how they matter to us in the long run and how they can transform the world. Definitions: Blockchain:(wiki) Blockchain was invented by Satoshi Nakamoto in 2008 to serve as the public transaction ledger of the bitcoin. It is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block typically contains a cryptographic hash of the previous block, a timestamp, and transaction data. By design, a blockchain is resistant to modification of the data “Blockchains are trust machines” The invention of the blockchain for bitcoin made it the first digital currency to solve the double-spending problem without the need of a trusted authority or central server. Series of blocks (immutable database) are replicated on all nodes for decentralization and security. Types of blockchains: permissionless, public (anyone can join and ledger data is public) (Bitcoin, Ethereum) permissioned, public(verified nodes can join. ledger data is public) permissioned, private(verified nodes can join.ledger data is private) (Hyperledger) Technically, Blockchain consists of a. cryptographically linked blocks b. p2p network protocol-> for communication between nodes c. tamper resistant ->proof of work/proof of stake d. incentives-> bitcoin, Ethereum e. consensus protocol -> to finalize the transaction in block (lengthy chain of blocks , PBFT, Tendermint, Avalanche) f. governance ->for crucial decisions and roadmaps Cryptocurrencies: Bitcoin ->first and well known/proof of work/less developer friendly Ethereum->developer friendly/fast growing ecosystem/pathe definer/turing complete language/smart contracts privacy coins -> shield transactions (zcash/monero/dash) stable coins-> pegged to USD with fiat / crypto collateral (maker DAO, tether) IOTA — data driven blockchain ( for IOT) NEO->chinese counter part of Ethereum Ethereum competitors->zilliqa, cosmos, Algorand, Polkadot, EOS,stellar Tokens: utility tokens — inherent utility in blockchain and token value increases with demand (Golem, Sia, Ethereum) security token — tokens of real world assets payment tokens — for payments, transfer (Bitcoin, litecoin) Token standards: ERC20-technical standard used for smart contracts on the Ethereum blockchain for implementing tokens. ERC 721 — non fungible token standard(collectibles like cryptokitties) ERC 998 — composable non-fungible token standard(real estate tokens) ERC1155 — gaming token standard ERC stands for Ethereum Request for Comment “Internet is transfer of information, Blockchain is transfer of value” Usecases: Store of value: From the above chart, crypto with high probability will capture the large market of store of value due to portability (compared to gold). It can be the next inter-generational asset like land. My preference: Bitcoin — why? a. secure — proof of work b. non-inflationary c. public acceptance If Bitcoin could not meet the transactions required per day, then Bitcoin may be used by Rich(high txn cost)and Ethereum upper middle class (moderate txn cost) 2. Reserve currency: US dollar and SDR are considered as reserve currencies globally. There is a high chance of Bitcoin/Ethereum replacing them due to universality with high market cap in-future. However, fluctuation in prices should be moderate and all the countries should give the nod (mainly US,EU,India, china) to see that happen. 3. 
International money transfer:(banks) At present, Money can be transferred instantly through crypto-currencies rather than waiting for a long time through traditional setup Bitcoin with rootstack / Ethereum can be the mode of international money transfer at scale. 4. Financial products: a. Fundraising — ICO’s/ can compete with VC’s b. Lending- crypto as collateral- (dharma protocol) Best usecase is → lending money to people of developing countries by people of developed countries. Interest rates are high in developing countries compared to developed countries. However, reputation analysis of person is crucial to lend money. Reputation protocol is the key c. Derivatives — (dydx) d. Payments — in-app payments/ privacy payments In-app payments(games), privacy payments will be the best usecases. Daily payments by people will not be a killer app for blockchain. In developing countries like India ( UPI/BBPS/Paytm/tez), China ( Alipay/wechat), Payments are much faster,easier,safer, comfortable and cheap. e. Security tokens They can provide an array of financial rights such as equity, dividends, profit share rights, voting rights, buy-back rights, etc to investors . Often these represent a right to an underlying asset such as a pool of real estate, cash flow, or holdings in another fund. Tokens are traded on a blockchain-powered exchange with rights written into a smart contract. Benefits: 1. 24/7 markets 2. Fractional ownership 3. Rapid settlements 4. Reduction in direct costs 5. Automated compliance 6. Asset interoperability 7. Increased liquidity and market depth 8. Expansion of design space for security contracts In US, Ethereum is not considered as security. Howey test can guide us on deciding whether tokens are security or utility tokens. 5. Decentralized bank. It includes a. storage of funds — (Eth) b. Access to funds — (Balance) c. Access recovery — decentralized recovery!! d. stable account — (Maker DAO) e.interest bearing accounts- (Dharma) f. exchange of funds-(Ox) g.usage of funds- ? (Wallet connect) To succeed,this bank should excel in 1. Governance 2. Usability 3.Identity 4.Speed 5. Recoverability 6. Reliability This gonna take time, but we are not very far. Recoverability options: a. Biometric data- can get changed b. Social recovery- tell to friends — trust issues c. KYC procedures d.Paralysis proofs/time lock recovery/last resort recovery 6. Collectibles As we all know, Ethereum got congested due to famous game named cryptokitties. CryptoKitties is a blockchain based virtual game developed by Axiom Zen that allows players to purchase, collect, breed and sell various types of virtual cats. At present, famous artworks are collectibles people die for. However, in-future, virtual collectibles will be the trend. Ex: Curiocards, Rare Art Labs, DADA etc 7. Gaming Gamers are the early adopters of virtual in-app tokens. However, these tokens are not inter-operable. With Interoperability, gamers can exchange tokens for the fiat money/bitcoin and can be consistent earners.(parallel income) Gaming with crypto tokens + VR/AR will be the killer app I personally feel, gaming + scalability solutions will lead to next round of crypto hype cycle. 8. Prediction Markets Prediction markets will become the new age Facebook, whatsapp. Humans love predictions and they love to predict things regularly. Usecases: a. Football/cricket matches b. Who will die next in Game of thrones!! c. Who will be the next prime minister d. Girls asking friends to predict her date behaviour. 
What if you earn with every right prediction, then its addictive than whatsapp :) Ex: Gnosis, Augur 9. Dating Tinder helps in identifying people with similar interests and good-looking dates. But, people sell themselves o tinder pre-date which can be true/ false. Post-date, there is no feedback on authenticity of person regarding his/her interests/traits. Reputation can be key here. People can rate other people post-date on parameters like a. Humour b. Chivalry c. Romantic d. workaholic e. too serious f. friend zone material etc 10. Hiring: In linkedin, testimonials/references may not help us identifying the inherent nature of person. It only gives us idea of him/her. Colleagues should be incentivised to rate the person on different parameters(in-private) like a.Working in teams b. Emotional/logical/over stressed c. inspirational d. leadership qualities e. Smart / hard worker Right amount of incentives helps us identifying the persons we wish for and improve the productivity of organization 11. Decentralized applications (Blockstack) a.Decentralized Identity “We envision a world where people can decide who they want to share their personal information with and what information gets shared. Civic’s visionary blockchain identity-verification technology allows consumers to authorize the use of their identities in real time. We are spearheading the development of an ecosystem that is designed to facilitate on-demand, secure, low-cost access to identity-verification services via the blockchain.” — Civic Ex: Civic, uport , selfkey etc b. Decentralized storage — Storj, filecoin, sia, IPFS c. Decentralized computation — Golem d. Decentralized communication — Whisper c. Decentralized Ms-office — graphite (Blockstack) 12. Insurance: As the everything gets digitized, vehicle data, home automation data, industrial plants data, health data helps in changing the face of insurance industry. Blockchain helps as trust machines by stroing the hash of time-series data stored in Database. People can share data privately with data and can earn discount on the premiums. — — — — — — — In the next part, Other important use cases will be discussed. PS: I am thankful to the blockchain community for knowledge sharing. This post is inspired by content of many outstanding writers in block chain space. I am also grateful to my wife for helping me on how to rate humans in reputation systems.
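To ground the definition at the top of this post ("a continuously growing list of records, called blocks, which are linked and secured using cryptography"), here is a toy TypeScript sketch of that linking. It deliberately leaves out everything that makes a real blockchain work (the p2p network, proof of work or stake, consensus, incentives) and only shows how each block commits to the hash of the previous one.

```ts
import { createHash } from "crypto";

interface Block {
  index: number;
  timestamp: number;
  data: string;      // in a real chain this would be a batch of transactions
  prevHash: string;  // the cryptographic link to the previous block
  hash: string;
}

function hashBlock(b: Omit<Block, "hash">): string {
  return createHash("sha256")
    .update(`${b.index}|${b.timestamp}|${b.data}|${b.prevHash}`)
    .digest("hex");
}

function addBlock(chain: Block[], data: string): Block {
  const prev = chain[chain.length - 1];
  const header = {
    index: prev ? prev.index + 1 : 0,
    timestamp: Date.now(),
    data,
    prevHash: prev ? prev.hash : "0".repeat(64), // the genesis block links to nothing
  };
  const block: Block = { ...header, hash: hashBlock(header) };
  chain.push(block);
  return block;
}

// Changing any earlier block changes its hash, which breaks every later prevHash link.
// That is what makes the ledger tamper resistant once it is replicated across nodes.
function isValid(chain: Block[]): boolean {
  return chain.every((b, i) => {
    const linked = i === 0 || b.prevHash === chain[i - 1].hash;
    return linked && b.hash === hashBlock(b);
  });
}

const chain: Block[] = [];
addBlock(chain, "genesis");
addBlock(chain, "Alice pays Bob 1 BTC");
console.log(isValid(chain)); // true -- until someone edits an earlier block
```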
https://medium.com/coinmonks/how-blockchain-can-transform-the-world-usecases-part-1-c91d0a2941b
['Anurag Reddy']
2018-06-28 09:14:35.424000+00:00
['Bitcoin', 'Blockchain', 'Ethereum', 'Cryptocurrency', 'Future']
Better Data and Smarter Data Policy for a Smarter Criminal Justice System
Written with Jason Tashea for the Harvard Shorenstein Center and New America. If the promise of artificial intelligence is to make systems smarter and more efficient, there may be no better candidate than the US criminal justice system. On any given day, close to half a million people — many whose charges will ultimately be dropped — sit in jail awaiting trial, at an estimated cost of $14 billion per year to taxpayers [1]. One in three, or 74 million adults in the United States, have a criminal record [2]. The majority of these records, however, comprise non-violent misdemeanors or charges that never led to a conviction [3, 4]. For those who serve time, failure to rehabilitate is the norm in this country: just shy of a staggering 77 percent of state prisoners are rearrested within five years of their release [5]. These burdens fall disproportionately on communities of color and the poor. African Americans are incarcerated in state prisons at five times the rate of white Americans [6]. Black people are four times more likely to be arrested than whites for marijuana offenses, despite similar rates of use [7]. Money bail locks people up primarily for their inability to purchase their freedom, rather than their risk to society. Thankfully, “tough on crime” policies are giving way to “smart on crime” approaches. Nationwide, efforts to reform money bail could avoid much of the human and financial toll — about $38 million per day — associated with pretrial incarceration, though, the results are mixed [8–11]. In Georgia, the expansion of drug, mental health, and veteran courts has led to a decrease in crime and incarceration across the state, including a 36 percent decline in youth imprisonment [12]. When New York City ended stop and frisk — a police practice of temporarily detaining and searching disproportionately black and brown people on the streets — in 2013, recorded stops fell from over 685,000 in 2011 to 12,000 in 2016 (a 98 percent drop) and crime continued to decline [13]. Data, automation, and a culture of experimentation can further hasten reforms, making our criminal justice system smarter, fairer, and more just. However, careful attention must be paid to the infrastructure at the heart of every artificial intelligence system: the data and its algorithms, the human beings that use it, and accountability. AI is the easy part: we need better data and data policy to end mass incarcerationIt’s our belief that this starts with policymakers, who need to pay more attention to the foundational issues of data collection and standardization, which includes training data that build artificial intelligence systems and data sharing, as well as the oversight needed to ensure that automated processes are yielding desired outcomes in practice — not theory. As it stands, deficiencies in these areas are already presenting challenges in three major areas: pretrial risk assessment, reentry, and second chances. Pretrial Risk Assessment Across the country, courts are using profile-based risk assessment tools to make decisions about pretrial detention. The tools are built on aggregated data about past defendants to identify factors that correlate with committing a subsequent crime or missing a trial date. They are used to score individuals and predict if pretrial incarceration is necessary. Each risk assessment tool available on the market relies on different factors. 
The Public Safety Assessment (PSA) tool, developed by the Laura and John Arnold Foundation and deployed in 40-plus jurisdictions, uses nine factors, including historic criminal convictions and the defendant’s age at the time of arrest to determine scoring [14]. Equivant’s COMPAS Classification software uses six factors in risk assessment and over 100 factors to carry out needs assessments that determine what services a person needs [15, 16]. Despite their differences, because these tools are built on historical data, they run a real risk of reinforcing the past practices that have led to mass incarceration, like the over incarceration of poor and minority people [17]. COMPAS and the PSA have each been challenged in court, thus far unsuccessfully, regarding their accuracy, transparency, or impact on a defendant’s due process rights. Data-Driven Recidivism Reduction There are similar concerns about the application of evidence-based tools at the opposite end of the carceral cycle effecting the 640,000 prisoners who reenter society each year. The President has supported the First Step Act, made reentry a priority, and promoted evidence-based recidivism reduction in the federal prison system [18–20]. But to build successful, data-driven programs requires shoring up the underlying criminal justice data, which is notoriously messy and siloed. As a recent report by the White House Council of Economic Advisers concluded, investments in better evidence and assessment tools and carefully designed empirical evaluations are needed to determine what does and doesn’t work to close prison’s revolving doors [21]. The Act would require the DOJ to implement a risk and needs assessment system to determine how to assign programming and provide incentives and rewards to inmates. But concerns remain that, because the tool is likely to be built using historical data and will be implemented at the attorney general’s discretion (although with the input of an Independent Review Committee), it may amplify existing existing racial and other biases. Second Chances Also in the realm of reentry, waves of “second chance” reforms have been enacted across the country. These policies increase the eligibility of individuals for early release, clear their criminal records, or help them regain the right to vote. But while much attention has been paid to the increasing availability of second chance opportunities, less is known about their uptake and impact. Recent research conducted by one of us defines and documents the “second chance gap” between eligibility for and award of receive second chance relief — in the form of re-sentencing, records clearing, and re-enfranchisement [22]. It finds that although tens of millions of Americans could clear their records, only a fraction of them have, leaving behind a lower bounds estimate of 25–30M persons living with records that could, under current law, be cleaned up, and the damage of living with a criminal record, to employment, housing and host of other prospects, lessened. The large number of individuals in the gap stems from a variety of reasons including a lack of awareness of eligibility, prohibitive costs, fines and fees, and cumbersome application processes. The potential gains from closing this “second chance gap” — including decreased incarceration costs, restored dignity among former prisoners, public safety, and employment — are too valuable to ignore. 
Machine automation can help remove the red tape, not steel bars, that hold individuals back, as demonstrated in California, Maryland, and Pennsylvania [23–25]. The devil, as usual, is in the data and design details — with the reach of clearance and its cost — from pennies to thousands — depending on how its implemented [26]. What all three contexts — pretrial detention, recidivism, and second chances — have in common is that the quantity of potential improvements and accountability regarding their delivery depends on the quality of the underlying data and algorithms in use, as well as access to the resulting outcome data. Machine automation and machine learning require machine readable, structured, and, in the case of supervised learning, labeled data. The algorithms derived from this data need to be evaluated and benchmarked for their performance. Once deployed, novel interventions and their impacts need to be validated. These steps, each challenging in their own right, can happen. Proactively attending to the vital issues of data collection, sharing, and oversight will make it much more likely that they do. Data Collection and Standardization Despite progress in recent years, data about the criminal justice system remains notoriously messy, complex, and hard to come by in standardized formats. Information is often locked up in public and private data silos and paper files, and in employment, prison, and court records. As a result, getting permission to collect and clean data from disparate sources consumes a disproportionate amount of time, often putting it out of the reach of the very reformers who are trying to develop and test novel insights and study the impacts of their implementation. Paying attention to data collection and standardization at the outset can avoid these data deficits. A new Florida law shows one way [27]. It requires counties to publicly release 25 percent more data than they currently do into a public database, providing for a centralized process for the regular collection, compilation, and management of data about individuals, processes, and outcomes in the criminal justice system [28]. In this case, all data must be submitted in useable (machine-readable, disaggregated, privacy-respecting) form, with a single, unique identifier for information collected about an individual across criminal justice agencies, like courts, corrections, and police. The output should support innovation and community-based policy development, implementation and refinement, and, in the process, accountability and trust. Training and Test Data and Data Sharing Once system data is collected, additional time must be spent preparing it for research. As Fei Fei Li has said, “The thankless work of making a dataset is at the core of AI research. [29]” The Imagenet training dataset Li helped create and shared with the world has become the foundation for powering and measuring advances in image recognition [30]. Such datasets can be used to “train” software to recognize and correctly label images, as well as provide a “test” or benchmark for evaluating the relative accuracy of different artificial intelligence algorithms. The sharing of criminal justice data, whether in the form of labeled training or holdout datasets or other means, would accelerate progress. Research-ready data should be prioritized from the start, with designated interfaces already designed for use by both computers and human to facilitate the secure, privacy-respecting sharing of data. 
Algorithmic Oversight While bolstering data collection and data sharing, federal and local governments also need to get serious about AI oversight. There is a transparency issue regarding existing tools that is in direct conflict with an open and transparent court process. For example, COMPAS refuses to make public the details of its algorithm, and neither COMPAS nor the PSA have publicly released their training data — though there may be privacy protections in tension with doing so. Once a system has been implemented, auditing a system to know whether it is performing as intended is also difficult to carry out. But rather than make case-by-case determinations, proactive policies that support public trust and good science should be put in place. New York City passed a law to study the issue of transparency in algorithms used by governments. However, the United States on the whole is a laggard in data and algorithmic regulation. The lack of transparency and oversight both diminishes a tool’s potential for improvement and carries with it the liability of curtailing a defendant’s due process. While jurisdictions, whether local, statewide, or national, are likely to advance standards tailored to their individual needs, bedrock tenets of data governance should be kept in mind. Tool providers should be required to disclose their inputs and processes, and agencies and courts should be required to explain how they are using the tools and how they are performing. Certainly not everyone should have access to all the data and corresponding algorithms — this could undermine privacy, invite game playing, and discourage innovation — but access needs to be robust enough to ensure accountability, advance scientific understanding and iteration, and build public trust. Artificial intelligence offer great potential to turn the tide on mass incarceration in the US. However, it will not be as simple as using the right tool or finding the right dataset. If criminal justice reformers and policymakers are serious about a smarter criminal justice system, enhanced in part by AI, they must prioritize creating a smart and strong foundation — based on solid data and solid data policy — on which to support it. References
https://medium.com/artificial-intelligence-ai-for-social-impact/better-data-and-smarter-data-policy-for-a-smarter-criminal-justice-system-system-8a6fdf933f49
['Colleen Chien']
2019-01-22 19:58:45.607000+00:00
['Law', 'Artificial Intelligence', 'Data', 'Criminal Justice Reform', 'Machine Learning']
America Returns to Its Violent Normal
America Returns to Its Violent Normal Violence is only permissible here when it’s state-sanctioned Police in Minneapolis, Minnesota, on May 29, 2020. Photo: Scott Olson/Getty Images I find myself once again struggling with the American definition of violence. With who gets to define what violence is, and what it looks like. Some of this is because violence is so often discussed only as action, and not inaction: Protesters in the streets, but not institutional neglect. Violence in this country is so often discussed in the present, without any historical context. Our country talks about a city on fire, but rarely about what had to burn and who had to be left behind for a city to exist in the first place. America now finds itself in another moment of reckoning, right as the country opts to “reopen,” a catchall term for some kind of return to normalcy that also whitewashes the risks for those who can’t work from home and must now return to the workplace. States like Ohio are setting up databases to report employees who don’t come to their place of work. Make no mistake: This is an act of violence. In a country where unemployment has skyrocketed and 40% of people can’t afford a $400 emergency expense, the wealth of billionaires continues to accumulate at the expense of exploited workers. This, too, is violence. People who are unhoused sleep outdoors in cities where hotels sit largely empty. Covid-19 has run through multiple prisons and detention centers, leaving many incarcerated individuals sick, and some dying without proper care. All of this, violence. The timing of not just the killings, but also the uprisings in response to the killings, has made me question what American Normalcy is and who gets to celebrate a return to it. In concert with America’s quest to return to normalcy, people have taken to the streets to protest the killings of black people by the police: Breonna Taylor, killed in Louisville in March. George Floyd, killed in Minneapolis on May 25. Tony McDade, killed by police in Tallahassee just two days later. The timing of not just the killings, but also the uprisings in response to the killings has made me question what American Normalcy is, and who gets to celebrate a return to it. So much of the discourse around the material nature of reopening has revolved around finances. Around markets and labor and the economy. But significantly less ink has been spilled on the subject of what opening means for our nation’s obsession with policing, with surveillance, and with punishment. Most likely, it means more of the same for America’s vulnerable and marginalized. And, as we’re seeing in Minneapolis and around the country right now, many of those people are fed up. The pandemic revealed even more clearly how systemic the inequality in this country is, and the extent to which the power structure values lives differently depending on race, income, and privilege. The pictures that show white protesters storming statehouses alongside those that show black protesters drowning in clouds of tear gas should send a powerful message, though at this point I believe many people should be beyond a need for visual contrasts. I also don’t feel particularly interested in shepherding people to some kind of awakening. Many people aren’t who they were three or four months ago, and the illusion of the country’s greatness has found a new way to wear thin. A country that was, and has been held up by workers, by insular communities stretching themselves to the limits to give people what they need. 
Those are the people once again marching along the streets right now. And for many of us, this moment feels familiar. We are up late at night again. An otherwise dark bedroom, punctuated by the light from a cellphone screen showing a building on fire or a fist thrust into the sky illuminated by fire. There are people who know the routine of marching in their city’s streets for hours, only to return to their homes and watch the videos of people marching elsewhere. To search for bail funds on the internet and do math with whatever money they have to spare. Many of us are here again with the understanding that we never left. This is American Normalcy, too. The country didn’t imagine a return to protest, but it was inevitable. We live in a place that was founded on and profits from violence, even if the state won’t define it as such. There are those who obsess over the burning of a Minneapolis police precinct but not a man’s broken neck. Who prefer demonstrations of peace because such tactics are less likely to disrupt bubbles of comfort. But also because when people fighting for freedom use tactics some would deem violent, it is holding up a mirror to a violent country. Whether or not that result is intentional or understood by those in power. So much of what is labeled as violence was learned through American machinery or American neglect. And so, this is our grand reopening, same as it ever was. But this time people have seen the nation grind to a halt when they were removed from its workforce. They’ve seen the rich grow richer. They’ve hoped for something better, or at least different, on the other side of a pandemic. And still, here is the same America. A country so eager to return to normal — howling with grief, soaked in blood.
https://gen.medium.com/america-returns-to-its-violent-normal-d6828edbd27d
['Hanif Abdurraqib']
2020-06-05 12:40:20.410000+00:00
['Politics', 'Justice', 'Society', 'Police', 'Race']
Powerful Feature Flags in React
Feature flags allow you to roll out a feature gradually rather than doing a risky big-bang launch, and they are extremely helpful when used in a continuous integration and continuous delivery environment. At Optimizely, we commonly use feature flags to reduce the risk of complicated deploys like rebuilding UI dashboards. However, building a feature flagging system is usually not your company's core competency and can be a distraction from other development efforts. I'm Asa, Optimizely's Developer Advocate. In this eight-step blog post, I'll show how to get the value of powerful feature flags by rolling out a feature customer-by-customer in React using Optimizely Rollouts: a completely free product. Note: If you don't have a ReactJS application, we recommend creating one with create-react-app. 1. Set Up the Feature Flag Interface Create a free Optimizely Rollouts account here. In the Rollouts interface, navigate to 'Features > Create New Feature' and create a feature flag called 'hello_world'. To connect your 'hello_world' feature to your application, find your SDK Key. Navigate to 'Settings > Datafile' and copy the SDK Key value. 2. Install the Optimizely Rollouts React SDK The React SDK allows you to set up feature toggles from within your codebase using JavaScript. Using npm: npm install --save @optimizely/react-sdk, or using yarn: yarn add @optimizely/react-sdk. Use the SDK by wrapping your main React App component in the OptimizelyProvider component and replace <Your_SDK_Key> with the SDK key you found above. Note that the OptimizelyProvider takes a user object that defines properties associated with a visitor to your website. user.id is used for a random percentage rollout across your users; user.attributes are used for a targeted rollout across your users. You will use these attributes to target your feature to specific groups of users starting in step 5. 3. Implement the Feature To implement your 'hello_world' feature, first import the OptimizelyFeature component at the top of your application: import { OptimizelyFeature } from '@optimizely/react-sdk' Then put the OptimizelyFeature component in the app, passing your feature key 'hello_world' to the feature prop of the OptimizelyFeature component. The feature prop connects the component to the feature you created in the Optimizely UI in the first step. Your full code sample now looks like the sketch at the end of this section. 4. Turn the Feature Toggle on! If you run your application now, you'll notice that you did not get the feature. This is because the feature is not enabled, which means it's off for all visitors to your application. To turn on the feature: navigate to Features, click on the 'hello_world' feature, toggle the feature on and ensure it is set to 100%, then click Save to save your changes. In around a minute, refreshing your React app should now show the feature toggled on and you should see "You got the hello_world feature!!". You have now successfully launched your feature behind a feature flag, but it's available to everyone. The next step is to enable targeting to show your feature only to a specific subset of users, to unlock the true value of rolling a feature out customer-by-customer. 5. Create an attribute for customerId To target your feature based on the userAttributes you provided to the OptimizelyProvider component in step 2, you'll have to create those userAttributes in the Rollouts UI.
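The original post embeds its code samples as gists that are not reproduced here, so the following is a minimal sketch of the wiring described in steps 2–3, reconstructed from the publicly documented @optimizely/react-sdk API rather than taken from the post itself. The SDK key and the user values are placeholders, and prop names follow the current SDK, so they may differ slightly from the original samples.

import React from 'react';
import {
  createInstance,
  OptimizelyProvider,
  OptimizelyFeature,
} from '@optimizely/react-sdk';

// One Optimizely client for the whole app, keyed by the SDK key from step 1.
const optimizely = createInstance({ sdkKey: '<Your_SDK_Key>' });

function App() {
  // user.id drives random percentage rollouts; user.attributes drive targeted
  // rollouts (the customerId attribute is what steps 5-7 target on).
  const user = { id: 'user123', attributes: { customerId: 123 } };

  return (
    <OptimizelyProvider optimizely={optimizely} user={user}>
      <OptimizelyFeature feature="hello_world">
        {(isEnabled) =>
          isEnabled ? (
            <p>You got the hello_world feature!!</p>
          ) : (
            <p>The hello_world feature is off for this user.</p>
          )
        }
      </OptimizelyFeature>
    </OptimizelyProvider>
  );
}

export default App;

With the feature toggle off (or not yet enabled), the render prop receives isEnabled = false and the fallback text is shown; once step 4 flips the toggle to 100%, the same user sees the feature within about a minute.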
Do that with the attribute 'customerId' to start: navigate to Audiences -> Attributes, click 'Create New Attribute…', name the attribute key 'customerId', and click 'Save Attribute' to save your changes. 6. Create and add a beta audience Now let's create an audience to indicate which customerIds will get access to your feature: navigate to Features, click on your 'hello_world' feature, scroll down to Audiences, click 'Create New Audience…', name the audience '[hello_world] Beta Users', drag and drop your customerId attribute into the audience conditions, change the 'has any value' drop-down to "Number equals" with the value 123, and click 'Save Audience'. Add the audience to your feature by clicking the + button next to your newly created audience, then scroll down and click 'save'. Now that you've added the audience to your feature, the beta is up and running. At this point your feature is only showing for customers with the customerId 123, which is what you provided to the OptimizelyProvider component in the userAttributes prop. As a test to verify, you can change your customerId to 456, save, and watch as the feature gets turned off because you don't meet the targeting conditions (the sketch after this section shows the same check in code). 7. Add users to the beta To add more customers into your beta audience, edit the audience definition: click on the "+" sign and save to add beta users, or click on the "x" sign and save to remove beta users. For example, after adding three customers to the beta (ids 123, 456, and 789), all three will have access to the 'hello_world' feature. 8. Launch the feature After enabling your feature for enough customers to enjoy the new user experience, you may decide that it's safe to launch your feature to all customers. Once you are ready to launch your feature out of beta, follow these steps: remove the audience from your feature, ensure the rollout is configured to 100%, and save the feature. The feature is now available to everyone and you have successfully rolled out the 'hello_world' feature customer-by-customer using free feature flags from Optimizely Rollouts in React!
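Since the post's audience screenshots aren't reproduced here, this hypothetical sketch illustrates the check described in steps 5–6 in code: the feature's visibility is driven entirely by the attributes handed to OptimizelyProvider, so swapping the customerId is all it takes to fall in or out of the beta. AppForCustomer, currentCustomerId, and the './optimizelyClient' module are names invented for this illustration; they are not from the original post.

import React from 'react';
import { OptimizelyProvider, OptimizelyFeature } from '@optimizely/react-sdk';
// 'optimizely' is the createInstance(...) client from the earlier sketch,
// assumed to live in a module of your own; the path here is hypothetical.
import { optimizely } from './optimizelyClient';

function AppForCustomer({ currentCustomerId }) {
  // The '[hello_world] Beta Users' audience matches on customerId, so this
  // attribute alone decides whether the render prop sees isEnabled = true.
  const user = {
    id: String(currentCustomerId),
    attributes: { customerId: currentCustomerId },
  };

  return (
    <OptimizelyProvider optimizely={optimizely} user={user}>
      <OptimizelyFeature feature="hello_world">
        {(isEnabled) =>
          isEnabled ? <p>You got the hello_world feature!!</p> : <p>Not in the beta yet.</p>
        }
      </OptimizelyFeature>
    </OptimizelyProvider>
  );
}

export default AppForCustomer;

// <AppForCustomer currentCustomerId={123} />  matches the beta audience (step 6)
// <AppForCustomer currentCustomerId={456} />  stays off until step 7 adds 456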
https://medium.com/engineers-optimizely/powerful-feature-flags-in-react-e49ab82cf651
['Asa Schachar']
2020-04-03 17:29:08.334000+00:00
['Software Development', 'Front End Development', 'React', 'Continuous Integration', 'Feature Flags']
Following Through
Following Through We’ve all got half-finished projects in our Github, here’s how to make sure your next project doesn’t become one of them Photo by Niyas Khan on Unsplash In my last two posts (which you can access here and here) I talked about some of the steps I took to execute a project. In the first, I discussed how to “chase down” a project idea. Particularly I emphasized that project ideas don’t just come to you fully formed all at once. Or at least, that happening is extremely rare and shouldn’t be relied upon. There are a few routes, such as focusing on skills you want to acquire or domains you want to work in, that can help you zoom in on what you want. Also, in my case, just thinking about a tool you wish existed. In my second post, I talked about the first steps to take to make that project a reality. In essence, you need a general direction and concrete steps to take, while at the same time being flexible in the event something goes wrong. Starting projects is a balancing act, you have to have direction but be flexible, you should research but also dive into coding as soon as you can. Now, while each of those steps are difficult in themselves, I would argue this last step can be the most difficult of all: completing the project. The beginning of a new project is always exciting. You’ve got these new ideas, either the final product feels straightforward, or the initial steps feel pretty straightforward. All you need to do is figure out how to do these three things and then you’ll have a shiny new project. Then you start working. Get Creative I liked to joke that for every actual step I took, I had five more issues I had to solve. This is not a unique experience. In fact, I’d venture to say anyone that’s ever done any kind of creative project has this experience to some extent. The fact is, maybe you’ve looked at some tutorials, but, if your project is any good, no one is giving you exact directions on what to do. This means you might go in one direction, but then realize you actually cannot go down that path because something takes too much processing power, or it’s just not feasible. Again, this is where that flexibility I talked about in my last post comes into play. You cannot complete a project with the mindset that there is only one possible way to do something and that you must complete something by executing these specific steps in this specific order. Solving problems requires creativity, not rigidity. Of course, being able to solve problems is a valuable skill unto itself, but it’s particularly vital to learn and get comfortable with problem solving while completing projects. The reason it’s so vital is that the more you get stuck on a problem, the higher the probability you’ll never actually complete that project. However, this creativity and imagination can be a double edged sword, which leads me to my next point Eyes on the Prize You start working on one step, then you think, “oh but wouldn’t it be cool if I enhanced this feature in all these ways too?” Then you spend even more time on this side quest, which then leads to another side quest, and then you realize you’ve spent a lot of time on things that you might have to completely scrap depending on the rest of the project. Or maybe, it’s not even that, but it’s, “well I got this thing to work this one way, but I’m sure I can find a better way to do it.” Again, you spend a lot of time obsessing over one thing, then realize later on you might have to completely rework stuff and scrap most of your work. 
If you end up doing this a lot, you’re going to get demoralized and that project is just going to become another abandoned repo. Additionally, it’s just not a good use of your time. This is why it’s important to keep your eyes on the prize. The first thing you want is something that works. Then you can make it something that looks good and is efficient. As much as it’s good to be a tenacious problem solver, it’s also good to be the person who knows when to move on and come back later. Even if something isn’t perfect, it’s ok! Chances are, by the time you come back to it, you’ll have enough experience to fix it or might not even need that specific feature or function. Remember, a good project is a done project. All of this said, it can be hard to stay motivated to solve these problems and move forward. That’s why I would argue this last point can be the most valuable Collaborate It can be hard to stay motivated to follow through. When you get stuck in the weeds with ten different bugs, it’s easy to just give up. By now, you’ve probably come up with a better project idea that’s all shiny and new. When you’re working alone, it’s incredibly easy to abandon ship. However, it’s much more difficult to do so when you team up with another person. While your collaborator (or collaborators!) can certainly be in the same field or have similar skills, I would argue it’s better to work with someone with completely different skills. In my case, my partner is a software engineer. While I knew how to put together the code that takes in a pattern and spits out a search url, I didn’t know how to build a website where someone could actually use that. She, on the other hand, knows how to build web apps, and make them so anyone can use them. In this way, we already have a clear division of labor, since it’s just divided by our complementary skill sets. Also working with someone provides you with accountability. It’s harder to abandon a project because you have to explain to someone, or multiple people, why you’ve decided you don’t want to do this project anymore. Even if you have the most understanding, chillest project collaborator, taking that step can be a huge deterrent from actually quitting. These were three major factors that helped me take a project idea of creating a program to generate similar knitting patterns and turn it into a final product, which you can look at down below! I’m sure there are plenty of other tools people have used, and there’s probably plenty I’ll learn too.
https://towardsdatascience.com/following-through-7a7bb6ec021f
['Kate Christensen']
2020-06-08 14:34:19.046000+00:00
['Data Science', 'Development', 'Technology', 'Computer Science', 'Software Development']
Are you REALLY automating your infrastructure?
The project at hand was subjected to both cases above — the AWS platform was a big shift for the application and required rethinking its architecture. Rapid prototyping was necessary to make quick progress, but experimenting with various architectures left a big mess behind. Also, the team was relatively new to infrastructure automation and unintentionally leaned towards manual changes, which sometimes slipped our minds before they made it into the scripts. To address these problems, the team adopted an approach that, at first, felt overly radical — we would periodically wipe our AWS account completely clean. It is important to make a distinction between this and performing a terraform destroy, which only removes the components explicitly managed by Terraform. That would miss the exact point of this operation — identifying the areas which were not yet automatically provisioned. Our nuke of choice was an open-source, well-maintained tool named aws-nuke (a sketch of a typical configuration and invocation follows at the end of this section). The first time we did this, our hands were shaking as we confirmed this very destructive operation, which the tool requires you to do twice as a safety measure. The result was disastrous — an attempt to reprovision the application with our automation scripts failed with multiple incomprehensible errors. Recovery took the better part of a week and team morale was low — it felt like we had just shot ourselves in the foot and taken a huge step back. However, what was not immediately clear was that in the process of rebuilding what we had just destroyed, we would benefit in two ways. First, we discovered all the tweaks we had made and hadn't automated — some of them we had completely forgotten about, others were simply not documented. The nuke operation identified them so that we could make an explicit decision about automating these changes or documenting them. The second lesson came after we performed the nuke again — this time with a bit more confidence in our automation. Applying the manual changes, now well documented, for the second time and later the third and fourth, we experienced the pain of manual configuration firsthand and, following the principle of frequency reducing difficulty and the old CI/CD mantra — if it hurts, do it more often — we were strongly incentivized to double our effort to automate them. If it hurts, do it more often and bring the pain forward. — Jez Humble, Continuous Delivery Over time, we performed the nuke multiple times, and eventually it ceased to stress the team. We grew an appreciation for it as another tool in our toolchain that helped sharpen our saw. It increased the quality of our automation and solidified our confidence in the process, in very much the same way frequent production deployments do in the traditional CI/CD approach. We eventually incorporated the nuking into our regular stakeholder showcases and, finally, those became pretty uneventful and boring. The good kind of boring 😉 Happy nuking, everyone!
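The post doesn't show the team's actual setup, so the following is only a minimal sketch of what an aws-nuke configuration and run typically look like, based on the open-source tool's documented conventions. The account IDs, profile name, and filter entries are placeholders, and in older releases the blocklist key is spelled account-blacklist.

# nuke-config.yml (placeholder values throughout)
regions:
  - global
  - us-east-1

account-blocklist:
  - "999999999999"          # a production account that must never be nuked

accounts:
  "111111111111":           # the sandbox account to wipe clean
    filters:
      IAMUser:
        - "ci-bootstrap"    # hypothetical resources to keep across nukes

# Dry run first: lists what would be deleted without touching anything.
aws-nuke -c nuke-config.yml --profile sandbox

# The real thing: the tool makes you type the account alias to confirm, twice.
aws-nuke -c nuke-config.yml --profile sandbox --no-dry-run

The tool's refusal to run without at least one blocklisted account, and the double confirmation mentioned above, are the kind of guard rails that make this workflow survivable.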
https://medium.com/slalom-build/are-you-really-automating-your-infrastructure-7390356e6792
['Dan Siwiec']
2019-09-25 16:02:42.492000+00:00
['AWS', 'Terraform', 'DevOps', 'Infrastructure As Code', 'Continuous Integration']
Blockchain Music Platforms: A New Paradigm
Blockchain Music Platforms: A New Paradigm Facilitating the self-reliance of recording artists and incentivizing listeners using blockchain technology and smart contracts. Until Infinity Decentralized platforms give more power to the individual. A music ecosystem without intermediaries and authorities allows artists to get paid immediately for their streams and merchandise sales, share a financially beneficial relationship with their fans, network directly with other music industry participants, and simplify digital-rights management. A common feature among the majority of blockchain music platforms is micropayments, allowing creators to get paid immediately. Each platform may use the Ether cryptocurrency, an ERC20 token, or another utility token that is paid and divided among rights holders after their song is streamed or merchandise is sold. This process is made possible by smart contracts (a computer protocol intended to digitally facilitate, verify, or enforce the negotiation or performance of a legally binding contract without a third party or manual implementation); a toy illustration of this kind of split appears at the end of this section. My Experience As a music producer and host of the music and crypto podcast, Crypto Until Infinity, on The Bitcoin Podcast Network, I premiered the podcast by speaking on my experiences using a few blockchain music platforms. I was impressed with the simplicity of setting up a profile, releasing music, and getting paid up to 11 cents per stream on one platform in particular this past winter due to Bitcoin's explosive march up to $19,000. Payment rates from Spotify and Apple Music paled in comparison. I took advantage of what these platforms had to offer. Each user can customize their own experience. Artists can create a business model that specifically suits their needs. What's In It For The Artist? Artists have the power to navigate their career with few or no intermediaries. While operating on the blockchain, they can choose to pursue a music career without record labels, banks, lawyers, etc. Using blockchain technology, artists also benefit from reliable record-keeping. Data and transactions are validated and confirmed by multiple sources across the network, timestamped, and secured using cryptography. The blockchain is immutable. Artists receive their revenue in real time. The transactions from fan to artist occur without any middlemen. What's In It For The Listener? Paid subscriptions to streaming services, obtrusive advertisements, and no direct way of financially supporting their preferred artists: that is the common experience for listeners on traditional platforms. Blockchain music platforms have made the listening experience free. Some have even allowed the listener to be compensated for sharing music from their platform. There are no ads to disrupt the listening experience. The listener can transition from one song to another without ads to offset the mood. On various platforms, listeners can receive shares of artists' future revenue in exchange for crowdfunding the artists. Some Things Platforms Are Doing Musicoin Musicoin has an embeddable music player. An artist's revenue can accrue from streams on websites and social media platforms. Creators have an advantage in accumulating more Musicoins than on the native musicoin.org site if their content is placed on sites with higher social engagement. This also raises awareness of the Musicoin platform and its content. Musicoins can be traded on cryptocurrency exchanges, stored in wallets, or converted to fiat currency.
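The royalty split those smart contracts automate is, at its core, simple arithmetic. As a toy illustration only, not code from any of the platforms mentioned, here is the kind of calculation a per-stream contract encodes, sketched in plain JavaScript with made-up names and percentages.

// Toy model of a per-stream micropayment split. Shares are percentages that
// must total 100, mirroring how a rights split might be configured on-chain.
const rightsHolders = [
  { name: 'artist',   share: 70 },
  { name: 'producer', share: 20 },
  { name: 'curator',  share: 10 }, // e.g. a playlist curator's cut
];

function splitStreamPayment(amountInTokens, holders) {
  const totalShare = holders.reduce((sum, h) => sum + h.share, 0);
  if (totalShare !== 100) throw new Error('shares must total 100%');
  return holders.map((h) => ({
    name: h.name,
    payout: (amountInTokens * h.share) / 100,
  }));
}

// One stream paying out 0.11 tokens (borrowing the "11 cents per stream" figure
// above purely as a convenient number):
console.log(splitStreamPayment(0.11, rightsHolders));
// roughly: artist 0.077, producer 0.022, curator 0.011 (expect floating-point noise)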
Choon Choon is a blockchain music platform that provides incentives for curators on their platform to organize monetized playlists. Curators receive a designated percentage of the stream revenue that artists set in their smart contract for each song. Creators throughout the platform will likely consider assigning royalty percentages to curators, allowing their content to be more attractive for monetized playlists, ultimately boosting the marketing of the content. Rights holders get paid in NOTES. Sometime after the Choon token sale, NOTES will be traded on cryptocurrency exchanges, stored in wallets, or converted to fiat currency. imusify The imusify platform will have a marketplace of music industry participants. Participants can list their products and services pertaining to the creation, advertising, and business development of the creative content. Artists can network and initiate transactions with engineers, producers, videographers, graphic designers, and others directly on the platform. imusify will use an Initial Token Offering infrastructure (similar to ICOs) for artist specific crowdfunding and other token transactions across their platform. imusify will provide their own utility token using the NEO blockchain, IMU, after the token sale. IMU will be traded on cryptocurrency exchanges, stored in wallets, or converted to fiat currency. What’s Going On Now As blockchain music platforms are improving their services, we’re seeing ways they’re decentralizing the online ticket industry, online booking, and more. In October 2018, San Francisco will host the first blockchain-powered music festival, the OMF Festival, headlined by popular EDM producer, Zedd. will. i. am is an adviser for Cre8tor.App, a platform that allows anyone to record vocals, write songs, lease beats and create music videos. Imogen Heap, RAC, and Giraffage have collaborated with the platform, Ujo. The blockchain music space has been a standout in the blockchain sector at large by having working products amid a slough. As blockchain music platforms achieve more popularity, they will continue to disrupt the entire traditional music industry.
https://medium.com/the-bitcoin-podcast-blog/blockchain-music-platforms-a-new-paradigm-d1cacc19cce1
[]
2018-06-30 14:02:33.571000+00:00
['Hip Hop', 'Blockchain', 'Cryptocurrency', 'Podcast', 'Music']
How to Build a Successful Team in a Culture Obsessed With Front Men
What About Different Personalities, Cultures, and Backgrounds? Surely, not everybody is suited to work with everybody, right? When depicting a successful team, we usually assign roles to people. We might think that we need one leader and the others should follow. Then we need one smart person, one brave person… the team members have to share strong team motivation and have a preference for the same type of action. But studies do not support this view. In 2010, remarkable research was published by a group of psychologists from Carnegie Mellon, M.I.T., and Union College. They set out to find factors for a team's success rate — a team's "collective intelligence," or collective IQ, so to speak. The researchers recruited 699 people and organized them into groups of 2–5. Then they gave them a series of assignments with an emphasis on collaboration. Interestingly, the groups which did well on any one of the assignments tended to do well on all assignments. Even more interestingly, many of the factors we would generally associate with a successful team, like team satisfaction, team cohesion, team motivation, or the individual intelligence of the team's members, were not correlated with the group's collective intelligence. "When the same task was done by groups, however, the average individual intelligence of the group members was not a significant predictor of group performance." The study's results identified three factors that constitute group intelligence: how well the group's members react to each other's social cues; whether each team member gets an equal time slot to speak; and how many women there are on the team (probably because studies have shown women to be better at reading social cues). "First, there was a significant correlation between [group intelligence] and the average social sensitivity of group members (r = 0.26, P = 0.002). Second, [group intelligence] was negatively correlated with the variance in the number of speaking turns by group members (r = -0.41, P = 0.01). In other words, groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking. Finally, [group intelligence] was positively and significantly correlated with the proportion of females in the group (r = 0.23, P = 0.007)." This same research was later corroborated by Google when they conducted a statistical analysis of their own teams. In 2012, Google started Project Aristotle, with which they were trying to figure out what made some teams successful and others not. After collecting and analyzing a year's worth of data about more than 100 teams, they couldn't find any real pattern — until they started looking at the team dynamic through the lens of psychological safety. One Google engineer from a successful team described his team lead as "direct and straightforward, which creates a safe space for you to take risks." Another Google engineer, from a less successful team, said his team lead had "poor emotional control." "He panics over small issues and keeps trying to grab control. I would hate to be driving with him being in the passenger seat, because he would keep trying to grab the steering wheel and might crash the car." By looking at all the answers again through the lens of social dynamics, Google found that psychological safety was by far the most important criterion differentiating effective and ineffective teams.
In the end, a team’s effectiveness was not correlated to how senior the people on the team are, how intro- or extraverted, how educated, or which programming skills they possess. What matters is how the team members care for each other. Some teams in an organization will be successful and others will not. The latter are governed by the team members with poor social skills. To improve a team’s dynamic, better social skills need to be developed, and people have to start giving social interactions more value. This, however, is not something that can be achieved by sending them to after-work drinks or to the yearly Christmas party. The truth is, we see work as the professional realm — a place away from the broader society, where we only show about 10% of our actual self. Which is ludicrous. Given that on average we sleep 8 hours, work 8 hours, and relax for 8 hours, we spend about 50% of our waking hours at work. Are we really prepared to pretend we are robots for half of our life? A machine with no desires, no wishes, and no thoughts outside of the company’s goals? As a society, we might not yet be emotionally mature enough to be able to handle emotions at work — our own, and those of the others. But just because we might refuse to handle them doesn’t mean we don’t all come to work with broad emotional needs. The relationships we form at work affect our motivations, our health, our self-esteem. We start our days with work, and then the stress or satisfaction we experienced at work affect the way we treat our family and friends later in the day. I always find that knowledge is power. Knowing what affects and motivates you, and also what affects and motivates others around you is like a power-up. Suddenly you have access to this whole other dimension. Suddenly you see how things are connected, you see that the information was always there, and things were always clear and logical; it was just you who decided to keep your eyes closed.
https://medium.com/better-programming/in-search-of-a-successful-team-in-a-culture-obsessed-with-front-men-3b0a963b2ec3
['Ines Panker']
2020-01-02 17:53:07.053000+00:00
['Team Management', 'Software Development', 'Startup', 'Teamwork', 'Programming']
93 Favorite Albums Of 2020
Well, it seems this number keeps jumping higher and higher. There was a clearer method of listening this year, with more time at home to set a practice around all things, listening was no different. On Friday mornings, I would set aside a couple of hours for discovery — something I’d neglected to do consistently last year with my travel schedule. I’d forgotten about the excitement of it. I’d scour Bandcamp, search for blogs, text my pals who listen even more adventurously than I do and ask them to send me things. I’d compile a list (by hand, foolishly) of any new release that got my interest. Ones I was expecting, ones from artists I knew well. But also ones from artists I was barely familiar with. I’d go into the weekend with anywhere from 10–30 new albums to spin through while cleaning my house or doing laundry or playing a video game or (if you’re like me and pretty indifferent to most NBA announcers) watching a basketball game. What this meant is that usually by Sunday afternoon, I was coming out of the weekend with 2–4 albums that were my favorite among the bunch. Ones that stayed atop the new music rotation for a week until the process started all over again the next Friday. I still listened to more old, comfortable, familiar tunes this year. Still spun more classic records from my preexisting collection and (especially with 68to05 as a project that consumed me) still immersed myself in the music of my youth, the music from before I was born. But streamlining this new release listening process meant that I was pulling more new music from more corners. And so, while this number is bigger this year, in relation to the albums I listened to, it’s actually a good, healthy sliver. I am someone who doesn’t require music to be revelatory in all ways, or in any ways. It shows me what it can when it can, and I’m thankful for that. I was especially thankful for that this year, when I had a lot of questions that no sounds could answer on their own. A few big big notes: I generally don’t put EPs on here, but a few that I felt great about and want to uplift: Chika — Industry Games Natalie Gardiner — 6 Tomberlin — Projections Soul Glo — Songs To Yeet At The Sun And, as usual, disclaimers: I’m not saying these are the best albums of the year, just my personal favorites. Ones that I loved the most in a kind of vague (but also thought-through) order. I listened to well over 550 new albums this year (I stopped doing the math and because I was writing new albums by hand on paper, I have no logical tracking system but hope to change that next year!) — if I had to guess, I’d say around 800 is the final tally. Among those albums, there were undoubtedly many not on this list that I liked a lot about. What I love sharing more than the albums is sharing the writing that I loved on these albums and artists this year, so I hope you’ll spend time with that as well. Ultimately, I’m hopeful that you found something/anything good to help you get through the year, be it one song, or 93 albums. 93. Mungbean — I Love You Say It Back 92. Open Mike Eagle — Anime, Trauma, and Divorce Read: “Open Mike Eagle Turned A Very Bad Year Into A Very Honest Album” 91. Porridge Radio — Every Bad I really enjoyed this NME piece from March, which I read just as I was going into lockdown. 90. Brigid Dawson and the Mothers Network — Ballet of Apes 89. Jerreau— Keep Everything Yourself 88. 
Boldy James & Alchemist — The Price Of Tea In China Read: “Boldy James Is Having A Better 2020 Than You Are” — I really appreciate any chance to read Alphonse Pierre. 87. Rina Sawayama — Sawayama I did not feel (entirely) the same about the album as this reviewer did, but I really enjoyed reading this thoughtful review back in the spring. 86. Drakeo the Ruler x JoogSZN —Thank You For Using GTL Always gotta read Jeff Weiss on Drakeo. 85. Atramentus — Stygian 84. Soccer Mommy — Color Theory Loved this piece, and was good to return to it (since the albums that dropped before March feel especially far away to me.) 83. Quelle Chris & Chris Keys — Innocent Country 2 82. Khruangbin — Mordechai A good piece here. 81. Loathe — I Let It In And It Took Everything Really really big on Kadeem France, and appreciated his writing here. 80. Gillian Welch & David Rawlings — All The Good Times 79. Ka — Descendants of Cain Read: “Ka & the Power of Mythology in Rap” by Dylan Green 78. Katie Pruitt — Expectations Always excited to read/listen to Jewly Hight. 77. Pa Salieu — Send Them To Coventry Read: “Pa Salieu is Coventry’s Hopeful Rap Star” by Colin Gannon 76. Matt Berninger —Serpentine Prison 75. Half Waif — Caretaker Love The Creative Independent and Nandi Rose had a great interview there this year. 74. Thana Iyer— Kind 73. Junglepussy — JP4 Good piece here. 72. Shamir — Shamir Read: “The Second Coming of Shamir Bailey” by Safy-Hallan Farah 71. Adrianne Lenker — songs Read: “Adrianne Lenker’s Radical Honesty” by Amanda Petrusich 70. Pallbearer — Forgotten Days 69. Benny The Butcher — Burden of Proof I really enjoyed Benny’s home Tiny Desk thing. 68. The Muslims —Gentrifried Chicken 67. Capolow & Kamaiyah — Oakland Nights Kamaiyah had a strong year. Here’s a good read. 66. Mourn— Self Worth 65. Blaque Dynamite —Time Out 64. Cartalk— Pass Like Pollen 63. Locate S,1 — Personalia Really dug this Talkhouse in-conversation that Christina Schneider did with Kevin Barnes. 62. Akai Solo — Eleventh Wind Read: “Akai Solo Is One Of Rap’s Brightest Talents” by Phillip Mlynar 61. The Chicks — Gaslighter 60. Megan Thee Stallion — Good News There was so much good writing on Megan this year by black women, and I appreciated it all. Taylor Crumpton is someone who I turned to for exceptional writing on Megan twice this year: This piece, and this piece. 59. OHMME —Fantasize Your Ghost Read: “A Room Of Ohmme’s Own” by Jessi Roti 58. Killah Priest — Rocket To Nebula 57. Ganser— Just Look At That Sky 56. Ego Ella May — Honey For Wounds A good lil feature here. 55. Code Orange —Underneath 54. Flo Milli — Ho, Why Is You Here? Read: “Flo Milli Came To Flex” by Melinda Fakuade 53. Dragged Under— The World Is In Your Way 52. Wednesday — I Was Trying To Describe You To Someone 51. Charli XCX— how i’m feeling now It is interesting looking back because it feels like this was one of the first big albums that had an entire press cycle dedicated to it being A Quarantine Album and the tone of so many of those pieces felt like people imagining our predicament would be far more temporary than it has been (this is no fault of theirs, really, it was early in this long year.) — that said, a piece I really dug was this one by Olivia Horn. 50. Palm Reader— Sleepless 49. Moe Dirdee & Dert Beats — Moe Dert 48. 
Gunna — Wunna In September, someone tweeted a video of a person kind of horrifically crashing their car while listening to this album and Gunna quote tweeted it with “Caution: WUNNA ALBUM DANGEROUS” and the prayer hands emoji and that was truly something. 47. Veda Black — Sad Girls Club I enjoyed this thoughtful interview about heartbreak and The Sims! 46. Spanish Love Songs — Brave Faces Everyone Good interview here. 45. Empress Of — I’m Your Empress Of 44. Armand Hammer— Shrines 43. illuminati hotties— FREE I.H.: This Is not The One You’ve Been Waiting For 42. Demae —Life Works Out…Usually This piece. 41. Jake Blount — Spider Tales Kaia Kater (also immensely gifted) talks about/with Blount here. 40. Mozzy — Beyond Bulletproof This Cherise Johnson convo with Mozzy is great. 39. Adrian Younge & Ali Shaheed Muhammad — Jazz Is Dead 001 38. Yaya Bey— Madison Tapes 37. Freddie Gibbs & Alchemist — Alfredo 36. U.S. Girls — Heavy Light Enjoyed this piece by Jesse Locke. 35. Huntsmen— Mandala of Fear 34. Chief State— Tough Love 33. Babeheaven —Home For Now 32. City Girls — City On Lock Read: “City Girls Are Finally Free” by Brittany Spanos 31. Jyoti— Mama, You Can Bet! I read this just last week and really enjoyed it. 30. Diet Cig — Do You Wonder About Me? This lil chat on American Songwriter was cool. 29. Candace— Ideal Corners I will briefly say that this album was such a perfect album for my summer inside. It sonically built an outside world without putting me in a position of heartbreaking longing, which is the most I could have asked for. 28. Pink Siifu & Fly Anakin — FlySiifu’s Read: “The Necessity Of Pink Siifu’s Rage” by Marcus J. Moore 27. Chloe x Halle — Ungodly Hour I liked this piece where the two interviewed each other! 26. X— Alphabetland 25. Jessie Ware — What’s Your Pleasure? Good Interview Mag thing here. 24. Yaeji — What We Drew 23. Bartees Strange— Live Forever Good interview here. 22. Sa-Roc — The Sharecropper’s Daughter Another good Tiny Desk (Home) that I loved. 21. Anjimile — Giver Taker Great interview here. Also look at this tweet. 20. Sault — Untitled (Black Is) 19. Nubya Garcia— SOURCE Enjoyed this interview. 18. Moses Sumney — Grae Read: “Moses Sumney’s World Of Possibilities” by Hua Hsu 17. Lianne La Havas — Lianne La Havas Though the album was not my first time hearing it, I do wish I could bottle and store the energy that explodes at the end of that “Weird Fishes” cover. Also, I liked this piece by Tracey Onyenacho. 16. Victoria Monét— Jaguar 15. Fiona Apple — Fetch The Bolt Cutters I know there was a lot written about Fiona this year, but this piece about Shameika Stepney was the one that fascinated me the most. 14. Marcus King — El Dorado 13. IDLES — Ultra Mono Was hard to pick just one thing to read because I am just such a fan of this band, but this wins out. 12. Nova Twins — Who Are The Girls? Good piece here. 11. Yazmin Lacey — Morning Matters 10. Ariana Grande — Positions 9. Phoebe Bridgers— Punisher I made this. 8. bbymutha—Muthaland If this is indeed it for bbymutha, what a bow to put on an already outstanding run. Also, enjoyed this profile by Cameron Cook. 7. KeiyaA — Forever, Ya Girl Read: “KeiyaA’s Divine Soul” by Vrinda Jagota 6. Nothing— The Great Dismal 5. 
Girlhood — Girlhood Every year, there is one album that I feel like I scream about from the top of my lungs, running from place to place shaking a bell to get the townspeople out of their homes to hear the good news, or sneaking on playlists for friends, or playing in my car with the windows down while parked near busy intersections, etc etc. this was that album for 2020. 4. Radiant Children— There’s Only Being Yourself Good piece here. Also, I don’t think any album I loved this year had a better closing song than this album, except for maybe…. 3. Emma Ruth Rundle & Thou — May Our Chambers Be Full …this album. I’ve always been compelled by Emma Ruth Rundle and Thou individually, and even more when they’re together. (I have often said that everything ERR sings sounds like a threat or a warning.) Not a piece of writing and not even connected to this album, but this ERR Highway Holidays performance is so stunning — not just because of the band she’s playing with, but also the sound engineering is…perfect? 2. BLACKSTARKIDS — Whatever, Man I don’t like assigning The Future as anyone’s responsibility, but I am hopeful that we get this group around for a good, long time. Check out this piece. Special Interest — The Passion Of This is a good interview and also this Zine is really great and was good to keep up with this year.
https://nifmuhammad.medium.com/93-favorite-albums-of-2020-9b335ce3621c
['Hanif Abdurraqib']
2020-12-30 18:00:26.787000+00:00
['Albums Of The Year', 'Music']
How to Receive Water Temperature Data in Real-Time Using Raspberry Pi and Python
VII. Access your Raspberry Pi using Terminal Now that we have our Raspberry Pi IP address, we can access it. Go to your terminal and paste the following command: ssh pi@<your Raspberry Pi IP address> Terminal will ask you for the password. The default password for Raspberry Pi is "raspberry". When you see "pi@raspberrypi" in green, use the following command to configure: sudo raspi-config Let's enable some items. When the configuration screen pops up, go to "Interface Options". Click "P2 SSH" and enable SSH. It is always nice to see the desktop of our Raspberry Pi — let's also enable VNC. With VNC Viewer, we will be able to remotely access and control our Raspberry Pi. (The commands are collected in a short sketch below.)
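For convenience, here are the same steps as shell commands. The IP address is a placeholder for your own Pi's address, and the raspi-config nonint lines are an optional scripted alternative to the interactive menu; they work on recent Raspberry Pi OS releases, where 0 means enable.

# SSH in with the default 'pi' user (default password: raspberry).
ssh [email protected]

# Interactive configuration menu: Interface Options > SSH / VNC.
sudo raspi-config

# Scripted alternative, handy if you want to automate the setup:
sudo raspi-config nonint do_ssh 0   # enable SSH
sudo raspi-config nonint do_vnc 0   # enable VNC

After enabling VNC, connecting with VNC Viewer from your computer gives you the Pi's desktop, as described above.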
https://medium.com/analytics-vidhya/how-to-receive-water-temperature-data-in-real-time-using-raspberry-pi-and-python-f185ac30d010
['Deborah Kewon']
2020-12-03 16:37:50.815000+00:00
['Python', 'Data Analysis', 'Raspberry Pi', 'Data Science', 'Programming']