Dataset fields: title (string, 1–200 characters), text (string, 10–100k characters), url (string, 32–885 characters), authors (string, 2–392 characters), timestamp (string, 19–32 characters), tags (string, 6–263 characters).
Science Might Have Identified the Optimal Human Diet
Science Might Have Identified the Optimal Human Diet A combination of several popular approaches could yield the best long-term health benefits Americans are notoriously unhealthy eaters. The so-called Western diet—one that adores meat, abhors fat, and can’t get enough of processed food — has dominated menus and mealtimes for nearly half a century and has become synonymous with obesity and metabolic dysfunction. Short of swallowing actual poison, it’s hard to imagine a more ruinous approach to eating than the one practiced by many U.S. adults. If this story has a silver lining, it’s that the dreadful state of the average American’s diet has helped clarify the central role of nutrition in human health. A poor diet like the one popular in the West is strongly associated with an elevated risk for conditions of the gut, organs, joints, brain, and mind — everything from Type 2 diabetes and cancer to rheumatoid arthritis and depression. “We’ve realized that diet is arguably the most important predictor of long-term health and well-being,” says James O’Keefe, MD, a cardiologist and medical director of the Duboc Cardio Health and Wellness Center at Saint Luke’s Mid America Heart Institute. “Most of the major health problems we deal with in America are connected to the ways we eat.” If eating the wrong way can contribute to such a diverse range of illnesses, it stands to reason that eating the right way could offer people a measure of protection from most ailments. But what’s the right way? That question lies at the heart of countless studies stretching back several decades. By panning the newest and best of those studies for gold, some experts say we may be closing in on the optimal approach to eating. “Highly restrictive diets are usually not advised unless there is an underlying medical condition that warrants it.” An ‘ideal’ diet? In September, O’Keefe and colleagues published a paper in the Journal of the American College of Cardiology that sought to identify the “ideal” diet for human cardiovascular health. Based on the most comprehensive research to date, his paper makes the case that a pesco-Mediterranean approach paired with elements of intermittent fasting is a strong contender for the healthiest diet science has yet identified. The diet is essentially a modified Mediterranean plan, which makes sense; O’Keefe and his co-authors highlight research that has found consistent associations between a Mediterranean diet and lower risk for death, coronary heart disease, metabolic syndrome, diabetes, cognitive decline, depression, cancer, and neurodegenerative diseases, including Alzheimer’s disease. Plant-based foods — vegetables, fruit, legumes, nuts, seeds, and whole grains — form the foundation of the diet. Fatty fish and other types of seafood, along with “unrestricted” helpings of extra-virgin olive oil, round out the plan’s major components. Modest helpings of dairy products, poultry, and eggs are allowed, while red meat should be eaten sparingly or avoided. Low or moderate amounts of alcohol — preferably red wine — are acceptable, but water, coffee, and tea are preferred. The diet isn’t overly prescriptive when it comes to portion sizes or calorie counts. But it does advocate for a form of intermittent fasting known as time-restricted eating, which calls for all the day’s calories to be consumed within an eight-to-12-hour window. This is a practice that multiple studies have linked to lower food intakes and beneficial metabolic adaptations. 
“Time-restricted eating is a great way to reduce total calories and also get inflammation and hormones back into healthy ranges,” O’Keefe says. “Virtually everybody’s health and well-being will improve if they follow a good diet,” he adds. “This diet seems to have the most cumulative scientific evidence supporting it.” “Choose natural foods, mostly plant-based and fish. Choose whole grains rather than refined, and avoid processed foods, especially salty snacks, processed meats, and sugary beverages.” The pitfalls of restrictive diets For those who follow a low-fat diet, a ketogenic diet, or any other diet that rigidly defines what a person can or can’t eat, the approaches highlighted in O’Keefe’s paper may seem unhelpfully general or far too agnostic toward macronutrients. But he and other nutrition researchers say that fewer restrictions are a feature, not a bug, of most healthy diets. “Highly restrictive diets are usually not advised unless there is an underlying medical condition that warrants it,” says Josiemer Mattei, PhD, MPH, an associate professor of nutrition at the Harvard T.H. Chan School of Public Health. For example, someone who has a metabolic or gut disorder may need to avoid certain foods. But for most people, diets that eliminate whole macronutrient categories or food groups present more risk than reward. Mattei says that restrictive diets also tend to be unsustainable in the long run and in some cases can lead to disordered eating. Another problem with highly exclusionary diets: What works well for one person may not work for another. “There are many individual factors that could cause differential responses to the same diet,” says Regan Bailey, PhD, MPH, a professor in the Department of Nutrition Science at Purdue University. These include person-to-person genetic variation, age, baseline nutritional status, inflammation levels, and microbiome makeup — to name just a few. Finally, and maybe most important, experts say that restrictive diets — even ones that provide short-term benefits — may lead to trouble down the road. For example, a certain approach to eating may induce weight loss in the near term, but it could also contribute to the development of a disease or disorder 20 or 30 years later. It’s these sorts of unanticipated or unforeseeable consequences that lead most nutrition experts today to recommend looser, more inclusive approaches to eating. “This is what I tell my friends and relatives,” Mattei says. “Choose natural foods, mostly plant-based and fish. Choose whole grains rather than refined, and avoid processed foods, especially salty snacks, processed meats, and sugary beverages.” Do that, and you end up with a diet that looks a lot like the pesco-Mediterranean plan backed by O’Keefe’s new paper. Even if you can’t quite manage to break up with red meat, or for ethical or environmental reasons you adhere to a vegan diet, O’Keefe agrees that eliminating processed meats, refined grains, and sugary foods is one of the most crucial elements of healthy eating. “The biggest problem in the American diet is all the sugar and white flour and processed or fried foods we eat,” he says. “They’re toxic and just highly addictive.” If you can avoid those foods, or at least restrict them to special occasions, you can feel confident that you’re doing your health a huge favor without exposing yourself to the added exertion or risk that may come packaged with trendier, more extreme approaches to eating. 
“If you’re going to put in all this time and effort and willpower, you want to make sure it’s for a diet that’s really going to be a good bet in the long term,” O’Keefe says. Based on the best nutrition research to date, he says that a diet loaded with plant-based foods, fish, and extra-virgin olive oil — coupled with time-restricted eating — seems like the plan with the most upside.
https://elemental.medium.com/science-might-have-identified-the-optimal-human-diet-ec618b2fb8f2
['Markham Heid']
2020-10-01 13:23:06.695000+00:00
['Nutrition', 'Diet', 'Health', 'The Nuance', 'Science']
First Steps — Deep Learning using Python and Keras
For quite some time now you have been reading Machine Learning articles and concepts here on X8 and hopefully all over the internet. You have read introductions to various algorithms, from Linear Regression to Neural Networks to Decision Trees and Random Forests. We saw how Neural Networks, specifically CNNs, work and why they are good at image classification tasks. It will be fun to implement a small CNN and see things in action! But before actually going ahead and implementing a neural network, we will have to equip our computer with the right set of tools so that we can implement a neural network quickly. There are various tools and languages which can help us do that, but this article and the next will just talk about a Python Anaconda environment, Jupyter notebooks, and the relevant Python packages. We will be using a machine learning framework called Keras, which runs on top of the TensorFlow library, for our mini-implementation. We will not talk about GPU acceleration in this article, as it is targeted at a first-timer foraying into Deep Learning. All the setup in this article assumes a Windows 10 system, but you can easily find the setup steps for these tools on other systems as well. First, let us look at what those fancy reptilious words in the above paragraph are about. Python Python is one of the most used programming languages for machine learning and data science. It is a high-level programming language which supports multiple programming paradigms and has a large and comprehensive standard library and an awesome community. What this means is that you can easily find small pieces of code to accomplish a lot of tasks, and you just have to take care of the high-level functionality of your program. You also get a lot of support from forums and the online community if you run into any issues or are looking for optimal implementations of various tasks. Anaconda Python equips us with a lot of functionality through its libraries, a.k.a. packages. Those packages are maintained by the Python community and are updated regularly, which means there are multiple versions of every package and a huge number of packages. Keeping track of all those packages is a big pain, and sometimes certain versions of packages are incompatible with each other. Moreover, you might want to use a Python package A at version 1.0 alongside another package B at version 2.3 for one project, while for another project you might want version 1.2 of package A and 2.5 of package B. Anaconda takes care of this for us: you can create a virtual environment in Anaconda with specific packages and versions. Anaconda greatly simplifies package management and deployment. Jupyter Notebook Random Trivia: The “Great Red Spot” on Jupiter is 1.3 times the diameter of the Earth. Jupyter Notebook is an incredible tool for working with the Julia, Python, and R languages, and it is the go-to code-writing tool for data scientists. Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more. I would highly recommend getting familiar with Jupyter Notebook through a basic tutorial. Keras Keras is a Deep Learning library written in Python.
It can utilize the TensorFlow library and makes life so much easier when it comes to fast experimentation with Neural Networks. It provides the building blocks of a Neural Network so you don’t have to worry about the lower-level implementations. It is very modular and user friendly: you just define your Neural Network architecture, connect layers with each other and voila, much like a Lego kit, you can assemble your neural network! About TensorFlow TensorFlow is an open-source software library for high-performance numerical computation. Originally developed by researchers and engineers from the Google Brain team within Google’s AI organization, it comes with strong support for machine learning and deep learning, and its flexible numerical computation core is used across many other scientific domains. TensorFlow simplifies a lot of tasks for us while implementing various machine learning algorithms. Imagine writing code for backpropagation with all the derivatives and matrix multiplications. Using TensorFlow it is just a few lines of code, and the lower-level details are already implemented for you. Let’s see how you can turn your boring Windows 10 machine into your own deep learning laboratory. TL;DR — Installing Anaconda We will start by installing Anaconda first. The other packages will be installed in an Anaconda virtual environment. Head to the Anaconda download site and hit download on the Python 3.x version. You can also download the 2.x version if you wish, but it is recommended to get the 3.x version; the implementations we will do later in this guide will use version 3.7. Click next for the first few screens. Accept the license agreement, and on the select installation type screen select “Just me.” You can select the other option as well, but that will require admin privileges on your system. When you reach the installation options screen, select “Add Anaconda to my PATH environment variable.” You will need admin rights on your system for this option. You can use the Anaconda prompt after installation if you don’t select this option during installation. The PATH variable just lets your computer know where Anaconda is installed, so that when you run the command to launch Anaconda from the cmd terminal it immediately picks up the executable from the PATH variable. Click Install. You can also choose to install Visual Studio Code, which is an Integrated Development Environment (IDE). It is not necessary, but it is a very nice coding environment which provides a lot of functionality to ease your development process. You can also use PyCharm, which is a Python IDE. While the Anaconda installation is in progress, check your sitting posture, get up from your chair, and stretch a bit! Open Anaconda Navigator after the installation. As mentioned earlier, Anaconda can create different virtual environments depending on your project requirements. We will just create a basic environment for our first CNN implementation. Creating an Anaconda Virtual Environment Anaconda provides a GUI navigator to help you create an environment. You can also create a new environment from the Windows command shell using the following command. This is where the PATH variable helps; if it is not set, the command below will give you an error. conda create --name my-new-environment python=3.7 You can choose any name for your environment and the version of Python you want to use.
If you want to create the environment using Anaconda Navigator instead, follow these steps: click on Environments in the left panel, click Create, enter the name of the environment and the Python version you want to use, and wait for the process to finish. It might take a couple of minutes. After the environment creation is done, click Home on the left panel of Anaconda Navigator and click Launch on the Jupyter Notebook card. If it says Install, hit Install and wait for the Navigator to install Jupyter. I would suggest moving to the Windows command prompt, since it is faster and you won’t have to wait for the GUI to open each time. On the Windows command prompt, run the following command after creating the environment: activate my-new-environment Creating an environment is just a one-time task. After this you will only have to activate and deactivate environments. To deactivate the current environment, just type deactivate. The activate command will activate your newly created environment. After this you can run the following command from the Windows command prompt to install Jupyter — conda install jupyter After installing Jupyter Notebook we will go ahead with installing TensorFlow, Keras, and the other required libraries. Installing required packages Now that you have spent so much effort and disk space on installing Anaconda, it is time to reap its benefits. As I mentioned earlier, the power of Python lies in its ease of use and its huge number of community-maintained libraries, and Anaconda makes managing them very simple. You will be using a lot of math and matrix operations while implementing Deep Learning code. Go ahead and install Keras in your virtual environment. The command for that is — conda install keras This will install the required dependencies such as TensorFlow and NumPy. If you want to install any other package for Python, you can use this command after you have activated your environment. That is all! You are all set with your laboratory for Deep Learning projects. In the next article we will implement a small Neural Network using Keras and will build a website of our own. Since implementing a neural network and building a website is a slightly longer process, we will do that in the next article. Till then you can familiarize yourself with Jupyter notebooks, Python, and Anaconda environments, and read and watch videos about TensorFlow and Keras. You will be able to write digits on screen using your mouse and the neural network will recognize them. How cool is that!
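If you want to confirm that the environment is wired up correctly before the next article, here is a minimal sanity-check sketch, assuming the my-new-environment setup above with Keras installed via conda install keras. The tiny CNN below is purely illustrative (28x28 grayscale inputs, digit-style output), not the network we will build next time; if it prints a layer summary without errors, Keras and TensorFlow are installed correctly.

# Run inside the activated environment (python prompt or a Jupyter cell).
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

print("Keras version:", keras.__version__)

# A tiny illustrative CNN for 28x28 grayscale images (e.g. handwritten digits).
model = Sequential()
model.add(Conv2D(16, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))  # 10 output classes, one per digit

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()  # prints the layer stack; a clean run means the install works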
https://medium.com/x8-the-ai-community/first-steps-deep-learning-using-python-and-keras-3ae6bcb37d0e
['Prateek Karkare']
2019-03-01 13:53:06.097000+00:00
['Keras', 'Artificial Intelligence', 'Python', 'TensorFlow', 'Machine Learning']
Introducing Prophecy.io - Cloud Native Data Engineering
Starting in systems engineering and excited by the potential of GPUs, I moved from the Microsoft Visual Studio team to become an early engineer on the CUDA team at NVIDIA, where I played a key role in compiler optimizations for high performance on GPUs. It’s a delight to see how far CUDA has come in powering deep learning and bitcoin mining. Passionate about building a startup, I moved on to learn Product Marketing & Product Management, leading both functions for ClustrixDB through a hard pivot to a limited but repeatable product-market fit. At Hortonworks, I product-managed Apache Hive through the IPO. It was not fun to be in front of customers and see them struggle with Data Engineering. The Hortonworks team put in a massive effort to make Hive better and simpler — fewer configurations, faster performance, a real cost-based optimizer, and a simplified stack in Hive 2.0 and beyond. Taking those learnings about the technology and the market, we’ve decided to build a product-centric company with a relentless focus on customer needs and an “it just works!” experience, coupled with a sprinkle of joy. My co-founder Rohit Bakhshi brings strong product expertise and go-to-market experience from Hadoop, GraphQL (Apollo), and Kafka, having helped scale multiple Enterprise Data Engineering platform companies to unicorns. We’re delighted to raise our seed round from SignalFire, with Ilya Kirnos joining the board and Andrew Ng working closely with us. They join our existing investors: Berkeley SkyDeck (Fall 2018 cohort) and friends from the Enterprise software industry who invested angel funds. SignalFire is unique — besides knowing the technology and the market well, they get their hands dirty, running Spark in-house for Beacon, their Machine Learning-based product for locating talent.
https://medium.com/prophecy-io/introducing-prophecy-io-cloud-native-data-engineering-1b9247596030
['Raj Bains']
2019-08-15 23:25:24.202000+00:00
['Spark', 'Data Engineering', 'Etl', 'Apache Spark', 'Cloud Computing']
10 Quick Exercises to Boost Your Creativity
Adults often have a love hate relationship with creativity. Many grownups don’t feel especially “good” at being creative. As if it’s natural to lose our creativity as we age. Even those of us in creative fields like writing, music, or design frequently find ourselves hitting our heads against the wall as if creativity requires some sort of sacrifice to a mystical muse. While hopping three times with our hands behind our back. We make creativity much harder than it has to be, probably because we’re so focused on the results. But true creativity requires freedom. The freedom to experiment and make plenty of mistakes. Of course, that includes the freedom to quit worrying about looking like a fool. 1. Set a timer for 10 to 20 minutes every day. This is your creative workout. Make it short enough to be doable for you, but long enough to help you feel like you got in a good exercise. Schedule it, and then set a timer. Use the timer on your smart phone, or get a dedicated timer that’s exclusively used for your creative exercises. Bonus points if you can find a timer that inspires you. Image via Amazon Once you find your perfect timer, it’s up to you if you’re going to go with a specific creative exercise or follow your gut and freestyle the whole session. 2. Try a little Zentangle. Zentangle is an 8-step method of design that isn’t just creative but incredibly relaxing too. What I really like about Zentangle is the way it breaks down the “big picture” into simple steps. It’s an activity that engages your brain much like knitting or crochet, and when you’re finished, you’ll have something very cool to show for your time. mboyes on pixabay A few months ago, I picked up a similarly themed book called Zenspirations Dangle Designs. It's another great method for doodling and getting those creative juices flowing. 3. Go for quantity over quality. Whether you’re a writer, painter, musician, or teacher, it pays to make your creative time a prolific one. Many folks think that quality trumps quantity, but in terms of idea generation, you really need to use up as many ideas as you can to help yourself come up with something truly out of the box. The next time you decide to have a freestyle creativity session, focus on quantity first. You might be surprised to see what great ideas you end up with. 4. Write new lyrics to the melody of a favorite song. Music is such a great mood lifter, and it’s a wonderful way to put your brain on autopilot, which is something your creative self seriously needs. But what would happen if you began ad libbing your own lyrics to a song that keeps playing on the radio? Could you come up with an entirely new story? Try it out and let me know. 5. Complete a task with your non-dominant hand. There are some great brain benefits to using your non-dominant hand in everyday tasks. I’m all for anything that boosts our brain power, but the coolest thing in my opinion, is how using your other hand will tap into the brain hemisphere that’s typically underutilized. Think about it. Using the side of your brain that you never rely on means tapping into ideas you never knew you had. So, whether you draw a picture, write a note, or create a craft, think of using your non-dominant hand as a special workout for your creative side. 6. Change up your environment. Where do you usually work on your creative endeavors? Lots of people work indoors, perhaps at home, school, or the office. You can easily recharge your creativity by practicing in new environments. Try writing a poem outside. 
Take your sketchpad to the zoo or conservatory. Try working on a creative project in a very noisy place, and then a very quiet one. An enclosed space, and a large one. See if you notice a difference in how well you operate within different places. And give yourself the ability to breathe new creative air. 7. Exercise when you brainstorm. Yeah, yeah, yeah. Exercise is good for the body and mind. It’s so good for our brains, that it can even boost our creativity. I love the creative power I feel when I hit the treadmill (I just walk on a steep incline) and play a mix of my favorite songs. That’s when I come up with my best ideas and feel most inspired. Again, I’m pretty sure it’s partly the power of going on autopilot. If you’ve got ten minutes and a piece of cardio equipment that’s been collecting dust, brush it off for a quick creative workout. 8. Draw a picture or write a letter without looking at your hand or the page. This one is so strange, but it’s another good way to give your brain a boost. Most of us are used to looking at our hands and paper when we write or draw. It’s a bit startling, but try a quick picture or letter with your eyes closed. Can’t trust yourself not to look? Throw a lightweight towel over your hand and the page. See what happens when you’re not looking. 9. Try new writing prompts. You don’t have to be a writer to benefit from a writing prompt. In fact, you don’t need to consider yourself a writer to benefit from writing at all. In general, a routine writing habit offers immense benefits to every type of person. But if you aren’t a writer, or, if you’re used to writing very specific things, you might be unsure what to regularly write about. If that’s you, consider a daily writing prompt. Pick a prompt, and for 10 minutes, just write. There are no wrong answers when you do a writing prompt. Don’t worry about editing or even telling a good story. Just write. 10. Channel another artist. Here’s something I am planning to do more of. Pick a favorite artist. It might be a writer, painter, actor, comedian, or anyone else whose work you really admire. Create a piece of art in their voice, as best as you can. Obviously, it’s going to wind up being your interpretation of their voice, but that’s okay. The point is to make art (anything) that sounds like them, instead of you. It’s an interesting way to flex your creative muscles and do something you’ve never done before. Bonus tip: Always create like nobody is watching. Whatever you do with each creative exercise, throw your whole self into the endeavor and forget about looking foolish. Creative genius flows when we let down our guards and create without any inhibitions getting in the way and telling us that our ideas are dumb. Real creativity is all about exiting that zone where we fear, worry, or label our ideas in a negative way. Give yourself just 10 minutes every day to unleash your creativity, and you’ll be amazed at how easy it is to jump back into your creative flow whenever you like.
https://medium.com/the-post-grad-survival-guide/10-quick-exercises-to-boost-your-creativity-c88f4a83c024
['Shannon Ashley']
2020-01-01 18:08:47.508000+00:00
['Coaching Corner', 'Inspiration', 'Writing', 'Creativity', 'Work']
Amazon Is Closer to Drone Shipping Than We Think
The Journey to 30-Minute Shipping Amazon went from selling books in 1995 to selling almost everything. Jeff Bezos believes its success can be boiled down to an extreme focus on the customer. A big part of focusing on customer experience is in the area of delivery. Packages often arrive well before they are supposed to, even if you don’t pay for Amazon Prime. But if you do pay for the service, deliveries are getting even faster. In some major cities like Nashville, groceries can be delivered within 2 hours and some packages are able to be delivered the same day or next. But what Amazon plans to do next is truly inspiring. Drone Ships Recently a Japanese filmmaker posted a video of a massive Amazon drone ship deploying several smaller delivery drones. The video wasn’t real and he posted it as a joke, but Amazon did actually file a patent for a floating warehouse. Imagine a fulfillment center stocked with products in the air flying to places where demand is high like major sporting events and festivals. Whenever one of these fulfillment centers needs to re-stocked, a smaller drone ship will meet it in the sky bringing more staff, drones, fuel, and inventory. Companies file for patents all the time, but it almost never translates into an actual real-world product. What this does show is Amazon’s ambitions for its drones. For the company, this isn’t a joke or a marketing stunt, but a very real possibility. Drone Delivery Success The plans to deliver packages within 30-minutes by drone started out in the UK. Two hours north of London, in the Cambridge area, Richard Barnes became the first person to ever receive an Amazon package by drone as a part of Amazon’s test trial. After putting in his purchase, a fulfillment center received it, then packaged up the items. The maximum weight their drones can carry is 5 lb, which doesn’t sound like a lot, but Amazon is quick to point out that 75% to 90% of its orders are under that limit. After Barnes’ order is packaged, the box is loaded onto a drone and taken via a track to the outside where the ship raises to nearly 400 ft. The drone then guides itself via GPS to his house and uses sense and avoid technology to detect the surroundings. Since that trial in 2016, the drone’s design has changed, favoring a shape that better uses all of its propellers for movement. Tests are still being carried out in the UK and around the world. Better yet, the drones could be coming to America. Drone Shipping in The US Earlier this year, Amazon received federal approval to operate as a drone company in the United States. The drone company joined two others, Wing Aviation, owned by a Google parent company, and UPS Flight Forward. Getting the OK from the FAA was not easy. Amazon had to prove its operations were safe. The drones are packed with three types of sensors: visual, thermal, and sonar. The sensors can detect when a person gets too close or within the landing zone and can even spot things as thin as a clothesline. Still, when we’re talking about an unmanned aerial vehicle, it’s not without risks. In case of a catastrophic failure, another Amazon patent shows a drone breaking a part deliberately in the air, before a controlled collision into a tree. Yes, Amazon plans — at least on paper — to explode malfunctioning drones in the air and crash them into trees. The reasoning is if it’s going to crash, it’s better for it to crash in small pieces rather than one large chunk of metal. Another option is to rely on parachutes for the landing. 
The California startup Zipline uses this method to deliver life-saving blood supplies to rural areas in Rwanda. Drone Infastructure While Amazon continues to develop its technology to be safer, they have to figure out the logistics of making drone deliveries within a tight 30-minute window. Fulfillment centers are usually located on the outskirts of cities to accommodate their large size, but this is inconvenient for deliveries into large cities where a growing number of people live. One solution is to construct different kinds of fulfillment centers that can be tucked into a downtown area. Amazon holds patents for several cylindrical and coned shaped buildings. These buildings would be almost like beehives with drones coming and going. These buildings could also act like a command center, similar to a flight tower at an airport. Amazon’s drones can travel up to 15 miles before running out of power. Instead of heading back to the fulfillment center to replenish the battery, Amazon could charge their drones on a structure similar to a lamppost, where a drone can land and recharge. These lamp-like structures could also double as a pick-up and drop-off point for packages. Amazon thinks this kind of set-up could extend to any tall structure like parking garages, church steeples, and cell towers. There are many options to consider. A combination of any or all of them could very well be used, although many regulatory hurdles exist. If and when delivery drones take to the sky on a large scale, they will change our idea of delivery. We will no longer think in terms of days or hours, but minutes. I mentioned that the company’s guiding philosophy is a focus on the customer, but there is another philosophy. At Amazon’s headquarters in Seattle, Bezos named one of the buildings “Day One,” which reminds everyone to treat every day as the first day of a startup in order to work quickly and focus on results. If it’s day one at Amazon, then they’re just getting started.
https://medium.com/swlh/amazon-is-closer-to-30-minute-shipping-than-we-think-ef8827cb4ee
['Matt Stevenson']
2020-12-14 06:48:33.260000+00:00
['Technology', 'Amazon', 'Drones', 'Future', 'Tech']
Strategy Design Pattern | Android real-life example
Strategy Design Pattern: a behavioral design pattern that encapsulates a “family” of algorithms and selects one from the pool for use at runtime. The algorithms are interchangeable, meaning they can be substituted for each other. Check out the source code of the app to make it easier to follow along with the tutorial. Problem: In our movie app, we fetch a list of top-rated movies and TV shows from an API. The app also contains filter options. We have 3 types of filters (genre, year, and rate). We fetch a list of movies from the server and apply each filter via its own checkbox. The problem is that we need to apply each filter without losing the effect of the previous one, e.g. if we choose the genre filter and then the year filter, we need the genre-filtered list to be updated by applying the year filter on top of it. Also, if we remove the year filter, we need the genre-filtered list to be shown again (not the original one).
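The app itself is written for Android in Java, but the core idea can be sketched language-agnostically. Below is a minimal, hypothetical Python sketch (the Movie fields, filter names, and values are invented for illustration and are not taken from the app’s source) showing each filter as an interchangeable strategy, with the active strategies re-applied to the original list whenever a checkbox changes, so removing one filter never loses the others.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Movie:            # hypothetical model for illustration
    title: str
    genre: str
    year: int
    rate: float

# A "strategy" here is a predicate deciding whether a movie passes one filter.
FilterStrategy = Callable[[Movie], bool]

def genre_filter(genre: str) -> FilterStrategy:
    return lambda m: m.genre == genre

def year_filter(year: int) -> FilterStrategy:
    return lambda m: m.year == year

def rate_filter(min_rate: float) -> FilterStrategy:
    return lambda m: m.rate >= min_rate

class MovieFilterer:
    """Holds the original list and the currently checked filter strategies."""
    def __init__(self, movies: List[Movie]):
        self._original = list(movies)
        self._active: Dict[str, FilterStrategy] = {}

    def set_filter(self, name: str, strategy: FilterStrategy) -> List[Movie]:
        self._active[name] = strategy      # checkbox checked
        return self._apply()

    def remove_filter(self, name: str) -> List[Movie]:
        self._active.pop(name, None)       # checkbox unchecked
        return self._apply()

    def _apply(self) -> List[Movie]:
        # Always start from the original list and re-apply every active filter,
        # so removing one filter restores the result of the remaining ones.
        result = self._original
        for strategy in self._active.values():
            result = [m for m in result if strategy(m)]
        return result

In this sketch, checking the genre box maps to set_filter('genre', genre_filter('Drama')); checking the year box afterwards narrows that genre-filtered result; unchecking the year box calls remove_filter('year') and the genre-only list comes back, which is exactly the behavior described above.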
https://medium.com/android-dev-hacks/strategy-design-pattern-android-real-life-example-a055bb0353b3
['Jehad Kuhail']
2020-07-22 13:33:01.884000+00:00
['Android', 'Java', 'Design Patterns', 'Coding', 'Android App Development']
Transfer Learning SOTA— Do Adversarially Robust ImageNet Models Transfer Better?
The authors set out to check whether an adversarially trained network performs better on transfer learning tasks, despite its worse accuracy on the dataset it was trained on (ImageNet, of course). It turns out to be true. They tested the idea in two settings: with a frozen pre-trained feature extractor and only a linear classifier trained on top, which outperformed its conventionally trained counterpart, and with a fully unfrozen, fine-tuned network, which also did better on transfer learning tasks. For pre-training they use an adversarial robustness prior, which refers to a model’s invariance to small (often imperceptible) perturbations of its inputs. They also show that this approach gives the networks better feature representation properties. They ran many experiments (14 pages of plots) and an ablation study.
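As a rough illustration of the fixed-feature transfer setup the paper evaluates, here is a minimal tf.keras sketch. It uses standard ImageNet weights as a stand-in — the adversarially robust checkpoints from the paper are not assumed to be available here — and the number of target classes is a placeholder. The backbone is frozen and only a linear classifier is trained on the downstream task.

import tensorflow as tf

NUM_TARGET_CLASSES = 10  # hypothetical downstream dataset size

# Pre-trained backbone used as a frozen feature extractor.
backbone = tf.keras.applications.ResNet50(
    weights='imagenet', include_top=False, pooling='avg',
    input_shape=(224, 224, 3))
backbone.trainable = False  # fixed-feature setting: only the head is trained

# Linear classifier on top of the frozen features.
inputs = tf.keras.Input(shape=(224, 224, 3))
features = backbone(inputs, training=False)
outputs = tf.keras.layers.Dense(NUM_TARGET_CLASSES, activation='softmax')(features)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# For the full fine-tuning setting the paper also tests, one would instead set
# backbone.trainable = True (typically with a much smaller learning rate).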
https://medium.com/deep-learning-digest/transfer-learning-sota-do-adversarially-robust-imagenet-models-transfer-better-dba9bf61b9c3
['Mikhail Raevskiy']
2020-07-27 08:48:05.489000+00:00
['Deep Learning', 'Computer Vision', 'Python', 'Data Science', 'Machine Learning']
How Amazon Innovates
In 2014, Stephenie Landry was finishing up her one-year stint as technical adviser to Jeff Wilke, who oversees Amazon’s worldwide consumer business, which is a mentor program that allows high potential executives to shadow a senior leader and learn firsthand. Her next assignment would define her career. At most companies, an up-and-comer like Landry might be given a division to run or work on a big acquisition deal. Amazon, however, is a different kind of place. Landry wrote a memo outlining plans for a new service she’d been thinking about, Prime Now, which today offers one-hour delivery to customers in over 50 cities across nine countries. It’s no secret that Amazon is one of the world’s most innovative companies. Starting out as a niche service selling books online, it’s now not only a dominant retailer but a pioneer of new categories like cloud computing and smart speakers. The key to its success is not any one process, but how it integrates a customer obsession deep within its culture and practice. Starting With the Customer and Working Back At the heart of how Amazon innovates is its six-page memo, which is required at the start of every new initiative. What makes it effective isn’t so much the structure of the document itself, but how it is used to embed a fanatical focus on the customer from day one. It’s something that is impressed upon Amazon employees early in their careers. The first step in developing Prime Now was to write a press release. Landry’s document was not only a description of the service, but how hypothetical customers would react to it. How did the service affect them? What surprised them about it? What concerns did they want addressed? The exercise forced her to internalize how Amazon’s customers would think and feel about Prime Now from the very start. You often need to go slow to move fast. Next, she wrote a series of FAQs anticipating concerns for both customers and for various stakeholders within the firm, like the CFO, operations people, and the leadership of the Prime program. Landry had to imagine what questions each would have, how her team would resolve issues, and then explain things in clear, concise language. All of this happens before the first meeting is held, a single line of code is written, or an early prototype is built, because the company strongly believes that until you internalize the customer’s perspective, nothing else really matters. That’s key to how the company operates. A Deeply Embedded Writing Culture It’s no accident that the first step to develop a new product at Amazon is a memo rather than, say, a PowerPoint deck or a kickoff meeting. As Fareed Zakaria once put it, “Thinking and writing are inextricably intertwined. When I begin to write, I realize that my ‘thoughts’ are usually a jumble of half-baked, incoherent impulses strung together with gaping logical holes between them.” Amazon focuses on building writing skills early in an executive’s career. “Writing is a key part of our culture,” Landry told me. “I started writing press releases for smaller features and projects. One of my first was actually about packaging for diamond rings. Over years of practice and coaching, I got better at it.” Being able to write a good memo is also a key factor in advancement at Amazon. If you want to rise, you need to write—and write well. Landry also stressed the importance of brevity. “Keeping things concise and to the point forces you to think things through in a way that you wouldn’t otherwise. 
You can’t hide behind complexity, you actually have to work through it,” she said. Or, as another Amazon leader put it, “Perfection is achieved when there is nothing left to remove.” Moreover, writing a memo isn’t a solo effort; it’s a collaborative process. Typically, executives spend a week or more sharing the document with colleagues, getting feedback, honing, and tweaking it until every conceivable facet is deeply thought through. Reinventing the Office Meeting Another unique facet of Amazon’s culture is how meetings are run. In recent years, a common complaint throughout the corporate world is how the number of meetings has become so oppressive it’s hard to get any work done. Research from MIT shows that executives spend an average of nearly 23 hours a week in meetings, up from less than 10 hours in 1960. At Amazon, however, the six-page memo cuts down on the number of meetings. If you have to spend a week writing a memo, you don’t just start sending out invites whenever the fancy strikes you. Similarly, the company’s practice of limiting attendance to roughly the number of people that can share two pizzas promotes restraint. Each meeting starts out with a 30- to 60-minute reading period in which everybody digests the memo. From there, all attendees are asked to share gut reactions—senior leaders typically speak last—and then they delve into what might be missing, ask probing questions, and drill down into any potential issues that may arise. The truth is that there is no one “true” path to innovation because innovation, at its core, is about solving problems. Subsequent meetings follow the same pattern to review the financials, hone the concept, and review mock-ups as the team further refines ideas and assumptions. “It’s usually not one big piece of feedback that you get,” Landry stressed. “It is really all about the smaller questions; they help you get to a level of detail that really brings the idea to life.” All of this might seem terribly cumbersome to fast-moving executives accustomed to zinging in and out of meetings all day, but you often need to go slow to move fast. In the case of Prime Now, the service took just 111 days to go from an idea on a piece of paper to a product launch in one ZIP code in Manhattan, and it expanded quickly from there. Co-Evolving Culture and Practice Every company innovates differently. Apple has a fanatical focus on design. IBM’s commitment to deep scientific research has enabled it to stay on the cutting edge and compete long after most of its competitors have fallen by the wayside. Google integrates a number of innovation strategies into a seamless whole. What works for one company will likely not work for another, a fact that Amazon CEO Jeff Bezos highlighted in a recent letter to shareholders. “We never claim that our approach is the right one—just that it’s ours—and over the last two decades, we’ve collected a large group of like-minded people. Folks who find our approach energizing and meaningful,” he wrote. The truth is that there is no one “true” path to innovation because innovation, at its core, is about solving problems and every enterprise chooses different problems to solve. While IBM might be happy to have its scientists work for decades on some arcane technology and Google gladly allows its employees to pursue pet projects, those things probably wouldn’t fly at Amazon. However, the one thing that all great innovators have in common is that culture and practice are deeply intertwined. That’s what makes them so hard to copy. 
Anybody can write a six-page memo or start meetings with a reading period. It’s not those specific practices, but the commitment to the values they reflect, that has driven Amazon’s incredible success.
https://medium.com/s/story/how-amazon-innovates-67747090c4d2
['Greg Satell']
2018-12-01 19:50:48.297000+00:00
['Company Culture', 'Leadership', 'Amazon', 'Writing', 'Business']
How we work as a distributed product team
It’s possible to decide to be distributed for all the right reasons, but to screw it up royally with the wrong tactics and tools. The more a team is distributed, the more potential for people to move fast and get many things done, but for those things to not necessarily be the right things. Why does this happen? Because communications is really, really hard. It’s hard when everyone is together in the same room. It’s harder when people are separated by time and distance. In both work and life, there’s one thing I’ve learned about problems: All problems are people problems, and all people problems are communication problems. Doing distributed team projects well requires that the organization and its people, tools and processes all facilitate open communications and access to information at the lowest possible “cost”. Scheduling a bunch of meetings is high cost. It’s not the answer. You need to set things up so that, as much as possible, great communications happens naturally, serendipitously — by accident if you will. At Talko, where we are split between Boston, Seattle and San Francisco, we’ve benefited from having previously worked together. We knew each other, we respected each other, we’d built software together before. That’s a huge running start that many don’t have. But it didn’t eliminate the need to be intentional about distributed team practices and processes. This is not all-inclusive, but here are several things we do to help us communicate effectively without ever being together in one place. There is no hub. There are no spokes. We are a constellation. People often ask “Where is Talko ‘based’?” Many assume we’re based in Boston because we have more people there than in Seattle or San Francisco. I say Talko is based in Talko (the app). Sure, we have an official company address for the purposes of incorporation. But we don’t think of our team as being based in any one location. We give equal weight to our presence in all three cities, regardless of how many people are in each. We refer to our team as distributed, not “remote”. The word remote infers that some aspect of the team is centralized and others are remote. “Us” and “them”. “The core” and “the others”. Watch the toxicity take over once people start thinking and acting like this. We model our company as a constellation, not as a hub with spokes. This isn’t the only model that works for distributed teams. Arguably, it’s not possible for a much larger organization. But I believe it’s best for any small team that chooses to be distributed. This sounds ethereal when talking about it as an abstract model. But the model, the words used to describe it and the way it’s drawn on a napkin matter a lot. These things shape the team’s mindset, how people relate to one another, and how individuals perceive and act their role on the team. They also inform the tactics necessary for the organization to be true to what it wants to be. For example, at Talko we encourage people to work from home 2–3 days per week. We do this even though we have office space in all three cities. It’s a tactic that helps our people empathize and relate to one another as equals, always, regardless of how many team members live in one city versus another. It’s how we avoid situations where several people huddle in a room to tackle big problems and make important decisions, then later realize that ‘elsewhere others’ needed to be part of the conversation. Tactics like this help avoid obvious sources of misunderstanding within a distributed team. 
We have a clear set of top-level projects. A project is substantive enough to be sustained beyond any particular sprint or milestone. It has a well-defined team of people working on it. Unless you have a 1 project operation, which may be likely in the early days, a project team is a subset of the overall team. It’s essential that everyone in the organization knows who’s on each project. A project has well-defined goals, stories and designs, milestones and timelines. Project team members are all deeply intimate with these things. All others in the company understand at least enough to know and support why the project is essential to the company’s mission. Our top-level technical projects tracked as Epics in Jira. We have 5 top level technical projects at Talko: iOS App, Android App, Web App, Service, and Slack Integration. These are the set of sustained technical efforts that we organize, track and report around. We adapt our tools to work for our preferences. The image to the left shows how we use Epics in Jira as the organizing model for our projects. This is not exactly true to the Agile (capital A) definition of Epic, but it works well for us right now. We’re rarely compelled to be 100% true to the dogma of the processes or methodologies we use. We do what works. We invest in organizing our tools. There’s virtually always some way, some how for someone to get whatever information they need to do their job. The question is — at what cost in time, attention, disruption? Let’s say you need to come up to speed on the technical design for a feature. Do you: Schedule a meeting with the project team. Interrupt a developer to ask them directly. Go get the 2-page spec (because you know right where it is) and take 5 minutes to read it. The first two options are costly. Operate like this as a distributed team and nobody will ever get anything done. The point is that everyone needs to know exactly what tools are used for what purpose, and where to go to find what they need. Nobody should ever be uncertain of where or how to find something at the moment of need. We use many different tools for different aspects of the business at Talko. Jira for projects, stories and bugs. Office docs, PDFs, and increasingly Quip docs for features that require specification. Github for source control and code review workflows. Amplitude for app analytics. Zendesk for support. Nagios, PagerDuty, Crittercism for error reporting. And more. A sub-folder in our Dropbox. A little structure saves oodles of time down the line. Beyond the specific tools, the important thing is that each is organized for maximum transparency and quick access to information. A simple example can be seen in the image to the left. It shows how the Product Management subfolder is organized at one point in its hierarchy in Dropbox. It takes some time and care to set up at the beginning, but it saves time in spades later on. There’s never a question from a developer about where to find, for example, the app analytics event schema spec — of course it’s in Product Managment > 01 Feature Specs > Analytics. We operate on a cadence that naturally keeps people in sync. We use cadence as a tool to achieve common awareness across a project team. Quick cycle times force regular communication and help people self-organize with a calendar that is near-clear of expensive meetings. Shorter than typical sprints/milestones/releases create natural sync-ups, both planned and ad-hoc. 
It makes it difficult for things to slip through the cracks, and easy for a project team to shift priorities quickly when necessary. For example, we do 1-week sprints. For co-located teams I’ve worked with in the past I’ve always preferred 2-week sprints. However, many things can change contextually over a 2-week period. One-week sprints require conversations to happen with a tempo that ensures people don’t fall out of sync about what matters most. No fancy meetings required typically, just a quick, often ad-hoc check-in as we move from one sprint to the next. Of course, for such a discussion to be quick it presumes that people have done their jobs prior, e.g. PM has prioritized stories and bugs for the next sprint and can succinctly communicate what and why to Dev. Our work is done and ‘documented’ in our communications. We write documents and detailed specifications for several things. A new and big feature area. Test scripts. Technical architecture. And more. However, we produce far fewer documents relative to similarly scoped projects of several years ago. The document (or email) is no longer the primary artifact around which we do our work. There’s a flow to our communications and decision-making that is far more organic and fast-moving. It happens when we’re mobile and on-the-go as much as it does when we’re at our desks. It centers around text and voice messages, photos and screen-shares, and quick calls when we need to converge urgently. All of this happens with very few scheduled meetings or document reviews. The challenge is how to ensure all team members have access to the information they need to do their job, or to simply be an informed and participating team member. There’s no longer a presumption that everything is written in a document or summarized in mail. In fact there’s a preference that it’s not. Instead, there’s a presumption that key issues discussed and decisions made in the flow of conversation are “on the record” so that team members who weren’t there can be aware. Talko makes it simple to find key conversations by searching on flags and tags. Tactically, there are several tools we use to make this real. One of them is our own. Since mid-2012 we’ve had all of our daily standups, design & architecture conversations, bug triages, all-hands meetings and 1:1 discussions in Talko. The fact that all conversations — LIVE calls or messages — are saved in Talko means that key issues or decisions are easily flagged, tagged and discoverable at any time in the future. This helps in a range of scenarios. When an engineer misses daily standup but needs to quickly know the status of another engineer who’s working on a blocking bug. Or, when a PM is trying to remember why a particular design decisions was made for a feature implemented long ago (see image). Slack is great as a single hub for real-time status of all project activities and processes. Slack is another tool we use to support transparent, real-time access to our workflows and communications. We use many integrations — Github, Jira iDoneThis, Talko, Zendesk and several others. Essentially, the tool of choice for every aspect of the business is integrated with Slack. As such, Slack has become our hub for real-time status of any aspect of our engineering projects. A great example is our #github channel shown in the image. It provides a simple, stream view into what everyone is getting done. 
Like in Talko, what turns this from an activity stream to an irreplaceable productivity tool is the fact that all channels are easily searched. Anyone can find something that was said or done in the past and which may be important to recall at the moment. Tools such as Talko and Slack won’t solve all communications problems. They can’t communicate for you! But it’s in tools like these where work is getting done and ‘documented’ these days, much more so than in files and emails. By turning conversations and activity streams into persistent, searchable objects these tools help our team avoid the lost time, misunderstanding and strained team dynamics that otherwise occur when people are moving fast and making decisions in the flow of communicating. We treat team operations like a product. A software product is never done. It can always be made better, simpler, more capable, more performant. The same can be said for how a team works together. We treat our organization and its tools, processes and practices just like we treat the product we build together. We’re motivated to make things better. We’re far from perfect. Sometimes we do things poorly. Sometimes things slip through the cracks. But we’re conscious and aware when these things happen. We talk about it. We make tactical adjustments when we’re convinced the benefit will outweigh the cost. That is why, for example, I emphasize that using Epics to organize projects in Jira works for us “right now”. It’s why we recently made a change in how we do app analytics and the tool we use. It’s why we’re embracing an organic team shift from documents as files in Dropbox to Quip docs for many things. It’s why we’re currently contemplating changes in how we communicate client/service deployment dependencies to better support our aggressive multi-app release goals. And so on. What works for us now as a 12-person team is not what will work for us in the future. As the team and our projects grow we will adapt the way we organize our work, how communicate, the tools we use and how we structure them to meet the needs of the business at any particular moment in time.
https://medium.com/talko-team-talk-share-do/how-we-work-as-a-distributed-product-team-834e9485ba7b
['Matt Pope']
2015-11-12 15:44:40.593000+00:00
['Startup', 'Teamwork', 'Engineering']
Data Augmentation | How to use Deep Learning when you have Limited Data — Part 2
Getting Started Before we dive into the various augmentation techniques, there’s one issue that we must consider beforehand. Where do we augment data in our ML pipeline? The answer may seem quite obvious; we do augmentation before we feed the data to the model right? Yes, but you have two options here. One option is to perform all the necessary transformations beforehand, essentially increasing the size of your dataset. The other option is to perform these transformations on a mini-batch, just before feeding it to your machine learning model. The first option is known as offline augmentation. This method is preferred for relatively smaller datasets, as you would end up increasing the size of the dataset by a factor equal to the number of transformations you perform (For example, by flipping all my images, I would increase the size of my dataset by a factor of 2). The second option is known as online augmentation, or augmentation on the fly. This method is preferred for larger datasets, as you can’t afford the explosive increase in size. Instead, you would perform transformations on the mini-batches that you would feed to your model. Some machine learning frameworks have support for online augmentation, which can be accelerated on the GPU. Popular Augmentation Techniques In this section, we present some basic but powerful augmentation techniques that are popularly used. Before we explore these techniques, for simplicity, let us make one assumption. The assumption is that, we don’t need to consider what lies beyond the image’s boundary. We’ll use the below techniques such that our assumption is valid. What would happen if we use a technique that forces us to guess what lies beyond an image’s boundary? In this case, we need to interpolate some information. We’ll discuss this in detail after we cover the types of augmentation. For each of these techniques, we also specify the factor by which the size of your dataset would get increased (aka. Data Augmentation Factor). 1. Flip You can flip images horizontally and vertically. Some frameworks do not provide function for vertical flips. But, a vertical flip is equivalent to rotating an image by 180 degrees and then performing a horizontal flip. Below are examples for images that are flipped. From the left, we have the original image, followed by the image flipped horizontally, and then the image flipped vertically. You can perform flips by using any of the following commands, from your favorite packages. Data Augmentation Factor = 2 to 4x # NumPy.'img' = A single image. flip_1 = np.fliplr(img) # TensorFlow. 'x' = A placeholder for an image. shape = [height, width, channels] x = tf.placeholder(dtype = tf.float32, shape = shape) flip_2 = tf.image.flip_up_down(x) flip_3 = tf.image.flip_left_right(x) flip_4 = tf.image.random_flip_up_down(x) flip_5 = tf.image.random_flip_left_right(x) 2. Rotation One key thing to note about this operation is that image dimensions may not be preserved after rotation. If your image is a square, rotating it at right angles will preserve the image size. If it’s a rectangle, rotating it by 180 degrees would preserve the size. Rotating the image by finer angles will also change the final image size. We’ll see how we can deal with this issue in the next section. Below are examples of square images rotated at right angles. The images are rotated by 90 degrees clockwise with respect to the previous one, as we move from left to right. You can perform rotations by using any of the following commands, from your favorite packages. 
Data Augmentation Factor = 2 to 4x.

# TensorFlow. Placeholders: 'x' = a single image, 'y' = a batch of images.
# 'k' denotes the number of 90-degree anticlockwise rotations.
shape = [height, width, channels]
x = tf.placeholder(dtype=tf.float32, shape=shape)
rot_90 = tf.image.rot90(x, k=1)
rot_180 = tf.image.rot90(x, k=2)

# To rotate by an arbitrary angle. In the example below, 'angles' is in radians.
shape = [batch, height, width, 3]
y = tf.placeholder(dtype=tf.float32, shape=shape)
rot_tf_180 = tf.contrib.image.rotate(y, angles=3.1415)

# Scikit-Image. 'angle' = degrees, 'img' = input image.
# For details about 'mode', check out the interpolation section below.
rot = skimage.transform.rotate(img, angle=45, mode='reflect')

3. Scale

The image can be scaled outward or inward. While scaling outward, the final image size will be larger than the original image size; most image frameworks then cut out a section of the new image equal in size to the original. We'll deal with scaling inward in the next section, as it reduces the image size and forces us to make assumptions about what lies beyond the boundary. Below are examples of images being scaled: from the left, the original image, the image scaled outward by 10%, and the image scaled outward by 20%.

You can perform scaling by using the following commands in scikit-image. Data Augmentation Factor = arbitrary.

# Scikit-Image. 'img' = input image, 'scale' = scale factor.
# For details about 'mode', check out the interpolation section below.
scale_out = skimage.transform.rescale(img, scale=2.0, mode='constant')
scale_in = skimage.transform.rescale(img, scale=0.5, mode='constant')
# Don't forget to crop the images back to the original size (for scale_out).

4. Crop

Unlike scaling, here we just randomly sample a section from the original image and then resize that section to the original image size. This method is popularly known as random cropping. Below are examples of random cropping. If you look closely, you can notice the difference between this method and scaling: from the left, the original image, a square section cropped from the top-left, and a square section cropped from the bottom-right. The cropped sections were resized to the original image size.

You can perform random crops by using the following commands in TensorFlow. Data Augmentation Factor = arbitrary.

# TensorFlow. 'x' = a placeholder for an image.
original_size = [height, width, channels]
x = tf.placeholder(dtype=tf.float32, shape=original_size)

# Use the following commands to perform random crops.
crop_size = [new_height, new_width, channels]
seed = np.random.randint(1234)
x = tf.random_crop(x, size=crop_size, seed=seed)
output = tf.image.resize_images(x, size=original_size[:2])  # resize expects [height, width]

5. Translation

Translation just involves moving the image along the X or Y direction (or both). In the following example, we assume that the image has a black background beyond its boundary, and the images are translated appropriately. This method of augmentation is very useful, as most objects can be located almost anywhere in the image. This forces your convolutional neural network to look everywhere. From the left, we have the original image, the image translated to the right, and the image translated upwards.

You can perform translations in TensorFlow by using the following commands. Data Augmentation Factor = arbitrary.

# pad_left, pad_right, pad_top and pad_bottom denote the pixel displacement.
# Set one of them to the desired value and the rest to 0.
shape = [batch, height, width, channels]
x = tf.placeholder(dtype=tf.float32, shape=shape)

# We use two functions to get our desired augmentation.
x = tf.image.pad_to_bounding_box(x, pad_top, pad_left, height + pad_bottom + pad_top, width + pad_right + pad_left)
output = tf.image.crop_to_bounding_box(x, pad_bottom, pad_right, height, width)

6. Gaussian Noise

Over-fitting usually happens when your neural network tries to learn high-frequency features (patterns that vary rapidly across the image) that may not be useful. Gaussian noise, which has zero mean, has components at essentially all frequencies, effectively distorting the high-frequency features. This also means that lower-frequency components (usually, your intended data) are distorted as well, but your neural network can learn to look past that. Adding just the right amount of noise can enhance the network's ability to learn.

A toned-down version of this is salt-and-pepper noise, which presents itself as random black and white pixels spread through the image. This is similar to the effect produced by adding Gaussian noise to an image, but may have a lower information distortion level. From the left, we have the original image, the image with added Gaussian noise, and the image with added salt-and-pepper noise.

You can add Gaussian noise to your image by using the following commands in TensorFlow. Data Augmentation Factor = 2x.

# TensorFlow. 'x' = a placeholder for an image.
shape = [height, width, channels]
x = tf.placeholder(dtype=tf.float32, shape=shape)

# Adding Gaussian noise.
noise = tf.random_normal(shape=tf.shape(x), mean=0.0, stddev=1.0, dtype=tf.float32)
output = tf.add(x, noise)

Advanced Augmentation Techniques

Real-world, natural data can still exist in a variety of conditions that cannot be accounted for by the simple methods above. For instance, take the task of identifying the landscape in a photograph. The landscape could be anything: freezing tundra, grasslands, forests and so on. Sounds like a pretty straightforward classification task, right? You'd be right, except for one thing. We are overlooking a crucial feature in the photographs that affects performance: the season in which the photograph was taken.

If our neural network does not understand that certain landscapes can exist in a variety of conditions (snowy, damp, bright, etc.), it may spuriously label frozen lakeshores as glaciers or wet fields as swamps. One way to mitigate this is to add more pictures so that we account for all the seasonal changes, but that is an arduous task. Extending our data augmentation concept, imagine how useful it would be to generate effects such as different seasons artificially.

Conditional GANs to the rescue! Without going into gory detail, conditional GANs can transform an image from one domain into an image in another domain. That may sound vague, but it is genuinely what these networks can do. Below is an example of conditional GANs used to transform photographs of summer scenery into winter scenery.

Changing seasons using a CycleGAN (Source: https://junyanz.github.io/CycleGAN/)

The above method is robust, but computationally intensive. A cheaper alternative is something called neural style transfer. It grabs the texture/ambiance/appearance of one image (a.k.a. the "style") and mixes it with the content of another.
Using this powerful technique, we can produce an effect similar to that of our conditional GAN (in fact, this method was introduced before cGANs were invented!). The only downside is that the output tends to look more artistic than realistic. However, there are advancements such as Deep Photo Style Transfer, shown below, that produce impressive results.

Deep Photo Style Transfer. Notice how we could generate the effect we desire on our dataset. (Source: https://arxiv.org/abs/1703.07511)

We have not explored these techniques in great depth, as we are not concerned with their inner workings here. We can use existing trained models, along with the magic of transfer learning, to apply them for augmentation.
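To tie the basic techniques above together, here is a minimal sketch of what online (on-the-fly) augmentation could look like with TensorFlow's tf.data pipeline, using only operations already discussed. This is not code from the original article: the in-memory 'images' and 'labels' arrays, the noise level, and the batch size are assumptions for illustration, and images are assumed to be float32 in [0, 1].

import numpy as np
import tensorflow as tf

def augment(image, label):
    # Random horizontal/vertical flips (Section 1).
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    # Random rotation by a multiple of 90 degrees (Section 2).
    k = tf.random_uniform([], minval=0, maxval=4, dtype=tf.int32)
    image = tf.image.rot90(image, k=k)
    # Mild additive Gaussian noise (Section 6).
    noise = tf.random_normal(shape=tf.shape(image), mean=0.0, stddev=0.05)
    return image + noise, label

# 'images' and 'labels' are assumed to be NumPy arrays of equal length.
dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .shuffle(buffer_size=1024)
           .map(augment, num_parallel_calls=4)
           .batch(32)
           .prefetch(1))

Because the transformations run inside the input pipeline, each epoch sees a freshly perturbed version of every image without the dataset ever growing on disk; offline augmentation would instead apply the same operations once and save the enlarged dataset.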
https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced
['Bharath Raj']
2018-06-04 12:15:12.573000+00:00
['Deep Learning', 'AI', 'Artificial Intelligence', 'Neural Networks', 'Machine Learning']
60 Questions to Test Your Knowledge of Python Lists
Questions 1–10:

1. Check if a list contains an element

The in operator will return True if a specific element is in a list.

li = [1,2,3,'a','b','c']
'a' in li #=> True

2. How to iterate over 2+ lists at the same time

You can zip() lists and then iterate over the zip object. A zip object is an iterator of tuples. Below we iterate over 3 lists simultaneously and interpolate the values into a string.

name = ['Snowball', 'Chewy', 'Bubbles', 'Gruff']
animal = ['Cat', 'Dog', 'Fish', 'Goat']
age = [1, 2, 2, 6]
z = zip(name, animal, age)
z #=> <zip at 0x111081e48>
for name, animal, age in z:
    print("%s the %s is %s" % (name, animal, age))
#=> Snowball the Cat is 1
#=> Chewy the Dog is 2
#=> Bubbles the Fish is 2
#=> Gruff the Goat is 6

3. When would you use a list vs dictionary?

Lists and dictionaries generally have slightly different use cases, but there is some overlap. The general rule I've come to for algorithm questions is that if you can use either, use a dictionary, because lookups are faster.

List: use a list if you need to store the order of something, i.e. the IDs of database records in the order they'll be displayed.

ids = [23,1,7,9]

While both lists and dictionaries are ordered as of Python 3.7, a list allows duplicate values while a dictionary doesn't allow duplicate keys.

Dictionary: use a dictionary if you want to count occurrences of something, like the number of pets in a home.

pets = {'dogs':2,'cats':1,'fish':5}

Each key can only exist once in a dictionary. Note that keys can also be other immutable data structures like tuples, i.e. {('a',1):1, ('b',2):1}.

4. Is a list mutable?

Yes. Notice in the code below how the list's identity in memory (the value returned by id()) has not changed, even though its contents have.

x = [1]
print(id(x),':',x) #=> 4501046920 : [1]
x.append(5)
x.extend([6,7])
print(id(x),':',x) #=> 4501046920 : [1, 5, 6, 7]

5. Does a list need to be homogeneous?

No. Different types of objects can be mixed together in a list.

a = [1,'a',1.0,[]]
a #=> [1, 'a', 1.0, []]

6. What is the difference between append and extend?

.append() adds an object to the end of a list.

a = [1,2,3]
a.append(4)
a #=> [1, 2, 3, 4]

This also means appending a list adds that whole list as a single element, rather than appending each of its values.

a.append([5,6])
a #=> [1, 2, 3, 4, [5, 6]]

.extend() adds each value from a 2nd list as its own element. So extending a list with another list combines their values.

b = [1,2,3]
b.extend([5,6])
b #=> [1, 2, 3, 5, 6]

7. Do Python lists store values or pointers?

Python lists don't store values themselves. They store pointers to values stored elsewhere in memory. This allows lists to be mutable. Here we look up the values 1 and 2, then create a list including the values 1 and 2.

print( id(1) ) #=> 4438537632
print( id(2) ) #=> 4438537664
a = [1,2,3]
print( id(a) ) #=> 4579953480
print( id(a[0]) ) #=> 4438537632
print( id(a[1]) ) #=> 4438537664

Notice how the list has its own memory address, but 1 and 2 in the list point to the same places in memory as the 1 and 2 we previously referenced.

8. What does "del" do?

del removes an item from a list given its index. Here we'll remove the value at index 1.

a = ['w', 'x', 'y', 'z']
a #=> ['w', 'x', 'y', 'z']
del a[1]
a #=> ['w', 'y', 'z']

Notice how del does not return the removed element.

9. What is the difference between "remove" and "pop"?

.remove() removes the first instance of a matching object. Below we remove the first 'b'.

a = ['a', 'a', 'b', 'b', 'c', 'c']
a.remove('b')
a #=> ['a', 'a', 'b', 'c', 'c']

.pop() removes an object by its index.
The difference between pop and del is that pop returns the popped element. This allows using a list like a stack. a = ['a', 'a', 'b', 'b', 'c', 'c'] a.pop(4) #=> 'c' a #=> ['a', 'a', 'b', 'b', 'c'] By default, pop removes the last element from a list if an index isn’t specified. 10. Remove duplicates from a list If you’re not concerned about maintaining the order of a list, then converting to a set and back to a list will achieve this.
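A minimal example in the same style (the list li and its values are just for illustration; note that converting to a set does not preserve the original order):

li = [3, 2, 2, 1, 1, 1]
list(set(li)) #=> [1, 2, 3] (order not guaranteed)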
https://towardsdatascience.com/60-questions-to-test-your-knowledge-of-python-lists-cca0ebfa0582
['Chris I.']
2020-05-19 15:45:39.058000+00:00
['Programming', 'Software Engineering', 'Software Development', 'Python', 'Data Structures']
What I’m Listening To Today, Episode #15
What I’m Listening To Today, Episode #15 Ladder Of Success, Skeeter Davis Two ladders lean up against my neighbour’s house, a sure sign of his industry that throws an unwelcome spotlight on my sloth. One ladder, okay, but two? I gaze up resentfully and think: what ambition! I have not had a single moment of enthusiasm or vision in the home improvement department this year. I have instead watched over the summer as he has gone up and down the ladder. Sometimes clipping away plants, other times with a tin of paint or a heat gun, or in one case, a cup of tea, as though drinking from the top of his artificial precipice afforded him some closer relationship to god, while the beasts next door squirmed and rolled in their own dust and filth far below. Every time I look at my house or its contents, I fall into an apathy-induced lethargy. I don’t know what to do with any of it, except throw it all away and start again. And then I being to ache with hope, because maybe I can keep some of the growing cardboard avalanche in my outbuilding, it would be such a shame to throw all of this stuff away. I love clutter and I hate clutter. I love order, but I’m a mess. I’m sorry I put you in the outbuilding. I put you there because I didn’t have anywhere else to put you. Maybe one day we’ll move and I can put you somewhere better. Let’s wait. Let’s wait until it’s later, and we’ll either have moved or I will have found a use for you. That way, I have a good reason for doing nothing. The last great push towards modern good living began in January. Looking hard over the forty year old kitchen, I concluded with the heaviest of sighs that the job was beyond my YouTube tutorial apprenticeship, and so I went to a kitchen showroom and we picked out something we hoped would give our 500-year old cottage a modern facelift without ruining the period aesthetic. People were milling about the showroom happily. My four year old was plotting the downfall of some complimentary cookies on the designer’s desk top, and we were trying to balance cost and style while also attempting to appear sane to the poor woman who was putting together a 3D render of it all. On the radio, there were some reports coming in of a strange outbreak in China, but we thought nothing of that. I put down a deposit, shook her hand and we wandered out into a rainy afternoon. Contrast this with a few weeks later, when I went in to sign the rest of the paperwork approving the work. Lockdown was not yet upon us, but it wasn’t far away. The lights were off in the showroom, and I wondered if I had gotten the appointment time wrong, until someone came to the door to open up for me. I walked in and told them who I was there to see. By now, we had some small understanding of what was going on, and so I sat away from her, but we passed papers back and forth and shared a pen, so, as I say, a small understanding. I handed over a large chunk of cash and then the pandemic really got going and with some naivety, I thought they might give the money back and just call the whole thing off. Instead, I got a call a few weeks later to tell me that everything was going ahead, as planned. What? The entire country was in lockdown. I wasn’t allowed to leave the house unless I was seeking food or medical aid. And yet, it was somehow going to be fine for a couple of guys to deliver a kitchen to me, and then have a team of people invade my cripplingly small house for a week to install it. 
By now, I felt we really did have a pretty good understanding of what was going on, so when they came, they put up a sheet of paper in the kitchen (which was also the living room and the reception room), which read: THIS IS A SOCIAL DISTANCING WORK ZONE. We lived upstairs for a week and a half, the three of us. I would come downstairs every now and again to stick the kettle on and make coffee for the builders (you’ve got to keep them on your side), and invariably get drawn into a conversation about how terrible people are (you’ve really, really, got to keep them on your side). I have encountered few better ways of rapport building than to make and distribute hot drinks to parched lips. I think in the end they grew more irritated about working in the cramped, shitty little house that I haunt, rather than with me, and I can say that we parted company on good terms, because they had delivered a kitchen, as promised, and I had paid them 25% more in additional subcontractor costs. One evening I arrived home to find my neighbour on the other side hacking away at a bush for no good reason. That bush is there so that I don’t have to see you when I walk outside. I felt up until that moment that I had a good handle on the complex social dynamics of living mid-terrace. Now people were breaking down barriers on one side and looking down on me from the other. They had emerged from isolation with a desire for things to be fresh and new. I just wanted things to go back to the way they were in the relatively less sorry state that existed pre-2020. Dropping socks while carrying out my laundry to the machine, two overalled men perched high above peer down. “You missed one”, my neighbour says, cheerfully. He is drinking tea again on his improvised balcony. “Thanks,” I reply. Maybe it’s time I bought a ladder too. As I step into the outbuilding, I trip over my son’s bicycle and my dirty laundry flies out of my hands and then the cardboard avalanche lurches menacingly above me for a second or two before crashing down on my unproductive head. This is what happens to hoarders, you know. This is how they end. Surrounded by their own disorder.
https://medium.com/the-manic-depressives-handbook/what-im-listening-to-today-episode-15-f51bb4fbb4f4
['Jon Scott']
2020-09-15 15:50:19.381000+00:00
['Humor', 'Ladders', 'Success', 'Music', 'Productivity']
Calm Acceptance and Clouds of Unease
Outside earlier today, on the street, I sensed dollops of panic, flavored with a sprinkling of resistance. Many people live in fear for themselves and their loved ones. Or they want to pretend the threat to health isn't real and continue as if all is well. Safe in my friendly abode, though, the atmosphere is one of calm acceptance. I watch condensation trickle down a cold windowpane and am glad to have shaken off the feeling of oddness with which I awoke this morning. It's strange to open your eyes in the time of the Coronavirus.

The inky silhouettes of bare trees that reach crisp fingers into the gray sky are a stark contrast to the warmth here indoors. At this time of day, I'm used to the whir of my computer, birds that chirp in the treetops, and my fingers as they tap the keyboard. Now, though, a gentle hum and snort joins them as my husband and dog sleep "until it's all over." My collie, I suspect, since he knows nothing of what's happening, joins in with his master's nap out of camaraderie and convenience: It's a splendid excuse to loll.

Up here in my room at the top, heat rises from radiators lower in the house to mingle with the chill. A faint scent of toasted cauliflower cheese, our lunchtime meal, drifts along with fresh coffee and spicy aromatherapy. Despite a measure of recognition and tranquility, the strange wave of unease drifts close by like a cloud in a breeze-less sky.

And what do I taste? A memory of this morning's walk in the drizzle. Of yellow flowers and pink buds. Of dark ginger chocolate from my coffee break.

My heart speaks too. It acknowledges a sense of togetherness. Of people who reach out to one another. Who hold hands across the divide of nations. I am reminded that in a crisis, humanity rises. These are times of extremes, and though some people hoard with fearful grasps, others spread their kindness wide.
https://medium.com/creative-humans/calm-acceptance-and-clouds-of-unease-7402f25962df
['Bridget Webber']
2020-03-19 18:00:57.827000+00:00
['Writing Prompts', 'Writing', 'Creative Humans', 'Creativity', 'Healing']
How Big Data is Helping in Big COVID-19 Pandemic Situation
Photo by Clay Banks on Unsplash

Health and technology are inseparable: technology makes healthcare more accessible, while significant breakthroughs in science and health are a core reason technology keeps improving. Today, as the world struggles with the novel COVID-19 (coronavirus) pandemic, the role of technology has never felt so important and game-changing, from bringing education to the comfort of a mobile device to the more complex processes of contact tracing for the virus.

New cases of COVID-19 continue to grow at alarming rates worldwide, with more than 28 million people infected and more than 905,000 dead so far. At the core of containing the spread of the pandemic is big data — the health data acquired from these cases — which has become a valuable source of information and knowledge, processed by governments and health organizations to improve their response to the pandemic.

What is big data and how is it helping?

Photo by Franki Chamaki on Unsplash

Big data refers to the advanced technologies used to store, process, and analyze amounts of information too vast for traditional software techniques. In the health sector, big data includes digitally stored patient data for the coronavirus. With the help of artificial intelligence (AI), it helps reveal patterns, trends, correlations, and discrepancies through computational analysis, and it may also reveal insights into how the virus spreads and how it can be controlled. All of this data is used for research and development on the virus, as well as for efforts to tackle the virus and its after-effects.

With comprehensive data capture, big data can be used to reduce the risk of transmitting the virus. Such a system stores data on all forms of COVID-19-affected cases (infected, recovered, and deceased), and this data can be used to classify cases and to assist in allocating resources for improved public health security. Several digital data modalities, including patient location, proximity, patient-reported travel, patient physiology, comorbidities, and current symptoms, can be digitized and used to produce actionable insights at both demographic and community levels.

Leveraging public datasets

A quick search for publicly available COVID-19 datasets turns up thousands of them, continuously updated and analyzed to improve the response of nations and the health industry to the pandemic. Here are some of the datasets you can access to understand the scope and reach of big data:

The COVID-19 Data Lake contains COVID-19-related datasets from various sources, covering testing and patient-outcome tracking data, social-distancing policy, hospital capacity, mobility, and more.

Bing COVID-19 Data includes confirmed, fatal, and recovered cases from all regions, updated daily.

The COVID Tracking Project dataset provides the latest numbers on tests, confirmed cases, hospitalizations, and patient outcomes from every US state and territory.
The European Centre for Disease Prevention and Control (ECDC) COVID-19 Cases dataset holds the latest available public data on the geographic distribution of COVID-19 cases worldwide; each entry contains the number of new cases reported per day, per country or region.

The Oxford COVID-19 Government Response Tracker (OxCGRT) dataset contains systematic information on which governments have taken which measures, and when.

The COVID-19 Open Research Dataset (CORD-19) is a resource of over 200,000 scholarly articles, including over 100,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses.

The coronavirus genome sequence: phylogenetic analysis of the complete viral genome (29,903 nucleotides) revealed that the virus was most closely related (89.1% nucleotide similarity) to a group of SARS-like coronaviruses (genus Betacoronavirus, subgenus Sarbecovirus) previously found in bats in China.

The Coronavirus (COVID-19) Tweets dataset contains the tweets of users who applied hashtags such as #coronavirus, #coronavirusoutbreak, #coronavirusPandemic, #covid19, and #covid_19; from about 17 March, it also included #epitwitter and #ihavecorona.

Beyond these, there are numerous country-wise time-series datasets for daily case reporting, dashboards built on top of them, and collections of COVID-19 data sources, spreadsheets, and other good resources curated by the Reddit community.

These publicly available COVID-19 datasets are a valuable resource for the public, doctors, other healthcare professionals, and researchers tracking the virus and analyzing the infection mechanism. Now let us look at how these datasets help the ongoing research and analysis of COVID-19.

Identification of infected cases

Publicly available datasets, such as the one provided by Microsoft, give information on infected cases by region. Not only does this big data store the complete medical history of patients, it also assists in identifying infected cases and conducting further risk-level analysis.

Travel history

One of the first identifying factors for infection is an individual's travel history. Look closely and you'll see that, from the moment you book a ticket, your data is stored with the airline, with mandatory government apps such as Aarogya Setu (to identify whether you're coming from a contaminated zone), and with taxi aggregators. Big datasets such as these store people's travel history for risk analysis and help identify individuals who may have been in contact with an infected person.
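As a concrete illustration of how the public case-count datasets listed above are typically explored, here is a minimal pandas sketch. It is not tied to any specific dataset; the file name and the column names (date, country, new_cases) are assumptions for the example.

import pandas as pd

# Hypothetical export of a daily case-count dataset (file and columns assumed).
df = pd.read_csv("covid_daily_cases.csv", parse_dates=["date"])

# Compute a 7-day rolling average of new cases for one country.
india = df[df["country"] == "India"].sort_values("date").copy()
india["new_cases_7d_avg"] = india["new_cases"].rolling(7).mean()

print(india[["date", "new_cases", "new_cases_7d_avg"]].tail())

Simple analyses like this rolling average are what feed the dashboards and early-warning indicators discussed in the sections that follow.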
Fever symptoms We all have some or the other app installed in our mobile phones or available natively that helps keep a record of our health. These apps, too, have big data as their backbone to help identify and record symptoms for possible illness. The associated datasets keep records of a patient’s fever and other symptoms and help determine when medical treatment is required. Identification at an early stage In a pandemic, time is of utmost importance. If accurate identification is done in time, it is possible to save the lives of millions. With big data, it has become possible for health authorities to move swiftly in identifying infected people at an early stage. For instance, if a patient logs in information about symptoms associated with COVID-19 in his or her doctor’s appointment app, it is possible for easier and swifter identification of the infection at an early stage. Big data also helps examine and classify individuals who could be infected with this virus in the future. When it comes to India’s fight against the pandemic, big data can be seen at play in a lot of places. For instance, enabled in even a few of your favorite food delivery apps such as Zomato and Swiggy, you can see the body temperature of the delivery person even before it reaches your doorstep with that large pizza you ordered. Meanwhile, the government-backed Aarogya Setu app helps in tracking the movement of the citizens. It also notifies the individual if they came in contact with an infected person and for how long. At the heart of all this is big data. Big data analytics; a tool towards healthier tomorrow Image by Gerd Altmann from Pixabay Big data analytics is capable of serving as a tool for COVID-19 monitoring, control, study, and prevention. It will diversify research and help improve vaccine development. With the assistance of data collection, China suppressed COVID-19 and enforced the process with AI leading to a low spread rate. There are many big data components to this pandemic where AI plays an important role in biomedical testing and mining the scientific literature required to help speed up the process of containing the spread. Access to public information has resulted in the creation of dashboards that track the virus continuously. Several entities use big data to create dashboards. Techniques for identifying faces and measuring infrared temperatures have been built in all leading cities. Here’s how China used big data in its seemingly Big Brother ways to contain the virus. Chinese AI companies such as Hanwang Technology and SenseTime claim to have developed a special face recognition technology that can accurately identify people even if they are wearing a mask. Smartphone applications are also used to keep a watch on the movements of citizens and to determine if they have been in touch with an infected person or not. Al Jazeera stated that China Mobile, a telecom provider, sent text messages to state-owned media agencies, informing them of those who were infected. The messages had all the information about the travel history of the citizens. CCTV cameras are placed at several locations to ensure that quarantined individuals do not step out. Big Data vs Privacy The basis of any data is information collection. More often, this data collection gives privacy advocates a sore eye over infringing the rights of citizens. However, it needs to be widely acknowledged and accepted that when it comes to the health industry, no data would equate to outbreaks bigger than COVID-19. 
Even as critics push back on data collection, big data is poised to play a crucial role in the coming years in analyzing global data on detected viruses, modeling disease, monitoring human activity, and visualizing the results. As more and more data piles up into massive datasets, data scientists will get a better shot at avoiding such outbreaks altogether. Meanwhile, publicly available datasets ensure enhanced transparency and accessibility for all stakeholders, including the very public they are meant to benefit.

If you are interested in similar content, follow me on Twitter and LinkedIn.
https://medium.com/the-innovation/how-big-data-is-helping-in-big-covid-19-pandemic-situation-abdc3633b0e4
['Anuj Syal']
2020-09-18 19:13:10.950000+00:00
['Covid 19', 'Big Data', 'Data Science', 'Coronavirus', 'Data Visualization']
The Gigantic Intergalactic Idea Alarm Clock
Photo credit: Milos Luzanin / maskins.com

The Gigantic Intergalactic Idea Alarm Clock

Are you hitting the snooze button on brilliant ideas?

For every Garbage Pail Kids there are twenty Trash Can Tykes. For every Pet Rock there are fifty Boulder Buddies. For every Candy Crush there are one hundred Sweets Smashers.

You've probably been sitting in front of the television when a commercial suddenly comes on pitching some product that makes you poke the person next to you while pointing at the screen and angrily shouting, "Hey! I thought of that idea two years ago!" It's a pretty common occurrence, and it happens to people every day.

I've always imagined there is a Gigantic Intergalactic Idea Alarm Clock with individual alarms set to go off for every single idea that will ever be thought of, and when the time for a particular idea to shine is approaching, the concept starts to pop up in the creative minds of people all over the world. If that first group of people doesn't do anything with the idea, then the creative clock triggers idea alarms in the minds of an ever-increasing number of people until somebody somewhere finally does something with the concept and brings the idea into reality.

So, if you came up with the idea for Collectible Disk Slammers twenty years ago and didn't do anything with it, don't get angry at Blossom Galbiso because she didn't hit the snooze button when the idea alarm went off in her head.

PS… The way the article started ("You've probably been sitting in front of the television…") goes a long way toward explaining why you chose to hit the snooze button (instead of choosing to take action) when the alarm went off in your head. The Gigantic Intergalactic Idea Alarm Clock does not "wait for the commercials to come on" before it moves on and triggers inspiration in someone else's head.

— Possessing creative powers beyond those of mere mortals, Don The Idea Guy is a Gitomer Certified Speaker helping businesses in need of customized and personalized seminars on sales, customer loyalty, personal development, and creativity. To book Don for your next event, go to www.GitomerCertified.com or contact the friendly folks at Buy Gitomer via email or by calling 704-333-1112.
https://medium.com/business-creativity/the-gigantic-intergalactic-idea-alarm-clock-35ab10b6f860
['Don The Idea Guy']
2018-08-28 19:09:59.904000+00:00
['Ideas', 'Money', 'Action', 'Motivation', 'Creativity']
Wholehearted & Upstanding: Danielle Bettmann
Danielle Bettmann is an Enthusiastic Coach, Mother & Leader Source: Wholeheartedly “Parenting is a journey.” It’s a phrase you’ll hear from Danielle Bettmann often, if you meet her. A graduate in early childhood education, Danielle Bettmann started out working at young child learning institutions, to conducting home visitation, and ultimately, she formed her own parenting coaching business, called Wholeheartedly. Danielle’s skills, emotional intelligence, and warm personality endear her to everyone she meets. She starts every meeting stressing the importance of early childhood years on a person’s growth. “The first seven years of a child’s life wires their subconscious,” Danielle said, “The people meeting their child’s needs are their parents. [My job is] to help parents meet those needs — understand their kids, their development, and set kids up for life success. It changes the trajectory for their whole life.” Danielle understand good mental health is key to better parents. She noted when she coaches parents, they often have to face internal blockages toward becoming the parent they both want to be, and the type their child needs. She said a parent’s largest obstacle in becoming the parent they envision themselves being is to, “Reckon with how they were parented. How I see my child is wired by how I was parented.” She said parents have to overwrite that wiring. The contemporary consensus on child development says empathetic, leading parenting is important for understanding. As Danielle explained, “A child is a separate person, with their own free will. They have their own challenges. It’s not our job to fully control them. Set them up for success. It’s hard for parents to understand, if they were raised in an obedience mindset.” Danielle works with parents to help let go of unhelpful mindsets and approaches, and develop ones which understand what a child’s behavior really means. She helps them learn how to connect with a child, and rebuild the parent themselves with new tools for the obstacles they face everyday. Danielle helps parents understand how to have humility. She empathizes with the feeling parents have that they are failing, and not living up to their ideal for parenting. She said, “You don’t know everything. A lot we want to read into…we think we’re failing.” The fear of failure inspired Danielle’s cathartic podcast, Failing Motherhood. Danielle features mothers from across the country, from different geographic, cultural and ethnic backgrounds, sharing their motherhood stories. She said, “I looked at a survey and saw what parents fear most often is failing: ‘I’m screwing up my kids. I can’t rein in my temper. I’m ruining everything. I’m all alone.’ I wanted to start a conversation around that idea and normalize how common it feels like you’re failing. [Parenting] is a journey that breaks you down. It creates in you a bigger, better person. It allows you to grow up alongside your kids. If we normalize, we know we’re not alone.” Danielle’s hope is that each parent she works with becomes a self-actualized, emotionally intelligent leader who is capable of connecting with their children. “Raising people is the most powerful form of leadership,” she said, “It’s a bizarre concept to many, to marry the business world, and the concept of best practices of leadership, to the home. To have a family business plan.” Danielle elaborated, “I want parents to think about and contemplate: what is a good boss and a bad boss? What did they do which made you work harder? Toxic vs. 
encouraging culture. Your kids are your employees. They are seeing you as having a leadership style. What is it? How are you kids interpreting it? How are they being encouraged? Are you shutting them down with frustration, criticism, and micromanaging?” She described parenting as the “linchpin,” the critical influence in children growing up as adjusted, healthy, and capable future leaders and human beings. She also said critical influence cannot work without a strong connection with children, and for the parents to be good leaders for them. Credit: Danielle Bettmann “It’s huge for parents to see themselves as leaders. Empower themselves to rise above power struggles. We don’t have to win this battle. Pick the right battles. It’s okay if they have this cookie, or don’t wear shoes to the grocery store. Focus on the right things, and they will be a good human being. They’ll still come home for Christmas. Place attention and focus on the right things,” she said. As a parenting coach, Danielle said it’s her duty to lead by example. She said, “I am trying to lead by example through my own self-development. I’m going first. I have to master something or understand a new concept. Take the steps first to turn around, encourage others to follow, and trust me in that process.” Finally, Danielle offered this support for parents out there, who feel like they aren’t up for the challenge, they are failing their children, or they aren’t good enough for them: “You are the one for the job. You are the parent your kid needs. You are a match for a reason. When you really do believe that, you can trust the process, trust the child, and trust yourself.” Danielle Bettmann’s podcast, Failing Motherhood, is on all podcasting outlets, while she can be inquired about parenting coaching at https://www.parentingwholeheartedly.com/ Danielle’s quotations come from an interview conducted by Jack Pryor on Oct. 1, 2020.
https://medium.com/the-innovation/wholehearted-upstanding-danielle-bettman-949b9907e8eb
['Jack Rainier Pryor']
2020-10-05 17:30:07.295000+00:00
['Children', 'Mental Health', 'Society', 'Family', 'Parenting']
Difference Between AWS and Azure
Difference Between AWS and Azure

Amazon Web Services (AWS) is Amazon's cloud services platform. It offers services in a variety of areas, including computing, storage, content delivery, and other capabilities that help businesses scale and grow. These offerings can be consumed as services for building and deploying different types of applications on the cloud platform, and they are designed to work together to produce scalable, efficient results. AWS offerings fall into three broad types: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Launched in 2006, AWS has become one of the leading cloud platforms available today. It offers several advantages, such as reduced management overhead and lower costs.

What is Azure?

Microsoft Azure is Microsoft's cloud services platform. It offers services across areas such as computing, storage, databases, networking, and developer tools, along with other capabilities that help organizations scale and grow their business. Azure services are likewise broadly classified as platform as a service (PaaS), software as a service (SaaS), and infrastructure as a service (IaaS), and can be used by developers and IT staff to create, deploy, and manage services through the cloud. Microsoft Azure was released in 2010 and has established itself as one of the largest commercial cloud service providers. It offers a wide range of integrated cloud services, including analytics, compute, networking, databases, storage, and mobile and web applications, designed to work together for efficiency and scalability.

The difference between AWS and Azure

Microsoft Azure and Amazon Web Services (AWS) are two of the top players in public cloud computing. Azure, Microsoft's public cloud platform, entered the race later, but since 2010 it has established itself as a major source of cloud services. AWS, as the name suggests, is Amazon's cloud computing platform and has led the cloud computing market for more than 10 years.

Offerings of AWS and Azure

AWS infrastructure services are categorized into compute, storage and content delivery network (CDN), database, and networking offerings that help businesses and individuals grow. It also offers a wide range of products, including cloud-based developer tools, Internet of Things services, and security and analytics offerings. Azure likewise offers services covering compute, data, applications, and networking.

Hybrid Cloud in Azure and AWS

A hybrid cloud is a mixture of public and private cloud that allows data and applications to be shared between them; it delivers the benefits of several deployment models in one integrated setup, which simply means managing private and public clouds as one. Microsoft Azure currently has an edge over AWS in hybrid cloud, making it easier for companies to run their applications on on-premises Azure Stack servers.

Pricing models for Azure and AWS

Amazon offers a flexible pricing model, so you only pay for what you use. This is utility-style billing: you pay for exactly the resources you consume.
For certain products, you can opt for reserved capacity instead of the on-demand model to achieve significant overall savings. In addition, there are volume discounts: the more you use, the less you pay per unit. Azure, on the other hand, is a bit less flexible when it comes to pricing models.

Conclusion — AWS vs Azure

Those are the main differences between Azure and AWS as cloud providers. I hope you now have a better understanding of the services offered by AWS and Azure and can select a cloud provider based on your needs. If you are looking for infrastructure as a service or the widest variety of services and tooling, you can opt for AWS. If you are looking for Windows integration or a strong platform-as-a-service (PaaS) offering, you can select Azure.
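As a small illustrative aside that is not part of the original comparison: both platforms expose their services through Python SDKs, and a minimal sketch of listing storage resources on each might look like the following. The Azure account URL and key are placeholders, and AWS credentials are assumed to be configured in the environment.

# AWS: list S3 buckets with boto3.
import boto3

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Azure: list blob containers with the azure-storage-blob SDK.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<your-account>.blob.core.windows.net",  # placeholder
    credential="<your-account-key>",  # placeholder
)
for container in service.list_containers():
    print(container.name)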
https://medium.com/besant-technologies/difference-between-aws-and-azure-2c66e22366ad
['Cristy Blossom']
2019-04-26 12:53:49.862000+00:00
['Cloud Computing', 'AWS', 'Training', 'Azure']
What I learned from Damon Dash
In 2003, I interviewed Damon Dash for Glamour magazine. At the time, he was Jay Z's manager and business partner, running a successful music label, Roc-A-Fella, and one of the first big streetwear brands, Rocawear. He'd arrived in London to launch a high-profile collaboration with Victoria Beckham — her first venture into fashion. At the same time, he was juggling a dizzying array of other projects: films, a new nightclub, branding on all kinds of products.

I'm not holding him up as a particularly great role model here. After acrimoniously splitting with Jay Z in 2004, Dash has had a somewhat chequered career, his successes marred by conflict and court cases. But there was something he said, during our meeting, that has stuck with me ever since.

I was asking about his exhausting schedule, about what drove him. And he talked about how you behave when you have an early morning flight. You don't lie in bed once your alarm has gone off. You don't procrastinate, or get distracted. Instead you get up, you get ready, you get out and you go straight to the airport. Because you don't want to miss that flight.

"To me everything is like catching a plane," Dash explained. "I'm trying to get up and catch a cheque. So that's how I think — if there's money out there, I get up. I can't even sleep if I know there's money sitting on the table."

For you, money might not be the main driver. It might be about recognition, fame, being the best at what you do. Perhaps you want to change the world, tell your truths. To make something beautiful or useful. Or just make people happy with your work, entertaining them or evoking an emotional response.

But without a deadline, a sense of urgency, it's easy to put off the things we really want to do. To convince ourselves we'll start tomorrow, next month, next year. Or some vague point in the future when we'll magically have more time, more space, more inspiration.

Whatever motivates you, it's worth considering what you might achieve if you got up every day, and dived right into life and work as if you had a plane to catch. Or a cheque. If you got straight to your work without hesitation, without debate, without procrastination, what could you create?
https://medium.com/creative-living/what-i-learned-from-damon-dash-a04d09bc9473
['Sheryl Garratt']
2020-11-17 14:06:49.890000+00:00
['Business Strategy', 'Focus', 'Creativity', 'Money', 'Entrepreneurship']
Prior Common Cold: A Protective Factor Against Covid-19?
Prior Common Cold: A Protective Factor Against Covid-19?

When T-cells learn from other coronaviruses and outsmart the novel SARS-CoV-2 — cross-immunity.

One reason why Covid-19 is so lethal in some people is that the immune system has never encountered the virus before. So the immune system mounts all sorts of reactions, some of which might be unsafe. A July study in Nature, titled "Longitudinal analyses reveal immunological misfiring in severe COVID-19," showed that the immune system of Covid-19 patients mounted anti-fungal and anti-parasitic reactions. It seems as if the immune system is doing trial and error against the novel coronavirus. Such non-specific immune reactions, however, are not only ineffective but also take up immunological resources and encourage unnecessary inflammation.

"It seems completely random," Akiko Iwasaki, a professor in the Departments of Immunobiology and Molecular, Cellular, and Developmental Biology at Yale University who directed the study, told The Atlantic. "The immune system almost seems confused as to what it's supposed to be making."

What if the immune system already knows which immune cells to deploy? If that were the case, immune reactions would be more specific. Unwanted inflammation and multi-organ damage would happen less often. And the host would stand a good chance of overcoming Covid-19.

Five different cohort studies from the Netherlands, the US, the UK, Germany, and Singapore reported that 20–60% of individuals who had never faced Covid-19 had T-cells that reacted against the SARS-CoV-2 spike and other important proteins, albeit to a lesser extent than those of recovered Covid-19 patients. Still, this shows that the T-cell profiles of people unexposed and exposed to Covid-19 can overlap. It seems that in these subsets of people naive to Covid-19, the immune system already knows what to do. How? All five research groups suggest that cross-immunity from prior exposure to human coronaviruses (HCoVs) — such as HCoV-OC43, HCoV-HKU1, HCoV-NL63, or HCoV-229E — that cause the common cold might be the reason.

Does cross-immunity of T-cells really exist?

In a paper published in Science in August, academics at the La Jolla Institute for Immunology in California sought to find out whether prior common cold infection provides cross-immunity against Covid-19. Daniela Weiskopf, an assistant professor who has spent 11 years researching T-cells, and Alessandro Sette, a professor with 35 years of immunology research experience, led the study.

In this study, the immunologists isolated immune cells from 88 donors who had never had SARS-CoV-2. They mixed those immune cells with 142 distinct SARS-CoV-2 proteins — 66 from the spike and 76 other relevant proteins — and monitored the immune reactions over two weeks. Results showed that each donor recognized an average of 11 SARS-CoV-2 proteins (range of 1–33; median of 6.5), and 54% of donors recognized parts of the SARS-CoV-2 spike protein, the key immunogenic and virulence factor.

Next, using bioinformatics, the researchers showed that the SARS-CoV-2 proteins recognized by T-cells shared substantial sequence similarity with proteins of human coronaviruses (HCoVs).
Accordingly, nearly all donors had antibodies against the common cold-causing HCoVs. To further confirm cross-immunity, the researchers constructed HCoV proteins and repeated what they had done with the SARS-CoV-2 proteins. Indeed, the donors' T-cells reacted against the HCoV proteins too, as they did with the SARS-CoV-2 proteins. To validate cross-immunity once more, the immunologists exposed cultured T-cells to HCoV proteins. About one-quarter (10/42) of these 'trained' T-cell cultures then successfully displayed cross-immunity against SARS-CoV-2 proteins. In sum, this exhaustive study used several research methods to prove a point: that coronavirus cross-immunity in T-cells exists.

When T-cells outsmarted SARS-CoV-2

"We have now proven that, in some people, pre-existing T cell memory against common cold coronaviruses can cross-recognize SARS-CoV-2, down to the exact molecular structures," Professor Weiskopf said in a news release. "This could help explain why some people show milder symptoms of the disease while others get severely sick."

By committing HCoV protein structures to memory, T-cells can outsmart SARS-CoV-2 even though it is entirely novel to the immune system. Such cross-immunity boils down to an evolutionary concept: natural selection builds on existing biological entities. Thus, some parts of SARS-CoV-2 are still similar to previous coronaviruses, and it is these parts that confer cross-immunity.

"Having a strong T cell response, or a better T cell response may give you the opportunity to mount a much quicker and stronger response," Professor Sette added. "We knew there was pre-existing reactivity, and this study provides very strong direct molecular evidence that memory T cells can 'see' sequences that are very similar between common cold coronaviruses and SARS-CoV-2."

Why T-cells?

T-cells and B-cells comprise the adaptive immune system, which takes time to prepare for a specific pathogen. After fighting an infection, T-cells and B-cells also keep a 'memory' of that pathogen, so that they can activate more quickly the next time they see it. In contrast, the innate immune system — such as neutrophils, macrophages, antimicrobial peptides, mucus, the skin barrier, etc. — is fast and responds the same way to all pathogens. Think of innate immunity as frontline soldiers and adaptive immunity as snipers, suggests Bo Stapler, MD. Ultimately, an efficient adaptive immune response is needed to overcome infections the innate immunity cannot handle. And T-cells have a final say on that.

T-cells come in two types: (1) cytotoxic CD8+ T-cells that kill infected cells, and (2) helper CD4+ T-cells that help B-cells produce antibodies and help other immune cells do their jobs better. Cripple the helper T-cells, as HIV does, and the whole immune system functions less effectively. Hence, it is not an issue if antibodies wane, as they usually do (perhaps so as not to expend unnecessary resources), as long as T-cells retain their memory, which they did even from prior common HCoV infections. Accordingly, the age-related shrinkage of the thymus gland — where T-cells mature — may be one reason why old age alone is a major risk factor for severe Covid-19.

Short Abstract

Many cohort studies found that people who had never encountered Covid-19 had T-cells that reacted against SARS-CoV-2 proteins.
Researchers thus suggest that cross-immunity might be at play: prior exposure to human coronaviruses (HCoVs) that cause the common cold could prime T-cells to mount a more effective immune response to SARS-CoV-2. A recent, meticulous study has now confirmed that this cross-immunity exists in T-cells. So, in certain cases, T-cells outsmart an enemy they have never seen by learning from its predecessors. This helps prevent the immune system from mounting nonspecific responses (e.g., anti-fungal or anti-parasitic reactions) and unnecessary inflammation.
https://medium.com/microbial-instincts/when-t-cells-outsmart-the-novel-coronavirus-cross-immunity-df711261ea8c
['Shin Jie Yong']
2020-08-12 13:29:22.218000+00:00
['Technology', 'Innovation', 'Health', 'Covid 19', 'Science']
Smarter robots, at your service
Teaching robots to understand where they are and how they can interact with their environment is the key to unlocking the next generation of automated service systems. Dr Niko Sünderhauf has been awarded a competitive and prestigious Amazon Research Award for 2019, receiving $120,000 in funding to investigate robotic models for navigation, manipulation and interaction. Sünderhauf is a leader for the visual learning and understanding program at the QUT Centre for Robotics. His research focuses on how robots can perceive the world around them, understand the objects they see, and use that information to complete tasks. Image: miriam-doerr/Getty Sophisticated semantics The new project, funded by the Amazon Research Award, will build on existing methods of robotic vision and navigation to make them more sophisticated and capable. “Current robotic algorithms use camera images to create a map of their environment, but this only supports a very simplistic understanding of objects in the environment,” Sünderhauf said. “Robots today can get a good understanding of where things are around them, but not what these objects are, and what the robot — or humans — could do with them. This simplified model of the world isn’t sophisticated enough to inform higher levels of learning, often making it impossible for a robot to learn complex tasks.” Sünderhauf is investigating the possibilities of object-oriented semantic mapping to develop more meaningful maps of environments, which robots can navigate and interact within. “Object-oriented semantic mapping means that robots have learned to detect and distinguish different objects, and can create a map of all the objects in an environment, each with its own location, type, and functionality,” Sünderhauf explained. While the robot still uses the camera to see, its interpretation of the environment is based on a deeper understanding of objects. “Rather than using camera data to recognise visually what a singular fridge in a particular environment looks like, the robot will create a deeper understanding of what a fridge is, how it acts, what’s kept inside it, and how to open the door — things like that. When it moves into new environments, it will be able apply that knowledge to unknown settings.” Good robot! Once the robot begins to formulate ideas of semantic meaning for objects within its environment, researchers have to devise a way to help the robot learn appropriate interactions with those objects. Optimising a robot’s interactions is managed by rewarding correct behaviour, using a process called Deep Reinforcement Learning. “It’s probably not that different from training a dog,” Sünderhauf said. “We’re providing a reward signal when the robot exhibits correct behaviour and interactions in the environment. The robot is programmed to optimise its behaviour to seek as many reward signals as it can, so that reinforces correct actions. “Investigating the limitations and strengths of Deep Reinforcement Learning in this context is a key focus of our project.” Dr Niko Sünderhauf in the QUT Centre for Robotics. Image: SEF Communications Mapping environments Sünderhauf and his team are creating rich, descriptive maps of environments called graph-based maps to help the robots interpret their surroundings. Unlike traditional maps with roads and landmarks, these maps are made up of nodes and links, coming together to describe an environment and its contents. “Nodes in the graph represent objects, like a sink, a pen, or a knife. 
Other nodes represent locations in the environment the robot can navigate to. The links between nodes represent potential interactions, like how a robot could reach them from certain positions within the environment,” Sünderhauf explained. These graph-based maps are the key to robots being able to interact more robustly in an environment. “When the model of the environment is more detailed and more meaningful, the robot can learn better, act faster and function more efficiently and successfully within the environment as its understanding of that environment grows,” Sünderhauf said. While the team are preparing the maps and programming them into the robotic models initially, robots would be able to use simultaneous localisation and mapping (SLAM) techniques to build these maps themselves in the future. “SLAM is when a robot builds a map inside its ‘brain’ as it moves within an environment, while also localising itself with the help of the still incomplete map,” Sünderhauf explained. “The robot can use the map to keep track of where objects are and how to move around them as it goes.” Robo-butlers of the future Sünderhauf’s project is focused on robots in domestic settings, completing tasks like helping clean the house, finding lost objects, or fetching a delivery from the front door. Image: AndreyPopov/Getty “Beyond that, it can have a huge impact in specialised care robots, like those working in hospitality, assisted living or aged care. It’ll be really useful for logistical robots, like delivery robots that have to interact with partially unknown environments to complete their task, including navigating gates and doors.” It could also mean that robots who interact with humans in logistics or manufacturing will perform better, as they will be faster to train, better at understanding different appearances, and ultimately safer. The key challenge for Sünderhauf and his team will be taking the robot from simulated test environments to unknown test environments. At the moment, they’re testing their robotic models in a simulation platform. “It’s quite a rich simulation environment. The robot can do things like open a fridge, take a loaf of bread out, slice it, and put the bread in a toaster,” Sünderhauf said. “Simulation is great for our research because it allows robots to do things that are currently hard for them to do in the real world, like articulation and grasping.” Other research teams at QUT are working on projects dedicated to grasping arms and robotic manipulation, and Sünderhauf is excited for the opportunity to develop his project in parallel. Where to from here? Over the course of this one-year project, Sünderhauf and his team will build on the proof of concept that won them the Amazon Research Award. “Our proof of concept demonstrated that the robot could find objects, like house keys or a television remote, in a simulated environment, using this graph-based reinforcement learning. “Our initial investigations worked across both training and unseen environments, so we know that what we want to do is possible.” Sünderhauf is enthusiastic about the opportunity to develop key fundamental research in robotics. “We’re looking at creating a whole new generation of robotic learning. We’re very excited to see where this takes us,” he said. More information Explore more research at QUT’s Science and Engineering Faculty. Find out more about the QUT Centre for Robotics. Keep up to date with the project, Learning Robotic Navigation and Interaction from Object-based Semantic Maps. 
Visit Dr Niko Sünderhauf’s website.
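To make the idea of an object-oriented semantic map a little more concrete, here is a small illustrative sketch in Python (this is not the QUT team's code; the class names, objects, and affordances are invented for the example) showing objects and locations as nodes and possible interactions as links:

```python
# Toy graph-based semantic map: nodes are objects or navigable locations,
# edges describe how a robot could move between or interact with them.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str                                         # e.g. "fridge" or "kitchen_doorway"
    kind: str                                         # "object" or "location"
    affordances: list = field(default_factory=list)   # e.g. ["open", "close"]

@dataclass
class SemanticMap:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)         # (src, dst, interaction)

    def add(self, node: Node) -> None:
        self.nodes[node.name] = node

    def connect(self, src: str, dst: str, interaction: str) -> None:
        self.edges.append((src, dst, interaction))

kitchen = SemanticMap()
kitchen.add(Node("kitchen_doorway", "location"))
kitchen.add(Node("fridge", "object", ["open", "close", "store_food"]))
kitchen.connect("kitchen_doorway", "fridge", "navigate_to")
kitchen.connect("fridge", "fridge", "open")

# Everything the robot could do that ends at the fridge:
print([edge for edge in kitchen.edges if edge[1] == "fridge"])
```

A reinforcement learner can then treat paths through such a graph as candidate action sequences, with the reward signal described above reinforcing the ones that complete the task.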
https://medium.com/thelabs/smarter-robots-at-your-service-18e0d84c6541
['Qut Science']
2020-08-21 05:43:34.413000+00:00
['Robotics', 'Science', 'AI', 'Technology']
This is How I Made Over 5k My First Month on Medium
Vape pens filled with DMT. Those are money makers. Writing? Ha. This is a hobby. My first month on Medium I wrote a few stories while waiting for clients in Safeway parking lots. Thirty years ago maybe you could have made a few bucks on writing. Nobody wants to pay attention for longer than three minutes anymore. Here’s how to make money online. Grease the floors and get the camera ready for grandma. Dub in some 80’s song and jog the video back and forth. Then Tik-Tok your naan popping a hip. Then, racks of cash. Then, mad bitches. Wii!
https://medium.com/title-and-picture-gag/this-is-how-i-made-over-5k-my-first-month-on-medium-50d5a05c5220
['Hogan Torah']
2020-11-30 17:28:28.255000+00:00
['Humor', 'Writing', 'Music', 'Gag', 'Satire']
Convolution: the revolutionary innovation that took the AI world by storm
There's an old saying in AI that computers find things easy that humans find hard (like doing complex math) and computers find things hard that humans find easy (like catching a ball or recognizing objects in an image). Let's take recognizing images as an example. Check out the following collection of images: This is a sample from the ImageNet Large Scale Visual Recognition Challenge. The challenge involves a collection of 1.2 million images that need to be classified into 1000 categories. Looks like an easy task, right? There are images with birds, dogs, people, cars, geometric shapes, and so on. Piece of cake! Don't get too comfortable though. The categories include distinct breeds of animals. Would you be able to spot a Border Collie in the images? I know I wouldn't. I'm terrible at telling dog breeds apart.

Humans on average make about 5 mistakes for every 100 images, which corresponds to a Classification Error of 5%. Computers used to do much worse. The best a computer could do in 2011 was a horrible 25%. That's one error in every four images! But then something interesting happened. In 2012, a team used a neural network and won the challenge with a classification error of only 15%, way better than any competing team. This proved that neural networks could outperform more traditional machine learning algorithms. People took notice, and in the following year everybody was using neural networks. Then in 2014, something amazing happened. A team from the Oxford Visual Geometry Group stunned the competition with an astounding score of 7.4%. This was mind-blowing. The algorithm, called VGG16, performed only about 2 percentage points worse than a human. Computers actually got _this_ close to beating humans at image recognition.

How did Oxford pull this off? They built their network around a concept called Convolution. Convolution is well-known in mathematics, but it had rarely been exploited this aggressively in machine learning before. When applied to a neural network, a convolution layer makes the network 'translation-equivariant'. Here's what that means… Older neural networks were notorious for only detecting objects in the exact same location where they had been trained. Take the following images for example: A neural network trained on the middle image would be unable to recognize the statue in the two other images, because it's not exactly in the center of the image. To be able to recognize 1000 image categories, these older networks needed massive numbers of network nodes to learn each image individually. And because computing demands grow rapidly as networks get larger, there simply wasn't enough CPU power to run a network large enough. Convolution solves all that. A convolution layer allows the network to learn features of the statue independently of its location in the image. The network now needs far fewer network nodes to learn to recognize the statue.

The Oxford team went all out with this concept. Here is their network architecture: Each slab in the diagram is a single convolution layer in the VGG16 network. Oxford stacked 13 of them on top of each other! Oxford's result proved that Convolution was the way forward. Everybody started using it, and one year later the winning ImageNet algorithm scored 4.8% and beat humans for the first time. It's no exaggeration to say that Convolution has revolutionized the field of Machine Learning. But you know what the good news is? The VGG16 algorithm is open source and available for public use. It's very popular and built into many machine learning libraries.
The network weighs in at around 500MB and can easily run on a modern computer. A very popular trick to get an image recognizer up and running quickly is to take the pre-trained VGG16 network, chop off the classifier at the end (the blue and brown layers in the diagram), and replace it with your own. This trick is called Feature Extraction, and you can use it to build a reliable image detector for any object. For example, you could train your network on images of car parking spaces, and then build an automatic parking space finder: Did I inspire you to build an image detector of your own? Leave a comment and let me know!
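If you want to try the Feature Extraction trick yourself, here is a minimal sketch using TensorFlow/Keras (the framework choice, the two-class parking-space task, and the small classification head are assumptions for illustration, not part of the original article):

```python
# Minimal feature-extraction sketch: reuse VGG16's pretrained convolution
# layers and train only a small new classifier on top.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # e.g. parking space free vs. occupied
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=5)  # supply your own labelled images
```

Because only the small head is trained, a modest set of labelled images and an ordinary CPU are often enough to get a usable detector.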
https://medium.com/machinelearningadvantage/the-revolutionary-innovation-that-took-the-ai-world-by-storm-e078e9a0053f
['Mark Farragher']
2019-04-02 16:42:37.814000+00:00
['Machine Learning', 'Deep Learning', 'Computer Vision', 'Artificial Intelligence']
In 2021, Don’t Deprive Yourself
In the Before Times, I wrote a column called Optimize Me, about the most bizarre things people did to their bodies in the name of health. Occasionally, these “wellness hacks” were scientifically backed, but more often they were misguided and at times concerning. Some of them were efficiency tricks to get more time out of the day — using electric shocks to fast-track a workout or accelerated listening settings to consume more content. Others involved injecting substances into people’s bloodstreams and, ahem, backdoors. Looking back on these articles now, after a year of hardship and suffering, some of these optimization trends, particularly the ones focused on deprivation, seem downright perverse. Why would you ever intentionally deny yourself the fleeting moments of joy life offers you? (And by the way, that’s not how dopamine works.) Why would you commit to an all-meat diet and starve yourself of fruit and vegetables, the source of so many essential vitamins and minerals needed to maintain optimal immune health? And what could possibly inspire anyone to stop drinking water, a literal life source? Life is hard enough as it is. In 2021, go easy on yourself and indulge a little bit.
https://elemental.medium.com/in-2021-dont-deprive-yourself-6f09021d300d
['Dana G Smith']
2020-12-30 06:32:39.989000+00:00
['Health', 'Happiness', 'Body', 'Life', 'Science']
Can You Autorun RxJS?
It’s a sweetened countdown Our first example! A simple mapping Say, we want to prettify each value on a timer stream. So we’ll write such an expression: Explanation: computed takes a function that uses some streams and re-evaluates it when those streams update. It returns an observable that you can manipulate further. And $(a) indicates that a is a stream and its updates should be listened to. So technically this expression is equivalent to a.pipe( map(x => x + '🍰') ) But let’s keep discovering what else this tiny lib can do:
https://medium.com/swlh/can-you-autorun-rxjs-246822f8b757
['Kostia Palchyk']
2020-11-23 15:07:43.749000+00:00
['Typescript', 'Angular', 'JavaScript', 'React', 'Rxjs']
Is M1 Mac Worthy or Good for Developers? [Developer Review]
I decided to purchase the new Apple M1 MacBook Pro (2020) for development purposes. My current Macs work great, but I was sold on the long battery life and the claimed power of the M1 chip. Here is why I love it and why I am hopeful.

What Kinda Developer Am I? I am a Software & Web Developer and I work in an R&D department, so I am constantly researching and experimenting either for work or to create content. I use IntelliJ, WebStorm, PhpStorm, and Xcode depending on what I am coding. I also have a coding YouTube channel, for which I use Adobe Premiere, Adobe Audition, Adobe After Effects, Adobe XD, Photoshop, and Illustrator to edit and prepare the content I post.

Why I Bought It I own a 2019 iMac which I mainly use for audio, video, and text editing, for its power and the convenience of always sitting on a nice desk. I also own 2019 and 2015 MacBook Pros which I use for coding and research. My iMac fan goes out of control when I am exporting video or converting a lot of video files to mp4. My MacBooks' fans scream when I am compiling a lot of code, and sometimes I have to run many dev apps at once to test multiple services. Sometimes I get an "out of memory" warning when I have a lot of things going on at once. The issue is, they are almost always running hot, the battery doesn't last long, and I feel like I need something better. They are terrible for traveling, or for occasionally pulling out in a cafe or train station to code real quick, and that's why I bought an iPad to code simple ideas on the go.

Battery Life After the first day, I was blown away. I literally coded 8 hours straight like I normally would, without plugging the laptop in. The battery life is mind-blowing. This is probably the most attractive thing about these new M1 laptops.

The Fan My coding time became unusually quiet. I purposely tried many things at once, and it continued to be quiet. It got a little warm but nothing like my other laptops. It remained quiet even when I was exporting or editing my YouTube videos, not to mention that it exported the videos much faster than my iMac.

The Camera & Ports The camera sucks. The reason that it does not bother me is that whenever I need video calls I use my iMac or iPad. I also have a GoPro camera I can attach for better video quality just in case. Honestly, I don't care about the camera anyway. Two ports do not bother me either. I use adapters that give me more ports for when I need them and, honestly, I rarely plug in anything besides the charger and my iPhone. I prefer not to plug in anything besides my iPhone, but I understand how this may not work for others.

Coding-related Programs When I first launched IntelliJ and WebStorm I noticed a little lag, but a couple of minutes in, they started to feel normal. I assume that is because it was the first launch and these apps are not yet optimized for this chip. I somehow felt that IntelliJ and WebStorm file indexing was a little slow at times. Chromium, Edge, and Firefox all took about 3–8 seconds to launch for the first time. After the first launch, they took about 1–2 seconds. Overall, I felt they were slow sometimes. Safari is super fast, which gives me an idea of how browsers can be once adapted for the M1. All native Apple apps work great, and so far none of my coding-related software has stopped me from doing anything I need. I did not have luck with Docker or any VM though; that's a huge one for me, but I know the Docker team is excited about the M1. Check this video for test details by DevChannel on Docker and Android Studio.
Anything that relies on a VM cannot run right now, and for me only Docker limits my workflow. Xcode, like any Apple software, works even better. Apps build faster and the experience is just fantastic. Perhaps this is where I felt the biggest improvements when compiling and building code.

Code Compilation Overall, the coding experience is the same or slightly better. Code compilation and build times were quicker, which helps a lot, but it was not 10 times faster. It was maybe 1.5x–2x faster for the big apps I tested with.

Adobe Programs With Adobe XD, some old projects would not open, or would not open correctly, and gave me errors. For projects I started on the M1 MacBook, everything worked smoothly with no changes at all. I was able to open and edit most of my old projects, but it failed for a couple of the biggest ones. Adobe has yet to update these programs to take advantage of the M1 chip, so no biggie. Adobe Premiere took longer to launch for me, but once it did, everything worked fine. I got errors with Media Encoder on some files. After Effects failed to open one of my projects correctly, but I am still hoping I can open those projects eventually.

The Keyboard The typing experience is great. I really like this keyboard, which I find to be way better than all the MacBook and iMac keyboards I own. The keys are not so flat that you feel like you are typing on a flat screen, and the travel is just right, at least for me. I can feel the keys moving and the response is just fantastic. I also like the noise it makes when I type, which is an important detail for me. I am not a Touch Bar fan, but I enjoy the options it gives me in IntelliJ to quickly run code or create commits. It can be fun for design programs, and it is something you need to remember to incorporate into your interaction with the computer.

The Bugs I experienced bugs, unfortunately. It is hard to tell whether they were caused by the software or the hardware, but because I never experienced them on my other computers running the same OS, I am inclined to blame the MacBook. The laptop sometimes went to sleep unexpectedly, forcing me to open and close the lid repeatedly to get it back. Chrome and Edge exited unexpectedly on me a couple of times, once while I was writing this article. Everything else has been related to programs launching slowly, doing some things slower than usual, or occasionally failing to open projects and documents.

MacBook Air or Pro? I went for the Pro, even though internally they seem to be the same. I would say go for the Air, as it is amazingly cheaper, if price is a concern. I love the size of both; the Air is even lighter, and I can just pull it out anywhere and start coding.

The Transition The transition was definitely one of the best. I simply set up this new laptop from my old one's backup, and everything felt like my old laptop in a new system. This goes down to configuration files, scripts, installed packages and languages, etc. It pretty much just worked!

Conclusion I still keep my other laptop for professional work since its environment is set up for the type of work I do, and I would hate for a new laptop to mess that up. If you are trying to replace your current professional laptop, don't do it. It is too early for that, and the 16-inch option is not yet available. If you are in the market for a small, powerful laptop to start coding, or you don't mind the risk of some things not working 100% for a little while, please join me in this experience. The battery and power pretty much make up for it.
With time, the applications will be built for this new chip and things will improve but overall I haven’t been affected in a big way mostly because I have backup machines and I am in love with the battery, fan, and power.
https://beforesemicolon.medium.com/is-m1-mac-worthy-or-good-for-developers-developer-review-3ed832f4105e
['Before Semicolon']
2020-11-26 13:16:57.945000+00:00
['Technology', 'Design', 'Apple', 'M1', 'Software Development']
The Amazon of Gene Therapy
The Amazon of Gene Therapy How scientists used genetic engineering and ML to bring a virus back from the dead as a better delivery method for gene therapies Molecular structure of the Anc80 virus. [Figure 2C, Zinn et al. 2015] Gene therapy is a powerful tool to treat diseases from cancer to deafness, but it requires safe and effective delivery to the correct cells in the body. This post is about the unique genesis of Anc80, a new viral delivery system. Gene therapy has two parts: 1) the genetic package, to fix a disease-causing error, and 2) the delivery system, to deliver the fix to its intended location. Existing delivery systems cannot reach all cells, limiting gene therapy's utility. Anc80 has proven 1000X more effective in targeting difficult-to-reach cells in mice, and the first human trials start this year. If successful, Anc80 will enable drugs for many currently-untreatable diseases.

Delivering Gene Therapy: A Very Brief History (1985–2015) Delivering genetic packages is hard, but nature has given us a bespoke cellular infiltration system: the adenovirus. Adenoviruses normally cause respiratory infections, but scientists in the 1980s began hijacking the infectious machinery to deliver genetic packages into cells without harmful virality. There's one big problem: humans have evolved a sophisticated immune defense against adenoviruses. In 1999, scientists tried to treat 19-year-old Jesse Gelsinger for a liver disorder using an adenovirus-delivered gene therapy. Jesse wasn't the first to receive gene therapy via adenovirus, but he was the first to suffer an overwhelming immune response. He died within four days. Scientists began searching for alternative viruses that could deliver gene therapies without lethal immune reactions. Adeno-associated viruses (AAVs) looked promising: AAVs don't cause disease, so we haven't developed immunity against them. AAVs became the postman for many gene therapies, including the first FDA approval in 2017: Luxturna, an $850K drug to treat infant blindness. Roche is now buying Luxturna's developer Spark Therapeutics for $4.8B. Despite advantages over adenoviruses, AAVs still struggle to reach certain cell types and trigger immunity at high doses or in certain contexts, restricting gene therapy to a minority of potential uses. Scientists have tried to reengineer AAVs, but enhancing one aspect (better immunity) often introduces new limitations (worse delivery). It was time for a new approach.

The Lazarus Effect: Inventing Anc80 (2015) In 2015, the scientist Eric Zinn, along with colleagues from Harvard Medical School and the Massachusetts Eye & Ear Infirmary, published a breakthrough. Rather than modifying current AAVs, Zinn et al. explored the evolutionary history of the viral family, looking for common ancestors that no longer exist. The big idea: earlier AAVs may have once had properties that were since lost during evolution because they didn't help the virus at the time, but which could now be beneficial in the context of viral gene therapy delivery. Zinn et al. identified Anc80, an ancestral AAV that they hypothesized could have enhanced properties. As the DNA sequence of Anc80 was lost to time, the team looked at comparable sequences from its descendants. Then, using genetic engineering and in silico reconstruction, Zinn et al. recreated 776 possible variants of the ancient Anc80 virus. The team then tested the ability of each Anc80 variant to infect cells. On the 65th variant, Zinn et al. hit the jackpot: Anc80L65.
The 65th iteration of Anc80 outperformed existing AAVs in the delivery of a genetic package to liver, muscle and retinal cells in mice. It also displayed higher heat tolerance and improved molecular stability, important for delivering gene therapies in the tumultuous innards of a living organism. Green is Good: green shows the amount of a gene that reached its target when delivered to liver (top row), muscle (middle row) and retina (bottom row) using Anc80 (right column) versus AAV2 (left column) and AAV8 (middle column). Anc80 is better because more of the gene reached its target. [Figure 4A, Zinn et al. 2015] Anc80 proved safe and effective in monkeys. With significant differences from all of its descendant AAVs, Anc80 also didn’t stimulate immune reactions, because modern organisms haven’t encountered this virus for generations. Zinn et al. went full Lazarus on Anc80 — and it worked. Preclinical R&D: Delivering Results (2015–2019) Zinn’s colleagues began testing Anc80 against genetic diseases, starting with challenging disorders of the inner ear. Five studies (Landegger et al. 2017, Tao et al. 2018, Yoshimura et al. 2018, Duarte et al. 2018 and Gu et al. 2019) proved Anc80 could deliver genes to ear cells that existing AAVs cannot effectively reach, including cells that lead to debilitating disorders when mutated. One such disease — Usher syndrome — causes deafness, blindness and loss of balance. As AAVs don’t hit the responsible cells, over 15K people in the US alone have no good treatment. Pan et al. 2017 showed that because Anc80 can reach those requisite ear cells, Anc80 could deliver a gene to correct Usher syndrome. Tested in mice, Anc80 proved 1,000X more effective than comparable delivery with current AAVs. Suzuki et al. 2017 demonstrated that Anc80 could even fix genes in ears that were structurally damaged — a common real-world scenario that has stymied past gene therapy attempts — further validating Anc80. Additional studies showed Anc80 can target other remote cell types: in the anterior eye (Wang et al. 2017), retina (Carvalho et al. 2018), kidney (Ikeda et al. 2018) and CNS (Hudry et al. 2018), establishing uses beyond the ear. The catch: So far, all of the data is in mice or monkeys—not in humans. Towards the Clinic: What’s Next? (2020+) Biotech companies have begun to develop Anc80 for human patients. Akouos is a Boston-based gene therapy startup focused on ear diseases and co-founded by an Anc80 inventor. The company has licensed exclusive rights from the Mass Eye & Ear Infirmary to use Anc80 to treat hearing loss. Akouos launched in 2017, then raised $50M from top investors including RA Capital and the venture arm of Novartis, which acquired AveXis and its spinal muscular atrophy gene therapy for $8.7B. Akouos manufacturing partner Lonza has separately sponsored several Anc80 studies. In 2016, Lonza licensed rights from the Mass Eye & Ear Infirmary to manufacture Anc80 for each therapeutics company like Akouos. While Akouos takes Anc80 to the clinic for ear diseases, its scientific founder is guiding GenSight, which has added Anc80 to its gene therapies for inherited eyesight loss. In parallel, the non-profit Odylia Therapeutics is using Anc80 for ultra-rare eye diseases. Another startup, Vivet Therapeutics, raised $41M in 2017 (also partly from Novartis) and licensed rights to Anc80 for metabolic diseases. The later-stage company Selecta has licensed Anc80 for a combo treatment with its improved version of rapamycin for immunomodulation. 
Conclusion: More to Come From its unique genesis to strong preclinical data, Anc80 is a useful case study in the discovery and development of novel science. Like all new biomedical tech, Anc80 is risky. A quirk of human biology could lead to harmful effects not seen in earlier studies—a common problem in translational biomedicine. Or it may simply not work in people at all. Other emerging delivery tools like nanoparticles could also outcompete Anc80. Despite these challenges, I’m bullish on Anc80 becoming a leader in the field of gene therapy delivery. I hope you’ve enjoyed learning Anc80’s provenance and potential. Feel free to email me if you have questions, comments or other exciting biotech to discuss. —NBH
https://nbhorwitz.medium.com/the-amazon-of-gene-therapy-anc80-a865542e5d4e
['Nathaniel Brooks Horwitz']
2019-07-04 15:01:02.237000+00:00
['Machine Learning', 'Biotechnology', 'Genetic Engineering', 'Venture Capital', 'Health']
6 Marketing ‘Post-COVID’ trends in 2021.
Transparency and Reliability. If the corona crisis has taught us one thing, it is the importance of transparency and data reliability. Transparency and reliability will play an increasingly important role. You can see that Google is already working with E-A-T on verifying the expertise, authority, and trustworthiness of authors. This will weigh even more heavily. Referring to reliable sources is becoming increasingly important post-corona. Are you referring to a reliable source with an external link? Then that is a positive signal. On the other hand, links to different or even unreliable sources have a negative impact. That is the development we need. Post-corona trust is the new credo. The virtual world has increasingly become a battlefield of fake news and misinformation. Clear, reliable, and consistent communication is a relief for the consumer. Therefore, ensure a clear brand image and brand value pattern. Work this consistently into all your communication. In short: prove the wording, win and strengthen the relationship with customer and prospect.
https://medium.com/evolve-you/6-marketing-post-covid-trends-in-2021-a83085508f59
['Bryan Dijkhuizen']
2020-12-27 17:26:04.349000+00:00
['Money', 'Marketing', 'Business', 'Entrepreneurship', 'Covid 19']
This clever AI hid data from its creator to cheat at its appointed task
Do you know about the Law of Unintended Consequences? It broadly comes down to this: Any action that involves a complex system is certain to have unintended consequences. This is especially relevant in the field of machine learning, where we are working with highly complex software. Machine learning systems almost always have unintended side-effects. Here's a beautiful example. Consider a Deep Convolutional Inverse Graphics Network, or DCIGN. It looks like this: It may look complicated, but a DCIGN is actually made up of a convolutional neural network (CNN) and a deconvolutional network (DN), mounted end-to-end. The first half reads in an image and converts it to abstract information called a 'feature map'. The green nodes in the middle can then make subtle changes to the map, and the second half takes the modified map and reconstructs it back into an image. So in a nutshell a DCIGN reads in an image, makes changes, and produces a new image. The most famous DCIGN is CycleGAN, which can do mind-blowing stuff like this: Look at the Monet –> photo conversion. A hundred years ago, Claude Monet saw that river scene and turned it into a painting. CycleGAN just ran that process in reverse, and reconstructed a photorealistic image from his painting. Isn't that mind-blowing? It seems as if you can do anything with CycleGAN. So researchers at Google and Stanford asked themselves if you could use CycleGAN to map satellite photos to street maps and back again. The results exceeded their wildest expectations. Look at this: A perfect mapping from an aerial photo to a street map and back again. Amazing, right? But look closer. Look at that white building at the bottom of the image. See all those skylights on the roof? They're not in the middle image! And yet, they reappear in the final image. So how did the second half of the DCIGN network know where to put them all back? Common sense tells us that this is impossible. And our common sense is correct. The AI actually cheated! The researchers gave the software an impossible task, and it found a loophole and cheated. Let's start with the basics. We know that mapping aerial photos to street maps and back again is impossible, because we lose information when we convert the photo to a street map. Subtle details like the position of skylights and smokestacks on roofs are lost. And yet, the software managed to reconstruct a perfect aerial photo. So it somehow managed to hide all these details in the middle image without us noticing. Let's take a closer look at the middle image. We're going to crank up the contrast to reveal small color differences: Do you see that? There's actually a wealth of information hidden in the map image, encoded as tiny color changes that are imperceptible to human eyes. You can see lots of red pixels clustered around the white building. These are used by the second half of the DCIGN to draw the skylights in the correct place on the roof. This is the equivalent of the AI cheating and secretly writing down all the test answers to pass an exam. To test their theory, the researchers played a trick on the AI. They used a different map image, but overlaid it with the 'cheat notes' from the original map. If the AI really is cheating, it's going to ignore the new map and reconstruct the original aerial photo from the cheat notes. And here are the results: And that proves it. The software is only using the cheat data to reconstruct the aerial photo; it's completely ignoring the street map.
So here’s the moral of this story: Be very careful when you are building machine learning solutions, because computers will always try to give you the result you ask for. If you give them an impossible task, they will simply find a loophole and cheat. If the results of your AI app seem too good to be true, they probably are. So always make sure to include a test that catches your software off guard to expose any cheating. How do you feel about AI’s cheating? Do you think it’s funny, cute, or scary? Add a comment and let me know!
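If you want to poke at this kind of hidden signal yourself, here is a small illustrative sketch (not the researchers' code; the file name, reference colour, and gain factor are made up) that exaggerates tiny colour deviations in a generated map image so they become visible, similar in spirit to the contrast boost described above:

```python
# Amplify near-invisible colour differences so hidden high-frequency
# detail (the "cheat notes") shows up to the human eye.
import numpy as np
from PIL import Image

def amplify_differences(path, reference_color=(255, 255, 255), gain=20):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    # How far is each pixel from the colour this region "should" be?
    diff = img - np.array(reference_color, dtype=np.float32)
    # Centre on mid-grey and multiply the deviation so it becomes obvious.
    boosted = np.clip(128 + gain * diff, 0, 255).astype(np.uint8)
    return Image.fromarray(boosted)

# amplify_differences("generated_map.png").save("map_contrast_boosted.png")
```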
https://medium.com/machinelearningadvantage/this-clever-ai-hid-data-from-its-creator-to-cheat-at-its-appointed-task-8b91ed9872f5
['Mark Farragher']
2019-04-02 16:41:22.139000+00:00
['Machine Learning', 'Deep Learning', 'Computer Vision', 'Artificial Intelligence']
Why and how should I publish on Predict?
Dear Writer, Now is the time to launch an open publication dedicated to the future. In fact, it's overdue. What is an open publication? It's one where anyone can submit an article, regardless of credentials, popularity, and writing experience. The future is not written merely by an elite class of salaried journalists and Ivy League freelancers; it's written by you. Gone are the days when money, prestige, or power are needed to change the world. A reminder from "What is Predict?": We welcome common futurism topics such as artificial intelligence (AI), virtual reality (VR), quantum computing, nanotechnology, self-driving cars, flying cars, robotics, automation, cryptocurrencies, computer-body interfaces, and living on Mars. We also try to answer how these and other technologies and innovations will impact our lives, our work, and our society. You can change the world wielding only your phone or computer by writing out an idea. That idea about the future may be implemented by you or anyone else in the world, to change the world. Or your idea can spark another idea in someone's mind and that idea goes on to change the world. Publishing an idea could spark another published idea that then sparks an idea that is implemented, and so on. Ideas spark new ideas, and by publishing your ideas you are making the great Earth-encompassing wildfire of ideas even stronger. Putting your story on Predict is a way for all of us future thinkers to collectively increase the exposure of our ideas by joining forces in some way. We will not agree with everything written here, but we also accept that we might be wrong and they might be right. Whether it's about tomorrow or the year 3000, we want your thoughts about the future. Predict wants to publish ideas about the future. Even if you think your idea isn't a great idea: please submit it, because the world might disagree. To submit an article to Predict: email me at [email protected] with your Medium username and the text of the well-written article you'd like to see published, or if it's already published, the link to the article. If I like it, I'll make you an author, help you get your story published, and going forward you can submit stories to Predict any time using the method shown here. The stories I'm most interested in are written well and are 9-minute reads or longer. But I'll accept anything if I like it, even something very short. It's up to you whether you want your story behind Medium's paywall or not, and you can change that setting at any time. Any earnings are yours, just as if the article weren't part of a publication. Feel free to have a call to action in your story, or more than one. You can also remove a story from our publication at any time and even resubmit it later, I don't mind. You may edit your story if you need to: it always remains 100% your story. I prefer that you use the Oxford Comma, even though some people feel it is optional. Using the Oxford Comma means that in lists of three or more, there should be a comma before the "and". The following sentence is an example, with the comma I hope you will include: "I like horses, dogs, and cats." I generally like to avoid spiritual predictions on Predict (but spiritual commentary alongside a separate prediction can be fine).
It may seem counterintuitive but the predictions here are more technological in nature, and Predict is more about logically thinking about the future, rather than trying to ascertain from something supernatural what the future is. I am sometimes flexible, though, so if you’re not sure feel free to submit it and we’ll see. I recently went through and checked the top 100 non-paywalled articles across Medium at https://medium.com/topic/popular. Only 20 of the top 100 articles on Medium are not in publications, the other 80 are in a publication. Why not give yourself an 80% chance of top article success, instead of a 20% chance? “The Best Way To Predict The Future Is To Create It.” This goes both ways. In some cases, by predicting the future you are creating it. Sincerely, Eric Martin, founder of Predict
https://medium.com/predict/contribute-to-predict-on-medium-95c6ea3674e4
['Eric Martin']
2020-11-22 18:49:38.261000+00:00
['Writing', 'Predict', 'Futurism', 'Future', 'About Predict']
With Figma’s new SVG Exports, less = more
Here's one big lesson we've learned since launching collaborative design tool Figma: when it comes to SVG Export, less is more. For a long time, we stuck as much information as possible into our SVG files, hoping it would help other tools render and import designs accurately. Fueled by feedback from our community, we've had a change of heart and have been gradually tweaking the Export format over the past few months. As our release note junkies have noticed, our SVGs are now simpler, more compact, and compatible with more tools (like Android Studio). Read on for a quick primer on SVG and details about what we changed.

Why tools render the same SVGs so differently SVG — which stands for Scalable Vector Graphics — is an increasingly popular image format for 2D vector graphics. It emerged in 2001 as an open specification aimed primarily at use in web browsers. Unlike traditional bitmap image formats like JPG and PNG that become blurry when resized, SVG is designed to always remain crystal clear. That's because SVGs are effectively instructions describing how to paint an image from scratch while bitmap images are static snapshots of the final result. SVG is perfect for assets like logos and icons on the web, where they might show up anywhere from a huge monitor to a high-resolution retina screen on a phone. Here's the rub though: there's no standardized way of converting the SVG markup to pixels on the screen. Most tools have their own custom SVG importers or renderers, and the quality of these implementations varies widely. The SVG specification is also sufficiently complex that most tools only understand a subset and have bugs even when dealing with the subset that they claim to support. For example, SVG has a nifty feature that allows you to define instructions in a <defs> block and reference them repeatedly with the <use> element. As we learned the hard way, Android Studio does not support this in most cases. In addition to choking many importers, our complicated and extraneous markup also bloated the file size and made the output difficult for humans to digest. Many of our users had to resort to post-processing with tools such as SVGO and svgito or, even worse, to cleaning up the file by hand. 😬😬😬

From Figma's SVG naïveté… to our new pragmatic approach People have been asking for smaller, simpler SVGs from Figma for a long time. We resisted making the change because we were hoping that SVG would become the de facto data transfer format between design tools. In this utopia, you could easily spread your design workflow across different design tools depending on what was the best fit for each stage. We now accept that was a bit of pie-in-the-sky naïveté, and our new SVG Exporter takes a more pragmatic approach. To understand the improvements, let's start with a seemingly innocent example: a frame containing a single black rectangle with a grey inside stroke.
With our old SVG Exporter, we generated this embarrassing novel: <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"> <title>Frame</title> <desc>Created using Figma</desc> <g id="Canvas"> <clipPath id="clip-0" clip-rule="evenodd"> <path d="M 0 0L 100 0L 100 100L 0 100L 0 0Z" fill="#FFFFFF"/> </clipPath> <g id="Frame" clip-path="url(#clip-0)"> <g id="Rectangle"> <use xlink:href="#path0_fill" transform="translate(10 10)"/> <mask id="mask0_outline_ins"> <use xlink:href="#path0_fill" fill="white" transform="translate(10 10)"/> </mask> <g mask="url(#mask0_outline_ins)"> <use xlink:href="#path1_stroke_2x" transform="translate(10 10)" fill="#CCCCCC"/> </g> </g> </g> </g> <defs> <path id="path0_fill" d="M 0 0L 80 0L 80 80L 0 80L 0 0Z"/> <path id="path1_stroke_2x" d="M 0 0L 0 -10L -10 -10L -10 0L 0 0ZM 80 0L 90 0L 90 -10L 80 -10L 80 0ZM 80 80L 80 90L 90 90L 90 80L 80 80ZM 0 80L -10 80L -10 90L 0 90L 0 80ZM 0 10L 80 10L 80 -10L 0 -10L 0 10ZM 70 0L 70 80L 90 80L 90 0L 70 0ZM 80 70L 0 70L 0 90L 80 90L 80 70ZM 10 80L 10 0L -10 0L -10 80L 10 80Z"/> </defs> </svg> After a lot of hard work, we've managed to squeeze all of that down to a tweet: Let's go through the larger changes one by one:

Primitives For simple shapes such as rectangles and circles, we now use the easily understandable SVG shape primitives instead of cryptic paths. <path d="M0 100V0H100V100H0Z"/> ⬇ <rect width="100" height="100"/>

Inside/Outside Strokes Because SVG only supports center strokes, design tools have had to devise various workarounds to simulate inside and outside strokes. Previously, we did this by using center strokes with a doubled stroke width. That is visually equivalent to having both an inside and outside stroke, so we used a <mask> to hide the unnecessary half. This approach greatly inflated the file size and complicated the markup. We now try to adjust the points in the path such that the visual result will resemble an inside or outside stroke while using center strokes with the original stroke width. For example, a rectangle with an inner stroke can be represented as a smaller rectangle with a center stroke. <mask id="path-1-inside-1" fill="white"> <path d="M0 100V0H100V100H0Z"/> </mask> <path d="M0 100V0H100V100H0Z" fill="#C4C4C4" stroke="black" stroke-width="2" mask="url(#path-1-inside-1)"/> ⬇ <path d="M99.5 99.5H0.5V0.5H99.5V99.5Z" fill="#C4C4C4" stroke="black"/>

Minimal Markup We no longer output purely informational elements like <title> and <g> or attributes like id and version. We also added some smarts to avoid markup when it has no effect. For example, we used to always output a <clipPath> for clipped frames, but now we do so only when clipping is actually necessary. Finally, we now inline SVG elements where we previously had a <use> reference to a deduplicated element defined within the <defs> block. Even though we lost deduplication, it turns out that the simplified structure actually reduces the overall file size in most cases. This is especially true when the SVGs are compressed with something like gzip.

Export Options The new SVG Export defaults are optimized for the most common use cases. For example, most of our users will not miss the id attributes, but for those of you that do, we went ahead and added an option. We also have an option to control the markup for strokes and text objects. We hope Figma's new short 'n sweet SVGs have been treating you well. If you have feedback, please let us know in the comments below!
https://medium.com/figma-design/our-new-svg-exports-are-short-n-sweet-1b2e2cedf319
['Biru Mohanathas']
2018-08-21 00:00:03.637000+00:00
['Programming', 'UI', 'SVG', 'Design', 'Engineering']
My Time-Based Challenge to Write Every Day
Do you write every day? Do you want to? Let me ask you this. When do you write? When you have time? When ‘more important’ things are done and out of the day? At night after everyone has gone to bed and the house is quiet? How is that working out for you? I know that’s a lot of questions, but we’re almost there. I only have a couple more. What things do you do every day? When do you do them? Do this exercise for me. Make a list of things you do every day, and when you do them. Mine might have looked like this at one time. 6 am Go to the gym 8 am Practice Guitar 2 pm Go for a walk. Yours will vary in times and content, but I’ll bet it’s the same in one key element. The things you do every day, the things you are committed to doing every day, you do at a precise time. Every day. That’s what a schedule is. That’s how a schedule works. That’s how habits are built, enforced, and become daily rituals. Do you go to the gym or go for a run or walk every day? Did you ever try to fit that in whenever it was convenient? That didn’t work, did it? I know it didn’t work for me. When I was working, I would try to go in the morning before work. If I slept late or had an early commitment, I would go during lunch. When that didn’t work out, I would plan to go after work. That never happened. But once I schedule that time every day, it got done every day. At 6, I got to the gym. By 7:30, I was at work. Now, I’m retired. No commitments. The day is mine to do as I please. So, I can just do things whenever I feel like it, right? Not if I want to do it every day. Because sooner rather than later, seven or eight at night would roll around, and I would think, “Oh, I forgot to do that thing.” Do you ever forget to eat lunch? I know I don’t. Besides getting hungry, it’s a part of the day. At 11:30, I eat lunch. I’ve eaten lunch at 11:30 as long as I can remember. Retired or not, my day still follows a schedule. I have things I have challenged myself to do every day. So they are done at a particular time, mostly early. Because the day can still get away from you even if you don’t work. So, my days look like this, seven days a week. 5–6 am Read 6–7 am Marketing 7–8 am Practice Guitar 8–9 am Write 9–10 am Walk After that comes everything else. Like lunch. But also all the things I don’t have to do every day or things that vary from day to day. But well before lunchtime, I have finished my daily commitments, my daily challenge. What about days I don’t feel like writing? That’s the beauty of this time-based challenge every day. From eight until nine, I’ve committed to writing. So, I may as well write. I’ve written elsewhere how I cultivate ideas, so that’s rarely a problem. And time isn’t a problem because that’s what this hour is for. I’m not going to do anything else until nine. So, feel like it or not, I have an hour devoted to writing. And so, it gets done. I wouldn’t know what else to do with that hour. And it feels good to head out the door for my walk, knowing that it is done. But I have a job, I can’t devote five hours of my morning to this stuff. Probably not, but one thing I discovered in all my years of working. We have the time to do what we want to do. People work eight to ten hours a day and sleep seven or eight. That leaves at least eight hours for everything else. Of course, you have other things you have to do, things you are committed to. But if you really want to do something every day, you will find the time for it. Guaranteed. And to make it work, it needs to be the same time every day. 
If I were still working and wanted to get all that done, my schedule would obviously shift. I would probably do my workout and write before work, whatever time I needed to get up to do that. Reading might get done during lunch or as the last thing in the day. I would practice guitar as soon as I got home from work. The point is, if you want to commit to writing every day, then make it a challenge. Give it the same importance you give any other daily task, such as working out or going for a walk. Put it on your calendar and your to-do list. It's not something optional that you do only if you have time or feel like it. It's a daily ritual, a challenge. Maybe it can't be an hour. Perhaps you do it at lunch while munching on a sandwich. But there is one thing I know: writers write. And they find the time to write. They commit to it. Now, if you will excuse me, it's time for my walk. Oh darn, it's raining. And you know what that means? I get wet. I'm still going to walk.
https://medium.com/curious/my-time-based-challenge-to-write-every-day-47ed67908d81
['Darryl Brooks']
2020-10-07 12:45:46.288000+00:00
['Self Improvement', 'Life Lessons', 'Writing', 'Time', 'Creativity']
Plotly Dash: A beginner’s guide to building an analytics dashboard
Hello World, Welcome to my first Medium article! I believe you're here because you want to create an aesthetic dashboard but don't know where to start or what skills you require. Don't worry, I have got you covered. This article will walk you through the journey one step at a time, and I will try to keep it as simple as possible. Let's get started!

Why bother? We have all heard this an infinite number of times — 'Data is the new oil'. Well, it's true, but what does it mean exactly? Data is everywhere. Businesses keep track of users' activity such as their purchase habits, previously viewed products, location details, etc., and spend a lot of money to refine this data to get valuable insights. Data visualization helps decision-makers make informed business decisions on the basis of extracted information that was previously unseen, which results in increased customer retention and increased profits. Hence, this is a very useful and in-demand skill.

What is Plotly Dash? Dash is an open-source Python framework used to build interactive data visualization web applications. It is developed by the Plotly team and was released in mid-2017. It is built on top of Flask, Plotly.js, and React.js. It is super easy to learn, as Dash abstracts away all the hard parts.

Prerequisites I would recommend creating a virtual environment for this, but it's optional. These are all the things you will require to create a Dash application: 1. Python — Dash supports both Python 2 and Python 3. 2. Pandas — data manipulation and analysis Python library: pip install pandas 3. Plotly — interactive data visualization Python library: pip install plotly-express 4. Dash: pip install dash==1.18.1 That's all you need. (Note: starting with dash 0.37.0, dash automatically installs dash-renderer, dash-core-components, dash-html-components, and dash-table, using known-compatible versions of each. You need not and should not install these separately any longer, only dash itself. — Dash Documentation)

Let's get started... Dash apps are composed of the following: 1. Data source / database connectivity 2. Dash Layout, consisting of dash_core_components and dash_html_components 3. Interactivity via callbacks. import dash import pandas as pd import plotly.express as px import dash_core_components as dcc import dash_html_components as html # Initialise the app app = dash.Dash(__name__) # Connect to database or data source here # Define graphs here # Define the app layout here app.title = 'Analytics Dashboard' app.layout = html.Div() # Define callbacks here # Run the app if __name__ == '__main__': app.run_server(debug=True) "Dash includes 'hot-reloading'; this feature is activated by default when you run your app with app.run_server(debug=True). This means that Dash will automatically refresh your browser when you make a change in your code." — Dash Documentation

Data Source / Database connectivity There are two ways to get data: 1. CSV file: df = pd.read_csv('dataset_name.csv') 2.
Connecting to a SQL database (if using this, you need to install Python MySQL connector) # Database Credentials ENDPOINT=”co9od6lav8.ap-south-1.r1ds.amazonaws.com” #replace PORT=”3306" USR=”admin” REGION=”ap-south-1" DBNAME=”mysql” # Database connection try: conn = mysql.connector.connect(host=ENDPOINT, user=USR, passwd=’sih2020’, port=PORT, database=DBNAME) c = conn.cursor() c.execute(‘SELECT * FROM table_name’) query_results = c.fetchall() #print(query_results) except Exception as e: print(“Database connection failed due to {}”.format(e)) df = pd.read_sql_query('SELECT * FROM table_name',conn) Dash App Layout This component describes how the front-end of the app will look like. The app won’t start if we don’t define the layout. dash_html_components This is the coolest thing about Dash. One only needs basic knowledge about HTML and CSS. The dash_html_components library provides HTML components that can be rendered via python. The html.div(children='Hello Dash') component generates a <div>Hello Dash</div> HTML element in your application.The children property is always the first attribute but can be omitted. dash_core_components The dash_core_components includes a set of higher-level components like drop-downs, graphs, markdown blocks, and more. Graph component uses plotly.js to render interactive data visualizations/graphs. app.title = 'Analytics Dashboard' app.layout = html.Div() Dash Callbacks Dash helps us to design dynamic UIs that are customizable through reactive and functional Python callbacks. In simple words, callbacks come into the picture when you want to render a graph that depends on user input via dropdown menus, radio buttons, sliders, etc. Callbacks link input components like dropdown menus with output components i.e graphs. “Every element attribute of the declarative components can be updated through a callback and a subset of the attributes, like the value properties of the dcc.Dropdown , are editable by the user in the interface.” — Dash Documentation Let’s create a dashboard using all the concepts I explained above… OUTPUT: Additional Resources: https://www.kaggle.com/babyoda/women-entrepreneurship-and-labor-force https://dash.plotly.com/ Conclusion Plotly Dash is a great open-source python framework to create an analytics dashboard with good-looking visualizations at no cost. It is beginner-friendly as Dash abstracts away most of the time-consuming work. In this article, you have learned to create an analytics dashboard from scratch. Kudos! The next step will be to understand how callbacks work in Dash. All the best! Keep Learning. Follow me: https://github.com/skothari07
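To tie the layout and callback pieces together, here is a compact end-to-end sketch targeting the dash 1.x imports used in this article (the toy DataFrame, the component ids, and the column names are invented for illustration; substitute your own data source):

```python
# Minimal Dash app: a dropdown filters a bar chart via one callback.
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
import plotly.express as px
from dash.dependencies import Input, Output

df = pd.DataFrame({
    "country": ["A", "A", "B", "B"],
    "year": [2019, 2020, 2019, 2020],
    "sales": [100, 140, 90, 120],
})

app = dash.Dash(__name__)
app.title = "Analytics Dashboard"
app.layout = html.Div([
    html.H1("Sales dashboard"),
    dcc.Dropdown(
        id="country-dropdown",
        options=[{"label": c, "value": c} for c in df["country"].unique()],
        value="A",
    ),
    dcc.Graph(id="sales-graph"),
])

@app.callback(Output("sales-graph", "figure"), [Input("country-dropdown", "value")])
def update_graph(country):
    # Re-draw the chart whenever the dropdown value changes.
    filtered = df[df["country"] == country]
    return px.bar(filtered, x="year", y="sales", title=f"Sales for country {country}")

if __name__ == "__main__":
    app.run_server(debug=True)
```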
https://medium.com/analytics-vidhya/plotly-dash-a-beginners-guide-to-building-an-analytics-dashboard-cedf297e01f1
['Saurabh Kothari']
2020-12-18 07:41:20.391000+00:00
['Analytics', 'Dashboard', 'Data Visualization', 'Dash', 'Plotly']
Stories About the Art of Making Art
“Repressive forces don’t stop people expressing themselves, but rather force them to express themselves,” wrote French philosopher Gilles Deleuze in 1985. Over 30 years later, writer and artist Jenny Odell used the quote as a jumping-off point for a legendary lecture (and later, a book) about reclaiming our attention — and creative spirit — in an era of endless distraction. When the world feels like it’s closing in on us, art is humanity’s way of prying it back open. On that note, this week’s Reading Roulette explores creativity and expression — what it is, where it comes from, and how to cultivate it. First step? Do nothing. Or try to. As Odell writes, “the artist creates a structure […] that holds open a contemplative space against the pressures of habit and familiarity that constantly threaten to close it.” Sergey Faldin explains it like this: Creativity is like building a tunnel between your subconscious (where your best ideas reside) and the world. You can craft your tunnel out of anything — a blank piece of digital paper, a guitar, a sketchpad, or a fridge full of ingredients that don’t seem to match. To communicate more complex ideas, you must widen your “Idea Tube” — which sounds weird, but it’s just another term for the contemplative space you create within yourself. Doing so requires hard work. It also requires freedom. As John Gorman advises, creativity (in writing, specifically) is about experimenting “with form and format, rhythm and cadence.” Break the rules. Play. Or, as Felicia C. Sullivan writes, “Great stories bear the weight of the ephemeral. There are no rules, only the ones you make for yourself that align with your work and how you wish to communicate it.” And breaking the rules doesn’t just apply to writing essays, or fiction — it even applies to copywriting, as Clare Barry explains. Your favorite brands stand out because they’re human (or their voices are, at least). So, whether you doodle or free-write or embroider your mask this weekend, remember: Rules are an illusion, and they always were.
https://humanparts.medium.com/stories-about-the-art-of-making-art-b4e248cf993a
['Human Parts']
2020-05-08 17:16:30.339000+00:00
['Writing', 'Stories', 'Recommendations', 'Reading Roulette', 'Creativity']
Transfer File From FTP Server to AWS S3 Bucket Using Python
Transfer File From FTP Server to AWS S3 Bucket Using Python File transfer functionality with help from the paramiko and boto3 modules Image from Unsplash. Credits: @iammrcup Hello everyone. In this article we will implement file transfer (from an FTP server to Amazon S3) functionality in Python using the paramiko and boto3 modules.

Prerequisites Python (3.6.x) AWS S3 bucket access FTP server access Python Libraries paramiko boto3 Note: You don't need to be familiar with the above Python libraries to understand this article, but make sure you have access to an AWS S3 bucket and an FTP server, with credentials. We will proceed with the Python functions step by step, and I'll leave a GitHub link at the bottom of the article.

Step 1: Initial Setup Install all of the above packages using pip install: pip install paramiko boto3 Also install awscli on your machine and configure the access ID, secret key, and region. Here is the link on how to do it.

Step 2: Open FTP Connection Let's have a look at the function which will make the FTP connection to the server. We will make a new SSH session using paramiko's SSHClient class. We need to load local system keys for the session. For FTP transport over SSH we need to specify the server host name ftp_host and port ftp_port. Once the connection is made, we authenticate with the FTP server to open the new FTP connection using transport.connect(). If authentication is successful, we initiate the FTP connection using paramiko's SFTPClient. We'll get the ftp_connection object, with which we can perform remote file operations on the FTP server.

Step 3: Transfer file from FTP to S3 This will be a big function that will do the actual transfer for you. We will break down the code snippets to understand what is actually going on here. First things first — connection to FTP and S3 (initial FTP and S3 connection setup). The transfer_file_from_ftp_to_s3() function takes a bunch of arguments, most of which are self-explanatory. ftp_file_path is the path from the root directory of the FTP server to the file, including the file name. For example, folder1/folder2/file.txt. Similarly, s3_file_path is the path starting from the root of the S3 bucket, including the file name. The program reads the file from the FTP path and copies the same file to the S3 bucket at the given S3 path. We will also read the file size from FTP. Depending on the size of the file, we will decide the approach — whether to transfer the complete file in one go or transfer it in chunks by providing a chunk_size (also known as multipart upload).

Avoid duplicate copy This small try/except block will check whether a file with the same name already exists at the provided S3 path. It will also check the size of the file. If it matches, we will abort the transfer, closing the FTP connection and returning from the function.

Transfer the small files in one go (Transfer files at once) If the file is smaller than the chunk size we have provided, then we read the complete file using the read() method. This will return the file data in bytes. We then upload this byte data directly to the S3 bucket, with the given path and file name, using the upload_fileobj() function.

Transfer big files in chunks AKA Multipart Upload (Transfer file in chunks) We will transfer the file in chunks! This is where the real fun begins… First we count the number of chunks we need to transfer based on the file size. Remember, AWS won't allow any chunk size to be less than 5MB, except the last part. The last part can be less than 5MB. We iterate in a for loop over all the chunks to read data in chunks from FTP and upload it to S3.
We use the multipart upload facility provided by the boto3 library. create_multipart_upload() initiates the process. The chunk transfer is carried out by the transfer_chunk_from_ftp_to_s3() function, which returns a Python dict containing information about the uploaded part. The parts_info dict has the key 'Parts', whose value is the list of these per-part dicts. This parts_info dict is then used by complete_multipart_upload() to complete the transfer; it also takes the upload ID from the dict returned when the multipart upload was initiated. After completing the multipart upload we close the FTP connection.

How to transfer a chunk? This function reads chunk_size bytes of FTP file data by passing the chunk size to the ftp_file.read() function. This byte data is passed as the Body parameter to the s3_connection.upload_part() function. upload_part() also takes other parameters such as the bucket name and the S3 file path. The PartNumber parameter is just an integer indicating the part number: 1, 2, 3 and so on. Once the part is uploaded, we return a dict with the ETag and PartNumber, which is then added to parts_info to complete the multipart upload.

We did it! That's it! You have transferred the file from FTP to S3 successfully — you should now see the message on the console. Visit the GitHub link for the complete Python script. Thank you for reading this far. I hope you found this article helpful. Cheers!
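To match the description above, here is a sketch of the chunked path, again an approximation rather than the author's script: transfer_chunk_from_ftp_to_s3() follows the article's naming, while transfer_file_in_chunks() and the exact parameters are assumptions carried over from the earlier sketch.

```python
import math


def transfer_chunk_from_ftp_to_s3(ftp_file, s3_connection, multipart_upload,
                                  bucket_name, s3_file_path, part_number,
                                  chunk_size):
    # Read one chunk of bytes from FTP and upload it as one part.
    chunk = ftp_file.read(int(chunk_size))
    part = s3_connection.upload_part(
        Bucket=bucket_name,
        Key=s3_file_path,
        PartNumber=part_number,
        UploadId=multipart_upload['UploadId'],
        Body=chunk,
    )
    return {'PartNumber': part_number, 'ETag': part['ETag']}


def transfer_file_in_chunks(s3_connection, ftp_file, bucket_name,
                            s3_file_path, ftp_file_size, chunk_size):
    # AWS requires every part except the last to be at least 5 MB.
    chunk_count = int(math.ceil(ftp_file_size / float(chunk_size)))
    multipart_upload = s3_connection.create_multipart_upload(
        Bucket=bucket_name, Key=s3_file_path)

    parts = []
    for i in range(chunk_count):
        part_info = transfer_chunk_from_ftp_to_s3(
            ftp_file, s3_connection, multipart_upload,
            bucket_name, s3_file_path, i + 1, chunk_size)
        parts.append(part_info)

    # parts_info['Parts'] is the list complete_multipart_upload() expects.
    parts_info = {'Parts': parts}
    s3_connection.complete_multipart_upload(
        Bucket=bucket_name,
        Key=s3_file_path,
        UploadId=multipart_upload['UploadId'],
        MultipartUpload=parts_info,
    )
```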
https://medium.com/better-programming/transfer-file-from-ftp-server-to-a-s3-bucket-using-python-7f9e51f44e35
['Kiran Kumbhar']
2019-07-08 16:13:16.544000+00:00
['S3', 'Ftp', 'Python', 'Boto3', 'AWS']
Head-on collisions: Lessons from the storage industry
Building a new product or a service is a challenging endeavor. As an entrepreneur you will need to figure out why customers care about your product — it’s value proposition. You’ll also need to figure out who your customers are, how to reach them, how to price your product and so forth. You’ll also have to understand the competitive landscape: who are the other players in your market and what if any are the substitutes facing your product. You might find yourself selling into a new market with very few competitors, or conversely in a crowded and hyper-competitive market with powerful incumbents. In this article, I will focus on market dynamics that I dub a head-on collision. Before I define what a head-on collision means, I want to first present a very simple value-chain model. Companies build products, which they then place in distribution channels to ultimately reach their customers. These channels could be physical like retail stores, virtual like websites and could include all sorts of intermediaries like re-sellers, consultants, partners and so on. There are obviously many other aspects of the product value chain that this model ignores. The world’s simplest value-chain You can apply this model to almost any market. Ford makes trucks, sells them to dealerships all over the country. The dealerships then ultimately sell cars to you and me. Microsoft makes software, some of which it sells to Dell which places them on laptops and then resells them online or in stores to consumers and enterprises. And so on. A head-on collision is when a new company (NewCo) tries to enter a market whereby the following three conditions are satisfied: NewCo’s product is no different than that of incumbent NewCo and incumbent share the same channels NewCo and incumbents sell to the same customers If the above three conditions are satisfied, then NewCo is about to enter into a head-on collision with the incumbent in this market. That head-on collision could pose a significant hurdle into NewCo’s ability to sell its product, especially if the market is not growing rapidly or is faced with external threats. The software storage world bears witness to this phenomena. The storage world: A petri-dish of head-on collisions Before I dive into the world of software storage, I should note that I spent the past 6 years working at a storage startup — Qumulo. The opinions expressed are solely my own and do not express the views or opinions of Qumulo. The on-premises software storage market is dominated by two very large incumbents: Dell EMC and NetApp. These companies have a wide array (pun intended) of different on-premises storage products ranging from block, file and object storage. Their products come in different sizes, performance characteristics and price points. Moreover, the on-premises storage market has been under intense pressure from cloud service providers like AWS and Azure. These vendors offer products like S3, EBS, Azure Storage, Azure Blob and so on that are a great alternative to products offered by Dell EMC and NetApp. Over the past years a few startups, the most notable being ones like Pure Storage, Nutanix and Nimble Storage entered this space. The products these companies offered where either very similar, or identical to ones offered by the incumbents. If there was any differentiation, it was very short-lived and the incumbents typically responded with like products in a relatively short period of time. 
Moreover, the channel these products were sold through were the same across these companies and EMC + NetApp. Lastly, the buyers tended to be the same. The folks who bought Nimble Storage, likely had NetApp or EMC products. Likewise for Nutanix and Pure. In a nutshell, the dynamics in this market satisfy the criteria laid out for a head-on collision. Pure and Nutanix were all able to IPO, which gives us a chance to evaluate their performance in this market. Nimble also IPOed, but had a rough time scaling their business and was eventually acquired by HPE for about $1B, which was far less than its market cap at the time of its IPO in 2013. The chart below presents the performance of both Pure Storage and Nutanix from the time of their IPO up to the end of June 2019. The performance of this stocks is compared to the S&P 500 which is a good proxy for the overall market. The approximate returns of the S&P 500 over that period of time were +45%, while Pure’s stock yielded -17% and Nutanix’s stock returned -49%. As an investor, you were much better off investing in the S&P 500 over these two stocks. Even more interesting is looking at the sales growth for these two companies since their IPO. Both Pure and Nutanix were able to grow their sales anywhere from 7x to 9x from the time they IPOed. That’s spectacular growth, yet the results reflected on their share price are very poor. Why is that? Short answer is that they are growing their sales figures but unable to generate operating cash making their businesses economically unsustainable. The chart below plots the free cash flow (FCF) of these two companies from time of IPO to their most recent annual filings. Witness that both have spent most of their life-time as public companies with negative FCF. So why are these two companies, and Nimble before them, struggling? Simple. Head-on collisions with existing incumbents along with great alternatives from AWS and Azure have resulted in these new companies competing with the existing incumbents on price and price alone. When your product is no different than every other vendor and the intermediaries and buyers are the same, you will resort to trying to win business by price alone. That’s not sustainable, especially if the other companies do the same. So is it all doom and gloom then? Well, not quite. I’ll try and present a few strategies that can help companies navigate through head-on collision. The strategies proposed and in no way comprehensive nor are they mutually exclusive. Segment the market Segmenting the market allows NewCo to look at customers that are unserved by the existing incumbents. NewCo specifically targets those customers as a means to completely bypass the incumbents. Slack is a good example of this strategy. On paper, the messaging market doesn’t look that appealing. The consumer side of this market is dominated by the likes of Apple’s iMessage and Facebook’s WhatsApp. Similarly, the enterprise side of this market has been historically dominated by Microsoft’s Skype for Business, now known as Teams and Cisco’s Unified Communications suite led by WebEx. Yet somehow Slack was not only able to penetrate this market but thrive. What Slack did is focus on the enterprise side of this market, which as mentioned earlier was dominated by Microsoft and Cisco. However, instead of targeting the same buyer, typically a CIO persona, Slack went directly to the end user. 
Slack realized that its value was to focus specifically on tech teams, small companies and startups that work quickly and needed communicate quickly with tech features integrated in their product. Slack’s products were offered for free, were easy to download, install and use and as a result spread like wildfire. The end result of this strategy was Slack’s ability to grow a substantial user base within Microsoft and Cisco’s market yet completely circumventing the incumbents’ value chain. Had Slack attempted to sell directly to CIOs they would have met resistance from Microsoft, Cisco and other players in the enterprise messaging market. The outcome could have been dramatically different. “Spread it did. Where HipChat had gotten businesses chatting, Slack got everyone chatting. The free version was Slack’s trojan horse. Whether you ran a small team in a corporation, launched a startup, or just wanted to organize your local sports team more effectively, Slack worked for everything.” When Slack Won the Team Chat Market Create new channels Tesla is a good example of a company entering a very crowded and competitive market — the auto industry. One thing to note about Tesla is that its product was quite different than other auto manufacturers. Tesla offered 100% electric vehicles which is in stark contrast to the rest of the auto manufacturers who predominantly rely on gas/diesel based engines. However, Tesla coupled that with another interesting move. It built its own distribution channel. Unlike traditional auto manufacturers who rely on a network of third party car dealerships to sell their vehicles, Tesla built its own network. Tesla decided that it was far better off owning the channel and building its own versus having to rely on the existing auto dealership channels. In this article, Tesla’s general council, Todd Maron, outlines some of the main reasons why Tesla went to great lengths to build its own dealerships. Reason #7 outlined in this article shows how building its own network of dealership allowed Tesla to avoid a conflict of interest and a collision with the existing auto dealerships. 7). Gas conflict of interest: Tesla is striving to replace gas-powered cars with its electric cars, and promotes its models as superior to those with internal combustion engines. However, the vast majority of cars sold through dealers are gas-powered cars. Tesla’s Maron said that dealers, therefore, wouldn’t be the good advocates for electric cars that Tesla needs. Redefine the product Zoom is a very interesting case study of a company that decided to enter a very crowded and highly competitive market. The enterprise web conferencing market has been historically dominated by the likes of Microsoft, Cisco and Citrix. Even more interesting is the fact that Zoom’s value chain looked identical to that of its competitors: similar products, very similar channels and the same set of customers. Yet Zoom is by all means a very successful company. So how did they do that? I’d argue that Zoom’s success was predicated on having a much better product. Zoom’s audio and video quality is far superior than that of its main competitors. Zoom came out with a product that was far superior than its existing competitors and was duly rewarded. “[Cisco and Skype] have been relentlessly trying to win us back since we switched to Zoom five years ago. We use video and collaboration tools for remote physician check-ins. We need high quality, reliable video in all locations — not just the ones with high bandwidth. 
Zoom was the only one that could deliver that. Zoom was easier to use, cloud-based, did not require a hardware investment, and its pricing model — a freemium pricing model when we signed on — made it convenient to try without an investment. We reconsider our videoconferencing needs every year — but we stay with Zoom because they listen to what we ask for and unlike the others, they actually provide it. For example, we asked for digital signage and room scheduling and they delivered.” Dennis Vallone — BAYADA Home Healthcare Own the value chain Netflix started out by first offering rental DVDs that were delivered to its customers home by mail. Soon afterwards, the company decided to fundamentally change its business model and offer content over the internet. Initially the content on Netflix was not its own, it was content developed by third party studios. However, Netflix wasn’t done yet. The company realized that “content is King” and decided to make another interesting change to its business. It developed its own content. More recent figures peg Netflix’s spend on its own content at ~$15B. These moves show Netflix’s strategy of trying to alter the value-chain and ultimately owning it. By first offering DVDs via the mail, Netflix was able to completely circumvent the traditional retail route that its main competitors at the time — Blockbuster — adhered to. Once Netflix pivoted to streaming content over the internet its ability to create its own content allowed it to bypass its main competitors large movie and animation studios. With this latest move, Netflix is now able to completely control its value chain from content creation, distribution all the way to customers. If you can’t beat them join them “Exactly. In fact, the general model for successful tech companies, contrary to myth and legend, is that they become distribution-centric rather than product-centric. They become a distribution channel, so they can get to the world. And then they put many new products through that distribution channel. One of the things that’s most frustrating for a startup is that it will sometimes have a better product but get beaten by a company that has a better distribution channel. In the history of the tech industry, that’s actually been a more common pattern. That has led to the rise of these giant companies over the last fifty, sixty, seventy years, like IBM, Microsoft, Cisco, and many others.” Marc Andreessen Again, the storage industry is a great example of seeing how this plays out. EMC in particular has a long history of acquiring companies and plugging them into its massive sales and distribution engine. Some of the notable storage acquisition by EMC include DataDomain and Isilon. More recently, HPE acquired Nimble Storage, a once high flying storage startup that IPO-ed in 2013, only to find it increasingly difficult to scale its business in an economically viable fashion. Much like with driving vehicles, I strongly recommend that you avoid head-on collisions for your products and services. However, should you find yourself about to enter in a market dynamic with head-on collision characteristics, then perhaps consider one or more of the strategies mentioned in this article. You might be able to avoid the crash.
https://karimfanous.medium.com/head-on-collisions-lessons-from-the-storage-industry-8dc1508e1763
['Karim Fanous']
2020-07-22 13:27:23.009000+00:00
['Strategy', 'Entrepreneurship', 'Business', 'Startup']
5 Easy to Apply Fitness Tips for People Over 40
Photo Credit: Pixabay We want to be able to enjoy life from start to finish. Keep your body flexible, fit, and fast to stay happy and healthy as you age. Exercise may be the magical key that unlocks not only longevity but happiness. Science tells us that exercise improves mood, fights depression, enhances quality of sleep, reduces stress and prevents disease. And according to a study published in Medicine & Science in Sports and Exercise, regular exercise can actually slow the aging process. If you are at the peak of turning 40, keep your body strong and your energy up with my best exercise advice. 1. Choose Something that Makes You Happy If you see exercise as a chore, you are less likely to experience its benefits because you probably won’t stick with it in the long run. Find an exercise you love and you don’t have to go in search of your motivation. No one has to drag you out of bed to do something you love. Experiment until you find a type of exercise that makes you happy. The feel-good emotions can also help you stick with exercise long-term. In his book, Spark: The Revolutionary New Science of Exercise and the Brain, Dr. John Ratey, professor of psychiatry at Harvard, writes, “When we begin exercising, we almost immediately begin releasing dopamine, norepinephrine and serotonin. Those are all neurotransmitters that deal with feelings of reward, alertness, contentment and feelings of wellbeing.” What to do: What exercise did you love as a child? Use your answer as inspiration to find an exercise you love as an adult. Ride a bike. Go for a hike. Swim laps or try water aerobics. Take up Pilates or the newest class at your gym. 2. Incorporate Strength Training If making yourself exercise seems like a burden, you might be immediately dismissive of strength training. However, “doing some form of strength training is mandatory as we age,” says national fitness trainer and founder of GetHealthyU Chris Freytag. “You can use dumbbells, resistance bands or your bodyweight, but muscle is the best way to rev up your metabolism as you age, and it’s something you have control over,” Chris says. “Muscle tissue can burn three to five times more calories than fat does. So the more muscle you have, the more calories you will burn, even while sitting,” Chris explains. Strength training also slows bone and muscle loss as you age and keeps your body strong for everyday activities like taking the stairs and gardening. “As people age, there needs to be a stronger emphasis on functional movement and activities that are performed in daily life, such as squatting and pushing doors open,” says Mary Edwards, MS, director of fitness and a professional fitness trainer at Cooper Fitness Center. “Strength training helps increase muscle strength in the limbs and core, which are most important as people age. American College of Sports Medicine recommends strength training, especially for those ages 56 and up as important for maintaining functional movement, balance and power.” You don’t need to invest much time with strength training to see results. What to do: “Working with weights or your body weight for as little as 20 minutes for two to three days a week can crank up your resting metabolic rate over time,” Chris says. If you are using your body weight, try pushups, squats, lunges and planks. 3. 
Mix it Up If you love to jog or love to run, you might just want to stick to your favorite workout day in and day out, but your body needs a mix of cardio (for your cardiovascular health) and weight training (for your body’s strength). Founder and chairman at Cooper Aerobics Center and practicing preventative physician, Dr. Kenneth H. Cooper prescribes the following ratio of aerobic training vs. strength training for maximum health benefits as we age: § If you’re 40 years old or younger, devote 80 percent of your workout time to aerobic training and 20 percent to strength training. § If you’re 41 to 50 years old, shift to 70 percent aerobic and 30 percent strength work. § If you’re 51 to 60, do 60 percent aerobic exercise and 40 percent strength training. § After you pass 60, divide your workout time more evenly between the two strategies — while still giving an edge to aerobic exercise, which provides the most health benefits: 55 percent aerobic work and 45 percent strength work. What to do: Sometimes people are intimidated by the weight rooms or weight machines at gyms. You can strength train using your own body weight by holding planks, doing pushups and situps, wall-sits, lunges and squats. Or buy some hand weights and do some workouts at home. There are lots of online workouts both free and subscription-based. 4. Set a Goal, Deadline, and Track your Progress Write down the workouts you do on your calendar on a daily basis. Seeing your efforts in writing (or on your phone) gives you a boost and a sense of accomplishment. Happiness is the joy you feel striving toward your goals Use what you’ve done to fuel your motivation to do more. A goal can be a powerful reminder to exercise consistently. Becoming better for the future starts Today! What to do: Set a goal that holds you accountable. Maybe it’s signing up for a race, a desire to see muscle tone in the mirror, or working out a certain number of times per week. Keep going until you reach that goal. 5. Stretch After age 30, we start losing elasticity in our tendons and ligaments, making them tight. As we age, stretching helps us maintain a good range of motion in the muscles, allowing joints to operate at normal functionality so they’re not limited. Put together a weekly schedule to stretch out 1–2 days a week. What to do: Make it a practice to stretch regularly when your body is already warmed up. American College of Sports Medicine recommends stretching muscles surrounding major joints two to three times per week, while holding each stretch for 60 seconds. The best advice from all the experts? Keep moving. “As people age, the body changes and injury can occur, so dysfunction can creep in, Mary says. “Focus on what you can do, not what your limitations are.”
https://medium.com/gethealthy/5-easy-to-apply-fitness-tips-for-people-over-40-52d87940da94
['Jeremy Colon']
2018-07-22 20:41:42.638000+00:00
['Fitness Tips', 'Longevity', 'Health', 'Wellness', 'Fitness']
The History Of Your QWERTY Keyboard
The technology before the skill Before we begin, I’d like to offer a disclaimer. At points in this article, you may be tempted to curse and scream at your keyboard. Please don’t. As Jimmy Stamp points out in his article in the Smithsonian Magazine, the keyboard arrangement was designed before touch typing ever came into existence. So, the machine came into existence, then the typist. Our history of QWERTY begins with Christopher Latham Sholes in the 1860s, according to Stamp. The amateur inventor went about figuring out a way to improve the efficiency of the printing business. He linked up with Samuel W. Soulé, James Densmore, and Carlos Glidden, and acquired a patent for the Type-Writer in 1868. This pre-typewriter only had twenty-eight keys and looked more like a piano. The keys were set in alphabetical order. They assumed this to be a logical setup — if you know the alphabet, you’ll know where the letters are. It wasn’t only logic that pushed this design though, the existing Hughes Phelps Printing Telegraph looked similar. Phelps Combination Printer (1859) — Telegraph History. org Koichi and Motoko Yasuoka in their scholarly article for Kyoto University explain that this first Type-Writer was sold to Porter’s Telegraph College in Chicago. The college needed numbers though, which were also transmitted in Morse code. So, Sholes added numbers and some punctuations in 1870, pushing it to thirty-eight keys. Now, the keyboard had four rows but looked different from the QWERTY keyboard you’re used to. The keyboard crew then met with the American Telegraph Works in the same year. They agreed to buy the machine if some key adjustments were made. The Yasuokas’ article also notes Thomas Edison saw the early invention and panned it, saying: “This typewriter proved a difficult thing to make commercial. The alignment of the letters was awful. One letter would be one-sixteenth of an inch above the others; and all the letters wanted to wander out of line.” By 1872, the Type-Writer landed on the front cover of Scientific American magazine with its new forty-two key arrangement. The same year, the machine was demonstrated to the head of Western Union Telegraph. No matter how quickly they typed Morse messages, the Type-Writer kept up. In order to meet demand, a newly forty-three-keyed Type-Writer was brought to arms manufacturer E. Remington & Sons. They signed an agreement to mass-produce the machine and the finished prototype added a few more keys. The unit worked well, but Sholes insisted “Y” be moved to the middle of the keyboard. It ended up next to “T”, giving us our basic QWERTY set up on the top line. Prototype of the Sholes and Glidden Type-Writer — George Iles [Public Domain] So, the basic typewriter and keyboard as we know it was based around the telegraph industry’s requirements. The scholarly paper notes shorthand writers also adopted the Type-Writer. The first eight finger typing school like we use today wasn’t introduced until 1882. So, the modern skill of typing was built around the existing key formation nearly ten years afterward. The question of slowness “To overcome the problem of invisible jamming, Sholes applied antiengineering principles with the goal of slowing down the typist and thus preventing the second bar from jamming the falling first bar. At that time, modern typing speeds were not yet a goal.” — Jared Diamond, Discovery Magazine In Diamond’s article, he mentions the common conventional wisdom about the QWERTY keyboard: it’s designed to intentionally slow you down. 
He’s correct, the Type-Writer did have issues with jamming. The design was changed from piano-like for just that reason. He also points out the fact the machine worked on an “upstrike” design, which made jamming a more complicated problem. Upstrike means the keys which hit the paper sat below it. If a jam occurred, the typist couldn’t see it. Moreover, one couldn’t even see what they were typing at all. How’s that for engineering? Obviously, this sounds like no modern dream machine. However, was it actually designed to slow you down? Koichi and Motoko Yasuoka argue that’s nonsense. Since the Type-Writer was designed to capture Morse Code messages, it would have to write them as quickly as they were sent. In fact, the demo done for the head of Western Union won Sholes and crew the business due to the Type-Writer’s speed. Designing a purposely slow machine makes no sense. Moreover, touch typing wasn’t designed when Sholes and Remington were going back and forth on the design of the QWERTY style keyboard. The Type-Writer team was more interested in the telegraph industry. Typing speeds that could cause regular jamming didn’t exactly exist yet. Diamond also mentions another piece of conventional wisdom. T,Y,P,E,W,R,I,T,E,R are all on the top row as a sales gimmick. Salesman were quickly able to peck out “Typewriter” when doing demos. The Yasuokas find no evidence of this in their research. It also appears to be a strange gimmick, because most of the machines were aimed at telegraph operators initially. The best “gimmick” would be to demonstrate how quickly the machine could type down telegraph messages. So, is your QWERTY keyboard secretly designed poorly to slow you down on purpose? Likely not. Sholes and his crew didn’t just randomly throw keys around; there was a good deal of thought involved. Better designs A Mac iBook With Dvorak Keyboard Set-Up — Michael Bunsen [Public Domain] “In a normal workday a good typist’s fingers cover up to 20 miles on a QWERTY keyboard, but only one mile on a Dvorak keyboard. QWERTY typists achieve barely half the speed of Dvorak typists, who hold most world records for typing speed. QWERTY typists make about twice the errors that Dvorak typists make…To reach a speed of 40 words per minute, the person would need 56 hours of training on a QWERTY keyboard…but only 18 hours on a Dvorak keyboard.” — Jared Diamond, Discovery Magazine Diamond also delves into the creation of the Dvorak keyboard. William Dealey went to an industrial efficiency seminar in 1914 and watched slow-motion footage of QWERTY typists. He saw numerous issues that could be improved and explained this to his brother-in-law August Dvorak. The two spent nearly twenty years redesigning the keyboard. By 1932, the Dvorak keyboard was created. Diamond explains within two years Dvorak typists were beating QWERTY typists in speed contests. A study in the 1930s in the Tacoma school district showed children picked up typing on the Dvorak keyboard within a third of the time. In another instance during World War II, the navy couldn’t find enough trained typists. As a result, they tried out the Dvorak keyboard. They found the typists using the new board made almost seventy percent fewer errors and typed almost seventy-five percent quicker. The navy agreed to order thousands of them, but it was shot down by the Treasury department. Why? Mainly because QWERTY dominated the typing landscape. 
The Type-Writer was the most largely available machine for a good time and their largest competitor Underwood also used QWERTY according to Diamond. Furthermore, most touch-typing schools used that style keyboard. While Dvorak’s keyboard may function better according to its fans, the existing infrastructure was built around QWERTY. Sometimes good enough beats better So, basically your current keyboard sucks, but it’s good enough. It’s not a satisfying conclusion, is it? But, it’s a surprisingly regular one. We all have this idea that if you build a better mousetrap, the world will beat down your door. However, many times a good enough mousetrap will do. The Dvorak may very well be better, but who the hell wants to learn how to type all over again? The QWERTY may be archaic, but it gets the job done. Moreover, there are countless examples where old and adequate beats better. The Game Boy dominated the handheld game market in the 1990s with technology designed in the ’70s. The designer, Gunpei Yokoi, came up with a strategy called “lateral thinking with withered technology”. It was based around using old and outdated tech in new ways and became Nintendo’s general business strategy. Famously, VHS defeated the better Beta for the videocassette tape industry. One of IBM’s growth industries nowadays is mainframes — talk about archaic. In addition, the U.S. Air Force was also thinking about bringing back World War II era prop-driven fighters for close ground support because they made less friendly fire mistakes. QWERTY is just another version of this. It’s an idea wrapped up in a strangely set up keyboard. Hopefully, as we look at our lowly set of keys every day, we’ll be reminded the world may not need a great new invention. It may need one that’s just good enough.
https://medium.com/history-of-yesterday/the-history-of-your-qwerty-keyboard-7f2886f50ccd
['Erik Brown']
2020-10-08 17:02:43.077000+00:00
['Technology', 'History', 'Business', 'Marketing', 'Entrepreneurship']
The Holocene expired
I am here when cities are gone. I am here before the cities come. I nourished the lonely men on horses. I will keep the laughing men who ride iron. — Carl Sandburg [1] We owe Anthropocene to Paul Crutzen, winner of the Nobel Prize. He believes that — “The stratigraphic scale had to be supplemented by a new age to signal that mankind had become a force of telluric amplitude. After the Pleistocene, which opened the Quaternary 2.5 million years back, and the Holocene, which began 11,500 years ago, ‘It seems appropriate to assign the term “Anthropocene” to the present, in many ways human-dominated, geological epoch’”. [2] This universe is 13.8 billion years old — we’re in the Phanerozoic Eon, which is divided in Eras, we’re currently in the Cenozoic; Eras are divided in periods, we’re in the Quaternary; these Periods are broken into Epochs, we’re currently living in the Holocene (the last 11,550 years of the Quaternary Period)…a matryoshka doll of time sets: Eons with detail, down to the epoch. Public Domain Image. Humanity has managed to mess things up to such an extreme, that the geological time scale requires its signature: Anthropo, our cosmic We were here. To put our meager centuries next to eons and epochs…mere mortals like me see timescales in 1,000-year intervals; but it seems likely that Time will have to be updated. We face global post-corona problems: complex and accelerating challenges. Meanwhile, we live our lives scared, scarred, and shocked. Could this this trauma be eased with naming the mess? In The shock of the Anthropocene, Bonneuil & Fressoz are a big help on this quest for crisis nomenclature, which requires scientific consensus. This new age of the scale is by no means set in stone. Anthropo can be easily replaced: Phagocene, Capitalocene…Regardless of the permutation that sticks, we’re tasked with asking questions as old as civilizations: is it immoral to bring children into this world? Are we facing the beginning of the end? Is this essay another example of apocalyptic porn? Maybe it is, nevertheless, it might be wise to consider the implications of our agency: we probably triggered a mass extinction event. Here I will review some remarks — critiques and commentary — taken from French authors, about the concept in question. As with anything new and emergent, Anthropocene cannot be but pregnant of use and abuse. Please, forgive the eschatological tone of my comments. The legacy of our agency reverberates across Eons & Epochs. History is no longer enough to assess our impact and its consequences. In that sense, Anthropocene is important, “It attributes practical — that is to say, stratigraphic — truth to the notion of epoch as studied by a historian”.[3] In other words, human agency should be measured in terawatts, a unit useful for massive amounts of energy — like the rumblings of a volcano, or the motion of plate tectonics. It is no small thing to accept such a hypothesis: that humanity occupies a tangible locus in the Natural History of the planet, plastic being our cosmic smoking gun:
https://adroverlausell.medium.com/the-holocene-expired-1073c969702e
['Miguel Adrover']
2020-07-06 15:25:28.344000+00:00
['Sustainability', 'Philosophy', 'Climate Change', 'Essay', 'Science']
How I cleared All 12 AWS Certifications, in a good long time
[UPDATED Mar 2020] How I cleared All 12 AWS Certifications, in a good long time There are no shortcuts to learn hard skills, it comes with a lot of practice, passion, and dedication. Note: I sat for the Database Specialty Beta Exam in Jan 2020 and cleared it. Yes, it took me a long time to “pass” all of them, never planned to do ALL but it turned out to be a good learning journey. We are what we repeatedly do. Usual questions that I get over Email/LinkedIn How did you prepare for X certification, and Y certification? Did you get any benefits by clearing the X certification? Hey bro, I want to clear AWS certifications, help me pls. I want to take X certification, but don’t know where/how to start! Will I get this job if I pass X certification? These certifications are very costly, did your employer paid for it? And many more around the same lines… Hopefully, the following sections will help answer most of the above and understand the real benefits. A brief about myself I am a father of a one-and-half-year old son, I run a Cloud Development/Consulting firm based in India with a growing team of over a dozen AWSome colleagues, managing a handful of great customers who trusted us with their challenging problems. And a fair share of all the usual work and life hustle-bustle. If I can do it, anyone can do it. You just need to find right reasons to do it. Updates for 2020 3 months in 2020 and the landscape and expectations are changing rapidly. Our customers are demanding more from our cloud services. Irrespective of whether we are building a Web application, Mobile-backend, Architecture Validation or anything in-between. Knowing a few cloud services is not enough anymore. The customers want a fully cloud-native solution and many of them have done their homework quite well. Usually, the discussion happens around these topics. Serverless or Containers or Both Security of the environment at both Infrastructure and Operational level Cost Validation and Optimization of the resources Automation at both the Infrastructure and Operational level Full visibility of the system Many of these things are not part of certification exams directly, but certifications definitely help you make the best out of your knowledge, you will undoubtedly learn a million new things along the way. Why I decided to appear for AWS exams! I started my AWS usage directly with Lambda in around mid-2015, while working for a startup. It was a fascinating experience. The Serverless helped me learning a lot of things and then eventually I decided to finally jump onto other services of the AWS cloud. I am into software development for a very long time. Things are changing for good, and almost all software runs in a cloud, or may eventually run in a cloud. So it’s good to have a better understanding of the cloud if you want to design better software and serve your customers. As an owner of the business, it makes perfect sense to learn this new paradigm shift and help the team understand it better. The next big reason to have a certification is to promote my business as well, we do a lot of good stuff but the certifications would surely help us connect with new exciting customers all over the world. When I finally decided to appear for a certification, I honestly wanted to give just one, Solution Architect Associate. As this one covers a lot of AWS services and you will start getting a pretty good idea about the internals of AWS if you study well. If your work is not hard, you’re not doing great work. 
How I approached this certification “exams” Instead of focusing just on the exam blueprints, my goal was to learn something different and start putting those skills in practice in our on-going work. With passing time, I learned more and more things and that helped me decide for the next suitable certification that I should approach. I remember the Advanced Networking and Security Specialty helped me a lot in real projects that we were working on. If you think you are struggling with few areas, which comes up in almost every exams, focus on them first. Cloud services work in layer approach, there are core services which will keep coming up in every exam. The better you know them, the easier it would get eventually. Practice is the key. Preparation wise, I would check the exam blueprint first and see which are really important services covered in the exam and try to match them with on-going work. If there is a match, then I re-evaluate how we are doing it in our applications and what all things are available in the service. Going deep gives you a lot more perspective and you can start using the service right away. Next, for all the services that are part of the exam blueprint, go to the AWS console and start playing around. There are only a few which you may not be able to do easily like AWS Direct Connect or Snowball, but otherwise, you will be fine with most. If any of you have used AWS before, you will find that there are multiple use-cases for each service. It all depends on what you want to accomplish and how you orchestrate the services to fit your use-cases. The more you try the more you learn. Keep turning the knobs. When it sounds right, you’ll know. Tons of resources to go through I will still admit that it’s hard to start. Each exam has a lot of new concepts and you will start losing the focus or may lose previous concepts if you can not mix all together. There are no real benefits in the end if you clear one certification and forget everything from the previous. I used limited but some of the best learning resources to help me stay focused and organized. I had a limited time availability on a daily basis so instead of going through everything that I could find on the Internet, I stick with a few resources and mix in my own strategy, like conducting Meetup Sessions or Speaking about Cloud at events, which really really helped. Besides the above, what really helped me are these “extracurricular activities” ;) Write about something new you learned along the way, Medium/LinkedIn wherever you prefer Attend AWS Meetup Groups or create one in your city if it’s not already there. I created one in my city, and honestly met some really good people all around If you get a chance, talk about these services and explain how you used it, listen and read through how others used it as well Keep Notes, I filled up nearly 3 journals, which includes everything from a scratch pad to my own explanations of services to important things to go through again. Tell me and I forget. Teach me and I remember. Involve me and I learn. — Benjamin Franklin The “real” cost of these certifications Exam Fees, $150 or $300 depending on your exam Online Courses or Subscriptions from $5 to $30, per month AWS gives away a 50% discount voucher once you clear an exam. You can use the voucher for your next exam, and you will get another one :) I used 6 vouchers and saved a ton of money right there. Online courses or subscriptions can be costly. 
But you can organize your schedule accordingly and utilize these subscriptions carefully. I enable or disable them based on my time availability and try to complete the lessons without wasting the month. Set aside between 5% and 10% of your earnings every month for learning, skip those useless impulse purchases every now and then, and you should be fine with the cost.

Don't just pass these certifications

I am grateful to all the people who helped me achieve this over the period of time. And now I am passing on my learnings every now and then when I get an opportunity. I am not the first person to do this, and I will surely not be the last. I have met incredible people over time who know far more than me and didn't have the certifications, and I am sure I will meet many more knowledgeable people in the future as well. Once you pass these certifications, people will expect many more things from you, and they are usually right. If you "just" pass, eventually you will run out of time or resources to keep up. Make sure you do some real learning and try to continue working on something in the same area. Focus on how you can put this to use, instead of how much you can earn immediately.

With great power comes great responsibility

Finally…

This is not the end, nor do I know everything about AWS or the cloud. There are many things besides AWS that I work on and would like to continue working on in the future. I like to read more, write more and talk more, so I will continue focusing on that besides learning, and will share interesting things here in the future.

When will it end? It doesn't. It's always day one. Embrace the suck.

These are all the services that AWS currently has, and it is very difficult to keep up with all of them. My preferred areas are Security, Storage, Compute/Serverless and Machine Learning, and they will remain the same for some more time.
https://medium.com/appgambit/how-i-cleared-all-11-aws-certifications-in-a-good-long-time-c37a8a5e2a62
['Dhaval Nagar']
2020-03-23 04:17:17.811000+00:00
['Amazon Web Services', 'AWS', 'Cloud Computing', 'Certification', 'Cloud Certification']
Ruby on Rails: Active Record Enum
Writing software from scratch can be a satisfying journey full of planning, implementing, and revising. While in the development phase, decisions can be temporary and easily undone, allowing development to take new paths. Coming into an existing live codebase is an entirely different task since decisions have already been made, and they are not easily changed. A good starting point is to make a small, low-risk change that can have good returns.

A recent project I worked on included an order model and an order status model, both backed by database tables and developed using Ruby on Rails. The code utilizing the models involved magic numbers as a means of checking and setting statuses. For example:

order.status = 2

or

if order.status == 3 then <insert complex operations here> end

The numbers could be swapped out with constants, but ActiveRecord has built-in functionality for enums. The ActiveRecord implementation of enums seems intended for models with an integer field for the enumerated value, a situation that does not match my project. Here is the assumption for implementing ActiveRecord enums:

class CreateOrders < ActiveRecord::Migration[5.2]
  def change
    create_table :orders do |t|
      t.date :created_at
      t.decimal :total
      t.integer :order_status
      t.timestamps
    end
  end
end

Where my existing project is structured as:

class CreateOrderStatuses < ActiveRecord::Migration[5.2]
  def change
    create_table :order_statuses do |t|
      t.string :order_status
      t.timestamps
    end
  end
end

class CreateOrders < ActiveRecord::Migration[5.2]
  def change
    create_table :orders do |t|
      t.date :created_at
      t.decimal :total
      t.belongs_to :order_status, foreign_key: true
      t.timestamps
    end
  end
end

Through some trial and error, I was able to implement a solution that makes use of the built-in functionality that ActiveRecord enums provide while not having to make any changes to my database schema or breaking changes to my existing production codebase. To follow along with me you can use the following instructions.

Navigate to the folder that will hold your project files.

cd ~/Projects/Rails/

Create the file seeds.rb at ~/Projects/Rails/seeds.rb and paste in the following:

OrderStatus.create(order_status:'open')
OrderStatus.create(order_status:'hold')
OrderStatus.create(order_status:'closed')
(1..100).each {|i| Order.create(total:rand(10000),created_at:rand(Time.now-2.years..Time.now),order_status_id:rand(1..3))}

Then, execute the following:

rails new enum
cd enum
spring stop
cp ../seeds.rb db/seeds.rb
rails generate model OrderStatus order_status
rails generate model Order created_at:datetime total:decimal order_status:references
rake db:migrate
rake db:seed

Excellent! Now we can get to work. Let's begin by firing up a rails console (rails c) and see what we are starting with. To test if the first order in our database is closed, here is what some of the current options look like:

Order.first.order_status==OrderStatus.where(order_status:"open").first
Order.first.order_status==1

The first way is not convenient and is error-prone, and the second is what we are trying to avoid. To improve the situation, we will add some enum notation to our models. Modify the following files accordingly:

order_status.rb:

class OrderStatus < ApplicationRecord
  enum status: {'open':1,'hold':2,'closed':3}
end

order.rb:

class Order < ApplicationRecord
  has_one :order_status
  enum order_status_id: OrderStatus.statuses
end

We have now gained the following functionality:

> o = Order.first
> o.open?
=> false
> o.order_status_id=1
> o.open?
=> true
> o.closed!
> o.open?
=> false
> Order.open
=> <list of open orders>

but also some problems:

> o = Order.first
> o.hold!
> o.hold?
=> true
> o.order_status_id==OrderStatus.where(order_status:'hold').first.id
=> false
> o.order_status_id
=> "hold"
> Order.where(order_status:'open')
=> #<ActiveRecord::Relation []>

If we want to add this functionality to the order_status field, it will require some more work. Create the file order_status_enum.rb in the model directory and paste in the following code:

module OrderStatusEnum
  def order_status
    self.method(:order_status_id).super_method.call.to_s
  end

  def order_status=(value)
    if value.is_a? String
      self.method(:order_status_id=).super_method.call(value)
    elsif value.is_a? Symbol
      self.method(:order_status_id=).super_method.call(value.to_s)
    else
      raise ArgumentError.new "#{value} is not a valid order_status"
    end
  end

  def order_status_id
    OrderStatus.statuses[self.method(:order_status_id).super_method.call]
  end

  def order_status_id=(value)
    if value.is_a? Integer
      self.method(:order_status_id=).super_method.call(value)
    else
      raise ArgumentError.new "#{value} is not a valid order_status_id"
    end
  end
end

To include this new module in your model, make the following change to order.rb:

class Order < ApplicationRecord
  has_one :order_status
  prepend OrderStatusEnum
  enum order_status_id: OrderStatus.statuses
end

This change forces order_status and order_status_id to behave as expected. Trying to use the Active Record query interface will still produce an error; to fix this we need to make an alias linking order_status to order_status_id. Make the following change to order.rb:

class Order < ApplicationRecord
  has_one :order_status
  prepend OrderStatusEnum
  enum order_status_id: OrderStatus.statuses
  alias_attribute :order_status, :order_status_id
end

Now everything is working as expected, but we are still comparing status against hardcoded strings, which can be fixed by making the following change.

order_status.rb:

class OrderStatus < ApplicationRecord
  enum status: {'open':1,'hold':2,'closed':3}
  OPEN='open'
  HOLD='hold'
  CLOSED='closed'
end

This allows us to test for status as follows:

> o.order_status == OrderStatus::OPEN
=> true

All is looking well, except I have neglected to mention the harmless informational messages:

Creating scope :open. Overwriting existing method OrderStatus.open.
Creating scope :open. Overwriting existing method Order.open.

All Ruby objects inherit from Object, which includes the Kernel module, which in turn has an open method. In my situation, I do not use the Kernel module's open method and I need to keep the naming consistent with the existing project. To silence these messages we can make the following change to order_status.rb:
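The text breaks off here, so the author's actual change to order_status.rb is not shown. One possible way to silence the warnings, purely a hedged sketch of my own and not the article's code, is to note that the "Creating scope ... Overwriting existing method" messages are written through the Active Record logger, so the enum declaration can be wrapped in the logger's silence block:

```ruby
class OrderStatus < ApplicationRecord
  # Hypothetical workaround (the original article is cut off at this point):
  # the scope-conflict warnings are emitted via logger.warn, so raising the
  # log level around the enum declaration keeps them out of the console.
  # Assumes ActiveRecord::Base.logger is set, as it is in a standard Rails app.
  ActiveRecord::Base.logger.silence do
    enum status: { 'open': 1, 'hold': 2, 'closed': 3 }
  end

  OPEN = 'open'
  HOLD = 'hold'
  CLOSED = 'closed'
end
```

The same wrapping would be needed around the enum line in order.rb to quiet the second message; whatever the author's actual solution was, the constants and the enum behaviour described above are unaffected by it.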
https://medium.com/quark-works/ruby-on-rails-active-record-enum-6a08df7f3685
['Gregory Bryant']
2020-08-12 04:30:10.835000+00:00
['Coding', 'Startup', 'Development', 'Technology', 'Education']
The Ultimate Shortcut to Entrepreneurial Success
The Ultimate Shortcut to Entrepreneurial Success How to be certain you won’t fail Photo by Campaign Creators on Unsplash You want to fix your car. There are two places you know where you can get it fixed. One is a new place owned and run by an ex-banker. The other is also a new place but the boss has been fixing cars for over 10 years. All other things are pretty much the same. Where will you go? You want to style your hair. You have two options. The first shop is run by someone who also has a full-time job in the pharmaceutical industry. The other one is run by someone you’ve known as a stylist for many years. All other things being pretty much the same. Which would you choose? You require a software solution to be created for your company. You meet the boss of the first company you approach and he knows nothing of the technicalities involved. In fact, you corrected his notion on two occasions in your discussion. The second company you went to is lead by a former top employee of one of the big tech firms. Your conversation with him shows that he knows a lot about what he does. With every other thing being the same; Who will you work with among the two companies? We can continue to run this illustration in several other industries. We will likely end up in the same place. The leader of an organization determines the fate of that organization to a large extent. While there are businesses that can be started with little or no expertise, a competitor with an expert leader may burst that bubble. In a product company, leaders can still be novices and have great products that attract customers. However, the different approach of an expert can put the game out of reach for the novice. This is why restaurants that are run by a chef are (usually) somewhat different. In a service company, there is literally no choice but to be an expert to succeed. There are very few service companies that became successful and retained their success without an expert at the helm. Naturally, the boss at the helm must own the craft. This is an extra edge on every front. The lesson here is simple: Become an expert before you become an entrepreneur Sometimes it makes no sense to look to gather several years of experience to create a simple company. However, you can bypass this by enrolling in a training that will get you up to speed. If possible, learn from the best in the field. Get your hands on a few things and prove to yourself that you are a master of the craft. What happens if you are not an expert? Well, find one to bring on board! It increases your chances of success. However, do not neglect your own training. A wise man once said: Whatever is worth doing well is worth training for This is one of the secrets of success. Don’t start out and launch out like a novice. Invest in yourself and become an expert first. Not an expert in managing a business, but an expert in the craft you are going to be involved in. Business management is good and should be learned. But expertise should be based on the product or service you are involved in. Be the expert before you launch out. It gives you a real edge Cheers!
https://medium.com/swlh/the-ultimate-shortcut-to-entrepreneurial-success-3e95550d631f
['David O.']
2019-08-13 10:41:01.127000+00:00
['Life Lessons', 'Entrepreneurship', 'Business', 'Startup', 'Success']
How To Launch Your Brand Like a Successful Instagram Influencer
How To Launch Your Brand Like a Successful Instagram Influencer A case study of three popular businesses Photo by @Wearetala on Instagram The world of influencer marketing and influencer business has changed massively in the last year or so. Sponsored content is no longer the sole way influencers are monetising their audiences — but that shouldn’t come as a surprise, really. Make way for different kinds of collaborations. From publishing books to launching podcasts, influencers and content creators are stepping up their revenue game. One of the latest trends on this subject has been influencer-driven product collaborations, something that has swept across every industry, including health and wellness. Using examples such as Carly Rowena’s jewellery line, more and more influencers are finding smart and fitting ways to tap into bigger audiences and create branded products. The most notable initiative to date comes from the world of fashion, and it involves Amazon’s “The Drop.” This is an influencer-lead initiative that will give customers limited-edition, “street style-inspired” collections from global bloggers and social media famous folks. Designs from “The Drop” will be available internationally for only 30 hours, which the homepage of “The Drop” promises will mean less waste as things are only made to order. Influencer Quigley Goode is among the influencers collaborating with Amazon on the initiative, and as she shared in one of her latest Instagram posts: “I announced recently that I’ve been working on designing a collection for @amazonthedrop — and my heart just about exploded from the amount of support and excitement I felt from this community. I can’t thank you enough for cheering me on.” In this article, I outline a few lessons from three successful brands launched by influencers, and how they seamlessly marketed their products.
https://medium.com/better-marketing/how-to-launch-your-brand-like-a-successful-instagram-influencer-930cb9ed4959
['Fab Giovanetti']
2020-01-20 18:38:55.511000+00:00
['Business', 'Startup', 'Marketing', 'Influencer Marketing', 'Social Media']
The best tools for Dashboarding in Python
Streamlit

Streamlit Demo — Source

Do you want to create dashboards quickly in Python? — Streamlit is your best option. Streamlit revolutionises creating web applications with an easy-to-use API and constant feature development. It was only in October last year that this open-source tool was launched, and no doubt its popularity has increased rapidly in the data science community. Today, Streamlit boasts even more functionality with its recent introduction of Streamlit Components, where the developer community adds new functionality. Sharing and deploying Streamlit apps has also become super easy with the new one-click deployment service from Streamlit (in beta). You can now develop web applications and dashboards and deploy them in minutes rather than days, thanks to Streamlit. What I like about Streamlit is that it has the shortest learning curve of all the Python dashboarding tools in this list. It offers a simple API with excellent documentation and lets you develop applications with less code in pure Python. In simple terms, Streamlit empowers you to focus on what matters rather than thinking about which front-end and back-end technology stacks to use for your project.

Panel

Panel Gapminder Demo — Source

Do you want to create powerful and advanced dashboards in pure Python with declarative and reactive programming? — Panel is your best bet. Panel is an open-source Python library that lets you create custom interactive web apps and dashboards by connecting user-defined widgets to plots, images, tables, or text. While it is possible to use Streamlit in Jupyter notebooks, it is primarily used with Python files; if your favourite data science tool is the Jupyter Notebook, then Panel offers extensive support for all plotting libraries. The learning curve is steeper than Streamlit's. However, it is simple to create an interactive web application in Panel, using less code, with widgets and parameters. Deploying and sharing your web applications and dashboards in Panel is easy. You can display your dashboards inside Jupyter Notebooks, render them as ipywidgets, run them from the command line, or deploy them using popular tools like Heroku, MyBinder or other cloud platforms.

Voilà

Voilà Demo — Source

Do you want to turn your Jupyter Notebooks quickly into standalone web applications? Voilà is at your service. Voilà — Rendering of live Jupyter notebooks with interactive widgets. Voilà is primarily a native Jupyter rendering tool; you can create interactive reports with widgets in a Jupyter notebook using ipywidgets, and you can also use Voilà to render any notebook content into dashboards. Quick deployment of Jupyter notebooks into dashboards is the strong side of Voilà, but also its downside. You can make coherent dashboards with Voilà, but that requires you to format your experiments and code in Jupyter notebooks accordingly, or to suppress and hide unused code and markdown.

Plotly Dash

Plotly Dash Demo — Source

Do you need more advanced, production-grade dashboards in Python? — Plotly Dash covers that. Plotly Dash focuses on production and enterprise dashboard creation but also offers free and open-source options in Python, R and Julia. It is the most mature option on this list. Although Dash empowers users to build and deploy web applications in hours without full-stack and DevOps tools, it has the steepest learning curve of all the options in this article. That is also changing with the introduction of Plotly Express — an easy-to-use, high-level API for creating figures with the Plotly library.
I find the Plotly Express library to be one of my favourite data visualisation tools in Python. Deploying your Plotly Dash dashboards in your local environment is easy, but you need to deploy them to a server if you want to share them externally.
Final Thoughts
The choice of dashboard tool depends on your project's needs. Streamlit is for creating quick and versatile dashboard apps, with an easy learning curve and little code; I find it the easiest and best tool out there for creating data visualisation web apps. Voilà and Panel primarily serve Jupyter Notebook users, and you can combine them with any plotting library of your choice. Finally, use Plotly Dash for more advanced, production-level dashboards.
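To give a sense of Streamlit's short learning curve, here is a minimal sketch of a dashboard script. The data file (sales.csv) and its columns (region, date, revenue) are hypothetical placeholders I chose for illustration, not part of any demo mentioned above.

# app.py -- a minimal Streamlit dashboard sketch (hypothetical data file and columns)
import pandas as pd
import streamlit as st

st.title("Sales dashboard (demo)")

# Load the data; replace 'sales.csv' with your own source.
df = pd.read_csv("sales.csv")  # assumed columns: region, date, revenue

# A sidebar widget that filters the data interactively.
region = st.sidebar.selectbox("Region", sorted(df["region"].unique()))
filtered = df[df["region"] == region]

# Streamlit renders text, charts, and tables with one call each.
st.write(f"Total revenue for {region}:", filtered["revenue"].sum())
st.line_chart(filtered.set_index("date")["revenue"])
st.dataframe(filtered)

Running streamlit run app.py starts a local web server and opens the dashboard in the browser; every widget interaction reruns the script from top to bottom, which is what keeps the API this simple.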
https://medium.com/spatial-data-science/the-best-tools-for-dashboarding-in-python-b22975cb4b83
[]
2020-11-05 08:56:08.146000+00:00
['Plotly', 'Dashboard', 'Streamlit', 'Data Science', 'Data Visualization']
My Recap of KubeCon 2019’s “Running Istio and Kubernetes On-Prem at Yahoo Scale”
This article summarizes a talk given by Suresh Visvanathan and Mrunmayi Dhume of Verizon Media at KubeCon San Diego 2019. If the video recording of the presentation is posted, I will link it here directly in the article so you can follow along. Yahoo! has more than 18 production-grade Kubernetes clusters; Visvanathan's team operates one that has more than 150,000 containers, 500 applications, and 1,000,000 requests per second. Their most mission-critical applications, such as Yahoo! Sports, Yahoo! Finance, and Yahoo! Home, are deployed on and enabled by the Kubernetes and Istio platforms. The talk covered how the teams worked on modernizing their platform into a microservices architecture. Among their goals were:
https://medium.com/cloud-native-the-gathering/my-recap-of-kubecon-2019s-running-istio-and-kubernetes-on-prem-at-yahoo-scale-d5621907fb6e
['Tremaine Eto']
2019-11-20 21:28:45.497000+00:00
['Software Engineering', 'Technology', 'Software Development', 'Kubernetes', 'Istio']
PHP or JavaScript headache. There are endless stories about why PHP…
Lanzarote, Green Window in Sunshine, by Vlad Madejczyk There are endless stories about why PHP is better than JavaScript, or why only JavaScript (meaning JavaScript technologies like React, Angular, etc.) should be used and PHP should die, or will die next year. A few years ago I heard that PHP was dead (the opinion of Node.js developers), and this opinion pops up frequently. However, it is now 2020 and PHP is in better shape than ever before. In my professional work I have mainly used Assembler (in ancient times), C++ (for 3D programming), ActionScript 3, Java, Node.js, Django/Python, and PHP with CodeIgniter, Yii, Laravel, and WordPress. Working for a long time as a freelance web developer taught me to be programming-language-agnostic. Mostly it is the company, your customer, that decides what technology you can use, because the decision is based on budget, and they don't care whether JavaScript is (or is not) better than PHP. Everything is possible, but ultimately the technologies you can use to deliver a solution always depend on the budget. Let's take a simple scenario with an online bespoke content management system (CMS). Why not, for instance, use Amazon Web Services, set up an Ubuntu server, and start with Laravel, if we go the PHP route? Laravel is much better than WordPress, and WordPress is bad (well, some developers say that). This approach looks good, but it only looks that way. Laravel is beautiful, but it comes with its, so to speak, impolite children: (1) periodic upgrades that are not always backward compatible; (2) unpredictable business dependency injections (like Tailwind CSS, and who knows what is coming in the future; see point 3); (3) new stuff like Livewire vs. Inertia.js: both excellent, but why? After your job is done, somebody in your customer's company has to take care of it, because a CMS may (or may not) need maintenance, modifications, and extensions from time to time. If the company has no IT department (well, they hired you, a freelancer), it most probably means that Laravel is NOT an option, unless they can afford your services from time to time. Keep in mind also that for most commercial projects Laravel cannot be used on shared hosting. I have found that some companies offer Laravel hosting, but once you start using it you discover that you need sudo permissions for some Composer tasks, and such cheap hosting becomes pretty much useless. In theory, you can use hosting services like DigitalOcean, AWS, or similar, and they are good and reliable, but again, somebody needs to take care of the server after you deliver the CMS. I have found that every reliable Laravel hosting setup needs some command-line server management skills and root/sudo permissions, and all this makes maintenance cumbersome for your customer. How many customers can afford such a scenario? Some of them can, but not all. You could use some JavaScript technology for the CMS, but if what you have is a relational database, and there is no chat or video chat or real-time user interface updating involved, using JavaScript technologies (back end and front end) for a CMS that is typically based on a relational database would be overkill.
In fact, there are at least three things you should tick off before you start development: (1) what is your budget (how many working hours you have at your disposal); (2) what tools are best for the job to be done (based on point 1); (3) when choosing the right tool, look at the technology and at what features it delivers out of the box. If a project is a SPA (single-page application) with no need to save data in a database, or with data saved only once, a JavaScript technology like Vue.js or React, or even Bootstrap/jQuery with Ajax or similar front-end tools, would be a much better option than pure PHP with HTML/CSS. And how about a mixed scenario, where you need classic CMS functionality, but on top of that some pages here and there with rather advanced business logic and a high level of user interface interaction, a typical SPA, which can run on the user's side (in the browser) and save server resources? I would say: use Laravel, CodeIgniter, WordPress, or something similar, together with a JavaScript library. You can develop a SPA with PHP, no doubt about it, but at the same time you degrade the user experience because of server round-trips and page reloads. A degraded user experience means worse performance on mobile devices and ultimately lower conversion, and business doesn't like that. Yet another example: an advanced CMS with a very complex module involving a lot of ML/AI and advanced mathematical and statistical work in the back end. That sounds like Django/Python with some JavaScript libraries here and there on the front end. What I described above is the tip of the iceberg. Anyway, I have found that mixed solutions can very often be simpler, more time-efficient, easier to maintain, and more cost-effective than attempts to develop a project following a narrow-minded, one-is-best-for-all approach. For instance: use WordPress, with caching, as a kind of advertising (blog/posts) interface, as food for SEO and content-hungry users, where security is not a big issue and which can be updated by company employees, and create another module based on Laravel or another PHP MVC framework, with a higher security level, for managing the more important data. And, if needed, make both modules talk to each other via an API. On top of that, web services built so that they also perform nicely on mobile devices are welcome, and you cannot easily get this with pure PHP. In my opinion, if you compare different technologies, for instance CMSs or programming languages, the most important thing is to find out what they deliver out of the box. Not what you could develop by using them, because ultimately you can build a CMS even in assembler. Look at a library, language, or CMS to find out what it delivers out of the box, what it delivers now, because developing missing features can be very painful. For a long time I was not sure what was wrong with the PHP vs. JavaScript debate, until I realised that times have changed. "Code is poetry" is long gone. There is no time for code as poetry; it is only a bait. They pay better for delivery, not for poetry (unless it is converted into a song and placed at the top of Billboard's Hot 100 chart). What we have now is more of an assembly line, a kind of two-step development: find what modules and/or technologies you need for your project, then make them work together and connect them seamlessly. Somebody somewhere has already developed and tested the bits you need for your project, so why try to reinvent the wheel?
Some PHP developers say that Laravel is much better than WordPress, and they are right, but the owners of WordPress websites (roughly 30% of all websites, some of which make good money) are right too. It all depends. JavaScript developers say JavaScript is better than PHP, and they are right, but WordPress is based not only on JavaScript (which it uses heavily, for good reasons) but mainly on PHP. So PHP developers who say that PHP is better than JavaScript are right too. All of this depends on context. Is CodeIgniter better than Laravel? Yes, it is. Is the opposite true? Yes, it is. Without context, comparing most things doesn't make much sense. I develop in PHP, and "PHP is dead", so should I switch to JavaScript? Just to give you an example: COBOL, a programming language designed in 1959, has been on its deathbed for at least 30 years. It is still not cold and dead; it still delivers in many companies, and delivers reliably. In my opinion there is a vaccine for the PHP-or-JavaScript sickness, and it is quite simple and cheap. If you are a back-end PHP web developer, the best thing you can do is keep what you already know and learn at least one JavaScript library, like React, Angular, Vue.js, or something similar. This will turn you into an augmented PHP developer, something like PHP^Vue.js. This 20% upgrade of your skills will solve 80% of the problems you face in your development. It will also solve one of the biggest issues in back-end development: your work is basically not visible on the front end, and the customer's private GitHub repositories where you keep your code must stay private, so, again, your code is not visible. With back-end work it is not always easy to create a portfolio or showcase; PHP back-end development has a very poor visual side, I would say. There are things PHP does in an excellent way, but we have to face the truth that there are things JavaScript can do much better than PHP.
https://lanzarote.medium.com/php-or-javascript-headache-6c48176b78a4
['Vlad Madejczyk']
2020-12-01 21:22:57.359000+00:00
['Php Vs Javascript', 'JavaScript', 'React', 'Vuejs', 'PHP']
How a Bowl of Yogurt Can Make You Mindful
The Yogurt Does 227g mean anything to you? It’s a typical serving size of yogurt. If you have a tub of yogurt in your fridge, pour yourself a serving in a bowl. Shower it with your favorite assortment of fruits or toppings. If you don’t have any of the items above, add them to your shopping list. You’re just going to have to visualize this bit, and it might make you hangry. Take a spoonful of yogurt into your mouth. Continue this process, but by following this one rule: only one spoonful per minute. Photo by Pascal Meier on Unsplash Only taking one spoonful per minute might seem like a difficult task at first. You might think that you can’t afford that much time on a single bowl of yogurt. The thing is, the more you focus on the time, the more you have to wait. Have you ever stood next to a boiling pot of water, waiting for the bubbles to start forming on the bottom? Contrast that to when you decide to step away, do some chores, and you come back to a pot that’s about to burst. Coming back to the yogurt, instead of fixating on time, focus on what you’re sensing inside your mouth. What flavors are bursting? How’s the texture of your food changing as your saliva starts breaking the bits down? If you’re eating a standard serving of yogurt, you probably just spent 5–10 minutes meditating. This ten-minute practice can help reset your notion of time. It can reset your senses. The ten minutes can feel long or short, depending on what you decide to focus on. The thing is, we can’t afford not to spend some time sitting with something as simple as a bowl of yogurt to “meditate.” We need to check in on our relationship with time frequently. We will always be a slave to our excuses and confinements created by what’s outside of us unless we can start controlling our sense of time and manipulate it as we see fit.
https://medium.com/stayingsharp/how-a-bowl-of-yogurt-can-make-you-mindful-8723e4fb0b6b
['Yuta Morinaga']
2019-09-29 17:23:56.817000+00:00
['Lifestyle', 'Health', 'Productivity', 'Mindfulness', 'Meditation']
Tyler Perry doesn’t want a seat at the table
I’m still looking for the person who unequivocally loves Madea. Either people can’t stand the character or think it’s been overdone to the point that they can’t watch any Tyler Perry movie that she’s a part of. As I say that, I bet the overwhelming majority of you know exactly who Madea is. You can picture her curly grey hair and hear her annoying, high pitched voice. You’re imagining her curse someone out or hit someone with her purse. That alone makes Perry one of the greatest writers of our generation. He’s created a character that’s become popular enough to be instantly recognized by name alone without attachment to any one particular movie. But that hasn’t been enough for Perry. I don’t think it ever was. He famously self-financed his first play. That was after living out of his car not knowing if his dreams would ever come true. So when the announcement was made that he is the first African American to independently own his own studio, as amazing an accomplishment an entertainer can imagine, it couldn’t have been too surprising. Tyler Perry has never wanted a seat at the table It’s a curious time to be black on this side of the world. Even if your eyes are closed it’s impossible not to feel the way we’re celebrating each other. It’s also impossible to ignore our demands. That we be respected, that we be let into spaces that previously took us for granted despite our impact or presence. Read: Why do I have to write about being black? Counter that with what almost feels like a racial civil war. Our demands haven’t been met without resistance and that resistance is creating deep fractures in our society. Some will argue those fractures were always there, and they have been. But it’s also true that social media has made this division more contentious, or at least more public. In the midst of this, Perry has quietly been constructing his empire away from the noise of Hollywood. He’s stayed in Atlanta despite his commercial success in the film industry. The newly built Tyler Perry Studios is on over 300 acres of former slave ground in Georgia, an irony that shouldn’t be overlooked. Image by Clarke Sanders Perry has unapologetically catered his art specifically to black culture. “My audience and the stories that I tell are African-American, stories specific to a certain audience, specific to a certain group of people that I know, that I grew up with, and we speak a language. Hollywood doesn’t necessarily speak the language.” he said. “A lot of critics don’t speak that language. So, to them, it’s like, ‘What is this?’ “ That choice has made Perry an “other.” Someone who’s acknowledgement outside of black culture doesn’t equal his achievements or influence within it. But that has never seemed to bother Perry. He came into the game on his own dime so he’s always understood where he would stand. He would have to do it on his own, and so he has. Tyler Perry Studios needs to work It won’t be enough for this to be some kind of symbolic gesture. In twenty years, we can’t look back and ask, “what ever happened to Tyler Perry’s studio.” We need to make this work. Great movies must be born there. Iconic TV series must live in one of the twelve sound stages named after black people who’ve inspired Perry to control his own destiny. This is not a test and shouldn’t be treated as such. It’s a waving flag of victory for a battle we still need to win. Tyler Perry has taken a leap. He’s built a table we can call our own. We creators should all feel welcome to pull up chairs and begin sharing our stories. CRY
https://medium.com/cry-mag/tyler-perry-doesnt-want-a-seat-at-the-table-cc96d39c8e73
['Kern Carter']
2019-10-10 15:46:03.117000+00:00
['Writing', 'BlackLivesMatter', 'Pop Culture', 'Creativity', 'Culture']
How the Night I Struggled for Breath Changed My Life
How the Night I Struggled for Breath Changed My Life A sudden deterioration in your health might be the best wake-up call out there. Photo by Daniel Torobekov on Pexels It was the first week of December and I hadn’t been feeling well all week. I was 23 and couldn’t breathe freely anymore. Every breath hurt and my lungs felt like they were burning. My heart didn’t feel right and I had random palpitations that no doctor could explain. I suddenly had no idea what was wrong with me. My mother sat by me terrified, reading the Qur’an until it became gibberish and I couldn’t understand what she was even saying. We went to the doctor. We did all the tests. I still couldn’t breathe properly. I heaved and gasped suddenly throughout the day, crying for my ability to breathe in and out normally. My mother was convinced it was black magic or a curse at one point when there seemed to be no explanation for how badly my lungs were doing. She was desperate for an explanation, as was I, but we weren’t given one. This culminated in me being rushed to the hospital on the night of Saturday, December 9th, 2017.
https://medium.com/be-unique/how-the-night-i-struggled-for-breath-changed-my-life-568406269dcb
[]
2020-12-25 02:07:03.567000+00:00
['Self Improvement', 'Self', 'Self-awareness', 'Health', 'Encouragement']
5 Ways Working Out Will Develop You Mentally
1| Consistency creates success “Success isn't always about greatness. It's about consistency. Consistent hard work leads to success. Greatness will come.” — Dwayne Johnson Consistency seems to be a quality that people innately question when it's towards something they can't wrap their heads around. “Why do you work out so much? I don't see the point.” I know I heard that one far too many times when I was starting out. Funnily enough, those same people that thought I was pointlessly putting in the time and effort for nothing have come desperately crawling back, asking for fitness advice or to help them start out in the gym. If there's one golden rule to life that all the most successful and inspirational people preach, it’s that if you stick to something long enough you will become successful. Take the 10,000-hour rule for example. Popularized by Malcom Gladwell in his bestseller ‘Outliers: The Story of Success.’ If you practice any given skill for 10,000 hours or put that time into a certain field, you will inevitably become an expert in that field or skill. Despite debate around “debunking” this “myth”, what’s important is what we can learn from this simple rule. Putting time and work into something will give you rewards, no matter how significant they are, you will see them. Strictly following a fitness regimen, week in and week out, without making excuses, builds up a mentality that refuses to do things half-heartedly. Seeing the consistent work and time you’ve put into your body and how it developed over the months or years you’ve been training can only teach you one thing. Consistency gains results. The beauty of this simple takeaway from dedication to working out is that you learn this: Seeking results from something, say your new side hustle, a business venture, or whatever involves a long journey, you realize that the only way to achieve your vision is by proving to yourself how badly you want it. How many hours are you willing to put in? How many times are you going to give in to laziness? How many times are you going to persevere and beat the odds when nothing is happening? Developing your body through fitness will teach you this: Results take time and success is relative to how much work and effort you put into something consistently. 2| Comfort crushes ambition “You need to get out of your comfort zone to make new connections with new ideas. If you don't have that grit and resilience to embrace a growth mindset, you might never get a taste of what it feels like to be successful” — Ifeoluwa Egbetade Here it is. The comfort zone. You knew it was coming. A comfort zone is a strange place and the relationship we, as humans, have with it is even stranger. It’s like getting out of bed in the morning. Sure, you can decide to stay in bed all day under your warm comforting covers, but if you never leave your bed what will you achieve? Nothing. Exercising on a schedule puts us in that state of physical discomfort I mentioned earlier, and at times it isn't pleasant. But would you get any drastic results from your fitness programs if they were too easy? That is the point of progressive overload, to always increase the level of intensity in which you work at. Similarly, if you got too comfortable or confident with your fitness routine and skip sessions, you will sooner or later fall out of the rhythm and end up quitting, as happens to the majority of beginners. You should regard your training as mental development as well as physical development. 
Growth happens in the most uncomfortable of scenarios. There is a channel on youtube called ‘Yes Theory’ with over 6 million subscribers who have branded their channel around two simple words. ‘Seek Discomfort’. As the name suggests, they say ‘yes’ to everything. The success and impact that this channel has had on countless people globally, shows us that working out consistently and actively putting ourselves in discomfort can only yield positive outcomes. Continue to leave your comfort zone and allow the discomfort to permeate through your body as you feel the mental growth take place 3| Proving them wrong “People ask me why I work so hard and why I have this compassion to reach the top and be great. I respond by telling them ‘I work insanely hard because people said I couldn't do it.’ When someone tells me I can't do something, I am determined to prove them wrong.” — Robert Cheeke When I was 17, I hit a low point. Self-esteem levels were devastatingly low, my first bout with heartbreak, and people constantly commenting on my ‘sickly thin appearance’, I decided to quit feeling sorry for myself and started to chase a dream I was always too scared to live out. The nights of crying in front of the mirror because of the deep sense of hatred and shame I had for my body died right there and then. 3 years later with over 40 pounds of muscle added to my thin frame, I can safely say the satisfaction of transforming yourself and proving people wrong doesn't fade away. It’s almost a constant subtle high in the background of your daily life, humming away and making you carry yourself with pride and fortitude as your own body becomes living proof of the hardships you overcame. External successes lead to internal developments. Doubts start to fade, ambitious dreams start to grow, and possibilities never seem to end. Proving people wrong enhances that part of our mentality that believes we can do whatever we want, despite the hate. If you can do it once, you most certainly can do it many more times. From starting the businesses your family told you to stay away from, to any doubt that people have placed on you, you yourself can be a living testament to how you will not let anyone determine your capabilities. 4| Become the architect of your life “If you don’t choose what you are doing today, all your tomorrows will look like yesteday” — Jim Rohn No matter who you are on this earth, everyone sets goals for themselves. Especially more so now than ever, with 2021 rapidly approaching. You may have had a sudden realization of something that you wish to change about yourself. As I ambitiously progressed through my fitness journey I had one goal in mind. Building as much muscle as I physically could naturally. As the changes slowly started to come it was as if every time I would look at my body I realized that I had the capacity to change it, grow it and essentially construct it through the grittiness of hard work. As I continued to grow I was overjoyed and in awe that I actually had it within me to transform my physique. I focused on building aesthetic proportions such as the “boulder shoulders”, the V-shaped Back, the peaked biceps, etc. It hit me around 6 months after I started training. If I can effectively pick, choose, and pretty much design my physique in the aesthetic vision I had in mind then why couldn't I do this in other aspects of my life? And so it began, my wonderful journey into self-improvement. 
Each day I focused on bettering myself, creating new habits, reading more books and before I knew it, a year had passed. What struck me by surprise was how much I changed within this year by focusing on the little things. Most people talk about their goals as stand-alone events but never think about the actual work and process that goes into achieving them. You may talk about losing 40 pounds or writing a book, but these significant achievements are the result of marginal successes and improvements that accumulate over time. It’s like a compounding effect that continues until you reach your goal. James clear talks about the idea of improving yourself by 1% each day, so you end up 37% better than when you started. This is mentioned in more detail in his book “Atomic habits”. The beauty of this is that it all stemmed from the changes that I made and saw in my body from continuously hitting the gym. I’m sure many fitness junkies will agree with me when I say this. When you realize you have the capacity and determination to actively go out and improve your physical health and image you get this eureka moment where a burning flame alights within you. This flame pushes you to persevere and chase other dreams that you were always putting off as you continue to prove to yourself that there is a passionate drive within you, even after all the precarious self-doubt. 5| The role of sacrifice “A noble purpose inspires sacrifice, stimulates innovation and encourages perseverance” — Gary Hamel How many times have you decided to eat the piece of fruit instead of the cake? How many times have you decided to wake up before everyone else to go to the gym? Or left your friends early so you could go to train? to which they proceed to call you boring. If the answer to any of these is “no” then you need to reevaluate how bad you want it. Sacrifice is one of the core necessities needed when working towards a goal or a vision. The more experience you gain working out the more you realize that the sessions where you would rather do anything but go workout always end up being the best. Once you drag yourself out the door or into the gym the element of pride for overcoming your laziness and breaking out of that comfort zone is character building, to say the least. Of course, the longer you stick with your regiments the more this will happen, and the more benefits and carry over you will see in your life. This has created a mental note for me that I access whenever I need that extra push to do something that I’m too scared to do “It will be hard. It will require you to sacrifice aspects of your life you do not want to. But you’ve sacrificed time and comfort many times, what’s stopping you from doing it again this time?” I’ve applied this mentality of “if you can sacrifice aspects of your life every week to build the body of your dreams then you can in any aspect of your life” to most notably side hustles, investments, and business ideas such as starting a podcast or an online store. They to require work and sacrifice to achieve success, but luckily enough, I learned the role sacrifice has in building dreams early on through working out, and so can you.
https://medium.com/in-fitness-and-in-health/5-ways-working-out-will-develop-you-mentally-24b74c2ed47f
[]
2020-12-29 16:10:30.578000+00:00
['Motivation', 'Health', 'Mental Toughness', 'Self Development', 'Fitness']
Choropleth Maps using Plotly
I have been working as a Data Analyst for almost 5 years now, but in that time I have mostly used business intelligence software for my data visualization tasks. So my experience with visualization has been limited to knowing when to plot a bar chart versus a line chart. To correct this, I have recently taken up learning matplotlib and other plotting and graphing libraries. This is an attempt in that direction. You can use this as a guide to choropleth maps, or Plotly, or both. Hope this helps. Map by Our World in Data Choropleth maps are a type of thematic map in which areas are shaded or patterned in proportion to a statistical variable (from Wikipedia). This map, for example, shows the share of adults who were obese in 2016. Plotly is a company based in Canada that develops analytics and data visualization tools. They have created an open-source scientific graphing library for Python, which I am using here. For this task, I am looking at data for religious adherents in the United States (source: theARDA.com). The data contains county-level information for different religious groups for the years 1980, 1990, 2000 and 2010. It also includes the population of the counties in each year, along with some other information. So let's first load the data and the libraries we are going to need.
#import libraries
%matplotlib notebook
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
#import raw data
df = pd.read_excel('RCMSMGCY.xlsx', dtype={"FIPSMERG": str})
df.head()
Now, the next 3 steps involve cleaning, filtering, and summarizing the data for analysis, so if you want you can skip this and move directly to the choropleth maps. Data Cleaning To plot the US county data, I am going to use the FIPS code, stored in the column FIPSMERG here. The FIPS code is a unique 5-digit code that identifies counties in the United States. As you can see from the data snapshot above, in the first 5 rows FIPSMERG has only 4 digits. This means that the leading zero is missing. Let's add it using a lambda function.
#adding leading 0 to FIPS code
df['FIPSMERG'] = df['FIPSMERG'].apply(lambda x: str(0)+str(x) if len(str(x))<5 else x)
Now, each religious group has been categorized into a religious tradition and family, represented by the columns RELTRAD and FAMILY. I am adding the names of these categories and the state names to make the data more comprehensible.
#religious family name mapping (FAMILY)
rel_fam = pd.read_csv('religious_family.csv')
#religious tradition name mapping (RELTRAD)
rel_trad = pd.read_csv('religious_tradition.csv')
#merging the 2 dataframes on group code to get the corresponding religious family name & religious tradition category
df = pd.merge(pd.merge(df, rel_fam, left_on='FAMILY', right_on='FAMILY', how='left'), rel_trad, left_on='RELTRAD', right_on='RELTRAD', how='left')
print('Shape: '+str(df.shape))
#state names mapping (STATEAB)
state_code = pd.read_csv('us_state_code_data.csv')
#merging the dataframes
df = (pd.merge(df, state_code, left_on='STATEAB', right_on='Postal Code', how='inner')).drop(['Postal Code'], axis=1)
Filtering the data For the first part of the analysis, I want to look at the percentage of adherents in 2010 at the state and county level. For this, I need the population and the count of adherents.
#filtering data for 2010 only
df_2010 = df[df['YEAR']==2010]
#population at county level (year wise)
pop_county = df[['YEAR','FIPSMERG','STATEAB', 'State Name', 'CNTYNM', 'TOTPOP']].drop_duplicates().reset_index(drop=True)
#population at state level (year wise)
pop_state = pop_county.groupby(by=['YEAR','STATEAB', 'State Name']).agg({'TOTPOP':sum}).reset_index()
Summarizing the data The next step is to calculate the percentage of adherents at the state and county level. Using the pandas groupby function and the population estimates, I have created two dataframes, one summarizing the data at the state level and the other at the county level.
#creating state-level data for % of adherents
adh_state = df_2010.groupby(by=['STATEAB','State Name']).agg({'ADHERENT':sum}).reset_index()
adh_state = pd.merge(adh_state, pop_state[['YEAR','STATEAB', 'TOTPOP']], left_on='STATEAB', right_on='STATEAB', how='inner')
adh_state['PER_ADH'] = np.round(adh_state['ADHERENT']/adh_state['TOTPOP']*100, decimals=1)
State-level summary
#creating county-level data for % of adherents
adh_county = df_2010.groupby(by=['FIPSMERG', 'CNTYNM', 'STATEAB']).agg({'ADHERENT':sum}).reset_index()
adh_county = pd.merge(adh_county, pop_county[['FIPSMERG', 'TOTPOP']], left_on='FIPSMERG', right_on='FIPSMERG', how='inner')
adh_county['PER_ADH'] = np.round(adh_county['ADHERENT']/adh_county['TOTPOP']*100, decimals=1)
County-level summary
Choropleth Maps Now that we have the dataframes ready at the state and county level, we can plot the choropleth maps. I am using the choropleth function from plotly.express. The locations parameter of the choropleth function works in tandem with the locationmode parameter. For US state data, locationmode="USA-states", and the locations parameter takes the state code (AL, CA, etc.).
#shows % adherents at state level for the year 2010
fig1 = px.choropleth(adh_state, locations=adh_state['STATEAB'], locationmode="USA-states", color='PER_ADH', color_continuous_scale="inferno", range_color=(0, 100), scope="usa", labels={'PER_ADH':'%Adherents'}, hover_name='State Name', hover_data={'STATEAB':False,'State Name':False,'ADHERENT':False,'TOTPOP':False,'PER_ADH':True})
fig1.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig1.show()
To plot data for counties, use the geojson parameter, which takes a feature collection of polygons. You can find this collection for US counties here. Also, set the locations parameter to the FIPS code.
from urllib.request import urlopen
import json
with urlopen('https://raw.githubusercontent.com/plotly/datasets/master/geojson-counties-fips.json') as response:
    counties = json.load(response)
#plot % adherents at county level for the year 2010
fig1 = px.choropleth(adh_county, geojson=counties, locations='FIPSMERG', color='PER_ADH', color_continuous_scale="inferno", range_color=(0, 100), scope="usa", labels={'PER_ADH':'%Adherents'}, hover_name='CNTYNM', hover_data={'FIPSMERG':False,'CNTYNM':False,'STATEAB':False,'ADHERENT':False, 'TOTPOP':False,'PER_ADH':True})
fig1.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig1.show()
So now that we know how to create choropleth maps, let's try to add some more functionality to them. For the second part of the analysis, I want to look at how the largest religious tradition group in each state changed between 1980 and 2010. I will add a slider to my choropleth map and use it to change the years. First I need to determine the largest tradition group in each state for each year.
df_ = df.copy()
reltrad = df_.groupby(['YEAR','STATEAB','State Name','RELTRADNM']).agg({'ADHERENT':sum}).reset_index()
reltrad = pd.merge(reltrad, pop_state, left_on=['YEAR','STATEAB','State Name'], right_on=['YEAR','STATEAB','State Name'], how='inner')
reltrad['PER_ADH'] = (reltrad['ADHERENT']/reltrad['TOTPOP'])*100
#adding ranks and filtering for rank = 1
reltrad['RANK'] = reltrad.groupby(['YEAR','STATEAB'])['PER_ADH'].rank(ascending=False, method='first')
reltrad_top = reltrad[reltrad['RANK']==1].reset_index(drop=True)
To add the slider, use the animation_frame parameter of the choropleth function.
n = ['Evangelical Protestant ','Mainline Protestant ','Black Protestant ','Catholic ','Orthodox ','Other ']
cdm = dict(zip(n, ['#003f5c','#444e86','#955196','#dd5182','#ff6e54','#ffa600']))
figz = px.choropleth(reltrad_top, locations='STATEAB', locationmode="USA-states", scope="usa", color='RELTRADNM', hover_name='State Name', color_discrete_map=cdm, hover_data={'YEAR':False,'STATEAB':False,'State Name':True,'RELTRADNM':False, 'ADHERENT':False,'TOTPOP':False,'PER_ADH':False,'RANK':False}, animation_frame='YEAR')
figz.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
figz["layout"].pop("updatemenus")
figz.show()
I don't think the slider serves the intended purpose here. It's cumbersome to compare the states when you have to keep moving between the years, so I think subplots would serve better. Now, because a choropleth is not referenced to a Cartesian system of coordinates, we can't use the choropleth function from plotly.express in subplots. So I will use the Choropleth trace from plotly.graph_objects here.
ndf = reltrad_top[['YEAR','STATEAB','RELTRADNM','State Name']]
ndf = pd.merge(ndf, df_[['RELTRADNM','RELTRAD']].drop_duplicates(), left_on='RELTRADNM', right_on='RELTRADNM', how='left')
ndf['text'] = 'State: ' + ndf['State Name'] + '<br>' + 'Tradition: ' + ndf['RELTRADNM']
years = (ndf['YEAR'].sort_values(ascending=True).unique()).tolist()
cscale = ['#003f5c','#444e86','#dd5182','#ffa600','#ff6e54','#955196']
rows = 2
cols = 2
fig = make_subplots(rows=rows, cols=cols, specs=[[{'type': 'choropleth'} for c in np.arange(cols)] for r in np.arange(rows)], subplot_titles=years, vertical_spacing=0.1, horizontal_spacing=0)
for i, y in enumerate(years):
    fig.add_trace(go.Choropleth(locations=ndf.STATEAB[ndf['YEAR']==y], z=ndf.RELTRAD[ndf['YEAR']==y], locationmode='USA-states', zmin=1, zmax=6, colorscale=cscale, hoverinfo='text', hovertext=ndf.text[ndf['YEAR']==y]), row=i//cols+1, col=i%cols+1)
fig.update_layout(title={'text':'Religious Tradition by Year', 'xanchor': 'center', 'x':0.5}, **{'geo' + str(i) + '_scope': 'usa' for i in [''] + np.arange(2, rows*cols+1).tolist()}, coloraxis_showscale=False, margin={"r":10,"t":70,"l":10,"b":0}, hoverlabel=dict(bgcolor='#e6e6e6', font_size=12, font_family="Rockwell"))
fig.update_traces(showscale=False)
fig.show()
And there we have it. Hope this guide can help you in some way. You can find the data and the full source code here.
https://towardsdatascience.com/choropleth-maps-101-using-plotly-5daf85e7275d
['Shreyeshi Somya']
2020-10-04 16:45:46.814000+00:00
['Choropleth Map', 'Plotly', 'Python', 'Data Visualization']
Ikigai For Entrepreneurs
What if the rising generation of entrepreneurs were equipped with a personal compass that helped them translate individual purpose and professional skills into social impact, while also making a profit? Ikigai For Entrepreneurs Success is about more than money. Move over rock stars, athletes, and A-list actors, we’ve got a new idol this season: Entrepreneurs. A very particular brand of them, too — not the bootstrapper or the small business builder, but the Unicorn with the multi-million dollar Series A. The big name investors. The billion dollar valuation. We glorify the startup life, buying into the hype that: Raising a lot of investment money indicates success Being acquired or going public are the best possible outcomes Success primarily depends on your technical skills and experience Your company will make it if you follow the steps of other entrepreneurs You must choose between making money and making meaning The best way to make an impact on the world is to be a Founder or CEO And, sure, sometimes these are true. But like most careers, rather than imagining how the unlikely ideal scenario feels, you can ask whether you are willing to accept the far more likely downside of the role. In the case of entrepreneurship this includes uncomfortably high odds of failure, financial uncertainty, incredibly hard decisions (that impact other peoples’ lives), frequent isolation, unexpected loneliness, and long thankless hours. And while Benajmins may be the easiest indicator of success to measure, the startup metrics we rely on are often misleading. Valuations are anything but precise. Investment raised is a debt –the more you take, the less control you have over your business. And if you’re really successful in meeting your financial goals, you might get acqui-hired and spend several years at the kind of company you became an entrepreneur to avoid. Fun! The endless listicles promising the ‘top ten keys to startup success” treat entrepreneurship as a science — a formula of activities, circumstances, and processes that can be duplicated, offering standard results. But while navigating the fundamentals of the business is necessary, it is not sufficient. Checklists of essential entrepreneurial ingredients ignore the very nature of entrepreneurship. They treat it like a class that can be aced instead of a dynamic situation that is constantly and unpredictably changing. Like answers are black and white instead of fifty shades of grey. As if the success of the business can happen in a vacuum, irrespective of the Founder (when in reality, it’s more like an unborn child, whose health is directly linked to that of its momtrepreneur). We discount the importance of difficult to quantify elements like drive, passion, and motivation. We likewise undervalue the incredible meaning, purpose, and impact that can be had by running a business with a vision higher than the giant payoff. But what if the rising generation of entrepreneurs were equipped with a personal compass that helped them translate their individual purpose and professional skills into social impact, while also making a profit? How could we make the business case for adding new (hard to measure and quantify) parameters to the equation? What happens when success means something different for each business? And achieving it requires more than money? Entrepreneurial Ikigai In entrepreneurship, more so than in other employment scenarios, the personal and the professional are deeply connected. 
Before you can make an honest assessment of your professional strengths, weaknesses, talents, values, and passions, (or those of your startup) you should consider your personal strengths, weaknesses, talents, values, and passions. The same way that we look at the business’ strategy, mission, vision, core competencies, and competitive advantage, we can look at the entrepreneur’s differentiation, purpose, capabilities, passions, operating methods, and life experiences. What matters to you? What are you willing to fight for? What can you uniquely offer that others cannot? What do you innately understand? What are you bad at? (The image above has been recreated a lot, so I’m unsure who to give photo credit to — happy to do so, though) The modern Western interpretation of an old Eastern concept — Ikigai There is a resurgence of the Japanese concept of Ikigai (loosely translates to “reason for being”). You’ve probably seen the image above, a ven diagram of overlap between what you are good at, what you love, what you can be paid for, and what the world needs. It’s used as a tool for building a more purposeful life, specifically through meaningful work. If we think about entrepreneurship in terms of the four Ikigai elements, it has always been focused on what you can be paid for and what you are good at. Then social entrepreneurship came along and we added a third element — what the world needs. And of course, many founders have started businesses from their hobbies or passions, but that has never really been a prerequisite. But what if it was? Moving from Entrepreneurship to Social Entrepreneurship to Ikigai Driven Social Entrepreneurship What if the next generation of social entrepreneurs built businesses that were deeply connected to their personal Ikigai? What if the paradigm of entrepreneurship shifted such that every business was a combination of what we can be paid for, what we are good at, what the world needs, and what we love? (To be clear, when I talk about what we love, I don’t just mean passions and hobbies. I mean the things we are willing to fight for, the problems we care enough about to solve, the things that matter most to us.) While IRL there is no perfect equation of the four Ikigai elements (and depending on your priorities at the time, you may have to sacrifice some of one to allow for more of another), they can be used as a compass to bring things into balance. And it isn’t always “what the world needs” that is missing. When I launched my first business, Qualifyor, I wanted to change the way people prepared for and thought about work, to provide resources for young people to step outside the standard college to career path. My ego had bought into the idea that the best (only) way for me to drive significant change was as a Founder and CEO of a venture-backed firm. Almost three years into running the business, I felt the struggle of trying to balance profit with social impact (in Ikigai terms, what I could be paid for vs. what the world needs). Not to mention the realization that although the vision was a fit, the role was not (what I loved vs. what I was good at). Despite my best attempts at disruption, I had essentially re-created a school — something other people were probably naturally better at and happier doing than I was, given that I had left school to self teach my way to an early diploma at sixteen. My role completely squandered the unique parts of me, the things that only I brought to the table. 
And so as I thought about my next move, I focused on my Ikigai, (below are some of the things I discovered): Good At: Design thinking, writing, problem solving, first principles reasoning Paid For: Education/employment innovation, facilitation, coaching, speaking Love: Self-organization, big ideas, challenging existing systems, reading World Needs: Updated mindset about how and why we work and learn, entrepreneurs solving world’s biggest challenges, financial incentive to build things for the social good, self organization tools Armed with this awareness, I began writing a book of unorthodox questions to help people build a purpose driven career by re-defining success at an individual level. And I started an innovation consultancy aligned with my Ikigai, to help change the way we look at the world by using first principles reasoning and design thinking to drive more than incremental change to our systems, companies, universities, and cities. It takes a lot longer to say than “Ed Tech CEO,” but these are the messy problems in the world that I am equipped, motivated, and paid to tackle. And this is the way I am best equipped to tackle them. Being an entrepreneur can take many different shapes, so before you make the choice to start something, it is worth spending a minute determining the sort of thing you are best suited to start. Because differentiation doesn’t happen through conformity — it happens through embracing individuality. There is competitive advantage in running a business for which you are uniquely suited. Not only in terms of your work experience, but also in terms of your interests, skills, relationships, knowledge, motivation, and the legacy you want to leave. If we want to maximize our service to others, we should be spending the majority of our time doing the things that we care enough about to do regardless of whether we are being paid, the things that come naturally to us (of course it is rare to only have to do things we are good at and love, but those should be the bigger pieces of our time pie). Consider the most influential moments and events you’ve experienced and what they have taught you. What they have instilled in you? These are not just talking points for an interview, but fundamental aspects of your character and drive. For some people, it can be a direct effect — someone you love was shot, so you became a gun control advocate, or you were raised by a single mother and now want fight for better social policies for families like yours, or perhaps you suffered under the burden of student loans and now work for an alternative education company. For others, it is a more abstract connection. You escaped bullying with comic books and great music, and you want to support creative industries. Or you watched a friend fight cancer and it motivated you to make the most of your health. When sh*t really gets hard, it takes a lot more than money to iterate, adapt, and weather the storm. In a race where the majority of runners don’t cross the finish line, it helps to be working in service of something larger than yourself, that you care immensely about, and are singularly equipped to accomplish. It is time to stop viewing profit and impact as opposing sides of the startup spectrum — they are integrally intertwined. And the sooner we stop forcing entrepreneurs to choose between them, the faster we can shift the paradigm toward businesses making more than money (and make more money by doing so). For more Ikigai related posts, follow me on Medium. 
For updates about my upcoming book and other Ikigai related content, subscribe on my website (scroll to the bottom of the page). I keep my pieces free, so more people have access to these resources. Your likes and shares would help in that mission. This story is published in The Startup, Medium’s largest entrepreneurship publication followed by +368,052 people. Subscribe to receive our top stories here.
https://medium.com/swlh/ikigai-for-entrepreneurs-b100f6a00650
['Kacy Qua']
2019-06-06 11:40:48.599000+00:00
['Success', 'Business', 'Ikigai', 'Entrepreneurship', 'Startup']
How I solved a class imbalance problem
How I solved a class imbalance problem Using distplot() and count functions Imbalanced classes seem to be a common problem that needs to be addressed whenever endeavouring to solve a machine learning problem. If the classes are severely imbalanced and measures are not taken to correct this, a high overall accuracy can still be achieved without generating any good insights into the problem. I therefore make a point of looking at the target variable and analysing it whenever tackling a machine learning problem. To illustrate my point, I have selected a class imbalance competition problem from the Analytics Vidhya site, which can be found at the link below: https://datahack.analyticsvidhya.com/contest/janatahack-hr-analytics/#MySubmissions Extracts from the problem statement of this competition state: "A training institute which conducts training for analytics/ data science wants to expand their business to manpower recruitment (data science only) as well. Company gets large number of signups for their trainings. Now, company wants to connect these enrollees with their clients who are looking to hire employees working in the same domain. Before that, it is important to know which of these candidates are really looking for a new employment. They have student information related to demographics, education, experience and features related to training as well. To understand the factors that lead a person to look for a job change, the agency wants you to design a model that uses the current credentials/demographics/experience to predict the probability of an enrollee to look for a new job." The datasets that need to be downloaded and read for this problem have been saved to my GitHub repository and can be found at the link below: https://github.com/TracyRenee61/HR-Analytics-Job-Change The first thing I did was import the libraries and read the files needed for the competition question. I then checked for missing values in both the train and test files, finding there were many cells that needed to be imputed. I imputed the missing values by replacing all null cells in the train and test sets with the mode, being the most commonly occurring value in each column. When all of the missing values had been imputed, I converted the object (text) columns to numbers using LabelEncoder(). I used distplot() to graphically represent the two classes in the target, and this is where it was discovered that a class imbalance exists. I used value_counts() to discover that only 13.21% of the examples were 1. I then defined X, y, and X_test: X is the train file less the target and the enrollee_id, y is the target in the train file, and X_test is the test file less the enrollee_id. After defining X and y, I split the train set up for training and validation using train_test_split. Because there was a class imbalance, I set stratify to y and shuffle to True. I then put the datasets derived from train_test_split through a standard scaler to ensure the independent variables are all on the same scale. Because the target classes are imbalanced, I created a variable called class_weights, where I have endeavoured to balance the classes. I decided to use distplot() to have a look at y_val and found that 13.24% of the examples were 1. Therefore, any model selected would ideally pick up around 13.24% of 1s in its prediction process. Let's see what I found. I selected eight models and used them as the basis for this competition question.
The performance of all eight models is illustrated in the boxplot below. I selected seven of the eight models to test on the validation set, and the chart below shows the percentages that were achieved. This exercise shows it is a good idea to write down the results of any tests, because they can be referred to at a later date for clarification. I decided that XGBClassifier was the best model to use because it gave a more accurate representation than the other models I had tested. After testing the selected model on the validation set, I predicted on the test dataset and found that 8% of the examples were 1, about five percentage points lower than in the train dataset. Because the competition question stated that probabilities had to be predicted, I used predict_proba() to find the probability that an enrollee would be looking for a job (a prediction of 1). When I submitted my probability predictions to Analytics Vidhya, I achieved an ROC AUC score of 65.24%, with the highest on the leaderboard being 69.43%. The code for this blog post can be found in its entirety in my GitHub account, found below: https://github.com/TracyRenee61/HR-Analytics-Job-Change/blob/master/HR_Analytics_Job_Change_XGB.ipynb
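For readers who want a quick sense of the pipeline without opening the notebook, here is a minimal sketch of the steps described above (mode imputation, label encoding, stratified split, scaling, class weighting, XGBoost, predict_proba). The file names train.csv and test.csv and the target column name 'target' are assumptions for illustration, not the author's exact code; see the linked notebook for the real implementation.

# Sketch of the imbalance-handling pipeline (hypothetical file and column names).
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_class_weight
from xgboost import XGBClassifier

train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')

# Impute missing values with the mode of each column.
for frame in (train, test):
    for col in frame.columns[frame.isnull().any()]:
        frame[col] = frame[col].fillna(frame[col].mode()[0])

# Encode object (text) columns as integers, fitting on train and test together.
for col in train.select_dtypes('object').columns:
    le = LabelEncoder()
    le.fit(pd.concat([train[col], test[col]]).astype(str))
    train[col] = le.transform(train[col].astype(str))
    test[col] = le.transform(test[col].astype(str))

X = train.drop(['target', 'enrollee_id'], axis=1)  # 'target' is an assumed column name
y = train['target']
X_test = test.drop(['enrollee_id'], axis=1)

# Stratified, shuffled split preserves the ~13% positive rate in both partitions.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, shuffle=True, random_state=42)

# Scale the features so they are all on a comparable range.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_val = scaler.transform(X_val)
X_test = scaler.transform(X_test)

# Weight the minority class more heavily; scale_pos_weight is roughly n_negative / n_positive.
weights = compute_class_weight('balanced', classes=np.unique(y_train), y=y_train)
model = XGBClassifier(scale_pos_weight=weights[1] / weights[0])
model.fit(X_train, y_train)

# The competition asks for probabilities, so use predict_proba() on the positive class.
val_probs = model.predict_proba(X_val)[:, 1]
test_probs = model.predict_proba(X_test)[:, 1]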
https://medium.com/ai-in-plain-english/how-i-solved-a-class-imbalance-problem-using-distplot-and-count-functions-ba8f258efadc
[]
2020-10-27 08:11:38.762000+00:00
['Machine Learning', 'Class Imbalance', 'Data Science', 'Python', 'AI']
8 Powerful Writing Tips from Kurt Vonnegut
At this point in your writing journey, you’ve probably read hundreds of writing tips by famous authors. If you’re like me, you might file away your favorites and look them over whenever you need a dose of inspiration and motivation. Kurt Vonnegut’s 1985 essay “How to Write With Style” is a definite gem to add to your collection. The author of the best-selling novel Slaughterhouse-Five outlines eight steps you can follow to improve your writing. “Why should you examine your writing style with the idea of improving it?” Vonnegut asks. “Do so as a mark of respect for your readers, whatever you’re writing. If you scribble your thoughts any which way, your readers will surely feel that you care nothing about them.” I’ve taken my favorite quotes from Vonnegut’s essay and listed them below with my analysis at the end. 8 Rules From Kurt Vonnegut for Writing With Style 1. Find a subject you care about “Find a subject you care about and which you in your heart feel others should care about. It is this genuine caring, and not your games with language, which will be the most compelling and seductive element in your style.” 2. Do not ramble, though “I won’t ramble on about that.” 3. Keep it simple “As for your use of language: Remember that two great masters of language, William Shakespeare and James Joyce, wrote sentences which were almost childlike when their subjects were most profound. ‘To be or not to be?’ asks Shakespeare’s Hamlet. The longest word is three letters long…Simplicity of language is not only reputable, but perhaps even sacred. The Bible opens with a sentence well within the writing skills of a lively fourteen-year-old: ‘In the beginning God created the heaven and the earth.’” 4. Have the guts to cut “Your rule might be this: If a sentence, no matter how excellent, does not illuminate your subject in some new and useful way, scratch it out.” 5. Sound like yourself “The writing style which is most natural for you is bound to echo the speech you heard when a child…I myself find that I trust my own writing most, and others seem to trust it most, too, when I sound most like a person from Indianapolis, which is what I am.” 6. Say what you mean to say “If I broke all the rules of punctuation, had words mean whatever I wanted them to mean, and strung them together higgledy-piggledy, I would simply not be understood. So you, too, had better avoid Picasso-style or jazz-style writing, if you have something worth saying and wish to be understood. Readers want our pages to look very much like pages they have seen before. Why? This is because they themselves have a tough job to do, and they need all the help they can get from us.” 7. Pity the reader “They have to identify thousands of little marks on paper, and make sense of them immediately…So this discussion must finally acknowledge that our stylistic options as writers are neither numerous nor glamorous, since our readers are bound to be such imperfect artists. Our audience requires us to be sympathetic and patient teachers, ever willing to simplify and clarify — whereas we would rather soar high above the crowd, singing like nightingales.” 8. For really detailed advice “For a discussion of literary style in a narrower sense, in a more technical sense, I commend to your attention The Elements of Style by William Strunk, Jr., and E.B. White (Macmillan, 1979). E.B. White is, of course, one of the most admirable literary stylists this country has so far produced…” If you’d like to read Vonnegut’s essay in its entirety, you can find it online here. 
I’ve also compiled these tips into a helpful infographic that you can get here. The Takeaway Kurt Vonnegut certainly practiced what he preached. Take the opening paragraph of Slaughterhouse-Five, for example (Vonnegut, an American POW during WWII, based the book on his own experience of surviving the firebombing of Dresden): “All this happened, more or less. The war parts, anyway, are pretty much true. One guy I knew really was shot in Dresden for taking a teapot that wasn’t his. Another guy I knew really did threaten to have his personal enemies killed by hired gunmen after the war. And so on. I’ve changed all the names.” The paragraph pulls you right into the story. The sentences are sparse and to the point. There are no unnecessary adjectives or flowery language. It makes you want to read more. Why was a guy shot for taking a teapot? Who were these enemies that this other guy wanted to have killed? Vonnegut’s style is particularly suited to blogging. In this medium, we want to get our point across quickly. There are so many articles vying for our readers’ attention. If a reader isn’t hooked by the first paragraph, he’s probably not going to keep reading. And he also won’t keep reading if the writing is too difficult to untangle. It’s important to write conversationally as if you were speaking to a friend: address the reader with the word “you”, use contractions, use short words, avoid the passive voice, and let your personality shine through. If you enjoyed Vonnegut’s eight rules for writing with style, you might also like his eight rules for fiction writing. And here’s another bonus for all of you Vonnegut fans: a short video of Vonnegut presenting what he believes are the three different types of stories.
https://medium.com/copywriting-secrets/8-powerful-writing-tips-from-kurt-vonnegut-3526fc47d99c
['Nicole Bianchi']
2019-09-14 23:43:28.823000+00:00
['Self Improvement', 'Writing', 'Writing Tips', 'Creativity', 'Fiction']
What Works for Spaceship Builder & Herbalist Lisa Akers: How She Melds Disparate Passions And Manages Her Workload
The Nitty Gritty The breaking point in Lisa's life that inspired her to turn to herbalism for answers — and why she decided to study herbalism more deeply How Lisa manages her job as a spaceship builder and her herbalism clients What specific strategies Lisa uses to plan and optimize her daily and weekly tasks Who, exactly, Lisa works with through her herbalism business — and how she balances client sessions with the unpredictable needs of spaceship building Lisa Akers is both a spaceship builder and herbalist (really!). While this might sound like an unusual pairing, Lisa demonstrates just how closely the two are related — and how she balances working as an engineer and working as an herbalist. In this episode of What Works, Lisa shares how she connects engineering and herbalism, what's so magical about herbalism, and how she optimizes her week around the energy available to her. We release new episodes of What Works every week. Subscribe on iTunes so you never miss an episode. What drew a spaceship engineer to clinical herbalism "I saw an acupuncturist, a massage therapist, and eventually an herbalist who said, 'here's what's going on, sweetheart.' And she was right. I said, 'this is magic so I need to learn more about this herbalism thing.' If she can do that over the course of 90 minutes then I need to know how this works because I could be really helpful and support other people. I wasn't thinking of it as a business at that point — just to learn for myself to support my own needs and my family's needs." — Lisa Akers At one point in Lisa's health journey, she ended up in the emergency room, convinced she was suffering from a heart attack. Fortunately, that wasn't the case — but doctors instead gave her Xanax to help her manage the stress she was under from working long hours. Dissatisfied with that solution, Lisa sought additional professional opinions. Every doctor she saw recommended Xanax. At that point, she explored alternative routes in an effort to understand and fix the root problem. "I'm an engineer. I'm trained to search for the root cause so that we can fix it and prevent the symptoms and the indications of failure from happening," says Lisa. Through her experience with the herbalist who pinpointed her health imbalance, Lisa knew that herbalism worked — and she wanted to learn more about how she could help herself, her family, and eventually clients. Pinpointing and working with ideal clients "I work with a fairly narrow group of people in midlife and later who are finding that the lifestyle they lived as young people no longer works for them in their more mature adulthood. They're struggling with diabetes, high blood pressure, autoimmune disease, or maybe even cancer. They need a better solution. They don't just want to follow down this pathway where they take this medication that makes this other symptom happen that they have to take another medication for that causes something else. They have this downward spiral that ends in their death and nobody wants that — they actually want to make it better so I'm looking for people who want to understand how that works." — Lisa Akers Lisa knows exactly who she can help: people who want answers to their health woes that they can't find anywhere else. One way Lisa attracts those folks is through positioning herself as a scientist. She's someone who not only understands plants, but who also thinks with an engineering perspective: that we need to get to the root of the issue to truly fix it. 
And Lisa’s knowledge and ability to find and understand scientific studies around plants give her a strong foundation to her herbalism business. She’s able to really sit down with her clients and explain to them how this plant works, what the studies show, and educate them. “The stereotypical herbalists know that research and they know how it works,” Lisa explains, “but they’re not taking it and making it a deeper experience for their clients — and that’s where I really see my strength.” Balancing and optimizing her schedule and energy “Being an herbalist makes me a better spaceship builder because I understand how things affect me. I am more in tune with how much energy I have. When I set things up on my calendar, I mark them as either draining, energizing, or neutral as far as the energy it’s going to take. By doing that, I can say OK well, I have a whole day’s worth of draining activities — that’s not a good thing. I’m able to move things around based on the other activities that are going on that day. Having that bit of insight into how my day’s going to go before I even start my day helps me not to get into that downward spiral.” — Lisa Akers Working a job 40–50 hours a week (and sometimes more) with little to no control over her schedule, plus running a business, takes time and energy to orchestrate. It’s a regular balancing act. Lisa employs a few different strategies to make the most of her available time and energy each week. Her most-used method right now is Sunday planning. She looks ahead at the week with what’s going on and determines how much energy every task takes. “Without that, I don’t know that I’d be very successful in any of my pursuits,” Lisa says. The Sunday planning also provides a preview of just how much energy she’ll need each day. If she knows ahead of time that she’s going to have a draining or long day, Lisa makes sure that she gets enough sleep, that she drinks enough water, and that she eats the right food to manage it well. Listen to the full episode of What Works with spaceship builder and herbalist, Lisa Akers, on how she optimizes her lifestyle to meet her needs and how she balances building spaceships and working with herbalism clients.
https://medium.com/help-yourself/what-works-for-spaceship-builder-herbalist-lisa-akers-how-she-melds-disparate-passions-and-af18d5f15633
['Tara Mcmullin']
2018-04-24 14:34:15.695000+00:00
['Health', 'Podcast', 'Time Management', 'Productivity', 'Small Business']
Why Your Microservices Architecture Needs Aggregates
Why Aggregates? We’ve looked in-depth at what Aggregates are, and explored ways to identify our Aggregates. Clearly, it takes some up-front effort to design our Aggregates. So, why should we care in the first place? When defining the pattern in Domain-Driven Design, Evans focuses almost exclusively on the Aggregate as a mechanism for transactional enforcement of invariants. But this pattern — in which we identify atomic collections of entities with a single externally-accessible reference — becomes useful in many other aspects of our microservices architecture. In addition to providing that enforcement of invariants, Aggregates help us to avoid later problems caused by things such as: Unwanted dependencies between entities. Leaky object references. Lack of clear boundary around groups of data. Let’s look at some examples of these issues, and how Aggregates would have helped. Microservice and data schema design Let’s take a look at a typical monolithic database. Typically, over the years, we’ll have developed a large database schema, replete with foreign key references throughout. Starting at any arbitrary table and tracing all of the FK references to and from that table, we’d likely find ourselves traversing the entire schema. A small but very monolithic database schema Even with a monolithic codebase, this doesn’t smell quite right. For example, when making a database call to retrieve an Order , how much data should be returned? Certainly the Order details such as status, ID, and ordered-date. But should we return all of the Order Items? The addresses where the item was shipped from and to? How about the User objects representing the order-placer and recipient? If so, how much data should come along with those User s? As we move towards microservices, we’ll be breaking apart our monolithic data schema as we break apart our monolith codebase. This will likely be the most difficult task that we face as we get started. Fortunately, thinking in terms of Aggregates provides us with a blueprint, and solid guidelines, for designing our data microservices and their associated database schemas. Rather than arbitrarily drawing lines, and debating which objects “feel like” they belong together, the Aggregate pattern tells us to identify: Our root entities. Value objects that would be attached to our root entities. Invariants that are required to maintain data consistency across related entities. While it will still take work, and often many iterations, to settle on our Aggregates, we’ll have a guiding light to direct us. And we can be much more confident that we’ve gotten it right, once we’ve formed our Aggregates. Sharding Most databases can handle an enormous amount of traffic. But even the most highly-performant database can only handle so much. When we get to the point where our data volume has gotten too much for our database, we have a few options. One common option, sharding, describes a way to horizontally scale our databases. When sharding our database, we are effectively creating multiple copies of our schema, and dividing our data across those copies. So, for example, if we create four shards, then each shard would store approximately one-quarter of our data. The schema would be the same across the shards — each one would consist of the same tables, foreign keys, and other constraints, etc. With sharding, we horizontally scale by splitting a large schema into multiple smaller, identical schemas Critical to effective sharding is a sharding key. 
Effectively, a sharding key is a common identifier that is run through a hashing or modulus function to determine which shard it belongs to. For example, if we're attempting to update a user, we can take that user's ID, hash it, and mod it by four (assuming four shards) to determine in which shard we can find the user. Now, if we imagine a typical monolithic database schema, this might seem like an impossible task. Why? Well, in our monolithic schema, we will likely have a number of foreign key relationships. For example, we might have a foreign key from the ORDER table to the USER table (to represent the user who placed the order). Now, we might be able to easily determine where to find a given USER record with an ID of 12345 (12345 % 4 = 1, so that USER record would be found in Shard 1). But what if a foreign key to that USER record was held by an ORDER record with an ID of 6543? 6543 % 4 = 3, so that ORDER record would be found in Shard 3. Given the foreign key relationship, then, this would be impossible to implement. While this is a clear example from a monolithic database, we could just as easily paint ourselves into a corner with a microservice's data schema. Imagine that we've created a User service in which — much like our previous examples — a User entity is associated with 0..n email addresses, mailing addresses, and phone numbers. The underlying data schema would then look like the following: Now, let's pretend that we'd eschewed the idea of Aggregates when we built out this microservice. Instead, we'd provided endpoints that allow direct access to all entities, like so: GET /users/{user-id} GET /users/phones/{phone-id} GET /users/emails/{email-id} A year later, our user base has exploded, and we've decided to shard. But at this point, can we? The example below shows our four USER shards, and a sample USER record with an ID of 12345 (12345 % 4 = Shard 1) and an associated PHONE_NUMBER record with an ID of 235 (235 % 4 = Shard 3). That's… not going to work We've run into the same problem as with the monolithic data schema. If we had properly defined our User Aggregate, of course, we would have ensured that every request travels through the root entity. So, it is the root entity's ID that determines where every entity — including that phone number — belongs. In our example above, all of the entities — email addresses, mailing addresses, phone numbers, and the root entity itself — associated with user ID 12345 would be stored in Shard 1. Message passing Let's take a brief detour, and mention the bounded context. This is another extremely useful pattern borne out of Domain-Driven Design. Among other things, it helps us to understand that — rather than a mess of synchronous API calls — our microservices architecture should leverage message passing. Any time an event occurs within one bounded context, that event will be published to an event bus like Kafka, to be consumed by a service in another bounded context. Now, the question usually arises: "What should the message contain?" For example, let's say a User adds a phone number. Once that change is committed to its data store, we want to publish that edit as a message. But exactly what should we publish? Generally, we want to publish the new state of the modified data. So, we could simply publish the new phone number: That might be sufficient. Unfortunately, it's hard to say what additional information the message's consumers might need. 
Some consumers, for example, might need to know if the new phone number is also the User's primary phone number. But what if the primary flag is false… and the consumer still needs to know which phone number is the primary? Hmm. Maybe we should send all of the phone numbers. But… what if another consumer needs to notify the User that the change has been processed, and needs to do it via email? Maybe we should send all of the User's email addresses as well? Clearly, this process might never end… and we might never get it right. An alternative approach that some teams try is to simply send the ID of the modified entity in the message. Any consumer can (nay, must) then call back to the event publisher to obtain the details of the event. This approach has two unfortunate problems: It will, from time to time, result in the wrong data being retrieved. Say entity 123 is modified, and the corresponding message published. Then the same entity is again modified. After that point, a consumer consumes the first event and requests entity 123. That consumer will never pick up that first modification. Now, that might not matter; it could be that the consumer only ever cares about the latest version of the entity. But as producers of the event, we don't know whether any of our consumers — present and future — might need to track individual changes. Worse, it turns our nicely decoupled event-driven architecture back into a tightly-coupled system bogged down by synchronous calls across bounded contexts. So what should we pass as our messages? As it turns out, if we've embraced Aggregates, then we have our clear answer. Anytime an Aggregate is changed, that Aggregate should be passed as the message. We know this because an Aggregate is an atomic unit. Any change to any part of the Aggregate means that Aggregate as a whole has been modified. How that Aggregate is represented in the message, of course, depends on our organization. It might be a simple JSON structure, or it might be represented by an Avro schema. The Aggregate's data may or may not be encrypted. But regardless of the data format, thinking and designing in terms of Aggregates makes questions like these no-brainers. Retries The concept of message passing is often coupled with that of retries. One of the beautiful things about a message-based, event-driven architecture is that resilience — in the form of automatic retries — is essentially baked in. What do we mean by that? When a message is published to an event bus like Kafka, it's meant to be consumed by a downstream consumer. Most of the time, things will work smoothly. Occasionally, however, a consumer will have a problem consuming a message. Maybe the consumer's database will temporarily be unavailable, rendering the consumer unable to correctly handle the event on its end. Or maybe a security appliance is briefly unavailable, preventing the consumer from being able to decrypt the message. In such cases, the consumer will not move on to the next message until the current message is processed, and the consumer is able to acknowledge the handling of the message. This happens by default with systems such as Kafka. Effectively, the consumer will keep trying until it succeeds. Often, this is the desired behavior. Usually these problems resolve relatively quickly (although sometimes with the help of an ops team). In the meantime, there is no sense trying to process the next message, as the same problem will likely occur with that message. 
But there is a second class of problem: when there is a problem with the message itself. Maybe the message became corrupted in transit. Maybe it contains a bizarre special character (did somebody cut and paste a Microsoft Word file?). Maybe it’s failing some validation check. In such cases, the consumer could retry the message a million times; it would never succeed. When detecting such a problem, the consumer might set the current message aside, perhaps into a special triage queue, and continue processing the subsequent messages. But there’s a problem with this approach. We’ll want to ensure that the “bad” message eventually does get processed, even if it takes some amount of manual effort. But… what if, in the meantime, the consumer processes a message in which the same data was changed again? That change will be overwritten when the older, “bad” message is finally re-processed. The diagrams below depict this problem: Entity 123’s “foo” value is changed to “bar” in Bounded Context 1. So a message representing that change is published; however, it cannot be parsed by the consumer in Bounded Context 2. So it is shunted off to a triage queue. Sometime later, Entity 123’s “foo” value is changed to “baz” in Bounded Context 1. So another message — representing foo’s change from “bar” to “baz” — is published by the first Bounded Context. This time, the message is consumed by Bounded Context 2, which now sees entity 123’s “foo” value as “baz”. Some time after that, the initial bad message is fixed (say, a bad character is removed), and it is resent to Bounded Context 2… which now incorrectly sees entity 123’s foo value as “bar”. This is a problem with ordering. Generally speaking, we need to make sure that events are processed in the order in which they occur. But in the scenario described above, that’s not possible. Or… is it? If we’ve designed our data around Aggregates, then we know the scope of changes within any given message our consumers might receive. In other words, any message we receive will depict a new version of an Aggregate. And that Aggregate can be easily identified by its root’s globally-unique identifier (GUID). So, if our consumer determines that a message cannot be processed without manual intervention, it can shunt that message off to a separate queue. Moreover, it can note the GUID of that shunted message. If it encounters any more messages containing that same aggregate, it can likewise shunt that message off to the same queue. And it can continue doing that until the original problem is resolved (maybe the consumer needs to be updated to handle that weird Microsoft Word special character). At that point, the “shunted” messages can be fed to the consumer. It’s certainly not easy to build such a retry mechanism. But with Aggregates, it’s at least possible. Caching Caching is another topic that can become unwieldy without well-defined, bounded data structures. Most caches operate like large hashmaps; they allow us to associate some chunk of data with a single identifier, and to later pass in that identifier to retrieve that chunk. If we haven’t designed our data around Aggregates, it can become difficult to figure out what type of data we want to cache. Imagine a system that is frequently queried but infrequently modified. In this system, we might want to cache our query results higher up in our stack to minimize trips to the database. Fine. But what should we cache? We could simply cache the results of every query. 
Back to our user example, that means that we could be caching the results of things like: Searching for a certain user. Searching for a certain phone number. Searching for a collection of email addresses. Searching for the marital status of a given user. Notice that we're potentially duplicating data. We're caching a user object, but we're also caching individual contact information and groups of contact information, as well as individual fields from the user object. That has ramifications, of course, in terms of the amount of memory required. It also has more serious ramifications when it comes to cache invalidation. Imagine that an attribute of a cached phone number changes — from our earlier example, let's say the "best contact" flag is changed from false to true. So, we invalidate the cached phone number. But do we also need to invalidate the cached user object? What about the other piece of contact info that had a corresponding "best contact" change from true to false? If we're using Aggregates, we don't need to worry about these issues. With an Aggregate, we have only a single possible cache key: the Aggregate root's GUID. When we retrieve the Aggregate, we cache it. When any attribute of the Aggregate changes, we invalidate the entire Aggregate. Problem solved. Service authorization Well into a previous company's move to microservices, I headed a team tasked with implementing service-to-service, data-level authorization. In other words, we'd already solved the problem of "is Service A permitted to access Service B?" We needed to solve the problem of "is Service A permitted to request Entity 123 from Service B?" This meant that we needed to be aware of the current user-agent (for example, the customer who had initiated the request). No problem; that's what things like JWTs are for. We could pass the user's ID in a token while making service-to-service calls. We also needed to be aware of whether that user-agent was permitted to view any particular entity. In our case, the number of potential entities was huge. In addition, a user might be viewing their own documents, or they might have been given permission by another user to access their documents (for example, by granting power-of-attorney to a third party). Our goal was to provide a generic, pluggable solution. We also wanted to avoid repeated synchronous calls to a separate service to determine whether a given user had access to a given entity. For that reason, we decided to determine the items that a given user was permitted to access — once, during startup — and include the IDs of those items in the user's token. Had we not designed our microservices around Aggregates, this would not have been feasible. The list of potential entities would have been prohibitive. However, because we had invested up-front in using Aggregates, we had already constrained ourselves to looking up any entity using its Aggregate root's ID. Therefore, we only needed to track the Aggregates to which a user-agent was granted access. That list was quite feasible. Tracking changes We may find ourselves tasked with tracking changes to our data. Historically, we'd recorded data changes by implementing a change data capture (CDC) system triggered by low-level database activities. More recently, organizations have tended to move towards capturing changes to business entities, rather than changes to columns in databases. 
So, we're faced with questions: "What data should be in the snapshot, and how will we use it later down the line?" As you might imagine by now, answering these questions will be straightforward if we've designed our data around Aggregates. Any time a change to any entity is made, we record the new version of its Aggregate. This is not only simple; it's also more accurate. Recall that the original purpose of Aggregates is to transactionally enforce invariants. So, each snapshot of the Aggregate will represent the result of any such transactions. Retrieving the changes later on also becomes much more straightforward. If we want to see the history of a User's contact information, we won't need to worry about gathering changes across multiple CDC tables. Instead, we just go straight to the Aggregate's table. Likewise, diffing changes becomes trivial; we simply compare one version of an Aggregate to another. Myriad others This was a non-exhaustive list of the challenges that designing our entities around Aggregates helped us solve. Undoubtedly, some of us will find others (try implementing the Command Query Responsibility Segregation (CQRS) pattern without Aggregates!). When we think about it, it makes sense. Applying the Aggregate pattern forces us to think up-front in a methodical way about which entities belong together. Ultimately, we'll have constrained ourselves to entities within well-defined, atomic groupings with a single access point. We won't wind up with those accidental dependencies between entities, or with the sorts of leaky references that will prevent us from implementing scaling solutions.
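To make the pattern concrete, here is a minimal sketch of a User Aggregate along the lines described above, written in Java with hypothetical names (UserAggregate, PhoneNumber) that are mine rather than the author's. The root entity owns its phone-number value objects, enforces an invariant through a single method, and exposes one identifier that can serve as the shard key, cache key, and message key discussed throughout this article.

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Value object owned by the Aggregate; never referenced directly from outside the root.
record PhoneNumber(String number, boolean primary) {}

// Aggregate root: the only externally addressable entity in the cluster.
public class UserAggregate {
    private final UUID id; // root GUID: doubles as shard key, cache key, and message key
    private final String name;
    private final List<PhoneNumber> phoneNumbers = new ArrayList<>();

    public UserAggregate(UUID id, String name) {
        this.id = id;
        this.name = name;
    }

    // All changes go through the root, so invariants are enforced in one place.
    public void addPhoneNumber(String number, boolean primary) {
        if (primary) {
            // Invariant: at most one primary phone number per user.
            phoneNumbers.replaceAll(p -> new PhoneNumber(p.number(), false));
        }
        phoneNumbers.add(new PhoneNumber(number, primary));
    }

    public UUID id() { return id; }

    public String name() { return name; }

    public List<PhoneNumber> phoneNumbers() { return List.copyOf(phoneNumbers); }

    // Routing by the root's ID keeps sharding trivial: every entity in the
    // Aggregate lands in the shard chosen by this one value.
    public int shard(int shardCount) {
        return Math.floorMod(id.hashCode(), shardCount);
    }
}

Nothing in this sketch is tied to any particular framework; the point is simply that once caching, sharding, publishing a change event, and authorizing access all key off the root's id, the downstream concerns described above become tractable.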
https://medium.com/better-programming/why-your-microservices-architecture-needs-aggregates-342b16dd9b6d
['Dave Taubler']
2020-04-22 14:34:19.480000+00:00
['Microservices', 'Software Development', 'Programming', 'Design Patterns', 'Domain Driven Design']
The Tripod of Environmental Progress
Behavioral change is important, but it’s not enough. We also need innovation and policy changes to get us where we need to be. The three legs of the Tripod of Environmental Progress A tripod is a valuable piece of equipment and fits nicely as a metaphor. With three equal legs, no one leg is more important than the other — if one fails, the entire unit falls to the ground. I didn’t intend on using a tripod as the most fitting metaphor for how we need to think about environmental progress. It came to me as I realized that one approach is not more important than the others. All three need to be present in order to make timely and meaningful progress. I’ve always talked about the importance of innovation and, to a lesser extent, policy. However, unlike most of my peers, I haven’t always thought of behavioural change as being crucial for progress. Take an exchange I had in Belize as a prime example. It was 2015, and I was working at an NGO that specialized in environmental education, particularly tropical ecology. I worked with several intelligent and educated people from different countries, all of whom were passionate about nature, wildlife, and the environment. Around this time was the beginning of the “straw awakening,” where everybody began to realize single-use straws were more problematic than they were convenient. My one colleague decided to take this to the next level, claiming we should also stop using Styrofoam and plastic take-out containers. I agreed, but said that it wouldn’t work — we would need to replace the nonbiodegradable containers with compostable ones in order for this to happen. She disagreed, saying behaviour change was enough. I found it hard to believe that an entire population would carry around metal bowls just in case they might want take-out at some point. We never reconciled our views. A few years later, Styrofoam and single-use plastic containers were banned in Belize, replaced by compostable options. The petty side of me felt pretty smug. I was right. It took material innovation to provide a viable alternative — that was also cost competitive — and then it took a policy directive to ensure it was followed. Fortunately, I’m not one to gloat. Which is a good thing, because I was actually wrong. I couldn’t see it at the time, but the whole process was sparked by behavioural change. Without people forgoing straws and Styrofoam, being vocal about it, and pushing for change, the demand for innovation would not have grown. And without that innovation, the policy would not have been able to come to fruition. Policies need to be rooted in what’s attainable. Without the innovation that led to affordable, biodegradable containers, the policy simply would not work. And without the behavioural change to spark the innovation, it would not have occurred at all. After realizing the importance of this interconnectedness, I began to incorporate this more holistic view into my thinking for solutions. I was giving a talk on sustainability at Nest Coworking in Playa del Carmen, Mexico when I started speaking about these three different aspects as a triangle — three connected pillars necessary for environmental progress. I’m the kind of person that thinks through problems out loud and prefers to give talks based on an understanding of a concept rather than a written speech. This allowed me to explore the idea of a triangle and look at each represented angle. It wasn’t until a few days later that I realized it’s not a triangle. It’s a tripod. 
A tripod of environmental progress, where each leg is no more important than the others and relies on the strength of all three to function. And I think we need to approach all environmental problems with this model in mind if we want to accomplish real change. The First Leg — Behavioral Change "The Catalyst" The first leg is the catalyst because it all starts with behavioral change. Without it, there would be no incentive for change. There will always be innovation and changes in policy, regardless of behaviors, but behavioral change is essential for sparking and eventually enacting positive environmental change. The reason it is essential is twofold. First, as an example, current energy and plastic usage is cheap and easy, providing little incentive to improve upon either solely for economic gain — which means it is left mostly up to consumer choice to provide incentive, at least in this instance. And second, simply put, we are running out of time before irreversible changes take place. This means we need to make fundamental changes to an otherwise profitable system that is integral to everyday life, in a very short time period. Behavioral change is crucial to kickstart the process. The Second Leg — Innovation "The Workhorse" The second leg is the workhorse because the real bulk of the solution will be created by innovation. We need carbon-free energy sources and plastic substitutes that naturally degrade. What we don't need is to ground all planes and never use takeout containers ever again. Nowhere is it written that these things are inherently bad, or that they inherently contain harmful pollutants. That's just the way it currently is because fossil fuels (particularly oil, which is a fuel and the petrochemical base for single-use plastics) have always been relatively cheap and abundant, with little incentive to look elsewhere. When plastic was invented, it was touted as a miracle product, and it is still invaluable in many respects. The problem is that it doesn't biologically degrade. It breaks down into microplastics, getting into the food chain at the lowest levels where it bioaccumulates all the way to the top. In order to be progressive we need to make progress, which means moving forward instead of moving backwards. Grounding planes is the wrong kind of conservation. It is moving back to a time before progress and innovation opened up the world for a large proportion of the population. Innovation will allow us to continue to fly, without the harmful by-products. The Third Leg — Policy Directives "The Enforcer" The third leg is the enforcer because policy ensures that both behavioral changes and innovative technologies are utilized, regardless of which prompts the other. The only way to get to a zero-usage rate for fossil fuels or disposable plastics is to enforce a ban through policy. Behavioral change will never fully get there. Neither will innovation on its own. In a clean, linear world, behavioral change would come first, inciting innovation to create an alternative — but it is the policy to ban the substance that ensures everyone must change what they use and enforces the innovative practices. Tripod Failures When all three legs are not utilized together, there are always shortcomings. Behavioral change is not enough on its own, as the entire population will never buy into full-scale change. Even in an extreme case where 80% of people adopted a change in habits (which is absurdly unlikely), that still leaves 20% who refused. 
Without innovation or policy to force the remaining 20% to change, it will never reach that point. And in the case of something like carbon emissions, an 80% reduction is better than nothing, but we need to get to zero. There are countless examples of progress failing because one leg of the tripod was not supported by the others. Pipeline protests may block a certain source. But without a change in demand and the accompanying innovation, the same amount of oil will still be used from elsewhere. Tesla has provided great innovation: sleek, high-performance cars that everybody wants. Yet they will not make a difference in GHG emissions until they (or competitors) are affordable for all and electric vehicles are legislated as mandatory. The Paris Accord was ambitious, and many thought it had potential to succeed. And then it went the way of all the previous accords. This is because there is no realistic way to reach the targets they set while continuing to enjoy a comparable standard of living. Not until innovation makes it possible and allows for meaningful behavioral change to take place on a global scale. Without all three legs, it crumbles. Good ideas and good intentions will not reach their full potential if they can't be supported on all three fronts. The Success Story There are a few shining examples of each leg of the tripod spurring real change, such as a single-use plastic ban in Rwanda that forced behavioral change through fines. But there is one example that outshines them all — and is very relevant to the current CO2 emissions situation. In 1987, a meeting was held in Montreal, Quebec, Canada. This may seem similar to any number of the countless international meetings about the atmosphere — Rio 1992, Kyoto 1997, Paris 2015 — except that it was fundamentally different. By the 1980s, there was a noticeable depletion of ozone in certain areas of the atmosphere, and it was clear that chlorofluorocarbons, or CFCs, were the driving force. This is where the Montreal Protocol diverges sharply from any of the aforementioned climate change summits. The delegates in attendance accepted the evidence that CFCs were depleting the ozone layer. They then agreed to phase out the use of these compounds, most of which were found in aerosols and refrigerants, and stuck to their word. (Granted, it's much simpler to phase out a product used in a small fraction of the quantities of oil or coal; however, it is still impressive that decisive action was taken.) The recovery was long and slow, because changes in earth systems operate on longer timescales. But thirty years after banning — and upholding — the use of CFCs, there was an increase in ozone over an area that had been badly depleted. It is one of the greatest international success stories in the history of the world. The Montreal Protocol is a perfect example of the tripod of environmental progress. It illustrates how all three legs do not have to follow a linear path of behavioral change-innovation-policy to move forward. The CFC ban was spearheaded by policy. There were some alternatives available to use in aerosols and coolants, but there was not enough demand to push them until the policy came into effect. And then they filled the gap swiftly and easily. This, in turn, made it extremely easy for behavioral change to follow: nobody had to make a conscious choice to change, as all products after the phase-out no longer contained CFCs. It was all innovative alternatives, implemented via strict policies. 
This was also a great example of not waiting on behavioral change to spark the movement — and not placing the onus of change squarely on the consumer's shoulders. It's a great, albeit simplified, example for climate change summits to follow. Ban the product, force innovation, and change behavior by default. Unfortunately, our dependence on oil and other fossil fuels is much deeper and more complicated than our CFC usage. It would be impossible to ban oil at present and maintain even a decent standard of living. Besides all the livelihoods that depend on it, both directly and indirectly, there are countless industries that currently rely on oil to make everything from life-saving medicine to clothes that most of the population wears daily. All of it is impacted by fossil fuels, whether we like it or not. This is not to say that we shouldn't move away from CO2 emissions and towards carbon-free alternatives. It's to say that in this complicated and overdependent state, where fossil fuels are tied into absolutely everything we do, it's going to take a lot of work from all three legs of the tripod to move towards a solution. We need to keep pushing demand for carbon-free alternatives as consumers, we need to keep pushing the limits of where innovation can take us, and we need to implement policies that are strict and effective. It's not about simply feeling good. It's not about blocking a pipeline and feeling like you've done your part, only to drive home in your car powered by oil that was shipped in from overseas. It's about real, measurable progress. We are past the point where we can just be "eco-friendly" and reduce our carbon footprint. We need to halt our emissions. Completely. Using the tripod model is the only way to do this. If we rely solely on behavioral change, or hope for innovation without providing the demand, or expect a policy to take effect without the other two legs to support it, we will never reach the goal. If one leg fails, the entire model fails. That's the tripod of environmental progress.
https://medium.com/sustainability-keys/the-tripod-of-environmental-progress-4340a1c4a83a
['Jordan Flagel']
2019-12-14 23:57:44.250000+00:00
['Sustainable Development', 'Sustainability', 'Environment', 'Policy', 'Climate Change']
5 Quotes That Will Unblock Your Writing and Uncover Your Creativity
1. If You Want to Write Better, You Have to Write More "The more you do, the more you can do." — Annie Clark (Stage name: St. Vincent) One of the best parts of being a writer is that if you want to see how much you've grown as an artist, all you have to do is look at your past work. Without realizing it, you improve with every new project you tackle. What St. Vincent is saying is that the more you practice, the better you'll get. In 2017, I started writing fanfiction. I genuinely believed that the first story I wrote was epic. For two years, I kept writing — short stories and a few longer fics (50,000 to 80,000 words). Two years later, after writing my final fanfic, I decided to read through that first one, and, well, I couldn't. I could only cringe, but that's a good thing. Cringing at past work means you've grown. But you don't grow unless you practice. The more you write, the better your writing will be. In the beginning, practicing when you think you suck is one of the hardest things to do as an artist. When the story is clearer in your head than it is on the page or when you can't transcribe your thoughts just right, it's tempting to want to quit. Unfortunately, you can't fast-forward through the bad. You can't improve without being bad. Keep in mind that this is a learning process and that you're supposed to fuck up. Just make stuff. Even if it's ugly. Even if it makes no sense. Because the more you make, the more you can make, and the better the results.
https://medium.com/the-brave-writer/5-quotes-that-will-unblock-your-writing-and-uncover-your-creativity-ae703e0ecac5
['Itxy Lopez']
2020-11-04 23:03:21.581000+00:00
['Quotes', '5 Tips', 'Writing', 'Writing Tips', 'Creativity']
How O2 Conditioned Its Customers With Emotions
Unlike Huawei, which had nothing to do with influencing my consumer behavior, there are plenty of companies out there that learned to leverage the subconscious to alter our decision-making intentionally. In 2001, Cellnet, a struggling communications network, rebranded as O2 and launched a campaign that got them from last to first position in the market within four years. How did they pull that off? Instead of targeting rational thinking, their ads were all about emotions. First, the core message that O2 marketing geniuses came up with was: "O2: See what you can do." Then, all of their ads featured a watery blue atmosphere with bubbles flowing through it, people smiling, flirting, and floating around. There was also a dog catching a ball, and the whole thing was wrapped up in calm and serene music — well, at least by 2001's standards. Neither in the marketing message nor in the ads was there a single word about the quality of their service or the extent of their coverage — nada. I don't know about you, but I find their music somewhat scary. O2 didn't dominate the market by convincing their prospects but by conditioning them. The conditioning happens by leveraging the way the subconscious mind processes information. The process is called association and takes place in the limbic brain. What you ought to know about this limbic guy is that it's lazy, always avoiding deep, deliberate thinking and aiming for quick shortcuts. As a result, when we perceive something, our brains make an instant judgment of its emotional value. The assessment is then used as a reference to build a future decision: "I see red iron, I feel a burning heat. Bingo! Red iron is dangerous — stay away!" In the case of O2, when prospects watched the ads, their subconscious mind went like this: "I see O2, I feel calm and serenity. Bingo! O2 is awesome — grab it!" When prospects started to look for a network service, they subconsciously recalled the calm and serenity that the O2 ad brought — and voilà, they invested. What's really powerful about exploiting the association process is its effectiveness regardless of how little the audience is paying attention. In fact, your subconscious mind doesn't give a fuck about your rational thinking. According to it, the latter is easily distracted and too slow to rely on — and that's great news for marketers. In several studies, Dr. Robert Heath, who's an associate professor of advertising theory at the University of Bath, proved that ads are more effective when we don't pay conscious attention to them. He explained that the association process is robust precisely because we consume the adverts thoughtlessly. In his words: "We are not aware this happens, which means we can't argue against it." So, when we passively watch an ad that provokes specific emotional stimuli, we get conditioned. That's what made O2 a leader in the network market. That's also what got me to buy Huawei. Even though I didn't watch any ads, it was the association process that persuaded me: "Bingo. Huawei is an underdog. Support it!" If you relate, you probably recall what happens when someone asks us why we bought this product and not the other. We conceive all sorts of rational arguments to justify our decision. I always brag about my phone's speed, quality of pictures, and its below-the-market price. But hey, you and I know that those weren't my real motives.
https://medium.com/better-marketing/how-o2-conditioned-its-customers-with-emotions-e23e66932ea8
['Nabil Alouani']
2020-05-19 16:05:32.500000+00:00
['Marketing', 'Marketing Strategies', 'Business', 'History', 'Psychology']
How to Deal With Climate Change Anxiety
Every time my mind spirals into this world-ending fervor, I try to remind myself of a few things. It’s not all your fault While I have been conforming to a damaging system all my life, it’s statistically impossible for me, an average person, to have caused the burning of fossil fuels that have damaged the ozone. Though I have conformed to a system that perpetuates the earth’s destruction, I wasn’t involved in its formation. It’s been in the works for decades, and until recently, mainstream media has failed to properly educate the public on the magnitude of climate change. You, alone, didn’t cause it. You didn’t know For the majority of my life, I assumed global warming/climate change was one of those big-picture issues that were larger than life. I didn’t realize how big of a deal it was and how much it would impact me. I figured climate change was a problem for government officials who actually had power to change it. I figured they would do something about it, so I didn’t have to. I never thought about how many to-go coffee or boba tea cups I must discard every month. How many times I’ve failed to recycle a can or a box just because it was more convenient to toss it in the trash. Those things may seem small and insignificant in the moment, but they add up over a lifetime. Do what you can Now that I know that conforming to a plastic-wrapped consumerist society leads to major destruction of the planet and a waste of earth’s resources, I can try to make a difference. This can mean any number of simple actions: Taking a reusable water bottle or coffee cup with you wherever you go. Switching from ziploc bags to reusable tupperware. Eating less takeout and packaged fast food. Taking canvas bags to the grocery store or forgoing a bag altogether if it’s a quick trip. Buying your produce without the individual plastic bags. Switching to solid soap and shampoo. Packing reusable silverware or straws. Eating less red meat. Reducing the consumption of fast fashion by instead buying secondhand or thrifted clothing. They’re little things. They don’t take too much effort. And even if they do, it will all be worth it in the end. I understand that going zero waste is impossible for many people due to financial reasons. Alternatives tend to cost more, and millions of people rely on cheaper, convenient options. As a college student, it’s hard to shell out extra money for sustainable brands when I could easily buy the cheaper option and use the money for tuition and rent. But I know my impact is necessary, even if it’s only executed when I can afford it. Encourage others The actions of one person may be minuscule in comparison to the billions of people still unaware, uninterested, or in denial. Every day I feel as though what I’m doing will never be enough. But one by one, more people can help alter the culture we are in and inspire others to do the same. These simple changes may not seem impactful, but it helps to use the same attitude used for years to reconcile the mass consumption of plastic. For years, I convinced myself that my plastic usage was insignificant because it was “just one cup” or “just one bag”. I said this to myself every week for years, and so did billions of others. It’s more than just one bag. Over a lifetime, it adds up. It happened with plastic, and it can happen with the alternative. Just as I saw a zero waste Instagram account and thought, “Wow, maybe I should change my actions too”, I hope you read this and think the same. Maybe you’re already doing it. Maybe you’re doing more. 
Wherever your mindset is at, I hope we can all gradually move towards a society that is considerate of the world we live in, and compassionate towards the place we have been lucky enough to live in. Our earth has given us not only a place to live, but the resources to survive and thrive in it. It’s given us nearly everything it has, and now it’s our turn to give back. It’s our turn to show the earth some of the love it’s shown us every day of our lives.
https://helenaducusin.medium.com/how-to-deal-with-climate-change-anxiety-25cd9aadd361
['Helena Ducusin']
2019-07-12 21:37:01.669000+00:00
['Zero Waste', 'Sustainability', 'Environment', 'Climate Change', 'Global Warming']
Class 6: Designing AI: Part 2
Jennifer Sukis is a Design Director for AI Transformation at IBM, based in Austin, TX. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.
https://medium.com/ai-design-thinkers/class-6-designing-ai-part-3-da2a0cefe1ba
['Jennifer Aue']
2020-01-23 16:44:55.437000+00:00
['Adfclass', 'Big Ideas', 'Design', 'AI', 'Design Thinking']
How Korea Does Contact Tracing
How Korea Does Contact Tracing It’s not an app South Korea never had a lockdown and yet they beat COVID-19. How did they do it? The short answer is that it’s a long answer, covered in depth here. For this article, I’ll cover just one part of the puzzle —contact tracing. This is all based on a report from the COVID Translate Project, and the official Korean government playbook. I recommend reading the whole report, and all the source documents that they’re translating. There’s a lot we can learn from Korea, and news reports miss almost all of it. When it comes to contact tracing, Korea didn’t reinvent the wheel. They just used technology to dramatically speed it up. The Korean system allowed them to trace contacts in as little as 10 minutes, which is unheard of. It’s like everyone else is on bicycles and Korea has a bullet train. Here’s how they did it. Many Data Points Everyone’s talking about apps. Get everyone to install an app and, boom, contact tracing. That’s not how it works. Korea did not rely on an app for contact tracing. Instead of an app, they used a wide constellation of data. Cellular GPS data Credit card transactions Drug purchase records CCTV footage This was a broad, redundant set of data combed over by trained people. Those people would find and message contacts manually, but the system also released anonymized information to the public. Koreans don’t have to wait for an SMS. They can contact trace themselves. Through an API (I assume) the anonymized location data is released to multiple third-party sites. Anyone can then see each case on a map, and when the infected person was there. That means you can check your own route and see if you need to get tested or be careful. There’s no single point of failure here. If the data is missing there are humans, if the humans miss someone there’s crowdsourcing. If you depend on one app then you’re just one buggy deployment away from the plague. Korea has multiple, redundant data points all backed by layers of human bureaucracy and an engaged population. That’s what makes their contact tracing so robust. Faster Tracing Through Technology Contact tracers work with the data, it’s not an automated system Korea still has contact tracers, they still do interviews, they still do paperwork — the basic system would be recognizable to someone from 1918. The difference is speed. Because all the laws have been written, nobody has to call in a favor at the phone company or get a court order. Because the data is published online, no one has to walk around pasting signs. I’m pointing this out because, again, none of this is a magic bullet. None of it is automatic. The Korean method was planned, negotiated and legislated in advance. Then those bureaucracies were well-staffed and funded for years, when nothing was happening. Finally, the underlying public health methodology had to be sound before layering technology on top. It’s an amazing feat, really, and most of it is invisible. This newly developed system utilizes the cooperation of the police departments, Credit Finance Association, 3 major telecommunication companies, and the 22 credit card companies to update travel and transaction information in real-time. In addition, analysis of big data is used to automatically determine travel route and location information per specified time period and reflect it visually on a map. It also analyses locations with a large pocket of infection to identify the spreader within that region, in addition to other analytical information. 
As the process to do the same task, which once took 24 hours, is now reduced to 10 minutes, it takes a heavy load off the epidemiological surveyors and allows for a much faster and efficient way to react to situations when mass infection occurs. Ten minutes for contact tracing is crazy. Sometimes it takes me more than 10 minutes to find my phone. Why Korean Contact Tracing Works This speed is why Korea was able to avoid lockdowns. Regular contact tracing is effective, but if you wait 24 hours the virus has already spread. Regular contact tracing is also really labor-intensive. After about 100 cases you just can't keep up, and you have to shut down entire cities or the whole country. In Korea, however, they can track down infected people in minutes and get them off the streets. So the streets can stay open. Contact tracing is, in effect, their secret weapon. Everybody knows you're supposed to do contact tracing, but only places like Korea and Taiwan can do it this fast. Legal Groundwork All of this is, of course, a privacy landmine. If we enacted a similar system overnight in Sri Lanka, locations would be boycotted or burnt. The system works in Korea because A) the public had hard experience with MERS and B) they democratically created these laws with built-in privacy protections. For example, the locations can only be tracked for a certain period, have to be anonymized, and only certain people can request and use the information. When the pandemic is over, data will be deleted. This wasn't a system designed in a panic; it was deliberated democratically and written into law. Personally, I think that we've already given up our privacy for targeted shoe ads, without any protection at all. A legal system for a clear public health purpose is actually a good use of data. What Other Countries Can Do I hope I've said this enough, but the key insight is speed. All of their technology, all of their bureaucracy is just there to make their response as fast and aggressive as possible. That's the only way to beat an exponential disease. Hence the lesson is not to wait until you've built this perfect system. Do whatever you can right now, work the phones, walk the streets, trace every contact you can. Build the technology and systems in the meantime or, better yet, get the Korean Ambassador on the phone. But understand that they built this system years ago, not in the middle of a pandemic. I think that too many people just glance at Korea and think 'OK, we'll use technology'. Like some magic app, or blockchain, or AI, or some other bullshit. It doesn't work like that. Korea doesn't have a tech solution; they have a solution that uses tech. It's different. The number one thing I've learned from reading the COVID Translate source documents is how diligently they prepared, and how far in advance. It's not just contact tracing: they set up institutions, stockpiled PPE, wrote plans, rewrote plans — they were ready. Technology is just one part of that puzzle, and you need the whole thing for it to make sense. And, honestly, that's not going to happen right now. Right now just do what you can. Next time, do like Korea.
https://indica.medium.com/how-korea-does-contact-tracing-1b2662b5b894
['Indi Samarajiva']
2020-06-05 13:08:52.561000+00:00
['Coronavirus', 'Privacy', 'Korea', 'Health', 'Government']
The Advent of Architectural AI
Parametricism In the world of parameters, both repetitive tasks and complex shapes can be tackled once they are rationalized into simple sets of rules. Those rules can be encoded in a program, automating the time-consuming process of implementing them by hand. This paradigm drove the advent of Parametricism. In short, if a task can be expressed as a set of commands given to the computer, the designer's job becomes communicating those commands to the software while isolating the key parameters that drive the result. Once the rules are encoded, the architect can vary the parameters and generate different possible scenarios: different potential shapes, yielding multiple design outputs at once. In the early 1960s, the advent of parametrized architecture was announced by Professor Luigi Moretti. His project "Stadium N", although initially theoretical, is the first clear expression of Parametricism. By defining 19 driving parameters — among them the spectators' field of view and the sun exposure of the tribunes — Moretti derived the shape of the stadium directly from the variation of these parameters. The resulting form, surprising and quite organic, offers the first example of this new parametric aesthetic: organic in appearance, yet strictly rational as a design process. Bringing this principle into the world of computation was the contribution of Ivan Sutherland a few years later. Sutherland created Sketchpad, one of the first truly user-friendly pieces of CAD software. Embedded at the heart of the software, the notion of the "atomic constraint" is Sutherland's translation of Moretti's idea of the parameter: in a typical Sketchpad drawing, each geometry was translated on the machine side into a set of atomic constraints (parameters). This notion is the first formulation of parametric design in computational terms. Samuel Geisberg, founder of the Parametric Technology Corporation (PTC), would later, in 1988, roll out Pro/ENGINEER, the first software to give users full access to geometric parameters. At its release, Geisberg summed up the parametric ideal perfectly: "The goal is to create a system that would be flexible enough to encourage the engineer to easily consider a variety of designs. And the cost of making design changes ought to be as close to zero as possible." With the bridge between design and computation built by Sutherland and Geisberg, a new generation of "parameter-conscious" architects could thrive. As architects became more and more capable of manipulating their designs through the proxy of parameters, the discipline "slowly converged" toward Parametricism, as Patrik Schumacher explains. In his book "Parametricism, a New Global Style for Architecture & Urban Design", Schumacher demonstrates how Parametricism resulted from a growing awareness of the notion of parameters within the architectural discipline. From the invention of parameters to their translation into innovations throughout the industry, a handful of key individuals shaped the advent of Parametricism. This parametrization of architecture is best exemplified, at first, by the work of Zaha Hadid Architects. Hadid, an Iraqi architect trained in the UK with a background in mathematics, founded her practice with the intent of marrying math and architecture through the medium of parametric design. Her designs are typically the result of rules encoded in a program, allowing for unprecedented levels of control over a building's geometry.
Each architectural move would be translated into a particular tuning of parameters, resulting in a specific building shape. To this day, Hadid's designs are perfect examples of how architectural design can be quantified into arrays of parameters. Her work, however, would not have been possible without Grasshopper, software developed by David Rutten in the 2000s. Designed as a visual programming interface, Grasshopper lets architects easily isolate the driving parameters of a design and tune them iteratively. The simplicity of its interface (Figure 3), coupled with the intelligence of its built-in features, continues to power building designs around the world and has inspired an entire generation of "parametric" designers.
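To make the core idea concrete (encode the rules, then vary the parameters to get a family of shapes), here is a toy sketch in Python. The parameter names and the seating-bowl arithmetic are purely illustrative assumptions, not Moretti's actual 19-parameter model or anything produced by Grasshopper:

```python
import math

def seating_bowl(rows, rake_angle_deg, sightline_offset):
    """Toy parametric model: return (distance, height) for each seating row."""
    points = []
    rake = math.tan(math.radians(rake_angle_deg))
    for row in range(rows):
        distance = 10 + row * 0.8                      # horizontal tread per row
        height = row * 0.8 * rake + sightline_offset   # riser grows with the rake
        points.append((round(distance, 2), round(height, 2)))
    return points

# Varying a parameter yields a family of design variants, not a single shape.
for rake in (20, 30, 35):
    variant = seating_bowl(rows=40, rake_angle_deg=rake, sightline_offset=1.2)
    print(f"rake {rake} deg -> last row at {variant[-1]}")
```

Changing a single number regenerates the whole geometry, which is exactly the workflow that visual tools like Grasshopper later made accessible to non-programmers.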
https://medium.com/built-horizons/the-advent-of-architectural-ai-2fb6b6d0c0a8
['Stanislas Chaillou']
2020-01-03 10:38:08.562000+00:00
['Technology', 'Artificial Intelligence', 'AI', 'Architecture', 'Harvard']
Tips for Making Realistic New Year’s Resolutions
Set SMART goals. You may have heard of SMART goals, which stands for goals that are specific, measurable, achievable, relevant, and time-bound. Being specific means outlining concrete actions or a concrete outcome. If your goal is to focus on self-care in the new year, ask yourself: what does self-care look like for me? Maybe you want to dedicate time to hobbies, or find time to meditate each day. Whatever you choose, how can you make the goal measurable? Next, think of what’s achievable and relevant for you. What type of goal can realistically fit into your current lifestyle? What would this goal mean to you? Lastly, make it time-bound. Is there a deadline you can set for yourself? Know the “why” behind your goals. This connects to the “relevant” aspect of SMART goals. You’re more likely to commit to a goal if it has some personal significance. Instead of seeking goals that you think you should achieve, be intentional with your goal-setting. Set goals around what matters to you and suits your lifestyle. Ask yourself: how would my life change if I achieve this goal? What are the pros and cons of achieving this goal? Start small and reward small wins. First, remember that you don’t have to tackle everything at once. In fact, it’s better to focus on fewer goals — or even just one — instead of pouring energy into too many things. You can set smaller but more realistic goals, or break up your big goals into smaller chunks. As you achieve a small part of your goal, reward yourself. For example, start by making a list on how you plan to achieve that goal. Once that list is complete, give yourself a pat on the back. That’s step one, done! Rewarding the small wins can help motivate you towards bigger wins. Plan for your goals. Goal-setting takes time. Think about your goals, why you want to achieve them, and how you plan to do that. Take time to write down your thoughts and outline a plan. Make a plan for addressing challenges, too. For instance, if you’re trying to eat healthier meals, what will you do if you have a busy week and cannot find time to cook? It’ll be much easier to navigate setbacks if you plan for them ahead of time. Learn and adapt after failure. If you’re going after a goal that you failed to achieve in the past, think about what worked, and what didn’t work that time. Are there things that you can do differently so that you achieve the goal this time? Past failures and present setbacks are great opportunities for learning. For example, if you’re trying to quit smoking and you give in to a cigarette after a hard day at work, that doesn’t mean you’ve failed at your resolution. Ask yourself: what triggered this relapse? What else can I do to relieve stress and avoid smoking in the future? No matter what kind of goal you set, there will probably be obstacles along the way. If you slip, don’t lose hope. Treat yourself with grace and continue trying the next day. Find ways to stay motivated. The renewed energy you feel at the beginning of the new year might start to fade after a while, which is expected. You can renew your motivation by reminding yourself of the “why” behind your goals. If you write down your motivations, you can look at the list whenever you’re feeling stuck or down. Find sources of inspiration. That could mean a vision board, motivational videos online, or talking to other people with similar goals. Seek support. Support can mean talking to other people about your goals, or creating shared goals that you achieve together. 
Both can help hold you accountable to your goals, because you will have someone to cheer you on when things get tough. For example, if you're trying to exercise more, try starting an exercise group with your friends or joining an exercise class — both can be done virtually during the pandemic! Be kind to yourself. You can never go wrong treating yourself with kindness and compassion. Remember that change takes time, and it will be impossible to avoid setbacks. When setbacks do happen, be patient and encourage yourself the way you would encourage a friend. Being hard on yourself for mistakes will only make it more difficult to move forward. Remember to take care of your mental health. This year, as we grapple with the uncertainty and stress of the pandemic, it's become even more important to find healthy coping and stress-relief mechanisms. If you're dealing with increased stress or anxiety, you're not alone. Here are some ideas for mental health goals that could help you cope — or even thrive — in 2021. Work on stress management — stress management is a skill that requires planning, practice, and intentional goal-setting, just like any other skill. Take care of your body — eat well, drink enough water, sleep, exercise, and limit alcohol and other drugs. Feeling good physically will help you feel good mentally. Find time to meditate or take a break — every so often, remember to close your eyes, take a deep breath, and tune out the noise of the world. Take a break from social media if you need to. Stay connected — traditions and celebrations might look different this year, but there are so many ways to connect with friends and family virtually. Seek professional help if you need it — medical professionals are here to help. Recognizing when you're overwhelmed or need extra help is key to taking care of your mental health. At Alpha, our team of licensed providers can help treat you for mental health and other conditions online. We provide online consultations, suggest treatment, and prescribe medication that can be shipped straight to your door. Check out our website to get started or learn more. Alpha Medical is a telemedicine company with a mission to make healthcare accessible, convenient, and affordable for all.
https://medium.com/in-fitness-and-in-health/tips-for-making-realistic-new-years-resolutions-e815e708be51
['Alpha Medical Team']
2020-12-28 22:43:49.345000+00:00
['Mental Health', 'Health', 'Goal Setting', 'New Year', 'New Year Resolution']
Five Reasons You Should Never Stop Asking: “Why?”
Five Reasons You Should Never Stop Asking: “Why?” Forget about finding an answer, the power is in the question. Photo by niklas_hamann on Unsplash I was that annoying kid constantly asking: “But why?” Now I’m an annoying adult still seeking reasons for everything that happens to me. Whether it’s a failed relationship, a creative challenge, life’s serendipitous course, or a big question like, “Why do bad things happen to good people?” I’m always searching for a reason and a resolution. Often, it’s a pointless exercise. Unless we can fall back on faith, I doubt the world will ever gain an intellectual understanding of why life deals out hardship and blessings on a seemingly arbitrary basis. This doesn’t, however, stop my ruminations. Would my life be lighter if I could just glide along without the constant analysis? Perhaps. But my reflective nature has opened me up in important ways; moved me closer to the person I want to be. And now, after years of navel-gazing, I’ve decided that finding an answer is less important than posing a question in the first place. I’ve discovered five benefits to asking questions, even when answers aren’t delivered, neatly tied up with a big red ribbon: 1. Questions foster personal growth After years of watching people move through the world, I’ve decided that lives conducted without self-examination inevitably lead to self-righteousness. Because we are the sum total of our personality, value system and experiences, we often respond to situations instinctively and in ways that are comfortable and familiar. And this means that human beings seem hard-wired to repeat mistakes. It makes me think of a quote, commonly attributed to Albert Einstein: “The definition of insanity is doing the same thing over and over again and expecting a different result.” I believe the way to avoid repeating a mistake is through questions. We need to constantly ask ourselves: “What role did I play in this drama? Were my actions ego-driven? How can I act in a kinder, smarter or more authentic manner in the future?” If we don’t question our own motivations and actions, how can we remain flexible, learn the lesson and then grow? I don’t think we can. “It is easier to judge the mind of a man by his questions rather than his answers.” — Pierre-Marc-Gaston, duc de Lévis (1764–1830) 2. Questions spark creativity For creative people, the important question is: “Why not?” Hint: There’s never a valid reason not to try. Creativity means fearlessly questioning and discarding the obvious and safe answers. It is as about finding authentic and unique perspectives: a different prism through which to view the world. As a writer, the creative process involves me making leaps that others don’t, and communicating my truth in a way that touches the reader. It is impossible to be imaginative without first posing outrageous questions. And the wonderful freedom to creativity is that there is no wrong answer, so even if you don’t see yourself as an artist capable of creating a masterpiece, you still have the freedom to follow daring questions wherever they lead. 3. Questions make us more empathetic The best way I know to hold myself accountable for my actions is to consider an alternate point of view. I try to ignore my need to be right and reflect on how other people have experienced an event or a relationship. How did my actions impact their lives and their happiness? Sometimes the answers are uncomfortable, and it’s certainly easier to hang on to the belief that I always hold the moral high ground. 
But it is only through asking confronting questions that I can see different points of view, and become more open to forgiveness and compassion. 4. Questions keep us young The challenge of watching my mother and father age has resulted in an endless questioning of why some people’s agency diminishes as their physical wellbeing ebbs, and others are able to stay engaged with life. I think the most important differentiator is the extent to which people remain connected to the world: stay curious and continue to probe. I’m not saying that remaining interested is easy. Some people’s strength is depleted just by getting through their daily physical battles, which leaves little emotional or intellectual energy left for contemplation. It’s natural for weariness to make looking beyond one’s physical limitations a challenge. Even so, some people push harder than others. I’m in awe of people who see learning as a lifelong journey and use their later years to increase their knowledge, people who are open to new experiences and new perspectives despite their advancing age. None of this is possible unless these wise old souls — my heroes — continue to ask questions of themselves and of the world. 5. Questions encourage humility If you want proof of the power of questions, look no further than philosophy, which is sometimes called the Art of Questioning. People value philosophers not because their work provides easy solutions to imponderable questions, but because their intellectual probing teaches us so much about the human condition. Philosophers show us that questions matter not just because they may provide answers, but also because the nature of our questions determines what answers we receive. Learning that life doesn’t always deliver neat solutions has taught me humility. Despite my intellect, hunger for knowledge, and relentless need for control; asking questions has forced me to accept that I have no automatic right to understanding. I also have no right to closure. If I hadn’t continued along my path of asking inconvenient, unsettling and often unfathomable questions, I may have had a much easier life, but not necessarily a richer one.
https://medium.com/the-ascent/five-reasons-you-should-never-stop-asking-why-11594b6cc44c
['Clare Loewenthal']
2020-02-13 20:43:37.640000+00:00
['Self', 'Relationships', 'Creativity', 'Psychology', 'Life']
Plotly and NVIDIA Partner to Integrate Dash and RAPIDS
Plotly and NVIDIA Partner to Integrate Dash and RAPIDS plotly Follow May 19 · 4 min read We’re pleased to announce that Plotly and NVIDIA are partnering to bring GPU-accelerated Artificial Intelligence (AI) & Machine Learning (ML) to a vastly wider audience of business users. By integrating the Plotly Dash frontend with the NVIDIA RAPIDS backend, we are offering one of the highest performance AI & ML stacks available in Python today. This is all open-source and accessible in a few lines of Python code. On the Enterprise side, Dash Enterprise Kubernetes (DEK)now ships with out-of-the-box support for horizontally scalable GPU acceleration through RAPIDS and Dask. Once you’ve created a Dash + RAPIDS app on your desktop, get it into the hands of business users by uploading it to DEK. No IT or devops team required 🙅‍♀️. NVIDIA’s CEO Jensen Huang mentioned some of the early fruits of this partnership in the first minute of his GTC 2020 Kitchen Keynote last week, and today we’re more formally announcing our partnership. A typical business intelligence (BI) dashboard or analytical application combines graphs, maps, and controls to provide interactive access to queries and AI models running on large, complex datasets. Any organization delivering goods or services at scale will have millions of records to analyze, spread out in time and space and across various more-abstract dimensions. Building a performant application on top of such a dataset usually requires a multi-team, multi-week effort and results in a complex, multi-tiered architecture. New technologies like Dash and RAPIDS are changing this landscape, empowering individual Python developers to easily and quickly build analytical applications that are more performant than their complex counterparts. This COVID-19 Dash + RAPIDS app aggregates over 300M rows in GPU memory (source code on Github) Plotly’s Dash is an open-source framework developed by Plotly, which enables developers to build interactive, data-rich analytical web applications in pure Python, with no Javascript required. Traditional “full stack” analytical application development is done in teams with some members specializing in back-end/server technologies like Python, some specializing in front-end technologies like React and some specializing in data science. Dash provides a tightly-integrated back-end and front-end all controlled from simple Python functions. This means that data science teams producing models and analyses no longer need to rely on back-end specialists to expose these models to the front-end via APIs, and no longer need to rely on front-end specialists to build user interfaces to connect to these APIs. RAPIDS is a collection of open-source libraries developed with NVIDIA to accelerate Python data science workloads by running them on GPUs rather than CPUs. RAPIDS provides massively sped-up, drop-in replacements for the most popular Python data science libraries, such as Pandas, Scikit-Learn and NetworkX. RAPIDS shortens the hypothesis-query-validation loop for data scientists by quickly running queries over full disaggregated datasets on local and remote workstations, rather than having to wait for results, or spending time aggregating or sampling datasets, or loading them into remote databases to get to the desired level of query performance. This means that with RAPIDS, data scientists can be less reliant on data engineering or database administration specialists, who are usually responsible for these aggregation and extract-transform-load (ETL) tasks. 
Using Dash and RAPIDS together, individual data scientists no longer need to compromise between the iron triangle of performance, aggregation and time-to-delivery. They can now work independently across the entire analytical stack, from raw data to user interface, to quickly deliver applications to their users. A Dash + RAPIDS application is typically less than a thousand lines of easy-to-read, pure-Python code to create a smoothly-interactive interface for users, and can run on single or multi GPU-powered nodes. These interfaces can transparently aggregate data for big-picture overviews at a whole-business scale, while allowing for intuitive slice, dice, and drill-down operations that operate just as fast, on the same raw dataset. Whereas in the past, in order to get good performance on a single node, users could only access pre-aggregated data at a state or county, or daily or hourly level, applications can now re-aggregate data on the fly at any level, or grant access to individual records, to uncover fine-grained patterns that are normally obscured by aggregates. RAPIDS can also be used to quickly train, retrain and execute machine-learning models on these same large, fine-grained datasets. The RAPIDS team has just published an article detailing how they built a Dash + RAPIDS app that allows interactive exploration of a 300-million-row dataset: one row per person living in the US. Plotly and the RAPIDS team will be publishing more articles in the coming days and weeks showcasing some of the fruits of our collaboration, so we encourage you to follow Plotly on Medium or Twitter (and RAPIDS on Medium or Twitter) to catch those updates. In the meantime, contact us if you have any questions about Dash or would like to learn more about the partnership!
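To give a sense of what this looks like in practice, here is a minimal sketch of a Dash app backed by a cuDF dataframe. It assumes a CUDA-capable machine with RAPIDS installed and uses the Dash 1.x import style that was current when this piece was written; the file name and column names ("records.csv", "state", "age") are illustrative placeholders, not the 300-million-row census app linked above:

```python
import cudf                        # RAPIDS GPU dataframe, a drop-in for pandas
import dash
import dash_core_components as dcc
import dash_html_components as html
import plotly.express as px
from dash.dependencies import Input, Output

gdf = cudf.read_csv("records.csv")          # data stays resident in GPU memory

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(
        id="state",
        options=[{"label": s, "value": s}
                 for s in sorted(gdf["state"].unique().to_pandas())],
        placeholder="Filter by state",
    ),
    dcc.Graph(id="age-hist"),
])

@app.callback(Output("age-hist", "figure"), [Input("state", "value")])
def update_histogram(state):
    subset = gdf if state is None else gdf[gdf["state"] == state]
    counts = subset["age"].value_counts().sort_index().to_pandas()  # aggregated on the GPU
    return px.bar(x=counts.index, y=counts.values,
                  labels={"x": "age", "y": "people"})

if __name__ == "__main__":
    app.run_server(debug=True)
```

Everything above runs in one Python process: the callback filters and aggregates on the GPU, and Dash handles the front end, which is the point of the pairing.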
https://medium.com/plotly/plotly-and-nvidia-partner-to-integrate-dash-and-rapids-8a8c53cd7daf
[]
2020-05-19 21:00:17.572000+00:00
['Business Intelligence', 'Data Visualization', 'Artificial Intelligence', 'Machine Learning', 'Gpu']
I Don’t Like Brené Brown Anymore
Wrapping up my first year, however, a common sentiment among myself and my fellow teachers is that yes, SEL is incredibly important and a step in the right direction for the students in any school, but what about the teachers? We struggle a lot, too, and with the expectations that arise day in and day out, from behavior management to submitting lesson plans, grading on time, being at meetings day in and day out, and an overall craziness in the classroom and the school building, it’s common to feel like SEL applies to the kids, but not to the adults. I don’t think it needs to be said that adults need self-care, rest, and mental and emotional well-being too, but the culture in education often doesn’t allow teachers to put their own well-being first. For people that don’t know, the culture of accountability for teachers tends to be very punitive across the nation — it usually isn’t implemented in a way that makes educators better at their jobs. Instead, it scares teachers into compliance and encourages teachers to put on a “dog and pony show” every time another adult walks into the classroom. Let me preface that deep down, I love Brené Brown, including two of her TED Talk classics — “The power of vulnerability” and “Listening to shame”. Teddy Roosevelt’s quote on the “man in the arena” was a quote I first heard from Brené Brown. At some point in college, Brené Brown’s research on vulnerability encouraged me to take a leap of faith and be vulnerable about my family and personal struggles. Of course, it wasn’t just Brené Brown’s TED Talks that did it for me — but her compelling research and anecdotes were one of many catalysts in my personal story. When I became a teacher, however, in a very difficult inner-city school and school district, professional development sessions would constantly play YouTube videos from Brené Brown to prioritize social-emotional learning for students and for us to be nicer to students and be as understanding to them as possible. Again, all of this was a good thing, and it’s much better to have an SEL approach than not to. But none of these sessions ever acknowledged or emphasized how teachers felt, even when we were hit by a pandemic in Covid that upended everyone’s lives. A lot of my peers had intense pressure by the district to document multiple phone calls a week to students, teach online at unusual times, and go to several meetings a day for compliance. As any school year naturally wears on, teachers get more exhausted, worn out, and run down, and I certainly did my first year. Very few people, however, check in on the emotional well-being of teachers. We were expected to put on a fresh face every day, keep our heads down, teach, do our jobs to the best of our capacity, and put in a lot of hours after work to get grading and planning done. No one ever cared about how we were doing emotionally — because we were paid professionals, it was implied that SEL was for students, not us. Perhaps the implication is that teachers already have social-emotional learning in their toolbox, but teachers are human beings too, not machines, and doing our jobs shouldn’t come at the expense of constantly being run-down, constantly being overworked and overstressed. As a teacher, it’s easy to internalize that everything is your fault. Having a disengaged student is your fault, having a student that doesn’t show up to school is your fault, having a parent angry at you is your fault, and having disruptive behavior and lackluster test scores are your fault. 
It’s easy to get into that mindset because a good teacher thinks about what is in his or her control and takes an initiative not to blame the kids for forces outside of their control. It was only when the school year ended that I realized how chronically and constantly stressed I was, how much pressure I felt every single day and how I had to compromise my own emotional well-being for the sake of my students. And yes, I love my students and wanted to do the best I possibly could for them, but like the emotional well-being and mental health of students are important, so was ours as teachers. There’s an unspoken code that parents have to sacrifice for their kids, and that the well-being, needs, and priorities of their children supersede their own. It’s the same in education, and sometimes, like trying to be the perfect mother or father only leads to insurmountable shame, so does trying to be a perfect teacher. There’s simply a culture that puts students, students, and students first, and you come second to that. That culture isn’t wrong, but just neglects the fact that the best way to take care of students is to take care of yourself. There are just a lot of things you have to learn for your own well-being as a teacher that they don’t tell you when you’re training to become a teacher: No one tells you to put boundaries between home and work and have a cutoff time for when you have to put the work away. No one tells you that you’re going to fail more than you ever imagined and let your students down. No one tells you that you’re never going to catch up on everything you want to, from documents to grades. No one tells you that you’re never going to grade every assignment or paper. No one tells you you will skip lunch some days and that calling parents is a time-consuming process can lead you to stay after school for an inordinate amount of time. You know who tells you these things? Other teachers — veteran teachers once you go into the classroom, and your own experience. And my frustration at constantly being overwhelmed, constantly being extremely stressed, and constantly feeling like a failure led to a sort of learned helplessness throughout the year. I would displace a lot of my frustrations on the Brené Brown videos they showed at professional development sessions because I felt like I needed help, that my difficulties in the classroom were a message to the world that I couldn’t do it on my own, that the continual bags under my eyes meant that I couldn’t wait for the weekend on most weeks. The problem wasn’t Brené Brown’s research itself or her message. In a couple months, I’m sure that my love for her work will be rekindled — it was just how her work was used in a context that seemed to ignore the emotional needs of myself and my fellow teachers. Students could at any time be sent to our SEL specialist, who was great and incredible to work with. But if we teachers at any time wanted to go into her office and needed to decompress, it would have been frowned upon. After all, didn’t we have stuff to do? Didn’t we have meetings and more time demands and responsibilities? If we ever acknowledged how not well we were doing sometimes, wouldn’t that have distracted from the students’ needs? Right now, 44% of teachers leave the profession within their first five years of teaching. I don’t know any other profession with a similar turnover or attrition rate. I left every day feeling beat down and exhausted. 
Sure, I knew it wasn’t sustainable, but I was just trying to get to the next day, let alone survive the whole year. Teachers are just expected to do their jobs and not complain about it, and I love my job and I love my kids, but that doesn’t mean I’m not constantly overwhelmed by it. There are probably a lot of districts easier to teach in than mine, and I had to tell myself: “Ryan, you’re tough. You knew what you were signing up for.” Just like supporting the mental health of kids means supporting the mental health of parents, we have to support students by supporting the mental health of teachers. I cringe whenever I see a Brené Brown video now because I associate it with compassion fatigue — trying to care so much about the students but not myself. Defeating shame and embracing vulnerability have become privileges for other people, but not me or my fellow teachers. Maybe the problem is just me and a malfunction in my own mindset, but I was validated when a lot of other teachers felt the same, that teachers were treated as robots that were just expected to shut up and do their jobs. In that same context, we had to watch Brené Brown videos about the role of vulnerability in education. The emotional toll of the job definitely took a physical toll, too — I knew teachers who got sick all the time and we had days where a quarter or a third of our teachers were out. From the bottom to the top, education must be reformed to better prioritize the emotional well-being of teachers. The problem was never Brené Brown or her work, but the misalignment of priorities in education that has to change.
https://olibeahalsx9197262.medium.com/i-dont-like-bren%C3%A9-brown-anymore-b21f200fcf71
[]
2020-12-01 00:00:29.617000+00:00
['Self', 'Mental Health', 'Health', 'Education', 'Leadership']
11 Visualization Examples to Practice Matplotlib
11 Visualization Examples to Practice Matplotlib A comprehensive practical guide Photo by Wesley Tingey on Unsplash Data visualization is very important in the field of data science. It is not only used for delivering results but also an essential part in exploratory data analysis. Matplotlib is a widely-used Python data visualization library. In fact, many other libraries are built on top of Matplotlib such as Seaborn. The syntax of Matplotlib is usually more complicated than other visualization libraries for Python. However, it offers you flexibility. You can customize the plots freely. This post can be considered as a Matplotlib tutorial but heavily focused on the practical side. In each example, I will try to produce a different plot that points out important features of Matplotlib. I will do examples on a customer churn dataset that is available on Kaggle. I use this dataset quite often because it is a good mixture of categorical and numerical variables. Besides, it carries a purpose so the examples constitute an exploratory data analysis process. Let’s first install the dependencies: import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline Matplotlib consists of 3 layers which are the Backend, Artist, and Scripting layers. The scripting layer is the matplotlib.pyplot interface. The scripting layer makes it relatively easy to create plots because it automates the process of putting everything together. Thus, it is the most widely-used layer by data scientists. We will read the dataset into a Pandas dataframe. cols = ['CreditScore', 'Geography', 'Gender', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'IsActiveMember', 'EstimatedSalary', 'Exited'] churn = pd.read_csv("/content/Churn_Modelling.csv", usecols=cols) churn.head() (image by author) The dataset contains some features about the customers of a bank and their bank account. The “Exited” column indicates whether a customer churned (i.e. left the bank). We are ready to start. 1. Number of customers in each country This one is pretty simple but a good example for bar plots. plt.figure(figsize=(8,5)) plt.title("Number of Customers", fontsize=14) plt.bar(x=churn['Geography'].value_counts().index, height=churn.Geography.value_counts().values) (image by author) In the first line, we create a Figure object with a specific size. The next line adds a title to the Figure object. The bar function plots the actual data. 2. Adjusting xticks and yticks The default settings are usually appropriate but minor adjustments might be necessary in some cases. For instance, we can increase the fontsize and also adjust the value range of y-axis. plt.xticks(fontsize=12, rotation=45) plt.yticks(ticks=np.arange(0, 7000, 1000), fontsize=12) Adding these two lines of codes to the previous plot will produce: (image by author) 3. Changing the default figure size The default figure size is (6,4) which I think is pretty small. If you don’t want to explicitly define the size for each figure, you may want to change the default setting. The rcParams package of matplotlib is used to store and change the default settings. plt.rcParams.get('figure.figsize') [6.0, 4.0] As you can see, the default size is (6,4). Let’s change it to (8,5): plt.rcParams['figure.figsize'] = (8,5) We can also change the default setting for other parameters such as line style, line width, and so on. I have also changed the fontsize of xtick and yticks to 12. plt.rc('xtick', labelsize=12) plt.rc('ytick', labelsize=12) 4. 
Creating a simple histogram A histogram is used to visualize the distribution of a variable. The following syntax will create a simple histogram of customer balances. plt.hist(x=churn['Balance']) (image by author) Most of the customers have zero balance. When zero balances are excluded, the distribution is close to the normal (Gaussian) distribution. 5. Customizing the histogram The two essential features that define a histogram are the number of bins and the value range. The default number of bins is 10, so the value range will be divided into 10 equal bins. For instance, the first bin in the previous histogram is 0–25000. Increasing the number of bins is like having more resolution; we get a more accurate overview of the distribution, up to a point. The value range is defined by taking the minimum and maximum values of the column. We can adjust it to exclude the outliers or specific values. plt.hist(x=churn['Balance'], bins=12, color='darkgrey', range=(25000, 225000)) plt.title("Distribution on Balance (25000 - 225000)", fontsize=14) (image by author) The values that are lower than 25000 or higher than 225000 are excluded and the number of bins increases from 10 to 12. We now see a typical normal distribution. 6. Creating a simple scatter plot Scatter plots are commonly used to map the relationship between numerical variables. We can visualize the correlation between variables using a scatter plot. sample = churn.sample(n=200, random_state=42) #small sample plt.scatter(x=sample['CreditScore'], y=sample['Age']) (image by author) It seems like there is no correlation between age and credit score. 7. Scatter plots with subplots We can put multiple scatter plots on the same Figure object. Although the syntax is longer than in some other libraries (e.g. Seaborn), Matplotlib is highly flexible in terms of subplots. We will go through several examples that use subplots. The subplots function creates a Figure and a set of subplots: fig, ax = plt.subplots() We can create multiple plots on the figure and identify them with a legend. plt.title("France vs Germany", fontsize=14) ax.scatter(x=sample[sample.Geography == 'France']['CreditScore'], y=sample[sample.Geography == 'France']['Age']) ax.scatter(x=sample[sample.Geography == 'Germany']['CreditScore'], y=sample[sample.Geography == 'Germany']['Age']) ax.legend(labels=['France','Germany'], loc='lower left', fontsize=12) (image by author) 8. Grid of subplots The subplots do not have to be on top of each other. The subplots function allows creating a grid of subplots by using the nrows and ncols parameters. fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1) (image by author) We have an empty grid of subplots. In the following examples, we will see how to fill these subplots and make small adjustments so that they look nicer. 9. Rearranging and accessing the subplots Before adding the titles, let's put a little space between the subplots so that they look better. We will do that with the tight_layout function. We can also remove the xticks in between and only keep the ones at the bottom. This can be done with the sharex parameter. fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, figsize=(9,6), sharex=True) fig.tight_layout(pad=2) (image by author) There are two ways to access the subplots. One way is to define them explicitly and the other way is to use indexing.
# 1 fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1) first subplot: ax1, second subplot: ax2 # 2 fig, axs = plt.subplots(nrows=2, ncols=1) first subplot: axs[0], second subplot: axs[1] 10. Drawing the subplots We will create a grid with 2 columns and add a bar plot to each one. fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(8,5)) countries = churn.Geography.value_counts() products = churn.NumOfProducts.value_counts() ax1.bar(x=countries.index, height=countries.values) ax1.set_title("Countries", fontsize=12) ax2.bar(x=products.index, height=products.values) ax2.set_title("Number of Products", fontsize=12) (image by author) 11. Creating a 2-D histogram 2D histograms visualize the distribution of a pair of variables, giving us an overview of how the values of two variables change together. Let's create a 2D histogram of the credit score and age. plt.title("Credit Score vs Age", fontsize=15) plt.hist2d(x=churn.CreditScore, y=churn.Age) (image by author) The most populated group consists of customers who are between 30 and 40 years old and have credit scores between 600 and 700.
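For convenience, here is a consolidated, runnable version of several of the snippets above. It only assumes the Kaggle churn file (Churn_Modelling.csv) is in the working directory with the columns listed at the start of the article:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

cols = ['CreditScore', 'Geography', 'Gender', 'Age', 'Tenure', 'Balance',
        'NumOfProducts', 'IsActiveMember', 'EstimatedSalary', 'Exited']
churn = pd.read_csv("Churn_Modelling.csv", usecols=cols)

# Defaults used throughout the article.
plt.rcParams['figure.figsize'] = (8, 5)
plt.rc('xtick', labelsize=12)
plt.rc('ytick', labelsize=12)

# 1-2: bar chart of customers per country with adjusted ticks.
plt.figure()
plt.title("Number of Customers", fontsize=14)
counts = churn['Geography'].value_counts()
plt.bar(x=counts.index, height=counts.values)
plt.xticks(rotation=45)
plt.yticks(ticks=np.arange(0, 7000, 1000))

# 5: customized histogram of balances.
plt.figure()
plt.title("Distribution on Balance (25000 - 225000)", fontsize=14)
plt.hist(x=churn['Balance'], bins=12, color='darkgrey', range=(25000, 225000))

# 10: side-by-side bar plots sharing the y-axis.
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(8, 5))
countries = churn.Geography.value_counts()
products = churn.NumOfProducts.value_counts()
ax1.bar(x=countries.index, height=countries.values)
ax1.set_title("Countries", fontsize=12)
ax2.bar(x=products.index, height=products.values)
ax2.set_title("Number of Products", fontsize=12)

# 11: 2-D histogram of credit score vs. age.
plt.figure()
plt.title("Credit Score vs Age", fontsize=15)
plt.hist2d(x=churn.CreditScore, y=churn.Age)

plt.show()
```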
https://towardsdatascience.com/11-visualization-examples-to-practice-matplotlib-4fe4c7dd665c
['Soner Yıldırım']
2020-11-07 14:47:15.991000+00:00
['Data Science', 'Data Visualization', 'Python', 'Machine Learning', 'Artificial Intelligence']
If you want to be a writer, you have to write.
If you want to be a writer, you have to write. Some advice to light a fire under your ass. One of my freelance clients texted me to ask if a young woman he knows can “shadow” me. She is interested in being a writer when she gets older. My first thought: Why would anyone want to shadow a writer? Second: How do you shadow a writer and what could you possibly learn from it? Will she sit and watch as I agonize over how to begin a piece? And how would I actually be able to write anything with someone else watching my every move or interrupting me to ask questions? The answer: I wouldn’t. So, unless I sit there and communicate what is going through my head, every second, as the words hit the screen what would this young woman actually get out of this experience? Does she want to know about the voices in my head as I write; the ones who tell me I suck and question why I am even doing this in the first place? It is not as if I have an office that I go to every day either. I work from home in our multipurpose library/music room/guest bedroom/office. Most of my days are spent wearing athleisure and drinking copious amounts of coffee. There are always dishes piled in the sink and sometimes there are children in the next room watching SpongeBob at full volume. I might stop to eat something or fit in some exercise but there are days when I don’t even shower until three in the afternoon. Every writer has their own routine, but if you want to be a writer, what can you actually learn by observing another writer at work? I don’t know about anyone else, but I prefer to write in solitude. If I were to sit and watched someone else write, I don’t know if I would learn very much about being a writer. As John Green said, “Writing is something you do alone. It’s a profession for introverts who want to tell you a story but don’t want to make eye contact while doing it.” That’s me! As a teacher of writing, I am perfectly able to articulate my writing process, and have, for the benefit of my writing students, but ultimately my best advice to my students is that they have to find their own process; they have to do what works for them. In my own experience, sitting around saying “Someday I want to be a writer,” but then not writing anything is counterproductive. Sit your butt down and put pen to paper or finger to keyboard. Stop thinking about what it takes to be a successful writer and just write. I think as creative people we are afraid to say “I am a writer” or “I am an artist” until someone validates our talent. Writers as a whole are a self-deprecating bunch and for whatever reason, a lot of us don’t feel like we have earned the right to say “I am a writer” unless we are the next Hemingway. And, let’s face it, none of us are that. Here are some routines of famous writers that I have begged, borrowed, and stolen from and incorporated into my own process: Stephen King tries to get 6 pages a day written (pretty straightforward and simple and its good to have goals). tries to get 6 pages a day written (pretty straightforward and simple and its good to have goals). Haruki Murakami gets up at 4 a.m. works for 5 or 6 hours, runs 7 miles in the afternoon, swims 1500 meters, reads, listens to music and then goes to bed at 9. He is essentially in training and he claims repetition is the key to making these habits stick. (Wow! I get up at 5 a.m. most days and go to the gym for an hour in the afternoon, but I am not that hardcore.) gets up at 4 a.m. 
works for 5 or 6 hours, runs 7 miles in the afternoon, swims 1500 meters, reads, listens to music and then goes to bed at 9. He is essentially in training and he claims repetition is the key to making these habits stick. (Wow! I get up at 5 a.m. most days and go to the gym for an hour in the afternoon, but I am not that hardcore.) Susan Sontag would tell all her friends “Don’t call me in the morning.” (that is my rule too). would tell all her friends “Don’t call me in the morning.” (that is my rule too). E.B. White could work fairly well among ordinary, everyday distractions. He said, “A writer who waits for ideal conditions under which to work will die without putting a word on paper.” (I have 5 kids, ordinary, everyday distractions are my norm.) could work fairly well among ordinary, everyday distractions. He said, “A writer who waits for ideal conditions under which to work will die without putting a word on paper.” (I have 5 kids, ordinary, everyday distractions are my norm.) Ernest Hemingway believed in morning pages and that you should always stop for the day not when you have run out of things to say, but when you know what is going to happen next. He found it hard to wait until the next day to start writing again and could think of nothing else until he could sit down and get back at it. (Maybe that is why his first 3 marriages failed.) believed in morning pages and that you should always stop for the day not when you have run out of things to say, but when you know what is going to happen next. He found it hard to wait until the next day to start writing again and could think of nothing else until he could sit down and get back at it. (Maybe that is why his first 3 marriages failed.) John Steinbeck felt that writers need to abandon the idea that they are ever going to finish. He said it is better to lose track of your progress and just write one page a day. Then when you are finished it will be a pleasant surprise. He would write freely and as rapidly as possible without editing. He believed that to rewrite in process enables writers to create an excuse for not going on because it interferes with the flow and rhythm and creates a disassociation from the material. (I kind of agree, but I can still rewrite in process sometimes without throwing in the towel.) felt that writers need to abandon the idea that they are ever going to finish. He said it is better to lose track of your progress and just write one page a day. Then when you are finished it will be a pleasant surprise. He would write freely and as rapidly as possible without editing. He believed that to rewrite in process enables writers to create an excuse for not going on because it interferes with the flow and rhythm and creates a disassociation from the material. (I kind of agree, but I can still rewrite in process sometimes without throwing in the towel.) FINALLY, Bernard Malamud said “You write by sitting down and writing. There’s no particular time or place — you suit yourself, your nature… eventually, everyone learns his or her own best way.” (AMEN!) And that pretty much sums it up. But here is one last bit of advice from my own experience. If you want to write, you HAVE to set your fears aside and write. But don’t just write, PUBLISH! Set up a blog, publish on Medium, write an article on LinkedIn and then push it out on your Facebook page, Instagram account, on a Pinterest board, Tweet it with the hashtag #amwriting. 
Writing takes an insane amount of courage and not everyone is going to like what you have to say, So be it. Say it anyway! Take a deep breath and hit the publish or the post button. Wherever you can share your creation, share it, because that is the only way you will get better, and then you CAN, without any self-doubt say, “I am a writer.”
https://medium.com/the-bad-influence/if-you-want-to-be-a-writer-you-have-to-write-31b8371f6bda
['Kristina Martin']
2020-03-19 18:44:01.549000+00:00
['Change', 'Publishing', 'Writers On Writing', 'Creativity', 'Writing']
How to Avoid Tedious CSS and XPath Queries in End-to-End and Component Tests
How to Avoid Tedious CSS and XPath Queries in End-to-End and Component Tests Five tips to improve your front-end tests Photo by Caleb Jones on Unsplash Component tests and end-to-end tests have some things in common. Here's a brief overview of the two types of tests we're looking at: Component tests (sometimes just referred to as unit tests) are cheap to develop and easy to execute and debug because they isolate failures. Besides, they can usually run without a browser. Example frameworks: Jest, Mocha. End-to-end (e2e) tests simulate real user scenarios, which allows you to spot errors quickly. These tests take longer to execute and can be more complex than unit tests. On the other hand, end-to-end tests provide higher confidence that your application works as intended. Example frameworks: Protractor, Selenium. One of the most common things to do in such tests is to find some element in the DOM and do something with it (e.g., click it and expect some output). If you've worked with JavaScript-based Angular and React before, then you're probably familiar with querying DOM elements using either jQuery or vanilla JavaScript. Most web developers should be somewhat familiar with CSS. However, for testing or scraping, there's also XPath, which offers some interesting features not available in CSS. While XPath isn't directly supported in JavaScript, you can use it in component and e2e tests as well as in any modern web browser, like Google Chrome. This power comes with a caveat. XPath can do anything CSS can do plus more: e.g., access the parent of an element. This is often much appreciated, as XPath supports many kinds of queries that CSS simply cannot express. But the advantage can turn into a disadvantage: using XPath often leads to long and complex queries that wouldn't be possible in CSS. In this piece, I want to take a look at common problems with XPath and CSS queries. After that, we'll explore some ways to avoid these problems. Tip: Browsers like Google Chrome allow you to copy and run CSS and XPath queries in the developer tools.
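As a neutral illustration of the trade-off (the article's stack is JavaScript, but the same ideas apply to any WebDriver client), here is a small sketch using Selenium's Python bindings; the page URL and the selectors are invented for the example:

```python
# Sketch only: assumes chromedriver is on PATH; URL and selectors are made up.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/signup")

# CSS: concise and familiar, but it cannot walk *up* the DOM tree.
submit = driver.find_element(By.CSS_SELECTOR, "form#signup button[type='submit']")

# XPath: can match on text and then step back to a parent or ancestor.
email_row = driver.find_element(
    By.XPATH, "//label[text()='Email']/ancestor::div[contains(@class, 'form-row')]"
)

submit.click()
driver.quit()
```

The XPath query is more powerful here, and also noticeably harder to read and maintain, which is exactly the tension the tips below try to resolve.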
https://medium.com/better-programming/how-to-avoid-tedious-css-and-xpath-queries-in-end-to-end-and-component-tests-732c5b86f2f6
['Ali Kamalizade']
2020-02-07 03:04:13.229000+00:00
['Software Engineering', 'Tdd', 'Programming', 'React', 'JavaScript']
Exploratory Data Analysis(EDA) From Scratch in Python
# Basic Data Exploration In this step, we will perform the operations below to check what the data set comprises. We will check the following: — the head of the dataset — the shape of the dataset — the info of the dataset — the summary of the dataset 1. The head function shows the top records in the data set. By default, Python shows you only the top 5 records. 2. The shape attribute tells us the number of observations and variables in the data set. It is used to check the dimensions of the data. The cars data set has 303 observations and 13 variables. 3. info() is used to check the information about the data and the datatype of each respective attribute. Looking at the output of head and info, we can see that the variables Income and Travel Time should be of float data type but are stored as object, so we will convert them to float. Also, there are some invalid values like @@ and '* ' in the data which we will treat as missing values. 4. The describe method helps us see how the data is spread for the numerical values. We can clearly see the minimum value, mean, different percentile values, and maximum value. Handling missing values We can see that we have missing values in several columns. There are various ways of treating missing values in a data set, and which technique to use depends on the type of data you are dealing with. Drop the missing values: In this case, we drop the rows with missing values from those variables. If there are very few missing values, you can drop them. Impute with the mean value: For a numerical column, you can replace the missing values with the mean. Before doing so, it is advisable to check that the variable doesn't have extreme values, i.e. outliers. Impute with the median value: For a numerical column, you can also replace the missing values with the median. If you have extreme values such as outliers, it is advisable to use the median approach. Impute with the mode value: For a categorical column, you can replace the missing values with the mode, i.e. the most frequent value. In this exercise, we will impute the numerical columns with median values, and for categorical columns we will drop the missing values. Handling duplicate records Since we have 14 duplicate records in the data, we will remove them from the data set so that we keep only distinct records. After removing the duplicates, we will check whether they have actually been removed from the data set. Handling outliers Outliers, being the most extreme observations, may include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations. We generally identify outliers with the help of a boxplot, and here the box plot shows some of the data points outside the range of the rest of the data. Box-plot before removing outliers Looking at the box plot, it seems that the variable INCOME has outliers present. These outlier values need to be treated, and there are several ways of treating them: Drop the outlier values Replace the outlier values using the IQR #Boxplot after removing outliers Box-plot after removing outliers Bivariate Analysis When we talk about bivariate analysis, it means analyzing 2 variables. Since we have numerical and categorical variables, there are different ways of analyzing these variables, as shown below: Numerical vs.
Numerical 1. Scatter plot 2. Line plot 3. Heatmap for correlation 4. Joint plot Categorical vs. Numerical 1. Bar chart 2. Violin plot 3. Categorical box plot 4. Swarm plot Two Categorical Variables 1. Bar chart 2. Grouped bar chart 3. Point plot If we need to find the correlation between the numerical variables, we can use a heatmap of the correlation matrix. Normalizing and Scaling Often the variables of the data set are on different scales, i.e. one variable is in the millions and another is only in the hundreds. For example, in our data set, Income has values in the thousands while Age has just two digits. Since these variables are on different scales, it is tough to compare them. Feature scaling (also known as data normalization) is the method used to standardize the range of the features of the data. Since the range of values may vary widely, it becomes a necessary preprocessing step when using machine learning algorithms. In this method, we convert variables with different scales of measurement onto a single scale. StandardScaler normalizes the data using the formula (x - mean) / standard deviation. We will be doing this only for the numerical variables. ENCODING One-hot encoding is used to create dummy variables that replace the categories of a categorical variable with one feature per category, represented as 1 or 0 based on the presence or absence of that category in the record. This is required because machine learning algorithms only work on numerical data, which is why we need to convert the categorical columns into numerical ones. get_dummies is the method that creates dummy variables for each categorical variable.
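The original post's code cells did not survive the copy, so here is a compact sketch of the steps described above using pandas and scikit-learn. The file name, the invalid tokens, and the column handling are assumptions based on the description, not the author's exact code:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# "cars.csv" and the invalid tokens are assumptions based on the description above.
df = pd.read_csv("cars.csv", na_values=['@@', '* '])

print(df.head())        # top 5 records
print(df.shape)         # (observations, variables)
df.info()               # datatype of each column
print(df.describe())    # spread of the numerical columns

# With the invalid tokens mapped to NaN, pandas can usually infer numeric dtypes directly.
num_cols = df.select_dtypes(include=np.number).columns
cat_cols = df.select_dtypes(exclude=np.number).columns

# Impute numerical columns with the median; drop rows with missing categorical values.
df[num_cols] = df[num_cols].fillna(df[num_cols].median())
df = df.dropna(subset=cat_cols)

# Remove duplicate records.
df = df.drop_duplicates()

# Cap outliers using the IQR rule.
q1, q3 = df[num_cols].quantile(0.25), df[num_cols].quantile(0.75)
iqr = q3 - q1
df[num_cols] = df[num_cols].clip(lower=q1 - 1.5 * iqr, upper=q3 + 1.5 * iqr, axis=1)

# Scale numerical features and one-hot encode categorical ones.
df[num_cols] = StandardScaler().fit_transform(df[num_cols])
df = pd.get_dummies(df, columns=list(cat_cols))
```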
https://medium.com/swlh/exploratory-data-analysis-eda-from-scratch-in-python-8c12c2673aa7
['Ritika Singh']
2020-09-04 06:46:58.670000+00:00
['Data Science', 'Statistics', 'Python', 'Data Analysis', 'Exploratory Data Analysis']
5 Steps To Understanding Generators In Python
1. What are generators? Generators are a special type of function that produce their results lazily instead of building everything up in memory at once. Rather than reserving a huge area of memory, a generator lets you use memory in a more rational way. It does this by making it easy to create iterators: instead of returning a single value, a generator function can produce a whole series of results, one at a time. In other words, generators produce a stream of values. To picture this, think of a simple machine that reads the code sequentially and, on each request, executes just enough work to hand you the next piece of information. Image created by the author 2. When is it used? It should be noted that generators are themselves iterable. I find them most valuable for cutting processing time when we want to perform many operations in sequence, for example when reading multiple variables or long lists through one common interface. Basically, the speed-up goes hand in hand with simpler code, and generators are a good alternative when a program would otherwise become cumbersome and slow. In big data projects, where it matters to be able to produce one value at a time from a defined sequence, they reduce memory usage and make detailed analysis more comfortable; such projects are where their real purpose is best observed. 3. What are the advantages and disadvantages? Advantages Generators are good at splitting work into many small steps. They are effective when analyzing big data: printing results quickly, performing multiple scans, and reacting to changes without rebuilding everything. They make many kinds of data analysis more flexible by easing the memory load. Disadvantages In fact, the things that seem disadvantageous are mostly issues you need to keep in mind while using them. We cannot perform indexing and slicing operations on a generator as we can on lists. Also, you can only consume a generator once, which can matter when a reusable iterator is needed. That is why generators are called lazy: they only yield values when explicitly asked. As iteration moves forward, the previous value is discarded, and when the entire generator is exhausted, everything is released from memory. If you are not used to this style of operation, using generators in the wrong project will not bring any advantage. 4. How is it applied? First of all, it is useful to underline two new concepts: next(): the built-in we call on a generator after we create it; a call to next() is necessary to start (and advance) the execution of our generator. yield: the statement in Python that lets a generator function return a value without destroying its state. Unlike a normal function, the yield statement takes the place of return inside the definition (def), and it behaves differently from return: execution pauses at yield and resumes from there on the next call. For example, check out the code block below. There are two different ways to create a generator: a generator "function" and a generator "expression". Generator "function" A generator function yields from inside a 'for' loop; with next(), the next element of the loop is produced piece by piece. If we extend the statement above, a function that yields several values is formed as follows: But there is also a way to freeze the iteration at the first query, and in this way an action can be taken after StopIteration.
For this we can pass multi_generate() directly to next(): wrapping the call inside next()'s parentheses produces only the first value as output.
Let's also look at how generators are used to read files. First we wrap the file in a generator function that hands it out line by line; we name it "FileGenerator". The file we want to read line by line is one of the Covid-19 data sets, "covid19_Confirmed_dataset.csv". We open it with the open() function, selecting the Latin-alphabet encoding ISO-8859-1 in the arguments. With that in place, we can read the first three lines of the file separately with the next() command.
Generator "expression": As an example, let's create a generator object for the sum of the exponential products of the numbers 1 through 10. It is possible to do this with a single statement; the difference from a list is the brackets, since a list comprehension uses square brackets while a generator expression uses parentheses. If we then pass the variable we created to the sum() function, we get the total result. (A short sketch of both the file generator and the expression follows below.)
5. Conclusion: The performance case for generators in big data work is still valid today. Thanks to the advantages they provide, it is possible to build architectures in which several generators are chained together. I expect more Python developers to add them to their toolbox from now on. At the same time, I know it will be hard to learn to build pipelines with generators if setting up and managing a more cumbersome system has become an indispensable kind of ritual. Sometimes it is necessary to dive in, especially if the data tells you to!
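As promised above, here is a minimal sketch of the two missing examples; the dataset path comes from the text, while the interpretation of "exponential products" (taken here as squares) is an assumption.

def FileGenerator(path):
    # Yields the file one line at a time instead of loading it all into memory.
    with open(path, encoding="ISO-8859-1") as source:
        for line in source:
            yield line

rows = FileGenerator("covid19_Confirmed_dataset.csv")
print(next(rows))  # first line
print(next(rows))  # second line
print(next(rows))  # third line

# Generator expression: parentheses instead of a list's square brackets.
squares = (number ** 2 for number in range(1, 11))
print(sum(squares))  # 385

Note that sum() drains the generator, so calling sum(squares) a second time would return 0; this is the single-use behaviour described in the disadvantages above.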
https://medium.com/python-in-plain-english/5-steps-for-understanding-generators-in-python-2349ea4c5497
['Kurt F.']
2020-11-23 10:12:30.733000+00:00
['Programming', 'Python Programming', 'Python', 'Software Development', 'Software Engineering']
Configure Jitsi — Open source web conferencing solution on AWS with Terraform
Configure Jitsi — Open source web conferencing solution on AWS with Terraform Terraform is hugely popular nowadays: it lets you create and manage infrastructure as code, and that code can be stored in version control. In this article, we look at how to set up our own instance of this application in any AWS region. You can check out the project on the Jitsi GitHub repository. Jitsi Meet is a free, fully encrypted, open-source video conferencing solution that provides high-quality video and audio without a subscription or the need to create an account. Jitsi is a set of open-source projects that let you build a secure video conferencing system for your team; the core components of the Jitsi project are Jitsi VideoBridge and Jitsi Meet. There are free and premium services built on Jitsi projects, such as HipChat, Stride, Highfive, and Comcast. Jitsi Meet is the heart of the Jitsi family: an open-source JavaScript WebRTC application that lets you build and deploy scalable video conferences. The tool provides features like: sharing of desktops, presentations, and more; inviting users to a conference via a simple, custom URL; editing documents together using Etherpad; and trading messages and emojis while video conferencing, with integrated chat.
Let's start our exercise! Pre-Requisites To Creating Infrastructure on AWS Using Terraform: We require AWS IAM API keys (access key and secret key) with permissions to create and delete all the AWS resources involved. Terraform should be installed on the machine; if it is not, you can download and install it from here.
Amazon Resources Created Using Terraform. Networking Module: an AWS VPC with a 10.0.0.0/16 CIDR. Multiple public subnets that are reachable from the internet, which means traffic from the internet can hit a machine in a public subnet. Multiple private subnets that are not reachable from the internet directly without a NAT Gateway. An Internet Gateway attached to the VPC. Public and private route tables. A NAT Gateway. Associations between the VPC subnets and their route tables. Server Module: 1. Auto-scaling group for the ECS cluster with a launch configuration. 2. ECR container registry. 3. ECS cluster with task and service definitions. 4. ECS container with EC2 as the container instance, which runs docker-compose and pulls Docker images from Docker Hub. 5. A load balancer distributing traffic between the containers.
Let's talk about the Terraform deployment of the VPC. Please follow this article for more detail on the Networking module that creates our VPC, subnets, and other networking assets. Now let's talk about the Terraform deployment of ECS. Autoscaling Group: An autoscaling group is a collection of EC2 instances whose count is determined by scaling policies. We will create an autoscaling group using a launch template. Before we launch container instances and register them into a cluster, we have to create an IAM role for those instances to use when they are launched. I used a special filter on the AMI that finds the ECS-optimized image with Docker preinstalled. EC2 m4.xlarge instances will be launched. If we want our instances to join the named ECS cluster we created, we have to put that information into user_data; otherwise they will be launched into the default cluster. The basic scaling information is described by the aws_autoscaling_group parameters. With the autoscaling group set up, we are ready to launch our instances and database.
Elastic Container Service. ECS is a scalable container orchestration service that allows us to run and scale dockerized applications on AWS.

resource "aws_ecs_cluster" "this" {
  name = "${var.environment}_cluster"
}

The cluster name is important here, as we used it earlier when defining the launch configuration; this is where the newly created EC2 instances will live. To launch a dockerized application we need to create a task — a set of simple instructions understood by the ECS cluster. The task is a JSON definition that can be kept in a separate file. The family parameter is required, and it represents the unique name of our task definition. The last thing that binds the cluster to the task is an ECS service. The service guarantees that we always have a given number of tasks running:

resource "aws_ecs_service" "this" {
  name            = "${var.environment}"
  task_definition = "${aws_ecs_task_definition.this.id}"
  cluster         = "${aws_ecs_cluster.this.arn}"
  load_balancer {
    target_group_arn = "${aws_lb_target_group.this.0.arn}"
    container_name   = "web"
    container_port   = "${var.container_port}"
  }
  launch_type                        = "EC2"
  desired_count                      = 1
  deployment_maximum_percent         = 200
  deployment_minimum_healthy_percent = 100
}

Now that we have discussed all the resources Terraform creates, it's time to run the Terraform script. Clone this terraform repository, go to the directory, and run:

cp sample.terraform.tfvars terraform.tfvars

Update the variable values in the terraform.tfvars file. Note: Jitsi recently updated its security policy, so strong passwords have to be supplied. To generate them, run:

./gen-passwords.sh

This command updates the values of the following variables: JICOFO_COMPONENT_SECRET, JICOFO_AUTH_PASSWORD, JVB_AUTH_PASSWORD, JIGASI_XMPP_PASSWORD, JIBRI_RECORDER_PASSWORD, JIBRI_XMPP_PASSWORD. These variables are used in the task definition file. Thank you for reading; if you have anything to add, please send a response or add a note!
https://medium.com/appgambit/configure-jitsi-open-source-web-conferencing-solution-on-aws-with-terraform-b4191ba04d2b
['Prashant Bhatasana']
2020-07-23 12:08:05.194000+00:00
['Jitsi', 'Terraform', 'Amazon', 'Automation Testing', 'AWS']
Whoops Apocalypse
The problems of Normal today are embedded in history. This is not unique. It's worth looking at how professional and scientific systems have had embedded problems that affect how they try to create positive change but are trapped by foundational myths that divert them into supporting negative norms. I'll talk of four systems here: Anthropology, Psychology, Neuroscience and Service Design. The first three have long histories of embedded problems. The last one is modern but has foundations in the first three. To some extent, this post is a warning to Service Design about what it needs to think about to prevent or minimise what went wrong with Anthropology, Psychology and Neuroscience. Good people today can get trapped by errors made in the past and it's worth considering that problem. Anthropology Anthropology is the study of human societies and cultures. It grew out of the European-centered Age of Exploration — the realisation that not merely were there different lands and people but their social and political arrangements were radically unlike the modes of thinking in Europe. Anthropology was, however, a fellow traveller with Imperialism and Darwinism. Anthropologists were specifically employed to understand other societies in order to enable colonial control. The work in Africa on tribal systems and mapping of political power was used to find the social levers to successfully manipulate compliance to colonial power with minimum force. Anthropologists were also caught up in the imperialist reading of Darwinism. As with what happened with the mathematics of Normal, Darwinism shifted from research and understanding of natural diversity to a measurement and ranking of worth anchored to a sense that White Maleness was optimum. Research looped back on itself to justify the racist viewpoint that Black people were somehow below White people in a fake historical hierarchy of human development. This may seem merely historical but the Pentagon's Human Terrain System shows that the past never quite goes away. Psychology I talk about the history of psychology a lot in Post Normal workshops as it is the most relevant science to the digital and service design audiences I most often encounter. As with anthropology, the desire to scientifically measure and rank people was a goal. As with anthropology, understanding compliance was also an early interest. However, those Victorian and early 20th century obsessions are not the crucial problem with psychology. It's Eugenics. The social darwinist nightmare of controlling human development to get the right humans, the best humans. Who, as with anthropology and darwinism, are a specific class of White people. Eugenics is embedded in psychology because its early leaders were leading eugenicists. Galton (who I talk about in Post Normal workshops) was both. Psychology was how worth was tested and ranking was clarified for further action. As with anthropology and imperialism, the science itself might be in some ways ethical but it allowed its methods to be used to justify deeply unethical actions. Neuroscience Neuroscience is the most modern of the three sciences with embedded problems but, again, it has its roots in late 19th Century scientism. With roots in both anatomy and phrenology, neuroscience has problems with naming certain sections of the brain as being specific to certain behaviours (a bias which gets miniaturised with genetics and DNA). It also has never quite escaped the gravity of Christian epistemology about the Mind/Body division.
That the person is the mind inside the brain and that the body, and the world it exists in, is corrupt and weak. The wrongness of human bodies and the solitary perfection of the mind contaminates ideas about Transhumanism and Disability. Embodied and extended cognition are modern attempts to reflect upon the value of the body and the people around us socially, but neuroscience, especially with tools like fMRI, still wants to talk about the brain as special, unique and divisible from bodies and societies. Service Design Finally, Service Design. This is a modern practice (maybe 40 years old). It is concerned with human-centered design of systems that deliver services. My fear about Service Design is twofold. Firstly, the research and tools it uses to be human-centered come from anthropology, psychology and neuroscience. Service Design is at the nexus of three sciences that have serious historic problems with oppression, exclusion and violence. It seems to have no real controls in place to prevent those problems flowing into future design processes. Secondly, Service Design has been rapidly assimilated into government. The use of Service Design in the digital design of existing and new government processes and products is simply assumed now. The history of how entwined anthropology became with imperialism shows that this may not be a good thing. Government is often about power and privilege in allocating resources to those without either: powerful people commanding, privileged people seeking gain, powerless and unprivileged people requiring compliance. Service Design is in service to power (as anthropology was) and that has serious consequences. Doing good with Service Design presumes a lot. The danger is that the design systems and processes, infused with anthropology, psychology and neuroscience, are unable to prevent their use for bad. This is already visible in behavioural design, nudge and gamification becoming attentional design, addiction and dark patterns. Recognising the embedded problems in the sciences we use today and consciously creating tools and processes that prevent or minimise deliberate and accidental design for bad are two goals for today that we might consider. Here's Eugene O'Neill to finish. There is no present or future - only the past, happening over and over again - now. Reading This is a polemic so I haven't bothered with too many references. Here are some books that matter to me tho. They are linked (especially Biased, Invisible Women and End Of Average) by how much of the research that is foundational to accepted sciences, like anthropology, psychology and neuroscience, has deep roots in racism, sexism and ableism.
Biased by Jennifer Eberhardt
Invisible Women by Caroline Criado-Perez
Indigenous Research Methodologies by Bagele Chilisa
Neuropolis by Robert Newman
How Emotions Are Made by Lisa Feldman Barrett
The End Of Average by Todd Rose
Anthropology by Peter Metcalf
Bodies by Susie Orbach
https://acuity-design.medium.com/whoops-apocalypse-5a1ae85ad6fb
['Alastair Somerville']
2019-08-14 11:03:36.168000+00:00
['Neuroscience', 'Service Design', 'Govdesign', 'Design', 'Anthropology']
The evolution of consumer behavior in the digital age
“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.” -Bill Gates One of the best examples of how we both overestimate and underestimate changes in the future is the evolution of consumer behavior throughout this century. Take a minute and imagine the world we were in 10 years ago (it’s hard to believe 2007 was 10 years ago). Facebook was still competing with MySpace for traffic, Amazon was primarily known for selling books, and the iPhone was just released. Back in those days, the way we shopped for products was drastically different from the way we shop today. Most of us still trusted brick-and-mortar stores, we didn’t have price comparison services, and we were at the mercy of large corporations for discounts. Remember a time before Netflix and iTunes, when we used to rent VHR tapes from Blockbusters? Good times. Video via The Onion. How did we go from that “primitive” world of shopping to the consumer experience we have today in the digital age? More importantly, where are we going? That will be the topic of this article. Today, we will examine three primary paradigm shifts in the marketing world in the last 10 years due to the emergence of digital technologies and platforms such as Facebook, Amazon, and smartphones. More specifically, we will talk about how, in just 10 years, we went from a linear, retail-focused model (the “first moment of truth”), to today’s iterative, digital-centric model of customer behavior (the “accelerated customer decision journey”). But the goal of this article is not merely to explore the history of marketing frameworks. It’s also to project the future of marketing and consumer behavior. Based on the three paradigm shifts I mentioned, we will take a glimpse into the next decade to see how we, as business owners, can adapt to this new and ever-shifting world. Paradigm 1: First Moment of Truth Imagine yourself as a customer in the year 2005. You just walked into a grocery store to buy a bottle of shampoo. You look down the aisle and see over 10 shampoos of different brands and types, and you need to make a decision on which one to purchase. You may consider several factors when making this decision — the design of the label, the position of that shampoo on the shelf, and the detailed explanation on the label. The decision process you are going through right now is what marketers at P&G call the “First Moment of Truth.” Coined in 2005, the “Moment of Truth” model is one of the most celebrated marketing frameworks because it so accurately captures the customer’s decision process when buying a product (First Moment), experiencing a product (Second Moment of Truth), and eventually becoming loyal to the brand. You can see an overview of these “moments” in this graphic: As the shampoo story illustrates, the original “Moment of Truth” model does not incorporate digital technologies or the internet into customers’ shopping behavior. For the purpose of this article, it serves as a starting point. Now let’s add digital to the mix. Paradigm 2: Zero Moment of Truth + Customer Decision Journey Let’s go back to the shampoo story again, but in the year 2011. Now, as a customer, you have sufficient access to smartphones and the internet to go beyond the shelf when evaluating the product. 
In fact, you might not be at the physical store at all since ecommerce stores like Amazon and Walmart.com have also become significantly more popular, serving as viable alternatives to the physical retail store. Therefore, when you need something like a shampoo, you are unlikely to go directly to the store to purchase, but rather go online to search something like “the best shampoo in the world” — and that’s the Zero Moment of Truth. Coined by Google in 2011 (the entire ebook is linked below), the Zero Moment of Truth (ZMOT) describes how digital channels such as social media and search influence the customer decision journey. The significance of ZMOT is that it is perhaps the first marketing framework that emphasized the importance of digital channels as a critical part of the customer decision journey. This encouraged companies to start considering “buzzwords” such as SEO and SEM (search engine marketing). Whereas ZMOT signaled a turning point of the digital age in marketing, a new model popularized by McKinsey in 2009 gave marketers an even more up-to-date way to think about the new, iterative customer journey created by new technologies. Under the traditional marketing mindset, customers behave in a funnel. They start by becoming aware of the product and brand. Then, they eventually go through several steps to purchase a product and become loyal customers. In each of the stages, customers may “drop off” in the funnel. The marketer’s job here is to prevent these drop-offs by optimizing their messaging in each step of the funnel. However, with an enormous amount of decision power and information unlocked by smartphones and the internet, customers no longer interact with companies in the linear manner described above. Instead, the modern customer’s decision process is much more iterative. Customers today hop between different stages of the funnel between multiple companies, thanks to the power granted by the internet. Their decision journey looks closer to something like this: From trigger, to initial consideration set, to active evaulation, to moment of purchase. Then we do it all again in the ongoing postpurchase experience and loyalty loop. Image via McKinsey. The significance of this new McKinsey model is that it no longer views the customer’s journey as their interaction with one individual company. Instead, it introduced the idea of a “consideration set”: a basket of products that customers are considering that may meet their needs. This “consideration set” model showed companies the importance of providing their customers with enough information for them to make the purchase decision, instead of “plugging the funnels.” This framework, combined with ZMOT, is the most popular marketing framework of this decade. It has been evangelized by countless online courses, and used by businesses ranging from Fortune 500 companies to small ecommerce stores. However, even these two models are being challenged by digital acceleration. Paradigm 3: The Accelerated Loyalty Journey One of the biggest problems of the two previous frameworks is that they are too slow. Nowadays, customers are bombarded with thousands of pieces of information every single day over the internet, and their attention span has deteriorated rapidly. What this means to marketers is that a customer’s evaluation cycle is significantly crunched from a stage of multiple days or hours to a matter of minutes or seconds. 
If your product does not convince customers to buy right now, you have lost that customer’s attention forever, and they will probably not come back no matter how much you bombard them with ads. This simple fact led McKinsey to update their customer decision journey to an updated model, illustrated below: From classic customer decision journey to the new “accelerated loyalty journey.” Image via McKinsey. The significance of this new “accelerated loyalty journey” is that it doesn’t just focus on providing information to help customers evaluate the company’s products. It also emphasizes the importance of delivering that information in the shortest amount of time to the most targeted customer segments. This allows marketers to get these customers to take immediate action and convert. In other words, having the information is not enough. You need to push that information aggressively in front of the customers at the exact moment their needs are generated. Enabled by advanced technologies such as machine learning and artificial intelligence, more and more companies have started to conduct this type of “hyper-speed targeting” to their audience. This marks a new age of marketing automation and acceleration. Where are we going? Now that we have examined the three shifts in marketing paradigms in the last 10 years, it’s time to talk about where we are going next, and what we can do as modern marketers to stay ahead of these trends. While these shifts in marketing may seem very different, the underlying theme is the same: customers are becoming more powerful in making their own purchasing decisions. Gone is the time when we could say, “advertise it and they will come.” Now is the time when we have to make products WITH and FOR a specific customer audience in order for them to become a loyal customer. As the information available to customers proliferate, this trend will only accelerate in the next decade, making “customer-centric” marketing even more important for companies to succeed. So as marketers, here are some key steps we should take to prevail in this new digital age: Co-create our brand and product with customers: it’s time to talk to our customers face-to-face to understand what they need, what drives them, and how we can best serve them. It is time to stop hiding behind the facade of digital ads. We must develop genuine conversations and relationships with these people that we truly care about. Invest in employees that really care about your cause: the key to modern marketing is to be authentic and genuine. You cannot achieve true authenticity until everyone in your company deeply cares about what you build and believe in your values. Only by hiring these people who are aligned with your identity can you build deep connections with your customers. Invest in technology to accurately target your audiences: the only way to make sure you send the right message to the right audience at the right time is using technology. Machine learning and artificial intelligence platforms are getting cheaper and easier to use every day for non-technical marketers. Take advantage of these technologies and elevate your marketing to the next level. Experiment constantly: the great thing about digital platforms is that they are inexpensive and easy-to-use. This opens up opportunities for a large volume of testing and experimentation in your company. So leverage these new testing opportunities to figure out the best way to reach the people you want to reach. 
Do you have any other ideas on what else we can do to stay ahead of the upcoming paradigm shifts in marketing? Comment below!
https://medium.com/analytics-for-humans/the-evolution-of-consumer-behavior-in-the-digital-age-917a93c15888
['Bill Su']
2018-06-08 19:24:15.029000+00:00
['Business', 'Marketing', 'Digital Marketing', 'Startup', 'Critical Thinking']
Ideas
Matthew Donnellon is a writer, artist, and sit down comedian. He is the author of The Curious Case of Emma Lee and Other Stories.
https://medium.com/bard-and-quill/ideas-d1e8ed11b0df
['Matthew Donnellon']
2019-10-09 21:53:40.727000+00:00
['Ideas', 'Poetry', 'Writing', 'Creativity', 'Creative Writing']
Getting into Deep Learning, what is Computer Vision? Let’s create a face recognition app!
Star Wars + Han Solo + Artificial Intelligence! Recently, I read an article about someone who used Artificial Intelligence to insert Harrison Ford into the movie "Solo: A Star Wars Story". The technique uses deep learning: it analyzes a large collection of photos of a given person, in this case Harrison Ford, builds a database of those images in multiple positions and poses, and then uses that database to intelligently perform an automatic face replacement on a source clip. Hello New World! What started as an interest in learning Data Science and Python libraries and their algorithms to solve business problems became a desire to understand Machine Learning and Computer Vision and to recognize faces in images! Every time I read an article about Data Science, I kept running into terms that were new to me. So I decided to take on the challenge of learning more about this amazing world of Artificial Intelligence. But what is Machine Learning? We may not see it, but Machine Learning is in our phones and in the products we use today. Tagging people in pictures sounds familiar? Video recommendation systems? Big companies like Netflix and Facebook are using this technology in all their products. Machine Learning consists of a set of algorithms that parse data, learn from it, and then apply that knowledge to make intelligent decisions. You feed the computer a set of rules and tasks, and it will find a way to complete those tasks. Computer Vision, on the other hand, is a branch of Artificial Intelligence that allows machines to capture, process, and analyze real-world images and videos to extract meaningful, contextual information from the physical world. Nowadays, Computer Vision is used in fields such as healthcare, agriculture, banking, automotive, and industry, among others. Computer Vision technology is helping healthcare professionals accurately classify conditions or illnesses, potentially saving patients' lives by reducing or eliminating inaccurate diagnoses and incorrect treatment. Farmers are beginning to adopt computer vision to monitor the health of crops. Banks are using image recognition applications that apply machine learning to classify documents, extract data, and authenticate them. Last night I read an article about Artificial Intelligence which said that a school in Hangzhou, China, is using facial recognition to monitor the behavior of its students. The technology classifies the students based on their range of emotions, from antipathy to happiness. The system also cross-checks the faces of all students against the school database to mark attendance and has the ability to predict if a student is feeling sick. So, after hours of getting the right Python libraries, installing video codecs, and finding some videos, I will show you the results of running face recognition on a video file. I used a video of Brad Pitt and Jennifer Aniston, and a picture of each to teach the machine how to recognize them in the video: What an incredible result! I never imagined machines could learn to do this, but the reality of Computer Vision and AI today is that machines still need human help for better results.
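The article doesn't show the code it used, so here is a minimal sketch of one common way to do this in Python, using the open-source face_recognition library together with OpenCV; the image and video file names are placeholders, not the author's actual files.

import cv2                  # pip install opencv-python
import face_recognition     # pip install face_recognition

# One reference photo per person is enough to build a known encoding.
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file("brad_pitt.jpg"))[0],
    face_recognition.face_encodings(face_recognition.load_image_file("jennifer_aniston.jpg"))[0],
]
known_names = ["Brad Pitt", "Jennifer Aniston"]

video = cv2.VideoCapture("clip.mp4")
while True:
    ok, frame = video.read()
    if not ok:
        break                      # end of the video file
    rgb = frame[:, :, ::-1]        # OpenCV reads BGR; face_recognition expects RGB
    locations = face_recognition.face_locations(rgb)
    for encoding in face_recognition.face_encodings(rgb, locations):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        for name, matched in zip(known_names, matches):
            if matched:
                print("Found", name)
video.release()

In practice you would also draw each name onto the frame and skip frames to keep things fast, but the loop above is the core of the approach.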
https://medium.com/datadriveninvestor/getting-into-deep-learning-what-is-computer-vision-lets-create-a-face-recognition-app-cb7349d05d64
['Viridiana Romero Martinez']
2018-10-25 04:17:11.172000+00:00
['Artificial Intelligence', 'Machine Learning', 'Computer Vision', 'Opencv']
Organum Hydraulicum
Athanasius Kircher was a 17th-century German Jesuit scholar who produced many publications on machinery. One of them, Musurgia Universalis, was written in 1650 and described ways of creating automatic music. The machine shown above, a water organ, is from Book 9 of that work. Water and air are introduced at the right side into a vessel termed a camera aeolis. The flow of water displaces air, which exits into the vertical pipes that make up the musical organ. Which notes are produced, and when, is controlled by the rotation of a barrel. The water not only displaces air for the organ but also drives the barrel. The barrel carries protrusions that engage the key levers as it turns, and the levers in turn work the keys. There is a separate mechanism shown in the upper left of the illustration.
https://medium.com/creative-automata/organum-hydraulicum-e296850bc5
['Paul Fishwick']
2016-12-20 17:43:52.750000+00:00
['Automaton', 'Engine', 'Music', 'Science']
How to edit flash fiction
Writer’s Blog How to edit flash fiction Make your short story the best it can be. So I don’t feel right about teaching you flash fiction without providing a real example. The flash fiction used throughout this article is one that I wrote a little while ago. I want to walk you through the editing process with an actual example that you can use to take your first draft from something that needs a lot of work to something that you are happy with. Right below is the finished version of my flash fiction. Finished for now at least, I try not to play with drafts I have deemed ‘good enough’, but we’ll see, never say never, this might change in the future. If you want the example feel free to read it and get familiar with the story, you’ll see it a lot more. I’ll try to leave my points and advice a little less specific so that you can use these tips without having to read the story too! Completed Story The Date ‘Hey, Lily.’ I smile. ‘You look good.’ I sit down and place the strawberry shortcake between us. I pull two forks out of my pocket and put one on her side, keeping the other in my hand. The cake is a dumb gesture. We shared it on a date, the one when we had our first kiss. The cream from the cake was the perfect opportunity, I never expected her to kiss it off my face. A lot has changed in fifteen years. I made my yearly pilgrimage. It has become a ritual for me, something essential. I eat the cake and tell her about my life. Filling her in on the year, I tell her about my wife, the kids. I have this weird idea that I can force Lily to grow old with me, somehow if she knows about the life that I am living I will be able to share the rest of it with her. Just like we always wanted. They say that grieving is for the living, so I will deal with it my own way. I stand to leave. I can’t help looking again at the shrine, with fresh flowers that share her name. They were placed there by her family. Her parents were also caught up in this custom. I never disturb them, I only visit in the evening, I don’t want to disturb their mourning. I walked there with the cake, and I walk back with nothing. So that’s the complete story. Now you get to see the first draft. You’ll notice that the premise is mostly the same, I felt really good about my theme and my direction, so no overhauls in the actual story had to happen, but I did cut things away and add things in, so let’s have a look at where we started. First Draft I walked down the road, glancing over, as I usually do at the small cross beside the edge of the road. Fresh flowers had been placed on it today, the same day that it gets fresh flowers every year. The flowers were lilies, as they are every year. They were her favourite flower. I go over to the grave, as I do every year and offer her, her favourite treat. It was the strawberry shortcake from the bakery down the lane. It was the treat we had shared over our first kiss. I visited this place every year, I walked the whole way here, treat in hand, and the whole way back, holding nothing. It had become a ritual for me, something essential. I tried and failed to stop coming, I tried and failed to let her go. I had moved on you know, a wife and kids, fifteen years will do that to a person. Yet still, I come to grieve her loss. To miss her kisses and to share a strawberry shortcake once again, with the woman that I love. So how did we get from A to B? The entire drafting process took about seven or eight rewrites. 
I think that rewriting your story is a valuable part of the editing process, particularly with smaller pieces like this. Photo by Sylwia Bartyzel on Unsplash I also set myself up some goals, like keeping the word count below 250 words so I could submit it to a competition. Having some sort of goal in mind is useful. Remember that most flash fiction is under 1000 words, so there is always that if you’re not sure where to start. The first draft was the first conception of the idea for me, but you don’t have to start there. Think of small details and focus in on those details. For me, it was the grave. I wrote the first draft with the grave as the perspective. I was happy with that perspective but if you want to try focusing in on something else, or giving more details to specific items go on ahead and try a draft focusing on the plot through a different lens. You’d be surprised at how much it can change your entire story. Highlight everything The first thing I want you to do is to highlight every line that you love, and then try and keep that in your next draft. The goal here for me was to end up with a draft that was entirely coloured in. It sounds obnoxious, but this is a screenshot of my later drafts. Screenshot by author Very colourful. If you love most of a line but dislike a specific part highlight those parts in a different colour, don’t throw the baby out with the bathwater. Make sure that you are giving each line its best chance. Then rewrite your story Can you take things out? For me, I removed an entire ‘scene’. I didn’t need to talk about how the man in the story walked to the grave because that wasn’t actually important to the story. What mattered was that he was there and what he was doing. Line by line, I rewrote my story, finding ways to work in lines that I loved and keep them there. Once I got to a place where I loved all of the individual lines, I played with story structure. Keep every draft This one is huge, especially with such a small story. You might write something that you love and write it out in the next draft. None of these re-writes are your final draft so keep your writing history! I love to write flash fiction all on the one page, I mark up every draft and write the next one with what I love in mind. I do not edit any given draft until the end, and even then I copy and paste it so that it is its own thing, this prevents losing valuable lines or concepts that you might not want to forget. It is so easy to lose a sentence, a word or a line that you love. Keep everything, even if you think you hate it because having it there will remind you to use it or discard it. Don’t delete anything! Photo by Devin Avery on Unsplash Structure Once you know you like your story, you like every word that you’ve put in there, and all of the words are doing something for your story you need to see your story through yet another lens. Now I want you to look at the structure, can you do anything that would make the story more interesting? Can you change the way it’s set up? This was my ‘final draft’ before I played with the structure, I loved all of the lines, but I hadn’t changed anything about the story. The Date ‘Hey, Lily,’ I smiled, ‘you look good.’ The shrine had been ornamented with fresh flowers of the same name. They were placed there by her family, I smile at the familiarity. Her parents were also caught up in this yearly custom, I always stayed out of their way, only visited in the evening. I didn’t want to disturb their mourning. 
I sit down and place the strawberry shortcake between myself and the grave. I pull two forks out of my pocket and put one on her side, keeping the other in my hand. The cake is a dumb gesture, we shared it on our first date. I know she can’t eat it, but they say that grieving is for the living, so I will deal with it my own way. I made my yearly pilgrimage. It had become a ritual for me, something essential. I eat the cake and tell her about my life. Filling her in on the year. I tell her about my wife, the kids, that Oliver has started high school. A lot has changed in the last fifteen years. I have this weird idea that I can force Lily to grow old with me, somehow if she knows about the life that I am living I will be able to share the rest of it with her. She has to grow old with me just like we always wanted. I walk there with the cake, and I walk back with nothing. A very different story from the one with structural work. The final draft focuses on the reader figuring out the story, whereas this one tells the reader what is happening. There is no mystery to this draft. It’s still a very pretty story and one to be proud, but objectively, it’s not as interesting as the final draft. Photo by dylan nolte on Unsplash Are there pieces of information you can remove? Can you make your reader work to figure out the plot? Is there something interesting you can do by withholding information or allowing false presumptions to be made? My story reads first as a boy going on a date, then as a man having an affair and then as a man grieving his childhood sweetheart. My story is also under 250 words. With some clever formatting, you can do a lot. Take your time. It took me a few days to write this short piece. In writer years that’s not very productive, but I felt it was worth sinking the time into, then again I am a poetry nerd, so I’m used to working with small pieces of art and refining them. Don’t be afraid to spend an extended amount of time on your story, like I said. I rewrote this way too many times. The end result? Worth it! Get advice from friends. When I was nearing the end of my drafting stage, I send out this story to loved ones and friends. I asked them what they liked, what was working and what wasn’t. I got some candid, valuable feedback that hurt my feelings. I implemented that feedback and my story was better for it, less confusing, clearer and altogether better in the end.
https://medium.com/writers-blog/how-to-edit-flash-fiction-f1ef1f6e8b63
['Beth Van Der Pol']
2020-07-03 09:17:12.192000+00:00
['Flash Fiction', 'Editing', 'Writing', 'Creativity', 'Creative Writing']
Big Data 3 V’s and 5 V’s
Recently, we studied the definition and concept of big data. Now the question is, "How do we identify Big Data?" The following paragraphs explore the answer. The characteristics of Big Data are usually grouped into "V" models; the main ones are the 3 V's and the 5 V's, which briefly lay out the pillars of Big Data. To be identified as big data, a data set needs to exhibit the following characteristics, and they are an efficient way to understand what Big Data actually is and how to recognize it. "Big data stands mainly on 5 pillars: Volume, Velocity, Variety, Veracity and Value. These pillars are briefly described in the 3 V's and 5 V's architectures." Photo by rawpixel from Burst
The following paragraphs demonstrate the 3 V's and 5 V's.
3 V's: The 3 V's are the three main characteristics of Big Data: Volume, Velocity and Variety. Each keyword is largely self-explanatory, and each characteristic describes separate physical as well as logical attributes. Edited/Created by Author
1) Volume: i. In big data, Volume refers to the sheer size of the data set. ii. It describes a set of data so large that it is complex to process further in order to extract valuable information from it. iii. Volume does not prescribe an exact size at which data counts as big data; the data is simply relatively big, and the size could be in terabytes, exabytes or even zettabytes. iv. The size of big data is what makes it perplexing to process. Photo by Markus Spiske on Unsplash
Data measurement: 1. Bit: an eighth of a byte. 2. Byte: 1 byte. 3. Kilobyte: 1 thousand, or 1,000 bytes. 4. Megabyte: 1 million, or 1,000,000 bytes. 5. Gigabyte: 1 billion, or 1,000,000,000 bytes. 6. Terabyte: 1 trillion, or 1,000,000,000,000 bytes. 7. Petabyte: 1 quadrillion, or 1,000,000,000,000,000 bytes. 8. Exabyte: 1 quintillion, or 1,000,000,000,000,000,000 bytes. 9. Zettabyte: 1 sextillion, or 1,000,000,000,000,000,000,000 bytes. 10. Yottabyte: 1 septillion, or 1,000,000,000,000,000,000,000,000 bytes. For example: the world generates 2.5 quintillion bytes of data per day.
2) Velocity: i. In big data, Velocity mainly captures two things: (1) the speed at which data grows and (2) the speed at which data is transmitted. ii. Velocity refers to data being generated, growing and shared at a particular speed through various resources. Edited/Created by Author iii. Speed of growth of data: data increases day by day through various resources, some of which are described below. Internet of Things (IoT): IoT is a prominent contributor to big data; it generates data through IoT devices placed in automated vehicles, smart bulbs, IoT-based robots and so on. Social media: the number of users on social media grows every day, and they generate huge batches of data, as do many other sources producing data at high speed. iv. Speed of transmission of data: transmission speed also plays a major role in identifying big data. Big data grows so rapidly that it becomes complex to process quickly and difficult to transmit quickly over fiber-optic or electromagnetic links, which is why this aspect is so important to Velocity. For example: Twitter generates 500 million tweets per day, so both the rate at which the data is generated and the rate at which it is transmitted are very high.
3) Variety: i. In big data, Variety is simply the different types of data. ii. The term covers various types of data such as text, audio, video, XML files, data in rows and columns, and so on. iii.
Each type of data has its own way of being processed, which is why it is necessary to categorize the different types. iv. In Big Data, data is categorized into three main types, as follows. Edited/Created by Author a. Structured Data: data that is in the format of a relational database, properly organized in rows and columns. b. Unstructured Data: data that comes in forms such as audio, video, word-processing files and other free-form content, and is not organized in any fixed format. c. Semi-structured Data: as the name suggests, data that is neither fully structured nor fully unstructured; it is partially structured and mixed with unstructured content, with XML and JSON files being common examples. For example: social media contains photos, videos and text from a huge number of people. This data is nothing but big data, and it can be well-structured, unstructured or semi-structured. (An illustrative sketch of the three varieties follows below.) The concept of the 3 V's explains the basic architecture of big data, but the 5 V's add a few more requirements to round the picture out. The following paragraph explains the architecture of the 5 V's concept.
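As mentioned above, here is a small illustrative sketch of the three Variety categories in Python; the file names are placeholders, not part of the original article.

import csv
import json

# Structured: fixed rows and columns, e.g. a relational table exported as CSV.
with open("sales.csv", newline="") as f:
    structured_rows = list(csv.DictReader(f))

# Semi-structured: self-describing but not tabular, e.g. JSON (or XML).
with open("tweets.json") as f:
    semi_structured = json.load(f)

# Unstructured: free-form content such as raw text, audio or video files.
with open("review.txt") as f:
    unstructured_text = f.read()

Each variety needs a different tool to read it, which is exactly the point of the categorization.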
https://medium.com/analytics-vidhya/big-data-3-vs-and-5-v-s-c1cae2a6d311
['Shubham Rajput']
2020-07-02 05:42:09.138000+00:00
['Technology', 'Data Science', 'Future', 'Big Data', 'Big Data Analytics']
Everyone Is a Teacher to Someone
Sometimes, what might be stopping you from starting your journey as a content creator is the feeling that you don't know enough. It's normal to feel that way, because what will you teach someone if you don't know much yourself? But the irony is that even with the little you know, there are still people who will find what you have to say helpful. And that's why you should start: those people need you. Nobody started with a million fans. You have to build your audience's loyalty from the ground up. People need to see that you're willing to help, and if you can keep showing up over the long haul, they will stay. It's a long process that begins when you publish your first piece of content online, not just when the idea first comes to you. Content creation revolves around helping the people around you; publishing what you know on the internet takes some confidence, and it can still be terrifying. But at the same time, we enter into arguments because we feel we know something the person we're arguing with doesn't. If we can do that, then we know something, and creating content shouldn't be as difficult as we imagined. It is quite exciting to see people engaging with our content and thanking us for our insights. It gives you the confidence to do more. And the amazing thing about creating content is that new ideas will always come up if you look for them. So what's limiting you is not that you don't know enough, but that you don't believe you do.
https://medium.com/age-of-awareness/everyone-is-a-teacher-to-someone-132c36753b3c
['Tochukwu Okoro E.']
2020-09-23 00:13:35.115000+00:00
['Content Marketing', 'Startup', 'Education', 'Creativity', 'Teaching']
Startup Spotlight Q&A: E-Sign
E-Sign is a leading provider of electronic signature solutions and document transaction services. We ensure the safe transmission of a variety of important documents, from document creation right through to payment and signature capture. E-Sign's Founder and Managing Director, Thomas Taylor, has over 20 years of diverse experience gained mostly across the IT and engineering industries. As a board-level executive with strong leadership and change management experience, he has held leadership roles in a diverse range of companies as well as government organizations. Thomas has served as an innovation director and work package leader for a European Commission research consortium. He thrives on implementing new processes in large-scale organizations and assisting other technology companies to operate as efficiently as possible. Keep reading if you want to learn more about E-Sign! — In a single sentence, what does E-Sign do? Document transaction management. – How did E-Sign come to be? What was the problem you found and the 'aha' moment? The company was founded in 2012, following the founder's sense that, given the technology available at the time, there had to be a better way to send important documents without the costs and delays associated with the postal service. E-Sign was created and used QR technology to provide a tamper-proof digital infrastructure for document transmission and signature capture. This has since evolved into many other things, and the company now offers a wide range of solutions to make our users' lives easier. — What sets E-Sign apart in the market? We consistently receive great reviews and great customer feedback: an easy-to-use platform with no restrictions on features or document sends, very cost-effective whilst maintaining an excellent level of service compared to other providers. — What milestone are you most proud of so far? During our early-stage years, it was the contrast between not knowing, in one instance, whether the company would still be operating by the end of the week, and then acquiring big customers such as central governments and banks, which validated what we were trying to do and gave us the drive to constantly strive to be the best we could be. — Have you pursued funding and if so, what steps did you take? We successfully raised angel funding and crowdfunded via Crowdcube, and now have institutional backing. – What KPIs are you tracking that you think will lead to revenue generation/growth? We track every aspect of the customer journey, from initial search through to trial and paying customers, and their user journey in between. — How do you build and develop talent? Growing and developing the team to where it is today hasn't been easy; it's definitely been a learning curve! We have a great team with a zero-ego culture. There is a culture of openness and communication within the team, and each member knows the responsibility we have around deliverability and trust. — What are the biggest challenges for the team? There have been many: the more the E-Sign platform has been used, the more issues have been uncovered. The challenge then was to prioritize what could be fixed at the time and what could be offset against future updates that would fix those issues. Development resource is always an important (and expensive) issue to address in a growth company. — What's been the biggest success for the team? How did you celebrate? Some of the biggest wins for us came when central UK Government departments started using the platform, as well as NHS trusts.
Again, this felt like it validated the technology and gave us the realization that we were playing an important part in improving people's lives. We usually stay well-grounded (we still like to have fun, day to day)! — What's something you're constantly thinking about? Leaving the planet in a better state than I found it. — Who is your cloud provider? We have our own datacentre infrastructure. — What advice would you give to other founders? Never sit in the comfy chair, and always feel comfortable in your own skin. It's easy to brush these two things aside, but I feel that if you embrace them, you can achieve a lot. Listen to your inner voice and believe in what you are doing, but at the same time know when to ask for help from others.
https://medium.com/startup-grind/startup-spotlight-q-a-e-sign-b881fbae52ea
['The Startup Grind Team']
2020-12-01 15:27:32.106000+00:00
['Technology', 'Engineering', 'Startup Lessons', 'Startup', 'Startup Spotlight']
5 Practical Tips for Positive Code Reviews
4. Educate Towards Small PRs Small PRs are like pizza: They make everyone happier. Reviewers have less work, can more easily reason about the code, the proposed change, and its quality, and can therefore give better feedback. The author, on the other hand, gets the feedback faster — and the feedback is usually better. Moreover, with a chain of small PRs that are combined into a large feature, snippets of the code are reviewed earlier, making the following PRs better and better. My rule of thumb is that a PR should be smaller than 12 files. That's my sweet spot. You will rightfully argue that there are features that require larger changesets, and I would agree. Those features can still be broken up into smaller PRs. From my experience, there are a few ways to do it effectively (in order of preference): 1. Push small (possibly partial) changes into master. I usually start by pushing the building blocks I need (schemas, models), then another PR for the infrastructure code (Kafka publishers, message buffers, DB queries), and finally a pull request that wires this code together. If possible, I also separate out the last pull request that wires the whole feature into the existing system. Bite-size PRs using feature branches. Photo by the author. 2. Create a "release branch" and make small changes on it that get you closer to a working feature. Create pull requests for "mature changes" into your "release branch." This way, each change is small, and eventually all the code in the "release branch" has been reviewed in pieces. Bite-size PRs using a release branch. 3. When your branch already contains a large changeset, you can break it into smaller "reviewable" chunks and optionally create pull requests for each of those. There are a few ways to do this (e.g. separating the work into meaningful commits with git interactive rebase and creating a PR from a subset of the changes). Breaking a big changeset into bite-size PRs. Lastly, it's the author's responsibility to identify in time that the upcoming pull request will turn out to be huge and to take proactive measures to make it better.
https://medium.com/better-programming/five-practical-tips-for-positive-code-review-6a41211aaab1
['Boris Cherkasky']
2020-11-26 10:53:25.858000+00:00
['Software Engineering', 'Startup', 'Software Development', 'Programming', 'Code Review']
I was lying when I asked you to stay
It takes a lot of single-minded self-belief to make your way in the music industry, so it’s no shock that brittle egotism is a quality that’s over-represented among singers. What is surprising is that this tendency hasn’t produced a stronger collection of songs about bitter break-ups. The more common emotion is melancholy, or at most a passivity that only switches to aggression in isolated, sublime moments — think the final, defiant chorus of “The Winner Takes It All”, or that exasperated aside “Why don’t you be a MAN about it?” at the end of “You Keep Me Hanging On”. Performers love to be loved, I suppose, so the desire to leave a good impression normally trumps the expression of raw emotion. But the moments when the mask slips are, for me, some of the greatest songs in pop: “Don’t Think Twice It’s All Right”, “Fuck You”, “You’re No Good” (especially in Dee Dee Warwick’s brassy, clattering original take), “You Oughta Know”, or that smoky pair of Jamie Lidell tracks, “What Is It This Time?”/”Game For Fools”. I remember the moment I “got into” Pulp. Downstairs at HMV in Oxford in 1993, I bought a copy of “Intro”, a collection of singles and B-sides in which they developed their sleazy kitchen-sink disco sound. At the time, there was one track that stood above all the others to my ears: “Sheffield: Sex City”. It’s an eight-minute epic that starts with Jarvis Cocker mouthing the names of suburbs like he’s reciting the Kama Sutra (“Intake. Manor Park. The Wicker. Norton. Freshville. Hackenthorpe.”). Pulp’s keyboardist Candida (Candida!) Doyle breaks in to read out a scene from Nancy Friday, about apartment residents waking each other up with their lovemaking until “within minutes the whole building was fucking”. Only once that’s been going on for nearly two minutes does the song really start. I still love “Sheffield: Sex City”. It’s a wordy, panoramic, hot mess of a song about a narrator and his lover failing to get it together for most of a long, torrid, rainy day in a surreal urban landscape: Jesus! Even the sun’s on heat today, the whole city getting stiff in the building heat… Old women clacked their tongues in the shade of crumbling concrete bus shelters Dogs doing it on central reservations and causing multiple pile-ups in the centre of town I didn’t want to come here in the first place but I’ve been sentenced to three years in the Housing Benefit waiting room… At the centre of all this spiralling imagery, the thematic axis on which the song turns is almost motionless — a woman, waiting at home for her lover to come around: Now I’m trying hard to meet her but the fares went up at seven She is somewhere in the city, somewhere watching television Watching people being stupid, doing things she can’t believe in Love won’t last ’til next installment Ten o’clock on Tuesday evening The world is going on outside, the night is gaping open wide The wardrobe and the chest of drawers are telling her to go outdoors He should have been here by this time, he said that he’d be here by nine That guy is such a prick sometimes, I don’t know why you bother, really. Jarvis breaks in on this train of thought with a bloodcurdling yelp of apology, and reels off a string of excuses: Oh babe oh I’m sorry But I had to make love to every crack in the pavement and the shop doorways And the puddles of rain that reflected your face in my eyes. The whole thing is a tour de force which probably seemed romantic to my teenaged, inexperienced self. 
But Jarvis would have been pushing 30 by the time he wrote this, and surely knew better. Did his lover really credit this guff? “That’s your excuse? You weren’t seeing someone else, you were making love to the city?” So latterly I’ve transferred my affections to “Razzmatazz”, a sort of lyrical remix of the same themes that drops the pretense and revels in the nasty emotion lurking beneath. This Jarvis isn’t a head-in-the-clouds poet thwarted by the frictions of modern urban life. He’s a rancorous ex in an astonishingly vicious game of break-up one-upmanship: You started getting fatter three weeks after I left you Now you’re going with some kid looks like some — bad comedian Are you gonna go out, are you sitting at home eating boxes of Milk Tray? Watch TV on your own, aren’t you the one with your razzmatazz and your nights on the town? Oh-oh-oh And your father wants to help you doesn’t he babe? But your mother wants to put you away Now no-one’s gonna care if you don’t call them when you said And he’s not coming round tonight to try and talk you into bed And all those stupid little things they ain’t working Oh they aren’t working at all. That string of “you”s, shot out like a volley of poison darts. That taunting “babe”. Is that really the troubadour protagonist of “Sheffield: Sex City”, “coming round tonight to try and talk you into bed”? Well, well. We always half-suspected it. “Razzmatazz” is a deliriously hateful song, and made all the more so by its delivery. Jarvis’s arch stage mannerisms always had a touch of the Larry Grayson about them, but here he’s coming off as some — bad comedian. Like a drunken stand-up act or its protagonist’s life, the whole song is curdling into a weak joke at its own expense. Lame one-liners fall flat, before taking an abrupt turn into degradation and naked hostility: The trouble with your brother, he’s always sleeping — with your mother And I know that your sister missed her time again this month Oh am I talking too fast or are you just playing dumb? If you want I can write it down. It’s all so nasty that I find myself hoping Jarvis is simply putting on a very good act. But I wonder. That mixture of desire and venom is a theme he continued to mine successfully throughout this era, from “Lipgloss” (which seems to be a third go at gnawing over the same squalid scenario), “She’s a Lady” and “I Spy”, right up to “Common People”. And thinking about all the subtle class signalling in “Razzmatazz” is another clue to the malign honesty at its heart. Jarvis, with his slightly bohemian upbringing and grammar-school-to-art-college education, never quite fit the solidly working-class English stereotype as much as he played it. And you realise listening to “Razzmatazz” that he’s actually using class imagery to denigrate his protagonist. In the video, she’s rattling around her flat, pouring cornflakes in the sink and leafing through a book with the pathetically aspirational title “How to Live in Style”. Jarvis hangs out with his band on stage, and on a balcony in Paris, actually living in style. Knowing he’s going somewhere, he mocks her as underclass, as lumpenproletariat, employing all the contemptuous finesse of an Edith Wharton character: Oh well I saw you at the doctor’s waiting for a test You tried to look like some kind of heiress but your face is such a mess And now you’re going to a party and you’re leaving on your own Oh I’m sorry but didn’t you say that things go better with a little bit of razz-a-ma-tazz? 
The mocking way he breezes past his old lover’s medical test — has she missed her time again this month, too? — before settling on that last word, stretched out to four jeering syllables. Who even says that stuff? “Putting on the razzle-dazzle.” Certainly not Jarvis. No one with a clue, no one who’d buy this record. Years from now, Jarvis will be living in Paris like he promised in the video, “half-way up a sumptuous gilt rococo staircase in the ninth arrondissement”, married with a child to the fashion-stylist daughter of a French banker. He’ll be curating music festivals in London, getting Iggy Pop to perform at the Royal Festival Hall. People who read the Independent will line up to pay 100 quid a ticket. Her? She’ll still be on her couch in Sheffield, glued to the telly. Shouting at her delinquent teenage son, who’s starting to remind her of his father. Still getting fat eating boxes of Milk Tray. “All because the lady loves…” The funny thing is, she still thinks that shit is classy.
https://medium.com/a-longing-look/i-was-lying-when-i-asked-you-to-stay-ca810a025cdc
['David Fickling']
2016-04-15 10:15:46.874000+00:00
['Writing', 'Music', 'Lyrics']
8 Simple Habits You Can Start To Build Energy and Get More Done Every Day
8 Simple Habits You Can Start To Build Energy and Get More Done Every Day Actionable steps that will significantly increase your energy levels Photo by andrew dinh on Unsplash There was a time when I didn't have enough energy to make it through my day. I would burn out around 7:00 pm and spend the rest of the evening in a lethargic state. However, after researching the subject of "energy" and putting my research into action, I now enjoy great energy daily. I'm rarely tired, even at bedtime. This article will give you some actionable steps that will significantly increase your energy levels, giving you the energy you need to succeed. 1. Work Out Daily When I told someone that I'd been working out seven days a week, their response was, "Doesn't all that working out make you feel tired?" I thought, "Make me feel tired?" Working out gives me energy. It's a stimulant. It helps clear your mind and relieve stress. Running as little as one mile a day will noticeably increase your energy; I recommend exercising for at least 30 minutes every day, beginning with a good stretch. 2. Avoid Negative News I never watch the news, and I could write an entire series on this one. Not only do I not watch the news, I don't understand why anyone would. It just spills out death, disease, and garbage. Garbage in, garbage out. Negative news is depressing and leaves you feeling even more tired; there's no need to know about every rape and murder. You may be thinking that if you don't watch the news, you'll miss something big. Trust me: if something important happens, you'll find out about it. (I recommend getting your information from sources other than the news.) 3. Wake Up Early and Get Ready Every Morning Lying in bed until the middle of the afternoon can leave you more lethargic than getting out of bed early. And there's no reason to mope around the house without first showering and brushing your teeth. Every day, get up at a reasonable hour, take a shower, brush your teeth, get dressed, and then find a way to stimulate your mind. You'll feel far more energetic than you would oversleeping and walking around the house in your pajamas all day. 4. Take Your Vitamins and Drink Lots of Water When I take a multivitamin and a B-complex, I notice a significant boost in my energy levels. Water also keeps your body running well, so be sure to drink plenty of it. 5. Get 7 Hours of Sleep Hopefully this one doesn't shock anybody: you're not going to have great energy sleeping three hours a night. Try to get at least seven hours of sleep; your body will thank you and reward you with more energy. It also helps to build the habit of waking up and going to bed at the same time every day. I sleep from 11 pm to 6 am daily. 6. Eat Three Light, Healthy Meals and Three Light, Healthy Snacks Heavy meals, especially in the middle of the day, will make you feel tired. Try eating three light meals and three light snacks instead. Smaller meals are easier for your body to process, and eating several light meals and snacks keeps your metabolism running more efficiently. Note: including lots of raw foods (uncooked fruits and vegetables, for example) in your diet is also a great way to increase your energy. 7. Take Breaks Don't stare at a computer screen all day. Take breaks often: get up, walk around, talk with others. Staring at a computer will drain your energy, especially if you're reading in a small font.
Take breaks as often as you can. (Also, be sure to sit in your chair properly, with your feet on the floor while you're working.) Remember, relaxing is a good thing. Plan at least one fun activity daily, even if it's simply watching your favorite television show with your spouse or significant other. 8. Organize Your Life If your house, garage, office, car, or life is a mess, you will probably feel even more worn down. Organize your life for a feeling of greater energy: clutter breeds clutter. They say, "If I can see your garage, I've seen your mind." How does your garage look? Is it organized? Clean things up in your life, and you will feel more energetic.
https://medium.com/in-fitness-and-in-health/8-simple-habits-you-can-start-to-build-energy-and-get-more-done-every-day-46c0f8263fde
['Josef Cruz']
2020-12-27 22:41:39.645000+00:00
['Lifestyle', 'Health', 'Wellness', 'Body', 'Fitness']
The Techlash, Capitalism, Facebook, Alt-accounts, and Targeting
Two of the most affecting short films I’ve seen recently have been advertisements. I don’t know if that says anything about changes in the industry, or changes in society, or changes in me; or indeed if it says anything at all. The first ad, for a Peppa Pig movie set in China, is a snapshot of the big changes happening between rural and urban China, and between generations. It’s quite lovely. The second ad, for Microsoft’s adaptive gaming controllers, brought tears to my eyes; it clearly shows the impact that well-designed, well-considered technology can have on people’s lives. Anyway, enough of my blubbering. Here are nine links about technology and how we live with it. Technology in the World The Truth About the ‘Techlash’, Carl Miller. Very insightful piece about the root causes of the public backlash against ‘technology’ (really a very, very limited subset of technology), and — critically — some ways it could be addressed. The techlash would be happening whatever the specific decisions that they’ve made. It exists because of something deeper; an important, incredibly dangerous contradiction at the heart of the digital revolution that isn’t to do with the decisions themselves, but how they are made. Whether it’s Google, Facebook, Twitter, Reddit or Patreon, we know that they are private services. But we feel that they are public commons. We Shouldn’t Blame Silicon Valley for Technology’s Problems — We Should Blame Capitalism, Douglas Rushkoff. On how growth requires innovation, how that innovation destroys existing marketplaces, and how the destruction requires new technological solutions which continue the cycle. An extract from Rushkoff’s new book, Team Human. Well-meaning developers seek to solve technology’s problems with technological solutions. They see that social media algorithms are exacerbating wealth division and mental confusion, and resolve to tweak them not to do that — at least not so badly. The technosolutionists never consider the possibility that some technologies themselves have intrinsic antihuman affordances. Has Facebook Been Good for the World?, Sean Illing, Kurt Wagner, and Karen Turner. On Facebook’s 15th birthday, a variety of influential media and tech people answer the titular question. Because it is online, it means that the “mirror” is warped. Its reflections are reshaped by everything from our conscious choices of what to post (or not) to the algorithmic and deliberate manipulations that alter not just what we see in our feeds, but what we think about the real world. The question of its net positive or negative, thus, will be answered by what we see in that mirror, and what each of us chooses to do about it. Life on the Platforms “It Gives You the Freedom to Be Violent to Other People”: What Has the Alt Account Become?, Sarah Manavis. The history of the ‘alt-account’ — the anonymous secondary social media account that can be used to express feelings that the author doesn’t want to be publicly associated with — for good and for bad. Alt-accounts are born out of a need to shed one’s identity — a desire to voice opinions people feel they can’t as their public selves. That can mean safe places for venting, space to explore one’s identity, even just for jokes; but it can be used for far more insidious purposes, beyond bullying, and beyond abusive behaviour. The Secret Life of Amazon’s Vine Reviewers, John Herrman, and Prime and Punishment: Dirty Dealing in the $175 billion Amazon Marketplace, Josh Dzieza. 
A pair of stories on the world’s biggest eCommerce platform: the strange and nefarious world of the Amazon Marketplace, where fortunes can be made, and cut-throat schemes hatched to take those fortunes; and inside the life of Viners, the semi-pro reviewers of products on the Marketplace. For sellers, Amazon is a quasi-state. They rely on its infrastructure — its warehouses, shipping network, financial systems, and portal to millions of customers — and pay taxes in the form of fees. They also live in terror of its rules, which often change and are harshly enforced. Everything Else The Route of a Text Message, Scott B. Weingart. A deep-dive into the technical stack required to send a short text message between two phones. I love explanations of the things we take for granted, showing how the magic trick works. This sometimes gets very technical, but stick with it. In order to efficiently send and receive signals, antennas should be no smaller than half the size of the radio waves they’re dealing with. If cell waves are 6 to 14 inches, their antennas need to be 3–7 inches. Now stop and think about the average height of a mobile phone, and why they never seem to get much smaller. Forget Privacy: You’re Terrible at Targeting Anyway, Avery Pennarun. How data tracking makes everything awful and probably isn’t effective anyway because nobody knows how to analyse it properly anyway. I don’t agree with everything in here, but a lot of it is really good. The more tracker data your ad network buys, the more information you have! Probably! And that means better targeting! Maybe! And so you should buy ads from our network instead of the other network with less data! I guess! None of this works. They are still trying to sell me car insurance for my subway ride. Journalism Isn’t Dying. It’s Returning to Its Roots, Antonio García Martínez. Comparing the modern, post-platform media landscape to that of the ‘golden age’ of the founding of the USA, where objectivity was never presumed. If you’ve been reading this newsletter for a while, you’ll know I’m a sap for historical context.
https://medium.com/the-thoughtful-net/the-techlash-capitalism-facebook-alt-accounts-and-targeting-2a06e1b49b5
['Peter Gasston']
2019-02-12 10:46:00.865000+00:00
['Capitalism', 'Facebook', 'Amazon', 'Techlash', 'Text Messaging']
Can We Clean Up Social Media?
Can We Clean Up Social Media? Can we make social media a force for good instead of so much bad? There’s an old story about the boiling frog, a parable that tells of a frog being placed into a pot with water, and the heat is slowly increased to cook the frog alive. I don’t know who on earth would be so cruel, but it’s an old-timey tale that raises a very cautious point: that when change happens slowly it’s possible for you to not notice the small, incremental changes — until it’s too late. The idea is, if you put a frog into a pot of boiling water, it’ll just jump out. If you put a frog into a pot of tepid, room-temperature water, you could slowly turn up the heat bit by bit until the frog was being cooked alive. While the parable probably wouldn’t hold up in real life, it’s quite apt at describing what social media companies and users have done to us collectively over the past couple of decades. It all seemed like fun and games with cute little buttons that helped you show your friends that you “liked” things, with company advertisements, with metrics that rewarded our responses. But in the years since these things have been first developed, social media has taken a very sinister turn and I think just about everyone knows it, though few of us are willing to admit it. I am willing to admit it, social media, as it stands today, is a cesspool of toxic, borderline abusive behaviors that extremely few people would engage in if these conversations were taking place in person. Anyone who spends a long enough time on Facebook or Twitter will end up on the bottom of a bullying dogpile that’s nothing shy of alarming and, for some people, devastating. It’s gotten so extreme, that suicide and homicides have been live-streamed before the world for all of us to see. Children are killing themselves on the internet as a response to online bullying. Self-esteems are plummeting, especially for young women who constantly compare themselves to the models they see on Instagram and other sites. When a 14-year-old girl from Miami Gardens committed suicide on Facebook Live in 2017, I thought that would be the end of it. I thought that’s where the line would be drawn and social media companies would finally take some responsibility. But alas, I’ve been proven wrong. The incentives have only gotten worse, more perverse, and have prioritized sensationalist content at the expense of our healthy public discourse. Boring stories don’t get clicks, likes, shocked react emojis, and don’t elicit other responses in quite the same way extreme ones do. For those not yet in the know, there are a lot of problems with how social media and the internet at large has been structured in terms of what it values. It tends to value things based on the responses of the users who are on the platform, but is the gut-level response to something truly a metric of quality? I think not and I’m not alone. Beyond this, the whole point of Google, Facebook, and other large social media behemoths’ business models are to direct your behavior as a user and try to guide you into a sale to a third-party merchant. In fact, you aren’t even a customer of a social media site, which is why users aren’t prioritized or even addressed. Your attention and your behavioral data are what’s being sold to advertisers. There’s no customer service line you can call, no managers that you can speak to, no recourse whatsoever when things go wrong and it harms you. 
Doing this almost requires finding the most reflexive and impulsive people in our culture by magnifying the most provocative content, amplifying it through the distribution channels over which they have editorial control. Ever post a ton of cool stuff and feel like crap after no one "liked" it? Well, that might be because nobody saw the stuff you posted. This is called shadow-banning. Your posts likely took a back seat to content that helps the platform gather more and more information about what can be sold to the users who are your friends or followers. The truly dangerous thing about our age is that the internet has united formerly disjointed toxic and harmful people and has simultaneously given them political power. I think almost everyone in the United States can agree on this basic premise, this metapolitical statement: no matter your background or political affiliation, online bullying is a real problem and has probably affected your life if you use the internet. Someone has polled you, trolled you, and probably Rick-rolled you, all for laughs and a few kicks. The problem is, it's all fun and games until someone gets hurt, until joking around turns into malicious subversion and outright sadism, the kind that can tip the scales of an election and end up locking children in concentration camps, ripping them from their mothers' arms, until LGBTQ rights are under assault, until women's rights are under assault, until the nation is coming apart at the seams and there's no leadership in sight as a pandemic rages on, killing hundreds of thousands of Americans. Never underestimate the power of toxic people in large groups. All of these problems we face today are underscored by our social media habits and the companies that allow such nonsense to be considered permissible. These companies really need to rein it in, and I think we'd all be wise to switch to platforms that don't engage in this kind of behavior until they can get their shit together. Medium is a great example of a place that doesn't prioritize radical and radicalizing content for the sake of advertising revenue. Social scientist Jonathan Haidt, a bit of a sensationalist himself, ironically, recently posited a rather good idea that I'm inclined to stand behind: the need to provide government-issued identification in order to set up a social media account. Real life doesn't have trolls that hide behind masks of anonymity, and it's that anonymity that provides a seeming layer of security for those who'd engage in nefarious tactics. It may seem intrusive, but for the good of the global dialogue, I'm inclined towards a world where we had to submit a government-issued ID to create accounts and where platforms clamped down on multiple-account usage. Bullying, especially violent and vitriolic language and forms of expression, should be curbed, and there would be no way of doing so as long as bullies could create additional accounts in a few minutes and pick up right where they left off. Right now, the internet at large is a very unhealthy place because of social media. Any attempt at healthy, wholesome conversation is quickly derailed by deranged, often aggressive bad actors who want nothing more than to watch the world burn and to see people they view as different from them suffer. They have to be dealt with and disallowed from using the platforms until they can be civil.
The internet creates a sort of mental firewall that makes people infinitely more confident than they would be saying something to a person in real life. Men will slander and threaten women, women will gang up on and bully men in large groups, men will upload their ex’s nudes for the world to see, people will post the most racist and evil nonsense possible in hopes of intimidating minorities, and let’s not forget that nations are using social media as a tool to undermine democracy itself. It’s only gotten worse since 2016 because the companies didn’t rigorously clamp down on such unsavory behavior when it happened, nipping it in the bud. Social media is quickly becoming the new smoking, something we use for social points at the rapid and radical destruction of our own health. I’ve written about this here. One idea that the big players of the social media game have been kicking around that we really ought to consider is the removal of the “like” buttons and other reaction metrics, at least visibly. Perhaps it would be best if we had “like” buttons but hid the amount of total “likes” given, that way, companies could still have a real-time metric to consider what gets prioritized in feeds, but users can’t see the likes and thus can’t tailor the content to specifically get more likes and show off through subversive, bad behavior like dunking on helpless teenagers, destroying their self-esteem. Real-life dialogue doesn’t take place with a bunch of people pressing “like”, especially not after the initial reaction where likes can accumulate over time. It incentivizes cheap digs and shameless dunking on people. It’s destroying our communication entirely and by pretty much every available metric, our society as well. The costs of getting this wrong are higher than most people think. Allow me to refresh your memory. Trump isn’t just a fluke president, he’s the first president elected by internet trolls and social media bad actors, by memes of hate and vengeance, by aggression channeled into cyberspace from distant and isolated parties who couldn’t maintain a social life with their insufferable behaviors and attitudes, dangerous people who’ve now found other dangerous people who will help them commiserate, strategize, organize, and inflict pain. Make no mistake: Trump is a weapon, he’s a weapon of the hateful, the spiteful, the losers who hate themselves and want the rest of the world to suffer as they suffer from their own self-inflicted torments. He’s the weapon of those aforementioned toxic and violent individuals who want nothing more than to see the world burn. These people will weaponize every bad behavior, bug, glitch, and feature possible to turn it toward the achievement of their own political aims which usually involve the harm of some other people. In personal relationships, we call these people “abusers” but since it’s remote, or, as I call it, “sadism-by-proxy”, we mistakenly think it’s just a series of defective bugs in the system of the internet; but it is no bug, it’s a feature and they’re profiting off of it while influencing it and fueling it with filter bubbles and a laissez-faire approach to destructive parties that is morally atrocious and absolutely unforgivable. Let’s put pressure on these companies to get this right — our collective mental health is at stake. In the interim, here’s how to partially unplug:
https://medium.com/flux-magazine/can-we-clean-up-social-media-a00eb53e9739
['Joe Duncan']
2020-07-19 23:20:31.111000+00:00
['Technology', 'Politics', 'Mental Health', 'Society', 'Culture']
5 Inconvenient Things All Writers Must Do
How to Become Better at Writing Headlines Ayodeji Awosika recommends writing ten headlines every day. (I write five.) As Jeff Olson says in The Slight Edge: it's easy to do and easy not to do. It might feel like a chore, but writing headlines gives you ideas to write about, strengthens your titles, attracts more readers, and ultimately earns you more money. Laboring over ideas might not be fun, especially when you feel like you've run out of them, but the work pays off. It's the equivalent of a workout, of strengthening a muscle. Show up often, and you'll reap the benefits. It's important to take this seriously. If you're only going to work out half-heartedly, to the point where your legs and abs aren't even burning, then what's the point of working out at all? I used to write whatever came to mind — crappy titles that didn't require much thought — but that was a waste of time. It wasn't benefiting me or helping me improve. Sit down and write the best five headlines you can draft that day. Some days will take longer than others, but your titles will improve and attract more readers over time.
https://medium.com/the-brave-writer/5-inconvenient-things-all-writers-must-do-502797429914
['Itxy Lopez']
2020-08-12 16:01:01.389000+00:00
['Creativity', 'Success', 'Advice', '5 Tips', 'Writing']
Which U.S. States Have The Best Climate Year Round?
In a perfect world, the weather would always be agreeable: we wouldn't get caught in the rain, snowed in, or at risk of heat stroke. While that is not the reality, there are some places in the States that offer near-perfect weather all year round, and even when the climate isn't ideal, these places don't see drastic changes in the weather. California LA tops the list, and California has many other cities on the south and central coasts where the weather is pretty great all year round, such as Long Beach, Santa Barbara, Santa Maria and San Diego. It's never bitterly cold, there's definitely no snow, and rain adds up to about 20 inches per year; the rest of the time, about 73%, skies are clear, so there's sunshine. The hottest it gets is about 80 degrees F, but that's rare; it's usually somewhere in the mid 60s. Hawaii Maui and Oahu stay around 80 degrees throughout the year, with Oahu being a little more agreeable and thus more favorable. There are sunny skies about 70% of the time and only 20 inches of rain. Image courtesy of Unsplash Texas Texas cities such as Brownsville, Corpus Christi, Victoria and Del Rio stay warm while other states are covered in snow, and in the summer, temperatures can reach the mid 90s. Galveston and Austin, on the other hand, are the coolest, usually staying at a comfortable 61 degrees F or just slightly higher in January, then hovering around 90 degrees starting in August. Most of these cities get 30 to 40 inches of rain, except Del Rio, which only gets around 19 inches. Texas remains sunny an average of 60% of the year. Georgia According to the statistics, the entire state of Georgia, and especially Athens, has weather that most people prefer to Florida's, hence its rank on this list. It's a little cooler in the hotter months and a little warmer in the colder months compared to states that didn't make the list, which have more extreme seasons. Savannah is one of the warmest places in the state in winter, reaching just above 60 degrees. Other places here, such as Macon, Atlanta, Augusta and Columbus, reach about 50 degrees in the winter. Most locales get 45 to 50 inches of rain and sunshine 60 to 66% of the time. Florida Florida offers a lot of places with perfectly warm weather even in the winter, such as Daytona Beach, Orlando, Gainesville, Vero Beach and Tampa. In the summer, the hottest it gets is a little above 90 degrees, and it stays moderately dry with 45 to 50 inches of rain throughout the year. Key West is the most favorable city here: it's noticeably drier and sunnier, with 39 inches of rain and sun 76% of the time. South Carolina Tropical in some respects, and always with the possibility of a hurricane, Charleston, SC doesn't get any colder than about 56 degrees in January, one of the three coldest months of the year in the US. The hottest it gets is 89 degrees in July, with sunshine 63% of the year. Despite the recent flooding in South Carolina, which has been called a 1-in-1,000-year event, rains usually total a hassle-free 46 inches per year. Delaware Delaware can get a little cold for some, dropping to about 40 degrees in the colder months, but it's usually in the mid 80s in the summer. The state gets 43 inches of rain a year in the city and slightly more outside of it. North Carolina North Carolina doesn't get too cold, and it's sunny about 60% of the time all over. Not surpassing the mid 80s in the summer, it's pretty tolerable here.
Kill Devil Hills is a gem here, sitting third on the list of cities with the most favorable weather thanks to its cool breezes and clear skies. Asheville drops to the mid 40s in January but stays cooler than other cities in the summer, putting it at number 4 on the list. Cape Hatteras isn't bad either and doesn't get below about 55 degrees F. With 58 inches of rain a year, Cape Hatteras is a little wetter than Asheville, which gets 47 inches. Louisiana Louisiana completes the list of top states with the best weather year round. While it does get 57 inches of rain a year, more than most other places on the list, it stays above 60 degrees even in the winter and reaches up to the 90s in the summer. Lake Charles gets the most sun per year at 72%. This list assumes that most people don't want to live in places where it snows for a few months out of the year, and that they may have a love of gardening or nature. If you would like to be able to go skiing in your own backyard without getting snowed in, consider Denver, CO, or Prescott, AZ, both of which have comfortable summer climates.
https://medium.com/thrive-global/top-us-states-with-the-best-climate-year-round-c2d71225e629
['Tammy Sons Owner Of Tennessee Wholesale Nursery']
2016-12-29 13:01:01.942000+00:00
['Life Lessons', 'Health', 'Wellness', 'Wisdom', 'Climate']
China’s Denial of Being the Birthplace of Covid-19
China's Denial of Being the Birthplace of Covid-19 Despite having the world's largest population, it stands at 72nd place on the list of countries worst hit by the coronavirus. Photo by Martin Sanchez on Unsplash As the world adjusts to the 'new normal' brought on by the coronavirus, which in December will mark exactly one year since its discovery, China has offered multiple theories about where it came from. The news has been full of articles about China shifting blame for the origin of the Covid-19 virus, first to European countries and now to India and Bangladesh. It is also hard to accept that the country with the world's largest population, and the birthplace of the virus, has reported relatively fewer cases than countries with far smaller populations. China sits at 72nd place on the list of countries worst hit by the coronavirus by case count. Let's look at what has actually happened. Has China hidden the actual numbers? China's coronavirus case count stops at around 87k, and notably it doesn't even fall within the top fifty countries affected by the outbreak. The fact that it has the largest population in the world makes it even harder to believe that the numbers China has published are accurate. A recent report in the Times of India suggests that the initial figures China released were less than half the actual number of cases. The WHO has made repeated remarks and attempts to send a team to China to trace the origin of the virus, but has not yet been able to complete its groundwork. Photo by KOBU Agency on Unsplash China's blame game Initially, China downplayed, but did not outright deny, that the virus originated in the country; ever since the coronavirus began spreading to other countries, however, it has made repeated attempts to shift the blame. About a week ago, China suggested that "India or Bangladesh could be the origin of the virus." Its reasoning is that being the first country to report Covid-19 cases does not necessarily mean the virus originated in China. However, China is known to have withheld information from the WHO for a long time. The organization did not first learn about the virus in December 2019 from China itself, but from other sources. (Source) Political Interactions As the virus brought life to a pause all over the world, political friction was bound to appear. China has faced a great deal of criticism from other countries for its early statements and misleading information. In the beginning, China's health experts continued to state that the risk of human-to-human transmission of the virus was low. The USA has regularly blamed China for the outbreak, with Mr. Trump at one point suggesting the US would go to war with China over it. He also accused the WHO of being biased and withdrew the USA from the organization. Australia has tried to initiate an independent investigation into the origin of the coronavirus, but China has refused any investigation not conducted under the WHO. Investigations into the origin The WHO is trying its best to find the birthplace of the virus in order to prevent future outbreaks of the disease. China says it supports an independent inquiry as long as it is carried out by the WHO and is not done selectively (source). For now, reports say that the virus came from animals. A seafood market in Wuhan is thought to be the place where the virus might have jumped from animals to humans.
Since then, the market has been sealed off and no one has been allowed to go there. But 'case zero', the term for the first-ever reported case of the coronavirus, has not yet been found. Rising beyond blame: The Covid-19 vaccine Even though opinions and news will always circulate, the most important question right now is 'what next?' Since the virus was discovered, scientists all over the world have given their best to develop the much-awaited Covid-19 vaccine. A number of candidates have entered various stages of testing, where their effectiveness is being evaluated, and the whole world is waiting on the results. Even though life has begun to move again on new terms, the lockdowns have given people a taste of what it's like to stay in one place for months. Masks and sanitizers have come to represent what is called 'a new normal'. With news that coronavirus vaccines may arrive soon, there is new hope for a brighter tomorrow and for getting back to what used to be 'normal'.
https://medium.com/discourse/chinas-denial-of-being-the-birthplace-of-covid-19-fe54865a3fe7
['Niyati Jain']
2020-12-04 21:49:26.611000+00:00
['Cities', 'World', 'Health', 'Politics', 'Coronavirus']
Why Lazarus Lake’s Great Virtual Race Across Tennessee 1000K is a Perfect Model for Virtual Racing.
As the COVID-19 Crisis spread across the US, it left both runners and running race organizers scrambling to unriddle the mystery of how to keep the sport moving forward. Since the transition I’ve seen one stand-out virtual race up close; The Great Virtual Race Across Tennessee 1000K run organized by Barkley Marathons guru Lazarus Lake. My familiarity stems from the fact that my wife signed up. For the purpose of this article, and because it’s her name, I’ll call her Mindy. She saw the race announcement and couldn’t get her name on the list fast enough. I was surprised, for we’ve never done anything within the ultra running community before, other than my considering to run a local 50K back in the heady days before the virus, (BV), shut down the world. She was a bit disappointed that I didn’t join her, but I still wasn’t making the virtual run transition, and decided my running would focus on getting away from it all, vs trying to keep up with a race schedule. We follow the Barkley, know Mr. Lake’s legend, and are thrilled for her chance to race in one of his own events. With over 19,000 entries, it would appear Mr. Lake hit the virtual race nail right on the head. We live in a large running community in the Hampton Roads region in Southeastern Virginia. There are several active race organizers, plus a large nonprofit running club, as well as numerous neighborhood private groups. On any given weekend, during the BV period, there were multiple races and long group runs to choose from. No one in our area, however, has figured out how to create a genuine virtual run of interest. Most make the very basic mistake and anchoring the motivation in making social media postings. I’m supposed to spend my $40 bucks to run the virtual 5K, get my swag, and post my amazing achievement for my many followers to heart. The problem, of course, is that we’re doing this every day without spending the money. Others attempt to make the run about raising money for a great cause. Our local Tidewater Striders had success with a recent 5K that raised over $2K for feeding hospital workers. It was a fantastic idea, a well executed event, but the total funds revealed modest participation. Especially given the amount of local runners in our area. Runs are hard things to pull off. I researched having a virtual race to raise funds for our small nonprofit during the crisis shut down. Our little museum, as cool as it is, doesn’t rate among local emergency agencies directly dealing with the crisis. I looked into having the race give all the funds to those in need, but the numbers on paper never produced enough cash to cover costs. When I saw the lack of participation our local running club was having, I knew the museum would never attract enough runners to make it work. The virtual 1000K works because it offers a few simple requirements for success: It’s Hard. I’ve yet to find a virtual race that’s so perfectly sculpted with difficulty. It sounds simple; virtually run across the entire state of Tennessee (It’s actually a little further.) The distance is a tad over 1000K, or around 634 miles. That’s about 24 marathons. In four months. My wife was signing up for something that would keep her on schedule, almost every day, until the end of August. That’s hard. That’s a challenge. It’s genuine and any runner knows it. Do the math and you realize an average of over 5 miles per day is needed for the next 16 weeks. 
That’s actually a higher rate of training miles for a novice to intermediate marathon training schedule, which would have you logging between 350 to 500 miles, depending on how you like to train. Sign yourself up for that kind of mileage and, like a marathon or even half marathon, you are committing yourself to your running. Keep in mind that the majority of folks that signed up for the 1000K are not ultra runners. Yes, ultra performers jumped in, knocked it out, and even took the option of running back, to do a 2000K. The race, though, wasn’t designed for them. It’s designed for runners like Mindy and myself. It’s for the almost 60 million other runners in the US, the every day, get out there for the health of it, find the next race to train for runners. That highlights another strong point of why it’s a great race. It’s A Shared Challenge. There is something very special about lining up in the start corrals for a major race. It’s more than the miles you are about to run, it’s about the thousands of other runners you are sharing the experience with. You run the same course, share the same mile posts, hear the same music from the same bad DJs. Again, it’s shared because it’s hard. All of us in those starting line corrals BV knew we were sharing something of reasonable difficulty. A look at the Social Media community page for the race, and you immediately understand that Mr. Lake has figured out how to create that same shared route, see the same mile markers challenge in a virtual, online community. And sustain it for months. One other element that should be noted by every other race organizer trying to figure out how to create virtual running success; You See Where Your Money Goes From the very beginning, Mr. Lake has shown where the $60 entry fee has gone. He’s showed the nonprofit the excess proceeds funded. He doesn’t just show us the t-shirts, he shows the people in the warehouse packing them. We see the faces and the smiles of the people our money is employing. It’s a powerful message. A last element is a very special part of the race that I’ve never before seen implemented. You’re Part of A Story. Mr. Lake has created the perfect antagonist; the buzzard. The buzzard is simply a fictitious character that runs the required 5 point something miles per day required to finish the 1000K by the August 31 cut-off date. Suddenly Mindy is not running miles by herself. For months she was chasing the bird, feared the bird. The dark winged, circling villain salivating for its next road kill. Recently, she caught the bird, and has since put the squawking menace behind her. Notice that Mindy is no longer simply racing, she’s become the protagonist of her own story. It’s brilliant fun. Every day you see the buzzard on the list. Perfect simplicity. The ultra runners had a character of their own, called the Ginger Bread Man. A cookie cutter creation that was always in the lead. The ultra race had no winner. Ginger Bread Man won. The ultra runners tried to catch and pass him anyway. Again, simplistic, spirited fun. A contrasting example comes from a local running company’s recently organized virtual run for the Hampton Roads area. Using our telephone prefix, 757, as a mileage goal. The prefix is an overused colloquialism by many area businesses, so the message is already wandering. It’s a fun idea, but it’s missing the important stuff. It’s not hard. The challenge is 75.7 miles for the 31 days. That means you need to average only 2.44 miles a day. 
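For anyone who likes to see the arithmetic, here's a quick back-of-the-envelope comparison of the two challenges in Python; the May 1 start date is my reading of the race window, not an official figure.

```python
# Rough comparison of the two virtual challenges described above.
# Assumes the 1000K window runs May 1 through the August 31 cut-off.
from datetime import date

gvrat_miles = 634  # "a tad over 1000K", per the race description
gvrat_days = (date(2020, 8, 31) - date(2020, 5, 1)).days + 1  # 123 days
print(round(gvrat_miles / gvrat_days, 2))  # -> 5.15 miles per day

local_757_miles = 75.7  # the local "757" challenge
local_757_days = 31
print(round(local_757_miles / local_757_days, 2))  # -> 2.44 miles per day
```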
That’s a smaller average than even a novice half marathon schedule. If I signed up, I would actually run less miles than I would on a normal week. As the outline offers another cloying story of ease, the shared challenge, the fuel that powers community building, has little energy. Mindy still has two months of running ahead of her. She’s stoked to keep it going. She already has the goofy t-shirt with the buzzard on the back, and is more than determined to put the 1000K sticker on her journal cover. We’ve got the map of Tennessee on our fridge, highlighting the miles as she goes. She’s working hard, loving the running, laughs at the struggle with the bird, and is experiencing a true virtual running race. She’s more motivated than during her last half marathon training. There is nothing adventitious molding the Virtual 1000Ks success. It’s a carefully calculated formula created by a runner who’s keenly aware of what runners need. We’re lucky. The mind that created the Barkley Marathons; what is, without debate, the most unique ultra running challenge on the planet, brought his creative wisdom to average runners. The Barkley can’t be replicated. It’s a true one of a kind. The Virtual 1000K, however, is a model of excellence easily duplicated. I hope that race organizers take notice. Make it hard. Run a shared route. Show us where our money goes. Make it a story. The running community needs more genuine events like this to share during this unprecedented time of crisis and hardship. We don’t need easy. We don’t need social media opportunities. We need genuine virtual races. We need to run. We need to race. And we need to connect with others doing the same.
https://william-hazel.medium.com/why-lazarus-lakes-great-virtual-race-across-tennessee-1000k-is-a-perfect-model-for-virtual-racing-431b9572bfc2
['William Hazel']
2020-07-14 10:22:39.249000+00:00
['Wellness', 'Health', 'Running', 'Fitness', 'Lifestyle']
AI Will Teach Us to Be Human
AI Will Teach Us to Be Human Judgment Day Is Only the Beginning In 2017, PwC published a paper that documented its prediction of the future of human jobs in the US and UK; specifically, it looked at job automation as a result of developments in robotics and AI. The paper puts PwC’s prediction side-by-side against other predictions from Frey and Osbourne (FO) and Arntz, Gregory and Zierahn (AGZ). Predictions of current jobs automated by 2030 (Source: PwC) Despite what PwC, FO or AGZ may predict, the magnitude of AI’s effect on the world may not be truly realized until it happens. Think about what happened with the newspaper industry. In 2006, the Guardian published an opinion piece, Blogging Is Not Journalism. Three years later, Business Insider published an article called The Year the Newspaper Died. We saw the Internet coming even before 2006, but at the time, few could have realized the revolutionary impact it would make, not just on the newspaper industry, but the world as a whole. AI has the potential for an even greater impact. It isn’t just through AI itself, though; simultaneously developing technologies like blockchain, 5G, and robotics will integrate with AI to enable the largest impact. In 2030, if PwC’s prediction becomes reality, more than a third of existing jobs in the US today will be handled by robots and/or AI. Take that timeline a bit farther, and one day we may see a complete displacement of human workers, which spans all ends of the employment spectrum. What Makes Us Human We have a tendency, maybe desperation, to draw a line between ourselves and everything else. The question of what makes us human existed far before the idea of artificial intelligence was conceived. That’s why the answer has changed throughout human history. Before civilization, the answer archaeologists pointed to was our opposable thumbs and increasing neural count. During modern human development, it was our complex social structures and cognitive ability to develop language and quantification, which led to society. When robots and AI were little more than the imagination of authors or film-makers, our answer changed yet again, but this time it was a little different. Previous answers were created to explain why we were at the top of the food chain, but this new answer was created to explain why we should remain there. Unfortunately, we could no longer depend on our bread-winning characteristics: language and quantification. After all, computer data is transferred and processed much faster than human data. So we started saying that only humans were capable of love and art. But we were wrong. The complicated thing about art and love is that both are not well-defined. Art is subjective, depending on the tastes of the audience. Love is also subjective — it’s a mysterious emotion that we have a hard time describing. Its complexity arises from the fact that it is a mixture of natural processes and human culture; i.e., it’s both natural and unnatural. What is natural is our desire to reproduce. What is unnatural is the institution of marriage — I’m not saying it’s wrong, I’m only saying that it’s not genetically coded in us, but rather, taught to us. Before 2350 B.C., we have no recorded evidence of marriage and anthropologists believe that families were small groups of people that consisted “of as many as 30 people, with several male leaders, multiple women shared by them, and children.” The best available evidence suggests that [marriage is] about 4,350 years old. 
For thousands of years before that, most anthropologists believe, families consisted of loosely organized groups of as many as 30 people, with several male leaders, multiple women shared by them, and children. - The Week Although we initially fell back on the softer, unmeasurable aspects of humanity, those features (art and love) may not be unique to humans for much longer. Sugar, Spice, and 3v3ryth1ng Nice Mario Klingemann's Memories of Passersby I (Source: The Verge) In Spring 2019, AI art went up for auction at Sotheby's. It was generated by a generative adversarial network (GAN), which is fancy-speak for a way of creating AI algorithms using two computer participants. The first participant takes a bunch of data, tries to make sense of it (in this case, images of art), and then produces its own version of it. The second participant is trained to tell AI-generated material from human-generated material. If it can tell the difference, then the first participant did not generate a good-enough piece of art and tries again (note: the participants are not humans, just computers). A minimal code sketch of this back-and-forth appears just after this passage. Now that AI has started generating art, and it's selling at Sotheby's, our claim to a uniquely human artistic skill is in question. Furthermore, love and affection are becoming more common with our robotic counterparts as the interaction between humans and robots shifts from the hand (i.e., typing) to voice and face. The anthropomorphism of computers is shifting their role from tool to companion. Devices like Amazon Echo and Google Home are becoming integrated into our lives, learning what we like and the rhythm of our daily routines. Over 110 million of those sorts of devices were shipped in 2018. As a person who's lost an Amazon Echo, I can attest to the human emotions I felt — akin to losing a friend. AIBO, Sony's AI-enabled Robot Dog (Source: AIBO Official Website) Even the AIBO, a very basic AI-enabled robotic version of a dog, is seen differently from a tool like an iPhone. A 2008 study observing the behavior of children interacting with AIBO, a robotic dog built by Sony, versus a real dog showed that 60% of those children affirmed that AIBO could have mental states (emotions), sociality, and moral standing — just as a real dog could. But children weren't the only ones giving and finding love and affection in AIBO — even adult owners form strong emotional attachments to their robotic best friends. The AlphaGo documentary is a must-watch for anyone interested in AI. You can view it on Netflix (Source: IMDB) The once distinctly human aspects of our existence are slowly eroding. AI is starting to create art, and it is also able to receive and provide love and affection. While its ability to do both may be limited today, over time it will become more skilled, potentially achieving a mastery that surpasses humans. We already know that this is possible. When it comes to recognizing faces, AI is already better than humans — SenseTime offers the world's most accurate facial recognition algorithms, which can be used to power smart cities with populations in the millions. When it comes to playing games, OpenAI has beaten the human world champions of DOTA. In the world of board games, AlphaGo has crushed the greatest Go player on the planet. The Day of Reckoning Fueled by technological innovation and our paradoxical desire to rebuild ourselves, the Terminator's Judgment Day may be more inevitable than we imagine.
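Here is that minimal sketch of the adversarial loop described earlier, written in a PyTorch style; the layer sizes, the random stand-in "real" batch, and the training schedule are illustrative assumptions, not the setup behind Memories of Passersby I or any other piece mentioned above.

```python
# Minimal GAN sketch: a generator learns to produce samples that a
# discriminator can no longer distinguish from "real" data.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32  # illustrative sizes only

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_batch = torch.randn(batch, data_dim)  # stand-in for a batch of real artworks

for step in range(1000):
    # 1) Train the discriminator: real data is labelled 1, generated data 0.
    fake_batch = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake_batch), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: try to make the discriminator call its output real.
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In the article's terms, the generator is the first "participant" and the discriminator is the second; training stops being productive once the discriminator can no longer tell the two apart.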
In our data-driven world, where intelligence is measured by IQ, prosperity is measured by dollars, and social credibility is measured by followers, the human ability to quantify, store data, and act on that data accurately and efficiently falls significantly short compared to a computerized counterpart. We dub this shortcoming human error — it may be the only thing that eventually sets us apart from AI. Goodbye, Old World We hold on to our jobs, compare our titles, and count our money like it’s what defines life, but this very structure prevents many of us from actually living. Many live to work. Exhausting work-hours have become a naive humblebrag. And yet, when AI has come as a solution to transform the existing fabric of our society, we complain about how it will free us from the very chains we built ourselves, like a case of Stockholm Syndrome. Maybe our existing social structure wasn’t built to scale. The population of New York City is over 8 million people. The population of the entire United States during the Revolutionary War in 1776 was 2.5 million, of which 0.5 million were estimated to be slaves. The Founding Fathers wrote the U.S. Constitution with a population of 2 million people in mind. They were managing the rights of a population 1/4 the size of New York City. Today, the Constitution is de facto law of the land in the United States, home to over 320 million people. This is also a land where a melting pot of cultures and values exist, some of which enjoy their right to bear arms while others see it as a growing problem to violence. I’m not arguing whether or not the Constitution is appropriate for the United States in today’s world. One can certainly list the benefits of having such a law as foundational as the Constitution, I’m sure. And I think it plays a vital role in governing the country in the current state of affairs. Rather, I’m implying that the Founding Fathers probably didn’t think about the other 318 million people and cultures that would exist years later when they wrote it. Western Nightmares and Eastern Dreams When next-generation AI comes knocking at the door, though, it may reasonably bring with it a new societal structure. What that new structure looks like, though, varies based on who you talk to — more specifically, where you’re talking to. The most commonly juxtaposed perceptions of AI’s role in the future are between the United States and Japan. Robot vs. Human allusions are rampant in US culture with movies like Terminator and the Matrix, and thought leaders like Elon Musk and Steven Hawking. Meanwhile, Japan and many other Asian cultures grew up on robotic heroes like Astro Boy and Gundam Wing. Joi Ito explains the difference in a 2018 article entitled Why Westerners Fear Robots and Japanese Do Not. He attributes it to the Shinto religion. In the words of Mr. Ito, “Followers of Shinto, unlike Judeo-Christian monotheists and the Greeks before them, do not believe that humans are particularly ‘special.’” Japanese don’t make a distinction between man, the superior creature, and the world about him. Everything is fused together, and we accept robots easily along with the wide world about us, the insects, the rocks — it’s all one. We have none of the doubting attitude toward robots, as pseudohumans, that you find in the West. So here you find no resistance, simply quiet acceptance. 
Quote by Osamu Tezuka, Interpreted by Joi Ito On the other hand, Christianity, the most commonly practiced religion in the West, states that humans are not only different from other animals but better, only second to God himself/herself/itself. Regardless of your religion, this artifact of Christianity, much like the idea of marriage, has dug its roots into the culture. My generation grew up with Barney songs that include lyrics like: You are special, you’re the only one You’re the only one like you. There isn’t another in the whole wide world Who can do the things you do. - Barney in Concert Maybe the Barney song is a biased example because I grew up with no cable television and a single VHS tape, which I used to record the Barney in Concert episode. Then I watched it over, and over, and over again because it was the only thing I could watch for the next few years of my life (well, that and Doctor Dolittle, which could also fit on the VHS). Maybe I’m the only person who got the message drilled into my brain by Barney the purple dinosaur. Regardless, Barney is not the only preacher of the human-centric view of the universe. In the western view, nothing should eclipse man. Unfortunately, AI may someday. From this perspective, people view the doomsday scenario as a Zero-Sum Game. But it’s important to understand that it’s not a fact that drives to that conclusion; rather, it’s a culture. The proof is on the other side of the world, where people embrace the idea of coexistence with artificial intelligence. Apart from Japan, China is also prioritizing the development of AI. While in America, AI might be a well-funded idea, in China it has full support (financially and legislatively) from the government. This isn’t to say that there is no fear from the east nor any embrace from the west. There isn’t a line drawn between the two — culture, especially in globalized nations, is becoming a blend of influences from within and outside political borders. That’s why there’s no clear determination on AI’s influence on humanity. The Dawn of AI Is the Rebirth of Humans AI will not, in the foreseeable future, be more human than humans because we don’t know what it means to be human; therefore, we cannot measure it and compare it. What it can do is push us to rethink the way society currently runs. With job displacement creeping in, eventually, we will hit an inflection point where we must evaluate the role that jobs play in human life. If money is paid through work, but most of the human population cannot work, then the provision of money will have to be reconsidered. In this event, ideas like Universal Basic Income (UBI) emerge and take center stage. Money may no longer be exchanged for work because work may no longer be a requirement as a contributor to society. But this change won’t happen overnight, and the transition period will not be pretty. Jobs will be lost, and before humanity can react appropriately, many lives will be ruined. These growing pangs, unfortunately, may become an unavoidable effect that many may mistake for the end itself, but it is a means to an end. In this transition time, it is very important for us to consider how we can mitigate the negative impacts that will occur, which is where UBI may play a critical role. That’s about as far as I can think before I start treading into the deeper end of the orders of ignorance. 
Humans may no longer need to work and universal basic income may play the fundamental economic role of human existence, or the idea of monetary exchange may be replaced altogether. While I can’t contribute any meaningful discussion to the purpose of life, I can agree that the purpose is unlikely to be employed and save up a 401K. Since the emergence of agriculture, though, human life has slowly become an expendable resource — the gasoline that makes the business, the government, or the economy run faster. The dawn of AI challenges this paradigm, but not necessarily for the worse. The efficiencies it brings provides an opportunity to live more fulfilling lives. Without the worry of jobs and paychecks and with the introduction of a more fulfillment-focused way of life, the AI day of reckoning may serve to bring us closer to answering what it means to be human.
https://towardsdatascience.com/ai-will-teach-us-to-be-human-aaf7dd218b6d
['Kenny L.']
2020-02-06 21:54:06.519000+00:00
['AI', 'Philosophy', 'Artificial Intelligence', 'Basic Income', 'Robotics']
Too little, too late. COVID-19 is on track to be more severe in the United States
The most recent data explains why the forecasting models I released last month only worked for a few days before starting to underestimate new daily values. The key assumption in those models was that it would be more or less the same here as it would be everywhere else — a virus doesn't discriminate, and nearly every government failed to act early enough and with enough conviction to successfully mitigate the spread of the disease. So, there was no reason to believe that it would be different here with the preliminary data we had at the time. Now we know that assumption was absolutely incorrect, which gives me very little confidence in any statistical model that tries to model the eventual US death toll using data from other countries. We're truly in uncharted waters. The response in the US has been dramatically worse, and the data here validates that sad truth. But make no mistake, that's not to say that the situation couldn't be worse than it already is. Social distancing and the various mitigation strategies deployed by governors around the country have helped, and they will continue to minimize the impact and speed up the eventual recovery. There's plenty of blame to throw around, and there will be a day for that, but it's clear that the results we're seeing today are not exclusively the fault of public officials. While it's true that the response from all levels of government has been woefully inadequate, stories of people purposefully ignoring or going against the guidelines are not uncommon either. Here are just a few: a summary from TIME magazine, one from Reuters, another from USA Today, and a video from CNN.
Undercounting
Worryingly, there are also reports that deaths are being undercounted in the US due to insufficient testing. There seems to be no difference between this claim and the alleged undercounting in China. Since an infection can result in death so quickly, undercounting appears inevitable when the spread accelerates. Since we now have a sufficient sample of countries with at least one case and one death, we are able to better understand the speed of the disease. We can see that the general lag between the first confirmed case and the first death, and the subsequent ramp-up, appears to be consistent across countries. The average delay between the first confirmed infection and the first death, across 143 countries with sufficient data (and excluding China), has been 18 days. The minimum delay observed so far is just one day (Panama, Botswana), with a maximum of 61 days (Sri Lanka). I'm not entirely sure how useful this is now, but it may serve as a reference for anyone who lives in a country that is still in the initial phases of the disease. As of right now, that appears to be just a few countries in Latin America, Africa, and South Asia. Additionally, it's possible that these statistics are associated with the number of days that individuals are fighting coronavirus once infected. To be clear, however, the Johns Hopkins data is not specific enough to say whether or not this is true.
Finally, there is some good news
The most recent data suggests that the aggressive mitigation efforts in Italy, Germany, and Spain are starting to take effect, just as they did in China and South Korea. This is why a discussion about an end to the lockdowns at some point in the future is starting in Europe.
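As a rough illustration of the country-level lag described above, here is a minimal pandas sketch (this is not the author's code; the column names country, date, confirmed, and deaths are assumptions about a Johns Hopkins-style time series layout, and the small frame at the bottom is a toy example):

import pandas as pd

def first_case_to_first_death_lag(df):
    """Days between the first confirmed case and the first death, per country."""
    df = df.sort_values("date")
    # First date on which each country reported any cases / any deaths.
    first_case = df[df["confirmed"] > 0].groupby("country")["date"].first()
    first_death = df[df["deaths"] > 0].groupby("country")["date"].first()
    # Countries with cases but no deaths yet drop out of the result.
    return (first_death - first_case).dt.days.dropna()

lags = first_case_to_first_death_lag(
    pd.DataFrame({
        "country": ["A", "A", "A", "B", "B"],
        "date": pd.to_datetime(["2020-03-01", "2020-03-05", "2020-03-19",
                                "2020-03-02", "2020-03-03"]),
        "confirmed": [1, 4, 30, 2, 5],
        "deaths": [0, 0, 1, 0, 1],
    })
)
print(lags.mean())  # average lag across the toy countries (9.5 days here)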
https://quinterojs.medium.com/too-little-too-late-covid-19-is-on-track-to-be-more-severe-in-the-united-states-4fc90c659e82
['Sebastian Quintero']
2020-04-11 23:27:59.187000+00:00
['Statistics', 'Health', 'Covid 19', 'Data Science', 'Coronavirus']
Putting Away Plastic: The Rise of Zero Waste Grocery Stores
By John Bassey
Around the globe, there is a drive to reduce plastic waste. The movement has picked up in many cities, with hotels, restaurants, and shops cutting down on the use of single-use items like plastic straws and bags. Most of this waste is used in packaging, and much of it, frankly, is unnecessary. Examples range from avocados and oranges packed in individual plastic wraps in places like Hong Kong to boxes and cellophane wraps in the States. Some stores even package apples in hard plastic clamshells and bananas in foam trays, and in places like Japan, strawberries are packaged in a foam net before being put in a plastic tray and sold in a plastic wrap.
Photo by veeterzy
In years past, China recycled more than half of the world's waste, but since it stopped accepting imported waste, millions of tons of it have often been left unattended. In many countries, plastic fibers contaminate tap water.
KEY DRIVERS
Last year saw the anti-plastic drinking straw campaign create the "year of the straw". Big companies like Starbucks and McDonald's pledged to reduce or phase out plastic straw use. Loop, a new zero-waste shopping platform, has partnered with global companies like Coca-Cola, Nestlé, and Procter & Gamble to offer brand-name goods in reusable containers. This means that instead of using plastic containers to package goods and having to trash them, Loop collects the reusable packaging and prepares it for fresh use. It also would not be possible without the help of government. The European Parliament has approved a ban on single-use plastics (cutlery, straws, and sticks) in the EU. British Prime Minister Theresa May has endorsed a plan to remove avoidable plastic waste from British supermarkets, with taxes on single-use containers. In the US, the state of California bans single-use plastic bags at large retail stores, and Hong Kong is planning to implement plastic waste enforcement through building management.
Photo by Lacey Williams
TRAILBLAZERS
With such key drivers in the push for zero waste, it is easier to see why there is a rise in zero waste grocery stores. In the States, Precycle in Brooklyn is one of those stores that sell organic local produce and bulk food without packaging. The founder, Katerina Bogatireva, said she was inspired by the Berlin-based Original Unverpackt. Over in Canada, Nada is doing great and has reportedly diverted more than 30,500 containers from landfills. It also launched a zero-waste café where visitors are encouraged to bring their own cups from home. Out in Denver, Zero Market is also one of the leading lights in the drive for zero plastic waste in the environment. In Hong Kong, Live Zero and Edgar are two popular zero waste stores. Live Zero, which is more of a wholesale store, keeps its items in clear self-service bins or dispensers, from which goods are poured into containers that you bring from home, with no plastic packaging. Edgar is more of a grocery shop, and it even offers reusable containers for packaging rather than plastic. Gradually, the change is getting to everyone. There is now a plastic-free supermarket aisle in Amsterdam (the first of its kind in the EU), while Waitrose now sells pasta in boxes made from food waste. With legislation and global firms steering the wheel, zero-waste grocery stores will continue to rise as they offer a solution to sustainability in the environment.
https://medium.com/in-kind/putting-away-plastic-the-rise-of-zero-waste-grocery-stores-3f8c1feb5ffa
['In Kind']
2019-03-06 17:08:52.542000+00:00
['Zero Waste', 'Plastic Pollution', 'Sustainability', 'Environment', 'Climate Change']
Pinterest + ktlint = ❤
Sha Sha Chu | Android Core Experience It’s been nearly a year since Pinterest’s Android codebase became Kotlin-first, and about two years since we adopted ktlint for Kotlin linting and formatting. Today, we’re sharing that Pinterest has officially taken ownership over the project. Initially, we chose ktlint because of its simplicity, active community, extensibility, and extremely responsive owner. It also integrated easily with our existing Phabricator-based workflow, and, after adding about 65 lines of PHP to our Arcanist library, we were able to apply linting and formatting to our Kotlin files on a per-diff basis. When Stanley Shyiko (the developer of ktlint) put a call out a few months ago looking for new ownership for the project, we were immediately interested. Not only was it a tool we use every single day at Pinterest, but it would be a tangible way for us to give back to the Kotlin community. After a few meetings with Stanley, it was clear that it was a great fit, and we were excited to move forward as owner. In the short term, nothing is going to change about the way the project is run and maintained. We still welcome and encourage outside contributions to the project in the form of Issues and Pull Requests. In the medium-to-long term, we plan to follow Stanley’s proposed roadmap by implementing a way to globally disable rules (ktlint’s most-requested feature), integrating an official Gradle plugin, and updating some of the APIs in ktlint-core to enable cleaner Rule writing. If any of these projects seems interesting to you, please reach out or go ahead and open a Pull Request. Finally, if you or your company uses ktlint, please open a Pull Request to add yourself to the Adopters list. We’re committed to continuing the great work Stanley started, and collaborating with other developers in the growing Kotlin community. Thank you to Garrett Moon and Jon Parise for their assistance in bringing ktlint to Pinterest, as well as Kevin Bierhoff, Beth Cutler and James Lau at Google for their technical input.
https://medium.com/pinterest-engineering/pinterest-ktlint-35391a1a162f
['Pinterest Engineering']
2019-05-10 19:55:31.352000+00:00
['Kotlin', 'Mobile App Development', 'Open Source', 'Engineering']
How To Properly Vet Vendors for a Two-Sided Marketplace Startup
How To Properly Vet Vendors for a Two-Sided Marketplace Startup The biggest vetting mistakes can be avoided image by teravector Don’t let vendor incompetence sink your marketplace. I’ve written a ton about digital marketplaces this year. And as a longtime creator of such marketplaces, I’m not surprised at where most founders are making their mistakes. It’s not crap technology, it’s not poor UX design, and it’s not lack of customer demand. In fact, 2020 saw to it that those three barriers were, if not removed, certainly lowered to a common denominator. No tech experience? No problem, just go no code. On top of that, a quarantine-accelerated acceptance of digital models, as well as the forced boost in the demand for those models, created unending waves of new and ready-to-engage markets. In 2020, the entire world learned, networked, collaborated, and even exercised and socialized online in a way that would have seemed shocking in 2019. Nope. Without question, the majority of mistakes are being made on the vendor fulfillment side. Game changers and posers In a lot of ways, marketplace startups in 2020 remind me of digital currency and blockchain startups in 2016. For every truly game-changing application of the underlying technology, there were several poorly-designed, hastily-constructed, and minimally-vetted projects underwritten by people who assumed there were better controls in place. There is enormous opportunity in digital marketplaces, never more than right now. But just as the roller-coaster ride of Bitcoin shows us that success or failure of a new technology is pinned to the customer acceptance of the infrastructure, digital marketplaces will live and die on trust. And trust in a marketplace is only as valid as the trust in its vendors. The case for vendor vetting as the most important marketplace problem to solve Last week, I wrote a post on vendor availability and the need for building a vendor bench slowly. I got a ton of positive feedback from that post, mostly from folks who have been down that road and fallen into the trap. My point in that post was that fulfillment strategy varies wildly from market segment to market segment, and even from vendor to vendor. If a marketplace platform isn’t equipped to handle and reduce the differences, the complexity and inconsistency of transacting will eventually turn off the marketplace’s customers. Now, that’s the most damning cause of marketplace failure, but the vendor quality problem is the hardest to solve, especially consistently. It’s also the one a marketplace must solve before it becomes a problem. Why? A marketplace isn’t just digital real estate where goods and services can be transacted. By its very existence, a digital marketplace tacitly vouches for every product or service it offers. This is true even for two-sided or third-party marketplaces. So of course, the top reason why vetting is critical is to ensure against vendor fraud and misrepresentation — but that reason also happens to be the easiest reason to ignore. There are a few bad actors in any sector, not a ton. Consider, however, that there are plenty of incompetent actors out there, and wanna-be actors, and experts of fields in which there isn’t any credentialing or governance. The marketplace needs to serve as that governance, not only for its own ethical reasons and reputation, but also because the true value in a marketplace startup is in its customer experience. One bad vendor can do a ton of damage, and usually, that damage can’t be undone. 
First determine exactly what the vendor relationship is
Before I get into the vendor relationship, I'll talk a bit about vendor classification, because those are completely different topics requiring completely different vetting strategies. At Spiffy, the mobile auto-maintenance-on-demand startup where I recently served as Chief Product Officer, we offer a combination of first-party services (our own W2 employees providing our own services to our customers), third-party services (we offer another company's services to our customers through a corporate partnership), and a two-sided marketplace (we recruit vendors to operate as service providers to our customers). There is obviously a very standardized contractual relationship with W2 employees and corporate partners. However, most marketplaces rely on that third classification — vendors operating as independent contractors — and that's where the relationship needs to be strictly defined. Note that no matter which service or provider combination I listed above with Spiffy, in each case, they are serving OUR customers, and every marketplace should identify its customer base as its own customers, not the vendor's customers. That's the first relationship distinction. Marketplace vendors are the startup's corporate partners, but without all the name brand recognition. Marketplace vendors aren't employees, so the startup has neither the liability nor the control it would have with an employee. But here's the mistake a lot of marketplace startups make. Marketplace vendors aren't just users or customers of the marketplace platform, even if they are classified as such. The exchange of money through the startup's platform means that, regardless of all the caveat emptor the startup may communicate to its customers, the customers will come to the startup for resolution. In other words, if your Uber driver sucks, you're going straight to Uber. Just imagine this is in all caps:
Personally vet every vendor
I've also spent a ton of career time using technology to automate manual processes, from washing cars to writing sports articles, so I can also emphatically state that almost no automation gets around the 80/20 rule. That rule states that when automating any task, 20% of the time should be spent verifying the accuracy of the automation, if not actually completing the task. And since there are virtually zero success stories in the total automation of vetting a two-sided marketplace provider, a marketplace startup should be prepared to spend a lot of time talking to vendors, whether that's via email, phone, or Zoom. Not every vendor needs a formal interview in front of a panel of experts. I'm just saying that the more time a marketplace startup can spend interacting with a vendor, the more opportunity it will have to find red flags, warning signs, exaggerated qualifications, and the things it should have known about before something goes wrong.
Verify credentials, insurance, bonding, employment, and clients
Not every marketplace requires formal documents from its vendors, but when it does, those documents should always be verified. Negligence rarely happens because someone was fooled by a phony document (although it can); it mostly comes about from not bothering to verify the document in the first place, or even to request it.
When a marketplace startup is offering services from expert service providers, especially the kind that are going to sell themselves through their employment and client list in the provider’s bio or description, the startup needs to be in control of what goes into that bio or description. Again, this is not always mandatory, but experience has taught me that vendors will lie when they think they’re not going to get caught. Ask for digital evidence before human vetting There’s nothing inherently wrong with using digital sources as a vetting mechanism, as long as the vetting doesn’t stop there. Digital evidence can help the startup execute the human interaction in a much more efficient manner. Almost everyone who is good at what they do has a website to show off how good they are at what they do. Failing that, LinkedIn or something like it can be good digital evidence, but only if there’s detail. Most social media is pretty easily faked. Again, and this is the biggest vetting mistake I see, digital evidence should not be the only source of vetting. It’s a good gating mechanism. If the vendor fails this part, no human follow-up needed. Ratings and feedback as a vetting pulse check Finally, once the vendor is on the platform, the platform should give the customer every opportunity to rate the vendor. That rating opportunity should revolve around specific actions, not just the customer’s vague appreciation of the vendor. Ratings without reasoning are vanity metrics. Ratings with reason not only help the marketplace startup create a better offering, but also help the vendor improve their performance within the confines of the marketplace. And like all the vetting methods I’ve talked about, those ratings need to be reviewed and verified using no more than 80/20 automation. Because any vetting mechanism that isn’t verified by a human just can’t be considered the truth. If the information can’t be verified, then that’s on the vendor. If it can be verified, but isn’t, that’s solely on the marketplace startup. Hey! If you found this post actionable or insightful, please consider signing up for my newsletter at joeprocopio.com so you don’t miss any new posts. It’s short and to the point.
https://jproco.medium.com/how-to-properly-vet-vendors-for-a-two-sided-marketplace-startup-316417b1d376
['Joe Procopio']
2020-12-28 12:41:13.845000+00:00
['Technology', 'Product Management', 'Business', 'Entrepreneurship', 'Startup']
Apache Airflow — Part 1. Every programmer loves automating their…
Implementing Airflow DAGs
Airflow Operator
While DAGs define how to run a workflow, Operators determine what actually gets done by a task. An Operator represents a single, idempotent task, and they are usually (but not always) atomic. They can stand on their own and don't need to share resources with any other operators. The DAG will make sure that operators run in the correct order; other than those dependencies, operators generally run independently. In fact, they may run on two completely different machines. Summarizing the above discussion, an Airflow Operator:
* represents a single task in a workflow
* runs independently (usually)
* generally doesn't share information
Airflow provides operators for many common tasks; some of the most used are as follows.
* PythonOperator: calls a Python function
* BashOperator: executes a bash command or calls a bash script
* SimpleHttpOperator: sends an HTTP request
* EmailOperator: sends an email
Airflow provides many other operators, and we can check its documentation for the full list. Let's try to discover BashOperator and PythonOperator with a simple example.
A simple BashOperator to generate a random number:

from airflow.operators.bash_operator import BashOperator

task_1 = BashOperator(
    task_id='generate_random_number_by_bash',
    bash_command='echo $RANDOM',
    dag=dag
)

A simple PythonOperator to generate a random number:

import random
from airflow.operators.python_operator import PythonOperator

def generate_random_number():
    return random.randint(0, 100)

task_2 = PythonOperator(
    task_id='generate_random_number_by_python',
    python_callable=generate_random_number,
    dag=dag
)

Tasks
Tasks are instances of operators, usually assigned to a Python variable, and referred to by task_id in Airflow. If we look at the above example, the Python variables task_1 and task_2 are two examples of tasks.
Task dependencies
Task dependencies define a given order of task completion. In Airflow version 1.8 or above, they are defined with bitshift operators:
* >> , the upstream operator (it simply means "before")
* << , the downstream operator (it simply means "after")
If we define task_1 >> task_2 in Airflow, then task_1 will be executed before task_2. If we define task_1 >> task_3 << task_2, then task_1 and task_2 will both be executed before task_3.
Airflow Scheduling
A DAG run is a specific instance of a workflow at a point in time. It can be triggered manually or by schedule_interval, and it can be in one of multiple states, like running, failed, or success. When scheduling a DAG, these are the attributes that come in handy:
* start_date: the date/time from which to initially schedule DAG runs.
* end_date: when to stop running new DAG instances (optional argument).
* max_tries: the maximum number of attempts to make before stopping (optional argument).
* schedule_interval: how often to schedule the DAG. You can define it via any cron syntax, like */5 * * * * for every 5 minutes. You can check this website to generate any correct cron syntax.
schedule_interval gotcha: when scheduling a DAG, Airflow will always schedule a task at start_date + schedule_interval.
For example:

task_2 = PythonOperator(
    task_id='generate_random_number_by_python',
    python_callable=generate_random_number,
    start_date=datetime(2020, 7, 28),
    schedule_interval='0 0 * * *',
    dag=dag
)

This means that the earliest DAG run will start on 29 July at midnight (00:00), one schedule_interval after the start_date. Kudos!!! We have installed Apache Airflow and discussed some of its introductory concepts. Let's meet in the next part of this series and try to discover more about this amazing open-source project. Till then, goodbye and keep learning.
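To tie the pieces above together, here is a minimal end-to-end sketch of a DAG file. It reuses the article's task ids, start date, and cron schedule, and keeps the Airflow 1.x-style import paths used in the snippets above; the dag_id is an illustrative assumption, and putting start_date and schedule_interval on the DAG (rather than on an individual operator) is simply the more common pattern, not something the article prescribes.

import random
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator

def generate_random_number():
    return random.randint(0, 100)

# Scheduling attributes live on the DAG itself here.
dag = DAG(
    dag_id='random_number_example',      # illustrative name, not from the article
    start_date=datetime(2020, 7, 28),
    schedule_interval='0 0 * * *',       # once a day at midnight
)

task_1 = BashOperator(
    task_id='generate_random_number_by_bash',
    bash_command='echo $RANDOM',
    dag=dag,
)

task_2 = PythonOperator(
    task_id='generate_random_number_by_python',
    python_callable=generate_random_number,
    dag=dag,
)

task_1 >> task_2  # task_1 runs before task_2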
https://medium.com/analytics-vidhya/apache-airflow-part-1-8844113bda5e
['Ankur Ranjan']
2020-07-29 13:29:14.442000+00:00
['Programming', 'Data Engineering', 'Python', 'Scheduling', 'Airflow']
Using Rick and Morty To Solve An Agile Estimation Dilemma
Agile is great. Agile is wonderful. Agile is the savior of all things related to Software Engineering. On paper. In practice, classic Agile is difficult, confusing, frustrating, and just downright hard to implement. Knowing that many people would argue with the preceding statement, we need some contextualization here. Therefore, a clarified statement reads: At a small company with fewer than 10 software developers and a handful of hardware engineers, where the projects change constantly and the number of people working on any single project can change from week to week, classic Agile is difficult, confusing, frustrating, and just downright hard to implement. Some aspects of Agile are awesome for small teams at small companies. This is doubly true when the current projects closely follow the needs and opportunities of the business. Smaller companies do not have the luxury of buffers between the Engineers and the opportunities. Often new opportunities require new features or tweaks to existing products to land the deal. This places the development group immediately in the middle of the uncertain and ever-changing sales cycle.
Photo by Alejandra Ezquerro on Unsplash
When this happens, the development team feels like a rudderless ship in a storm, with the winds of change blowing them around onto different projects whenever a big gust comes through. This manifests in constantly shifting development groups and continually changing projects. On the surface, an agile method seems ideal for attacking these changes. However, two of the big issues with using Agile in this environment are calculating velocity and doing project estimation. We could consider these tightly coupled aspects as two sides of the same coin. The problem that arises, as stated above, is that when projects fluctuate often and the people working on the project vary week to week, it is nearly impossible to calculate group velocity, and therefore no metric data exists to support a validated estimation technique. Basically, without a consistent means to measure velocity, estimation stays rooted in the realm of the WAG (Wild Ass Guess). In classic Agile, removing a key metric like velocity initiates a breakdown of the whole process. Instead of moseying along in this manner while the process crumbles around us, why not be proactive and deconstruct the whole method first and then rebuild it with solutions for the gaps and issues so that the overall process is stronger and geared towards the people, team, and company using the system? The following explains how our team built success in a custom agile process and created one solution to the challenges around the estimation of use cases.
Photo by Phillip Larking on Unsplash
One of the key challenges for agile in this environment concerns how to present an apples-to-apples comparison and prioritization for the overall effort required to complete business targets and features. This covers the prioritization needs of the development team while also considering the customer needs of the organization. The show Rick and Morty actually provides an amazing and simple answer to this conundrum and in doing so, for a small team and that team's process, solves a critical Agile dilemma.
The Problems with Agile
A "point" to one person is not the same "point" to another person, both in terms of estimation understanding and in terms of actual output. In this scenario, static development teams do not exist. Therefore, the classic agile team modeling is not valid.
It predicates velocity as a metric upon a LOT of assumptions that the articles and instructional books and videos never really get into. Classic agile teams necessitate the removal of a lot of variables:
* Static Teams — The prototypical team is 3–5 people, and it does not change. That allows the team to estimate effort together and use measured velocity as a predictor of future effort.
* Individual Contributions Are Minimized — The classic team approach minimizes discrepancies in what points mean to individuals and applies that logic to a team. That is great when there are static teams. It is not great when people move around and the whole calculation starts over each time a change occurs.
* Team Effort Maximizes Output Against Skill Level — When planning and estimating for future projects, having a team velocity allows for reliable and higher-confidence estimation. When attempting to apply individuals against future projects, there is a wide range of skills (beginner, intermediate, expert) and pacing (slow vs. fast), and these variables make planning difficult, if not outright impossible, without assigning people to future projects upfront.
* Past Metrics Drive Future Planning — The velocity calculation is key to a lot of the classic agile modeling. It is only possible after some period of work where velocity measurements average out to a reasonable confidence level. At that point velocity measurements are available to predict future projects. When changing teams and priorities, this cycle starts all over again and never gets to where this data can drive the process forward.
So how do we get around these issues and provide reasonable estimations for future projects? How do we standardize what a "point" means in a way that we can apply it in a general sense for planning? How do we remove as many of the variables as possible? Rick and Morty provides an amazing and simple answer to this conundrum and in doing so, for a small team and that team's process, solves a critical Agile dilemma.
Image by Alexas_Fotos from Pixabay
Meet Mr. Meeseeks
What is an average Software Engineer? They always seem to initially have a friendly and helpful demeanor, willing to assist anyone who asks. They like to solve problems and complete the task in front of them. If that task is outside their abilities to solve, they will stay on it like a dog on the hunt. As time goes by and the task remains unfinished, their attitude and mental state begin to worsen dramatically. If the task goes long enough, they are even prone to violent behavior and outright insanity.
Software Engineers asked to pair program are known to distrust and attack one another as their sanity decays, although they will continue to work on finding any possible solution to the task at hand, including collaborating with other Software Engineers. The psychological and physical symptoms of this painful existence can manifest in less than 24 hours. The funny thing is, the description above is the description of Mr. Meeseeks, a character from Rick and Morty. I simply replaced "Mr. Meeseeks" with "Software Engineer" and voilà! The reflection of the fictional character against an average Software Engineer is frighteningly accurate. In the show Rick and Morty, a Mr. Meeseeks appears when someone presses the button on a Meeseeks box. When this event occurs, Mr. Meeseeks springs into existence and exists only long enough to fulfill a singular purpose. After they serve that purpose, they expire and vanish into thin air. Borrowing from the show, their motivation to help others comes from the fact that existence is painful for a Meeseeks, and the only way to remove the pain of existence is to complete the provided task. Physical violence cannot harm them. The longer Meeseeks stay alive, the more sanity they lose. In the show, the main character Rick warns the Smith family to keep their tasks simple. Unfortunately, life does not always follow along with the recommendations of uber-smart characters on the small screen. Just as the Smith family in Rick and Morty gives increasingly complex tasks to their growing collection of Meeseeks, we define ever more complex problems for Software Engineers every single day.
Image by Shutterbug75 from Pixabay
Using Meeseeks for Project Estimation
The concept of Mr. Meeseeks is being used in an attempt to minimize the variables as much as possible. The descriptions and setup above serve an important purpose. Each press of the button on the Meeseeks box produces a clone with the same abilities. Every time the number grows beyond one or two individuals there is additional complexity, which highlights the fact that adding a second person to a project does not linearly scale the output up to 2x the current output. This idea of diminishing returns is even more apparent and influential the longer a project goes on. Therefore, the concept of a Meeseeks serves to provide a simple framework for project estimation. This framework seeks a reduction of the open variables and normalization of the metrics used for estimation. We accomplish this through the following effort:
* We defined a "resume", skill set, and ability expectations for Mr. Meeseeks. The goal is to define a generic and average member of the team. This definition of a Mr. Meeseeks is used to estimate projects and feature additions to existing products.
* We estimated in a vacuum. This means that project estimation disregards current or future projects or personnel. It does not consider people's movement between projects, interruptions, or any other external influences.
* While many variables are being removed by estimating using the Meeseeks approach, we minimize additional variables through assumptions. We assumed that all Meeseeks will work on the project from start to finish. We assumed that no support issues, vacations, or changing business needs will interrupt the project.
* Project estimation targets an "ideal project". This means that using the Meeseeks assumptions we will estimate for the most efficient way to finish a project. This does not mean the fastest way or the fewest Meeseeks to get there. This will target the ideal, most efficient group to get the job done.
* We created a "fudge factor" metric for each project to add buffer time to the estimated project timeline. This metric attempts to define risk and potential unknown roadblocks for the project.
Implementing The Estimation Solution
We are NOT attempting to provide an accurate representation of how long a project will take. We ARE attempting to provide a consistent measure of the effort needed for this project against that needed for another project. When estimating, we don't know how many people will be available to work on the target project or the skill level of those people. So we eliminate those variables while striving to provide a consistent measure of one project against another. We should recognize that these look like timelines. We are trying to be agile, and timelines and agile often appear to be concepts at odds with each other. There are also concerns that perceived timelines drive measurements against the development team. Noted. Please move on.
Photo by Campaign Creators on Unsplash
A process has to start somewhere, and the requirement at hand is an evaluation of scope for projects or targets. This process attempts to meet that goal. This also requires revisiting the collected data against the current known variables present at the project kickoff. This provides a chance to both revisit the numbers and adjust them as needed. Therefore, we are shooting for the following goal: Estimation Planning generates the Meeseeks-based "ideal project" metrics. This also allows a review of the requirements and use cases for the intended work. The primary goal is to provide consistent data from project to project and target to target so that prioritization provides more value for roadmap projects and general release planning. For implementation, we have defined five metrics for estimation to provide a sense of scope for the target being estimated. These are also used to compare different projects or features against each other. Remember — the Meeseeks concept defines a generically average member of the team!
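As a purely hypothetical illustration (the article shows no code, and its five estimation metrics are not listed), the fudge-factor buffer described above amounts to arithmetic along these lines; the function name, the "Meeseeks-weeks" unit, and the numbers are all illustrative assumptions:

def buffered_estimate(ideal_meeseeks_weeks, fudge_factor):
    """Ideal-project effort, padded by a buffer for risk and unknown roadblocks."""
    return ideal_meeseeks_weeks * (1 + fudge_factor)

# A 6-week ideal project with a 25% fudge factor becomes a 7.5-week planning figure.
print(buffered_estimate(6, 0.25))  # 7.5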
https://medium.com/swlh/using-rick-and-morty-to-solve-an-agile-estimation-dilemma-773d39ff814
['Kevin Wanke']
2020-05-29 02:39:45.858000+00:00
['Engineering', 'Project Management', 'Development', 'Agile', 'Programming']
The Best Writing Advice I Have
About twenty years ago I saw The Lady In The Van on stage in London. Maggie Smith starred in it. It was the play’s first time on stage. It was delightful. The script, written by the genius Alan Bennett, would later be adapted as a feature film. It was 2000. I was in London to study screenwriting with the talented and generous Roy Kendall, so seeing a fantastic piece of stagecraft was especially valuable to me. Shortly after I returned to New York, I read that Alan Bennett would be speaking at the 92nd Street Y, right across town from me. I had to see him speak. Alan Bennett in 1973, from Wikipedia As someone who reads a lot about writing, I’ve read plenty of nuggets of specific advice. These sorts of things speak to niche areas like how to write an antihero or how to pace the parts of the story structure depending on the genre in question. As Mr. Bennett spoke, a new question occurred to me. It wasn’t a craft question, but a process question, one that Mr. Bennett was uniquely qualified to answer. You see, Alan Bennett isn’t just a playwright. He is also a screenwriter and a novelist. He wrote The Madness of King George: the book, the stage play, and the screenplay. Alan Bennett is one of the most prolific authors of our age when working in different media. My question: “How do you handle it when you’re hard at work on something and you are gripped by a sudden inertia, finding yourself unable to push through?” Translation: “What do you do about writer’s block?” Mr. Bennett’s answer was as simple as it was clever. He said he writes something else. The moderator asked him to elaborate. Mr. Bennett said that sometimes you just kind of exhaust yourself on something, but that doesn’t mean you can’t write. It’s like working out: if you exhaust your abs doing crunches, you work on your arms or legs. He said that when he hit a wall while working on a novel, he’d pick up an article he’d set aside. If that didn’t work, he always had a couple of scripts and other ideas on the go to which he would turn for a change of pace. If nothing of the sort worked, he’d write a letter to a friend. The key was to keep writing because you never know when inspiration will strike. As long as you have that keyboard available, or at least a pen and paper, you can always get to work doing something productive, and you’ll eventually turn back to the urgent work at hand. As a result of this advice, I changed the way I work. I always have a few stories on the go. Some are scripts that are in their final drafts — the urgent work. Some are stories I’m still outlining or characters I’m still fleshing out. Some are ideas I executed badly and need a total rewrite but require an amount of distance from them before I can do that. I still write letters (paper ones) to certain friends. Until he died, I still wrote letters, in longhand, to Roy Kendall. Writing, no matter how you’re doing it or what you’re doing it about, keeps you in practice and makes you better at it. You’ll never know when something will suddenly click, and that half-baked idea of yours will suddenly emerge from your brow fully formed like the goddess Athena. So keep writing. If you need to take a break, then write something else but, whatever you do, don’t stop.
https://medium.com/write-i-must/the-best-writing-advice-i-have-b575544754c5
['Mister Lichtenstein']
2020-12-22 19:38:38.176000+00:00
['Writing', 'Screenwriting', 'Writing Tips', 'Creativity', 'Alan Bennett']
Label Smarter Not More
Label Smarter Not More
An introduction to active learning.
Photo by RUN 4 FFWPU from Pexels
Introduction
Imagine back to your school days studying for an exam. Did you randomly read sections of your notes, or randomly do problems in the back of the book? No! Well, at least I hope you didn't approach your schooling with the same level of rigor as deciding what to eat for breakfast. What you probably did was figure out which topics were difficult for you to master and work diligently at those, only doing minor refreshing of the ideas you felt you understood. So why do we treat our machine students differently? We need more data! It is a clarion call I often hear working as a Data Scientist, and it's true most of the time. The way this normally happens is some problem doesn't have enough data to get good results. A manager asks how much data you need. You say more. They hire some interns or go crowdsource some labelers, spend a few thousand dollars, and you squeak out a bit more performance. Adding in a single step where you let your model tell you what it wants to learn more about can vastly increase your performance with a fraction of the data and cost. I'm talking about doing some, get ready for the buzz word, active learning. In this article, we will run some basic experiments related to active learning and data selection. We will train a random forest on a small subset of the IMDB Sentiment dataset. Then we will increase the training set by sampling randomly, and by sampling data points that the model wants to learn about. We will compare our performance increase with respect to increasing data and show how smart labeling can save time and money and increase performance. The code for this project is in a gist here, and also included at the bottom of this article. Let's get started.
TLDR
If your problem needs more data, try labeling it with the help of your classifier. Do this by either choosing the examples with the least confidence or the examples where the highest and second-highest probabilities are closest. This works most of the time but is no panacea. I've seen random sampling do as well as these active learning approaches.
The Data
For this problem, we will be looking at the IMDB Sentiment Dataset and trying to predict the sentiment of a movie review. We are going to take the whole test set for this dataset, and a tiny subset of training data. We'll gradually increase the training set size based on different sampling strategies and look at our performance increase. There are about 34,881 examples in the training set and only 15,119 in the test set. We start by loading the data into a pandas data frame.

import numpy as np
import pandas as pd

df = pd.read_csv("IMDB_Dataset.csv")
df["split"] = np.random.choice(["train", "test"], df.shape[0], p=[.7, .3])

x_train = df[df["split"] == "train"]
y_train = x_train["sentiment"]

x_test = df[df["split"] == "test"]
y_test = x_test["sentiment"]

Basic Model
For this tutorial, we'll look at a simple Random Forest. You can apply these techniques to just about any model you can imagine. The model only needs a way of telling you how confident it is in any given prediction. Since we're working with text data, our basic model will use TF-IDF features from the raw text. I know, I know, we should use a deep transformer model here, but this is a tutorial on active learning, not on SOTA, so forgive me. If you want to see how to use something like BERT, check out my other tutorial here.
We'll define our RandomForest model as a SciKit-Learn pipeline using only unigram features:

from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer

clf = RandomForestClassifier(n_estimators=100, random_state=0)
model = Pipeline(
    [
        ("tfidf", TfidfVectorizer(ngram_range=(1, 1))),
        ("classifier", clf),
    ]
)

Now we can call .fit() on a list of text input and the pipeline will handle the rest. Let's use our initial training set of 5 examples and see how we do on the test set.

# Get five random examples for training.
rand_train_df = x_train.sample(5)
model.fit(rand_train_df["review"], rand_train_df["sentiment"])

Initial performance of a 5-example random forest on the test set. From this, we can see the dataset is pretty balanced, since predicting all positive gives us almost .5 precision. This model is pretty crappy, though, since it only predicts positive. Let's see if we can use active learning to get to better performance faster than randomly sampling new points.
Choosing Good Data Points to Label
So we have a classifier now. It's meh at best, and we want more data. Let's use the classifier to make predictions on our other training data and see which points the model is least confident about. For most Sci-Kit Learn estimators this is super easy. We can use the .predict_proba() function to get a probability for each class. To do this by hand you could also look at the individual predictions of the trees and count the votes for each class. However, predict_proba is much more convenient : ).

preds = model.predict_proba(x_train["review"])

This will give us a numpy array of probabilities where each column is a class and each row is an example. It's something like:

[[.1, .9],
 [.5, .5],
 [.2, .8]...

Uncertainty Sampling
The simplest "intelligent" strategy for picking good points to label is to use the points the model is least confident about. In the example above, that would be the second point, because its maximum probability over the classes is the smallest.

def uncertainty_sampling(df, preds, n):
    """Samples points for which we are least confident."""
    df["preds"] = np.max(preds, axis=1)
    return df.sort_values("preds").head(n).drop("preds", axis=1)

Here we have a function that takes a data frame of training examples, the associated predicted probabilities, and the number of points we want to sample. It then gets the maximum value in each row, sorts the data points from smallest to largest, and grabs the n examples that had the smallest maximum probability. If we apply uncertainty sampling to the three example probabilities above, we'd say we should label [.5, .5] first, because its maximum probability is smaller than all the other maximum probabilities (.8 and .9), which intuitively makes sense!
Margin Sampling
Uncertainty sampling is nice, but in the multiclass setting it doesn't do as good a job of capturing uncertainty. What if you had the following predictions?

[[.01, .45, .46],
 [.28, .28, .44],
 [0.2, 0.0, .80]...

The data point the model seems to be most uncertain about is the first one, since it's predicting class 3 by just .01! But uncertainty sampling would say that example two is the best point to label, since .44 is the smallest maximum probability. They are both good candidates, but the first intuitively makes more sense. Margin sampling caters to this intuition: the best points to label are those with the smallest margin between predictions.
We can perform margin sampling with the following function:

def margin_sampling(df, preds, n):
    """Samples points with the smallest difference between the most and second most probable classes."""
    # Sort the predictions in increasing order
    sorted_preds = np.sort(preds, axis=1)
    # We need to check if the classifier has seen more than one class
    if sorted_preds.shape[1] == 1:
        return df.sample(n)
    else:
        # Subtract the second highest prediction from the highest
        df["margin"] = sorted_preds[:, -1] - sorted_preds[:, -2]
        return df.sort_values("margin").head(n).drop("margin", axis=1)

In this code, we sort the predicted probabilities. Then we check whether the array has more than one class column; if the classifier has only seen one class, it has no information about the existence of other classes, and in that case we just randomly sample. Otherwise, we find the margin by subtracting the second-highest probability from the highest probability and sort the results, keeping the points with the smallest margins.
Experiments
I ran a simple experiment where I start with five randomly sampled points, then apply each sampling strategy to gain five more points. I do this iteratively 100 times until I've sampled about 500 points. I plot the f1 score on the test set at each time point and look at how our performance improves with each sampling strategy. You can see that margin sampling and uncertainty sampling both do better than random sampling for this problem. They are the same in the binary classification case; I didn't think about this when I started writing this article 😅. I created an additional sampling strategy called combined, which does a little bit of margin sampling, a little bit of uncertainty sampling, and a little bit of random sampling. I like this combined approach for many of my projects because sometimes random sampling does help. If we are always sampling according to the margin, or uncertainty, we aren't sampling uniformly from our dataset and could be missing out on some important information. Anecdotally, I've seen that a little random in the sampling usually pays off. Though don't believe me, because I haven't run any good experiments to prove this yet 😅.
Conclusion
Active learning can help you get better performance with fewer data by choosing new points that add the most information to your model. This strategy works pretty well most of the time but isn't guaranteed to do better. It's a nice tool to keep in mind when you're thinking about labeling some additional data.
Code
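The article's full gist is not reproduced here. As a rough sketch of the "combined" strategy mentioned in the Experiments section (the article does not show its implementation), something like the following could work; the even three-way split between margin, uncertainty, and random sampling is an assumption, and the function reuses the margin_sampling and uncertainty_sampling functions defined above.

import pandas as pd

def combined_sampling(df, preds, n):
    """Sample n points: roughly a third by margin, a third by uncertainty, the rest at random."""
    k = n // 3
    by_margin = margin_sampling(df.copy(), preds, k)
    # Restrict to the rows (and their aligned predictions) that were not already chosen.
    mask = ~df.index.isin(by_margin.index)
    remaining, remaining_preds = df[mask], preds[mask]
    by_uncertainty = uncertainty_sampling(remaining.copy(), remaining_preds, k)
    leftover = remaining.drop(by_uncertainty.index)
    by_random = leftover.sample(n - 2 * k)
    return pd.concat([by_margin, by_uncertainty, by_random])

The split sizes here are arbitrary; they could be tuned, or weighted more heavily toward random sampling early on, when the model's confidence estimates are weakest.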
https://towardsdatascience.com/label-smarter-not-more-4f5bbc3fbcf5
['Nicolas Bertagnolli']
2020-05-19 03:15:28.065000+00:00
['Machine Learning', 'Python', 'AI', 'Data Science', 'Programming']
Heat-Protecting Gene in Corals & Human Cells Get an Upgrade…from Tardigrades
NEWSLETTER Heat-Protecting Gene in Corals & Human Cells Get an Upgrade…from Tardigrades This Week in Synthetic Biology (Issue #15) Underwater corals. Credit: Pexels on Pixabay Receive this newsletter every Friday morning! Sign up here: https://synbio.substack.com/ Reach out on Twitter. Heating Breakthrough in Corals As the earth heats, and the Great Barrier Reef melts away, scientists are scrambling for deeper insights into heat-tolerance in coral — specifically, which genes help corals cope with higher temperatures, and could gene editing be used to create heat-resistant variants? A gene in fertilized coral eggs (Acropora millepora), called Heat Shock Transcription Factor 1 (HSF1), was mutated with CRISPR/Cas9. After one injection of the CRISPR/Cas9 components, 90% of the eggs carried mutations in that gene, causing drastic changes in how well the coral could handle heat later on. “The mutant larvae survived well at 27 °C but died rapidly at 34 °C,” the authors wrote. The higher temperature did not, however, cause any damage to normal, un-edited corals. The authors conclude, therefore, that HSF1 “plays an important protective role” in these stunning creatures. This study was published in PNAS. Link Human Cells Upgraded with Tardigrade DNA Astronauts and flight-attendants alike are bombarded with radiation. In one study, 1.2% of male flight attendants reported that they had melanoma, a rate nearly twice as high as the general population. Female flight attendants, similarly, have reported a skin cancer incidence that is four times higher than the general population. To protect these high-flying professionals, some scientists are turning to…tardigrades. These little “sea bears” are so hardy that they can “survive up to 5kGy of ionizing radiation and also survive the vacuum of space,” according to the authors of a new preprint, posted on bioRxiv. For context, a typical abdominal x-ray is just 0.7 milligrays, or about 7 million times less radiation than tardigrades can withstand. The researchers studied a damage suppressor protein from the tardigrades, called Dsup, that helps repair damage to DNA caused by ionizing radiation. They transplanted Dsup into human cells, grew the engineered cells in a dish, and found that the cells became more tolerant to apoptosis — or cell death — signals. The authors think that their “methods and tools provide evidence that the effects of the Dsup protein can be potentially utilized to mitigate such damage during spaceflight.” Link A tardigrade swims by. Credit: GIF Maker/Giphy Rapid Evolution Creates New TrpB Proteins Creating proteins with new functions can, at least in the natural world, take hundreds or thousands of years of evolution. One protein begets another, through mutations, slowly unraveling a sea of new properties that help organisms adapt and survive in their environments. But evolution can also be “jump-started” in the laboratory, and kicked into overdrive using relatively modern approaches called directed evolution. This technique — using evolution to design new proteins — earned Frances Arnold a share of the 2018 Nobel Prize in Chemistry. In a new study, emanating from a collaboration between the labs of Chang Liu and Frances Arnold, a method for continuous, directed evolution, called OrthoRep, was used to create new variants of the tryptophan synthase β-subunit protein. This enzyme is important because it synthesizes L-tryptophan from indole and L-serine. 
Over 100 generations of the directed evolution experiment, the researchers created new TrpB enzymes that had higher activities, could use different indole analogs, or could work at different temperatures. This study was published in Nature Communications, and is open access. Link Engineered Bacteria Respond to Inflammation Signals Microbes that live in the human gut can be engineered to release medicines, or sense diseases. They are basically tiny, programmable doctors that could, one day, be used to treat all kinds of ailments. In a new preprint, posted to bioRxiv, scientists at Caltech engineered E. coli Nissle cells with an AND logic gate that triggers a gene’s expression when two signals are present: tetrathionate (an inflammatory biomarker) and IPTG. The authors “report 4–6 fold induction with minimal leak when both signals are present.” These are interesting preliminary results, based on experiments done in flasks and tubes, and the next step will be to take these engineered cells, and see if they can actually sense inflammation inside of, say, a mouse gut. Link Study Evaluates CRISPR-Cas Systems for RNA Editing The gene-editing tool, CRISPR-Cas9, can be adapted to cut both DNA and RNA inside of living cells. Its utility for DNA-editing is well known, but RNA-editing applications have not been studied as carefully. Catalytically inactive Cas9 — that is, Cas9 that cannot cut DNA, but can still recognize and bind to a genetic target — can be fused to a protein called ADAR (adenosine deaminases acting on RNA) to make single-letter changes in RNA. These dCas9-ADAR proteins can switch A to I, for example, but it turns out that they have some issues. The researchers of a new study from the Yeo lab at UC San Diego say that the “Cas-based ADAR strategies have distinct transcriptome-wide off-target edits”. They also performed experiments to figure out how to best design guide RNAs to target a specific nucleotide in an RNA strand. They found that, even without a spacer sequence, Cas9-ADARs could still edit RNA. This study was published in Cell Reports, and is open access. Link
https://medium.com/bioeconomy-xyz/heat-protecting-gene-in-corals-human-cells-get-an-upgrade-from-tardigrades-8e9ff7ef1bbb
['Niko Mccarty']
2020-11-13 14:26:38.745000+00:00
['Tech', 'Newsletter', 'News', 'Future', 'Science']